The Continuum Companion to Philosophy of Mind
The Continuum Companions series is a major series of single-volume companions to key research fields in the humanities aimed at postgraduate students, scholars and libraries. Each companion offers a comprehensive reference resource giving an overview of key topics, research areas, new directions and a manageable guide to beginning or developing research in the field. A distinctive feature of the series is that each companion provides practical guidance on advanced study and research in the field, including research methods and subject-specific resources.

The Continuum Companion to Continental Philosophy, edited by John Mullarkey and Beth Lord
The Continuum Companion to Locke, edited by S.-J. Savonious-Wroth, Paul Schuurman and Jonathan Walmsley

Forthcoming in Philosophy:
The Continuum Companion to Aesthetics, edited by Anna Christina Ribeiro
The Continuum Companion to Berkeley, edited by Bertil Belfrage and Richard Brook
The Continuum Companion to Epistemology, edited by Andrew Cullison
The Continuum Companion to Ethics, edited by Christian Miller
The Continuum Companion to Existentialism, edited by Jack Reynolds, Felicity Joseph and Ashley Woodward
The Continuum Companion to Hegel, edited by Allegra de Laurentiis and Jeffrey Edwards
The Continuum Companion to Hobbes, edited by S. A. Lloyd
The Continuum Companion to Hume, edited by Alan Bailey and Dan O’Brien
The Continuum Companion to Kant, edited by Gary Banham, Nigel Hems and Dennis Schulting
The Continuum Companion to Leibniz, edited by Brendan Look
The Continuum Companion to Metaphysics, edited by Robert Barnard and Neil A. Manson
The Continuum Companion to Political Philosophy, edited by Andrew Fiala and Matt Matravers
The Continuum Companion to Plato, edited by Gerald A. Press
The Continuum Companion to Pragmatism, edited by Sami Pihlström
The Continuum Companion to Socrates, edited by John Bussanich and Nicholas D. Smith
The Continuum Companion to Spinoza, edited by Wiep van Bunge
The Continuum Companion to Philosophy of Language, edited by Manuel Garcia-Carpintero and Max Kolbel
The Continuum Companion to the Philosophy of Science, edited by Steven French and Juha Saatsi
The Continuum Companion to Philosophy of Mind Edited by
James Garvey
Continuum International Publishing Group
The Tower Building, 11 York Road, London SE1 7NX
80 Maiden Lane, Suite 704, New York, NY 10038
www.continuumbooks.com

© James Garvey and Contributors, 2011

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publishers.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

ISBN: HB: 0826431887 978-0-8264-3188-2

Library of Congress Cataloging-in-Publication Data
The Continuum companion to philosophy of mind / edited by James Garvey.
p. cm.
Includes bibliographical references.
ISBN: 978-0-8264-3188-2
1. Philosophy of mind. I. Garvey, James, 1967–
BD418.3.C6565 2011
128'.2–dc22
2010036913

Typeset by Newgen Imaging Systems Pvt Ltd, Chennai, India
Printed and bound in Great Britain
This book is for V
Contents

Acknowledgements
Contributors
How to Use This Book
Introduction
1 Problems, Questions and Concepts in the Philosophy of Mind (Ian Ravenscroft)
2 Consciousness (Daniel D. Hutto)
3 The Mark of the Mental (Fred Adams and Steve Beighley)
4 Substance Dualism (T. J. Mawson)
5 Physicalism (Barbara Montero)
6 Folk Psychology and Scientific Psychology (Barry C. Smith)
7 Internalism and Externalism in Mind (Sarah Sawyer)
8 The Philosophies of Cognitive Science (Margaret A. Boden)
9 Representation (Georges Rey)
10 Mental Causation (Neil Campbell)
11 Personal Identity (E. J. Lowe)
12 Embodied Cognition and the Extended Mind (Michael Wheeler)
13 Current Issues in the Philosophy of Mind (Paul Noordhof)
Glossary
Chronology
Research Resources
Notes
Bibliography
Index
Acknowledgements

I relied on a large number of people for help in putting this volume together. First and most importantly, I am very grateful to all of the contributors. Some provided advice and read bits of the manuscript, and their suggestions always resulted in improvements. I am particularly in the debt of those who saved me and stepped in to do some last-minute writing. Each delivered good, solid philosophy in record time. You know who you are. So thanks are owed to: Fred Adams, Steve Beighley, Margaret A. Boden, Mark Cain, Neil Campbell, Adam Ferner, Daniel D. Hutto, Dale Jacquette, E. J. Lowe, Tim Mawson, Barbara Montero, Isabella Muzio, Paul Noordhof, Dan O’Brien, Dimitris Platchias, Ian Ravenscroft, Georges Rey, Constantine Sandis, Sarah Sawyer, Barry C. Smith, and Michael Wheeler. David Avital, Carly Bareham, Tom Crick and Sarah Douglas at Continuum are all very nearly equally excellent – thank you all for your help and for putting up with a lot. Thanks are also owed to comrades at Crisis, Kim Hastilow, Ted Honderich, London Street Rescue, Justin Lynas, Anthony O’Hear, and my associates at UCLU Jitsu. I am particularly grateful to Judy Garvey for her unwavering support.
Contributors

Fred Adams is Professor of Philosophy, and Chair and Professor of Linguistics and Cognitive Science at the University of Delaware. He has published over 100 articles or reviews in philosophy and cognitive science. He is co-author (with Ken Aizawa) of The Bounds of Cognition (2008), co-editor (with Leemon McHenry) of Reflections on Philosophy (1993), and is editor of Ethical Issues for the 21st Century (2005) and editor of Ethical Issues in the Life Sciences (2007).

Steve Beighley is on the Neurophilosophy Track at Georgia State University. His research focuses primarily on animal minds, specifically on primitive communication and emotions.

Margaret A. Boden is a Fellow of The British Academy, and Research Professor of Cognitive Science at the University of Sussex. She is the author of The Creative Mind: Myths and Mechanisms (second edition, 2004), Mind as Machine: A History of Cognitive Science (2006), and Creativity and Art: Three Roads to Surprise (2010). Her earlier books included Purposive Explanation in Psychology (1972) and Artificial Intelligence and Natural Man (1977). She has two children and four grandchildren, and lives in Brighton.

Mark Cain is Assistant Head of the Department of Religion and Philosophy, Oxford Brookes University. He is author of Fodor: Mind, Language and Philosophy (Polity, 2002) and The Philosophy of Cognitive Science (Polity, forthcoming).

Neil Campbell is Professor of Philosophy at Wilfrid Laurier University. He is the author of Mental Causation: A Non-Reductive Approach (2008) and A Brief Introduction to the Philosophy of Mind (2005). He also edited Mental Causation and the Metaphysics of Mind (2003) and Freedom, Determinism, and Responsibility (2003). He has published over 25 articles in philosophy journals.

Adam Ferner works for the Royal Institute of Philosophy and is studying for a Ph.D. in Philosophy at Birkbeck College, University of London. His thesis is on animalism, animals, and artefacts.

Daniel D. Hutto is Professor of Philosophical Psychology at the University of Hertfordshire. He is the author of The Presence of Mind (1999), Beyond Physicalism
(2000), Wittgenstein and the End of Philosophy (2006) and Folk Psychological Narratives (2008). He is also the editor of Narrative and Understanding Persons (2007), Narrative and Folk Psychology (2009) and co-editor of Folk Psychology Re-Assessed (2007). A special yearbook issue of Consciousness and Emotion, entitled Radical Enactivism, which focuses on his philosophy of intentionality, phenomenology and narrative, was published in 2006.

Dale Jacquette is Lehrstuhl ordentlicher Professur für Philosophie, Schwerpunkt theoretische Philosophie (Senior Professorial Chair in Theoretical Philosophy), at Universität Bern, Switzerland. He is the author of numerous articles on logic, metaphysics, philosophy of mind, and aesthetics, and has recently published Philosophy of Mind: The Metaphysics of Consciousness (2009), Ontology (2002), On Boole (2002), David Hume’s Critique of Infinity (2001), and Wittgenstein’s Thought in Transition (1998). He has edited the Blackwell Companion to Philosophical Logic (2002, 2006), Cambridge University Press Companion to Brentano (2004), and for North-Holland (Elsevier) the volume on Philosophy of Logic in the Handbook of the Philosophy of Science series (2007).

E. J. Lowe is Professor of Philosophy at Durham University, UK. His books include Locke on Human Understanding (Routledge, 1995), Subjects of Experience (Cambridge University Press, 1996), The Possibility of Metaphysics (Oxford University Press, 1998), An Introduction to the Philosophy of Mind (Cambridge University Press, 2000), A Survey of Metaphysics (Oxford University Press, 2002), The Four-Category Ontology (Oxford University Press, 2006), Personal Agency (Oxford University Press, 2008), and More Kinds of Being (Wiley-Blackwell, 2009).

T. J. Mawson is Fellow and Tutor in Philosophy at St Peter’s College, Oxford. He is author of Belief in God (Oxford University Press, 2005) and Free Will (Continuum, forthcoming). He keeps the list of publications on his website, http://www.philosophy.ox.ac.uk/members/tim_mawson, more or less up to date.

Barbara Montero is Associate Professor of Philosophy at the City University of New York at the College of Staten Island and The Graduate Center. She is the recipient of fellowships from the National Endowment for the Humanities and the American Council of Learned Societies. Most of her research has focused on one or the other of two different notions of body: body as the physical or material basis of everything, and body as the moving, breathing, flesh and blood instrument that we use when we run, walk, or dance. She has published numerous articles and is currently writing a book, to be published by Oxford University Press, on expertise and awareness.
Isabella Muzio has a Ph.D. in philosophy from University College London and teaches philosophy at The Open University. She is writing a book about our knowledge of our own emotions.

Paul Noordhof is Anniversary Professor of Philosophy at the University of York. His main work on causation, mental causation, self-deception, belief and the will, imagination, and consciousness has been published in Philosophy and Phenomenological Research, Mind, Mind and Language, Proceedings of the Aristotelian Society, Australasian Journal of Philosophy, Synthese and Analysis. He is also reviews editor of Mind and joint editor of a collection of papers entitled Cause and Chance (with Phil Dowe) published in the Routledge International Library of Philosophy Series. He received a three-year Major Leverhulme Research Fellowship in 2006 for research into the connection between consciousness and representation for a book provisionally entitled Cement of the Mind (under contract with Oxford University Press). The full presentation of his work in causation will occur in another book, A Variety of Causes (also under contract with Oxford University Press).

Dan O’Brien is Research Fellow at Oxford Brookes University, Honorary Research Fellow at Birmingham University, and Associate Lecturer with the Open University. His publications include An Introduction to the Theory of Knowledge (Polity, 2006), Hume’s Enquiry Concerning Human Understanding: Reader’s Guide (Continuum, 2006), and Gardening and Philosophy: Cultivating Wisdom (Blackwell, 2010).

Dimitris Platchias is a lecturer at the University of Glasgow. He is the author of Phenomenal Consciousness: Understanding the Relation between Neural Processes and Experience (Acumen, 2010) and has co-edited Representationalism: Contemporary Readings (with Fiona Macpherson, MIT Press, forthcoming) and Hallucination (with Fiona Macpherson, MIT Press, forthcoming).

Ian Ravenscroft is Associate Professor of Philosophy at Flinders University. He is co-author (with Gregory Currie) of Recreative Minds (Oxford University Press, 2002), author of Philosophy of Mind (Oxford University Press, 2005) and editor of Minds, Ethics, and Conditionals (Oxford University Press, 2009).

Georges Rey is Professor of Philosophy at the University of Maryland at College Park. He has written numerous articles in the philosophy of psychology, as well as a book, Contemporary Philosophy of Mind: A Contentiously Classical Approach, and was an editor (with Barry Loewer) of Meaning in Mind: Fodor and His Critics. Much of his work is available at his website: http://sites.google.com/site/georgesrey.
Constantine Sandis is a Senior Lecturer in Philosophy at Oxford Brookes University and New York University in London. He is the editor of New Essays on the Explanation of Action (Palgrave Macmillan) and (with Timothy O’Connor) A Companion to the Philosophy of Action (Wiley-Blackwell). He is currently completing a monograph called The Things We Do and Why We Do Them.

Sarah Sawyer is Senior Lecturer at the University of Sussex. Her research interests are based around the nature of and the connections between thought, language and knowledge. Her published work primarily concerns content externalism, justification, fiction and singular thought. She is also the editor of New Waves in Philosophy of Language (2010).

Barry C. Smith is Professor of Philosophy and Director of the Institute of Philosophy at the School of Advanced Study, University of London. He has been Head of the School of Philosophy at Birkbeck College and held visiting positions at Simon Fraser University, the University of California at Berkeley and the Ecole Normale Superieure in Paris. He is the editor of the Oxford Handbook of Philosophy of Language (with Ernest Lepore), Knowing Our Own Minds (with Crispin Wright and Cynthia Macdonald) and Questions of Taste.

Michael Wheeler is Professor of Philosophy at the University of Stirling. Prior to joining the Stirling Department in 2004, he held teaching and research posts at the Universities of Dundee, Oxford, and Stirling (a previous appointment). His doctoral work was carried out at the University of Sussex. His primary research interests are in philosophy of science (especially cognitive science, psychology, biology, artificial intelligence and artificial life) and philosophy of mind. He also works on Heidegger. His book, Reconstructing the Cognitive World: The Next Step, was published by MIT Press in 2005.
How to Use This Book

To help you find your way into this book, here is a short overview of each of the main sections.
Introduction

Here you will find a very short take on the recent history of contemporary philosophy of mind, its Cartesian roots, and a few words about recent conceptual shifts, as well as a consideration of the general tone of the volume and the choice of contents. The Introduction also contains a short summary of each of the book’s main chapters, and this might serve as a springboard into the rest of the book.
Problems, Questions and Concepts in the Philosophy of Mind

This overview by Ian Ravenscroft contains a mix of introductory and advanced material which fleshes out the basic problems and questions in the philosophy of mind as well as the answers currently in favour, alongside the relevant objections and replies. A beginning student might read this to get a feel for the philosophy of mind generally. It could also serve as a refresher for advanced students and researchers.
Current Research

Here you will find eleven original essays written by experts in the field. Not only do they provide overviews of large sub-topics in the philosophy of mind, the authors take a stand and argue for their own positions. It’s this combination which makes the essays of interest to researchers at different levels. Again, there is an overview of each chapter in the Introduction.
Current Issues in the Philosophy of Mind

This section, written by Paul Noordhof, aims to follow up from the previous essays with reflection on cutting-edge thinking in the philosophy of mind. This
is not soothsaying. Instead, Noordhof provides an overview and explanation of very recent trends, new questions just being asked by contemporary researchers, and recent empirical findings which might shape the course of future enquiry.
Glossary

This part of the volume includes short definitions and longer treatments written by experts in various subfields of the philosophy of mind. Each entry concludes with references to works which might be consulted for further information. The aim of course is to provide a good resource for someone who is reading this companion but might not be familiar with one or more of the technical terms used by the authors. It also might help someone just getting on with the study of the philosophy of mind who encounters a difficult word or concept in a book or article – some of the terms do not appear elsewhere in this book. The glossary contains a number of clever distillations of difficult concepts, so it might also be read independently, just for interest.
Chronology

This timeline is a reference for students and researchers who require speedy access to a date or title as well as a bit of context for it. If you encounter an unfamiliar philosopher, book or paper, have a scan of the chronology to get a feel for where the topic fits in the history of mind. The entries up to 1950 contain a sentence or two of explanation. Beyond that date we are too close in time to be sure of the relevance, meaning or impact of a paper or book.
Research Resources

This section includes a list of resources on the Web, associations and research centres, and periodicals devoted to the philosophy of mind – anything of a practical nature which might help a budding or seasoned researcher.
Bibliography

The bibliography, compiled by Adam Ferner, includes details of all the works mentioned in this book. Additional books and articles not specifically mentioned by the contributors but which might nevertheless be of interest to those studying the philosophy of mind are included as well.
What Now?

You might acclimatize yourself by having a look at the Introduction – in particular, consult the summary of the main contributions to this volume. Old hands can then dip in and out of the essays according to interest. If you are new to the philosophy of mind or just want to have an overview of it with a fresh pair of eyes, I strongly suggest that you read Ravenscroft’s introductory essay and move on from there.
Introduction

The introduction to nearly every companion volume to a philosophical topic does three things. First, it tells the reader that the subject matter in question is undergoing a renewal or revival of some sort. For one reason or another it’s a hot topic, and conceptually the good times are rolling. Second, although many such companions exist, the introduction claims that this one genuinely serves a new and important purpose. Third, the editor confides that a much larger number of topics would have been included, but, alas, certain hard decisions were forced by fierce publishers, as well as the unyielding constraints of time and space. What remains is a fine but by no means ideal table of contents. In no other genre are cheers so quickly followed by sober apologies. I promised myself that I would never stoop to those three clichés, but now I can’t help it. They actually fit.
Renewal

It turns out that the philosophy of mind really is undergoing a remarkably lively renewal. We have already had one rebirth in the seventeenth century, and we’re probably somewhere in the midst of another one now. Early philosophical reflection on the mind was by turns dominated by Greek thought or Roman thought or Scholastic thought until Descartes shook us free of all of that and entangled us in something else entirely. One can actually witness the shift in Meditations on First Philosophy, as Descartes quite plainly distances himself from Aristotle and the rest. Here is a representative passage: ‘What then did I formerly think I was? Undoubtedly I judged that I was a man. But what is a man? Shall I say a rational animal? Assuredly not . . . ’. From Descartes’ new point of view, Aristotle gets it almost entirely wrong. There’s nothing essentially animal about what he is, and something more than ‘rational’ is needed to give a clear description of it. Descartes characterizes himself as a thinking thing: ‘a thing that doubts, understands, affirms, denies, wills, refuses; that imagines also, and perceives.’ It’s a long and durable list, and with it Descartes derailed hundreds of years of thought about what it is to be human, simultaneously turning philosophical attention and fashion to substance dualism. If modern reflection on mind was born with Descartes, contemporary philosophy of mind was born relatively recently in
an effort to find something better than dualism. The philosophy of mind has had a second renaissance, and we find ourselves in it now.

Some argue that one specific line of thought changed everything. Ryle and those who followed him might have been the ones to usher in a new era of reflection on the mind simply by finally and explicitly renouncing mind-body dualism. Certainly early scientific psychology aimed to avoid dubious, unreliable talk of private mental states. The wholesale rejection of the so-called Cartesian ‘ghost in the machine’ might have stimulated the philosophy of mind in a backhanded way. The implausibility of both dualism and behaviourism made room for fresh thinking about the reality of mental life, or so the story goes. Or maybe Chomsky’s work on language acquisition and the unconscious complexities behind it created the space needed for new reflection on the mind. Perhaps Turing’s work and the promise of artificial intelligence nudged us into fresh thoughts about mental life. Maybe the bare possibility of understanding cognition well enough to create or just model it was enough to shake us free of Descartes. Possibly a combination of these and other thoughts did it. Wherever you point the finger, something started happening in the 1950s. The ground really did shift. There was room for philosophers of mind to make a break from Descartes at last.

Others are less definite about the precise origins of contemporary thoughts about the mind. They’ll say that Ryle, Chomsky, Turing and others were just some of the many aftershocks caused by a deeper, amorphous conceptual reorganization which got its start a very long time ago. New thinking about the mind in the last half century or so has had more to do with our growing scientific picture of the universe and our place in it than something as minor as the rise of behaviourism or even computing. Perhaps Descartes could take seriously the notion of a soul existing apart from the body – an unextended thing located precisely nowhere yet somehow running the show – but we no longer can. After all, Descartes grew up in a world which had only just started to assimilate the notion that Earth was not the centre of the universe. As our understanding of such things as biology, evolution, psychiatry and physics expanded, the old view of mind just didn’t fit with the rest. How do you bolt a soul onto an evolved organism? What use is a thinking substance to theorizing about our psychological states? Where do you put soul stuff when your ontological inventory seems complete with matter and energy?

Nearby thinkers will point to the catalyst of the discoveries in specific sciences. They might say that what’s energized the rebirth in reflection on the mind are new thoughts and facts turned up by computing, neuroscience, experimental psychology, and on and on – maybe even something as unlikely as quantum mechanics. Contemporary philosophy of mind has always made more use of empirical discoveries than most of its philosophical cousins. Metaphysicians are largely unmoved by whatever pops up in particle accelerators.
Philosophers of mind, however, are as likely to march forward with thought experiments as they are with empirical experiments – citing psychopathologies, split-brain patients, mirror neurons, childhood development studies, neuroscience, facts about computing, and on and on. No one doubts that contemporary philosophy of mind is enlivened by a connection to empirical enquiry, but some argue that it owes its status as a hot philosophical topic to the need for careful reflection on the constant supply of new discoveries turned up by the study of both the brain and the body.
A Useful, Argumentative Companion

Reasonable people can argue about what caused the revival in reflection on mind, but few really doubt that something new is underway. Philosophers have turned their attention to a huge number of new or nearly new problems in the past few decades, and by most lights they are still finding new topics, new arguments, new theories and distinctions, maybe even new answers. It’s no surprise that there is a large number of books on the philosophy of mind – even many valuable companion volumes – but this one really does aim to do something at least a little different from the rest. For one thing, all of the companions in the Continuum Series have the slightly unusual aim of being of use to people operating at many levels of enquiry. They aren’t just for beginning students or old hands. The hope is that they’ll serve as useful desk references for people at many different stages, from classroom work to highly technical reflection and research. To this end you will find a number of sections designed to help beginning students hit the ground running as well as sections to assist advanced researchers move on through further thoughts on various subjects. For more details, have a look at the section called How to Use This Book.

However, what really marks this volume out against many other companions or supplements to the philosophy of mind, I think, is that it is a piece of the philosophy of mind, not just a report on it. That’s been the intention from the start. The contributors not only scout the relevant territory with a view to getting the reader up to speed with who said what, they also pitch their essays at fellow researchers. They say what they think and argue for their own views or against claims made by others. The pleasing result is a companion which doesn’t just nod in agreement or politely show you around. Instead, you’ll read philosophers getting on with the job, doing philosophy, arguing, jostling, persuading, objecting, judging and generally trying to get the truth into clear view. The result might not be consistent or coherent – in fact I’m sure there’s disagreement somewhere among all the contributors – but their thoughts are interesting and worth reading, and their work will certainly stimulate further thinking.
Topics

This book doesn’t cover everything or even almost everything that matters in the philosophy of mind. I used to cringe a bit when I read similar apologies for an obvious truth in other books: ‘if only we had more space to cover this and a bit more time to consider that’. No book could possibly include everything which matters, so why make excuses? Never again. Editorial decisions are almost always painful judgements – one thing ruled in and twelve perfectly respectable and interesting and important things ruled out. In most cases I’ve been guided by something more than my own feeble thoughts and limited experience: I’ve made repeated use of a number of helpful and patient advisors who know much more about the mind than I do. The topics covered in this book – from the main essays to the glossary entries – all made it in at the expense of something else based on the closest thing to a consensus I could get from many people. That’s not to say that I hereby abdicate responsibility – if the mix of topics could have been better, that’s entirely my fault. I also gave the contributors a free hand to approach topics more or less as they liked, given the general constraints of the series, and maybe firmer editorial guidance would have resulted in a more comprehensive or balanced volume. Then again, giving experts the opportunity to scout the territory as they see it has much to recommend it too. I also had to consider how the contributions might hang together, given who agreed to write about what, and that means that some topics could only appear within others, even though they might deserve star billing alongside a different mix of papers. So, for example, there’s nothing specific on qualia or the first person or intentionality, but each one of these important subjects turns up again and again in different papers. I also thought it might be good to consider not just large, abstract questions in the philosophy of mind but also some very specific, narrow problems. The hope is to convey something of the sweep of the philosophy of mind as it is really practiced. To give you a feel for the topics under consideration, we’ll now glance at each of the main papers.
Overview of the Contributions

There are eleven central essays in this book. Some take up broad topics like the nature of consciousness and the mark of the mental. Others deal with particular theories of mind, such as physicalism and dualism. Still others examine specific sub-topics such as mental causation and personal identity. The central essays are bookended by two more general pieces. The first, by Ian Ravenscroft, sets the tone with an introductory overview of the philosophy of mind. The last, by Paul Noordhof, considers some of the possible future directions the philosophy
of mind might take. We’ll begin with ‘Problems and Questions in the Philosophy of Mind’ by Ian Ravenscroft and briefly summarize the rest.
Problems, Questions, and Concepts in the Philosophy of Mind
Ian Ravenscroft

Ravenscroft identifies four broad areas of research in the philosophy of mind as it is practiced today: metaphysical issues concerning the relation between the mental and the physical, epistemological questions about our knowledge of our own minds and the minds of others, themes associated with the influence of the behavioural and cognitive sciences, and methodological issues concerning the right approach to the study of mind. His main focus is the philosophy of mind’s main focus: the mind-body problem. He works through various theories of mind: substance dualism, reductive and non-reductive physicalism, the supervenience relation, eliminativism, and instrumentalism. Closely connected to the question of the relationship between the mind and body is a set of problems having to do with mental representation. Ravenscroft briefly takes up the representational theory of mind as well as several theories of content. Next, he considers mental causation and the specific sense in which certain problems arise for physicalist theories of mind along with some possible solutions to them. Finally, he considers various answers to what might be contemporary philosophy of mind’s understanding of its own central question: how can phenomenal consciousness exist in a purely physical universe? Jackson’s knowledge argument and several replies to it are considered. In the end he argues that even if we haven’t gotten past certain apparently intractable problems, at least we have a more sophisticated set of tools and concepts to help us understand them than at any other time in our history. Maybe we have a good grip on what it is that we don’t know, and that’s a kind of progress.
Consciousness
Daniel D. Hutto

Hutto begins by admitting that there is no ‘clean, clear and neutral’ account of what we mean by consciousness. He goes on to do what many others do: pin down his topic with examples, alongside certain nearby expressions which seem to strike a chord: ‘what it’s like’, for example. He also lists some of the characteristic features of conscious experience mentioned in this connection, such as phenomenality, intentionality, subjectivity, unity, temporal extension and self-awareness. He then takes up reductively naturalistic explanatory frameworks which by turns equate consciousness to something else – brain states, functional states,
and so on. Working through the main arguments for and against non-reductive naturalism, Hutto settles on the so-called hard problem of consciousness: how could this functional state or that representational property ever give rise to conscious experience? The best we can do, it is sometimes concluded, is hope for a specification of the non-reductive relations which hold between conscious and other properties. The main replies to this thought are considered and judged. In the final section, ‘Rethinking Metaphors of Mind’, Hutto considers a new, or anyway different sort, of reply to the hard problem and the explanatory gap between mind and world: the attempt to explain away the differences which get in the way of the explanatory identities we posit. He takes up Dennett’s analysis of our attempts to understand consciousness, turning eventually to reflection on representationalism and enactivism. In the end, Hutto concludes that a satisfying naturalistic understanding of consciousness will require a network of theories operating at different levels. More than this, what’s needed is an understanding of which theories work best at which level – no doubt alongside fresh thinking about consciousness itself.
The Mark of the Mental
Fred Adams and Steve Beighley

What is a mind? What’s the difference between having a mind and not having one? Adams and Beighley take up these questions in detail, considering several candidates for what picks something out as mental as opposed to physical or merely non-mental. They consider single property views which hold that all mental states are mental because they share one property. Adams and Beighley work through points for and against incorrigibility as a mark of the mental as characterized by Rorty and Dennett. They also consider work on intentionality in the writings of Crane, Dretske and Tye. All such views are found wanting. Next the authors examine a so-called system view, a type of view which ‘says there is a single set of properties that all minds must have, but not every state that is part of the system must possess these properties themselves’. Searle’s conception of consciousness is considered in this connection, and objections are raised. Their own account of the mental, a version of the systems view, is articulated and defended in the final section. Beginning with a conception of the function of mind, they go on to argue that mental systems share a cluster of properties. To count as mental such systems must first of all possess non-derived meaning. They must also do more than just carry information – the states of a truly mental system must rise to the level of meaning. States of a mental system must also be capable of misrepresentation. Finally, they say that to count as mental, a system must exhibit intentional behavior.
Substance Dualism
T. J. Mawson

In this spirited defense of substance dualism, Mawson gets right to it. He assumes, along with almost everyone else, that there is physical stuff which one might pick out in part with paradigm cases: the stuff which makes up tables, chairs, stars and so on. If you take it that only this kind of stuff exists, you are a physical substance monist. Substance dualism, however, says that there is this physical stuff and another type of stuff as well. According to the dualist, the other type of stuff is in essence capable of thought in the broadest sense of that term, a property most definitely not had by physical stuff. He examines objections to substance dualism. There is of course the point that any monism has the advantage of simplicity over any dualism, but can we do better than Ockham’s razor? Mawson considers two sets of problems. First, there are notorious difficulties associated with identifying souls. What makes one different from another, and how can we know anything about souls other than our own? The second set of problems has to do with perhaps the loudest objection to dualism: troubles with understanding the alleged interaction between physical and mental stuff. Mawson is unmoved by both sorts of problems, and he gives replies to each. He goes on to consider three reasons to believe that substance dualism is true. Dualism lines up well with certain commonsense intuitions having to do with personal identity, freedom and the qualitative features of conscious experience, but physicalism has more than a little trouble with each one. He concludes with a Moorean argument which forces a choice between the simplicity of physicalism on the one hand and the truth of those commonsense intuitions which favor dualism on the other.
Physicalism
Barbara Montero

Just what is the main thesis of physicalism – what does it mean to say that everything is physical? Montero takes the matter up in detail, working through the meanings of the words ‘everything’, ‘is’, and ‘physical’. She begins by considering different approaches to defining the scope of physicalism, concluding that we should understand ‘everything’ in the most inclusive way possible: everything, whatsoever, is physical. Next, she focuses on the relation between the fundamentally physical properties and higher-level properties such as mental properties. How should we understand the ‘is’ relation? Putting other possibilities to one side, her answer is couched in talk of upward determination – with some provisos, worlds which duplicate the fundamental physical properties
and laws of our world also duplicate all the properties of our world. What, then, is the physical? Again Montero considers a number of answers, plumping finally for a negative characterization: the physical is the fundamentally non-mental, non-divine, and non-normative.
Folk Psychology and Scientific Psychology
Barry C. Smith

We see other people as acting in accordance with beliefs and desires – it’s a large part of how we understand other people as people, how we see them as taking a course of action deliberately and with an end in view, and it’s how we explain and predict what they do. The sciences of the mind and brain have revealed a great deal about what makes us tick in other senses. We have a grip on the cognitive states and mechanisms which also serve to explain our behavior. Smith considers the relationship between these two views that we have of ourselves, primarily through the work of Davidson, Dennett and Fodor. He claims that a good account of folk psychology should provide a rational explanation of action, accommodate the causal efficacy of the mental, and make room for the difference between first- and third-person ascriptions of mental states. Following a careful account of folk psychology and scientific psychology, as well as a brief wave at eliminative materialism, the work of all three philosophers is judged on the basis of these criteria. Smith maintains that even if we can find an account of folk psychology which satisfies those three requirements, we are still left with a number of questions. Exactly how do we succeed when we ascribe beliefs and desires to others? What role does consciousness play? How do the emotions jibe with belief-desire psychology? He concludes by pointing towards some potentially fruitful answers.
Internalism and Externalism in Mind
Sarah Sawyer

There is a large debate between internalists and externalists concerning the very nature of mental properties – is it just what’s on the inside that counts? In this paper Sawyer wades in by first giving an account of both views. Internalism, roughly, holds that no two individuals could differ psychologically without differing physically. Externalism, roughly, holds that individuals could be physically identical but diverge psychologically, given certain external differences in, for example, their physical or social environments. Well-known thought experiments owed to Putnam and Burge are considered. Sawyer gives an account of the many possible forms of each kind of view which seem to fall out of such reflections.
She goes on to outline metaphysical considerations having to do with naturalism and mental causation which are thought to score points in favor of internalism. Next, she considers epistemological claims concerning certain features of self-knowledge which are thought to undermine externalism. In adjudicating between the various objections and replies throughout her paper, Sawyer emerges as an externalist, and she says something about her reasons in a concluding section.
The Philosophies of Cognitive Science
Margaret A. Boden

As her title suggests, Boden maintains that there’s much more to cognitive science than merely the science of cognition – there are many thinkers working on various research programs producing a wealth of insights – and her paper captures something of the breadth of the subject. She takes up functionalism first, as many consider it the core philosophy of the field, and examines several possible variations in the work of Putnam, Fodor, and Dennett. An interest in neuroscience led many from functionalism to connectionism and parallel distributed processing. Boden scouts the relevant objections and replies. She then takes up the contribution cognitive science has made to understanding the computational processes associated with representation. Thoughts about representation, and in particular the senses in which intentionality might be understood in terms of situatedness or embodiment, can lead to reflection on the extended mind. Boden considers Clark’s and Chalmers’ claim that our minds are somehow partly located in the external world, working through some difficulties for the view along the way. She then considers the nearby notions of embodiment, enactiveness and phenomenology as they appear in the continental tradition, as well as the influence this has lately had on cognitive science. She concludes by examining a large number of contributions cognitive science has made to central features of our understanding of consciousness, the mind and life. In a concluding note Boden makes a case for the claim that cognitive science really has provided not just good questions, but satisfying answers in the philosophy of mind.
Representation
Georges Rey

At least some mental states are representational – they ‘stand for’, ‘refer to’ or are ‘about’ something else. How might such states fit into our general understanding of the world? Rey considers the potential of the computational/
representational theory of thought to make sense of such states within a physicalist framework. Following a short characterization of the view, he considers various problems raised by our representational capacities which have to do with their referential opacity, and with the detection of non-local, non-physical properties. He considers two general strategies for providing an account of intentional content. One might go internalist and think that meaning is some sort of internally specifiable state in the head: an image or stereotype, or an inferential role; or one might opt for one of many externalist approaches: historical causal theories, co-variation ‘locking’ theories, or teleofunctional theories. Rey works through points for and against each, and concludes with some remarks on the prospects for combining the two sorts of approaches.
Mental Causation
Neil Campbell

Mental events seem to stand in causal relations to physical events: my hopes and fears apparently cause my smiles and frowns. Questions about mental causation have a very long history, but many contemporary philosophers of mind who are drawn to some version of non-reductive physicalism face a new version of it. Motivations for non-reductive physicalism seem solid enough. The anomalousness of the mental leads many to the view, while others are persuaded by the innocent thought that mental events or properties are multiply realizable in different physical forms. Whatever the motivation, epiphenomenalism – the unattractive possibility that the mental has no effect on the physical – seems only a few steps away. Some worries about mental causation are raised against Davidson’s anomalous monism. Other objections are couched in terms of Kim’s principle of causal-explanatory exclusion. Campbell considers both objections in close detail. In his final section he argues that these objections depend on dubious metaphysical assumptions about the nature of events. The objections, he concludes, are either misguided or question begging.
Personal Identity
E. J. Lowe

Against the backdrop of some useful clarifications of both the identity relation and the notion of a criterion of identity, Lowe takes up the question of a criterion of personal identity. He follows Locke in thinking that before we can establish a criterion of identity for persons, we have to say what kind of thing persons
are. We run into a certain sort of trouble here, however, because philosophers have come up with a very long list of candidates: immaterial substances, material substances, phases of substances, bundles, transcendental entities, and even mere fictions. Lowe speculates that our very status as persons – and in particular a certain fact about the first-person pronoun – is part of the problem. He goes on to consider Locke’s so-called memory criterion of personal identity, because it is the first explicitly formulated criterion and, no doubt, the one which has had the most influence. He considers Reid’s objection to Locke’s view and some possible modifications which might side-step it, alongside further objections and replies. He concludes with a consideration of some alternatives to variations on Locke’s criterion, including the possibility that personal identity is primitive and simple. This last, suggestive possibility goes some way towards explaining why formulating a criterion of personal identity is so difficult for us.
Embodied Cognition and the Extended Mind
Michael Wheeler

Wheeler describes the hypotheses of embodied cognition and the extended mind as two stopping-off points in the flight from the Cartesian view of intelligent action. In a nutshell the Cartesian view has it that the mind guides action in a manner largely conceptually independent of the facts of embodiment. However, recent thoughts about how we actually solve problems in the world have made the Cartesian account less and less attractive. Wheeler’s aim is to examine the move not just away from the Cartesian picture but from embodied cognition to cognitive extension. He first sheds light on the embodied cognition hypothesis by marshalling examples from recent work in cognitive science. Once a case has been made for a certain conception of intelligent action, Wheeler argues that we face a philosophical choice between a radical body-centrism and a new sort of functionalism. Building up an argument from parity for the extended mind hypothesis, Wheeler argues that the functionalist option is more attractive. He concludes by reinforcing an aspect of the parity argument.
Current Issues in the Philosophy of Mind
Paul Noordhof

In this contribution Paul Noordhof brings us right up to date with an account of how three large topics central to the philosophy of mind have developed over the last few decades. He takes up recent developments in our understanding of
physicalism – how we now characterize it and how this characterization affects our understanding of mental causation. A discussion of phenomenal consciousness follows, and Noordhof considers the so-called explanatory gap and various attempts to put dualist intuitions to one side. He discusses a feeling, recently sinking in, that these efforts are doomed, and he goes on to examine the role that an appeal to representational properties might play in the debate. In a final section, Noordhof examines new approaches to our understanding of intentionality and the normativity of the mental.
1
Problems, Questions and Concepts in the Philosophy of Mind Ian Ravenscroft
Introduction

Philosophy of mind has a long and distinguished history, but it is not the aim of this chapter to provide an historical overview of philosophical investigations of the mind. Rather I intend to elaborate on what I take to be the most significant themes in contemporary philosophy of mind. It is worth noting, right at the outset, that philosophy of mind has been one of the most dynamic – perhaps the most dynamic – areas of English-speaking philosophy over the last half century. Very roughly, research in this field falls into four broad areas. (Needless to say, a tangled web of connections exists between these areas, rendering the boundaries to some extent arbitrary).

1. Metaphysics of Mind: The nature of the mental, and in particular the study of how the mental relates to the physical, is one of the most enduring philosophical puzzles and has been at the forefront of contemporary work in philosophy of mind. This problem is often simply called the ‘mind-body problem’, and will take centre stage in what is to come. Within the broad area of the mind-body problem are a host of specialized issues including mental causation, mental representation and consciousness. Other specialized metaphysical issues in the philosophy of mind include perception, memory, action and intention. Very often these more specialized issues are connected with questions in other areas of philosophy. For example, discussions of moral responsibility often involve claims about whether or not an action was intentional.

2. Epistemological Issues: The mind raises special epistemological questions concerning how, and to what extent, we have knowledge of our own mind and the minds of other people. Over the last few decades philosophical work on these questions has been influenced by work in cognitive and developmental psychology and evolutionary biology.
3. Behavioural and Cognitive Sciences: In addition to the vigorous development of the philosophy of mind over the last half century there has been an extraordinary growth in the behavioural and cognitive sciences. Philosophers have influenced – and been influenced by – these fields, which include computer science, neuroscience, developmental and cognitive psychology, evolutionary biology, and economics.

4. Methodological Issues: What is the correct approach to the philosophical study of mental phenomena? Is there room for a priori (or ‘armchair’) investigations, or are all the issues empirical? If all the issues are empirical, is there anything left for philosophers to do in this area, or should we just humbly abandon the field to the behavioural and cognitive sciences? Some have argued that the role of the philosopher of mind is integrative or synthetic. There is the task of integrating the various aspects of one cognitive science into a satisfying whole, and also the task of integrating a range of cognitive sciences into a comprehensive vision of human mentality. Kim Sterelny has called these the internal and external integrative projects respectively (Sterelny, 2003, pp. 3–5).

So extensive has been the research over this period, and so enormous the resulting literature, that any author of an overview like this one is forced to make hard decisions about what and what not to include. I will focus on contemporary approaches to what I earlier described as ‘one of the most enduring philosophical puzzles’: the mind-body problem. Even within this restricted domain I will be forced to make some drastic editorial decisions. For example, there are important approaches to thinking about the mind-body problem over which I vault without a glance. (I flinch, but I do not glance.) In particular, there is an important strand of mid-twentieth century philosophy of mind which includes on the one hand the logical positivists and on the other the later Wittgenstein and Ryle, that I do not discuss at all. This is not because I believe these thinkers to be unimportant, but because I believe that I can adequately develop an account of contemporary philosophy of mind without paying these figures direct attention. Their influence on contemporary philosophy of mind was considerable, but it was channelled through figures such as Smart and Dennett, about whose views I do have something to say.

In the next section I outline some of the key moves that have been made on the mind-body problem, beginning with Descartes because, in striking ways, his problems have turned out to be our problems. Among other matters, reductive and non-reductive physicalism and the idea of supervenience are sketched in that section. Most contemporary philosophers of mind endorse some version of the representational theory of mind: they think of the mind as an organ for
developing and manipulating representations. The third section briefly surveys some of the difficulties which arise when trying to understand the nature of mental representation. The mind-body problem challenges us to account for the close connections which minds apparently enjoy with their associated bodies. Those connections apparently include causal connections: the state of my body causally influences the state of my mind, and vice versa. The fourth section is devoted to an exploration of the way the problem of mental causation emerges in the context of physicalism. The fifth section takes up the problem of phenomenal consciousness which presents the mind-body problem in especially sharp relief, and brings together many of the issues discussed in this overview. I allow myself a brief conclusion.
Mind and Body

Many problems in the philosophy of mind emerge because we are deeply committed to two distinct ways of thinking about ourselves as human beings. Our commitments are in tension, forcing us to reconsider some of our most cherished ideas about what it is to be human. On the one hand, we think that human beings are closely connected with physical bodies living in a physical world. (I will have a lot more to say about what ‘closely connected’ might mean in subsequent sections.) Our bodies are assemblages of atoms and energy, existing in a space-time manifold and answering to the same laws as all other assemblages of atoms and energy. We are also biological creatures, having many features in common with other mammals and sharing a common ancestor with all the living creatures on Earth. On the other hand, we think of ourselves as having minds. We have an array of mental states and dispositions. We get hungry and we get hurt. We fall in and out of love. We are prone to anger or sympathy. We have beliefs and preferences. We can perceive our environment through a number of modalities and can remember the past. We can learn and we can reason. We represent the way the world was, is, and may come to be. We even represent ways the world cannot be. And, most mysterious of all, we are conscious. These conceptions of ourselves are in tension because it is not obvious how all of the features of mental states to which we are committed can be squared with our conception of ourselves as closely connected to physical bodies. I will set the scene for an exploration of these features by discussing in what sense we are ‘closely connected’ to physical bodies. In particular, I will discuss the relationship between mental properties and brain properties. We will see that every available answer to the question of the relationship between mind and body throws up new challenges.
Dualism

The great French philosopher, mathematician and scientist, René Descartes, articulated a theory of the relationship between mind and body which is now called ‘interactive substance dualism’ (or ‘Cartesian dualism’) (Descartes, 1637/1985). There is no agreed definition of the term ‘substance’; however, for present purposes I will follow David Armstrong (1968, p. 7) and take a substance to be something which could exist alone in the universe. Importantly, substances have properties. Thus the Sun is a substance because we can imagine a universe containing nothing but the Sun. One of the Sun’s properties is its having a mass of 2 × 10³⁰ kg. Having a mass of 2 × 10³⁰ kg is not a substance because the universe could not contain nothing but the having of a mass of 2 × 10³⁰ kg: there would have to be in addition something which had that mass.1

Interactive substance dualism is the doctrine that there are two fundamentally different kinds of substances in the world: non-physical mental substance and physical substance. Human minds are mental substances; human bodies are physical substances. Descartes believed that mind and body interact: information about the body’s environment is sent via the sensory organs to the brain and from there it passes to the mind; instructions on how to respond to the environment are sent from the mind back to the brain which then orchestrates the body’s movements. Descartes did not regard the brain as merely a conduit through which information passed to and from the mind: he allowed that the brain played a role in processing perceptual signals and organizing motor responses. However, Descartes did insist that all higher cognitive functions, especially reason and language, are activities of the non-physical mind (Descartes, 1637/1985).

For Descartes the ‘close connection’ between mind and body – or more specifically, between mind and brain – is causal. Our body’s being in a certain state (e.g. damaged) causes our mind to be in a certain state (pain). Similarly, our mind’s being in a certain state (pain) causes our body to be in a certain state (withdrawing from the source of damage). Famously, Princess Elizabeth of Bohemia, one of Descartes’ correspondents, pointed out an acute tension in Descartes’ views (Anscombe et al., 1954, pp. 274–5). On the one hand, Descartes argued that the mind and body are fundamentally distinct substances; on the other, he held that there are causal interactions between the two. How can such radically distinct substances interact? Descartes was unable to offer a persuasive reply. Elizabeth’s challenge is, quite properly, often regarded as a very serious objection to interactive substance dualism. We will see, though, that it is just one version of the general problem of explaining how mental states have causal powers.
Elizabeth challenged Descartes to account for the causal interactions between the non-physical mind and the physical brain. But this is not the only way causation makes trouble for interactive substance dualism. If interactive substance dualism is correct, then some non-physical mental event, M, caused a physical brain event, P. But modern science strongly supports the view that the world is physically closed; that is, it supports the view that every physical event has a sufficient physical cause.2 Consequently, it seems very likely that there exists a prior physical event, P*, which is a sufficient cause of P. So, interactive substance dualism is committed to endorsing one of two positions, neither of which is very attractive.

Position 1 Modern science is wrong: the world is not physically closed.

Position 2 Event P is over-determined. That is, event M is a sufficient cause of event P’s existing and event P* is a sufficient cause of event P’s existing.

Position 1 is unattractive because it amounts to endorsing substance dualism in the face of our best theories of the world. Position 2 is unattractive because it involves postulating a very large number of causes which makes mental properties a likely target of Ockham’s razor. The threat of over-determination is something we will come across again in the fourth section. To foreshadow: physicalists about the mind run into their own version of the over-determination problem.

Descartes thought of the mind as a non-physical substance which had mental properties. But this way of conceiving of dualism is not compulsory. Interactive property dualists accept that there is only one kind of substance in the world – physical substance – but they think that there are two kinds of properties in the world – physical properties and non-physical mental properties. According to this view, some special physical objects have non-physical mental properties. The living human brain is the obvious – and perhaps only – example of a physical object which has non-physical mental properties.

According to interactive property dualism, human brains can instantiate a certain physical property – let’s call it ‘Q’. Q causes the brain to have a non-physical mental property, M. M in turn may cause a further physical brain property to be instantiated – say property P. Clearly, Princess Elizabeth’s problem arises with interactive property dualism: exactly how do physical and non-physical properties interact? In addition, the claim that the world is physically closed generates a problem for interactive property dualism. For, according to physical closure, the physical brain property P has a sufficient physical cause; for example, it was caused by
physical brain property P*. Consequently, interactive property dualism is committed to endorsing one of two uncomfortable positions:

Position 1 Modern science is wrong: the world is not physically closed.

Position 2 Property P is over-determined. That is, property M is a sufficient cause of the existence of property P and property P* is a sufficient cause of the existence of property P.

As we saw earlier, neither denying physical closure nor endorsing over-determination is attractive. One way for the property dualist to avoid the problem of over-determination is to endorse epiphenomenal property dualism. On this view, physical brain properties cause non-physical mental properties, but not vice versa. Mental properties are ‘idle wheels’, driven by the physical engine of the brain but driving nothing. To adopt a political metaphor, they are aristocratic properties, relying for their existence on hardworking physical properties but doing no work themselves. Since mental properties have no causal impact on the physical realm, no over-determination of physical properties occurs. However, as we shall see in the fifth section, epiphenomenalism comes at a heavy price.

In addition to his views on the causal relations between mind and body, Descartes also held that mental states are essentially objects of introspection; that is, they are essentially conscious. Two powerful intellectual movements unseated this idea in the twentieth century. The first was psychoanalysis. Freud sought to explain a range of neurotic symptoms in terms of unconscious desires (e.g. see Freud, 1917/1991). The aim of psychoanalytic therapy is to bring such desires into the light of consciousness. The second was the rise of cognitive psychology in the 1950s and 1960s. Chomsky’s account of syntactic processing postulated rich informational structures which play a central role in understanding and producing grammatical sentences, but which are not accessible to consciousness (Chomsky, 1994). Postulating such structures became standard practice in cognitive psychology. David Marr (1982) and Irvin Rock (1983), for example, postulated a range of unconscious informational structures in their work on vision.

It is commonly supposed that consciousness poses a special problem to physicalist theories of the mind. How, Colin McGinn asked, can technicolour phenomenology arise from soggy grey matter? (McGinn, 1991, p. 1). We will look at the challenge consciousness poses to physicalism in the fifth section. For the moment, it is worth noting that consciousness also poses a serious problem to dualism. How, we might ask, can technicolour phenomenology arise from non-physical soul stuff? Even if we accept that our mental states are entirely revealed to us by introspection, it is apparent that consciousness
remains a mystery. At best we have introspective access to the contents of consciousness; introspection tells us nothing about the structures and processes which make consciousness possible. Descartes might reply that there are no such structures and processes: thoughts just are conscious entities. But this seems unsatisfactory. We deserve an account of how the universe came to contain entities which simply are conscious. The demand for such an account is especially pressing once we accept that humans evolved from animals which did not have conscious experiences.
Reductive Physicalism

I will now leap ahead 300 years to the rise of reductive physicalism (also called the identity theory) in the 1950s. The theories of mind which populate the intervening decades are neither unimportant nor uninteresting; however, the problems on which I will focus in the next three sections – mental representation, mental causation and consciousness – take their modern forms in the context of the physicalist theories of the mind which have their origin in reductive physicalism.

To a first approximation, reductive physicalism identifies mental properties with brain properties. Crucially, the brain properties with which mental properties are identified are held to be physical properties. To use an old example from the 1950s – an example whose details should not be taken too seriously – the property being in pain is identical to the property having c-fibre activity.3 It is important to stress that reductive physicalism proposes type identities. The claim is not merely that every instance of pain is identical to an instance of a physical property; rather, the claim is that all instances of the type pain are identical to instances of the type c-fibre firing.

Early proponents of reductive physicalism took the property identities discovered by science as their model. For example, J. J. C. Smart drew an analogy between the identity of pain and c-fibre firing (on the one hand) and the identity of water and H2O (on the other). Crucially, the discovery that water is H2O was the outcome of a process of scientific investigation; it is not something that can be discovered by conceptual or linguistic analysis. Similarly, Smart thought that identities between mental properties and brain properties would be discovered by scientific investigation. It is no objection to reductive physicalism that ‘pain’ and ‘c-fibre activity’ don’t have the same meaning, nor is it an objection that the proposed identities cannot be discovered a priori. (See the seminal papers by U. T. Place [1956], H. Feigl [1958] and Smart [1959].)
Non-reductive Physicalism

In the 1960s, two separate developments drove many philosophers to conclude that reductive physicalism is mistaken. The first development led to functionalism; the second to anomalous monism. In 1967 Hilary Putnam pointed out that mental properties are, at least in principle, multiply realizable.
Mental property M is multiply realized if and only if some instances of M are identical to instances of property P1, whereas other instances of M are identical to instances of property P2 (P1 ≠ P2). It might be, for example, that while pain in humans is identical to c-fibre firing, pain in dolphins is identical to d-fibre firings. It may even be that the pain I am experiencing now is identical to an instantiation of brain property B1, whereas the pain you are feeling now is identical to an instantiation of brain property B2 (B1 ≠ B2). The possibility of multiple realizations presents a challenge to reductive physicalism because it raises the possibility that mental property M cannot be identified with brain property B. Non-reductive physicalism takes the possibility of multiple realizations entirely seriously, and claims only that each instance of a mental property is identical to an instance of a physical property.

Reductive physicalism has an easy answer to the question ‘What do all instances of pain have in common?’. According to reductive physicalism, every instance of pain is an instance of c-fibre firing. But this answer is not available to the non-reductive physicalist who denies that every instance of pain has to be identical to an instance of c-fibre firing. So what answer can non-reductive physicalism advance?

Functionalism is, in effect, an answer to the question ‘What do all instances of mental property M have in common?’ which is compatible with non-reductive physicalism. According to functionalism, mental properties are characterized by their causal roles. Pain, for example, is the property which is caused by bodily damage and causes withdrawal from the source of damage; has important causal links to anxiety and to desire; and, in conjunction with certain beliefs, can lead to particular behaviours (e.g. if I believe that applying ice to the damaged part of my arm will reduce my pain, I will go to the refrigerator and look for some ice). Putnam (1967) observed that different physical properties can occupy the same causal role. Consider the property of being a thermostat. A number of different physical properties can occupy the causal role characteristic of being a thermostat. Similarly, a number of different physical properties can occupy the causal role characteristic of pain. It may be that in humans the causal role characteristic of pain is occupied by c-fibre firings whereas in dolphins it is occupied by d-fibre firings. Or it may be that in me the characteristic causal role of pain is occupied by an instance of brain property B1, whereas in you it is occupied by an instance of brain property B2 (B1 ≠ B2). So functionalism is compatible with the multiple realization of mental properties and is an important way of elaborating non-reductive physicalism.4

Functionalism is one of the theories which rapidly replaced reductive physicalism in the 1960s. The other was Donald Davidson’s anomalous monism. Davidson begins by endorsing three principles which, prima facie, are inconsistent: (1) the principle of causal interaction: mental and physical events causally
interact; (2) the principle of the nomological character of causality: causal relationships always fall under strict laws; (3) the principle of the anomalousness of the mental: there are no strict laws relating mental and physical events (see Davidson, 1970, pp. 223–4). If, as (1) requires, there are causal relations between mental and physical events, then by (2), there must be strict laws relating mental and physical events. And yet by (3) there are no such laws. Davidson offers an ingenious resolution of this (apparent) inconsistency – a resolution which appeals to non-reductive physicalism. However, before turning to these matters I will briefly explain why Davidson takes the mental to be anomalous.5

Davidson correctly observes that we do not attribute mental states one by one; rather, we attribute extensive complexes of mental states (Davidson, 1970, p. 221). For example, if I observe Jones walking towards a mailbox with a letter in his hand, I will attribute to him not only the desire to post a letter, but also the belief that placing a letter in a mailbox is the way to post it; that the red object nearby is a mailbox; that walking is an effective means of covering the distance between his present location and the mailbox; etc. But there is a very large number of sets of mental states that would account for Jones’ action. (Jones believes that his letter is about to explode; he desires to prevent the explosion; he believes that the mailbox is in fact a bomb disposal device placed there by MI5 . . . ) According to Davidson, we select one set of mental states (or a small number of such sets) from this vast range of possibilities by applying a principle of charity: we assume that the target is rational. But the notion of rationality is a normative one that is not found in the physical sciences. Consequently, we cannot expect to find laws linking the mental realm with the physical realm.

It is important to note that Davidson is not claiming that the attribution of mental states requires that the target be perfectly rational. On the contrary, he explicitly claims that the attribution of minor cognitive slips is only possible because we assume that people are by and large rational: ‘Crediting people with a large degree of consistency . . . is unavoidable if we are to be in a position to accuse them meaningfully of error and some degree of irrationality’ (Davidson, 1970, p. 221).

Let’s now turn to Davidson’s resolution of the inconsistency mentioned above. According to the anomalousness of the mental, there are no strict laws linking a mental event, described in psychological language, with a physical event, described in physical language. But if non-reductive physicalism is true, every mental event token is a physical event token, and therefore has a description in purely physical terms. Consider an event E which has both a mental description, M, and a physical description, P. By the anomalousness of the mental there is no strict law linking E, described as M, with some other physical event. But there may be a strict law linking E, described as P, with some other physical
event. It is in virtue of their physical realization that mental events exhibit lawful relationships with other physical events, and engage in causal relations with both other mental events and other physical events. So non-reductive physicalism provides us with a resolution of the tension we have noted between the three principles. To that extent non-reductive physicalism is supported (see Davidson, 1970, pp. 223–5).
Supervenience

According to both reductive physicalism and non-reductive physicalism, mental properties depend on physical properties. It is in virtue of having a certain set of physical properties that a person has their mental properties. The notion of supervenience has been developed to articulate and explore what ‘depends on’ means in this context. There are a number of different ways of conceiving of supervenience; one perspicuous approach says that property P supervenes on property Q if, and only if, fixing the distribution of Q fixes the distribution of P. For example, if we fix the distribution of the property of being H2O we thereby fix the distribution of the property of being water; consequently, the property of being water supervenes on the property of being H2O. (God did not fix the distribution of H2O and then get on with the job of fixing the distribution of water; by fixing the distribution of H2O, God thereby fixed the distribution of water.) An alternative way to describe supervenience is in terms of variation: property P supervenes on property Q if, and only if, there can be no variation in P without variation in Q. In a free market, the price of fish depends on the demand for, and supply of, fish. There can be no variation in the price of fish without a variation in either the demand for or supply of fish. In other words, the price of fish supervenes on the demand and supply of fish. (For extended discussions of supervenience, see the essays in Kim, 1993c.)

The property of being water is sometimes called the supervenient property (relative to the property of being H2O), and sometimes called the higher level property (relative to the property of being H2O). Conversely, the property of being H2O is sometimes called the subvenient property (relative to the property of being water), and sometimes called the lower level, or base, property (relative to the property of being water).

Reductive physicalism posits the supervenience of mental properties on physical properties. If the type identity theory is true, the distribution of c-fibre firings fixes the distribution of pain; that is, pain supervenes on c-fibre firings. Alternatively, there can be no variation in the painfulness of a person’s experience without a variation of their c-fibre firings. Non-reductive physicalism also posits supervenience relations between mental properties and physical properties. However, if mental properties are multiply realized, it will not be the case
that mental properties supervene on simple neurological properties like c-fibre firing; rather, mental states will supervene on complex conjunctions of physical properties. Pain, for example, will supervene on human c-fibre firings and dolphin d-fibre firings and so forth.

Brain states have both wide and narrow properties. The brain’s narrow properties are its intrinsic properties – the properties the brain has irrespective of its relations to other entities. They include the spatial relations of its anatomical parts and the spatio-temporal distribution of the neurotransmitters it contains. The brain’s wide (or ‘broad’) properties are the properties it has in virtue of its relations to other entities. Being caused by the sound of a piano is a wide property of some brain properties. With this distinction in place, the question arises as to whether mental properties supervene exclusively on the narrow properties of the brain, or on a mix of wide and narrow properties.

Many philosophers have denied that all mental properties supervene exclusively on narrow brain properties. The case for including wide properties in the subvenient base is sometimes advanced by appealing to thought experiments involving Swampman (Davidson, 1987a). Swampman is the outcome of an extraordinary conjunction of chance events in the Florida Everglades – lightning hitting a swamp where anaerobic bacteria have produced just the right combination of amino acids. Let’s say that this event happens right now. By an amazing coincidence, at this moment Swampman has a brain exactly like my brain in all narrow respects. However, while I have a memory of Sydney Harbour Bridge, Swampman does not. This is because a necessary condition on being able to remember Sydney Harbour Bridge is being in the right causal relations to it, and while I am lucky enough to have those relations, Swampman, over there in the Everglades, is not. We have therefore a case of mental variation without variation of intrinsic brain properties. It seems that the subvenient base of at least some mental properties includes wide properties. This is a theme to which we will return in the Mental Representation and Mental Causation sections.
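Before moving on, it may help to compress the two informal formulations of supervenience given at the start of this subsection into a single schematic statement. The regimentation below is my own rough sketch of the standard ‘no variation without variation’ idea, stated for a single pair of properties rather than, as is more usual, for whole families of properties:

```latex
% Schematic statement of 'P supervenes on Q' for properties of individuals:
% no two things can agree with respect to Q while disagreeing with respect to P.
\[
  P \text{ supervenes on } Q
  \quad\text{iff}\quad
  \forall x\,\forall y\;
  \bigl[\,(Qx \leftrightarrow Qy)
        \rightarrow
        (Px \leftrightarrow Py)\,\bigr].
\]
% Stronger, modal versions add necessity operators to this schema; which modal
% strength is appropriate is itself a disputed question.
```

Nothing in what follows turns on the details of this formulation; it simply makes explicit the sense in which fixing the Q-facts thereby fixes the P-facts.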
Eliminativism, instrumentalism and the intentional stance

I will close my selective history of the mind-body problem by briefly discussing three further contemporary views of mental states: eliminativism, instrumentalism and the intentional stance. Eliminativism is the doctrine that mental states don’t exist. According to eliminativism, the ontological status of mental states is akin to the ontological status of phlogiston. Arguments for eliminativism often begin with folk psychology. Very roughly, folk psychology is a theory of human psychology possessed by all normal human beings older than about five (Ravenscroft, 2010). According to many philosophers and psychologists, it is by deploying folk psychology that we understand ourselves and other people
as psychological agents. Now folk psychology posits a number of mental states – beliefs, desires, pains, imaginings, etc. Our commitment to these states should stand or fall with our commitment to folk psychology. (The analogy with phlogiston is often pressed here. Scientists accepted the existence of phlogiston because the phlogiston theory was, at the time, the best theory of combustion available. When Lavoisier replaced the phlogiston theory with the superior oxidation theory, the rational grounds for believing in phlogiston were removed.) If it turns out that folk psychology is a poor theory, we should abandon the mental states it posits. Eliminativists insist that folk psychology is indeed a poor theory and conclude that mental states don’t exist. The claim that folk psychology is a poor theory is typically defended by identifying its explanatory weakness and by arguing that it cannot readily be reduced to neuroscience (see, especially, Churchland, 1981). Many counter-arguments have been offered against this kind of eliminativist argument. Prominent objections include attacks on the idea that folk psychology is a poor or inadequate theory (e.g. see Horgan et al., 1985), and attacks on the claim that our commitment to the existence of mental states should stand or fall with the success of folk psychology (e.g. see Kitcher, 1984 and Von Eckardt, 1995).6

Like the eliminativist, the instrumentalist does not admit mental states into the ontological fold. But unlike the eliminativist, the instrumentalist still values mental states; in particular, mental states are regarded as indispensable instruments of prediction. Instrumentalism is not, though, especially attractive. For we can ask how it is that positing mental states is predictively successful, and the obvious answer is that positing such states is successful because they are real. (Compare: Why is atomic theory so successful? Because the states over which it quantifies are real.)

Daniel Dennett is often regarded as an instrumentalist (e.g. see Fodor, 1990a), although he would, I think, regard that label as at least partly misleading. Like Davidson, Dennett stresses the normative character of belief and desire attribution. According to Dennett, in the majority of cases we predict people’s behaviour by adopting what he calls the ‘intentional stance’:

Here is how it [the intentional stance] works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in many – but not all – instances yield a decision about what the agent ought to do; that is what you predict the agent will do. (Dennett, 1987b, p. 17)
Dennett offers the example of a chess playing computer program (Dennett, 1971). Humans can most effectively play against a chess program by taking the intentional stance towards it. From that stance we find ourselves attributing to it beliefs like ‘It thinks I’m going to sacrifice my rook’ and desires like ‘It wants to move its queen into a stronger position’. Of course, when we examine the program we do not find internal states with those contents; rather we find an algorithm which determines and ranks possible moves. Dennett insists, though, that the question of the reality or otherwise of mental states does not turn on whether we can make straightforward identifications of brain states with mental states (Dennett, 1987b, 1991b). He calls any system whose behaviour can be predicted from the intentional stance an ‘intentional system’. From the intentional stance we attribute mental states to humans and other intentional systems, and once we grasp those mental states we can detect patterns in the behaviour of those systems. For example we see that, despite considerable differences in history and environment, Joseph and Josephine have something in common – something which we capture with phrases of the form ‘___ really wants a cappuccino’. Armed with that attribution, we can predict with some accuracy the kinds of behaviour Joseph and Josephine will undertake.

According to Dennett the patterns we detect from the intentional stance are real – they are in the world to be detected by any intelligence capable of adopting the intentional stance. (Compare: the patterns described by Kepler’s laws of planetary motion are there in the world to be detected by anyone capable of adopting the ‘astronomical stance’.) There is a sense, then, in which Dennett is indeed a realist – although he is not what he has somewhat disparagingly called an ‘industrial strength’ realist (Dennett, 1991b, p. 42).

Earlier I raised the question of how, if mental state ascriptions aren’t literally true, appealing to mental states is a useful strategy. Dennett’s own response is that treating each other as rational agents works because we are, by and large, rational. And the explanation of our being rational agents in turn appeals to natural selection: rationality confers evolutionary fitness. However, it is an open question whether natural selection is likely to drive the evolution of largely rational creatures. (For discussion, see Stich, 1990.)
Mental Representation

It is widely accepted that at least some mental states are about, or represent, states of affairs. This is sometimes expressed by saying that many mental states have content. Prominent among the mental states that are widely assumed to have content are the propositional attitudes, which include beliefs, desires, fears, hopes and wishes. The name ‘propositional attitude’ derives from the
idea that such states consist of an attitude (e.g. of belief) towards a proposition which represents a state of affairs. Thus I can believe that the cat is wearing a hat and hope that Dr Seuss is amusing. Some other kinds of mental states have contents, for example, perceptions. It is controversial whether the subjective ‘feels’ of mental states like perceptions and sensations are representational. (That is an issue to which we turn briefly in The Knowledge Argument subsection.) In this section I sketch some of the key issues surrounding mental representation, and some of the key theories of content.

I will frequently refer to the syntactic and semantic properties of mental states. The syntactic properties of a mental state are those of its narrow properties in virtue of which it engages in cognitive processes. They are sometimes referred to as a mental state’s ‘shape’. (Think of how subway tokens engage with the turnstile mechanism in virtue of their narrow properties like shape and mass.) The semantic properties of a mental state are those properties it has in virtue of its representational properties. Truth and falsity are semantic properties par excellence.
The representational theory of mind

The representational theory of mind claims that propositional attitudes are representations, and that cognition involves sequences of representations that bear appropriate semantic relations to one another (e.g. see Sterelny, 1990). The computational theory of mind is a well-known form of the representational theory of mind. Computational processes take one or more states as input and yield one or more states as output. Crucially, the transformational processes involved are only sensitive to the input states’ syntactic properties. (A string of 1s and 0s in a computer’s CPU may represent the velocity of a subatomic particle or the number of servings of French fries consumed in Montreal last fall: the semantic properties of the string make no difference at all to how the computer handles the string.) If the computational processes are properly arranged, the outputs bear appropriate semantic relations to the inputs; for example, the transformations may be truth-preserving. According to the computational theory of mind, propositional attitudes are computational states with syntax and semantics, and mental processes involving propositional attitudes are computational processes (e.g. see Fodor, 1975, 1980).

Obviously, any representational theory of mind requires a theory of content – a theory of how mental states acquire their representational properties. In the Theories of Content subsection below, I provide brief sketches of two fundamental approaches to content.
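Before turning to theories of content, the claim that computational transitions are sensitive only to syntactic properties, yet can be arranged so as to preserve semantic relations such as truth, can be illustrated with a toy sketch. The code below is mine rather than the author’s, makes no claims about how actual cognitive systems are organized, and simply applies one inference rule by matching the ‘shapes’ of symbol structures:

```python
# A toy, syntax-driven inference step (modus ponens). The rule inspects only the
# structure of the symbol strings; it has no access to what the symbols mean.

def modus_ponens(premises):
    """Return the consequents Q of any pair P, ('IF', P, Q) found in premises."""
    conclusions = set()
    for sentence in premises:
        # Match the purely structural pattern of a conditional: ('IF', antecedent, consequent).
        if isinstance(sentence, tuple) and len(sentence) == 3 and sentence[0] == "IF":
            _, antecedent, consequent = sentence
            if antecedent in premises:
                conclusions.add(consequent)
    return conclusions

# Whatever 'RAIN' and 'WET_STREET' happen to represent, the manipulation is the same;
# but if the premises are true under the intended interpretation, so is the conclusion.
beliefs = {"RAIN", ("IF", "RAIN", "WET_STREET")}
print(modus_ponens(beliefs))  # {'WET_STREET'}
```

The point of the sketch is only that the transformation is defined over ‘shape’; it is the arrangement of such rules, not any sensitivity to meaning, that makes the outputs bear the right semantic relations to the inputs.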
Theories of content

Recent discussion of mental representation has largely focussed on two broad approaches to the issue of mental content: (1) conceptual role approaches and (2) information-theoretic approaches.
Conceptual Role Approaches

Beliefs form highly structured causal networks, and propositions form highly structured inferential networks. For any causal network of beliefs, C, we can identify an isomorphic inferential network of propositions, I. The isomorphism between C and I allows us to map each element of C onto an element of I. In other words, we can assign a proposition to each belief. That proposition is the belief’s content (e.g. see Block, 1986b). However, this approach to content faces at least three obstacles:

(1) The approach assumes that humans are largely rational; without such an assumption there would be little interest in finding isomorphisms between the causal structure of our corpus of beliefs and the inferential structure of a set of propositions. However, there is an extensive body of work in psychology which suggests that humans are not especially rational. (For details, and extensive discussion of the philosophical consequences, see Stich, 1990.)

(2) There exists a very large number of inferential networks isomorphic to any one causal network; consequently, there will be no unique assignments of contents to beliefs.

(3) The proposal in effect says that the content of a belief is dependent upon its causal relations, whereas intuitively the causal relations of a belief are dependent upon its content.

An important variant of conceptual role semantics is sometimes called the ‘map theory’:

[T]he proposal is that we match the head states that are beliefs with possible states of the world by the rule that each state of the head gets assigned the possible state of the world which is such that if it were the way things actually are, the behaviour that head state causes would realize what the subject desires. (Braddon-Mitchell et al., 1996, p. 181)

This view requires that the basic unit of semantic interpretation is an agent’s entire corpus of beliefs and desires. We assign to the agent those beliefs which, were they true, would bring about behaviour that satisfied the agent’s desires;
and we assign to the agent those desires which would bring about behaviour leading to their satisfaction if the agent’s beliefs were true.7 This proposal faces very similar difficulties to the proposal considered previously:

(1) It assumes that agents are in fact rational.

(2) It leads to the threat of non-unique assignments of content since, as remarked in the Non-Reductive Physicalism subsection above, there is a very large number of belief and desire sets capable of causing any given behaviour.

(3) It makes content dependent on causal role rather than vice versa.
Information-Theoretic Approaches

A variety of relations exist between a thought and states outside the body. According to externalist theories of content, a special subset of those relations determines the thought’s contents. Which external relations are the content-conferring relations? Many contemporary philosophers stress information-bearing relations.8 Back in the 1980s Fred Dretske proposed that a thought, T, is about a state of affairs, S, in virtue of carrying information about S (e.g. see Dretske, 1981). That is, T is about S if, and only if, the probability of S given T is 1. Smoke means fire because the probability of there being fire, given there is smoke, is 1; similarly, my sheep thought is about sheep because the probability there is a sheep present, given I have a sheep thought, is 1.

Ingenious though this suggestion is, it is immediately confronted by a pair of problems. The first is semantic promiscuity: Dretske’s view entails that meaning is superabundant. There are a very large number of states of the world which carry information about other states of the world. Smoke carries information about fire; tsunamis carry information about earthquakes; tides carry information about the relative positions of the Earth, Sun and Moon. It follows that meaning is not an especially psychological notion – it is not a unique feature of minds and mind-generated artefacts. The second problem is often called the disjunction problem. Under certain conditions I will have a sheep thought in the presence of a goat. (I might mistake a goat for a sheep on a particularly gloomy afternoon.) If that is the case, the probability of there being a sheep present when I have a sheep thought is less than 1. Indeed, it may be the case that the probability of there being a sheep or a goat present given I have had a sheep thought is 1. In that case my sheep thought is about sheep-or-goats rather than about sheep.

One important line of response to the disjunction problem appeals to natural selection. Frogs ‘snap’ their tongues at flies. Say that the frog tokens the concept FLY when a fly is in its visual field, and (ceteris paribus) tokens of
that kind cause tongue snapping. We naturally think that FLY is about flies. However, small boys sometimes throw BBs (lead pellets) at their pet frogs, eliciting tongue snappings; that is, frogs misrepresent BBs as flies. Let’s assume that the probability of there being a fly-or-BB, given that the frog tokens FLY, is 1. It follows that the frog’s FLY concept is about flies-or-BBs, not flies. In response, the teleological theory of content introduces the idea of the biological function of the frog’s FLY tokens. It is because the modern frog’s ancestors tokened FLY in response to flies that they were able to survive and reproduce. The existence of the modern frog is dependent upon ancestral tokenings of FLY in response to flies rather than inedible BBs. The theory of natural selection gives us a principled way of saying what FLY tokenings are for, and that in turn allows us to distinguish between appropriate tokenings which successfully represent flies, and inappropriate tokenings which misrepresent BBs as flies. (See especially Millikan, 1984 and Papineau, 1984.)

Jerry Fodor rather doubts that ‘Darwin is going to pull Brentano’s chestnuts from the fire’ (Fodor, 1990d, p. 70). In the frog’s ancestral environment, small, black, fast moving objects in the frog’s visual field were almost always flies. An ancestral frog which responded to small, black, fast moving objects would have survived and reproduced just as successfully as an ancestral frog which responded to flies. So the teleological theory of content has no way of determining whether the modern frog’s FLY concept refers to flies or to small, black, fast moving objects. The disjunction problem has returned. (For countermoves see Godfrey-Smith, 1994a, pp. 273–4.)

Very roughly, Fodor’s own solution to the disjunction problem turns on the asymmetric dependence of my goat-caused sheep thoughts on my sheep-caused sheep thoughts. Goats only get to cause sheep thoughts because sheep cause sheep thoughts: no sheep-caused sheep thoughts, no goat-caused sheep thoughts. Sheep thoughts are about sheep not goats (or sheep-or-goats) in virtue of the dependence of goat-caused sheep thoughts on sheep-caused sheep thoughts (see Fodor, 1990e).

Some philosophers are concerned that Fodor’s approach still faces the problem of semantic promiscuity (e.g. see Adams et al., 1994). Fred Adams (2003, p. 161) offers the following example, which he attributes to Colin Allen. Kadu antelopes bite the bark of the acacia tree, which in turn emits tannin as a deterrent. Now while the tannin response evolved as a deterrent to Kadu rather than to humans, if a human were to damage the bark of the acacia tree, tannins would be emitted. So the acacia’s tannin emissions are about Kadu bitings rather than damage by humans (or Kadu-bitings-or-damage-by-humans) because Kadu bitings cause tannin emissions, and human-caused tannin emissions are dependent on Kadu-caused tannin emissions. (No Kadu-caused tannin emissions, no human-caused tannin emissions.) So Fodor seems to be committed to the claim that the emitted tannin molecules are about Kadu bitings.
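For reference, the informational condition and the disjunction problem it faces can be stated compactly. The following is only my schematic restatement of the probabilistic idea sketched above, not Dretske’s own formalism:

```latex
% Dretske-style informational condition on content (schematic):
% a thought T is about a state of affairs S just in case T guarantees S.
\[
  T \text{ is about } S
  \quad\text{iff}\quad
  \Pr(S \mid T) = 1 .
\]
% The disjunction problem: if goats sometimes cause sheep thoughts, then
% Pr(sheep | sheep thought) < 1, while it may still be that
\[
  \Pr(\text{sheep} \lor \text{goat} \mid \text{sheep thought}) = 1 ,
\]
% so the condition assigns the disjunctive content sheep-or-goat rather than sheep.
```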
Narrow versus wide content

In the Supervenience subsection we considered whether two individuals, A and B, who are narrowly identical, are necessarily mentally identical. The example of Swampman suggests that they are not since Swampman and I are narrowly identical and yet only I can remember the Sydney Harbour Bridge. In recent decades a closely related question has arisen concerning content. If A and B are narrowly identical, is it necessarily the case that their mental states share the same content? A famous thought experiment due to Putnam suggests that it is not (Putnam, 1975).9

Consider Oscar who lives on Earth and uses the word ‘water’ to refer to the stuff that flows from taps, fills the oceans, is necessary for human life, etc. Assume also that he knows nothing of chemistry; in particular, he does not know that water is H2O. Now it so happens that, in the far reaches of the galaxy, there is another planet exactly like Earth in all respects except that the stuff that flows from taps, fills the oceans, is necessary for human life, etc., is XYZ not H2O. XYZ cannot be distinguished from H2O except by chemical analysis, and any biological process that involves H2O proceeds just as well with XYZ substituted for H2O. Let’s call this distant planet ‘Twin Earth’, and call the inhabitant of Twin Earth who is identical to Oscar in all physical respects except for having XYZ molecules where Oscar has H2O molecules, ‘Twin-Oscar’. Twin-Oscar and Oscar are narrowly identical in all relevant respects. Nevertheless, it seems that when Oscar has the thought ‘I want a glass of water’ he is thinking about H2O, whereas when Twin-Oscar has the thought ‘I want a glass of water’ he is thinking about XYZ. It follows that content is wide: what a thought is about supervenes not only on the thinker’s brain (narrowly conceived) but also on the thinker’s environment. This view is sometimes referred to as ‘anti-individualism’ about content because the content of an individual’s thoughts is held to supervene on states beyond the individual. Tyler Burge has developed and discussed a number of cases akin to the Twin Earth example in defence of a wide ranging anti-individualism in the philosophy of mind (see, especially, Burge, 1979, 1986a).

Say that Oscar and Twin-Oscar both think ‘Beer is 90 per cent water’. According to the considerations pursued in the last two paragraphs, Oscar’s beer thought is about the proportion of H2O in beer; Twin-Oscar’s thought is about the proportion of XYZ in beer. Some theorists, however, think that there is a sense in which Oscar’s and Twin-Oscar’s beer thoughts are the same and that, in a certain sense, their thoughts have the same content. Setting aside the fact that Oscar’s brain contains H2O exactly where Twin-Oscar’s brain contains XYZ, their brains are identical; in particular, the narrow subvenient bases of their beer thoughts are identical. It follows that Oscar and Twin-Oscar’s beer thoughts
will bring about exactly the same behaviour and will be expressed by uttering exactly the same sounds or by writing exactly the same marks. Many philosophers have marked a distinction between wide (or ‘broad’) content and narrow content. Oscar’s thought that beer is 90 per cent water has different wide content to Twin-Oscar’s thought that beer is 90 per cent water, but the same narrow content. As we have seen, wide content supervenes on the agent’s wide properties; in contrast narrow content supervenes only on the agent’s narrow properties. Some philosophers have argued that psychology needs only narrow content; in contrast, other philosophers have wondered whether narrow content is really a kind of content at all. (For a wide ranging discussion and defence of narrow content, see Segal, 2000.)
Mental Causation

Common sense tells us that physical and mental properties causally interact: bodily damage causes pain and pain causes wincing. Moreover, our commonsense notions of agency and responsibility invoke mental to physical causal relations. We have different moral and affective attitudes towards those whose destructive behaviour is caused by their intentions to behave destructively than we do towards those whose destructive behaviour is not caused by such intentions. If it were to turn out that there are no causal relations between mental and physical properties many of our most cherished views about human life would have to be reassessed. Fodor makes this point in an especially dramatic way:

[i]f it isn’t literally true that my wanting is causally responsible for my reaching, and my itching is causally responsible for my scratching, and my believing is causally responsible for my saying . . . if none of that is literally true, then practically everything I believe about anything is false and it’s the end of the world. (Fodor, 1990b, p. 156)

We have already noted (in the Dualism subsection) that interactionist dualism (whether of the substance or property variety) has a problem with mental causation. It might have been hoped that embracing physicalism would remove this difficulty. No such luck. In fact, the physicalist faces several distinct problems of mental causation (Kim, 1998, Chapter 2). Perhaps these problems are actually different faces of a single problem which will succumb to a single solution, but that has yet to be shown. In this section I briefly describe three physicalist problems of mental causation.
Mental causation and the anomalousness of the mental

As we saw in the Non-Reductive Physicalism subsection, Davidson denies the existence of psychophysical laws. If, as Davidson believes, a causal relationship between types A and B requires a corresponding law linking A and B, the absence of psychophysical laws seems to imply the absence of psychophysical causation. Davidson avoids this unhappy conclusion by proposing token identities between mental states and physical states. True, the mental properties of those physical states cannot causally impact on other physical states; however, the physical properties of those physical states can engage in causal relations with other physical states. At first glance, then, the problem of mental causation is solved.

But a nagging worry remains. What we require is an account of how mental properties causally impact on the physical world, and that is exactly what Davidson has failed to deliver. On Davidson’s account, it is the physical properties with which mental properties are correlated that have the causal power to influence the world – the mental properties are merely riding piggyback on the causally efficacious physical properties to which they are bound. In other words, Davidson seems to be committed to a form of epiphenomenal property dualism. (For discussion of anomalous monism and mental causation see Davidson, 1993 and Heil, 2008.)
The exclusion problem

In the Dualism subsection we noted that substance dualism faces a problem of overdetermination. Substance dualism allows that some physical states are caused by mental states, for example, that mental state M caused physical state P. However, if the world is physically closed, P will itself have been caused by a prior physical state P*. Overdetermination now threatens: both M and P* are causally sufficient for P.

A close analogue of this problem arises for non-reductive physicalist theories of the mind. According to most contemporary physicalists, the mental supervenes on the physical. Say that one of my mental states, M, supervenes on physical property P* of my brain, and that M causes physical state P. The claim that the world is physically closed entails that P has physical causal antecedents which include, presumably, P*. So again over-determination threatens: both M and P* are causally sufficient for P. One way to remove the threat of over-determination is to endorse property epiphenomenalism: mental properties supervene on physical properties but are causally inert. This, however, is deeply counterintuitive. As mentioned above, our commonsense notions of responsibility and agency appear to require that at least some human behaviours are caused by mental properties. Jaegwon Kim
calls this problem ‘the exclusion problem’ because mental properties appear to be excluded from causal interactions with the physical world (see, especially, Kim, 1998, Chapters 2 and 3). In the Some Possible Responses to the Exclusion Problem subsection below I will briefly canvass some responses to the exclusion problem.
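It may help to have the reasoning behind the exclusion problem set out as a short sequence of claims. This is a standard reconstruction in the spirit of Kim’s presentation, using the labels introduced above; it is my summary rather than a quotation:

```latex
\begin{enumerate}
  \item Mental property $M$ supervenes on, but is distinct from, physical property $P^{*}$.
  \item $M$ causes physical property $P$ to be instantiated. \quad (mental causation)
  \item Every physical effect has a sufficient physical cause. \quad (physical closure)
  \item So $P$ has a sufficient physical cause -- presumably $P^{*}$.
  \item $P$ is not systematically overdetermined by $M$ and $P^{*}$.
  \item Hence either $M$ does no causal work after all, or $M$ is identical to $P^{*}$.
\end{enumerate}
```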
Mental causation and representational content

Sally’s desire for tomatoes caused her to purchase tomatoes. If Sally had had a different desire (say for peppers), she would not have purchased tomatoes: her desire’s being about tomatoes is what caused the tomato purchase. More generally, an intentional state’s representational properties play a role in determining its causal relations. Now we saw in the Narrow versus Wide Content subsection above that, according to many philosophers, the representational content of an intentional state depends in part on the agent’s environment (‘water’ refers to H2O in my mouth but XYZ in my Twin’s mouth). So if intentional states have causal powers they have them in part because of their wide properties. However, the causal powers of an object are entirely determined by its narrow properties. Fodor makes this point with the following example (Fodor, 1987, Chapter 2). A quarter activates a vending machine and causes it to emit a Coke in virtue of its mass, shape and size. Mass, shape and size are all narrow properties. The wide properties of the coin – for example, that it was minted on a certain date or in a certain place – are irrelevant to its causal properties. It is this fact that makes counterfeit coins possible. If a counterfeiter can succeed in making a metallic disc with the same narrow properties as a quarter, then he or she can steal Coke from Coke machines. The upshot of these considerations is that the representational properties of intentional states are epiphenomenal: they have no impact on the world.
Some possible responses to the exclusion problem

In this subsection I will focus on possible resolutions to the exclusion problem. Many solutions have been proposed; I will restrict myself to briefly discussing three kinds of solution.
The Return to Reductive Physicalism

The exclusion problem would be blocked if it could be shown that mental property M does not merely supervene on physical property P*, but is identical to P*. If M is identical to P* then the issue of over-determination of P by both M and P* does not arise. (It is important to recall at this point that properties
are types, and so the identities we are considering are type identities.) As we saw in the subsection on reductive physicalism, reductive physicalists propose that mental properties are identical to physical properties. So one way to resolve the exclusion problem is to embrace reductive physicalism. But that involves rejecting the plausible claim that mental states can be multiply realized. (For an important assessment of the prospects of rehabilitating reductive physicalism as a resolution of the exclusion problem, see Kim, 1998, Chapter 4.)
Program Explanation

Frank Jackson and Philip Pettit (1988; 1990) propose that while mental properties are not causally efficacious – that is, mental properties are strictly speaking epiphenomenal – they are nevertheless causally relevant because they pretty much guarantee that causally efficacious states are present:

The property-instance does not figure in the productive process leading to the event but it more or less ensures that a property-instance which is required for that process does figure. A useful metaphor for describing the role of the property is to say that its realization programs for the appearance of the productive property and, under a certain description, for the event produced. The analogy is with a computer program which ensures that certain things will happen – things satisfying certain descriptions – though all the work of producing those things goes on at a lower, mechanical level. (Jackson et al., 1990, p. 114)

As a consequence of the causal relevance of mental states we can have powerful explanations of behaviour in which mental states figure, even though those mental states are not causally efficacious. This strategy concedes something to the various problems of mental causation without giving up on the idea that we can predict and explain behaviour by appealing to mental states.

One difficulty with this view is that computer programs are causally efficacious: the lines of code which constitute a computer program cause (in the right environment) lines of code in a machine language which in turn cause the computer to behave in the desired way. So programming can’t be taken too seriously as a metaphor for causal relevance without causal efficaciousness. As we have seen, it is widely accepted that mental states supervene on physical states (in the Supervenience subsection). Perhaps the idea that M programs for P means only that M supervenes on P (see Kim, 1998, p. 74). But that does not seem to be very helpful. When M supervenes on P, M nomologically depends on P. In contrast, the idea of causal relevance requires the dependence of P on M. It’s not clear, then, that the program explanation idea helps us understand how mental states might be causally relevant without being causally efficacious.
Dual Explanandum Solutions

The exclusion problem arises because it appears that the physical properties of the brain are sufficient to explain behaviour; there is not enough work for both
mental properties and physical properties to do, so the mental properties are rejected as causally inefficacious. Dual explanandum solutions to the exclusion problem challenge the idea that there is nothing for the mental properties to do. Here’s an example. Say that Trudy’s brain instantiates physical property P*, and that P* is causally sufficient to make her arm rise. P* is not, though, sufficient to explain one important aspect of Trudy’s arm’s rising: it is not sufficient to explain the fact that Trudy’s arm’s rising was the casting of a vote. Trudy’s arm raising has two aspects – its physical ‘shape’ and its property of being a vote casting. The former aspect of the arm raising is fully accounted for by the presence of P*. However, the presence of P* cannot account for the arm raising’s being a vote casting. The vote casting aspect of the arm raising was caused – inter alia – by Trudy’s intention to vote. We can only make sense of the distinction between what we might call ‘mere behaviour’ and intentional action by allowing that mental states are causally efficacious. Making such an allowance does not reintroduce the problem of over-determination because P* is not sufficient to explain Trudy’s action being a vote casting. Solutions along these lines are advanced in, for example, Yablo 1992 and Thomasson 1998.

One way to understand Princess Elizabeth’s worry about Cartesian substance dualism (see the Dualism subsection above) is as a demand for an account of the mechanisms which link mental and physical properties. That worry re-emerges here: precisely how is it that Trudy’s intention to vote caused her arm raising to be a vote casting? Without such an account it is tempting to reverse the argument: there are no vote castings because, in order to count as a vote casting, an arm raising must be caused by an appropriate intention, and we cannot give an account of how intentions impact on the world.
Consciousness

In the Mind and Body section above, I quoted Colin McGinn’s question, ‘How can technicolour phenomenology arise from soggy grey matter?’ (McGinn, 1991, p. 1). McGinn is articulating a challenge: explain how consciousness can exist in a purely physical universe. More precisely, the challenge is to explain how phenomenal consciousness can exist in a purely physical universe. ‘Phenomenal consciousness’ is the term used to refer to the subjective properties of experiences. Thomas Nagel (1974) identified phenomenally conscious experiences as those which it is like something to have. For example, there is something that it is like to stare at a brightly lit scene, and there is something that it is like to smell smoke on a damp autumn evening.

Phenomenal consciousness is often contrasted with access consciousness. A mental state is said to be access conscious if it is (a) ‘inferentially promiscuous’ – that is, available for use in a wide range of reasoning tasks, and (b) readily available for the rational control
of action, including speech (see Block, 1994, 1995). Ned Block argues that a mental state can be access conscious without being phenomenally conscious, or phenomenally conscious without being access conscious. As an example of the former he offers the mental states of philosophical zombies: creatures functionally identical to humans but lacking phenomenal consciousness. As an example of the latter he offers, rather controversially, the example of suddenly becoming aware that the refrigerator’s compressor, which has been humming for some time, has stopped. According to Block, we were phenomenally conscious of the compressor’s humming all along, but were not access conscious of it; it is only when the compressor stopped that we could report that it has been humming. Block stresses that some ‘solutions’ to the problem of consciousness involve conflating access and phenomenal consciousness. That is, the author advertises his or her theory as an account of phenomenal consciousness, but actually provides a theory of access consciousness.10

For present purposes I will use the term ‘consciousness’ to refer (exclusively) to phenomenal consciousness and ‘conscious’ to mean ‘phenomenally conscious’. A further terminological note: I will use the term ‘qualia’ to refer to the subjective properties of conscious experiences. Thus the experience of twisting one’s ankle has the qualia of hurting; the experience of staring at the sky on a clear day has the qualia of blueness; and the experience of really wanting a cigarette has the qualia of craving.11 Occasionally philosophers use ‘qualia’ in such a way that, by definition, qualia are non-physical. However, I will use ‘qualia’ in a way that leaves open the issue of whether qualia are physical or not.

I will briefly sketch three kinds of responses to the challenge of locating conscious properties in the physical world. David Chalmers (2003a) has offered a very useful taxonomy of positions in the metaphysics of consciousness. Where appropriate, I will indicate where positions on my taxonomy map on to his.
Physicalism
The physicalist believes that consciousness is indeed a physical phenomenon. Different versions of physicalism can be distinguished along a number of different dimensions. I will distinguish optimistic, pessimistic and uncommitted versions of physicalism. These versions of physicalism are located along an epistemic dimension: they vary in the degree to which they take the problem of consciousness to be humanly solvable. Chalmers (2003a) distinguishes between types A, B and C materialism (i.e. physicalism); however, the distinctions among types of physicalism he makes do not map precisely onto those I make here.
Optimistic Physicalism
The optimistic physicalist believes that physicalism is true and that, moreover, human cognitive capacities are up to the task of locating consciousness in the physical world. Two strands of optimistic physicalism can be identified. Strongly optimistic physicalists believe that we have already made significant progress towards understanding consciousness in physical terms. A very wide range of theorists fall into this category, including Gilbert Harman (1990), Daniel Dennett (1991a), Fred Dretske (1995) and Frank Jackson (2003). Weakly optimistic physicalists believe that while we are yet to make significant progress on the problem, there are no good reasons to believe that we won't do so in the future. Thomas Nagel (1974), for example, argues that we are currently unable to reason from a physical description of the brain to a description of its phenomenal properties because we lack the required concepts. Future research, though, may one day provide those concepts.
Pessimistic Physicalism
The pessimistic physicalist believes that while consciousness is a physical phenomenon, humans will never achieve a completely satisfying account of how consciousness arises in the physical brain. Two strands of pessimistic physicalism can be identified. Strongly pessimistic physicalists believe that while consciousness is a physical phenomenon, developing adequate theories of the emergence of consciousness from the physical brain will forever transcend human cognitive capacities. The relevant theories reside, as it were, in a species-wide cognitive blind spot. Colin McGinn holds this position (McGinn, 1991, especially Chapter 1). While he calls it 'transcendental realism', it is sometimes referred to as 'new mysterianism'. (The latter term is Owen Flanagan's. See his 1992.) Weakly pessimistic physicalists believe that, while we may obtain – and perhaps already have obtained – an adequate physicalist theory of consciousness, we are likely to find any such theory unsatisfying. We will grasp the relevant physical theory, and follow each step of the physicalist explanation, but the conclusion will not force itself upon us. An analogy: many people understand the four-dimensional theory of time, but can't shrug off the intuition that time 'flows'. Similarly, we may understand the physicalist theory of consciousness but be unable to shrug off the intuition that consciousness stands apart from the physical. Philip Pettit has articulated a view of this sort (Pettit, 2009).
Uncommitted Physicalism
The uncommitted physicalist accepts that consciousness is a physical phenomenon, but expresses no view on whether or not satisfying physicalist theories of consciousness are available. In the 1990s David Braddon-Mitchell and Frank
Jackson articulated a response to the so-called knowledge argument (see below) which can be interpreted as an argument for uncommitted physicalism (Braddon-Mitchell et al., 1996, pp. 134–5).
Anti-physicalism
The anti-physicalist denies that consciousness is a physical phenomenon. There are a number of different versions of anti-physicalism, of which I will mention four. These positions are not of merely historical interest: a number of contemporary philosophers have defended anti-physicalism about consciousness (e.g. see Lockwood, 1989; Chalmers, 1996, 2003a; and Stoljar, 2001b).
Interactive Substance Dualism
According to interactive substance dualism, consciousness is a (non-physical) property of non-physical mental substance (see the Dualism subsection above). This doctrine holds, in addition, that conscious properties may be caused by certain brain properties, and may in turn bring about other brain properties. (This is type D dualism on Chalmers' [2003a] taxonomy.) In the Dualism subsection we noted the difficulties interactive substance dualism has with mental causation; the existence of qualia gives rise to a special version of those difficulties. How does the neural activity associated with pain bring about the (non-physical) qualia of pain? And how does the (non-physical) qualia of pain bring about the neural activity responsible for expressions of pain?
Epiphenomenalism about Qualia
According to epiphenomenalism about qualia, conscious properties are non-physical properties which are caused by physical properties of the brain, but which do not cause physical properties of the brain. In addition, epiphenomenalism about qualia asserts that all other mental properties are physical properties. Setting aside any phenomenal properties which some beliefs may have, beliefs are, according to epiphenomenalism about qualia, purely physical states. (Epiphenomenalism is type E dualism on Chalmers' [2003a] taxonomy.) We will shortly examine a striking argument in favour of epiphenomenalism about qualia.
Emergentism
According to emergentism, certain complex physical arrangements of matter (e.g. human brains) cause new, non-physical mental properties to emerge. The emergence of non-physical mental properties from complex arrangements of matter is a brute fact about the world. In particular, the existence of mental
properties is not physically necessitated; that is, the structure of the physical brain, together with the laws of physics, is insufficient to bring about the mental properties. The classic presentation of this view is Alexander (1920); for a more recent discussion see McLaughlin (1992).
Phenomenal Fundamentalism
This position, once defended by Bertrand Russell (1927), holds that the intrinsic properties of the fundamental physical entities are phenomenal (or perhaps protophenomenal). On this view, the fundamental physical properties such as mass are in fact relational properties among entities whose intrinsic nature is phenomenal. (Phenomenal fundamentalism is type-F monism on Chalmers' [2003a] taxonomy.)
Eliminativism about Consciousness
We saw in the Eliminativism, Instrumentalism and the Intentional Stance subsection that eliminativism is the doctrine that there are no mental states. The eliminativist about consciousness advances the more limited claim that there are no phenomenally conscious mental states. There are, very broadly speaking, two kinds of eliminativism about consciousness.
Strong Eliminativism about Consciousness
According to the strong eliminativist about consciousness, there is no problem of locating the conscious properties in the physical world because there are no conscious properties. It is as foolish to debate the metaphysical status of consciousness as it would be to debate the metaphysical status of Aristotle's crystal spheres. One important advocate of a view of this sort is Daniel Dennett, who has argued that there is no principled distinction between processes which develop the content of an experience and post-experiential tamperings with that experience (Dennett, 1991a, 1994). The determinate experience of a red circle moving left at a certain speed is a cognitive illusion cast by the working brain. This illusion is significant to us as human beings, but there is nothing here for the metaphysician to explain, much less worry about. (For a helpful presentation of Dennett's views, see Akins, 1996.)
Weak Eliminativism about Consciousness
Weak eliminativism about consciousness denies that the experiences we group together as phenomenal form a natural kind. Paul Griffiths has pointed out that the emotions may not form a natural kind, and if they do not, it will make no more sense to categorize psychological events into emotional and
non-emotional kinds than to categorize astronomical events into super- and sub-lunary kinds (Griffiths, 1997, pp. 1–2). Similarly, Isabel Gois has argued that the experiences we identify as phenomenal do not form a natural kind (Gois, 2007).
The knowledge argument
In the 1980s, Frank Jackson articulated an argument for epiphenomenalism about qualia – the knowledge argument (Jackson, 1982 and 1986). I will briefly review this argument and some of the countermoves which have been made against it. Many of the ideas discussed in this overview come together in assessing Jackson's argument.
(1) If physicalism is true then someone who knows everything about the physical knows everything simpliciter.
(2) It is not the case that someone who knows everything about the physical knows everything simpliciter.
Therefore,
(3) Physicalism is false.
(1) seems obviously true. Since physicalism is the doctrine that everything is physical, the truth of physicalism entails that a person who knows everything about the physical knows everything simpliciter. Jackson defends (2) by means of a famous thought experiment. Mary is a brilliant scientist whose colour visual system is normal but who has been raised from birth in a black and white environment. She learns everything about the physical aspects of the human visual system, but has never experienced the colour qualia; in particular, she has never experienced the qualia of red. Upon her release from the black and white environment Mary is exposed to a red surface in good light and exclaims 'Now I know what red looks like'. It is natural to say that Mary learnt something when she left the black and white environment; that is, it is natural to say that she gained knowledge of the qualia of red. But if she gained knowledge then she must have previously lacked knowledge. Thus, even though (by hypothesis) she knew everything about the physical aspects of human colour vision, she did not know everything simpliciter. The conclusion, (3), follows from (1) and (2) by modus tollens. Upon her release from the black and white environment Mary learnt something about the qualia of red – it was knowledge of the colour qualia that escaped her when she was in the black and white environment. Since she knew
everything about the physical aspects of colour vision when she was in the black and white environment, the colour qualia must not be physical:
(4) The colour qualia are not physical.
Moreover, Jackson endorses the claim that the world is physically closed:
(5) Every physical event has purely physical causal antecedents.
It follows from (4) and (5) that
(6) Qualia are epiphenomenal; that is, they have no physical effects.
A great many responses have been made to the knowledge argument – including responses made by Jackson himself. (For an excellent survey of responses to the knowledge argument see Van Gulick, 2009.) Here's an early response by Braddon-Mitchell and Jackson (Braddon-Mitchell et al., 1996, pp. 134–5):
(7) Epiphenomenalism about qualia is false.
Therefore,
(8) Either qualia are physical or the world is not physically closed.
However,
(9) The world is physically closed.
Therefore,
(10) Qualia are physical.
In support of (7), Braddon-Mitchell and Jackson point out how difficult it is to make sense of the Mary thought experiment if epiphenomenalism about qualia is true. For example, if epiphenomenalism about qualia is true, then it is not the case that Mary's exclamation 'Now I know what red looks like' was caused by her exposure to the qualia of red. Moreover, if we accept that direct knowledge of the qualia of red involves a causal connection between an instance of the qualia of red and the tokening of the relevant knowledge state, then epiphenomenalism about qualia renders it impossible that Mary gained direct knowledge of the qualia of redness: having the qualia of redness could
not have caused her to be in a state of knowledge about the qualia of redness. (8) expresses the fact that there are only two ways to close off the argument from the conclusion of the Mary thought experiment to epiphenomenalism – by denying either (4) or (5). Braddon-Mitchell and Jackson assert (9) because they accept that modern science overwhelmingly supports it. The conclusion, (10), follows from (8) and (9) by disjunctive syllogism. Braddon-Mitchell and Jackson call their argument the 'there has to be a reply' reply. It is a very striking example of what I earlier called 'uncommitted physicalism'. Uncommitted physicalism accepts that consciousness is a physical phenomenon, but is uncommitted on the question of whether or not satisfying physicalist theories of consciousness are available. The 'there has to be a reply' reply concludes that consciousness is indeed physical, but offers no view on whether humans will be able to arrive at a good understanding of consciousness as a physical phenomenon. More committed physicalist responses to the knowledge argument can be grouped into bold and modest versions. Bold physicalist responses to the knowledge argument insist that Mary learned nothing upon release from the black and white environment. On this view, the common intuition that Mary learned something when she finally left the black and white environment is mistaken. In contrast, modest physicalist responses claim that, although Mary knew all the physical facts, there were still things she had to learn about red after her initial exposure to red surfaces. The attraction of modest physicalist responses to the knowledge argument is that they preserve the common intuition just mentioned: that Mary gains knowledge when she leaves the black and white environment. The difficulty for modest physicalist replies is explaining how Mary gained knowledge even though she already knew all the physical facts. A variety of modest physicalist replies to the knowledge argument exist in the literature. Laurence Nemirow argued that Mary would have all the relevant propositional knowledge (knowledge that) about colour qualia when she was in the black and white environment, but lacked certain skills (knowledge how) (Nemirow, 1980; see also Lewis, 1990). The intuition that Mary learns something when she leaves the black and white environment is explained by the fact that Mary learns the relevant skills (e.g. she can now imagine a red surface); however, there were no facts with which she was unfamiliar prior to her release. Jackson has advanced an ingenious argument against the 'skills response'. He points out that prior to her release Mary would not know what other people's mental lives are like. What, she might wonder, is it like for ordinary people to look at a ripe tomato in good light? Jackson urges that it is implausible that Mary is wondering about other people's skills; it is facts about other people that she is missing (Jackson, 1986).
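For readers who want the bare propositional skeletons of the two inferences just rehearsed, they can be displayed as follows. The lettering is shorthand introduced only for this illustration, not Jackson's or Braddon-Mitchell's own: P for 'physicalism is true', K for 'someone who knows everything about the physical knows everything simpliciter', Q for 'qualia are physical' and O for 'the world is not physically closed'.

\[
\frac{(1)\; P \rightarrow K \qquad (2)\; \neg K}{(3)\; \neg P}
\qquad\qquad
\frac{(8)\; Q \vee O \qquad (9)\; \neg O}{(10)\; Q}
\]

The left-hand display is the modus tollens that carries the knowledge argument from (1) and (2) to (3); the right-hand display is the disjunctive syllogism that carries the 'there has to be a reply' reply from (8) and (9) to (10).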
Another kind of modest physicalist response turns on the idea that the same propositional knowledge may be stored in different representational media. David Lewis advanced a view of this sort by way of an analogy:
Imagine a smart data bank. It can be told things, it can store the information it is given, it can reason with it, it can answer questions on the basis of its stored information. Now imagine a pattern-recognizing device that works as follows. When exposed to a pattern it makes a sort of template, which it then applies to patterns presented to it in future. Now imagine one device with both faculties, rather like a clock radio. There is no reason to think that any such device must have a third faculty: a faculty of making templates for patterns it has never been exposed to, using its stored information about these patterns. If it has a full description about a pattern but no template for it, it lacks an ability but it doesn't lack information. (Rather, it lacks information in a useable form.) When it is shown the pattern it makes a template and gains abilities, but it gains no information. We might be rather like that. (Lewis, 1983b, pp. 131–2)
Lewis describes a case in which a machine possesses all the relevant propositional knowledge represented in a sentence-like medium, but does not possess all the relevant knowledge in an alternative, analogical medium. Similarly, before her release Mary might possess all the relevant propositional knowledge in a sentence-like medium, but only after her release does she acquire new representations of that knowledge in an alternative, phenomenal, medium. Paul Churchland presses a similar point, and provides reasons for thinking that the human brain does indeed contain two (or more) distinct representational media with only a limited ability to translate between them (Churchland, 1989a). However, a nagging doubt remains. There is something that it is like to have a phenomenal representation in the alternative medium. And knowledge of what it is like to have a phenomenal representation in the alternative medium is exactly what Mary seems to be missing. In recent years Jackson has provided his own modest response to the knowledge argument (Jackson, 2003). His response turns on two claims:
(11) Qualia are essentially representational.
(12) Mental representation is an entirely physical phenomenon.
From which it follows that
(10) Qualia are physical.
In defence of (12) Jackson asserts that, while we do not yet have agreement on the correct form a physicalist theory of mental representation should take, we can nevertheless be confident that such a physical theory is in principle available. The idea that qualia are essentially representational is not original to Jackson, having been articulated by, for example, Gilbert Harman (1990) and Michael Tye (2000). On this view, qualia are distinguished from other forms of mental representation in virtue of their unique functional roles. Pettit puts this point succinctly:
A state will count as experiential so far as it functions in a manner typical of experiences: it generally disposes an agent to come to believe that things are as they are represented to be; it does not control behaviour except when it leads to belief; it may remain in place continuing to represent things being thus and so, even when the subject has come to believe that they are not that way . . . and so on. (Pettit, 2009, p. 169)
An important feature of many modes of representation is that the representation need not have the property it represents. Linguistic representations are like this. The token 'green' represents the property green but is not green. Similarly, my qualia of redness, which, according to representationalism about qualia, represents surfaces as being red, need not itself be red. It's long been observed that, when I have a mental image of a ripe tomato, there need be nothing red in my brain (e.g. see Smart, 1959). It has been suggested, though, that my image has the property of being phenomenally red – of instantiating a special phenomenal property which is sometimes referred to as 'phredness'. Representationalists about qualia deny this. My qualia of redness represents red without being either red or 'phred'. The 'what it is like' of the experience I have when I look at a ripe tomato in good light is entirely exhausted by the fact that it represents the tomato as being red. There is nothing else to be explained (see, especially, Jackson, 2003). Jackson accepts, however, that Mary still makes an epistemic gain when she leaves the black and white environment; that is, he articulates what I have called a modest physicalist reply to the knowledge argument. While Jackson asserts that Mary knew all the propositional knowledge about qualia prior to her release, he does not claim that Mary learned nothing about qualia upon her release. In particular, he endorses Nemirow's claim that Mary acquired certain skills upon her release. In response to Jackson's representationalist reply to the knowledge argument, Robert Van Gulick has argued that physicalists can successfully respond to the knowledge argument without endorsing representationalism. The knowledge argument contains, he submits, a number of assumptions which
the physicalist can challenge. (See Van Gulick, 2009.) Chalmers has raised objections directly against representationalism (Chalmers, 2003a, p. 111). He distinguishes between functional and phenomenal representation as follows. A system has a functional representation of p when it responds to p appropriately. For example, I have a functional representation of the red traffic light when I respond to it by braking. In contrast, a system has a phenomenal representation of p when the system is phenomenally conscious of p. For example, I have a phenomenal representation of the red traffic light when there is something that it is like for me to see the red light. (This distinction is related to Block's distinction between access and phenomenal consciousness, introduced at the beginning of this section.) Chalmers' worry is that Mary could have full knowledge of the functional representational properties of the qualia of red without having knowledge of the phenomenal representational properties of red. That is, phenomenal properties cannot be reduced to – or even multiply realized by – functional properties.
Conclusion
For most of the history of the philosophy of mind, the mental realm was taken to be exhausted by the phenomenal realm. But under the impact of logical positivism and behaviourism in the first half of the twentieth century, the mental was reconceived so that consciousness was only one, inessential, aspect of the mental. (Perhaps the last great philosophical work which took mind and consciousness to be synonymous was C. D. Broad's The Mind and Its Place in Nature [1925].) Once it was recognized that the mental was not exhausted by the phenomenal, a change of focus took place. Central to that change of focus was functionalism. By identifying mental states as the occupants of characteristic functional roles, it became possible to identify mental states with brain states, thus planting the mental firmly in the physical world and providing new ways to think about the relationship between neuroscience and psychology. But two apparently intractable problems remain. Clearly, the problem of mental causation cannot be 'functionalized away'. Functional roles are causal roles, and the functional roles characteristic of mental states include psychophysical causal relations. The problem of mental causation is a challenge to functionalism, not a puzzle that can be resolved by appealing to functionalism. The second apparently intractable problem is phenomenal consciousness. Qualia are resistant to a purely functional approach. Mary knew all the functional roles characteristic of the qualia of redness, and knew what states occupy those roles. Nevertheless, it seems that there was something about qualia which she did not know. Similarly, it seems that we can conceive of
philosophical zombies which are functionally identical to normal human beings but which lack qualia. I said in the first section that in many ways Descartes’ problems have turned out to be our problems. Does that mean that we have made no progress beyond Descartes? I don’t think so. We are much clearer now on the scope of the problems and on their complexity. And we have a much greater range of sophisticated tools and concepts to bring to bear on them. But we should acknowledge, with a considerable degree of humility, our historical debts. As Newton might have said, if we can see further than our predecessors, it is only because we are standing on the ideas they bequeathed us.
2
Consciousness Daniel D. Hutto
What is Consciousness?
There is no utterly clean, clear and neutral account of what exactly is covered by the concept of consciousness. The situation reflects, and is exacerbated by, the fact that we speak of consciousness in many different ways in ordinary parlance. A consequence of our multifarious uses of the concept is that it has proved impossible to define its essential characteristics through conceptual analysis. We have nothing approaching a descriptively adequate philosophical consensus on what lies at the core of all and every form of consciousness in terms of necessary and sufficient conditions that would be accepted by all interested parties. This is not regarded as a cause for despair. The same is true of other philosophically important topics such as knowledge and causation. Despite this, consciousness remains of pivotal philosophical interest because of its centrality to our psychological lives and the way that it tantalizingly resists incorporation into a fully naturalized account of the world. Recognizing that attempts to provide a philosophically robust definition of consciousness are likely forlorn, a standard tactic for isolating core features of consciousness is to provide clear-cut exemplars as specimens. By means of this strategy we might still, at least, divine philosophically important attributes of the quarry. Take your experience of reading these lines. Hopefully their content is at the focal centre of your attention, but even if so there will be a range of other peripheral and background things of which you are consciously aware: colours, noises, feelings. Some of these may remain present throughout your intellectual activity while others intrude upon it momentarily, in largely expected but perhaps occasionally surprising ways, before vanishing from the stage. Despite such comings and goings you will not feel as if your overall experience is ruptured or fragmented. Conscious experience of this sort is utterly mundane and intimately familiar. It appears to be an all-or-nothing property that pervades the waking lives of many creatures. Human beings, cats, octopi (apparently), and spiders (perhaps) are kinds of beings commonly thought capable of possessing consciousness while inanimate objects, such as chairs, are not. We say of creatures or organisms
that they are conscious if they are awake and sentient. Evidence of this is that they exhibit a certain degree of sensitivity or coordination with respect to aspects of their environment. However, if the case just described is taken as our paradigm, then merely exercising capacities for such responding will not suffice for being truly conscious. It is easy to think of examples of complex intelligent activity, sometimes of a quite sophisticated kind, that are nevertheless apparently habitual, automatic, or unreflective. Most philosophers insist that, minimally and necessarily, to be conscious it must also be the case that a being possesses or enjoys some degree of occurrent experiential awareness. In other words, there must be something that it-is-like for it to be awake, sentient or intelligently controlling its behaviour. A truly conscious being enjoys experiences that have phenomenal aspects; it feels a certain way to be such a creature in such and such circumstances. Experiential awareness can take different forms. It may be transitive in the sense of being awareness 'of' environmental surroundings or aspects thereof. For example, the subject may be aware of the red speck in the centre of its visual field. For this reason consciousness is often regarded as being inherently intentional, as being directed at certain objects, not others. But it seems possible to be experientially aware in more intransitive ways too – in ways that lack directedness at specific objects. Diffuse and undirected forms of consciousness are surely possible, as is the case with moods, such as elation, calmness or depression. Other, even more basic forms of undirected conscious experience are also imaginable. Either way, to repeat, being conscious appears to require being in a state of mind with a characteristic feel – one in which there is something-that-it-is-like to be in it. This is seemingly common to all forms of consciousness; or, more cautiously, at least there are interesting forms of consciousness that have this feature necessarily. Conscious beings are essentially experiencers, and the particular types of experiences that they enjoy have distinctive characteristics and notable aspects (i.e. they have specific phenomenal properties or characters). Experiencing itchiness, for example, is quite different from experiencing anger. Seeing the peculiar greenness of an aloe vera plant differs from seeing the peculiar greenness of a Granny Smith apple. We can specify the differences by using locutions such as 'this or that shade of greenness'. But this is to invoke inevitably crude and (still) relatively abstract categories in order to pick out something that is much more fine-grained, analog and particular. Experiencing phenomenal characters, apparently, matters. Having experiences seems to make a difference to what is done, in line with how such experiences are evaluated. Encountering the unusual taste and smell of durian, for example, may evoke reveries or prompt certain other actions, depending on whether one finds that taste pleasant or unpleasant. In line with this, some are inclined to reserve, more stringently, the accolade of being phenomenally conscious only
for those beings that exhibit a certain degree of global control over their actions or that are capable of reporting, expressing and appraising how things appear to them. To achieve this, it is argued that conscious beings must not only be aware of and attend to aspects of their environment but to aspects of experiential mental states themselves. Accordingly, this kind of capacity implies at least some degree of self-awareness or self-consciousness. If one accepts this, subjects that are truly phenomenally conscious must not only enjoy experiences with certain phenomenal qualities, they must be aware of the qualities of these experiences. If so, those states of mind that exhibit phenomenal consciousness do so at best only partly in virtue of having phenomenal characters. Still, even if all conscious beings are experiencers of some or other phenomenal properties it may be that they experience these in more or less unified ways. Human experience tends to integrate experienced phenomenal properties (i.e. those associated with different sensory modalities), continuously and seamlessly over time. Recent empirical studies concerning the phenomenon of inattentional or change blindness raise doubts about the extent and degree to which we actually experience the world in fully detailed and non-gappy ways. Still, for human experiences – at least usually – it feels as if the way in which our experiences inter-relate and change happens in coherent, well-coordinated and expected manners. Typical human consciousness, at least, feels as if it were, in important respects, objective, temporally extended and unified. It involves having a coherent and unified individual perspective on reality. These unique points of view are internally complex. When we notice and attend to specific worldly features, such as the greenness of a particular apple, this involves being able to see an apple as something more than just the sum of its presented features. To see an apple as something in which greenness, and other properties, might inhere is to see it as having a continued existence over time. Experiencing a world of objects and their features always occurs against a larger and more complex background in which such items are systematically related to other things. To have experience of the world, as opposed to merely having sentient capacities, is to experience worldly offerings in a structured way.1 This entails, modestly enough, that different sorts of creatures may enjoy different forms of consciousness. What it is like to be a human being may vary considerably from what it is like to be a dolphin, or more famously still, what it is like to be a bat. Indeed, even what it is like to be a particular human being in a particular set of circumstances can differ qualitatively from what it is like to be a particular human being in another set of circumstances. Conscious experience is subjective at least in the sense that – as Nagel (1974) proposes – it is idiosyncratic. Being phenomenally conscious apparently equates to having a particular point of view or perspective that involves having a range of more or less unified experiences with individual phenomenal characters.
Many philosophers hold that since we have no direct access to how things appear experientially to others, it is enigmatic whether others are conscious or what the exact character of their conscious experience is like. Thus unless it is possible to securely infer what it is like for the other from more objective available facts, then, for all we know, even apparently sophisticated and intelligent beings may lack conscious experience altogether, or they may enjoy experiences that possess radically different phenomenal or qualitative characters from our own. Moreover, the way that their experiences are normally integrated with one another or unified (to the extent that they are integrated or unified at all) may be quite alien to the way that typical human experience is organized. Even if a fully transparent conceptual analysis of consciousness is not on the cards, it seems that there are a number of identifiable – or at least apparent – properties that are fundamental to it that make it of real philosophical interest. Perhaps based on empirical or philosophical reflection it will be decided that not all of these seeming attributes are genuine; perhaps they will not all make the final list of properties that warrant straight explanation. Nevertheless, phenomenality, intentionality, subjectivity, unity, temporal extension and minimal self-awareness are prima facie prominent features of consciousness that must be either explained or explained away.
Reductively Naturalistic Frameworks
When thinking about consciousness, the mainstream tendency in contemporary analytic philosophy of mind is to focus on metaphysical (as opposed to conceptual) concerns. The working habit of those of a reductive naturalist bent is to propose equations about mental states and their properties. For example, they aim to provide general formulae that will tell us, say, what conscious experience is by equating it to something else. In line with this agenda, a plethora of theories of consciousness have been advanced. To mention but a few, these include conjectures that equate consciousness with events or properties of the neurobiological sort (Crick, 1994; Churchland, 1989b), the quantum mechanical sort (Penrose, 1994), or the functional/representational sort (Carruthers, 2000; Dennett, 1991a, 2006; Dretske, 1995; Lycan, 1996; Rosenthal, 2000, 2005; Tye, 1996). Such theories do not aim to provide traditional conceptual analyses. If successful, ultimately they would tell us what is necessary and sufficient for having conscious experience (or conscious experience of a certain type) in the same spirit that the property of being water was identified with the property of being H2O. That, of course, is a specific empirical hypothesis about a natural kind. What is usually on offer by reductive naturalists who theorize about consciousness is, as Putnam (1967) observes, typically not very detailed or 'finished' hypotheses but rather a kind of schema for hypotheses.
Debates between naturalists of this stripe take the form of in-house assessments of (and sometimes proposed adjustments to) each other's headline proposals. Not every framework is regarded as equally promising. For example, the more extreme versions of behaviourism are almost universally unpopular today. Their followers hold that any and all genuine mental phenomena need to be identified with behaviours or dispositions to behave that allow for operational definitions, however complex, in terms of observable causes and effects. A major criticism of this framework is that, in relying entirely on dispositions, however complex, in order to understand the mental, it lacks the essential resources to satisfactorily account for the holistic and complex sorts of interactions between mental states that take place before responses are produced. In this respect, functionalism, its natural successor, is deemed superior because it makes space for precisely this sort of complexity. Reductive functionalists take it that to be a conscious mental state of a certain type equates to being a functional state of a whole organism: a state that can be understood in terms of its wider systemic relations or teleological purposes and that has appropriate causal relations to perceptions, other mental states and actions. According to the analytic or commonsense version of the doctrine, mental phenomena, including the experience of sights, sounds, pains and other conscious mental states, are identified with specifiable higher order causal roles. And for reductive naturalists these, in turn, are, either directly or indirectly, identified with the physical states that happen to occupy, realize or fill those roles. Rich mental activity is thus thought to take place between stimuli and responses. Mental states, activated by environmental triggers, causally interact in specifiable ways with each other and other bodily states, and only then produce outward responses. In large measure functionalism's popularity as a general framework for thinking about mentality derives from the fact that it gives both philosophers and psychologists the requisite apparatus and platform for positing inner, causally efficacious mental states without having to commit, in advance, to specific details about how such mental states are physically realized or implemented. This is useful because there appears to be an enormous stock of creatures – of the actual, terrestrial and imaginable, alien varieties – that are capable of conscious experiencing, despite the fact that they lack a physiology similar to our own. The key functionalist insight is that not every creature which might be capable of conscious experience need have central nervous systems or brains like ours. Consequently, we should not expect to uncover any neat, cross-species, one-to-one correlations holding between particular types of experience and particular types of neural or physiological states. To assume otherwise is, it is claimed, to promote an unwarranted species-biased chauvinism. Moreover, empirical work has revealed that brains, such as ours, are open to re-wiring; that the neural structures underpinning certain types of mental
activity are highly plastic. For example, patients who have had hemispherectomies – in which the cortex of one hemisphere is removed – have succeeded in enlisting other parts of their brains to restore the lost functions, thus managing to compensate. In the light of such discoveries it appears mistaken to assume that there will be uniquely dedicated neural configurations supporting very specific kinds of mentality. It is likely that this is true of conscious experience too. Considerations of this sort cast doubt on strong, type-type versions of mind-brain identity theories: those that propose straightforward identifications of particular kinds of conscious experience with particular types of brain event. Type-type identity theorists hope to provide class-to-class identifications of brain events or processes that will be capable of grounding an interestingly predictive and informative science of the mind. Such theories hope to tell us that 'being a middle-A sound is identical with being an oscillation in air pressure at 440 hertz; being red is identical with having a certain triplet of electromagnetic reflectance efficiencies; being warm is identical with a certain mean level of microscopically embodied energies, and so forth' (Churchland, 1989b, p. 53). However, if some version of functionalism is true then psychological laws can be cast at a higher order level even though conscious states will be variously, and thus disjunctively, realized in species-specific (and perhaps even individual-specific and/or circumstance-specific) ways. Consider that the humble carburettor can be functionally defined in terms of the abstract causal role it fulfils. Something is a carburettor insofar as it mixes air with liquid fuel. In theory, such devices could be made of metal, rubber, plastic, possibly even soul-stuff – as long as they are capable of discharging the stated function. The relation between a functional role and what realizes it can be one-to-many as opposed to one-to-one. Thus a general description of the realizers of any given type of experience would take the form of a disjunction on the right side of the relevant equation that includes mention of a, perhaps, indefinitely long chain of different kinds of instantiating states. The trouble is that, without significant qualification, functionalism can appear overly inclusive when it comes to saying which sorts of systems ought to make the list of 'the conscious'. This is illustrated by the fact that it is easy to imagine beings that produce the appropriate outward behaviour through functionally identical means but which plausibly lack any kind of experiential awareness. For example, Ned Block (1978) famously imagined a scenario in which the behaviours of a complex artificial body are orchestrated by communications between members of the Chinese nation, so as to mimic – in all functionally relevant respects – the responses of a human being undergoing a painful experience. In order to achieve this feat, the Chinese citizens are provided with rules on how to respond to instructions provided by a sky-based display and are able to communicate with one another by means of two-way
radio-links so that, by working together, they are able to remotely generate just the right kinds of responses in the body. However outlandish the scenario, it is at least theoretically possible that the Chinese nation could simulate human pain behaviour in the artificial body by such means, mirroring at one level of description the ways in which such behaviour is normally functionally produced in humans. The worry is that if the two systems are identical in this respect then, assuming functionalism is true, it appears we have no principled grounds for saying that one is really undergoing the experience of pain and the other not. The situation is preposterous since, intuitively, we want to ascribe the experience of pain to the individual human being while withholding that ascription from the artificial body controlled by the conglomerate of Chinese individuals engaged in this bizarre exercise. Yet the only path to that verdict seems to require thinking of conscious experience as something distinct from functional properties per se. Indeed, this is precisely the moral that many are inclined to draw from this thought experiment; while functionalism might reveal something important about the processes and structures associated with experiencing, it is incapable of providing a real insight into the nature of experience itself. On these grounds functionalists are often accused of having no reasonable means of accommodating the phenomenal character of experience. Their proposals apparently miss out the most important ingredient: how it feels. Of course, it is possible for the functionalist to bite the bullet and insist, flying in the face of standard intuitions, that if a system's behaviour is generated in the right way then it simply is conscious. So if the human is, then so too is the body governed by the Chinese nation. But this is not the only (nor the most convincing) line of reply. Arguably, the charge of liberalism might be answered by adjusting the level of grain of the proposed functional analysis. This requires a shift of attention from abstract job descriptions – crudely, what the thing is doing – to a greater focus on precise engineering details – crudely, how the thing actually does what it does (Churchland, 1989b, Chapter 2; Flanagan, 1991). Ironically, what is hailed as the chief virtue of functionalism – its capacity to abstract from specific details and its openness to the possibility of variable realizability – looks to be a vice when it comes to understanding consciousness. Nevertheless, to assume that such objections are fatal is to underestimate the flexibility of the functionalist approach. It is quite open for defenders of this framework to insist, more in the spirit of identity theory, that the peculiarities of how a system is organized and even what materials it is composed of might matter to having experience (or having certain kinds of experience). This is wholly consistent with acknowledging that two systems that differ in their lower level engineering details might be regarded as functionally equivalent at some higher level of analysis.
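The thought that differently constituted systems can count as functionally equivalent at a higher level of analysis can be pictured, loosely, with the interface/implementation distinction familiar from programming. The sketch below is offered only as an illustrative analogy built on the carburettor example above; the class and function names are invented for the purpose and are not drawn from the text.

from abc import ABC, abstractmethod

class Carburettor(ABC):
    """The functional role: anything that mixes air with liquid fuel counts."""

    @abstractmethod
    def mix(self, air: float, fuel: float) -> float:
        """Return the air-to-fuel ratio of the mixture produced."""

class BrassCarburettor(Carburettor):
    """One realizer of the role, with its own lower level 'engineering details'."""

    def mix(self, air: float, fuel: float) -> float:
        return air / fuel

class PlasticCarburettor(Carburettor):
    """A differently constituted realizer of the very same role."""

    def mix(self, air: float, fuel: float) -> float:
        return air / fuel

def tune(device: Carburettor) -> float:
    # Code written against the role is indifferent to the realizer:
    # the role-to-realizer relation is one-to-many.
    return device.mix(air=14.7, fuel=1.0)

if __name__ == "__main__":
    # Two realizers, one functional description, the same higher level behaviour.
    print(tune(BrassCarburettor()), tune(PlasticCarburettor()))

Whether anything analogous holds for conscious experience – whether, as the discussion above puts it, the materials matter – is precisely what remains at issue between the functionalist and the identity theorist.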
What this possibility reveals is that, despite some traditional disagreements and border disputes, there is scope for seemingly distinct and opposing reductive naturalistic frameworks to become more closely aligned and tightly affiliated. Indeed some authors go as far as to claim that 'the identity theory is just an empirically special case of functionalism, one that (implausibly) locates all mental states at the same very low level of institutional abstraction – the neuroanatomical' (Lycan, 1996, p. 59). In the other direction even avid supporters of functionalism are quite happy to acknowledge that 'when you make a mind, the materials matter' (Dennett, 1997, p. 100). Importantly this thought is wholly compatible with accepting that when it comes to understanding consciousness 'handsome is what handsome does' and that 'matter matters only because of what matter can do' (Dennett, 2006, p. 17). What this shows is that there may be ways and means of adequately dealing with so-called absent qualia cases and that if these were the only cases that had to be dealt with, then nothing in principle bars the development of a fully illuminating naturalistic theory of consciousness. The real devil is simply in the detail of deciding exactly which levels of functional analysis are most appropriate for understanding various forms of conscious experience and which aspects of the physical world are needed for instantiating such experiences. Determining that is a long-term project that would be at least partly constrained by further theoretical considerations as well as empirical findings.
Arguments For and Against Non-Reductive Naturalism
Not everyone is sanguine about the prospects of reductive naturalism. Indeed, many are convinced that the whole style of approach is wrong for the study of consciousness, full stop. This is because they hold that consciousness has special properties that are distinct from and irreducible to properties of any other kind. Traditionally, this sort of view is associated with substance dualism of the kind promoted by Descartes. According to substance dualists reality is bifurcated, composed of two quite distinct and fundamentally different substances: the mental and the physical. Most of today's dualists are thoroughgoing naturalists. They claim only that phenomenal properties are real, and although wholly natural, they cannot be equated with any other kind of properties. Rather they are primitive properties in their own right that exist alongside representational, functional and physical properties. Hence to fully understand how they relate to other worldly properties requires additional explanation in terms of special fundamental laws, laws that are as basic as any others to be discovered by a completed physics. Understood non-reductively, the ultimate aspiration of a science of consciousness is 'to connect first-person data to third-person data; perhaps to explain the
former in terms of the latter, or at least to come up with systematic theoretical connections between the two' (Chalmers, 1999a, p. 8). Ultimate success in this venture would take the form of a fundamental theory which would explicate the simple, universal laws that underwrite the principles connecting experiences and informationally driven brain processes. Yet even those who are committed to this project recognize that currently there is a lack of adequate formalisms for characterizing and getting at experiential properties. Thus in order for there to be a viable first-personal science of consciousness existing methods of investigation must be substantially improved and developed, as is promised by techniques such as neurophenomenology (for an overview see Lutz and Thompson, 2003). A range of thought experiments motivates belief in the metaphysical dualism that suggests a need for research programmes of this kind. Ultimately, reductive naturalists claim that physical features and facts exhaust all that there is. Once the physical details of the world are in place, everything else follows or is entailed automatically. Jackson's (1982) famous knowledge argument is designed to cast doubt on the truth of this view. It features a thought experiment in which the central character, Mary, a super-scientist, knows every physical fact it is possible to know. It is imagined that Mary has been confined from birth in a wholly black and white environment and that she has only had access to the outside world via the black and white media of television monitors. Although she knows everything that is possible to know about every physical detail of the world, her knowledge is, nevertheless, incomplete. Specifically, she lacks knowledge of certain facts about colour experience. Due to her confinement, she does not know what it is like, either for herself or others, to experience colours. This is not something that she can learn without being released from her monochrome prison. The moral is that if Mary's factual knowledge is incomplete, then knowing everything that there is to know about physics is not to know everything that there is to know simpliciter. When given a metaphysical twist, this conclusion is thought to imply that fixing the physical facts simply doesn't fix all the facts; that is, that physicalism is incomplete. An even more direct attempt to demonstrate the limitations of reductive naturalism involves conjuring up zombies. Zombies are atom-for-atom, complete physical duplicates of conscious beings; they are identical in every behavioural, functional and physical detail, but despite this they are completely lacking in conscious experiences. For zombies the lights appear to be on even though no one is at home. This profound experiential vacancy is unnoticeable for all intents and purposes. It is not something that you, I, or even the zombie itself would be able to detect or articulate. It need not be supposed that zombies are actually possible; likely they contravene existing laws of nature that apply to our world. But this is entirely consistent with their existence as denizens of some logically possible world (Chalmers, 1996, p. 180).
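The anti-physicalist use to which this possibility is put in the paragraph that follows has a simple propositional skeleton, sketched below with letters introduced only for this illustration (C for 'zombies are coherently conceivable', M for 'zombies are metaphysically possible', R for 'some reductive naturalist theory of consciousness is true'). The contested step is the middle premise, that coherent conceivability entails metaphysical possibility.

\[
\frac{C \qquad C \rightarrow M \qquad M \rightarrow \neg R}{\neg R}
\]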
Conceiving of zombies seems to be no great strain. Their possibility seems quite coherent. Thus if it is allowed that bona fide or coherent conceivability implies metaphysical possibility then the mere possibility of zombies putatively shows that experiential properties need not, always and everywhere, be tied to or co-vary with behavioural, functional and/or physical properties. Experiential properties have intrinsic features that are logically distinct from all other properties. If so, because of this conceptual wedge, all forms of reductive naturalist theories of consciousness are false. Moved by these considerations, non-reductionists are confident that all attempts to understand conscious experience in terms of something else will be unable to clear a fundamental hurdle; they will fail to deal squarely with the so-called hard problem of consciousness. Chalmers' (1996) formulation of it takes the form of wondering, for any given proposal about the nature of conscious experiencing (i.e. one that identifies it with a certain functional organization, representational properties, complex patterns of neural or organismic activity, the global access and integration of information, or what you will), how it is that the proposed states or activities in question could 'give rise to' or 'generate' conscious experience. The assumption is that no satisfactory answer is or will be forthcoming. The very idea of making sense of the production of consciousness in terms of something else is a fool's errand. Thus consciousness remains the biggest stumbling block to obtaining a complete understanding of the natural world using the standard categories offered by the objective sciences. Notably, the hard problem is not just hard; it seems impossible to solve. Hence when it comes to explaining the relationship between the experiential and other worldly properties the best that can be hoped for is a non-reductive specification of relations that hold between them.
There are a number of ways reductive naturalists can respond to these sorts of challenge. This always takes the form of trying to show that despite appearances these thought experiments do not have the implications that they appear to have. This can be handled piecemeal. For example, some hold that zombies are not coherently conceivable, and thus nothing interesting follows for metaphysics from the attempt to imagine them. In the case of Mary, some argue that, despite the fact that she learns something new on leaving her room, what she learns is not any kind of factual knowledge. A more popular general strategy is to appeal to special features of phenomenal concepts that are distinct from and irreducible to physical concepts, even though phenomenal properties are not distinct from physical properties. This provides space for insisting that even though zombies are genuinely conceivable, their conceivability is not a faithful guide to metaphysical possibility. It also permits one to deny of Mary that her epistemic situation, as described, need have any of the interesting metaphysical consequences proposed. Knowing everything
there is to know under one conceptually based description does not entail knowing the very same things under another. What makes this line of reply plausible is that concepts of phenomenal consciousness are apparently special in key respects. Their special features are what systematically foster and explain the illusion that experiential properties can come apart from all other properties, even though this isn't so. Public concepts of experience, such as 'redness' or 'itchiness', are, some hold, recognitional in nature. They are thought to involve perceptions of worldly properties. In contrast, phenomenal concepts, such as 'seems red' or 'feels itchy' – which some hold are formed on the basis of re-enacting or having higher order perceivings or believings about first order experiential states – are regarded as purely recognitional or inherently first-personally perspectival (Papineau, 2002; Tye, 2009). If phenomenal concepts are formed in special ways and have unique properties because of this, then it is arguable that it only seems to us that zombies are possible (i.e. one can conceptually imagine phenomenal properties as being distinct from all other properties even though they cannot be) and that it only seems that Mary doesn't know all the facts (in fact she does, but she knows some of them under a limited description or mode of presentation). If so, those who are excited by the idea that such thought experiments damage the prospects of naturalism are subject to a persistent cognitive illusion.2 Those who endorse the phenomenal concepts strategy place different bets on the odds of closing the so-called explanatory gap (Levine, 1983). For even if naturalists are able to put their metaphysical house in order, there remains a lingering and perturbing question of why a neural state should be the basis of a certain kind of experience or of any experience instead of none at all. This question suggests that there is a gap in our understanding that still appears to remain wide open even if one denies the force of the standard thought experiments. One tactic for dealing with it, promoted by McGinn (1991), is to simply concede that our minds are cognitively incapable of forming the relevant concepts required for closing it (i.e. of providing a constructive, scientific account of consciousness). This is so, he maintains, even though consciousness is a perfectly legitimate natural and, indeed, wholly physical phenomenon. In principle it is wholly explicable in physical terms even though we are, forever, cognitively closed to understanding how this could be so. We are prevented from this because we lack the appropriate cognitive faculties. Top-down attempts to understand the psychophysical link between the experiential and the physical are impeded on one side by the limits of introspection. There is nothing in our experience that provides us with the means of intelligibly understanding how experience is generated by the processes that
There is nothing in our experience that provides us with the means of intelligibly understanding how experience is generated by the processes that underwrite it. Bottom-up scientific approaches are similarly limited by the perception-based methods they employ; hence they are equally unable to make sense of the productive link between the physical and the experiential. It does not follow that the psychophysical link is inexplicable. All that is entailed is that the missing link cannot be characterised or made intelligible in either physical or mental terms. Given our inherent conceptual limitations, it is just that we are forever prevented from making sense of the relation between experience and its material substrate. The truth about the psychophysical nexus is out there, but it is permanently beyond our ken. Consciousness will always remain a mystery. Understanding its place in the natural world is a perpetual, epistemic problem but not a metaphysical one.

It might be wondered what, if we lack even the possibility of epistemic assurances, could warrant this staunch faith in the truth of physicalism. What justifies the idea that there exists an explanation of the psychophysical connection that is forever beyond our grasp? McGinn’s answer is that nothing rationally justifies it; rather, it must be accepted as an ‘article of metaphysical faith’ (McGinn, 1991, p. 87).

Other reductionists reject this pessimistic attitude wholesale, believing it is possible to solve the hard problem by changing the tools with which we currently think about consciousness (i.e. by fiddling with the concepts on both sides of the equation until the mystery disappears). The assumption is that in the course of time our mental and physical concepts will co-evolve and a solution will be revealed as we ‘bring them into correspondence’ (Van Gulick, 2000, p. 94). But this promise of conceptual reconciliation seems unlikely to be fulfilled since our quotidian concepts of experience do not develop after the fashion of theoretical concepts in the basic sciences. Nor does the way the latter are developing seem likely to make them more like the former.

The most aggressive strategy for dealing with the explanatory gap and the hard problem is to deny that addressing such ‘how’ and ‘why’ questions is legitimate. Some hold that they beg crucial questions and that in defending their proposed identity claims reductionists ‘should simply deny that there are two properties here’ (Papineau, 1993b, pp. 179–80). The key when adopting this line is to deny, from the get-go, that there is any sense in playing the ‘generation game’ at all (i.e. of trying to answer the question of how the physical ‘gives rise’ to experience). Defenders of this view deny that a straight solution to the problems of consciousness is ever on the cards. But this is not because consciousness won’t reduce. Rather, it is because it makes no sense to ask, in general, how or why the mental and the physical are related given that they are one and the same, though differently encountered by us. Put simply, ‘If feelings are one and the same as brain states, then brain states don’t “generate” a further realm of feelings (or “give rise to” them, or “accompany” them, or “are correlated with” them). Rather, brain states are the feelings’ (Papineau, 2002, p. 3).
Accordingly, the best policy for dealing with the hard problem is the same as that for dealing with taxes: avoidance is permissible but evasion is illegal. In this case avoidance looks like the best move since ‘as soon as you suppose that conscious states are distinct from material states, then some very puzzling questions become unavoidable’ (Papineau, 2002, p. 2).

Attempts to identify conscious experience with a physical state of some kind or other would be doomed to fail from the outset if conscious experiences in fact shared no essential properties in common with such states. But the reply to this charge is that while this might seem to be the case it simply isn’t so. It is entirely possible that one and the same thing may present itself to us in different ways. There are plenty of cases in which a single referent is mistaken for numerous distinct ones, and vice versa, because of misleading appearances, names or descriptions. Noting this is all that defenders of reductive theories require if they are to establish that their hypotheses about consciousness might possibly be true in ways that would obviate having to deal with the problems of consciousness. To take this line is to hold that there is nothing more inherently absurd in claiming that conscious experiences might equate to certain kinds of physical happenings than there is in claiming that ‘the Morning star’ and ‘the Evening star’ are the same planet: Venus. In neither case is the identity immediately obvious or self-evident. If we allow this, then there is no reason to insist, in advance, that conscious experiences could not be identified with something physical. To think otherwise, on the basis of appearances of difference, is to be under the sway of the stereoscopic or antipathetic fallacy.

While attractively simple, this sort of reply nevertheless only goes so far. It does nothing, by itself, to motivate acceptance of any of the proposed identity claims; at most it makes space for their possible truth; at best it secures the barest logical possibility of putative identity. And it does not deal with the root problem that underpins the explanatory gap or the hard problem because it fails to overcome worries about the intelligibility of making certain identity claims. The bottom line is that to make their favoured identity claims credible, it is necessary for reductionists to deal with the appearances of difference in some satisfactory way.
Rethinking Metaphors of Mind

Credibly establishing identities always rests on showing that what outwardly appears to be different is in fact of the same kind – for example, a planet, a person, an event and so on. Establishing an identity in a fully satisfying way depends on the possibility, in principle, of being able to explain away appearances of difference.
To make an identity claim intelligible requires showing that seemingly distinct things can possess all of their apparent properties without tension or contradiction. Thus to make any progress on the problems of consciousness, to render any given naturalistic equation about consciousness truly convincing, would involve showing how the properties proposed for the reduction could be the kinds of things that experiential states or properties might be. A nagging concern is that all existing reductionist proposals leave too many important questions unanswered about the appearances of difference. In particular, they give inadequate answers to questions such as: Why do experiences feel as they do? Who or what does the experiencing? How and where does this all come together?

A deep-seated problem is that although reductive naturalists outwardly denounce the picture of mental objects as occupying an inner sanctum of the mind, they are inclined to take seriously questions and problems that do not wholly make sense without presupposing this picture in some, perhaps vestigial, way. For example, some are tempted to ask: Where is my experience of pain located? The sense of this question is taken to be straightforwardly akin to the query: where is my pen located? But this leads directly to the problem of phenomenal space, which is the problem of finding a place for the world of experience within the world of physical space. In this context Dennett is right to ask, ‘Now what is phenomenal space? Is it physical space inside the brain? Is it the on-stage space in the theater of consciousness located in the brain?’ (Dennett, 1991a, p. 130).

Dennett’s analysis of the assumptions grounding the enterprise of explaining consciousness is instructive. He believes that most philosophers, and many lay folk influenced by them, conjure up images of the mind as an inner, mental theatre complete with a self who examines various on-stage objects in the spotlight of consciousness (pains, colours, figments of the imagination, etc.). Those under the sway of this picture think of our verbal reports concerning consciousness as based directly upon what the self sees on its private, inner screen. Apparently it introspects mental items in a way similar to that in which we ordinarily inspect everyday things such as watches or pieces of china. Dennett has done more than most to get us to critically question our thinking on this score so as to abandon the idea that there is any such place in the physical world (and, in particular, the brain) where all the events of consciousness ‘come together’. Rather than starting with such dangerous assumptions about our explanandum, he thinks that we have no choice but to begin our investigations into the nature of consciousness by interrogating first-person reports in a public, intersubjective context. He gives the name heterophenomenology to this activity. While engaged in it, we, as interpreters, effectively allow the subjects to verbally describe to us the nature of their experiences. They generate texts about how things seem to them.
They have authority concerning the content of what is described. But what is described is best understood (at least in the first instance) as notional worlds that are analogous to fictional worlds, such as Sherlock Holmes’s London (not the real London). In being of a like nature to such fictional worlds, ‘The subject’s heterophenomenological world will be a stable intersubjectively confirmable theoretical posit’ (Dennett, 1991a, p. 81). It follows that speech acts are the primary interpreted data for the study of consciousness. These are ‘reports’, ‘judgments’, and ‘beliefs’ that are made concerning purported conscious experiences. The question of whether or not what is described in these speech acts is real or fictional is left in abeyance. Officially, when we start investigating consciousness scientifically using this method, we are required to begin (but not end) by focusing on the contents of the speech acts of humans (and other ‘possible speakers’), staying studiously neutral on what – if anything – lies behind them or explains their etiology. The ontological moral Dennett is inclined to draw is that although we ought to allow subjects to have the final word in saying how they judge that things appear to them, this in no way commits investigators to take seriously what they describe at the level of ontology. He maintains that this is the only ‘sound way to take the first-person point of view as seriously as it can be taken’ (Dennett, 2003, p. 19). For him, interrogating such texts is our only means of neutrally analysing the reports about what is going on ‘in our minds’. He claims that the texts generated in these circumstances, and not something above and beyond to which they putatively refer, are the raw material for any theory of consciousness.

In promoting this understanding of where we must start, Dennett offers a new metaphor for consciousness: the multiple drafts model. The multiple drafts model identifies consciousness with our ability to generate a coherent text concerning our putative mental episodes. James Joyce’s Ulysses is the model. But he goes further and advances a positive reductive theory of consciousness in terms of the ability to generate detailed, coherent serial reports. For Dennett the business of explaining consciousness boils down to explaining how the brain is able to produce the relevant texts. By his lights we won’t have explained consciousness until we give a naturalistic account of our ability to produce coherent speech acts through which we describe our experience of what it is like for us to be conscious. And like all good reductionists he believes that ‘Only a theory that explained conscious events in terms of unconscious events could explain consciousness at all’ (Dennett, 1991a, p. 454). Explaining consciousness, for him, converts to explaining the capacity for a certain kind of text production. His task is not to explain the existence of conscious experience as it is usually imagined to be but rather to explain how our talk about how things seem to us is produced by underlying sub-systems.
Thus he hopes to give an ontogenetic explanation of how those sub-systems were formed and an account of how they work. The essence of his proposal is captured in the following remark: ‘I am suggesting conscious human minds are more or less serial virtual machines implemented – inefficiently – on the parallel hardware evolution has provided for us’ (Dennett, 1991a, p. 218). In line with his multiple drafts model he calls the virtual machine that gives rise to consciousness a Joycean machine. And he is quite aware of the limits of his theory; he notes that ‘If consciousness is something over and above the Joycean machine, I have not yet provided a theory of consciousness at all’ (Dennett, 1991a, p. 281).

There are similarities between Dennett’s approach and ambitious higher order theories of consciousness – those that maintain that being phenomenally conscious requires attending to or noticing the aspects of one’s mental states in ways that necessarily involve making reference to those aspects in higher order acts of perception or thought. Accordingly, phenomenal consciousness requires the use of higher order perceptions or thoughts (Lycan, 1996; Rosenthal, 2005; Carruthers, 2000). If such higher order operations are, in fact, partly constitutive of phenomenal consciousness, then the neural basis of experience must include machinery for ‘inner sensing’ or for making ‘theory of mind’ ascriptions.3 For those who doubt that such mechanisms form part of our basic biological equipment, Dennett’s account has an advantage. He does not believe that the Joycean software is built-in; he regards it as the result of cultural design. He tells us that consciousness is ‘largely a product of cultural evolution that gets imparted to brains in early training’ (Dennett, 1991a, p. 219). But critics regard this as an admission that non-verbals, such as animals and infants, are incapable of having experiences. Dennett’s response to this worry is that our folksy intuitions regarding animal and infant consciousness are not sacrosanct. Jettisoning some of our most deeply held intuitions concerning the nature of experience may be a price we must pay for adopting a neater criterion of consciousness.

A deeply objectionable feature of Dennett’s theory, echoing the problems of certain versions of behaviourism and functionalism, is that a complex system would count as conscious if it produced ‘patterns of behaviour’ identical to, say, those of yours or mine when we generate a stream of coherent utterances that are interpretable as saying how things seem to us. Highlighting this aspect of Dennett’s account, many have complained that his theory leaves out what is critically important for understanding phenomenal consciousness: phenomenal qualities themselves. A capacity for experiencing such qualities, it is argued, is logically independent from (and developmentally prior to) capacities for propositional believing, reportage and narrative text production.
The problem for Dennett’s proposal and the offerings of higher order thought theorists is that, as stand-alone accounts, they allegedly place too much emphasis on sophisticated extras or supplements. These may be plausibly required for having certain kinds of conscious experience but should not be confused with the essential ingredients of a more basic, biologically grounded capacity for phenomenal experiencing itself. The idea that experience is more basic than such accounts suppose is further supported by the observation that conscious experience is not primarily about the abstract recognition and identification of objects but rather has a more perspectival nature. This has independently motivated the conjecture that conscious experiences must occur at an intermediate and cognitively impenetrable level of perceptual processing, one that is neither very high nor very low (Jackendoff, 1987). In line with this, Block (2007) has suggested that the neural basis of experience does not include mechanisms of reportage. But he accepts that this generates a methodological puzzle for consciousness research, given that sincere first-person reports are the accepted basis for investigating the neural correlates of consciousness.

There are various ways of making sense of what the relevant kind of preconceptual, pre-linguistic form of perceptual processing involves. A popular conjecture is that such perceptual experiences might be best understood in representational terms. This conjecture is lent some initial plausibility by the fact that those sentient beings which are known to be perceptually conscious do not get by in the world by merely being responsive and reactive to its offerings. Rather, they are conscious of things as being a certain way. As Tye observes, the idea that perceptual experiences must be contentful is ‘most strongly motivated by the thought that, in seeing objects, they look some way to us, together with the further thought that an object can look a certain way only if it is experienced as being that way. This in turn, seems to require that the object be represented as being that way’ (Tye, 2009, p. 88). Minimally, a state ‘represents as’ if it presents some portion of the world as being a certain (potentially) truth-evaluable way, for example, as ‘being hot’ or as ‘being red’. Importantly, a creature need not have sophisticated conceptual abilities in order to be in such states of mind; it is possible that such content is non-conceptual. All that is required for a mental state to possess content of this kind is for it to have inherently specifiable correctness or accuracy conditions. The intuition is that simply by instantiating the property of ‘phenomenal blueness’ a mental state is automatically capable of representing something in the environment as ‘being blue’.

The most ambitious reductive variant of this idea takes it that the phenomenal character of an experience is exhausted by its representational content; phenomenal properties are nothing but representational properties. Such theories hold that the phenomenal aspects of experience are nothing over and above taking features of the world to be a certain truth-evaluable way.
Accordingly, what it is like to be conscious boils down to representing how things might be (given that how the world seems to be and how it actually is may differ). Consequently, there can be no difference in phenomenal character without a corresponding difference in representational content because phenomenal character just is a kind of representational content. Weaker, non-reductive versions of representationalism hold that changes in phenomenal character lawfully co-vary with changes in content because, although distinct from representational properties, phenomenal properties perform representational service. Both strong and weak versions of representationalism about consciousness face a number of serious objections (for details see Hutto, 2009). Arguably, a major problem with all such accounts is that they attempt to understand basic perceptual activity by illicitly importing features that in fact necessarily depend on being a participant in sophisticated, linguistically-based practices (e.g. having mental states with the kind of semantic content that requires assessment by appeal to public norms and concepts, as in the attribution of ‘blueness’ to aspects of the environment). If so, in imagining basic experiences to have more properties than is necessary or possible for them to have, such accounts make the opposite mistake to that of Dennett and his followers.

Plausibly, having a capacity for phenomenal experiencing is more rudimentary and fundamental than the capacity to represent the world as being a certain truth-evaluable way. Consequently, experiencing aspects of the world might be thoroughly non-contentful (and not just non-conceptual). Experiencing might not be intrinsically content-involving even though there is something-it-is-like to experience worldly offerings in phenomenologically salient ways. This non-representationalist view of experience features as the central plank of a radically enactivist approach to phenomenality: one that seeks to understand phenomenal experience by focusing on the ways in which creatures actively sense, perceive and engage with their environments (see Hutto and Myin, forthcoming). Enactivists propose that the core features of experiential properties are best explained by appeal to specific patterns of sensorimotor activity, through which complex self-organising systems interact with aspects of their environment. Their slogan is: ‘Experience isn’t something that happens in us, it is something we do’ (Noë, 2004, p. 216; see also Thompson, 2007). They maintain that ‘Experience is not caused by and realized in the brain, although it depends causally on the brain. Experience is realized in the active life of the skilful animal’ (Noë, 2004, p. 227). Thus enactivists challenge traditional internalist thinking about the extent of the supervenience base of consciousness, holding that it constitutively involves not just the brain but also bodily and environmental features.
In pressing this idea, enactivists are critical of endeavours to understand the phenomenal character of experience on a purely correlative basis, namely, by looking exclusively at what goes on inside the craniums of experiencers (a style of approach exemplified by those who seek to identify the neural correlates of consciousness). Like those critics who are worried about the hard problem and the explanatory gap, enactivists hold that even if relevant mind/brain correlations are established, the fruits of such work would remain explanatorily sour: ultimately this alone would tell us precious little about how or why experiences have the particular phenomenal characters or feels that they do. By way of contrast, it is argued that charting which environment-involving patterns of sensorimotor interaction make a difference to having experiences with specific kinds of phenomenal character holds out much greater explanatory promise.

While it is impossible to give a full and fair assessment of these proposals here, this analysis suggests that a completely satisfying naturalistic understanding of conscious experiences will likely require the complex balancing and selective integration of a range of different theories. Achieving this will require modifications not only of the ambitions and resources of specific proposals, in tune with a better understanding of the level at which they best operate; it will also require a fundamental re-thinking of some of our basic assumptions about the nature of consciousness.
3
The Mark of the Mental
Fred Adams and Steve Beighley
Introduction

What’s a mind? What would it take to build one? There must be a difference between having a mind and not having one. What is it? This paper defends the view that there is a difference and tries to say what it is.

Why be interested? First, inquiring minds want to know. The question is intrinsically interesting. It certainly seems that there is a natural divide between biological1 systems that have minds and those that don’t. If this is not an illusion, we should be able to discover what constitutes that difference.

Second, we commonly talk about ‘minds’ and ‘mental states’. Is this just a convenient fiction? Some people think we talk about minds because we don’t yet know the real story about why people (and animals) do the things they do. Speaking about minds has practical predictive value in itself even if there aren’t any minds. Horticulturists say that some plants ‘like light’ or ‘like dark places’, even though no one literally thinks plants have minds or have likes and dislikes. Yet, while most people would accept that speaking of ‘likes’ for houseplants is a convenient fiction, very few think it’s fiction when talking about Grandma or the kids (or even the family pet). Here, we seem more committed to the attributions of mind and mental states to people and pets being literally true. We think these uses are literally true, and our job is to figure out what underlies that truth.

Third, science, the law, even other areas of philosophy pin important issues on mental states. In the law, it is important to know if one who acts is legally sane. In epistemology, some internalists about justification claim that justification supervenes upon mental states. This requires knowing what constitutes a mental state. Researchers interested in embodied cognition are making some amazing claims in the scientific literature.2 We are told that the use of computers, cell phones, PDAs, or even pencil and paper while doing a complex math problem can involve cognition extending into the environment – across the boundaries of body and brain (Clark, 2009). The skin is an arbitrary boundary, only observed by an outmoded Cartesian view of the mind. Some of this may be true, but unless we are able to specify what counts as a cognitive process, it will be impossible to evaluate such claims.
For now, researchers are able to make such bold speculative claims precisely because there is no agreed upon account of what makes something a cognitive process. If these uses of tools manipulate representations and information, and if thinking (cognizing) is a kind of manipulation of representations, then why isn’t tool use a kind of cognizing?

Fourth, suppose we were going to try to build a mind. Many artificial intelligence labs around the world are racing to be the first to build a computer or a robot that can think. Governments are funding projects to build genuinely intelligent agents to serve many different functions. To win the competition, these centers must determine what it takes to build a mind, a cognitive agent.

There are, however, more and less radical views about why there might not be a mark of the mental at all. Eliminativism asserts that there are no minds. Mysterianism asserts that we can never know how a brain generates a mind. We will not discuss these views here, but we do so elsewhere.3 A less radical possibility is that researchers cannot agree on what the mark of the mental is or why people should want one. Susan Hurley’s comment is typical: ‘Criteria of the mental or the cognitive vary widely (if not wildly) across theorists; it isn’t even clear what agreed work such criteria should do’ (Hurley, forthcoming, 5). Kim too paints a dismal prospect for a unified conception of mind, saying:

The diversity and possible lack of unity in our conception of the mental [suggests] that the class of things and their states that we classify as mental is also likely to be a varied and heterogeneous lot . . . A question to which we do not yet have an answer is this: In virtue of what common property are both sensory states and intentional states ‘mental’? What do our pains and beliefs have in common in virtue of which they fall under the single rubric of ‘mental phenomena’? They of course satisfy the disjunctive property ‘qualitative or intentional’, but that would be like trying to find a commonality between red and round by saying that both red things and round things satisfy ‘red or round’. To the extent that we lack a satisfying answer to the question we fail to have a unitary conception of what mentality consists in. (Kim, 2006, pp. 26–7)

Hurley makes her point in the context of the extended mind debate, chiding Adams and Aizawa (2008) for requiring a mark of the mental in order to decide whether minds extend. She rightly points out that one can study the mind without having an agreed upon mark of the mental. We know that vision, memory, and decision making are mental, but if one wants to claim that mental events extend beyond body and brain, one must have a notion of what makes something or some process mental.

Kim (2006) argues that there is a dichotomy between sensory states and intentional states.
The former seem to take instances of properties as their contents (sweet tastes, round feels, loud sounds), while the latter seem to take propositions as their contents (believes that the Democrats will win; hopes that we will get out of Iraq). For Kim, a unified account of the mental would require something’s being the same across these types of states. Kim is pessimistic about this possibility.4
Views of the Mark of the Mental

We will now consider attempts to overcome these sorts of worries by saying just what the mark of the mental is. As Kim suggested, there could be a single thing or common property that all mental states share and in virtue of which they are mental. Call this the ‘single property view’. Alternatively, there could be a cluster of properties that make a mind, and something may have to possess a certain number of the properties in the cluster to be a mind. Call this the ‘property cluster view’. Then, finally, there is the single system view, which says there is a single set of properties that all minds must have, but not every state that is part of the system must itself possess these properties. Some states may be fully part of the mental system in virtue of their causal contribution to the system properties. We will now examine single property views, property cluster views, and finally present our own account of the mark of the mental, a single system view.
Incorrigibility

The first single property view that we will consider focuses on the property of incorrigibility. Rorty (1970a, 1970b, 1972) and possibly Dennett (1996) want incorrigibility to serve as the mark of the mental. Rorty is careful to distinguish occurrent states (my foot hurts now; I’m now thinking it is time for lunch) from standing states (my background desire for self-preservation; my background belief that global warming is a bad thing). Given this distinction, Rorty limits incorrigibility to only first person, reportable, occurrent sensations and thoughts, admitting that there is no single mark of the mental for all entities customarily called ‘mental’ (Rorty, 1970a, p. 409). According to Rorty ‘mental’ does not apply to standing states.5 On this view, incorrigible states are first-person self-reports about thoughts or sensations, for example, ‘I’m tired now’, ‘I’m angry now’, and so on. Rorty says:

What makes an entity mental is not whether or not it is something that explains behaviour, and what makes a property mental is not whether or not it is a property of a physical entity.
The only thing that can make either an entity or a property mental is that certain reports of its existence or occurrence have the special status that is accorded to, e.g., reports of thoughts and sensations – the status of incorrigibility. (Rorty, 1970a, p. 414)

For Rorty, incorrigibility amounts to situations such as ‘when the behavioural evidence for what Smith was thinking about conflicted with Smith’s own report of what he was thinking about, a more adequate account of the sum of Smith’s behaviour could be obtained by relying on Smith’s report than by relying on the behavioural evidence’ (Rorty, 1970a, p. 416).

Why incorrigibility? At one point, Rorty (1970a) criticizes Armstrong for trying to find a ‘topic neutral’ way of characterizing mental states such that they might turn out to be physical states. That is, science might discover that what was once attributed to minds is had by brains. As we could discover that water is H2O or lightning is electrical discharge, we could discover minds are brains, that mental events are physical events. Rorty rejects this view on the grounds that it would not allow a conceptual distinction between the mental and the physical. If minds and mental phenomena were discovered to be brains and physical phenomena, there would be nothing to ground6 the conceptual difference between ‘the mental’ and ‘the physical’.

Among Rorty’s motives is to find a logical or conceptual difference between ‘the mental’ and ‘the physical’ as categories. But categories of what? Not substance; this is not supposed to be a dualistic metaphysics. Instead, the mental and the physical are different categories of statements, assertions, or reports. Physical reports have the logical property of being able to be over-ridden, while mental reports have the logical property of not being able to be over-ridden or corrected by other physical reports, say, a third-person report about what is happening in one’s brain.

But why reports? Why not behaviour? In a sense first-person reports are pieces of behaviour. Are they behaviour that non-mental things can’t produce? Rorty seems to think non-mental things can produce reports, but they won’t be mental reports because they don’t have the logical property of not being able to be overridden. Hence, Rorty is coming out of the behaviourist tradition. We can’t simply look inside one’s head and see events as mental. We don’t know what Rorty thought of modern techniques of neuroscience, but we suspect that he would maintain that it is only the first-person self-reports recorded from the same subjects scanned that give us genuine knowledge of the subject’s mental states. Rorty did mention the philosopher’s fiction, the ‘cerebroscope’, but maintained that reports from the cerebroscope would not override first-person reports. If they did, Rorty is prepared to admit that the category of ‘the mental’ would ‘lose its incorrigible status, and thus, status as mental’ (Rorty, 1970a, p. 421). So, why reports? Reports are our only access to the mental as construed by Rorty.7
We will present three simple objections to Rorty and then argue against his whole approach. The first objection is the infants-and-animals objection. On Rorty’s view non-lingual infants and animals don’t have minds, but surely that’s not true. Of course, there have been those who have denied that animals have minds (Descartes) or that they have conscious minds (Carruthers), but most philosophers and scientists accept that both pre-lingual infants and non-lingual animals have minds. Rorty’s view simply cannot accommodate this. Infants and animals make noises, but they don’t give first-person self-reports, and so they are denied the incorrigible reports that are his hallmark of the mental. They have plenty of occurrent mental events that are simply not captured by Rorty’s criterion.

The second objection is what we’ll call the ‘Cog-objection’. Dennett (1996) sides sympathetically with Rorty as he tells of the project to build an intelligent agent named Cog at Rodney Brooks’ MIT lab. Cog is fitted with cameras for eyes, microphones for ears, and microprocessors linked in parallel for a brain. The goal is to get Cog to think and to teach it language. Cog is a self-reprogramming system, and at some point the reports that come out of Cog may not match the interpretations placed on its internal states by its programmers. Dennett agrees with Rorty: for Cog to think would be for Cog to make incorrigible reports on its internal states. We think at some point Cog may babble like a baby. At a later point Cog may come out with ‘It is hot in here’. Suppose Cog has an internal mechanism that is not unlike a quantum mechanism in this respect: to examine its internal states is to change them. So, in effect, Cog’s reports are incorrigible. Does this mean that Cog satisfies the criteria for ‘the mental’? It would seem so, but this seems like a limitation on us, on what we know or can know, not a breakthrough in cognitive science. Of course, Rorty’s official pronouncement had two conditions:

S believes incorrigibly that p at t if and only if:
(1) S believes that p at t,
(2) There are no accepted procedures by applying which it would be rational to come to believe that not-p, given S’s belief that p at t. (Rorty, 1970a, p. 417)

We suspect that Rorty had to mean that one knew (1) was satisfied in virtue of a self-report. We know that Cog believes it is hot in here because he utters ‘It is hot in here’. If there is some other behavioural test for occurrent belief, then Rorty’s theory collapses, assuming occurrent beliefs are mental states.
Condition (2) is satisfied by the nature of the internal mechanism that changes the system upon any attempt to verify its current states. Of course, one could object that unless we have another way to validate Cog’s ‘beliefs’, we don’t really know that he made a report or satisfies (1). This takes us to our third simple objection.

The third objection is that Rorty is not entitled to help himself to the idea that any verbal utterance is indeed a report. Some Japanese cars (e.g. the 1986 300ZX) have a system that says ‘door not closed’ to warn the driver not to drive until closing the door. Did the car make an internal report? We would claim not. A sensor detected that the door was ajar and sent a signal to a voice simulator that emitted sounds interpretable by English speakers. To be a first-person report telling us the internal state of the car, the utterance must be made with intention and purpose. These may have existed in the minds of the Nissan engineers who designed the car but not in the car itself. The same is true of Cog. If ‘It is hot in here’ comes out of Cog’s audio port, is that a first-person self-report? We suspect that it is not, any more than it is a ‘report’ coming out of the 300ZX. Reports are linguistic utterances.8 They are intentional. They have meaning. They are for the purpose of communicating or conveying information. This is the type of thing minds do. In essence, genuine reports have to come from minds.

The more general objection is that Rorty has things the wrong way around. If genuine reports come only from minds, then it can’t be the logical features of the reports, such as incorrigibility, that make them mental. There must be something else underlying the ability to make the report that accounts for the system having a mind, being a mental system. Furthermore, we deny Rorty’s worry9 that the distinction between the mental and the physical would collapse if we reject his approach. True, if minds turn out to be physical things, then the mental will be a subset of the physical, but this doesn’t mean that there is no difference between a minded thing and a non-minded thing. Minded things can be physical things arranged in different ways from non-minded things, with different functions, different causal histories, and different internal and external behavioural capacities. Just because minds may be physical does not mean there is no difference between minds and non-minds. The category of the mental, and therefore the meaning of ‘the mental’, can be a subset of the physical. Gold is a kind, and it is physical. Water is a kind, and it is physical. They are both physical, but that does not mean that there is no significant categorical difference between them. We see no reason to worry that there will be no interesting conceptual difference between the mental and the physical if they both turn out to be physical, as Rorty seems to fear. Once one takes seriously the notion that possibly everything is physical, even minds, then Rorty’s fears vanish. If one takes seriously the thought that minds are natural kinds, then it is up to science to discover what minds really are.
Intentionality

The next single property view we will consider focuses on intentionality or aboutness. This view has a long and venerable history, going back to Brentano (1874). There is the worry of Kim and others that there could be no single property had by all mental states because there seems to be a fundamental divide between sensory states like sensations and intentional states like beliefs, desires, hopes, and fears. If Brentano’s property of aboutness were only a property of intentional states, not sensory states, that would leave sensory states out of the category of the mental. Of course, one could adopt a single system view and say that the system must have intentional states (aboutness), but not every mental state contributing to the system must itself be intentional. This would be one way to attempt to deal with the divide.

Of late, a number of distinguished philosophers, including Crane10 (1998), Dretske (1995), and Tye (1995), have argued that sensory states are intentional. If true, then there may still be hope for the single property view. All mental states may have aboutness, and it is in virtue of this that they are mental. Crane nicely expresses the continuity of the intentionality thesis this way: ‘What is common between these different states of mind is expressed in Brentano’s formulation: “in the idea something is conceived, in the wish something is wished. And in the sensation something is sensed . . .” ’ (Crane, 1998, p. 238). Crane approaches intentionality for all mental states from a phenomenological point of view, looking ‘for a sense in which something is “given” to the mind in sensation and emotion, just as something is given to the mind in thought and experience . . . in sensation something is felt, in emotion, something is apprehended . . .’ (Crane, 1998, p. 243). He even goes so far as to ‘emphasize the priority of intentionality as a phenomenological notion’ (Crane, 1998, p. 249). He wants to contrast this aspect of intentionality with another that he seems to recognize, viz. ‘primitive forms of intentionality . . . only remotely connected with conscious mental life, say the intentionality of information processing, which goes on in our brain’ (Crane, 1998, p. 249). He rejects the last type of intentionality as mental because it is not phenomenological. We find it curious that one would appeal to intentionality as the mark of the mental and then acknowledge a kind of intentionality that is not mental. Crane notes that there may appear to be a problem in this, saying:
This would be a perverse or circular way to proceed if we did not already have a grasp on the concept of a mind. But we do have such a grasp: it is that concept which we try and express when we say that to have a mind is to have a point of view or perspective on the world, or when we say that there is something it is like to be conscious, or when we talk about the world being manifest to a subject of experience, or when we talk about the world being a phenomenon for a subject. (Crane, 1998, p. 249)

But he seems to think that he gets off the hook by appealing to the phenomenological side of conscious mental states. We don’t think he makes it off the hook. If his concept of mind is the phenomenologically conscious (what is given), why doesn’t he make that the mark of the mental? If it is not, then how does he rule out those information processing states in the brain as being mental? For instance, some parts of the brain detect low blood sugar or cold extremities and reduce insulin production or constrict the capillaries to hold blood from the extremities. Why aren’t these mental activities if they are intentional states of the brain? What about unconscious desires or beliefs? Surely these are mental states even if they lack a phenomenology. We are sympathetic with the idea that sensory states are intentional, but probably not because they are ‘given’ phenomenologically; rather, it is because they are the right kinds of representations. So we are on board with Crane (1998) until the very end, where he acknowledges intentional non-mental states. He needs a principled way to distinguish the mental intentional states from the non-mental ones. Crane wants to do it via phenomenology. We think there must be a better way. Furthermore, since Crane defends only a ‘weak’ view that sensory states are intentional, but their qualia may not be, he acknowledges that this leaves him open to the question of what makes qualia mental. Qualitative states certainly seem to be mental states in good standing, and they seem to be on a par with phenomenological states generally. So it would be strange indeed if a mark of the mental left them out. Realizing this, Crane acknowledges that ‘more needs to be said’ (Crane, 1998, p. 251, n. 26).

We believe accounts like those of Tye (1995) and Dretske (1995) have the advantage of making the qualitative characteristics of conscious states themselves representational and hence intentional, thereby leaving no ‘dangling qualia’ unaccounted for by the mark of the mental. While there are some important differences between their views, for our purposes here we emphasize their similarities.11 Consider the sweet taste of sugar in one’s mouth. For Dretske (1995), the qualia of experience arise due to an indicator function in the sensory system. Although Dretske uses the term ‘function’ in his theory, we don’t believe too much weight should be placed upon that term (Adams, 2003). What is essential is that there is a type of sustained causing (Adams, 1991). That is, the structure in the brain S that indicates the presence of sugar will cause some other brain activity or bodily movement M (say, swallowing). When S causes M (rather than some contrasting N – spitting) because of the indication of sugar by S, then and only then does S acquire the function of indicating sugar. What is important is the sustained, contrastive causing.
The structure S must be sustained in its causing some relevant effect by the fact that it indicates the presence of sugar. If so, then S comes to represent the presence of sugar and makes the person in whom S does this conscious of sugar by virtue of the qualitative experience of sweetness.

Tye (1995) defends a representational theory of the phenomenal mind on the basis of a co-variation view of representation. The view states that something S represents that P =df: if optimal conditions obtain, S is tokened in X if and only if P and because P. In this definition, ‘X’ is a placeholder for a person and ‘P’ is a placeholder for a proposition, due to the ‘that’ on the left hand side of the identity sign. The latter is a particularly bad choice because, first, propositions don’t cause things. Events or instantiations of states of affairs may cause things, may have propositional content, and may cause things because they have propositional content, but propositions themselves don’t cause things. Second, Tye himself says the structure of phenomenal representation is topographical (Tye, 1995, p. 120). To us this suggests that it would be far better to interpret ‘P’ in the definition as a property instance. If so, then the definition says that S (the sensation of sweetness) represents P (sugar in the mouth) if, under optimal conditions, S is tokened in person X if and only if there is sugar in the mouth and because there is sugar in the mouth. Hence, the sweet sensation represents sugar in the mouth because it is tokened when, and because, there is sugar in the mouth. On the accounts of both Dretske and Tye, therefore, the qualia themselves arise out of the representational role of the sensory states. It is because the states are representational (intentional) that the qualia are as they are. One’s qualitative experience of sweetness is itself an intentional state – a representation of sugar in the mouth under normal conditions.12
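To make the contrast just drawn easier to see, the two readings of the co-variation schema can be set out side by side. The notation below is ours, not Tye’s: ‘O’ simply abbreviates the optimal-conditions clause, and ‘F’ stands for a property such as being sugar-in-the-mouth.

\[
(\text{propositional reading})\quad S \text{ represents that } P \;=_{df}\; \text{if } O \text{ obtains: } S \text{ is tokened in } X \leftrightarrow P,\ \text{and } S \text{ is tokened because } P.
\]
\[
(\text{property-instance reading})\quad S \text{ represents } F \;=_{df}\; \text{if } O \text{ obtains: } S \text{ is tokened in } X \leftrightarrow F \text{ is instantiated},\ \text{and } S \text{ is tokened because an instance of } F \text{ occurs}.
\]

On the second reading it is the instance of F – the sugar in the mouth – that does the causal work, which is what the objection about propositions not causing anything requires.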
A further issue raised by Crane (1998) and Enc (1982) is whether intentionality is a sufficient condition for the mental. As we’ve seen, Crane embraces the existence of two kinds of intentionality because he is willing to say that information processing in the brain is a ‘primitive form’ of intentionality but is itself not mental. If this ‘primitive intentionality’ is aboutness of the type generated by any informational connection in the world, then it exists everywhere, not only in the brain. Litmus paper’s turning pink is about a liquid’s being an acid. A thermometer’s rising is about an increase in temperature. The falling barometer is about a decrease in atmospheric pressure. This kind of informational aboutness is everywhere in the world, and barring panpsychism it is not sufficient for the existence of minds. So if this ‘primitive’ intentionality13 is indeed intentionality, it is not the right kind to qualify something as ‘mental’. Any single property view that claims intentionality is the mark of the mental is going to have to clarify what the ‘right’ kind of intentionality is and explain why it is the right kind.

Fodor (1986b) introduces the notion of the detection of non-nomic properties for precisely this type of reason. Lower organisms, such as paramecia, exhibit ‘purposive’ types of behaviour, such as photosensitivity. Fodor does not count this behaviour as ‘action’, as he reserves that term for the behaviour of intentional systems (Fodor, 1986b, p. 6). However, Fodor needs a principled reason to exclude lower organisms from the class of genuine intentional systems. To do that, he introduces a behavioural test: being able to differentially detect and respond to non-nomic properties. A non-nomic property is something ‘such that [if] objects fall under laws in virtue of possessing it, then that property is ipso facto nomic’ (Fodor, 1986b, p. 10). So, having mass or momentum are nomic, and being a crumpled shirt or a left shoe are not. In his view minds alone can detect a non-nomic property F qua F. So his view addresses the problem of finding the ‘right kind’ of intentionality for the mark of the mental in this way. As he puts it, ‘the difference between paramecia and us is that we can respond selectively to non-nomic stimulus properties and they can’t’ (Fodor, 1986b, p. 11).

We are not sure how much emphasis should be placed on non-nomicity. For in the end Fodor seems to appeal mainly to the fact that paramecia and other lower organisms lack concepts (whether of nomic or non-nomic properties). They don’t have the proper inferential mechanisms to form concepts over wide ranges of varying stimuli, and it is what is lacking that matters more than the behavioural test. At one point Fodor puts his point this way: ‘What distinguishes intentional systems from the rest is that, whereas we’ve got perceptual categories, what they’ve got is, at most, sensory manifolds’ (Fodor, 1986b, p. 20). This leads us to think his principled difference between proper intentional systems (like us) and only apparent ones (like the paramecium) is that we have concepts and they are purely sensory systems.14

Another person who appeals to intentionality is Fitch (2007). His view is a single hierarchical system view where the intentionality of a mental system is built upon a basis of nano-intentionality that starts at the level of the biological cell. Indeed, Fitch thinks this is the difference between computers and biological systems. By ‘nano-intentionality’ Fitch means the capacity cells have to rearrange molecules in individual circumstances in an autonomous and adaptive fashion (Fitch, 2007, p. 10). It seems to us that Fitch’s ‘nano-intentionality’ is a placeholder for evolved biological function. If a cell or an organ has acquired the function to do F, then the cell or organ’s nano-intentionality is about F-ness. So something’s being for a biological purpose or goal is its having nano-intentionality. When nano-intentionality involves the processing of information in a neuron, then Fitch thinks things rise to the level of mental ‘aboutness’ (Fitch, 2007, p. 12). For Fitch, biologically being a neuron seems to be a necessary condition of mental aboutness, as opposed to nano-intentionality of cells generally. Still, he seems to think nano-intentionality itself is necessary for mental aboutness.15 He also says that the relevant difference between us and the amoeba is that we have ‘dedicated information-processing machinery’.
Although he does not explain this term, we think he means something like the specialization of a property detector or concept. Fitch also stresses that the nano-intentionality of cells or systems responds to the novelty of their circumstances and records, in a non-mental sense, internal changes based upon the organism’s history of interactions. He maintains that if the instructions for responding are already encoded in the organism’s DNA, then its responding to circumstances accordingly is not nano-intentionality. He is interested in a biological kind of ‘learning’, or so it seems to us, that depends on processes not simply inherited or primarily fixed by genetic inheritance. Fitch thinks genuinely ‘mental’ representations are internal mental models or ‘possible worlds instantiated in neuronal firings’ (Fitch, 2007, p. 20). The system has to be sensitive to the model, as well as, via the model, sensitive to the world. These models also have to direct the system’s behaviour, which is contingent upon tracking both the world and the internal models. Once models are in place, then representation and misrepresentation are possible. Hence, Fitch’s view is that these models or complex representations are the hallmark of the mental and only occur at higher levels of organization in the nervous systems of animals.
Consciousness

We turn now to Searle’s view, which is actually not a single property view but a system view, where the system itself must possess consciousness to be mental. It is a system view because he appeals to the thesis of ‘the background’:

Intentional phenomena such as meanings, understandings, interpretations, beliefs, desires and experiences only function within a set of Background capacities that are not themselves intentional . . . all representation, whether in language, thought, or experience, only succeeds in representing, given a set of nonrepresentational capacities. (Searle, 1992, p. 178)

Searle blocks the spread of intentionality to lower parts of the brain, or merely information-processing parts of the brain, by insisting that ‘only a being that could have conscious intentional states could have intentional states at all, and every unconscious intentional state is at least potentially conscious’ (Searle, 1992, p. 132). This is Searle’s famous ‘connection principle’ (Searle, 1990a; 1992, pp. 155ff.). It is with the connection principle that Searle distinguishes unconscious mental states from other states of the brain that are not conscious and may have nothing to do with the mind at all – thermo-regulatory states or the regulation of blood sugar levels, for example.
Searle (1992) cannot simply appeal to intentionality as the mark of the mental because he thinks sensory states are mental but lack intentionality. He accepts that both sensory states and intentional states are either conscious or are accessible to consciousness.16 He cannot define consciousness in terms of intentionality for similar reasons. He thinks consciousness arises out of the neurochemical properties of the brain, and it is the neurochemical properties that are responsible for consciousness. Consciousness itself has a cluster of structural properties, including: different sensory modalities, unity, intentionality, subjective feeling, figure-ground structure in experience, aspect of familiarity, overflow, center and periphery, boundary conditions, mood, and dimensions of pleasure and pain (Searle, 1992, Chapter 6).

For Searle, what makes something a mental state is that it either is conscious or is potentially conscious. If it is an intentional state, then the connection principle is run via ‘aspectual shape’ (Searle, 1990a, p. 587; 1992, p. 156). If it is a sensory state, like an unconscious pain, then aspectual shape is dropped and the only appeal is to the underlying neurophysiological processes capable of generating a conscious state, like conscious pain, and appropriate pain behaviour (Searle, 1992, p. 165). Searle’s argument for the connection principle when the unconscious state is intentional is roughly this. Intentional states are essentially aspectual. For example:

When you see a car, it is not simply a matter of an object being registered by your perceptual apparatus; rather you actually have conscious experience of the object from a certain point of view and with certain features . . . as having a certain shape, as having a certain color, and so forth . . . and . . . [this] is true of intentional states generally. (Searle, 1990a, p. 587)

Unconscious intentional states must be aspectual since they are intentional, but since they are unconscious their aspectual shape is not manifest. Searle claims that there is no aspectual shape at the level of neurons. That leaves unconscious intentional states like any other neurophysiological state, even the non-mental ones. So to be mental, unconscious mental states must preserve their aspectual shape. The only sense that we can give to their preserving their aspectual shape, when unconscious, is that they are possible contents of consciousness (Searle, 1992, pp. 159–60). The only fact about the underlying neurophysiology is that it can cause conscious states that manifest conscious intentional aspectual shapes. So the unconscious states are only mental because they have the capacity to produce conscious states with the right stuff: aspectual shape.

No one would deny that being conscious is sufficient for being mental.
of neurons’ (Searle, 1990a, p. 588). This may be true at the level of single neurons. However, in macaque monkeys, at the level of single cell recordings, there are cells that are sensitive to only goal-directed movements of other monkeys. This seems aspectual. More importantly, Searle gives no reason why there cannot be aspectual shape at the level of collections of neurons. After all, he insists that the only things that cause consciousness are the neurophysiological properties of the brain. If these cause aspectual shape in conscious processes, then there must be something specific in the neural structure that accounts for the specific aspectual shape of one’s conscious experience when accessed. Of course, Searle may mean that the aspectual shape of conscious states only exists as an emergent property in the conscious access of the unconscious state. But since he thinks all conscious states are neural states, then at some level, perhaps the conscious level of organization in the brain, clusters of neurons must possess aspectual shape. They must be aspectually organized in their firings, so to speak. To deny this would make Searle an emergent dualist, something he would vehemently deny.

In addition, we worry that there may be mental states in good standing that cannot be brought to consciousness. One type of case is blindsight, where an individual who lacks a conscious presentation of a visual scene nonetheless responds purposively, by orienting a hand in relation to a slit or negotiating objects in a room (Weiskrantz, 1997). Another type of case is vision for action (Milner and Goodale, 1995). Some actions guided by the dorsal stream in the visual system are able to correct for visual illusions, such as Titchener Circles. An object being reached for might appear larger than it is because of an illusion, but thanks to dorsal stream processing one’s reach is for the actual size of the circle.17 Milner and Goodale, as well as Weiskrantz, have given examples of states that are surely mental. They are guiding high-level purposive activity. Yet they seem not to be consciously accessible to the blindsight subjects who cannot describe the scenes before them and whose dorsal stream processing is guiding their action.
Our view

Our view is a systems view.18 We don’t think that every state in a mental system must possess a single type of property shared by all other core mental states. Some states contribute to a system’s having the property or properties that make a system mental without themselves possessing the core property or properties. We think that mental systems share a cluster of properties that we will articulate below.

On our view one cannot properly say what minds are without saying what minds are for. Minds allow organisms to track changes in their environment
and respond differentially to those changes. This includes allowing them to maintain integrity and attempt to satisfy their needs for survival. Some tracking is coarse-grained, as in sensory processing, and some fine-grained, as in concept formation. An organism that senses a source of sugar may have a sensation of sweetness. Sugars are carbon based. So the organism would be sensing something carbon based, but not necessarily sensing it as something carbon based. Thoughts, however, may be fine-grained, and one may be able to think that something is a sugar and also think of it as carbon based if one has the requisite conceptual competence.

Tracking changes in the organism’s environment, both inner and outer, is done by processing information. This requires a network of internal and external sensory mechanisms that transmit or process information to the structures in the organism that use this information to modify or control the organism’s behaviour. Following the mathematical model articulated by Shannon,19 we understand information as involving a law-like connection of properties set against a background of locally stable conditions. Some call this level of information processing, done by transducer mechanisms and internal mechanisms that regulate body temperature or blood sugar, ‘intentional’ (Crane, Fitch). We, however, distinguish levels of intentionality and maintain that getting to the level of intentionality that is mental aboutness requires rising above information to the level of semantic content (Dretske, 1981, 1985b). Smoke carries information about fire, but ‘smoke’ means smoke, whether or not an occasion of its use carries information about fire. Minds, genuine mental systems, rise to the level of meaning. Their internal states can mean or be about things in the way that a thought about smoke can mean smoke. This level of aboutness permits misrepresentation (Dretske, 1986). For us, the requirement that a mind has states that rise to the level of meaning is what determines that amoebae and paramecia lack minds. So we make the same distinctions between mental and non-mental systems as Crane, Fodor, and Fitch, but we get there by a slightly different route, although we are close to Fodor’s view, if one downplays non-nomicity.

Semantic content can be as simple as a dedicated property detector in a lower-level mind or as complex as propositional attitudes in human minds. Creatures with minds do things because of the contents of their minds, because of what their internal states mean or are about (Dretske, 1985b). We have no arguments that there could not be purely sensory systems. No one else that we’ve read has such arguments either. We are inclined to think that purely sensory systems arising in nature would not survive. They would have to be able to do things to their benefit because of what they sense. If they did this, they would have concepts – along with wants, desires, and the means of satisfying them. So we are inclined to the view that actual sensory systems are conjoined to intentional systems, that is, systems with concepts.
We follow Dretske (1981) in distinguishing orders of intentionality. Purely sensory systems have only the first order of intentionality, the same order as information itself, what we called ‘coarse-grained’ information processing. So if all Fs are Gs, but not as a matter of law, just as a matter of coincidence, something that carries the information that t is F does not necessarily carry the information that t is G, even though it is G. A sensory system that detects property F need not detect property G. Hence, even a sensory system has this degree of intentionality. If Fs and Gs are connected as a matter of natural law, however, a purely sensory system that carries the information that something is F by detecting Fs will also necessarily carry the information that something is G. Sensory systems lack a second order of intentionality: being able to distinguish Fs from Gs. Systems with concepts, on the other hand, appropriately equipped, would be able to make this fine-grained distinction. Grandma knew there was water in Lake Michigan even if she didn’t know there was H2O there, perhaps because she did not have the concept of H2O. So we differentiate purely sensory systems from systems with concepts on the basis of orders of intentionality.

A question that haunts us and others is why amoebae or paramecia don’t have sensory states (qualia). Everyone we’ve discussed would deny that such creatures have minds, largely because they don’t have intentional states, concepts, mental models, the ability to detect non-nomic properties, and so on. But why don’t they have sensations? For example, Fitch says they have nano-intentionality, the ability to rearrange their parts based upon environmental conditions, and to do so in a way not determined by their DNA alone. This sounds like what minds do, viz. track changes in the environment, make internal changes on that basis, and then modify behaviour. So why don’t they have minds? We see Fitch denying that eukaryotic cells have minds or qualia even though they modify their internal states in response to external conditions and do so in novel ways. He denies that they have minds because they don’t employ mental models.

Tye (1995) and Dretske (1995) both maintain that sensory states must be harnessed to conceptual states to yield qualitative states. For instance, Tye says ‘systems that altogether lack the capacity for beliefs and desires cannot undergo phenomenally conscious states’ (Tye, 1995, p. 144). He thinks this because he holds that qualitative sensory states lie at the interface between outputs from sensory modules and inputs to cognitive systems, but he doesn’t really justify or explain this. Dretske too maintains that experiences are types of representation ‘whose function it is to supply information to a cognitive system for calibration and use in the control and regulation of behaviour’ (Dretske, 1995, p. 19). For support Dretske refers the reader to Gareth Evans. Evans suggests that conscious perceptual experience arises only when sensory inputs are connected to behavioural dispositions (of perhaps some
phylogenetically ancient part of the brain’s motor system) and serve as input to thinking, concept application, and reasoning (Evans, 1982, p. 158). Evans says this just after discussing a case of blindsight. In this case a subject differentially responds to an external visual stimulus while reporting no conscious visual field. This leads Evans to say that if the sensory system only provided information to the motor system, there would be little interest in producing qualitative states – blindsight seems to have no need of them. In blindsight visual information is processed, but there is no conscious perceptual experience or qualia. This leads Evans (and perhaps Tye and Dretske) to maintain that it is only when properly connected to a conceptual system, perhaps conjoined with a motor system, that qualitative experiences arise. Only then will there be qualitative visual fields, auditory experiences, and so on.

Why don’t amoebae and paramecia have qualitative experiences? Fitch would say that although they have nano-intentionality, they don’t have internal mental models. Evans, Dretske and Tye would say that they don’t have sensory systems which feed into conceptual systems, even though all would agree that information is processed at the first level of intentionality. We think that there is a perfectly good sense in which these organisms don’t need qualitative experiences. The differential responses that they make to environmental changes do amount to a low-level processing of information, Fitch’s nano-intentionality, but the processes involved can all be explained at the level of chemistry or photo-chemistry alone, plus a bit of history of the organism in its local environment. There are no dedicated processors – no biologically selected structures that are recruited for the purpose of tracking information about the environment or the internal states of the organism as they respond to environmental changes. Thus, there are no more or less permanent structures that have the biological function of indicating to the organism changes of environment and self.20 Thus there are no internal structures that have the function of both informing a cognitive system and driving a motor system that serves the needs and desires of the organism. Having such internal, dedicated information-processing structures requires explanation that rises above the level of local chemical reactions. The organism’s use of the information generates a qualitative sensory experience.

It may seem like magic21 that a biological structure whose function is to deliver information from a sensory transducer to a conceptual, cognitive system should generate a qualitative visual, tactile, auditory, or gustatory experience. This leads someone like Searle or Block to say that qualia must arise from the brain’s neurochemistry. To us that’s no less magical. How do the chemicals do it? Along with Dretske and Tye, we maintain that the qualitative nature of the sensory experiences is explanatory. It is because of the way things look, taste, or feel, that we do what we do when we experience them. In blindsight cases an
individual may orient her hand vertically because a slit in front of her is oriented vertically, not because of how it looks phenomenally. When we do the same thing with a full visual field, by contrast, we orient our hand because of how the slit ahead looks. So the qualitative representational content of the sensory experiences plays an explanatory role in our purposive behaviour. Of course, how the light of qualitative states comes on is no less a mystery, even if this is how or why it comes on in conscious sensory systems.

Since we maintain a systems view, and since we think there aren’t actually any purely sensory mental systems, all actual minds must have concepts. Concepts have a semantic cognitive content or meaning, and this content rises to the level of non-derived semantic content. By ‘non-derived’ we mean that the content is not given by or dependent upon the mind of another. So, for example, the first minds that arose did not derive their mental contents from the mind of another. Internal structures had to acquire a content that was meaningful to the organism itself. This means that the system interprets the world on its own. This differentiates minds from contemporary computers that have only derived content, content supplied by the minds of the engineers who build them, content meaningful only to the engineers, but not to the computers themselves.22 This also means that the concepts in minds have to rise to the second level of intentionality.

The best explanation currently available of how this goes is due to Dretske (Dretske, 1988). Dretske explains how a structure or concept rises to the level of non-derived meaning in the context of solving the dreaded disjunction problem – the problem of how C could come to mean just F when either Fs or Gs cause tokenings of C. Dretske’s solution to the disjunction problem has at least two components. The first component is getting to the first level of intentionality, being able to indicate that something is F without also indicating that it is G. The symbol ‘C’ must start out with the ability to naturally mean Fs and not Gs. If all of its natural meaning is disjunctive, if it only indicates Fs or Gs, then a disjunctive content is the only semantic content it could acquire. The second component is the jump to semantic content. Even if Cs indicate Fs only, to acquire semantic content, a symbol must lose its guarantee of possessing what Grice called ‘natural meaning’. Smoke naturally indicates fire because in the wild the two always co-occur. Hence smoke has the ability to indicate the presence of fire. A symbol that has smoke as its semantic content needs to become locked to smoke and permit robust and even false tokening that means smoke without infecting its semantic content. It has to lose its ability to always naturally indicate fire. Dretske appeals to the explanatory relevance of the natural meaning. For Dretske, it is not just what causes Cs, but what Cs in turn cause, and why they cause what they cause, that is important in locking Cs to their content (F).
Let’s suppose that a squirrel needs to detect Fs (predators) to stay alive. If Fs cause Cs in the squirrel, then the tokening of Cs indicates Fs. Dretske claims that Cs come to have the content that something is an F when Cs come to have the function of indicating the presence of Fs. When will that be? Every predator is not just a predator; each one is also an animal (G), a physical object (H), a living being (I), and so on for many properties. Hence, tokens of C conjunctively will indicate all of these, not just Fs. Dretske’s answer is that when C’s indication of Fs (alone) explains the animal’s behaviour, then Cs acquire the semantic content that something is a predator (F). Hence, it is the intentionality of explanatory role that locks Cs to F, not to G or H or I.

For Dretske, intentional behaviour is a complex of a mental state’s causing a bodily movement. So when C causes some bodily movement, M – say, the animal’s movement into a hole – the animal’s movement consists of its trajectory into its hole. The animal’s behaviour is its causing that trajectory. The animal’s behaviour – running into its hole – consists of Cs causing M (C→M). There is no specific behaviour that is required to acquire an indicator function. Sometimes the animal slips into its hole (M1). Sometimes it freezes (M2). Sometimes it scurries away (M3). This account says that Cs become recruited to cause such movements because of what Cs indicate, what Cs naturally mean. The animal needs to keep track of Fs, and it needs to behave appropriately in the presence of Fs to avoid predation. Hence, the animal’s thought content becomes locked to predators when tokens of C explain some appropriate movement M because of C’s indication or natural meaning. Not until C’s natural meaning has an explanatory role does C mean that something is F in a non-derived way. The account does not require that the animal must be moving to be thinking. The relevant type of causing of M by C is required only to lock the content of C to Fs. Once so locked, the animal can think about predators even when not moving. What is more, the animal can now mistakenly think there is a predator when C is tokened by a shadow or rustling leaves. Hence this gives a teleological naturalized account of how concepts come into existence and have non-derived meaning. It also explains how a concept’s having the content it does can explain behaviour.

This completes our systems view of the mark of the mental. States of a mental system make a causal contribution to the core properties that make the system mental. For us, there is a cluster of core properties necessary and sufficient for something’s being a mental system. Hence, our property cluster list for mental systems is:

(1) Mental systems possess non-derived meaning.
(2) Mental systems possess states that rise to the second level of intentionality.
(3) Mental systems have states capable of misrepresenting (or representing non-actual things or states).
(4) Mental systems exhibit intentional behaviour explained via the representational content for the system (sensory states explain via their qualitative feel, and conceptual states explain via their semantic contents).23
4
Substance Dualism T. J. Mawson
Introduction

Substance dualism could not have a more venerable lineage, being traceable back through Descartes at least as far as Plato and Socrates. However, the respect with which people treat the view has declined to such an extent in the last few hundred years that it has recently been described as not so much a position to be argued against as a cliff over which to push one’s opponent. Certainly within contemporary educated circles, were one to venture the opinion that we have souls, one should expect to find oneself held to have propounded an extravagance only slightly less great than had one ventured the opinion that visiting extra-terrestrial life was in part responsible for the construction of the pyramids or that Elvis may be seen working in the local chip shop. The most favourable response one could realistically hope for would be the concession that perhaps, before the development of such things as computer science and neuroscience, such a whimsy might have been excusable, but even so, now souls must surely go the way of phlogiston and light-carrying ether: onto the intellectual scrap heap.1

Here I shall advance the claim that, despite the near universality of the assumption that the theory may be easily cast aside, within the structure of a hylomorphic substance/property metaphysic, the only reason to suppose that we do not have souls is that provided by Ockham’s razor, and even that reason is conditional upon an assumption, albeit an assumption that it is no more my intention to cast doubt upon here than it is my intention to cast doubt upon the substance/property metaphysical structure within which I shall be framing this debate.2 The assumption is that there is physical stuff. Given that there is physical stuff, it would indeed be simplest to suppose that the mind is ontologically reducible to that or to processes going on in that. But given that, as I shall also argue, there are some reasons to suppose that we do have souls – that is, that such a reduction cannot be accomplished – so one might find oneself, probably idly, reflecting on the fact that idealist substance monism would offer one all the advantages of simplicity offered by physicalist substance monism while in addition accommodating these reasons for supposing we have souls. This reflection would probably be idle as there is little
danger of idealist substance monism emerging, on balance, as the preferable theory of the mind for us: the assumption that there is physical stuff is held by most of us so deeply as to be near immovable by argument. (For similar reasons, I shall ignore neutral monism.) Thus it is that most of us will find ourselves weighing the reasons in favour of the claim that we have souls in the balance against the rational attractiveness of the simpler metaphysic that a physicalist substance monism offers. Where this balance ultimately settles is something on which opinions will divide. All I can hope to secure consensus on by what follows is that no substance/property metaphysic will give us everything we want, which in itself of course is a reason to re-examine the substance/property starting point. If one refrains from doing that however (as I shall), one must conclude that either several of our assumptions concerning the nature of persons (assumptions which are not held significantly less deeply than is our assumption that there is physical stuff) are in error or the world is more complex than physicalist substance monism allows, for we do have souls after all.
What Substance Dualism is

We have to start somewhere and time is pressing, so let us put onto the table without offering argument in its favour a certain commonsense realism about the physical world and our knowledge of it as gained through the natural sciences. First then, let us assume that there is physical stuff. This may be characterized as stuff of the sort that we suppose ourselves to encounter with our five senses in everyday life; that our folk science describes more or less adequately for our everyday purposes; and that our natural sciences describe with increasing accuracy as they develop. We may define the sort of stuff we have in mind by paradigm examples of things which are made of it: this desk, here; that star, there; and so on. In a previous century, we might have called this physical stuff simply ‘matter’, but now we know that matter may be converted into energy and vice versa, and we hear scientists speculate concerning quarks, hyper-dimensional strings, and so forth as making up the more commonplace objects that we encounter in everyday life. These are things which, while striking us as no doubt physical, do not strike us as in any obvious way material, so, instead of ‘matter’, we call this stuff ‘physical stuff’. We shall call the view that this physical stuff is all the stuff that there is ‘physicalist substance monism’ or ‘physicalism’ for short. Obviously one might hold that in addition to this sort of stuff, there is another type of stuff as well. We shall call this second view ‘substance dualism’.

Substance dualism is committed then to there being a type of stuff that resists full integration into the natural sciences. What we might call ‘partial
integration’ will need to be allowed for to take into account psycho-physical causal interactions, which – as we shall see – the most plausible substance dualist view will wish to maintain occur. Incapacity for full integration is not however by itself enough to characterize this second type of stuff adequately. There might turn out to be a sort of stuff that resisted the sort of integration that the substance dualist will wish to claim for his or her souls yet which it was obvious to commonsense was nevertheless purely physical; the unity of the natural sciences is a hope or perhaps something stronger, a regulative idea. But it could have turned out, or could even yet turn out, to be misguided. Similar problems would beset attempts to characterize this second type of stuff in terms of its failure to fit into the existing categories of the natural sciences. We will not wish now to draw up a list of what properties might be made mention of in a completed science, conscious as we are that some of the properties of physical stuff as quantum physics describes them are very different from the properties of it as we encounter it in everyday life or as would have been supposed to be primary and fundamental in the days of the corpuscularians; they are, it has been said with some understatement, spooky. But we can issue a promissory note here and that is sufficient for our current purposes: the second type of stuff in which the substance dualist believes is a type of stuff the nature of which will not be fully integrated into a completed science of objects such as our paradigm physical objects – tables, stars, et cetera – because it has properties that will not feature in that completed science and will not be reducible in any way to properties that do so feature.

The obvious contender for the fundamental property here is the capacity to have mental properties such as beliefs, desires, emotions, and so on. We might think of intentionality and/or first-person privilege as their hallmarks. The substance dualist will maintain that the essence of soul substance is that it is capable of thought in the broadest sense of the term, and this is a property which would, in a completed science, turn out not to be a property of stuff of the sort that makes up tables, stars and so forth and turn out not to be reducible to any such properties. Nevertheless, the substance dualist maintains, contra the eliminative materialist, thinking is definitely going on and, given the substance/property metaphysic within which this debate takes place, he or she validly concludes from all of this that it must thus be going on in a substance other than a physical one – soul substance, as we have been calling it. We might say then, more or less following Descartes as we do so, that, according to the substance dualist, it is of the essence of physical substance to have properties of the sort that our paradigm examples of physical objects – tables, stars and so on – have, which will not include thinking. (Descartes settled on spatial extension as the essential property of physical stuff; we have left this more open; perhaps spatial extension as we ordinarily understand it will turn out in a completed science to be a property of only some physical stuff
[e.g. tables], a property constructed out of more basic elements.) It is of the essence of soul substance that it is capable of thought, where thought is taken in the broadest of senses to include all mental happenings – beliefs, desires, sensations, emotions, acts of the will, and so on. Belief in the existence of these two types of substance is what is definitional of substance dualism.

There are various views within substance dualism about the relationship between this soul stuff and us as persons. Are we as persons simply our souls, or do we persons have souls as one part and bodies as another? Descartes entirely identified the person with his or her soul, and thus would have had no objection to our talking of disembodied souls – were they to continue on after bodily death – as fully the people they had been when earlier embodied. An alternative view is possible. Arguably it is that held by Aquinas. This is the view that persons are to be identified with the conjunction of body and soul and thus that where these two cease to be conjoined in the right sort of way – most obviously, perhaps, if they cease to be conjoined in any way at all as one of the conjuncts entirely ceases to exist (e.g. the body is vaporized by an exploding nuclear device) – what survives, if anything, is not the person in his or her entirety, but merely a part of the person. And it may be that a disembodied soul part (of a former person) would not, as a matter of causal fact, be able to do any thinking once separated in this way from the body part with which it had previously formed a person. One might go even further down this track and think that the destruction of the body part would inevitably cause the destruction of the previously associated soul part and thus the entirety of the person. But if any of these things are so, then, according to substance dualism, they are so as a matter of metaphysical contingency, not necessity. Substance dualism makes it metaphysically possible for the person (Descartes) or a part of the person (Aquinas) to survive the complete and final destruction of his or her body, but it does not entail that this actually ever happens. It makes it metaphysically possible that any disembodied soul would be able to have a mental life as rich in what we might call ‘pure’ mental properties (not, for example, suffering from toothache – a concept which spans the ontological gap between body and soul) as an embodied soul, but it does not entail that this actually ever happens.3

There is a third view, interior to substance dualism, although it has not in fact ever been propounded by anyone who believes we do have souls; this is the view which would identify us entirely with our bodily parts. The most plausible variant of this view would, it strikes me, have to give up on the idea that we are fundamentally persons and thus may be pictured as having something akin to animalism: we are our animal selves; at the moment, these animal selves happen to be in causal contact with souls and thus happen to be able to think (and through doing so become persons), but, were such souls destroyed, no element of our animal selves, and thus no element of us, would be destroyed;
we would just cease to be able to be persons. The soul is entirely inessential to what makes us us even though it is not inessential to what makes us persons. As it has in fact never been propounded by a substance dualist, despite its being a potential variant of the view, I shall ignore this view in what follows.

As well as providing the materials with which different views of the nature of the self and us-as-persons may be constructed, substance dualism also allows for various views on the causal commerce between souls and bodies. For various reasons which will become apparent as we progress, the most plausible variant of substance dualism is ‘interactionist substance dualism’ (the body causally affects the soul and vice versa). The alternative views are psychophysical parallelist substance dualism (the two have no causal interchange whatsoever); epiphenomenalist substance dualism (the body causally affects the soul, but not vice versa); and the view – again a ‘neglected alternative’ in that no-one actually holds it – that the soul causally affects the body, but not vice versa.
Reasons to Suppose Substance Dualism False

As mentioned in the introduction, as a theory about the nature of the mind, substance dualism is more ontologically extravagant than substance monism. Given that there is physical stuff, it would be simplest to suppose that the mind is somehow reducible to that stuff, or, more plausibly, to processes going on in certain bits of that stuff: brains, presumably; mind is to brain – mutatis mutandis – what digestion is to the digestive system. Given that the properties of physical stuff are by no means obvious to us – and recent scientific developments have indicated that some seem to be spookier than earlier generations of scientists would have found even imaginable – and given that, from what we already know, the brain is the most complex structure in the universe, it is not unreasonable for us to hold out hope that a completed science would be able to fill in the mutatis mutandis here. Of course it cannot do so yet, but these are early days. This is, it must be conceded, a reason to suppose substance dualism false.

What reasons might we have to suppose it false beyond its complexity relative to physicalism? I shall consider two areas from which it is often suggested additional reasons for supposing substance dualism false emerge. The first area centres on supposed problems in identifying souls, both ontologically and epistemically. What is it that makes one soul different from another and how can we ever know of souls that they are the same over time or know of souls other than our own that they exist at all? In short, my analysis here will be as follows: firstly, insofar as substance dualism faces problems that parallel those faced by physicalism (as it does in addressing the issue of what
makes one fundamental unit of substance different from another and how we know of such units that they are the same over time), these problems cannot be reasons to favour physicalism over substance dualism and so are not properly construed as objections to it, rather than perhaps as objections to the wider substance/property metaphysic within which this debate is taking place. Secondly, insofar as substance dualism faces problems not faced by physicalism (as it does in addressing the issue of how we can ever know that units of substance other than our own exist at all), the fact that it commits one to a certain sort of scepticism here is a reason to suppose it true, not false.

The second area of concern centres on supposed problems in explaining the causal interaction between the two sorts of substance the substance dualist posits. How can mind and body act on one another? Does not any answer to this question run into insuperable problems from what we already know of physics, for example concerning the causal closure of the physical world and the conservation of energy? In short, my analysis here will be that the interactionist substance dualist is not beholden to answer the question of how mind and body act on one another, rather than merely assert that they do, as it is not a commitment of interactionist substance dualism that this question will be answerable by us. Positing that there is an interaction of this kind does not in fact require one to contradict things which we already know of physics, although there is potential for physics (were it to move back into a deterministic mode) to put pressure on the claim that there is in fact interaction of the sort posited. At the moment then, there is no reason from science to suppose interactionist substance dualism false. Let us go into these objections in more detail.
Problems of identification

We may sensibly ask the substance dualist what it is that makes one soul distinct from another and predict that he or she will have little informative to say by way of reply. Obviously, he or she may maintain that it is extremely unlikely that two souls will have all the same properties as one another, so – he or she may point out – any two souls will in fact differ in this fashion. One will be thinking about strawberries, another, about cream, and so on. But exact qualitative identity between two souls is not a metaphysical impossibility generated by the nature of souls per se, and even if it were somehow impossible for two souls to have exactly the same properties, this impossibility would not ground the numerical difference between two souls, but rather presuppose it. In any case, it looks as if the substance dualist should agree that there is nothing in the nature of souls per se that prevents there being two qualitatively identical yet numerically distinct souls, for it seems that there’s nothing in the nature of
souls per se that prevents there being an exact duplicate of this universe. In that universe there would consequently be a person thinking qualitatively identical thoughts to those that you are currently thinking. That person would, nevertheless, not be you; it would be your duplicate. So, the substance dualist should say that it is not fundamentally in virtue of their different properties that different souls are different. Rather, he or she should admit that souls might in principle differ solo numero. (They have what is sometimes called ‘thisness’.) Need he or she be embarrassed that he or she can say no more than this? I do not think so. Presumably the person who believes in units of physical substance will wish to maintain that at least with regards to some of these there is nothing in their nature that prevents their differing solo numero too. The classic thought experiment on this topic involves imagining a universe composed simply of two chemically pure iron spheres, each of the same diameter, hanging in otherwise unoccupied space a certain distance away from one another; these spheres would be qualitatively identical to one another, yet they would be numerically distinct. Can the physicalist substance monist say more about how these two spheres manage to retain ontological individuality than that they do, that they differ solo numero or have thisness? No. So the substance dualist need not feel embarrassed about being able to say no more than this about how two souls might retain their ontological distinctness even were they to have qualitative identity.

As this discussion might have already indicated, this type of issue – and in fact the one we are about to go on to discuss – is an artefact of believing in substance as such (i.e. of believing in things to which the principle of the identity of indiscernibles does not apply of necessity). As such, this type of issue and the one we are about to go on to discuss cannot be a reason to prefer any theory that claims that substances exist over any other that claims that they do. Thus it cannot be a reason to prefer substance dualism over physicalism.

Belief in substance raises certain problems at the epistemic level. Of substance dualism, it is sometimes said, souls might be swapping bodies every few minutes but each inheriting the psychological properties of the soul that had just vacated the body into which the new one was now moving. Were this to be the case, no one would be able to detect these changes, yet people (Descartes) or significant parts of people (Aquinas) would constantly be swapping bodies. Furthermore, we seem to face on substance dualism a peculiarly intractable variant of the problem of other minds: how do you know, as you encounter another person through the medium of the physical world, that he or she is a person at all, that he or she has a soul in the right sort of causal connection with the body which you observe directly?

Again we may observe that the first problem affects those who believe in substance per se and thus in substance of the physical sort; thus, whatever it
is a reason to believe, it cannot be a reason to believe in physicalism over substance dualism. How do you know that the physical stuff underlying the properties of the desk in front of you has not been swapped out by some malign demon in the last few moments, leaving all the properties ‘behind’ in the sense of their being inherited by the physical substance which this demon instantaneously moved in to replace that which he was removing? So this worry generalizes to physical substance. But, having said that, it’s not too great a worry. The physicalist substance monist’s response to this sort of worry seems to me entirely adequate. It is indeed metaphysically possible for the substance of the desk to be being changed in the imperceptible way suggested, or, if this is not metaphysically possible, then that is for reasons exterior to the nature of physical substance per se (e.g. that there can be no spirits of the right sort). But unless we have positive reasons for supposing that such swaps are happening, as it would be simplest to suppose that they are not, so we should suppose that they are not. The same move, then, that both the physicalist substance monist and the substance dualist make with respect to physical substance, the substance dualist makes with respect to soul substance. If it works in one area, what reason is there to suppose it will not work in the other? None.

The problem of other minds is often thought to particularly affect – and thus speak against the truth of – substance dualism; were substance dualism true, it is suggested, there would be peculiar difficulties in our knowing that other people exist. I shall deploy a two-pronged approach to meeting this charge: first, I aim to show, similarly to previous objections, that, if this is a problem, it is a problem that is faced, at least to a greater extent than is often appreciated, by physicalism too. Secondly, as it must be admitted that, pace point one, it is faced to a greater extent by substance dualism, so I shall aim to show how this ‘extra’ problem of other minds is not, in fact, one it is implausible to suggest we face. Were substance dualism true, there would indeed be an extra difficulty in knowing that others have minds, but that is not a reason to suppose that substance dualism is false; indeed it is a reason to suppose it true, for there is much plausibility in suggesting that we do face this extra epistemic hurdle in coming to know that others have minds.

First, then, though it would take too much time to argue it here, the most plausible physicalism will identify the having of a mind with physical processes that are recondite in the extreme. For example, a crude behaviourism, whereby being angry is simply behaving in a certain fashion, which may be specified entirely adequately in terms of movements of the body, movements that are sufficiently macroscopic for us to be able to identify them without any great difficulty, using our unassisted five senses, will not prove adequate to the task. Rather, some neurological happenings of a certain type will need to be called upon in the analysis of anger, but as soon as the physicalist pushes
the happenings which are mind-happenings interior to the skull, then, unless we meet people who are themselves in fMRI scanners of sufficient sophistication to reveal to us these happenings, we never ourselves see the happenings that are, on the physicalist account, being angry, or what have you. On physicalism no less than substance dualism, we never observe the having of minds other than our own.

How then do we know, if physicalism is true, that others have minds? To cut a long story short, the answer to this question is that they tell us that they do, and we ordinarily have no reason to doubt them. Someone says that he or she is suffering, let us say, from anger. If physicalism is true, they will be speaking truly if a certain happening is occurring in their brain; but we do not see this happening and indeed at the current stage of science might not know that it was their feeling of anger, even if we did see it. But, unless we have reason to doubt them (e.g. they are performing in a play or some such), we are surely rational, whatever the theory of mind to which we subscribe, in believing that they are angry simply on the basis of their saying that they are. Without taking this sort of epistemic route into knowledge of others’ minds, it would be impossible for the physicalist substance monist to construct the theories by which he identifies to his satisfaction the having of anger with the brain happening that he could then, in principle, find to be universally correlated with the tendency to report it. (This is sometimes called the ‘privilege’ that must be given to first-person reports of the mental.)

But if that is so, then this same route is open to the substance dualist. It is true that on the substance dualist view, the actual feeling of anger is something happening in a substance even more recondite than the inner parts of the brain. It is happening in a soul and thus in something that could never be revealed by investigation into the physical world however advanced fMRI scanners became. But the same route which the physicalist substance monist takes in everyday life, before hand-held fMRI scanners and the like become commonplace (and which he or she will have to hold as epistemically authoritative even were they to do so, to accommodate the issue of privilege), is open to the substance dualist. This is how the problem of other minds is to be overcome whatever one’s theory of mind: by taking claims to have minds as a prima facie reason to believe minds are had.

However, moving on to the second point, it seems as if the physicalist substance monist may argue that whatever problems he or she faces in coming to knowledge of others’ minds, and however these are to be overcome, the substance dualist must face an additional problem unless he or she posits some direct and very reliable telepathic contact between minds as an alternative source of knowledge, which positing would itself be most implausible. This is a true point. But does it speak against or really in favour of substance dualism?
Were physicalism true, then, after science has been completed and presuming it has allowed for hand-held fMRI scanners or some such of sufficient accuracy – let us call them ‘brainoscopes’ – one could perhaps confidently bypass first-person reports as a source of knowledge of others’ minds; one could, instead of speaking to a person, directly apply one’s brainoscope to someone’s skull and, on the basis of its findings, confidently report things like, ‘No need to speak; I see from my brainoscope that you are angry at my having applied it to your head without first asking your permission’. These reports could be unfailingly accurate. (Note: not all physicalists believe that this will prove possible, but we are considering the views of one who does in order to point out the contrast with substance dualism and the ‘extra’ problem of other minds that it faces.)

Let us consider a physicalist substance monist who contends that, after science has been completed, one will be in a position to know that a certain brain state or some such may be identified with anger being felt at having had a brainoscope applied without having been asked for permission and, with the technology of the brainoscope properly applied, one will know that this brain state is being had, so, one will know that the person is angry in this way. For such a physicalist substance monist, there will then be no ‘gap’ into which a sceptical doubt may creep.

It might appear that nothing similar could happen on substance dualism. But, in fact, the substance dualist may hold that it could. If substance dualism is right, then in a completed science this technology might well be possible. The substance dualist of course would not make the extra step of identifying the brain state or what have you that is revealed by the brainoscope with the mental state, but he or she can acknowledge that there might well turn out to be a perfect correlation of the sort the physicalist we are considering anticipates our finding, and thus the substance dualist might admit that the sort of brainoscope that is capable of bypassing first-person reports in the manner described could well turn out to be possible.

But there is, nevertheless, it must be conceded, a gap for the substance dualist here relative to his physicalist substance monist counterpart, a gap generating an ‘extra’ problem of other minds. The extra problem for the substance dualist is generated because it will always remain possible that the brainoscope is in error, even once the science is completed and the brainoscope working (for all we know) properly, for, according to the substance dualist, the brain state or what have you that the completed science finds universally to be conjoined with a thought of a certain kind (and we are supposing that this is what it will find) and that the brainoscope correctly reports to be present in this case is not to be identified with the thought of that kind. According to the substance dualist, one could know everything about the physical world, yet not know without the possibility of error what mental state a person was in (or indeed even if they were a person at all), for there is – according to substance dualism – an ontological gap between the
physical world and the mental, a gap which may be ‘bridged’ by causation, but – causation not being a conceptual relationship – any particular bridge across it may or may not hold, and thus any particular judgement relying on it may be in error. But now this extra problem of other minds for the substance dualist looks more like an asset than a liability, for, as we shall see when looking at ‘Mary-type’ arguments for substance dualism, it is apparently possible that someone might know everything about the physical world yet not know something about the mental, an appearance which has to be ruled out as deceptive by the physicalist substance monist we are considering.
Problems of interaction

The version of substance dualism on which we are focusing suggests that there is two-way causal exchange between physical substance and soul substance. This is often held to generate problems for the view. First, it is suggested that it runs contrary to a finding of physics. In particular, it looks as if the principle that matter/energy is conserved across a closed system such as the physical universe must be violated if substance dualism of the interactionist sort is true. Second, it is suggested that there is something problematic in general in any case – regardless of whatever physics might be telling us – about non-physical substances causing changes in physical ones and vice versa. We know, a priori, that such is an impossibility.4

I do not find either of these two lines of thought tempting. Let us suppose for a moment, what we shall later see is in any case false, that the interactionist substance dualist is committed to laws of physics being violated. It does not seem that an objection arising from this commitment would be any more than a restatement of the objection from the relative complexity of substance dualism over physicalism. Obviously it would be simpler were the universe closed and the laws of physics not violated, and that is indeed, we have already conceded, a reason to suppose that it is so. We should not ‘double count’ this objection to substance dualism.

In fact though, the interactionist substance dualist is not committed to his or her souls’ violating natural laws. With the advance of physics beyond determinism, another possibility arises. The substance dualist may maintain that happenings in the brain which are caused directly by the soul are caused in ways compatible with the preceding brain state and the laws of nature, but – these two not being such as to necessitate what state emerges from them – they are caused to be the particular way that they are by the soul. That the brain be in state q, rather than state r, after it has previously been in state p is something which was always allowed for by the preceding physical states (given indeterminacy), but, in fact, the substance dualist may maintain, that it ended up in state q was caused by the relevant
person’s soul. It is no bar to this theory to point out, if such a fact can be pointed out (and it is doubtful that it can be), that any individual sub-microscopic event where such quantum indeterminacy plausibly reigns seems incapable of producing cascade effects up to the macroscopic level which result in arms being moved and so forth. For presumably some brain state leads to macroscopic happenings such as arms being moved, and this is made up at the sub-microscopic level of many such quantum happenings. So, the substance dualist may maintain that the soul’s influence on the brain, in causing it, for example, to raise one’s arm, occurs in a number of disparate tiny locations, any one of which is perhaps not sufficient, or perhaps even necessary, for the event to occur, but which then jointly cause one’s arm to rise. Those quantum happenings in the brain which are similar in the properties they reveal to the natural sciences as those happening in an ‘inanimate’ object, where they are indeed uncaused, are in fact, when they happen in the animate object that is the brain, caused by the soul of the relevant person. The universe is not indeed causally closed, but no laws of nature need be violated.

So, in short, even were fundamental physics to return to a deterministic mode, the interactionist substance dualist could maintain that souls are able to influence physical stuff (and vice versa), although by doing so he or she would be positing that the laws of physics are violated – little bits of energy come into and go out of existence. However, within the current indeterministic paradigm, no such violations are required as a part of the substance dualist’s account of this interaction. The substance dualist may maintain that the soul operates in the causal ‘gaps’, otherwise filled by randomness, that indeterminism opens up. And of course even were the dominant paradigm of interpretation of the laws of nature within the community of physicists to revert to determinism, it would still be just a paradigm of interpretation; there would be no necessity that the substance dualist follow it.

Of course, such suggestions on the part of the substance dualist presuppose that in general a spiritual substance may cause a change in a physical substance and vice versa, and someone might hold as a matter of principle that the only possible relata of causation are physical events, so such a suggestion may be ruled out in advance. But why adopt such a principle? It may be rejected by the substance dualist as mere prejudice if argued for a priori (although of course if argued for validly a priori, the substance dualist will need to find one or more premises to which to object) and the substance dualist will insist that such a principle cannot be discovered a posteriori, for the actual universe is one which has souls operative in it, so does not follow it. Descartes himself said all that, it strikes me, needs to be said on this issue in a letter to one of his objectors:

These questions presuppose among other things an explanation of the union between the soul and the body, which I have not yet dealt with at all.
But I will say, for your benefit at least, that the whole problem contained in such questions arises simply from a supposition that is false and cannot in any way be proved, namely that, if the soul and the body are two substances whose nature is different, this prevents them from being able to act on each other. (Descartes, in Cottingham, vol. II, 1994, p. 275)5

So, in summary: the reasons for supposing interactionist substance dualism false and physicalism true reduce to the simplicity of the latter over the former. Simplicity is a reason to prefer one theory over the other, but so is explanatory adequacy, and it is far from clear that physicalism will prove adequate, as we shall now see.
Reasons to Believe Substance Dualism True

Various arguments in favour of substance dualism have been put forward over the last two and a half thousand years, and it would be impossible to provide an adequate treatment of all of them in anything smaller than a substantive book. That being so, in the space that remains for me, I wish to focus on just three areas where, it strikes me, the substance dualist can plausibly contend that substance dualism does better than physicalism in accommodating various ‘commonsense intuitions’ we have about ourselves. Of course commonsense intuitions are hardly the basis for conclusive arguments in favour of substance dualism. After all, if our commonsense intuitions about such issues were not sometimes wrong, there would hardly be any point in the discipline of metaphysics. I conclude then by discussing what weight we may in general give to this type of argument relative to the weight we may give to the virtue of simplicity which, it has been conceded, physicalism has over substance dualism. The three areas are personal identity, freedom, and consciousness. I shall consider them in order.
Personal identity
What is it that makes a person at a later time, t+1, the same person as existed at an earlier time, t? Substance dualism has a simple answer: it is fundamentally the continuity of the same soul (or, for Aquinas perhaps, the same soul and the same body), and souls themselves do not continue in virtue of anything more basic continuing (bodies presumably do). For the physicalist substance monist, the issue is more complicated: there are three options. The person may be identified with a certain set of properties (usually psychological properties are chosen); with a part of the physical substance which makes up his or her
body (usually the brain is chosen); or with a combination of these (e.g. psychological properties p going on in brain b). However, none of these options seems to offer a satisfactory theory of personal identity. There are problems peculiar to each, but a general defect may be observed in play in their dealing with almost all the thought experiments that are used, it is supposed, to illuminate this issue. So, for example, one is asked to imagine a brain bisection, after which the two resultant hemispheres are transplanted into separate clones of the original body where they take up more or less functional residence. To add weight to the situation, perhaps one of the resultant people is then tortured to death over the next five minutes while the other is given a gin sling to enjoy. Which of these two resultant people, if either, is the person who originally underwent the brain bisection? one is asked. Then the details of the experiment are altered; perhaps one of the two resultant people gets more psychological continuity and the other more of the physical substance of the original brain. What then do we say? For some proportions of psychological continuity and continuity of physical substance, the physicalist must say that it either becomes ontologically indeterminate whether a resultant person is the same as the original, or it remains determinate, yet he or she does not know whether he or she is the same or instead a new person inheriting some of the original's psychology and/or brain matter. But our commonsense intuitions about personal identity do not allow for indeterminacy, as shown most markedly when one thinks of these possibilities from the first-person perspective of someone about to undergo the relevant experiment: 'Either I will survive or I won't; it cannot be ontologically indeterminate in a few minutes' time whether I'm there or not'. But nor is there anything unknown left for the physicalist to hang a determinate fact of personal identity from, something which again we might perhaps see most sharply by imagining the first-person perspective: 'If I can know where all the properties are going and know where all the physical substance is going, yet still not know where I am going, then I cannot be identified with any combination of properties or the physical substance; I must be something else, and the only something else left (once we've swept properties and physical substance off the table) is soul substance'. This is not conclusive of course, for one could be – unbeknownst to oneself – identical to some indivisible property or indivisible bit of physical stuff, and thus even if one knew in advance of this property/bit of stuff where it was going to go, one would not know that in knowing this one was knowing where one was oneself going. However, each of these claims – that one is to be identified with an indivisible property or an indivisible bit of physical stuff – would itself be most implausible. Properties and sets of properties (whether properties of physical substance or soul substance) are capable of multiple instantiation, and the sorts of sets with which people might most plausibly be identified (that go into
psychological continuity accounts of personal identity) are themselves capable of degrees of survival. But people are not the sorts of things which seem to commonsense to be capable of multiple instantiation, or 'division' as the most discussed variant of multiple instantiation is sometimes called. One would not think, 'Maybe, in five minutes' time, I'll be two people, one being quickly tortured to death and one enjoying a pleasant drink, so that in ten minutes' time I'll be both alive and dead'. And, as already observed, people are not the sorts of things which seem capable of survival by degrees. So it is implausible to suggest that persons are to be identified with any set of properties or any one indivisible property. Physical substance is not capable of multiple instantiation, but the set of bits of physical substance with which people might most plausibly be identified – brains – are capable of division and, as already observed, people are not capable of division. Of course there might be some genuinely indivisible bit of physical substance within the brain – an 'atom' in the original Greek sense – which enables one to side-step this issue if one identifies oneself with that, but to posit such a thing and to identify oneself with it would be most implausible. Substance dualism, then, gives the best theory of personal identity by reference to our commonsense intuitions about persons as not being capable of multiple instantiation/division and as not being capable of survival to a degree.6
Freedom
In everyday life, we often suppose ourselves to have been able to do something different from whatever it is that we ended up doing, even had everything else in the physical universe up to the moment of our choice remained exactly the same. Of course, if everything else had really remained exactly the same, then we might wonder why we would ever have behaved differently from however it is we ended up behaving, but some of our choices are, we believe, whimsical in the following way. I am offered the choice between tea and coffee; I have no preference between the two but a strong preference to have one rather than remain thirsty and, not wishing to be like Buridan's ass, I thus say, on a whim, 'Tea, please'. In reflecting back moments later, I believe of myself that I could have said 'Coffee, please' instead. In a situation such as this, although perhaps most vividly in situations where things of great moral moment turn on what we end up doing, we suppose that the fact that we end up doing whatever it is we do end up doing is – to some extent at least – down to us. I say 'to some extent at least' as it must be conceded that we always operate within a finite range of options and sometimes this finitude exculpates us from at least some, possibly all, responsibility ('I agree that what I did was bad, but look at the alternatives I faced; each was worse'), but we ordinarily suppose that this finite
range is greater than one – we do genuinely have options – and, when we have options and end up realizing one rather than another as a result of the right sort of conscious choice on our part, we suppose that in that way the causal and moral buck stops with us. We are in this way free agents, responsible to a greater or lesser extent for the choices we make and thus for the shape of our lives and the lives of those we affect. Substance dualism – of the interactionist sort – gives a straightforward and simple account of how all of this gets to be so. (That is the long-promised reason why interactionist substance dualism, rather than, for example, psychophysical parallelist or epiphenomenalist substance dualism, is the most plausible.) According to interactionist substance dualism, the soul, while of course being affected by things going on in the physical world (e.g. in coming to the beliefs that it has about that world), is not always necessitated to do what it does by those effects; sometimes it initiates causal chains, which then impinge upon the physical world when it could yet have initiated different causal chains and thus impinged differently, had it chosen to do so. When my soul does so, that is me (Descartes) or a part of me (Aquinas) making a choice. The commonsense view of ourselves as articulated in the previous paragraph finds its metaphysical grounding.7 Physicalism cannot ground this commonsense view. On physicalism, either what I ended up doing was entirely causally necessitated by preceding states extending back through time to the big bang or there was a certain amount of randomness (uncaused-ness) involved in the causal chain that ended up with my doing whatever it was I did. In neither case would the causal – and, one might hence think, moral – buck stop with me; either the happening was caused by factors beyond my control (for they go back to the big bang, which is certainly beyond my control); or it was random; or it was some mixture. Various accounts of how the moral buck might stop earlier than the causal buck and in the right spot – me – have of course been advanced by physicalist substance monists keen to accommodate moral responsibility to their worldview. So, for example, one might say that if my body does what I want it to do as a result of me wanting it to do that thing, then that's me being morally responsible for the doing of that thing, and the fact that my wanting it to do that thing rather than something else was itself caused by factors beyond my control does not detract from that. This account is open to easy counter-arguments, but there are of course much more sophisticated accounts. However, they all suffer from the common feature that whatever psychological states are posited as sufficient to lead to the agent being morally responsible, it seems possible to imagine a skilled enough hypnotist inducing those states in a person, and yet we would not hold such a victim of hypnosis to any extent responsible for the actions that then flowed from these states. In cases where we can identify causal responsibility, moral responsibility, we think, falls straight through to it; we are
strongly committed to the causal and moral buck stopping in the same place. Substance dualism of the interactionist sort is the view that accommodates this strong commitment in offering a 'third way' between causal necessitation of the physical sort stretching back beyond our births and randomness: my actions are caused by me (Descartes) or my mental part (Aquinas). Substance dualism, then, gives the best theory of freedom by reference to our commonsense intuitions about ourselves as being the initiators of and thus morally responsible for our actions.8
Consciousness
The classic thought experiment here concerns someone called Mary, who, we are asked to imagine, has been brought up in an entirely black and white room. In this room she has access to black and white science textbooks, and science is now completed. She thus learns everything there is to know about the physical properties of colour and indeed, let us say, about the physical properties of brains too. She then leaves the room and goes into the outside world. For the first time, she herself sees a red apple. Is it not plausible to suppose of her that she thereby learns something new: what red looks like? We may call this new fact a fact about red qualia: what it is like to see red. From the fact that Mary – ex hypothesi – knew everything about the physical qualities of the colour red and the brain prior to leaving her room yet did not know about this 'qualiatative', as we may call it, property, we can conclude that this qualiatative property is not a physical property of red or the brain; of what is it a property then? The substance dualist has a ready answer: of red as it is experienced by the soul. There have been various physicalist responses to Mary-type thought experiments; they tend to deny the fact that Mary comes to know about a qualiatative property; rather, they tend to assert, she comes to have an ability which she did not previously have, the ability to recognize red objects in a new way.9 This, however, seems wrong-headed to me, for Mary plausibly will not gain the ability to recognize red objects simply by getting out of the room and seeing a red apple for the first time. She will only gain that ability once someone provides her with information in the following manner: 'That apple you're looking at, Mary, it's red'. In hearing someone say that, she will plausibly gain a new ability to recognize red objects thereafter, but she had already come to know what red objects looked like prior to hearing someone say that, just by looking at the red apple. She wouldn't say back to the person who'd just said this to her, 'Now, for the first time, I know what red is like'; she'd say something like this: 'Now you've told me that that apple is red, I realize that I already knew – just by looking at it – what it was that red was like, rather than what it
was that blue was like, and so on. But although I didn’t know that it was red, the qualiatative nature of which I knew about by looking at the apple prior to your telling me, it was red that I had discovered something new about simply by looking at the apple’. 10 Of course the analysis presented here has been, perforce, terribly brief (cannot property dualism deal with the issue of consciousness to which we have recently adverted?), but, even so, it appears that the ‘facts’ of personal identity, freedom of choice and consciousness, as they present themselves to commonsense are, when taken together, easily accommodated by substance dualism and fail to be accommodated by physicalism. Either these facts are not facts at all – commonsense is wrong – or physicalism is wrong.
Conclusion
If we suppose a substance/property structure to our metaphysic of the mind and we suppose that there is physical stuff (two suppositions I have not called into question in anything but the most oblique way here), then reasons of ontological economy alone would suggest that we should believe that we do not have souls, that our mental life could in principle be explained in terms of the physical. Such a world would be simpler – to the tune of one whole class of substance – than the world posited by substance dualists. Physicalist substance monism has simplicity on its side when compared with substance dualism, but, as we have seen, it does not seem to have anything else; there are no other reasons for thinking substance dualism false. While simplicity is a virtue, so is explanatory adequacy, and there are things that we have reason to suppose physicalism cannot explain. That current natural science cannot explain something is of course in itself very slight reason to suppose that future science will not be able to explain it, but there are at least three areas where, it has been argued, we are able to detect difficulties in principle. First, the 'facts' of personal identity as they are presented to commonsense seem to suggest that we as persons are (Descartes) or are constituted in part by (Aquinas) units of substance which are indivisible over time, and souls are the best candidate for such. Secondly, the 'facts' of freedom of choice as they are presented to commonsense – roughly that the causal and moral responsibility bucks stop in the same place – can be accommodated by substance dualism, but not by physicalism. And finally, what we have called the 'qualiatative' facts of consciousness, what it is like to see things like red, are not reducible to facts about the physical properties of colours (or indeed colours and brains), something which again can be accommodated by substance dualism but not physicalism. Our discussion of all these points has perforce been very brief, but I hope sufficient to suggest reasons for this analysis. If so, one might sum up
our findings thus: there's one argument against substance dualism (it's more complex) and three in favour (it better explains personal identity, freedom of choice, and consciousness). If that is so, neither substance dualism nor physicalist substance monism will give us everything we want, and we shall naturally turn to considering how we should weigh simplicity against these other considerations when deciding what we have, on balance, most reason to believe. Moore has taught us that we may take any valid argument in either of two directions, as articulating a reason to suppose its conclusion true or a reason to suppose one or more of its premises false, and that the direction in which it is most reasonable to take a given argument will depend on whether the premises are jointly more obviously true than the conclusion is obviously false. So we may give the considerations presented here some direction by finally asking ourselves this question: Knowing now that you can only believe one, which of the following seems more obviously right to you?
• We are persons in more or less the same way that commonsense suggests; we have freedom of the sort supposed in everyday life; and colours – and indeed mental happenings in general – have qualiatative properties.
• The world is as simple a place as physicalism suggests.11
5
Physicalism Barbara Montero
Physicalism, as some see it, takes the fun out of life. In their eyes, if physicalism is true, the pleasure of a great bottle of wine, the euphoria of that first kiss, the thrill of a hole in one and so much more are nothing but the workings of the brain. At the same time, physicalism is probably the most widely held general philosophical theory of the nature of the world, and many of those philosophers who think that physicalism takes the fun out of life still defend it tooth and nail. But what exactly is the theory of physicalism? Here I hope to make some headway towards understanding physicalism, the theory that many philosophers both love and hate. In particular, I aim to arrive at an understanding of the thesis of physicalism that captures its essence and at the same time can be used to ground the contemporary debate over whether it is true. Physicalism is a view about the ultimate nature of the world along the lines of Thales's view that all is water or Democritus' view that all is atoms in the void. But rather than pronouncing all is water or all is atoms in a void, physicalism pronounces that all is physical, or as it is usually phrased, 'everything is physical'. Of course, this isn't very informative unless you know what it is to be physical. Indeed, each term – 'everything', 'is', and 'physical' – is open to various interpretations. In what follows, I examine each of these components in turn.
The Domain of Physicalism
Ontology is the very general study of reality. And physicalism is typically thought of as an ontological theory: it tells us that everything is physical. But 'everything' is not always taken to mean literally everything. But if it doesn't, just how much of reality is supposed to be captured by the physicalist's net? (Here and throughout I use the term 'physical' broadly to cover not only physical entities and properties at the fundamental level, but also physical phenomena, such as rocks, trees and chairs.) How one restricts the scope of physicalism depends on one's purposes. And since the central physicalist target is typically the mental, it is not unusual for physicalists and their foes to simply focus on the question of whether the
mental is physical. Indeed, some may even simply refer to the theory that the mental is physical as 'physicalism'. It may be that this is simply intended as shorthand for the view that everything (or some significant subset of everything) is physical. Yet this shorthand can be confusing when a more encompassing type of physicalism is invoked to justify physicalism with respect to the mental, such as when physicalists argue that the mental is very likely to be physical because everything else is physical. Obviously, here the scope of 'everything else' is not just the mental. So what, then, is supposed to count as 'everything else'? Some understand physicalism in the broadest sense possible. It is a theory about everything whatsoever, a theory that says that all reality is physical. On this inclusive conception, physicalism implies not only that people, animals, rocks, trees, and all other concrete objects are physical, but also that abstract objects – which on some accounts include numbers, properties, classes, relations, and propositions – are all physical. Even God, if she exists, would need to be deemed physical given the truth of this conception of physicalism. Others think that physicalism ought to have a more restricted scope. For example, some understand it as a theory about only the concrete world, that is, roughly about phenomena in space or time. Physicalism, then, is true if and only if all phenomena in space or time are physical. This understanding of physicalism ensures that the status of the mental is relevant to the truth of physicalism, since, whatever else they are, mental processes do seem to occur over time. However, the existence of abstract numbers (regardless of what they are like in other respects) would not refute such a physicalism. Jeffrey Poland can be seen as defending this conception of the scope of physicalism (if we assume, as many do, that the abstract world has no causal influence on us) when he claims that 'physicalists are (or should be) concerned with what exists in nature – that is, with what can be spatially and temporally related to us, with that with which we can interact and by which we can be influenced, and with that of which we and the things around us are made' (Poland, 2001, p. 228). A related approach to defining the scope of physicalism is to think of physicalism as a theory about the empirical world, that is, about the phenomena that we come to know via our senses, or to put it more carefully, about phenomena that are such that our knowledge of them must be justified via our sense experience. If, as is often thought, our senses do not justify knowledge of abstracta, this restriction allows for the existence of non-physical abstract entities to be consistent with physicalism. However, if abstracta are known via our senses, then the truth of physicalism, on this interpretation, implies that abstracta are physical. A more encompassing view, such as Andrew Melnyk's, takes physicalism to be a theory about the contingent and/or causal world (Melnyk, 2003). If abstracta are not causal or if they exist necessarily, this restriction comes close
to the previous restrictions. However, on this view, the truth of physicalism implies that anything that has causal powers is physical. So, for example, if abstract numbers have causal powers, then, on this version of physicalism, numbers would need to be physical in order for physicalism to be true. Moreover, on this understanding of physicalism, even something that has no causal effect on us, as long as it is contingent, would need to be physical if physicalism were true. Should physicalism have a restricted scope? If we were to restrict physicalism to only the concrete world we would not be able to make sense of what might be called 'physicalist structuralism'. Physicalist structuralists, such as James Ladyman, hold that the fundamental properties of physics are purely structural, revealing only the relationships between things and nothing of the things themselves (Ladyman et al., 2007). Thus, the fundamental physical world on his view is entirely abstract. Moreover, Ladyman holds that since the fundamental physical world determines everything, there is nothing else besides structure, or as the title of his book declares, 'everything must go'. If we were to hold that physicalism is a theory of only the concrete world, Ladyman's view would be physicalistic in only a trivial sense. Melnyk's restriction, however, accommodates the physicalist structuralist (assuming that the fundamental properties of physics are either contingent or causal). His restriction also accounts for the intuition that if our world had undetectable contingently existing spirits cohabitating happily among themselves, physicalism would be false. But what would the status of physicalism be if there were a necessarily existing God who had no causal influence on us or the world as we know it? On Melnyk's view, the existence of such a God is compatible with physicalism. But it is not clear that it should be. Physicalists are drawn to restricted versions of physicalism as they are easier to defend; Occam's razor notwithstanding, it is very difficult to argue for the view that, say, no undetectable spirits exist. Nevertheless, it seems to me that an argument for physicalism in a non-restricted sense would still count as successful even if it does not rule out impossible-to-rule-out situations, as no theories can do that. In all theories outside philosophy, and most theories in philosophy, save for in the domain of skepticism, one need not present a theory as applicable to only the knowable world. So I think physicalists as well need not say that the scope of physicalism is only that of which we can in principle have knowledge. If it is false about that, it is still good enough. Of course, restricting the scope of physicalism so that the existence of abstracta, no matter what their nature, could not refute physicalism is a different issue. It seems that physicalists who take this route have the sense that abstracta are not a threat to physicalism. However, I think that a better way to accommodate this intuition is, as I shall describe in section three, to merely count them as
physical. I propose, then, that we understand 'everything' in the most inclusive way possible:
Physicalism: Everything, whatsoever, is physical.
The relation between mountains and molecules
When the physicalist claims everything is physical, what is being said about everything? Typically physicalists deem something physical if its existence depends in the right way on basic or fundamental physical properties.1 And typically the fundamental physical properties they have in mind are the microphysical properties countenanced by physics, such as the property of having a charge, of being a quark, and so forth. In the third section I shall question this conception of the fundamental physical properties. Here, however, I want to ask, what exactly is the relation between the fundamental physical properties and higher-level properties, such as mental properties, which is thought to make the higher-level properties count as physical? In other words, when physicalists say that everything is physical, just what is meant by 'is'? Some hold that the relation between higher-level physical properties and fundamental physical properties is that of explanation (Jackson, 2006; Witmer, 2006). On this view it is thought that physicalism is true if and only if everything is either a fundamental physical property or law, or can be explained in terms of such properties and laws. As such, physicalism is an epistemic thesis about what we can explain. It may have ontological implications since typically we think that a good indication of whether the fundamental nature of r is p is the fact that we can explain r in terms of p. Nonetheless, such a view is primarily an epistemic thesis. Many philosophers, however, see physicalism as an ontological thesis, a thesis that tells us about what the world is like, whether or not we can understand how it could be like this. Physicalism, many think, could still be true even if we never arrive at a physical explanation of, say, pain, as long as pain is an entirely physical phenomenon. As Joseph Levine puts it, 'I am prepared to maintain that materialism must be true, though for the life of me I don't see how' (Levine, 1998, p. 475). And some philosophers such as Brian Loar (1990) and Colin McGinn (1989) have proposed theories about why we cannot understand how physicalism could be true of the mind, even though they think that physicalism might very well be true. To make sense of positions such as these, physicalistic dependence relations cannot be formulated in terms of explanation. Of course, most advocates of thinking about physicalism in terms of explanation do not mean that we can
provide a physicalistic explanation of pain now, nor even sometime in the future. Rather, they think that for physicalism to be true, such an explanation must be in principle possible. But it is usually not clear what principle is at work here. The idea that there is an explanation that the human mind can grasp might seem too restrictive. Why should there not be phenomena that are beyond the grasp of human intelligence?2 However, it is difficult to grasp what it would mean for an explanation to be possible for an ideal mind, a mind that is capable of knowing everything. In any event, many formulations of physicalism employ an ontological relation between lower-level physical properties and higher-level properties that is supposed to capture the idea that higher-level properties are 'nothing over and above' lower-level properties. For example, it is supposed to capture the idea that a mountain's height is nothing over and above the cumulative height of the rocks, pebbles and earth that compose the mountain, and that the rocks, pebbles and earth are nothing over and above the molecules out of which they are composed, and that the molecules are nothing over and above the atoms out of which they are composed, and so on. Already, however, we run into difficulties, for aren't there properties of, say, Mt. Fuji that are not dependent on the properties of the dirt that composes it? For example, Mt. Fuji has the property of being revered in Japanese society, yet it is not clear that the rocks and pebbles have this property or have any other properties that would imply that the mountain should have this property. Physicalists address this type of worry by broadening the 'dependence base' for the physical world. Perhaps all the properties of Mt. Fuji do not depend entirely on the properties of its parts, but they nonetheless do, says the physicalist, depend on fundamental physical properties. If we set all the fundamental physical properties of the world, we will have set Mt. Fuji's property of being revered in Japanese society since, according to the physicalist, we will have set the Japanese people's reverence of it as well. But what exactly is the relationship between the properties of Mt. Fuji and the properties of molecules? Though there is considerable disagreement over how physicalists should explain the relationship between Mt. Fuji's properties, or other higher-level properties, and the fundamental physical properties, many think that the relationship involves, at a minimum, 'upward determination', or what is also called supervenience. Upward determination is typically expressed as the view that any world that duplicates all the fundamental physical properties and laws of our world also duplicates all properties of our world. So it implies that any world that duplicates the microphysical properties of our world would duplicate Mt. Fuji, as well as all other higher-level features of our world, including minds. The relation of upward determination, or supervenience, is sometimes explained metaphorically as the view that all God had to do in order to create
the world was to create the fundamental properties of physics. After this she could rest, as everything else came along for free. How close does upward determination take us to physicalism? Upward determination states that any world that duplicates all the fundamental physical properties and laws of our world also duplicates all properties of our world. But now imagine a necessarily existing God. A world that duplicates all the fundamental physical properties of our world would also duplicate such a God. Yet, intuitively, the existence of God refutes physicalism. If this is correct, then upward determination is not a sufficient condition for physicalism. To be sure, if this necessary God interferes freely with the workings of the world, a fundamental physical duplicate of our world might not duplicate all aspects of our world, for God might arrange things so that in the duplicate world, although all the fundamental features of the world are the same, I prefer coffee to tea. As such, upward determination would fail. However, if the role of God were merely to set the fundamental nature of the world, merely to be the hand behind the big bang, as it were, then a necessarily existing God would be consistent with upward determination. If you accept Hume's view that there are no necessary connections between distinct entities, then such a God cannot exist.3 Such a God is distinct from the rest of the world, yet her existence is necessary, given the world. Alternatively, one could restrict the scope of physicalism so that such a God would be consistent with the truth of physicalism. But if you reject Hume's view and also think that the existence of God is incompatible with physicalism, you are led to regard upward determination as perhaps a necessary condition for physicalism, but not a sufficient one. The desire to find both a necessary and sufficient condition for physicalism has led some philosophers to hold that explanation plays a role in our understanding of physicalism after all.4 Physicalism, as they see it, is not just the view that everything is determined by fundamental physical properties, but that everything is determined and ultimately explained by the fundamental physical properties. Such a view presumably rules out a necessarily existing God from counting as physical. And if it doesn't, such a God would seem to be physical. But many are content with a mere necessary condition since much of the action in the literature on physicalism involves various arguments against physicalism, all of which purport to show that upward determination, which is taken to be a necessary condition for physicalism, fails to hold. For example, the zombie argument against physicalism is intended to show that the possibility of zombies – not the lumbering Hollywood variety, but creatures that duplicate our microphysical structure yet lack consciousness – implies that consciousness is not physical.
Is there a way to satisfy the desire that physicalism should be both an ontological thesis and incompatible with a necessarily existing God?
Physicalism: Any world that duplicates all the fundamental physical properties and laws of our world (and contains no other fundamental properties) also duplicates all properties of our world, and everything in our world is ultimately constituted by fundamental physical entities.
Assuming that both immaterial souls and a necessarily existing God have non-physical fundamental properties, this view implies that their existence is incompatible with physicalism, which is just what we want.
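One way to make the duplication clause of this formulation explicit is in possible-worlds terms. The following regimentation is offered only as an illustrative sketch; the predicate symbols are shorthand introduced here for convenience, not standard notation from the literature:

$$\forall w\;\Big[\big(\mathrm{Dup}_{\mathrm{phys}}(w,@)\;\wedge\;\neg\,\mathrm{ExtraFund}(w)\big)\;\rightarrow\;\mathrm{Dup}_{\mathrm{all}}(w,@)\Big]$$

Here $@$ is the actual world, $\mathrm{Dup}_{\mathrm{phys}}(w,@)$ says that $w$ duplicates all the fundamental physical properties and laws of $@$, $\mathrm{ExtraFund}(w)$ says that $w$ contains some further fundamental property, and $\mathrm{Dup}_{\mathrm{all}}(w,@)$ says that $w$ duplicates all the properties of $@$. The constitution requirement – that everything in the actual world is ultimately constituted by fundamental physical entities – is a separate conjunct of the thesis rather than part of the duplication clause.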
The Physical
Now we must address the question, 'what is the physical?' When we say, for example, that everything is determined by fundamental physical phenomena, what are these fundamental physical phenomena? Most define the fundamental physical properties in terms of the entities and properties and perhaps laws posited by microphysics: the fundamental physical phenomena are those entities and properties mentioned in the theories of microphysics. But what is meant by microphysics? Is it current microphysics? This would provide a relatively clear position: physicalism would then be the view that all of the fundamental properties are properties of microphysics. Unfortunately, this is a theory that is rather difficult to accept since we know that current microphysics is most likely neither entirely true nor complete, and thus we now know that it is most likely not true that all higher-level properties are determined by the properties of microphysics. A more common understanding of what counts as the fundamental physical properties in the thesis of physicalism is that they are the properties posited by an ideal physics, a true and complete physics, or a physics 'in the end'. Can we formulate physicalism in terms of a true and complete physics? Of course, we do not currently know what future physics will be like, and therefore we cannot now determine whether physicalism is true. But perhaps physicalism can be seen as a hypothesis that awaits scientific confirmation (or, for that matter, refutation). Physicalists, on this understanding, are betting that it is correct, but do not claim to be able to now determine that it is correct.5 I see no problem with making physicalism a thesis that awaits empirical support. However, it seems that far from turning physicalism into a thesis whose truth awaits empirical support, defining the physical in terms of a true and complete physics actually seems to turn physicalism into a trivial truth. For what is a true and complete physics, save for one that accounts for the fundamental nature of everything? If free-floating souls exist in our world, a completed physics will, by definition, account for the most fundamental nature
of these souls. Yet neither physicalists nor their foes think that at this time in the debate physicalism is true merely as a matter of definition. Physicalists think the thesis needs to be argued for and, as many hold, will ultimately depend on what scientific investigation reveals. And their foes clearly do not think that they are denying what amounts to, more or less, an analytic truth. It seems, then, that physicalists who define physicalism over a true and complete physics cannot simply mean by this a theory of everything, since then their claim that the mind is physical is trivially true. Yet, there is also reason to think that they do not simply intend to refer to the temporal end of physics. For this physics might still be inaccurate and incomplete; even worse, for all we know, physics might regress. We need, then, another route to defining the physical.6 Some argue that there are phenomena that physics, and perhaps scientific investigation in its entirety, does not aim to cover. Rather, physicists, they argue, in their role as physicists, are only concerned to account for a certain class of phenomena, and souls and spirits are not in this class. As such, the truth of physicalism becomes open to debate. The question, then, is: 'Are there no other fundamental properties than those that are under the hegemony of a true and complete physics, where what counts as being an object of study for physics is restricted in certain ways?' This makes physicalism admirably more risky, but should we assert that physics has identifiable limits (besides, of course, that which is by definition unknowable)? As I see it, it makes good methodological sense to hold that scientific inquiry should not accept a priori barriers. Certainly, it would be reasonable to say that as things stand, government grant money ought not to fund physics research into the properties of souls. This research would seem to be currently hopeless. However, the claim that physics should never investigate the nature of souls – even if in some currently unfathomable way a physics lab reveals signs of souls – is a much stronger one. And, indeed, it seems that such barriers could hinder progress. In other words, it seems that a good approach to scientific investigation is that when you discover territory that does not conform to your map, change the map, not the territory. Such changes might involve not only expanding our scientific ontology, but changing our scientific method as well. For example, if standard controlled experiments fail to reveal phenomena that we nonetheless think exist – as some have claimed could be the case with parapsychological phenomena – we should try to find a way to change the control. If we were somehow convinced that there was a spiritual realm that was causally isolated from our world, let us try to understand it. Where does this leave us? I think that it indicates that, despite the consonance of the two terms, the physical should actually not be defined over physics. Physics is the study of the fundamental nature of the world, whatever that nature may be. But physicalism is more discriminating about what is to count
as fundamentally physical. Even if fundamental acts of pure consciousness were part of the domain of physics (as the physicist Eugene Wigner claimed were required to explain the collapse of the wave function) they should not count as physical. But if physics is not our guide to what counts as physical, what, then, is? Physicalism is an ontological thesis, but it is an ontological thesis that is supposed to capture the sentiments of those who call themselves physicalists while presenting a thesis that those who think of themselves as opposing physicalism will reject. And thus we are looking for an understanding of physicalism that classifies free-floating minds, a God that is not determined by anything other than God, and fundamental, irreducible norms all as non-physical.7 I think that we can achieve this if we merely define the fundamental physical properties negatively, that is, in terms of the types of properties that are excluded. The fundamental physical properties, then, are the fundamental non-mental, non-divine and non-normative properties. But why should those and only those be excluded on a physicalistic conception of the world? While most philosophers would agree that physicalism does indeed exclude those sorts of properties, what exactly it ought to exclude is somewhat of an open question. For example, some but not all see vitalism as anti-physicalistic as it posits a fundamental life force. But in any event, as long as one makes it clear at the outset what types of fundamental properties are to count as non-physical, we have a framework around which debates over the truth of physicalism can proceed. Filling the framework in, here is the theory of physicalism we have arrived at:
Any world that duplicates all the fundamental non-mental, non-normative and non-divine properties and laws of our world (and contains no other fundamental properties) also duplicates all properties of our world, and everything in our world is ultimately constituted by fundamental non-mental, non-normative and non-divine entities.
This way of understanding physicalism may be somewhat of a mouthful, but it seems to capture the spirit of physicalism since it is inconsistent with the existence of such things as immaterial souls and mental properties that are over and above the physical (even if their existence follows from necessity given the physical domain). Or rather, it is inconsistent with such things as long as they count as fundamental. If, however, they are determined by (but do not determine) non-mental, non-divine and non-normative properties, they count as physical, as they should. But doesn't this leave us with just a disparate list of properties that are to count as non-physical? Physicalism, according to Frank Jackson, is 'the very
opposite of "big list" metaphysics'. Rather, he sees physicalism as 'highly discriminatory, operating in terms of a small set of favoured properties and relations' (Jackson, 1998, p. 5). To be sure, the list of properties excluded by via negativa physicalism is hardly large, yet one might still want to know what unifies the non-physical properties besides the fact that there are a number of people who call themselves physicalists who simply don't like them. Why is it that physicalists do not like these properties? Why should certain properties, such as fundamental properties that are mental, count as non-physical? I think that certain properties have been deemed physically unacceptable because they hint at a world that was created with us in mind. If mental phenomena were fundamental, being, for example, part of the original brew that was set in motion in the big bang or emerging as something extra along the way, mentality would have a place of prominence in the world. And this, I think, for many, suggests the existence of a God who was looking out for us. This hint, however, is not an implication, and antiphysicalists can be atheists. However, I think that non-physical properties have gotten their 'bad' reputation because on many accounts of God, these are the sorts of properties that would exist, if God were to exist. And the reputation remains, even when its origin is forgotten. As should be the case, if you are a theist, you will reject physicalism, as defined. However, this physicalism does not seem to take all the fun out of life. If physicalism is true, the pleasure of a great glass of wine need not be merely something going on in your brain. Rather, if physicalism is true, such pleasure is determined by neural properties and ultimately fundamental non-mental properties, but it is as real as anything. But is physicalism true? This question, alas, I shall need to save for another discussion.
6
Folk Psychology and Scientific Psychology Barry C. Smith
Rational beings have a propensity to recognize other rational creatures. We anticipate and respond to others, not as moving bodies but as agents: minded creatures with motives for action. To treat their behaviour as action is to see it as governed by their intentions; intentions fixed by their beliefs and desires. It is because we see agents as acting in the light of their beliefs and desires that we are able to make sense of their behaviour in rational terms, and it is this tendency to see people's behaviour in rational terms that lies at the heart of our everyday, folk psychological understanding of others. At the same time, psychologists and neuroscientists have made great progress in uncovering the cognitive states and mechanisms that explain human capacities for perception, speech and action. So how do these two very different types of explanation fit together? This essay explores the relations between folk psychology and scientific psychology by considering the views of three very different philosophers of mind and where they stand on this issue. The philosophers are Donald Davidson, Daniel Dennett and Jerry Fodor.
Common Sense Psychology
When we explain people's behaviour in intentional terms, we take their behaviour to be the upshot of mental states that gave them reasons to act. Citing individuals' wants and wishes, thoughts and feelings, hopes and fears, helps to explain both what they are doing and why they are doing it. We routinely go in for such explanations as part of our psychology of other people. Central to such commonsense, or folk, psychology is the ascription of particular beliefs or desires that belong to a larger pattern of attitudes and actions that makes sense of people's acts and utterances. For example, when we notice someone leaving the room discreetly, we may say, 'She left the room because she wanted to escape from the party unnoticed'. Ascribing to her this desire is one part of a larger explanation that would include her beliefs about her surroundings and her beliefs about what she has to do to satisfy her desire. (Notice that simply
having a desire to escape the party unseen would not lead her to do anything unless she also believed she could leave the party unseen by exiting swiftly from the room.) Similarly, we could have cited a belief, 'She tiptoed past the drawing room because she thought no one would hear her leave'. Again, the explanation is partial, and we are assuming that she had a desire to leave without anyone hearing her. We may mention a belief or a desire even though it is beliefs and desires together that constitute someone's reasons for acting, and in citing these reasons we are offering rational explanations of an agent's behaviour. The beliefs and desires posited to explain behaviour are hypotheses that can be re-worked in the light of further evidence. The beliefs ascribed must make sense in the light of other beliefs, and similarly, desires must make sense in the light of further desires. Thus it is part of our competence in giving such explanations that we stand ready to adjust our attributions of belief and desire if they are not consistent with what else it makes sense to ascribe to an individual on the basis of what they say and do elsewhere and at other times. The overall picture must make sense of the individual as by and large a rational thinker, and these relations between beliefs and beliefs, between desires and desires, and between beliefs, desires and actions, are logical or rational relations.
Propositional Attitudes and Intentional Actions
We call beliefs and desires propositional attitudes because an agent can take different attitudes – such as believing, desiring, hoping, or fearing – towards the same proposition, for example, that the war on terror will continue for a long time. Alternatively, one can take the same attitude towards different propositions. It is the logical relations between propositions believed that ensure the rational connections between propositional attitudes. For example, if we believe that the US president J. F. Kennedy was assassinated we must also believe that he is dead, as well as believing that there was someone called J. F. Kennedy who was president of the United States, etc. If Oswald intended to kill Kennedy then he must have wanted Kennedy dead and believed that by firing the gun he could bring about that outcome (i.e. he must have believed the death of Kennedy would satisfy that desire). If he didn't want Kennedy dead (but wanted merely to frighten him) then we can suppose his intentional action is incorrectly described as his killing Kennedy – even if the shot he fired accidentally resulted in Kennedy's death. Perhaps he was genuinely surprised that Kennedy died – after all, it was a very improbable shot. His subsequent surprise would make no sense if he all along desired to kill Kennedy and believed that pulling the trigger would result in Kennedy's death. Even if Oswald's observable behaviour, his lifting the rifle, aiming and pulling the trigger, was the same in both scenarios, his surprise would be a reason for supposing that he didn't
intend to kill Kennedy, and for denying that we should call his action a killing. By contrast, his lack of surprise and satisfaction would make rational sense of supposing he did intend to kill Kennedy, and that killing is the correct description of his action. The correct description – the description under which an action is intentional – depends logically on the intentions of the agent: the particular beliefs and desires he or she has. These propositional attitudes rationalize the agent's behaviour, enabling us to see it as an intentional action. Thus if the intentional description of the action were different, the attitudes we ascribe to the agent would have to be different too. In courts of law, whether someone's action should be described as murder or manslaughter depends on whether they had a premeditated intention to kill before they acted as they did. The behavioural event is the same but whether it counts – can be intentionally described – as an act of murder or an act of manslaughter depends on which mental states it is correct to attribute to the agent (i.e. what his reasons were in behaving as he did).
Beliefs and Desires as Part of an Intentional Network
We make sense of people's beliefs and desires in the light of further beliefs and further desires we have grounds to attribute to them, based on what they do and say elsewhere and at other times. And we aim for the most consistent overall interpretation of someone's actions and utterances by constructing a network of intentional attitudes and actions that makes the best possible sense of their overall behaviour. We constantly rework and revise our portrait of someone's mental life in the light of further evidence. We find ourselves equipped to do so without any explicit training. We operate quite instinctively in forming views about other people's states of mind. Premack and Woodruff (1978) coined the term 'theory of mind' for this set of abilities. It is not an explicit theory, of course, but it amounts to a tendency or ability on the part of normal human thinkers – one we develop fully by about the age of four – to make hypotheses about the beliefs, desires, hopes and fears of fellow humans and to explain and, to some extent, predict their actions. It is the nature and status of this everyday folk psychology that we will now examine.
Folk Theories
Folk psychology plays much the same role in dealing with others as folk (or naive) physics plays in our understanding of physical objects and forces. We know that folk physics is not literally true. We, the folk, say that the sun goes down behind the hill even though it is the earth that goes round the sun.
We say that cold water cools the hot water in our bath, even though what actually happens is that the hot water heats up the cold and thus loses its kinetic energy. In these cases, the proper scientific explanation replaces the rough and ready generalizations of folk theorizing about how the world works. Is the same true of the generalizations of folk psychology? Will they eventually be replaced or revised when we learn more from the science of the mind? Will neuroscience gradually replace the false but appealing assumptions of the folk? If that is not the model of the relation between folk psychology and scientific psychology, will belief-desire psychology be vindicated or refined by scientific psychology? The worry is that if folk psychology is not reducible to scientific psychology we seem to have competition between two explanations of the same purposeful behaviour and perhaps only one of them can be genuinely explanatory. To ask whether the particular claims and generalizations of folk psychology are true we need to know what they commit us to, and to know what kind of explanation folk psychology provides. As we have already seen, folk psychological explanations of behaviour are rational explanations: the giving of reasons as to why people do what they do. But such explanations are also causal explanations. The reasons we cite – the beliefs and desires that make sense of people acting as they do – are also the causes of their behaviour. When we say Charlotte left the party because she wanted to catch her train, the 'because' is used in a causal sense. (Contrast this with, 'She broke the law because she parked on a double yellow line'.) But if folk psychological explanations are intended as causal explanations – explanations of what brought about certain events – the key question is whether folk psychological explanations are literally true of human agents.
The eliminativist threat to common-sense psychology
Are our descriptions of our own and others' behaviour as rational actions really true: is behaviour really the upshot of beliefs and desires at work in us? Or are we actually caused to behave as we do by something entirely different: configurations of neural firings in the brain that have nothing like the neat structure of belief and desire? What aspects of reality are we picking out when we make ascriptions to people of beliefs and desires, citing these states as the causes of their actions? Let us look at the eliminativist challenge. What if states like belief, desire, hope and fear, cited in our common sense psychological framework, turn out at no level of organisation to be among the causes of human behaviour? Our everyday psychological scheme would misrepresent our internal states and activities, just as conceptions of the world that mentioned witches, ether, or phlogiston, in earlier thinking, misrepresented the nature of reality. Paul
Churchland has suggested that there may be some reason to accept the legitimacy of this challenge to common sense psychology because of the failure of its theoretical concepts to line up with the categories of neuroscience. The former may have to be eliminated by, rather than reduced to, the categories of a fully mature neuroscience of cognition and action (see Churchland, 1978). This is the eliminative materialist's option. And even if we reject it, the threat it poses is real enough. Consider the following passage from Brian Loar's Mind and Meaning (1981):
If it were to turn out that the physical mechanisms that completely explain human behaviour at no level exhibited the structure of beliefs and desires, then something we had all along believed, viz. that beliefs and desires were among the causes of behaviour, would turn out to be false. Naturally, we would continue to use the belief-desire framework to systematize behaviour, but that should then at the theoretical level have the air of fictionalising and contrivance. (Loar, 1981, pp. 14–5)
This 'fictionalising and contrivance' offers a form of 'irrealism' about psychological talk, where we are either in error when we speak this way, or else merely using such talk without any pretension to describe what is real. We can call this option non-factualism about psychological states.
Realism about folk psychology

If we are to resist non-factualism and adopt a realist construal of folk psychological descriptions of people's mental lives, we need to answer a number of questions. What does an account of folk psychology – our common sense ascription of beliefs and desires to one another to explain our behaviour in rational terms – commit us to? What would be required of any successful attempt to vindicate folk psychology? What aspects of reality are picked out by the notions it uses? Can they give us an adequate philosophical conception of the mind? If they can, we need to say what relation obtains between this level of mental description and the levels of description invoked by cognitive psychology and neuroscience. There are three key desiderata for any satisfactory account of folk psychology:

(1) It should provide a rational explanation of certain behavioural events as actions.
(2) It should accommodate the causal efficacy of the mental.
(3) It should accommodate first- and third-personal ascription of beliefs and desires.
We have touched on the rational and causal requirements already, and we will return to them below. But now let us consider the third requirement. Any adequate account of folk psychology has to acknowledge that the very same psychological states (beliefs and desires) that we attribute to others can also be attributed to ourselves. It is part of our psychological self-knowledge that we know what we are currently thinking and what we want. And yet the basis for ascribing psychological states to ourselves differs from that on which we ascribe such states to others. Ascriptions of beliefs and desires to others depend on evidence and inference: we observe people's behaviour and attribute states that, together with other background beliefs and desires we are prepared to ascribe to them, make rational sense of their behaviour; while in our own case, we ascribe attitudes to ourselves from the first-person point of view without relying on evidence or inference. We do not have to observe our own behaviour in order to know what we are currently thinking or what we want. We just know. We know our own minds best, better than we know the minds of others and better than they know our minds. So there is an asymmetry in the grounds for making first-personal and third-personal psychological ascriptions to someone.
Folk psychology and self-knowledge

We ascribe psychological attitudes to ourselves without basing our ascriptions on evidence. We do not have to theorize about ourselves. (Although there are occasions on which we try to figure out what we really want, or what our motives were in acting as we did.) We are authoritative in our self-ascriptions, but not similarly authoritative about someone else's mental states. Ordinarily, others know best what they think and what they want. This is a further epistemic asymmetry between first- and third-person perspectives on the mind. Any satisfactory account of our psychology must recognize that we ascribe beliefs and desires to others and to ourselves: that there are first-person and third-person attributions of beliefs and desires and asymmetries between these two modes of attribution. The grounds of our psychological self-knowledge are very different from those of our knowledge of other minds, so any decent account of our everyday folk psychology will have to make room for a satisfactory account of our psychological self-knowledge.
Scientific Psychology

The science of psychology aims to explain a vast range of human cognitive capacities and abilities that underpin and make possible our propensity for perception, thought and action.
As currently practiced, it is unlikely to satisfy key requirements (1) and (3) on a satisfactory folk psychology. The various accounts it gives of the cognitive states and processes that subserve specific abilities for vision, language, audition, and motor-control are states not known to agents first-personally. Nor are they ordinarily known third-personally. Instead, we should think of them as states of sub-personal mechanisms – not states we attribute to persons – posited by a theory of the internal cognitive mechanisms that subserve particular capacities. Such content-bearing cognitive states do not provide rational explanations of the behaviour in which we display our abilities and capacities. We see distance and depth in the visual field, but the psychological states and processes responsible for this aspect of the visual scene do not give the sighted person reasons to see things this way: they causally explain why creatures with stereoscopic vision do see things this way in virtue of the content of those states. The common element that such underlying states of our cognitive systems share with the psychological states posited by folk psychology is that both are content-bearing or representational states; but the cognitive states invoked by scientific psychology do not have propositional contents that sustain logical and hence rational relations to one another. Scientific psychology, in particular cognitive psychology, extends the domain of content-bearing psychological states without extending the domain of rationally governed states, thus marking a division within the mind. Given this division within the mind between different kinds of mental states and the different kinds of psychological explanations in which they feature, how can a psychology that rationalizes behaviour in terms of beliefs, desires and intentions accommodate an underlying sub-personal psychology that explains capacities exhibited in the same behaviour? Does the science underpin or undermine the folk psychological accounts we give of ourselves and others? We shall look first at a purely a priori defence of the legitimacy of folk psychology. This view rejects the idea that common sense psychology could be answerable to scientific psychology by denying that there could be a science of the mind. This is the interpretationist view of the mental set out by Donald Davidson.
Davidson's Philosophy of Mind

Is the commonsense (or folk) psychology we use to make sense of one another's verbal and non-verbal behaviour vulnerable to scientific challenge? Could findings in scientific psychology or neuroscience show that a scheme we use for explaining human action is fundamentally flawed and mistaken? An a priori defence of commonsense psychology, if successful, would render it immune to scientific challenge.
This is the view of the mind proposed by Davidson, who sees mental life as constituted and exhausted by the application of a priori principles of interpretation used to make sense of one another's behaviour in rational terms. It is this a priori characterisation of the principles of commonsense psychology that renders further empirical investigation of its standing irrelevant. For Davidson, these principles characterize what it is to be minded in the first place.
Levels of description

According to Davidson, our understanding of the mind depends wholly on the concepts and categories we use to ascribe mental states to one another. We are gradually inculcated into a practice of ascribing beliefs and desires to one another, and of seeing each other as rational agents engaged in purposeful activity. The practice of ascribing or reporting our own or others' beliefs plays a constitutive role in the nature of those beliefs; in an account of what beliefs fundamentally are. For Davidson, the theory of belief is a theory of belief-ascription. The mental is constituted and exhausted by the intentional categories we have for ascribing states of mind to one another. We ascribe such states to make sense of one another's actions in rational terms and our psychological idiom cannot be reduced to the vocabulary of the physical sciences. Our everyday psychological talk describes a normative structure of attitudes and actions that makes rational sense of people's behaviour. For Davidson it is the rational relations between the attitudes (and actions) that cannot be captured in purely physical terms. Having a particular belief requires us to have others logically connected with it. But these rational connections have no echo at the physical or neural level. Physical or neural states do not rationally or logically require the existence of other physical states. The irreducibility of intentional to physical vocabulary does not, however, mean that there are irreducible mental entities. There is just one set of entities (events) and two vocabularies to describe it: the intentional and the physical, with some of the events we describe in physical terms being also describable intentionally. Events are mental events just in case they can support intentional descriptions: an event is an action if and only if it can be described in a way that makes it intentional. (Davidson, 1980, p. 229) But what are the constraints on whether an event is intentional or not, that is, on whether it can be described in intentional terms? Not just any physical event is a mental event. We need a way of deciding which physical events are mental events, and this is settled by which physical events sustain intentional re-description as mental events.
Such intentional re-descriptions are introduced by interpretations of bits of the agent's behaviour as rational actions: an episode of behaviour for which a rational explanation can be given, an explanation that gives the agent's reasons for behaving as she did. The reasons cited are given in terms of what people believe, desire and intend. Notice that the target of rational explanation is not behaviour itself, where behaviour is construed as bodily movement, but action: intentionally described episodes of behaviour or bodily movements that amount to an agent's performing an action for a reason. It is these intentional actions as observable aspects of mind for which we give rational explanations. Equally, actions as part of the mind of a creature only come into view when we interpret that creature as a rational agent acting on beliefs and desires. Thus rational interpretation constitutes its own explananda by coming to see certain bodily movements as actions. When a bodily movement is interpreted as an intentional action – part of someone's mind – it is the very same event that can be described in physical and mental terms: all we have are two descriptions of the same thing. The mental is an intentional level of description of otherwise physical events. As Davidson puts it: 'events are mental only as described' (Davidson, 1980, p. 215). Events are particular, datable, unrepeatable occurrences, and all events are physical (i.e. physically describable). Some of these events are also mental, that is, correctly describable in mental terms. When we re-describe an episode in a person's physical history – a bodily movement – in intentional terms, describing it as the action of an agent intentional under a certain description, we see it as part of her mental life. And in treating certain behaviours as part of someone's mental life – as actions undertaken – we thereby introduce beliefs and desires as the agent's reasons for performing those actions, and at that stage we retrospectively identify those mental states with the physical or neurological states that are the causes of the bodily movement in question. Keeping the mental and physical levels of description separate, Davidson argues for a non-reductive physicalism he terms anomalous monism:

(a) Some mental events interact causally with physical events.
(b) Events related as cause and effect fall under strict deterministic laws.
(c) There are no strict psycho-physical laws (and no strict psychological laws).

The mental events that cause bodily movements are identical with physical events: an event described in mental terms can also be described in physical terms, and it is under its physical description that it can instantiate a strict causal law couched in purely physical vocabulary. The physically described event, which is also described in mental terms, enters into causal relation with another event described in physical terms. Events are related as cause and effect when they have descriptions that instantiate causal physical laws. So even though there are no strict laws for predicting or explaining psychological phenomena, mental events can still cause physical events.
The mental event that is the cause of some piece of behaviour has a description under which it instantiates a causal law that links physical events of that kind to physical events of some other kind. The singular causal statement linking A's reasons on a particular occasion to what he did will be true only if it is backed by a general causal law that relates events of that kind when physically described.
Davidson and the Causal Requirement

It may seem that it is only at the physical level that causality occurs, and so the mental may seem causally irrelevant or epiphenomenal. Physically described events would continue to have the effects they do whether or not they were describable in mental terms. But care is needed here if we are not to misunderstand Davidson's position. Davidson believes in the supervenience of the mental on the physical: there can be no mental difference without a physical difference, and any two events alike in all physical respects will be alike in all mental respects. So when one event, physically described, causes another, physically described, and the first is also describable in intentional terms as a mental event and the cause of the resulting behaviour, it could not have been that very event if it did not have that intentional description; for if it had been different in some mental respect (i.e. not having any mental description), it would have had to be different in some physical respect on pain of violating supervenience, and then it would have to be a different event altogether. Whenever physically described events are also describable in mental terms, those events count as mental events and cannot be otherwise unless things had been physically different. So the challenge to causal efficacy that suggests things could have taken place physically whether the mental was present or not fails, and Davidson can still claim that mental events can cause physical events.1
Anomalousness and the Holism of the Mental

At the outset, each attempt at explanation is a move within an interpretative scheme, an initial description of someone's action that makes sense only in the light of certain beliefs and desires that give the agent reasons for acting. And the ascription of these beliefs and desires in turn makes sense only in the light of further propositional attitude ascriptions:
Beliefs and desires issue in behaviour only as modified by other beliefs and desires, attitudes and attendings, without limit. Clearly this holism of the mental realm is a clue to the autonomy and to the anomalous character of the mental. (Davidson, 1980, p. 215)

Each of these further rationally related attitudes must be justified in the light of what the person says and does elsewhere and at other times. When interpreting a creature, each piece of evidence on behalf of a given interpretation retains its evidential standing just in case it forms part of an overall interpretative theory that incorporates more and more of a person's speech and behaviour within an overall intentional scheme: 'Every case tests a theory and depends on one' (Davidson, 1980, p. 221). In this way, the accumulating evidence in favour of a given interpretation is part of the constitutive fabric of the interpretation itself. And the question of the correctness of any single intentional description of an agent depends on the cogency of the overall intentional network of beliefs and desires to which that particular description belongs. Whether a given intentional description is correct depends not on its fit with some isolable physical or neurological fact about the creature, but on whether we have respected the prevailing conditions for interpretation and satisfied the interpretative principles of rationality and charity (see below), and whether that interpretation enjoys the best possible fit (up to indeterminacy) with the overall facts of that agent's physical history. Thus what it is for an agent to have a particular belief (or desire, etc.) is for him or her to be apt to be ascribed that belief (desire, etc.) in the course of giving an interpretation that makes the best sense possible of his or her total history and behavioural conduct.
A Priori Principles of Interpretation

The principles that ensure the coherence of a person's attitudes and actions, from the point of view of an interpreter, are the principles of rationality and charity. Beliefs should make rational sense in the light of other beliefs to which they are logically related by the contents of those beliefs; and the actions we see people as performing should make sense in the light of the beliefs and desires we are prepared to ascribe to them and which give them reason to do and say what they do and say. In addition to rationality, the other constraint on correct interpretation is the principle of charity, which ensures that a person's beliefs about their surroundings should be, by and large, correct by the interpreter's lights. To be interpretable, most of an agent's ordinary beliefs about the world around her should be true. And the correct interpretation of a person's behaviour will be the one that ensures the best fit with these two principles. The principles of rationality and charity must guide us in the course of building up a portrait of someone's overall propositional attitudes and actions.
They do so when we ascribe particular beliefs, desires and meanings that make sense of someone's actions and utterances, in the light of further beliefs, desires and other meanings that comport with the person's overall behaviour and their surroundings. Rationality requires that if there are two interpretations, each consistent with everything the person does and says, we should favour the one that makes their network of beliefs, intentions and actions more and not less rational. The principles of practical reason suggest that people will by and large do what they believe will secure their fondest wish at that moment, assuming no countervailing beliefs and no countermanding desires. For them to do otherwise would make no sense to us as interpreters. Charity requires interpreters to make sense of people by attributing to them beliefs it would make sense for them to have given their current surroundings. Generalizing, we should interpret people charitably by ascribing to them beliefs that are largely true (by our lights). That is, we should not gratuitously ascribe to someone a bizarre belief about what is going on around them, but assume, charitably, that they are like us in having reliable beliefs about their current surroundings. Notice that the principles of rationality and charity interact: to ascribe to someone an outlandish belief may result, through the holistic connection between attitudes, in a scheme of interpretation for that person that makes him less rather than more rational. On the other hand, if ascribing to someone veridical beliefs about his current surroundings would make his behaviour less rationally explicable than ascribing a false belief to him, we must forgo charitable interpretation on this occasion. By and large, Davidson thinks the conditions for interpretation show that 'there is a large degree of truth and consistency in the thought and speech of an agent'.2 The coherence of belief is guaranteed by the joint application of charity and rationality. If there is not enough coherence in someone's beliefs we would not be justified in regarding them as rational agents at all. Even irrationality assumes that a prior rational standard is operative and that a person is going against it on this occasion. To assume that no rational standard is operating is to see a creature as non-rational. As interpreters we must always attempt to see people's attitudes and actions as making rational sense by our lights. This is not a subjective view of rationality. What enables me to see patterns of rationality in the attitudes and actions of others is the rationality at work in my own thought and talk. The standards of rationality that enable me to make sense of others are the standards that enable me to make sense simpliciter; this is what enables others to make intelligible sense of me. Thus, there is just one standard of rationality for all, or at any rate for those who are mutually intelligible. Rationality and charity are relative a priori principles: they are not principles designed to get at the (independently constituted) facts of someone's mental life; rather it is only when the intentionally described states of a creature conform to them that creatures count as having mental states.
What it is correct to say about someone's psychology is answerable to these principles: the mental is constituted and exhausted by interpretations governed by principles of rationality and charity. In effect, they are synthetic a priori principles telling us what minds are. However, a creature's possessing mental states is not dependent on the existence of an actual interpreter who interprets the creature: it suffices that the creature exhibits behaviour that would sustain intentional re-description in terms of a network of attitudes and actions that would make sense of the creature's behaviour in rational terms. Wherever a creature could be interpreted in accordance with the principles of charity and rationality, the creature counts as minded. No account is given of how ordinary interpreters produce the interpretations they do. But limits on what interpreters can know about an agent are made explicit by what a fully informed theorist of interpretation could ascribe to the agent on the basis of the non-intentional evidence about his behaviour and physical history, and deployment of the intentional vocabulary under the control of the a priori principles of rationality and charity. The theorist of interpretation can provide a justification for the particular ascription of beliefs, desires and intentions that an ordinary interpreter goes in for.
Linguistic and Non-Linguistic Interpretation: Belief and Meaning

When interpreting other people, much of our information about what they think comes from what they say. But to find out what they are saying we have to know what their words mean. Even in everyday speech we can still wonder whether other people are using words in the same way we are. So to confirm that others mean the same thing as us by their use of words, or to adjust for the difference, we have to resort to interpretation. Interpreting speech is part of interpreting a person's overall behaviour. Thus, in addition to ascribing beliefs, desires and intentions to an individual we also have to assign meanings to her words, and ensure the best holistic fit with her linguistic and non-linguistic behaviour. In this way the interpretative principles are also at work in justifying our ascription of meanings to people's words. But if we have to know what someone means to know what they believe and cannot know what they mean without first interpreting them – ascribing them beliefs and desires – we go round in a circle. We must either solve for two unknowns simultaneously or find a way to break into the circle. Davidson proposes to do the latter by focusing on the case of a speaker A holding a sentence S true under certain circumstances. We can know the speaker holds S true without knowing what S means, and we can see the holding true as depending on what the speaker means by S and what she believes to be the case.
Now if we charitably interpret a speaker as having a true belief about the prevailing circumstances (believing what we do), we can take those circumstances as the truth conditions for sentence S and thus know what someone believes and means. When we have interpreted enough sentences we will see them as having parts in common, and by assigning meaning (truth-affecting properties) to their parts, we can figure out what new sentences composed of those parts mean and under what conditions they are true. This will enable us both to interpret and to test our interpretations with respect to the further things people say. In this way, the principle of charity enables us to break into the hermeneutical circle. By knowing the conditions under which the speaker holds sentence S true, the interpreter can reasonably assume these are the conditions in the speaker's language for that sentence S to be true (its truth conditions). At times it will be necessary to assume the interpreted subject has false beliefs, for there are occasions on which a speaker asserts a sentence that we interpret as being true under quite different circumstances. So we must either revise the ascription of truth conditions to the sentence or assume that the speaker believes falsely that those conditions do in fact obtain. Belief and meaning are thus inextricable for Davidson: if people don't assent to a sentence we expect them to, we can either interpret them as believing what we do and adjust the meanings we assign to their words, or interpret them as meaning what we do and interpret their beliefs uncharitably. Either way such interpretations must bring the meanings, attitudes and actions attributed to a person into a rationally coherent pattern that makes sense of the totality of their linguistic and non-linguistic behaviour. Davidson's position satisfies the rational and causal requirements of a satisfactory account of folk psychology, but can it also leave room for self-knowledge?
An Essentially Third-Person Epistemology of Mind

This is an essentially third-personal or attributionist account of the mental. Once we have assured ourselves that the principles of interpretation governing our ascriptions of attitudes have been observed there is nothing more to say about someone's intentional states. The beliefs a person has are beliefs they can be ascribed as having by a fully informed interpreter. Beliefs and other propositional attitudes are exhausted by the criteria we have for attributing them to one another. As Davidson puts it:
beliefs, desires, hopes and fears are just those states whose contents can be discovered in well-known ways. If other people or creatures are in states not discoverable by these methods, it can be, not because the methods fail us, but because these states are not correctly called states of mind – they are not beliefs, desires, wishes or intentions. (Davidson, 1986b, p. 160)

In commenting on Davidson's third-personal view of the mental, Michael Root writes:

Other minds, on Davidson's view, are what we get when we interpret the behaviour of others. Bodies are what we have before we interpret their behaviour. (Root, 1986, p. 294)

Is there room on such an account for a first-person point of view? The idea that the mind of a person depends on what it is correct for an interpreter to ascribe to that person risks losing sight of the first-person point of view altogether. Does the mind exist only in the eye of the interpreter? Not exactly. The best interpretation we can give is one that portrays the beliefs and desires that make sense of that person's own way of seeing the world, that reflect her view of how things are and that explain her reasons for acting, and in this sense, arguably, we capture that person's own point of view. What would it be to see her as acting in accordance with our view of her reasons, leaving out how she saw things? Such an interpretation would get things about her wrong. But what of the subject's own knowledge of her mental states? She doesn't have to interpret her own mind to know what she is thinking. Subjects know their own minds without interpretation. Davidson uses this point to explain an important feature of the first-person perspective: namely our first-person authority about our current mental states. First-person authority is the authority a subject enjoys in her judgements about her own mental states. What sort of authority is it? Well, when a subject takes herself to have a particular belief or desire she is typically right. Her knowledge of what she is currently thinking is effortless and groundless. The subject doesn't base her judgement on anything else: what she says about what she is currently thinking is typically correct and needs no further justification. Others are not similarly authoritative about what another person is thinking. There is no presumption that when they claim someone is in such and such mental states they are usually right. Notice, however, that a subject's authority about what she is currently thinking or desiring does not amount to infallibility: there are occasions on which we are entitled to overrule what the subject says about herself; she may be self-deceived or simply un-self-knowing. What confers first-person authority on a subject's judgements about her own mental states?
For Davidson, the advantage comes from an evidential asymmetry between the grounds one has for ascribing thoughts to oneself and those one has for ascribing thoughts to others: unlike me, others cannot simply pronounce about my mental states without grounds; they rely on observation and evidence to ground their claims. So in ascribing thoughts to you I have to rely on what you say and do in order to figure out what you think. In my own case, when I say what I think, I don't have to first work out what my words mean: I just know. I take one less step and am therefore at an epistemological advantage. On this view I know what I think because I know what I mean when I say what I think. I literally speak my mind (see Davidson, 1984a and 1987a). But such an account of how I know my own mind does nothing to explain how I know what I mean without interpreting my words. It is here that Davidson insists that I cannot but know what my words mean.3 This may be true, but it doesn't tell us anything about the nature of such knowledge or what makes it available to the subject (see Smith, 1998).
Davidson against Scientific Psychology

Notice that on this a priori defence of common sense psychology little or nothing can be said about the relation between the intentional level of description and the cognitive or neural levels posited by research in empirical psychology. The relation is one of imposition of intentional descriptions on the levels below: the levels are levels of description, not levels of organisation in the organism. Nothing discovered scientifically at the levels below can have anything to contribute to our understanding of the mental, given that rationality and charity are constitutive and exhaustive of the mental terms we use to identify minds. Davidson does not eschew all empirical data, but he insists that it is a matter of purely a priori reflection to determine which empirical details bear on the nature of mind. Which creatures have propositional attitudes? According to Davidson, 'The question is not empirical: the question is what sort of empirical evidence is relevant to deciding when a creature has propositional attitudes' (Davidson, 1982, p. 317). Put this way, there is still room for an a priori, philosophical dispute about the type of empirical evidence that bears on the correctness of psychological ascription. However, this view forces us to treat psychology not as a science but as part of philosophy. In effect, Davidson leaves no room for scientific psychology: as far as levels of description of reality go there is just physics and the folk. Claims at both of these levels give us true descriptions of reality. Notice that a realist who maintains an a priori defence of belief-desire psychology may simply refuse the challenge to substantiate its constructs at any lower level. The claims of commonsense psychology, Davidson will say, are answerable to no other criteria than their own: what makes any particular folk psychological explanation we give of someone's behaviour true is a matter of whether we have applied those criteria correctly, not an objective internal matter about what states they are in.
Let us turn now to a position that respects our interpretations of one another in terms of beliefs and desires but which makes room for cognitive psychology.
Dennett on the Intentional and Other Stances

The intentional stance

For Dennett, beliefs and desires go together with the idea of an intentional system – a system whose behaviour can be predicted by ascribing it beliefs and desires and assuming it will act rationally. To treat a creature as if it were a rational agent is to adopt an intentional strategy, to take up the intentional stance towards it. According to Dennett, this means we ascribe it the beliefs it ought to have given its place in the world and its progress through it. Likewise, we ascribe it the desires it ought to have given its place in the world and its purposes. We then predict what it will do given the attributions we make to it and the assumption that it is rational. (When we don't succeed we may have to modify particular attributions, or give up the assumption of rationality.) Taking up the intentional stance is a strategy to predict how the creature (or system) will behave. But note that to predict is not necessarily to explain. Dennett's analogy is the chess-playing computer. We predict its moves by assuming it has certain beliefs and goals – it wants to get its queen out early; it thinks I'm weak on the left flank – and treating it as a rational opponent. But in some sense, it doesn't really have beliefs and desires. This is just a heuristic, or useful assumption, that helps us to predict the computer's moves so we can compete against it. If we want to know what actually accounts for and explains its behaviour, we have to drop down to what Dennett calls the design level by adopting the design stance.
The design stance

At this level predictions are in line with the functioning components of the parts of the system and what they are designed to do. For example, the chess-playing computer's workings are predictable by the programmer on the assumption that everything is functioning properly and working as it should do at the physical level. The program may not be all we need to know in order to predict the behaviour of the machine. It may not be working as it was designed to do because physical components may have failed. This requires us to turn to the next level down by adopting the physical stance towards the system.
The physical stance

To know about the workings of the system at the physical level is to adopt the physical stance towards its workings. At this level we expect to achieve physical law-like predictions. For Dennett, the physical and the design stances (descriptions of the functional organisation and mechanical working of the system) offer not only predictions but also genuine explanations of the behaviour of the system. However, when we adopt the intentional stance, we are only involved in prediction, not explanation.
Dennett's Original Position

Just as it's not literally true to say the chess-playing computer has beliefs and desires that cause it to act, so it's not literally true of us either. We simply ascribe beliefs and desires to one another and assume people will act rationally. In this way, we succeed in making fairly reliable predictions about one another's behaviour. But at the design level, the level of our inner organisation studied by cognitive psychologists, there may be nothing like beliefs and desires causing us to act. Belief-desire talk does not describe our innards. Dennett is a realist about our internal functional organization but a non-factualist, or instrumentalist, about beliefs and desires. Beliefs and desires are attributed to us by others to make sense of our behaviour, but they don't feature in our inner workings, and they don't causally explain our behaviour. To do that we need a scientific psychology that addresses the functional architecture of the mind, and at that sub-personal level (another of Dennett's distinctions, about systems smaller than persons which lack the properties assigned to persons), we will have no use for belief-desire talk. The trouble with this view, as Brian Loar put it in the quote above, is that our continuing 'to use the belief-desire framework to systematize behaviour . . . [would then] have the air of fictionalizing and contrivance'. But Dennett wants to reject this portrayal of his view. He wants to maintain that he does, in some sense, still believe in the reality of beliefs and desires. But in what sense?
Dennett's Revised View

Like Davidson, Dennett is an attributionist about the mind; they both hold an essentially third-personal view of belief-desire psychology. But their views differ. Davidson is a realist about beliefs and desires. He thinks that belief-desire psychology gives us genuine explanations but not predictions of human agency.
Dennett, on the other hand, thinks that belief-desire psychology gives us prediction but not explanation. The difference between these views hinges on the rival conceptions of rationality. According to Davidson, rationality is uncodifiable: there are no psychological or psycho-physical laws. Rational explanation is always post hoc and partial, and what counts as rational is just what makes sense to us as rational creatures. By contrast, Dennett thinks rationality is law-like, although rationality is at best an approximation of our behaviour. The real explanations are found at the levels below. For Dennett the laws of folk psychology are at best rough and ready approximations (or idealizations) that help us to predict and interact with others. Hence:

Folk psychology is best seen not as a sketch of internal processes, but as an idealized, abstract, instrumentalistic calculus of prediction. (Dennett, 1987a, p. 48)

Beliefs and desires are 'calculus-bound entities, or logical constructs'. So although we find it useful to attribute such intentional properties to thinkers, they correspond to no real joints in nature. At best they are patterns in the behaviour of creatures perceived by creatures like us, creatures who engage in interpretative practices, ascribing beliefs and desires to one another. This may lead us to think that there really are no such things as beliefs according to Dennett. But this is too quick. On the revised view, he stresses that

any object – whatever its innards – that is reliably and voluminously predictable from the [intentional] stance is, in the fullest sense of the word, a believer. (Dennett, 1987a, p. 15)

There is no science of intentional psychology; there is no scientifically respectable property which all thinkers who are ascribed a given intentional property must share. There will still be room, however, for scientific psychology to explain the cognitive sub-systems of creatures that give rise to the behaviour we characterize, heuristically, in those intentional terms. But the very disparate nature of the actual inner causes of behaviour in different creatures undermines the explanatory pretensions of folk psychology. So Dennett's moral is that folk-psychology

is idealized in that it produces its predictions by calculating in a normative system. It is abstract in that the beliefs and desires it attributes need not be presumed to be intervening states of an internal behaviour-causing system. (Dennett, 1987a, p. 52)
However, this diagnosis is not offered as a reason to replace or dispense with the categories of intentional psychology; it is simply that we mustn't project details from the intentional level of folk psychology onto the levels of organisation below. But we are still allowed to ask the question:

Exactly what feature must we share for [a given belief ascription] to be true of us? More generally . . . what must be in common between things truly [italics mine] ascribed an intentional predicate such as 'wants to visit China' or 'expects noodles for supper'? (Dennett, 1987a, p. 43)

Dennett's answer is a shared property that is visible, as it were, from one very limited point of view: the point of view of folk psychology.

Ordinary folk psychologists have no difficulty imputing such useful but elusive commonalities to people. If they then insist that in doing so they are postulating a similarly structured object in the head, this is a gratuitous bit of misplaced concreteness, a regrettable lapse in ideology. (Dennett, 1987a, p. 55)

The point of view of folk psychology is the perspective from which we make use of the intentional strategy, ascribing creatures (and computers) beliefs and desires under the idealising assumption that they are rational agents. The assumption of rationality, however, is only an ideal; 'the myth of our rational agenthood structures and organizes our attributions of belief and desire' (Dennett, 1987a, p. 43). A belief is just a 'useful but elusive commonality'; but either these intentional ascriptions pick out real properties of thinkers or we must succumb to the 'air of fictionalising and contrivance'. For there are just two kinds of answers to the original question of what thinkers who are truly ascribed the same intentional property (like believing or desiring something) must have in common. The first answer says they are disposed to do and say such and such, and to judge so and so, an answer that requires further attributions. The second kind of answer cites some underlying micro-property or functional property that they all share. The first type of answer belongs to commonsense psychology and the second to scientific psychology. Dennett's mistake is to have stranded himself between both, denying the strict literalness of commonsense talk and failing to revise or vindicate it in scientific terms. This leaves an embarrassing gap when it comes to explaining why many predictions from the intentional stance are so reliable. But this is the very heart of the matter. What do all thinkers who share a given thought have in common? For Dennett, the new answer is that beliefs and desires are real patterns in behaviour, but these are 'real patterns discernible from (and only from) the intentional stance' (Dennett, 1991b). Minds are patterns seen by (and only by) other minds.
Is this a satisfactory realism about beliefs and desires, a satisfactory account of folk psychology? It acknowledges a rational structure of beliefs and desires. However, the beliefs and desires that are our reasons for action cannot be the causes of it. For beliefs and desires are real patterns in behaviour, and it is hard to see how patterns exhibited in behaviour can also be the causes of that behaviour. Dennett's revised view has no satisfactory way to accommodate the causal constraint.4 We now turn to a strongly realist view of the mental that takes seriously the causal powers of beliefs and desires by rejecting the interpretational view of folk psychology in favour of a scientific account of beliefs and desires as internal behaviour-causing states.
Fodor's Intentional Realism

Taking up the challenge where Dennett's view gives out, Jerry Fodor asks what all thinkers who share a given thought have in common if it's not their neurophysiology:

How much (and what kinds of) similarity between thinkers does the intentional identity of their thoughts require? This is, notice, a question one had better be able to answer if there is going to be a scientifically interesting propositional attitude psychology. (Fodor, 1986a, p. 9)

Fodor advocates the view that scientific psychology will vindicate commonsense psychology. The problem is to understand how a science of the mental can explain what we do without replacing or reducing our common sense concepts. Fodor aims to steer between these two dangers – reduction (not likely) or replacement (not palatable) – by insisting that the causal laws of belief-desire psychology that capture intentional generalizations about individuals have to be explained by the computational and syntactic laws that implement them. The difference between the intentional and computational laws is not a matter of levels of description but levels of organisation within a creature. The computational laws govern the mechanisms that mediate the connections between intentional causes and their behavioural effects. They are many and varied, so no strict reduction of the intentional laws is possible. However, the lack of reduction doesn't loosen all connection between the levels, for although there are intentional causal laws that explain why we think and act as we do, we are always entitled to ask, for any such law, how those causes bring about those effects in that individual.
The special sciences

This strategy is a general feature of the special sciences,5 according to Fodor. For any special science law L which says that Fs cause Gs we can always ask why they do, and so long as the law in question is not a basic law of physics, there will always be some further explanation of it in terms of the properties of F-instantiations that render them sufficient to bring about something with the properties of G-instantiations. In the case of intentional psychology, the mediating mechanisms that ensure that states with certain intentional contents bring about certain behavioural effects relate to mental representations and work according to computational operations defined over them. The reason that law-like explanations in intentional psychology hold is ultimately a matter of underlying computational laws, though we can expect different laws (and different mechanisms) to be operative in different thinkers. All that is required is that there be some computational account of the mediating mechanisms that explain why those intentional states engender those behavioural effects in the thinker. So a law of intentional psychology applies to an individual thinker in virtue of another law (a computational law) that implements the higher law, and therefore shows why those law-like connections are sustained in the thinker. Unlike Dennett, Fodor takes the laws of intentional psychology that govern the relations of propositional attitudes to one another and to behaviour to be true of us in virtue of our in-head states. Propositional attitudes like beliefs and desires are inner states with semantic and causal powers. For Fodor, psychological attitudes like believing and desiring are computational relations to mental representations with propositional contents. The computational laws that implement the laws of intentional psychology govern the causal connections among mental representations, keeping the causal relations in step with relations of content between those mental representations. Hence, when mental representation A entails mental representation B, the computational laws should ensure that a tokening of A brings about a tokening of B, thereby showing why law-like connections between psychological attitudes involving A and psychological attitudes involving B are sustained in creatures like us. To sustain a psychological law one has to have computationally organized innards that mediate the connections between mental states in a way that respects the relations of content between those states. Fodor wants his story about folk psychology to be based on three key assumptions:

(1) The laws of folk psychology are intentional laws that cite contents.
(2) The content (semantics) of intentional states is informational and external.
(3) Laws of intentional psychology are implemented by computational laws (mechanisms).

Are they compatible? First, it is a consequence of Fodor's implementation story that semantics doesn't cross implementation boundaries. This is because the computational laws of mental processing that implement the higher level intentional laws are purely syntactic. Yet to preserve what needs to be explained, and to secure the widest application to thinkers, we have to frame our generalizations in content-using terms at the intentional level (2). What do all thinkers who share a particular belief have to have in common? They are not required to be functionally identical: to make just the same inferences or to have just the same accompanying states. This is extremely unlikely in all but molecule-for-molecule duplicates (Twin-Earth twins). So what else do they share? Pretty much, they will be subsumed by the same intentional psychological laws (1) (i.e. they all satisfy the same causal regularities picked out by an intentional covering law). But what makes this true? Fodor's answer will be different for different thinkers, since the vast disjunction of intervening states of mind and cognitive computational mechanisms are, for him, typically quite heterogeneous. He can agree with Dennett that there is no reason to suppose possessors of particular beliefs must be 'ultimately in some structurally similar internal condition' (Dennett, 1987a, p. 55, italics mine) narrowly construed; though Fodor has independent reasons for thinking they must each be in some internally structured condition. This is Fodor's commitment to mental representations whose syntactic structures follow the conceptual contours of the content of those representations: the language of thought hypothesis. But his appeal to a language of thought doesn't require thinkers to share type-identical mental representations; it is enough that each thinker have some syntactically structured vehicle to play its content-bearing role. The commonalities among thinkers are to be found not at the level of computational psychology, but rather at the higher intentional level. All that such thinkers share is the property of satisfying the same laws of intentional psychology; what it is to satisfy this property is explained differently for different thinkers. Thus Fodor's argument for a content-using psychology rests on the claim that intentional psychological laws remain indispensable in securing generalizations across individuals who may have no physical or computational states in common. Many different internal mechanisms serve to implement the same higher level laws, but all that unifies these diverse creatures is the intentional generalizations – the intentional causal laws – they fall under.
So the scientifically interesting commonalities for psychology are to be found not at the level of computational psychology, the level addressed by scientific psychology, but at the highest and relatively observational 'level of description at which mental states are represented as having intentional content': the level at which we predict behaviour by finding generalizations across individuals such as when they have particular beliefs and particular desires, they will do such and such, ceteris paribus. Folk psychology is to be vindicated by science by simply treating the laws of intentional psychology as (special) scientific laws. It is only in so far as there are intentional causal laws that there can be a science of psychology at all. But then if vindication is not achieved by recourse to the levels below, in what sense is vindication really illuminating? Fodor will say that there has to be some grounding of the higher level in the levels below. For the laws of intentional psychology, treated as scientific laws, albeit ceteris paribus laws, have to be implemented computationally. There are intentional laws because there are mechanisms in nature that sustain (implement) those laws in creatures like us. This is what explains how content-bearing states can have their causal effects on behaviour. But scientific vindication comes at the higher level, and not at the level of computational psychology with its Fodorian commitment to computational operations defined over mental representations (sentences in the language of thought). This is a distinct level of scientific theorizing, thinks Fodor, with its own motivations. We are left with an unexplained mystery: why should creatures with such different innards all satisfy the same intentional laws? On Fodor's official story, our intentional generalizations quantify over a variety of computational mechanisms that sustain law-like behaviour. So although a belief of a given type will have to be a computational state of an agent, it is a mistake to think that for any such belief there is one kind of computational state it has to be. Thinkers with different internal organizations will still be answerable to the same higher level psychological laws, which means that laws that capture those generalizations cannot be formulated at the computational level. However, Fodor does think that creatures with very different sets of beliefs and desires will still be subject to the same intentional generalizations and so to the same psychological laws. So commonalities preserve the claim that one is thereby picking out genuine properties of individuals that generalize across cases. Moreover, what they must have in common to fall under the same intentional taxonomy is a matter for science to decide, not a priori philosophy. The urge to secure the scientific status of propositional attitude psychology may lead to revision of what Fodor calls granny psychology: the everyday use of belief and desire psychology, which is pragmatic, context-sensitive and vague. These aspects will not affect the causally explanatory laws of intentional psychology, whose terms have been adapted to fit the generalizations we can explain in computational terms, though these features may cause problems for it.
Two big problems are Frege cases, where someone has two representations with the same content that each have different behavioural effects (e.g. 'Oedipus' mother' and 'Jocasta' for Oedipus), and twin cases, where one acts similarly with respect to mental representations with different contents (e.g. Elm/Beech, or H2O and XYZ). Fodor's answer is that both these cases mostly don't occur, and when they do we can explain what has gone wrong. In Frege cases, the different syntactic forms of the representations explain how they affect computational mechanisms differently: semantics and syntax come apart. Semantics and syntax march in step in twin cases, but because of oddities of the environment, distinct representations with different contents trigger the same computational mechanisms, which is extremely rare. There will be minor revisions in our everyday intentional idiom but not massive revision, or else the laws of intentional psychology would not work as well as they do, nor would the computational mechanisms operating in us really be implementations of those laws and thus sustain intentional regularities. In other words, Fodor is holding out for the claim – a hostage to empirical fortune – that there is a class of computational syntactic processes that makes up a natural domain for psychological explanation. This will be the domain for creatures like us who satisfy the same intentional generalizations we do, and the relation between the computational underpinnings and the level of intentional states, though contingent, will be reliable and explicable. Certainly, all believers and desirers have this much in common: they must all have mental representations which are syntactic vehicles for the contents of those states. This is the independently motivated claim for the language of thought, which does generalize across thinkers however different their beliefs and desires. So regardless of their specific mental states, and the particular psychological explanations we give of their behaviour, they will all satisfy the same general explanatory patterns of acting in such a way as to do what they believe will secure for them their fondest wish. And the claim that the internal mediating mechanisms of thinkers can differ from individual to individual will still have to leave room for the claim that they have enough overlap to ensure that the syntax of their mental representations, whatever they are, mediates the same broad behavioural similarities among thinkers:

The syntax of the mental representations which have the facts that P in their causal histories [and so are about P] tend to overlap in ways that support robust behavioural similarities among P-believers. (Fodor, 1994, p. 53, italics and brackets mine.)

The trick is to find in each of us mentalese sentences which are the causes 'of the sorts of behavioural proclivities that the laws of psychology say that P-believers share' (Fodor, 1994, p. 54). For if no such overlapping set of causes exists it is hard to see what relation of constraint between the levels Fodor's view imposes. And if it turned out to be none at all, we should have left Fodor no better off for a story about the relationship between folk psychology and scientific psychology than Dennett.
On the other hand, a Fodorian view which found commonalities, that is to say overlap, among the heterogeneous class of syntactic vehicles of thought would challenge Dennett's picture by foisting on him a language of thought among the otherwise diverse sub-systems of thinkers, a language of thought story which Fodor argues is essential in order to have conceptually organised thoughts – the objects of intentional psychological attitudes. Claims to intentional realism rest on a detailed working out of this issue.
Folk Psychology and the Full Extent of Mental Life

Even if we succeed in providing a philosophically satisfying characterization of our common sense psychology in line with the key requirements given above, several further questions arise:

(a) How do we succeed in ascribing beliefs and desires to one another?
(b) What role does consciousness play in a conception of the mind framed in terms of belief-desire psychology?
(c) What role is there for including the emotions in belief-desire psychology?

The issue in (a) concerns not what makes the ascription of a belief to an individual true, but rather how we go about ascribing beliefs and other attitudes in the first place. What equips us to make such ascriptions? What provides us with the means to attribute mental states to other creatures? We shall briefly look at two empirically different accounts of how we attribute mental states to one another, namely the theory-theory and the simulation accounts of our theory of mind. So far as (b) is concerned, consciousness has had little if anything to do with the issues we have been discussing so far. It is a central aspect of the human mind and a full and final theory of human psychology will have to account for it, but it plays a much less prominent role in our folk psychology. We know very little about the conscious character or experiences of other people and yet we are still adept at figuring out their reasons for action. Finally, under (c), we surely need to accommodate the emotions as part of folk psychology. After all, in our everyday dealings with one another, we use a wide repertoire of emotional terms such as anger, fear, humiliation, jealousy, joy, happiness, envy, longing, and loneliness to help explain and predict one another's behaviour. And yet, when we turn to philosophical descriptions of common sense psychology, we have, until very recently, seen little or no acknowledgement of the role emotions play in our mental lives. Can a psychology that rationalizes behaviour in terms of beliefs, desires and intentions leave room for emotions to play more than a merely disruptive role as 'sand in the mechanism'? Let us end by looking at each of these issues in turn.
Our Mentalizing Abilities

How do we make ascriptions of propositional attitudes to one another? Although Davidson and Dennett have little or nothing to say about this, Fodor assumes that folk psychology is a theory that we (perhaps unconsciously) deploy by having implicit, perhaps innate, knowledge of it. This view of our mentalizing about others' states of mind is called the theory-theory. Notice, however, that the theory-theory yields no insight into how we attribute mental states to ourselves. For we surely don't need to theorise about our own behaviour in order to know what we are currently thinking. An alternative view is that when we attribute mental states to others we start from the first-person perspective and use ourselves and knowledge of our own psychology to simulate the mental states of others. This is called simulation or simulation theory.

(O) I observe you doing F.
(S) I consider what mental states M I would be in were I to do F.
(Sim) I attribute mental states M to you when you do F.

We judge what mental states would give us reason to behave in a certain way and then use our own simulated mental states to understand others' mental states. To simulate what someone might do, or think, we imagine ourselves in their shoes, see what we would do, think or feel in those imagined circumstances, and use our own reactions to understand them. We take off-line simulated inputs, note our reactions and read off others' mental states from our own. However, our mind-reading abilities also have to take into consideration the ways other people differ from us if we are going to simulate them accurately. We need to adjust for what we know about those people and what they are like (e.g. fearful, hasty, excitable, etc.). But to do this don't we need to appeal to a general theory of mind to make the relevant adjustments, and doesn't this simply bring back in the theory-theory? For this reason, some have suggested that we need a mixed system of simulation and theory-theory (see Davies and Stone, 1995). Notice, too, that while simulation gives central and prior place to the first-person perspective, it offers no account of our knowledge of our own minds; it merely takes it for granted.
Mirror-neurons and direct perception of others' mental states

Some have objected that we do not need to theorize about others to know what they are up to and to know their mental states. The mirror neuron system in humans and monkeys activates the same populations of neurons in the pre-motor
cortex when performing an action or observing someone else performing the same action. Such representation occurs before and below full mentalizing, and does not provide knowledge of someone's intentions in acting, but the mirror system does code for the goal of a movement performed by self or other, and thus enables a creature to recognize not just bodily movements but motor goals in behaviour. The Davidsonian interpreter tells us nothing about how we go from observing mere behaviour to choosing the intentional states to explain that behaviour as action. The mirror system may explain, in part, how to bridge that gap. Arguably, without this level of neural matching of another's goal with one's own potential for goal-directed movement, we could not even identify in someone's behaviour the targets for intentional explanation (see Gallese, 2006). Others believe that instead of theorizing about others' mental states we have direct awareness of others' minds. Following Wittgenstein we might say that we see someone's mind in their face (see Gallagher, 2008). How we do this is not clear. Are mental states visible, or just the expressions of mental states? Are the psychological states we 'see' read off or read into the face and bodily movements of others? Are psychological states something we see in the face of others? Knowledge of other minds may not be inferential. We may see expressions of mindedness as sorrow, or pensiveness, or desire. But seeing-as when perceiving expressions of mind may not amount to direct – unmediated – perception of minds. Besides, we also predict and explain what people do when we are not perceiving them, so the mental states we are said to see must be integrated into a rational structure of intentional states that explains more than perceptual appearances, and so a further story about our conception of the mind will take us beyond its perceptible aspects.
Consciousness

When we talk about folk psychological concepts like belief and desire – concepts with both first and third person applications – all mention of consciousness is missing. Davidson makes no use of it, nor of perceptual experience. Experience does not give grounds for believing what we do. Instead, he argues that the only thing that can justify a belief is another belief. And the only things that can bring about beliefs in non-rational ways are the causal impacts the environment has on us: 'sensation plays a crucial role in the causal process that connects beliefs and the world' (Davidson, 'The Myth of the Subjective' in his 2001, p. 46). When we think about what we see we form beliefs about what we are looking at. So Davidson pays no attention to consciousness. No doubt we have it, but it does not play an important epistemological or metaphysical role in characterizing mental states. We know a lot about other people's minds while knowing little or nothing about their conscious experiences.
Similarly, Fodor takes consciousness to be a mystery about which, he says, we know precisely nothing. We can give naturalistic accounts of intentional content and of the causal (computational) powers of such intentional states. But we can say nothing so far about what consciousness is or about what gives rise to it. Consciousness is tackled by Dennett alone, but his interpretative, third-personal stance means he struggles to accommodate it. Heterophenomenology replaces phenomenology as it features in the familiar picture of mental life. In the familiar picture we have three stages: brain states give rise to experiences and, following the appearance of those experiences in consciousness, subjects report on them. Dennett claims we can do without the middle level and simply settle for brain states giving rise to reports on supposedly pre-existing experiences. What is more, Dennett provides evidence against the existence of any fixed facts about the middle level, arguing that there is no fixed place or time where all the sensory inputs come together in the brain to create a conscious experience. Instead, subjects offer reports about experience, and the content of these reports will change depending on timing: the precise point at which one asks the subject to make a reflective judgement. We collect these different judgements, not necessarily aware of how they may conflict. This leads Dennett to his multiple drafts model of consciousness (see Dennett, 1991a).
Emotions

How do emotions, conceived as primitive feelings, fit into our everyday (rational) psychology? Three strategies suggest themselves:

(a) Minimal accommodation: emotions play no real role in psychological explanations but are just disruptive accompaniments (like sand in the mechanism).
(b) Less minimal accommodation: reduce them to, or re-construct them from, existing categories in the mental inventory (as sensations or perceptions, beliefs, desires, a mixture of the two, or as uncompleted actions; see Prinz, 2004 for a perceptual treatment).
(c) Full accommodation: emotions belong in a separate category of sui generis mental states which play a proper role in everyday psychological explanations. (Requires revision to folk psychology; see Elster, 2003.)

Option (a) has it that emotions play no real role in explaining others. The virtue of this strategy is that it would cause minimal mutilation to existing philosophical accounts of everyday or folk psychology. But it neglects the role
of emotions in explaining others and ourselves and giving us knowledge of the world and others. The most conservative version of option (b) would invoke the analogy between emotions and sensations. Emotions, like sensations, are both objects of knowledge and sources of knowledge: they are narcissistic in telling us something about ourselves and outward looking in telling us something about the world beyond; they have a characteristic felt quality; they are episodic (a flash of anger, a pang of grief, a feeling of euphoria); we undergo rather than undertake them; they are an accompaniment to other (rational) mental states. (For more on these options, see Smith, 2002b.) Are emotions just sensations? No; emotions can be dispositional as well as occurrent; they need not announce their presence in the mind in order to exist (cf. exceptions such as humiliation, joy, elation). Besides, the attempt to identify emotions and sensations is based on a false view of both that treats them as private and ineffable. But how would we identify such states in ourselves or others? Being in a mental state is one thing: knowing which state one is in is another. On what basis do subjects classify their privately felt sensations or emotions? What generates the phenomenological taxonomy? And how is it related to the taxonomy for others? There is a need once again to acknowledge both the first and third personal dimensions of the mental, while recognising the differences. We need an account of the emotions that respects the tie between the first and third person perspectives on the mind. We need option (c): emotions are a separate class of mental states, though they enjoy close connections with almost all other categories of mental state. This makes them central and indispensable to folk psychology. (The folk always knew this from common sense: common sense is one thing; the philosopher's view of common sense is another.) How do emotions relate to, or impact on, other kinds of mental states? One view is to treat them as motivating states with expressive or attitudinal content but no rational dimension. However, a cognitivist view of the emotions sees them fitting into larger patterns of rationality in the mind. Particular emotions are appropriate or inappropriate, they are proportionate or not, they are to be trusted or doubted. Many of one's own assessments of the emotions assume they are guides to action, judgements of a sort, and so can count as evaluations. This makes them centrally important to a well-balanced mental outlook (see Wollheim, 2005). They can sponsor and be sponsored by other propositional attitude states: what I believe, what I want, and so on. They can be mental dispositions but also occurrent phenomena. They play long-term roles in our lives and short-term disruptive roles too. But perhaps we should make a distinction here between one's deeper feelings and one's simply being emotional. We still need a good account of how emotions fit into an overall commonsense psychology by means of which we attempt to understand ourselves and others as creatures with minds, as perhaps the only creatures who can understand themselves and recognize the minds of others.
We have seen that folk psychological explanations appear to float free of any appeal to the underlying cognitive states and processes that sustain our capacities for perception, action and language. And yet a person's thoughts, wants and wishes are not entirely independent of the perceptual, cognitive and affective states they are in. So there is scope both for a re-examination of the way philosophers characterize folk psychology and for a re-examination of what people really do appeal to or depend on in making ascriptions of mental states to others. Additionally, we need not contemplate seriously the eliminativist option in trying to reconcile the lived experience of our inner lives with the findings of neurobiology and neuroscience, for these disciplines too must make sense of the experience, thoughts and reflections of subjects at the personal level. They may cast light on why our experience has the form and character it does, and on what happens when the underlying mechanisms break down, but they cannot dispense with the level that fixes on the mental states they are interested in explaining. Thus a non-reductive cognitive neuroscience and a non-reduced but rich and detailed folk psychology must eventually be aligned.
7
Internalism and Externalism in Mind Sarah Sawyer
Internalism and Externalism: The Basics

The topic of this chapter is the individuation conditions of psychological properties.1 There are two opposing views: internalism and externalism. According to the former – also known as individualism – psychological properties are individualistically individuated, which is to say that their instantiation by an individual depends entirely on the individual's intrinsic physical make-up. According to the latter – also known as anti-individualism – psychological properties are anti-individualistically individuated, which is to say that their instantiation by an individual depends not only on the individual's intrinsic physical make-up, but in addition on objective relations she bears to objective properties in her environment. If a psychological property is individualistic, then its associated content is said to be narrow; if it is anti-individualistic, then its associated content is said to be broad. Internalism, then, is the view that psychological properties supervene locally on physical properties: no two individuals could differ psychologically without differing in some intrinsic physical respect. Externalism rejects this local supervenience thesis, maintaining in contrast that individuals could be exactly alike with respect to their intrinsic physical properties and yet differ psychologically – if, for instance, they were related to relevantly different environments. Both internalism and externalism are consistent with global psycho-physical supervenience, the claim that no two worlds could differ psychologically without differing physically. Local supervenience entails global supervenience (since worlds can be construed as individuals), but not vice versa. The local supervenience thesis is, therefore, the stronger claim, and the question of its truth lies at the heart of the internalism/externalism debate.2 This chapter provides an overview of the prevailing issues concerning the debate. In the second section I distinguish various kinds of externalism and outline some considerations in their favour. In the third section I discuss various forms of internalism. In the fourth section I deal with metaphysical considerations concerning naturalism and mental causation that have motivated internalism
and been thought to tell against externalism. In the fifth section I deal with epistemological considerations concerning the direct, non-empirical, authoritative nature of self-knowledge that have been thought to tell against externalism. I then conclude briefly in the sixth section.
Kinds of Externalism

Different considerations are thought to be relevant to the individuation conditions of different kinds of psychological property. Consequently, one might embrace externalism for certain kinds of psychological property but internalism for others. In this section I introduce a number of considerations that favour externalism and catalogue various resulting kinds of externalism. The kinds of externalism fall into two broad camps: externalism about concepts expressed by predicative terms, which I will call predicative externalism, and externalism about concepts expressed by singular terms, which I will call singular externalism.
Predicative externalism

The most widely recognised consideration in favour of externalism emerges from reflection on counterfactual scenarios in which a subject's intrinsic physical make-up is hypothesized to remain constant while the broader physical environment in which she is embedded is hypothesized to differ.3 Such an environmental difference, it is urged, would be responsible for a difference in the subject's psychological states precisely because non-intentional causal relations to objective properties in one's environment partly determine what one can represent in thought. For example, a subject S, related in the right kind of non-intentional way to silver, might have various thoughts involving the concept silver, such as that silver jewellery is cheaper than gold jewellery. She may be unable to distinguish (either practically or theoretically) various other actual or possible metals from silver and may well acknowledge this. Nevertheless, she possesses the concept silver because she is related in the right kind of way to silver, and hence can think various things about silver by means of that concept. Now suppose S had lived in different circumstances, circumstances in which there was no silver for her to be related to either directly (via perception) or indirectly (via other people). In such a situation S would be unable to think about silver as such because there would be nothing to ground her possession of the concept silver. How could she have acquired the concept? Suppose instead that she had been related to one of the actual or possible metals that she is unable to distinguish from silver. Call this metal 'twilver'. In such circumstances,
S would have been related to twilver in just the same way as she is actually related to silver, and hence it is plausibly the concept twilver that S would have acquired. Consequently, where S thinks that silver jewellery is cheaper than gold jewellery, counterfactual S thinks instead that twilver jewellery is cheaper than gold jewellery. The difference in representational content between the belief S has and the belief S would have lies in the difference between the objective properties to which she is related (silver) and would be related (twilver) respectively. If these considerations are persuasive, then what determines the representational content of a subject's beliefs goes beyond her intrinsic physical make-up and her discriminative capacities (which are hypothesized to be identical in the actual and the counterfactual scenarios alike) and depends in addition on the objective properties to which she is related.4 This kind of thought experiment is taken by many to establish externalism specifically with respect to natural kind concepts: concepts that 'carve nature at its joints' and feature in the true final set of scientific theories: concepts such as (perhaps) quark, electron, hydrogen, water, heart, tiger, planet.5 However, reflection on two further kinds of counterfactual scenario favours a more general externalism. The first draws upon the possibility of incomplete linguistic understanding;6 the second draws upon the possibility of non-standard theory.7 I outline each in turn. First suppose that a subject S has a wide range of ordinary beliefs attributable by means of the term 'game': she believes that some games are more fun than others, that chess is a game, that children like party games, and so on. However, she believes in addition (and mistakenly) that games must involve at least two people, a point she would readily accept correction on if her mistake were pointed out to her. Next consider a counterfactual scenario in which her intrinsic physical make-up is hypothesized to remain constant while her linguistic community is hypothesized to differ. In the counterfactual scenario the term 'game' is defined and standardly used to apply to games that involve at least two people. Since 'game' and 'game involving at least two people' mean different things, the word-form 'game' in the counterfactual scenario has a different meaning than it does in the actual situation. In like fashion, the concept expressed by the word-form differs in the actual and the counterfactual situations. In the actual situation the word-form 'game' expresses the concept game and includes in its extension games such as solitaire and patience. In the counterfactual situation, in contrast, the word-form 'game' expresses a different concept that does not include in its extension either solitaire or patience. Consequently, S may in fact believe that pass the parcel is a game, but had she been a member of the counterfactual linguistic community she would have possessed a distinct concept, believing instead that pass the parcel is a 'shgame', say. Once again, S's intrinsic physical make-up and classificatory capacities are identical in the actual and the counterfactual situations,
and the difference in representational content between the belief she has and the belief she would have lies beyond her intrinsic physical properties, this time anchored by the classificatory practices of the wider linguistic community of which she is and takes herself to be a part. Behind this counterfactual scenario lies a certain understanding of linguistic meaning according to which the conventional linguistic meaning of a term (roughly its dictionary definition) is a complex abstraction from communal rather than individual use. Linguistic meaning is determined by actual and possible agreement among the most competent users, where the most competent users are those to whom others do and would defer if a question about an individual's use were to arise. On this view, understanding the meaning of a word is not an all-or-nothing thing, but rather comes in degrees. And it is the possibility of understanding a word incompletely that allows for the difference in linguistic meaning in the actual and the counterfactual situations to be consistent with there being no difference in intrinsic physical make-up between actual and counterfactual S. The difference in linguistic meaning is then taken to imply a difference in concept expressed. The final consideration that favours a general externalism trades on the fact that even a subject with a full understanding of the linguistic meaning of a term can doubt whether the dictionary definition that reflects that meaning correctly characterizes the things referred to by that term. Thus suppose a subject S has a full understanding of the term 'sofa' and yet comes to wonder whether sofas are really religious artefacts and not pieces of furniture made for sitting. Her proposed theory about sofas is false, but this need not compromise either her full understanding of the term 'sofa' or her ability to think with the concept sofa; rather, it reflects a strange view about the nature of sofas thought of as such. Now hypothesize a counterfactual situation in which S's false theory is standard and true of a different yet superficially indistinguishable class of entities (call them 'safos').8 The linguistic meaning of the term 'sofa' in the actual situation differs from the linguistic meaning of the term 'sofa' in the counterfactual situation even though the entities referred to are superficially indistinguishable. This is because the actual linguistic community and the counterfactual linguistic community have agreed upon different characterizations of the relevant entities. Moreover, the concept expressed by the term differs in the two situations because the entities referred to differ: in the actual situation they are sofas (pieces of furniture made for sitting), whereas in the counterfactual situation they are safos (religious artefacts). Consequently, while actual S believes that sofas are religious artefacts, counterfactual S believes that safos are religious artefacts. Behind this counterfactual scenario is a certain understanding of the difference between the linguistic meaning of a term and the concept expressed by that term. The linguistic meaning of a term goes beyond individual use and is
grounded in communal use, as mentioned above. Communal use may well change over time, and hence the linguistic meaning of a term may well change over time. (Dictionaries are plausibly updated in part to reflect such changes in linguistic meaning.) But the concept expressed by a term may well remain unaltered even while the linguistic meaning of that term changes. This will happen, for instance, when entities of a given kind are identified through perception and then characterized. The concept will be anchored to the entities through perception, whereas the linguistic meaning will reflect received views about the entities, and this characterization may well need updating as investigation proceeds and even while the concept remains unchanged. It is the fact that we can be mistaken in our characterizations of the things we perceive that allows for non-standard theory to be entertained, and this in turn grounds a general form of externalism. In the sofa/safo case, of course, S’s theory would prove false under empirical tests and hence would not lead to a change in linguistic meaning. Cases where proposed theories are adopted, however, would lead to corresponding changes in linguistic meaning. This is what allows us to make sense of genuine theoretical disagreement about a class of entities thought about by means of the same concept, and grounds constancy of reference through theory change. Thus far I have distinguished two broad kinds of externalism: natural kind externalism, based on noting subjects’ relations to natural kinds; and social externalism, based on the possibility of incomplete linguistic understanding and the possibility of theoretical doubt. Both are kinds of what I have called ‘predicative externalism’ since they concern concepts expressed by predicative terms. Natural kind externalism has gained more support than social externalism, but so long as we take seriously, as we must, the thought that our concepts concern a world about which we can be in error, there is reason to adopt a general predicative externalism.9
Singular externalism

According to singular externalism, the representational content of a subject's thoughts about particulars (singular thoughts) is individuated partly by the particulars those thoughts concern. This is directly analogous to predicative externalism, according to which the representational content of a subject's thoughts about properties is individuated partly by the properties those thoughts concern. There are two main kinds of singular externalism: externalism about thoughts expressed by sentences containing demonstratives; and externalism about thoughts expressed by sentences containing proper names. To take a demonstrative example first, suppose that actual S is looking at a particular apple, A1,
while counterfactual S is looking at a different apple, A2. Suppose further that S and counterfactual S utter the sentence 'That is nutritious'. It is clear that S's utterance (and thought) concerns A1, whereas counterfactual S's utterance (and thought) concerns A2. This is so even if S's intrinsic physical make-up is identical in the two situations. Moreover, S's utterance (and thought) is true if and only if A1 is nutritious, while counterfactual S's utterance (and thought) is true if and only if A2 is nutritious. Crucially, according to singular externalism this difference in truth conditions is due to a difference in representational content. Parallel remarks hold for externalism concerning thoughts expressed by sentences containing proper names. Thus if S utters the sentence 'Danny is interesting', referring to Danny Alpha, with whom she is acquainted, and counterfactual S utters 'Danny is interesting', referring to Danny Beta, with whom she is acquainted, their utterances and thoughts have different truth conditions, and this is consistent with S's intrinsic physical make-up being the same in both the actual and the counterfactual situations. Again, according to singular externalism this difference in truth conditions is due to a difference in representational content. Singular externalism is upheld by a number of people in a number of different ways. According to Gareth Evans and John McDowell, all thoughts are composed of Fregean senses, but singular thoughts contain de re senses which exist only if there is an object to which they refer. Evans and McDowell advocate this kind of singular externalism for all thoughts about particulars, whether the particulars are thought about by means of demonstratives or by means of proper names.10 According to direct reference theorists, in contrast, the thought expressed by a sentence containing a proper name contains not a de re sense of the object named but the very object itself.11 Here again, the existence of the thought depends upon the existence of the object thought about. This view is typically not extended to demonstrative thought, although in principle it could be. A variant of the direct reference theory that accommodates Fregean insights (about different ways of thinking about an object) without countenancing de re senses holds that the thought expressed by a sentence containing a proper name contains the object named together with a mode of presentation of that object, but the implication is the same: the existence of the thought depends upon the existence of the object thought about. This view could also in principle be extended to the demonstrative case. What makes all these views forms of singular externalism is the common claim that the content of a singular thought is object-dependent. Considerations that bear on singular externalism thus far parallel considerations that bear on predicative externalism, as noted at the outset. But the question of the individuation conditions of singular thoughts introduces the
possibility of a distinction between a singular thought and its representational content; and this distinction has no analogue in the predicative case. The distinction opens up the possibility of accepting that the truth conditions of singular thoughts are object-dependent while denying that this is in virtue of a difference in representational content. What results is a theory according to which the representational content of a singular thought is preserved across intrinsic physical duplicates but can be thought of (and hence true or false of) different individuals on different occasions. To take the demonstrative example above, on this view, S and counterfactual S both have a thought the representational content of which is given by the open sentence 'is nutritious'. Actual S thinks this of A1, whereas counterfactual S thinks this of A2. The difference in truth conditions between the thoughts is on this view due to a difference in contextual application rather than representational content. To accept such a distinction between a thought and its content is to embrace a kind of two-factor theory of singular thought according to which the object thought about is a constituent of the thought but is not referred to by a conceptual constituent of the thought. On such a view the object contributes to the truth conditions of a thought concerning it but does not affect its representational content. Hence the view is a form of singular internalism.12 This view has been popular in the demonstrative case, but has gained little support in the proper name case due to the dominance of direct reference theories. However, if one were to accept singular internalism for the demonstrative case and in addition think of singular uses of proper names as involving a demonstrative element, then one would naturally be led to embrace singular internalism for the proper name case too. To take the second example above, a singular use of a name such as 'Danny' is to be understood as involving a demonstrative element and hence as semantically equivalent to 'That Danny', which can be used to refer to different Dannys on different occasions. On this view, S and counterfactual S both have a thought the representational content of which is given by an open sentence something like 'is a Danny and is interesting'. Actual S thinks this of Danny Alpha, whereas counterfactual S thinks this of Danny Beta. The difference in truth conditions between the thoughts is again due to a difference in contextual application rather than representational content.13
Kinds of Internalism

I have already discussed singular internalism above to contrast and clarify singular externalism. Consequently I will confine my discussion in this section to versions of predicative internalism, of which there are four primary forms.
Two kinds of thorough-going internalism

The most straightforward way of being a predicative internalist is to reject outright the interpretation of the counterfactual scenarios taken above to ground externalism. An alternative, internalist interpretation would maintain instead that psychological properties are necessarily preserved across intrinsic physical duplicates precisely because they are and must be grounded in the discriminative capacities and transparent epistemic outlook of the individual.14 An individual's psychological make-up cannot outstrip what that individual can do and how things seem to her, as it were. S and counterfactual S in each of the scenarios have the same capacities to discriminate and classify things, and (in some sense) have the same views about the things they encounter: there is nothing that allows them to distinguish the actual from the counterfactual situation in each case. Consequently, according to thorough-going internalism, there can be no psychological difference between them. One way of upholding the view is to think of the relevant concepts as descriptive, encapsulating the subject's beliefs (or theories) about the things referred to. For instance, the concept both S and counterfactual S express by the term 'silver' might be shiny metal often used to make jewellery and that needs to be polished to be kept clean and . . . The concept they express by the term 'game' might be a kind of activity undertaken for enjoyment, involving at least two people, involving rules in accordance with which you can win or lose; and the concept they express by the term 'sofa' might be religious artefacts that look as if they may be sat upon but . . . Note that if the original concepts (here thought of as descriptive) are to be individualistically individuated, then the concepts used in the descriptions must of course themselves be individualistically individuated. But this kind of descriptivism (which many will view as independently problematic) is not essential to the view. One could instead treat the concepts minimally.15 On this view, the concept both S and counterfactual S express by the term 'silver' is a concept that has in its extension silver, twilver and everything else that S cannot distinguish from them (as it does on the descriptive view). But in order to express the concept we would need to introduce a new term such as 'shmilver'. Similarly, the concept they express by the term 'game' is the concept shgame, which has in its extension games that involve at least two people; and the concept they express by the term 'sofa' is the concept safo, which has in its extension religious artefacts that look like sofas. The subject's discriminative capacities and epistemic outlook here serve to individuate her concepts and hence determine the extensions of those concepts but are not taken up as descriptive elements of the concepts themselves. On both the descriptive and the non-descriptive versions of thorough-going internalism new terms need to be introduced into our language in order to express with accuracy the concepts had by individuals whose beliefs differ from the norm (or, more generally, from our own),
as illustrated by the use of new terms in the examples just given. And on both views, S possesses the same concepts as her counterfactual self in virtue of having the same discriminative capacities and epistemic outlook on the world, but she has different concepts from those in her linguistic community. This stands in marked contrast to predicative externalism, according to which S has different concepts from her counterfactual self but shares many concepts with those in her linguistic community despite varying degrees of understanding and competence which result in a wide variety of discriminative capacities and epistemic outlooks across individuals within that community.
Two kinds of two-factor internalism

The third and fourth kinds of predicative internalism are more complicated. They acknowledge that the counterfactual scenarios outlined establish that S and counterfactual S have different thoughts in some sense, but aim nonetheless to retain a sense of content which is preserved across intrinsic physical duplicates, in order to respect the internalist conception of sameness of epistemic outlook. Both therefore maintain that a thought has a narrow and a broad content and are thus kinds of two-factor theory. According to the first of these, the internal component of a thought (its narrow content) is a function that determines its external component (its broad, truth-conditional content) given a context (an environment).16 Thus when S and counterfactual S utter the sentence 'Silver jewellery is cheaper than gold jewellery', the narrow content of their thoughts is the same, but the broad content of their thoughts differs simply in virtue of their location in different environments: S's thought concerns silver, and is true if and only if silver jewellery is cheaper than gold jewellery; whereas counterfactual S's thought concerns twilver, and is true if and only if twilver jewellery is cheaper than gold jewellery. There are similarities between this two-factor theory of predicative thought and the two-factor theory of singular thought discussed in the Singular Externalism section above. On both views the only form of truth-conditional content is broad. And yet on both views the thoughts of intrinsic physical duplicates share a kind of content even though they have different truth conditions. However, the similarity does not extend beyond the superficial level, and the differences are important. According to the two-factor theory of singular thought, singular thoughts have contents that are intrinsically representational independent of context, and can be applied to (or thought of) different individuals in different circumstances. The two factors involved in a singular thought are first, a content ('is nutritious', say), and second, (potentially) an individual of whom the content is thought (a particular apple, for instance).
The content of a singular thought is not itself divided into a narrow and a broad component. According to the two-factor theory of predicative thought now under consideration, in contrast, the content of a predicative thought is itself divided into a narrow component and a broad component. Crucially, the narrow component is not representational: only the broad component is. The narrow component is a function and can be understood only in terms of its inputs and outputs: that is, only in terms of the broad, truth-conditional content it produces once the individual is situated in a particular environment. This puts pressure on the idea that the narrow component of a predicative thought is properly conceived as a form of content at all. The second kind of two-factor theory of predicative thought is also attracted both by the externalist interpretation of the counterfactual scenarios and by the internalist conception of sameness of epistemic outlook. However, it aims to draw a distinction between broad and narrow content consistent with all content being representational in some sense. On this view, the broad content of a subject's thought is determined, in line with externalist considerations, in part by relations she bears to objective properties in her environment. The narrow content of a thought, on the other hand, is individuated by the epistemic possibilities it allows and excludes.17 The underlying thought here is that intrinsic physical duplicates are in the same epistemic position in the sense that they cannot distinguish between the relevant actual and counterfactual situations and that the narrow content of a thought encapsulates this fact. For example, suppose that S and counterfactual S both utter the sentence 'Silver jewellery is cheaper than gold jewellery'. The thoughts they thereby express have different broad, truth-conditional contents: one concerns silver whereas the other concerns twilver. However, the thoughts are taken to have the same narrow content because a purely qualitative description of a situation in which silver jewellery is cheaper than gold jewellery is identical to a purely qualitative description of a situation in which twilver jewellery is cheaper than gold jewellery. Given the epistemic position of S and counterfactual S, both situations 'verify' their thoughts and hence they share a narrow content. The success of the position clearly depends on the possibility of describing situations in purely qualitative terms – terms not subject to externalist considerations. As such, the position depends upon the truth of a restricted rather than a general form of externalism. If all terms were subject to externalist considerations then there would be no terms available to feature in the qualitative descriptions required to ground this notion of narrow content. Moreover, there is a question about whether it makes sense to think of a thought as having two forms of representational content where only one of these is truth conditional. I have discussed and argued against all four forms of internalism elsewhere and will not repeat the arguments here.18 Instead I now turn to some of the
primary metaphysical and epistemological considerations that surround the internalism/externalism debate.
Metaphysical Considerations

According to externalism, psychological properties do not supervene locally on a subject's intrinsic physical make-up. This throws up two related metaphysical concerns: first, how to retain a naturalistic theory of the mind; and second, how to make sense of mental causation. The two concerns are intimately connected and have provided much of the motivation for internalism. Since the late 1950s and early 1960s, the question of how psychological properties relate to 'lower level' properties – and ultimately to properties of interest to the physical sciences – has dominated discussions in philosophy of mind. The requirement that they must be related in some intimate and significant way has been regarded as crucial to an account of the mind that is scientifically and naturalistically respectable. Type-physicalism, according to which psychological properties are identical to physical properties, clearly satisfies the requirement. However, the postulated identity of psychological properties with physical properties comes under pressure from arguments to the effect that psychological properties are multiply realizable – that individuals in different physical states could nonetheless be in the same mental state. Such arguments have motivated forms of token-physicalism, according to which each token mental state of an individual is identical to or realized by some physical state of that individual, even if the psychological property of which it is an instance is not identical to the physical property of which it is an instance. But the dispute between type-physicalism (of various kinds, including behaviourism) and token-physicalism (of various kinds, including functionalism) takes place within a common theoretical framework: physicalism. Externalism, in contrast, rules out all forms of physicalism.19 The minimal claim of physicalism is that every token psychological state of an individual is either identical to or realized by a token physical state of that individual.20 More specifically, physicalism is defined by its commitment to local psycho-physical supervenience. This makes clear why it is inconsistent with externalism. As such, externalism has been thought to sever the psychological from the physical, and hence to rule out a naturalistic theory of the mind. However, although externalism is inconsistent with physicalism, it is consistent with materialism – the view that every entity is composed of physical matter. This is a weaker doctrine than physicalism, but is strong enough to secure a naturalistically respectable theory of the mind. Since externalism is consistent with materialism, it is consistent with naturalism.
There is, however, a related worry about anti-individualistically individuated properties. In particular, it has been thought that only individualistic properties can be causal properties: that causal powers must be intrinsic. If this is right, then externalism is committed to the claim that psychological properties are not causal properties, which undermines the intuitive and commonly held idea that our beliefs and desires cause our actions.21 The worry here is that although externalism is consistent with naturalism, its understanding of psychological properties renders them insignificant because psychological properties thus conceived would have no causal powers and hence make no difference in the world. Consequently, even if externalism is naturalistically respectable, it does not yield an account of the mind that is scientifically respectable.22 However, the assumption that causal properties must be intrinsic is misguided. Indeed, scientific practice demonstrates that many sciences study patterns of causation involving entities in their normal environment, and the properties to which they appeal in causal explanations are individuated in a way that presupposes such relations between entities and their environment.23 Thus 'astronomy studies the motions of the planets; geology studies land masses on the surface of the Earth; physiology studies hearts or optic fibres in the environment of a larger organism; psychology studies activity involving intentional states in an environment about which those states carry information; the social sciences study patterns of activity among persons' (Burge, 1989, p. 317). Because such properties are individuated with reference to a normal environment, the properties are anti-individualistic; and yet such properties are also individuated with reference to their causal powers, and hence there is no question that they are causal properties. The view that emerges is a view according to which psychological properties are causal and yet fail to supervene on lower level properties. It follows that individuals who are classified as of the same kind from the perspective of one science may be classified as of different kinds from the perspective of another. Thus, for example, two individuals may be exactly similar from the perspective of neuroscience but significantly different from the perspective of psychology because they instantiate the same neurophysiological properties but different psychological properties. This is the case for S and counterfactual S in each of the scenarios described in the Predicative Externalism section. This allows us to identify an error in the internalist's reasoning. Internalists often point out that S and counterfactual S would exhibit the same behaviour non-intentionally described (they would follow exactly similar trajectories through space, exhibit the same speech patterns, classify things in the same way, and so on), which of course is true, but they go on to conclude that S and counterfactual S instantiate the same psychological properties. However, the similarity in behaviour is to be explained by the fact that S and counterfactual S instantiate the same neurophysiological properties and is consistent with their instantiating
different psychological properties. The former may well be individualistically individuated even though the latter are not. Externalism is inconsistent with physicalism. However, it is consistent both with a naturalistic theory of the mind and with the claim that psychological states are causally efficacious.24
Epistemological Considerations

Central to the internalism/externalism debate in the philosophy of mind has been the question of whether externalism is consistent with the intuitive claim that a subject knows what she is thinking in an epistemically privileged way. There are two primary areas of concern. The first is 'the achievement problem' and the second is 'the consequence problem'. I deal with each in turn.25
The achievement problem

According to externalism, what concepts we possess and hence what thoughts we can think depends on contingent, empirical relations we bear to objective properties in our environment. The question then arises, how can we know our thoughts in a direct, non-empirical, authoritative manner when those thoughts depend on our relations to the environment? Imagine that S is periodically switched from the actual situation (in which she is related to silver) to the counterfactual situation (in which she is related to twilver). Suppose further that after each switch she stays long enough to acquire the concept appropriate to the new environment. Under such a hypothesis S will at certain points in time think that silver jewellery is cheaper than gold jewellery, and at other points in time think that twilver jewellery is cheaper than gold jewellery. And yet there would be no break in the continuity of S's life because there would be (by hypothesis) no discernible difference between the two environments. The changes in her environment would pass undetected, and so, crucially, would the changes in her thoughts. On this basis it is argued that S does not know what she thinks in a direct, non-empirical, authoritative manner. Rather, S requires empirical knowledge of her environmental relations in order to know what she thinks. There are various ways one might take the argument: one might conclude that the mere possibility of such switches undermines the direct, non-empirical, authoritative nature of self-knowledge; or one might conclude that the close epistemic possibility of such switches undermines the direct, non-empirical, authoritative nature of self-knowledge; or one might conclude that only actual switching undermines the direct, non-empirical,
authoritative nature of self-knowledge.26 However it is taken, externalists have been broadly uniform in their response, which has two main strands. First, it is pointed out that the concepts available at the second-order level of thought (concepts employed to think about one's first-order thoughts) are determined (in part) by relations to the very same set of environmental conditions that determine the concepts available at the first-order level of thought (concepts employed to think about the world). As such, S could not be in error about her thoughts simply in virtue of the dependence of those thoughts on her environmental relations. S could not, as it were, think she was thinking that silver jewellery is cheaper than gold jewellery but really be thinking that twilver jewellery is cheaper than gold jewellery. This kind of error would involve using the concept twilver at the first-order level while simultaneously using the concept silver at the second-order level. Rather, the same concept (whether it be silver or twilver) would be employed at all levels of thought. Consequently, the kind of threat envisaged is ill-conceived.27 Second, it is pointed out that the (partly environmental) conditions that individuate a thought are presupposed in the thinking of that thought but need not themselves be known in order for it to be known that that is the thought one is thinking. Perceptual knowledge presupposes that certain background conditions obtain (that lighting conditions are reasonable, that one is not hallucinating, and so on), but such background conditions need not be established by the subject before she can be said to know by looking that there is, for instance, an apple on the table in front of her. Similarly, it may be that particular instances of self-knowledge presuppose relations to objective properties in one's environment, but a subject need not know that such relations obtain in order to know what she thinks.28 Indeed, if one had to know the individuating conditions of a thought in order to know one was thinking it, then neither internalism nor externalism would be consistent with the direct, non-empirical, authoritative nature of self-knowledge: we do not have such direct, non-empirical, authoritative knowledge about our environmental relations (as would be required if externalism were true); but we do not have such direct, non-empirical, authoritative knowledge about our intrinsic physical make-up either (as would be required if internalism were true). This merely shows that the demand for such knowledge of individuating conditions is irrelevant to questions about self-knowledge.29 But the achievement problem surfaces again in the guise of the argument from memory.30 Suppose S thinks a second-order thought at t1: I think silver is shiny. Suppose she is then switched from the actual to the counterfactual environment where she remains long enough to acquire the concept twilver. At t2 (some point later), when reflecting on what she thought at t1, she will think with concepts relevant to the counterfactual environment and hence think, it is argued: I thought twilver was shiny. The content of her thought at t2 is
false, since it does not capture the content of her thought at t1. Consequently, S does not know at t2 what she was thinking at t1. This is taken to undermine self-knowledge of externally individuated past thoughts. Moreover, it is argued, self-knowledge of externally individuated current thoughts is also undermined. After all, if S does not know at t2 what she was thinking at t1, and there is no reason to think she has forgotten anything in the interim, there is reason to think she never knew at t1 what she was then thinking. As with the initial argument, there are various ways the argument might be taken depending on whether one thinks mere possibility, close possibility or actuality the relevant epistemic factor. But here two different responses have emerged. According to the first, the argument shows that externalism does undermine the direct, non-empirical, authoritative nature of one's knowledge of one's past thoughts, but it does not show that externalism undermines the direct, non-empirical, authoritative nature of one's knowledge of one's current thoughts.31 This can be made plausible, for instance, by acknowledging a new way in which one might be said to forget something (namely, by being switched between subjectively indistinguishable environments), or alternatively, by maintaining that forgetting is not the only way in which one might fail to know at t2 what one knew at t1 (since one might instead be switched between subjectively indistinguishable environments). The general moral here is that although one may need to rely on empirical considerations to the effect that the environment has remained broadly stable in order to know what one thought in the past, the non-empirical warrant for knowledge of one's current thoughts is not thereby undermined. The disruption, as it were, is confined to knowledge of past thoughts. According to the second line of response, the argument does not show that externalism undermines the direct, non-empirical, authoritative nature of one's knowledge either of one's current or of one's past thoughts.32 This second line of response is bolder and can be made plausible, for instance, by showing how the content of past thoughts can be preserved in memory even across undetectable switches between differing environments. This has been advocated by Burge, who distinguishes substantive event memory, which refers back to earlier events, from preservative memory, the function of which is to hold the contents of thought in place so the subject can determine logical and epistemic relations between them. Preservative memory does not refer back to earlier thinkings, but rather holds contents in place for the purposes of, for instance, critical reasoning.33 While substantive event memory might be undermined by externalism, preservative memory will not be, precisely because preservative memories do not refer back to independent events.34 It is important to note that externalists have not offered a theory of self-knowledge in response to the achievement problem in either of its guises. Rather, they have tried to show how the arguments are misguided. Two things
are clear. First, there is no widely accepted theory of self-knowledge, either internalist or externalist, and work remains to be done here. But, second, the achievement problem does not bring to light any specific difficulties for the externalist. Rather, and perhaps unsurprisingly, externalist theories of self-knowledge and of memory will look rather different from internalist ones.
The consequence problem

The consequence problem emerges once an answer to the achievement problem has been assumed. The problem arises when one combines a non-empirical warrant for the claim that one is thinking a particular thought, with a non-empirical warrant for the claim that thoughts of that kind depend on the environment’s being a particular way, to yield, surprisingly, a non-empirical warrant for a claim about the nature of one’s environment. For example, S might reason as follows: (P1) I think silver is shiny; (P2) if I think silver is shiny then I must be related to silver; therefore (C) I am related to silver. (P1) is an instance of self-knowledge and hence taken to be non-empirically warranted; (P2) is arrived at through philosophical theorising and hence taken to be non-empirically warranted; but then it seems that (C) can be warranted non-empirically, which is generally thought to be wildly implausible. Given the implausibility of having a non-empirical warrant for claims such as (C), it is argued, externalism is inconsistent with the direct, non-empirical, authoritative nature of self-knowledge.35 The consequence problem has generated a vast amount of literature, and three primary externalist responses have emerged. The first concerns the externalist conditional that connects thoughts with the environmental conditions they presuppose. According to this position, conditionals such as (P2) are false because they commit the externalist to a stronger thesis than is either plausible or established by the counterfactual scenarios that support it. And a true externalist conditional, which stated a genuine dependency relation between the thinking of a thought and environmental conditions necessary for it, would, according to this view, be so weak that non-empirical knowledge of its consequent would not be implausible at all. For instance, it might state that in order for S to think that silver is shiny, S would have to be related to some basic kinds of things (but not necessarily to silver).36 The second response abstracts away from questions about the content of the externalist conditional and focuses instead on what is wrong with S’s reasoning even if her reasoning is sound. According to this strategy, arguments such as the one S reasons through are epistemically defective in roughly the same way that a question-begging argument is epistemically defective: neither a
question-begging argument nor an externalist argument of this kind will be persuasive to a subject who doubts the conclusion. According to this position, the non-empirical warrant available at (P1) fails to transmit across the known conditional (P2) to provide a non-empirical warrant for (C), because, roughly, the non-empirical warrant for (P1) is only available on the assumption that the environmental conditions stated in (C) obtain.37 The final position argues, in contrast, that there is nothing epistemically wrong with S’s reasoning. On this view, self-attributions are warranted non-empirically and without the need of a prior warrant for the claim that the environmental conditions that help to individuate the thoughts obtain. Moreover, in the absence of a doubt about whether the environmental conditions obtain, the non-empirical warrant for (P1) remains undefeated and can legitimately transmit via (P2) to (C). The response has met with incredulity, but is, I think, reasonable given two facts. First, a claim is equally open to doubt whether it is warranted empirically or non-empirically. This is important because it separates the question of whether there is anything epistemically defective with S’s reasoning from the question of whether S’s conclusion can be deployed in a straightforward argument against the sceptic. S’s reasoning is epistemically legitimate, but does not refute external world scepticism. Second (although I have not argued this here), externally individuated thoughts depend not only upon the existence of relations between the thinking subject and objective properties in her environment, but on the subject’s knowledge of such relations. Consequently, while externalist arguments of this kind can provide a subject with non-empirical warrants for claims about her environmental relations, the subject will already have empirical warrants for such claims.38, 39
Conclusion

In this chapter I have tried to provide a relatively comprehensive overview of the internalism/externalism debate in the philosophy of mind and its implications. However, it should be clear where my allegiance lies. In the predicative case I find the considerations that motivate externalism persuasive, the theoretical gains of externalism significant, and the considerations against externalism misguided. On the metaphysical side, externalism is consistent with a naturalistic theory of the mind and with the claim that mental states are the causes of our actions. On the epistemological side, externalism is consistent with the direct, non-empirical, authoritative nature of self-knowledge, and in addition has the potential to ground an adequate theory of justification.40 In the singular case, however, a two-factor theory strikes me as theoretically superior
for thoughts expressed both by sentences containing demonstratives and by sentences containing proper names. This yields a theory according to which names and demonstratives do not function in the same way as predicates, whether in language or in thought: our fundamental contact with the world is demonstrative, which itself grounds an externalist theory of the mind.
8
The Philosophies of Cognitive Science1 Margaret A. Boden
Cognitive Science and Pluralism

There’s no such thing as the philosophy of cognitive science. Rather, there are competing philosophies of – and within – the field. That’s partly because the concepts and techniques of artificial intelligence and artificial life have been changed and enriched since the 1940s. For although psychology is the thematic heart of cognitive science, its intellectual heart is AI/A-Life – or AI, for short. Psychology (both animal and human) is the thematic heart because cognitive science studies all aspects of the mind or mind/brain, or, if you prefer, embodied experience and behaviour. It ranges from low-level vision to enculturated thought, from infantile development to adult personality, and from individual behaviour to social phenomena. It investigates not only cognition, but emotion and motivation too. So the field is badly named: outsiders are often misled, assuming that it deals only with cognition. Cognitive science differs from other forms of psychology in using computational concepts of various kinds. Very broadly speaking, these fall into two main types: formalist/symbolic and connectionist/dynamical. Much of the philosophical interest lies in the differences between these approaches. Some ‘computational’ concepts, in the broad sense intended here, denote formal computations on symbolic representations. These typify classical AI, or GOFAI, Good Old-Fashioned AI (Haugeland, 1985, p. 112). Others draw on cybernetic ideas about embodied and self-organizing systems. These include situated robotics, wherein the robots rely on direct ‘reflex’ responses to environmental cues; dynamical systems understood in terms of physical laws; and self-equilibrating neural networks. And all approaches sometimes include the sort of ‘computation’ (mutation and natural selection) that’s effected by evolution. Cognitive scientists often express their theories as computer models, because this is the best way of testing their coherence and implications. (Testing for their truth, of course, involves comparisons with the actual phenomena.) The computational concepts implemented in such models are substantive theoretical
terms. That is, the mind and the brain are theorized as mechanisms that actually carry out computations of one kind and/or another. Put another way, the mind is conceptualized as what computer scientists call a virtual machine, defined in abstract (computational) terms but implemented in the brain. The varieties of computation mentioned above are best suited for different tasks. All of them (and more) will be needed for an adequate account of the complex virtual machine which is the human mind (Sloman, 2000; Minsky, 1985, 2006). Some interesting hybrid systems have already been implemented, in which psychological phenomena are modelled by a combination of GOFAI and connectionist methods (Norman and Shallice, 1986; Cooper et al., 1995). However, the various techniques are often wrongly assumed to be mutually exclusive (e.g. Dreyfus and Dreyfus, 1988). That’s part of the reason for the multiplication of philosophies of cognitive science. Another reason for this pluralism is the fundamental divide between analytic and continental philosophy. Almost all the early philosophers of cognitive science came from the analytic community, and were committed to functionalism. Philosophical traditions such as phenomenology were scarcely mentioned, although Hubert Dreyfus (1967, 1972) was an important exception. Recently, there have been attempts to combine these two traditions, or even to forsake one for the other; see the Embodiment, Enactiveness, and Phenomenology section below.
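To make the contrast between the two broad families of computational concepts more concrete, the following minimal sketch (in Python) is purely illustrative: the rule, the cue and the threshold are invented for the example and are not drawn from any system cited in this chapter. The first function applies a GOFAI-style rule to an explicit symbolic world model; the second maps a sensed cue directly onto an action, in the spirit of situated robotics.

# Illustrative only: a toy contrast between a formal/symbolic step and a
# situated 'reflex' step. Names, rules and thresholds are hypothetical.

def symbolic_step(world_model):
    """GOFAI-style: inspect an explicit symbolic representation, apply a rule."""
    if ("obstacle", "ahead") in world_model:
        return "plan-detour"   # IF obstacle ahead THEN plan a detour
    return "advance"

def reflex_step(proximity_reading):
    """Situated-robotics-style: no stored world model, just a cue-to-action mapping."""
    return "turn" if proximity_reading > 0.8 else "advance"

print(symbolic_step({("obstacle", "ahead")}))   # -> plan-detour
print(reflex_step(0.9))                         # -> turn

A hybrid system of the kind mentioned above would simply let processes of both sorts contribute to the control of one and the same piece of behaviour.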
The Fate of Functionalism

Functionalism analyses mental states as causal-computational functions: internal representations and information-processes, which interact with each other and mediate between input and output. These were defined by Hilary Putnam (1960, 1967) in terms of Turing-computation, the mind being glossed as the program implemented in the brain. This implies multiple realizability, because definitions of abstract functions say nothing about the details of implementation and because a given program can be realized on many different computers. In other words, philosophers – and theoretical psychologists, too – can ignore the brain. Putnam was the first professional philosopher to recommend functionalism as the basis for a computational research programme in psychology. But it had been outlined much earlier in Kenneth Craik’s (1943) work on cerebral models (see the Varieties of Representation section below) and in the seminal paper of cognitive science (McCulloch and Pitts, 1943). Moreover, Allen Newell and Herbert Simon had already implemented a theory of problem-solving which, they insisted, in no way depended on how neural mechanisms realize information processing in the brain (Newell et al., 1958). A few years later (and
independently of Putnam), functionalist philosophies were developed at length by Newell and Simon (1972, 1976), myself (Boden, 1965, 1970, 1972), and Aaron Sloman (1978). The general thesis of functionalism is still widely (though not universally: see the Embodiment, Enactiveness, and Phenomenology section below) accepted as the core philosophy of the field. But it has developed into several mutually disputatious positions. One is due to Fodor. He remains the philosopher of cognitive science most faithful to Putnam, or rather, to early Putnam (see the Embodiment, Enactiveness, and Phenomenology section below). He too describes mental states in terms of GOFAI’s formal-symbolic representations and computation. He too ignores neuroscience, regarding connectionism as concerned only with implementation (see the Concepts and Connectionism section below). And he too takes belief/desire explanation (folk psychology) as the starting-point for a scientific psychology. But he has added two highly controversial claims. First, that mental representations are composed of atomic items in a language of thought (LOT), whose primitives cognitive science must discover, and second, that these primitives are present at birth. Admittedly, babies don’t understand natural language. But Fodor insisted that learning the meaning of ‘airplane’ is a matter of identifying and combining the relevant already-present LOT primitives. Two further controversial claims were soon added (Fodor, 1983). Namely, that the mind contains a number of innate, functionally distinct, information-processing modules; and that these are the only aspects of mentality which can be scientifically understood. Non-modular computation, Fodor said, does occur in the ‘higher’ mental processes. But he argued that there are so many degrees of freedom here that psychologists can’t hope to find laws to predict specific thoughts, nor even to explain them post hoc. The many discussions of this viewpoint (e.g. Karmiloff-Smith, 1992; Samuels, 1998) involve both empirical and philosophical arguments. That’s not surprising, for the philosophy of cognitive science isn’t, and shouldn’t be, a matter for philosophers alone; see Conclusion below. Daniel Dennett’s view on mental representations was very different from Fodor’s. Whereas Fodor posited representations (formulae in the LOT internal code) actually present in the mind/brain, Dennett described them in a more ambiguous fashion. The ambiguity took two forms. On the one hand, Dennett didn’t restrict himself to GOFAI computation. And on the other, he held that to ascribe representations – beliefs, desires, goals, fears – to an organism is to speak instrumentally, not realistically. Even qualia, he said, are fictions; see the Consciousness section below. In his first book (Dennett, 1969), Dennett spoke of a personal (intentional) ‘stance’ for explaining behaviour, and he clarified this idea soon afterwards
(Dennett, 1971). Now, he distinguished three descriptive/explanatory stances: the physical stance considers the system as a material thing; the design stance – the ‘proper direction’ for philosophy and psychology – focuses on the teleological functions for which the system was designed or evolved; and the intentional stance considers it as a system with beliefs and desires linked by rational principles. Even chess-playing computers, he said, have to be described in this folk-psychological way for their behaviour to be predicted and understood. But even in the human case, intentional discourse (which he saw as highly imprecise, and which assumes rationality rather than explaining it) was a matter of interpretation, not discovery. On being accused of instrumentalism (e.g. by John Searle, 1980), he back-pedalled, comparing intentional states to ‘real patterns’ and ‘abstracta’ such as centres of gravity (Dennett, 1981b, 1987a). Dennett’s comments on the multiple imprecision of ascriptions of propositional attitudes would be endorsed by Paul and Patricia Churchland (Churchland and Churchland, 1981; Paul Churchland, 1986; Patricia Churchland, 1986). Their philosophy of eliminative materialism compared folk-psychological terms to mediaeval talk about witches, a pseudo-factual discourse that would eventually be wholly eclipsed by science. (That science, on their view, would eschew GOFAI-based explanations for connectionist ones; see the Concepts and Connectionism section below.) So they were functionalists, of a sort (as Andy Clark was too: 1989, 1993). But they had moved a long way from Putnam. Putnam’s functionalism was an advance on the previous scientifically oriented philosophy of mind, identity theory (Place, 1956). For it posited only token-token identity of mental and brain states, not type-type identity, the latter of which implied that dogs and Martians simply can’t feel pain (Lewis, 1980). But opponents accused it of four major flaws: inability to admit the existence of qualia (Block, 1978); failure to account for intentionality (Searle, 1980); neglect of the implications of Gödel’s theorem (Lucas, 1961; Penrose, 1989, pp. 102–8); and a sidelining of neuroscience and the brain in the commitment to multiple realizability. That last criticism was sometimes made even by philosophers highly sympathetic to functionalism. Indeed, it was one of the reasons for the move towards connectionism, discussed in the next section.
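The multiple realizability claim at the heart of functionalism can be made vivid with a small, purely illustrative sketch; the classes below are invented for the example and are not anyone’s proposed model. One and the same abstractly specified function is realized by two quite different mechanisms, and it is the functional role, not the implementing machinery, that the two share.

# Illustrative only: one abstract functional specification ('adds its two
# inputs') realized by two different mechanisms. Class names are hypothetical.

class RegisterMachine:
    """Realizes addition by repeated incrementing, as a crude register machine might."""
    def add(self, a, b):
        total = a
        for _ in range(b):
            total += 1
        return total

class LookupMachine:
    """Realizes the same function by table lookup over a small domain."""
    def __init__(self, limit=20):
        self.table = {(a, b): a + b for a in range(limit) for b in range(limit)}
    def add(self, a, b):
        return self.table[(a, b)]

# Both satisfy the same functional description, despite sharing no mechanism.
for machine in (RegisterMachine(), LookupMachine()):
    assert machine.add(3, 4) == 7

On a functionalist view, mental-state types are individuated in the way the abstract description is here: by what the state does, not by what it is made of. That is also why the complaint that functionalism sidelines the brain has bite, since nothing in such a description mentions the implementing tissue.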
Concepts and Connectionism

Connectionist cognitive science began in the early 1940s (McCulloch and Pitts, 1943), but was ignored by most philosophers until the late 1980s, when a particular version of it became prominent: parallel distributed processing (PDP). This offered a more philosophically plausible account of concepts than
GOFAI did. Broadly inspired by the brain, it was less biologically implausible too. Even so, there are many differences between PDP systems and real neural networks; multiple realizability had been diluted only a little. In GOFAI, in the psychological research that inspired it (Bruner et al., 1956; Hunt 1962), and in philosophical writings based on it (Fodor, 1975), concepts were defined in terms of necessary and sufficient conditions. This contributed to the notorious brittleness of GOFAI systems: in the absence of explicit exceptions, just one missing criterion would render a concept inapplicable. Later psychological work suggested that concepts were less neat and tidy (Rosch and Mervis, 1975). And some philosophers, of course, had already said so (Wittgenstein, 1953). In PDP systems, concepts are implemented not by all-or-none activations of neatly listed defining criteria, but by equilibrium states of the whole network, involving mini-representations of many different, and even partially conflicting, facets. Since each facet is continuously weighted, and the weights can change according to the contextual evidence, concepts aren’t accepted/rejected outright but are given varying degrees of confidence. Many philosophers were enthusiastic (Clark, 1989, 1990, 1993, 1996; P. M. Churchland, 1986, 1989b; P. S. Churchland, 1986; Churchland and Sejnowski, 1992; Thagard, 1988, 1989, 1990). They used connectionist ideas not just as a way of glossing concepts, but as a way of thinking about mind in general, including epistemology, philosophy of science, and ethics. One major implication of PDP appeared to be that the GOFAI/Chomskian emphases on explicit rules and on innateness were each mistaken. One PDP network not only learnt from input examples to form the past tenses of regular and irregular verbs, but the changes in its performance over time seemed to match aspects of infants’ behaviour, which Chomsky had claimed were incontrovertible evidence for innate explicit rules (Rumelhart and McClelland, 1986). The Chomskians fought back. While some focused on criticizing the details of that specific network (Pinker and Prince, 1988), others argued that PDP models in general, because of their holistic nature, couldn’t satisfy the generality constraint (Evans, 1982), nor capture the compositionality, productivity, and systematicity of language (Fodor and Pylyshyn, 1988). For Fodor (and for Newell too: 1980, 1990), connectionism was not a theory of cognition as such, but of its implementation. Fodor still insists that a GOFAI-based psychology is ‘the only [theory of cognition] we’ve got that’s worth the bother of a serious discussion’ (Fodor, 2000, p. 1). Worth the bother or not, this dispute has triggered a huge literature. Some philosophers have claimed that PDP helps us to understand the nature of non-conceptual content, and how it can lead to genuinely conceptual meaning (Cussins, 1990). Paul Smolensky (1987, 1988) rebutted Fodor’s critique
of PDP (for Fodor’s response, see Fodor and McLaughlin, 1990), saying that GOFAI explanations of thought and language are approximations to finely detailed sub-symbolic accounts. Smolensky’s fellow connectionist Clark (1989, Chapter 8; 1991) disagreed. In arguing that PDP could deliver systematicity, he admitted that productivity was problematic for PDP systems (more so than sequential order). But he said that compositionality needn’t be built into the basic architecture, as it is for GOFAI systems; instead, it could emerge from it. Given the public availability of language, our brain can model individual words and thereby create a virtual machine with formalist properties. So Smolensky was wrong: GOFAI accounts of thinking aren’t mere approximations of PDP accounts, but are true (or false) descriptions of processes in the relevant virtual machine. The PDP group themselves had suggested such a machine: they believed that, to be able to do logic/mathematics or engage in hierarchical thinking, the brain must somehow emulate a von Neumann computer (Norman, 1986). But despite various attempts (e.g. Touretzky and Hinton, 1985, 1988; Hinton, 1990; Elman, 1990, 1993), no one has yet shown how this is possible. PDP models still can’t match the formalist strengths of GOFAI. The efforts of many philosophers to justify either GOFAI or connectionism as the key to the philosophy (and psychology) of mind are ill judged. The mind is a complex virtual machine that includes both these types of computation and probably many others (see the What is Computation? section below). Moreover, the fact that the brain consists of interconnected neurones doesn’t justify dismissing connectionism as ‘mere implementation’, or GOFAI as ‘non-biological’. For a virtual machine is defined in terms of its computational functions, not its physical implementation. It follows that brain anatomy alone can’t justify eliminative materialism. It may be that the concepts of folk psychology aren’t useful in a scientific psychology. But that’s to say that they may not figure crucially in the brain’s virtual machine. Their relevance/irrelevance can’t be decided purely on the grounds that the brain is a mass of interconnected neurones.
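The difference between classically defined and PDP-style concepts, described above, can be illustrated with a deliberately crude sketch. The feature list and weights below are invented for the example; real PDP networks arrive at such weightings through learning over a whole network, not by stipulation.

# Illustrative only: an all-or-none 'classical' concept versus a graded,
# weighted one. Features and weights are hypothetical.

def classical_bird(features):
    """Necessary and sufficient conditions: one missing criterion and the concept fails."""
    required = {"has_feathers", "lays_eggs", "flies"}
    return required.issubset(features)

def graded_bird(features):
    """Weighted facets: each contributes evidence, yielding a degree of confidence."""
    weights = {"has_feathers": 0.5, "lays_eggs": 0.25, "flies": 0.25}
    return sum(w for f, w in weights.items() if f in features)

penguin = {"has_feathers", "lays_eggs"}   # doesn't fly
print(classical_bird(penguin))            # False: the brittle, all-or-none verdict
print(graded_bird(penguin))               # 0.75: high, but not full, confidence

The second function behaves in the way the text attributes to PDP systems: the concept applies to a degree, and a changed context (different weights) would change that degree.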
Varieties of Representation

Cognitive science typically posits ‘cerebral models’ (Craik, 1943) – that is, representations in the mind/brain. And in positing such models/representations, cognitive scientists also offer hypotheses about the computational processes that manipulate them (build, compare, transform, combine them). In GOFAI, in psychological theories inspired by GOFAI, and also in Putnam’s functionalism, these are thought of as symbolic representations with compositional semantics. Another way of putting this is to say that the virtual
machine which is the mind is supposed by many to be purely symbolic-computational. However, work in AI and in neuroscience has shown that there are many other possibilities. One set of alternatives comprises connectionist representations of various kinds. These may be localist (each network unit being assigned a specific meaning) or distributed (the representation being spread across the whole network, wherein individual units may carry different meanings according to context). Yet more types of representation have been posited. The evidence offered is largely behavioural, but is increasingly backed by research in neuroscience (Parker et al., 2002). The varieties include: physical and/or spatiotemporal analogues; feature detectors; deictic (situationist), linguistic, and iconic representations; sensorimotor models based on tensor network geometry; hierarchical conceptual and sensorimotor schemas; culturally relevant scripts; temporary ‘online’ representations; and neurophysiological emulators. Some of these types of internal model are believed to be symbolic-computational by some philosophers, whereas others are explicitly contrasted with GOFAI representations. Not all of them are accepted by all workers in the field. But nearly all cognitive scientists offer theories couched in terms of some sort/s of representation. (For exceptions, see the Embodiment, Enactiveness, and Phenomenology section). From the philosophical point of view, there is another question: namely, what counts as a representation. This was a query raised long ago by Craik (1943), who leant heavily on the criteria of use (for survival) and of similarity, and also by Marvin Minsky (1965), who stressed the importance of use. However, most empirical cognitive scientists employ the words ‘representation’ and ‘model’ in largely intuitive senses, obscuring important philosophical distinctions and begging controversial philosophical questions. Philosophers of cognitive science have tried to clarify the issues here, usually with some reference to aspects of the scientific literature (e.g. Sloman, 1971, 1975, 1978, Chapter 7; Pylyshyn, 1973; Cussins, 1990; Kirsh, 1991; van Gelder, 1995; Clark and Grush, 1999; Clark and Wheeler, 1999; Grush, 2004). Their views differ greatly in detail, but three things are now clear. First, there need not be, although there may be, a similarity between the representation and the thing represented. Second, if there is, the theorist must explain just how the similarity is exploited and used. And third, for something to function (sic) as a representation, it must somehow mediate in the agent’s thought or behaviour, where that ‘somehow’ should be explicitly spelt out, preferably in computational and/or neuroscientific terms. Philosophers of mind with scientific sympathies, but no special interest in cognitive science, ignore the specific mechanisms suggested by AI and neuroscience. But they do offer naturalistic accounts of representation, and/or of intentionality, explaining sense-making in terms of evolutionary biology
(Papineau, 1984, 1987; Millikan, 1984), causal regularities (Dretske, 1984, 1995a), or biological autopoiesis (Jonas, 1966; Di Paolo, 2009). Since the concept of representation is often used in defining cognitive science, disagreements about what representation is can be reflected in conflicting judgments about the field’s scope and success. Given Fodor’s formalist sense of representation, connectionism is either a refutation of cognitive science (as Dreyfus claims) or a mere implementational adjunct to it (as Fodor believes). Given a more catholic definition of representation, connectionism is an interesting example of cognitive science, and further examples have been mentioned above. Similarly, dynamical and autopoietic theories and situated robotics (all of which deny representations) must be excluded from the field unless one can show, as some critics have argued, that they too involve (non-formalist) representations. Because of these problems, and despite the huge influence – in philosophical circles – of Fodor’s view, it’s best not to use the term ‘representation’ in defining cognitive science. Some experimental psychologists outside cognitive science explicitly deny the existence of mental representations. For instance, James Gibson’s (1950, 1966) ‘ecological’ approach emphasizes ‘direct’ (computation-free) responses to environmental cues. There have been many debates between Gibsonians and the followers of David Marr, who analyzed low-level vision and object-recognition in terms of a hierarchy of representations (Marr, 1982; Ullman, 1980; Hinton, 1980; Sloman, 1989; Norman, 2002). Marrians typically complain that although Gibsonians have reported important empirical data about responses to perceptual cues, they fail to ask how those responses are possible. Gibson’s theory includes the notion of ‘affordances’ (Gibson, 1977). He claimed that perception doesn’t only, or even primarily, provide knowledge about the things in the external world, but rather provides (direct) knowledge of the possibilities for action that are afforded by those things. A gap, for example, is perceived as something that can be moved through, and a smile (or other bodily expression of positive emotion) as affording approach without fear. Not all affordances have to be learnt: the ‘visual cliff’ experiment suggests that even very young babies can see (sic) that a sudden drop in the floor is dangerous, and refuse to crawl over it (Gibson and Walk, 1960). Fodor has complained that Gibson gave no principled way of deciding what counts as an affordance (Fodor and Pylyshyn, 1981). For him, some specific story about computations over representations is needed to explain perception of any kind. Today, that view is less prominent. The (non-computational) representational theory of perception has been a philosophical embarrassment ever since Descartes. Accordingly, ideas about direct, representation-free perception have also been developed within philosophy by the continental phenomenologists. Some of them, such as Maurice Merleau-Ponty (1962), borrowed heavily from the Gestalt psychologists; and
all would have been more sympathetic to the Gibsonians than to the Marrians. But none admitted the possibility of a scientific (naturalistic) explanation of intentionality, not even in neuroscience, never mind AI. According to them, what counts as a representation cannot be understood in terms of information-processing mechanisms of any kind, nor in terms of biological evolution either. Rather, intentionality is seen as a basic philosophical notion, to be understood (though not explained) in terms of situatedness, embodiment, in-dwelling, and/or Dasein (see the Embodiment, Enactiveness, and Phenomenology section).
The Extended Mind

In the early days of cognitive science, the psychologist Jerome Bruner wrote about how ‘cognitive technologies’ (motor action, imagery/drawing, and language) enable us to think (Greenfield and Bruner, 1969; Cole and Bruner, 1971). Similarly, Newell and Simon (1972) stressed the use of ‘external memory’ such as written sums and memos in problem-solving. And the cognitive anthropologist Anthony Wallace (1965) showed how drivers regulate their journeys by monitoring the input/feedback from roads, traffic lights, landmarks, and road signs, and from clutch, indicators, and gear shift. These cognitive scientists, and others after them, have provided a wealth of evidence that cultural artefacts in general (and above all, language) enable us to think thoughts, and to raise and answer questions that would otherwise have been beyond our powers. However, they didn’t go so far as to claim that our minds are partly constituted by these artefacts. Nor did those early visionaries who predicted that the availability of certain types of information technology – then only just conceivable, but now common – would deeply change the way we think (Bush, 1945; Engelbart, 1962). Scientists and engineers in general don’t normally ask constitutive questions. Philosophers, however, often do. So the anthropologist Clifford Geertz (1973), speaking in philosophical mode when recommending a non-psychological view of culture and anthropology, said that the ‘mind’ is located outside the head. And the philosophers Clark and David Chalmers went even further, locating the ‘mind’ and the ‘self’ largely in the external (physical/cultural) world (Clark and Chalmers, 1998; Clark, 2003a, 2008b). Clark and Chalmers argued that while there is a clear distinction between brain (or body: Clark, 1997) and artefacts, considered as material things, there is no principled distinction between the (abstractly defined) processes of control that link them in behaviour. If an amnesiac constantly consults a personal notebook to decide what to do, and even how to do it, the complex feedback loops and information sources involved can’t be neatly assigned to either brain or world, but only to a closely coupled merger of the two. If, very broadly
speaking, the mind is what the brain does (the credo of functionalism), then the mind itself is partly constituted by things in the world. Fodor (2009) isn’t convinced. Computation and control, he insists, are grounded in the brain. He insists, also, that the defining feature of mind is content, or ‘aboutness’, and that only brain processes can have content in a non-derivative sense. If these processes continually consult, affect, and are affected by aspects of the outside world, it doesn’t follow that those aspects – notebooks, iPhones, whatever – are actually parts of the mind. One doesn’t have to share Fodor’s views on content to agree; it’s not clear that there’s any great advantage, either philosophical or scientific, in making the strongly counterintuitive constitutive claim. Nevertheless, some philosophers are sympathetic: Clark and Chalmers’s article won a Philosopher’s Annual prize as one of ‘the year’s ten best papers in philosophy’. And some cognitive scientists, likewise, were persuaded. In certain circles, ‘extendedness’ and ‘externalism’ have become buzz-words. This philosophical approach is especially well suited to cognitive scientists’ talk of ‘distributed’ and ‘situated’ cognition. Much as AI work on PDP refers to distributed processing and distributed representations, so AI work on ‘autonomous agents’ refers to distributed cognition (Bond and Gasser, 1988). The cognition (problem-solving, as well as knowledge) is thought of as being distributed over a number of interacting agents. And those agents are usually described as ‘situated’, meaning that they aren’t controlled by a top-down problem-solving GOFAI program but by bottom-up cues from the environment, to which they respond directly, in a near-reflex fashion (Agre and Rosenschein, 1996; cf. Brooks, 1991a, 1991b). Both these ideas are highlighted by the cognitive anthropologist Edwin Hutchins (a follower of Wallace) in a study of ship navigation (Hutchins, 1995). He shows that this skill isn’t located inside the head of a single person, or even several. It emerges from a complex coupling of individual personalities, social roles and conventions, maps and instruments, ship-design, mariners’ knowledge, problem-solving (often spread across several crew members), and a variety of bodily skills. The thesis of the extended mind need not be combined with the concept of embodiment (although the notion of situatedness is very close to it). But even before the prize-winning article, Clark had argued for a philosophy of mind grounded in embodiment.
Embodiment, Enactiveness and Phenomenology

The notion of embodiment was important in the philosophy of mind long before it entered cognitive science, for it is a key concept of continental
phenomenology. Indeed, Dreyfus’ (1967) early attack on AI drew on Martin Heidegger and Merleau-Ponty, and declared that ‘computers will have to have bodies in order to be intelligent’. By this, he didn’t mean that robots would be intelligent. For a body isn’t just a self-moving physical object. For autopoietic philosophers (see below), a body is a living thing: an autonomous system that is the result of physical self-organization. For phenomenologists, it is the physical system through which the intentional agent (actor) is coupled with, or situated within, the world. The philosophy of embodiment is closely related to enactivism, a position that stresses the agent’s bodily activity as a condition of perception and thought (Varela et al., 1991; Noë, 2004). The evidence includes intriguing experiments on change-blindness and inattentional blindness (Simons and Chabris, 1999; O’Regan and Noë, 2001). Here, what one would expect to be startling environmental information (e.g. one interlocutor being replaced in mid-conversation by another, or a gorilla cavorting amidst a group of humans) simply does not register in the perceiver’s consciousness if they are actively (sic) attending to something else. Within psychology, the Gibsonians had long focused on how the subject’s own bodily movements influence perception. But this notion, sometimes called ‘animate vision’, became prominent in the philosophy of cognitive science only much later, when continental philosophy finally crossed the intellectual Channel to the Anglo-American world. When cognitive science began, the phenomenological movement was generally discounted, even despised, by analytical philosophers. Eventually, however, some analytically trained philosophers, including John McDowell (1994b), Michael Morris (1991, 1992), and even Putnam himself (1982, 1988, 1997), drew closer to the continental tradition. They now offered philosophies of mind wherein representations were explicitly denied. And folk psychology, beloved of many functionalists, was dismissed as a scientistic ‘myth’ that is ‘wrong in every particular’ (Morris, 1992, p. 111). Morris was unusual in deigning to address some claims of cognitive science. While rejecting internal representations – that is, non-semantically individuated objects/events, such as brain states, that have meaning – because no naturalistically identified event can be inherently meaningful, he offered alternative accounts of the phenomena explained by cognitive scientists in terms of them (Morris, 1992, pp. 28–30). He even allowed that cognitive scientists may sometimes be justified, for scientific purposes, in stipulating that certain states/events in the brain have such-and-such meanings. But he believed this would be appropriate only for low-level cognitive capacities. A few philosophers sympathetic to the continental tradition take science (though not functionalist cognitive science) more seriously. Humberto Maturana and Francisco Varela, for instance, formulated the philosophy of autopoiesis
(a concept similar to that of metabolism: Boden, 2000a), which explains biological phenomena in terms of spontaneous self-organization (Maturana and Varela, 1980). Maturana had co-authored the seminal neuroscientific paper on ‘What the Frog’s Eye Tells the Frog’s Brain’ (Lettvin et al., 1959), but when writing on autopoiesis he specifically rejected its functionalist language and assumptions. Varela and other autopoietic psychologists describe the activity of the autonomous agent as a system of intimate couplings between organism and environment (Varela et al., 1991). Indeed, autopoietic enactivism defines cognition (a.k.a. mind) as the bringing-forth of a ‘world’ by an autonomous embodied agent, coherently engaging with its environment (Di Paolo, 2009). Another representation denier, prepared to take science seriously, is Timothy van Gelder (1995). He argues that cognition should be understood in terms not of computation, but of dynamical systems describable by the laws of physics. However, his arguments fall foul of two issues discussed in the ‘What is Computation?’ section. First, ‘computation’ should not be interpreted only as formal symbolic information-processing. And second, the mind is best seen as a many-levelled virtual machine, so that events that are physically implemented in the brain by dynamical systems may be understood as diverse aspects of the virtual machine, including, among many other things, GOFAI computations and representations. Ironically, given the anti-scientific bias of the continental tradition, some (empirical and philosophical) work in cognitive science has been deeply influenced by Merleau-Ponty and Heidegger. That’s not to say that these cognitive scientists take Heidegger’s philosophy as gospel. Michael Wheeler (2005), for instance, specifically denies that Dasein is restricted to language-using human beings, and absent from animals; he also demurs from the rejection of any internal representations; and he criticizes the anti-realist implications of phenomenology. But some Heideggerian concepts (skilful coping, readiness/unreadiness-to-hand, presence-at-hand, background) are used by him to describe and justify enactive and dynamical cognitive science research. A new twist on the realism/anti-realism debate has arisen in connection with the technology of virtual reality (VR), and in particular with respect to the 1999 film The Matrix. The human beings in this story are supposed to be experiencing a purely virtual world, while their living but inert bodies are farmed for energy by the machines in charge. The machines are clever enough to fool the people that they are walking, talking, and eating in a world like our own. Several leading philosophers of cognitive science have considered the implications, sometimes with reference to specific scientific findings/theories in the field. Some of their discussions are in online papers (Dreyfus and Dreyfus, 2002; Clark, 2003b; Chalmers, 2003b), others (Searle and Dennett) in online interviews on the same website, and yet others in print (Dreyfus, 2003; McGinn, 2005). (See also Irwin, 2002; Lawrence, 2005; Grau, 2005.)
Consciousness

Much as there is no such thing as the philosophy of cognitive science, so there is no such thing as the problem of consciousness. The many problems of consciousness encompass various intriguing aspects of the human mind. Most have been illuminated by cognitive science. Philosophical discussions, here, are usually informed by empirical evidence drawn from studies of human beings (and animals), and also from computer modeling. Crucially, such evidence reports many phenomena that cannot be shoehorned into a Cartesian vision of error-free introspection and unitary consciousness. Experimental and clinical psychologists have long challenged common-sense intuitions on matters ranging from attention (Broadbent, 1958), through perception (Kolers and Rosner, 1960; Kolers, 1972) and intelligence (Damasio, 1994), to voluntary action (Libet, 1985a, 1985b, 1999; Libet et al., 1979) and its debilitating breakdown (Norman and Shallice, 1986; Cooper et al., 1995). Neuroscientists have provided relevant evidence too. But the methodology of brain-imaging has not been especially helpful. It has discovered many ‘mind-body’ correlations; but Descartes predicted such correlations, so this hardly counts as a philosophical advance. Usually, it isn’t even a scientific advance, since most work of this type reports a-theoretical fishing expeditions, which ignore the complexities of the psychological phenomena concerned. When neuroscience is relevant, that’s because it throws light on the neural mechanisms underlying the computations involved in consciousness. For example, autistic children unable to master ‘theory of mind’ do not show normal brain activations when presented with intentional concepts (Frith and Frith, 2000). Much of the past puzzlement about consciousness has rested on the seemingly paradoxical phenomenon of reflexive self-consciousness. However, a computational philosophy, thanks to the computer scientist’s concept of recursion, allows for non-paradoxical theories of self-reflection. These include accounts of meta-cognition (higher-order thought), adduced in explaining not only conscious reasoning and deliberative self-control but also hypnosis (Dienes and Perner, 1999, 2007). Similarly, puzzles about the possibility of dissociative states, such as multiple personality syndrome, dissolve when the mind is viewed as a complex system comprised of partly independent teleological structures, controls, and memories/data-bases (Boden, 1994). Free will can be understood as a feature of cognitive/motivational systems with sufficient complexity to compute the likely effects of alternative (sequences of) hypothetical actions, and to compare them with respect to moral and personal preferences of many kinds before deciding what to do (Boden, 1978; Dennett, 1984). The ‘self’ itself can helpfully be seen as a narrative construction, involving self-image and self-ideals, which guides
and interprets the person’s behaviour (Boden, 1972, pp. 236–60, 327–33; Dennett, 1991a, Chapter 13). All those puzzling phenomena were addressed at length by Dennett (1991a), who analysed them in broadly computational terms. Other cognitive scientists have offered general theories of consciousness somewhat similar to his, such as Bernard Baars’ (1988) integrative idea of the global workspace, Thomas Metzinger’s (2003) self-model theory, and Sloman’s architectural account described below (see also Crick and Koch, 1990; Frith et al., 1999). But Dennett’s is among the most philosophically sophisticated. Of Dennett’s many provocative claims, the most controversial is his position on what Chalmers (1996) calls ‘the hard problem’ – namely, how to explain (or even admit) the existence of subjective experiences, or qualia. Early critics of functionalism declared that it couldn’t account for qualia, because zombies, behaving exactly like humans, but with no conscious experiences, are in principle possible (Block, 1978; Ziff, 1959). Dennett disagrees: the concept of zombies, he says, is incoherent, and belief in their possibility ‘ridiculous’ (Dennett, 1991a, Chapter 10, p. 4; 1995). Sloman (1996c, 1999) argues, similarly, that nothing could have the same computational architecture as us (necessary for it to behave exactly like us), yet lack sensation. On one key point, however, these two computationalist zombie-deniers disagree with each other. For Dennett, qualia are mere fictions (Dennett, 1988; 1991a, Chapter 12). Once everything has been said about the behavioural aspects (the many subtle discriminative and associative activities) of seeing blue, or tasting chablis, nothing more remains. For Sloman, by contrast, qualia do exist. They aren’t visible as overt behaviour, nor verbally describable to others, but nor are they events in some essentially mysterious immaterial world. They are internal (but self-accessible) computational states, whose generation and functions are possible only in complex computational systems – virtual machines – of a particular architectural type (Sloman and Chrisley, 2003). Sloman’s philosophy of mind takes the design stance seriously. Drawing on a wealth of experience in AI, and knowledge of a wide range of animal behaviour, Sloman makes relatively specific suggestions about which sorts of computational structures and processes could, and which could not, generate particular types of cognition and control (Sloman, 1978, 1993, 2000). For example, various types of anxiety, and the complex emotion of grief, are made possible by distinct types of computational mechanism, some of which can already be modelled, up to a point, in computers (Wright et al., 1996). Sloman regards most philosophical discussion of consciousness as fixated on highly confused concepts (Sloman and Chrisley, 2003; Sloman, 2010). These include not only long-familiar notions (such as qualia), but also supposedly more precise and more recent terminology (such as phenomenal and access
consciousness: Block, 1995). This conceptual confusion partly accounts for the rise of ‘new mysterian’ accounts of consciousness (Flanagan, 1992, pp. 8f.), according to which it is either utterly unintelligible to human minds (McGinn, 1989, 1991) or intelligible only in terms of arcane, even undiscovered, aspects of quantum mechanics (Penrose, 1994; Hodgson, 1991) and/or of an information-imbued universe (Chalmers, 1996). Searle (1980, 1992) isn’t one of the new mysterians, but he might be termed an old mysterian. In rejecting functionalist/computational accounts of consciousness (and intentionality), he insists that we know that it is a biological phenomenon, just as digestion and photosynthesis are. This ‘knowledge’, however, is merely the fact that we now have even more evidence than Descartes did to believe that the brain causes/generates consciousness. What we want to know is: How? At the level of material stuff (compare: lactose, chlorophyll), the emergence of consciousness is intuitively unintelligible. Searle offers no new ideas, mysterian or not: no micro-tubules, no dual-aspect information, no special types of computation or computational architecture. He simply says that the problem of consciousness is a scientific problem, and leaves the scientists to get on with it. There are now several conferences, and an international journal, dedicated to ‘machine consciousness’. Sometimes, this phrase is glossed by references to mechanisms explaining human or animal consciousness. Often, however, it is used to suggest that computers of a certain sort are, or anyway would be, genuinely conscious. Some such claims are philosophically (and computationally) thin (e.g. Aleksander and Dunmall, 2003; Aleksander, 2005). A few are more substantive. For instance, Sloman has outlined a specification for a machine whose normal functioning could lead it to discover within itself something akin to qualia, as a result of developing an ontology for describing its sensory contents (Sloman and Chrisley, 2003; cf. Sloman, 2010, Section 13).
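The point made earlier in this section – that the computer scientist’s concept of recursion makes reflexive self-consciousness unparadoxical – can be illustrated with a toy sketch. It is purely illustrative (the functions below are hypothetical and model no particular theory of higher-order thought): the moral is only that a state about another state is just one more finite structure, so self-monitoring involves no regress or paradox.

# Illustrative only: a first-order process produces states; a monitor produces
# states *about* states, and can be applied to its own output without paradox.

def first_order(percept):
    return {"level": 1, "content": f"seeing {percept}"}

def monitor(state):
    """A higher-order state whose content is another state."""
    return {"level": state["level"] + 1,
            "content": f"I am in a state whose content is: {state['content']}"}

s1 = first_order("red patch")
s2 = monitor(s1)    # a thought about the perception
s3 = monitor(s2)    # a thought about that thought: each level is just more data
print(s3["level"], s3["content"])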
Mind and Life

Many philosophers assume that life is necessary for mind (e.g. Scriven, 1953, p. 233), and that, because computers aren’t alive, strong AI is impossible (Geach, 1980, p. 81). However, the necessity of the life/mind linkage is more often taken for granted than explicitly justified, and when arguments are given, they are usually weak. Cyberneticists in general assume that the same principles of control govern both life and mind – that is, they are ‘strong continuity theorists’ with respect to the ontological similarity of life and mind (Godfrey-Smith, 1994b). However, they pay scant attention to issues such as self-reflection and reasoning. Even language, though sometimes mentioned, isn’t considered in any detail.
So philosophers of biology who construe life as self-organization claim that life can culminate in cognition, but what they say about ‘cognition’ applies as much to oak trees as to humans (Pattee, 1966, 1989; Jonas, 1966; Maturana and Varela, 1980; Cariani, 1992; Sober, 1992). Nor do they normally ask whether mind without life is possible. The existentialist theologian Hans Jonas did tackle this question (Jonas, 1966, pp. 64–91). But his argument that life is essential for mind rested on the problematic anti-Cartesian claim that all self-organized (metabolic) matter is, in a sense, ensouled. A less provocative way of putting this is to say that the adaptivity of an autonomous (autopoietic) living system enables it to construct both norms and sense, so that ethics and intentionality are seen as naturalized (Di Paolo, 2009). Even here, however, very little is said about specifics, although there are some intriguing ideas about the origin of sociality (De Jaegher and Di Paolo, 2007). If life really is necessary for mind (intentionality), then A-life is philosophically, and perhaps methodologically, prior to AI. In other words, an understanding of life as such (which is one aim of A-life) should buttress, and perhaps even lead to, the understanding of mind. It would follow, too, that strong A-life (i.e. virtual life, or life in cyberspace) is necessary for strong AI. Despite the claims of a few A-life scientists (e.g. Ray, 1992), strong A-life is impossible. Metabolism is a criterion of life, and computers don’t metabolise. They use energy, but what biologists mean by metabolism is more than mere energy dependency. It is the self-production and self-maintenance of the physical organism by energy budgeting that involves self-equilibrating energy exchanges of some necessary complexity (Boden, 1999). Since there’s no universally agreed definition of life, someone might suggest that metabolism be omitted. However, the only gain would be to allow the possibility of virtual life, which begs the question. And there would be a huge loss – namely, the explanatory power that the concept of metabolism, with the associated ‘laws of bio-energetics’, provides with respect to all living things.
What is Computation?

Philosophers typically believe that we know what computation is because Alan Turing (1936) told us. They rely unquestioningly on his definition – computation as formal symbol manipulation – in expressing their own views on mind and/or cognitive science (e.g. Putnam, 1960; Fodor, 1975, 1980; Searle, 1980; Pylyshyn, 1980; Haugeland, 1985; Copeland, 1993). However, matters are more complicated. Turing’s is still the only rigorous definition. But as the practices of AI and computer science – the virtual machines involved – have become increasingly varied, additional senses have arisen
(a dozen are distinguished in Smith, 2002a). In general, these treat computation not as abstract, uninterpreted symbol manipulation but as actual processes implemented in computers. Some logicians have asked just which computers are equivalent to Turing machines, and which are not (Bringsjord, 1994; Calude et al., 1998; Copeland and Sylvan, 1999; Scheutz, 2002). Sloman (2002) has questioned the historical relevance of Turing machines to the development of AI. As for their philosophical relevance, he says that ‘No programmer or computer engineer has, to my knowledge, ever thought of programs in [Searle’s, i.e. Turing’s] way, and as a programmer myself I have never thought of programs that way’ (p.c., quoted in Boden, 2006, p. 1415); and he has shown that the notion of Turing equivalence doesn’t capture the richness of the concept(s) of computation needed to understand minds (Sloman, 1996b). For our purposes, three senses of ‘computation’ can be distinguished. The first is Turing’s. The second is one whose denotation has changed over the years, and will continue to do so – namely, whatever methods (virtual machines) are actually used in computer modelling. These include not only the many methods within GOFAI and connectionism, but also (for instance) approaches that simulate the computational effects of chemicals diffusing through the brain (Husbands et al., 1998; Smith et al., 2002). The third sense is the most philosophically interesting. It’s also the most unclear, because it’s a group of rather different senses, all informed by the same general aim: to present computation as intentional, and meaning as computational. What computers do is conflated with what minds do, by some account of intentionality that supposedly applies to both. One example is Newell and Simon’s theory of physical symbol systems, seen as ‘the necessary and sufficient means for general intelligent action’, whether in minds or computers (Newell and Simon, 1976; Newell, 1980). A symbol was defined as a physical pattern with causal effects. Likewise, causal definitions were given of intentional terms such as representation, interpretation, designation, reference, naming, standing for, and aboutness. Another third-style example is due to Sloman, who holds that we don’t yet understand computation largely because we don’t yet understand causation. He dismisses causal theories of meaning (like Newell and Simon’s) that assume some physical relation between a symbol and its referent, because we can refer to non-existing things (Sloman, 1986). He stresses the virtual causal processes required for understanding, arguing that a virtual process can properly be said to cause a physical one, so that qualia (which he analyses in computational terms; see the Consciousness section above) aren’t epiphenomenal, but really do cause changes in the brain (Sloman, 1992, 1996a, 1996c). The most controversial third-style position is that of Brian Cantwell Smith (1996). His early analysis of computation as causal, not formal, rested on insights
about what computers actually do (Smith, 1985). It has developed into ‘a philosophy of presence’ that is a new metaphysics: an account of the emergence of objects, individuation, particularity, subjectivity, and meaning. Mind, or intentionality, is a form of active registration that requires a relatively high degree of disconnectedness, or autonomy, as well as connectedness. This gives rise to subjectivity and objectivity alike. Smith sees no fundamental distinction between intentionality in people and computers: computation is ‘inherently participatory’, and computers have ‘intentional capacities ultimately grounded in practice’, analogous to the human practices stressed by Heidegger and Dreyfus (Smith, 1996, pp. 305, 149). Both physical objects and intentional subjects, he argues, arise from the ‘participatory engagement’ of distinguishable regions of the metaphysically basic dynamic flux. This flux is described by field-theoretic physics and, having no objects, involves neither individuality nor particularity. Objects emerge, or are constructed, as a result of dynamic participatory relations. So where others were broadening Dasein from humans to animals (see the Embodiment, Enactiveness, and Phenomenology section above), Smith broadened constructive dynamical interaction from animals to rocks, and even to atoms. Smith claims to have retained the major insights of both continental and empirical-analytic traditions, without any of their problematic ontological assumptions. He also claims that his metaphysics gives us norms as well as facts: ‘a way of living right’ as well as ‘a way of speaking truthfully’ (Smith, 1996, p. 108). Some philosophers are deeply impressed: John Haugeland’s jacket blurb says ‘Smith recreates our understanding of objects essentially from scratch and changes, I think, everything’. Others are only partly persuaded, allowing that the physical implementation of computation is important although rejecting Smith’s account of it (Searle, 1990b, 1992, p. 209). Yet others are highly sceptical, and also repelled by his vividly purple prose. (My own view is that he has helped himself to the ‘dynamic flux’, his version of Kant’s noumenal world, without proper licence, despite his claim, in the final 60 pages, to have pulled this concept up by its own bootstraps.) As we’ve seen, Smith isn’t the only one to regard the concept of computation (understood intuitively as what computers do) as problematic. So the core thesis of cognitive science, like that of physicalism, should be interpreted transparently, not opaquely (Chrisley, 2000). The claim isn’t that mind can be explained by our current ideas about computation, but that it’s explicable by whatever theory turns out to be the best account of what computers do.
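For readers who want a concrete anchor for the first of the three senses distinguished above – computation as formal manipulation of uninterpreted symbols – here is a minimal, purely illustrative sketch of a Turing-style machine in Python. The transition table is invented for the example; the point is only that the machine matches and rewrites marks without any regard to what, if anything, they mean.

# Illustrative only: a toy Turing-style machine that appends one '1' to a
# unary numeral. The rule table is hypothetical.

def run_machine(tape, rules, state="start", blank="_", limit=1000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(limit):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]   # purely formal: match and rewrite
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

rules = {
    ("start", "1"): ("1", "R", "start"),   # scan right over the 1s
    ("start", "_"): ("1", "R", "halt"),    # write one more 1, then halt
}
print(run_machine("111", rules))           # -> '1111' (unary 3 + 1)

Whether the marks represent numbers, or anything at all, plays no part in the machine’s operation – which is precisely why the second and third senses of ‘computation’ had to be introduced.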
Conclusion

Thirty years ago, Sloman predicted that philosophers today would be 'professionally incompetent' if they weren't well informed about developments in
AI (Sloman, 1978, p. xiii). He was right. This chapter has described a host of examples where the findings of cognitive science, and especially the concepts developed within AI/A-life, have provided intriguing questions, and sometimes plausible, or even satisfying, answers, for the philosophy of mind. These questions and answers cannot properly be ignored. Even those who reject the fundamental assumption that there can be a naturalistic psychology should (like Morris) at least engage lightly with some of the data/theories of cognitive science. Moreover, cognitive science can contribute to longstanding problems ranging from the philosophy of science (Sloman, 1978; Thagard, 1988, 1989; Churchland, 1989b; Whitby, 1996) and of religion (Arbib and Hesse, 1986; Boyer, 1994), through metaphysics (Smith, 1996; Sloman, 1996a), to ethics (May et al., 1996), aesthetics (Turner, 1991; Boden, 2000b, 2007; Boden and Edmonds, 2009), and philosophical logic (McCarthy and Hayes, 1969; Pearl, 2000). That's hardly surprising. Insofar as philosophy is the study of systems of thought, and ways of knowing, it is concerned with the mind. So a science of the mind is likely to have connections to, and implications for, most areas of philosophy. Arguably, the study of metaphysics is different. But even there, the concern is with how we should identify and think about the most basic categories of being. AI models using natural language, for example, must often employ notions of space, time, and cause. The AI definitions of these categories leave much to be desired. But AI researchers with philosophical expertise have contributed not only to AI modelling but also to philosophical understanding of, for example, causation (Sloman, 1996a; Pearl, 2000). As that last example suggests, the potential for dialectical enrichment goes both ways. When philosophers engage with the field in a serious spirit, they may come up with views that influence the science itself. They may help it to advance not only by clarifying current scientific concepts but also by offering new insights for empirical study. Examples of this salutary effect include Dennett's work on the intentional stance, which has been used by cognitive ethologists to guide experimentation and theorizing on the 'minds' of many species (Griffin, 1978, 1984). (Phenomenologists who refuse intentionality to animals will describe these data in different terms; but they must admit that interesting new data have been discovered by cognitive ethologists, and that the writings of philosophers have played some part in furthering this.) Another piece by Dennett which influenced empirical research was his (Dennett, 1978) snippet on modelling the whole animal, although some cognitive scientists, to be sure, had already worked out that message for themselves (Arbib et al., 1974; Arbib, 1982). Two more examples, concerning the acquisition of conceptual representations, are due to Clark, writing in cooperation with a developmental psychologist (Clark and Karmiloff-Smith, 1993) and a connectionist AI scientist (Clark and
Thornton, 1997). My own analysis of the concept of creativity, and of the computational mechanisms underlying it, has prompted research in both AI and psychology (Boden, 2004). Yet more instances are due to Sloman, including his work on temporary representations (which influenced the recent ‘dual pathways’ theory of vision: Goodale and Milner, 1992), on virtual machines and computation, on emotions and mental architecture, and on the space of possible minds. So the appellation ‘cognitive scientist’ covers many people who are primarily regarded as philosophers. There’s no fundamental distinction between those who actually do science and those who restrict themselves to commenting on it. That’s not to say that science can, of itself, answer philosophical questions. But the days when respectable philosophy, and especially the philosophy of mind, was thought to exclude any reference to science should by now be well and truly over.
9
Representation

Georges Rey
Introduction

'Representation' has come to be used in contemporary philosophy and cognitive science as an umbrella term to include not only pictures and maps, but words, clauses, sentences, ideas, concepts, indeed, virtually anything that is a vehicle for intentionality (i.e. anything that stands for, 'means', 'refers to' or 'is about' something). We will abide by this broad usage, and for the most part not distinguish among these different forms of representation and intentionality here. Following many authors, however, we can distinguish original from derived intentionality and representations. Ordinary maps, drawings, paintings, and the words and sentences of natural language, are representations that stand for or are about something by virtue of how they are deliberately used, for usually social purposes, by people and perhaps some animals: their status as representations is derived in complex ways from the ideas and intentions of those people and animals. But what of the ideas and intentions themselves? On pain of regress, their intentionality is not derived from anyone's deliberately using them for such a purpose, but is original. I shall confine the discussion of representation here to representations with original intentionality, and presume that these are all mental ones.1 The issues of original intentionality and representation have become particularly important in the last several decades due to the increasing interest in a computational/representational theory of thought (CRTT) in a wide variety of approaches to psychology from, for example, theories of reasoning and decision-making, to theories of vision, language acquisition and animal navigation. In the second section of what follows, I'll briefly review the needs and prospects of CRTT, turning in the third section to consider a general puzzle about intentionality and representation, before turning in the remaining sections (fourth through sixth) to the difficult issues of determining by virtue of what something represents whatever it represents – namely, has the (representational) content it has.
CRTT

The program

Since at least the demise of behaviourism, it has been taken for granted that people and many animals exhibit all sorts of patterns of behaviour that seem explicable only on the assumption that they are capable of perceiving, remembering, reasoning, planning, decision making, and, for some, systematically expressing their thoughts in natural languages. There are two general properties of such thinking things: the transitions between their states are often rational, and they occur often by virtue of the intentional content of the states. Thus, people are sensitive to deductive, inductive and abductive relations among their thoughts, and to decision-theoretic patterns in much of their planning and behaviour, most of which can only be expressed by reference to the truth-valuable contents of the constituent states. It's standardly truth-valuable contents that are at least one of the relata of 'imply', 'infer', 'confirm' or 'rationalize', as when, for example, the hypothesis that it has rained is confirmed by the evidence of the wet streets, and the act of taking an umbrella is rationalized by a desire that one not get wet.2 Dualists such as Descartes and Brentano have argued that rationality and intentionality aren't assimilable to a general physical theory of the world. However, many people have thought that recent advances in various formalizations of reasoning and computation suggest otherwise, and so have pursued a CRTT. Building on the proposals of Alan Turing regarding the nature of computation, CRTT postulates that there exists a medium of formal representation (a language of thought) and a set of computational processes defined over it that could account for rational (and many irrational) phenomena.3 Since these computations and representations could be physically realized in the brain, it promises an answer at least to Descartes' challenge that reason can't be 'extracted from the power of matter'. There is still the intentionality of the representations to be accounted for, and it might be feared that CRTT depends too heavily on our understanding of artifactual computers, whose intentionality, like that of a book, is derived from the original intentionality of its programmers. To be sure, the plausibility of CRTT is based partly on such an understanding, and frequently specific CRTT proposals are tested by running them on artifactual computers. However, this reliance on artifactual machines is inessential. Indeed, CRTT departs from current research in artificial intelligence (AI) in two important respects: (a) it is not, as AI routinely is, concerned merely with understanding how to get a computer to solve some problem in some way or other; rather, it seeks to understand the actual inner workings (at a computational level) of humans and animals, and (b) it decidedly does not suppose that human or animal
intentionality is derived from the intentionality of any programmers, but has some other naturalistic source, as we'll discuss at length below (fourth through sixth sections). As things stand, CRTT is not so much a theory as a research program. It might be compared to the postulation of atoms as constituents of chemical phenomena, without any clear idea of what the precise character of those atoms might be. It leads us to ask interesting and often empirically testable questions about, for example, the precise character of the medium of representation – its expressive power, the kind of information it needs to represent – as well as about the character of the computations defined over those representations: whether they are serial, parallel, connectionist or dynamic, and to what extent they are modularized or encapsulated from one another. Discovering the subtle principles and algorithms by which we understand the world and adjust our behaviour to it is not something to be expected in our grandchildren's lifetimes, if ever. But CRTT does seem to be the only serious framework in which these issues can be raised.
(Non-)physical (non-)locality

It's important to appreciate both a serious problem that CRTT aims to solve and a constraint on its solution. An organism's sensitivities begin to present difficulties for any purely physicalistic proposal once one considers the detection of non-local, non-physical properties by local physical agents. It is a fundamental fact about the kinds of entities that include people and animals that their causal interaction with their environment is entirely local and physical. In the case of human beings this interaction is confined to six or seven sense modalities, or transducer systems, that convert various physical phenomena into electrochemical impulses and motor systems that convert them back to bodily behaviour. As animals become more intelligent, the relations between these inputs and outputs become increasingly complex, and it becomes increasingly hard to explain how an animal or other device is capable of realizing them. For example, successful models of animal navigation need to assume that the animal can represent some of the spatio-temporal structure of its experience (e.g. how long it has been since it last found food at a given place). Indeed, as Gallistel notes:

[T]he representation of temporal intervals in rats and pigeons appears to be a rich one, in the formal sense of rich. Successful computational models of timing behaviour appear to require decision processes that employ operations isomorphic to the addition, subtraction, and division of represented intervals. (Gallistel, 1990, p. 586)
But how does an animal keep track of temporal structure in a way that enables it to perform these computations and so sensitively modify its behaviour? What must its brain be like to do this? For starters, it had better somehow represent events as having specific temporal properties, and then somehow store those representations for further use in combination with other such representations in ways available for executing the vector algebra of dead reckoning (Gallistel, 1990, 2008). Understanding mechanisms that could be sensitive in these and other ways is one of the fundamental problems for any psychology. Moreover, human sensitivities to stimuli are not confined to non-local properties. As contemporary psycholinguistic research shows, even very young children seem to be sensitive to such categories as noun or verb that are in no plausible sense physical properties of the stimuli (the properties, at any rate, do not appear in physics or physiology). What kind of mechanism could possibly produce such sensitivities? Noun phrases patently do not share any transducible property. It would seem impossible to rig a device to be sensitive to all and only noun phrases in any way other than by building into the device, in one way or another, the principles of grammar, and, with them, at least representations of the relevant grammatical categories over which those principles are defined. Similarly for the plethora of other non-physical properties to which people are obviously sensitive (e.g. timidity, audacity, pomposity; being composed by Beethoven, a fall in corporate profits, a proposal of marriage, or a declaration of war). The only plausible way something could be sensitive to these things is by having a mind, and the hope is that CRTT will explain how this helps by showing how such sensitivity emerges from computations over representations.
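To make Gallistel's point about dead reckoning concrete, here is a minimal sketch in Python (purely illustrative, and not drawn from Gallistel's or anyone else's actual model): each leg of an outward journey is assumed to be represented by a heading, a speed and a duration, and these represented quantities are combined by simple vector arithmetic to yield a homeward vector.

```python
import math

def home_vector(legs):
    """Dead reckoning by path integration: sum the displacement vectors for
    each leg of an outward journey, then return the bearing and distance of
    the vector pointing back to the start. Each leg is a (heading_degrees,
    speed, duration) triple -- arithmetic over represented intervals."""
    x = y = 0.0
    for heading, speed, duration in legs:
        distance = speed * duration
        x += distance * math.cos(math.radians(heading))
        y += distance * math.sin(math.radians(heading))
    home_distance = math.hypot(x, y)
    home_heading = math.degrees(math.atan2(-y, -x)) % 360
    return home_heading, home_distance

# Three outward legs; the result is the bearing and distance back to the nest.
print(home_vector([(0, 1.0, 10), (90, 1.0, 5), (45, 0.5, 4)]))
```

Whatever the animal's actual mechanism turns out to be, the point is that some physically realized states must carry this quantitative information and support operations of roughly this kind.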
Turing and Marr

A crucial feature of Turing's account of computation is that the transitions between states turn on local physical properties (e.g. '0's and '1's that could be individuated by their physical shape or, in your modern computer, by electro-magnetic properties). It is this property of Turing's proposal that enables it to avoid the familiar homunculus objections – wouldn't one need a mind in order to read the symbols? – that have been raised to CRTT by Wittgenstein, Ryle and, most recently, John Searle: the operation of the machine is brute causal, not requiring any intelligence that it can thereby be enlisted to explain.4 By contrast, getting a machine to detect any other categories – say, whether a symbol was beautiful, terrifying, created by an Australian, or a representation of justice – would very likely require a mind, or another computer (i.e. some procedure for proceeding from the detection of local, physical properties to the target ones).
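To see how little is demanded of such a machine, here is a minimal sketch in Python (an illustrative toy of my own, not any machine discussed by Turing or Searle) of a Turing-style device that adds one to a binary numeral. At every step the transition depends on nothing but the machine's current state and the single symbol under its head – a purely local, 'shape'-individuated matter.

```python
def run(tape, head, state, rules, blank="_"):
    """Run a toy Turing machine: each transition consults only the current
    state and the one symbol under the head, then writes, moves and changes
    state. No interpretation of the symbols is required."""
    while state != "halt":
        if head < 0:                 # extend the tape with blanks as needed
            tape.insert(0, blank)
            head = 0
        elif head >= len(tape):
            tape.append(blank)
        symbol = tape[head]
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# A machine that increments a binary numeral, scanning from the rightmost digit.
rules = {
    ("inc", "1"): ("0", "L", "inc"),   # carry: 1 -> 0, keep moving left
    ("inc", "0"): ("1", "L", "halt"),  # no carry: 0 -> 1, done
    ("inc", "_"): ("1", "L", "halt"),  # ran off the left edge: write a new 1
}
print(run(list("1011"), head=3, state="inc", rules=rules))  # -> '1100'
```

Detecting whether a string is a well-formed binary numeral is, in this sense, mechanizable; detecting whether it is beautiful or a representation of justice is not.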
The pioneering work of David Marr (1982) on vision provides a paradigmatic application of Turing's idea, with a richness of ideas and degree of success that has led it to become a paradigm for much work in the CRTT tradition. Roughly, Marr's strategy was to explain visual recognition of familiar objects in terms of computation of intensity values on the retina, proceeding to the determination of continuous lines, or edges of an apparent object, the location of the surfaces of this object in a two-dimensional grid, and then to computation of conical shapes with which the original input is compared. The important suggestion here is that, through mathematical computations of certain idealized figures, co-variant relations may gradually be established with non-local or non-physical properties such as being a rabbit or a chair. What promises to make this account explanatory is the presumption that at every stage a computation is being proposed on a physically specifiable representation whose content, it is hoped, can also be specified by the theory. Unfortunately, like virtually all psychologists, Marr focuses almost entirely on the computational aspects of the processes. There is little said systematically about the issue of the content of the representations themselves. Indeed, it is an odd and in some ways unfortunate sociological fact that by and large the only cognitive scientists who have tried systematically to address this issue are philosophers, and even they have paid comparatively little attention to representation per se, as opposed to attitudes and intentionality generally. This is particularly puzzling given the virtual ubiquity of the notion of representation in psychology and cognitive science. Even though computations can be specified and studied without appeal to content, content is arguably presupposed by any computational theory: in vision, representations of, for example, edges and objects; in decision making, of options, losses and gains; in parsing, of nouns and verbs. Indeed, even the computations of Turing machines are standardly defined over numerals that represent numbers. Yet the specific content of the representations, and their relations to what they represent, are simply stipulated or intuitively taken for granted.
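By way of illustration of the first of Marr's stages, here is a minimal sketch in Python (a crude gradient threshold of my own devising, standing in for Marr's actual proposal of Laplacian-of-Gaussian filtering and zero-crossing detection) of a computation from raw intensity values to candidate edge locations.

```python
def edge_map(image, threshold=50):
    """From a grid of intensity values to candidate edge points: mark any
    pixel whose intensity differs sharply from its right or lower neighbour.
    (A crude stand-in for Marr's zero-crossing detection.)"""
    edges = set()
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            right = abs(image[r][c] - image[r][c + 1]) if c + 1 < cols else 0
            down = abs(image[r][c] - image[r + 1][c]) if r + 1 < rows else 0
            if max(right, down) > threshold:
                edges.add((r, c))
    return edges

# A toy 'retina': a bright square on a dark background yields edges at its border.
image = [[200 if 1 <= r <= 2 and 1 <= c <= 2 else 10 for c in range(4)]
         for r in range(4)]
print(sorted(edge_map(image)))
```

Each later stage in Marr's scheme is likewise supposed to be a computation over a physically specifiable representation, which is what raises the question, pressed in the rest of this chapter, of what fixes those representations' contents.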
Referential Opacity

The philosopher who brought the problems of representation and intentionality to modern attention was the Austrian philosopher Franz Brentano, who noted some logical peculiarities exhibited by a very large class of mental verbs, the so-called propositional attitude verbs, such as 'think', 'notice', 'believe', 'desire' (verbs that take a sentence complement – a 'that . . .' or a 'to . . .' clause – as a direct object).5 These peculiarities are often conveniently referred to by the name of 'referential opacity', to bring out the contrast with normal, referentially transparent verbs.
Most non-mental verbs are referentially transparent insofar as (a) the terms involve a relation between real things: if x kills or kisses y, then both x and y had better exist: you can't kill/kiss something that doesn't exist; this is in contrast with a mental verb like 'think', since you can think that x is angry without there being anything of which you think it: the Greeks thought Zeus was often angry, but there was no Zeus; and (b) whatever objects are related by a transparent verb are so related no matter how the objects are named or described, so long as the names or descriptions of them truly apply. Thus, if Oedipus killed Laius, then, since Laius is, in fact, his father, it follows that Oedipus killed his father. Or, to put it another way, if the expressions 'Laius' and 'his father' both refer to the same thing, then they can be substituted for each other in normal, referentially transparent verbal contexts such as 'x killed y'. Again, this contrasts with a mental verb such as 'think': Oedipus can think he killed Laius without thinking he killed his father; it's the fact that the one claim doesn't imply the other that gets him into trouble. By contrast with 'kills', most propositional attitude terms such as 'x thinks that p', are 'opaque', so to say; the 'light' of reference doesn't shine directly onto the referent, if any, of expressions in the direct object clause of the term, but somehow 'bounces off' the expressions themselves. Now, curiously, representation seems to suffer from one of these problems but not the other. At least on one natural reading, it doesn't exhibit the resistance to substitution: if Oedipus represents Laius then he represents his father, whether he knows it or not (although see note 8). But, on the other hand, it seems to exhibit resistance to existential generalization: from the fact that Oedipus represents Zeus it doesn't follow that there exists something such that Oedipus represents that thing, there being no Zeus. Like the propositional attitudes, representation can be about nothing, at any rate nothing real (and what else is there?). To be as neutral as possible about the complex issues raised here, we need to allow that there is a crucial ambiguity in ways of talking about what representations represent. On the surface 'represent' would appear to be simply a two-place relation, as in:

(a) The word 'cats' represents cats.

But this can't be quite right, since

(b) The word 'Zeus' represents Zeus.
would then be false, for lack of Zeus: you can't bear a real relation to something that doesn't exist. But there's surely a reading of (b) that makes it true, since, again, 'Zeus' is not meaningless. It's merely 'empty' (which I'll confine to meaningful expressions). So what does an empty term like 'Zeus' represent? Well, it's an interesting fact that an almost universal response is that of Quine's (1953a) fictitious philosopher, 'McX': it represents 'an idea in your head', as many people might be inclined to put it. But this is absurd, since (a) whatever else might be in your head, there are certainly no bearded gods there; and, in any case, (b) if Zeus is an actual idea in your head, then Zeus would turn out to exist after all! There have been a wide variety of replies to this puzzle.6 Again, to maintain neutrality between disputes, we should simply note that the word 'represent' (and, for that matter, virtually any intentional idiom) seems to suffer from a systematic ambiguity along the following lines:

(REP) (a) If we are talking about a representation, x, of some real thing, y, then we often take x to represent that real thing, y; thus 'Nixon' represents the actual man Nixon. (b) When there isn't, as in the case of 'Zeus', then we rely on talk about the content of the expression 'y' (which I will abbreviate by placing brackets around an expression, e.g. [Zeus]).

The first usage might be called the 'existential', the second the '(purely) intentional' usage of 'represent' (and other intentional idioms).7 I've expressed the second, intentional use with deliberate vagueness. It would be tempting to say 'so a purely intentional use of "representation of y", for lack of any y, is really about an intentional content'. But this wouldn't be correct, since someone thinking about Zeus and his philandering ways isn't thinking about the philandering ways of an intentional content. Speaking more carefully, we should say something such as: when x represents a y that doesn't exist, a person is standing in the thinking relation to the content [y]; but this doesn't entail she is thinking about [y]. Even in a purely intentional usage, 'thinking about y' is one thing; 'thinking about [y]' quite another. Arguably, psychology is concerned with pure intentional content, even in cases that might be described existentially. Although it's convenient to describe navigating birds and bees as representing, for example, the azimuth of the sun (e.g. see Gallistel, 1990), this seems to carry the misleading suggestion that the birds and the bees actually represent the sun as the sun, which is doubtful; presumably they don't really have the concept [sun], and would react just as well to lights in a planetarium.8
Providing an account of intentional content, what animals represent things as, is, however, no easy matter. There have been two main strategies, internalist and externalist.
Internalist Strategies

One of the most natural ideas about meaning is that it is some sort of introspectible idea inside one's mind and/or head, say, an image or an inclination to make one inference rather than another.
Images and stereotypes

The idea that mental representations are images, or perhaps 'maps by which we steer' (Lewis, 1994) can be traced back to Aristotle (see Cummins, 1989). There seems to be a presumption that imagistic representation is somehow unproblematic: an image, such as a green triangular one, represents what it resembles. As appealing as this idea has been, it won't get us very far. In the first place, one doesn't find entities that are actually triangular and green in the brains of people who think about green triangles. Perhaps talk of resemblance is just a way of talking about a correspondence between features of neural events and real world properties, but then the naturalization problem is the problem of specifying why one correspondence rather than another provides the correct interpretation: as Wittgenstein (1953) noted, there are infinitely many projections from a triangle to arbitrary objects in the world. Moreover, even if some mental representations can be usefully thought of as images, it is extremely implausible that all, or even a significant proportion of representations can be. Images simply don't combine to produce logically complex images. What image, for example, could represent a negative fact, as in the thought that there are no green triangles? A green triangle with a black X superimposed upon it? How is that to be distinguished from an unnegated image of a triangle with a black X superimposed on it? And the problem, of course, only gets worse when one considers conditionals, quantified sentences, modal claims, and all the indefinitely complex thoughts that people are manifestly capable of thinking (e.g. try forming a distinctive image for 'If not every green triangle is inside a square, then either all the figures are illusory or I'm blind'). And over all this there hovers the problem of abstract ideas, such as those of uncle, number and justice; what are the distinctive images for them? While imagistic representations might well play some role in some cognitive processes, they simply seem inadequate for any serious logical thought. Moreover, as we already noted, in a CRTT, computations are defined over local
physical properties, not over non-local properties such as distances between points in an image. Sentences seem to be the only physically realizable objects that begin to have anything like the requisite expressive and computationally tractable properties.
Conceptual roles

One kind of proposal that has been immensely popular in the twentieth century is that the meaning of a representation is determined by its epistemic (evidentiary and logical) conceptual role in reasoning. Logical operators provide the most plausible examples; thus, a certain expression '#' might plausibly mean [and] in virtue of the fact that thinking 'P#Q' tends to cause thinking 'P' and thinking 'Q', which in turn tends to cause thinking 'P#Q' (cf. Peacocke, 1992). However, extending the account past [and] merely to other logical particles is difficult. There are substantial disputes about 'or' and the law of excluded middle, as well as about how to understand conditionals. Are we to take these differences to be differences in the meaning of the logical words, or simply in the theories that people have about them (cf. Williamson, 2006)? Moving away from purely logical cases, the problem seems even more daunting. With a little imagination, it seems always possible to construct a story whereby someone (particularly a philosopher) could reasonably deny a standard role for a concept while still seeming to possess it, simply by having a sufficiently bizarre theory about the world. Thus, some creationists seem to be denying that humans are animals; some nominalists, that numbers are abstract; and some idealists, that tables are material objects. There would seem to be no epistemic connection so secure that, with a little ancillary theory, someone couldn't break it and yet still be competent with the relevant concept. So how could some epistemic connection provide the meaning of a concept? A further problem, stressed by Fodor and LePore (1992), is that conceptual roles don't seem compositional in the way that content ought to be. The content [pet fish] ought, after all, to be some kind of compositional function of the content [pet] and the content [fish]. However, it's not at all clear that the conceptual role of [pet fish] is a compositional function of the role of [pet] and the role of [fish]; there may well be inferences that are peculiar to pet fish that are not shared by pets or fish alone (e.g. only pet fish live in bowls). It is tempting to try to limit the relevant roles to certain constitutive inferences. The most famous effort to do this was the verifiability theory of meaning, which was an attempt to spell out the meaning of a concept in terms of the sensory evidence that would confirm or disconfirm its application. It is widely thought that such a theory encountered insuperable problems. For purposes here the most serious were Quine's (1953b) observations about confirmation
holism: claims are (dis)confirmed not individually, but only in conjunction with a great many other claims, making it difficult to see how to isolate certain epistemic connections as constitutive of the meaning of a specific claim. This was the heart of Quine's famous challenge to any theory of meaning, and to efforts to distinguish analytic claims, such as that bachelors are unmarried, whose truth seemed to be due to meaning alone, from claims that were simply tenaciously believed.9 These observations about belief variability and confirmation holism have led many advocates of a conceptual role semantics to the desperate measure of semantic holism: all of a term's epistemic connections are constitutive of its meaning. But this seems to make havoc of psychology. It would entail that any change in any of a person's beliefs would ipso facto be a change in the content of all of their thoughts. Since by accretion, reasoning or forgetfulness, our beliefs are constantly changing, no one would ever have the same thoughts twice; indeed, one wouldn't remember anything with the same content. It would be a cosmic coincidence if two people ever shared a belief, since to agree on anything they'd have to agree on absolutely everything. There could be no serious generalizations about mental states, not even of the well-confirmed kind regarding visual illusions. This was perhaps no problem for Quine, who was a behaviourist and sceptical of intentional psychology. However, it is a serious problem for post-behaviourist cognitive approaches such as that of CRTT.
A general problem for internalism

In addition to Quine's scepticism about distinguishing matters of meaning from mere matters of belief, a general problem with any purely internalist theory emerges from considering the observations of Kripke (1972/1980), Putnam (1975), and Burge (1979) regarding the ways in which the meanings of many terms depend crucially upon the environment of the agent. Putnam famously imagines a planet called Twin Earth that is exactly like the earth in every respect (including history) except for having a novel chemical compound, XYZ, everywhere that the earth has H2O. He claims that, since in fact water is H2O, there is no water on Twin Earth. Now consider some Earthling adult, Sophie, who knows no chemical theory, and her twin, Twin Sophie, on Twin Earth, and suppose (along the improbable lines of the story) that they are molecule-for-molecule duplicates of each other. Certainly, there would seem to be no internal psychological differences between them. And yet, if XYZ is not genuine water, then Sophie and Twin Sophie are not referring to the same substance when they use the word 'water'. (Perhaps a clearer example would
be their use of 'Aristotle': where Sophie's use refers to the Ancient Greek philosopher, Twin Sophie's refers to Twin Aristotle!). If meaning is what determines extension, then, as Putnam pithily put it, '"meanings" just ain't in the head' (Putnam, 1975, p. 227). Whatever one thinks of Putnam's examples, still a further problem can be raised in terms of a general question for any theory that takes thought to be formal symbol manipulation. Imagine a computer that used exactly the same program one day to play chess, the next to fight a war; there would seem to be no purely internal facts that would distinguish what it was representing on the one occasion vs. the other. It would seem that something external to the computer must determine the meaning of at least some of its representations.
Externalist Theories

Interestingly enough, the suggestion that meaning might be something external seems to be every bit as natural as the idea that it's internal. One of the oldest theories of meaning is what is sometimes called the Fido/Fido theory, according to which the meaning of a representation is the object for which it stands. Obviously, so stated, the theory is false; for starters, there's the problem of empty representations such as 'Zeus' that we mentioned earlier. However, there are sophisticated versions of it, ones that appeal to the actual causal history of a representation; ones that appeal to co-variational relations between a representation and phenomena in the world; and teleofunctional theories that appeal to evolutionary selectional processes.
Historical causal theories

Critics of internalist semantics proposed an alternative semantic picture (few claimed a serious theory here) whereby the reference of a token term is determined by actual causal chains linking a speaker's use of a particular word, via earlier users of that word, to a use of it to dub the thing (object, substance, kind) to which the speaker thereby refers. Kripke (1972/1980) and Putnam (1975) vividly argued for such an account of proper names and for natural kind terms. Thus, Kripke argued that what determines the reference of a name such as 'Aristotle' is a chain of uses of the name that extends from present uses all the way back to Aristotle's original dubbing with the name, and Putnam suggests a similar story about water that takes earthlings back to H2O and Twin Earthlings back to XYZ. Although actual causal histories may well have some role to play in a theory of reference, it is widely recognized that such histories are not in themselves
sufficient for a naturalistic theory, since at every stage of such causal chains there are events (such as ostensions, dubbings, communications, understandings) that require intentional characterization. For example, they would seem to require that the original dubber had one thing rather than another in mind on the occasion of the dubbing. But what determines that it was the infant, Aristotle, and not, for example, his nose, or the kinds Greek, human being, or animal, that Aristotle's parents had in mind? All these things, after all, are equally in the causal path described by the dubber's ostending finger. (This is the qua problem, discussed at length in Devitt and Sterelny, 1987.)
Co-variation locking theories

A natural answer to the question of what a dubber dubbed might be: whatever kind of thing she would discriminate as that thing with the term; that is, whatever she would apply the term to, as opposed to everything she wouldn't. Along these lines, a number of philosophers have considered ways in which states involving token expressions might have, in addition to actual causal histories, certain counterfactual dispositional properties to co-vary with certain phenomena in the world. Intentional meaning is treated as a species of so-called natural meaning, the kind of meaning that is said to obtain between dark clouds and rain, red spots and measles, expansions of mercury in a thermometer and ambient temperature. One event naturally means another if there is a causal law connecting them, or, as Dretske (1981) put it, the one event 'carries information' about the other. A sentence, on this view, means what it carries information about: the sentence 'it's raining' means that it's raining, since it carries the information about (causally co-varies with) the fact that it's raining.10 Nevertheless, so stated, the view is open to several immediate objections. As stated, almost everything would mean something, since almost everything is reliably caused by (and carries information about) something. So there must be some further condition on genuine mental meaning. Here, most tokenings of representations are produced in the absence of the conditions that they nevertheless mean: 'That's a horse' can be thought on a dark night in the presence of a cow, or just idly in the presence of anything. In his influential discussions Fodor (1987, 1991b) calls these latter usages 'wild'; the property whereby tokens of symbols can mean things that aren't on occasion their actual cause he calls 'robustness'. The problem for any co-variational theory is to account for robustness. In doing so it needs to solve what has come to be called the 'disjunction problem': given that among the causes of a symbol's tokenings there are both meaning-forming and wild causes, what distinguishes them? In particular, what makes
it true that some symbol ‘F’ means [horse] and not [horse or cow on a dark night], or [horse or cow on a dark night or w2 or w3 or . . . ] (where each wi is one of the purportedly wild causes)?11 Several proposals have been advanced for handling the disjunction problem. They have in common trying to constrain the occasions on which the nomic connection is meaning-forming.
Ideal co-variation

A natural suggestion is that the meaning-constitutive conditions are those that obtain under some optimal conditions, conditions that obtain when nothing is interfering with the belief formation system. One of the first such theories was that of Stampe (1977), which was taken up by Stalnaker (1984) and independently proposed (and then later rejected) by Fodor (1980/91, 1987). The attraction of such a theory lies in its capturing the idea that two individuals' meaning the same thing by some symbol consists in their agreeing about what it would apply to were everything else about the world known. Their disagreements are to be explained as due to their limited epistemic positions and reasoning capacities, which interfere with their being as omniscient as the right conditions for agreement would require. Insisting on such distinctions is like insisting on a distinction between guided missiles that end up at a certain location because that's where they were aimed and those that, aimed elsewhere, ended up there because of an error in navigation. Wherever they happen to end up, the missiles are locked onto a certain destination. Similarly, terms have a certain meaning by virtue of being locked onto a certain phenomenon in the world. Such theories in this way suggest a way of isolating semantic stability from issues of epistemic differences. In particular, a co-variation theory allows us to capture what in the world an agent is getting at in her use of a symbol, isolating that from her relative epistemic success or failure in reaching it. It thereby provides a basis for psychologically important predictions about how an agent will react to further evidence and argument, distinguishing rational from merely verbal revisions of thought. Although such theories might not be straightforwardly false, they do seem to be subject to a number of difficulties, the chief one consisting in the circularity that seems unavoidable in specifying the optimal conditions: it's hard to see how to specify the conditions without employing the very intentional idiom the theory is supposed to explain. For example, it is difficult to see how to rule out the interference of other intentional states (e.g. the aforementioned bizarre theories that are the bane of conceptual role accounts, or the cooperation
of certain intentional states, such as attending to, thinking of, wanting to get things right). In order to avoid these problems Fodor (1987, 1990c) went on to propose another kind of co-variation, what has come to be known as the 'asymmetric dependency relation'. Although it makes no explicit appeal to ideal epistemic conditions, much of its motivation can be appreciated by thinking of the ideal co-variational theory in the background.
Fodor's asymmetric dependencies

According to the ideal co-variational theory, under epistemically ideal conditions, tokenings of a predicate co-vary with the property it expresses. But, of course, tokens of it may also be produced by many properties it doesn't express. Tokens might be produced by things erroneously taken to have the property, by things associated with the property, by mere thoughts about the property, etc. Now, one way to understand the asymmetric dependency theory is first to notice that, plausibly, all these latter cases depend upon the ideal case but not vice versa. The wild tokenings depend upon the ideal ones, but the ideal ones don't depend upon the wild ones (getting things wrong depends upon getting things right in a way that getting things right doesn't depend upon getting things wrong). Thus, the property horse causes a tokening of some symbol C because some horses (e.g. those at the far end of the meadow) look like cows and, under ideal conditions, cow causes C. Milk causes C because seeing some milk causes one to think milk, and this reminds one of where milk comes from, which, under ideal conditions, would be the sort of thing that causes cow tokenings. So formulated, of course, the account still mentions ideal conditions, and these Fodor has conceded cannot be specified non-circularly. His further audacious suggestion is that mention of the ideal conditions here is entirely inessential: the structure of asymmetric causal dependency alone, abstracted from any specific conditions or causal chains, will do all the required work! Specifically, all we need do is existentially generalize over the ideal epistemic conditions, thereby avoiding the need to specify them. To a first approximation:

A representation R means [p] only if it's a law that, under some conditions, R is entokened if p, and all other tokenings of R asymmetrically depend upon this law.

Thus, C means [cow] only if, under some conditions, C is entokened if there's a cow present, and tokenings of C when cows aren't present asymmetrically depend upon this law.12 An abundance of objections were raised to this theory in the 1990s (e.g. see Loewer and Rey, 1991; Loewer, 1997) to which Fodor made abundant and often ingenious replies (see, e.g., Fodor, 1987, 1991b). We shall confine our
attention here to problems that arise for any externalist theory but only after setting out another important class of them.
Teleofunctional theories

Millikan (1984), Papineau (1987), Dretske (1988) and Neander (2004) have been working on a general account of meaning based upon the role mental states play in a biological account of the evolution and life of an organism. On this view, intentional states possess certain information-carrying, evolutionarily determined functions, even if there are no conditions under which they presently execute them. These functions fix the state's intentional content. Thus, a frog's tokening F whenever a black speck crosses its retinal field might be interpreted as meaning [fly], since it's flies that are responsible for the survival of frogs. Such teleological ideas can be combined with co-variational approaches. Thus, Dretske (1988) proposes treating meaning as the recruitment of a co-variation between an internal state and a worldly phenomenon on behalf of some adaptive response. The fact of the co-variation itself (say, between a state of the frog and the motion of a fly) is an explanatorily significant cause of the frog's shooting out its tongue at such nutritious prey. All such teleofunctional approaches face some general problems. A worry that has been much discussed about selectionist theories in general is that they are Panglossian in assuming that all useful traits were evolutionarily selected. Critics of teleosemantics, like Fodor (1987, 1991b), argue that we have no reason to think that large parts of thought and language, and their meaning, may not be unselected effects of whatever was selected, and so of no help in determining semantic content. Indeed, certain properties of thought and language do seem to outstrip any pressures from natural selection. Insofar, for example, as many of our concepts involve commitments to full potentially infinite universal quantifications, it is difficult to see how they, as opposed to more modest finite cousins, could have been selected by a finite history (see Peacocke, 1992, p. 131). But perhaps the most serious worry about such teleofunctional approaches to content is that, given the vicissitudes of natural selection, the wrong contents might get assigned to psychological states. Pietroski (1992) imagines a case in which an animal gets attracted by red flowers on high hills and increases its selectional advantage by avoiding predators in the low lying valleys. The teleofunctionalist would appear to be committed to claiming that the state of responding to the red flowers actually has something like the content [avoid predator], even if the animal couldn't actually recognize a predator when face to face with it. Standard selectionist explanations of intentional states tend to presuppose the contents of such states and so would be unable to explain them.
General problems with externalism13

A general difficulty that arises for any pure externalist account is that the distinctions of the mind seem to outrun the distinctions that the external world independently provides. There are not only distinctions among concepts (such as [renate] and [cordate]) that happen in the real world to be co-extensive, but there are distinctions among necessarily co-instantiated concepts (i.e. concepts that are instantiated in all the same possible worlds and/or counterfactual situations). There are two kinds of such co-instantiated concepts: those expressed by necessarily co-extensive terms, such as 'triangle' and 'trilateral', 'eucalyptus' and 'gum', 'circle' and 'locus of co-planar points equidistant from a given point' (whatever satisfies one in each of these pairs necessarily satisfies the other); and what might be called the 'necessarily co-divided' ones, such as 'rabbit' vs. 'undetached rabbit parts' vs. 'temporal stage of a rabbit'. Different things satisfy each of these three expressions, but whenever an agent is presented with something that satisfies one of them she's presented with something that satisfies the others (cf. Quine, 1960, Chapter 2; Gates, 1996). Or consider simply the phenomenon of subitizing, whereby many animals are able to recognize groups of things of certain modest cardinality (Gallistel, 1990, Chapter 10). Do such animals plausibly have the same concept [three] that people have? There's this reason to think not: unlike other animals, most people have a concept controlled by general principles, for example, Peano's axioms for arithmetic (e.g. zero is a number; every number has a successor), whereby we can be led by reasoning into understanding a potential infinity of complex arithmetic truths. Recalling our earlier discussion of empty terms and purely intentional uses of 'represent', a particularly crucial set of cases of necessarily co-instantiated concepts are the necessarily uninstantiated ones (e.g. [largest prime], [round square]). One way of dealing with such cases is to treat them as logically complex, so [largest prime] would actually involve a logically complex construction out of symbols meaning [large] and [prime] (see Fodor, 1990c). However, there seem to be plenty of non-complex cases. As Plato pointed out, no one ever perceives a genuine circle: all the figures we could possibly encounter are only very crude approximations, at best ragged complex polygons.14 Moreover, although laws may arguably relate properties that happen to be non-instantiated (e.g. some specific mass that nothing ever happens to possess), if one limits oneself to properties that genuinely figure in scientific laws, it would be difficult to see how there could be laws about, for example, unicorns, angels or ghosts. It's not at all clear that these things are even metaphysically possible, much less sufficiently possible for there to be scientific laws relating them to entokenings of representations.15
Still another problem for a pure co-variation theory is presented by concepts involving varying degrees of response-dependency, such as [shameful], [funny], [bizarre]. The problem here is that the same response-dependent concept can cause its possessors to lock onto different phenomena. After all, it is a commonplace that different people find different things funny, shameful, tragic, bizarre; and this is not in all cases plausibly due to differences in their epistemic position. You and I may disagree about what's funny, well, just simpliciter, perhaps as a result of simply brute differences between our nervous systems. Consequently, there is no reason to expect convergence even under any (ideal) circumstances on which all other uses of such concepts asymmetrically depend, despite our patently sharing them. For all these reasons it would seem that any externalist theory will need to be supplemented by some facts about a term's inferential role: 'square' bears a direct relation to 'four sides' that 'circle' lacks; 'undetached proper rabbit part' and 'rabbit' play different roles in mereological inferences; [funny] is tied to laughter whether or not people ascribe it to the same things. For all the interest of externalist intuitions and the problems with internalist ones, there seems to be something semantic in the head. So perhaps the internalist and externalist strategies ought to be combined.
Ecumenical Approaches

There are two ways to combine the two strategies: two factor theories and appeals to basicality.
Two factor theories: narrow and wide content

Two factor theories posit both an internal and an external factor in meaning. One suggestion might be to allow the representational vehicle itself to be a component of content. This suggestion seems a natural way of distinguishing necessarily co-instantiated concepts that have different constituent structure. Thus, the thought that water is wet is distinct from the thought that H2O is wet by virtue of the fact that the content of the one thought involves a syntactically complex expression, 'H2O', and the other doesn't. However, such a suggestion by itself is unlikely to suffice for all the cases; after all, there can be logically simple expressions – names, simple predicates – that are necessarily co-extensive ('Mark Twain'/'Sam Clemens', 'Zeus'/'Jupiter', 'eucalyptus'/'gum'), and although they might be associated with different contents within the mind of a single person, it doesn't seem as though they should always express different contents across people. Indeed, as the above examples
show, different people could have the same thought about Sam Clemens, Zeus or gum trees without the vehicles of thought being actually spelt the same! Intuitively, all that would seem to matter is that the role of the vehicles in their thought be the same.16 Consequently, some two-factor theorists (e.g. Loar, 1981; Block, 1986a) suggest that the narrow component be identified with a term's conceptual role, along the lines – but also subject to the difficulties – sketched above. Another suggestion is to identify the narrow content with a rule in the head that – along the lines of Grice (1961/65) and Putnam (1975) – may leave some sort of 'blank space to be filled in by the specialist', a kind of indexical element that permits a full semantic content to be determined by the context with which the agent interacts, much as the semantics of indexical terms such as 'I', 'now', 'this' and 'that' do (cf. Kaplan, 1989). White (1982) and Fodor (1987) develop this strategy, generally identifying the narrow content of a LOT expression with a function (in the set theoretic sense) that maps a context onto a broad content. For example, the narrow content of Sophie and Twin Sophie's 'water' is the function that maps Sophie's context onto H2O and her twin's context onto XYZ. When Sophie utters 'Water is wet', she thereby expresses the content [H2O is wet], while when Twin Sophie utters it she expresses the content [XYZ is wet]. Two symbols have the same narrow content just in case they serve to compute the same such function: it is this that is shared by Sophie and her twin. (See Chalmers (forthcoming) for further development of this strategy.) Of course, this strategy will be subject to the same Quinean worries we raised earlier with regard to conceptual role theories: how do we distinguish those roles that are essential and constitutive of meaning from those that are mere matters of belief? A tentatively promising approach to those worries has recently emerged in the work of Paul Horwich and Michael Devitt.
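As a toy illustration of that set-theoretic idea (the contexts and contents below are my own invented stand-ins, not anything proposed by White, Fodor or Chalmers), the narrow content shared by Sophie and Twin Sophie can be modelled as a single function that different contexts map onto different broad contents:

```python
# Narrow content modelled as a function from a context of use to a broad content.
# Sophie and Twin Sophie share the function itself; their different contexts
# supply the different broad contents (H2O vs. XYZ).
def narrow_content_water(context):
    """Map a context onto the broad content expressed by 'water' there."""
    local_clear_liquid = {"Earth": "H2O", "Twin Earth": "XYZ"}
    return local_clear_liquid[context]

print(narrow_content_water("Earth"))       # -> H2O
print(narrow_content_water("Twin Earth"))  # -> XYZ
```

On this picture, sameness of narrow content is sameness of the function computed, while broad content is whatever value the function takes in the thinker's actual context.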
Basicality?

Along lines strikingly similar to Fodor's asymmetric dependency proposal, Horwich (1998, 2005) proposes to treat meaning as 'the property of the use of a word that is explanatorily basic: the one that best explains all the other use properties of the term' (Horwich, 1998, p. 41), and provides a number of examples (see Horwich, 1998, pp. 45, 129): the basic property for 'and' is a tendency for x to accept 'p and q' if x accepts both 'p' and 'q'; for 'red', a disposition to apply 'red' to an observed red surface; for 'one', holding true Peano's axioms; for 'Aristotle', holding true 'This is Aristotle', pointing to Aristotle.
Although one might quarrel with the examples and worry about the deflationary context in which Horwich proposes his view, there seems to me something right about it which could be applied to CRTT, at a first pass, along the following lines:

(BAS) The content of a representation is determined by the property of a meaningful tokening of a term that is explanatorily basic: the one on which all other tokens with that meaning asymmetrically/explanatorily depend by virtue of that property.

Note that the explanatorily basic property need not be a purely internal one but might well involve relations to external phenomena. That is, (BAS) has Fodor's proposals potentially as special cases, cases in which the basic properties are ones about actual language use, or about how symbols manage to be locked onto actual phenomena in the world. (BAS) is simply not limited to such cases.17 Although (BAS) is by no means a reduction of intentionality (intentional notions are still mentioned in it), it still does some important work. Insofar as basic properties are sufficiently local, it permits conceptual stability despite wide epistemic and other sorts of surface variation in how people use words and concepts. The issue is not whether people agree in their surface behaviour but whether their responses are controlled by the same basic properties, an issue not so easily addressed. And (BAS) allows for empty concepts such as [unicorn] and [circle], and response-dependent ones such as [funny] and [good], where the basic properties seem to be mostly in our internal responses, not in the variable things to which we are responding. (BAS) also concedes to Quine that there may well be no adequate way to define theoretical terms such as 'electron' or 'species', since the basic facts in these cases may be precisely as theoretically diffuse as Quine's holistic view of theoretical confirmation emphasizes. But on the other hand it may allow for some local basic facts of the sort that seem to explain the intuitions about meaning that people have about trivial cases such as 'bachelor'. Thus, it seems to capture not only internalist and externalist intuitions, but also what was always reasonably driving a Quinean scepticism about intentional content. If this is correct, then it may well be at least the most ecumenical strategy to pursue in trying to provide an adequate theory of the content of mental representations.
10
Mental Causation

Neil Campbell
In a seminal paper on the problem of mental causation Jaegwon Kim helpfully characterizes the issue in terms of the following question: 'How is mental causation possible given X?', where X is an assumption we have some independent reason to respect which makes mental causation prima facie problematic (Kim, 1990a, p. 121). For the past forty years or so the formulation of Kim's question that has dominated the philosophical scene is the following: 'How is mental causation possible given non-reductive physicalism?' The widespread commitment to non-reductive physicalism is due primarily to considerations about multiple realization and mental anomalism, which I will treat as working assumptions for the sake of this discussion. If we deny that mental properties are reducible to physical properties, however, it is unclear how the mental can play a genuine causal role in the production of human physical behaviour. For the most part this difficulty has been articulated in one of two ways. The first is driven by considerations about the nomological character of causation and has been formulated primarily as a challenge to Donald Davidson's version of non-reductive physicalism, anomalous monism (Davidson, 1970). The second, which is articulated in terms of exclusion pressures, is broader in scope and is owed principally to Jaegwon Kim (1988, 1989a, 1990a, 1993a, 1994, 1998, 2005). What both versions of the problem share is the conclusion that a robust account of mental causation seems impossible if we deny that mental properties are reducible to physical properties. Indeed, both lines of argument purport to show that non-reductive physicalism leads to type epiphenomenalism, the causal inefficacy of mental properties. My goal in the sections to follow is to sketch out these two versions of the problem and to explore some ways of dealing with them. My discussion is divided into three sections. In the first section I outline anomalous monism and the objection that it entails epiphenomenalism. In the second section I provide a sketch of the exclusion argument against non-reductive physicalism. In the third and final section I show that both arguments against non-reductive physicalism rely on questionable metaphysical assumptions about the nature of events that render them either misguided or question-begging.
Anomalous Monism and the Problem of Mental Causation

In his paper ‘Mental Events’ Davidson (1970) sought to reconcile three claims that appear to be true yet seem to be mutually inconsistent:
1. At least some mental events interact causally with physical events.
2. Events related as cause and effect fall under strict laws.
3. There are no strict psychophysical laws.
The apparent inconsistency is that the truth of (1) and (2) entails the falsity of (3). If a mental event such as my deciding to close the door causes the door to close then (2) seems to imply that there ought to be a law connecting my deciding to close the door and its closing, but this is just what the third claim denies. Davidson’s method of reconciliation involves a particular understanding of the second claim. According to Davidson, when events stand in a causal relation they have true descriptions that instantiate a strict law; not every true description of the events is amenable to the formulation of such laws. In fact, given the holism of the mental and the rational principles that guide mental ascription, Davidson argues that mental vocabulary is unsuitable for the formulation of strict laws. Since only physical predicates are appropriate for the formulation of strict laws and mental events enter causal relations with physical events, it follows from (2) that mental events have physical descriptions and hence are themselves also physical events. Since there are no strict psychophysical laws, mental concepts cannot be reduced to physical concepts, so we have an ontological reduction of mental to physical events without a conceptual reduction of mental to physical properties. Since it is individual events and not properties that are identified, Davidson’s anomalous monism is a token rather than type identity theory.
Davidson’s brand of non-reductive physicalism would seem to provide a simple and elegant account of mental causation. Mental events such as decisions or choices cause other events, including physical events, because they are themselves physical events. A number of critics (Honderich, 1982, 1983, 1984; Hess, 1981; Stoutland, 1976, 1980, 1985; Kim, 1989a, 1993a; Antony, 1989), however, have argued that anomalous monism entails the inefficacy of mental properties and consequently fails to provide an adequate account of mental causation. Although this argument takes many forms, the basic reasoning is roughly as follows.1 Davidson faces a dilemma when it comes to the issue of mental causation. At the heart of the dilemma is the observation that we ordinarily distinguish between the properties of an event or object that are causally responsible for the production of a given effect and those that are irrelevant. For example, if I throw my glass on the floor and the impact of the glass against the concrete
causes the glass to shatter, some of the properties of the cause seem to matter and some don’t. The facts that the glass was blue and that it contained water seem peripheral to the shattering, whereas the velocity at which the glass was travelling when it struck the floor, the angle of impact and the structure of the glass seem more important. When it comes to identifying the law that connects events like the first with events like the second it seems only natural to suppose that the latter rather than the former properties will be implicated. That is, there is more likely a law connecting the structure, velocity, and angle of impact of the glass with its breaking than one couched in terms of the colour and contents of the glass. Since mental events are, according to Davidson, identical to physical events, it seems that mental events have both mental and physical properties. Given the example of the shattering glass it is reasonable to suppose that when there is causal interaction between a mental event and a physical event we should be able to identify which properties of the mental event enabled it to play the causal role it did. This is often expressed in the form of the question, ‘Is it the mental event as mental (i.e. in virtue of its mental properties) that causes behaviour, or is it the mental event as physical that has causal efficacy?’ Davidson’s claim that the only strict laws there can be are physical laws suggests that it is in virtue of the event’s physical properties that it caused what it did. That is, if it was in virtue of the law-engaging physical properties of the impact of the glass against the floor that the glass shattered, by analogy it seems reasonable to suppose that it is in virtue of the law-engaging physical properties of a mental event that the event caused what it did. While this is consistent with Davidson’s three claims, his critics think this falls too far short of a robust account of mental causation. For what this first option means is that mental events cause behaviour solely in virtue of their physical (i.e. neurobiological) properties. That is, when I decide to get up for a drink and then rise from my chair my rising is caused by the event that was my deciding, but my behaviour is not caused in virtue of the fact that the cause was a deciding, was a desire for a drink, or in virtue of any of its mental properties. These are all as irrelevant to the production of the effect as the colour and contents of the glass were irrelevant to its shattering. This hardly seems like mental causation anymore. As Jerry Fodor once famously said,

If it isn’t literally true that my wanting is causally responsible for my reaching, and my itching is causally responsible for my scratching, and my believing is causally responsible for saying . . . if none of that is literally true, then practically everything I believe about anything is false and it’s the end of the world (Fodor, 1990b, p. 156).

If mental events cause only in virtue of their physical properties then it isn’t literally true that Fodor’s wanting is causally responsible for his reaching;
certain physical properties of the event are causally responsible for his reaching. For many of Davidson’s critics this is not good enough. Since the first option is not very appealing, what if one argued for the other, according to which mental events cause in virtue of their mental properties? This would certainly address Fodor’s concern, for then his wanting would literally be causally responsible for his reaching because it would be in virtue of the fact that his wanting has the mental properties it does that he in fact reaches. However, this option fares no better for the Davidsonian because it entails psychophysical laws. This is because, as we saw earlier, it seems that events cause in virtue of their law-engaging properties. If a mental event causes in virtue of its mental properties, then this reintroduces psychophysical laws, contradicting Davidson’s third claim. Worse still, the reintroduction of psychophysical laws revives the possibility of psychophysical reduction. So although claiming that mental events cause in virtue of their mental properties might provide a robust account of mental causation, it does so at the cost of mental anomalism and, at least potentially, of non-reductive physicalism itself. Neither of the two options, then, is attractive to the non-reductive physicalist, in which case it looks like Davidson’s brand of non-reductive physicalism stumbles on the question of mental causation.
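Schematically, and purely as a summary of the critics’ reasoning just rehearsed (the labelling of premises and horns is mine, not Davidson’s or his critics’), the dilemma can be set out as follows:

\[
\begin{array}{l}
\text{(P1) Mental event } m \text{ causes physical event } p.\\
\text{(P2) Events cause what they do in virtue of their law-engaging properties.}\\
\text{(P3) Only physical properties can figure in strict laws (mental anomalism).}\\[4pt]
\text{Horn 1: } m \text{ causes } p \text{ in virtue of its physical properties; then its mental properties are causally idle.}\\
\text{Horn 2: } m \text{ causes } p \text{ in virtue of its mental properties; then, by (P2) and (P3), there are strict}\\
\quad \text{psychophysical laws after all, contradicting mental anomalism.}
\end{array}
\]

Either way, the critics conclude, the anomalous monist is left without a robust account of mental causation.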
Mental Causation and the Exclusion Principle

The second formulation of the objection to non-reductive physicalism is based on Kim’s principle of causal-explanatory exclusion and is intended to be more far-reaching than the argument just examined, for Kim thinks the exclusion argument is a problem for any version of non-reductive physicalism, not just Davidson’s. At the heart of the argument is Kim’s principle of explanatory exclusion, which states that ‘there can be no more than a single complete and independent explanation of any one event’ (Kim, 1988, p. 233). Kim considers an explanation to be a complex of statements that can be divided into explanans and explanandum propositions, where the explanandum is the proposition in need of explaining and the explanans the proposition that does the explaining. However, just because explanations are defined in terms of propositions, this does not mean one should think of them as arguments or as logical derivations with the explanandum as the conclusion in the way Hempel did (Hempel, 1963, 1965, 1996; Hempel and Oppenheim, 1953). In Kim’s view Hempel’s approach leads to what he calls ‘explanatory internalism’ or ‘explanatory irrealism’ because the focus on logical or derivational relations between propositions comes at the cost of neglecting the relations between the events in the world that the propositions are about. Kim prefers a more deeply externalist account of explanation that is grounded not in relations between items in our epistemic
corpus but in events and relations in the world. For this reason he adopts what he calls ‘explanatory realism’, which claims that a proposition C is an explanans for E in virtue of there being some determinate relation R holding between events c and e. Kim, then, takes R to be the explanatory relation that ‘grounds’ the explanans relation between propositions C and E.

On the realist view, our explanations are ‘correct’ or ‘true’ if they depict these relations correctly, just as our propositions or beliefs are true if they correctly depict objective facts; and explanations could be more or less ‘accurate’ according to how accurately they depict these relations. Thus, that c is related by explanatory relation R to e is the ‘content’ of the explanation consisting of C and E; it is what the explanation ‘says’ (Kim, 1988, p. 226).

Since the most prominent species of explanation is causal explanation, Kim claims that the most plausible candidate to fulfil the role of R is the causal relation itself in such cases. Hence, he thinks that explanatory realism entails causal realism, the view that causal relations are mind-independent relations between events in the world2 and that ‘every event has a unique and determinate causal history whose character is entirely independent of our representation of it’ (Kim, 1988, p. 230). Bringing explanatory realism and causal realism together, Kim defines having an explanation in terms of the possession of causal knowledge: ‘To “have an explanation” of event e in terms of event c is to know, or somehow represent, that c caused e’ (Kim, 1988, p. 230). Kim’s commitment to realism also leads him to locate the individuating role of explanations in the explanatory relation itself:

Explanatory realism yields a natural way of individuating explanations: explanations are individuated in terms of the events related by the explanatory relation (the causal relation, for explanations of events). For on realism it is the objective relationship between events that ultimately grounds explanations and constitutes their objective content. This provides us with a basis for regarding explanations that appeal to the same events standing in the same relation as giving, or stating, one explanation, not two, just as two inequivalent descriptions can represent the same fact (Kim, 1988, p. 233).

This is a fully extensional view of explanation. Logically inequivalent descriptions of the cause or the effect in explanatory claims will state the same explanation since they are ‘grounded’ in the same metaphysical relation R between the same events. Kim’s explanatory realism plays a central role in his justification for the principle of explanatory exclusion. He asks us to imagine that we have two causal explanations for the occurrence of a single event e, one in terms of c1 and
another in terms of c2. If we explore the various ways in which c1 and c2 might be related, it turns out that the explanations fail to be complete or independent. Kim identifies six possibilities: (1) c1 is identical to c2, (2) c1 is distinct from c2 but is reducible to or supervenient on it, (3) c1 and c2 are both partial causes of e, (4) c1 is a proper part of c2, (5) c1 and c2 are different links in the same causal chain leading to e, and finally, (6) e is causally overdetermined by c1 and c2. There is no need to discuss all of these options. It is clear that if c1 and c2 are both partial causes, then neither event is sufficient on its own for the effect, and so according to Kim an explanation that appeals to either cause alone will be incomplete because it leaves out a central causal factor. Similarly, if c1 and c2 are sequential links in a causal chain, then the explanation in terms of c2 fails to be independent of the explanation in terms of c1 in virtue of the dependence of c2 on c1. Hence, Kim plausibly assumes that if the events referred to in two explanations are not independent of one another, then the explanations themselves also fail to be independent. The only time there can be two complete and independent explanations of the same event is option (6), when the event is causally over-determined (i.e. when there are two independent causes, each of which is sufficient for the effect). Kim admits this possibility but claims that genuine cases of over-determination are sufficiently rare, in which case the principle of explanatory exclusion is a plausible general principle.
Kim has used the exclusion principle to place considerable strain on the concept of mental causation. Suppose that George rises from the couch. On the one hand we have an explanation for his rising that appeals to the instantiation of a mental property, such as his desire for a beer; on the other hand, since his rising is a physical event it seems to have a purely physical explanation in terms of a neurobiological property. This latter claim implicitly appeals to the causal closure of the physical domain, which states, roughly, that for the occurrence of any physical event there is a physical cause which is sufficient for it.3 Kim claims we should find something puzzling about having both of these explanations for why George rises from the couch:

When these two claims are viewed together, we should find the situation perplexing and somewhat unsettling . . . We want to ask: ‘Which really did it? What’s the real story?’ The premises of the two causal explanations are mutually consistent; however, there is something perplexing and perhaps even incoherent about accepting both as telling us what caused George’s behaviour, without an account of how the two accounts are related to each other. Each explanation specifies a cause of George’s behaviour. But how are the two supposed causes related to each other? (Kim, 1990a, p. 125)

Kim then surveys the six possibilities mentioned earlier and shows that none of these is very promising. He claims that it is highly implausible that the mental and physical
causes of George’s behaviour are each partial causes, distinct links in the same causal chain, or proper parts of the same cause, and I agree. Over-determination is not an option either because this would require an unexplained coincidence of causes that is systematic. Occasional cases of causal over-determination can be tolerated (e.g. the smouldering cigar and the lightning strike simultaneously ignited the haystack) because circumstances, as unusual as they may be, can lead to a coincidental convergence of sufficient causes. Appealing to over-determination to explain mental causation requires that all intentional actions are systematically over-determined by mental and physical causes. While this is not impossible, it is not a very attractive option since there is something troubling about the idea of systematic coincidences. The only live options for Kim are identity and supervenience. Although Kim was at one time optimistic about using supervenience to account for the relation between intentional and neurobiological explanations (see Kim, 1984), he points out that such an approach faces some serious problems. The trouble is that in order to use supervenience to show that one explanation depends on the other one must offer a characterization of the supervenience relation between mental and physical properties that captures a sufficiently robust notion of dependence, and this has been lacking in standard accounts. Weak and global supervenience are, according to Kim, too weak,4 and strong supervenience arguably implies reduction, which is precluded by non-reductive physicalism. Hence, the only real hope for a solution lies in the identification of mental with physical properties. Indeed, this is precisely what Kim argues in his most recent work (Kim, 2005), but this is once again to give up on non-reductive physicalism and espouse a version of reductionism. These considerations show that the non-reductive physicalist lacks an appropriate account of how the two explanations of George’s behaviour are related, in which case the neurophysiological explanation excludes the intentional explanation. This puts the legitimacy of all psychological explanations in jeopardy, for the above line of reasoning generalizes to every case where human beings seem to act for reasons.
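The overall shape of the exclusion argument, using only the assumptions already in play (closure, exclusion, irreducibility and the rejection of systematic over-determination), might be summarized roughly as follows; the numbering and lettering are mine:

\[
\begin{array}{l}
\text{(1) George's rising } e \text{ has a mental cause } m \text{ (the instantiation of his desire for a beer).}\\
\text{(2) Causal closure: } e \text{ also has a sufficient physical cause } p.\\
\text{(3) Non-reductive physicalism: the mental property instantiated in } m \text{ is not identical to any physical property.}\\
\text{(4) } e \text{ is not systematically over-determined.}\\
\text{(5) Exclusion: } e \text{ has no more than one complete and independent cause (or causal explanation).}\\[4pt]
\text{(6) Hence } p \text{ excludes } m\text{: the neurophysiological explanation pre-empts the intentional one.}
\end{array}
\]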
The Metaphysics of Events

So how can the non-reductive physicalist respond to either or both of these objections? Some authors (Pereboom, 2002; Pereboom and Kornblith, 1991) have appealed to the idea that mental properties are constituted out of their physical base properties and thereby inherit their causal efficacy, though the relevant notion of constitution is not as clear as one might like.5 Others (Noordhof, 1999b; Block,
2003; Gulick, 1992; Bontly, 2002; Gillett, 2001; Burge, 1993b; Menzies, 2003) have argued that Kim’s reasoning can be generalized to all irreducible supervenient properties, in which case there is no causation anywhere but at the fundamental physical level, which is absurd. Another reply is to acknowledge the possibility that mental properties over-determine their effects, though few seem to take this idea very seriously (Bontly, 2005; Ezquerro and Vicente, 2000; Vicente, 1999; Kallestrup, 2006; Sparber, 2005; Raymont, 2003). Finally, some authors (LePore and Loewer, 1987) place their hopes in the idea that mental properties can be shown to be causally relevant to physical causation and that this relevance, though not the same as efficacy, is robust enough to rescue mental causation.
In the remainder of this discussion I would like to explore an alternative approach. I will suggest that both formulations of the objection share metaphysical assumptions about the nature of events that render the objections either incoherent or question-begging, in which case non-reductive physicalism is not as implausible as the arguments make it seem. Since the form of this response is slightly different depending on the target argument, I will discuss each in turn, but since both responses rely on the same general idea, the second will build on the first.
The reply to the argument against anomalous monism was actually provided by Davidson himself (Davidson, 1993), though he did not make it as perspicuous as one might like. Davidson’s main line of response was to claim that given his view of events and causation the objection raised against him makes no literal sense. Featuring heavily in this reply is his claim that causation is an extensional relation between events.

If causality is a relation between events, it holds between them no matter how they are described. So there can be descriptions of two events (physical descriptions) which allow us to deduce from a law that if the first event occurred the second would occur, and other descriptions (mental descriptions) of the same events which invite no such inference. We can say, if we please (though I do not think this is a happy way of putting the point), that events instantiate a law only as described in one way rather than another, but we cannot say that an event caused another only as described. Re-describing an event cannot change what it causes, or change the event’s causal efficacy (Davidson, 1993, pp. 6–7).

In Davidson’s view, treating causation as an extensional relation means that causes operate independently of the way we describe or classify them. Thus, not only is it irrelevant to the causal powers of an event that we can describe it using mental vocabulary, ‘it is also irrelevant to the causal efficacy of physical events that they can be described in the physical vocabulary. It is events that
have the power to change things, not our various ways of describing them’ (Davidson, 1993, p. 12). This means there is no room for the idea of an event causing ‘as mental’ or ‘as physical’ or in virtue of its mental or physical properties; events, not their properties, cause, and to be a mental or physical event just is to be described using mental or physical vocabulary. Hence, Davidson’s view of events as concrete particulars and his nominalism about properties prevent the epiphenomenalist objection from getting off the ground. The entire objection depends on the seemingly innocuous assumption that events have properties and cause what they do in virtue of some subset of those properties. But Davidson’s metaphysics is incompatible with this assumption because events are simple entities that don’t have properties as constituents; properties are instead simply ways of describing events, and if events don’t care how we describe them, as his extensionalist thesis claims, it is hard to see how one could claim that events cause in virtue of either their mental or their physical properties. Neither option is possible, yet it was by forcing one option or the other that the epiphenomenalist objection got going in the first place. So Davidson’s response, with which I am sympathetic, is that the objection is formulated on the basis of a certain view of the metaphysics of events and of causation that is foreign to his philosophy.6 The alternative view treats events as property exemplifications, according to which an event is a structured complex of which some property is a part. This is not to say that this alternative metaphysics is false or should be rejected; indeed, there is something very attractive about the account of events and causation assumed by Davidson’s critics.7 The point is that it is illegitimate to import these assumptions into the argument against Davidson’s position, unless, that is, one can show that Davidson himself accepts these assumptions. Since it is quite clear that he does not, the objection takes aim at a position that bears only a faint resemblance to Davidson’s own views, and hence misses the mark. That this is the case is, I think, made quite plain in the following rejoinder from Kim:

The issue has always been the causal efficacy of properties of events – no matter how they, the events or the properties, are described. What the critics have argued is perfectly consistent with causation itself being a two-termed extensional relation over concrete events; their point is that such a relation isn’t enough: we also need a way of talking about the causal role of properties, the role of properties of events in generating, or grounding, these two-termed causal relations between concrete events. (Kim, 1993a, p. 21)

While it might be true that what many of Davidson’s critics want is a way of talking about the causal role of properties, Kim continues to assume that this way of speaking about causation and about properties makes sense within a
Davidsonian framework when he suggests in the above passage that properties themselves can have multiple descriptions. If one thinks of properties as real entities, this makes a certain amount of sense, but if one adopts Davidson’s nominalism about properties, according to which a property is just a way of describing an event, it is hard to see how descriptions themselves can be redescribed. The dispute about anomalous monism, then, is largely a product of underlying metaphysical views about the nature of events and causation. Until one can show that anomalous monism leads to epiphenomenalism on Davidson’s own terms (i.e. within his metaphysical views of events and causation), it seems to me that the critics are wasting their breath. As we shall see, similar issues also complicate the version of the problem of mental causation that appeals to the principle of exclusion.
To make my concerns about the argument from exclusion clear I need to elaborate a little more on Kim’s alternative theory of events. Kim (1969, 1973, 1976) has long advocated a property exemplification theory. According to Kim events are structured complexes with three basic types of constituents. First, each event involves an object, since events usually involve something undergoing a change or alteration. Second, since events are thought of as occurrences, each event also has a time at which it occurs or over which it endures. Third, each event has what Kim calls a ‘constitutive property’ since in order for the relevant object to undergo a change there must be a modification of its properties. According to Kim constitutive properties ‘are among the important properties, relative to . . . [an explanatory] theory, in terms of which lawful regularities can be discovered, described, and explained’ (Kim, 1976, p. 37). The canonical description of an event, then, takes the form [x, P, t] where x is the constitutive object, P is its constitutive property, and t is the time at which the event occurs. The constitutive property, which is exemplified by the constitutive substance, determines the generic event under discussion and is distinguished from other properties the event (as opposed to the substance) exemplifies. For example, Kim says that dying is the constitutive property of the event [Socrates, dying, t] and that occurring in prison is merely exemplified by the event, though not constitutive of it (Kim, 1973, p. 12). Consequently, Kim is careful to point out that not every description of an event tells us about its constitutive elements. These observations are important because unless we are aware of all of an event’s constitutive elements we will be unable to distinguish it from other events. In contrast to the Davidsonian model of events, the property exemplification model is a ‘fine-grained’ account because each event possesses exactly one constitutive property, whereas Davidson does not distinguish between constitutive properties and the various properties an event exemplifies. On Kim’s view, then, if two properties F and G are tokened at the same time by the
same object, but F ≠ G, then the tokening of F and the tokening of G are distinct events. This is made quite explicit in Kim’s ‘Identity Condition’ for events:

[x, P, t] = [y, Q, t′] just in case x = y, P = Q, and t = t′. (Kim, 35)

This identity condition is behind Kim’s well-known disagreement with Davidson about whether Brutus’s stabbing Caesar is a distinct event from his killing Caesar. Davidson treats these as alternative descriptions of a single event whereas Kim distinguishes these as distinct events because the property of stabbing cannot be identified with the property of killing since some stabbings are not fatal. While there are certainly many interesting questions about how one should distinguish the constitutive properties of an event from other properties the event exemplifies and about how to individuate events, my interest in Kim’s theory of events concerns its implications for his criterion of individuation for explanations. What I want to suggest is that there is reason to suspect the exclusion principle is an implication of Kim’s account of events.
To see how this is so, imagine that the principle of explanatory exclusion were false, such that there could be multiple explanations for a single event. One way this could happen, as several authors have suggested (Marras, 1998; Campbell, 2007, 2008a; Campbell and Moore, 2009; Raymont, 2003), is to adopt what is often called the dual explanandum reply to the exclusion argument. According to this reply a single event can generate multiple explananda by tokening more than one property at the time in question. Hence, there can potentially be as many explanations of an event as there are facts about it or properties it tokens. So, for example, if a single event such as George’s rising from the couch simultaneously tokens the property of being an intentional action and the property of being a bodily movement of a specific type, then we can have more than one explanation for the occurrence of that event qua the tokening of one property or the other. Hence, one explanation might appeal to George’s desire for a beer while the other might appeal to a neurobiological property. Relative to how the event is described, each explanation is complete within its own domain. The thing to notice about this possibility, however, is that it is precluded by Kim’s identity condition for events. According to Kim we must treat the tokening of George’s desire for a beer and the tokening of a bodily movement of a specific type as distinct events unless we can identify the two properties in question. The trouble is that according to the non-reductive physicalist this identification is not up for grabs. By Kim’s identity condition this means that the events must be distinct and so this cannot be a case of a single event having more than one explanation. For since Kim’s causal realism claims that each event has a unique causal history we have to assume that George’s rising to get
a beer has a distinct cause from his bodily movement. Kim’s identity condition renders the dual explananda strategy impossible, for any attempt to ground multiple explanations of a single event in distinct properties the event tokens will run afoul of Kim’s identity condition for events. Thus the attempt to show that a single event can have more than one explanation by fragmenting it into multiple explananda (according to which property is tokened) has no prospect for success within Kim’s metaphysics. Such an approach could only succeed on a Davidsonian ‘coarse-grained’ account of event identity.8 But this means there is an important sense in which Kim has begged the question in his use of the principle in debates about mental causation.9 The principle obviously holds for someone who, like Kim, accepts a fine-grained theory of events, but there are many who prefer a coarse-grained Davidsonian approach, and there is certainly much room for doubt about whether or not the exclusion principle holds under the conditions of this alternative metaphysics.10 Thus, by making it seem as though the principle of explanatory exclusion holds regardless of one’s metaphysical theory of events, Kim does the issue a serious disservice.
If I am correct that Kim and other critics of non-reductive physicalism have assumed something like Kim’s property exemplification view of events, this goes a long way to discrediting the two arguments surveyed in my discussion. However, there is an even more serious concern here, at least about Kim’s use of exclusion to argue against non-reductive physicalism. Since the argument relies on Kim’s theory of events, it is entirely question-begging. This is because Kim’s version of the property exemplification theory of events already assumes the falsity of non-reductive physicalism. As we saw, a central claim of Kim’s theory of events is the identity condition, which he uses to individuate events. The identity condition states that two events are identical if and only if their constitutive elements are identical. That is, for event [x, P, t] to be identical to event [y, Q, t′], x must be identical to y, so we have here the same constitutive object, t must be identical to t′, so the events occur at the same time and have the same duration, and property P must be identical to property Q. So if P is a physical property and Q is a mental property, on Kim’s schema the mental event [y, Q, t′] can be identical to the physical event [x, P, t] only if we can identify the mental property Q with the physical property P. This precludes the very possibility that defines most forms of non-reductive physicalism, namely, that mental events are physical events but that mental properties cannot be identified with physical properties. This means that if an argument against non-reductive physicalism assumes Kim’s theory of events the argument begs the question, for the property exemplification theory already assumes non-reductive physicalism is false. To the extent that Kim’s version of the argument from exclusion depends on the property exemplification theory, then, the exclusion argument begs the question.11
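Writing the point in Kim’s canonical notation may make it more vivid. Let Q be the mental property (being an intentional action, or desiring a beer) and P the bodily or neurobiological property, both tokened by George at t; the choice of letters simply follows the schema above and is mine, not Kim’s. The identity condition then gives

\[
[\text{George}, Q, t] = [\text{George}, P, t] \;\leftrightarrow\; Q = P ,
\]

and since the non-reductive physicalist denies that Q = P, the two tokenings come out as distinct events with, by causal realism, distinct causal histories. The dual explananda reply is thereby ruled out from the start, which is just the question-begging complaint pressed above.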
The lesson the above observations hold for those concerned about the problem of mental causation seems to be that the problem cannot be isolated from metaphysical questions about the nature of events and the role of properties in their individuation. Since one’s assumptions about such matters can have a profound effect on one’s treatment of the problem of mental causation, it seems only prudent to clear such matters up first, or at least to be forthright about them from the start.
11
Personal Identity E. J. Lowe
Why Should Personal Identity Be Philosophically Interesting?

It may be wondered why a chapter on personal identity belongs in a volume on the philosophy of mind rather than in one on metaphysics. The answer is that the topic belongs to both branches of philosophical inquiry: to metaphysics because the notion of identity is a central one in that domain and to the philosophy of mind because persons are prime examples of minded beings. However, it might be supposed that, since the notion of identity is a universal one, there can be nothing special to say about personal identity as such, beyond saying that it involves the application of this notion to minded beings of a certain kind. Some philosophers would undoubtedly agree with this view. They would urge that the theory of identity, if indeed it deserves to be dignified by the title ‘theory’, is exhausted by an account of the logical properties of the identity relation, which reduces to the fact that it is a reflexive relation that is governed by Leibniz’s law or, more precisely, by the principle of the indiscernibility of identicals. This is just the principle, taken to be a necessary truth, that things that are identical share all their properties, or, rather more cautiously expressed, in a way that doesn’t presuppose the existence of properties: that whatever is true of something is true of anything identical with that thing. If that view were correct, then there would be nothing to be said about personal identity beyond the banality that persons, like anything else, can be said to be identical only if they are indiscernible from one another. Thus, for example, by this account, there is nothing more to be said regarding the hypothesis that I am identical with, say, Napoleon than that it is true only if I differ from Napoleon in no discernible way. Of course, this provides us only with a logically necessary condition for the truth of that hypothesis, not a logically sufficient one. However, if the converse principle of the identity of indiscernibles is also accepted, then even this deficiency is remedied and our hypothesis may be judged to be true if and only if there is no discernible difference between Napoleon and me. It might be supposed that this is then the end of the matter, since it is just obvious – isn’t it? – that there are indeed discernible differences between Napoleon and me, such as that he won the battle of Austerlitz but I did not. But why should anyone be so confident that I didn’t
win the battle of Austerlitz? The reply may be offered: because I obviously didn’t even exist at the time of that battle. But why should anyone be so confident of that? It can only be because something is being presupposed about the nature of persons which constrains the possibilities of identifying one person with ‘another’, such as that I can’t be identical with a person none of whose experiences I can remember having, or with a person whose body was destroyed before the body that I have now was created. It is presuppositions like these that make it seem ‘obvious’ that I can’t be identical with Napoleon, but they have nothing to do with Leibniz’s law as such, since they relate specifically to the presumed nature of persons, as opposed to things of various other kinds. This shows, then, that much more needs to be said about personal identity than can be captured simply by applying the logical properties of the identity relation to the particular case of persons. Specifically, what is needed is a principled account of the identity-conditions of persons, or, to use John Locke’s helpful phrase, an account of what their identity ‘consists in’. In modern parlance, what we must endeavour to establish is a criterion of identity for persons. And, as Locke himself insisted, this will require us to provide an account of what persons essentially are. As he succinctly puts it: ‘This being premised to find wherein personal identity consists, we must consider what Person stands for’ (Locke, 1975 [1690], II, XXVII, p. 9). Famously, his own answer to this latter question is that a person is ‘a thinking intelligent Being, that has reason and reflection, and can consider it self as it self, the same thinking thing in different times and places’ (Locke, 1975 [1690], II, XXVII, p. 9). However, before we can examine this and other proposals concerning the nature and identity-conditions of persons, we should step back to take a wider view of the kind of enterprise that we are embarking upon, by looking more closely at the notions of identity and criteria of identity.
Identity and Identification

It seems evident that the expression ‘is identical with’, symbolized in logic and mathematics by the equality sign, ‘=’, is a relational expression and hence denotes a certain relation in which things can stand to one another. However, if so, then it is a very peculiar relation, in that it can never literally hold between one thing and another thing, but only between a thing and itself. Other relations can, of course, hold between a thing and itself, such as the relation of admiring: someone can obviously admire him or herself. But this relation can also hold between different things, as when Peter admires Jane. Identity is peculiar as a relation in that it necessarily holds only between a thing and itself and, indeed, this has led some philosophers to deny that it is ‘really’ a relation at all. However we classify it, though, it can certainly seem strange. Since everything
is identical with itself and with no other thing, one might wonder how facts of identity can fail to be utterly trivial and uninteresting. Part of the solution to this conundrum is provided by distinguishing, as we must do anyway, between identity and identification. Identification is a cognitive act and a far from trivial or easy one. One and the same object may often be identified in different ways, even by the same thinker, and it may not be evident to such a thinker that, indeed, he or she has identified the same object in two such ways. To be able to identify an object is, typically, to be in possession of some descriptive information which applies uniquely to that object. But, as Frege (1960 [1892]) pointed out, a thinker can be in possession of two such pieces of information without necessarily thereby knowing that they apply to the same object. To use his famous example, it was an astronomical discovery of considerable magnitude that the Evening Star (Hesperus) is the Morning Star (Phosphorus). Similarly, it would be a stunning discovery to find out that the victor of Austerlitz (Napoleon) is the author of this chapter (Jonathan Lowe). As we shall see, the role of identity criteria is to impose certain constraints on what can count as an acceptable answer to such a question concerning identification. But in order to understand that role, we first need to say a little bit more about identity as such.
Identity, as has been remarked, is a reflexive relation – a relation which, of necessity, holds between everything and itself. We can formalize this as follows:

(∀x)(x = x)

As was also remarked earlier, identity is subject to Leibniz’s law, which for our purposes may be formalized in this way:

(∀x)(∀y)(x = y → (∀F)(Fx ↔ Fy))

Here ‘F’ stands for any condition that may hold true of an object, so that the above formula effectively affirms that, for any things x and y, if x is identical with y, then anything true of x is also true of y, and vice versa. From the foregoing two principles, it is easy to derive two other logical properties of the identity relation: its symmetry and its transitivity, expressible by the following two formulas:

(∀x)(∀y)(x = y → y = x)
(∀x)(∀y)(∀z)((x = y & y = z) → x = z)

Together, these four formulas exhaust the properties of the identity relation from a purely logical point of view. They pin that relation down uniquely, as
being not only an equivalence relation – reflexive, symmetrical and transitive – but also, more specifically, as being the only such relation all of whose equivalence classes are necessarily single-membered, with each such member being an ordered pair of a thing and itself, of the form 〈x, x〉. To make this latter point clearer: each equivalence class of the same height relation is the class of all those pairs of objects that share a certain height and, clearly, while it might happen to be the case that only one object has a certain height, it is also possible for more than one object to have the same height. Hence, some of these equivalence classes may contain ordered pairs of different objects, such as 〈Peter, Jane〉, 〈Jane, Mary〉 and 〈Peter, Mary〉, assuming that Peter, Jane and Mary all have the same height. But the equivalence classes of the identity relation are all boringly uniform, each having a unique member such as 〈Peter, Peter〉 or 〈Jane, Jane〉 because, obviously enough, Peter is identical only with Peter, Jane only with Jane, and so on. These rather austere logical points are not made idly here, since they will be seen to have a direct bearing on what can qualify as a satisfactory criterion of identity for things of a given kind. We may sum the situation up by saying that while an equivalence relation such as the same height relation may be described as being an exact similarity relation, the identity relation is necessarily stricter than that, in that it can fail to hold even between objects that are in every respect exactly similar.
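For completeness, here is a sketch of how symmetry and transitivity fall out of reflexivity and Leibniz’s law; the derivation is elementary, and the choice of the condition F in each case is mine. Suppose x = y and let F(z) be the condition z = x; by reflexivity F(x) holds, so by Leibniz’s law F(y) holds, i.e.

\[
x = y \;\rightarrow\; y = x .
\]

Similarly, suppose x = y and y = z, and let F(w) be the condition x = w; F(y) holds by the first supposition, so by Leibniz’s law applied to y = z we obtain F(z), i.e.

\[
(x = y \;\&\; y = z) \;\rightarrow\; x = z .
\]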
Criteria of Identity

A criterion of identity is a principle which specifies, in a non-trivial way, logically necessary and sufficient conditions for the identity of objects of a given sort or kind, K. The qualification ‘in a non-trivial way’ is needed to exclude principles that are uninformative or circular. Such a principle may take one or other of two different forms and, depending on which it takes, it may be described as being either a ‘one-level’ or a ‘two-level’ identity criterion (see Lowe, 1997). One-level criteria take the following form:

(∀x)(∀y)((Kx & Ky) → (x = y ↔ RKxy))

Here, ‘RK’ denotes what we may call the criterial relation for objects of kind K. And note that such a relation must, of course, be an equivalence relation – reflexive, symmetrical and transitive – because identity itself is an equivalence relation and RK has to hold between Ks just in case they are identical. The best-known example of such a one-level identity criterion is the axiom of extensionality of set theory, which tells us that if x and y are sets, then x is identical with y if and only if x and y have the same members, so that in this case having the same members is the relevant criterial relation. However, Frege, who founded
the formal theory of identity criteria, favoured two-level identity criteria, which may be written in the form:

(∀x)(∀y)(fK(x) = fK(y) ↔ RKxy)

Here, ‘fK’ denotes what could aptly be called the K-function. The best way to illustrate this is by means of Frege’s own famous example of such an identity criterion, his criterion of identity for directions (see Frege, 1953 [1884], p. 74). A direction (in the geometrical sense of the word) is always a direction of something, namely, a line. And Frege’s criterion of identity for directions is just this: the direction of line x is identical with the direction of line y if and only if x and y are parallel. So, in this case, the K-function is the ‘direction of’ function and the criterial relation for directions is parallelism between lines. Observe that both the relation of having the same members and the relation of parallelism between lines are, as required, equivalence relations.
It should be easy to see why the two different forms of identity criteria receive their respective names. A two-level criterion specifies the identity-conditions of things of a kind K in terms of an equivalence relation between things of another kind; thus, in the case of Frege’s criterion, it specifies the identity-conditions of directions in terms of an equivalence relation between lines. In contrast, a one-level criterion specifies the identity-conditions of things of a kind K in terms of an equivalence relation between those very things; thus, in the case of the axiom of extensionality, it specifies the identity-conditions of sets in terms of an equivalence relation between those sets. We shall see that this difference between the two forms of identity criteria is significant in the context of a search for an adequate criterion of personal identity. For a two-level criterion of personal identity will be appropriate only if we can think of persons as being objects of a ‘functional’ kind, in the sense that directions are.
Something more should now be said about the requirement that a criterion of identity be non-trivial and, more particularly, non-circular. Clearly, it would be blatantly circular to allow the criterial relation in a one-level criterion of identity for Ks simply to be the relation of identity itself. It is true, but just trivially so, that if x and y are Ks, then x is identical with y if and only if x and y are identical. But sometimes a putative identity criterion can be circular in a less obvious way: for example, the putative identity criterion for sets which states that if x and y are sets, then x is identical with y if and only if x and y include exactly the same sets. It is indeed logically necessary and sufficient for the identity of sets x and y that x and y include exactly the same sets (bearing in mind that every set includes itself), but since what we are seeking is an informative way of specifying the identity-conditions of sets, it is clearly unsatisfactory to do so by appealing to a criterial relation – in this case, the relation of including the same sets – which is itself defined at least partly in terms
of sameness (i.e. identity) between sets. Another example of such circularity is provided by Donald Davidson’s well-known proposal regarding the identity-conditions of events, namely, that events x and y are identical if and only if x and y have the same causes and effects (Davidson, 1980 [1969]). For, since he takes causes and effects themselves to be events, this proposal amounts to the circular claim that events x and y are identical if and only if the same events cause both x and y and the same events are caused by both x and y (see Lowe, 1989). A criterion of identity for Ks should never appeal to or rely upon, in its formulation of the criterial relation for Ks, sameness (i.e. identity) between Ks. Unfortunately, circularity of this kind in a putative identity criterion is not always easy to spot and sometimes needs considerable work to tease out. This is a problem that afflicts certain well-known attempts to formulate an adequate criterion of personal identity, as we shall see.
One final point needs to be made about identity criteria in general. This is that they are here being taken to be metaphysical principles, not merely epistemic or heuristic ones. Thus, for example, while it is true in the case of human persons that having the same fingerprints provides strong empirical evidence for identity between such persons, it certainly isn’t true that human personal identity consists in having the same fingerprints – for, quite apart from anything else, a human person can obviously survive the loss of his or her fingerprints (by losing his or her fingers) and indeed can even, in these days of modern medicine, acquire someone else’s fingerprints (as a result of a hand-transplant). So it can’t be true, quite generally and of necessity, that human persons x and y are identical if and only if x and y have the same fingerprints.
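For concreteness, the two stock examples used above can be displayed in the canonical forms introduced earlier; the membership and parallelism notation is the obvious one, and these formulations are only illustrative:

\[
\begin{aligned}
&\text{One-level (extensionality):} && (\forall x)(\forall y)((\mathrm{Set}\,x \;\&\; \mathrm{Set}\,y) \rightarrow (x = y \leftrightarrow (\forall z)(z \in x \leftrightarrow z \in y)))\\
&\text{Two-level (directions):} && (\forall x)(\forall y)(\mathit{dir}(x) = \mathit{dir}(y) \leftrightarrow x \parallel y)
\end{aligned}
\]

Note that neither criterial relation (having the same members, being parallel) appeals to identity between the sets or the directions themselves, which is exactly the non-circularity requirement at issue.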
What is a Person?

Locke, as we noted earlier, very wisely observed that ‘This being premised to find wherein personal identity consists, we must consider what Person stands for’ (Locke, 1975 [1690], II, XXVII, p. 9). We cannot hope to formulate an adequate criterion of identity for objects of a kind K unless we have a pretty good idea as to what Ks are. But what exactly are we asking when we ask a question of the form ‘What are Ks?’ The short, but, I think, correct answer is that we are inquiring into the nature or essence of Ks. As for what the word ‘essence’ means in this context, we again do well to quote Locke who said that ‘in the proper original signification’ of the word ‘it denotes the very being of anything, whereby it is, what it is’ (Locke, 1975 [1690], III, III, p. 15). From this we may glean that, at the very least, we do not know what a K is unless we know to what ontological category Ks belong. Unfortunately, in the case of persons this immediately gives rise to a problem, namely, that different
philosophers over the ages and across cultures have had very different views as to what, in this sense, persons are. Some have held that persons are essentially immaterial substances (‘spirits’ or ‘souls’), some that they are ‘combinations’ of such a substance with a material one (a ‘body’), some that they are purely material substances (such as living animals), some that they are ‘phases’ of such substances (rather as caterpillars and butterflies are different ‘phases’ of the same kind of insect), some that they are non-substances (such as ‘bundles’ of experiences, or ‘functional roles’ that substances can occupy), some that they are not even individual entities of any kind but rather universals of a certain type, some that they are ‘transcendental’ entities which cannot be identified with items of any kind that are located in the world of space and time, some that they are literally non-entities having a purely ‘fictional’ status.
What is the source of this remarkably wide difference of opinion concerning the nature or essence of persons? Perhaps this: the key ingredient in anyone’s conception of a person seems to be the conviction that at least he or she him- or herself is a person. Thus, possession of the first-person perspective is at the heart of anyone’s conception of a person, whatever else may also be part of it. A person is, first and foremost, something that conceives of itself as thinking, feeling or doing various things (see Lowe, 1996, Chapter 1). Such a conception is one that requires the deployment of the first-person pronoun, ‘I’, or some expression equivalent to that, for its articulation. But the peculiar feature of this pronoun, from a semantic point of view, is that its competent use apparently does not require of the user any very specific conception of what kind of thing it designates. This is why Descartes (1984 [1641], II) could famously claim to be certain of the truth of the cogito – I think – and thereby certain of his own existence, while still professing uncertainty as to what he was. In the end, of course, he concludes that he is essentially a thinking thing, a substance whose essence is thinking (in the broadest sense of that term) and which excludes any other property (at least, any material property, such as shape or mass). Locke, as we have already seen, is less prescriptive concerning the nature or essence of persons, saying only that a person is ‘a thinking intelligent Being, that has reason and reflection, and can consider it self as it self, the same thinking thing in different times and places’ (Locke, 1975 [1690], II, XXVII, p. 9). This definition of personhood certainly builds in the notion that a person is a self-aware subject of thought and experience, but it is far from clear that we should take Locke to be implying, by his use of the capitalized word ‘Being’, that persons are substances, much less that they are essentially immaterial substances (or, indeed, that they are essentially material ones either). In fact, it would appear that Locke held human persons to be, strictly speaking, non-substances, with their ontological status being that of modes, or ‘bundles’ of modes (‘mode’ being Locke’s preferred term for an individualized property, or
what would nowadays be called by metaphysicians a ‘trope’). This is because, while he believed that thoughts and other mental ‘modes’ have to be borne by substances and that these substances are in all probability ‘spiritual’ rather than ‘material’ in nature, he held that you or I, as human persons having such thoughts, could not be identified with any such substance, since you or I could in principle survive a change in respect of the substance bearing our thoughts at different times in our lives. This, obviously, is connected with Locke’s own theory of personal identity and his preferred criterion of personal identity, to which we shall return shortly.
So the problem is that, while practically everyone might agree that, whatever else a person is, a person is something that is, or at least is capable of being, aware of itself as having thoughts, this formulation apparently leaves it almost entirely open as to what kind of thing this ‘something’ is. In fact, it even seems to leave open the possibility that there need be no one kind of thing that a person could be. If that is the case, however, then it would appear to be misguided to search for a criterion of personal identity as such, since persons of different kinds could be expected to comply with the identity criteria, whatever they might be, associated with the kinds in question. For example, if it is held that human persons – as opposed, say, to android persons of science fiction lore – are animals of a certain kind and thus that I, as a human person, am identical with such an animal (a biological organism of the species Homo sapiens), then it should be concluded that my identity-conditions are just those of one such animal – that I began to exist when it did and will cease to exist when it does. This view, known as animalism, is currently fairly popular among metaphysicians (see Olson, 1997), perhaps on account of its thoroughly naturalistic flavour and perhaps too because it effectively does away with all the traditional problems of personal identity of the sort that Locke’s account generates. On the other hand, the idea that persons are not really a single kind of thing and thus that things of radically different kinds, with quite different identity-conditions, could all qualify as persons is prima facie counterintuitive and even rather disturbing in its apparent moral implications. As Locke so aptly put it, ‘person’ is ‘a Forensick Term appropriating Actions and their Merit’ (Locke, 1975 [1690], II, XXVII, p. 26): it is indispensable for our moral and legal practices of apportioning praise and blame and offering rewards and punishments. One’s natural presumption is that each person has and should have a moral concern for his or her own future and, more generally, for the futures of all other persons. But if there is no unified conception of what would count as ‘the future’ of a person as such, because persons of different kinds can have quite different identity-conditions, it may be hard to see what exactly could be the basis of such a universal moral concern. Indeed, if animalism were true regarding human persons such as you and me, why, after all, should I have any
moral concern for your or my future as such, given that the animals that you and I supposedly are have identity-conditions which don’t entail that those futures are ones in which you or I exist as persons at all? Reflections such as these suggest that it is strongly built into the commonsense conception of a person that all persons are essentially persons, so that my ceasing to be a person would entail my ceasing to exist altogether. Locke’s definition of personhood, whatever its defects, is clearly intended by him to have this consequence and to that extent seems to be more in tune with common sense than a view like animalism is. This, in any case, is a good point at which to look more closely at Locke’s own proposed criterion of personal identity, not only because it is interesting in its own right but also because it is, in effect, the first explicitly formulated criterion of personal identity to be found and has remained highly influential. This is not to deny that preceding philosophers were implicitly committed to various criteria of personal identity which can be deduced from their writings. The point is just that Locke has the distinction of being the first philosopher who explicitly acknowledged the notion of a criterion of identity – although he did not use that term for it – and applied it to the case of persons.
Locke’s Criterion of Personal Identity

According to Locke,

[S]ince consciousness always accompanies thinking, and ‘tis that, that makes every one to be, what he calls self; and thereby distinguishes himself from all other thinking things, in this alone consists personal Identity, i.e. the sameness of a rational Being. And as far as this consciousness can be extended backwards to any past Action or Thought, so far reaches the Identity of that Person; it is the same self now as it was then; and ‘tis by the same self with the present one that now reflects on it, that that Action was done. (Locke, 1975 [1690], II, XXVII, p. 9)

It is a matter of some controversy among Locke scholars how exactly this passage should be unpacked (see Lowe, 1995, Chapter 5, and Lowe, 2009, Chapter 7), but most commentators take it to be expressing a memory-based criterion of personal identity, on the understanding that the kind of memory that we are here concerned with is what is sometimes called ‘autobiographical’ or ‘experiential’ memory (e.g. remembering seeing a certain film some years ago), as opposed to the mere memory of impersonal facts (such as remembering that the Battle of Hastings was fought in 1066).
Here is one way in which one might attempt to frame Locke’s proposed criterion in the form of a one-level identity criterion, as such criteria were formulated earlier:

(∀x)(∀y)((x is a person & y is a person) → (x = y ↔ (∀t1)(∀t2)(∀e)((x experiences e at t1 → y remembers e at t2) & (y experiences e at t1 → x remembers e at t2)))),

where t1 and t2 are any two times at which both x and y exist (with t1 being earlier than t2) and ‘e’ is a variable ranging over individual conscious experiences, such as a conscious experience of having a particular thought or undertaking a particular action. What the foregoing formula says, in plain English, is just this: if x and y are persons, then they are the same person if and only if any conscious experience had by x at any earlier time is remembered by y at any later time, and vice versa (restricting ourselves here to times at which both x and y exist, of course, since no person can experience or remember anything at a time at which he or she doesn’t exist). This criterion entails, obviously, that a person must always remember every conscious experience that he or she ever formerly had. That, however, is extremely implausible. Indeed, its implausibility was fairly soon exploited by Thomas Reid (1975 [1785]) to construct a refutation of Locke’s proposed criterion by means of his well-known ‘brave officer’ example, as follows. We can readily imagine there being an elderly general who remembers saving the regiment’s standard when in battle as a young officer and who, as a young officer, remembered stealing apples as a boy. But it also seems quite conceivable that the elderly general has entirely forgotten the boyhood episode. Suppose, indeed, as seems prima facie conceivable, that the elderly general remembers every experience of the young officer and the young officer remembers every experience of the boy, but that the elderly general remembers only some of the experiences of the boy. Then it seems to follow that, by Locke’s criterion (as we have stated it), the elderly general is the same person as the young officer and the young officer is the same person as the boy, but the elderly general is not the same person as the boy. This, however, blatantly conflicts with the transitivity of identity and implies that Locke’s proposed criterial relation for personal identity – remembrance of past experience, as we may call it – is not, as required, an equivalence relation. We might seek to remedy matters by relaxing Locke’s criterion so as to require only that a person remember some of the experiences that he had at any earlier time in his life. (Very possibly, indeed, this is all that Locke himself really meant to imply.) But then the counterexample can be modified by having the elderly general remember only some of the young officer’s experiences, who in turn remembers only some of the boy’s, while the general remembers none of the boy’s – which again seems perfectly conceivable.
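The logical structure of Reid’s objection can be made vivid by modelling the relaxed remembrance relation over a toy cast of person-stages and checking it against the transitivity of identity. The following sketch is purely illustrative: the stage names, the listed experiences and the Python encoding are invented for the purpose and form no part of Locke’s or Reid’s own discussion.

```python
# Toy model of Reid's 'brave officer' case, under the relaxed reading of
# Locke's criterion (sameness of person requires only that the later stage
# remember *some* experience of the earlier one).
experiences = {
    "boy":     {"stealing apples"},
    "officer": {"saving the standard"},
    "general": {"receiving honours"},
}
remembers = {
    "boy":     set(),
    "officer": {"stealing apples"},        # the officer remembers the boy's deed
    "general": {"saving the standard"},    # the general remembers the officer's deed,
                                           # but has wholly forgotten the boyhood episode
}

def same_person(x, y):
    """Relaxed Lockean test: x and y are one person iff either stage
    remembers some experience of the other."""
    return bool(experiences[x] & remembers[y]) or bool(experiences[y] & remembers[x])

print(same_person("boy", "officer"))      # True
print(same_person("officer", "general"))  # True
print(same_person("boy", "general"))      # False -- yet identity is transitive
```

The first two checks succeed while the third fails, which is just the conflict with the transitivity of identity described above.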
Modifications to Locke’s Criterion

Fortunately, if Reid’s objection does indeed expose a fatal flaw in Locke’s proposed criterion, then it is one that is fairly easily rectified. For, given a certain non-transitive relation, R, it is always easy enough to define in terms of R another relation, R*, which is guaranteed to be transitive, namely, the so-called ancestral of R. Consider, for example, the parenthood relation, in which any parent stands to his or her children, a relation which is evidently not transitive. The ‘ancestral’ of this relation is (appropriately enough) the relation of being an ancestor of. This is the relation that holds between x and y if and only if there is a chain of individuals, beginning with x and ending with y or vice versa, such that adjacent individuals in the chain stand in the parenthood relation. Thus, my great-grandmother is an ancestor of mine because she is a parent of a parent of a parent of mine. And the relation of being an ancestor of is plainly transitive: necessarily, if x is an ancestor of y and y is an ancestor of z, then x is an ancestor of z. So, it seems, all that we need to do to save Locke’s criterion of personal identity from Reid’s objection is to replace the relation of remembrance of past experience by the ‘ancestral’ of that relation, call it connectedness of remembered past experience. The elderly general does stand in this relation to the boy, it seems, given that he remembers every experience of the young officer who, in turn, remembers every experience of the boy. Certainly, he does so if at every time between now and when he was a boy, there existed a person who remembered every experience had by a person existing at the preceding moment of time, beginning with the elderly general and ending with the boy. By the modified Lockean criterion, then, even a person whose autobiographical memory is limited to a span of just a few minutes or seconds – and there are in fact such unfortunate individuals – can in principle be identified with a person living many years ago. Here it may be wondered what Locke – and indeed we – should say about the possibility of persons undergoing periods of complete loss of consciousness, as appears to happen in deep sleep or a coma. It would be consistent with the modified Lockean criterion to say that persons simply cease, temporarily, to exist during such periods. If a neo-Lockean wished to avoid saying this, however, then it seems that the criterion would have to be modified further, by replacing appeal to the actual remembrance of past experience by appeal to a capacity for such remembrance, which can be retained during periods of complete unconsciousness. Now, of course, the fact that connectedness of remembered past experience, as we have decided to call it, is a transitive relation doesn’t guarantee that it is, as required, an equivalence relation, since to have that status it needs also to be reflexive and symmetrical. That it is reflexive might seem to be relatively uncontroversial, but that it is symmetrical is certainly not, for the following
reason. It seems at least prima facie conceivable that two distinct persons, A and B, existing at a time t2 should both stand in this relation to a single person, C, existing at an earlier time t1. But if the relation is both transitive and symmetrical, this implies that A and B stand in the relation to each other. Why? Call the relation ‘R’ for short. Then, we are given that (1) A is R to C and (2) B is R to C. But if R is symmetrical, then it follows from (2) that (3) C is R to B. And given that R is also transitive, it follows from (1) and (3) that (4) A is R to B. Here it may be suggested that an advocate of the modified Lockean criterion should just ‘bite the bullet’ and accept that in such circumstances A and B are not, after all, two distinct persons. But this simply isn’t sustainable, even by the standards of the modified Lockean criterion. For the kind of circumstances that we are now envisaging are ones in which a single person, C, supposedly undergoes a process of ‘fission’, splitting into two distinct persons who go on to build up, thereafter, quite different and ‘unconnected’ stores of autobiographical memory. This, supposedly, might occur as a result of the bisection of C’s brain into its two hemispheres, each of which is then transplanted into the head of a different human body (see Nagel, 1979 [1971]). At the later time t2, it simply will not be true to say that A stands in the R relation to B, because there will be experiences that B had after the fission event which are not ‘connected’ to any memory that A has at t2. Suppose, for example, that at some time after the fission event, B experiences a toothache. Will it be the case that at t2 A remembers the past experience of someone who remembers the past experience of someone . . . who remembers B’s toothache experience? Surely not: for the memory-chain in question will take us back to C at the moment of fission, but not forward from there to B’s toothache experience. Another thing that we should bear in mind in assessing the merits of the modified Lockean criterion is this. While it is necessary that a criterial relation should be an equivalence relation, this is not sufficient, since such a relation is required to hold between objects of a kind K just in case they are identical. Consequently, it cannot hold between distinct Ks. However, it may readily be argued that the relation of connectedness of remembered past experience doesn’t meet this demand, if we are prepared to countenance, in addition to cases of personal ‘fission’, cases of personal fusion (for instance, as a result of a reversal of the kind of brain-bisection and double transplant operation described earlier). For if a single person, C, existing at a time t2, stands in this relation to both of two distinct persons, A and B, existing at an earlier time t1, then it follows – since at most one of A and B can be identical with C – that C stands in this relation to at least one person who is not identical with C. Of course, it may be objected that these imagined cases of personal fission and fusion are purely imaginary and not really possible. But that is much too big a debate to be entered into here. Suffice it to say that such cases present a prima facie problem for the modified Lockean criterion. A rather different
problem that might be raised for it is the following, which leads to an accusation – first made by Joseph Butler (1975 [1736]) – that the criterion is implicitly circular. The criterion appeals to the notion of a person, P, remembering some past experience, e. But isn’t it in fact a logically necessary condition of P’s genuinely remembering e (in the first-personal, autobiographical sense of ‘remembering’) that P him or herself should actually have experienced e? How could you properly be said to ‘remember’ having an experience which you didn’t have? Wouldn’t that simply be a ‘false memory’, that is, a false impression of remembering something, rather than a genuine memory of anything? If that is so, then, as Butler urged, memory presupposes personal identity and hence cannot be what constitutes it. The standard modern response to this objection is to concede it, but then to modify the Lockean criterion still further by appealing instead to the notion of ‘quasi-memory’, where this is understood to be a mental state with all the features of autobiographical memory except that it is not a defining condition of the state that one can ‘quasi-remember’ only experiences that one had oneself (see Parfit, 1984, Chapter 11). It is allowed, that is to say, that it is at least logically possible to quasi-remember the experiences of another person. However, this appeal to the notion of quasi-memory in defence of a neo-Lockean criterion of personal identity is a two-edged sword. For although it enables the defender of such a criterion to avoid the Butlerian charge of circularity, it does so at the expense of ruling out one kind of reply to the problem raised earlier in cases of personal fusion. Thus, while it may be objected that C in such a case cannot genuinely stand in the ancestral of the memory relation to a past experience of someone distinct from C, it has to be allowed that C can stand in the ancestral of the quasi-memory relation to such an experience. It must also be acknowledged that the notion of quasi-memory is far from being uncontroversial, since a good many philosophers doubt whether it really makes sense (see Wiggins, 2001, Chapter 7). Suppose, however, that we set aside such doubts. What, if anything, can be done to further modify a neo-Lockean criterion that appeals to the ancestral of the quasi-memory relation (i.e. to the relation of connectedness of quasi-remembered past experience), in order to safeguard it against the threat posed by putative cases of personal fission and fusion? There is a simple enough answer: simply build into the criterion a clause excluding any such ‘branching’. Then we can say, in a nutshell, that persons x and y, existing at times t1 and t2 respectively, are identical just in case there is a non-branching chain of connected quasi-memories linking y at t2 to an experience of x at t1. Writing out this criterion formally, in the style deployed earlier, would be too complicated to be very useful, but it can certainly be done. However, such a criterion is still open to objection, even waiving any difficulty that one might have with the notion of quasi-memory. One objection is that it violates what is sometimes
called the only x and y principle: this is the principle that, in order to settle a question about whether an object x is identical with an object y, only facts about x and y should be deemed relevant, not facts concerning other objects (see Wiggins, 2001, p. 96). The proposed condition on non-branching violates this principle, because it amounts to the requirement that person x’s identity with person y is conditional upon there existing no other person, z, in addition to x and y as a result of some fission or fusion process involving them. However, it may be questioned why we should regard the only x and y principle as sacrosanct. Why shouldn’t we just concede that identity can sometimes be ‘extrinsically’ determined, by being dependent on the existence or non-existence of other objects in addition to those whose identity is at issue? Another objection is that the new version of the Lockean criterion is at odds with our moral convictions concerning the importance of personal survival. For, if my survival requires the future existence of someone who is identical with me, then it seems that, by the neo-Lockean criterion, my surviving or not surviving can turn upon the seemingly irrelevant matter of whether or not I will at some point undergo fission or fusion (however far-fetched such scenarios may seem). To this, however, it may be replied that the real lesson of this is that my survival, in the sense in which it is or should be something of importance to me, should be defined not in terms of my identity with some future person but rather in terms of there being at least one such person who is linked to me by a connected chain of quasi-memories; if there is more than one, as in a fission case, then so much the better, on this view (see Parfit, 1984, Chapter 12 and Chapter 13).
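For readers who find it useful to see the shape of the modified criterion laid out mechanically, here is a toy sketch of connectedness of quasi-remembered past experience, understood as the ancestral of the quasi-memory relation, together with a crude non-branching clause. Everything here (the stage names, the graph and the way branching is detected) is an invented illustration, not a formal statement of the neo-Lockean view.

```python
# Person-stages as nodes; an edge (later -> earlier) records that the later
# stage quasi-remembers some experience of the earlier stage.
links = {
    "general": ["officer"],
    "officer": ["boy"],
    "boy": [],
    "A": ["C"],     # a fission case: two later stages, A and B,
    "B": ["C"],     # each quasi-remember experiences of the same stage C
    "C": [],
}

def connected(later, earlier):
    """Ancestral of the quasi-memory relation: is there a chain of links
    running back from the later stage to the earlier one?"""
    frontier, seen = [later], set()
    while frontier:
        stage = frontier.pop()
        if stage == earlier:
            return True
        if stage not in seen:
            seen.add(stage)
            frontier.extend(links.get(stage, []))
    return False

def branching(earlier):
    """Crude branching test: does more than one later stage link directly
    back to this stage? (A full treatment would check the whole chain.)"""
    return sum(earlier in later_links for later_links in links.values()) > 1

def same_person(later, earlier):
    """Toy non-branching neo-Lockean criterion."""
    return connected(later, earlier) and not branching(earlier)

print(same_person("general", "boy"))   # True: an unbroken, non-branching chain
print(same_person("A", "C"))           # False: C has undergone fission
```

On this toy rendering the elderly general comes out identical with the boy, as the modified criterion intends, while the fission case involving C is excluded by the non-branching clause; the philosophical question, as just noted, is whether such a clause can be independently motivated.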
Another Circularity Objection to the Neo-Lockean Approach

Does this mean, then, that the final, non-branching version of the neo-Lockean criterion is finally acceptable? Does it satisfy all the requirements of an adequate criterion of personal identity? Very arguably, it does not, for it still seems vulnerable to a charge of implicit circularity, although one of a different sort from Butler’s. Recall again that the neo-Lockean criterion appeals to the notion of a person, P, remembering – or, rather, quasi-remembering – some past experience, e. But now we need ask ourselves this: how are memories and experiences themselves individuated? Such items are mental states or events. But what are their identity-conditions? We can already rule out the Davidsonian criterion of identity for events as a way of settling this question because we found it to be implicitly circular. It was so because it sought to identify events on the basis of the sameness of their causes and effects, while also taking these causes and effects to be events themselves. So it defined ‘sameness of events’ in terms appealing to sameness among events, a blatantly circular procedure, leaving us no
clearer as to what the identity of events consists in. But the neo-Lockean criterion likewise appears to be implicitly circular, albeit in a rather more roundabout way. For it is strongly arguable that the only adequate criterion of identity for mental states and events will be one which makes reference to their subjects, which, in the case of personal memories and experiences, will be the persons who have those memories and experiences (see Strawson, 1959, Chapter 3). Let us focus on the case of experiences, although the same reasoning will apply equally to memories. On the view now being recommended, part of what makes an experience of mine numerically distinct from a qualitatively indistinguishable experience of yours is the very fact that it is mine as opposed to yours. The only other possible distinguishing feature seems to be the time at which an experience occurs. In short, the following seems to be a very plausible criterion of identity for personal experiences:

(∀x)(∀y)((x is a personal experience and y is a personal experience) → (x = y ↔ (x and y are qualitatively indistinguishable & (∃P1)(∃P2)(∃t1)(∃t2) (P1 has x at t1 & P2 has y at t2 & P1 = P2 & t1 = t2)))),

where ‘P1’ and ‘P2’ are variables ranging over persons and ‘t1’ and ‘t2’ are variables ranging over times. In plain English, what this formula says is just this: if x and y are personal experiences, then they are the same personal experience if and only if x and y are qualitatively indistinguishable experiences had by the same person at the same time. It is quite clear that the criterial relation invoked by this criterion is, as required, an equivalence relation. But, equally, it is obvious that it appeals to the notion of sameness between persons and hence presupposes that notion. Accordingly if, as I strongly suspect is the case, this is the only adequate criterion of identity for personal experiences, then the neo-Lockean criterion of personal identity is implicitly circular inasmuch as it will need to rely on the foregoing criterion for a specification of the identity-conditions of the experiences to which it appeals for the purposes of identifying persons. Clearly, at any rate, we cannot both individuate persons in terms of their experiences (as the neo-Lockean criterion attempts to do) and individuate personal experiences in terms of the persons having them (as the foregoing criterion does). And to the extent that the foregoing criterion of identity for personal experiences looks to be in good order, it is the neo-Lockean criterion that must be rejected as inadequate (see Lowe, 2009, Chapter 7).
Some Loose Ends and a Brief Conclusion

I have focused on the neo-Lockean approach because it is, deservedly, by far the most prominent one in the modern literature on personal identity, whether
it is being endorsed or being attacked. But something should be said now about some alternative approaches. First of all, so far we have considered only the prospects for a one-level criterion of personal identity. But on some views of what persons are, a two-level criterion might seem more appropriate – for instance, if persons are taken to be functional states or roles that objects of appropriate kinds can occupy. Thus, one such view would be that a person’s body, or a special part of that body, such as its brain, is the object that occupies the functional role in question. Suppose that being a person is a functional role of a brain (e.g. it might be taken to be the role of being a producer of first-person thoughts). Then a criterion of personal identity could be expected to take something like the following two-level form:

(∀x)(∀y)(the person of brain x = the person of brain y ↔ (brain x and brain y are RP-related)),

where ‘RP’ denotes a certain equivalence relation among brains. Indeed, on one view, this relation would simply be identity itself. There would be no circularity in the criterion on this account, since it would simply be defining personal identity in terms of brain-identity, and persons and brains are here being taken to be items of quite different kinds. So this approach is by no means identifying a person with his or her brain. The brain-identity criterion of personal identity just implies that a person’s identity tracks that of the person’s brain, so that, for example, if a person A’s brain is transplanted into the evacuated head of another person B’s body, then person A acquires person B’s body; and if person A’s brain is switched with person B’s, then we have a body-swap, with person A acquiring person B’s body and person B acquiring person A’s body. The scenario is really very similar to Locke’s famous imaginary example of the prince and the cobbler, who supposedly undergo a body-swap, although what Locke envisaged was that the soul of the prince entered the cobbler’s body and the soul of the cobbler entered the prince’s body (Locke, 1975 [1690], II, XXVII, p. 15). However, although Locke thought that this scenario was in principle possible, he did not, of course, subscribe to a soul-identity criterion of personal identity because he thought that the same person could in principle have different souls at different times and that the same soul could, at different times, be the soul of different persons. (For a modern defence of a soul-identity criterion, see Swinburne, 1986, Chapter 8.) I have nothing to say in recommendation of a two-level approach such as the brain-identity criterion, although it will clearly appeal to some philosophers and psychologists. Such an approach is clearly inappropriate if we regard the term ‘person’ as denoting a distinct kind of substantial being, an individual substance, rather than a certain kind of state or role that such a substance can occupy. Certainly, common sense and ordinary language strongly suggest the
former view. I feel myself to be some thing, with distinctive properties such as thought and feeling, rather than my being merely some property or feature of some other thing, such as my brain. But it must be confessed that a satisfactory criterion of personal identity that supports this conviction is still very elusive. On the other hand, we should be open to the possibility that personal identity is so basic in our ontological scheme that we should not really expect to be able to formulate such a criterion. For, as we have seen, criteria of identity for objects of a kind K always appeal to objects of other kinds in specifying a criterial relation for K-identity. If persons are really fundamental in our ontological scheme, we should not expect to be able to appeal to such other kinds of objects in their case. That being so, we should probably conclude that personal identity is primitive and ‘simple’, in the sense that nothing more informative can be said about the identity of persons than that in some cases it just obtains and in others not (see Lowe, 2009, Chapter 7).
12
Embodied Cognition and the Extended Mind Michael Wheeler
The Flight from Cartesianism

There is a seductive image of intelligent action that sometimes gets labelled Cartesian. According to this image, as I shall present it here, the psychological understanding of the operating principles by which an agent’s mind contributes to the generation of reliable and flexible, perceptually guided intelligent action remains conceptually and theoretically independent of the details of that agent’s physical embodiment. Less formally, one might say that, in the Cartesian image, the body enjoys no more than a walk-on part in the drama of intelligent action. Whether or not the Cartesian image is Cartesian in the sense that it ought to be attributed to Descartes himself is a matter that demands careful exegetical investigation (e.g. see Wheeler, 2005 for an analysis which concludes that, by and large, it should). In general, positions that are currently identified as Cartesian may not map directly or completely onto Descartes’s own views. This potential mis-match is an example of a widespread phenomenon and should come as no surprise. Were Karl Marx with us today, he might well express serious misgivings about some of what has been said and done in the name of Marxism. In Descartes’s case, his views have been handed down to us via a rich intellectual history of contested interpretations and critical debate. Inevitably, perhaps, some ideas that now bear the stamp Cartesian will have as much to do with that intervening process as they have to do with Descartes himself. Anyway, for now, I intend to ignore the question of provenance. What is crucial in the present context is that the two views of intelligent action with which I shall be concerned in this chapter – the hypotheses of embodied cognition and of the extended mind – may be understood as different stop-off points in a flight from the image in question. To bring all this into better view, we can adapt an example due to Clark (1997, pp. 63–4) of some different ways in which an intelligent agent might solve a jigsaw puzzle. Here is a strategy suggested by the Cartesian image. On the basis of perceptual information about the problem environment (the unmade jigsaw), the agent solves the entire puzzle ‘in her head’, using some
combination of mental imagery, judgement, inference, reasoning, and so on. The solution arrived at in this way is then executed in the world, through a series of movement instructions that are dispatched from the mind, to the hands and arms. Things may not always go according to plan, of course, but any failures experienced during the execution phase act as nothing more than perceptual prompts for some newly initiated in-the-head planning. Now, it is quite obvious that the puzzle-solving mind at the core of this activity needs a body to execute the movement instructions generated by that mind; and nothing in the account on offer suggests that there could be minds without brains. (Substance dualism is not the issue.) Nevertheless, in this Cartesian scenario, the fact is that the body makes only an impoverished contribution to the intelligence on display. The nature of this impoverishment becomes clear once a second vision of jigsaw competence is placed on the table. According to this new vision, certain bodily acts, such as picking up various pieces, rotating those pieces to help pattern-match for possible fits, and trying out potential candidates in the target position, are deployed as central aspects of the agent’s problem-solving strategy. In the unfolding of this alternative plot, the details of the thinker’s embodiment, in the guise of the specific embodied manipulative capacities that she deploys, play an essential supporting role in the story of intelligent action. This is an example of embodied cognition.1 Notice that problem-solving strategies which essentially involve bodily acts will often encompass a richer mode of environmental interaction than is present in Cartesian contexts. Thus in our Cartesian jigsaw-completing scenario, the physical environment is arguably no more than a furnisher of problems for the agent to solve, a source of informational inputs to the mind (via sensing), and a stage on which sequences of pre-specified actions, choreographed in advance by prior neural processes, are simply executed. By contrast, in the alternative, embodied cognition scenario, although the physical environment remains a furnisher of problems and a source of informational inputs, it has also been transformed into a readily available external resource which is exploited by the agent, in an ongoing way, to restructure the piece-finding problem and thus reduce the information processing load being placed on the inner mechanisms involved. Indeed, the external factors in play – in particular, the geometric properties of the pieces themselves – participate in a kind of ongoing goal-achieving dialogue with the agent’s neural processes and her bodily movements. In so doing, those external factors account for some of the distinctive adaptive richness and flexibility of the problem-solving behaviour. The embodied mind is thus also a mind that is intimately embedded in its environment. Once one starts to glimpse the kind of environmental contribution to intelligent action ushered in by embodied solutions, it is but a small step, although one which is philosophically controversial, to the second of our target positions, namely the extended mind hypothesis (Clark and Chalmers, 1998).2
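The two jigsaw strategies just contrasted can be caricatured as two control regimes: plan everything internally and then execute, versus act, let the world answer back, and act again. The sketch below is only my own schematic gloss on Clark’s example, with invented piece and slot names; it is not a model of any actual cognitive architecture.

```python
import random

# Toy jigsaw: each piece fits exactly one slot, and 'fits' stands in for
# physically trying a piece against a slot in the world.
PIECES = ["sky", "tree", "house", "road"]

def fits(piece, slot):
    return piece == slot

def cartesian_strategy(pieces):
    """Solve the whole puzzle 'in the head' first, then execute open-loop."""
    internal_plan = {piece: piece for piece in pieces}      # complete inner solution
    return [f"place {piece} in {slot}" for piece, slot in internal_plan.items()]

def embodied_strategy(pieces):
    """No complete inner plan: pick up pieces, try them, and let the world's
    feedback (the fit, or the failure to fit) drive the next move."""
    actions, open_slots = [], list(pieces)
    for piece in random.sample(pieces, len(pieces)):
        for slot in list(open_slots):
            actions.append(f"try {piece} in {slot}")
            if fits(piece, slot):                            # feedback from the world
                actions.append(f"place {piece} in {slot}")
                open_slots.remove(slot)
                break
    return actions

print(cartesian_strategy(PIECES))
print(embodied_strategy(PIECES))
```

The extended mind hypothesis, to which the discussion now turns, pushes this kind of environmental involvement a step further.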
According to this hypothesis, there are actual (in this world) cases of intelligent action in which thinking and thoughts (more precisely, the material vehicles that realize thinking and thoughts) are spatially distributed over brain, body and world, in such a way that the external (beyond-the-skin) factors concerned are rightly accorded cognitive status. In other words, ‘actions and loops through nonbiological structure [sometimes count] as genuine aspects of extended cognitive processes’ (Clark, 2008b, p. 85). So, if the extended mind hypothesis is true, it is not merely the case that thinking is sometimes (and perhaps sometimes essentially) causally dependent in complex and intricate ways on the bodily exploitation of external props or scaffolds. Indeed, bare causal dependence of thought on external factors is not sufficient for genuine cognitive extension (a point rightly emphasized by Adams and Aizawa, 2008). Rather, if the extended mind hypothesis is true, thought must sometimes exhibit a constitutive dependence on external factors. This is the sort of dependence indicated by talk of beyond-the-skin factors rightly being accorded cognitive status. Stretching our thespian metaphor beyond reasonable limits, this is the twist in the tale of intelligent action where the scenery and the props get a mention in the cast list. In one short chapter, I cannot hope to give a comprehensive field guide to embodied cognition and the extended mind. So my goal will be more modest. I shall endeavour to cast light on a specific issue which lies at the very heart of the contemporary debate, namely the character of, and the argument for, the transition from embodied cognition to cognitive extension (see also, Clark 2008a, 2008b; Wheeler forthcoming a and c; Rowlands, forthcoming). Here, then, is where I am going. In the second section, I shall present some empirical research from cognitive science which illuminates the embodied cognition hypothesis, henceforth EmbC. In the third section, I shall suggest that once one has accepted the resulting picture of intelligent action, there remains a philosophical choice to be made over how to conceptualize the role of the body in the action-generation process, a choice between what Clark (2008a) identifies as a radical body-centrism and a newly interpreted functionalism. In the fourth section, I shall explore the connection between the second of these options and the extended mind hypothesis, henceforth ExM. My suggestion will be that the basic character of one of the central philosophical arguments for ExM, the argument from parity, makes that functionalist option more attractive. In the fifth section, I shall seek to strengthen the emerging picture by showing how a key element of the argument from parity may be secured.
Body Matters

As I shall use the term, orthodox cognitive science encompasses the bulk of research in both classical cognitive science (according to which, roughly, the
mind recapitulates the abstract structure of human language, in that it is characterized by a combinatorial syntax and semantics) and mainstream connectionism (according to which, roughly, the mind recapitulates the abstract structure of the biological brain, in that it is organized as a distributed network of interconnected simple processing units). Although I shall not give a full defence of the claim here, it is arguable (e.g. see Wheeler 2005) that the Cartesian image of an explanatorily disembodied and disembedded mind has been a core feature of orthodox cognitive science and of the sort of scientifically oriented philosophy of mind that rides shotgun with that science. This is not to say that no orthodox cognitive scientist has ever expressed the view that bodily acts in close interaction with environmental structures might play a crucial and active part in generating complex behaviour. Simon famously discussed the path followed by an ant walking on a beach in order to make precisely this point (Simon, 1969; for discussion, see Boden, 2006, pp. 429–30, and Haugeland 1995/98, pp. 209–11). Moreover, the conceptual geography in this vicinity demands careful mapping. For one thing, orthodox connectionism takes its basic inspiration from a psychologically crucial part of the organic body, namely the brain. Indeed, the much recorded ability of orthodox connectionist networks to perform cognitively suggestive feats of graceful degradation, flexible generalization, fluid default reasoning, and so on, can, in many ways, be identified as a natural consequence of that nod to embodiment. So the claim that the disembodied aspect of the Cartesian image has been at work in this area of orthodox cognitive science needs to be backed by some sort of evidence (more on that soon). In addition, as we shall see later, the language-like compositional structures of the classical framework and the distributed network-style structures of connectionism may be rendered fully compatible with ExM, so it is not as if those structures must necessarily be associated with the Cartesian image. Nevertheless, it remains true, I think, that the Cartesian image has historically held sway as part of the received orthodoxy in cognitive science. All that said, things are on the move. Over the past two decades, cognitive-scientific models generated from the EmbC perspective have become increasingly common. And to the extent that such models provide illuminating, compelling and fruitful explanations of intelligent action, EmbC as a paradigm garners empirical support. It is in this context that it will serve our current purpose to make a brief visit to the sub-discipline of contemporary artificial intelligence known as situated robotics. Roboticists in this camp shun the classical cognitive-scientific reliance on detailed internal representations (although they don’t necessarily shun all forms of representation). The case for this scepticism about representational control often turns on the thought that where the adaptive problem faced by an agent involves integrating perception and action in real time so as to generate fast and fluid behaviour, detailed representations
are just too computationally expensive to build and maintain. So situated roboticists favour an alternative model of intelligent action in which the robot regularly senses its environment (rather than checks an internal world model) to guide its actions. It is this commitment that marks out a robot as situated (Brooks, 1991). One of the key lessons from research in this area is that much of the richness and flexibility of intelligence is down not to centrally located processes of reasoning and inference, but rather to integrated suites of special-purpose adaptive couplings that combine neural mechanisms (or their robotic equivalent), non-neural bodily factors, and environmental elements, as ‘equal partners’ in a behaviour-generating strategy. Unsurprisingly, then, the field of situated robotics is a rich storehouse of examples of embodied cognition. To illustrate just how explanatorily powerful the appeal to embodiment may be in cognitive science, consider the following challenge. Clark and Thornton (1997) claim that there are certain learning problems – so-called type-2 problems – where the target regularities are inherently relational in nature, and so are statistically invisible in the raw input data. Type-2 problems are thus to be contrasted with type-1 problems, which involve non-relational regularities that are visible in that data. According to Clark and Thornton, this leaves cognitive science with a serious difficulty, because empirical testing suggests that many of the most widely used, ‘off-the-shelf’ artificial intelligence learning algorithms (e.g. connectionist back-propagation and cascade-correlation, plus others such as ID3 and classifier systems) fail on type-2 problems, when the raw input data is presented. This fact would, of course, be no more than a nuisance for cognitive science if such learning problems were rare; but, if Clark and Thornton are right, type-2 problems are everywhere – in relatively simple behaviours (such as approaching small objects while avoiding large ones), and in complex domains (such as grammar acquisition). Clark and Thornton proceed to argue that the solution to this difficulty involves the internal presence of general computational strategies that systematically re-represent the raw input data so as to produce a non-relational target regularity. This output re-representation is then exploited by learning in place of the initial input coding. In effect, the process of re-representation renders the type-2 learning problem tractable by transforming it into a type-1 problem. So where do embodiment and situated robotics come in? Scheier and Pfeifer (1998) demonstrate that a type-2 problem may be solved by a process in which a mobile agent uses autonomous bodily motion to actively structure input from its environment. Once again the strategy is to transform an intractable type-2 problem into a tractable type-1 problem, but this time there is no need for any computational inner re-representation mechanism. The test case is the type-2 problem presented by the task of avoiding small cylinders while staying close to large ones. Scheier and Pfeifer show that this problem may be solved by some relatively simple, evolved neural network robot controllers. Analysis
demonstrated that most of these controllers had evolved a systematic circling behaviour which, by inducing cyclic regularities into the input data, turned a hostile type-2 climb into a type-1 walk in the park. In other words, adaptive success in a type-2 scenario (as initially encountered) was secured not by inner re-representation, but by an approach in which the agent, ‘by exploiting its body and through the interaction with the environment . . . can actually generate . . . correlated data that has the property that it can be easily learned’ (Scheier and Pfeifer, 1998, p. 32). Scheier and Pfeifer’s canny and frugal solution to Clark and Thornton’s challenge shows how being an embodied agent (of a mobile kind) can yield dividends in the cognitive realm, and thus how a proper sensitivity to what we might call ‘gross embodiment’ has an impact on cognitive science. A different, but equally important, perspective on how embodiment may shape our understanding of cognition comes into view if we switch scale, and concentrate instead on the detailed corporeal design of biological systems. Once again, as we shall see, situated robotics provides an experimental context in which an appeal to embodiment may be developed and tested. As the flip-side of its claim to biological plausibility, mainstream connectionism tends to promote a vision of biological brain processes as essentially a matter of electrical signals transmitted between simple processing units (neurons) via connections (synapses) conceived as roughly analogous to telephone wires. However, as Turing once remarked, ‘[i]n the nervous system chemical phenomena are at least as important as electrical’ (Turing, 1950, p. 46). The factoring out of brain-based chemical dynamics by mainstream connectionist theorizing thus indicates another dimension along which the embodiment of cognition is sidelined by orthodox cognitive science. So what happens when such chemical dynamics are brought into view? Reaction-diffusion (RD) systems are distributed chemical mechanisms involving constituents that are (a) transformed into each other by local chemical reactions and (b) spread out in space by diffusion. Such systems explain how unicellular organisms such as bacteria manage to distinguish between different relevant environmental factors, adapt to environmental change, and co-ordinate collective behaviour. Thus behaviour that researchers in the field of artificial life often describe as minimally cognitive may be achieved by RD systems. Many of the molecular pathways present in unicellular organisms have been conserved by evolution to play important roles in animal brains, so an understanding of the ways in which RD systems may generate minimally cognitive behaviour will plausibly help us to explain the mechanisms underlying higher-level natural cognition. Against this background, Dale and Husbands (2010) show that a simulated RD system (conceived as a one-dimensional ring of cells within which the concentration of two coupled chemicals changes according to differential equations governing within-cell reactions and between-cell
diffusion) is capable of intervening between sensory input (from whiskers) and motor output (wheeled locomotion) to enable a situated robot to achieve the following minimally cognitive behaviours: (a) tracking a falling circle (thus demonstrating orientation), (b) fixating on a circle as opposed to a diamond (thus demonstrating discrimination), (c) switching from circle fixation behaviour to circle avoidance behaviour on the presentation of a particular stimulus (thus demonstrating memory). As Dale and Husbands (2010, p. 17) put it, a range of robust minimally cognitive behaviours may be exhibited by a ‘seemingly homogenous blob of chemicals’, a revision to our understanding of how cognition works that is inspired by our taking seriously the details of biological corporeal design. In this section I have highlighted two important examples of the way in which embodiment may have an impact on cognitive theory. In the next section I shall address a further question: in the light of the examples of corporeal impact to which I have drawn attention, how, in general terms, are we to conceptualize the fundamental contribution of the body to cognitive phenomena?
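Before moving on, the type-1/type-2 distinction that drives the Scheier and Pfeifer example can be illustrated with a deliberately artificial miniature. The data, the relational target and the recoding step below are my own construction, not Clark and Thornton’s or Scheier and Pfeifer’s actual test cases; the point is only that a regularity defined over a relation between inputs is invisible to each input taken alone, yet becomes trivial once the data are re-represented, whether by an inner recoding mechanism or, as in the robots, by the way the agent’s own movements structure its input.

```python
from itertools import product

# A miniature 'type-2' problem: the target depends only on whether the two
# inputs agree, never on either input taken alone.
data = [((x1, x2), int(x1 == x2)) for x1, x2 in product([0, 1], repeat=2)]

# In the raw data the regularity is statistically invisible one input at a
# time: for either value of either input, the target is 1 exactly half the time.
for i in (0, 1):
    for v in (0, 1):
        targets = [y for xs, y in data if xs[i] == v]
        print(f"x{i + 1} = {v}: mean target = {sum(targets) / len(targets)}")

# Re-representation: replace the raw inputs by the relational feature itself.
recoded = [((int(x1 == x2),), y) for (x1, x2), y in data]

# The recoded problem is 'type-1': the target now simply mirrors a regularity
# that is visible in the (re-represented) input.
print(all(y == xs[0] for xs, y in recoded))   # True
```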
Two Kinds of Embodiment

Clark (2008a) observes that there are two different, although often tangled, strands of thinking at work within contemporary accounts that stress embodiment. In the following passage, he unravels those strands for us.

One . . . depicts the body as intrinsically special, and the details of a creature’s embodiment as a major and abiding constraint on the nature of its mind: a kind of new-wave body-centrism. The other depicts the body as just one element in a kind of equal-partners dance between brain, body and world, with the nature of the mind fixed by the overall balance thus achieved: a kind of extended functionalism (now with an even broader canvas for multiple realizability than ever before). (Clark, 2008a, pp. 56–7)

In order to see this division of ideas in its proper light, one needs to say what is meant by functionalism, as that thesis figures in the debate with which we are concerned here. The final emphasis is important, because although Clark does not address the issue, the kind of functionalism plausibly at work in the transition from EmbC to ExM is not the kind most usually discussed by philosophers, although I think it is the kind most usually assumed in cognitive psychology. To bring our target version of functionalism into view, we can exploit McDowell’s (1994) distinction between personal-level explanations, which are those concerned with the identification and clarification of the constitutive character of agency (roughly, what it is to competently inhabit a world), and
sub-personal explanations, which are those concerned with mapping out the states and mechanisms (the parts of agents, as it were) that causally enable personal-level phenomena. Functionalism, as I shall understand it here, is a sub-personal causal-enabling theory. It is not, as it is in its more common philosophical form, a way of specifying constitutive criteria for what it is to undergo types of personal-level mental states. Depending on one’s account of the relationship between personal and sub-personal levels of explanation, one might be a sub-personal functionalist while rejecting functionalism at the personal level. In this paper I shall say nothing more about personal-level functionalism. My concern is with the sub-personal version of the view, i.e., with the claim that what matters when one is endeavouring to identify the specific contribution of a sub-personal state or process qua cognitive is not the material constitution of that state or process, but rather the functional role which it plays in the generation of personal-level cognitive phenomena by intervening between systemic inputs, systemic outputs and other functionally identified, intra-systemic, sub-personal states and processes. With that clarification in place, let’s return to the division of ideas recommended by Clark. In the present context, it will prove useful to re-draw that division in terms of a closely related distinction between two kinds of materiality, namely vital materiality and implementational materiality (Wheeler, forthcoming c). The claim that the materiality of the body is vital is tantamount to the first strand of embodied thought identified by Clark (i.e. that the body makes a special, non-substitutable contribution to cognition, generating what, elsewhere, Clark [2008a, p. 50] calls ‘total implementation sensitivity’). On the other hand, if the materiality of the body is ‘merely’ implementational in character, then the physical body is relevant ‘only’ as an explanation of how mental states and processes are instantiated in the material world. The link between implementational materiality and functionalism becomes clear when one notes that, on any form of functionalism, including the sub-personal one presently on the table, multiple realizability will be at least an in-principle property of the target states and processes. Because a function is something that enjoys a particular kind of independence from its implementing material substrate, a function must, in principle, be multiply realizable, even if, in this world, only one kind of material realization happens to exist for that function. And since the multiple realizability of the mental requires that a single type of mental state or process may enjoy a range of different material instantiations, the specific material embodiment of a particular instantiation cannot be a major and abiding constraint on the nature of mind. Put another way, the implementational materiality of the mental (or something akin to it) is plausibly necessary for mental states and processes to be multiply realizable. And this remains true when one’s functionalism – and thus the level at which the behaviour-generating causal states and processes qua cognitive are specified – is pitched at
a sub-personal level. By contrast, where the materiality of the body is vital, multiple realizability is, if not ruled out altogether, at least severely curtailed (e.g. see Shapiro, 2004, especially p. 167). Armed with the conceptual distinction just made, how are we to conceptualize the role of the body in each of our two flagship examples of embodied cognition: as a case of vital materiality (supporting a new-wave body-centrism) or as a case of implementational materiality (supporting a functionalist picture)? My immediate answer to this question might come as something of a surprise. For, as far as I can see, each of our examples might be interpreted according to either vision of embodiment. Here’s why. To see Scheier and Pfeifer’s cylinder discriminating robots as an instance of vital materiality, one might begin with the observation that Clark and Thornton’s appeal to an inner process of re-representation exemplifies a computational information processing approach to solving the problem. One might then suggest, with some plausibility it seems, that the way in which Scheier and Pfeifer’s robots exploit gross bodily movement in their specific circling behaviour provides us with a radical alternative to computational information processing as a general problem-solving strategy, an alternative available only to agents with bodies of a certain kind. To see Dale and Husbands’ minimally cognitive RD system as an instance of vital materiality, one might interpret that system as an example of what Collins calls embrained knowledge. For Collins, knowledge is embrained just when ‘cognitive abilities have to do with the physical setup of the brain,’ where the term ‘physical setup’ signals not merely the ‘way neurons are interconnected’, but also factors to do with ‘the brain as a piece of chemistry or a collection of solid shapes’ (Collins, 2000, p. 182). Embrained knowledge so defined is an example of total implementation sensitivity and thus establishes vital materiality. And the evidence from Dale and Husbands that the spatio-temporal chemical dynamics of RD systems, as plausibly conserved in the evolutionary transition from unicellular organisms to animal brains, may generate minimally cognitive behaviour surely provides an example of cognitive abilities being to do with the physical setup of the brain, that is, of embrained knowledge. Now let’s look at things from a different angle. To see Scheier and Pfeifer’s robots as providing an instance of implementational materiality, one might argue that the restructuring of the learning problem achieved by their bodily movements is functionally equivalent to the restructuring of that problem effected by Clark and Thornton’s inner re-representation strategy. In both cases, a type-2 learning problem (intractable to standard learning algorithms as it stands) is transformed into a type-1 problem (and so rendered tractable). Thus one might think in terms of alternative material realizations of a single multiply realizable, functionally specified contribution (the transformation of the statistical structure of the target information), a contribution that may be
performed by inner neural mechanisms or by bodily movements. To see Dale and Husbands’ RD system as an instance of implementational materiality, one need note only that the experiments described briefly above are designed explicitly as (something close to) replications, using an RD system, of experiments in minimally cognitive behaviour carried out originally by Beer (1996, 2003; Slocum et al., 2000) using continuous recurrent neural networks (CNNs). RD systems thus emerge as one kind of vehicle for functionally specified mechanisms of orientation, discrimination and memory, mechanisms that could in principle be realized in other ways, such as by CNNs. One might worry here that RD systems and CNNs are not alternative realizations of certain functionally specified mechanisms, but rather alternative ways of achieving certain minimally cognitive behaviours without there being any more specific functional unity in terms of processing architecture. And indeed, one might well analyze RD systems as examples of Collins’ embrained knowledge, and thus of vital materiality (see above), while analyzing CNNs as a dynamically richer form of connectionism, and thus as a kind of microfunctionalist theorizing (Clark 1989) that demands an implementational notion of materiality.3 But any such uncertainty in how to interpret the case is arguably grist to my mill, since it will be an illustration of the very issue of underdetermination that I have set out to highlight. As things stand, we seem to confront something of an impasse in our attempt to understand the fundamental contribution of embodiment to cognitive theory. To escape from this impasse, it seems to me, we have no option right now but to look beyond the thought that the understanding we seek may be directly read off from the available science. In the next section I shall present, analyse and briefly defend one of the central philosophical arguments for ExM, namely the argument from parity. I shall then explain why that argument forges a link with the functionalist perspective on embodiment. Given that vital materiality is inconsistent with functionalism, this suggests a consideration in favour of the view that the fundamental contribution of the body to cognitive theory is a matter of implementational materiality. At the very least, if the argument from parity is indeed sound, then the implementational view of embodiment is correct.
From the Parity Principle to Extended Functionalism

According to ExM, there are actual (in this world) cases of intelligent action in which thinking and thoughts (more precisely, the material vehicles that realize thinking and thoughts) are spatially distributed over brain, body and world, in such a way that the external (beyond-the-skin) factors concerned are rightly accorded cognitive status. To see how one might argue philosophically for this
view, we need to make contact with what, in the ExM literature, is called the parity principle. Here is how that principle is formulated by Clark (drawing on Clark and Chalmers, 1998, p. 8):

If, as we confront some task, a part of the world functions as a process which, were it to go on in the head, we would have no hesitation in accepting as part of the cognitive process, then that part of the world is (for that time) part of the cognitive process. (Clark, 2008b, p. 77)

The general idea here seems clear enough: if there is functional equality with respect to governing intelligent behaviour (e.g. in the way stored information is poised to guide such behaviour), between the causal contribution of certain internal elements and the causal contribution of certain external elements, and if the internal elements concerned qualify as the proper parts of a cognitive system (state, process, mechanism, architecture . . . ), then there is no good reason to deny equivalent status to the relevant external elements. Parity of causal contribution mandates parity of status with respect to the cognitive. But if the general idea of the parity principle is clear enough, the details of how to apply it are not, so we need to pause here to get clear about those details (for a similar analysis, see Wheeler, forthcoming c). One interpretation of the parity principle is suggested by the way in which it is applied by Clark and Chalmers themselves to the near-legendary (in ExM circles) case of Inga and Otto (Clark and Chalmers, 1998). In this imaginary scenario, Inga is a psychologically normal individual who has committed to her purely organic (neural) memory the address of the New York Museum of Modern Art (MOMA). If someone asks her the location of MOMA, she deploys that memory to retrieve the information that the building is on 53rd Street. Otto, on the other hand, suffers from a mild form of Alzheimer’s, but compensates for this by recording salient facts in a notebook that he carries with him constantly. If someone asks him the way to MOMA, he automatically and unhesitatingly pulls out the notebook and looks up the relevant fact, viz. that the museum is on 53rd Street. Clark and Chalmers claim that there is a functional equivalence between (a) the behaviour-governing causal role played by Otto’s notebook, and (b) the behaviour-governing causal role played by the part of Inga’s brain that stores the same item of information as part of her purely organic memory. By the parity principle, then, Otto’s memory turns out to be extended into the environment. Moreover, argue Clark and Chalmers, just as, prior to recalling the information in question, Inga has the non-occurrent dispositional belief that MOMA is on 53rd Street, so too does Otto, although while Inga’s belief is realized in her head, Otto’s is realized in the extended, notebook-including system.
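At the coarse level of functional description at which Clark and Chalmers pitch the comparison, Inga’s biological memory and Otto’s notebook are claimed to play one and the same role: a store of dispositional beliefs that can be queried to guide action. The sketch below is my own illustration of that claimed equivalence, not Clark and Chalmers’s formulation; the class and function names are invented.

```python
class OrganicMemory:
    """Inga's inner store: facts committed to biological memory."""
    def __init__(self):
        self._facts = {}
    def store(self, key, value):
        self._facts[key] = value
    def recall(self, key):
        return self._facts.get(key)

class Notebook:
    """Otto's notebook: the same dispositional role, realized beyond the skin."""
    def __init__(self):
        self._pages = []
    def store(self, key, value):
        self._pages.append((key, value))
    def recall(self, key):
        for k, v in reversed(self._pages):
            if k == key:
                return v
        return None

def head_to_museum(belief_store):
    """Behaviour-guiding query: the caller neither knows nor cares which
    realizer answers it; only the functional role matters."""
    return belief_store.recall("location of MOMA")

for store in (OrganicMemory(), Notebook()):
    store.store("location of MOMA", "53rd Street")
    print(head_to_museum(store))   # '53rd Street' either way
```

Whether this coarse grain of description is the right one is, of course, exactly what the criticisms considered next call into question.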
230
Embodied Cognition and the Extended Mind
If we reflect on precisely how the parity principle is intended to work in this particular case, we would be forgiven for thinking that the benchmark for parity (the set of conditions that the Otto-plus-notebook system would need to meet in order to count as cognitive) is fixed by whatever Inga’s brain does. But although Clark and Chalmers’s text sometimes leaves rather too much room for this reading of the parity principle, it would be a tactical disaster for the advocates of ExM if that really were what was meant. As Menary (a fan of ExM, but not of the parity principle), drawing on work by Sutton (ditto), observes:

[O]nly at the grossest level of functional description can [the claim of equivalence] be said to be true. Otto and his notebook do not really function in the same kind of way that Inga does when she has immediate recall from biological memory. There are genuine and important differences in the way that memories are stored internally and externally and these differences matter to how the memories are processed. John Sutton has pointed out that biological memories stored in neural [i.e., connectionist] networks are open to effects such as blending and interference (see Sutton [2006] for discussion). The vehicles in Otto’s notebook, by contrast, are static and do no work in their dispositional form (Sutton, 2006). (Menary, 2007, p. 59)

Other critics of the parity principle have appealed to the psychological data on various extant inner cognitive capacities, as delivered by cognitive science, in order to construct similar failure-of-parity arguments (e.g. see Adams and Aizawa, 2008 on primacy and recency effects in organic memory; for discussion, see Wheeler forthcoming a and c). The general version of the worry, however, is this: if (a) the relatively fine-grained functional profiles of extant inner cognitive systems set the benchmark for parity, then (b) any distributed (over brain, body and world) systems that we might consider as candidates for extended counterparts of those cognitive systems will standardly fail to exhibit full functional equivalence, so (c) parity will routinely fail, taking with it the parity argument for cognitive extension.

Right now things might look a little bleak for a parity-driven ExM, but perhaps we have been moving too quickly. Indeed, it seems to me that the kind of anti-parity argument that we have been considering trades on what is in fact a misunderstanding of the parity principle. To see this, one needs to think more carefully about precisely what the parity principle, as stated above, asks us to do. It encourages us to ask ourselves whether a part of the world is functioning as a process which, were it to go on in the head, we would have no hesitation in accepting as part of the cognitive process. So we are encouraged to imagine that exactly the same functional states and processes which are realized in the actual world by certain externally located physical elements are in fact realized
by certain internally located physical elements. Having done this, if we then judge that the now-internal but previously external processes count as part of a genuinely cognitive system, we must conclude that they did so in the extended case too. After all, by hypothesis, nothing about the functional contribution of those processes to intelligent behaviour has changed. All that has been varied is their spatial location. And if the critic were to claim that being shifted inside the head is alone sufficient to result in a transformation in the status of the external elements in question, from non-cognitive to cognitive, he would, it seems, be guilty of begging the question against ExM.

To apply this understanding of the parity principle to the case of Otto and Inga, one must start with the functional contribution of Otto’s notebook in supporting his behaviour, and ask whether, if that functional contribution were to be made by an inner element, we would count that contribution, and thus its realizer, as cognitive. If the answer is ‘yes’, then we have a case for ExM. Crucially, at no point in this reasoning have we appealed to Inga’s organic memory (the relevant extant human inner) in order to determine what counts as cognitive. And while rather more would need to be said about the precise functional contribution of Otto’s notebook, our reconceived argument from parity does not succumb to criticisms that turn on any lack of fine-grained functional equivalence between the target distributed system and some extant example of inner human cognition.

It is, of course, possible to conduct a debate that revolves around the functional contributions of certain elements, without that being an issue that concerns functionalism as such (cf. Chalmers, 2008). So what is the link between the parity principle and functionalism? The parity principle is based on the thought that it is possible for the very same type-identified cognitive state or process to be available in two different generic formats – one non-extended and one extended. Thus, in principle at least, that state or process must be realizable in either a purely organic medium or in one that involves an integrated combination of organic and non-organic structures. In other words, it must be multiply realizable. So, if we are to argue for cognitive extension by way of parity considerations, the idea that cognitive states and processes are multiply realizable must make sense. As we have seen, functionalism provides one well-established platform for securing multiple realizability. Moreover, although functionalism has standardly been developed with respect to what is inside the head (e.g. the brain of some nonhuman entity may be wired up differently, or it may be silicon-based rather than carbon-based, without that affecting the rights of that entity to be judged a cognizer), there isn’t really anything in the letter of functionalism as a generic philosophical outlook that requires such an internalist focus (Wheeler, forthcoming a and c). According to (sub-personal) functionalism, when one is endeavouring to identify the specific contribution of a sub-personal state or process qua cognitive, it is not the material constitution
of that state or process that matters, but rather the functional role which it plays in the generation of personal-level cognitive phenomena by intervening between systemic inputs, systemic outputs and other functionally identified, intrasystemic, sub-personal states and processes. There is nothing in this schema that requires multiple realizability to be a between-the-ears phenomenon. So functionalism allows, in principle, for the existence of cognitive systems whose boundaries are located partly outside the skin. It is in this way that we arrive at the position that, following Clark, I shall call extended functionalism (Clark, 2008a, 2008b; Wheeler forthcoming a and c).

We have seen already that there will be functional differences between extended cognitive systems (if such things exist) and purely inner cognitive systems. So, if extended functionalism and the parity principle are to fly together, what seems to be needed is some kind of theory that tells us which functional differences are relevant to judgements of parity and which aren’t. To that end, here is a schema for a theory-loaded benchmark by which parity of causal contribution may be judged (Wheeler forthcoming a, b and c). First we give a scientifically informed account of what it is to be a proper part of a cognitive system that is fundamentally independent of where any candidate element happens to be spatially located. Then we look to see where cognition falls: in the brain, in the non-neural body, in the environment, or, as ExM predicts will sometimes be the case, in a system that extends across all of these aspects of the world. On this account, parity is conceived not as parity with the inner simpliciter, but rather as parity with the inner with respect to a scientifically informed, theory-loaded, locationally uncommitted account of the cognitive. So the parity principle now emerges not as the engine room of the extended mind, but as an heuristic mechanism that helps to ensure equal treatment for different spatially located systems judged against an unbiased and theoretically motivated standard of what counts as cognitive. It is a bulwark against what Clark (2008b, p. 77) calls ‘biochauvinistic prejudice’.

This idea of a scientifically informed, theory-loaded, locationally uncommitted account of the cognitive is tantamount to what Adams and Aizawa (e.g. 2008) call a mark of the cognitive. In the interests of expository elegance, I shall default to Adams and Aizawa’s term. The most obvious next step in this dialectic would be for me to specify the – or, given the possibility that the phenomena in question will reward a disjunctive account, a – mark of the cognitive. In the next section I shall make a tentative proposal.4
A Mark of the Cognitive

Newell and Simon, two of the early architects of artificial intelligence, famously claimed that a suitably organized ‘physical symbol system has the necessary
and sufficient means for general intelligent action’ (Newell and Simon, 1976, p. 116). As anyone familiar with cognitive science will tell you, a physical symbol system is (roughly) a classical computational system instantiated in the physical world, where a classical computational system is (roughly) a system in which atomic symbols are combined and manipulated by structure-sensitive processes in accordance with a language-like combinatorial syntax and semantics. I shall take it that the phrase ‘means for general intelligent action’ points to a kind of cognitive processing. More specifically it signals the sort of cognitive processing that underlies ‘the same scope of intelligence as we see in human action . . . in any real situation behavior appropriate to the ends of the system and adaptive to the demands of the environment can occur, within some limits of speed and complexity’ (Newell and Simon, 1976, p. 116). What we are concerned with, then, is a human-scope cognitive system. Notice that the concept of a human-scope cognitive system is not a species-chauvinistic notion. What matters is that the system exhibit roughly the same degree of adaptive flexibility we see in humans, not that it have our particular biological make-up, species ancestry or developmental enculturation.

Against this background, Newell and Simon’s physical symbol systems hypothesis may be unpacked as the dual claims that (a) any human-scope cognitive system will be a physical symbol system, and (b) any physical symbol system of sufficient complexity may be organized so as to be a human-scope cognitive system. In effect, then, the hypothesis is equivalent to the claim that being a suitably organized physical symbol system is the mark of the (human-scope) cognitive. To unpack that claim, the physical symbol systems hypothesis advances a scientifically informed, theory-loaded account of the (human-scope) cognitive, one that supports a computational form of functionalist theorizing. But can it tick all our boxes by being a locationally independent account too? The answer, it seems, is yes. For while classical cognitive scientists in general thought of the symbol systems in question as being realized inside the head, there is nothing in the basic concept of a physical symbol system that rules out the possibility of extended material implementations. Indeed, as I shall now argue, the idea of an extended physical symbol system has much to recommend it.

In a series of compelling treatments that combine philosophical reflection with empirical modelling studies, Bechtel (1994, 1996; see also Bechtel and Abrahamsen, 1991) develops and defends the view that certain human-scope cognitive achievements, such as mathematical reasoning, natural language processing and natural deduction, are the result of sensorimotor-mediated interactions between internal connectionist networks and external symbol systems, where the latter feature various forms of combinatorial syntax and semantics. It is useful to approach Bechtel’s suggestion (as he does himself)
by way of Fodor and Pylyshyn’s (1988) well-known claim that connectionist theorizing about the mind is, at best, no more than a good explanation of how classical states and processes may be implemented in neural systems. Here is a brief reminder of Fodor and Pylyshyn’s key argument. It begins with the empirical observation that thought is systematic. In other words, the ability to have some thoughts (e.g. that Elsie loves Murray) is intrinsically connected to the ability to have certain other thoughts (e.g. that Murray loves Elsie). If we have a classical vision of mind, the systematicity of thought is straightforwardly explained by the combinatorial syntax and semantics of the cognitive representational system. The intrinsic connectedness of the different thoughts in question results from the fact that the processing architecture contains a set of atomic symbols alongside certain syntactic rules for recombining those symbols into different molecular expressions. Now, Fodor and Pylyshyn argue that although there is a sense in which connectionist networks instantiate structured states (e.g. distributed connectionist representations have active units as parts), combinatorial structure is not an essential or a fundamental property of those states. This leaves connectionist networks inherently incapable of explaining the systematicity of thought, and thus of explaining thinking. What such systems might do, however, is explain how a classical computational architecture may be implemented in an organic brain.

Bechtel agrees with Fodor and Pylyshyn on two key points: first, that where systematicity is present, it is to be explained by combinatorially structured representations, and second, that connectionist networks fail to realize combinatorial structure. He does not need to endorse Fodor and Pylyshyn’s claim that all thought is systematic, however. For his purposes, all that is required is that some cognitive activities (e.g. linguistic behaviour, natural deduction, mathematical reasoning) exhibit systematicity. One might gloss this by saying that, for Bechtel, being a physical symbol system is a, not the, mark of the cognitive. Bechtel’s distinctive next move is to locate the necessary combinatorial structure in systems of representations that remain external to the connectionist network itself. Given the idea that our inner psychology should be conceived in connectionist terms, this is tantamount to saying that the necessary combinatorial structure resides not in our internal processing engine, but rather in public systems of external representations (e.g. written or spoken language, mathematical notations). As Bechtel (1994, p. 436) himself puts it, the ‘property of systematicity, and the compositional syntax and semantics that underlie that property, might best be attributed to natural languages themselves but not to the mental mechanisms involved in language use’. (Notice that, for Bechtel, the mental is restricted to the inner. This is an issue to which we shall return.)

For this interactive solution to work, it must be possible for the natural sensitivity to statistical patterns that we find in orthodox connectionist networks
to be deployed in such a way that some of those networks, when in interaction with specific external symbol systems, may come to respect the constraints of a compositional syntax, even though their own inner representations are not so structured. Bechtel’s studies suggest that this may be achieved by exploiting factors such as the capacity of connectionist networks to recognize and generalize from patterns in bodies of training data (e.g. large numbers of correct derivations in sentential arguments), plus the temporal constraints that characterize real embodied engagements with stretches of external symbol structures (e.g. different parts of the input will be available to the network at different times, due to the restrictions imposed by temporal processing windows). The conclusion is that ‘by dividing the labor between external symbols which must conform to syntactical principles and a cognitive system which is sensitive to those constraints without itself employing syntactically structured representations, one can perhaps explain the systematicity . . . of cognitive performance’ (Bechtel, 1994, p. 438).

How should we interpret the distributed solutions that Bechtel favours: as examples of embodied cognition or as instances of cognitive extension? Bechtel himself stops short of the extended option. Thus, as we have just seen, he tellingly describes systematicity as a feature of ‘cognitive performance’ rather than as a property of the cognitive system, and states that the compositional syntax and semantics ‘might best be attributed to natural languages themselves but not to the mental mechanisms involved in language use’ (my emphasis). What this indicates is that, for Bechtel, the genuinely cognitive part of the proposed solution remains skin-side. Let’s see what interpretation we get, however, once we apply the parity principle. If the envisaged system of syntax-sensitive processes and combinatorially structured symbols were all stuffed inside the agent’s head, we would, I think, have no hesitation in judging the symbol structures themselves to be bona fide parts of the agent’s cognitive architecture. Equality of treatment therefore seems to demand that the external symbol structures that figure in the functionally equivalent distributed version of that solution also be granted cognitive status. On the strength of the parity principle, then, what we have here are models of extended cognition.5

Of course, the foregoing direct appeal to parity considerations takes us only part of the way toward ExM. As we have seen, parity-based arguments remain inconclusive until they receive backing from some mark of the cognitive that sets the benchmark for parity. It’s at this point that we see the impact of the physical symbol systems hypothesis, conceived as specifying a mark of the cognitive. For, I suggest, both the wholly inner and the environment-involving versions of the Bechtel-style network-plus-symbol-system architecture are instantiations of sufficiently complex and suitably organized physical symbol systems. Since both exhibit that mark of the cognitive, both are cognitive systems, and the latter is an extended cognitive system. Given the functionalist
character of the physical symbol systems hypothesis, such considerations strengthen further our reasons for thinking that the fundamental contribution of the body to cognitive theory is to be conceived in terms of implementational materiality, not vital materiality.

One way to appreciate the plausibility of this picture is to reflect on the most obvious objection to it. In response to the view just sketched, many cognitive scientists will want to complain that the kinds of pattern-matching and pattern-completion processes realized by connectionist networks are not equivalent to the syntactic rules present in classical systems, implying that the analysis of the Bechtel architectures as extended physical symbol systems is suspect. With all due respect, this is, I think, a failure of the imagination. It is of course true that the network processes concerned are not explicitly rule-driven in a classical sense, but two considerations strongly indicate that this is not the end of the matter. First, the keystone of Bechtel’s model is the thought that the networks involved are genuinely sensitive to the constraints of a compositional syntax. Thus, pending good arguments to the contrary, one might insist that Bechtel’s networks implicitly realize the rules in question, at least in the minimal sense that, in this case (although not in others), classical-style rules will provide a perfectly reasonable, high-level, idealized description of the network’s processing activity. (The fact that there is idealization here should not concern anyone. For one thing, idealization is part of scientific explanation. For another, as we have seen, orthodox connectionist models are themselves abstract idealizations of real brains.) Secondly, and from a more radical perspective, it may be that the classical rules are not implicitly realized in the neural network alone. If we think of those rules as principles that govern the skilled embodied manipulations of certain external material symbols, it might be more accurate to think in terms of dynamic sub-personal vehicles that include not just neurally implemented connectionist elements, but also non-neural bodily factors, including physical movements. On either analysis of how the rules in question are realized, the objection under consideration would fail.
A Parting of the Ways

In exploring the relationship between embodiment and cognitive extension, I have presided over a parting of the ways between, on the one hand, ExM, understood as involving an extended functionalist commitment to a kind of open-ended multiple realizability, and, on the other, a particular strain of EmbC that depicts the organic body as, in some way, intrinsically special in the generation of cognitive phenomena. At root this fork in the theoretical road may be traced to a fundamental disagreement over how philosophy and cognitive science should conceive of the materiality of the body – as just one
implementing substrate among possible others, or as a vital and irreplaceable determinant of cognitive life. I have presented a case for thinking that we should follow the ExM path to implementational materiality. But in this fast moving and complex debate, wrong turns and dead ends will abound. Under such circumstances, drawing up a road map will always be a hazardous task, and I expect there to be many moments of disorientation and puzzlement along the way, before we arrive at a detailed theory of the embodied and extended mind.
13
Current Issues in the Philosophy of Mind Paul Noordhof
In the broadest terms, the issues which lie at the heart of discussions in philosophy of mind have not changed and are unlikely to change in the near future, or even, I hazard a guess, the quite distant future. People will still seek to understand the nature of consciousness in its various forms; to understand the nature of intentionality or, indeed, other ways in which our mental life may concern the world around us; to describe, and account for, the special access each of us has to our own mental lives; to scrutinize the basis of self-consciousness; to worry about whether mental phenomena have an appropriate causal explanatory impact, and so on. Nor do new philosophical theories in these areas arrive thick and fast. Instead, at different times, different theories and their motivations receive particular development and emphasis. These facts about philosophical discussion give rise to unfortunate moments when placed on the spot by inquiring vice chancellors and sceptical governments or other, prospective, suppliers of research funding. Nevertheless, since we are among friends here, we can acknowledge it without embarrassment. Other awkward moments often occur when some of the folk just mentioned, or the others professing to have an interest in the role of philosophy in intellectual life at large, remark that while the themes upon which it focuses are big (with the slight suggestion, about some of them, that philosophers are on a fool’s errand), the current contributions of philosophers are technical and specialized, as if, given that we can all recognize the themes, we should all be able to, without too much study, appreciate what progress has been made. However, precisely because philosophical progress is one of developing our understanding of particular types of theories by setting them out in more detail and/or developing the motivation for them, it is entirely unsurprising that the statement of these developments will seem more specialized and technical than is desirable. Philosophers should make every effort to help with this but not to apologize for it.

Once you get down to the requisite level of detail for the progress and details to show up, identification of the main themes currently in play, and suggestions about how these will develop, will be controversial (I guess it is no news that
philosophers love an argument) and more than likely receive different spins from different philosophers working in the field. That might explain why my overriding sense is one of trepidation. Nevertheless, I shall suppress this in what follows and state as clearly as I can what seem to me the main points of emphasis in a selection of the fields detailed above. Specifically, I shall discuss how development in our understanding of physicalism has more or less stabilized around an approach which reveals something rather interesting: first, some of the reason for believing it is undermined while, second, it is more entrenched as a starting point because working within this framework brings useful explanatory rigour. I shall discuss how the debate about mental causation is slowly turning into one which focuses on the ontological commitments of thought and talk about the mental, which purports to be about physical properties which are not to be identified with those of physics. Then I turn to attempts to dismiss dualist intuitions regarding the explanatory gap between neural properties of the brain and phenomenal consciousness independent of an admission of ignorance, or development of a theory of phenomenal consciousness. I explain why it is increasingly recognized that such attempts fail. Recent theories of phenomenal consciousness have sought to explain it by appeal to representational properties and a separate theory of subjective awareness (roughly, a theory of how we are conscious of the phenomenal content determined by representational properties). I explain how the appeal to representational properties has deepened our understanding of the motivation for invoking qualia and issues in the philosophy of perception and intentionality. I then outline how apparently distinct theories of subjective awareness have converged. In the final section, I examine relatively new approaches to the understanding of intentionality which attempt to come to terms with the difficulties that proponents of reductive theories of intentionality faced. In this context, I briefly touch on the putative normativity of the mental. I mourn not being able to discuss the many interesting developments in our understanding of self-consciousness, self-deception, mental illness and introspection, as well as some of the fascinating texture of more specific mental states such as that of imagination, auditory perception and so on. It proved impossible to do so in a piece of acceptable length and with some kind of structure.
Physicalism

The characterization of physicalism

At the risk of losing a few readers at the beginning, it seems to me that there is more general agreement now as to how we should go about characterizing
physicalism. Identify some basic properties in terms of the subject matter of physics, specifically, a development of today’s physics which resembles it. Admittedly, talk of resemblance is vague but this is entirely as it should be. It is quite conceivable that there will be developments of physics in which we wonder whether the properties so characterized really count as physical in any sense continuous with talk of the physical previously and conclude that it is just not clear either way. Appeal to resemblance provides a way of steering between the oft-cited dilemma that, if we appeal to current physics to characterize the physical, we may soon find that there are no physical things, when current physics is superseded, and if we appeal to the ideal future physics, then physicalism becomes trivially true. The point is not that, if we simply appeal to current physics, we would have no reason to believe in the truth of physicalism since, so understood, it is likely to be false. As Andrew Melnyk points out, we might accept scientific theories even though the likelihood of their being true is low in the absence of rivals with higher probability of being true (Melnyk, 2003, pp. 225–7). Nor is this first element part of an attempt to provide a conceptual analysis of physicalism. It need be no part of somebody’s grasp of the concept of physicalism that they know that, in the circumstances envisaged, we may not know whether or not it would be correct to conclude that it was false. Instead, it is a considered judgement as to what is plausible to say about the truth of physicalism in the circumstances, given conceptual knowledge and everyday observations about our usage of this notion.

With this foundation for the characterization of physicalism in place, attention then turns to the connections between these properties (narrowly physical properties) and other properties that, more broadly, we are inclined to view as physical. It is here that controversy breaks out and its resolution will have significant consequences for the legitimate means to evaluate whether physicalism is true and the constraint it places upon understanding the nature of the mental. One way in which broadly physical properties could be related to narrowly physical properties is, once again, resemblance. Properties as disparate as being a mountain, being a table, being a cell and being an anteater would all count as physical in the broadest sense because they resemble narrowly physical properties. However, the claim has only to be stated for its unsatisfactory character to be felt. There are such huge differences between these properties, and huge differences between any of them and narrowly physical properties, that it is far from obvious exactly how these properties all resemble the narrowly physical properties sufficiently to make them broadly physical. This thought, together with the idea that somehow physics is, at least in part, concerned with the nature of things from which all the rest are composed, suggests an alternative.
Just as broadly physical objects are composed from, and thereby nothing over and above, arrangements of narrowly physical objects, so broadly physical properties are those which are constituted from, and, thereby, nothing over and above arrangements of, narrowly physical properties. The idea is relatively unproblematic in the case of objects. Integrated spatial arrangements of small-scale physical objects seem naturally to result in large-scale objects. In many cases, physics will concern these small-scale objects and other sciences focus on the larger scale. Opponents of physicalism often remark that physics concerns the large scale too (Crane and Mellor, 1990). Wise physicalists will not deny this. They will just insist that any object not recognized by physics will be composed from objects which are so recognized.

Unfortunately, talk of constitution is not so easily taken across to properties. If we take properties to be universals – so that, for example, there is only one property of being a hydrogen atom – then it is hard to see how the property of being CH4 – methane – can be composed of the properties of being a carbon atom and being a hydrogen atom (Lewis, 1986). There are just two of these latter properties. So it cannot be the case that the property of being methane is made up of, at least, five elements: one property of being a carbon atom, and four properties of being a hydrogen atom. If we take the relevant notion of property to be property instances, then this case becomes less problematic. There is no difficulty in thinking of the property instance of being methane as composed, in part, from one property instance of being a carbon atom, and four property instances of being a hydrogen atom. Instead, the difficulty is that the connection between narrowly physical property instances and broadly physical ones doesn’t seem invariably to fit this model. In the case of CH4, the arrangement of the properties of atoms identified is the way of being methane. There are many different ways in which instances of narrowly physical properties may be arranged to make up an instance of a property of being a mountain, or a cell, or an ant-eater. What is the connection between a particular way in which some narrowly physical properties are arranged and the broadly physical property which is said to result in these cases? This is the well-known phenomenon of variable realization applied to non-mental cases.

One kind of minimal answer appeals to supervenience. Supervenience has been characterized in a number of different ways and getting clear on the precise connections between all the various different alternatives is, perhaps, not the most engrossing way of spending a Sunday afternoon. A currently popular one holds that physicalism is true of our world if a minimal physical duplicate of it, as far as arrangements of narrowly physical properties are concerned, is a duplicate simpliciter (Jackson, 1998, p. 12). For some world, w, physical properties broadly conceived are just any which meet the following condition. Given that P is instantiated in w, it is instantiated in all minimal
physical duplicates of w. The connection between arrangements of basic physical properties and broadly physical properties holds of metaphysical necessity. If it held of merely nomological necessity, then any property nomologically related to arrangements of physical properties – and this might include ghostly properties, properties of ectoplasm, properties of Descartes’s immaterial mind and so on – could count as physical. All that would be ruled out is that these properties could not vary with arrangements of narrowly physical properties unless the physico-psychological laws changed.

The formulation’s focus is upon what makes a world one in which physicalism is true because physicalism is a doctrine primarily concerning the nature of worlds. Concern with individual objects and properties is relevant only in so far as their occurrence in a world is a way for physicalism about that world to be falsified. It also avoids thorny questions about exactly which arrangement or arrangements of narrowly physical properties are responsible for the instantiation of broadly physical properties. Nevertheless, as already remarked, in the background is the thought that, for each broadly physical property, there is one or more arrangements of narrowly physical properties which metaphysically necessitate it. If broadly physical properties are nothing more than arrangements of narrowly physical properties then it should not be the case that the arrangement in question is present and yet the broadly physical property is not. Constitution by instances is thus seen as just one example of a general relationship between families of properties which reveals something about the nature of the supervening properties and explains how they involve nothing over and above the properties of the supervenience-base: those arrangements of narrowly physical properties which metaphysically necessitate them.

But now the problems come thick and fast. Appeal to minimal physical duplicates of our world addresses two difficulties. The first is that physicalists do not want to deny the possibility of immaterial minds, only assert the possibility of material ones. By explicitly talking of physical duplicates, we exclude worlds with immaterial minds from consideration. The second is that we wouldn’t want to conclude that physicalism is false in our world if there is a world which matches our world in arrangements of narrowly physical properties but which differs in mental properties due to, in that world, the additional occurrence of non-physical ectoplasm. So long as none of this stuff is present in our world, physicalism would be true. Talk of minimal physical duplication sets such worlds aside as well because, while these worlds may duplicate the arrangements of narrowly physical properties, they don’t stop right there. They have ectoplasmic properties in addition whose instantiation does not come along with the instantiation of arrangements of narrowly physical properties.

The appeal to minimal physical duplication gives rise to our first difficulty. Suppose that there is a non-physical property (by this I mean neither narrowly
nor broadly physical) that breaks the connection between a particular arrangement of narrowly physical properties and the mental property for which they are putatively sufficient: a blocker. Then, intuitively, the connection between the arrangement of narrowly physical properties and the mental property is not tight enough. The possibility of the connection being blocked seems to imply that the connection is merely nomological and, hence, no different from that which holds between arrangements of narrowly physical properties and the non-physical mental properties characteristic of emergent dualism. Nevertheless, appeal to the idea of minimal physical duplicates sets aside such worlds because the relationship is broken by non-physical properties and, hence, the possibility does not reveal that physicalism is false (Hawthorne, 2002, pp. 104–6). Some argue that, as a result, physicalism should be understood in terms of sufficiency in the absence of blockers and, hence, reject the intuitive verdict that physicalism is false when there is the possibility of a non-physical blocker (e.g. Leuenberger, 2008, pp. 148–60).

Prima facie, this is unacceptable for the reason identified above. The possibility of blocking suggests that the connection is no better than emergent dualists endorse. Moreover, if the connection between arrangements of narrowly physical properties and mental properties is loose enough to be blocked by non-physical properties, then the connection between the arrangements of narrowly physical properties and mental properties holds of nomological necessity, in which case, there is no reason the connection could not also be blocked, if the physico-psychological laws were different, by the presence of some of the same narrowly physical properties which, in our world (let’s say), were responsible for the instantiation of mental properties. So there is no worry about excluding the worlds with non-physical blockers by talking of minimal physical duplicates. The looser connection would show up in the minimal physical duplicate worlds too. (I discuss one line of objection to this point under the second difficulty that this formulation of physicalism faces below.)

Blockers don’t have to be causal blockers, although talk of stuff which one shouldn’t add when preparing food, or algoplasm, which makes phenomenal properties disappear, suggests that this is standardly how they are conceived (Hawthorne, 2002; Leuenberger, 2008; also Montero in this volume). Instead, blockers may disrupt a relationship in which at least one of the relata is, partly, relational. For example, I might have the property of being alone in a room which I (or my counterpart) do not possess in a minimal narrowly physical duplicate of my world because, in that world, there is a non-physical poltergeist in it with me. In these circumstances, there is no reason to expect that, in a minimal physical duplicate world with different laws, the property of loneliness may fail to be instantiated because one of the physical properties has become a blocker. However, all this shows is that we should refine our understanding of
minimal physical duplicate. Minimal physical duplicates aren’t those which simply stop right there as far as the arrangements of narrowly physical properties are concerned; they also keep all the truths concerning properties which aren’t instantiated compatible with that arrangement the same. This does not trivialize the characterization of supervenience because we are not including, in the characterization of a minimal physical duplicate, that it is a duplicate in all other positive properties.

With this recent line of concern set aside, two main lines of worry have been expressed about the proposed characterization of physicalism. The first is that it would fail to characterize physicalism if a powers ontology were true (O’Connor, 1994; Wilson, 2005). According to this ontology (which may, actually, be the case), a property’s potential to stand in causal relations (its ‘causal profile’) is an internal fact about the property. It is not fixed by independently holding laws. The problem arises if a powers ontology is taken to include the thesis that properties’ causal profiles are essential to them. In which case, it is not possible for a certain property to be instantiated in certain circumstances and the relevant causal relations not hold. Suppose that emergent dualism is true and, hence, that some arrangement of narrowly physical properties cause, but are distinct from, a non-physical mental property. Then it will be essential to these properties that they cause the non-physical property. So, it will be true that any minimal physical duplicate world will be a world in which this non-physical property occurs.

There are two points to make about this concern. The first is that a powers ontology need not be committed to taking the causal powers which, in part, or in whole, characterize properties as essential. The distinction between intrinsic and essential applies as much to properties as it does to individuals. Just as Socrates’ shape is not essential but intrinsic to him, so a property’s causal potential need not be essential to it in all respects even if intrinsic to it. Specifically, if emergent dualism is true and there are fundamental physico-psychological laws, that part of a property’s causal profile which concerns its generation of non-physical mental properties might be absent and yet the narrowly physical property arrangement still be present (see Noordhof, 2010). Second, as we shall see, the principal reason for being an emergent dualist stems from the nature of phenomenal properties (those properties that determine what it is like to undergo a conscious mental life). Taking even these properties to be functional properties – that is, properties characterized in terms of causal profile – has been a way of avoiding supposing that phenomenal properties are non-physical. In which case, we do not need to provide a definition of physicalism which excludes emergent dualist worlds in which a powers ontology is true. There are no such worlds.

By far the most familiar line of resistance to the characterization of physicalism in terms of supervenience alone concerns whether it is enough or whether
we need to appeal, in addition, to the idea of there being an explanatory connection between the arrangement of narrowly physical properties and the mental property in order for the latter to be appropriately characterized as broadly physical. The easiest way to express the worry is to consider what we might say if there were a capricious God who made sure that, in all possible worlds, if certain arrangements of narrowly physical properties were instantiated, then non-physical mental properties would be. In that case, wouldn’t the favoured account pronounce that the properties in question are physical when it shouldn’t?

One answer is to insist that just as God can’t make the putative necessary truths of mathematics false (say), so likewise he cannot make the following putative necessary truth false concerning non-physical properties: it is possible that a certain arrangement of narrowly physical properties is instantiated and the non-physical mental property is not. If he really could fix it that all ‘possible’ worlds contain the non-physical mental property co-instantiated with the arrangement of narrowly physical properties, this would be a refutation of the ‘possible’ worlds analysis of possibility rather than an objection to this analysis of physicalism since, clearly, they could have come apart if God hadn’t held them together. A second answer is just to differentiate between God-enforced co-instantiation and other sorts with no further appeal to explanatoriness. More generally, we might hold that the crucial difference is between internal and external metaphysical necessitation. Externally sourced metaphysical necessitation – for instance, in God’s will – reveals nothing about the character of the properties which supervene upon the narrowly physical properties. Internally sourced metaphysical necessitation does. This response does not appeal to any fuller account of explanation of the kinds canvassed below.

Unfortunately, the response doesn’t help us to capture the difference between emergent dualism and non-naturalism in ethics, which is the other reason folk cite to show that supervenience needs supplementation (e.g. Horgan, 1993, pp. 557–60). It is argued that moral properties are metaphysically necessitated by arrangements of natural properties. If physicalism is committed to mental properties being natural, then they must have some feature which distinguishes them from non-natural moral properties. I have addressed this argument elsewhere. In brief, the way in which proponents of ethical non-naturalism took them to be non-natural is not a way which need distinguish them from mental properties. G. E. Moore, the most famous non-naturalist, cites as his considered opinion that, while there are many different intrinsic natural properties, each of which is ‘ought-implying’, there is no common property (apart from the disjunction) entailed by them all and also ‘ought-implying’. That is, there is no separate ‘ought’ property even though there are many different ‘ought-implying’ properties (see Moore, 1942, p. 605; Noordhof, 2003a, p. 96). Thus the argument is that there is variable
realization with no common property. What Moore is not taking as the basis of his ethical non-naturalism is that there is an intrinsic action-guiding or ‘ought’ property. Yet, it is the latter which would threaten the characterization of physicalism. In those circumstances, we would have Moore claiming that, since there are intrinsic ought properties, non-naturalism is true while, at the same time, we would be asserting that the fact that other properties stand in the very same way to arrangements of narrowly physical properties shows that these other properties are natural.

Roll forward to the most prominent current defender of ethical non-naturalism: Russ Shafer-Landau. He describes his position as taking the same approach to moral properties as non-reductive physicalists do to the mental (Shafer-Landau, 2003, pp. 72–8). He accepts that moral facts are intrinsically normative but seems to take this as no more problematic than the properties of phenomenal consciousness. It is not that he supposes that this intrinsic normativity is what makes moral facts non-natural and, hence, a problem for those who seek to characterize physicalism by appeal to the same kind of relationship of supervenience. So the proposal does not need defence against those who appeal to the very same relationship to characterize ethical non-naturalism.

Even if we don’t need to appeal to some kind of explanatory relationship, it is reasonable to ask whether there is some kind of explanation of the relationship. I do not want to rule this out, indeed, far from it. The point of defending the supervenience-only approach is simply to keep our explanatory options open. For example, consider what is currently the most popular explanation on offer: the subset view. According to it, a property F is realized by another property G if and only if the causal powers of an instance of the property F are a subset of the causal powers of an instance of the property G. (This is a rough preliminary formulation; see Shoemaker, 2007, pp. 12–13, 22–31 for details.) It is questionable whether this is, in fact, true of any variably realized property (e.g. see Noordhof, 1997, p. 246; Noordhof, 1999b, pp. 113–14). However, for the sake of argument, suppose that it is. Taking it as a general requirement places restrictions on the kind of explanation we should look for in every case in which we suppose that the mental supervenes upon the narrowly physical. Endorsement of it is behind Jaegwon Kim’s conclusion that there is a non-physical residue to the mental: qualia (a particular kind of phenomenal property, see below). He takes it that reduction of the mental to the physical requires that it is functionalizable and accepts that, in the case of qualia, exhaustively specifying their nature in functional terms is not going to be plausible because of the possibility of spectrum inversion (Kim, 2005, pp. 165–73). The conclusion, of course, does not follow if other kinds of explanation are allowed. If physicalism only requires that mental properties are metaphysically necessitated by physical properties, then there are various ways in which this might
hold: functionalization is only one. Another might be macro-micro reduction, and who knows what other options there are! As the examples illustrate, the appeal to explanation in the characterization of physicalism does not imply that the envisaged relation between arrangements of narrowly physical properties and broadly physical properties is epistemic (for another view, see Montero in this volume). Instead, it serves to characterize a certain ground for the relationship of metaphysical necessitation to hold, a ground which then may be cited in explanation. There is nothing epistemic about macro-micro reduction or functionalization of a property and accounting for its instantiation in terms of arrangements of narrowly physical properties together with, if required, laws.

Thus there are two considerations in favour of the proposal. First, it is the minimum required to capture the idea that there is nothing over and above arrangements of the physical without becoming committed to particular kinds of explanation of this. Second, it allows for the possibility that we are ignorant of, or even cognitively closed to, the exact way in which mental properties may be no more than arrangements of physical properties, and this is desirable if we have no substantial intellectual grounds for being more restrictive, for reasons that will become apparent when we consider appeals to the explanatory gap in favour of dualism below.

Much attention has been paid to the proper formulation of physicalism. It might reasonably be wondered why we should care. Suppose a more restrictive account of physicalism is offered as a result of which it is concluded that physicalism is not true. What is the harm in that? To an extent, I strongly sympathize with the line of thought behind this objection. Nevertheless, I think there is merit in the role that physicalism plays even though it may not matter precisely what we call the doctrine that plays the role. One motivation in favour of physicalism is undermined by my characterization of it. The attraction of physicalism is often allied with being responsive to scientific development and with providing a naturalistic account of the nature of the mental. Let there be no mystery mongering! Let us seek to integrate our philosophical reflections with the findings of science. However, insisting that physicalism will only be true if the resulting physics in some way resembles our own allows for the possibility that there may be scientific developments which falsify physicalism. Under those circumstances, wouldn’t our situation be just as good? Wouldn’t a scientific theory of consciousness which took advantage of these new developments be just as substantial as one which was compatible with the truth of physicalism? In which case, what is the merit of physicalism as opposed to – robbing it of its negative connotations – scientism? Taking mental properties either to be scientifically recognized properties or supervening upon scientifically recognized properties?
Physicalism, as a sub-category of scientism so understood, is unlikely to have any more virtues than scientism so, in that sense, lacks additional support. Nevertheless, it plays a significant role in another respect. Physicalism places some constraints upon how we should, currently, seek to explain our mental lives. The problem is that, without such a constraint, it is hard to see how one could offer something substantial in the area. It is all too easy for the philosopher to postulate – as a non-physical substance or property – whatever it is which has the target property that it is allegedly hard to explain in physicalistically acceptable terms, thus making it a case of ‘lo, and in one bound, Jack was free’. The difficulties are particularly apparent in the case of those who seek to do the intellectually decent thing and give an account of how talk of non-physical properties may be put to explanatory work by formulating a viable theory of panpsychism or relating it to developments in quantum mechanics (for discussion of this, see Seager, 1999, pp. 216–52).

Thus, it is not really a desire for simplicity which characterizes the physicalist. It is rather that, when dualists provide support for their doctrine, they justify it in terms of there being things which resist integration into the sciences, properties such as consciousness or being free that cannot be explained in terms of other properties (e.g. see Mawson in this volume). Little positive is said about the nature of the explanation. It is rather that dualism is a response to the inability to explain in other terms. Now I don’t want to rule out the possibility that no physicalistic explanation of consciousness and the rest is available. It is simply that, since it is hard to prove a negative, it is reasonable to continue with the physicalist perspective until such time as the dualist explains how their proprietary form of explanation works. This point is not quite Philip Pettit’s point that physicalism should be presumed as a worst-case scenario (Pettit, 1993). Will an account of consciousness and the rest be available if physicalism is true? It is rather the methodological point that dualist accounts of the kind of explanation they offer have, for the most part, been conspicuous by their absence, unless you count, as an explanation of property P, the existence of the property P. I don’t think we should go out of our way to make ourselves intellectually suffer (as Pettit seems to suggest). However, we need to be sure that what we are offering as an explanation is subject to constraints at least as rigorous in its own terms as those imposed by the physicalist. In the absence of this, physicalism commands our attention. As a result, both sides in the debate will, and should, have an interest in exploring further the kinds of explanations that they respectively offer. Dualists, because only then will they push home their advantage in accounting for aspects of our mental life; physicalists, because any development of the range of explanations on offer promises to provide relief from their assailants. An example of this will be discussed when we come to phenomenal consciousness.
Mental causation and physicalism It is well known that the main argument in favour of physicalism is the so-called over-determination argument (set out in Ravenscro in this volume). It has probably received its best articulation only recently and it is contentious whether it is sound (Papineau (1993a), Chapter 1). Those who question the plausibility of physicalism either question whether it is unacceptable to allow that there is systematic over-determination or question the evidence that the world is causally closed (Mellor, 1995, pp. 103–5; Lowe, 2000a or 2000b). The la er claim, in particular, it seems to me is in need of further scrutiny. What may be agreed to on all sides is that a particular arrangement of narrowly physical property instances is sufficient to fix the probability of any effect. Nevertheless, some of these arrangements bring with them, according to the emergent dualist, non-physical mental properties. It is an open question whether these arrangements fix the probability of the effect partly in virtue of the non-physical mental properties, for whose instantiation they are responsible. Thus, there is a slide between two claims: (C1) Every target effect has its probability fixed by arrangements of narrowly physical property instances alone. (C2) Every target effect has its probability fixed by arrangements of narrowly physical property instances, perhaps partly in virtue of the instantiation of non-physical properties. Either of (C1) or (C2) is will be compatible with the evidence that the physical world is causally closed. However, only the first would establish that non-physical mental properties, if they exist, are epiphenomenal. A lot of energy has been focused, though, on whether a particular form of physicalism, non-reductive physicalism, is compatible with the efficacy of mental properties. If it is not, then non-reductive physicalism has no advantage over property dualism, given the success of the over-determination argument. Suppose that mental properties supervene upon physical properties in the way identified above. Suppose that, in particular, an instance of a physical property P1, p1, necessitates an instance of a mental property M1, m1 and that, for any candidate effect of m1 we accept (from considerations of the causal closure of the physical) that it is caused by p1. (As others do, I simplify. p1 would obviously be an arrangement of narrowly physical properties). To fix ideas, consider m1 to be an extended period of pain, which eventually becomes unbearable, from leaving one’s hand on a hot plate, p1, a firing of neurones in the brain specifically related to pain and the target effect: the behaviour of withdrawing one’s hand from the hot plate.
The challenge can come in two forms. The first concerns whether the instances of mental properties can be efficacious if the instances of physical properties are. As the question is often put, is there any further work for the mental properties to do? (For details of this kind of argument, see Ravenscroft, in this volume.) The second, and more pressing, challenge is whether, even if these instances of mental properties are efficacious in virtue of their relationship with instances of narrowly physical properties, they are efficacious in virtue of being mental properties.

Responses to the first challenge emphasize that the instances of mental properties are either identical to instances of narrowly physical properties or realized by arrangements of them (identity: Robb, 1997; Macdonald and Macdonald, 1986, 1995; Macdonald, 1992; realized: Noordhof, 1999b, 2010; Shoemaker, 2007). Appeal to identity is attractive because it reproduces the solution to how mental events and states can be efficacious – by being token identical to physical events and states – for property instances. They are mentioned in a footnote by Campbell but assimilated to the event response he favours on behalf of Davidson (Campbell, in this volume). However, the distinct virtue of the property instances approach is that it allows for finer grained identifications of relevance than allowed by Campbell, given his rejection of the property exemplification view of events. The main problem is to justify the claim that mental property instances are identical to physical property instances, given that they are distinct properties. Appeal to realization avoids this difficulty, and is independently attractive because it seems implausible to say that instances of mental properties are identical to narrowly physical properties as opposed to realized by them.

Such approaches, though, are said to come unstuck when they try to attribute to the mental properties efficacy that they concede belongs to arrangements of narrowly physical properties in virtue of which they are realized. There are various attempts to deal with this relying, for example, on the claim that the causal profile of a mental property is a subset of the causal profile of its realizer or that they share efficacious property instances (Shoemaker, 2007; Paul, 2007). Opponents of the first have asserted that instances of mental properties are redundant because they recapitulate a subset of the causal role that the physical property instance already brings (e.g. O'Connor and Churchill, 2010, pp. 51–3). This seems unfair. If mental properties have a causal profile which is a subset of the causal profile of the narrowly physical properties, then it is reasonable to hold that anything which is a manifestation of this profile reveals the efficacy of mental property instances. It is in virtue of this part of the profile that a narrowly physical property is efficacious. Opponents of the second have questioned whether instances of mental properties do anything if their efficacy is the result of, for example, sharing an instance of mass with an
arrangement of narrowly physical properties. The property instance they share seems straightforwardly narrowly physical (Ney, 2007, pp. 501–2).

Both these kinds of replies and objections to them share a common assumption that I think needs to be questioned. They seem to start from a position in which they grant, for the sake of argument perhaps, the truth of non-reductive physicalism and, given that, worry that, while there may be instances of broadly physical properties, it is questionable whether there are instances of broadly physical causation. The absence of the latter is taken to undermine the case for the existence of mental properties as some kind of independent argument. The true position seems to me closer to this. If you allow that there are instances of broadly physical properties, then there is no reason to come to a different verdict with regard to one particular kind of broadly physical property, the relation of broadly physical causation (on the assumption that causation is a relation; for those who deny it is, then that particular non-relational broadly physical property which is causation). Similarly, if you are inclined to say that instances of mental properties are arrangements of instances of physical properties and, in that sense, realized by them, then mental causation can be understood in the same way (e.g. Noordhof, 1999a, 2010).

The assumption is devastating to the prospects of non-reductive physicalism. Its friends try to identify something which counts as a piece of narrowly physical causation – for example, a subset of the powers or the causal consequences of shared narrowly physical property instances – which its enemies then deny is the work of the mental. Instead, we should understand that arrangements of narrowly physical properties will result in an arrangement of causal relations such that, if the former is the supervenience-base for a mental property, the latter is the supervenience-base for mental causation.

There may be an issue about the existence of mental properties. For instance, suppose, as some argue, that mental properties are determinable physical properties, where the arrangements of narrowly physical properties are the determinates – just as colour, say, is the determinable of a particular shade of red (the determinate) (e.g. Yablo, 1992). Then, if determinable properties don't exist, mental properties don't exist either. Instead, we have sentences involving mental predications such as 'having a pain in his hand' which are made true by the determinates: arrangements of narrowly physical properties. What I claim is puzzling is to think that you can derive an issue for the existence of mental properties from some reflections about causation, as if it isn't a property to which the same considerations apply (see Noordhof, 2010, p. 85).

As a result, I suspect that the main issue that is going to confront non-reductive physicalism in the near future is not concerned with efficacy per se but derives from reflections upon truth-making. If the truth-makers of
mental statements – by the lights of the non-reductive physicalist – are arrangements of physical properties which metaphysically necessitate the truth of these statements, then there is no reason to postulate the existence of mental properties as truth-makers. There is simply mental predication and mental statements (for a coherent statement of this view, see Heil, 2003).

I can imagine two kinds of response to this issue. The first is to refine our understanding of the truth-making relation. The standard approach to truth-making is to take the truth-maker to involve metaphysical necessitation of the truth of some target statement. Instead, it might be thought that truth-making involves appeal to the in-virtue-of relation, which cuts finer (Rodriguez-Pereyra, 2006). The second is to recognize some additional dimension of assessment which validates allowing for the existence of supervening properties. That is my own favoured approach and brings me to the second challenge concerning mental causation for non-reductive physicalism.

The question of whether it is in virtue of being the realization of, or instantiation of, a mental property that an arrangement of physical properties has a certain effect implicitly involves a level of generality. The precise statement of it is technical (see Noordhof, 1999a), but intuitively the thought is that if a certain type of target effect is caused as a result of a number of different realizations of that mental property – indeed all such realizations in conducive circumstances – then the explanation for this is that it is in virtue of the mental property that that type of effect is caused.

Truth-making may well be one important element in the proper understanding of the ontology to which our statements commit us. But just as important, it seems to me, is providing rational support for inferences we are inclined to make. To put it crudely, and joining in the myth that bulls care about red and are not colour-blind, the motivation for recognizing the existence of the determinable red, as well as the determinates corresponding to different shades of red, is that assigning one determinate or another to a cloak will not provide us with any grounds for taking cloaks of other shades of red also to enrage bulls. For the latter, we need to appeal to the property of red. Suppose, for the sake of argument, that the causal powers of the property of being scarlet have, as a subset, those causal powers which are attributed to the property of red. Then, rather than suppose that this means there is no need to postulate the existence of red because its powers are conveyed by the property of scarlet, we should conclude that scarlet conveys them by being, among other things, an instance of the property of being red and, thereby, has causal powers beyond its determinacy (for more discussion see Gillett and Rives, 2005; Noordhof, 2010, pp. 86–8). Therefore, issues about mental causation are raised, and can only be appropriately resolved, if we consider more fully the different ways in which our ontological commitments may manifest themselves and avoid the mistake of focusing only on a certain issue, viz. truth-making.
Phenomenal consciousness

It has become standard to distinguish between what Ned Block dubbed access consciousness and phenomenal consciousness. A state is access conscious if and only if it is poised to play a certain role in our practical deliberations or cognitions (e.g. see Block, 1995, p. 231). Obviously quite a lot is packed into talk of 'poised' here. Intuitively, some state may be poised to play a role, although we are not conscious of its presence, because the state has an impact on the workings of our minds at the relevant points. So, less illuminatingly, we may say that access consciousness just involves being aware that we have a certain state, or being aware of it. I don't want to presume here that access consciousness is awareness of facts rather than objects. Nothing more is added to this awareness. By contrast, a state is phenomenally conscious if, and only if, there is something it is like to be in that state.

Many reductive theories of phenomenal consciousness break down phenomenal consciousness into two components. The first component concerns the content of phenomenal consciousness, what we may dub the phenomenal content (some call it phenomenal character). The phenomenal content is just a characterization of whatever it is like for a subject to undergo the mental life in question – for example, his or her pain, the taste of banana cake, the sound of the waves breaking on the shore, perceiving that there is a desk lamp lit on the desk, the sense of yearning to go outside. All of these count. To introduce a stipulation now, which I will discuss in more detail later, let phenomenal properties be those properties which determine the phenomenal content of our mental states. Thus, there is a phenomenal property of my experience of the waves breaking on the shore which determines that the phenomenal content of my experience is the waves breaking on the shore. Let manifest objects and properties be those things which our experience concerns, for example, the waves (object) breaking (property).

The second component derives from the fact that it seems at least possible, according to some theories of phenomenal content, that states should have it and yet not be conscious because the subject is not aware of the state instantiating the relevant content. So there is a story to tell about what makes the subject aware of the phenomenal content of a particular state he or she is in. This second component is often called subjective awareness. I will only discuss access consciousness in the context of this latter notion since, arguably, a theory of access consciousness is a theory of subjective awareness.

Discussion of phenomenal consciousness in recent years has centred on two different issues. The first concerns the proper explanation of why it seems that phenomenal consciousness resists explanation in a way which is acceptable to physicalists. The second concerns specific theories of phenomenal
consciousness, in particular those which seek to explain it in terms of the representational properties of mental states.
Phenomenal properties and the explanatory gap

In discussing whether phenomenal properties are compatible with physicalism, often the main focus is Frank Jackson's knowledge argument (Jackson, 1982, 1986). The thought experiment is discussed briefly in Hutto (in this volume) and, in detail, by Ravenscroft (in this volume). However, just as much as Thomas Nagel's discussion concerning what it is like to be a bat, Jackson's argument contains elements which obscure the matter it is meant to illuminate. It seeks to provide an answer to the question about whether phenomenal properties are, at least, broadly physical by consulting us on the circumstances in which a subject, usually Mary, would obtain knowledge of what it is like to experience the colour red (Nagel, 1974). It is reasonable to suppose that a necessary condition of such knowledge is that one either has knowledge by acquaintance of the colour in question or is able to imagine it (Nemirow, 1990; Lewis, 1990; Conee, 1994; Noordhof, 2003c). It is quite clear that a subject need not be able to have either through reading about the nature of red experiences in books of neuroscience. So the thought experiment cannot be taken to show anything about whether there is a non-physical fact concerning what it is like to experience something red, rather than something about the nature of knowledge of what something is like. It also potentially fails to distinguish between two options: Mary doesn't know what it is like because she is ignorant of some non-physical fact, or she fails to possess a concept of what it is like to experience red (never having done so).

Discussion focusing on the explanatory gap is more direct (Levine, 1983). Here the emphasis is on the fact that we lack, and appear to need, an explanation of how, if physicalism is true, an experience of red is realized by one arrangement of narrowly physical properties and an experience of green is realized by another arrangement of narrowly physical properties. If the discussion in the section 'The Characterization of Physicalism' above is correct, then no assumption need be made about the kind of explanation required; nor should we conclude that, if no explanation is forthcoming, then physicalism is false. Nevertheless, acknowledging that there is a gap lets the dualist in to propose an account of what that gap is, so it needs to be taken seriously.

My defence of a characterization of physicalism in terms of, implicitly, metaphysical necessitation alone may suggest that an even more direct route to the question of whether phenomenal properties are physical is by considering zombie arguments: those which rest on the claim that it is possible for a
physical duplicate of me to lack phenomenal consciousness (see Chalmers, 1996). However, such arguments invite the response 'Why should we take seriously our views about what may be possible in such a complex matter?' This forces us into a debate about the degree to which our modal intuitions are rational grounds for arriving at a view about the fundamental nature of reality. Focus on the explanatory gap provides a different kind of answer.

Talk of metaphysical necessitation has given us some indication of the kind of explanation we are looking for – that is, not merely a causal/nomological explanation that could also be offered by the emergent dualist. Nevertheless, the restricted nature of this assumption means that the following options are kept open. The explanatory gap reveals an ontological gap (Chalmers, 1996), or it is a gap in our knowledge, a conceptual gap, or, as I shall characterize it, an experiential gap.

Until perhaps recently, the most popular approach has been to take it to be a conceptual gap. Features of our concepts of phenomenal properties, it is argued, account for why we take there to be an explanatory gap when, in fact, there is no ontological gap. Although the approaches differ in emphasis, there is surprising agreement on the general feature which does the work. All agree that our concepts of these properties – phenomenal concepts – directly refer to the properties they concern – that is, they refer without the aid of descriptions true of the properties in question (Sturgeon, 1994; Papineau, 1998a; Tye, 1999). Some add to this the thesis that states with phenomenal properties – such as our experience of a particular pain – are, at the same time, tokens of the phenomenal concepts of these states (Balog, 1999, p. 525). Although the precise formulation of the agreed-upon point differs, the desired upshot is that there are no descriptions that we can know a priori apply to a particular phenomenal property. There is simply the fact that we have a concept which does so apply.

The next step in the argument is to suggest that the standard form of theoretical identification which would bridge an explanatory gap is:

(1) M has causal role R (an a priori truth)

(2) A(P) occupies causal role R (an a posteriori truth)

Therefore,

(3) M = A(P).

Here 'M' stands for some phenomenal property, and 'A(P)' for an arrangement of narrowly physical properties which are supposed to realize the phenomenal property. Because there are no descriptions we can know a priori are true of M, we cannot know a priori that M has a causal role, R. Therefore,
such theoretical identifications are unavailable for phenomenal properties and there is no way in which the explanatory gap can be bridged. We have an explanation for why there is an explanatory gap without conceding that there is an ontological one. Note that even if you took explanatory identifications to appeal to descriptions of M and A(P) other than their causal role, the point about direct reference promises to deliver an account of the explanatory gap.

Unfortunately, the account has seemed more and more unsatisfactory. Quite a few have begun wondering whether there is any substantial account of the workings of phenomenal concepts to be given (e.g. Tye, 2009, Chapter 3). I'll allow, for the sake of argument, that there is. Here are two points which are increasingly familiar even granting this. First, since we are seeking to relate all the various Ms ultimately to arrangements of narrowly physical properties, R must be characterized in terms of the latter. But it is very unlikely that we know a priori any R characterized in terms of narrowly physical properties for any non-narrowly physical properties, and not just phenomenal ones. The standard illustration is water. Moreover, there is no explanatory gap regarding whether water is H2O that is banished when we formulate the appropriate R, by whatever means, and use it to identify H2O (Block and Stalnaker, 1999, pp. 13–23).

The impression that we haven't identified the root of the explanatory gap is reinforced by the second point. Indexical or demonstrative concepts simply don't have their reference fixed by description. Yet we don't get explanatory gaps arising to the same degree here. For example, consider John Perry's example of the messy shopper. John Perry looks for the messy shopper who is leaving a trail of flour in the local supermarket, only to discover that he himself is the messy shopper. When he thinks to himself 'I am the messy shopper', he does not feel plagued by an explanatory gap: how could it be that I am the person who is the messy shopper? Yet, presumably, that is what the proponent of the kind of account of the explanatory gap I have detailed above should expect, given their theory as to its source.

To my knowledge, no satisfactory replies have been given to these concerns. They throw into considerable doubt the supposition that the explanatory gap is a conceptual one. Of course, in one sense, if physicalism is true, it must be. If the explanatory gap does not correspond to an ontological one, then statements about phenomenal mental states must express different concepts from those which occur in statements about the brain. The point is rather that, in recognizing this fact, we still have no explanation as to why we feel that there is an explanatory gap.

An alternative approach to the explanatory gap is to account for it in terms of our ignorance of some crucial facts. One kind of ignorance would involve being ignorant of the narrowly physical properties which would explain the gap. Another would be that there is a way in which we can explain how one type of property supervenes upon, or is realized by, another which we
have yet to understand. The proposal is perhaps most famously associated with the name of Colin McGinn, although versions of it have now been expounded by Joseph Levine and Daniel Stoljar, and it was even present in Nagel's original article on what it is like to be a bat (McGinn, 1989; Levine, 2001; Stoljar, 2006; Nagel, 1974). The principal difference between McGinn and Stoljar concerns whether we will forever be ignorant given the very structure of our perception and cognition, or whether we are just ignorant at the moment. Stoljar is quite right to observe that we don't need to make the former, stronger claim in order to have a potential account of the explanatory gap. This is a good thing because McGinn's argument that we cannot even arrive at a theoretical understanding of the property which accounts for the link between the brain and phenomenal consciousness is questionable unless the property in question is supposed, in some way, to be implied by perception and thereby have a character very close to that which we experience (Stoljar, 2006, pp. 92–3).

Nevertheless, there are problems that the ignorance approach must face. The first is that physics just provides us with properties identified in terms of structure and dynamics. It has no use for intrinsic qualitative properties. If something like the latter is required to explain the nature of consciousness, then it is not that we are ignorant of some narrowly physical property; it is rather that panpsychism is true. We are ignorant of some qualitative element which escapes physics but is omnipresent in the natural world. Now it is true that the claim that the missing element must be qualitative has not been made out. Nevertheless, putting forward an ignorance account of narrowly physical properties – which ignorance has been made manifest to you by the fact that you can't explain phenomenal consciousness – at least raises the possibility that whatever you are ignorant of is not a narrowly physical property, or an arrangement of the same.

Second, as Levine emphasizes, part of the problem is not that we are ignorant of the narrowly physical properties, or the means by which they are arranged, which would explain phenomenal properties; it is rather that we have no idea of what kind of explanation should be offered (Levine, 2008), or, at least, proponents of the ignorance approach do not make clear the kind of explanation that they think we should provide and of whose constituents we are ignorant. Because of this, it is hard to be certain whether we are missing the vital ingredients.

This brings me to the third kind of approach to the explanatory gap. The basic idea is that what we fundamentally need to explain is why our experiences of arrangements of narrowly physical properties are very different from the experiences which are supposed to be these arrangements of narrowly physical properties. Once we are in a position to account for this difference, we will be able to bridge the explanatory gap.
The basic form of the answer seems obvious, and remarking upon it is quite familiar. It is that, obviously, the reason why these things seem to be very different is that when we experience the arrangements of narrowly physical properties that, allegedly, realize an experience, we have a different object of experience to what that experience concerns, for example, the waves breaking upon the shore. Yet it is the latter that characterizes what it is like to have the experience.

By itself, this comment goes very little way to resolving the situation. Suppose the current object of my experience is a certain arrangement of physical properties which, let us presume for the sake of argument, realizes our experience of a red wall. Then what we want to know is why our experience of this arrangement of physical properties in no way reveals the fact that the arrangement is an experience of a red wall, or, if it does, in what way it does and how this can be squared with what it is like to be in that experience. We cannot just trade upon the fact that there are different objects of experience; we want to understand how one object of experience should manifest itself in the other object of experience.

The point can be driven home if we consider the special case in which we are experiencing an arrangement of physical properties which either is exactly like the arrangement of physical properties which is a realization of this kind of experience in another, or is very like that arrangement because, for example, we are having an introspective experience of one of our own experiences. In the former case, we want to know why our experience of the arrangement of physical properties in another does not reveal that it is an experience of the same type as we are currently undergoing. In the latter case, we would be having an introspective experience of e* (an experience very like our introspective experience of it) and asking why it in no way reveals that it is very like the arrangement of physical properties which realize the introspective experience.

Looked at this way, an attempt to bridge the explanatory gap has two components. First, we need a theory of what makes a particular experience an experience of a property P. Second, we need an account of what makes P show up with the character it has in our experience. In the next section, I will be considering various accounts of the first, including the currently fashionable one which I favour: representationalism. The second component will not receive attention here, though it is important. As many have noted, if we attempt to bridge the explanatory gap by focusing on what our experiences concern, then the explanatory gap is reintroduced regarding the properties the experiences concern. For example, if colour is a profile of surface reflectance properties, then why does it look the way it does in experience? However, the room for manoeuvre here is far greater. The world is not blurred, but my blurred perception due to myopia has a perfectly satisfactory explanation to bridge the non-blurred-blurred gap.
Obviously, then, in an extended sense the recommended response is a version of the ignorance approach. However, it is distinctive in that a characterization of the kind of explanation required has been provided and it is suggested that, when we possess an explanation of that kind, the explanatory gap will no longer be present. Indeed, it is compatible with the proposal that, in fact, we are not ignorant of the arrangements of narrowly physical properties which account for phenomenal consciousness (or need not be) for the explanatory gap to arise. We just don’t appreciate their significance until we appreciate what makes an experience of the manifest objects and properties they concern. This brings us to the question of the character of phenomenal properties.
Theories of phenomenal consciousness

It is quite possible to challenge the division of labour regarding theories of phenomenal consciousness observed below into theories of phenomenal content and theories of what it is to be aware of that content (e.g. Neander, 1998). However, the reasons for this are best understood by considering the two elements separately.
Phenomenal content

There are, broadly speaking, three approaches to phenomenal content, more precisely, to the phenomenal properties which determine it, currently favoured in the literature: those which take qualia, representational properties and brute non-representational relations of awareness respectively as that which determines the phenomenal content of mental states. That these three approaches constitute three distinct options may not be immediately obvious. For example, it may be said that qualia settle the phenomenal content of our mental lives by our being aware of them. Moreover, rightly, any reader of the word 'determines' will want to know what that involves. Matters can be clarified if we consider the case of veridical perception, where the three approaches most obviously come into immediate contact. Although this might not be historically accurate, a productive way of thinking about the case of veridical perception is to see appeals to qualia, and to brute non-representational relations of awareness, as a response to the perceived inadequacies of the representational approach.

According to representationalism, the representational properties of experience determine the phenomenal content of that experience (see Harman, 1990; Dretske, 1995; Tye, 1995, 2000). Representationalists take the kind of properties that make beliefs have truth conditions, desires have satisfaction conditions
(e.g. that I have a fig now) and, outside the mental arena, sentences have truth conditions, and suppose that they will also be the basis for an account of the phenomenal content of an experience. For example, some of the representational properties of a particular perception may determine that the content of the experience is that there is a grey elephant at such and such a location. In one sense of 'determine', what determines the phenomenal content of the experience is that the elephant and the property of being grey are constituents. In another sense, the one to which the representationalist appeals, the representational properties determine the phenomenal content by certain objects and other properties being the manifest objects and properties of the experience. Representationalists often, and increasingly controversially, motivate their account by appealing to the fact that, in characterizing what an experience is like, we don't notice properties of the experience so much as go right through to what the experience concerns – the manifest objects and properties – suggesting that representation is at work to determine that content.

It is reasonably clear how the representationalist approach may seek to make a contribution to the question of the explanatory gap mentioned at the end of the last section. If we had an account of the representational properties of an experience, we could see what elements of an arrangement of physical properties helped to explain how that experience was an experience of a red wall. To illustrate, rather than because the account is independently plausible, if what made an experience of a red wall was the fact that it was causally correlated with red walls in optimal circumstances, then experiencing these causal facts would be to experience what made that experience have the character it does.

Attacks on the representationalist picture come from, broadly, two angles. On the one hand, critics seek to identify experiences which have phenomenal differences but where it is plausible that their representational content is the same (Peacocke, 1983; Block, 1990, 1996; Shoemaker, 1990, 1991). On the other hand, critics emphasize the obvious differences between two very different kinds of mental states: beliefs and perceptions (e.g. Martin, 2002). The general form of the debate under the first heading has been that, for every phenomenal difference identified by the critics of representationalism – which difference is supposed to encourage the introduction of qualia as additional phenomenal properties – the representationalist identifies a representational difference after all. Thus qualia are usually thought to make a difference to what an experience is like by being possessed by the experience without being representational properties and, for some, without being the objects of awareness. Others are inclined to accept that they are objects of awareness and, hence, in one respect, the way in which they determine the phenomenal content of our mental lives is by being manifest properties, settled by the third account of phenomenal content appealing to brute awareness.
For example, it has been remarked by representationalism's critics that our visual and gustatory experience of wine is very different and yet may well be of the same chemical properties, which give rise to distinct effects on the light we experience and on the taste buds (Shoemaker, 1990). The representationalists' response has been to emphasize that, even if there are similar properties experienced, they will be experienced in different ways because of the other bundles of properties with which they are associated. Of course, representationalists cannot prove that this accounts for the phenomenal difference, but the force of the reply is no worse than the hypothesis that the same properties are experienced and yet there is a phenomenal difference: stand-off.

Some apparent difficulties for the representationalist have arisen because of the account of representational properties they have endorsed. For example, one of its foremost proponents, Michael Tye, takes an experience to represent a blue object, say, if the experience is of a type where it is optimally causally correlated with blue objects, and so on and so forth for the other colours. Consider now the alleged possibility of spectrum inversion. It is hard to make sense of this because the basic idea is that the two inverted subjects see the colours of objects in different ways while otherwise having their experiences caused in the same way and giving rise to the same responses. If representation of colour is by causal correlation, this should not be possible if you accept the claim that, although the subjects see the colours in different ways (one sees red the way the other sees green, and so on), it would be a mistake to suppose that one subject is systematically misrepresenting the world where the other subject gets it right.

There are a number of different possible replies that the representationalist might make. One is to challenge the claim that spectrum inversion is a possibility. If there is independent reason to believe that the account of representational properties is correct, then this would be one ground for resistance. Unfortunately, this is far from clear. As we shall note later, causal correlation in optimal circumstances faces problems as an account of representation. More promising, with regard to dealing with the alleged possibility of spectrum inversion, is to note that our conviction that it is possible may mistake the intrinsic character of what is represented for the intrinsic character of what is doing the representing. Thus, if colour is an intrinsic property of the surfaces of objects – at least to the extent that the colour of an object does not seem to depend upon the colour of other objects – then we are inclined to judge that we could have seen that object with another colour instead – or indeed all the colours swapped round – when, in fact, this is not possible. To be fully developed, this approach will have to explain how we are convinced of this possibility when colour illusions show that how we experience a colour is often a function of the other colours experienced along with it. A third option is to take the possibility of spectrum inversion as providing additional grounds for supposing that the account of representational properties
in terms of causal correlation in optimal conditions is a mistake. Some other account of representational properties is appropriate. This is an approach I favour. Rather than take an analysis of representational properties to dictate what is represented in experience, we revise our account in response to what seems to be represented in experience.

Other phenomenal states seem to present difficulties for the representationalist because we come with certain assumptions about what must be represented. These assumptions often stem from a view about the correct analysis of representational properties. The clearest examples of this are the alleged non-presentational states such as those of free-floating anxiety or general depression. These states may involve a number of different presentations of the world in the way distinctive of the mental set of the depressed or anxious person, but there is an element of these states – it is alleged – which involves no presentation at all. Representationalist treatments of these kinds of cases tend either to claim that this element is captured by the various ways in which a depressed or anxious person represents the world to him or herself, for example, as lacking things of value to do, or to urge, as a result of your favoured theory of representational properties, that, in fact, something is represented after all (Crane, 1998, seems to suggest the former strategy). As an illustration of the latter approach, Tye has argued that the general phenomenal character of depression relates to low levels of certain chemicals in the body, which are said to lie at the root of depression (Tye, 1995).

Neither move is satisfactory without further defence. When we experience the world as lacking value, it is doubtful whether what is represented is that the world lacks value. It is rather that our attitude to what is in the world is that we find no value in it. So there seems to be a retreat from taking phenomenal content to be exhausted by what is represented to taking mental states essentially to involve some form of representation even if the phenomenal content is not exhausted by it. On the other hand, working out that something is represented after all, according to a preferred account of representational properties, still gives rise to a puzzle. We do not have to work out what is represented in experience when we take it to represent something square or red. These properties are manifest in experience. If we conclude that the general character of depression is that it presents low lithium levels in the body, then the obvious question is why this was not obvious in experience.

Both these critics of representationalism and the responses overlook another possibility. They all seem to assume that if a state, in a certain respect, involves nothing but a certain character it is like to be in that state, then that state couldn't, in that respect, actually be presenting that character. But this can be challenged. If a certain property seems to characterize a state but does not involve the presentation of anything else, it is reasonable to consider the hypothesis that
the state in question in fact, partly, presents that character. Thus, alleged non-presentational states in fact involve higher-order states which are presentations of some of these lower-order states' properties (for a suggestion of this kind of response, see Byrne, 2001; Noordhof, 2003b).

A number of issues are raised by this possibility. The first is that there are cases of phenomenal difference where it is part of our understanding of this phenomenal difference that it does not involve a presentation of this difference. The classic example is blurred vision. When we blurrily experience something, our experience precisely doesn't seem to be an experience of something which is blurred (Crane, 2006, pp. 130–1). A second is that, if we do take putative non-presentational states as involving higher-order presentations, are we, in effect, undermining the attraction of representationalism by, as it might seem, allowing that what is represented in this case are the very phenomenal properties that representationalists were seeking to do away with, viz. qualia?

I raise the second issue only to set it aside. I guess your view about that will depend upon what you pack into the idea of qualia. In principle, representationalists should not be concerned about what counts as the manifest properties of experience so long as it is representational properties which determine that they are the manifest properties.

The idea that there might be phenomenal differences which correspond to representational differences and yet which don't involve presentations of these properties – the blurred vision case – may, fruitfully, be related to the idea that there might be different degrees of phenomenal presence. Here are two examples. Suppose that you are currently experiencing a red tomato. Then it seems plausible to say that our experience is of something which has a backside. Indeed, one might think, the phenomenal content of our experience would be rather different if we experienced the tomato as not having a backside rather than experiencing it as having a backside. How should we capture this difference? Second, many feel happy about saying that our visual experiences present colours and shapes to us but balk at saying they present water as opposed to something which flows, is transparent, etc. But just as with the case of the backside of objects, it seems undeniable that there is a phenomenal difference – at some level – between experiencing water and experiencing this more cautiously described kind of thing.

It is tempting for the representationalist to deal with these cases by suggesting that the phenomenal content is the result of the judgements we are inclined to make on the basis of the experience. Thus, it is because we don't take our blurred experience of the world to be an experience of a blurred world that the blurriness is not presented as a property of the world. But the implementation of this strategy is not straightforward. For example, suppose that you are convinced that the forest you are currently seeing is an elaborate hologram. Then you won't be inclined to take the experience to be of trees and so
forth. Nevertheless, it seems that what you experience is still appropriately characterized as having a phenomenal content characterized in terms of trees and not some feature shared by tree and hologram presentations (Siegel, 2006, pp. 494–5).

Some have been tempted to see an argument for the enactive account of perception as derivable from the claim that the backside of objects is phenomenally present to us in our experience in some sense (e.g. as absence). According to, perhaps, its leading philosophical proponent, Alva Noë, perceptions involve the utilization of sensori-motor knowledge to have the content that they do, and what is responsible for our experience of a tomato with a backside is that we have sensori-motor expectations concerning what would happen if we changed our location relative to the tomato (Noë, 2004, pp. 77–9). Although highly suggestive, the proposal has difficulties. We have many sensori-motor expectations. Exactly how are some recorded in the phenomenal content of experience and others not, in a way which accords with the phenomena to which we have just adverted? For example, I have certain expectations about what would happen if I moved into the next room regarding what I would see. Yet it is stretching it to say that the next room has a grade of phenomenal presence – even as absence – in my experience of the current room in which I am located. Perhaps what is crucial is sensori-motor skills which are in some way integrated with our perception of particular objects or kinds. But it is unclear why we need to adopt such an approach rather than argue that there are conditions that must be met for an object or property to be a manifest object or property of experience, which, together with a representation of only some of its properties, will mean that others of its properties are present by their absence.

A third issue raised by the tendency to be liberal about what is represented, for example, by allowing representation of properties of mental states, is what we might dub the treatment of putatively non-representative sensory residue. For example, Noë suggests that without the sensori-motor skills in play, we will have sensuous material without anything being represented (Noë, 2004, p. 10). Similarly, A. D. Smith has argued that our perceptual experience may be intrinsically but not essentially representational. When our sensations don't make us aware of normal physical objects, we simply have sensuous matter without representation. Although, later, he concedes that sensations might, in those circumstances, provide us with presentation of non-objective states of our body, he takes this to be distinct from perception of independent objects characteristic, as he sees it, of the intentionality of perception (Smith, 2002, pp. 123–7).

Is there a substantial issue here, or can all the facts be captured whether one decides to build more or less into representation? One consideration in favour of being more concessive over what can be represented is that the theoretical
cost of explaining how there might be more substantial representation derived out of more primitive cases of representation may be less than that of seeking to provide an entirely new characterization of what characterizes the phenomenal content of our mental lives and, most importantly, how it relates to, and constitutes, the classic cases of intentionality in perception.

The debate between representationalists and those who appeal to brute relations of awareness, that is, proponents of relational views of perception (relationists), arises from the latter's diagnosis of the former as attempting to assimilate perceptual experience to belief or judgement. Yet, there is an obvious difference in phenomenal content. In the case of perceptual experience, objects and properties in the world seem presented to us. In the case of belief or judgement, this is not the case. My belief that snow is white – as opposed to my perception that snow is white – involves nary a hint of the delectable, chilly, white, fluffy stuff.

Broadly speaking, the debate has had two areas of focus. One has been those conscious states that seem to be phenomenally similar to perceptual experience and yet involve objects that don't, or at least needn't, exist. Indeed, it is not so much that the objects and properties needn't exist as that we seem to have an experience of them without standing in the kind of relationship which is distinctive of perceptual experience. The obvious examples of these are imagination and hallucination. If representationalists can make good the charge that these states have the same phenomenal content as (or appropriately similar phenomenal content to) perceptual experiences, then their opponents are wrong to characterize perceptual experience in such a way that the explanation of its phenomenal content is very different to that of these states and appeals to relations. Typically, then, proponents of relational views of perception are keen to deny that there is this similarity in phenomenal content.

In the case of imagination, they are inclined to say that the reason why we suppose that there is a similarity of phenomenal content is that, when we sensuously imagine something, we imagine having a perceptual experience of that thing (dubbed the dependency thesis). Imagining having a perceptual experience gives us an explanation for why we feel that the phenomenal content is the same when it isn't. It is debatable whether this can be the basis for a successful response. It is worth noting that we can also think that we are having a perceptual experience of a black dog with a wet nose (you are having that thought now) or believe that we are. Nevertheless, the phenomenal content is very different (see Martin, 2002; Noordhof, 2002).

In the case of hallucination, a disjunctive approach is adopted. First, it is asserted that perception and hallucination are two different kinds of states with no common content. Second, a non-ontological account is given of phenomenal
similarity. Thus it is said that, when we hold that perception and hallucination are phenomenally similar, that just means that they are indiscriminable to introspection alone. The account of why they are indiscriminable may vary. All that is denied is that phenomenal similarity should be understood simply in terms of similarity in phenomenal content (Martin, 2006, p. 369). In the case of perceptual experience, the determinant of its phenomenal content is that it involves a certain relation to an object and its properties. In the case of hallucination, no account is given of its phenomenology. Some hold that that is because it has no phenomenology (Fish, 2009). Others may allow that it has a phenomenal content, but just not like that of perceptual experience. Giving an account of the various ways in which hallucination may be indiscriminable from the corresponding perceptual experience, and resolving the question of whether hallucination does have a phenomenal content, are pressing issues for proponents of the relational view. I think it is fair to say that it is upon their treatments of these issues that the success of their approach turns.

Nevertheless, as we remarked, representationalists have a difficulty of their own: the apparent substantial phenomenal dissimilarity between perceptual experience and belief or judgement. The standard strategy is to hold that perceptual experience involves a richer, perhaps non-conceptual, content. Thus the phenomenal difference is to be accounted for by supposing that there are more representational properties at work in the case of perception and that what is represented is organized non-propositionally (e.g. spatially and/or without requiring that the subject who has the states possesses the relevant concepts; see, for example, Tye, 1995). While these are plausible differences from belief, it is questionable whether they will do. Representationalists hold that representational properties have phenomenal significance. Thus, even in the case of belief, one would expect its representational properties to have some phenomenal significance. The belief that there is a dog barking should have a whiff of a dog about it. The belief that it is brown should give us a flash of brown. But none of this is the case. For this reason, representationalists are going to have to appeal to an analysis of the second component of phenomenal consciousness, namely subjective awareness. They need to do this anyway to explain why there is no phenomenology in the case of unconscious mental states which, we may presume, have representational properties. But perhaps, in addition, such an appeal will help differentiate between conscious judgement or belief and conscious experience.
Subjective awareness

Theories of subjective awareness divide broadly into those which take the consciousness of a certain state to be an intrinsic fact and those which take it to
be an extrinsic fact. However, as we shall shortly see, it is hard to retain this division and there is a distinct danger that theories of the former sort collapse into theories of the latter sort when suitable refinements are made. These theories become part of a full theory of phenomenal consciousness because they appeal to materials discussed in the previous section to differentiate between what it is like to experience various kinds of mental states – those with different types of content. Theories of subjective awareness explain the circumstances in which we are aware of this content. An exception to these remarks is the provocative theory of consciousness offered by Ted Honderich. He takes consciousness to be constituted by the existence of a mind-dependent world of objects and properties. Thus, for him, what we might dub the phenomenal content is the account of our awareness of it (Honderich, 2004, Chapters 7–10; for discussion, see Noordhof, 2006b).

Among theorists who take the consciousness of a certain state to be an extrinsic fact, there is a further division between those who adopt a perceptual model and those who appeal to the idea of higher-order thought instead. The former hold that something like the following is the case:

A conscious mental state is one which is scanned by the introspective perceptual mechanism; an unconscious one is one which is not. (Lycan, 1996, pp. 14–15)

This theory raises in particularly stark form many of the problems which have plagued all extrinsic accounts of consciousness, so I will rehearse them here first.

Possibly the most familiar objection is that it makes our experience of our own conscious states highly fallible. The internal scanning mechanism may misfire so that either we introspect that we are in a different state to the one we are, in fact, in, or we think we are having a certain mental state when we are not. You could, for example, perceive that you are in serious pain when, in fact, that's just a false case of introspection. In the case of pain, it is possible to construct an evolutionary argument in favour of radical fallibility being extremely unlikely. Our pain system is important for us to recognize when our bodies are being damaged, and creatures which fail to appreciate when this is happening are unlikely to survive long enough to pass on their genes. On the assumption that the capacity to introspect is coded into our genes – as one might expect, given it is a sensory mechanism – we can expect considerable accuracy (Lycan, 1996, p. 18). This is much less obvious in the case of other mental states. It would be productive to consider this argument on a case-by-case basis and consider whether our relative confidence concerning our judgements about whether we are in those states corresponds to what we might predict by the evolutionary story.
Nevertheless, rather than focus on this, I want to discuss more immediate and challenging difficulties that follow on from this initial observation. If what makes a mental state conscious is that we perceive it, then what about the objects in the world? They are the objects of perception. Do we thereby make them conscious? For example, gaze at a rock. Does your gazing make this a conscious rock? You might reply that, yes, the rock is an object of consciousness but it is not, itself, conscious. However, in making this response, you are presuming that the conscious state is the product of the perceptual/introspective mechanism. That is not the position of those who adopt this theory of consciousness. Their claim is that the conscious state is the one which is the input into the process – the object of the perceptual-introspective mechanism. So, instead, the reply is that rocks aren't conscious because we do not call them mental states (Lycan, 1996, pp. 23–4).

This reply is either weak or under-described. If the denial reflects a stipulation regarding our use of the term 'conscious', then the reply seems weak. It concedes that, in terms of the reality, there is considerable similarity between the state the rock is in, as an object of perception, and the state one of our mental states is in, as the object of mental perception. It just denies that we apply 'conscious' to the former. The situation would be no different to remarking that sparkling white wine produced by the champagne method could not be champagne if it did not come from the Champagne region. This might be an important point regarding marketing but, unless there is something else to be said on the taste front, entirely uninteresting to the drinker. On the other hand, if it is suggested that we limit applying 'conscious' to mental states because, in addition, they are states of a cognitive/affective system and this affects the nature of consciousness, then the reply is certainly under-described. It is also questionable whether the result is, strictly speaking, a perceptual model of consciousness since now the attribution of consciousness to something is not just a matter of its being perceived but involves, in addition, these further facts I've mentioned. This is probably why, as far as I can tell, the foremost proponent of the perceptual model – William Lycan – seems to have in mind a response of the former sort, with its attendant weakness.

The third challenge that the perceptual model faces concerns what we should say if the perceptual-introspective mechanism misfires and proclaims that we are undergoing a certain state when we are not. What are mental hallucinations like? If you claim that, when a subject hallucinates that she is, say, imagining a green frog, it is exactly for her as if she were imagining this, then it seems that we have to allow the possibility that there are some intrinsically conscious states, our hallucinations. If we allow intrinsic consciousness here, then why not everywhere? (cf. Byrne, 1997, pp. 121–2, with regard to higher-order thought theory).
For this reason, I suspect that proponents of extrinsic accounts of consciousness must insist that there is nothing it is like to hallucinate a mental state since, in those circumstances, there would be no state of which we could be conscious. However, there are costs of making this move. First, suppose that a subject introspects that she is imagining a green frog and has a still higher-order mental state that she is introspecting that she is imagining a green frog. Then there must be some conscious state the subject is in – she must be conscious of a hallucinatory introspective state – by the terms of the theory. But what's that like? Aren't we forced to say that it is exactly like thinking that one is consciously imagining a green frog? Second, if we allow the object of introspection to determine the content of introspection, then aren't we forced to conclude that it doesn't matter what we introspect ourselves to be undergoing because it does not determine the content of our conscious states? The latter is just settled by whatever state we are in being the object of an introspective state. To deal with the first point, it seems we must abandon the idea that a state is conscious simply by being the object of another state. There must be a first-order state at the end of the chain. Otherwise, we will have phenomenal similarity between conscious hallucinatory states and non-hallucinatory states without the phenomenal content that the latter is meant to supply. In response to the second point, it may be suggested that getting the state we are in roughly right is a necessary condition of the state being conscious as a result of being perceived. Why do some philosophers appeal to higher order thought rather than perception in seeking to provide an extrinsic account of consciousness? Probably the following considerations have weighed with them. First, introspection seems to involve no distinctive sensory qualities for itself. If this is taken to be a necessary condition for perception, then introspection fails to count as a case of perception. Second, there is the problem of the rock mentioned earlier. It might be thought less likely that thought would stand in the same relationship to a rock as higher-order thought to the mental state of which it is supposed to make us conscious. However, the last point is not at all obvious. Rocks seem just as likely, and in similar ways, to trigger thoughts about them as mental states trigger higher-order thoughts about them. With regard to the former point about distinctive sensory qualities, it is hard to know what significance to attach to it. If the higher-order states are not supposed to contribute to the content of conscious states, but just make us aware of the lower-order state, then these sensory qualities cannot be provided by the states of perception themselves but must rather lie in their object. If mental states are representational, one might expect that they provide no sensory qualities of their own but convey the sensory qualities possessed by what they concern. There are also disadvantages appealing to higher order thought rather than perception – namely, that it makes consciousness a feature of creatures with
conceptual capacity – the capacities needed to have thoughts – and also relates it to self-consciousness because the higher-order thought is characterized as the thought that I am in such and such a mental state. Those who wish to attribute phenomenal consciousness to other animals and children at an early stage of development often balk at this commitment. The only way this can be resolved, I suspect, is if we have a much clearer idea of what is involved in the basic conceptual capacities at work. Naturally, higher-order thought theories downplay their sophistication, but the question is whether they can do so while, at the same time, making it legitimate to think that the states they postulate are genuine thoughts rather than non-sensory higher-order representations of some non-thought-like kind (e.g. Rosenthal, 1986, p. 344; Rosenthal, 1991a, pp. 32–3; Rosenthal, 1993). It is standard to appeal to the presence of higher-order perception or thought, rather than the disposition to have these things, to account for consciousness. The reason offered is that being conscious of something is an occurrent state rather than a dispositional one (Rosenthal, 1993, pp. 208–9). However, this seems to be a confusion. Once it is recognized that the conscious state is the state which the disposition to have an introspection or thought concerns, then we have an explanation of the sense in which being conscious of something is occurrent – namely, that there is an occurrent state which is the object of a disposition. One thing that either type of theory emphasizes is that the higher-order perception or thought itself is not conscious. What would make it conscious, in turn, is an even further higher-order thought or perception about it. This is usually thought to be an unsatisfactory feature by those who propose a reflexive intrinsic theory of consciousness (e.g. Kriegel, 2009, Chapters 1 and 4). Such theorists begin by putting forward something like the following. S is a conscious state if and only if S represents itself in the right way. Thus, suppose that I am having a conscious perceptual experience of a desk lamp; then I am in a state with two contents. The first of these is of the desk lamp. The second concerns the state which is of the desk lamp. Proponents of this approach struggle to ensure that this doesn't re-introduce an extrinsic account of consciousness. Obviously if it is said there is a mental state which has two components – the first of which has the content that there is a desk lamp, the second of which has the content that this state is of a desk lamp – then we have a difference in name only from an extrinsic account of consciousness. The only difference is that reflexive accounts of consciousness have made the decision to call the complex of states to which extrinsic theories of consciousness appeal a state itself. Obviously no proponent of a reflexive account of consciousness is going to
accept this description of their position. So the interesting question is how they avoid it. One suggestion has been that the conscious state has the causal roles both for the experience that there is a desk lamp there and for the awareness of this experience. Co-instantiation of the causal roles explains the reflexivity of the state (Shoemaker, 1994, pp. 242–5). Obviously co-instantiation of roles by itself won't do. There are any number of roles which may be co-instantiated. What is of particular significance is that one of the roles is that of an awareness of the experience. At the minimum, this means that one of the roles must be of a state which represents the experience in question. This makes the proposal a version of representationalism in which the correct account of the representational properties of awareness of the experience is to be given in terms of causal role. Some have argued that appeal to coincidence of roles – a dispositional matter – is once more ill-suited to explain the occurrent nature of consciousness (Kriegel, 2005, pp. 37–8). If what I have argued before is correct, then this is a mistake. The occupant of these coinciding roles is occurrent. In any event, it has influenced the currently most prominent exponent of an intrinsic theory of consciousness to provide a weakened version of this approach to take this into account (Kriegel, 2005, pp. 44–51). According to Uriah Kriegel, M is conscious if there are M* and M** such that both M* and M** are proper parts of M, M is a complex (not merely a sum but an integrated whole like a molecule) of M* and M**, and M* represents M by (indirectly) representing M** (Kriegel, 2009, pp. 226–8). The idea is that M contains proper parts, M* and M**, such that M** causes and hence is represented by M*, and M* indirectly represents M by representing M** in roughly the way that a painting can represent a house by representing a portion of it, the rest being obscured by trees, etc. Obviously a lot of work is being done by talk of an integrated whole and the idea of indirect representation. Since there seems nothing to rule out integrated wholes in which representation is only of a part, talk of representation of a part of the overall state, which thereby represents a whole, requires a particular kind of, as yet unspecified, integration. To remark it is unspecified is not to say that, over time, such an account cannot be provided. The question is whether indirect representation is what we need. In the pictorial case, although the whole house is represented, it is plausible that only part of the house – the part not occluded – is phenomenally present in the strongest sense. The remainder may have a weaker grade of phenomenal presence, depending upon the outcome of the discussion mentioned earlier. However, it is questionable whether this is what we want in the case of our consciousness of a mental state. Is there
some part of our conscious state of which we are not conscious when we are conscious of it? A natural candidate would be the reflexive element. However, proponents of the reflexive account of consciousness usually emphasize that the reflexive character is part of what we are aware of (e.g. Kriegel, 2009, p. 117). Nevertheless, perhaps this can be turned into a virtue. It has often been remarked that we are not conscious of the nature of our mental states but just conscious of what they are of in being conscious of them. It might be argued that a lower grade of phenomenal presence implied by indirect representation nicely explains this. One of the motivations for moving to an intrinsic, reflexive account of consciousness was the conviction that unconscious states were not the kind of states to make the states they concern conscious and yet, if it was insisted that the higher-order states were conscious, a vicious regress would ensue. Obviously, one might ask the same question of M*. Are we conscious of it and, if so, what makes us so? Kriegel's answer is that it is part of a globally conscious state. However, it seems we may have replaced an infinite regress with an explanatory circle. We are conscious of M (and M**) by having M* representing M** and conscious of M* by having M* as part of M. If that move is allowed to the proponent of reflexive consciousness, why isn't an equivalent available to the proponent of an extrinsic account of consciousness? Nor does it seem that this approach to consciousness avoids the possibility that we could have M* without M**, and hence the same questions arise as to whether we would, in those circumstances, have a hallucinatory mental state or no phenomenal state at all. Reflexive accounts of consciousness are likely to have to draw upon the same materials as extrinsic accounts of consciousness here. Although we began this section with two theories in play, it appears that they have converged upon something remarkably similar. A conscious state involves, at least, two parts. The first is the state of which we are conscious; the second is the element which makes us conscious of it. The second element, therefore, cannot be present and provide us with consciousness independent of the first element. There is nothing it is like just to be in a state with the second element. The differences lie in the account of indirect representation and the claim that the second element is part of the state of which we are conscious. The plausibility of these claims relates back, I have suggested, to the differing degrees of phenomenal presence we may recognize in our conscious states. If this convergence is to be evaluated further, work is required on the content of the second element. I have suggested that an appeal to a causal relationship between it and the first isn't mandatory to explain its content. Perhaps a successful explanation of the content may restore some of the difference between the two approaches once more. It also seemed that the second element couldn't
convey consciousness by any old misrepresentation of the first element. It had to be roughly right. Indeed, since expression of the content of the second element seems to be behind our self-reports about our conscious states, and we are generally recognized to have a special first-person authority concerning them, we might have to be more than roughly right. Yet, specifying the right kind of content to convey consciousness is no easy matter. It is this which has led some to wonder whether the division of our theorizing concerning phenomenal consciousness into these two elements is a mistake. I mentioned at the end of my discussion of phenomenal content that the difference between conscious judgement/belief and perception may lie in the theory of subjective awareness. Here is how it might work. The content that the higher-order element must have to convey consciousness upon the state which is its object (I've stated this in terms that proponents of the intrinsic as well as extrinsic theories of consciousness can accept) varies depending upon the kind of state involved. In the case of perception, the way in which the second element makes that perception conscious is by correctly representing, or being disposed to represent, the content of that perception; whereas, in the case of conscious judgement, the way in which the second element makes that judgement conscious is by correctly representing that one has a judgement that p (where p is the content of the judgement). This might account for the fact that, in the case of perception, what we are aware of is the content of perception: the objects and properties in the world; whereas what we are aware of, in the case of judgement, is the representational properties of the judgement, the meanings that make up its expression. If something like this is along the right lines, then there is, as yet, no decisive reason to reject representationalism about phenomenal consciousness, pending the successful development of an account of subjective awareness. Whether such an approach will be compatible with the truth of physicalism will turn on arriving at a proper understanding of the nature of representational properties. In the final section of this chapter, I briefly touch on some issues that have been considered under this heading.
Intentionality and Normativity

Just as the evaluation of representationalism has engendered an increased sophistication in the discussion of phenomenal content, there has also been an interesting discussion of the nature of intentionality. According to the standard picture, intentionality involved direction upon a content which was determined by the representational properties of a state. Belief is, perhaps, the paradigm example. Various features were attributed to intentionality so characterized, of which, perhaps, the most significant is that there could be directedness upon
objects and properties which do not exist. Allowing that this is so is compatible with also allowing, as externalists insist (see Sawyer in this volume for a good discussion of externalism), that what contents a subject entertains may be settled by the environment. Even if you allow that a subject thinks about water rather than twater because they have water in their environment, it does not follow from that that there can be no circumstances in which the subject thinks about water without there being any in the environment. However, when we turn our attention to the case of perceptual experience, it has seemed to a growing number that it does not have these features. We cannot perceive a dog unless the dog is present and we stand in a relation to it. This has given rise to an interesting range of research questions. Can there be non-representational intentionality as well as representational intentionality? Are there both propositional and non-propositional forms of intentionality? Can intentionality come in both conceptual and non-conceptual forms? What are the relationships between the answers to these questions? Such questions have given impetus to a field which appeared to have been going through a bit of an arid spell. If we compare the flurry of activity in the late 1970s and 1980s with regard to providing a reductive analysis of intentionality (i.e. in terms compatible with the truth of physicalism) with the current state of affairs, it seems clear that the emphasis has changed (Field, 1978; Dretske, 1981, 1988; Fodor, 1984, 1987; Millikan, 1984, 1986, 1989; Papineau, 1984, 1987, 1993a; Stalnaker, 1984; Block, 1986c; Cummins, 1989; Whyte, 1990, 1991). To some extent, the reductive programme has stalled. As far as causal-informational accounts are concerned, there has been little further development since Fodor's A Theory of Content back in 1990. The most significant competitor – Millikan and Papineau's appeal to biological function – still faces controversy over whether it has successfully dealt with the criticisms concerning indeterminacy and error which it was introduced to address (Millikan, 1984; Papineau, 1987, 1998b; see Ravenscroft and Rey in this volume for a description of the disjunction problem which lies at the heart of this debate). A disappointing feature of the current lack of further developments has been the tendency to use 'toy' reductive accounts of intentionality in the development of representationalist accounts of consciousness and the like – both to resolve (in the case of bodily sensations) and present difficulties (in the case of inverted earth) – as if the 'toy' accounts in question were close to being right as opposed to having been shown to be, more than likely, mistaken. One consequence of this situation is that there has been renewed interest in the nature of conscious intentionality with the suggestion that the solution to the difficulties which bedevil earlier reductive accounts will be found here. For example, Galen Strawson takes experience (conscious experience) to be the primary form of intentionality because he believes that only cognitive experiential qualities will bring sufficient determinateness to our intentional states to
make misrepresentation possible. Suppose that I am currently experiencing a desk lamp. Then what makes my experience of a desk lamp, rather than of any item on the causal chain between my eyes and the desk lamp (or indeed any item further back on the causal chain after the desk lamp), is the cognitive experiential quality of my experience which involves a taking it to be of a desk lamp rather than my retinal stimulation (say). Put me in a different environment in which my experiences are caused differently – by mammals which tend to sit around on desks in various unnatural poses – and my experiences will be mistakenly about desk lamps rather than them because I take my experience in that way (Strawson, 2008, pp. 295–302). Of course the problem with appeal to taking something consciously a certain way to resolve these difficulties – to which Strawson is sensitive – is that its resolution can seem very like magic. It is also potentially threatening to the hopes of representationalists to provide an account of phenomenal consciousness by appealing to the representational properties of our mental states. Nevertheless, the fact that, in consciousness, there seems no problem with these issues, where accounts which do not appeal to it struggle, promises to make the discussion of consciousness more productive. We have a putative explanatory role for consciousness to play and a way of considering what it brings where other accounts fall short, if, in the end, they do. By the same token, the development of a way of dealing with these issues without explicit appeal to consciousness provides an insight into the nature of consciousness because they are accounts of something which seems effortless at the level of consciousness. Perhaps this is also part of what lies behind the recent attraction to enactive theories of perceptual consciousness. The way in which our sensori-motor skills are integrated with the causal process from our object of perception to states of the brain settles that the causal process supports perception of that particular object rather than events later or earlier in the chain (e.g. see Noë's remarks on prosthetic perception and non-standard causal chains in Noë, 2003, pp. 97–100). Another aspect of the case in its favour is the idea that perceptual content is best specified in terms of what is immediately available to a perceiving subject to explore in the world and not what is represented (Noë, 2004; 2006, pp. 426–8). This has led some to consider whether perceptual consciousness – following on from cognitive processes (see Wheeler in this volume) – is constituted in part by extra-cranial processes (see Hutto in this volume). We couldn't have the phenomenal consciousness we do without the environment. Proper defence of this view is going to require systematic exploration of the kind of relationships which take place in the brain, upon which consciousness is taken to supervene as a working hypothesis, to consider whether they are reproduced in our interaction in the environment. Others have appealed to biological function to provide a proper account of the objects of perception (e.g. Davies, 1983). Whether such appeals supply
the required materials will turn on a detailed examination of the grounds for attributing these functions, and a resolution of the issue of indeterminacy. At the moment it looks like different contents may be attributed to an organism compatibly with offering an explanation of its cognitive and conative processes in terms of biological functions. However, perhaps this is a failure to consider these matters at the right level of detail. Further consideration of biological function also has utility in discussing the second strain of recent investigation into the nature of intentionality to which I wish to draw attention. Problems with developing a naturalistic account of intentionality have led some philosophers to consider the extent to which intentionality, and the states which possess it, have normative properties. There seems to be a sense in which it is legitimate to hold that if you perceive that a white rabbit is nibbling grass to the left of the burrow, and you consider the matter, you ought to have the corresponding belief. If you grasp the concepts of 2 + 2 and 4, then you ought to affirm that 2 + 2 = 4 (if you consider the matter). If you possess the concept dog, and are in the business of making dog judgements, then you ought to apply the concept to dogs you come across and so on. These 'oughts' appear to be normative oughts and are related to the nature of concepts and the meaning of words. Apart from the intrinsic interest in understanding these normative claims, the focus on normativity has promised to provide traction on the issue of coming to a final evaluation of the possibility of a reductive account of intentionality. Insisting upon the normative nature of content provides one way of articulating why reductive accounts of intentionality aren't possible. Everybody knows you can't naturalize morality; now, the claim continues, we can see why we can't naturalize intentionality either. Taming this normativity, on the other hand, provides a way of keeping the option open. A distinction is often drawn between two ways in which these oughts can be taken: as related to a norm or standard or as action-guiding (e.g. Hattiangadi, 2007, pp. 37–8). It is pretty much uncontroversial that they can be understood in the first sense and, so understood, present no more difficulty for naturalism than the difficulty it already faces with regard to intentionality in general. The question is whether they should be understood in the second. Do they provide guidance to action independent of our desires to follow these standards? The action-guiding sense of ought in this context is compared with the case of morality in which it is supposed that, when it is said we ought to behave or ought not to behave in a certain way, this is independent of our desires to behave, or not behave, that way. It seems unlikely that the oughts relating to content have any action-guiding role independent of the aims of the mental states in which they figure. Nevertheless, it is less clear whether we may take these aims to be rooted in our interests. For example, is the aim of belief to be accounted for in terms of our
desire to act on the basis of what we take to be true (see Noordhof, 2001, for a suggested positive answer)? Or is believing the truth desirable in a moral sense (see Horwich, 2006, for a suggestion along these lines)? If the aims of mental states are not rooted in our interests, then there may be scope to recognize a class of categorical content-related oughts in the context of the states through which these contents are possessed. However, I am not convinced that a positive answer to this last question threatens the reductive programme any more than if they are conditional upon our interests. Categorical oughts may be more controversial, in that it is questioned whether we really ought to do such and such if we are uninterested in doing so, but the threat to the reductive programme, if any, does not stem from where the normative properties are attributed but from the fact that they are attributed at all. It is just as bad to suppose that if you have certain desires, you ought to do such and such. What exactly does it mean to attribute such a property to the world (if that's what you are doing)? Biological function provides a potential source of these aims which would enable an understanding of them in physicalistically acceptable terms. However, the application of such an appeal to the understanding of the aims of mental states is not straightforward. Consider the case of the aim of belief. It is often remarked that the aim of belief – to be true – explains why we cannot consciously believe at will. How should we interpret the 'cannot'? On one interpretation, the 'cannot' is as strong as metaphysical necessity. It is literally not possible for a creature consciously to be able to do this. Appeal to biological function has no capacity to explain this unless it is not possible for there to be non-evolved creatures with beliefs. Perhaps mental states can only have aims if the creature whose states they are is biologically evolved. Then we would have to conclude either that no non-evolved organisms can have beliefs or that beliefs should be understood by their causal profile independent of the aim. For ease of discussion, I shall adopt the latter option (which is independently defensible; see Noordhof, 2001; for a view to the contrary, see Velleman, 2000). It does not matter for the issue raised below since it can be simply re-cast for a state with the causal profile alone. In any event, it is plausible that there may be creatures with states with that causal profile which are non-evolved. So there is a limitation to the biological explanation of the impossibility of consciously believing at will right here. However, things are worse than that. Appeal to biological function seems in no better shape to explain why creatures can't consciously believe at will even if they are biologically evolved and the 'cannot' is simply nomological. It is a familiar fact that things with a certain biological function can malfunction (e.g. the human heart can fail to pump blood). While we might accept that creatures that can consciously will that they are in a state with the causal profile of belief may die out quickly, there should be anomalies just as there are in all
other cases. Creatures are born with hearts that don't work; why aren't they born with consciously willable beliefs? In my view, the proper answer to this question reveals something about the nature of consciousness, viz. that it makes manifest the attractiveness of being disposed to act on what you take to be true (Noordhof, 2001). Working out the connection between functions derived from our biological heritage, features of consciousness, cognitive aims and so forth provides a rich domain of study. It enables us to think about the role of various kinds of normativity in the proper understanding of our mental life.
Concluding Remarks

Debate in the philosophy of mind from the 1950s to the 1970s seemed primarily focused on which of the main physicalist theories of mind, if any, were true. Everything was seen through the prism of counterexamples to the type-type identity theories, problems for functionalism and the like. Focus in the late 1970s to the early 1990s was on three issues which intertwined. First, there was the possible truth of eliminativism about the mental; second, there was the debate about whether externalism was true about mental content; third, there was the anxiety about mental causation even if physicalism was true. It is only a slight exaggeration to say that issues which came up under the second and third items often were pressed into service as a possible argument for the first: eliminativism. Although I have touched upon some of these issues in the current chapter, I have tried to reflect my sense that, while the work from the 1950s was essential to arrive at our current position, in many respects, the debate has moved in a healthier direction. Now there is far more focus on the detail of conscious states and the kind of materials to which we should appeal to understand their character. It is true that I have given the impression that the new centre of gravity is over whether phenomenal content should be understood as the representationalist recommends. However, the result of this discussion has been a much richer understanding of the nature of phenomenal consciousness, the nature of intentionality and how they relate to each other – or, if not richer understanding, at least richer appreciation of the issues at stake.
Glossary

action. Actions have been central to modern philosophy of mind ever since Descartes's contemporaries first criticized the account of mental causation suggested by his substance dualism. Since behaviour involves physical movements it was deemed that its causes must also be physical on pain of having to explain how, when and where dualistic causation takes place. Descartes located it in the pineal gland, a view rejected by modern substance dualists. Other positions in the philosophy of mind, such as those of property dualism, anomalous monism, behaviourism, identity theory and eliminativism have all been motivated by concerns related to the causation of behaviour. While some philosophers use the terms 'action' and 'behaviour' interchangeably, others reserve the former for behaviour that is intentional and/or voluntary (at least under some description). Voluntary mental acts, such as the act of calculating inside one's head, pose a problem for any species of behaviourism which treats all behaviour as (necessarily) being publicly observable. In the philosophy of mind actions are typically identified with events and/or processes; however, there is much dispute over which events actions are to be identified with. Some insist that actions are identical to bodily movements. For example, Davidson (who maintained that all actions are intentional under some description) identified actions with movements of our bodies. Yet as Hornsby has rightly cautioned, we must not conflate my moving my body with the ('mere') bodily movement I bring about when I move my body (for the term 'bodily movement' may be used in both a transitive and an intransitive sense). This motivates the competing view that actions are the causes of bodily movements (according to Hornsby, for example, all actions are tryings which may or may not cause our bodies to move). We may wish to further follow Hornsby in also distinguishing between the thing one did and (the event of) one's doing it (thus mirroring the Fregean distinction between the thing one believes and one's believing it). A third view, put forth by von Wright, identifies action with the causing of an event. While this does not immediately rule out the possibility that actions are events, it is perhaps misguided to always seek a precise location of X's causing of Y, as evidenced in the literature by various counterintuitive claims concerning the spatio-temporal location of killings involving slow deaths. Such issues regarding the ontology and individuation of actions are of particular relevance to the question of whether reasons are causes, which in turn relates to questions concerning causation, agency, control and (ultimately) free will.
C. S. Danto, A. (1973), Analytical Philosophy of Action, Cambridge: Cambridge University Press. Hornsby, J. (1980), Actions, London: Routledge. Moya, C. (1990), The Philosophy of Action, Cambridge: Polity Press. Stout, R. (2005), Action, Teddington: Acumen.
akrasia. See will.
animal minds. There is much debate over the extent to which some or all non-human animals can be said to have minds. The arguments typically revolve around what it is to have a mind, and in particular what it is to have a so-called 'mental state' such as a belief or desire. For example, some philosophers maintain, contra behaviourism, that a dog cannot believe that the cat is at the top of the oak tree unless it has the concept of an oak tree, where a concept is a linguistic representation of some kind. The question of animal minds is thus also closely related to questions about language and/or concept acquisition. A dog may now desire to go for a walk immediately, but it cannot now desire to go for a walk next Tuesday. Frankish here contrasts a behaviour-based concept of mind with the language-involving concept of 'supermind'. C. S. Bekoff, M., and D. Jamieson (eds) (1996), Readings in Animal Psychology, Cambridge: MIT Press. Glock, H. J. (2000), 'Animals, Thoughts and Concepts', Synthese, 123, 35–64. Searle, J. (1994), 'Animal Minds', Midwest Studies in Philosophy, 19, 206–19.
anomalous monism. A metaphysical thesis, argued for by Davidson, that insists on the identity of particular mental and physical events. It maintains that any given causally efficacious mental event described in a mentalistic idiom must be logically identical to some physical event. As such, when we speak about causally efficacious mental happenings or properties we are in fact just speaking about physical happenings or properties using different vocabulary. There is just one thing present, not two – hence the monism. This is the case even though these particular happenings admit of radically different and irreducibly incommensurable descriptions in the specialized vocabularies of folk (or everyday) psychology and physics. The different ways that we describe such happenings matter to our capacity to make systematic predictions and explanations of what follows from such happenings. Thus the allegedly normative character of our everyday mentalistic scheme for attributing propositional attitudes, such as beliefs, precludes the possibility of developing our mentalistic idiom into a science of the mind that incorporates strict laws; hence, the anomalous nature of the mental. By contrast, Davidson assumes that an ideal physics – one
capable of predicting and explaining the true causes of any given happening – only trades in strict causal laws. These assumptions frame his famous argument that if mental events do in fact cause physical events then they must be logically identical to some physical event or other, since only an explanation couched at the level of an ideal physics can possibly get at the true cause of any physical happening. D. H. Davidson, D. (1980), ‘Mental Events’, in Essays on Actions and Events, Oxford: Clarendon Press. —(1987), ‘Problems in the Explanation of Action’, in Metaphysics and Morality, Oxford: Blackwell. Macdonald, C. (1989), Mind-Body Identity Theories, London: Routledge.
artificial intelligence. The project of programming computers or, more generally, designing machines to engage in behaviour that would be counted as intelligent if executed by a human. Artificial intelligence was the dominant component of cognitive science in the 1960s and 1970s. During that period the focus tended to be on high level cognitive skills such as game playing, logical reasoning and natural language understanding. Such capacities were studied in isolation from other cognitive, perceptual and motor abilities and were modelled by processes involving the rule-governed manipulation of representations. In the 1990s a new approach developed that focussed on designing machines that do not heavily rely upon internal representations but can successfully navigate real-world environments and so exhibit the intelligence of animals much simpler than humans. M. C. Dennett, D. (1998), Brainchildren: Essays on Designing Minds, Cambridge, MA: MIT Press. Haugeland, J. (ed.) (1997), Mind Design II, Cambridge, MA: MIT Press.
behaviourism. In philosophy the term 'behaviourism' is typically associated with a range of views according to which mental phenomena are to be analysed (if not quite analysed away) by reference to behaviour. According to the analytic or logical behaviourism of Carnap and Hempel, for example, the verification of any psychological statement will involve behavioural observations. This view should be distinguished from the methodological behaviourism of Watson (which states that psychology may legitimately only concern itself with observable behaviour) and the even more radical psychological behaviourism of Skinner (according to which all human and non-human behaviour is to be explained without reference to the subject's inner life). More contentiously,
behaviourism has also been associated with Ryle and Wittgenstein, who both emphasized the conceptual relations between behaviour and mental ascriptions, though they arguably fell short of confirming 'hypotheses about psychological events in terms of behavioural criteria' (to use Sellars' characterization of what makes someone a behaviourist). A recent counterexample to some (though by no means all) strands of behaviourism is that of Galen Strawson's Weather Watchers, viz. beings who are hypothesized to have a mental life despite being 'constitutionally incapable of any sort of behaviour, as this is ordinarily understood'. C. S. Smith, L. (1986), Behaviorism and Logical Positivism, Stanford: Stanford University Press. Stout, R. (2006), The Inner Life of a Rational Agent, Edinburgh: Edinburgh University Press.
belief. While Russell and McTaggart both wrote about 'states of mind' as early as 1921, and Turing talks of the 'internal states' of machines in his influential 1950 Mind article 'Computing Machinery and Intelligence', it was not until the rise of functionalism in the 1960s (and in particular Hilary Putnam's 1964 paper 'Minds and Machines') that philosophers began to refer to beliefs as 'mental states', alongside desires and other mental phenomena. Yet belief does not generally appear to be a state of mind in the ordinary (emotional) sense of the term, besides which one can only be in so many states of mind at any given moment. Some theorists hold that there is an episodic 'occurrent' form of belief in which thoughts are actively brought to mind. It is true that we could not have many of these at any given time, but this is not because they are states we might be in. One may be in a state of nervousness or anxiety but it ordinarily makes no sense to say that a subject, mind, or brain is in a state of belief (though one may find oneself in a state of disbelief). Beliefs so construed are held to have representational contents which relate to (possible or actual) states of affairs, much as the line on a gramophone record relates to the notes played by the recorded orchestra. Representationalist thought dates at least as far back as Hume, if not Plato. Wittgenstein attacked this characterization in his Philosophical Investigations which, among other things, sought to repudiate his own, earlier, picture theory of the mind. To believe that p, he came to think, is not to form any kind of representation of p, but simply to take p to be the case, thus explaining the impossibility of meaningfully stating 'p, but I don't believe p'. Moore's 'paradox' was founded on the worry that the statement in question could nonetheless be true (since 'p, but Moore doesn't believe that p' is unproblematic). But there is no real paradox here, for on the case-stating view (but not on the state-reporting one) to say that one
believes that p is not to explicitly say anything about oneself, though one may be disclosing much through conversational implicature. Frankish helpfully distinguishes between two strands of belief, associated with two distinct kinds of mental processing and, more generally, two conceptions of mind. The first ('basic belief') is typically non-conscious, passive, non-occurrent and attributable on purely behavioural grounds (see animal minds). The second ('superbelief') may be held consciously, typically requires linguistic conceptualization, and is frequently occurrent. Frankish argues that basic beliefs and superbeliefs may conflict. For example, I superbelieve that the indicator in the new car is on the left, yet on each turn I move my hand to the right, as in the old car, suggesting that my basic belief (which I may become aware of) contradicts it. This offers an alternative approach to Moore's paradox, on which the asymmetry between first and third person disappears. If 'p' and 'I don't believe p' refer to attitudes of different types, then they may both be assertable, even if both (or neither) of them are self-descriptions. The ontology of (both kinds of) belief also crosses paths with theories of truth. Inspired by Frege's distinction between believing and the thing believed, White has suggested that the term 'belief' is simply ambiguous: it can either refer to a proposition (which may be true or false) or to the believing of such a proposition. However, we do not believe our beliefs any more than we desire our desires or fear our fears, and it is at best awkward to talk of believing, desiring, or fearing propositions. In contrast to what we believe, a belief may be imaginative, and this need not coincide with one's believing being imaginative (for I may unimaginatively just latch onto your imaginative belief). Following Gilbert Ryle, it is arguably a category mistake to think that we have beliefs in the same sense in which we can be said to have pencils, mortgages, or family in Tanzania. To claim, instead, that we paradigmatically ascribe beliefs to beings when they behave (or when it is assumed they would behave) as if something were the case is not to deny we may legitimately talk of beliefs that no one has ever had; on the contrary there are numerous things which one could take to be the case even if nobody has ever actually done so. Indeed, two people may have exactly the same belief, for the beliefs we have are not particular tokens of universal types, though one's having a belief is an instance of a general case. C. S. Collins, A. (1987), The Nature of Mental Things, Notre Dame: Notre Dame Press. Frankish, K. (2004), Mind and Supermind, Cambridge: Cambridge University Press. Steward, H. (1997), The Ontology of Mind, Oxford: Oxford University Press. White, A. (1972), 'What We Believe', in N. Rescher (ed.), Studies in the Philosophy of Mind, APQ monograph series no. 6, Oxford: Blackwell.
causal closure. Causal closure is the claim that all physical events have a physical cause. If one accepts causal closure one denies the causal efficacy of the non-physical. Ghosts cannot drag iron chains, and God cannot intervene in the physical world. Non-physical minds cannot interact with physical bodies, and causal closure is thus at odds with mind-body dualism. D. O. Papineau, D. (2002), Thinking about Consciousness, Oxford: Oxford University Press.
Chinese room argument. An argument devised by Searle to undermine strong artificial intelligence, the doctrine that an appropriately programmed computer would have a mind. Searle, who does not understand Chinese, imagines himself locked in a room that contains English instructions telling him how to correlate some Chinese symbols with others. The instructions only mention the syntactic properties of the symbols. Sheets of Chinese symbols are fed into the room. Searle responds to this by following the English instructions, correlating the new symbols with the symbols written on a batch of sheets. These symbols are then copied onto blank sheets which are in turn posted out of the room. It turns out that the input symbols are questions written in Chinese and the output symbols are sensible answers to those questions. Searle's symbol-manipulating behaviour mimics that of a competent speaker of Chinese, but he does not understand a word of it. The crucial point is that Searle does exactly what a computer does – symbol manipulation according to syntactic but not semantic properties. Searle does not understand Chinese, just as a computer does not understand any of the symbols it manipulates. Thus, he claims, no computer, however it is programmed, is capable of understanding Chinese or any other language. He generalizes this result to all cognitive capacities. M. C. Preston, J., and M. Bishop (2002), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford: Oxford University Press. Searle, J. (1980), 'Minds, Brains and Programs', Behavioral and Brain Sciences, 3, 417–24.
computationalism. The view that the mind is a computer or ensemble of computers embodied in the brain and that thinking is a kind of computation. Interpreted narrowly, a computer is a mechanical device that manipulates syntactically structured symbols by means of the application of symbol manipulation rules. Although the device is sensitive only to the syntactic properties of such symbols, the symbols do have meaning so that the device processes
information when it computes. Such a view of the mind is central to what has become known as classical cognitive science and is associated with Putnam, Fodor and others. Interpreted more broadly, neural networks of the kind postulated by connectionists are computers. So, to some, connectionism is a form of computationalism. M. C. Fodor, J. (2008), LOT 2: The Language of Thought Revisited, Oxford: Oxford University Press. Haugeland, J. (ed.) (1997), Mind Design II, Cambridge, MA: MIT Press.
conceivability argument. See zombies.
concepts. Concepts are the constituents of thought. To possess a concept is to be able to think about a certain aspect of the world. I can think of the stuff in my cup as coffee, as bitter, as brown and as hot; and to think in these ways I require the concepts coffee, bitter, brown and hot. The concept coffee is necessary for various kinds of thoughts: for example, the concept coffee enables me to believe that coffee is my favourite beverage, to desire a particular Italian rich roast blend, and to hope that my cafetiere is not empty. D. O. Margolis, E. (ed.) (1999), Concepts: Core Readings, Cambridge: MIT Press.
connectionism. Connectionism is an approach in cognitive science according to which the mind is a neural network or ensemble of such networks. Connectionism came to prominence in the 1980s and is contrasted with what has become known as classical cognitive science, as connectionist networks typically do not manipulate syntactically structured symbols or store information by means of such symbols. Champions of connectionism are impressed by several features of connectionist networks including their similarity to the brain, their capacity to learn, their possession of a content-addressable memory, and their ability to deal with noisy input data and internal damage. One particularly prominent debate surrounding connectionism relates to Fodor's charge that connectionism cannot account for the fact that it is a psychological law that thought is systematic in the respect that anyone capable of thinking that x stands in relation R to y (e.g. that John loves Jane) is also capable of thinking that y stands in relation R to x (e.g. that Jane loves John). M. C. Bechtel, W., and A. Abrahamsen (2002), Connectionism and the Mind, second edition, Oxford: Blackwell.
Haugeland, J. (ed.) (1997), Mind Design II, Cambridge, MA: MIT Press.
consciousness. Consciousness signifies a number of different phenomena. Sometimes the subject of the attribution of consciousness is the creature per se. We speak of a person or creature as being conscious or not. But there are cases in which the subject of the attribution is a particular mental state of the creature. According to most philosophers, there are different kinds of consciousness. Following Block (1995), they distinguish between phenomenal consciousness (P-consciousness) and access consciousness (A-consciousness). A-conscious states are conscious propositional attitudes, for example, beliefs, judgements and desires. Although P-conscious states can be A-conscious too, they can occur independently of A-conscious states. The mark of A-conscious states is that they are available for use in reasoning and rationally guiding speech and action. The notion of A-consciousness is dispositional: what is required is not actual access but accessibility (also called global accessibility). On the other hand, the distinguishing mark of P-consciousness, namely what makes a mental state P-conscious, is that there is something it is like to be in that state. There are also higher-order conceptions of consciousness, often called reflexivity or monitoring consciousness, which involve representation of one's mental states. Finally, there is the notion of self-consciousness, which involves what psychologists call a theory of mind, namely the ability to attribute mental states in everyday life and to reflect upon our mental lives and the mental lives of others. D. P. Armstrong, D. M. (1981), 'What is Consciousness?', in D. Rosenthal (ed.), The Nature of Mind, Ithaca, NY: Cornell University Press, pp. 55–67. Block, N. (1995), 'On a Confusion about a Function of Consciousness', Behavioral and Brain Sciences, 18, 227–47. Rosenthal, D. (2002), 'How Many Kinds of Consciousness?', Consciousness and Cognition, 11 (4), 653–65.
content. The term 'content' is used equivocally in the philosophical literature. It can refer either to what is thought about – a thought's subject matter (e.g. a state of affairs or object; whether real or imaginary) – or the specific manner or way in which that subject matter is thought about. Content of the first type is extensional. That of the second type is intensional (with an s). An important difference between these two kinds of content is that the former but not the latter is transparent to substitution of coextensive terms. It is normally taken to be a defining feature of contentful mental states that they are truth-evaluable (i.e. such mental states have the property of being either true or false). Contentful mental states come in conceptual and non-conceptual varieties. Those of the latter sort are allegedly possessed by thinkers who are unable to specify for
themselves by means of concepts, verbally or otherwise, how it is that they think about the world. D. H. Evans, G. (1982), The Varieties of Reference, Oxford: Oxford University Press.
desire. A psychological inclination towards an object or aim. Hume famously claimed that desires do not have 'any representative quality, which renders [them] a copy of any other existence or modification' (Hume, 1978, p. 415). This has recently been interpreted as suggesting that, unlike beliefs, desires do not aim to represent any other existence. Their role is not to copy or represent the world but to change it. Following this line of thought, in contemporary philosophy of mind desires are identified primarily by their role in combining with beliefs (about how the world is and can be changed) to generate actions which will result in the world being changed to fit their content. Both beliefs and desires are thought of as propositional attitudes, but as attitudes with distinct aims or 'directions of fit'. Beliefs aim to fit the world while desires aim to get the world to fit them. What exactly is to be understood by this is not however always agreed upon, nor is it always agreed that the role of desires in combining with beliefs to give rise to action can truly capture the full and varied nature of desire. I. M. Hume, D. (1978), Treatise of Human Nature, ed. L. A. Selby-Bigge, Oxford: Clarendon Press. Sobel, D., and D. Copp (2001), 'Against Direction of Fit Accounts of Belief and Desire', Analysis, 61 (1), 44–53.
dispositions. A disposition is best described as a propensity or liability of an object, animal, or person to behave in certain ways under certain circumstances (in which the disposition in question is triggered by the environment). Philosophers of mind distinguish between the mental and physical dispositional properties of any given subject, though often they attempt a reduction of the former to the latter. The current trend of describing dispositions as mental and/or physical states which may act as mental or behavioural causes is at odds with certain forms of behaviourism and various Wittgensteinian insights that are (sometimes erroneously) associated with it. C. S. Mumford, S. (1998), Dispositions, Oxford: Oxford University Press.
dualism. Any view which posits the existence of just two kinds of something, as opposed to just one kind of something (monism), is dualistic in nature. In the
philosophy of mind there are substance dualists, most famously Descartes, who maintain that ultimately there are just two kinds of stuff in the universe: physical stuff and mental stuff. There are also property dualists who argue that there are at least two kinds of properties: physical and mental properties. Dualists of whatever stripe hold that mental stuff or mental properties are neither reducible to nor explainable entirely in terms of the physical. The mental really is its own kind of something, and it belongs on the final inventory of the universe, right alongside quarks or mass. Arguments for dualism come from many quarters: reflection on what Mary did or didn't know, zombies, personal identity, language use and modality or bare conceivability. Perhaps more famous are the difficulties for dualism: its lack of explanatory power, the problem of interaction, maybe just a general failure to fit into a conceptually tidier monism. What's interesting, too, is the grip dualism has on us despite the answers we give to the arguments in its favour and the troubles we find in thinking it true. R. D. Chalmers, D. (1996), The Conscious Mind, Oxford: Oxford University Press. Descartes, R. (1642/1984), 'Meditations on First Philosophy', in J. Cottingham, R. Stoothoff and D. Murdoch (trans. and eds), The Philosophical Writings of Descartes, vol. 2, Cambridge: Cambridge University Press.
eliminative materialism. Eliminative materialists claim that our common sense description of the mind – our folk psychology – is false. We do not have beliefs, desires, hopes and fears. In a complete, true theory of the mind these categories will not be reduced to physical categories; rather, they will be eliminated in favour of the categories of a materialist theory that explains human cognition in physical, probably neurophysiological, terms. Some eliminative materialists, such as Quine, accept that folk psychology is indispensable to our everyday dealings: we shall thus continue to talk of beliefs and desires even though, strictly speaking, we do not have them. Others, notably Paul and Patricia Churchland, claim that we should strive to jettison such false ways of speaking. In the future we shall come to speak of each other, and see ourselves, not as believers and creatures of desire, but rather, in terms of the categories of neuroscience. D. O. Churchland, P. (1979), Matter and Consciousness, Cambridge: MIT Press.
emotions. Mental states such as fear, happiness, anger and (on some views) desire, commonly associated with distinctive patterns of sensation, thought and behaviour. What constitutes the underlying nature of such states is a matter of much controversy, as is the issue of whether they form a unified ontological category at all.
The etymology of the term ‘emotion’ points to the most obvious theory of emotions as visceral ‘movements’. Theories which emphasize this visceral or ‘sensation’ aspect of emotions have been widely criticized however for leaving out the whole intentional dimension of emotions and for being unable to account either for the rationality of emotions or for their complex role in rationalizing other states and actions. Such concerns underlie a number of more recent theories of emotions – cognitivist and perceptual theories in particular – according to which emotions are just special kinds of judgements or experiences of the world as having emotion-specific evaluative properties such as being frightening, dangerous, worth avoiding or worth obtaining. These theories retain however, in common with sensation theories, the view that emotions are essentially conscious episodes, thus leaving them open to the charge that they cannot make room for the existence of unconscious or nonconscious emotions (i.e. ones not currently occupying one’s attention). As against such theories, it is thus sometimes argued that emotions are best thought of as mental dispositions – dispositions to behave in various ways, to have one’s experiences of the world transformed or coloured in distinctive ways, and to have a wide range of other mental states, both episodic and dispositional, including thoughts, beliefs, desires and even other emotions. How each of these approaches might be refined to accommodate the varied aspects and wide-ranging roles of emotions in our mental lives is the focus of much current research. Questions being actively addressed include ones about the rationality of emotions, about their role in motivating action, about the nature of emotional experience and emotional expression as well as of course fundamental questions about the ontological nature of emotions and about our ability to know them. I. M. James, W. (1884), ‘What is an Emotion?’ Mind, 9, 188–205. Sartre, J. (1962), Sketch for a Theory of the Emotions, London: Methuen. Wollheim, R. (1999), On the Emotions, New Haven: Yale University Press.
epiphenomenalism. Mental events seem to have a causal effect on each other and on our bodies. A pain can cause me to be depressed and to rub my sprained ankle. Epiphenomenalism, though, is the thesis that such causal relations are an illusion. Mental events play no causal role. They are, rather, side effects of physical events in our brains and bodies. Physical event F causes me to rub my ankle, and also has the side effect of causing me to feel pain. Certain pains, then, tend to be followed by such behaviour, but this is because these pains and ankle-rubbings are the product of a common cause, F, and not because pains cause this behaviour.
D. O. Jackson, F. (1982), ‘Epiphenomenal Qualia’, The Philosophical Quarterly, 32, 127–36.
explanatory gap. Contemporary neuroscience is teaching us that our mental states correlate with neural processes in the brain. However, although we know that consciousness arises from a physical basis, we don’t have a good explanation of why and how it so arises. Pain experiences, for instance, correlate with C-fibre firing, but even if we know that the feeling of pain correlates with C-fibre stimulation and that, say, the existence of such pain states is contingent on the occurrence of such neural events, we still want to know why it doesn’t correlate with a neural state of another kind or why it is pain rather than the feeling of elation or an itch that correlates with that particular kind of neural state. This leads us to the more general question of why such a correlation holds at all. Trying to answer such questions raises the problem of the explanatory gap. The general idea is that physical properties – the subject matter of physics – can be exhaustively explained in objective-scientific terms, that is, in terms of function and structure, but it appears that phenomenal properties cannot be explained in those terms. It is further claimed that phenomenal properties cannot be explained in terms of cognitive abilities either, in that the latter can be given – at least in principle – a functional characterization. Philosophers divide into five groups: (1) there is no explanatory gap or there is one that is easily closable; (2) there is a deep explanatory gap for now, but an answer might be forthcoming or we might someday close it; (3) there is a permanent explanatory gap (we cannot close it in principle because we suffer from cognitive closure), but there is no ontological gap; (4) there is a permanent explanatory gap, but we will never know whether there is an ontological gap; (5) there is a permanent explanatory gap and a corresponding ontological gap. D. P. Chalmers, D., and F. Jackson (2001), ‘Conceptual Analysis and Reductive Explanation’, Philosophical Review, 110, 315–61. Levine, J. (1983), ‘Materialism and Qualia: The Explanatory Gap’, Pacific Philosophical Quarterly, 64, 354–61. McGinn, C. (1989), ‘Can We Solve the Mind-Body Problem?’ Mind, 98, 349–66.
extended mind. The extended mind thesis holds that at least some cognitive processes essential for enabling the completion of specific acts of cognition are not wholly within the boundaries of the skin or the skull. Focusing on cases of belief formation, Clark and Chalmers argue that sometimes successful cognition unavoidably depends on the use of environmental resources (e.g. appeal to
information contained in notebooks or other devices). Moreover, they argue that if the manipulation and use of such resources to support cognition were to occur entirely within the bounds of the subject’s head we would not hesitate to class them as cognitive. On these grounds, it is argued that we have no reason to deny that the machinery of the mind can, at least when certain specified criteria are met, extend into the wider environment. D. H. Clark, A. (2009), Supersizing the Mind: Embodiment, Action and Cognitive Extension, Oxford: Oxford University Press. Clark, A., and D. Chalmers (1998), ‘The Extended Mind’, Analysis, 58, 7–19.
first-person authority. A subject’s position as ultimate authority or expert regarding the contents of his or her own mind. The literature on self-knowledge is divided both on whether this authority exists and on whether, assuming it does exist, it is epistemic (the result of our enjoying some form of privileged access to our own mental states) or merely the consequence of some special feature of our self-ascriptive judgements of the form ‘I believe that p’ or ‘I am in pain’. Sceptics about first-person authority note that failures to know what we believe, desire, fear, etc. are both possible and frequent, as testified by common cases of self-deception. Yet, despite the existence of such cases, it seems undeniable that at least sometimes our authority is unrivalled. If I now come to the conclusion that space is not Euclidean, I seem to be far better placed to know that I have reached this view than anyone else, no matter how attentive they might be to my behaviour. The central task of any theory of self-knowledge is to explain the authoritative position we at least appear to stand in with respect to a wide range (if not all) of our current mental states. Philosophers of a non-epistemic persuasion try to point to various distinctive features of our authoritative statements (e.g. to their being self-verifying, to our act of uttering them itself making us count as having them, or to their being mere expressions of mental states rather than judgements about them). At the other end of the spectrum, defenders of epistemic approaches to first-person authority point to the non-authoritative status of some of our mental self-ascriptions (about our unconscious minds) to argue that the authoritative character of other of our self-ascriptions (about our conscious minds) cannot be a feature of our self-ascriptive judgements considered merely as such – that is, merely as judgements with a particular form (the self-ascriptive form) or subject matter (our mental states). If a statement of the form ‘I believe that p’ can in some cases be authoritative and in other cases not be authoritative, the authoritative status of those instances of self-ascription which are authoritative cannot arise out of their form and content alone, but must arise instead out of the particular way in which they were reached. Which line of approach,
epistemic or non-epistemic, is the closest to the truth is a matter of ongoing debate, a debate further complicated by an independent threat thought to be posed by externalism about mental content to the authority of even those of our mental self-ascriptions which seem to be least open to challenge by others. I. M. Moran, R. (2001), Authority and Estrangement: An Essay on Self-Knowledge, Princeton, NJ: Princeton University Press.
first-person/third-person perspective. The first-person perspective on mental states comprises our subjective apprehension of our own mental states: the knowledge we have, for example, of what we believe and desire, and of what it feels like to have pains and tickles. The third-person perspective comprises an objective description of mental states. Science, for example, attempts to provide such a description. This distinction is related to two fundamental issues in the philosophy of mind. First, I seem to have a special kind of privileged access to my own mental states. The Cartesian view is that we are infallible with respect to the contents of our minds. I seem to be able to know with certainty what I am thinking and feeling. You, however, can be in error with respect to my thoughts and feelings: you can think that I am happy when I am not. Others, however, have construed such first-person authority in a weaker sense. I may have a distinct kind of non-inferential access to my own mental states, but I am not infallible. There are times, for example, when I may not know that I am jealous, or it may be obvious from my behaviour that I believe that a certain climbing route is dangerous, even though I am not aware that I have such a belief. Nevertheless the default assumption is that I am in the best position to know what I am thinking and feeling. Second, there would appear to be aspects of the first-person perspective that cannot be accounted for by a third-person theory. Various kinds of dualists argue that subjective features of the mind cannot be given an objective description: the way a ripe peach tastes to me cannot, for example, be explained in objective, scientific terms. D. O. Alston, W. (1971), ‘Varieties of Privileged Access’, American Philosophical Quarterly, 8, 223–41. Nagel, T. (1979), ‘What is It Like to Be a Bat?’ in Mortal Questions, Cambridge: Cambridge University Press.
folk psychology. ‘Folk psychology’ is the name given by philosophers to the everyday practice of explaining, predicting and understanding intentional
actions (i.e. our own and those of others) in terms of reasons. Engaging successfully in this practice requires being able to answer a particular sort of ‘why’ question by competently deploying the idiom of mental predicates (beliefs, desires, hopes, fears, etc.) and attributing these mental state terms appropriately. Sometimes folk psychology is used to refer, more restrictively, to the complete set of propositions and generalizations (or at least a perspicuous presentation of the core body of these) that its practitioners are implicitly committed to when using mental state terms in order to make sense of actions. Used in this way, folk psychology is often imagined to denote the ‘theory’ that would be obtained by describing all of the relevant folk commitments in a systematic way. There are several accounts of the nature of folk psychology and how best to explain the relevant abilities associated with it – for example, it has been variously characterized as being, in essence: a kind of theory; a practice involving modelling or simulation of mental states; and a narrative practice. Some of those who regard it as a theory have called for its elimination, stressing its ‘folk’ status and highlighting its lack of fit with growing modern science. D. H. Carruthers, P., and P. Smith (eds) (1996), Theories of Theories of Mind, Cambridge: Cambridge University Press. Goldman, A. (2006), Simulating Minds: The Philosophy, Psychology and Neuroscience of Mindreading, Oxford: Oxford University Press. Hutto, D. D. (2008), Folk Psychological Narratives: The Socio-Cultural Basis of Understanding Reasons, Cambridge, MA: MIT Press.
frame problem. The problem of understanding how a computational system could accurately revise a complex network of representations of the world to maintain its accuracy when one element of that network has been changed. Following Dennett, the problem can be illustrated by considering a robot driven by a computer. The robot carries out an action, changing the world in a limited way. As the world is a complex causal system, this change can be expected to bring about many other changes in the world, some of them quite distant, while leaving many other aspects of the world unchanged. The problem the robot faces is that of effecting the appropriate changes without having to examine every individual element of its complex representation of the world, for doing that would threaten cognitive breakdown if the network was of any great magnitude. While many artificial intelligence workers have addressed the problem head on, Fodor, a long-time advocate of computationalism, regards the problem as so grave as to suggest that central cognition might not be a form of computation after all.
M. C. Dennett, D. (1998), Brainchildren: Essays on Designing Minds, Cambridge, MA: MIT Press. Fodor, J. A. (2000), The Mind Doesn’t Work That Way, Cambridge, MA: MIT Press.
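The naive strategy gestured at in the entry above – re-examining every representation after each change – can be pictured with a rough Python sketch. The toy facts, the single update rule and all the names below are invented purely for illustration and are not drawn from Dennett or Fodor; the only point is that the cost of the naive strategy grows with the size of the representational network.

# A toy world model: a handful of representations standing for facts.
# In a realistic agent this dictionary would be enormous.
world_model = {
    "wagon_in_room": True,
    "battery_on_wagon": True,
    "bomb_on_wagon": True,
    "lights_on": True,
    "door_open": True,
}

def after_pulling_wagon_out(model):
    # The robot performs one small action: it pulls the wagon out of the room.
    model = dict(model, wagon_in_room=False)
    examined = 0
    # Naive strategy: check every single representation to decide whether it
    # still holds, even though almost all of them are unaffected.
    for fact in list(model):
        examined += 1
        if fact.endswith("_on_wagon"):
            # Whatever sits on the wagon has left the room along with it.
            model[fact.replace("_on_wagon", "_in_room")] = False
    return model, examined

updated_model, examined = after_pulling_wagon_out(world_model)
print(examined, "representations examined for one small change")

The difficulty the entry points to is that this exhaustive pass is the only obviously correct strategy for such a system, yet it becomes computationally hopeless once the network of representations is of any great magnitude.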
functionalism. Tables are not defined in terms of their physical structure since they can be made out of all kinds of stuff, including metal, wood, and plastic. They are defined according to their function: a table is (roughly) something which we use to put things on. Functionalists have a related view of the mind. I can feel pain and so perhaps can a tuna fish, yet our brains are structured differently. It is also conceivable that creatures on other planets can feel pain, and perhaps future robots, but such beings will have very different brains to us and to tuna. Mental states are not therefore defined in terms of their physical structure; they are, rather, defined by their causal relations. Pains are the kind of state that are caused by bodily damage and that lead to avoidance behaviour and depression. If an alien creature is in the kind of state that bears those relations with its behaviour and other mental states, then it is in pain. As Hilary Putnam (1975, p. 291) claims: ‘we could all be made of Swiss cheese and it wouldn’t matter.’ A key problem for functionalism lies in accounting for the subjective feel of mental states, or what is called their phenomenology or qualitative nature. Pain may have the causal relations that functionalists say it does, but pain also feels a certain way – it hurts – and it is not clear how functionalists can account for this fact. D. O. Putnam, H. (1975), ‘Philosophy and Our Mental Life’, in Philosophical Papers, vol. 2, Cambridge: Cambridge University Press.
H2O/XYZ. XYZ is a fictional chemical invented by Hilary Putnam in his Twin Earth thought experiment. It is superficially identical to what we on Earth call ‘water’: it is colourless, tasteless, odourless and thirst-quenching; it rains from the sky and flows down rivers into the sea. It is, however, physically different from H2O (water on Earth). It is not comprised of hydrogen and oxygen; its chemical formula is very long and complicated and is abbreviated to XYZ. It is different stuff. Descriptivists claim that both XYZ and H2O are water since (at normal temperature and pressure) they both satisfy our common sense description of water: it is the stuff that flows down rivers into the sea, is colourless, etc. Essentialists, however, claim that chemists have discovered an essential property of water, that it is H2O, and nothing can be water that does not have this chemical structure. Conclusions concerning this are relevant to debates concerning semantic and cognitive externalism.
D. O. Putnam, H. (1975), ‘The Meaning of “Meaning” ’, in Philosophical Papers, vol. 2, Cambridge: Cambridge University Press.
higher-order thought. A thought or belief about one’s own mental states. In the philosophy of mind the term most often comes up in discussions of so-called higher-order thought theories of consciousness, according to which the presence of a higher-order thought or belief is what makes the lower-order state it is about conscious. Numerous objections have been raised against such views, generally taking the form of counterexamples aimed at showing that, and how, a state’s being conscious can come apart from its being accompanied by a second-order belief about it. As a result of such objections, higher-order thought theories have been revised and refined in numerous ways and have taken on a number of distinct forms. Whether any such theories can however ultimately stand up to scrutiny while retaining their essence as theories of what constitutes consciousness remains open to question. I. M. Rosenthal, D. (1991), ‘Two Concepts of Consciousness’, in D. Rosenthal (ed.), The Nature of Mind, Oxford: Oxford University Press.
intentionality. Intentionality is the ‘aboutness’ or directedness of thought towards an object. A thought intends an object, establishes an intentional relation with it as the thing that the thought is about, which is then also and equivalently said to be the object intended by the thought or the thought’s intended object. Intentionality is the property of thought or the expression of thought whereby it is about, refers to, or is directed on something. The intentionality of thought is recognized by Aristotle and emphasized by certain of the medieval philosophers. It was revived for modern philosophy in the nineteenth century by Franz Brentano in his 1874 book, Psychologie vom empirischen Standpunkt [Psychology from an Empirical Standpoint]. Brentano’s intentionality thesis combines two parts. Brentano maintains both that: (1) every psychological (‘psychic’) phenomenon intends an object, and (2) intended objects belong to and are contained in the psychological phenomena by which they are intended. Many of Brentano’s later followers accepted (1) but denied (2), and most if not all intentionalists in the philosophy of mind today are of the same inclination, accepting Brentano’s intentionality thesis in the general form of Brentano’s thesis (1), sometimes with modifications, while rejecting the specific immanence or in-existence thesis that Brentano accepted in (2). Without (2) it is possible to say in closer conformity to common sense that at least some intended objects exist outside the mind, so that we can perceive and want, hope for, believe about, doubt, love, hate, and so on objects that do not merely
exist in but transcend the acts of perception, wanting, hoping, etc. Thus, at least some intended objects are intended as existing in the external world outside of thought if they do not simply belong to the thinker’s imagination. Brentano, F. (1973), Psychology from an Empirical Standpoint, ed. L. McAlister, trans. A. Rancurello, D. Terrell and L. McAlister, London: Routledge and Kegan Paul. Dennett, D. (1987), The Intentional Stance, Cambridge: MIT Press. Searle, J. (1983), Intentionality: An Essay in the Philosophy of Mind, Cambridge: Cambridge University Press.
internalism/externalism. Internalism maintains that the defining properties of mental states are non-relational or intrinsic. Such properties are either identical to or supervene on the internal, microstructural properties of sentient and sapient beings (e.g. their brain or bodily states). Externalism maintains that at least some mental state properties are extrinsic and hence necessarily relational. The difference between intrinsic and extrinsic properties can be illustrated by the difference between mass and weight. The mass of a body (an intrinsic property) is a measure of how much matter it contains, whereas the weight of the same body (an extrinsic property) depends on situational factors – such as the force of attraction between it and other bodies. If internalism is true mental states can be individuated wholly by appeal to internal properties that reside inside the skin or head of an agent; if externalism is true mental state individuation will necessarily require an additional appeal to external factors as well. D. H. Fodor, J. A. (1994), The Elm and the Expert, Cambridge, MA: MIT Press. Putnam, H. (1975), ‘The Meaning of “Meaning” ’, in Mind, Language and Reality: Philosophical Papers, vol. 2, Cambridge: Cambridge University Press. — (1988), Representation and Reality, Cambridge, MA: MIT Press.
introspection. The special way each person has of knowing the contents of his or her own mind, although what this way is, whether it is in fact special and whether it is a way of knowing at all, are all matters of some controversy. The etymology of the term ‘introspection’ suggests that introspection is essentially a form of ‘inner perception’ and that to ‘introspect’ is in some sense to ‘look inside’. But, one might ask, in what sense? The verbs ‘to perceive’ and ‘to see’ are pre-theoretically used in a variety of ways, in particular to mean quite generally ‘to know’ or to ‘understand’ – as when speaking of ‘seeing’ what someone means. Similarly, to ‘look’ is often used to mean ‘to think about’ or ‘to investigate’ – as when promising to ‘look into some matter’. In its modern theoretical sense however ‘perception’ refers fundamentally to external sense perception. Speaking of ‘introspection’ in the theoretical context of philosophy can thus be seen to establish an analogy between so-called ‘inner perception’ and external
sense perception, or between our way of knowing our own mind on the one hand and our way of knowing that which lies outside it, through sense perception, on the other. Few current philosophers however believe this analogy to be legitimate. Introspection is argued instead to be either a (particularly quick and well informed) process of inference from observation of our behaviour (and so not a special way of knowing, distinct from our way of knowing the mental states of others), or some essentially non-epistemic process of avowal of our current mental states (such as that of mere expression of our beliefs and desires – and so not a way of knowing at all). Beyond standard epistemic and non-epistemic accounts of introspection, the recent literature has seen a number of further, often more subtle, views of introspection being espoused on both sides. The central task for any such theory remains that of providing an account of at least the appearance of our having a special way of knowing our own minds which is unlike our way of knowing the minds of others and which displays certain key characteristics (e.g. of immediacy, first-person authority and immunity to certain types of error). I. M. Armstrong, D. M. (1968), A Materialist Theory of Mind, London: Routledge and Kegan Paul. Heal, J. (1994), ‘Moore’s paradox: a Wittgensteinian approach’, Mind (January). Shoemaker, S. (1996), ‘Self-Knowledge and “Inner Sense” ’, in S. Shoemaker, The First Person Perspective and Other Essays, Cambridge: Cambridge University Press.
language of thought. A language that the mind employs in thinking, as postulated by Fodor. For Fodor, the language of thought is shared by all members of the human species regardless of what natural language they speak. Like a natural language, it consists of a vocabulary of a finite number of simple symbols along with a finite number of formation rules for combining these simple symbols into more complex symbols. In addition to their syntactic properties, sentences of the language of thought have semantic properties, and the meaning or content of such a sentence is a product of the meaning of its component simple symbols and its syntactic structure. For Fodor, whenever an individual has a particular propositional attitude (e.g. a belief that dogs chase postmen) there will be a sentence of the language of thought bearing the appropriate meaning or content located in his or her mind. Sentences of the language of thought are physically embodied by states of the brain, but they are multiply realizable at the physical level. M. C. Cain, M. J. (2002), Fodor: Language, Mind and Philosophy, Cambridge: Polity.
Fodor, J. A. (2008), LOT 2: The Language of Thought Revisited, Oxford: Oxford University Press.
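The compositional picture described in the entry above can be pictured with a toy Python sketch. The symbols, the single formation rule and the content function below are invented here purely for illustration; they are not Fodor’s own formalism.

# A toy symbolic system: a finite vocabulary of simple symbols, one formation
# rule, and a content function that fixes the meaning of a complex symbol
# from its parts and their arrangement. All names are invented for illustration.
SIMPLE_SYMBOLS = {"DOG", "POSTMAN", "CHASES"}
MEANINGS = {"DOG": "dogs", "POSTMAN": "postmen", "CHASES": "chase"}

def combine(predicate, subject, obj):
    # Formation rule: a two-place predicate plus two terms yields a complex symbol.
    assert {predicate, subject, obj} <= SIMPLE_SYMBOLS
    return (predicate, subject, obj)

def content(sentence):
    # The content of the complex symbol is a product of the meanings of its
    # constituent simple symbols and of its syntactic structure.
    predicate, subject, obj = sentence
    return f"{MEANINGS[subject]} {MEANINGS[predicate]} {MEANINGS[obj]}"

belief_sentence = combine("CHASES", "DOG", "POSTMAN")
print(content(belief_sentence))  # prints: dogs chase postmen

Because the vocabulary and the rule are finite while the combinations are systematic, the same stock of symbols yields indefinitely many distinct contents, which is the feature the entry ascribes to the language of thought.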
logical behaviourism. See behaviourism. mental causation. It is widely assumed that mental properties, such as being angry, feeling uneasy, or noticing that the bridge is unsafe, can and typically do cause action and behaviour. Such mental causes are often thought to be productive (i.e. mental events or properties generate other events, just as the collision of one billiard ball with another is thought of as forcefully producing motion). However it is also possible to think of mental causation in the weaker terms of counterfactual dependence where the occurrence of one event depends upon the occurrence of another, as in the case in which my washing up the dishes depends upon my remembering a promise I made to my wife (such that had I not remembered, the washing up would not have been done by me). Those philosophers who insist that mental causation must be of the productive variety and who also endorse some form of non-reductive physicalism encounter a famous problem: the exclusion or pre-emption problem. It is the worry that any causal contribution that a mental property might make to the occurrence of another event (e.g. an action) will be systematically usurped by its neural or other physical realizers. This is due to the assumed relation of ontological dependence holding (vertically) between mental states and their realizers. As the problem is entirely metaphysical it holds even though, for practical purposes, the mentalistic and, say, neuroscientific explanations are quite different. D. H. Davidson, D. (1980), ‘Actions, Reasons and Causes’, in Essays on Actions and Events, Oxford: Clarendon Press. Heil, J., and A. Mele (eds) (1993), Mental Causation, Oxford: Oxford University Press. Kim, J. (2008), Mind in a Physical World: An Essay on the Mind-Body Problem and Mental Causation, Cambridge, MA: MIT Press.
mentalese. See language of thought. methodological behaviourism. See behaviourism. mind-body problem. The mind-body problem is the central and most difficult problem in the philosophy of mind. It is the challenge of explaining the metaphysics of the relation between the physical body, especially the brain and nervous system, and the events of consciousness experienced by a thinking psychological subject.
Efforts to solve the mind-body problem can be divided into two main categories: (1) those that try to reduce or eliminate all meaningful reference to the mind to or in favor of purely physical material entities, properties and events, and (2) those that argue that no such reductions or eliminations could possibly be adequate to the relevant data we find not only in reflecting on the content of our own subjective psychological lives and those we attribute to others, that we learn about from their expressions of thought, but also in the objective external behavior of other psychological subjects. The main theories in category (1) include eliminative or reductive (a) behaviorism, (b) materialism, and (c) functionalism, the latter subsuming (d) computationalism as a special type that is otherwise simply identified with functionalism. These theories maintain either that there are no such things as thoughts, states of consciousness, or the mind, or that whatever can truly be said of mental entities and events can be interpreted in a vocabulary consisting entirely of terms for purely physical material substances, entities, properties and events. The main theories in category (2) include: (a) what is alternatively called, after Descartes, Cartesian, substance, or ontic dualism, and (b) property dualism. Property dualism in category (2b) is in turn sub-divisible into (i) intentionalist- and (ii) qualia-based philosophies of mind that emphasize either intentionality or the existence and nature of qualia as explanatorily ineliminable and physically or materially irreducible. There is also no reason why a (2b) property dualism could not accept both (i) and (ii) in opposition to mind-body eliminativism or reductivism in trying to solve or at least clarify the mind-body problem, and then alternatively order (i) and (ii) in terms of explanatory or other priority, with either (i) taking precedence over (ii) or the reverse, or treat (i) and (ii) as distinct but explanatorily equally significant and important grounds for denying the truth of any mind-body solution in category (1). Since it is absurd to suppose as eliminativism does that, despite appearances, thoughts and the mind do not exist, appearances themselves being states of mind, and since Cartesian, substance or ontic dualism is widely believed to be indefensible, it is possible to speak in practical terms of the mind-body problem as a contest between some form of physical reductivism, behavioral, material, or functional (computational) on the one hand, and, on the other, some form of property dualism that emphasizes intentionality over qualia or qualia over intentionality, or gives equal importance to both intentionality and qualia in understanding the nature of mind. D. J. Descartes, R. (1641), The Meditations on First Philosophy, in which the Existence of God and the Distinction of Mind and Body are Demonstrated, trans. and ed. J. Cottingham, Cambridge: Cambridge University Press. Jacquette, D. (2009), Philosophy of Mind: The Metaphysics of Consciousness, London: Continuum Books.
McGinn, C. (2000), The Mysterious Flame: Conscious Minds in a Material World, New York: Basic Books.
moral psychology. The branch of ethics that deals with human and animal psychology, particularly as it relates to moral judgement and motivation. Its primary aim is to investigate the relation between vice and virtue and our (general and individual) cognitive and conative abilities, including those of belief, desire, impulse, intention and volition. Various views within moral psychology debate the extent to which ethical norms and reasons are relative to agential character traits or dispositions (e.g. (a) whether or not moral judgements necessarily motivate and (b) whether one can have a normative reason for doing something even if there is nothing in one’s motivational set that could ever (either directly or indirectly) move one to do it). C. S. Blackburn, S. (1998), Ruling Passions, Oxford: Oxford University Press. Cottingham, J. (1998), Philosophy and the Good Life, Cambridge: Cambridge University Press. Smith, M. (1994), The Moral Problem, Oxford: Blackwell.
multiple realizability. Computational devices can be made out of various kinds of physical systems: metal cogs (as was the first computer, Babbage’s difference engine), valves, silicon circuit boards, or perhaps, in the future, lasers. Computers are therefore multiply realizable. Similarly, according to the functionalist, human physiology is not the only kind of physical system in which minds can be realized. There may perhaps be alien species that have mental states akin to ours – they might feel pain and desire food – but these aliens could be made out of different physical stuff. Mental states are therefore multiply realizable. D. O. Kim, J. (1992), ‘Multiple Realizability and the Metaphysics of Reduction’, Philosophy and Phenomenological Research, 52, 1–26.
neural network. A network of simple processing units modelled on the human brain. Such networks come in a variety of forms, but a standard network consists of units arranged into three layers: an input, an intermediate and an output layer. At any point in time each unit will be in a state of activation and require a certain amount of stimulation in order to become active (this is its threshold value). Each unit in the input layer is linked by a number of connections to many units in the intermediate layer and the intermediate units are similarly linked to units in the output layer. Impulses are transmitted along these connections.
The connections have weights so that they can amplify or dampen the strength of an impulse they carry and can be either excitatory or inhibitory. When units in the input layer are stimulated by the outside world, impulses pass along connections to the intermediate layer so stimulating activity there. Impulses are then passed to the output layer resulting in patterns of activity at that level. Consequently, the system transforms patterns of activation at the input layer into patterns of activation at the output layer. The system’s input-output behaviour is determined by the nature and weight of the connections and the threshold values of the units. Adjusting the connection weights will alter the system’s input-output behaviour. As the patterns of activation can have semantic significance, the network can serve as an information processor. M. C. Bechtel, W., and A. Abrahamsen (2002), Connectionism and the Mind, second edition, Oxford: Blackwell. Haugeland, J. (ed.) (1997), Mind Design II, Cambridge, MA: MIT Press.
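The three-layer arrangement described in the entry above can be sketched in a few lines of Python. Everything below is an invented illustration, not drawn from the works cited in the entry: the layer sizes, weights and threshold values are arbitrary, and each unit follows a simple step rule on which it becomes active only when its weighted stimulation reaches its threshold.

def step(stimulation, threshold):
    # A unit becomes active (1) only if its incoming stimulation reaches
    # its threshold value; otherwise it stays inactive (0).
    return 1 if stimulation >= threshold else 0

def layer_activity(inputs, weights, thresholds):
    # Each downstream unit sums weighted impulses from every upstream unit;
    # positive weights are excitatory, negative weights inhibitory.
    return [
        step(sum(w * x for w, x in zip(unit_weights, inputs)), t)
        for unit_weights, t in zip(weights, thresholds)
    ]

# Two input units, three intermediate units, one output unit (arbitrary sizes).
hidden_weights = [[0.7, -0.4], [0.5, 0.5], [-0.6, 0.9]]
hidden_thresholds = [0.5, 0.8, 0.2]
output_weights = [[1.0, -0.5, 0.8]]
output_thresholds = [0.9]

def network(input_pattern):
    # A pattern of activation at the input layer is transformed into a
    # pattern of activation at the output layer.
    intermediate = layer_activity(input_pattern, hidden_weights, hidden_thresholds)
    return layer_activity(intermediate, output_weights, output_thresholds)

print(network([1, 0]), network([1, 1]))  # different input patterns, different outputs

Changing any weight or threshold value changes which input patterns activate the output unit, which is the sense in which the connection weights and threshold values determine the system’s input-output behaviour.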
other minds. I know that I have a mind, but how can I be sure that others do? I see people writing shopping lists, running for the bus and talking to each other, but it is possible that all these people are just mindless automata, their actions akin to the behaviour of non-sentient robots. I might be the only mind in existence! This is called ‘the problem of other minds’. Various solutions to this problem have been offered. I could come to know that others have a mind by analogy. I know that my behaviour is caused by my mental states, and since others behave in a similar way to me, I can infer that their behaviour is caused by their mental states. In contrast behaviourists claim that behaviour is not merely the surface effect of underlying mental causes. As Ryle puts it in The Concept of Mind, ‘Overt intelligent performances are not clues to the workings of minds; they are those workings. Boswell described Johnson’s mind when he described how he wrote, talked, ate, fidgeted and fumed.’ For the behaviourist, then, it is my perceptual experience of the behaviour of others that justifies my belief in other minds. Last, and most popular, is a theoretical account of the mind. I am justified in believing that others have minds in the same way that I am justified in believing that stars are giant nuclear reactions. The physics of nuclear reactions can be used to predict and explain the behaviour of stars, and folk psychological categories can be used to predict and explain the actions of people. The reasoning applied here is inference to the best explanation. If there is a theory that explains the occurrence of certain phenomena better than any alternative theory, then we are justified in believing that theory. I am therefore justified in believing in folk psychology and the existence of other minds.
D. O. Avramides, A. (2001), Other Minds, London: Routledge.
perception. Perception can be generally characterized as a process in virtue of which we select, organize and interpret sensory stimulation and sensation into a coherent experience of the world. We can categorize perception as inner and outer: inner perception – bodily awareness – involves awareness or apprehension of the goings on in our bodies (proprioception); and outer or sensory perception involves awareness or apprehension of the goings on in the external world outside our bodies. The latter involves the use of our senses of sight, hearing, touch, smell and taste. The traditional philosophical problem of perception involves the question of whether we perceive physical objects and properties directly or indirectly in virtue of being directly aware of some sensory or mental items that represent the physical objects and their properties. D. P. Maund, B. (2003), Perception, Chesham: Acumen. Robinson, H. (1994), Perception, New York: Routledge.
perceptual content. Perceptual content refers to the things and their properties that feature in one’s perceptual experience, or in other words, it is what is conveyed to one by one’s perceptual experience via the five sense-modalities and proprioception. According to some philosophers, perceptual content is representational since it seems that our perceptual experiences are by their nature such that they present the world as being a certain way. Perceptual experiences seem to have accuracy conditions; they are accurate in certain circumstances and inaccurate in others and therefore are assessable for accuracy. If one thinks that experiences have representational content then one thinks of them as belief-like in some respects: believing, for instance, that there is a cup on the table is being in a state with representational content. But if one claims that an experience has representational content, that does not commit one to identifying experiences with beliefs, since (1) experience may not be the same attitude as belief, and (2) if (1) is false, they may be both attitudes to a different kind of content. Regarding the latter, some philosophers believe that perceptual states can represent the world without the subject of those states possessing the concepts required to specify their content. On this view, one’s experience of the world is not constrained by one’s conceptual capacities. We can further ask, in virtue of what do perceptual experiences have the content they have and represent the state of affairs they represent? Externalism about perceptual content (also called phenomenal externalism) holds that the contents of experience are not determined by the internal states of the brain but
rather by external facts. For example, my Twin Earth counterpart (a molecular duplicate of me in a different external environment) and I, being in exactly the same brain state, have different experiences. On this view, what makes it the case that a particular experience has the content it has depends on (external) relations outside the subject’s body, such as social and causal relations to things in the environment. According to internalism, the contents of experience are determined by the internal states of the brain, not by external facts; my Twin Earth counterpart and I, being in exactly the same brain state, cannot have different experiences. D. P. Crane, T. (2003), ‘The Intentional Structure of Consciousness’, in Q. Smith and A. Jokic (eds), Consciousness: New Philosophical Perspectives, Oxford: Oxford University Press. Gunther, Y. (2003), Essays on Non-Conceptual Content, Cambridge, MA: MIT Press. Lycan, W. (2001), ‘The Case for Phenomenal Externalism’, Philosophical Perspectives, 15, 17–35.
phenomenal concepts. Phenomenal concepts are our concepts of conscious states. Proponents of the phenomenal concept strategy (Stoljar, 2005) claim that these concepts have a special nature and given that nature, it is not surprising that we find an explanatory gap between physical processes and phenomenal properties: the former are conceived under physical concepts and the latter under phenomenal concepts. The gap then involves the relationship between these concepts, and our possession of phenomenal concepts can be explained in physical terms. D. P. Chalmers, D. (2007), ‘Phenomenal Concepts and the Explanatory Gap’, in T. Alter and S. Walter (eds), Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism, Oxford: Oxford University Press, pp. 167–95. Stoljar, D. (2005), ‘Physicalism and Phenomenal Concepts’, Mind and Language, 20, 469–94.
phenomenology. The study of ‘phenomena’ or the appearances of things as they appear in our experience. The properties in virtue of which there is something it is like for one to be in a mental state are called phenomenal properties and constitute the ways in which our experiences or phenomenally conscious states differ; there is for instance something it is like for one to feel a sharp pain or an itch in one’s finger, as there is also something it is like for one to smell coffee brewing or to see the vivid colours of a sunset. Mental states with such properties include perceptual experiences, bodily sensations and felt emotions or moods but may also include conscious thoughts and propositional attitudes
(i.e. experiences in which no qualitative property, for example no sensory or affective quality, is somehow involved). An example might be the phenomenon of ‘understanding experience’ (Strawson, 1994). The latter refers roughly to there being something it is like for one to understand a spoken sentence over and above the stream of sound or of any sensory qualities that may be somehow involved. ‘Phenomenology’ in the Continental tradition refers to the study of structures of consciousness as experienced from the first-person point of view; the study of experiences themselves and their interrelations, not the search for laws or causal explanations. Phenomenology is not concerned so much with the world as such but rather the world as appearing to consciousness. It goes beyond the study of the phenomenal character of our experiences, addressing the meaning things have in our experience, that is, the significance of objects and events, the flow of time, the structure of mental content, temporal awareness, bodily awareness, memory, imagination, embodied action and the self. D. P. Brentano, F. (1995/1874), Psychology from an Empirical Standpoint, ed. L. McAlister, trans. A. Rancurello, D. Terrell and L. McAlister, London and New York: Routledge. Carruthers, P. (2006), ‘Conscious Experience versus Conscious Thought’, in U. Kriegel and K. Williford (eds), Consciousness and Self-Reference, Cambridge, MA: MIT Press. Smith, D., and A. Thomasson (eds) (2005), Phenomenology and Philosophy of Mind, Oxford and New York: Oxford University Press. Strawson, G. (1994), Mental Reality, Cambridge, MA: MIT Press.
physicalism. According to substance dualism there are two kinds of substance in the world, mental and material (or physical). Descartes claimed further that each of us is a union, made up of a material substance – the body – and a mental substance – the mind. By ‘substance’ we normally mean something that can exist independently, have properties, and enter into relationships with other substances. If something is a substance then it can exist in such a way as to stand in need of nothing beyond itself in order to exist. Physicalists deny that there are two kinds of substance. According to them, there is only one kind of substance, namely physical substance, and all that exists in the world are bits of matter in space-time. There are, in the main, two versions of physicalism: reductive and non-reductive physicalism (or property dualism). According to the former, there is one kind of substance, physical, and mental properties are reducible to, and reductively identifiable with, physical properties. According to the latter there is one kind of substance, physical, but two different kinds of properties, mental
and physical properties, and the former are distinct from and irreducible to the latter. D. P. Chalmers, D. (2002), ‘Consciousness and Its Place in Nature’, in id. (ed.), Philosophy of Mind: Classical and Contemporary Readings, New York: Oxford University Press, pp. 247–72. Kim, J. (2005), Physicalism, or Something Near Enough, Princeton: Princeton University Press.
privileged access. The special epistemic position each person stands in with respect to the contents of his or her own mind. It is often noted in the literature on self-knowledge that we seem to be better placed than anyone else to say what mental states we are in. Whether we are better placed by virtue of having a special way of accessing our own mental states (e.g. some form of ‘inner sense’) or merely by virtue of tending to have more evidence available (due to our greater proximity with ourselves) is a matter of some controversy. Equally controversial is the wider issue of whether the authoritative status of our judgements about our own mind is truly the result of a cognitive achievement on our part (and so the result of a form of access), or merely the consequence of some special feature of our self-ascriptive judgements of the form ‘I believe that p’ or ‘I am happy’ or indeed ‘I am in pain’. I. M. Alston, W. (1971), ‘Varieties of Privileged Access’, American Philosophical Quarterly, 8, 223–41.
propositional attitudes. Propositional attitudes are psychological attitudes of various types that relate a thinker to specific propositions. The content of such propositions is paradigmatically expressed in natural language by sentential that-clauses – such as, ‘London is twenty miles away’. For example, a thinker, X, can believe, desire, hope, fear, or recognize that p; where the mental state verb denotes X’s attitude and p denotes the content to which X’s attitude relates. The propositions in question may be true or false (e.g. things may or may not be as the thinker takes them to be) and as such propositional attitudes are thought to be states of mind that possess or relate to truth-evaluable contents. Depending on one’s view of the nature of propositions, propositional attitudes will be regarded as simple or more complex states of mind. They either relate a thinker directly to some actual or possible state of affairs or indirectly to some actual or possible state of affairs via a specific mode of presentation or representation. D. H. Russell, B. (1918), ‘The Philosophy of Logical Atomism’, The Monist; reprinted in R. C. Marsh (ed.), Logic and Knowledge: Essays 1901–1950, London: Unwin Hyman, 1956.
Frege, G. (1988), ‘Thoughts’, in N. Salmon and S. Soames (eds), Propositions and Attitudes, Oxford: Oxford University Press; alternative translation: ‘The Thought: A Logical Inquiry’, in P. F. Strawson (ed.), Philosophical Logic: Oxford Readings in Philosophy, Oxford: Oxford University Press, 1976. Quine, W. V. (1956), ‘Quantifiers and Propositional Attitudes’, Journal of Philosophy, 53; reprinted in W. V. Quine, The Ways of Paradox, Cambridge, MA: Harvard University Press, 1966.
psychoanalysis. A psychological method for investigating the mind, originally with the further clinical goal of treating mental illness. While the method was founded by Breuer and Freud, it was later developed (in ways that they would not have necessarily approved of) by Jung, Melanie Klein and Freud’s daughter Anna. Non-clinical psychoanalytic theories have also been put forth by critical thinkers such as Lacan. Despite numerous important theoretical differences between all these, they remain united by a focus on the analysed person’s descriptions of their own thoughts, emotions, defences, dreams, fantasies and free associations. Psychoanalysis is often mistakenly conflated with depth psychology, viz. any psychological approach that focuses on the unconscious. C. S. Freud, S. (1964), New Introductory Lectures on Psychoanalysis, trans. J. Strachey, London: Hogarth Press.
psychological behaviourism. See behaviourism. psychology. The study of how human and animal minds function, with the primary aim of providing explanatory theories of how our knowledge of the mind helps to explain or manipulate behaviour. Much psychological theory thus often falls between philosophy and science (in particular psychiatry); indeed the godfather of modern psychology is often said to be Nietzsche, who put forth his concept of a will to power as an explanation of the behavioural drives of all living things. Not unlike the philosophy of mind, psychology divides itself into numerous specialities such as cognitive, motivational, social, neural, educational, perceptual, or cultural psychology. Such theories may then be variously applied in fields as diverse as advertising, military strategy and emotional counselling. C. S. Smith, E. et al. (eds) (2002), Atkinson and Hilgard’s Introduction to Psychology, fourteenth edition, Belmont, CA: Wadsworth Publishing.
qualia. The word ‘qualia’ is used in stronger and weaker ways in the philosophical literature. Sometimes it is used to designate the distinctive quality,
characteristic feel or phenomenal character of token experiences (perceptions, sensations, feelings, moods) (i.e. what-it-is-like ‘for’ a subject to undergo such experiences). In this broad use it is often equated with having an idiosyncratic first-personal point of view or perspective of a certain experiential character. In its more restrictive use, the term denotes inner, intrinsic and introspectable mental particulars or properties – particulars or properties that are entirely inaccessible and invisible to third-personal analysis and with which such subjects have a privileged and private acquaintance. These properties are allegedly unique and are utterly distinct from purely intentional, representational or functional properties. So construed, some believe that qualia literally constitute part of the content of logically private, privileged first-person reports of our inner states of mind and that a special first-personal science of consciousness would need to be developed if we are to study such properties. D. H. Chalmers, D. (1996), The Conscious Mind, Oxford: Oxford University Press. Dennett, D. C. (1991), Consciousness Explained, New York: Penguin Books. Flanagan, O. (1993), Consciousness Reconsidered, Cambridge, MA: MIT Press.
reasons. Agents are said to act for or in the light of reasons when they act with some aim or purpose in mind. Many philosophers hold that all intentional action is performed for a reason while almost all hold the converse view that all actions performed for reasons are intentional. There is much ontological debate over whether reasons should be conceived of as facts, states of affairs, propositions, mental states, or some disjunctive combination. Reasons why we do things need not be reasons for which we do them, nor are the latter always normative reasons for acting (though it had better be possible to act for a good reason). C. S. Dancy, J. (2000), Practical Reality, Oxford: Oxford University Press. Parfit, D. (1984), Reasons and Persons, Oxford: Oxford University Press. Sandis, C. (ed.) (2009), New Essays on the Explanation of Action, London: Palgrave Macmillan.
reduction. In its most general terms, reduction is the translation of a theory, including a single proposition in the limiting case, into another theory that is ontically committed to fewer entities or properties, or to fewer kinds of them, or that requires fewer explanatory principles than the original theory. When such a translation is correct, the original theory is said to have been reduced either to the explanatorily simpler or ontically more economical theory, or, where applicable, to both. One measure of the effectiveness of a reduction is to compare the complexity of explanations and the domain of existent objects belonging to both the original theory and its proposed reduction, a matter that can sometimes be decided by
comparing explanations for relative difficulty and counting the terms needed in the vocabularies of target theories and their putative reductions. Where philosophy of mind is concerned, mind-body reduction in particular is the reduction of the mind or mental properties to exclusively non-mental, purely physical behavioural, material, or functional (computational) properties. Whether such a reduction can possibly succeed in the case of all truths about consciousness, mental or psychological phenomena, the mind and its thoughts, is one way of formulating the mind-body problem. D. J. Horst, S. (2007), Beyond Reduction: Philosophy of Mind and Post-Reductionist Philosophy of Science, Oxford: Oxford University Press. Searle, J. (1992), The Rediscovery of the Mind, Cambridge, MA: MIT Press. Stich, S. (1983), From Folk Psychology to Cognitive Science: The Case Against Belief, Cambridge, MA: MIT Press.
representation. Generally, the notion of representation entails one thing’s standing for something else or itself (self-representation). Representing is not instantiating. Universals or objective properties (e.g. colours) are instantiated by particulars (or in the environment). In general, the mind does not instantiate the properties it represents. An object’s shape, for example, is instantiated in the environment and it is represented in one’s perceptual experience. For a better illustration, we might contrast what we might call the ‘representational’ with the ‘instantiation’ view of phenomenal character. According to some representationalists (e.g. Jackson, 1977) the phenomenal character of our sensory experiences, that is, the apparent objects and properties of those experiences, are merely representational, namely they comprise or contain the content of those experiences without that content thereby being actually instantiated in the mind. Contrariwise, according to some philosophers (Russell, 1998) when I experience a red tomato for example, the content of that experience involves the instantiation of an oval red object or the properties of this object in the mind. On this view, the object or the property of looking red is not representational. It is an intrinsic property of the mind. The idea of representation has been central in discussions of intentionality for many years. It is often assumed that to have intentionality is to have content. Mental content is otherwise described as representational/intentional or informational content, and intentionality is seen as the way of bearing or carrying information. Now, if we say what the intentional content of a state of mind is we thereby determine the conditions that must be met if this content is to be satisfied (i.e. the conditions of its truth). Thus if I believe that ‘Gordon Brown is the British Prime Minister’ my belief has a certain content thereby describing the conditions under which it is true. Hence representational states have correctness conditions partly determined by their contents. A belief for example is
correct just in case its content is correct, and a proposition which gives that content is correct just in case it is true. D. P. Crane, T. (1992), ‘The Non-Conceptual Content of Experience’, in id. (ed.) The Contents of Experience, Cambridge: Cambridge University Press, pp. 136–57. Jackson, F. (1977), Perception: A Representative Theory, Cambridge: Cambridge University Press. Macpherson, F., and D. Platchias (forthcoming), Representationalism, Cambridge, MA: MIT Press. Tye, M. (1995), Ten Problems of Consciousness, Cambridge, MA: MIT Press.
self. The ‘I’ who experiences, thinks, believes and desires. This conception of the self as the subject of psychological states has long been argued to be responsible for creating the Cartesian ‘fiction’ of the self as a purely mental entity which persists unchanged through time and in which psychological states inhere. Hume famously rejected this view, arguing that we have no impression of such a self but only of a sequence of constantly changing ‘perceptions’ (i.e. mental states) and hence no idea of the self except as a bundle of such perceptions. Most current philosophers also reject, though often for independent reasons, the Cartesian view of the self as a mental entity, yet often retain Descartes’s assumption that the ‘I’ of psychological self-attribution refers to an entity, namely to the human being which thinks, believes, desires and which is located in space at the point of origin of our spatial experiences or at the point of reference of our spatial thoughts of the form ‘I am here’ or ‘that is over there’. I. M. Descartes, R. (1912 [1637]), A Discourse on Method, Meditations and Principles, Toronto: Dent. Hume, D. (1978 [1740]), Treatise of Human Nature, ed. L. A. Selby-Bigge, Oxford: Clarendon Press. Valberg, J. (2007), Dream, Death and the Self, Princeton: Princeton University Press.
self-consciousness. Awareness of the self, or, more commonly in the philosophy of mind, awareness of one’s mental states, and in particular awareness of them as one’s own, with or without additional awareness of any substantive self. Although sometimes used interchangeably with the term ‘self-knowledge’, ‘self-consciousness’ is commonly reserved to refer to less explicit, yet (on some views) more fundamental forms of self-awareness. In the phenomenological literature for instance, self-consciousness is thought of as something which does not arise merely upon attentive reflection, but which is already present in, and forms part of, world-directed conscious thought and experience itself.
Whether any such form of self-consciousness can be shown to exist, what it should be taken to amount to, and finally how (if at all and in what sense) it might be argued to be a prerequisite for fully explicit, reflective self-knowledge are complex issues touched upon from the philosophy of Kant to the phenomenological literature all the way to current debates in analytical philosophy of mind. I. M. Bermudez, J. (1998), The Paradox of Self-Consciousness, Cambridge, MA: MIT Press. Kant, I. (1929), Critique of Pure Reason, London: Macmillan Press. Sartre, J. (1969), Being and Nothingness, London: Routledge.
self-knowledge. Typically used in the philosophy of mind to refer to the special knowledge each person has of the contents of his or her own mind. We seem to be able to know a wide range of our own thoughts, beliefs, desires and emotions in a special immediate, authoritative way in which we are not able to know the mental states of others. How is this possible? What is this special way we have of knowing a certain class of our own mental states? The recent literature divides the theoretical options as follows: either (a) we know our own minds inferentially (though perhaps particularly quickly) from observing our own behaviour, or (b) we do so observationally through a form of ‘inner sense’, or (c) we do so not in any way or on any particular epistemic basis, but rather in virtue of the holding of some essentially constitutive link between our first-order conscious states and our second-order self-ascriptive judgements. Which of these options is the closest to the truth and whether they are in fact jointly exhaustive of the theoretical options available remain matters of lively debate. I. M. Boghossian, P. (1998), ‘Content and Self-Knowledge’, Philosophical Topics, 17, 5–26. Moran, R. (2001), Authority and Estrangement: An Essay on Self-Knowledge, Princeton, NJ: Princeton University Press.
semantics. Semantics is the study of meaning. As natural language symbols have meaning, much of semantics falls within the domains of linguistics and the philosophy of language. However, some semantic issues belong to the philosophy of mind because mental states such as propositional attitudes have semantic properties. They can refer to and represent particular things, classes of things, and states of affairs and have truth or satisfaction conditions. For example, my belief that aardvarks eat termites is about aardvarks, represents them as being termite eaters, and is true if and only if aardvarks eat termites. One prominent semantic issue in the philosophy of mind involves the explanation of how mental states get the semantic properties that they have in naturalistic terms (i.e. in terms of lower level properties recognized by the natural sciences). Another class of issues relates to the role that the mind has in determining the meaning of words and sentences on the lips of an individual. In this context, semantic externalists such as Putnam argue that linguistic meaning isn’t solely determined by the nature of the individual’s mind considered in isolation from the external world; rather, the nature of the extra-cranial world at the physical level plays a key meaning-determining role. M. C. Fodor, J. (1990), A Theory of Content and Other Essays, Cambridge, MA: MIT Press. Putnam, H. (1975), ‘The Meaning of “Meaning” ’, in id., Mind, Language and Reality: Philosophical Papers, vol. 2, Cambridge: Cambridge University Press.
supervenience. Supervenience is the (ontic) dependence of the existence of a token or type of an entity or property on the existence of a token or type of entity or property. In standard definitions of supervenience originally owing to Jaegwon Kim, in several analyses of distinct but related concepts of supervenience, an entity or property can be said to supervene on itself, and on other entities in which a two-way ontic dependence relation obtains. In his classic paper, ‘Concepts of Supervenience’ (Kim, 1993), and later investigations of related topics, Kim distinguishes between weak and strong supervenience. Kim’s definitions have gained widespread discussion, but not universal adoption. Alternatively, there are also intuitive considerations that support making supervenience into a one-way relation in which if a token or type of entity or property X supervenes on a token or type of entity or property Y, then Y does not also supervene on X. Such a provision also precludes anything supervening on itself, since if X = Y and X supervenes on Y, it would otherwise follow logically that Y supervenes on X. Defining supervenience as an asymmetrical relation from the outset has the further advantage of making intuitive sense of the term ‘supervenience’ in which it is suggested that something, the supervenient entity or property, is superior to, in some way above, the entity or property on which it supervenes, also known as the supervenience foundation or base. The instantiation of particular properties can be said in all of these senses to supervene on the objects that possess the properties, since if the objects did not exist then neither would the instantiations of their properties. On Kim’s distinction, weak supervenience holds that there are no logically possible worlds within which there are Y-indiscernible but X-discernible properties or entities. Strong supervenience in contrast implies that there are no Y-indiscernible but X-discernible properties within the same or different logically possible worlds. Strong supervenience entails weak supervenience when the logically possible worlds under consideration coincide, although weak supervenience generally does not entail strong supervenience.
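Kim’s contrast between weak and strong supervenience can also be put schematically. The rendering below is a sketch on one common reading of the definitions, not a formulation taken from this entry; write $a \approx_Y b$ for ‘$a$ and $b$ are indiscernible with respect to their Y-properties’ (and similarly for X):

$$\text{Weak supervenience of X on Y:}\quad \forall w\;\forall a,b \in w\;(a \approx_Y b \rightarrow a \approx_X b)$$

$$\text{Strong supervenience of X on Y:}\quad \forall w_1,w_2\;\forall a \in w_1\;\forall b \in w_2\;(a \approx_Y b \rightarrow a \approx_X b)$$

Setting $w_1 = w_2$ in the second schema gives back the first, which is one way of seeing why strong supervenience entails weak supervenience while the converse does not generally hold.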
An important application of the concept of supervenience is in trying to understand the metaphysical relation between body and mind. The mind can be said to supervene on the body, in the sense that mental states, properties, and events, tokens and types, supervene on physical states, properties, and events, tokens and types, just in case the mind and its properties would not exist without the body and its properties, although the body could exist without the mind. An interesting possibility is that in which a given human being functions as a zombie, living and behaving verbally and in other ways exactly like a conscious person, while lacking any conscious mental states. We should expect that if the mind supervenes on the body, then if the exact same conditions of body are duplicated, the exact same mental states are also duplicated. If not, then the failure is due to a lack of law-like regularity in the relations whereby the existence of mind and mental occurrences depend on the existence of a living body, especially a more or less normally functioning brain and neural system, so that no positive type-type correlation occurs between the properties of mind and body, even if a positive token-token correlation sometimes but only irregularly and in that sense accidentally obtains. Since we expect that the natural laws of the physical world governing the body on which consciousness supervenes are regular if not simply or conditionally necessary, we can also have confidence in the proposition that if X supervenes on Y, then if Y’s properties as supervenient are duplicated, there will also most probably be a corresponding duplication of X’s properties. D. J. Kim, J. (1993), Supervenience and Mind: Selected Philosophical Essays, Cambridge: Cambridge University Press. Rowlands, M. (1995), Supervenience and Materialism, Avebury: Ashgate Publishing. Tooley, M. (1999), Laws of Nature, Causation, and Supervenience, London: Routledge.
teleofunctionalism. Teleofunctionalists explain mental state properties (e.g. representational content) by focusing on the purpose these properties serve in answering to the needs of complex systems or creatures. For example, certain properties of an inner state (e.g. a natural sign or internal indicator) will be said to have the teleofunction of representing Xs if those properties are ‘supposed to’ track or indicate the presence of Xs or if by responding appropriately to them the organism would, in historically normal conditions, perform as it ought. This emphasis on purpose that such properties or the responses to them fulfil differs from standard functionalist theories that focus exclusively on the causal profile or systemic role of mental states (i.e. those that are understood solely in terms of actual or counterfactual relations to characteristic inputs, other mental states and characteristic outputs, for example, being produced by specific kinds of perceptual stimuli, generating specific kinds of behaviour, etc.). Teleofunctionalists
lay stress on what a mental state is supposed to do as opposed to what it actually does or is disposed to do. By comparison, hearts can be said to have the teleofunction to pump blood whether or not any particular or token heart is in fact capable of doing so under any actual or nomologically possible conditions. Despite introducing a normative dimension into their account of mental states, teleofunctions are ultimately meant to be explained in wholly naturalistic terms, by appeal to standards set, for example, by evolutionary processes, such as natural selection, and individual learning and training. D. H. Dretske, F. (1988), Explaining Behaviour: Reasons in a World of Causes, Cambridge, MA: MIT Press. Millikan, R. (1984), Language, Thought and Other Biological Categories, Cambridge, MA: MIT Press.
third-person perspective. See first-person/third-person perspective.
threshold value. See neural network.
transparency. In the literature on self-knowledge, ‘transparency’ typically refers to a datum, pointed out by (among others) Evans drawing on a remark by Wittgenstein, that when asked about what we believe, we tend to turn our attention not to ourselves but to the world, and to consider evidence not explicitly about our beliefs but about how the world is. In order to answer the question of whether I believe that it is raining, for example, I will look not at myself but out the window, and consider not what mental states I am in but whether it is raining. How this transparency is best to be explained is a matter of ongoing debate. The term ‘transparency’ is frequently used however also in another context, namely in the philosophy of perception, where it refers not to a view about the evidence appealed to in introspection but about what introspection reveals about our perceptual experiences. According to this ‘transparency’ view, introspection reveals only the objects of our experiences out in the world, not any additional ‘phenomenal’ properties of our perceptual states themselves. I. M. Evans, G. (1982), The Varieties of Reference, Oxford: Clarendon Press. Martin, M. (2002), ‘The Transparency of Experience’, Mind and Language, 17, 376–425.
Turing machine. A simple and abstract computing device invented by the British logician Alan Turing. A Turing machine consists of an infinitely long tape divided into squares (each of which can either have a ‘1’ or a ‘0’ written on it or be blank), and a read-write head which scans the squares one at a time. Whenever it scans a square the head will be in one of a finite number of possible states. How the machine responds to what it scans will depend upon its state. Its response will have several elements involving: (1) either leaving the square unchanged or writing a new symbol on it (i.e. a ‘1’, ‘0’ or a blank); (2) moving one square to the left or one square to the right or halting; and (3) moving into some other state or remaining in the same state. The machine’s response to any possible symbol for each of the states that it can be in is specified by a machine table. Turing proved that for any computable mathematical function there is a Turing machine that can compute it. A universal Turing machine can be programmed (by means of strings of symbols printed on its tape) to mimic any possible Turing machine and so is capable of computing any computable mathematical function. Putnam developed an early version of functionalism that compares mental states with Turing machine states as the latter are defined by the machine table in terms of their relations to inputs, outputs and other states rather than their material constitution. M. C. Hodges, A. (1983), Turing: The Enigma, New York: Simon and Schuster. Putnam, H. (1975), Mind, Language and Reality: Philosophical Papers, vol. 2, Cambridge: Cambridge University Press.
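The tape-and-table description above lends itself to a small illustration in code. The sketch below is not drawn from the entry or from Turing’s own formalism: the run helper, the use of a space character for the blank symbol, and the example machine table (which scans to the end of a block of ‘1’s, appends one more ‘1’ and halts) are all invented here purely to show how a machine table drives the read-write head.

```python
# Minimal Turing machine sketch: a machine table maps
# (current state, scanned symbol) -> (symbol to write, move, next state).
from collections import defaultdict


def run(table, tape, state, halt_state, max_steps=10_000):
    # Model the unbounded tape as a dict from position to symbol;
    # squares never written to read back as blank (' ').
    cells = defaultdict(lambda: ' ', enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        write, move, state = table[(state, cells[pos])]
        cells[pos] = write
        pos += {'L': -1, 'R': 1, 'N': 0}[move]
    return ''.join(cells[i] for i in range(min(cells), max(cells) + 1))


# Hypothetical example table: move right over a block of 1s,
# write one more 1 on the first blank square, then halt.
table = {
    ('scan', '1'): ('1', 'R', 'scan'),
    ('scan', ' '): ('1', 'N', 'halt'),
}
print(run(table, '111', 'scan', 'halt'))  # prints '1111'
```

Putnam’s point, on this picture, is that a state such as ‘scan’ is individuated entirely by the table’s relations among inputs, outputs and other states, and not by what the machine is physically made of.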
Turing test. A test proposed by Turing as a precise alternative to the meaningless question of whether a machine is capable of thought. The test is based upon the ‘imitation game’ where the tester presents both a machine and a human with a series of questions via a teletypewriter. On the basis of the answers received, the tester tries to work out which respondent is the human and which is the machine. The machine passes the test if the tester fails to identify it as the machine. M. C. Millican, P., and A. Clark (1996), Machines and Thought: The Legacy of Alan Turing, vol. 1, Oxford: Clarendon Press. Turing, A. (1950), ‘Computing Machinery and Intelligence’, Mind, 59, 433–60.
Twin Earth. Twin Earth is a fictional planet described in a thought experiment that originally appeared in Hilary Putnam’s paper ‘The Meaning of “Meaning” ’. Twin Earth is superficially identical to Earth. There is someone there who looks like you – your ‘twin’ – reading a book that is identical in appearance to the Continuum Companion to the Mind. There is, however, a difference between Earth and Twin Earth. The glass of water on your desk, and all the water on our planet, contains the chemical H2O. The glass of liquid on your twin’s table, and all such liquid on his planet, contains XYZ. H2O and XYZ appear the same: they are colourless, tasteless, odourless, thirst-quenching liquids that rain from the sky and flow down rivers into the sea. They are, though, different liquids since they are made of different stuff. Philosophers of mind discussing this thought experiment often refer to the twin on Earth as Oscar, and the one on Twin Earth as Toscar, and the liquids on their planets as water and twater respectively. Putnam took his thought experiment to show that the meanings of natural kind terms such as ‘water’ could not be wholly determined by items that are literally inside the head of a thinker, items such as mental images, or brain or computational states. Everything inside the heads of Oscar and Toscar is the same when they are talking about what they both call ‘water’, yet Oscar’s word refers to water and Toscar’s to twater. Their words therefore have different meanings even though everything in their heads, and their behaviour, is the same. As Putnam puts it, ‘Meaning just ain’t in the head.’ Others, including John McDowell and Greg McCulloch, take Putnam’s thought experiment to entail a stronger conclusion, one concerning not just the meanings of words, but the contents of thoughts. Oscar has thoughts with the content water; Toscar has thoughts with the content twater. Even though everything in their heads is the same, their thoughts are different. The content of thought is partly determined by our relation to the world: ‘the mind ain’t in the head’ (McCulloch, The Life of the Mind, p. 41). This is cognitive externalism. Putnam himself has now adopted this position. D. O. Pessin, A., and S. Goldberg (1996), The Twin Earth Chronicles: Twenty Years of Reflection on Hilary Putnam’s ‘The Meaning of “Meaning” ’, Armonk, NY: M. E. Sharpe.
what it’s like. Nagel famously wrote that ‘the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism . . . fundamentally an organism has conscious states if and only if there is something it is like to be that organism – something it is like for the organism – the facts of experience [are] facts about what it is like for the experiencing organism’ (Nagel, 1974, pp. 435, 439, emphasis in the original). This quote points directly at ‘what-it-is-likeness’, the salient but difficult to describe feature of a conscious state. If there’s something it is like for one to be in a mental state then the state is experiential. If there is nothing it’s like for one to be in that state, it’s not. However, to say that there is something it’s like for one to be in an experiential state is not merely to mean that there is something that an experience is like. That there is something that an experience is like is a mere truism in that it is plain that there is nothing such that it is not like something. We can say for instance, that there is something that a rock or a table is like. What-it-is-likeness in the Nagelian sense concerns the individual. If there is something it is like for the individual to be in a particular mental state then that state is experiential. What it’s like to be in an experiential state is in the relevant sense what it’s like for one to be in that state. D. P. Jackson, F. (1982), ‘Epiphenomenal qualia’, Philosophical Quarterly, 32, 127–36. Nagel, T. (1974), ‘What is It Like to be a Bat?’, Philosophical Review, 83 (4), 435–50. Tye, M. (1995), ‘What What It’s Like is Really Like’, in Ten Problems of Consciousness, Cambridge, MA: MIT Press, 133–55.
will. The faculty of the will grounds our abilities to do things intentionally, voluntarily, and at will, where this typically (but perhaps not necessarily) involves acting for reasons. Contrary to an influential British tradition dating at least as far back as Hobbes, to act voluntarily is not to perform an act of volition but rather to act (intentionally or otherwise) according to one’s own will (or desire) and not under coercion or duress. This need not involve performing an act of will, the latter requiring a high degree of motivational strength, courage and/or effort. A person’s will is said to be weak if they are too easily prone to change their mind under the influence of others. In philosophy, however, ‘weakness of will’ has become a technical term for the phenomenon of acting against one’s better judgement, commonly also referred to in the literature as akrasia or (even more misleadingly) incontinence. C. S. Hacker, P. M. S. (2000), ‘Willing and the Nature of Voluntary Action’, in id., Wittgenstein – Mind and Will – Part I: Essays, Oxford: Blackwell. O’Shaughnessy, B. (1980), The Will – A Dual Aspect Theory, vols. 1 and 2, Oxford: Oxford University Press.
XYZ. See H2O/XYZ.
zombies. The philosophical zombie is a living organism which is functionally and behaviourally indistinguishable from a conscious one (with which it may share the same environment and the same causal histories), but there is nothing it is like to be that creature (i.e. the creature is non-conscious). According to some philosophers, it is not only conceivable that such a creature could exist but also possible that it exists. The explanatory gap argument revolves around this issue – that is, whether such creatures are conceivable and further whether they are possible. Let P be the proposition that everything physical is as it actually is and Q the proposition that there are phenomenal or experiential properties. According to most philosophers, it is conceivable that (P and ¬Q). Since it is conceivable that (P and ¬Q) then one can ask, why, given that P is the case, is Q the case? Hence the explanatory gap, namely there is no entailment from P to Q; one cannot deduce Q from P. According to some philosophers (Chalmers, 1996; Jackson, 2001), from this epistemic gap one can infer an ontological gap: if we cannot deduce Q from P then we cannot explain phenomenal consciousness in terms of physical processes, and if we cannot explain it in terms of physical processes then phenomenal consciousness is not a physical process. D. P. Brueckner, A. (2001), ‘Chalmers’ Conceivability Argument for Dualism’, Analysis, 61, 187–93. Chalmers, D. (1996), ‘Can Consciousness Be Reductively Explained?’ in The Conscious Mind, Oxford: Oxford University Press. Nagel, T. (1998), ‘Conceiving the Impossible and the Mind-Body Problem’, Philosophy, 73, 337–52. Shoemaker, S. (1999), ‘On David Chalmers’s The Conscious Mind’, Philosophy and Phenomenological Research, 59 (2), 439–44.
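Laid out schematically, the reasoning just described runs roughly as follows. This is a reconstruction offered for orientation, not a formulation given in the entry itself:

$$\begin{aligned}
&1.\ \text{It is conceivable that } (P \wedge \neg Q).\\
&2.\ \text{So } Q \text{ cannot be deduced from } P \text{ (the explanatory gap).}\\
&3.\ \text{If } Q \text{ cannot be deduced from } P\text{, then phenomenal consciousness cannot be explained in terms of physical processes.}\\
&4.\ \text{If phenomenal consciousness cannot be explained in terms of physical processes, then it is not a physical process.}\\
&5.\ \text{Therefore, phenomenal consciousness is not a physical process (the ontological gap).}
\end{aligned}$$

Steps 3 and 4 mark the move from the epistemic gap to the ontological gap which, as the entry notes, only some philosophers endorse.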
Chronology
This is necessarily idiosyncratic, but the hope is to provide a useful desk reference for philosophers of mind who require speedy access to a date or title as well as a bit of context for it. There are a few lines of explanation for almost every entry. I tried to encapsulate in a single sentence as much of what matters as possible. Obviously this almost never works, but a narrative structure really does help one get a grip on the major events in the history of our thinking about the mind. The narrative ends in 1949 with Ryle’s Concept of Mind. My excuse for stopping there is that it’s difficult to say what the impact or meaning of more recent books and papers might be, because we are too close in time to know. Forgive omissions; it is very hard to tell which books and events should be included early on or during unfamiliar centuries, and it gets extremely difficult as we approach the present.
800 BCE Homeric poems taking shape between the late ninth and early eighth century; they characterize the soul thinly, as something lost at death, something which then howls off to Hades.
600 BCE Thales (fl. 600) might view psyche as a mover, force, or impetus, something which initiates the movement of moving things, from animals and people to magnets. Anaximenes (c. 585–c. 528) possibly believes that psyche holds a living thing together and rules or controls it. Pythagoras (fl. 530) accepts metempsychosis; possibly first to locate the soul in the head.
500 BCE Anaxagoras (c. 500–c. 428) seems to argue for a materialist world actuated by a cosmic intelligence, mind or nous. Heraclitus (fl. 500) might believe that psyche is fire, somehow responsible for the changes attending waking, sleeping and death. Parmenides (early to mid-fifth century) distinguishes between false appearances and reality as revealed by reason; might flirt with idealism.
400 BCE Empedocles (c. 495–c. 435) probably formulates the first theory of perception; his talk of the cosmic psychological principles, love and strife, suggests panpsychism to some.
Socrates (c. 469–c. 399), the man not the mouthpiece; might conceive of soul as the bearer of moral qualities. Democritus (c. 460–c. 370) elaborates the atomism of Leucippus, including materialist conceptions of perception and the soul; might be first to tie soul to intelligence. Plato (c. 427–c. 347) distinguishes soul from body, argues for immortality of the soul, ties soul to reason, Phaedo; divides the soul into three parts: reason, spirit and appetite, Republic. Aristotle (c. 384–c. 322) offers an extended, systematic discussion of psychological phenomena, De Anima and Parva Naturalia (c. 350); soul characterized as the form of a living thing. Epicurus (c. 341–c. 271) argues for a radical materialism and for the impossibility of the soul surviving death.
300 BCE Zeno of Citium (c. 335–c. 263) founds Stoic School, active until c. 520, which perpetuates a variety of materialist notions of soul, typically conceived as a breath-like substance diffused throughout the body.
200 BCE The Septuagint produced between the third and first centuries BCE; conceptions of the soul and mental phenomena as depicted in the Hebrew Bible translated into Greek.
100 BCE Lucretius (c. 98–c. 51) propounds and expands the philosophy of Epicurus, producing the first philosophical treatment of mind in Latin, De Rerum Natura.
BCE/CE Philo (c. 20 BCE–c. 50 CE) blends Greek philosophy and Hebrew thought about the soul.
100 The Church Fathers (end of the first century to as late as 749 CE) subordinate philosophical accounts of mind to scriptural ones, raise religious questions, and shape the intellectual agenda accordingly. Tertullian (c. 160–c. 225) advocates traducianism; argues that soul must be somehow corporeal if it can be tormented in hell, On the Soul.
200 Origen (c. 185–c. 254) holds that souls were created by God for contemplation but, falling away in distraction, became enveloped in bodies, On First Principles. Plotinus (204/5–270/1) founds Neoplatonism, articulates a conception of soul as part divine and part entwined with body, as well as an intricate theory of perception, The Six Enneads.
300 Augustine (354–430) offers a detailed description of and reflection on introspected mental life, Confessions; has thoughts on action theory, On Free Will; might argue by analogy for other minds, anticipate the cogito, and influence the Cartesian conception of mind, On the Trinity and City of God.
400 Boethius (c. 480–c. 524) translates Aristotle and Plato into Latin; emphasizes rational nature of the soul, Contra Eutychen.
900 Avicenna (c. 980–1037) integrates Islamic philosophy and Greek thought about mind and soul, formulates floating man thought experiment, On the Soul.
1100 Averroes (1126–1198) Latin translations of his commentaries on Aristotle bring Greek views on mind, through Islamic lenses, back to the West; also develops his own complex psychology and metaphysics of the soul, Long Commentary on De Anima. Vespasian Homilies (c. 1150) contain possibly the first use in English of a variation on the word ‘soul’ (sawle), meaning life or life force.
1200 William of Moerbeke (c. 1215–1286) undertakes a complete translation of Aristotle into Latin (c. 1250). Aquinas (c. 1224–1274) reinterprets Aristotle in the light of Christian teaching, articulates full-blooded conceptions of mind, soul, intellect, memory, appetite, self-knowledge, imagination, perception, etc., Summa Theologiae.
1300 William of Shoreham (fl. 1330) writes religious poems containing a forerunner of ‘mind’ (mende), which might be the first use in English tied to cognition.
1400 Marsilio Ficino (1433–1499) is the first to translate all of Plato into Latin, and the Platonic conceptions of soul and mind are rekindled, Theologia Platonica de Immortalitate Animae (1474). Pomponazzi (1462–1525) might anticipate property dualism, On the Immortality of the Soul.
1500 Shakespeare (1564–1616) writes Hamlet (c. 1600); some detect Cartesian presuppositions in certain soliloquies; others hear Hamlet’s repressed desires.
1600 Hobbes (1588–1679) defends a causal, empiricist, mechanistic and materialist conception of mental phenomena, Leviathan (1651). Descartes (1596–1650) articulates Cartesian dualism; disentangles new thoughts on mind from Aristotelian, Platonic and Scholastic thinking; thereby ushers in modern philosophical reflection on the mental, Meditations on First Philosophy (1641). Geulincx (1624–1669) (and Géraud de Cordemoy [1626–84]) follows Descartes; argues for pre-established harmony before Leibniz, Opera Philosophica (c. 1668). Spinoza (1632–1677) rejects Cartesian dualism in favour of dual-aspect monism: there is one substance, God, Ethics (1677). Locke (1632–1704) formulates modern conception of self; raises questions about personal identity; claims that experience is the source of ideas; sets out limits to understanding, Essay Concerning Human Understanding (1690). Malebranche (1638–1715) largely follows Descartes, but argues for occasionalism, Dialogues on Metaphysics and on Religion (1688).
1700 Leibniz (1646–1716) argues for pre-established harmony, Discourse on Metaphysics (1686). Berkeley (1685–1753) writes an account of perception, Essay Towards A New Theory of Vision (1709) and argues that to be is to be perceived, Principles of Human Knowledge (1710). Hartley (1705–1757) founds associationist school of psychology, Observations on Man, His Frame, His Duty, and His Expectations (1749). Reid (1710–1796) brings common sense to an account of sensation, conception, and perception; uses memory to inform a notion of self, An Inquiry into the Human Mind on the Principles of Common Sense (1764), Essays on the Intellectual Powers of Man (1785). Hume (1711–1776) brings the experimental method to bear on mind, follows the sceptical implications of empiricism through, propounds the bundle theory of self, Treatise of Human Nature (1739), Enquiry Concerning Human Understanding (1748). Adam Smith (1723–1790) considers the nature of sympathy, The Theory of Moral Sentiments (1759). Kant (1724–1804) argues that the structuring activity of the mind makes possible a world of experience; gives an account of reason, perception, judgement, the understanding, imagination, etc. – a Copernican Revolution in the conception of mind, Critique of Pure Reason (1781), Critique of Practical Reason (1788), Critique of Judgement (1790). Bentham (1748–1832) articulates modern psychological hedonism, Introduction to the Principles of Morals and Legislation (1789).
1800 Hegel (1770–1831) gives an account of the evolution of consciousness as it plays out in human history, The Phenomenology of Spirit (1807). Schopenhauer (1788–1860) sees blind craving, will, at the depressing centre of human action; our inner experience of it points to the hidden nature of all things, The World as Will and Representation (1819).
J. S. Mill (1806–1873) elaborates on the connection between right and wrong and pleasure and pain; connects social and political reform to psychology, A System of Logic (1843). Kierkegaard (1813–1855) claims that subjectivity is truth, Concluding Unscientific Postscript to Philosophical Fragments (1846). T. H. Huxley (1825–1895) memorably couches a version of epiphenomenalism in terms of whistles and steam engines, ‘On the hypothesis that animals are automata, and its history’ (1874). Wundt (1832–1920) investigates the self-examination of experience, Principles of Physiological Psychology (1873/4), establishes a laboratory of experimental psychology in 1879. Brentano (1838–1917) reintroduces the Scholastic conception of intentionality as the mark of the mental, and his elevation of introspection paves the way for the phenomenological movement, Psychology from an Empirical Standpoint (1874). Peirce (1839–1914) raises objections to Cartesian methods and suggests panpsychism, along with further thoughts on signs and representation, The Fixation of Belief (1877), The Monist series (1891–1893). James (1842–1910) largely sets the agenda for both the philosophy of mind and psychology by advancing influential accounts of the brain, the mind-body relation, the stream of consciousness, memory, sensation, imagination, will, and emotions – all peppered with compelling introspective reports, The Principles of Psychology (1890). Nietzsche (1844–1900) calls the subject a ‘grammatical fiction’, On the Genealogy of Morals (1887). Bradley (1846–1924) leads the turn towards idealism in the English-speaking world, rejects empiricist psychology, The Principles of Logic (1883), Appearance and Reality (1893). Husserl (1859–1938) rejects psychologism and formulates the phenomenological method, Logical Investigations (1900/1); the method of epoché and transcendental phenomenology itself appear, Ideas (1913). Bergson (1859–1941) offers an alternative to phenomenology, finds multiplicity in consciousness, regards intuition as method, Time and Free Will (1889), Matter and Memory (1896).
1900 Freud (1856–1939), father of psychoanalysis, formulates such concepts as repression, psychosexual motivation, unconscious desire, as well as the id, ego and super ego, The Interpretation of Dreams (1900), The Ego and the Id (1923), Introductory Lectures on Psychoanalysis. Dewey (1859–1952) brings pragmatism to bear on mind, rejects dualisms in favour of naturalism and evolution; mind emerges socially; founds the functional approach to psychology, ‘The Reflex Arc Concept in Psychology’ (1896), Experience and Nature (1925). Whitehead (1861–1947) rejects materialism for the view that nature is a structure of evolving processes, Process and Reality (1929). Russell (1872–1970) champions analytic method, moves from reflection on sense data to neutral monism, rejects idealism and psychologism, ‘Knowledge by
Acquaintance and Knowledge by Description’ (1910), The Analysis of Matter (1927), The Analysis of Mind (1921). Moore (1873–1958) brings commonsense realism to metaphysics and epistemology, ‘Refutation of Idealism’ (1903), ‘Proof of an External World’ (1939). Watson (1878–1958) gives the boot to consciousness in general and introspection in particular, ‘Psychology as a Behaviorist Views It’ (1913). Broad (1887–1971) argues for emergent vitalism, considers the possibility of survival after death, The Mind and Its Place in Nature (1925). Wittgenstein (1889–1951) early picture theory of meaning gives way to therapeutic treatments of problems associated with mental phenomena; private language argument makes trouble for Cartesian reflection and solipsism, Tractatus Logico-Philosophicus (1922), Philosophical Investigations (1953), The Blue and Brown Books (1958), On Certainty (1969). Heidegger (1889–1976) urges reflection on Dasein, instead of a misunderstood conception of Being, reorients numerous mental concepts, Being and Time (1927). Carnap (1891–1970) ties meaning to phenomenalistic language, argues that metaphysics is meaningless, The Logical Structure of the World (1928), Pseudoproblems in Philosophy (1928). Price (1899–1984) reflects on perceptual consciousness, sense data, and the role of concepts in thought, Perception (1932), Thinking and Experience (1953). Ryle (1900–1976) ushers in contemporary philosophy of mind, arguing against Descartes’ ghost in the machine and for logical behaviourism, The Concept of Mind (1949). Feigl (1902–1988) ‘The “Mental” and the “Physical” ’ (1958, as a book with Postscript and Preface, 1967). Sartre (1905–1980) the father of existentialism distinguishes between being-in-itself and being-for-itself; we’re both, The Psychology of Imagination (1940), Being and Nothingness (1943), Critique of Dialectical Reason (1960), Sketch for a Theory of the Emotions. Merleau-Ponty (1908–1961) perception takes centre stage, phenomenology meets scientific psychology, The Structure of Behavior (1942), Phenomenology of Perception (1945), The Visible and the Invisible (1964). Quine (1908–2000) ‘Two Dogmas of Empiricism’ (1951), ‘Quantifiers and Propositional Attitudes’ (1956), ‘Epistemology Naturalized’ (1969), Word and Object (1960). Ayer (1910–1989) applies the verification principle to claims about the mind, offers an analysis of sense data, Language, Truth and Logic (1936), The Problem of Knowledge (1956), The Concept of a Person and Other Essays (1963). Austin (1911–1960) Sense and Sensibilia (1959). Malcolm (1911–1990) ‘Our Knowledge of Other Minds’ (1958). Turing (1912–1954) ‘Computing Machinery and Intelligence’ (1950). Sellars (1912–1989) ‘Empiricism and the Philosophy of Mind’ (1956), Science and Metaphysics (1968), ‘Meaning as Functional Classification’ (1974). Chisholm (1916–1999) Perceiving (1957), Person and Object (1976), The First Person (1981), Brentano and Intrinsic Value (1986). Geach (1916) Mental Acts (1957). Davidson (1917–2003) ‘Actions, Reasons and Causes’ (1963), ‘Mental Events’ (1970), Essays on Actions and Events (1980).
Anscombe (1919–2001) Intention (1957), ‘The First Person’ (1975), Metaphysics and the Philosophy of Mind (1981). Strawson (1919–2006) Individuals (1959), Freedom and Resentment and Other Essays (1974), Scepticism and Naturalism (1985). Smart (1920) ‘Sensations and Brain Processes’ (1959), Philosophy and Scientific Realism (1963). Place (1924–2000) ‘Is Consciousness a Brain Process?’ (1956). O’Shaughnessy (1925–2010) The Will (1980), Consciousness and the World (2000). Armstrong (1926) Perception and the Physical World (1961), Bodily Sensations (1962), A Materialist Theory of the Mind (1968), The Nature of Mind and Other Essays (1980), Consciousness and Causality (1984), The Mind-Body Problem (1999). Putnam (1926) ‘The Nature of Mental States’ (1967), Mind, Language and Reality (1975), ‘The Meaning of “Meaning” ’ (1975). Chomsky (1928) Syntactic Structures (1957), Aspects of the Theory of Syntax (1965). Williams (1929–2003) Problems of the Self (1973). Rorty (1931–2007) Philosophy and the Mirror of Nature (1979). Shoemaker (1931) Self-Knowledge and Self-Identity (1963), Identity, Cause and Mind: Philosophical Essays (1984), The First-Person Perspective and Other Essays (1996). Dretske (1932) Seeing and Knowing (1969), Knowledge and the Flow of Information (1981), Naturalising the Mind (1995), Perception, Knowledge and Belief (2000). Searle (1932) ‘Minds, Brains and Programs’ (1980), Intentionality (1983), Minds, Brains and Science (1984), The Rediscovery of the Mind (1992). Fodor (1935) The Language of Thought (1975), Propositional Attitudes (1978), Representations (1979), The Modularity of Mind (1983), Psychosemantics (1987), A Theory of Content (1990). Nagel (1937) ‘What is it Like to be a Bat?’ (1974), Mortal Questions (1979), View from Nowhere (1986). Honderich (1933) A Theory of Determinism (1988), On Consciousness (2004). Millikan (1933) Language, Thought, and Other Biological Categories (1984), White Queen Psychology and Other Essays for Alice (1993). Kim (1934) Supervenience and Mind (1993). Rosenthal (1939) ‘Two Concepts of Consciousness’ (1986), Consciousness and Mind (2005), ‘Consciousness and Its Function’ (2008). Kripke (1940) Naming and Necessity (1972), Wittgenstein on Rules and Private Language (1982). Lewis (1941–2001) ‘An Argument for Identity Theory’ (1966), ‘Psychophysical and Theoretical Identifications’ (1972), ‘Mad Pain and Martian Pain’ (1980), Philosophical Papers, Volume II (1986). McDowell (1941) Mind and World (1994), Mind, Value and Reality (1998). Jackson (1943) Perception (1977), ‘Epiphenomenal Qualia’ (1982), ‘What Mary Didn’t Know’ (1986). Block (1942) ‘Psychologism and Behaviorism’ (1981), ‘On a Confusion about the Function of Consciousness’ (1995), Consciousness, Function and Representation (2007). Dennett (1942) Brainstorms (1981), Content and Consciousness (1986), The Intentional Stance (1989), Consciousness Explained (1992), Kinds of Minds (1996), Brainchildren (1998), Sweet Dreams (2005), Neuroscience and Philosophy (2007).
Parfit (1942) Reasons and Persons (1984). Paul Churchland (1942) Scientific Realism and the Plasticity of Mind (1979), Matter and Consciousness (1988), A Neurocomputational Perspective (1989). Jackson (1943) ‘Epiphenomenal Qualia’ (1982), ‘What Mary Did Not Know’ (1986). Patricia Churchland (1943) Neurophilosophy (1986), The Computational Brain (1992), Brain-Wise (2002). Burge (1946) ‘Individualism and the Mental’ (1979), Foundations of Mind (2007). McGinn (1950) The Character of Mind (1982), Mental Content (1989), The Problem of Consciousness (1991), The Mysterious Flame (1999), Consciousness and Its Objects (2004). Papineau (1947) Philosophical Naturalism (1993), Thinking About Consciousness (2002). Peacocke (1950) Sense and Content (1983), A Study of Concepts (1992), Truly Understood (2008). Tye (1950) Ten Problems of Consciousness (1995), Consciousness, Color and Content (2000), Consciousness and Persons (2003). McCulloch (1951–2001) The Mind and Its World (1995), The Life of the Mind (2003). Clark (1957) Being There (1997), Supersizing the Mind (2008). Chalmers (1966) The Conscious Mind (1996).
Research Resources
The following is a list of journals, websites and centres devoted to subjects of interest to philosophers of mind. Many of the journals listed below have companion websites with at least some free content. This list is, of course, not exhaustive, but it does contain some useful starting points. The awe-inspiring Mind Papers (http://consc.net/mindpapers), compiled by David Chalmers (Editor) and David Bourget (Assistant Editor), is probably the most useful starting point of all.
Journals
AI & Society
AI Communications
Annals of Mathematics and Artificial Intelligence
Artificial Intelligence
Artificial Intelligence and Law
Artificial Intelligence Review
Behavior and Philosophy
Behavioural and Brain Sciences
Brain and Cognition
Brain and Language
Brain and Mind
Cognition and Emotion
Cognitive Linguistics
Cognitive Psychology
Computational Intelligence
Consciousness and Cognition
Consciousness and Emotion
Cybernetics and Human Knowing
International Journal of Approximate Reasoning
Journal of Artificial Intelligence Research
Journal of Cognitive Systems Research
Journal of Consciousness Studies
Journal of Culture and Cognition
Journal of Experimental and Theoretical Artificial Intelligence
Journal of Intelligent Systems
Journal of Mind and Behaviour
Journal of Theoretical and Philosophical Psychology
Mind and Language
Minds and Machines
Neuroethics
Phenomenology and the Cognitive Sciences
Philosophy, Psychiatry, & Psychology
Psyche
Theory and Psychology
Thinking and Reasoning
Trends in Cognitive Science
Websites
Cogprints
http://cogprints.org/view/subjects/phil-mind.html
Consciousness and the Brain: Annotated Bibliography
www.consciousness-brain.org/
Dictionary of the Philosophy of Mind
http://philosophy.uwaterloo.ca/MindDict/
Episteme Links
www.epistemelinks.com/
A Field Guide to the Philosophy of Mind
http://host.uniroma3.it/progetti/kant/field/
Internet Encyclopedia of Philosophy
www.iep.utm.edu/
KLI Theory Lab
www.kli.ac.at/theorylab/index.html
Mind Papers
http://consc.net/mindpapers
Stanford Encyclopedia of Philosophy
http://plato.stanford.edu/
The Turing Archive for the History of Computing
www.alanturing.net/
Centres and Societies
Association for the Advancement of Artificial Intelligence
www.aaai.org/home.html
Association for the Scientific Study of Consciousness
www.theassc.org/
Centre for Cognition and Culture
www.case.edu/artsci/cogs/CenterforCognitionandCulture.html
Centre for Consciousness
http://consciousness.anu.edu.au/
Center for Consciousness Studies
www.consciousness.arizona.edu/
Centre for Research in Cognitive Science
www.sussex.ac.uk/cogs/index.php
Centre for Research into Embodied Subjectivity
www2.hull.ac.uk/fass/humanities/philosophy/research/centre-for-researchinto-embod.aspx
Centre for Research on Concepts and Cognition
www.cogsci.indiana.edu/
Centre for the Study of Perceptual Experience
www.gla.ac.uk/Acad/Philosophy/cspe/
Cognitive Science Society
http://cognitivesciencesociety.org/index.html
Consciousness and Self-Consciousness Research Centre
www2.warwick.ac.uk/fac/soc/philosophy/research/conandselfcon/
European Society for Philosophy and Psychology
www.eurospp.org/
International Association for Computing and Philosophy
www.ia-cap.org/
Mind, Meaning and Rationality Research Group
www.open.ac.uk/Arts/philosophy/mmr/index.shtml
Society for Philosophy and Psychology
www.socphilpsych.org/
Southern Society for Philosophy and Psychology
http://southernsociety.org/
Notes
Chapter 1
1 One problem with Armstrong’s definition of substance is that objects like the Sun cannot exist alone for they are essentially spatio-temporal objects and so depend for their existence on the existence of space and time. I will not attempt to resolve this issue here.
2 In order to avoid concerns raised by quantum indeterminacy, physical closure is sometimes expressed by saying that every physical event has a physical cause sufficient to determine its objective probability. See Yablo (1992); for a helpful discussion see Section 2.3 of Robb et al. (2008).
3 The example is not to be taken too seriously because the neurobiology of pain is very complex, and cannot be captured by the slogan ‘pain = c-fibre firing’. I will use ‘c-fibre firing’ as a convenient label for the complex neurobiological processes which are involved in the human pain response.
4 We can distinguish between the property of having a property which occupies the causal role characteristic of mental state M, and the property which occupies the causal role characteristic of mental state M. Functionalists differ as to which of these properties is to be identified with M.
5 Strictly speaking, Davidson only denied the possibility of strict laws linking propositional attitudes with physical events. I introduce propositional attitudes in the Mental Representation section.
6 For alternative eliminativist strategies, see Stich (1983).
7 Notice the similarity between this proposal and Dennett’s intentional stance (see the Eliminativism, Instrumentalism and the Intentional Stance subsection).
8 Traditionally, the key external relations were held to be those of similarity or resemblance, but this idea is fraught with difficulties (for a quick overview see Ravenscroft, 2005, pp. 126–7).
9 Putnam originally used his example to develop a point about the reference of natural kind terms in ordinary human languages. However, his example has been widely used to discuss parallel considerations in the philosophy of mind.
10 What Chalmers calls the hard problem is the problem of accounting for phenomenal consciousness, and what he calls the easy problem includes the problem of accounting for access consciousness (see Chalmers, 2003a, p. 103).
11 Strictly speaking, ‘qualia’ is the plural; the singular is ‘quale’. However, in line with most contemporary usage, I will use ‘qualia’ as both the plural and the singular.
Chapter 2
1 Philosophers in the phenomenological tradition engage in the descriptive project of revealing the structures of experience in detail. For a useful introduction to their work see Gallagher and Zahavi (2007).
2 Several prominent critics question the phenomenal concepts strategy on the grounds that it is questionable that concepts of the requisite first-personal sort might exist (see Prinz 2007; Tye 2009). Nevertheless, it is arguable that ordinary, public concepts of experience could do the same work that phenomenal concepts are meant to do equally well (or badly).
3 For a catalogue of other problems and worries concerning higher order theories and their relatives see Block (forthcoming).
Chapter 3
1 We hasten to add that we think it is in principle possible to build machines that could think out of non-biological materials.
2 See Adams and Aizawa (2008) for the pros and cons.
3 In a longer version of this paper (available from the authors) we discuss the views of the eliminativists and the mysterians about the mental.
4 Antony (1997), Block (1995, 1996, 1998), McGinn (1982) and Searle (1983) are other well-known critics of the view that sensory or qualitative states are representational (intentional) states.
5 We take this to mean that Rorty does not take non-occurrent states to be mental even though others may call them so. Thus, we are taking Rorty to hold the single property view, not the property cluster view or single system view of the mark of the mental, though he perhaps could be interpreted as taking the single system view.
6 Rorty (1970a) considered other possible properties to distinguish the mental from the physical, such as intentionality, introspectability, purposiveness, non-spatiality and privacy, but rejected all of these. We won’t go through his reasons here.
7 Surprisingly, Rorty (1970a) says, if science had only beliefs and desires to deal with (not reports of mental events), science would never have invented the concept of mind or the mental. (Rorty, 1970a, p. 408)
8 Even Rorty (1972, p. 217) says that robots would require language use similar to that of humans for their reports to be incorrigible. We take this to require, at the very least, communicative intentions.
9 Rorty (1970a, p. 405) says: ‘It seems clear that it is the notion held by Cartesian philosophers that we must explicate if we are to make sense of materialism. This latter notion must contain properties incompatible with properties of physical entities’. He also says (Rorty, 1970a, p. 402): ‘It is part of the sense of “mental” that being mental is incompatible with being physical, and no explication of this sense which denies this incompatibility can be satisfactory’ and again (p. 405) ‘ “Material” and “physical” would be vacuous notions without the contrast with “mental”. “Immaterial” and “nonphysical” are notions that have sense only if the mental is given as an instance of them’.
10 Crane (1998) argues for what he calls the ‘weaker’ view that all mental states, even qualitative states, have intentionality even if qualia themselves are not intentional. Whereas, we read Tye and Dretske as attempting to explain the qualitative character of mental states in virtue of their being representational states, and thereby intentional. Crane also at least flirts with the view that an intentionalist about sensations, say pain, may hold that the intentional object presented in a pain sensation is an internal mental object. Where Tye and Dretske would say the thing represented in a pain sensation (say, foot pain) is the damage to the foot, not an object in the mind. Much of Crane’s view (1998) is devoted to responding to Searle’s view that emotions are non-intentional. Crane gives good reasons to think that this is not true, but we cannot go into the details here.
11 See Adams and Dietrich (2004) for an account that emphasizes the differences.
12 Of course there can be hallucinations or artificially caused experiences, but the qualitative character of the experience would derive from past representational episodes.
13 All of these are examples of natural signs. Natural signs (such as smoke being a sign of fire; footprints being a sign of a passerby) have a kind of informational aboutness. These seem to be the wrong kind of intentionality, at least in part because they need not produce a phenomenology, cannot be falsely tokened, and have not risen to the level of semantic meaning. Smoke naturally indicates fire, but ‘smoke’ means smoke (and its tokening need not indicate fire).
14 Famously, Descartes believed that non-human animals not only could not think (were not intentional systems), but could not feel because they were not intentional systems. Descartes believed that to feel pain, for instance, one must be able to think the thought ‘I’m in pain’. So, only intentional systems were able truly to have phenomenological states.
15 When Fodor says paramecia are only sensory systems, he does not say whether they may have a phenomenology. He also does not say whether a purely sensory system may be credited with mental states.
16 Actually, Fitch says conflicting things. At times he seems to say that the reason computers can’t think is that they don’t have nano-intentionality. At other times he seems to say that he’s only talking about vertebrates.
17 We don’t know whether Searle would think it is possible to have a purely sensory conscious being, but we don’t find anything he has said that rules it out.
18 See discussion in Matthen, 2005, Chapter 12.
19 See Adams and Aizawa, 2008.
20 See Dretske, 1981, for the interpretation of the mathematical model for use by cognitive science that we are following.
21 We accept that even biological structures that are selected to be dedicated information processors have a physical-chemical make-up, but physics and chemistry alone won’t explain why these structures are where they are and are doing what they are doing. For that you need to appeal to selectional history (Dretske, 1995).
22 There is a whole literature on this feeling of magic. See Levine, 2001, and this may be part of why McGinn adopts the mysterian view.
23 We do not say only biological systems can have minds (as Fitch comes close to saying and Searle is taken to say). To think, computers would need concepts the non-derived meanings of which were meaningful to them, not only to us or to their designers.
Acknowledgements: James Garvey, Ken Aizawa, John Barker, Fred Dretske, William Tecumseh Fitch, Annie Steadman, and the University of Delaware Office of Undergraduate Research.
Chapter 4
1 However, substance dualism is not without contemporary defenders. The most prominent is R. Swinburne, for example, in The Evolution of the Soul (second edition, Oxford: Oxford University Press, 1997), but see also J. Foster, ‘A Defence of Dualism’, in J. Smythies and J. Beloff (eds), The Case for Dualism (Charlottesville, VA: University of Virginia Press, 1989) and The Immaterial Self (London: Routledge, 1996); W. D. Hart, ‘Dualism’, in S. Guttenplan (ed.), A Companion to the Philosophy of Mind (Oxford: Blackwell, 1994); and H. Robinson, Matter and Sense (Cambridge: Cambridge University Press, 1982) and ‘Dualism’, in S. Stich and T. Warfield (eds), The Blackwell Guide to Philosophy of Mind (Oxford: Blackwell, 2003).
2 Substance dualism is perhaps not committed to hylomorphism; see Swinburne, The Evolution of the Soul, pp. 330–2. For general discussion of the position see H. Robinson, ‘Aristotelian dualism’, Oxford Studies in Ancient Philosophy 1 (Oxford: Oxford University Press, 1983), pp. 123–44, and D. Oderberg, ‘Hylemorphic dualism’, Social Philosophy and Policy, 22 (2005), 70–99.
3 For articulation of the nature of the classic Cartesian view see M. Rozemond, Descartes’s Dualism (Cambridge, MA: Harvard University Press, 2002) and J. Hawthorne, ‘Cartesian dualism’, in P. van Inwagen and D. Zimmerman (eds), Persons Human and Divine (Oxford: Oxford University Press, 2007). For discussion of Aquinas’s view, see A. Kenny, Aquinas on Mind (London: Routledge, 1994).
4 See J. Kim, ‘Lonely Souls: Causality and Substance Dualism’, in T. O’Connor and D. Robb (eds), Philosophy of Mind: Contemporary Readings (London: Routledge, 2003). See also R. Larmer, ‘Mind-body interactionism and the conservation of energy’, International Philosophical Quarterly, 26 (1986), 277–85; E. Mills, ‘Interaction and overdetermination’, American Philosophical Quarterly, 33 (1996), 105–15, and ‘Interactionism and physicality’, Ratio, 10 (1997), 169–83; and E. J. Lowe, ‘The problem of psychophysical causation’, Australasian Journal of Philosophy, 70 (1992), 263–76. Lowe himself endorses a form of dualism, albeit not the one argued for here, in E. J. Lowe, ‘The causal autonomy of the mental’, Mind, 102 (1993), 629–44; Subjects of Experience (Cambridge: Cambridge University Press, 1996), and ‘Non-Cartesian substance dualism and the problem of mental causation’, Erkenntnis, 65, 1 (2006), 5–23.
5 René Descartes, in J. Cottingham, R. Stoothoff and D. Murdoch (trans. and eds), The Philosophical Writings of Descartes (Cambridge: Cambridge University Press, 1994), vol. II, p. 275.
6 For a longer discussion of this argument, see my Belief in God (Oxford: Oxford University Press, 2005), pp. 87–99.
7 For articulation of this view, see J. Eccles and K. Popper, The Self and its Brain (New York: Springer, 1977).
8 I qualify this conclusion somewhat in my forthcoming book Free Will (Continuum).
9 The original may be found in F. Jackson, ‘Epiphenomenal qualia’, Philosophical Quarterly (1982), 127–36 and, for a good contemporary discussion, see the papers collected by P. Ludlow, Y. Nagasawa and D. Stoljar (eds), There’s Something About Mary: Essays on Phenomenal Consciousness and Frank Jackson’s Knowledge Argument (Cambridge, MA: MIT Press, 2004).
10 On the issue of the problem of consciousness, Dennett, for example, arguably re-conceptualizes it until ‘it’ becomes tractable to the natural sciences in his Consciousness Explained (Boston: Little, Brown, 1991), but perhaps thereby merely fails to address the real issue; see David Chalmers, for example, in The Conscious Mind (New York: Oxford University Press, 1996). See also D. Ross, ‘Dennett’s Conceptual Reform’, in Behaviour and Philosophy, 22 (1994), 41–52 and N. Latham, ‘Chalmers on the addition of consciousness to the physical world’, Philosophical Studies, 98 (2000), 67–93.
11 I am grateful for the comments of Richard Swinburne on an early draft of this chapter.
Chapter 5
1 What does it mean to be fundamental? As I understand it, the basic or fundamental properties (1) may determine each other, as in, for example, F=MA, (2) may determine other properties, as how the properties of physics determine (many think) the chemical properties (i.e. once the properties of physics are in place, the chemical properties are also in place), and (3) are not determined by anything that they do not themselves determine. See Montero (2006) for a suggestion on how to formulate physicalism if there is no fundamental level.
2 Indeed, a world that is entirely understandable to a human mind seems, if anything, to point to a non-physicalistic view, as it would seem to hint at a creator that made the world intelligible to humans. (This isn’t to say that any anti-physicalistic view implies the existence of a creator, but just that the existence of a creator would seem to imply an anti-physicalistic position.)
3 See Wilson (2010).
4 Terence E. Horgan (1993).
5 For an explanation of this stance see McLaughlin (2001).
6 See Melnyk (2003). I argue against his view in Montero (1999).
Chapter 6
1 Jaegwon Kim has pressed this objection repeatedly. He offers the analogy of killing someone by firing a gun. Had the gun had a silencer, the shot would still have killed the victim. Thus, the noise of the shot was causally irrelevant. However, by analogy, had the shot been fired by a gun with a silencer, the shooting would have been a different event, and so would the death. Kim also presses the epiphenomenal charge by claiming that it is not the mental property but only the physical property of an event that is causally efficacious. However, for Davidson, causality is a relation between events, not properties, nor does it hold in virtue of properties of events. And besides, Davidson doesn’t believe in properties, and so Kim’s objections are beside the point (see Davidson, 1993).
2 Davidson (1986a), ‘A Coherence Theory of Truth and Knowledge’, p. 150.
3 Davidson rejects the locution ‘knowing what I mean’, preferring ‘meaning what I say’, and he points out that if we didn’t mean what we say we would be uninterpretable.
4 Dennett notes that this objection is based on too limited a view of causation (see Dennett, 1991b). However, he doesn’t elaborate.
5 See Fodor’s 1974 paper ‘Special Sciences: The Disunity of Science as a Working Hypothesis’.
Chapter 7
1 It also concerns the individuation conditions of psychological contents, states and concepts. It does not, however, concern phenomenal properties unless they are conceived as being representational.
2 In order to make sense of the debate between internalists and externalists, the necessity involved in both psycho-physical supervenience theses should be understood as nomological necessity: necessity consistent with the laws of nature.
3 See Putnam (1975) for the original Twin Earth thought experiment. Although Putnam introduced the Twin Earth thought experiment, his argument concerned linguistic properties rather than psychological properties: indeed, Putnam was clearly an internalist at the time. The argument was adapted by Burge to establish externalism.
4 See Burge (1979). Putnam himself accepts the adaptation as can be seen by remarks in his (1996).
5 Putnam’s original example was of water on Earth and twater on Twin Earth, where twater is a substance superficially indistinguishable from water but with a different chemical composition. I have not used his example here because it has an irrelevant complication, namely that S on Earth could not be an intrinsic physical duplicate of S* on Twin Earth if S were partly composed of water and S* were partly composed of twater.
6 The view of natural kinds that is taken to underwrite this form of externalism can be found in Kripke (1972) and Putnam (1975). However, the view remains controversial. For a thorough overview of recent positions on natural kinds and natural kind terms see Wilkerson (1998).
7 See Burge (1982). The original example involves an incomplete understanding of the term ‘arthritis’.
8 See Burge (1986a) and (1986b). The example is from Burge (1986b).
9 For reasons of space I have not been able to discuss Davidson’s views, which in some sense straddle the internalism/externalism divide. According to Davidson, roughly speaking, the content of a thought is determined by the way in which a subject is best interpreted in the context of a shared world. Thus although the meaning of a subject’s words and the contents of her thoughts are and must be grounded in the discriminative capacities and epistemic outlook of the individual, they are externally individuated nonetheless, because in order to interpret an individual one must make essential reference to objective properties in the environment to which she (and you) are jointly related. See Davidson (1984b), especially Chapters 9 and 10, and (2001), especially Chapters 1, 2 and 9.
10 See Evans (1982) and McDowell (1977, 1984). See also McDowell (1986).
11 See, for example, Salmon (1986) and Soames (2002).
12 See Burge (1977) for more detail. See also Segal (1989).
13 It should be noted that the view is consistent with both internalism and externalism about the representational content of singular thoughts thus conceived. It counts as a form of singular internalism simply because the representational content of a singular thought is not dependent for its individuation on the object the thought concerns. For all this, the predicative element of the thought may depend for its individuation on the properties to which the thinker is related, for reasons akin to those given in the Predicative Externalism section above.
14 This view of proper names is advocated by Burge in his (1973). For recent defences of the view see Elugardo (2002) and Sawyer (2009).
15 ‘Epistemic outlook’ can itself be understood individualistically or anti-individualistically. This makes it difficult to offer a neutral characterization of the internalist position conceived generally.
16 See Segal (2000) for a thorough defence of this kind of view.
17 See Fodor (1980), McGinn (1982b) and Stich (1983).
18 For this view see Chalmers (2003c).
19 See Sawyer (2007).
20 See Burge (1979), Section IV.
21 Both type- and token-physicalism are committed to this claim, although type-physicalism is committed to the stronger claim about property identity as well.
22 That psychological states are the causes of actions (re)gained prominence following Davidson’s (1980), Chapter 1. For an alternative view see Morris (1992).
23 For arguments to this effect see Fodor (1987) and (1991a). See Burge (1989) and (1993b) and Wilson (1995).
24 For recent discussion see Noordhof (2006a) and Keijzer and Schouten (2007).
25 In my discussion of both problems I focus on predicative externalism, but the problems and responses in each case are analogous for singular externalism.
26 For arguments concerning the strength of the argument see Ludlow (1995) and (1997) and Warfield (1997). See also McLaughlin and Tye (1998) and Brown (2004).
27 See, for example, Burge (1988), Heil (1988) and Peacocke (1999), Chapter 5. Burge has gone further in identifying a set of what he calls ‘cogito-like judgements’ that provide a limiting case of direct, non-empirical, authoritative self-knowledge and that withstand any amount of disruption to presupposed background conditions for self-knowledge. Cogito-like judgements are self-referential and self-verifying in virtue of being so. Consequently, they cannot but be true even if their contents are externally individuated. Examples include: ‘I am now thinking that writing requires concentration’ and ‘I hereby judge that examples need elaboration’. See Sawyer (2002) for a defence of the view.
28 See, in particular, Burge (1988).
29 Davidson offers a different response that follows from his particular form of externalism referred to in Note 9 above. For his response see Davidson (2001), Chapters 1 and 2.
30 See Boghossian (1989) for the original presentation of the argument.
31 See Ludlow (1998).
32 See Burge (1998) and (1993a).
33 In addition, and following the discussion of cogito-like judgements about current thoughts in Note 27, Burge extends the realm of cogito-like judgements to include thoughts about past thoughts. Thus the ‘was’ and ‘thereby’ in judgements such as ‘I was thereby thinking that p’ relate to elements in the original thought preservatively rather than referentially. See Burge (1998).
34 There are related issues concerning the implications of externalism for reasoning. In addition to Burge (1993a) and (1998), see Boghossian (1992) and Schiffer (1992). See Goldberg (2007) for an account of the implications of externalism for content preservation and discursive justification.
35 McKinsey sparked the controversy with his (1991), although he rejects the interpretation of his argument that led to the controversy. See also Brown (1995).
36 See Brueckner (1992) and Goldberg (2003).
37 For this strategy see, for example, Davies (2000) and Wright (2004). There are differences between the positions of Davies and Wright, but the general strategy is the same. The strategy is criticized in Sawyer (2006).
38 See Sawyer (1998) and (2006).
39 Externalism has further epistemological implications that I have not had the space to discuss here. See, for example, Majors and Sawyer (2005), in which it is argued that externalism (and only externalism) grounds a reliabilist theory of justification. See also Majors and Sawyer (2007) for the ramifications of externalism in epistemology and meta-ethics.
40 I have not had the space to explain this fully here. See Majors and Sawyer (2005).
Chapter 8
1 This chapter is based on the much fuller discussions in my book Mind as Machine: A History of Cognitive Science (Boden, 2006). No specific references are given below, but the most directly relevant parts are Chapters 4 and 16 in their entirety, and Sections 1.i–ii and iii.b–d; 6.iii.c and iv.c; 7.i.e–h, iii, and vi.d–h; 9.vii and x; 11.ii–iii; 12.x; 13.vii; 14.ii and viii–xi; and 15.i.
Chapter 9
1 That is, we will be concerned with issues of a psychosemantics, or issues about the nature and meaning of representations in the mind, not, for example, a linguosemantics, as might be provided for the expressions in natural language, which may or may not be used by a mind to express its thoughts.
2 Note that, pace Dennett (1987a) and others who stress the role of norms in intentional attribution, even much irrational thought and behaviour is understood intentionally, as in, for example, the gambler’s fallacy, disregard of base rates, and discounting of future satisfactions. Thus, the gambler’s fallacy is likely due to errors in reasoning about probability, not to some non-intentional, purely mechanical breakdown. See the exchange between Wedgwood (2007) and Rey (2007) for recent discussion.
3 See Fodor (1975), Rey (1997) and Harnish (2002) for extensive discussion of the program, and the several volumes of Osherson (1995/98) for representative discussions of empirical work within it.
4 Turing’s famous proposal of Turing machines, and his ‘thesis’ that they can compute anything that can be computed, should not be confused with the far less plausible ‘test’ for intelligence named after him, according to which a machine would count as intelligent if a (normal?) human being couldn’t distinguish its teletype responses to questions from those of an intelligent human being. Pace Searle’s notorious Chinese Room Argument, CRTT is committed to no such behavioural test, but to quite detailed stories about how behaviour is produced (see Rey, 1997, 2002 for further discussion).
5 Resurrecting (unfortunate) medieval terminology, Brentano thought these peculiarities were the mark of intentionality. For those new to these discussions, intentionality in this sense means directedness or aboutness, as when we say that a thought is ‘directed upon’ or ‘about’ its object, in the way that a thought about Julius Caesar is directed upon, or about, Caesar. ‘Intentional’ in the sense of ‘deliberate’ (as in ‘he coughed intentionally’) is only one of a very large class of states that are ‘intentional’ in Brentano’s sense.
6 See Quine (1953a), Cartwright (1960/1987), Everett and Hofweber (2000) and Priest (2005) for rich discussions. See note 15 below for rejections of empty representations.
7 The distinction is, of course, close to the much-discussed distinctions between transparent/de re and opaque/de dicto readings of propositional attitudes and/or their ascriptions (e.g. see Kaplan, 1969). I don’t want to assimilate my distinction immediately to those, both because they are the objects of enough controversy on their own, and because a usual strategy for understanding them won’t work for ‘represent’ (e.g. a transparent reading of ‘John thinks of Sam Clemens that he’s funny’ may well involve John being related to a representation, ‘Mark Twain is funny’, that involves a representation, ‘Mark’, that in fact represents Sam. But this understanding obviously can’t be available for the term ‘represent’ itself).
8 This, of course, might be a reason to treat representation as also opaque to substitution in a serious psychology.
9 Quine’s challenges have recently been vigorously pressed by Fodor (1998) as arguments against any epistemic account of meaning.
10 A cautionary note: ‘information’ is used quite freely in cognitive science, often with a tacit presumption that its use is sanctioned by the technical notion of information (as roughly negative entropy) introduced by Shannon and Weaver.
Perhaps some uses are sanctioned in this way, but it is far more likely that the uses in psychology are simply disguised references to intentional content. Dretske himself introduces a specifically co-variational notion, in terms of strict co-variance, which is the notion being recruited here to explain intentional content.
11 This problem is of a piece with the problem Kripke (1982) attributes to Wittgenstein (1953) about how to distinguish someone who has added 57 and 63 and obtained 5 as an ‘error’ from someone who is computing a different function, ‘quaddition’, which is identical to addition except for the case of 57 and 63.
12 It’s important to note that Fodor doesn’t intend his proposal as a sufficient condition on intentionality tout court, but only as a sufficient condition for meeting disjunction objections; see his 1990c, pp. 127–31.
13 In Rey (forthcoming) I argue that the representations of geometrical figures (geons) in early vision, as well as of the standard entities of linguistics (words, phonemes), are empty along these lines.
14 One strategy is to appeal to real, but simply uninstantiated properties, such as unicornhood, which (arguably) can exist even without unicorns (see Fodor, 1987, 1991b). Another is to claim that at least syntactically simple expressions are not genuine representations (see Millikan, 2000). See Rey (forthcoming) for discussion.
15 Note that the ‘Zeus’/’Jupiter’ example shows one can’t rely here on co-reference alone.
16 This is a condensation of a longer discussion in my (2009) in which I try to distill what seems to me common and correct in Fodor’s and Horwich’s proposals, while rejecting what seems to me mistaken in each (viz., Fodor’s externalism and Horwich’s deflationism). Devitt’s (1996) proposal also looks to explanatory roles, but is not so focused on the asymmetric basicality condition, which seems to me crucial to replying to the Quinean challenge. Note that (BAS) is deliberately neutral between Horwich’s (‘explanatory’) and Fodor’s (‘asymmetric’) ways of expressing what seems to me the common important idea.
Chapter 10
1 My formulation will follow Honderich’s (1982) classic statement of the problem.
2 This does not preclude the possibility that other types of relations could play the role of R. In fact, in his later work on explanation Kim (1994) claims that explanations track dependence relations, so it is certainly possible that other kinds of dependence relations (such as mereological dependence) can serve as the explanatory relation in other explanatory contexts.
3 Closure is motivated by the longstanding problems with Cartesian interaction. Most recently Kim defines the causal closure of the physical domain as follows: ‘If a physical event has a cause at t, it has a physical cause that occurs at t’ (Kim, 2005, p. 43). He also considers a stronger version of closure: ‘Any cause of a physical event is itself a physical event, that is, no nonphysical event can be a cause of a physical event’ (ibid., p. 50), but claims that this is too strong, since it rules out mind-body causation all by itself and would allow one to dispense with the exclusion principle altogether in the argument against non-reductive physicalism.
4 The problem with weak supervenience is that it lacks the modal force required to generate a relation of dependence between properties, by virtue of the fact that the properties need not co-vary across all possible worlds. The problem for global supervenience is that it is not sufficiently restrictive, because it says nothing about how mental and physical properties are distributed within any possible world. For a more detailed discussion see (Kim, 1987, 1990b, 1993b; Campbell, 2000).
5 See Kim’s assessment of this option (Kim, 2005, p. 61). See also (Ney, 2007) for a helpful critical discussion of the notion of constitution.
6 For defences of Davidson in terms of these kinds of considerations see (Campbell, 1997, 2003, 2008; Gibb, 2006), though Gibb does go on to argue that this reply leads to an intolerable account of causal relata.
7 Indeed, the consensus in the literature seems to be that the Davidsonian account of events should be avoided because it renders causation an utterly mysterious relation. The virtue of a property exemplification view of events is that it permits an account of why a cause produces its effect in terms of an appeal to its causally relevant properties (Gibb, 2006).
8 For further discussion see (Campbell and Moore, 2009).
9 Kim thinks that he has not begged any questions since he claims that ‘if it is aspects of events, rather than events simpliciter, that are explained, then explanatory exclusion would apply to these event aspects’ (Kim, 1989a, p. 96) and elsewhere, ‘We have so far spoken indifferently of both events simpliciter and events being a certain kind or having certain properties as causes and effects. This makes no difference: however we individuate causes and effects, we face the same problem [about explanatory exclusion]’ (Kim, 1990a, pp. 40–1). I think we have seen ample reason to disagree with these claims. Interestingly, Marras and Yli-Vakkuri (2008) have recently observed that Kim’s improved version of the exclusion argument (the ‘supervenience argument’) also presupposes a fine-grained account of event identity and also begs the question against non-reductive physicalism.
10 Indeed, Marras (1998) ably shows it does not.
11 This observation has also been made in a slightly different context by Marras (Marras, 2007; Marras and Yli-Vakkuri, 2008). It is overly simplistic to suggest that the choice we face is between Davidson’s and Kim’s conceptions of events, though that is often the sense one gains from the literature. While theirs are the most prominent views, there are others worth considering, such as Chisholm’s (1970, 1976) and Lombard’s (1986), and of course there is Horgan’s (1978) claim that we can and should make do without introducing events into our ontology at all. Macdonald and Macdonald (2006) adopt a modified version of Lombard’s account and argue that it allows for the causal efficacy of mental property instances. However, their approach strikes me as a return to the Davidsonian position, since events are identified with property instances. It would be interesting to explore whether or not the exclusion argument can be maintained on an alternative model of events. If, like Lombard, one adopts the position that events can have more than one constitutive property, it seems an exclusion argument formulated in terms of such events would be even more vulnerable to the dual explanandum reply. Unfortunately, I do not have sufficient space to explore this question here.
Chapter 12
1 More precisely, it’s an example of the version of embodied cognition with which we shall be concerned here. There are other versions of the view. For instance, some embodied cognition theorists concern themselves with the way in which embodiment has an impact on our understanding of perceptual experience (e.g. O’Regan and Noë, 2001; Noë, 2004). Others argue that our embodiment structures our concepts (Lakoff and Johnson, 1980, 1999). This sample is not exhaustive.
2 The classic presentation of what I am calling the extended mind hypothesis is by Clark and Chalmers (1998). See Menary (forthcoming) for a recent collection of papers. Rather confusingly, the view has always traded under a number of different names, including close variants of the original moniker, such as the hypothesis of extended cognition (Rupert, 2004) and the extended cognition hypothesis (Wheeler, forthcoming a), but also active externalism (Clark and Chalmers, 1998), vehicle externalism (Hurley, 1998; Rowlands, 2003), environmentalism (Rowlands, 1999), and locational externalism (Wilson, 2004).
3 It has often been noted that connectionist networks may be analysed in terms of cognitively relevant functions which need to be specified at a finer level of grain than those performed by classical computational systems (e.g. using mathematical relations between units that do not respect the boundaries of linguistic or conceptual thought), hence Clark’s term ‘micro-functionalism’.
4 Two comments. First, although this is not the place to launch into a critique of the details of Adams and Aizawa’s position, my view is that while they are right that ExM needs a mark of the cognitive, they are wrong about what that mark might be. Second, my appeal to a scientifically informed, theory-loaded mark of the cognitive will, in some quarters, be controversial. For example, Clark (2008b) suggests that the domain of the cognitive should be determined by our intuitive folk-judgements of what counts as cognitive. His supporting argument is (roughly) that our intuitive understanding of the cognitive is essentially locationally uncommitted, while the range of mechanisms identified by cognitive science is in truth too much of a motley to be a scientific kind, and so will thwart any attempt to provide a scientifically driven, theory-loaded account of the cognitive, locationally uncommitted or otherwise. I disagree with this assessment. I hold out for a locationally uncommitted account of the cognitive that is scientifically driven and theory-loaded, on the grounds (roughly) that our intuitive picture of the cognitive has a deep-seated inner bias, while Clark’s argument for the claim that there is a fundamental mechanistic disunity in cognitive science is far from compelling (Wheeler, forthcoming b).
5 In previous ExM treatments of Bechtel’s logical reasoning studies, Rowlands (1999, pp. 168–71) and Menary (2007, also pp. 168–71) rely at root not on parity considerations to justify the claim of cognitive extension, but rather on the integration of inner connectionist processing with external symbol systems in order to complete a cognitive task that could not ordinarily be achieved by the inner networks alone. My own view is that the mere fact that an external resource is necessary to complete a cognitive task is not sufficient to establish cognitive extension, as opposed to a compelling case of embodied-embedded cognition.
Bibliography
Adams, F. (1991), ‘Causal Contents’, in B. McLaughlin (ed.), Dretske and His Critics (Oxford: Blackwell), 131–56.
—(2003), ‘Thoughts and Their Contents: Naturalized Semantics’, in S. Stich and T. Warfield (eds), The Blackwell Guide to the Philosophy of Mind (Oxford: Blackwell), 143–71.
Adams, F., and Aizawa, K. (2008), The Bounds of Cognition (Oxford: Blackwell).
—(1994), ‘Fodorian Semantics’, in Stich and Warfield (1994), 223–42.
Adams, F., and Dietrich, L. (2004), ‘Swampman’s Revenge: Squabbles among the Representationalists’, Philosophical Psychology, 17: 323–40.
Agre, P. E., and Rosenschein, S. J. (eds) (1996), Computational Theories of Interaction and Agency (Cambridge, MA: MIT Press).
Akins, K. (1993), ‘A Bat without Qualities’, in M. Davies and G. Humphreys (eds), Consciousness (Oxford: Blackwell).
—(1996), ‘Lost the Plot? Reconstructing Daniel Dennett’s Multiple Drafts Theory of Consciousness’, Mind and Language, 2: 1–43.
Aizawa, K. (2003), The Systematicity Arguments (Dordrecht: Kluwer).
Aleksander, I. (2005), The World in My Mind, My Mind in the World: Key Mechanisms of Consciousness in Humans, Animals, and Machines (Exeter: Imprint Academic).
Aleksander, I., and Dunmall, B. (2003), ‘Axioms and Tests for the Presence of Minimal Consciousness in Agents’, in O. Holland (ed.), Machine Consciousness (Exeter: Imprint Academic), 7–18. Special issue of the Journal of Consciousness Studies, 10 (4–5).
Alexander, S. (1920), Space, Time, and Deity, 2 vols. (London: Macmillan).
Anscombe, G., and Geach, P. (1954), Descartes: Philosophical Writings (Indianapolis: Bobbs-Merrill).
Anscombe, G. E. M. (1957), Intention (Oxford: Blackwell).
—(1965), ‘The Intentionality of Sensation: A Grammatical Feature’, in A. Noë and E. Thompson (eds) (2002), Vision and Mind: Selected Readings in the Philosophy of Perception (Cambridge, MA: MIT Press).
Antony, L. (1989), ‘Anomalous Monism and the Problem of Explanatory Force’, Philosophical Review, 98: 153–87.
—(1997), ‘What It’s Like to Smell a Gardenia’, Times Literary Supplement 4897 (7 February).
Arbib, M. A. (1982), ‘Modelling Neural Mechanisms of Visuomotor Coordination in Frog and Toad’, in S. Amari and M. A. Arbib (eds), Competition and Cooperation in Neural Nets, Lecture Notes in Biomathematics, 45 (Berlin: Springer-Verlag), 342–70.
Arbib, M. A., and Hesse, M. B. (1986), The Construction of Reality (Cambridge: Cambridge University Press).
Arbib, M. A., Boylls, C. C., and Dev, P. (1974), ‘Neural Models of Spatial Perception and the Control of Movement’, in W. D. Keidel, W. Handler and M. Spreng (eds), Cybernetics and Bionics (Munich: Oldenbourg), 216–31.
Armstrong, D. (1968), A Materialist Theory of Mind (London: Routledge & Kegan Paul).
—(1981), ‘What is Consciousness?’, in The Nature of Mind (Ithaca, NY: Cornell University Press).
Ayers, M. (1990), Locke, vol. 2 (London: Routledge).
Baker, L. R. (2000), Persons and Bodies: A Constitution View (Cambridge: Cambridge University Press).
Balog, K. (1999), ‘Conceivability, Possibility and the Mind-Body Problem’, The Philosophical Review, 108: 497–528.
Bechtel, W. (1994), ‘Natural Deduction in Connectionist Systems’, Synthese, 101: 433–63.
—(1996), ‘What Knowledge Must Be in the Head in Order to Acquire Language’, in B. Velichkovsky and D. M. Rumbaugh (eds), Communicating Meaning: The Evolution and Development of Language (Hillsdale, NJ: Lawrence Erlbaum Associates).
Bechtel, W., and Abrahamsen, A. (1991), Connectionism and the Mind: An Introduction to Parallel Processing in Networks (Oxford: Basil Blackwell).
Bedau, M. (1999), ‘Supple Laws in Psychology and Biology’, in V. Hardcastle (ed.), Where Biology Meets Psychology (Cambridge, MA: Bradford Books, MIT Press), 251–71.
Beer, R. D. (1996), ‘Toward the Evolution of Dynamical Neural Networks for Minimally Cognitive Behavior’, in P. Maes, M. Mataric, J. Meyer, J. Pollack and S. Wilson (eds), From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior (Cambridge, MA: MIT Press), 421–9.
—(2003), ‘The Dynamics of Active Categorical Perception in an Evolved Model Agent’, Adaptive Behavior, 11 (4): 209–43.
Behan, D. (1979), ‘Locke on Persons and Personal Identity’, Canadian Journal of Philosophy, 9: 53–75.
Block, N. (1978), ‘Troubles with Functionalism’, in C. W. Savage (ed.), Perception and Cognition: Issues in the Foundations of Psychology, Minnesota Studies in the Philosophy of Science, vol. 9 (Minneapolis: University of Minnesota Press), 261–325.
—(1986a), ‘Advertisement for a Conceptual Role Semantics’, in P. French, T. Uehling and H. Wettstein (eds), Studies in the Philosophy of Mind (Minneapolis: University of Minnesota Press).
—(1986b), ‘Advertisement for a Semantics for Psychology’, in P. French, T. Uehling and H. Wettstein (eds), Midwest Studies in Philosophy, vol. 10 (Minneapolis: University of Minnesota Press), 615–78; reprinted in Stich and Warfield (2003), 81–135.
—(1986c), ‘Functional Role and Truth Conditions’, Proceedings of the Aristotelian Society, Supplementary Volume 60: 157–81.
—(1990), ‘Inverted Earth’, in J. E. Tomberlin (ed.), Philosophical Perspectives, vol. 4 (Atascadero, CA: Ridgeview), 53–79.
—(1994), ‘Consciousness’, in Guttenplan (1994), 210–19.
—(1995), ‘On a Confusion about the Function of Consciousness’, Behavioral and Brain Sciences, 18: 227–47.
—(1996), ‘Mental Paint and Mental Latex’, in Enrique Villanueva (ed.), Perception, Philosophical Issues, 7 (Northridge, CA: Ridgeview), 19–48.
—(1998), ‘Is Experience Just Representing?’, Philosophy and Phenomenological Research, 58: 663–70.
—(2003), ‘Do Causal Powers Drain Away?’, Philosophy and Phenomenological Research, 67 (1): 133–50.
—(2007), ‘Consciousness, Accessibility, and the Mesh between Psychology and Neuroscience’, Behavioral and Brain Sciences, 30: 481–548.
—(forthcoming), ‘Comparing the Major Theories of Consciousness’, in M. Gazzaniga (ed.), The Cognitive Neurosciences, 4th edn (Cambridge, MA: MIT Press).
Block, N., and R. Stalnaker (1999), ‘Conceptual Analysis, Dualism and the Explanatory Gap’, The Philosophical Review, 108: 1–46.
Boden, M. A. (1965), ‘McDougall Revisited’, Journal of Personality, 33.
—(1970), ‘Intentionality and Physical Systems’, Philosophy of Science, 37: 200–14.
—(1972), Purposive Explanation in Psychology (Cambridge, MA: Harvard University Press).
—(1978), ‘Human Values in a Mechanistic Universe’, in G. Vesey (ed.), Human Values: Royal Institute of Philosophy Lectures 1976–77 (Brighton: Harvester Press), 135–71.
—(1994), ‘Multiple Personality and Computational Models’, in A. Phillips-Griffiths (ed.), Philosophy, Psychology, and Psychiatry (Cambridge: Cambridge University Press), 103–14.
—(1999), ‘Is Metabolism Necessary?’, British Journal for the Philosophy of Science, 50: 231–48.
—(2000a), ‘Autopoiesis and Life’, Cognitive Science Quarterly, 1: 1–29.
—(2000b), ‘Crafts, Perception, and the Possibilities of the Body’, British Journal of Aesthetics, 40: 289–301.
—(2004), The Creative Mind: Myths and Mechanisms, 2nd edn, expanded/revised (London: Routledge).
—(2006), Mind as Machine: A History of Cognitive Science, 2 vols. (Oxford: Oxford University Press).
—(2007), ‘Creativity and Conceptual Art’, in P. Goldie and E. Schellekens (eds), Philosophy and Conceptual Art (Oxford: Clarendon Press), 216–37.
Boden, M. A., and Edmonds, E. A. (2009), ‘What is Generative Art?’, Digital Creativity, 20 (1–2): 21–46.
Boghossian, P. (1989), ‘Content and Self-Knowledge’, Philosophical Topics, 17: 5–26.
—(1992), ‘Externalism and Inference’, Philosophical Issues, 2: 11–28.
Bond, A. H., and Gasser, L. (eds) (1988), Readings in Distributed Artificial Intelligence (San Francisco: Morgan Kaufmann).
Bontly, T. (2002), ‘The Supervenience Argument Generalizes’, Philosophical Studies, 109 (1): 75–96.
—(2005), ‘Exclusion, Overdetermination, and the Nature of Causation’, Journal of Philosophical Research, 30: 261–82.
Boyer, P. (1994), The Naturalness of Religious Ideas: A Cognitive Theory of Religion (London: University of California Press).
Braddon-Mitchell, D., and Jackson, F. (1996), The Philosophy of Mind and Cognition (Oxford: Blackwell).
Brentano, F. (1874), Psychology from an Empirical Standpoint (London: Routledge).
Bringsjord, S. (1994), ‘Computation, Among Other Things, is Beneath Us’, Minds and Machines, 4 (4): 469–88.
Broad, C. D. (1925), The Mind and Its Place in Nature (London: Routledge & Kegan Paul).
Broadbent, D. E. (1958), Perception and Communication (Oxford: Pergamon Press).
Brooks, R. A. (1991a), ‘Intelligence without Reason’, in Proceedings of 12th International Joint Conference on Artificial Intelligence (San Mateo, CA: Morgan Kaufmann), 569–95; reprinted in Brooks, Cambrian Intelligence: The Early History of the New AI (Cambridge, MA: MIT Press), 133–86.
—(1991b), ‘Intelligence without Representation’, Artificial Intelligence, 47: 139–59.
Brown, J. (1995), ‘The Incompatibility of Anti-Individualism and Privileged Access’, Analysis, 53: 149–56.
—(2004), Anti-Individualism and Knowledge (Cambridge, MA: MIT Press).
Brueckner, A. (1992), ‘What an Anti-Individualist Knows A Priori’, Analysis, 52: 111–18.
Bruner, J. S., Goodnow, J., and Austin, G. (1956), A Study of Thinking (New York: Wiley).
Burge, T. (1973), ‘Reference and Proper Names’, Journal of Philosophy, 70: 425–39.
—(1977), ‘Belief De Re’, Journal of Philosophy, 74: 338–62.
—(1979), ‘Individualism and the Mental’, in P. French, T. Uehling, Jr., and H. Wettstein (eds), Midwest Studies in Philosophy, vol. 4 (Minneapolis: University of Minnesota Press), 73–121.
—(1982), ‘Other Bodies’, in A. Woodfield (ed.), Thought and Object: Essays on Intentionality (Oxford: Clarendon Press).
—(1986a), ‘Individualism and Psychology’, Philosophical Review, 95 (1): 3–45.
—(1986b), ‘Intellectual Norms and Foundations of Mind’, Journal of Philosophy, 83 (12): 697–720.
—(1988), ‘Individualism and Self-Knowledge’, Journal of Philosophy, 85: 649–63.
—(1989), ‘Individuation and Causation in Psychology’, Pacific Philosophical Quarterly, 70: 303–22.
—(1993a), ‘Content Preservation’, Philosophical Review, 102: 457–88.
—(1993b), ‘Mind-Body Causation and Explanatory Practice’, in J. Heil and A. Mele (eds), Mental Causation (Oxford: Oxford University Press).
—(1998), ‘Memory and Self-Knowledge’, in P. Ludlow and N. Martin (eds), Externalism and Self-Knowledge (Stanford: CSLI).
Bush, V. (1945), ‘As We May Think’, Atlantic Monthly, 176 (July): 101–8; reprinted in R. Packer and K. Jordan (eds), Multimedia: From Wagner to Virtual Reality (London: W. W. Norton, 2001), 135–53.
Butler, J. (1975) [1736], ‘Of Personal Identity’, in J. Perry (ed.), Personal Identity (Berkeley and Los Angeles: University of California Press).
Byrne, A. (1997), ‘Some Like It Hot: Consciousness and Higher-Order Thoughts’, Philosophical Studies, 86: 103–29.
—(2001), ‘Intentionalism Defended’, The Philosophical Review, 110: 199–240.
Calude, C., Casti, J. L., and Dinneen, M. (eds) (1998), Unconventional Models of Computation (London: Springer).
Campbell, J. (1994), Past, Space, and Self (Cambridge, MA: MIT Press).
Campbell, N. (1997), ‘The Standard Objection to Anomalous Monism’, Australasian Journal of Philosophy, 73 (3): 373–82.
—(2000), ‘Supervenience and Psycho-Physical Dependence’, Dialogue: Canadian Philosophical Review, 39: 303–16.
—(2003), ‘Causes and Causal Explanations: Davidson and His Critics’, Philosophia: Philosophical Quarterly of Israel, 31 (1–2): 149–57.
—(2007), ‘Explanatory Pluralism’, International Journal of the Humanities, 5 (3): 25–29.
—(2008a), ‘Explanatory Exclusion and the Individuation of Explanations’, Facta Philosophica, 10.
—(2008b), Mental Causation: A Nonreductive Approach (New York: Peter Lang).
Campbell, N., and Moore, D. (2009), ‘On Kim’s Exclusion Principle’, Synthese, 169 (1): 75–90.
Cariani, P. (1992), ‘Emergence and Artificial Life’, in C. G. Langton, C. Taylor, J. D. Farmer and S. Rasmussen (eds), Artificial Life II (Redwood City, CA: Addison-Wesley), 775–97.
Carruthers, P. (1996), Language, Thought and Consciousness (Cambridge: Cambridge University Press).
—(2000), Phenomenal Consciousness: A Naturalistic Theory (Cambridge: Cambridge University Press).
Cartwright, R. T. (1960/87), ‘Negative Existentials’, in his Philosophical Essays (Cambridge, MA: MIT Press).
Chalmers, D. (1995), ‘Facing Up to the Problem of Consciousness’, Journal of Consciousness Studies, 2: 200–19.
—(1996), The Conscious Mind: In Search of a Fundamental Theory (Oxford: Oxford University Press).
—(1999a), ‘First-Person Methods in the Science of Consciousness’, Bulletin from the Center for Consciousness Studies.
—(1999b), ‘Materialism and the Metaphysics of Modality’, Philosophy and Phenomenological Research, 59: 473–93.
—(2002a), ‘Consciousness and Its Place in Nature’, in Chalmers (2002b), 247–72.
—(2002b), Philosophy of Mind: Classical and Contemporary Readings (New York: Oxford University Press).
—(2003a), ‘Consciousness and Its Place in Nature’, in Stich and Warfield (2003), 102–42.
—(2003b), ‘The Matrix as Metaphysics’, 20 March. Available at http://whatisthematrix.warnerbros.com.
—(2003c), ‘The Nature of Narrow Content’, Philosophical Issues, 13: 46–66.
—(2008), Foreword to Andy Clark’s Supersizing the Mind, in Clark (2008b), ix–xvi.
—(forthcoming), ‘Two Dimensional Semantics’, in E. Lepore and B. Smith (eds), Oxford Handbook to the Philosophy of Language (Oxford: Oxford University Press).
Chisholm, R. M. (1956), ‘Perceiving: A Philosophical Study’, in D. Rosenthal (ed.), The Nature of Mind (Oxford: Oxford University Press, 1990), chapter 11.
—(1970), ‘Events and Propositions’, Nous, 4 (1): 15–24.
—(1976), Person and Object: A Metaphysical Study (London: G. Allen & Unwin and La Salle, IL: Open Court).
Chomsky, N. (1994), ‘Chomsky, Noam’, in Guttenplan (1994), 153–67.
Chrisley, R. L. (2000), ‘Transparent Computationalism’, in M. Scheutz (ed.), New Computationalism (Berlin: Academia Verlag, 2000), 105–20.
Churchland, P. M. (1981), ‘Eliminative Materialism and the Propositional Attitudes’, Journal of Philosophy, 78: 67–90.
—(1986), ‘Some Reductive Strategies in Cognitive Neurobiology’, Mind, 95: 279–309.
—(1989a), ‘Knowing Qualia: A Reply to Jackson’, in his A Neurocomputational Perspective (Cambridge, MA: MIT Press).
—(1989b), A Neurocomputational Perspective: The Nature of Mind and the Structure of Science (Cambridge, MA: MIT Press).
—(1995), The Engine of Reason and Seat of the Soul (Cambridge, MA: MIT Press).
Churchland, P. M., and Churchland, P. S. (1981), ‘Functionalism, Qualia, and Intentionality’, Philosophical Topics, 12: 121–45.
Churchland, P. S. (1986), Neurophilosophy: Towards a Unified Theory of the Mind-Brain (Cambridge, MA: MIT Press).
Churchland, P. S., and Sejnowski, T. J. (1992), The Computational Brain (Cambridge, MA: MIT Press).
Clark, A. J. (1989), Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing (Cambridge, MA: MIT Press).
—(1990), ‘Connectionist Minds’, Proceedings of the Aristotelian Society, 90: 83–102.
—(1991), ‘Systematicity, Structured Representations and Cognitive Architecture: A Reply to Fodor and Pylyshyn’, in T. Horgan and J. Tienson (eds), Connectionism and the Philosophy of Mind (Dordrecht: Kluwer), 198–218.
—(1993), Associative Engines: Connectionism, Concepts, and Representational Change (Cambridge, MA: MIT Press).
—(1996), ‘Connectionism, Moral Cognition, and Collaborative Problem-Solving’, in L. May, M. Friedman and A. Clark (eds), Mind and Morals: Essays on Ethics and Cognitive Science (Cambridge, MA: MIT Press), 109–27.
—(1997), Being There: Putting Brain, Body, and World Together Again (Cambridge, MA: MIT Press).
—(2003a), Natural-Born Cyborgs: Why Minds and Technologies are Made to Merge (Oxford: Oxford University Press).
—(2003b), ‘The Twisted Matrix: Dream, Simulation or Hybrid?’, 19 December. Available at http://whatisthematrix.warnerbros.com.
—(2008a), ‘Pressing the Flesh: A Tension in the Study of the Embodied, Embedded Mind?’, Philosophy and Phenomenological Research, 76 (1): 37–59.
—(2008b), Supersizing the Mind: Embodiment, Action, and Cognitive Extension (New York: Oxford University Press).
Clark, A. J., and Chalmers, D. J. (1998), ‘The Extended Mind’, Analysis, 58: 7–19.
Clark, A. J., and Grush, R. (1999), ‘Towards a Cognitive Robotics’, Adaptive Behavior, 7: 5–16.
Clark, A. J., and Karmiloff-Smith, A. (1993), ‘The Cognizer’s Innards: A Psychological and Philosophical Perspective on the Development of Thought’, Mind and Language, 8: 487–519.
Clark, A. J., and Thornton, C. (1997), ‘Trading Spaces: Computation, Representation, and the Limits of Uninformed Learning’, Behavioral and Brain Sciences, 20: 57–90.
Clark, A. J., and Wheeler, M. W. (1999), ‘Genic Representation: Reconciling Content and Causal Complexity’, British Journal for the Philosophy of Science, 50: 103–35.
Cole, M., and Bruner, J. S. (1971), ‘Cultural Differences and Inferences about Psychological Processes’, American Psychologist, 26: 867–76.
Collins, H. (2000), ‘Four Kinds of Knowledge, Two (or Maybe Three) Kinds of Embodiment, and the Question of Artificial Intelligence’, in M. Wrathall and J. Malpas (eds), Heidegger, Coping and Cognitive Science: Essays in Honor of Hubert L. Dreyfus, vol. 2 (Cambridge, MA: MIT Press), 179–95.
Collins, S. (1982), Selfless Persons: Imagery and Thought in Theravada Buddhism (Cambridge: Cambridge University Press).
Conee, E. (1994), ‘Phenomenal Knowledge’, Australasian Journal of Philosophy, 72: 136–50.
Cooper, R., Shallice, T., and Farringdon, J. (1995), ‘Symbolic and Continuous Processes in the Automatic Selection of Actions’, in J. Hallam (ed.), Hybrid Problems, Hybrid Solutions (Oxford: IOS Press), 27–37.
Copeland, B. J. (1993), Artificial Intelligence: A Philosophical Introduction (Oxford: Blackwell).
Copeland, B. J., and Sylvan, R. (1999), ‘Beyond the Universal Turing Machine’, Australasian Journal of Philosophy, 77: 46–67.
Craik, K. J. W. (1943), The Nature of Explanation (Cambridge: Cambridge University Press).
Crane, T. (1998), ‘Intentionality as the Mark of the Mental’, in Anthony O’Hear (ed.), Contemporary Issues in the Philosophy of Mind (Cambridge: Cambridge University Press), 229–51.
—(2006), ‘Is There a Perceptual Relation?’, in T. S. Gendler and J. Hawthorne (eds), Perceptual Experience (Oxford: Oxford University Press).
Crane, T., and Mellor, D. H. (1990), ‘There is No Question of Physicalism’, Mind, 99: 185–206.
Crick, F. (1994), The Astonishing Hypothesis: The Scientific Search for the Soul (New York: Charles Scribner’s Sons).
Crick, F. H. C., and Koch, C. (1990), ‘Towards a Neurobiological Theory of Consciousness’, Seminars in Neuroscience, 2: 263–75.
Cummins, R. (1989), Meaning and Mental Representation (Cambridge, MA: MIT Press).
Cussins, A. (1990), ‘The Connectionist Construction of Concepts’, in M. A. Boden (ed.), The Philosophy of Artificial Intelligence (Oxford: Oxford University Press), 368–440.
Dale, K., and Husbands, P. (2010), ‘The Evolution of Reaction-Diffusion Controllers for Minimally Cognitive Agents’, Artificial Life, 16 (1): 1–19.
Daly, C. (1997), ‘What are Physical Properties?’, Pacific Philosophical Quarterly, 79 (3): 196–217.
Damasio, A. R. (1994), Descartes’ Error: Emotion, Reason and the Human Brain (New York: Putnam).
Davidson, D. (1970), ‘Mental Events’, in L. Foster and J. Swanson (eds), Experience and Theory (London: Duckworth), 79–101; reprinted in Davidson (1980), 207–25. Page numbers in this chapter refer to the reprint.
—(1974), ‘Special Sciences or the Disunity of Science as a Working Hypothesis’, in Synthese, 28: 97–115; reprinted in Fodor (1981).
—(1975), The Language of Thought (Cambridge, MA: Harvard University Press).
—(1980), Essays on Actions and Events (Oxford: Oxford University Press).
—(1980) [1969], ‘On the Individuation of Events’, in his Essays on Actions and Events (Oxford: Clarendon Press).
—(1981), RePresentations: Philosophical Essays on the Foundations of Cognitive Science (Cambridge, MA: Bradford Books, MIT Press).
—(1982), ‘Rational Animals’, in Dialectica, 36: 317–27.
—(1984a), ‘First-Person Authority’, in Dialectica, 38: 101–11; reprinted in Subjective, Intersubjective, Objective (2001).
—(1984b), Inquiries into Truth and Interpretation (Oxford: Oxford University Press).
—(1985), ‘Fodor’s Guide to Mental Representation: The Intelligent Auntie’s Vade-Mecum’, Mind, 94: 76–100.
—(1986a), ‘A Coherence Theory of Truth and Knowledge’, in Lepore (1986).
—(1986b), ‘The Myth of the Subjective’, in M. Krausz (ed.), Relativism: Interpretation and Confrontation (Indiana: Notre Dame); reprinted in Davidson (2001).
—(1987a), ‘Knowing One’s Own Mind’, in Proceedings from the American Philosophical Association, 61: 441–58; reprinted in Q. Cassam (ed.), Self-Knowledge (Oxford: Oxford University Press); and in Davidson (2001).
—(1987b), Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: MIT Press).
—(1990), A Theory of Content and Other Essays (Cambridge, MA: Bradford Books, MIT Press).
—(1991), ‘A Modal Argument for Narrow Content’, Journal of Philosophy, 88: 5–26.
—(1993), ‘Thinking Causes’, in J. Heil and A. Mele (eds), Mental Causation (Oxford: Oxford University Press), 3–17.
—(1994), The Elm and the Expert (Cambridge, MA: MIT Press).
—(2001), Subjective, Intersubjective, Objective (Oxford: Oxford University Press).
Davies, M. (1983), ‘Function in Perception’, Australasian Journal of Philosophy, 61: 409–26.
—(2000), ‘Externalism and Armchair Knowledge’, in P. Boghossian and C. Peacocke (eds), New Essays on the A Priori (Oxford: Oxford University Press).
Davies, M., and Humphreys, G. (1993), Consciousness: Psychological and Philosophical Essays (Oxford: Blackwell).
Davies, M., and Stone, T. (1995), Mental Simulation: Evaluations and Applications (Oxford: Blackwell).
De Jaegher, H., and Di Paolo, E. A. (2007), ‘Participatory Sense-Making: An Enactive Approach to Social Cognition’, Phenomenology and Cognitive Science, 6 (4): 485–507.
Dennett, D. C. (1969), Content and Consciousness: An Analysis of Mental Phenomena (London: Routledge & Kegan Paul).
—(1971), ‘Intentional Systems’, Journal of Philosophy, 68: 87–106.
—(1978), ‘Why Not the Whole Iguana?’, Behavioral and Brain Sciences, 1: 103–4.
—(1981a), Brainstorms (Brighton: Harvester).
—(1981b), ‘True Believers: The Intentional Strategy and Why It Works’, in A. F. Heath (ed.), Scientific Explanation: Papers Based on Herbert Spencer Lectures Given in the University of Oxford (Oxford: Oxford University Press), 53–75.
—(1984), Elbow Room: The Varieties of Free Will Worth Wanting (Cambridge, MA: MIT Press).
—(1987a), The Intentional Stance (Cambridge, MA: MIT Press).
—(1987b), ‘True Believers: The Intentional Strategy and Why It Works’, in his The Intentional Stance (Cambridge, MA: MIT Press), 13–35; reprinted in Rosenthal (1991), 339–50.
—(1988), ‘Quining Qualia’, in A. Marcel and E. Bisiach (eds), Consciousness in Contemporary Science (Oxford: Oxford University Press), 42–77.
—(1991a), Consciousness Explained (New York: Penguin Books).
—(1991b), ‘Real Patterns’, Journal of Philosophy, 88: 27–51.
—(1994), ‘Dennett, Daniel C.’, in Guttenplan (1994).
—(1995), ‘The Unimagined Preposterousness of Zombies: Commentary on Moody, Flanagan, and Polger’, Journal of Consciousness Studies, 2: 322–6.
—(1996), ‘The Case for Rorts’, in R. Brandom (ed.), Rorty and His Critics (Oxford: Blackwell), 91–101.
—(1997), Kinds of Minds: Towards an Understanding of Consciousness (New York: Basic Books).
—(2001), Consciousness Explained (New York: Little Brown).
—(2003), ‘Who’s On First? Heterophenomenology Explained’, Journal of Consciousness Studies, 10: 19–30.
—(2006), Sweet Dreams: Philosophical Obstacles to a Science of Consciousness (Cambridge, MA: MIT Press).
Descartes, R. (1637/1985), ‘Discourse on the Method’, in J. Cottingham, R. Stoothoff and D. Murdoch (trans. and eds), The Philosophical Writings of Descartes, vol. 1 (Cambridge: Cambridge University Press), 111–51.
—(1642/1984), ‘Meditations on First Philosophy’, in J. Cottingham, R. Stoothoff and D. Murdoch (trans. and eds), The Philosophical Writings of Descartes, vol. 2 (Cambridge: Cambridge University Press), 12–62.
—(1994), in J. Cottingham, R. Stoothoff and D. Murdoch (trans. and eds), The Philosophical Writings of Descartes, vol. 2 (Cambridge: Cambridge University Press), 275.
Devitt, M. (1996), Coming to Our Senses (Cambridge: Cambridge University Press).
Devitt, M., and Sterelny, K. (1987/99), Language and Reality (Cambridge, MA: MIT Press).
Di Paolo, E. A. (2009), ‘Extended Life’, Topoi, 28: 9–21.
Dienes, Z., and Perner, J. (1999), ‘A Theory of Implicit and Explicit Knowledge’, Behavioral and Brain Sciences, 22: 735–808.
—(2007), ‘The Cold Control Theory of Hypnosis’, in G. Jamieson (ed.), Hypnosis and Conscious States: The Cognitive Neuroscience Perspective (Oxford: Oxford University Press), 293–314.
Dijksterhuis, E. J. (1961), The Mechanization of the World-Picture (Oxford: Clarendon).
Dowell, J. L. (2006), ‘Formulating the Thesis of Physicalism’, Philosophical Studies, 131 (1): 1–23.
—(2006), ‘Physical: Empirical not Metaphysical’, Philosophical Studies, 131 (1): 25–60.
Dretske, F. (1980), ‘The Intentionality of Cognitive States’, in D. Rosenthal (ed.), The Nature of Mind (Oxford: Oxford University Press, 1990).
—(1981), Knowledge and the Flow of Information (Cambridge, MA: Bradford Books, MIT Press).
—(1985a), ‘Constraints and Meaning’, Linguistics and Philosophy, 8 (1): 9–12.
—(1985b), ‘Machines and the Mental’, Proceedings and Addresses of the American Philosophical Association, 59: 23–33.
—(1986), ‘Misrepresentation’, in R. Bogdan (ed.), Belief (Oxford: Oxford University Press).
—(1988), Explaining Behavior: Reasons in a World of Causes (Cambridge, MA: MIT Press).
—(1993), ‘Conscious Experience’, Mind, 102: 263–83.
—(1994), ‘Differences That Make No Difference’, Philosophical Topics, 22 (1–2): 41–58.
—(1995), Naturalizing the Mind (Cambridge, MA: MIT Press).
Dreyfus, H. L. (1967), ‘Why Computers Must Have Bodies in Order to be Intelligent’, Review of Metaphysics, 21: 13–32.
—(1972), What Computers Can’t Do: A Critique of Artificial Reason (New York: Harper & Row).
—(2003), ‘Existentialist Phenomenology and the Brave New World of The Matrix’, The Harvard Review of Philosophy, 11 (Fall): 18–31. Amended version of Dreyfus and Dreyfus (2002).
Dreyfus, H. L., and Dreyfus, Stephen (2002), ‘The Brave New World of the Matrix’, 20 November. Available at http://whatisthematrix.warnerbros.com. For an amended version see Dreyfus (2003).
Dreyfus, H. L., and Dreyfus, Stuart E. (1988), ‘Making a Mind Versus Modelling the Brain: Artificial Intelligence Back at a Branch Point’, in S. Graubard (ed.), The Artificial Intelligence Debate: False Starts, Real Foundations (Cambridge, MA: MIT Press), 15–43.
Eccles, J. (1987), ‘Brain and Mind: Two or One?’, in C. Blakemore and S. Greenfield (eds), Mindwaves (Oxford: Blackwell).
Eccles, J., and Popper, K. (1977), The Self and Its Brain (New York: Springer).
Efron, A. (1992), ‘Residual Asymmetric Dualism: A Theory of Mind-Body Relations’, Journal of Mind and Behaviour, 13: 113–36.
Elman, J. L. (1990), ‘Finding Structure in Time’, Cognitive Science, 14: 179–212.
—(1993), ‘Learning and Development in Neural Networks: The Importance of Starting Small’, Cognition, 48: 71–99.
Elster, J. (1999), Alchemies of the Mind: Rationality and the Emotions (Cambridge: Cambridge University Press).
Elugardo, R. (2002), ‘The Predicate View of Proper Names’, in G. Preyer and G. Peter (eds), Logical Form and Language (Oxford: Clarendon Press).
Enc, B. (1982), ‘Intentional States of Mechanical Devices’, Mind, 91: 161–82.
Engelbart, D. C. (1962), Augmenting Human Intellect: A Conceptual Framework. Report No. AFOSR 3233, prepared for the Air Force Office of Scientific Research (Menlo Park, CA: Stanford Research Institute); reprinted, abridged, in R. Packer and K. Jordan (eds), Multimedia: From Wagner to Virtual Reality (London: W. W. Norton, 2001), 64–90.
Evans, G. (1982), The Varieties of Reference, J. McDowell (ed.) (Oxford: Oxford University Press).
Everett, A., and Hofweber, T. (2000), Empty Names, Fiction and the Puzzles of Non-Existence (Stanford: CSLI).
Ezquerro, J., and Vicente, A. (2000), ‘Explanatory Exclusion, Over-Determination, and the Mind-Body Problem’. Paper read at the Twentieth World Congress of Philosophy, Boston.
Feigl, H. (1958), ‘The “Mental” and the “Physical”’, in H. Feigl, G. Maxwell and M. Scriven (eds), Concepts, Theories and the Mind-Body Problem: Minnesota Studies in the Philosophy of Science, vol. 2 (Minneapolis: University of Minnesota Press), 370–497; republished in 1967 with a new postscript, preface to the postscript and additional bibliography.
Field, H. (1978), ‘Mental Representation’, Erkenntnis, 13; reprinted with postscripts in N. Block (ed.), Readings in Philosophy of Psychology, vol. 2 (London: Methuen).
—(1992), ‘Physicalism’, in J. Earman (ed.), Inference, Explanation and Other Frustrations (Berkeley: University of California Press), 271–92.
Fish, W. (2009), Perception, Hallucination, and Illusion (Oxford: Oxford University Press).
Fitch, T. (2007), ‘Nano-Intentionality’, Biology & Philosophy, 23: 157–77.
Flanagan, O. (1991), The Science of the Mind (Cambridge, MA: MIT Press).
—(1992), Consciousness Reconsidered (Cambridge, MA: MIT Press).
Fodor, J. A. (1968), Psychological Explanation: An Introduction to the Philosophy of Psychology (New York: Random House).
—(1975), The Language of Thought (New York: Thomas Crowell).
—(1980), ‘Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology’, Behavioral and Brain Sciences, 3: 63–73.
—(1983), The Modularity of Mind: An Essay in Faculty Psychology (Cambridge, MA: MIT Press).
—(1984), ‘Semantics, Wisconsin Style’, Synthese, 59: 231–350; and in his A Theory of Content (Cambridge, MA: MIT Press, 1990), 31–49.
—(1986a), ‘Banish Dis-Content’, in J. Butterfield (ed.), Language, Mind and Logic (Cambridge: Cambridge University Press).
—(1986b), ‘Why Paramecia Don’t Have Mental Representations’, in P. French, T. Uehling, Jr., and H. Wettstein (eds), Midwest Studies in Philosophy, vol. 10 (Minneapolis: University of Minnesota Press), 3–23.
—(1987), Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: MIT Press).
—(1988), ‘Connectionism and Cognitive Architecture: A Critical Analysis’, Cognition, 28: 3–71.
—(1990a), ‘Fodor’s Guide to Mental Representation’, in Fodor (1990c), 3–29.
—(1990b), ‘Making Mind Matter More’, in Fodor (1990c), 137–59.
—(1990c), A Theory of Content and Other Essays (Cambridge, MA: MIT Press).
—(1990d), ‘A Theory of Content, I: The Problem’, in Fodor (1990c), 51–87.
—(1990e), ‘A Theory of Content, II: The Theory’, in Fodor (1990c), 89–136.
—(1991a), ‘A Modal Argument for Narrow Content’, Journal of Philosophy, 88: 5–26.
—(1991b), ‘Replies’, in B. Loewer and G. Rey (eds), Meaning in Mind: Fodor and His Critics (Oxford: Blackwell), 255–319.
—(1998), Concepts: Where Cognitive Science Went Wrong (Cambridge, MA: MIT Press).
—(2000), The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology (Cambridge, MA: MIT Press).
—(2009), ‘Where is My Mind?’, London Review of Books, 31 (3), 12 February.
Fodor, J. A., and Lepore, E. (1992), Holism: A Shopper’s Guide (Cambridge, MA: MIT Press).
Fodor, J. A., and McLaughlin, B. P. (1990), ‘Connectionism and the Problem of Systematicity: Why Smolensky’s Solution Doesn’t Work’, Cognition, 35: 183–204.
Fodor, J. A., and Pylyshyn, Z. W. (1981), ‘How Direct is Visual Perception?: Some Reflections on Gibson’s “Ecological Approach”’, Cognition, 9: 139–96.
Foster, J. (1989), ‘A Defence of Dualism’, in J. Smythies and J. Beloff (eds), The Case for Dualism (Charlottesville: University of Virginia Press).
—(1991), The Immaterial Self: A Defence of the Cartesian Dualist Conception of Mind (London: Routledge).
—(1996), The Immaterial Self (London: Routledge).
Frege, G. (1953) [1884], The Foundations of Arithmetic, trans. J. L. Austin (Oxford: Blackwell).
—(1960) [1892], ‘On Sense and Reference’, in P. T. Geach and M. Black (eds), Translations from the Philosophical Writings of Gottlob Frege, 2nd edn (Oxford: Blackwell).
Freud, S. (1917/1991), Introductory Lectures on Psychoanalysis (Harmondsworth: Penguin).
Frith, C. D. (2007), Making Up the Mind: How the Brain Creates Our Mental World (Oxford: Blackwell).
Frith, C. D., and Frith, U. (2000), ‘The Physiological Basis of Theory of Mind: Functional Neuroimaging Studies’, in S. Baron-Cohen, H. Tager-Flusberg and D. J. Cohen (eds), Understanding Other Minds: Perspectives from Developmental Cognitive Neuroscience, 2nd edn (Oxford: Oxford University Press), 334–56.
Frith, C. D., Perry, R., and Lumer, E. (1999), ‘The Neural Correlates of Conscious Experience: An Experimental Framework’, Trends in Cognitive Science, 3: 105–14.
Gallagher, S. (2008), ‘Direct Perception in the Intersubjective Context’, Consciousness and Cognition, 17: 535–43.
Gallagher, S., and Zahavi, D. (2007), The Phenomenological Mind (London: Routledge).
Gallese, V. (2006), ‘Intentional Attunement: A Neurophysiological Perspective on Social Cognition and Its Disruption in Autism’, Brain Research, 1079: 15–24.
Gallistel, C. (1990), The Organization of Learning (Cambridge, MA: MIT Press).
—(2008), ‘Learning and Representation’, in R. Menzel (ed.), Learning Theory and Behavior, vol. 1 of Learning and Memory: A Comprehensive Reference, 4 vols. (J. Byrne, ed.) (Oxford: Elsevier), 227–42.
Garrett, B. (1998), Personal Identity and Self-Consciousness (London: Routledge).
Gates, G. (1996), ‘The Price of Information’, Synthese, 107 (3): 325–47.
Geach, P. T. (1980), ‘Some Remarks on Representations’, Behavioral and Brain Sciences, 3: 80–1.
Geertz, C. (1973), ‘Thick Description: Toward an Interpretive Theory of Culture’, in C. Geertz (ed.), The Interpretation of Cultures: Selected Essays (New York: Basic Books), 3–32.
Gibb, S. (2006), ‘Why Davidson is Not a Property Epiphenomenalist’, International Journal of Philosophical Studies, 14 (3): 407–22.
Gibson, E. J., and Walk, R. D. (1960), ‘The Visual Cliff’, Scientific American, 202 (April): 64–71.
Gibson, J. J. (1950), The Perception of the Visual World (Cambridge, MA: Riverside Press).
—(1966), The Senses Considered as Perceptual Systems (Westport, CT: Greenwood Press).
—(1977), 'The Theory of Affordances', in R. Shaw and J. Bransford (eds), Perceiving, Acting, and Knowing: Toward an Ecological Psychology (Hillsdale, NJ: Lawrence Erlbaum), 67–82.
Gillett, C. (2001), 'Does the Argument from Realization Generalize? Responses to Kim', Southern Journal of Philosophy, 39 (1): 79–98.
Gillett, C., and Loewer, B. (2001), Physicalism and Its Discontents (Cambridge: Cambridge University Press).
Gillett, C., and Rives, B. (2005), 'The Non-Existence of Determinables: Or a World of Absolute Determinates as a Default Hypothesis', Noûs, 39: 483–504.
Glenberg, A., and Adams, F. (1978), 'Type I Rehearsal and Recognition', Journal of Verbal Learning and Verbal Behavior, 17: 455–63.
Godfrey-Smith, P. (1994a), 'A Continuum of Semantic Optimism', in Stich and Warfield (1994), 259–77.
—(1994b), 'Spencer and Dewey on Life and Mind', in R. A. Brooks and P. Maes (eds), Artificial Life IV (Cambridge, MA: MIT Press), 80–9.
Gois, I. (2007), 'On a Misconception about Consciousness', unpublished Ph.D. thesis, University of London.
Goldberg, S. (2003), 'On Our Alleged A Priori Knowledge That Water Exists', Analysis, 63: 38–41.
—(2007), 'Anti-Individualism, Content Preservation, and Discursive Justification', Noûs, 41: 178–203.
Goodale, M. A., and Milner, A. D. (1992), 'Separate Visual Pathways for Perception and Action', Trends in Neuroscience, 13: 20–23.
Grau, C. (ed.) (2005), Philosophical Essays on the Matrix (New York: Oxford University Press).
Green, C. (2003), The Lost Cause (Oxford: Forum).
Greenfield, P. M., and Bruner, J. S. (1969), 'Culture and Cognitive Growth', in D. A. Goslin (ed.), Handbook of Socialization Theory and Research (Chicago: Rand McNally), 633–54.
Grice, H. P. (1961/1965), 'The Causal Theory of Perception', in R. Swartz (ed.), Perceiving, Sensing, and Knowing (New York: Doubleday).
Griffin, D. R. (1978), 'Prospects for a Cognitive Ethology', Behavioral and Brain Sciences, 4: 527–38.
—(1984), Animal Thinking (Cambridge, MA: Harvard University Press).
Griffiths, P. (1997), What Emotions Really Are: The Problem of Psychological Categories (Chicago: Chicago University Press).
Grush, R. (2004), 'The Emulation Theory of Representation: Motor Control, Imagery, and Perception', Behavioral and Brain Sciences, 27: 377–442.
Gulick, R. V. (1992), 'Three Bad Arguments for Intentional Property Epiphenomenalism', Erkenntnis, 36 (3): 311–32.
Guttenplan, S. (1994), A Companion to the Philosophy of Mind (Oxford: Blackwell).
Harman, G. (1990), 'The Intrinsic Quality of Experience', in J. E. Tomberlin (ed.), Philosophical Perspectives, 4: 31–52; reprinted in N. Block, O. Flanagan and G. Güzeldere (eds) (1997), The Nature of Consciousness, 663–75.
Harnish, M. (2002), Minds, Brains, Computers: An Historical Introduction to the Foundations of Cognitive Science (Oxford: Blackwell).
Hart, W. D. (1994), 'Dualism', in S. Guttenplan (ed.), A Companion to the Philosophy of Mind (Oxford: Blackwell).
Hattiangadi, A. (2007), Oughts and Thoughts (Oxford: Oxford University Press).
Haugeland, J. (1983), 'Weak Supervenience', American Philosophical Quarterly, 19: 93–103.
—(1985), Artificial Intelligence: The Very Idea (Cambridge, MA: MIT Press).
—(1995/1998), 'Mind Embodied and Embedded', in Having Thought: Essays in the Metaphysics of Mind (Cambridge, MA: Harvard University Press), Chapter 9, 207–37.
Hawthorne, J. (2002), 'Blocking Definitions of Materialism', Philosophical Studies, 110 (2): 103–13.
—(2003), From an Ontological Point of View (Oxford: Oxford University Press).
—(2007), 'Cartesian Dualism', in P. van Inwagen and D. Zimmerman (eds), Persons Human and Divine (Oxford: Oxford University Press).
Heil, J. (1988), 'Privileged Access', Mind, 97: 238–51.
—(2003), From an Ontological Point of View (Oxford: Oxford University Press).
—(2008), 'Anomalous Monism', in H. Dyke (ed.), From Truth to Reality: New Essays in Metaphysics (London: Routledge), 85–98.
Heller, M. (1990), The Ontology of Physical Objects: Four-Dimensional Hunks of Matter (Cambridge: Cambridge University Press).
Hempel, C. (1963), 'Reasons and Covering Laws in Historical Explanation', in S. Hook (ed.), Philosophy and History: A Symposium (New York: New York University Press).
—(1965), Aspects of Scientific Explanation (New York: Free Press).
—(1966), Philosophy of Natural Science, Prentice-Hall Foundations of Philosophy series (Englewood Cliffs, NJ: Prentice-Hall).
—(1996), 'Laws and Their Role in Scientific Explanation', in C. Hempel (ed.), Philosophy of Natural Science (Englewood Cliffs: Prentice Hall).
Hempel, C., and Oppenheim, P. (1953), 'The Logic of Explanation', in H. Feigl and M. Brodbek (eds), Readings in the Philosophy of Science (New York: Appleton).
Herbert, R. T. (1998), 'Dualism/Materialism', Philosophical Quarterly, 48: 159–75.
Hess, P. (1981), 'Actions, Reasons, and Humean Causes', Analysis, 40: 77–81.
Himma, K. E. (2005), 'When a Problem for All is a Problem for None: Substance Dualism, Physicalism and the Mind-Body Problem', Australian Journal of Philosophy, 42 (2): 81–92.
Hinton, G. E. (1980), 'Inferring the Meaning of Direct Perception', Behavioral and Brain Sciences, 3: 387–8.
—(1990), 'Representing Part-Whole Hierarchies in Connectionist Networks', Artificial Intelligence, 46: 47–75. Special issue on Connectionist Symbol Processing.
Hirsch, E. (1982), The Concept of Identity (Oxford: Oxford University Press).
Hodgson, D. (1991), The Mind Matters (Oxford: Oxford University Press).
Honderich, T. (1981), 'Psychophysical Law-Like Connections and Their Problems', Inquiry, 24: 277–303.
—(1982), 'The Argument for Anomalous Monism', Analysis, 42: 59–64.
—(1983), 'Anomalous Monism: Reply to Smith', Analysis, 43: 147–9.
—(1984), 'Smith and the Champion of Mauve', Analysis, 44: 86–9.
—(2004), On Consciousness (Edinburgh: Edinburgh University Press).
Horgan, T. (1978), 'The Case Against Events', The Philosophical Review, 87 (1): 28–47.
—(1983), 'Supervenience and Microphysics', Pacific Philosophical Quarterly, 63: 29–43.
—(1993), 'From Supervenience to Superdupervenience: Meeting the Demands of a Material World', Mind, 102 (408): 555–86.
Horgan, T., and Woodward, J. (1985), 'Folk Psychology is Here to Stay', Philosophical Review, 94: 197–225.
Hornsby, J. (1980), Actions (London: Routledge).
Horwich, P. (1998), Meaning (Oxford: Oxford University Press).
—(2005), Reflections on Meaning (Oxford: Oxford University Press).
—(2006), 'The Value of Truth', Noûs, 40: 347–60.
Hudson, H. (2001), A Materialist Metaphysics of the Human Person (Ithaca, NY: Cornell University Press).
—(2007), 'I Am Not an Animal!', in P. van Inwagen and D. Zimmerman (eds), Persons: Human and Divine (Oxford: Clarendon Press).
Humphreys, N. (1982), Consciousness Regained (Oxford: Oxford University Press).
Hunt, E. B. (1962), Concept Learning: An Information Processing Problem (New York: Wiley).
Hurley, S. L. (1998), Consciousness in Action (Cambridge, MA: Harvard University Press).
—(forthcoming), 'Varieties of Externalism', in R. Menary (ed.), The Extended Mind (Aldershot: Ashgate).
Husbands, P., Smith, T., Jakobi, N., and O'Shea, M. (1998), 'Better Living through Chemistry: Evolving Gas Nets for Robot Control', Connection Science, 10: 185–210.
Hutchins, E. (1995), Cognition in the Wild (Cambridge, MA: MIT Press).
Hutto, D. D. (2009), 'Mental Representation and Consciousness', in W. Banks (ed.), Encyclopedia of Consciousness, vol. 2 (London: Elsevier), 19–32.
Hutto, D. D., and Myin, E. (forthcoming), Radicalizing Enactivism (Cambridge, MA: MIT Press).
Irwin, W. (ed.) (2002), The Matrix and Philosophy: Welcome to the Desert of the Real (Chicago: Open Court).
Jackendoff, R. (1987), Consciousness and the Computational Mind (Cambridge, MA: MIT Press).
Jackson, F. (1982), 'Epiphenomenal Qualia', Philosophical Quarterly, 32: 127–36.
—(1986), 'What Mary Didn't Know', Journal of Philosophy, 83: 291–5; reprinted in his Mind, Method and Conditionals (London and New York: Routledge, 1998), 70–5; and in Rosenthal (1991), 392–4.
—(1993), 'Armchair Metaphysics', in J. Hawthorne and M. Michael (eds), Philosophy in Mind (Amsterdam: Kluwer).
—(1998), From Metaphysics to Ethics (Oxford: Oxford University Press).
—(2003), 'Mind and Illusion', in A. O'Hear (ed.), Minds and Persons (Royal Institute of Philosophy Supplement 53), 51–71.
—(2006), 'On Ensuring That Physicalism is Not a Dual Attribute Theory in Sheep's Clothing', Philosophical Studies, 131 (1): 227–49.
Jackson, F., and Pettit, P. (1990), 'Program Explanation: A General Perspective', Analysis, 50: 107–17.
Johnston, M. (1987), 'Human Beings', Journal of Philosophy, 84: 59–83.
Jonas, H. (1966), The Phenomenon of Life: Toward a Philosophical Biology (New York: Harper Collins); reprinted, Evanston, IL: Northwestern University Press (2001).
Kallestrup, J. (2006), 'The Causal Exclusion Argument', Philosophical Studies, 131 (2): 459–85.
Kaplan, D. (1969), 'Quantifying In', in D. Davidson and J. Hintikka (eds), Words and Objections (Dordrecht: Reidel).
—(1989), 'Demonstratives', in J. Almog, J. Perry and H. Wettstein (eds), Themes from Kaplan (Oxford: Oxford University Press).
Karmiloff-Smith, A. (1992), Beyond Modularity: A Developmental Perspective on Cognitive Science (London: MIT Press).
Keijzer, F., and Schouten, M. (2007), 'Embedded Cognition and Mental Causation: Setting Empirical Boundaries on Metaphysics', Synthese, 158: 109–25.
Kenny, A. (1989), The Metaphysics of Mind (Oxford: Clarendon Press).
—(1994), Aquinas on Mind (London: Routledge).
Kim, J. (1969), 'Events and Their Descriptions: Some Considerations', in N. Rescher (ed.), Essays in Honor of Carl G. Hempel (Dordrecht: Reidel).
—(1973), 'Causation, Nomic Subsumption and the Concept of Event', Journal of Philosophy, 70: 217–36.
—(1976), 'Events as Property Exemplifications', in M. Brand and D. Walton (eds), Action Theory (Dordrecht: Reidel).
—(1984), 'Epiphenomenal and Supervenient Causation', Midwest Studies in Philosophy, 9: 257–70.
—(1987), '"Strong" and "Global" Supervenience Revisited', Philosophy and Phenomenological Research, 48: 315–26.
—(1988), 'Explanatory Realism, Causal Realism, and Explanatory Exclusion', Midwest Studies in Philosophy, 12: 225–40.
—(1989a), 'Mechanism, Purpose, and Explanatory Exclusion', Philosophical Perspectives, 3 (Philosophy of Mind and Action Theory).
—(1989b), 'The Myth of Non-Reductive Materialism', Proceedings of the American Philosophical Association, 63: 1–27.
—(1990a), 'Explanatory Exclusion and the Problem of Mental Causation', in E. Villanueva (ed.), Information, Semantics, and Epistemology (Oxford: Blackwell); reprinted in MacDonald (ed.), Philosophy of Psychology: Debates on Psychological Explanation (Oxford: Blackwell, 1995).
—(1990b), 'Supervenience as a Philosophical Concept', Metaphilosophy, 21: 1–27.
—(1993a), 'Can Supervenience and "Non-Strict Laws" Save Anomalous Monism?', in J. Heil and A. Mele (eds), Mental Causation (Oxford: Clarendon).
—(1993b), 'Concepts of Supervenience', in J. Kim (ed.), Supervenience and Mind (Cambridge: Cambridge University Press).
—(1993c), Supervenience and Mind: Selected Philosophical Essays (Cambridge: Cambridge University Press).
—(1994), 'Explanatory Knowledge and Metaphysical Dependence', Philosophical Issues, 5: 51–69.
—(1995), 'Mental Causation: What? Me Worry?', Philosophical Issues, 6: 123–51.
—(1998), Mind in a Physical World: An Essay on the Mind-Body Problem and Mental Causation (Cambridge, MA: MIT Press).
—(2003), 'Lonely Souls: Causality and Substance Dualism', in T. O'Connor and D. Robb (eds), Philosophy of Mind: Contemporary Readings (London: Routledge).
—(2005), Physicalism, Or Something Near Enough (Princeton: Princeton University Press).
—(2006), The Philosophy of Mind (Boulder: Westview Press).
Kirsh, D. (1991), 'Today the Earwig, Tomorrow Man?', Artificial Intelligence, 47: 161–84.
Kitcher, P. (1984), 'In Defense of Intentional Psychology', Journal of Philosophy, 71: 89–106.
Kolers, P. A. (1972), Aspects of Motion Perception (London: Pergamon Press).
Kolers, P. A., and Rosner, B. S. (1960), 'On Visual Masking (Metacontrast): Dichoptic Observation', American Journal of Psychology, 73: 2–21.
Kriegel, U. (2005), 'Naturalizing Subjective Character', Philosophy and Phenomenological Research, 71: 23–57.
—(2009), Subjective Consciousness (Oxford: Oxford University Press).
Kripke, S. (1972), Naming and Necessity (Oxford: Blackwell).
—(1972/80), Naming and Necessity (Cambridge: Harvard University Press).
—(1982), Wittgenstein on Rules and Private Language (Cambridge, MA: Harvard University Press).
Ladyman, J., Ross, D., Spurrett, D., and Collier, J. (2007), Everything Must Go: Metaphysics Naturalized (Oxford: Oxford University Press).
Lahav, R., and Shanks, N. (1982), 'How to Be a Scientifically Respectable "Property Dualist"', Journal of Mind and Behaviour, 13: 211–32.
Lakoff, G., and Johnson, M. (1980), Metaphors We Live By (Chicago: University of Chicago Press).
—(1999), Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought (New York: Basic Books).
Larmer, R. (1986), 'Mind-Body Interactionism and the Conservation of Energy', International Philosophical Quarterly, 26: 277–85.
Latham, N. (2000), 'Chalmers on the Addition of Consciousness to the Physical World', Philosophical Studies, 98: 67–93.
Lawrence, M. (2005), Like a Splinter in Your Mind: The Philosophy Behind the Matrix Trilogy (Oxford: Blackwell).
Lepore, E. (ed.) (1986), Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson (Oxford: Blackwell).
Lepore, E., and Loewer, B. (1987), 'Mind Matters', Journal of Philosophy, 84 (11): 630–42.
Lepore, E., and McLaughlin, B. (eds) (1985), Actions and Events: Perspectives on the Philosophy of Donald Davidson (Oxford: Blackwell Publishers).
Lettvin, J. Y., Maturana, H. R., McCulloch, W. S., and Pitts, W. H. (1959), 'What the Frog's Eye Tells the Frog's Brain', Proceedings of the Institute of Radio Engineers, 47 (11): 1940–59.
Leuenberger, S. (2008), 'Ceteris Absentibus Physicalism', in D. W. Zimmerman (ed.), Oxford Studies in Metaphysics, vol. 4 (Oxford: Oxford University Press), 145–70.
Levine, J. (1983), 'Materialism and Qualia: The Explanatory Gap', Pacific Philosophical Quarterly, 64: 354–61.
—(1993), 'On Leaving Out What It's Like', in M. Davies and G. Humphreys (eds), Consciousness: Psychological and Philosophical Essays (Oxford: Blackwell).
—(1998), 'Conceivability and the Metaphysics of Mind', Noûs, 32 (4): 449–80.
—(2001), Purple Haze: The Puzzle of Conscious Experience (Oxford: Oxford University Press; Cambridge, MA: MIT Press).
—(2008), 'Review of Ignorance and Imagination by Daniel Stoljar', Mind, 117: 228–31.
Lewis, D. (1980), 'Mad Pain and Martian Pain', in N. Block (ed.), Readings in Philosophy of Psychology, vol. 1 (London: Methuen), 216–22.
—(1983b), 'Postscript to "Mad Pain and Martian Pain"', in his Philosophical Papers, vol. 1 (Oxford: Oxford University Press), 122–32.
—(1986), 'Against Structural Universals', Australasian Journal of Philosophy, 64: 25–46.
—(1990), 'What Experience Teaches', in W. Lycan (ed.), Mind and Cognition (Oxford: Blackwell), 499–519; reprinted in Ned Block, Owen Flanagan and Güven Güzeldere (eds) (1997), The Nature of Consciousness, 579–95.
—(1994), 'David Lewis', in S. Guttenplan (ed.), A Companion to the Philosophy of Mind (Oxford: Blackwell).
Libet, B. (1985a), 'Subjective Antedating of a Sensory Experience and Mind-Brain Theories', Journal of Theoretical Biology, 114: 563–70.
—(1985b), 'Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action', Behavioral and Brain Sciences, 8: 529–66.
—(ed.) (1999), The Volitional Brain: Towards a Neuroscience of Free Will. Special issue of the Journal of Consciousness Studies, 6 (8–9) (August–September).
Libet, B., Wright, E. W., Feinstein, B., and Pearl, D. K. (1979), 'Subjective Referral of the Timing for a Conscious Sensory Experience', Brain, 102: 193–224.
Loar, B. (1981), Mind and Meaning (Cambridge: Cambridge University Press).
—(1987), 'Social Content and Psychological Content', in R. Grimm and D. Merrill (eds), Contents of Thought: Proceedings of the 1985 Oberlin Colloquium in Philosophy (Tucson: University of Arizona).
—(1990), 'Phenomenal States', Philosophical Perspectives, 4: 81–108.
Locke, J. (1975) [1690], An Essay Concerning Human Understanding, P. H. Nidditch (ed.) (Oxford: Clarendon Press).
Lockwood, M. (1989), Mind, Brain, and the Quantum (Oxford: Oxford University Press).
Loewer, B. (1997), 'A Guide to Naturalizing Semantics', in B. Hale and C. Wright (eds), A Companion to the Philosophy of Language (Malden, MA: Blackwell), 108–26.
Loewer, B., and Rey, G. (1991), Meaning in Mind: Fodor and His Critics (Oxford: Blackwell).
Lombard, L. B. (1986), Events: A Metaphysical Study (London and Boston: Routledge & Kegan Paul).
Lowe, E. J. (1989), 'Impredicative Identity Criteria and Davidson's Criterion of Event Identity', Analysis, 49: 178–81.
—(1992), 'The Problem of Psychophysical Causation', Australasian Journal of Philosophy, 70: 263–76.
—(1993), 'The Causal Autonomy of the Mental', Mind, 102: 629–44.
—(1995), Locke on Human Understanding (London and New York: Routledge).
—(1996), Subjects of Experience (Cambridge: Cambridge University Press).
—(1997), 'Objects and Criteria of Identity', in R. Hale and C. Wright (eds), A Companion to the Philosophy of Language (Oxford: Blackwell).
—(2000a), 'Causal Closure Principles and Emergentism', Philosophy, 75: 571–85.
—(2000b), An Introduction to the Philosophy of Mind (Cambridge: Cambridge University Press).
—(2006), 'Non-Cartesian Substance Dualism and the Problem of Mental Causation', Erkenntnis, 65 (1): 5–23.
—(2009), More Kinds of Being: A Further Study of Individuation, Identity and the Logic of Sortal Terms (Malden, MA and Oxford: Wiley-Blackwell).
Lucas, J. R. (1961), 'Minds, Machines, and Gödel', Philosophy, 36: 112–27.
Ludlow, P. (1995), 'Externalism, Self-Knowledge, and the Prevalence of Slow-Switching', Analysis, 55: 45–9.
—(1997), 'On the Relevance of Slow Switching', Analysis, 57: 285–6.
—(1998), 'Social Externalism and Memory: A Problem?', in P. Ludlow and N. Martin (eds), Externalism and Self-Knowledge (Stanford: CSLI).
Ludlow, P., Nagasawa, Y., and Stoljar, D. (2004), There's Something About Mary: Essays on Phenomenal Consciousness and Frank Jackson's Knowledge Argument (Cambridge, MA: MIT Press).
Lutz, A., and Thompson, E. (2003), 'Neurophenomenology: Integrating Subjective Experience and Brain Dynamics in the Neuroscience of Consciousness', Journal of Consciousness Studies, 10 (9–10): 31–52.
Lycan, W. G. (1987), Consciousness (Cambridge, MA: MIT Press).
—(1996), Consciousness and Experience (Cambridge, MA: MIT Press).
McCarthy, J., and Hayes, P. J. (1969), 'Some Philosophical Problems from the Standpoint of Artificial Intelligence', in B. Meltzer and D. M. Michie (eds), Machine Intelligence, vol. 4 (Edinburgh: Edinburgh University Press), 463–502.
McCulloch, W. S., and Pitts, W. H. (1943), 'A Logical Calculus of the Ideas Immanent in Nervous Activity', Bulletin of Mathematical Biophysics, 5: 115–33.
Macdonald, C., and Macdonald, G. (1986), 'Mental Causes and the Explanation of Action', Philosophical Quarterly, 36: 145–58; reprinted in L. Stevenson, R. Squires and J. Haldane (eds), Mind, Causation and Action (Oxford: Basil Blackwell, 1986). Page references in text to latter.
—(1995), 'How to Be Psychologically Relevant', in C. Macdonald and G. Macdonald (eds), Philosophy of Psychology (Oxford: Basil Blackwell).
—(2006), 'The Metaphysics of Mental Causation', Journal of Philosophy, 103 (11): 539–76.
Macdonald, G. (1992), 'The Nature of Naturalism', Proceedings of the Aristotelian Society, Supplementary Volume 66: 225–44.
McDowell, J. (1984), 'De Re Senses', in C. Wright (ed.), Frege: Tradition and Influence (Oxford: Blackwell).
—(1986), 'Singular Thought and the Extent of Inner Space', in P. Pettit and J. McDowell (eds), Subject, Thought and Context (Oxford: Clarendon Press).
—(1994), 'The Content of Perceptual Experience', The Philosophical Quarterly, 44 (175): 190–205.
—(1994b), Mind and World (Cambridge, MA: Harvard University Press).
—(1977), 'On the Sense and Reference of a Proper Name', Mind, 86: 159–85.
McGinn, C. (1982a), The Character of Mind (Oxford: Oxford University Press).
—(1982b), 'The Structure of Content', in A. Woodfield (ed.), Thought and Object: Essays on Intentionality (Oxford: Oxford University Press).
—(1989), 'Can We Solve the Mind-Body Problem?', Mind, 98 (July): 349–66; reprinted in his (1991), The Problem of Consciousness (Oxford: Blackwell), 1–22; also reprinted in N. Block, O. Flanagan and G. Güzeldere (eds), The Nature of Consciousness (Cambridge, MA: MIT Press, 1997), 529–42.
—(1991), The Problem of Consciousness: Essay Towards a Resolution (Oxford: Blackwell).
—(1993), 'Consciousness and Cosmology: Hyperdualism Ventilated', in M. Davies and G. Humphreys (eds), Consciousness: Psychological and Philosophical Essays (Oxford: Blackwell).
—(2005), 'The Matrix of Dreams', in C. Grau (ed.), Philosophical Essays on the Matrix (New York: Oxford University Press), 62–70.
Mackie, D. (1999), 'Personal Identity and Dead People', Philosophical Studies, 95: 219–42.
McKinsey, M. (1991), 'Anti-Individualism and Privileged Access', Analysis, 51: 9–16.
McLaughlin, B. (1981), 'Anomalous Monism and the Irreducibility of the Mental', in Lepore and McLaughlin (1985).
—(1992), 'The Rise and Fall of British Emergentism', in A. Beckermann, H. Flohr and J. Kim (eds), Emergence or Reduction? (Berlin: De Gruyter).
—(2001), 'Physicalism and Alternatives', in N. J. Smelser and P. B. Baltes (eds), International Encyclopedia of the Social and Behavioral Sciences (Oxford: Pergamon), 11422–7.
McLaughlin, B., and Tye, M. (1998), 'Is Content-Externalism Compatible with Privileged Access?', Philosophical Review, 107: 349–80.
Majors, B., and Sawyer, S. (2005), 'The Epistemological Argument for Content Externalism', Philosophical Perspectives, 19: 257–80.
—(2007), 'Internal Accessibility and the Opacity of Mental Content', in S. Goldberg (ed.), Internalism and Externalism in Semantics and Epistemology (Oxford: Oxford University Press).
Marr, D. C. (1982), Vision: A Computational Investigation into the Human Representation and Processing of Visual Information (San Francisco: Freeman).
Marras, A. (1998), 'Kim's Principle of Explanatory Exclusion', Australasian Journal of Philosophy, 76 (3): 439–51.
—(2007), 'Kim's Supervenience Argument and Nonreductive Physicalism', Erkenntnis, 66 (3): 305–27.
Marras, A., and Yli-Vakkuri, J. (2008), 'The "Supervenience Argument": Kim's Challenge to Nonreductive Physicalism', in S. Gozzano and F. Orilia (eds), Tropes, Universals and the Philosophy of Mind: Essays at the Boundary of Ontology and Philosophical Psychology (Frankfurt: Ontos Verlag).
Martin, M. G. F. (2002), 'Transparency of Experience', Mind and Language, 17: 376–425.
—(2006), 'On Being Alienated', in T. S. Gendler and J. Hawthorne (eds), Perceptual Experience (Oxford: Oxford University Press), 354–410.
Martin, R., and Barresi, J. (eds) (2003), Personal Identity (Oxford: Blackwell).
Matthen, M. (2005), Seeing, Doing, and Knowing: A Philosophical Theory of Sense Perception (Oxford: Oxford University Press).
Maturana, H. R., and Varela, F. J. (1980), Autopoiesis and Cognition: The Realization of the Living (Boston: Reidel).
Mawson, T. (2005), Belief in God (Oxford: Oxford University Press).
May, L., Friedman, M., and Clark, A. J. (eds) (1996), Minds and Morals: Essays on Cognitive Science and Ethics (Cambridge, MA: MIT Press).
Mellor, D. H. (1995), The Facts of Causation (London and New York: Routledge).
Melnyk, A. (1997), 'How To Keep The "Physical" in Physicalism', Journal of Philosophy, 94: 622–37.
—(2003), A Physicalist Manifesto: Thoroughly Modern Materialism (Cambridge: Cambridge University Press).
Menary, R. (2007), Cognitive Integration: Mind and Cognition Unbounded (Basingstoke: Palgrave Macmillan).
—(ed.) (forthcoming), The Extended Mind (Cambridge, MA: MIT Press).
Menzies, P. (2003), 'The Causal Efficacy of Mental States', in S. Walter and H.-D. Heckmann (eds), Physicalism and Mental Causation (Exeter: Imprint Academic).
Merleau-Ponty, M. (1962), Phenomenology of Perception (London: Routledge & Kegan Paul).
Metzinger, T. (2003), Being No-One: The Self-Model Theory of Subjectivity (Cambridge, MA: MIT Press).
—(ed.) (1995), Conscious Experience (Paderborn: Ferdinand Schöningh).
Millikan, R. (1984), Language, Thought and Other Biological Categories: New Foundations for Realism (Cambridge, MA: MIT Press).
—(1986), 'Thoughts without Laws', The Philosophical Review, 95: 47–80; and in her White Queen Psychology and Other Essays for Alice (Cambridge, MA: MIT Press, 1993), 51–82.
—(1989), 'Biosemantics', The Journal of Philosophy, 86: 281–97; and in her White Queen Psychology and Other Essays for Alice (Cambridge, MA: MIT Press, 1993), 83–101.
—(1990), 'Truth Rules, Hoverflies, and the Kripke-Wittgenstein Paradox', The Philosophical Review, 99: 323–53; and in her White Queen Psychology and Other Essays for Alice (Cambridge, MA: MIT Press, 1993), 211–39.
—(2000), On Clear and Confused Ideas (Cambridge: Cambridge University Press).
Mills, E. (1996), 'Interaction and Overdetermination', American Philosophical Quarterly, 33: 105–15.
—(1997), 'Interactionism and Physicality', Ratio, 10: 169–83.
Milner, A., and Goodale, M. (1995), The Visual Brain in Action (Oxford: Oxford University Press).
Minsky, M. L. (1965), 'Matter, Mind, and Models', Proceedings of the International Federation of Information Processing Congress, 1: 45–9 (Washington, DC: Spartan).
—(1985), The Society of Mind (New York: Simon & Schuster).
—(2006), The Emotion Machine (New York: Pantheon).
Montero, B. (1999), 'The Body Problem', Noûs, 33 (2): 183–200.
—(2006), 'Physicalism in an Infinitely Decomposable World', Erkenntnis, 64 (2): 177–191.
Montero, B., and Papineau, D. (2005), 'A Defense of the Via Negativa Argument for Physicalism', Analysis, 65 (3): 233–7.
Moore, G. E. (1942), 'A Reply to My Critics', in P. A. Schilpp (ed.), The Philosophy of G. E. Moore (La Salle, IL: Open Court), 535–677.
Morris, M. (1991), 'Why There are No Mental Representations', Minds and Machines, 1: 1–30.
—(1992), The Good and the True (Oxford: Clarendon Press).
Moya, C. (1990), The Philosophy of Action (Cambridge: Polity Press).
Nagel, T. (1974), 'What is It Like to Be a Bat?', Philosophical Review, 83 (4): 435–50; reprinted in Rosenthal (1991), 422–8.
—(1979) [1971], 'Brain Bisection and the Unity of Consciousness', in his Mortal Questions (Cambridge: Cambridge University Press).
Neander, K. (1998), 'The Division of Phenomenal Labour: A Problem for Representational Theories of Consciousness', in J. E. Tomberlin (ed.), Philosophical Perspectives, 12 (Boston, MA, and Oxford: Blackwell), 411–34.
—(2004), 'Teleological Theories of Mental Content', in Stanford Encyclopedia of Philosophy. Available at http://plato.stanford.edu/.
Nemirow, L. (1980), 'Review of Nagel's Mortal Questions', Philosophical Review, 89: 475–6.
—(1990), 'Physicalism and the Cognitive Role of Acquaintance', in W. Lycan (ed.), Mind and Cognition (Oxford: Blackwell), 490–9.
Neurath, O. (1931), 'Physicalism: The Philosophy of the Vienna Circle', in R. S. Cohen and M. Neurath (eds), Philosophical Papers 1913–1946 (Dordrecht: Reidel, 1983), 48–51.
Newell, A. (1980), 'Physical Symbol Systems', Cognitive Science, 4: 135–83.
—(1990), Unified Theories of Cognition (Cambridge, MA: Harvard University Press).
Newell, A., and Simon, H. A. (1972), Human Problem Solving (Englewood Cliffs, NJ: Prentice-Hall).
—(1976), 'Computer Science as Empirical Enquiry: Symbols and Search', Communications of the Association for Computing Machinery, 19 (3): 113–26; reprinted in M. A. Boden (ed.), The Philosophy of Artificial Intelligence (Oxford: Oxford University Press, 1990), 105–32.
Newell, A., Shaw, J. C., and Simon, H. A. (1958), 'Elements of a Theory of Human Problem-Solving', Psychological Review, 65: 151–66.
Ney, A. (2007), 'Can an Appeal to Constitution Save the Exclusion Problem?', Pacific Philosophical Quarterly, 88 (4): 486–506.
Noë, A. (2003), 'Causation and Perception: The Puzzle Unravelled', Analysis, 63: 93–100.
—(2004), Action in Perception (Cambridge, MA: MIT Press).
—(2006), 'Experience without the Head', in T. Szabo Gendler and J. Hawthorne (eds), Perceptual Experience (Oxford: Oxford University Press), 411–33.
Noonan, H. (1998), 'Animalism versus Lockeanism: A Current Controversy', Philosophical Quarterly, 48: 302–18.
—(2003), Personal Identity, 2nd edn (London: Routledge).
Noordhof, P. (1997), 'Making the Change: The Functionalist's Way', British Journal for the Philosophy of Science, 48: 233–50.
—(1999a), 'Causation by Content?', Mind and Language, 14: 291–320.
—(1999b), 'Micro-Based Properties and the Supervenience Argument: A Response to Kim', Proceedings of the Aristotelian Society, 99 (1): 109–114.
—(2001), 'Believe What You Want', Proceedings of the Aristotelian Society, 101: 247–65.
—(2002), 'Imagining Objects and Imagining Experiences', Mind and Language, 17: 426–55.
—(2003a), 'Not Old . . . But Not That New Either: Explicability, Emergence and the Characterisation of Materialism', in S. Walter and H.-D. Heckmann (eds), Physicalism and Mental Causation (Exeter: Imprint Academic), 85–108.
—(2003b), 'Review of Consciousness, Color and Content by Michael Tye', Mind and Language, 18: 538–45.
—(2003c), 'Something Like Ability', Australasian Journal of Philosophy, 81: 21–40.
—(2006a), 'Environment-Dependent Content and the Virtues of Causal Explanation', Synthese, 149: 551–75.
—(2006b), 'The Success of Consciousness', in A. Freeman (ed.), Radical Externalism (Exeter: Imprint Academic), 109–27.
—(2010), 'Emergent Causation and Property Causation', in C. Macdonald and G. Macdonald (eds), Emergence in Mind (Oxford: Oxford University Press), 69–99.
Norman, D. A. (1986), 'Reflections on Cognition and Parallel Distributed Processing', in J. L. McClelland, D. E. Rumelhart and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, Foundations (Cambridge, MA: MIT Press), 531–46.
Norman, D. A., and Shallice, T. (1986), 'Attention to Action: Willed and Automatic Control of Behavior', in R. Davidson, G. Schwartz and D. Shapiro (eds), Consciousness and Self Regulation: Advances in Research and Theory, vol. 4 (New York: Plenum), 1–18.
Norman, J. (2002), 'Two Visual Systems and Two Theories of Perception: An Attempt to Reconcile the Constructivist and Ecological Approaches', Behavioral and Brain Sciences, 25: 73–144.
Nussbaum, M. C. (1984), 'Aristotelian Dualism', Oxford Studies in Ancient Philosophy, 2: 197–207.
O'Connor, T. (1994), 'Emergent Properties', American Philosophical Quarterly, 31: 91–104.
O'Connor, T., and Churchill, J. R. (2010), 'Is Non-Reductive Physicalism Viable within a Causal Powers Metaphysic?', in C. Macdonald and G. Macdonald (eds), Emergence in Mind (Oxford: Oxford University Press), 43–60.
O'Leary-Hawthorne, J., and McDonough, J. K. (1998), 'Numbers, Minds and Bodies: A Fresh Look at Mind-Body Dualism', Philosophical Perspectives, 12: 349–71.
O'Regan, J. K., and Noë, A. (2001), 'A Sensorimotor Approach to Vision and Visual Consciousness', Behavioral and Brain Sciences, 24 (5): 939–73.
Oderberg, D. (2005), 'Hylomorphic Dualism', Social Philosophy and Policy, 22: 70–99.
Olson, E. (1997), The Human Animal: Personal Identity without Psychology (New York: Oxford University Press).
—(2007), What are We? A Study in Personal Ontology (Oxford: Oxford University Press).
Osherson, D. (1995/98), An Invitation to Cognitive Science, 4 vols. (Cambridge, MA: MIT Press).
Papineau, D. (1984), 'Representation and Explanation', Philosophy of Science, 51: 55–73.
—(1986), 'Semantic Reductionism and Reference', in J. Butterfield (ed.), Language, Mind and Logic (Cambridge: Cambridge University Press).
—(1987), Reality and Representation (Oxford: Basil Blackwell).
—(1993a), Philosophical Naturalism (Oxford: Blackwell).
—(1993b), 'Physicalism, Consciousness and the Antipathetic Fallacy', Australasian Journal of Philosophy, 71 (2): 169–83.
—(1998a), 'Mind the Gap', in J. E. Tomberlin (ed.), Philosophical Perspectives, 12: 373–88.
—(1998b), 'Teleosemantics and Indeterminacy', Australasian Journal of Philosophy, 76: 1–14.
—(2002), Thinking about Consciousness (New York: Oxford University Press).
Parfit, D. (1971), 'Personal Identity', Philosophical Review, 80: 3–27; reprinted in Perry (1975).
—(1976), 'Lewis, Perry, and What Matters', in A. Rorty (ed.), The Identities of Persons (Berkeley: University of California Press).
—(1984), Reasons and Persons (Oxford: Clarendon Press).
Parker, A., Derrington, A., and Blakemore, C. (eds) (2002), The Physiology of Cognitive Processes. Special Issue of Philosophical Transactions of the Royal Society: B, 357, 957–1146 (London: Royal Society).
Parsons, T. (1980), Nonexistent Objects (New Haven: Yale University Press).
Pattee, H. H. (1966), 'Physical Theories, Automata, and the Origin of Life', in H. H. Pattee, E. A. Edelsack, L. Fein and A. B. Callahan (eds), Natural Automata and Useful Simulations: Proceedings of a Symposium on Fundamental Biological Models (Washington: Spartan Books), 73–106.
—(1989), 'Simulations, Realizations, and Theories of Life', in C. G. Langton (ed.), Artificial Life (Redwood City, CA: Addison-Wesley), 63–77.
Paul, L. (2007), 'Constitutive Overdetermination', in J. K. Campbell (ed.), Causation and Explanation, Topics in Contemporary Philosophy series (Cambridge, MA: MIT Press).
Paull, C., and Sider, T. (1992), 'In Defense of Global Supervenience', Philosophy and Phenomenological Research, 52: 833–54.
Peacocke, C. (1983), Sense and Content (Oxford: Oxford University Press).
—(1992), A Study of Concepts (Cambridge, MA: MIT Press).
—(1999), Being Known (Oxford: Oxford University Press).
Pearl, J. (2000), Causality: Models, Reasoning, and Inference (Cambridge: Cambridge University Press).
Penelhum, T. (1970), Survival and Disembodied Existence (London: Routledge).
Penrose, R. (1989), The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford: Oxford University Press).
—(1994), Shadows of the Mind: A Search for the Missing Science of Consciousness (Oxford: Oxford University Press).
Pereboom, D. (2002), 'Robust Nonreductive Materialism', Journal of Philosophy, 99 (10): 499–531.
Pereboom, D., and Kornblith, H. (1991), 'The Metaphysics of Irreducibility', Philosophical Studies, 63 (2): 125–45.
Perry, J. (1993), The Essential Indexical and Other Essays (Oxford: Oxford University Press).
—(1994), 'Intentionality (2)', in S. Guttenplan (ed.), A Companion to the Philosophy of Mind (Oxford: Blackwell).
—(2001), Knowledge, Possibility, and Consciousness (Cambridge, MA: MIT Press).
Pettit, P. (1993), 'A Definition of Physicalism', Analysis, 53: 213–23.
—(2009), 'Consciousness and the Frustrations of Physicalism', in Ravenscroft (2009), 163–87.
Pietroski, P. (1992), 'Intentionality and Teleological Error', Pacific Philosophical Quarterly, 73: 367–82.
—(1994), 'Mental Causation for Dualists', Mind and Language, 9: 336–66.
Pinker, S., and Prince, A. (1988), 'On Language and Connectionism: Analysis of a Parallel Distributed Model of Language Acquisition', Cognition, 28: 73–193.
Place, U. T. (1956), 'Is Consciousness a Brain Process?', British Journal of Psychology, 47 (Part 1): 44–50.
Poland, J. (2001), Physicalism: The Philosophical Foundations (Oxford: Oxford University Press).
Polger, T., and Flanagan, O. (1999), 'Natural Answers to Natural Questions', in V. Hardcastle (ed.), Where Biology Meets Psychology: Philosophical Essays (Cambridge, MA: Bradford Books, MIT Press), 221–47.
Popper, K. R. (1953), 'Language and the Mind-Body Problem: A Restatement of Interactionism', in Proceedings of the 11th International Congress of Philosophy; reprinted in Conjectures and Refutations (Basic Books, 1962).
—(1955), 'A Note on the Mind-Body Problem', Analysis, 15: 131–5.
Popper, K., and Eccles, J. (1977), The Self and Its Brain (New York: Springer).
Premack, D., and Woodruff, G. (1978), 'Does the Chimpanzee Have a Theory of Mind?', Behavioral and Brain Sciences, 1: 515–26.
Priest, G. (2005), Towards Non-Being: The Logic and Metaphysics of Intentionality (Oxford: Oxford University Press).
Prinz, J. (2004), Gut Reactions: A Perceptual Theory of Emotion (Oxford: Oxford University Press).
—(2007), 'Mental Pointing: Phenomenal Knowledge without Concepts', Journal of Consciousness Studies, 14 (9–10): 184–211.
Puccetti, R. (1973), 'Brain Bisection and Personal Identity', British Journal for the Philosophy of Science, 24: 339–55.
Putnam, H. (1960), 'Minds and Machines', in S. Hook (ed.), Dimensions of Mind: A Symposium (New York: New York University Press), 148–79.
—(1967), 'The Nature of Mental States'. First published as 'Psychological Predicates', in W. H. Capitan and D. Merrill (eds), Art, Mind, and Religion (Pittsburgh: University of Pittsburgh Press), 37–48; reprinted in H. Putnam, Mind, Language, and Reality: Philosophical Papers, vol. 2 (Cambridge: Cambridge University Press, 1975), 429–40, and in Rosenthal (1991), 197–203.
—(1975), 'The Meaning of "Meaning"', reprinted in his Mind, Language, and Reality: Philosophical Papers, vol. 2 (Cambridge: Cambridge University Press).
—(1982), 'Why There isn't a Ready-Made World', Synthese, 51: 141–67.
—(1986), 'Information and the Mental', in E. Lepore (ed.), Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson (Oxford: Blackwell).
—(1988), Representation and Reality (Cambridge, MA: MIT Press).
—(1996), 'Introduction' to A. Pessin and S. Goldberg (eds), The Twin Earth Chronicles: Twenty Years of Reflection on Hilary Putnam's 'The Meaning of "Meaning"' (New York: M. E. Sharpe).
—(1997), 'Functionalism: Cognitive Science or Science Fiction?', in D. M. Johnson and C. E. Erneling (eds), The Future of the Cognitive Revolution (Oxford: Oxford University Press), 32–44.
Pylyshyn, Z. W. (1973), 'What the Mind's Eye Tells the Mind's Brain: A Critique of Mental Imagery', Psychological Bulletin, 80: 1–24.
—(1980), 'Computation and Cognition: Issues in the Foundations of Cognitive Science', Behavioral and Brain Sciences, 3: 111–32.
Quine, W. (1953a), 'On What There Is', in his From a Logical Point of View (New York: Harper and Row), 1–19.
—(1953b), 'Two Dogmas of Empiricism', in his From a Logical Point of View (New York: Harper and Row), 20–46.
—(1960), Word and Object (Cambridge, MA: MIT Press).
Ravenscroft, I. (2005), Philosophy of Mind: A Beginner's Guide (Oxford: Oxford University Press).
—(2009), Minds, Ethics, and Conditionals: Themes from the Philosophy of Frank Jackson (Oxford: Oxford University Press).
—(2010), 'Folk Psychology as a Theory', in E. Zalta (ed.), Stanford Encyclopedia of Philosophy. Available at http://plato.stanford.edu/.
Ray, T. S. (1992), 'An Approach to the Synthesis of Life', in C. G. Langton, C. Taylor, J. D. Farmer and S. Rasmussen (eds), Artificial Life II (Redwood City, CA: Addison-Wesley), 371–408.
Raymont, P. (2003), 'Kim on Overdetermination, Exclusion and Nonreductive Physicalism', in S. Walter and H.-D. Heckmann (eds), Physicalism and Mental Causation: The Metaphysics of Mind and Action (Exeter: Imprint Academic).
Reid, T. (1975) [1785], 'Of Mr. Locke's Account of Personal Identity', in J. Perry (ed.), Personal Identity (Berkeley and Los Angeles: University of California Press).
Rey, G. (1997), Contemporary Philosophy of Mind: A Contentiously Classical Approach (Oxford: Blackwell).
—(2002), 'Searle's Misunderstandings of Functionalism and Strong AI', in J. Preston and M. Bishop (eds), Views into the Chinese Room (Oxford: Oxford University Press), 201–25.
—(2007), 'Resisting Normativism in Psychology', in J. Cohen and B. McLaughlin (eds), Blackwell Debates in Philosophy of Mind (Oxford: Blackwell), 69–84.
—(2009), 'Concepts, Defaults, and Internal Asymmetric Dependencies: Distillations of Fodor and Horwich', in N. Kompa, C. Nimtz and C. Suhm (eds), The A Priori and Its Role in Philosophy (Paderborn: Mentis).
—(forthcoming), 'Externalism and Inexistence in Early Content', in R. Schantz (ed.), Prospects for Meaning (New York: De Gruyter).
Richardson, R. C. (1982), 'The "Scandal" of Cartesian Dualism', Mind, 91: 20–37.
Robb, D. (1997), 'The Properties of Mental Causation', The Philosophical Quarterly, 47: 178–94.
Robb, D., and Heil, J. (2008), 'Mental Causation', in E. Zalta (ed.), Stanford Encyclopedia of Philosophy. Available at http://plato.stanford.edu/.
Robinson, H. (1982), Matter and Sense (Cambridge: Cambridge University Press).
—(1983), 'Aristotelian Dualism', Oxford Studies in Ancient Philosophy, 1: 123–44.
—(2003), 'Dualism', in Stich and Warfield (2003), 85–101.
Rock, I. (1983), The Logic of Perception (Cambridge, MA: MIT Press).
Rodriguez-Pereyra, G. (2006), 'Truthmaking, Entailment, and the Conjunction Thesis', Mind, 115: 957–82.
Root, M. (1986), 'Davidson and Social Science', in Lepore (ed.), Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson (Oxford: Blackwell).
Rorty, R. (1970a), 'Incorrigibility as the Mark of the Mental', Journal of Philosophy, 68: 399–424.
—(1970b), 'In Defense of Eliminative Materialism', Review of Metaphysics, 24: 112–21.
—(1972), 'Functionalism, Machines, and Incorrigibility', Journal of Philosophy, 69: 417–58.
Rosch, E. H., and Mervis, C. B. (1975), 'Family Resemblances: Studies in the Internal Structure of Categories', Cognitive Psychology, 7: 573–605.
Rosenberg, J. F. (1988), 'On Not Knowing Who or What One is: Reflections on the Intelligibility of Dualism', Topoi, 7: 57–63.
Rosenthal, D. M. (1986), 'Two Concepts of Consciousness', Philosophical Studies, 49: 329–59.
—(1991a), 'The Independence of Consciousness and Sensory Qualities', in E. Villaneuva (ed.), Consciousness, Philosophical Issues, no. 1 (Atascadero, CA: Ridgeview), 15–36.
—(1991b), The Nature of Mind (New York: Oxford University Press).
—(1993), 'Thinking That One Thinks', in M. Davies and G. W. Humphreys (eds), Consciousness (Oxford: Blackwell), 197–223.
—(1997), 'A Theory of Consciousness', in N. Block, O. Flanagan and G. Güzeldere (eds), The Nature of Consciousness (Cambridge, MA: MIT Press).
—(2000), 'Consciousness, Content and Metacognitive Judgments', Consciousness and Cognition, 9 (2): 203–14.
—(2005), Consciousness and Mind (New York: Oxford University Press).
Ross, D. (1994), 'Dennett's Conceptual Reform', Behaviour and Philosophy, 22: 41–52.
Rowlands, M. (1999), The Body in Mind (Cambridge: Cambridge University Press).
—(2001), The Nature of Consciousness (Cambridge: Cambridge University Press).
—(2003), Externalism: Putting Mind and World Back Together Again (Chesham, Bucks.: Acumen).
—(forthcoming), The New Science of the Mind: From Extended Mind to Embodied Phenomenology (Cambridge, MA: MIT Press).
Rozemond, M. (2002), Descartes's Dualism (Cambridge, MA: Harvard University Press).
Rumelhart, D. E., and McClelland, J. L. (1986), 'On Learning the Past Tenses of English Verbs', in J. L. McClelland, D. E. Rumelhart and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, Foundations (Cambridge, MA: MIT Press), 216–71.
Rupert, R. (2004), 'Challenges to the Hypothesis of Extended Cognition', Journal of Philosophy, 101 (8): 389–428.
Russell, B. (1927), The Analysis of Matter (London: Kegan Paul).
Salmon, N. (1986), Frege's Puzzle (Cambridge, MA: MIT Press).
Samuels, R. (1998), 'Evolutionary Psychology and the Massive Modularity Hypothesis', British Journal for the Philosophy of Science, 49: 575–602.
Sawyer, S. (1998), 'Privileged Access to the World', Australasian Journal of Philosophy, 76: 523–33.
—(2002), 'In Defence of Burge's Thesis', Philosophical Studies, 107: 109–28.
—(2006), 'Externalism, Apriority and Transmission of Warrant', in T. Marvarn (ed.), What Determines Content? The Internalism/Externalism Dispute (Cambridge: Cambridge Scholars Press).
—(2007), 'There is No Viable Notion of Narrow Content', in B. McLaughlin and J. Cohen (eds), Contemporary Debates in the Philosophy of Mind (Oxford: Blackwell).
—(2009), 'The Modified Predicate Theory of Proper Names', in S. Sawyer (ed.), New Waves in Philosophy of Language (London: Palgrave MacMillan).
Scheier, C., and Pfeifer, R. (1998), 'Exploiting Embodiment for Category Learning', in R. Pfeifer, B. Blumberg, J.-A. Meyer and S. W. Wilson (eds), From Animals to Animats 5: Proceedings of the Fifth International Conference on Simulation of Adaptive Behavior (Cambridge, MA: MIT Press), 32–7.
Scheutz, M. (ed.) (2002), Computationalism: New Directions (Cambridge, MA: MIT Press).
Schiffer, S. (1992), 'Boghossian on Externalism and Inference', Philosophical Issues, 2: 29–37.
Scriven, M. (1953), 'The Mechanical Concept of Mind', Mind, 62: 230–40.
Seager, W. (1999), Theories of Consciousness (London and New York: Routledge).
Searle, J. R. (1980), 'Minds, Brains and Programs', The Behavioral and Brain Sciences, 3 (3): 417–57. Includes peer commentaries and reply.
—(1983), Intentionality (Cambridge: Cambridge University Press).
—(1990a), 'Consciousness, Explanatory Inversion, and Cognitive Science', Behavioral and Brain Sciences, 13: 585–642.
—(1990b), 'Is the Brain's Mind a Computer Program?', Scientific American (January): 20–25.
—(1992), The Rediscovery of the Mind (Cambridge, MA: Bradford Books, MIT Press).
Segal, G. (1989), 'The Return of the Individual', Mind, 98: 39–57.
—(2000), A Slim Book about Narrow Content (Cambridge, MA: MIT Press).
Sellars, W. (1954), 'A Note on Popper's Argument for Dualism', Analysis, 15: 23–4.
Shafer-Landau, R. (2003), Moral Realism (Oxford: Oxford University Press).
Shannon, C. E., and Weaver, W. (1963), The Mathematical Theory of Communication (Illinois: University of Illinois Press, 1998).
Shapiro, L. (2004), The Mind Incarnate (Cambridge, MA: MIT Press).
Shoemaker, S. (1963), Self-Knowledge and Self-Identity (Ithaca: Cornell University Press).
—(1970), 'Persons and Their Pasts', American Philosophical Quarterly, 7: 269–85.
—(1984), 'Personal Identity: A Materialist's Account', in Shoemaker and Swinburne, Personal Identity (Oxford: Blackwell).
—(1990), 'Qualities and Qualia: What's in the Mind?', Philosophy and Phenomenological Research, 50: 109–131; and in his The First Person Perspective and Other Essays (Cambridge: Cambridge University Press, 1996), 97–120.
—(1991), 'Qualia and Consciousness', Mind, 100: 507–24; and in his The First Person Perspective and Other Essays (Cambridge: Cambridge University Press, 1996), 121–40.
—(1994), 'Self-Knowledge and "Inner Sense": Lecture 2, The Broad Perceptual Model', Philosophy and Phenomenological Research, 54; and in The First Person Perspective and Other Essays (Cambridge: Cambridge University Press, 1996), 121–40.
—(1997), 'Self and Substance', in J. Tomberlin (ed.), Philosophical Perspectives, 11 (Atascadero, CA: Ridgeview), 283–319.
—(1999), 'Self, Body, and Coincidence', Proceedings of the Aristotelian Society, Supplementary Volume 73: 287–306.
—(2004), 'Functionalism and Personal Identity: A Reply', Noûs, 38: 525–33.
—(2007), Physical Realization (Oxford: Oxford University Press).
Siegel, S. (2006), 'Which Properties are Represented in Perception?', in T. Szabó Gendler and J. Hawthorne (eds), Perceptual Experience (Oxford: Oxford University Press), 480–503.
Simon, H. (1969), The Sciences of the Artificial (Cambridge, MA: MIT Press).
Simons, D. J., and Chabris, C. F. (1999), 'Gorillas in Our Midst: Sustained Inattentional Blindness for Dynamic Events', Perception, 28 (9): 1059–74.
Slocum, A. C., Downey, D. C., and Beer, R. D. (2000), 'Further Experiments in the Evolution of Minimally Cognitive Behavior: From Perceiving Affordances to Selective Attention', in J. Meyer, A. Berthoz, D. Floreano, H. Roitblat and S. Wilson (eds), From Animals to Animats 6: Proceedings of the Sixth International Conference on Simulation of Adaptive Behavior (Cambridge, MA: MIT Press), 430–39.
Sloman, A. (1971), 'Interactions between Philosophy and Artificial Intelligence: The Role of Intuition and Non-Logical Reasoning in Intelligence', Artificial Intelligence, 2: 209–25.
—(1975), 'Afterthoughts on Analogical Representation', in R. C. Schank and B. L. Nash-Webber (eds), Theoretical Issues in Natural Language Processing: An Interdisciplinary Workshop in Computational Linguistics, Psychology, Linguistics, and Artificial Intelligence, Cambridge, MA, 10–13 June (Arlington, VA: Association for Computational Linguistics), 164–8.
—(1978), The Computer Revolution in Philosophy: Philosophy, Science, and Models of Mind (Brighton: Harvester Press). Out of print, but available – and continually updated – online at www.cs.bham.ac.uk/research/cogaff/crp/.
—(1986), 'Reference without Causal Links', in B. du Boulay and L. Steels (eds), Seventh European Conference on Artificial Intelligence (Amsterdam: North-Holland), 369–81.
—(1989), 'On Designing a Visual System: Towards a Gibsonian Computational Model of Vision', Journal of Experimental and Theoretical AI, 1: 289–337.
—(1993), 'The Mind as a Control System', in C. Hookway and D. Peterson (eds), Philosophy and the Cognitive Sciences (Cambridge: Cambridge University Press), 69–110.
—(1996a), 'Actual Possibilities', in L. C. Aiello and S. C. Shapiro (eds), Principles of Knowledge Representation and Reasoning: Proceedings of the Fifth International Conference (KR '96) (San Francisco: Morgan Kaufmann), 627–38.
—(1996b), 'Beyond Turing Equivalence', in P. J. R. Millican and A. J. Clark (eds), Machines and Thought: The Legacy of Alan Turing, vol. 1 (Oxford: Oxford University Press), 179–220.
—(1996c), 'Towards a General Theory of Representations', in D. M. Peterson (ed.), Forms of Representation: An Interdisciplinary Theme for Cognitive Science (Exeter: Intellect Books), 118–40.
—(1992), 'The Emperor's Real Mind: Review of Roger Penrose's The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics', Artificial Intelligence, 56: 355–96.
—(1999), 'Review of R. Picard's Affective Computing', AI Magazine, 20 (1) (March): 127–33.
—(2000), 'Architectural Requirements for Human-Like Agents Both Natural and Artificial. (What Sorts of Machines Can Love?)', in K. Dautenhahn (ed.), Human Cognition and Social Agent Technology: Advances in Consciousness Research (Amsterdam: John Benjamins), 163–95.
—(2002), 'The Irrelevance of Turing Machines to Artificial Intelligence', in M. Scheutz (ed.), Computationalism: New Directions (Cambridge, MA: MIT Press), 87–127.
—(2010), 'An Alternative to Working on Machine Consciousness', International Journal of Machine Consciousness, 2 (1): 1–18.
Sloman, A., and Chrisley, R. L. (2003), 'Virtual Machines and Consciousness', in O. Holland (ed.), Machine Consciousness (Exeter: Imprint Academic), 133–72. Special issue of the Journal of Consciousness Studies, 10 (4–5).
Smart, J. J. C. (1959), 'Sensations and Brain Processes', Philosophical Review, 68: 141–56; reprinted in Rosenthal (1991), 169–76.
—(1978), 'The Content of Physicalism', Philosophical Quarterly, 28: 239–41.
Smith, A. D. (2002), The Problem of Perception (Cambridge, MA: Harvard University Press).
Smith, B. C. (1985), 'Prologue to Reflection and Semantics in a Procedural Language', in R. J. Brachman and H. J. Levesque (eds), Readings in Knowledge Representation (Los Altos, CA: Morgan Kauffman), 31–40.
—(1996), On the Origin of Objects (Cambridge, MA: MIT Press).
—(1998), 'On Knowing One's Own Language', in C. Wright, B. C. Smith and C. Macdonald (eds), Knowing Our Own Minds (Oxford: Oxford University Press).
—(2002a), 'The Foundations of Computing', in M. Scheutz (ed.), Computationalism: New Directions (Cambridge, MA: MIT Press), 23–58.
—(2002b), 'Keeping Emotions in Mind', in P. Goldie (ed.), Understanding Emotions: Mind and Morals. Ashgate Epistemology and Mind series (London: Ashgate).
Smith, T., Husbands, P., and O'Shea, M. (2002), 'Neuronal Plasticity and Temporal Adaptivity: GasNet Robot Control Networks', Adaptive Behavior, 10: 161–83.
Smolensky, P. (1987), 'The Constituent Structure of Mental States: A Reply to Fodor and Pylyshyn', Southern Journal of Philosophy, 26: 137–60.
—(1988), 'On the Proper Treatment of Connectionism', Behavioral and Brain Sciences, 11: 1–74.
Smythies, J. R., and Beloff, J. (eds) (1989), The Case for Dualism (Charlottesville: University of Virginia Press).
Snowdon, P. (1990), 'Persons, Animals, and Ourselves', in C. Gill (ed.), The Person and the Human Mind (Oxford: Clarendon Press).
—(1996), 'Persons and Personal Identity', in S. Lovibond and S. G. Williams (eds), Essays for David Wiggins: Identity, Truth and Value (Oxford: Blackwell).
Soames, S. (2002), Beyond Rigidity: The Unfinished Semantic Agenda of 'Naming and Necessity' (Oxford: Oxford University Press).
Sober, E. (1992), 'Learning From Functionalism: Prospects for Strong Artificial Life', in C. G. Langton, C. Taylor, J. D. Farmer and S. Rasmussen (eds), Artificial Life II (Redwood City, CA: Addison-Wesley), 749–66.
Sparber, G. (2005), 'Counterfactual Overdetermination vs. the Causal Exclusion Problem', History and Philosophy of the Life Sciences, 27 (3–4): 479–90.
Stalnaker, R. (1984), Inquiry (Cambridge, MA: MIT Press).
Stampe, D. (1977), 'Towards a Causal Theory of Linguistic Representation', in Midwest Studies in Philosophy, vol. 2 (Minneapolis: University of Minnesota Press), 42–63.
Sterelny, K. (1990), The Representational Theory of Mind (Oxford: Blackwell).
—(2003), Thought in a Hostile World: The Evolution of Cognition (Malden, MA: Blackwell).
Steward, H. (1996), The Ontology of Mind (Oxford: Clarendon).
Stich, S. (1983), From Folk Psychology to Cognitive Science: A Case Against Belief (Cambridge, MA: MIT Press).
—(1990), The Fragmentation of Reason: Preface to a Pragmatic Theory of Cognitive Evaluation (Cambridge, MA: MIT Press).
—(1996), Deconstructing the Mind (Oxford: Oxford University Press).
Stich, S., and Warfield, T. (1994), Mental Representation: A Reader (Oxford: Blackwell).
—(2003), The Blackwell Guide to Philosophy of Mind (Malden, MA: Blackwell).
Stoljar, D. (1996), 'Nominalism and Intentionality', Noûs, 30 (2): 261–81.
—(2000), 'Physicalism and the Necessary A Posteriori', Journal of Philosophy, 97 (1): 33–54.
—(2001a), 'The Conceivability Argument and Two Conceptions of the Physical', Philosophical Perspectives, 15: 393–413.
—(2001b), 'Two Conceptions of the Physical', Philosophy and Phenomenological Research, 62: 253–81.
—(2006), Ignorance and Imagination (Oxford: Oxford University Press).
Stout, R. (2005), Action (Teddington: Acumen).
Stoutland, F. (1976), 'The Causation of Behaviour', in J. Hintikka (ed.), Essays on Wittgenstein in Honour of G. H. von Wright, Acta Philosophica Fennica (Amsterdam: North-Holland).
—(1980), 'Oblique Causation and Reasons for Action', Synthese, 43: 351–67.
—(1985), 'Davidson on Intentional Behavior', in E. Lepore and B. McLaughlin (eds), Actions and Events: Perspectives on the Philosophy of Donald Davidson (New York: Blackwell).
Strawson, G. (2008), 'Real Intentionality 3: Why Intentionality Entails Consciousness', in Real Materialism (Oxford: Oxford University Press), 281–305.
Strawson, P. F. (1959), Individuals: An Essay in Descriptive Metaphysics (London: Methuen).
Stroud, B. (1986), 'The Physical World', Proceedings of the Aristotelian Society, 87: 263–77.
Sturgeon, S. (1994), 'The Epistemic View of Subjectivity', The Journal of Philosophy, 91: 221–35.
Sussman, A. (1981), 'Reflections on the Chances for a Scientific Dualism', Journal of Philosophy, 78: 95–118.
Sutton, J. (2006), 'Distributed Cognition: Domains and Dimensions', Pragmatics and Cognition, 14 (2): 235–47.
Swinburne, R. (1984), 'Personal Identity: The Dualist Theory', in Shoemaker and Swinburne, Personal Identity (Oxford: Blackwell).
—(1986), The Evolution of the Soul (Oxford: Clarendon Press).
Thagard, P. (1988), Computational Philosophy of Science (Cambridge, MA: MIT Press).
—(1989), 'Explanatory Coherence', Behavioral and Brain Sciences, 12: 435–502.
—(1990), 'Concepts and Conceptual Change', Synthese, 82: 255–74.
Thomasson, A. (1998), 'A Nonreductivist Solution to Mental Causation', Philosophical Studies, 89: 181–95.
Thompson, E. (2007), Mind in Life: Biology, Phenomenology, and the Sciences of Mind (Cambridge, MA: Harvard University Press).
Touretzky, D. S., and Hinton, G. E. (1985), 'Symbols among the Neurons: Details of a Connectionist Inference Architecture', Proceedings of the Fourth International Conference on Artificial Intelligence (Los Angeles, CA), 238–43.
—(1988), 'A Distributed Connectionist Production System', Cognitive Science, 12: 423–66.
Turing, A. M. (1936), 'On Computable Numbers with an Application to the Entscheidungsproblem', Proceedings of the London Mathematical Society, Series 2 (42–3) (30 November), 230–40, and (42–4) (23 December), 241–65.
—(1950), 'Computing Machinery and Intelligence', Mind, 59: 433–60; reprinted in M. A. Boden (ed.), The Philosophy of Artificial Intelligence (Oxford: Oxford University Press, 1990), 40–66. Page numbers in the text refer to the reprinted version.
Turner, M. (1991), Reading Minds: The Study of Literature in an Age of Cognitive Science (Princeton: Princeton University Press).
Tye, M. (1995/1996), Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind (Cambridge, MA: MIT Press).
—(1999), 'Phenomenal Consciousness: The Explanatory Gap as a Cognitive Illusion', Mind, 108: 705–25.
—(2000), Consciousness, Color and Content (Cambridge, MA: MIT Press).
—(2009), Consciousness Revisited: Materialism without Phenomenal Concepts (Cambridge, MA: MIT Press).
Ullman, S. (1980), 'Against Direct Perception', Behavioral and Brain Sciences, 3: 373–415.
Unger, P. (1979), 'I Do Not Exist', in G. F. MacDonald (ed.), Perception and Identity (London: Macmillan); reprinted in Rea (1997).
Van Cleve, J. (1990), 'Supervenience and Closure', Philosophical Studies, 58: 225–83.
Van Gelder, T. J. (1995), 'What Might Cognition Be, If Not Computation?', Journal of Philosophy, 92: 345–81.
Van Gulick, R. (2000), 'Closing the Gap?', Journal of Consciousness Studies, 7 (4): 93–7.
—(2003), 'Maps, Gaps and Traps', in A. Jokic and Q. Smith (eds), Consciousness: New Philosophical Perspectives (Oxford: Oxford University Press).
—(2004), 'Higher-Order Global States HOGS: An Alternative Higher-Order Model of Consciousness', in R. Gennaro (ed.), Higher-Order Theories of Consciousness (Amsterdam and Philadelphia: John Benjamins).
—(2009), 'Jackson's Change of Mind: Representationalism, A Priorism and the Knowledge Argument', in Ravenscroft (2009), 189–218.
Van Inwagen, P. (1990), Material Beings (Ithaca: Cornell University Press).
Varela, F. J., Thompson, E., and Rosch, E. (1991), The Embodied Mind: Cognitive Science and Human Experience (Cambridge, MA: MIT Press).
Velleman, J. D. (2000), 'On the Aim of Belief', in his The Possibility of Practical Reason (Cambridge: Cambridge University Press), 244–81.
Vendler, Z. (1972), Res Cogitans (Ithaca, NY: Cornell University Press).
—(1984), The Matter of Minds (Oxford: Clarendon Press).
Vicente, A. (1999), 'Mind-Body Causal Overdetermination', Theoria, 14 (36): 511–24.
Von Eckardt, B. (1995), 'Folk Psychology (1)', in Guttenplan (1994), 300–7.
Wallace, A. F. C. (1965), 'Driving to Work', in M. E. Spiro (ed.), Context and Meaning in Cultural Anthropology (London: Collier-Macmillan), 277–96.
Warfield, T. A. (1997), 'Externalism, Self-knowledge and the Irrelevance of Slow Switching', Analysis, 57: 282–4.
Wedgwood, R. (2005), 'Normativism Defended', in J. Cohen and B. McLaughlin (eds), Blackwell Debates in Philosophy of Mind (Oxford: Blackwell), 85–101.
Weiskrantz, L. (1997), Consciousness Lost and Found: A Neuropsychological Exploration (Oxford: Oxford University Press).
Wheeler, M. (2005), Reconstructing the Cognitive World: The Next Step (Cambridge, MA: MIT Press).
—(forthcoming a), 'In Defense of Extended Functionalism', in Menary (forthcoming).
—(forthcoming b), 'In Search of Clarity about Parity', in Philosophical Studies, book symposium on A. Clark, Supersizing the Mind (Clark [2008b]).
—(forthcoming c), 'Minds, Things, and Materiality', in C. Renfrew and L. Malafouris (eds), The Cognitive Life of Things: Recasting the Boundaries of the Mind (Cambridge: McDonald Institute for Archaeological Research Publications).
Whitby, B. (1996), Reflections on Artificial Intelligence: The Legal, Moral, and Ethical Dimensions (Exeter: Intellect Books).
White, S. (1982), 'Partial Character and the Language of Thought', Pacific Philosophical Quarterly, 63: 347–65.
Whyte, J. T. (1990), 'Success Semantics', Analysis, 50: 149–57.
—(1991), 'The Normal Rewards of Success', Analysis, 51: 65–74.
Wiggins, D. (2001), Sameness and Substance Renewed (Cambridge: Cambridge University Press).
Wilkerson, T. E. (1998), 'Recent Work on Natural Kinds', Philosophical Books, 39: 225–33.
Wilkes, K. (1988), Real People (Oxford: Clarendon Press).
Williams, B. (1956–1957), 'Personal Identity and Individuation', Proceedings of the Aristotelian Society, 57; reprinted in his Problems of the Self (Cambridge: Cambridge University Press, 1973).
—(1970), 'The Self and the Future', Philosophical Review, 79; reprinted in his Problems of the Self (Cambridge: Cambridge University Press, 1973).
Williamson, T. (2006), 'Conceptual Truth', Proceedings of the Aristotelian Society, Supplementary Volume 80: 1–41.
Wilson, J. (1999), 'How Superduper Does a Physicalist Supervenience Need to Be?', Philosophical Quarterly, 49 (194): 33–52.
—(2005), 'Supervenience-Based Formulations of Physicalism', Noûs, 39: 426–59.
—(2010), 'What is Hume's Dictum, and Why Believe It?', Philosophy and Phenomenological Research, 80: 595–637.
Wilson, R. (1995), Cartesian Psychology and Physical Minds (Cambridge: Cambridge University Press).
Witmer, G. (2006), 'How to Be a (Sort of) A Priori Physicalist', Philosophical Studies, 131 (1): 185–225.
Wittgenstein, L. (1953), Philosophical Investigations (New York: Macmillan and Oxford: Blackwell).
Wollheim, R. (2005), 'The Emotions and Their Philosophy of Mind', in A. Hatzimoysis (ed.), Philosophy and the Emotions (Cambridge: Cambridge University Press).
Wright, C. (2004), 'On Epistemic Entitlement: Warrant for Nothing (and Foundations for Free?)', Proceedings of the Aristotelian Society, Supplementary Volume 78: 167–212.
Wright, I. P., Sloman, A., and Beaudoin, L. P. (1996), 'Towards a Design-Based Analysis of Emotional Episodes', Philosophy, Psychiatry, and Psychology, 3: 101–37.
Yablo, S. (1992), 'Mental Causation', The Philosophical Review, 101: 245–80.
Yolton, R. (1983), Thinking Matter (Minneapolis: University of Minnesota Press).
Zahavi, D. (2005), Subjectivity and Selfhood: Investigating the First-Person Perspective (Cambridge, MA: MIT Press).
Ziff, P. (1959), 'The Feelings of Robots', Analysis, 19: 64–8.
Zimmerman, D. W. (2004), 'Should a Christian Be a Mind-Body Dualist?', in Contemporary Debates in Philosophy of Religion (Malden, MA: Blackwell).
Index aboutness see intentionality Abrahamsen, A. 234 access consciousness 254, 287 achievement problem and externalism 145–8 action, significance of 280 Adams, F. 17, 54, 61, 222, 231, 233, 331n2, 332nn11, 18 affordances 158 agents 12, 15–16, 19, 55, 88, 102–4, 109–10, 112–14, 121, 125, 160–2, 183, 188, 220–1, 225, 236, 308 Agre, P. E. 160 Aizawa, K. 55, 222, 231, 233, 331n2, 332n18 Akins, K. 27 akrasia see will Aleksander, I. 165 Alexander, S. 27 Allen, C. 17 animal minds 281 animate vision 161 anomalousness and holism of mental 111–12, 191 of mental 9–10, 20 monism and 7, 110, 190, 191–3, 197, 199, 281–2 Anscombe, G. 4 anti-individualism see externalism anti-physicalism 26–7 Antony, L. 191, 331n4 a priori principles, of interpretation 112–14 Aquinas 79, 88, 89, 90 Arbib, M. A. 169 Armstrong, D. 4, 57, 330n1 artificial intelligence and artificial life (AI/A-Life) 151, 166, 167, 168, 169, 172, 282 aspectual shape and connection principle 65–6 asymmetric dependencies 184–5 asymmetry, between first-and third-person perspective 107, 116–17 authority, first-person 116
autobiographical memory 211, 212, 215 autopoiesis 158, 161–2, 166 Baars, B. 164 Balog, K. 256 basicality and ecumenical approach 188–9 basic belief 284 Bechtel, W. 234–6 Beer, R. D. 229 behaviourism 50 significance of 282–3 see also individual entries Beighley, S. 54 belief 3, 12–16, 43, 58–9, 68, 109, 121, 129, 136, 267, 274, 283–4 desire framework and behaviour 104–6, 111–13, 117–22 and meaning 114–15 see also desire biological function 17, 63, 69, 275, 276–8 biological memories 231 blindsight 66, 69–70 Block, N. 15, 24, 40, 51, 69, 154, 164, 188, 196, 254, 257, 261, 275, 287, 331n3 (Ch 2), 331n4 blockers and physicalism 244 Boden, M. A. 151, 153, 162, 163–4, 166, 167, 169, 170, 223, 336n1 body 222–6 kinds of 226–9 and mind 1, 2–3 swapping 79 see also individual entries Boghossian, P. 336nn30, 34 Bond, A. H. 160 Bontly, T. 197 Boswell, J. 302 Boyer, P. 169 Braddon-Mitchell, D. 15, 25, 29, 30 brain 3–8, 11, 25–7, 40, 46, 49, 52–3, 57, 61–6, 77, 82–4, 89, 130, 151–60, 218, 225–6, 228, 276, 304 brainoscopes 82
Brentano, F. 60, 172, 175, 337n5 Psychologie vom empirischen Standpunkt (Psychology from an Empirical Standpoint) 296 Breuer, J. 307 Bringsjord, S. 167 Broad, C. D. The Mind and its Place in Nature 33 Broadbent, D. E. 163 Brooks, R. A. 160, 224 Brown, J. 336nn26, 35 Brueckner, A. 336n36 Bruner, J. 155, 159 Burge, T. 18, 144, 147, 197, 335nn3, 6–8, 12–13, 19, 23, 336nn27–8, 32–4 Bush, V. 159 Butler, J. 215 Byrne, A. 264, 269 Calude, C. 167 Campbell, N. 190, 200, 251, 338nn4, 6, 339n8 Cariani, P. 166 Carnap, R. 282 Carruthers, P. 38, 50, 58 Cartesianism 220–1 Cartwright, R. T. 337n6 causality 111, 144, 167, 197, 334n1 and closure 78, 195, 250, 285, 338n3 and connectionism 29 and explanations of behaviour 105 and interaction 8–9 nomological character of 9 power and 21, 94, 253 realism and 194, 200–1 and relevance 22 and responsibility 88–9 cerebroscope 57 c-fibre firing 7–8, 10–11, 291, 330n3 Chabris, C. F. 161 Chalmers, D. 24, 26, 33, 43, 44, 159, 160, 162, 164, 165, 221, 230, 232, 256, 291, 318, 330n10, 335n17, 339n2 The Conscious Mind 333n10 charity 112, 113–14, 115 Chinese room argument 285 Chisholm, R. M. 339n11 Chomsky, N. 6, 155 Chrisley, R. L. 164, 165, 168 Churchill, J. R. 251 Churchland, P. M. 12, 31, 38, 40, 41, 106, 154, 155, 169, 289
Churchland, P. S. 154, 155, 289 circularity 207–8 objection, to neo-Lockean approach 216–19 Clark, A. J. 54, 154, 155, 156, 157, 159, 160, 162, 169, 220, 221–2, 224, 226, 227, 228, 229, 230, 233, 291, 339n2, 340n4 coarse-grained information processing 68 cogito-like judgements 336nn27, 33 cognitive illusion 27, 45 cognitive psychology 6, 108, 151 cognitive science 2, 222–3 computation and 166–8 connectionism and 154–6 consciousness and 163–5 embodiment and 160–2 enactiveness and 161, 162 extended mind and 159–60 functionalism and 152–4 mind and life and 165–6 phenomenology and 161, 162 and pluralism 151–2 representation varieties 156–9 cognitive slips 9 cog-objection 58 co-instantiation 186, 246, 272 Cole, M. 159 Collins, H. 228 commonsense psychology see folk psychology communal use and linguistic meaning 137 compositional syntax and semantics 156, 235–6 computational/representational theory of thought (CRTT) 171, 337n4 Marr on 175 (non-) physical (non-) locality 173–4 program 172–3 Turing on 174 computationalism, significance of 285–6 computational psychology 124 conceivability argument see zombies concepts, significance of 286 Conee, E. 255 confirmation holism 180 connectedness of quasi-remembered past experience 215 of remembered past experience 213
connectionism 3, 64–5, 103, 109, 113, 122–3, 154–6, 158, 167, 223, 225, 286 causality and 29 and networks 235–6, 237 consciousness 6–7, 23, 35–8, 211, 268, 269–74, 287, 333n10 anti-physicalism 26–7 cognitive science and 163–5 folk psychology and 129–30 knowledge and 28–33 loss of 213 metaphors of mind and 47–53 non-reductive naturalism, for and against 42–7 physicalism and 24–6 reductively naturalistic frameworks 38–41 strong eliminativism about 27 substance dualism and 89–90 system view of 64–72 weak eliminativism about 27–8 consequence problem and externalism 148–9 constitutive property 199 content 13–14, 27, 49, 56, 108, 115, 124–6, 147, 160, 179, 270–1, 278, 287 associated 133 intentional 123, 124–5, 130, 172, 177–8, 185, 189 narrow versus wide 18–19, 141–2, 187–8 phenomenal 254, 260–8, 274 propositional 62, 123 representational 21, 51–2, 70, 135–9, 142, 171, 175, 189 semantic 52, 67, 70–1, 185, 188 theories 15–17 continuous recurrent neural networks (CNNs) 229 Cooper, R. 152, 163 Copeland, B. J. 166, 167 co-variation locking theories and externalism 182–3 Craik, K. J. W. 152, 156, 157 Crane, T. 60–1, 62, 67, 242, 263, 264, 331–2n10 Crick, F. 38, 164 criterial relation and identity 206–7, 212 Cummins, R. 178, 275 Cussins, A. 155, 157 cyberneticists 165
Dale, K. 225–6, 228 Damasio, A. R. 163 Davidson, D. 8–10, 11, 20, 102, 112–14, 119, 129, 190, 191, 197–8, 208, 251, 280, 281, 330n5, 334nn1–3, 335nn9, 21, 336n29, 338n6 and causal requirement 111 philosophy of mind of 108–11 against scientific psychology 117 on third-person epistemology of mind 115–17 Davies, M. 128, 276, 336n37 De Jaegher, H. 166 Dennett, D. 2, 12–13, 25, 27, 38, 42, 48–51, 56, 58, 102, 124, 130, 153–4, 162, 163–4, 169, 294, 333n10, 334n4, 337n1 on design stance 118 on intentional stance 118 original position of 119 revised view of 119–22 Descartes, R. 2, 6, 7, 42, 58, 73, 75, 76, 79, 84–5, 88, 89, 90, 158, 163, 165, 172, 209, 220, 243, 280, 300, 305, 332n14, 333n5 descriptivism and internalism 140 design stance 118, 154, 164 desire 6, 9, 12–13, 15–16, 21, 68, 97, 102–7, 110–13, 118–22, 125–7, 277–8, 288 determination over- 5, 6, 20, 23, 196, 197 upward 96–7 Devitt, M. 182, 338n16 Dienes, Z. 163 Dietrich, L. 332n11 Di Paolo, E. A. 158, 162, 166 direct reference theory 138 disjunction problem 16–17, 40, 70, 124, 182–3 disposition 3, 39, 68, 131, 182, 271–2, 288 dissociative states 163 Dretske, F. 16, 25, 38, 60, 61, 67–8, 69, 70, 71, 158, 182, 185, 260, 275, 331n10, 332nn19–20, 337n10 Dreyfus, H. L. 152, 158, 161, 162, 168 Dreyfus, S. 162 Dreyfus, S. E. 152 dualism 4, 249, 288–9, 300 emergent 245 interactive property 5–6 interactive substance see interactive substance dualism substance see substance dualism
Dunmall, B. 165 dynamic flux 168 Eccles, J. The Self and its Brain 333n7 ecumenical approaches basicality and 188–9 narrow and wide content and 187–8 Edmonds, E. A. 169 eliminativism 11–12, 55 materialism and 106, 154, 156, 289 strong 27 and threat to folk psychology 105–6 weak 27–8 Elman, J. L. 156 Elster, J. 130 Elugardo, R. 335n13 embodied cognition 160–2, 220–2 embrained knowledge 228–9 emergentism 26–7 and dualism 245 emotions 27, 130–2, 289–90 see also individual entries enactiveness and cognitive science 161, 162 enactivists, on consciousness 52–3 Enc, B. 62 Engelbart, D. C. 159 epiphenomenalism 6, 20, 22, 190, 198, 199, 290 about qualia 26, 28, 29 epistemology 1, 24, 32, 44, 46, 54, 77, 79–81, 95, 107, 115–17, 140–3, 145–9, 179–80, 183–4, 248, 292, 306 equivalence relation and identity 206, 207, 212, 213 ethical non-naturalism 246–7 Evans, G. 68–9, 138, 155, 314, 335n10 events and behaviour 109–10 metaphysics of 196–202 Everett, A. 337n6 exclusion problem 20–1, 193–6, 200 responses to 21–3 experiential awareness 36 experiential memory see autobiographical memory explanatorily basic property 188–9 explanatory exclusion principle 200–1 explanatory gap 45–7, 53, 240, 248, 255–60, 291, 317 explanatory internalism 193
explanatory irrealism see explanatory internalism explanatory realism 194 extended functionalism 233 extended mind hypothesis 221–2, 223, 226, 229, 236, 291–2, 339n2 and cognitive science 159–60 functionalism and 232–3 parity principle and 229–32 externalism 18, 133, 297, 336n39 achievement problem and 145–8 asymmetric dependencies and 184–5 consequence problem and 148–9 co-variation locking theories and 182–3 historical causal theories and 181–2 ideal co-variation 183–4 and metaphysical considerations 143–4 phenomenal 303–4 predicative 134–7 problems with 186–7 singular 137–9 teleofunctional theories 185 external memory 159 Ezquerro, J. 197 false memory 215 Feigl, H. 7 Fido theory 181 Field 275 first-person authority 116, 274, 292–3 first-person/third person perspective 43, 45, 48, 49, 51, 56–9, 81, 82, 86, 107–8, 115–17, 119, 128–31, 209, 293 Fitch, T. 63–4, 67, 68, 69, 332n15 Flanagan, O. 25, 41, 165 Fodor, J. A. 12, 14, 17, 19, 21, 62–3, 67, 102, 128, 130, 153, 155, 158, 160, 166, 179, 182, 183, 184, 185, 186, 188, 192, 235, 286, 332n14, 334n5, 335nn16, 22, 337nn3, 9, 338nn12, 14 A Theory of Content 275 intentional realism of 122–7 folk psychology 11–12, 102–4, 108, 121, 123–5, 131–2, 156, 161, 289, 293–4, 302 anomalousness and mental holism and 111–12 a priori principles of interpretation and 112–14 belief and meaning and 114–15
causal requirement and 111 consciousness and 129–30 design stance and 118 eliminativist threat to 105–6 emotions and 130–2 idealization and abstraction of 120 intentional network and 104 intentional realism and 122–7 intentional stance and 118 mentalizing abilities and 128–9 and mental life 127 philosophy of mind and 108–11 physical stance and 119 propositional attitudes and intentional actions and 103–4 realism about 106–7 scientific psychology and 107–8, 117–18 and self-knowledge 107 third-person epistemology of mind and 115–17 Foster, J. 332n1 frame problem 294 Frankish, K. 284 free will 163 Frege, G. 205, 206–7, 284 Freud, S. 6, 307 Frith, C. D. 163, 164 functionalism 7, 8, 39, 40–1, 50, 156, 222, 226–7, 295 cognitive science and 152–4 and extended mind 232–3 and representation 33 functionalization 248 Gallagher, S. 129, 330n1 (Ch 2) Gallese, V. 129 Gallistel, C. 173–4, 177, 186 gambler's fallacy 337n2 Garvey, J. 336n25, 338n16, 340nn4–5 Belief in God 333n6 Free Will 333n8 Gasser, L. 160 Gates, G. 186 Geach, P. T. 165 Geertz, C. 159 Gibb, S. 338n6, 339n7 Gibson, E. J. 158 Gibson, J. J. 158 Gillett, C. 197, 253 God 101 upward determination and 96–7
Godfrey-Smith, P. 17, 165 Gois, I. 28 Goldberg, S. 336nn34, 36 Goodale, M. 66, 170 Good Old-Fashioned AI (GOFAI) 151, 153, 155–6, 160, 162, 167 granny psychology 125 Grau, C. 162 Greenfield, P. M. 159 Grice, H. P. 70, 188 Griffin, D. R. 169 Griffiths, P. 27–8 Grush, R. 157 Gulick, R. V. 197 Harman, G. 25, 32, 260 Harnish, M. 337n3 Hart, W. D. 332n1 Hattiangadi, A. 277 Haugeland, J. 151, 166, 168, 223 Hawthorne, J. 244, 333n3 Hayes, P. J. 169 Heidegger, M. 161, 162, 168 Heil, J. 20, 253, 336n27 Hempel, C. 193, 282 Hess, P. 191 Hesse, M. B. 169 heterophenomenology 48–9, 130 higher-order thought 270–1, 296 Hinton, G. E. 156, 158 historical causal theories and externalism 181–2 Hobbes, T. 317 Hofweber, T. 337n6 Honderich, T. 191, 268, 338n1 Horgan, T. 12, 246, 334n4, 339n11 Hornsby, J. 280 Horwich, P. 188, 278 H2O XYZ 18, 21, 180, 181, 188, 295 human-scope cognitive system 234–5 Hume, D. 97, 288 Hunt, E. B. 155 Hurley, S. L. 55, 339n2 Husbands, P. 167, 225–6, 228 Hutchins, E. 160 Hutto, D. D. 35, 52, 255, 276 ideal co-variation externalism 183–4 identification problem and substance dualism 78–83 implementational materiality and body 227–9
incomplete linguistic externalism 136–7 incorrigibility 56–9 indeterminacy 112, 330n2 and substance dualism 83–4, 86 indiscernibles identity 203 individualism see internalism infants-and-animals objection, and mental states 58 inferential promiscuity 23 instrumentalism 12, 154 intentionality 21, 55, 60–4, 67, 112, 120–1, 159, 168, 171, 172, 296–7, 332n13 and action 110 and content 123, 124–5, 130, 172, 177–8, 185, 189 nano 63–4, 68, 69 network, and beliefs and desires 104 and normativity 274–9 orders of 67–8 primitive 62 and psychology 123–5 and realism 122–7 and sensory states 55–6, 61, 65, 68 unconscious state and 65 intentional stance 12–13, 153–4 Dennett on 118 interactive property dualism 5–6 interactive substance dualism (Cartesian dualism) 4, 26 challenges to 4–6 non-reductive physicalism and 7–10 reductive physicalism and 7, 8 see also substance dualism internalism 133, 297 conceptual roles and 179–81 images and stereotypes and 178–9 and metaphysical considerations 143 phenomenal 304 singular 139 thorough-going 140–1 two-factor 141–3 interpretation theories and a priori principles 112–14 linguistic and non-linguistic 114–15 introspection 6–7, 45, 48, 163, 178, 240, 259, 267–70, 297–8 irrealism, about psychological talk 106 Irwin, W. 162 Jackendoff, R. 51 Jackson, F. 22, 25, 26, 28, 29, 30, 31, 32, 43, 95, 101, 242, 255, 309, 318, 333n9
Johnson, M. 339n1 Jonas, H. 158, 166 Joyce, J. Ulysses 49 Joycean machine 50 Kallestrup, J. 197 Kaplan, D. 188, 337n7 Karmiloff-Smith, A. 153, 169 Keijzer, F. 336n24 Kenny, A. 333n3 K-function 207 Kim, J. 10, 19, 20–1, 22, 55–6, 50, 190, 191, 193–6, 198–201, 247, 312, 333n4, 334n1, 338nn2–5, 339n9 Kirsh, D. 157 Kitcher, P. 12 knowledge and consciousness 28–33 Koch, C. 164 Kolers, P. A. 163 Kornblith, H. 196 Kriegel, U. 271, 272, 273 Kripke, S. 181, 335n5, 338n11 Ladyman, J. 94 Lakoff, G. 339n1 language of thought 124–7, 153, 172, 298 Larmer, R. 333n4 Latham, N. 333n10 Lawrence, M. 162 Leibniz's law 205 LePore, E. 179, 197 Lettvin, J. Y. 162 Leuenberger, S. 244 Levine, J. 45, 95, 255, 258, 332n21 Lewis, D. 30, 31, 154, 242, 255 Libet, B. 163 linguistic meaning 136–7 linguosemantics 337n1 Loar, B. 95, 119, 188 Mind and Meaning 106 Locke, J. 204, 208, 209–16, 218 Lockwood, M. 26 Loewer, B. 184, 197 logical behaviourism see behaviourism Lombard, L. B. 339n11 Lowe, E. J. 203, 206, 208, 209, 211, 217, 219, 250, 333n4 Lucas, J. R. 154 Ludlow, P. 333n9, 336nn26, 31 Lutz, A. 43 Lycan, W. G. 38, 42, 50, 268, 269
Macdonald, C. 251, 339n11 Macdonald, G. 251, 339n11 machine consciousness 165 Majors, B. 336nn39–40 map theory 15 Marr, D. C. 6, 158, 175 Marras, A. 200, 339nn9–11 Martin, M. G. F. 261, 266, 267 Marx, K. 220 material stuff and consciousness 165 Matrix, The 162 Matthen, M. 332n17 Maturana, H. 161, 166 Mawson, T. 73, 249 May, L. 169 McCarthy, J. 169 McClelland, J. L. 155 McCulloch, G. 152, 154 The Life of the Mind 316 McDowell, J. 138, 161, 226, 316, 335n10 McGinn, C. 6, 23, 25, 45, 46, 95, 162, 165, 258, 331n4, 332n21 McKinsey, M. 336n35 McLaughlin, B. P. 27, 156, 334n5, 336n26 McTaggart, J. 283 Mellor, D. H. 242, 250 Melnyk, A. 93, 94, 241, 334n6 Menary, R. 231, 339n2, 340n5 mental causation 19, 21, 190, 299 anomalous monism and 191–3 and anomalousness of mental 20 dual explanandum solutions 22–3 exclusion problem 20–1, 193–6 metaphysics of events and 196–202 and physicalism 250–3 program explanation and 22 reductive physicalism and 21–2 and representational content 21 mentalese see language of thought mental representation 13, 32, 64, 123–6, 153, 178, 189 conceptual role approach 15–16 dual explanandum solutions and 22–3 information-theoretic approaches 16–17 narrow versus wide content 18–19 representational theory of mind and 14 mental substances 4, 26, 305 Menzies, P. 197 Merleau-Ponty, M. 158, 161, 162 Mervis, C. B. 155
metabolism 166 meta-cognition 163 methodological behaviourism see behaviourism methodological issues 2 Metzinger, T. 164 microphysics 98 Millikan, R. 158, 185, 275, 338n14 Mills, E. 333n4 Milner, A. 66, 170 mind 54 and body problem 1, 2–3, 299–300 lack of unified conception of 55 property cluster view of 56 single property view of 56–64 system view of 64–72 see also individual entries minimal physical duplication and physicalism 243–5 Minsky, M. L. 152 mirror-neurons 128–9 misrepresentation 67 modest physicalism 30–1, 32 Montero, B. 92, 244, 332n1, 334nn1, 6 Moore, D. 200, 283, 339n8 Moore, G. E. 91, 246–7, 283 moral psychology 301 moral responsibility 1, 88–9 Morris, M. 161, 335n21 mountain–molecule relations and physicalism 95–8 multiple drafts model 49, 130 multiple instantiation 86–7 multiple personality syndrome 163 multiple realizability 227–8, 232, 233, 301 Myin, E. 52 mysterianism 55 Nagasawa, Y. 333n9 Nagel, T. 23, 25, 37, 214, 255, 258, 316 nano-intentionality 63–4, 68, 69 narrow versus wide content 18–19 naturalism 144 non-reductive 42–7 natural kind externalism 135–6 natural meaning 70, 182 natural selection 17, 185 Neander, K. 185, 260 Nemirow, L. 30, 32, 255 neo-Lockean approach, circularity objection to 216–19
neural network 151, 224, 229, 237, 286, 301–2 Newell, A. 152, 153, 155, 159, 167, 233–4 new mysterianism 25 Ney, A. 252, 338n5 Noë, A. 52, 161, 265, 276, 339n1 nominalism 199 non-computational representational theory, of perception 158 non-modular computation 153 non-naturalism 246–7 non-nomicity 62–3, 67 non-reductive naturalism 42–7 non-reductive physicalism 7–10, 20, 110, 190, 191, 197, 200, 201, 250, 252–3, 305–6 non-representationalist view, of experience 52 Noordhof, P. 196, 239, 245, 246, 252, 253, 255, 264, 266, 268, 278, 279, 336n24 Norman, D. A. 152, 156, 163 Norman, J. 158 O'Connor, T. 245, 251 occurrent states 56 Oderberg, D. 333n2 Olson, E. 210 ontology 11–12, 49, 73, 77, 79, 82, 86, 92, 95–6, 100, 165, 191, 219, 240, 245, 253, 256–7, 280, 284, 289, 308 Oppenheim, P. 193 optimistic physicalism 25 O'Regan, J. K. 161, 339n1 Osherson, D. 337n3 other minds 79–83, 107, 116, 121, 129, 302 over-determination 5, 6, 20, 23, 196, 197, 250 Papineau, D. 45, 46–7, 158, 185, 250, 256, 275 parallel distributed processing (PDP) 154–6 parallelism, between lines 207 parenthood relation and identity 213 Parfit, D. 215, 216 parity principle 229–32 Parker, A. 157 partial integration 75 participatory engagement 168 Pattee, H. H. 166 Paul, L. 251 Peacocke, C. 179, 185, 261, 336n27
Pearl, J. 169 Penrose, R. 38, 154, 165 perception 14, 45, 50, 51, 129, 134, 137, 258, 260–1, 265–71, 274, 276 conscious 68–9 inner 297, 303 and non-computational representational theory 158 outer 303 perceptual content 51, 276, 303–4 Pereboom, D. 196 Perner, J. 163 person, significance of 204 personal identity 203 criteria of 206–8 and identification 204–6 Locke's criterion of 211–16 neo-Lockean approach, circularity objection to 216–19 person, significance of 208–11 and substance dualism 85–7 pessimistic physicalism 25 Pettit, P. 22, 25, 32, 249 Pfeifer, R. 224–5, 228 phenomenal blueness 51 phenomenal concepts 44, 45, 256–7, 304, 331n2 (Ch 2) phenomenal consciousness see consciousness phenomenal content 254, 260–8 phenomenal fundamentalism 27 phenomenal representation 33 phenomenology 52, 60–1, 152, 169, 267, 295, 304–5 and cognitive science 161, 162 philosophical zombies see zombies 'philosophy of presence' 168 phlogiston theory 12 phredness 32 physical closure 5–6, 330n2 physicalism 74, 88 anti- 26–7 characterization of 240–9, 255 and consciousness 24–6 domain of 92–8 intentionality and normativity and 274–9 mental causation and 250–3 non-reductive 7–10, 20, 110, 190, 191, 197, 200, 201, 250, 252–3, 305–6 phenomenal consciousness and 254–5, 260–74
phenomenal properties and explanatory gap and 255–60 physical and 98–101 reductive 7, 8, 21–2, 305 physicalist structuralism 94 physical reports 57 physical stance 119, 154 physical substances 4 physical symbol systems 234, 236–7 physics 258 folk 104 of nuclear reactions 302 and physicalism 98–100, 241 Pietroski, P. 185 Pinker, S. 155 Pitts, W. H. 152, 154 Place, U. T. 7, 154 Plato 73, 186 Poland, J. 93 Popper, K. The Self and its Brain 333n7 powers ontology 245 predicative externalism 134–7 predicative thought, two-factor theory of 141, 142 Premack, D. 104 preservative memory 147 Priest, G. 337n6 primitive intentionality 62 Prince, A. 155 Prinz, J. 331n2 (Ch 2) privileged access 292, 293, 306 property cluster view, of mind 56, 66 property exemplification theory 199 propositional attitudes 13–14, 67, 117, 122, 125, 131, 154, 176, 281, 287, 288, 298, 306 beliefs and desires and 111–12, 115–16, 123 and intentional actions 103–4 propositional content 62, 123 psychoanalysis 6, 307 psychological behaviourism see behaviourism psychology cognitive 6, 108, 151 computational 124 folk see folk psychology granny 125 intentional 123–5 moral 301 scientific 107–8, 117–19, 121, 124
psychophysical nexus and consciousness 45–6 psychosemantics 337n1 putative identity criterion 207–8 Putnam, H. 7, 8, 18, 38, 152, 153, 154, 161, 166, 181, 188, 283, 286, 295, 312, 315, 316, 330n9, 334n3 (Ch 7), 335nn4–5 Pylyshyn, Z. W. 155, 157, 158, 166, 235 qualia 24, 26, 28–33, 61, 62, 69, 153, 164, 165, 167, 247, 260, 261, 307–8, 330n11 quantum indeterminacy and substance dualism 83–4 quasi-memory 215–16 Quine, W. V. 177, 179–80, 186, 189, 337nn6, 9 radical body-centrism 222 randomness 84 rationality 9, 13, 105, 108, 109–10, 112–14, 120, 121, 172 Ravenscroft, I. 1, 11, 250, 251, 275, 330n8 Ray, T. S. 166 Raymont, P. 197, 200 reaction-diffusion (RD) systems 225, 228, 229 reasons, significance of 308 recursion 163 reduction, significance of 46–8, 191, 193, 196, 247–8, 308–9 reductive naturalism 38–41 reductive physicalism (identity theory) 7, 21–2, 305 challenge to 8 referential opacity and 175–8 reflexive consciousness 163, 271, 273, 287 reflexive relation and identity 205, 206, 213 Reid, T. 212, 213 representation 2–3, 14, 52–3, 62, 309–10, 223–5, 228, 235–6, 240, 259–67, 271–7, 283, 294, 303 cognitive science and 156–9 computational/representational theory of thought (CRTT) 172–5 content and 21, 51–2, 70, 135–9, 142, 171, 175, 189 ecumenical approaches 187–9 externalist theories and 181–7 and functionalism 33 internalist strategies and 178–81
representation (Cont'd) mental causation and 21 phenomenal 33, 260–72 referential opacity and 175–8 see also mental representation Rey, G. 171, 184, 275, 337nn2–4, 338nn13–14 Rives, B. 253 Robb, D. 251 Robinson, H. 332n1, 333n2 Rock, I. 6 Rodriguez-Pereyra, G. 253 Root, M. 116 Rorty, R. 56–8, 331nn5–9 Rosch, E. H. 155 Rosenschein, S. J. 160 Rosenthal, D. M. 38, 50, 271 Rosner, B. S. 163 Ross, D. 333n10 Rowlands, M. 222, 339n2, 340n5 Rozemond, M. 333n3 Rumelhart, D. E. 155 Rupert, R. 339n2 Russell, B. 27, 283, 309 Ryle, G. 174, 283, 284 The Concept of Mind 302 Salmon, N. 335n11 Samuels, R. 153 Sawyer, S. 133, 275, 335nn13, 18, 336nn27, 37–40 Scheier, C. 224–5, 228 Scheutz, M. 167 Schiffer, S. 336n34 Schouten, M. 336n24 scientific psychology 107–8, 117–19, 121, 124 Davidson against 117 Scriven, M. 165 Seager, W. 249 Searle, J. R. 64–6, 69, 154, 162, 165, 166, 168, 174, 285, 331n4, 332n16 seeing-as 129 Segal, G. 19, 335nn12, 15 Sejnowski, T. J. 155 self, significance of 310 self-attributions 149 self-awareness 37, 209, 310 self-consciousness 37, 271, 287, 310–11 reflexive 163 self-knowledge 107, 134, 145–7, 292, 306, 311, 314
semantics 14–15, 123, 126, 139, 180, 181, 234, 311–12 and compositional syntax 156, 235–6 content and 52, 67, 70–1, 185, 188 promiscuity 16, 17 sensation 129, 131 sensitivities 174 sensori-motor knowledge and expectations 265, 276 sensory states and intentional states dichotomy 55–6, 61, 65, 68 Shafer-Landau, R. 247 Shallice, T. 152, 163 Shannon, C. E. 67, 337n10 Shapiro, L. 228 Shoemaker, S. 251, 261, 262, 272 Siegel, S. 265 Simon, H. 152, 153, 159, 167, 223, 233–4 Simons, D. J. 161 simulation theory 128 single property view, of mind incorrigibility 56–9 intentionality 60–4 singular externalism 137–9 singular internalism 139 Slocum, A. C. 229 Sloman, A. 152, 153, 157, 158, 164, 165, 167, 168, 169, 170 Smart, J. J. C. 2, 32 Smith, A. D. 265 Smith, B. C. 117, 131, 167, 168, 169 Smith, Barry C. 102 Smith, T. 167 Smolensky, P. 155, 156 Soames, S. 335n11 Sober, E. 155 social externalism 136–7 Socrates 73 souls and identity criterion 218 and physicalism 98–9 and substance dualism 79, 81, 83–4 Sparber, G. 197 special sciences 123–7 speech acts 49 Stalnaker, R. 183, 257, 275 Stampe, D. 183 standing states 56 Sterelny, K. 2, 14, 182 Stich, S. 13, 15, 330n6, 335n16 Stoljar, D. 26, 258, 333n9 Stone, T. 128
Stoutland, F. 191 Strawson, G. 275–6, 283, 305 Strawson, P. F. 217 strong eliminativism 27 strongly optimistic physicalism 25 strongly pessimistic physicalism 25 Sturgeon, S. 256 subjective awareness 254, 267–74 substance dualism 73, 77, 332n1, 333n2 consciousness and 89–90 freedom and 87–9 identification problem 78–83 interaction problems and 83–5 personal identity and 85–7 significance of 74–7 see also interactive substance dualism substantive event memory 147 superbelief 284 supervenience 10–11, 52, 96–7, 111, 133, 196, 242, 245, 247, 312–13, 338n4 Sutton, J. 231 Swampman 11, 18 Swinburne, R. 218, 332n1, 333nn2, 11 Sylvan, R. 167 symbol 167 symmetry relation and identity 205, 206, 213 systematicity, of cognitive performance 235–6 system view 56, 66–72 consciousness and 64–6 teleofunctionalism 185, 313–14 teleological theory of content 17 Thagard, P. 155, 169 theory of mind 104 theory-theory 128 third-person perspective see first-person/third-person perspective thisness 79 Thomasson, A. 23 Thompson, E. 43, 52 Thornton, C. 170, 224, 228 thorough-going internalism 140–1 thought experiment 79, 89, 255 threshold value see neural network token 184 -token identity 154 token-physicalism 143, 335n20 total implementation sensitivity 227 Touretsky, D. S. 156
transcendental realism 25 transitivity relation and identity 205, 206, 212 transparency 38, 140, 168, 175–6, 287, 314 truth-making 252–3 Turing, A. M. 166, 172, 174, 225, 283, 314 Turing-computation 152 Turing machines 167, 175, 314–15, 337n4 Turing test 315 Turner, M. 169 Twin Earth 18, 124, 180, 295, 315–16, 334n3 (Ch 7), 335n4 two-factor theory of predicative thought 141, 142 of singular thought 141–2 Tye, M. 32, 38, 51, 60, 61, 62, 68, 69, 255, 257, 260, 262, 263, 267, 331n2 (Ch 2), 331n10, 336n26 type identities 7, 10 type-physicalism 143, 335n20 type-type identity 40 Ullman, S. 158 uncommitted physicalism 25–6, 30 unconscious intentional state 65 upward determination 96–7 van Gelder, T. J. 157, 162 Van Gulick, R. 29, 32–3 Varela, F. J. 161, 162, 166 variable realization 242 Velleman, J. D. 278 via negativa physicalism 101 Vicente, A. 197 virtual machines 50, 152, 156, 166, 167, 170 vision for action 66 Von Eckardt, B. 12 von Wright, G. 280 Walk, R. D. 158 Wallace, A. F. C. 159 Warfield, T. A. 336n26 weak eliminativism 27–8 weakly optimistic physicalism 25 weakly pessimistic physicalism 25 Weather Watchers 283 Weaver, W. 337n10 Wedgwood, R. 337n2 Weiskrantz, L. 66
what-it-is-likeness 31, 32, 37–8, 49, 51, 89–90, 255, 258–9, 268, 308, 316–17 Wheeler, M. 157, 162, 220, 222, 223, 232, 233, 276, 339n2 Whitby, B. 169 White, S. 188 Whyte, J. T. 275 Wiggins, D. 215, 216 Wigner, E. 100 Wilkerson, T. E. 335n5 will, significance of 317 Williamson, T. 179 Wilson, J. 245, 334n3, 335n23, 339n2 Witmer, G. 95
Wittgenstein, L. 155, 174, 178, 314, 338n11 Philosophical Investigations 283 Wollheim, R. 131 Woodruff, G. 104 Wright, C. 336n37 XYZ see H2O XYZ Yablo, S. 23, 252, 330n2 Yli-Vakkuri, J. 339nn9, 11 Zahavi, D. 330n1 (Ch 2) Ziff, P. 164 zombies 24, 34, 43–4, 97, 164, 317–18