The Bloomsbury Companion to the Philosophy of Consciousness
Also available from Bloomsbury: The Bloomsbury Companion to Analytic Philosophy The Bloomsbury Companion to Epistemology The Bloomsbury Companion to Metaphysics The Bloomsbury Companion to Philosophical Logic The Continuum Companion to Philosophy of Mind The Bloomsbury Companion to the Philosophy of Science
The Bloomsbury Companion to the Philosophy of Consciousness Edited by Dale Jacquette
Bloomsbury Academic An imprint of Bloomsbury Publishing Plc
LONDON • OXFORD • NEW YORK • NEW DELHI • SYDNEY
Bloomsbury Academic
An imprint of Bloomsbury Publishing Plc

50 Bedford Square, London, WC1B 3DP, UK
1385 Broadway, New York, NY 10018, USA

www.bloomsbury.com

BLOOMSBURY and the Diana logo are trademarks of Bloomsbury Publishing Plc

First published 2018
© Dale Jacquette, 2018

Dale Jacquette has asserted his right under the Copyright, Designs and Patents Act, 1988, to be identified as Editor of this work.

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publishers.

No responsibility for loss caused to any individual or organization acting on or refraining from action as a result of the material in this publication can be accepted by Bloomsbury or the editor.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

ISBN: HB: 978-1-4742-2901-2
ePDF: 978-1-4742-2903-6
eBook: 978-1-4742-2902-9

Library of Congress Cataloging-in-Publication Data
Names: Jacquette, Dale, author.
Title: The Bloomsbury companion to the philosophy of consciousness / Dale Jacquette.
Description: New York : Bloomsbury, 2017. | Includes bibliographical references and index.
Identifiers: LCCN 2017027229 (print) | LCCN 2017036282 (ebook) | ISBN 9781474229029 (ePub) | ISBN 9781474229036 (ePDF) | ISBN 9781474229012 (hardback)
Subjects: LCSH: Consciousness.
Classification: LCC B808.9 (ebook) | LCC B808.9 .J33 2017 (print) | DDC 128/.2–dc23
LC record available at https://lccn.loc.gov/2017027229

Cover design: Irene Martinez Costa
Typeset by Deanta Global Publishing Services, Chennai, India
To find out more about our authors and books visit www.bloomsbury.com. Here you will find extracts, author interviews, details of forthcoming events and the option to sign up for our newsletters.
In memory of Dale Jacquette, 1953–2016
Think of consciousness as a territory just opening to settlement and exploitation, something like an Oklahoma land rush. Put it in color, set it to music, frame it in images—but even this fails to do justice to the vision. Obviously consciousness is infinitely bigger than Oklahoma. —Saul Bellow, Collected Stories (2001), Afterword, 441.
Contents

Contributors ix
Preface xi

1 Introduction: Philosophy of Consciousness  Dale Jacquette 1

Part 1 Historical Development
2 The Hard Problem of Understanding Descartes on Consciousness  Katherine Morris 11
3 Brentano’s Aristotelian Concept of Consciousness  Liliana Albertazzi 27
4 Wittgenstein and the Concept of Consciousness  Garry L. Hagberg 57
5 ‘Ordinary’ Consciousness  Julia Tanney 78

Part 2 Groundbreaking Concepts of Consciousness
6 Consciousness, Representation and the Hard Problem  Keith Lehrer 93
7 The Knowledge Argument and Two Interpretations of ‘Knowing What it’s Like’  Daniel Stoljar 108
8 Conscious and Unconscious Mental States  Richard Fumerton 126
9 Higher-Order Theories of Consciousness  Rocco J. Gennaro 142
10 Kripke on Mind–Body Identity  Scott Soames 170

Part 3 Metaphilosophy of Consciousness Studies
11 Understanding Consciousness by Building It  Michael Graziano and Taylor W. Webb 187
12 The Illusion of Conscious Thought  Peter Carruthers 211
13 Actualism About Consciousness Affirmed  Ted Honderich 234
14 Cracking the Hard Problem of Consciousness  Dale Jacquette 258

Part 4 Mental Causation, Natural Law and Intentionality of Conscious States
15 Toward Axiomatizing Consciousness  Selmer Bringsjord, Paul Bello and Naveen Sundar Govindarajulu 289
16 Intentionality and Consciousness  Carlo Ierna 325
17 Cognitive Approaches to Phenomenal Consciousness  Pete Mandik 347
18 Free Will and Consciousness  Alfred Mele 371
19 Notes Towards a Metaphysics of Mind  Joseph Margolis 389

Part 5 Resources
20 Annotated Bibliography 413
21 Research Resources 428
22 A–Z Key Terms and Concepts 437

Index 471
Contributors

Liliana Albertazzi
Principal Investigator, Center for Mind/Brain Sciences (CIMEC)
Professor at the Department of Humanities
University of Trento
Trento, Italy

Selmer Bringsjord
Chair, Department of Cognitive Science
Professor of Computer Science and Cognitive Science
Rensselaer Polytechnic Institute
Troy, NY, USA

Peter Carruthers
Professor, Department of Philosophy
University of Maryland
College Park, MD, USA

Richard Fumerton
Professor, Department of Philosophy
The University of Iowa
Iowa City, IA, USA

Rocco J. Gennaro
Professor and Department Chair of Philosophy
University of Southern Indiana
Evansville, IN, USA

Michael Graziano
Professor, Department of Psychology and Neuroscience
Princeton University
Princeton, NJ, USA

Garry Hagberg
James H. Ottaway Jr. Professor of Philosophy and Aesthetics
Department of Philosophy
Bard College
Annandale-on-Hudson, NY, USA

Ted Honderich
Grote Professor Emeritus of the Philosophy of Mind and Logic
University College London
London, England, UK

Carlo Ierna
Postdoctoral Researcher
Research Institute for Philosophy and Religious Studies (OFR)
Department of Philosophy and Religious Studies—Philosophy
Universiteit Utrecht
Utrecht, The Netherlands

Dale Jacquette
Senior Professorial Chair
Division for Logic and Theoretical Philosophy
University of Bern
Bern, Switzerland

Keith Lehrer
Regent’s Professor Emeritus of Philosophy
University of Arizona
Tucson, AZ, USA

Pete Mandik
Professor, Department of Philosophy
William Paterson University
Wayne, NJ, USA
Philosophy-Neuroscience-Psychology
Washington University
St. Louis, MO, USA

Joseph Margolis
Laura H. Carnell Professor of Philosophy
Department of Philosophy
Temple University
Philadelphia, PA, USA

Alfred Mele
William H. and Lucyle T. Werkmeister Professor of Philosophy
Florida State University
Tallahassee, FL, USA

Katherine Morris
Supernumerary Fellow in Philosophy
Mansfield College
Oxford University
Oxford, England, UK

Scott Soames
Professor of Philosophy
School of Philosophy
University of Southern California
Los Angeles, CA, USA

Daniel Stoljar
Professor of Philosophy
Australian National University (ANU)
Canberra, Australia

Julia Tanney
Independent Scholar
Preface The chapters collected in this book investigate philosophical aspects, problems and challenges in the concept of consciousness. Consciousness studies are a subdivision in philosophy of mind with a recent history and unique character that sets them slightly apart from other developments in philosophy of mind or philosophical psychology otherwise conceived. I am grateful to Fiona Dillier for her able assistance in collating materials for the Part V Resources section, with detailed Annotated Bibliography, electronic Research Resources, and A–Z of Key Terms and Concepts. Thanks are due especially to contributing authors for their excellent chapters that present the recent history and contemporary thinking in philosophy of consciousness.
1
Introduction: Philosophy of Consciousness Dale Jacquette
‘Consciousness was upon him before he could get out of the way.’ — Kingsley Amis

The philosophy of consciousness is a relatively new concentration of interest within the more widely recognized general field of philosophical psychology and philosophy of mind. The concept of consciousness is not taken for granted, but subjected to close critical scrutiny. What exactly is meant by speaking of consciousness? What sort of thing is it, to begin categorically? If we are not simply to spoon up synonyms, then philosophy of consciousness assumes a specific burden in philosophical psychology and philosophy of mind: what kind of thing, ontologically and metaphysically speaking, is a moment of consciousness? What are the epistemically justified ways of discerning the qualities and relations of such moments? If moments of consciousness have, as they are frequently said to have (undeniably, if phenomenology is consulted), qualia and intentionality, then what kind of thing is a moment of consciousness, that it possesses such apparently physically irreducible kinds of properties? These are not new questions. Nor need any urgency be perceived in addressing these conceptual questions about, and investigations of, the properties of consciousness exactly now. To the extent that we question the concept of consciousness in approaching the traditional mind–body problem and the analysis of the concept and identity conditions for persons, inquiry in philosophy of mind becomes literally self-conscious. In the process, we are made conscious of the centrality of the concept of consciousness in every aspect of the philosophy of mind. Philosophy of consciousness focuses attention and places emphasis on the concept category of consciousness, which needless to say has been present all along. It is in one sense precisely what thinkers in the philosophy of mind have always been talking about. The challenge is to try explaining what that
means, what philosophy of mind has, perhaps less self-consciously, had in its sights from ancient times. There is no competition between philosophy of mind and contemporary philosophy of consciousness. Philosophy of consciousness is extensionally subsumed by philosophy of mind. A discovery in philosophy of consciousness is automatically a contribution to philosophy of mind, even if not conversely. Philosophy of consciousness can venture subdivisions of its subject that might not have occurred in a philosophy of mind lacking explicit engagement with the concept of consciousness. Consciousness is said to come in three varieties – in several terminologies, the perceptual, affective and cognitive. Whether or not this is correct is not so much the point as that it offers something substantive about the nature of consciousness for philosophical consideration. What does it mean, and what are the arguments? What are the relevant identity conditions, if the perceptual, affective and cognitive are different kinds or applications of consciousness? Is there, or can there not be, anything common underlying all three, to which all three kinds or modes of consciousness can be reduced? Are there three consciousnesses, or three capabilities of a single unified consciousness? Can consciousness fail to be unified? What would that mean? What ontic and explanatory models could be invoked in the metaphysics of three kinds or modes of consciousness? In the science of psychology we can afford to be indifferent about these questions, but not here in the philosophy of consciousness. We must arrive at defensible identity conditions for consciousness. It should suffice to understand what it is for something to be a single moment of consciousness carried along, metaphorically speaking, in a progression streaming in sync with the conscious subject’s perception of the passing of time. This too has proved elusive.
It is to attempt to express in words the necessary and sufficient conditions for a thinking subject to think a single conscious thought. Anyone who has not tried but considers the task trivial need only continue reading as the authors assembled here raise philosophical difficulties for the concept and explore the perplexing nature of consciousness. There is nothing more familiar to each individual conscious being, and by reputation few things more resistant to exact articulation, let alone reductive conceptual analysis. There is, by reputation again, supposed to be a ‘hard’ problem of consciousness. What seems hardest is saying exactly what the problem is meant to be. What is it supposed to be so hard to do? If it is to explain how living tissues can be related to the existence of moments of consciousness, then more must be said as to what sorts of explanations can satisfy and for what reasons they would finally
answer the hard question. Otherwise it is not clear that a question is being asked at all, of any level of difficulty. Why should it not do, then, to say that qualia and intentionality supervene on a conscious thinking subject’s functioning neurophysiology, to which it can be further added that qualia and intentionality are emergent properties that cannot be fully explained in terms exclusively of the purely physical properties of their supervenience base? Philosophy of consciousness taking inquiry in that direction opens many avenues of theory development with potential implications for analysis of the concepts of perception, reasoning, mental action, passion and suffering, the concept of person, freedom of the will, action theory, phenomenological epistemology, and much else besides. Many philosophers of consciousness will not want to pursue all of these possibilities. There is thankfully no party line philosophically in consciousness studies. Instead there is commitment to improved understanding of the concept and properties of consciousness insofar as these can be rigorously investigated and conclusions defended by reasonable arguments. Engagement in philosophy of consciousness is a concentrated study in philosophy of mind, dedicated specifically to understanding the existence and nature of consciousness, beginning with a single moment of consciousness abstracted in isolation from the streaming progression of conscious moments. If consciousness is something like a streaming film, as Saul Bellow suggests in the book’s epigraph, with a Technicolor soundtrack and all the other sensory inputs of a richly experienced individual moment of consciousness, then we proceed analytically by asking what a single frame in the movie is and how it links up and connects with all the other frames running through the chain of an individual subjective consciousness. These themes are examined from multiple perspectives in the present collection’s four main parts.
The Historical Development of the philosophy of consciousness presented in Part I examines highlights selected from the history of the subject and explains their relation to the evolution of the subject among philosophers of many different orientations. Katherine Morris begins the book, appropriately, by examining ‘The Hard Problem of Understanding Descartes on Consciousness’. Liliana Albertazzi, in her comprehensive study ‘Brentano’s Aristotelian Concept of Consciousness’, examines Franz Brentano’s descriptive and experimental psychology against the background of his thesis of the characteristic intentionality or ‘aboutness’ of thought. Albertazzi draws on her expertise in perceptual psychology to conclude authoritatively, with Brentano and by implication with Aristotle in De Anima, that even complete knowledge of the brain and its workings will never adequately
explain the concept, possibility, structure or phenomenological contents of consciousness revealed only to Aristotle’s ‘inner sense’ of the active intellect or Brentano’s faculty of ‘inner perception’. Garry L. Hagberg in ‘Wittgenstein and the Concept of Consciousness’ examines, historically and philosophically, the later Wittgenstein’s scattered remarks relevant to understanding the nature of consciousness. Hagberg interprets Wittgenstein, especially in the posthumous Philosophical Investigations, Remarks on the Philosophy of Psychology and Last Writings on the Philosophy of Psychology, and also Zettel and the Blue and Brown Books, as opening his discussion to a double entendre based on explicating several meanings of the colloquial phrase, ‘it is not what you think’. Julia Tanney in ‘“Ordinary” Consciousness’ considers a common-sense approach to the problems of consciousness that requires theory to become more self-conscious about the questions it asks, the kinds of answers it wants and expects, and the kinds it could meaningfully accept. The chapter exemplifies ‘ordinary language’ considerations about the concepts and terminologies conventionally adopted in an effort to express conscious experience, notably the ‘what it is like’ vocabulary of qualia. She considers zombie arguments from this point of view and presses received philosophies of consciousness with a dilemma whereby the possibility of zombies is conceivable only if sufficient inner mental life of precisely the sort in question in the internalism–externalism debate is built into the concept of conscious non-zombies. The thought experiment consequently does not get off the ground without reasoning in a vicious circle. Part II, Groundbreaking Concepts of Consciousness, opens with an important new chapter by Keith Lehrer titled ‘Consciousness, Representation and the Hard Problem’.
Lehrer elaborates a representational theory of consciousness that makes reflexive exemplarization a key concept in understanding the facts, external world correspondences and truth-conditions for states of consciousness. With one eye on semantics and the other on epistemology, against a realist metaphysical background, Lehrer makes a case for understanding consciousness, and for addressing David J. Chalmers’s ‘hard problem’ of consciousness, as a self-presentation representation that ‘radiates’ beyond itself to represent the external world, however accurately and with whatever epistemic caveats and cautions. Daniel Stoljar in ‘The Knowledge Argument and Two Interpretations of “Knowing What it’s Like”’ considers a response to the knowledge argument based on Frank Jackson’s colour scientist thought experiment in his (1982) essay, ‘Epiphenomenal Qualia’. The suggestion that Mary, the colour scientist, comes
to ‘know what it is like’ to see red for the first time is judged ambiguous by Stoljar between an interrogative reading and a free relative reading. Stoljar argues that the ambiguity counterobjection is unsuccessful because the crucial concept supporting the knowledge argument can be reformulated to avoid the response. Stoljar distinguishes the what-it-is-like objection from two related proposals in the literature by David Lewis and Michael Tye. Richard Fumerton in his chapter, ‘Conscious and Unconscious Mental States’, considers whether there could be such a thing as an unconscious mental state. The idea is similar to that of the Freudian Unbewußt, where unconscious mental states must have intentionality, even qualia, and be capable of causing or contributing causally to a thinking subject’s external behaviour. Fumerton draws intriguing connections between the concept of conscious versus unconscious mental states, carefully defined and explicated, and such now-classic problems in philosophy of consciousness as the knowledge argument and the correct interpretation of Jackson’s colour scientist thought experiment. Fumerton is motivated throughout his chapter by the consideration that the existence of mental states is disclosed to the individual phenomenologically, and that intuitively it appears at least logically possible for such states to exist even when the thinking subject is unaware of their occurrence. Rocco J. Gennaro in ‘Higher-Order Theories of Consciousness’ addresses the key question he thinks should be answered by any theory of consciousness: What makes a mental state a conscious mental state? He introduces an overall approach to consciousness called representationalism, and discusses Tye’s First-Order Representationalism, which Gennaro finds inadequate. Gennaro accordingly presents three major versions of higher-order representationalism (HOR): higher-order thought (HOT) theory, dispositional HOT theory, and higher-order perception theory.
He considers objections to HOR, to which he offers replies. He develops a connection between higher-order representational theories of consciousness and conceptualism. He critically examines the claim that the representational content of a perceptual experience is entirely determined by the conceptual capacities the perceiver brings to bear in the experience. Scott Soames in ‘Kripke on Mind–Body Identity’ critically assesses Saul A. Kripke’s efforts to establish mind–body property non-identity in Naming and Necessity and its precursor essay, ‘Identity and Necessity’. The argument is important because it intertwines considerations of modal semantics, identity theory, epistemology and philosophy of mind. Soames’s purpose is to explicate accurately an inference that has to some extent been muddled in the secondary philosophical literature, and to evaluate precisely the essential moves in
Kripke’s reasoning against the background of their broader implications for the philosophy of language and philosophy of mind. Part III, Metaphilosophy of Consciousness Studies, begins with a constructive explanation of consciousness. Michael Graziano and Taylor W. Webb, in ‘Understanding Consciousness by Building It’, offer to explain basic concepts of consciousness by describing in plausible detail how a conscious entity might be systematically built, using technologies and programming protocols already available today. They do so by establishing a hierarchy of nested internet-based information databases and, most importantly, by analysing beforehand the kinds of information concerning which thinking subjects can be expected to be conscious. The authors anticipate the demands on a conscious machine and carpenter-in data and metadata of several kinds, structured and accessible to question-triggered information retrieval of whatever sort is generally available to and required of the reports of developing consciousness. Peter Carruthers in ‘The Illusion of Conscious Thought’ takes a refreshingly skeptical view of the existence of consciousness, supported by an independently interesting volley of arguments against the reality of consciousness, in support of the contrary thesis that consciousness as defined by Carruthers is an illusion. The reason is that, on the strength of Carruthers’s main distinction between conscious and unconscious propositional attitude-events and the categorization of all ‘thoughts’ as propositional attitude-events, all ‘thoughts’ so understood are unconscious. Significantly, Carruthers’s characterization of ‘thoughts’ explicitly excludes perception and affection, applying exclusively to enlanguaged cognition. Ted Honderich in ‘Actualism About Consciousness Affirmed’ offers an explication and philosophical defence of his unique analysis of actual consciousness.
He divides consciousness into three distinct types or modes – perceptual, cognitive and affective. He identifies five ‘leading ideas’ about consciousness extracted from recent philosophical literature on the nature of our subject as a starting place for inquiry, primarily to dispel the assumption that consciousness is monolithic in meaning. Honderich outlines a metaphysics of physical reality that has two aspects – the unitary objective physical world and all the individual subjective worlds in which consciousness participates, resides, perceives, acts, and the like. Honderich explains the main theoretical, explanatory and problem-solving advantages of actual consciousness theory and recommends it on the grounds that it avoids difficulties to which other concepts of consciousness are liable. Dale Jacquette in his highly programmatic contribution, ‘Cracking the Hard Problem of Consciousness’, describes a new paradigm for understanding the concept of consciousness in fundamental metaphysical terms. The proposal
for an Attributive-Dynamic (AD) model of consciousness explains streaming consciousness as the brain’s dynamic activity in attributing information data packages of properties to passing moments of time as predication objects. Streaming consciousness is the brain’s successive attribution of information clusters to distinct moments of time as individual conscious states or moments in the stream. Implications and theoretical applications of the analysis are briefly and suggestively explored. Foremost among the proposal’s touted advantages is its essentialist explanation of the manifest but otherwise inexplicably intimate connection between streaming consciousness and conscious awareness of the passage of time. The model embodies an analytic answer to Edmund Husserl’s quest for a phenomenology of internal time consciousness. Part IV, Mental Causation, Natural Law and Intentionality of Conscious States, begins with Selmer Bringsjord, Paul Bello and Naveen Sundar Govindarajulu’s chapter ‘Toward Axiomatizing Consciousness’, which critically discusses the concept posed in its title. Carlo Ierna in ‘Intentionality and Consciousness’ chronicles important moments in the historical phenomenological tradition in philosophy of consciousness. He considers in detail the immanent intentionality thesis in Brentano’s Psychology, and Husserl’s canonical writings. Ierna contrasts the intentionality tradition in early phenomenology with the rootless intentionalism of John R. Searle. He makes instructive comparisons between the intentionality commitments of these first two related and third disparate contemporary thinkers dedicated to understanding the aboutness of consciousness. Pete Mandik continues the discussion with his chapter ‘Cognitive Approaches to Phenomenal Consciousness’.
Alfred Mele in ‘Free Will and Consciousness’ studies the longstanding thorny problem of human free will versus determinism through the lens of Benjamin Libet’s and his followers’ controversial experiments comparing subjects’ reported timings of conscious decisions with neuromuscular activation times. Mele argues that recent findings bear on the question whether there can be neuroscientific evidence for the nonexistence of free will. He provides empirical, conceptual and terminological background to the topic, and explores the status of generalizations from alleged findings about decisions or intentions in an experimental setting of a particular kind to all decisions and intentions. Casting doubt on the experimental findings and their implications, Mele disconnects recent Libet-inspired experimental findings from the ambitious conclusion offered on their foundation, that the sense of free will at the ground of free and responsible action is delusional. Joseph Margolis completes the book with his thoughts in ‘Notes Towards a Metaphysics of Mind’.
Then follows Part V, Resources, with further annotated readings, electronic website materials, and an A–Z Key Terms and Concepts guide to the vocabulary and categories prevalent in contemporary philosophy of consciousness.

A man’s thinking goes on within his consciousness in a seclusion in comparison with which any physical seclusion is an exhibition to public view. — Ludwig Wittgenstein
Part One
Historical Development
2
The Hard Problem of Understanding Descartes on Consciousness Katherine Morris
Descartes does not make extensive use of the terms ‘consciousness’ (conscientia, conscience) and ‘conscious’ (conscius, conscient) in his corpus.1 (A further complication which I ignore for present purposes: there is a range of terms apart from ‘conscientia’ which, as their context indicates, mean the same thing to Descartes, for example, ‘apperception immediate’. These terms, as well as ‘conscientia’, are usually translated as ‘awareness’ or ‘immediate awareness’ in CSM.) Nonetheless, his conception of consciousness has been widely misunderstood, and these misunderstandings tend to carry further misconstructions in their wake. I will in what follows use the term ‘conscientia’ (and, occasionally, ‘conscius’) rather than ‘consciousness’ (and ‘conscious’) as a reminder of this danger.2 I will offer an interpretation of Descartes’s conception of conscientia that has some continuities with scholastic usage, although I won’t review that complex usage here.3 In particular, I will suggest (a) that conscientia retains (albeit in a complex and indirect way) its etymological links with scientia (knowledge), and (more controversially and more speculatively) (b) that conscientia also retains its etymological links with conscience. (In fact I will suggest that the relevant notion of conscience is itself a form of knowledge, viz. knowledge of one’s own actions.) The interpretation I offer also draws on some concepts taken from Sartre, and thus has some continuities with the usage of some twentieth-century French philosophers.4 Such continuities hardly constitute an argument in favour of this interpretation; nonetheless they perhaps provide some reassurance that the suggested interpretation might be on the right lines.
Introduction: Conscientia and thought

It is clear that Descartes draws some kind of connection between conscientia and thought. It has been argued that he draws two different kinds of connections, thereby indicating two different conceptions of conscientia. (Radner 1988: 445–52 calls these C1 and C2 respectively.)5 There are passages where Descartes apparently equates ‘thought’ and ‘conscientia’ (e.g. AT VII 176, CSM II 124: ‘the common conception of thought or perception or consciousness’), and in particular there are passages where he uses ‘conscientia’ to refer to one type of thought, namely ‘seeming’ (e.g. he refers to our conscientia of walking (AT VII 353, CSM II 244), and clearly means our seeming to walk).6 There are other passages where, rather than equating thought and conscientia, he sees conscientia as, in a sense yet to be explicated, some kind of awareness of thought; it is this sense (Radner’s C2) on which I will be concentrating here. (Hereafter, I will simply refer to ‘conscientia’ without the number, but this must be understood.) These two passages in particular will be our primary focus; the first comes from the Second Replies as a definition preceding his setting-out of his arguments in more geometrico, the second from the Principles:

[1] I use this term [‘thought’] to include everything that is within us in such a way that we are immediately aware of it. (AT VII 160, CSM II 113)

[2] By the term ‘thought’, I understand everything which we are aware of as happening within us, in so far as we have awareness of it. (AT VIIIA 6, CSM I 195)
These are clearly meant as explanations of the term ‘thought’, not of the term ‘conscientia’; but we can use these passages to help us understand what he meant by ‘conscientia’. Passage [1] is followed by ‘Thus all the operations of the will, the intellect, the imagination and the senses are thoughts’, and passage [2] by a variant on this, both echoing the well-known passage in M2 which asserts that [3] A thing that thinks . . . [is a] thing that doubts, understands, affirms, denies, is willing, is unwilling, and also imagines and has sensory perceptions. (AT VII 28, CSM II 19)
In the remainder of this chapter, I will focus on those operations of the mind which have to do with the human being as union of mind and body. Thus the first substantial section focuses on conscientia in connection with the operations of the senses and imagination, the second in connection with the operations of the will
(which are particularly relevant for the links between conscientia and conscience).7 I take it that we also have conscientia of purely intellectual thoughts, but the issues are more complex and interesting in respect of these other kinds of cases.
1 Conscientia and the operations of the senses and imagination

Descartes seems to see no need to offer definitions or explications of ‘conscientia’. Evidently he assumed that his audience would understand this term without explanation (although, as we will see, this proved not to be entirely true).8 This carries a danger for us today: we are likely to begin with our own pre-theoretical understanding of ‘consciousness’ or ‘awareness’ and use these passages to work out what he means by ‘thought’ from that. Using this interpretive strategy, many commentators are led to the view that he is expanding the extension of ‘thought’ well beyond what we mean by ‘thought’ today, to include, for example, sense data, sensations, mental images, etc.9 The opposite interpretive strategy might have something to recommend it: perhaps we may arrive at an understanding of conscientia by beginning from the hypothesis that he meant by ‘thought’ more or less what we mean today. I suppose that, minimally, we think of thoughts as (i) intentional, that is, ‘about’ something, or having a ‘content’, and (ii) expressible in articulate propositions. Clearly enough, sense data, sensations, and mental images and so on are not thoughts, thus understood. We may be tempted to add to this characterization of ‘thought’ ‘(iii) items which have a truth-value’; this won’t do for Descartes’s conception, because operations of the will, being roughly equivalent to what we call intentions, have a different ‘direction of fit’ from what we ordinarily call thoughts. That Descartes classifies operations of the will as thoughts does indeed represent a difference from contemporary usage, but not one that is normally focused upon.
We may note that passages [1] and [2] and their sequelae don’t quite say that the operations of the senses and imagination are thoughts: they say that they are thoughts insofar as we have conscientia of them.10 This suggests the following interpretation: that to see something, to hear something, to fear something, to feel pain in such and such a place, to imagine something . . ., are, for Descartes, complex; to put it in an un-Cartesian idiom, the truth-conditions for the claim that x (for example) sees light (where x is a human being)11 include, but are not exhausted by, the occurrence of a thought. Thus we could analyse ‘x sees
The Bloomsbury Companion to the Philosophy of Consciousness
light’ something like this: x sees light if and only if (a) there is light which (b) is stimulating x’s eyes, optic nerve, etc. such that (c) a certain thought (perhaps naturally expressed as ‘I seem to see light’ (AT VII 29, CSM II 19)) is given rise to in x.12 (‘I seem to see light’ might perhaps be analysed further, along the following lines: ‘I (x) am entertaining the proposition that (a) and (b) hold and am powerfully inclined to affirm that proposition’.13) I presume that similar analyses could be offered for other operations of the senses.14 To a first approximation, the suggestion would then be that when we see light, we have conscientia of (and only of) condition (c). This would make good sense of the claim that the operations of the senses are thoughts insofar as we have conscientia of them. But how does this help us make sense of conscientia itself? Much of the discussion in the literature centres on the question of whether conscientia is to be understood as a ‘higher-order thought’ or as a ‘same-order thought’. Bourdin, the author of the Seventh Objections, claimed to understand Descartes in the first way: as holding that ‘when you think, you know and consider that you are thinking (and this is really what it is to be conscious and to have conscious awareness of some activity)’ (AT VII 533-4, CSM II 364); this interpretation Descartes describes as ‘deluded’ (AT VII 559, CSM II 382).
The position ascribed to Descartes by Bourdin is untenable, as Descartes clearly recognizes: if conscientia is a HOT and if we have conscientia of every thought (including the higher-order ones), then we will end up in an infinite regress.15 This observation might lead us to the following view: that conscientia is to be understood simply as the power of the soul to reflect on its own operations.16 This has the obvious advantage that it does not construe conscientia as a higher-order thought; reflection – the exercise of conscientia construed as a power or disposition – is a HOT, but one can think without reflecting, so there is no regress. This proposal however cannot be quite right, for reasons that come out in Descartes’s exchange with Arnauld in the Fourth Objections and Replies: Arnauld asserts that ‘the mind of an infant in its mother’s womb has the power of thought, but is not aware of it’ (AT VII 214, CSM II 150), to which Descartes replies that ‘we cannot have any thought of which we are not aware at the very moment when it is in us’, going on to suggest that the infant too is ‘immediately aware of its thoughts’ but ‘does not remember them afterwards’ (AT VII 246, CSM II 171-2). This does not sit easily with the above proposal that conscientia is nothing but a power to reflect, although there is, I will suggest, still an essential link between conscientia and reflection.
Another possibility is that Descartes sees conscientia as a same-order thought rather than a HOT.17 Direct support for this might come from Descartes himself: ‘The initial thought by means of which we become aware of something does not differ from the second thought by means of which we become aware that we were aware of it’ (AT VII 559, CSM II 382).18 This is sometimes understood as saying that every thought has two objects, one of which is whatever the thought is about, and the other of which is the thought itself. Sometimes the first object is called the ‘primary object’, the second the ‘secondary object’.19 I confess to finding this difficult to make sense of, and want to suggest another possibility (which may, for all I know, be what the two-objects view is attempting to get at). On this view, conscientia is neither a higher-order nor a same-order thought: it is not a thought at all. Rather, it is a kind of ‘background’ or ‘implicit’ awareness that we – necessarily – have of our thoughts, which is closely tied to the power of reflection, in the following sense: to make that background awareness explicit, to ‘foreground’ the thought itself (rather than what the thought is about), is to reflect. The resultant reflection is a thought, and indeed a HOT. (We might need to add that only as we become adults are we able to exercise the power of reflection, bearing in mind Descartes’s distinction between ‘direct’ and ‘reflective’ thoughts, the thoughts of infants being direct (AT VII 220-1, CSMK III 357).)20 What lies behind this suggestion is Sartre’s distinction between positional and non-positional conscientia (1986 (1943): xxviii–xxx). We can make some intuitive sense of this distinction with the following analogy.
If, as the phenomenologists (following the Gestalt psychologists) claim, every perception is structured into figure and background,21 to be perceptually aware of the figure is (inter alia) to be aware of the background, but the awareness of the figure is explicit, whereas the awareness of the background is not; it can be made explicit by a shift of perceptual attention. (It would be strange to say that every perception has two objects: the figure and the background. The figure is the object of the perceptual act, but the background is an inextricable part of the whole perceptual experience and can become the object of another perceptual act via a shift of attention.) In like manner, for Descartes, to think one sees light (to seem to see light) is (inter alia) to be conscius of thinking that one sees light; one’s conscientia of thinking one sees light is not a thought, but reflection, which is simply the making-explicit of the conscientia, is one. We might, following Sartre (1986 (1943): xxx), prefer to write ‘conscientia of’ as ‘conscientia (of)’, to remind ourselves of the point that conscientia ‘of’ this or that thought does not have the thought as its object.22
Thus conscientia can be understood as the ‘background awareness’ (of), in the case under discussion, what we earlier called condition (c), that is, the thought (e.g. that one sees light) which forms an essential part of the operations of the senses and imagination in human beings. Finally, I want to suggest that reflection, which is the making-explicit of conscientia, can yield knowledge of the operations of the senses and imagination. Thus the claim is not that conscientia is a kind of knowledge, but that it is internally related to (at least) this particular class of knowledge. Let me begin with an easy case of this, before introducing complications. We know that for Descartes ‘I seem to see light’ is immune to hyperbolic doubt. This is the burden of the paragraph which follows our passage [3]: ‘Are not all these things [“I seem to see, to hear, and to be warmed”] just as true as the fact that I exist, even if I am asleep all the time, and even if he who created me is doing all he can to deceive me?’ (AT VII 28-9, CSM II 19, emphasis original). Thus ‘I seem to see light’ is ‘indubitable’, as long as we understand this term as expressing, not a psychological incapacity, but the idea that it cannot be called into doubt, that is, that no reasons can be given to doubt it.23 I take it that this amounts to knowledge.24 Thus if I make that (of) which I have conscientia (when I see light) explicit (through reflection), the resultant thought is indubitable and thus amounts to knowledge. Now for the complications: there are obstacles to the reflection just described. We know already that the operations of the senses and imagination have complex truth-conditions.
On this basis, we might, following Descartes, see propositions such as ‘I see light’ as, in a sense, ambiguous: there is a wide sense, according to which ‘I see light’ expresses conditions (a)–(c), and a restricted sense (AT VII 29, CSM II 19), in which it expresses only condition (c), that is, ‘I seem to see light’. We normally (indeed naturally, i.e., because of our nature as union of mind and body) fail to distinguish (make a distinction between) the two meanings. (This is a way of understanding Descartes’s claim that the operations of the senses and imagination are ‘confused thoughts’ (e.g. AT VII 81, CSM II 56), bearing in mind that ‘distinct’ is the Cartesian opposite of ‘confused’.) 25 We might on this basis say that when our conscientia (of) this confused thought is made explicit (i.e. when we reflect), we (normally and naturally) engage in what we may call ‘impure reflection’.26 It is impure precisely because it does not distinguish between the wide and the restricted senses of ‘I see light’. Much of the Meditations is devoted to the sort of intellectual work – we may call this ‘purifying reflection’27 – required to unconfuse such confused thoughts. Only when we have done so are we in a position to engage in what we may
call ‘pure reflection’:28 our reflection then makes explicit our conscientia of a thought (‘I see light’ in the restricted sense, i.e. ‘I seem to see light’) which is now carefully distinguished from the thought with which it was formerly confused (‘I see light’ in the wide sense). And it is this thought, not ‘I see light’ in the wide sense, which is immune to hyperbolic doubt. If we fail to distinguish these two senses of ‘I see light’, we are liable to fall into error: we, through habit or in our eagerness to find truth, may take it that ‘I see light’ in the wide sense is as indubitable as ‘I see light’ in the narrow sense: that it is as certain that there is light, and that my eyes are being stimulated by light, as it is that I seem to see light. Thus pure reflection yields (indubitable) knowledge of the operations of the senses and imagination, insofar as these operations are thoughts. Conscientia is not itself a form of knowledge;29 nonetheless, it is internally related to something (namely pure reflection) which does yield knowledge of this limited class.30
2 Conscientia and the operations of the will

I will be suggesting that the picture of conscientia painted in the previous section can, when applied to the operations of the will (as opposed to those of the senses and imagination) and with a few modifications, be understood as closely related to conscience. I will suggest that conscience may itself be understood as a type of knowledge, in particular knowledge of one’s own actions; once again, the claim is not that conscientia is such knowledge, but that there is an internal relation between the two. The operations of the will, as they figure in the Meditations, seem at first sight to be limited to affirming and denying.31 Passage [3], however, says of a thinking thing not just that it affirms and denies but also that it ‘is willing’ and ‘is unwilling’ (AT VII 28, CSM II 19). Perhaps these terms might take us closer to what we ordinarily think of as actions. (Affirming and denying may, of course, be called actions; but actions as we ordinarily think of them involve body movements, and actions thus understood – as most parallel to operations of the senses and imagination, insofar as they can only occur in a union of mind and body – will be the focus here.) I take it that an action is, by definition, intentional,32 and that possibly these terms ‘willing’ and ‘unwilling’ are pointing us in the direction of intentionality in this sense. Descartes says very little about actions; one passage, however, might get us started. Consider something Descartes said in response to an objection by
Gassendi: ‘I may not . . . make the inference “I am walking, therefore I exist”, except insofar as the awareness of walking is a thought’ (AT VII 353, CSM II 244). Note that ‘awareness [conscientia] of walking’ here means conscientia in the sense earlier called C1, not C2: it means something like ‘seeming to walk’. This might suggest that we could treat ‘x is walking’ in parallel fashion to our earlier treatment of ‘x sees light’: perhaps we could say that the truth-conditions for the claim that x is walking (where x is a human being)33 are complex, like those of ‘x sees light’. We might be tempted by something more or less like the following analysis: ‘x walks’ is true if and only if (a) there is a surface against which x’s feet push, (b) such pushing is caused by movements of various nerves, muscles and limbs, such that x’s body is propelled forward, and such that (c) a certain thought is given rise to (perhaps naturally expressed as ‘I seem to be walking’, cashed out in parallel fashion to ‘I seem to see light’). This, however, doesn’t capture the sense in which walking is an action, that is, intentional: it treats it as if it is some occurrence which is simply perceived. A better analysis might make the thought in condition (c), not ‘I seem to be walking’, but something like ‘I am trying to walk’, possibly roughly cashable-out as ‘I (x) intend that the other truth-conditions for “x is walking” hold’ (where this gives rise to those other conditions, rather than being given rise to by them, so as to capture the opposite direction of fit).34 This would begin to make sense of the idea that the operations of the will – including, strange as it may sound, actions such as walking – are thoughts insofar as we have conscientia of them. Let us take our previous discussion of conscientia and reflection for granted here.
Conscientia (of) the operations of the will is neither a HOT nor a same-order thought; rather, it names the kind of ‘background awareness’ we have of our own thoughts (in this case, our intentions). To try to walk is (inter alia) to be conscius (of) one’s trying to walk; one’s conscientia (of) so trying is not a thought, but can give rise to one through reflection. Finally, reflection, the making-explicit of conscientia, can – with an important caveat – yield indubitable knowledge of the operations of the will. The caveat is, once again, that the reflection in question be ‘pure’. It is through our exploration of this that connections between conscientia and conscience will begin to look more plausible. Although this is not a move which Descartes makes, we might argue that ‘I am walking’ is in a certain sense ambiguous, just as ‘I see light’ is. In the wide sense, ‘I am walking’ expresses the parallel conditions (a)–(c); in the restricted sense, it expresses only condition (c), that is, ‘I am trying to walk’. Again, Descartes never explicitly claims that operations of the will are confused
thoughts, but we can suggest on his behalf that they are, in that we normally (and naturally) fail to distinguish the two meanings. Hence when we reflect on the operations of the will, we normally engage in impure reflection; the sort of purifying reflection sketched here enables us to unconfuse these thoughts so that we can engage in pure reflection. As with the operations of the senses and imagination, we can see an epistemological case for making the distinction between a wide and a narrow sense of ‘I am walking’. If the demon is deceiving me that there is an external world and that I have a body, then I am not walking in the wide sense, but it will still be the case that I am walking in the narrow sense, that is, trying to walk. What, though, has all this to do with conscience? The term ‘conscience’ has itself been understood in multiple ways.35 For the purposes of making sense of Descartes’s use of ‘conscientia’, we might link it to a tradition (visible, e.g. in Aquinas) which sees it as consisting in two kinds of knowledge: first, knowledge that one has performed or is performing this or that action, and secondly, knowledge of the moral character of this action.36 What we have so far is that pure reflection, the making-explicit of conscientia that we are acting ‘in the narrow sense’, that is, trying to act, yields (indubitable) knowledge of one’s actions, insofar as one can have such knowledge. Although what I have said so far is somewhat speculative, it is a fairly natural extension of the account given in the first section of conscientia (of) the operations of the senses and imagination. But can it be connected to knowledge of the moral character of one’s actions? Now, ‘x is walking’, all by itself, doesn’t look as though it has any particular moral character; on the other hand, we seldom ‘just walk’. 
That is, the intention is seldom just ‘to walk’, but, for example, to walk to the shop to get some Campari, or to walk with a friend to enjoy the companionship and countryside, or to walk away from the scene of an accident. If ‘walking away from the scene of an accident’ is, to use the jargon, the description under which my action is intentional, that is, what I am trying to do, I may fail to reflect properly on this trying, and this in one of several ways that go beyond the natural confusion described earlier. In the first place, I may fail to treat the action in question as an action at all: in effect, my reflection only gets me to ‘I (x) am entertaining the proposition that the other truth-conditions for “x is walking” hold and am powerfully inclined to affirm that proposition’. (One treats one’s action as if it were a mere happening.) In the second place, I may fail to specify fully what it is that I intended to do: in effect, my purifying reflection only gets me to ‘I (x) intend that the other truth-conditions for “x is walking” hold and am powerfully inclined to affirm that they do hold’. (One treats one’s action as if it were a morally neutral action.) Or I focus
on some description of the action other than that under which it was intentional (I am walking towards the shop), and so on. These kinds of reflective failures are commonplace, and one wants to say that they are not mere failures but motivated, and indeed motivated by the conscientia (of) what I am trying to do. This would be intelligible, although undoubtedly a great deal more would need to be said, if we can suppose that in being conscius (of) trying to walk away from the scene of an accident I am conscius (of) the moral reprehensibility of what I am trying to do, and it is this that motivates me to reflect so impurely. It remains the case that were I to reflect purely, I would achieve indubitable knowledge of my action, scilicet of what I am trying to do, and in so doing I would achieve knowledge of the action’s moral character. If this link could be made, then knowledge of our own actions and knowledge of the moral character of one’s actions are in fact not that far apart.37 These last remarks are both speculative and controversial; I hope at least to have made it plausible that there is an internal relation between conscientia of the operations of the will and knowledge of our own actions, even if someone wants to resist the further argument for an internal relation between conscientia of the operations of the will and knowledge of the moral character of our own actions.
3 Some concluding remarks

I have tried to offer an interpretation of Descartes’s conception of conscientia, or to be precise an interpretation of one of his conceptions of conscientia, the one indicated in passages [1] and [2] quoted in the introduction. I have deliberately focused on conscientia of the operations of the mind which arise from the mind–body union. Hence I looked, first, at the operations of the senses and imagination (although with a focus on the senses), and, secondly, at those operations of the will which could be understood as intentions to perform a body-involving action. I argued, first, for a way to make sense of Descartes’s claim that the operations of the senses, imagination and the will are thoughts insofar as we have conscientia of them. I suggested that the truth-conditions of these operations (in human beings) were, for Descartes, complex, involving things going on in the world, things going on in the body and things going on in the mind. The latter, in the case of operations of the senses, could be understood as ‘seemings’ (which I attempted to cash out in a way that made it clear that to seem to see was to have
a thought); in the case of operations of the will, it could be understood as ‘tryings’ (which I again attempted to cash out in a way that made it clear that to try was to have a thought, as long as we understand tryings as thoughts with the opposite direction of fit from seemings). It is this – the seeming or the trying – of which we have conscientia. I argued, secondly, that conscientia is not to be understood either as a higher-order or as a same-order thought (with, perhaps, a primary and a secondary object, the secondary object being the thought itself). Rather, it can be understood as a ‘background’ or ‘implicit’ awareness (of) the seeming or the trying just identified. Finally, the connection between conscientia and knowledge (including knowledge of one’s own actions insofar as these involve tryings) went, I argued, via reflection. Conscientia is not itself a form of knowledge. Rather, reflection is a making-explicit of conscientia; reflection yields knowledge when, and only when, it is ‘pure’, and it is pure (in relation to the operations of the senses, imagination and will) when, and only when, the thinker has gone through the sort of ‘purifying reflection’ that unconfuses the thoughts which are normally and naturally confused. I suggested, very speculatively, that in the case of the operations of the will, there were obstacles to pure reflection that went beyond natural confusion, in order to try to make somewhat plausible a more robust link between conscientia and conscience. This is a complicated picture; some of the details may be contestable. However, there are two conclusions which I think may be drawn. First, a full appreciation of what Descartes understands by conscientia requires some comprehension of his conception of a human being as well as of his philosophical outlook more widely. 
We need (at least) to understand what he means by ‘thought’, to recognize that the operations of the senses and imagination as well as (I have suggested) the operations of the will depend on the union of mind and body, to recognize that these operations normally and naturally result in ‘confused thoughts’, and to appreciate that his main aim in the Meditations is to give readers practice in unconfusing their confused thoughts. And secondly, his conception of conscientia is a great distance from the mythological picture painted by Ryle (according to whom Descartes holds that ‘If I think, hope, remember, will, regret, hear a noise, or feel a pain, I must, ipso facto, know that I do so’, 1949: 158), a picture which continues to exert an influence within contemporary philosophy of mind (e.g. Dennett 1991 with his notion of the ‘Cartesian theater’), and which possibly even has some residual traces in some serious Descartes scholars.38
Notes

1 It is noteworthy that the recently published, and highly authoritative, Cambridge Descartes Lexicon (ed. Nolan 2016), with over 300 entries, does not contain an entry on consciousness; indeed, the only references to consciousness in the entire lexicon are in Alanen’s article ‘Thought’.
2 Hennig 2007 also uses ‘conscientia’ for roughly this purpose.
3 See Baker and Morris (1996: 100ff.) and the more recent and much more thorough Hennig (2007). Hennig also sees a connection between conscientia and conscience, though rather different from the one suggested here. I suspect that Descartes’s conception of conscientia as sketched here might also shed light on Locke’s use of the terms ‘conscious’ and ‘consciousness’ (principally in his discussion of innate ideas and principles and in his discussion of personal identity), although I cannot pursue this here. According to Fox (1982: 9), ‘outside of three minor uses of the word itself . . . the earliest written use of the word “consciousness” in the English language is by John Locke’.
4 In particular, I will suggest that Descartes’s term ‘conscientia’ (in the sense focused on here) may be understood as something like Sartre’s ‘non-positional self-consciousness’. It is confined to operations of the mind (AT VII 232, CSM II 162), and possibly its capacities or powers (e.g. AT VII 49, CSM II 34, although AT VII 232, CSM II 162 seems to deny that).
5 Simmons (2012: 5) holds that passages supporting Radner’s C1 are taken out of context in such readings and do not represent Descartes’s considered view; I find Radner’s view more helpful, but for my purposes it doesn’t matter which view we take.
6 Aquila (1988) suggests that Descartes was revolutionary in introducing the idea of having conscientia of objects in the world; this may be true, but note that this applies only to what Radner calls C1.
7 I am here understanding ‘operations of the will’ as specifically linked to (body-involving) actions. Although Descartes says little about actions, there is a case to be made for the claim that Descartes sees such operations of the will as, like the operations of the senses and imagination, arising from the mind–body union. He suggests that the faculties of imagination and sensory perception are modes of thinking, but ones which only a mind united with a body has (AT VII 78, CSM II 54); it seems plausible to say that, similarly, body-involving actions are modes of extension which only a body united with a mind has.
8 Cf. Jorgensen (2014: 4).
9 Baker and Morris (1993 and 1996: 13ff. and 30ff.) label this widespread interpretation the ‘expansion thesis’ and argue against it.
10 Likewise the operations of the will, but this is for the next section.
11 In non-human animals, which also see, hear, feel pain, feel fear, imagine, and so on, but not in the way that human beings do (cf., e.g. AT V 278, CSMK III 366), the truth-conditions for their seeing something, hearing something, fearing something, feeling pain in such and such a place, imagining something . . . would arguably consist of all of these except the thought. In what follows I will take the qualification ‘where x is a human being’ for granted.
12 I am using the slightly awkward phrase ‘give rise to’ to avoid ‘cause’, ‘occasion’, or any other more specific term. The issues here are well beyond the scope of the present chapter. However, see for example Hoffman 2010 for a reminder that Descartes’s conception of causation is not Humean, and see for example Skirry 2005 (esp. 109–11 and 167–8) for an argument that a proper conception of the mind–body union obviates the need to speak of efficient causation.
13 ‘Powerful inclination’: cf. ‘great propensity to believe’ (AT VII 80, CSM II 55). Note that this is a ‘non-phenomenological’ reading of ‘seems’, using ‘phenomenological’ in the way it is commonly understood by Anglo-American commentators on Descartes. Seeming on this reading involves both the intellect (grasping the proposition) and the will (being powerfully inclined to affirm the proposition).
14 Likewise the imagination, but space precludes treating that in detail.
15 Simmons (2012: 7 n. 23) reviews some of the history of this charge, as do Coventry and Kriegel (2008: esp. sec. 4) in connection with a parallel issue in Locke interpretation, as well as indicating other objections to the ‘higher-order’ interpretation. Sartre too raises this objection (1986 (1943): xxviii); in fact Descartes’s conscientia, I think, ends up far closer to Sartre’s ‘pre-reflective cogito’ than Sartre perhaps realized, cf. Wider 1997: 8ff.
16 Cf. Baker and Morris (1996: 107).
17 The terminology is adapted from Coventry and Kriegel’s discussion of Locke (2008: 224ff.; they speak of ‘higher-order’ and ‘same-order perception’); we can see this interpretation of Descartes in, for example, Jorgensen (2014: 6). Aquila (1988: 544) speaks of ‘a single, bi-directional state of consciousness’, which at first sight sounds like the same view; but this seems to me not to make sense of Descartes’s claim that conscientia in the relevant sense is, by definition, conscientia of thoughts.
18 Here the first and third occurrences of ‘aware’ indicate C1, the second indicates C2.
19 See Radner (1988: 446). This terminology comes from Brentano: every conscious act ‘has a double object, a primary object and a secondary object’, where the secondary object is the conscious act itself (quoted in Coventry and Kriegel 2008: 226).
20 Simmons (2012: 15ff.) makes a broadly similar distinction between ‘brute consciousness’ and ‘reflective consciousness’; see also Radner (1988).
21 Merleau-Ponty (2002: 4).
22 Hennig (2007: 464–66) objects to interpreting conscientia as ‘awareness’; the notion of awareness invoked here is a semi-technical one which may or may not entirely concord with our ordinary notion (whatever that is, exactly), but seems to me not to succumb to Hennig’s objections, although I cannot argue for this here.
23 It is sometimes suggested that knowledge of such thoughts is ‘incorrigible’ (e.g. Jorgensen 2014: 4); Broughton (2002: 137), rightly, says that they are indubitable. She however understands ‘indubitability’ differently from either the ‘psychological incapacity’ tradition or in terms of ruling out hyperbolic doubt. Rather, she argues that they are conditions for the possibility of the project of methodological doubt.
24 That is, if all possible reasons for doubt have been eliminated, it would be frivolous to suggest that it might for all that not be true (cf. AT VII 144-5, CSM II 103). Some will object that this still does not amount to knowledge; there is no need for me to treat these issues here, as they are general issues about Descartes’s conception of knowledge.
25 See Morris (1995); see also Cunning (2010 passim), who helpfully diagnoses the multiple sources of confusion. Simmons (2012: 17) makes a broadly parallel point, but understands ‘confusion’ very differently from me or Cunning. See also Hennig (2007: 460).
26 Sartre (1986): 155ff.
27 Sartre (1986): 581.
28 Sartre (1986): 155ff.
29 Cf. Radner (1988: 447).
30 Cf. Simmons (2012: 16).
31 This is the role which they play in M4; and of course affirmations and denials can be held to come under the scope of conscience as well, although that is not where I want to put the main emphasis here.
32 This view is shared by many philosophers, including many Anglo-American philosophers of mind, and is also explicit in Sartre (1986: 433).
33 In non-human animals, which also walk, etc., the truth-conditions for their walking would also arguably consist of all of these except the thought.
34 This is not exactly our ordinary use of ‘trying’ (any more than the ‘seeming’ which figured in the analysis of operations of the senses was our ordinary use of ‘seeming’, as Austin (1962) famously demonstrated). We might compare it to Hornsby’s conception (most recently, 2010); she argues for the ‘ubiquity’ of trying, that is, every action involves a trying. She goes further in a way that need not concern us here (she argues that every action is a trying, although not every trying is an action), and she has nothing that quite corresponds to the ‘powerful inclination to affirm’ expressed in the present conception.
35 Guibilini (2016) presents a ‘conceptual map’ (2016: 14) of some of these uses. See, for example, Hennig (2007: 474ff.) for a more historical account.
36 See Guibilini (2016: 4–6), Hennig (2007: 476).
The Hard Problem of Understanding Descartes on Consciousness
37 A fuller discussion of this claim would bring in Descartes’s conception of freedom of the will as well as his doctrine of the passions. These are some of the directions which Davenport (2006) explores. 38 See Baker and Morris (1996: 18–20) for an outline of some of the main features all too often ascribed to so-called Cartesian introspection.
4 References

Alanen, L. (2016). 'Thought', in Nolan, ed., The Cambridge Descartes Lexicon, Cambridge and NY: Cambridge University Press, 712–17.
Aquila, R. E. (1988). 'The Cartesian and a certain "poetic" notion of consciousness', Journal of the History of Ideas, 49 (4), 542–62.
Austin, J. L. (1962). Sense and Sensibilia, Oxford: Clarendon Press.
Baker, G. P., and K. J. Morris (1993). 'Descartes unLocked', British Journal for the History of Philosophy, 1 (1), 5–28.
Baker, G. P., and K. J. Morris (1996). Descartes' Dualism, London: Routledge.
Broughton, J. (2002). Descartes's Method of Doubt, Princeton and Oxford: Princeton University Press.
Coventry, A., and U. Kriegel (2008). 'Locke on consciousness', History of Philosophy Quarterly, 25 (3), 221–42.
Cunning, D. (2010). Argument and Persuasion in Descartes' Meditations, Oxford and New York: Oxford University Press.
Davenport, A. A. (2006). Descartes's Theory of Action, Leiden: Brill Academic Publishers.
Dennett, D. C. (1991). Consciousness Explained, Boston: Little, Brown & Co.
Fox, C. (1982). 'Locke and the Scriblerians: The discussion of identity in the early eighteenth century', Eighteenth-Century Studies, 16 (1), 1–25.
Giubilini, A. (2016). 'Conscience', Stanford Encyclopedia of Philosophy (online).
Hennig, B. (2007). 'Cartesian Conscientia', British Journal for the History of Philosophy, 15 (3), 455–84.
Hoffman, P. (2010). 'Descartes', in T. O'Connor and C. Sandis, eds., A Companion to the Philosophy of Action, Oxford: Wiley-Blackwell, 481–89.
Hornsby, J. (2010). 'Trying to act', in T. O'Connor and C. Sandis, eds., A Companion to the Philosophy of Action, Oxford: Wiley-Blackwell, 18–25.
Jorgensen, L. M. (2014). 'Seventeenth-century theories of consciousness', Stanford Encyclopedia of Philosophy (online).
Merleau-Ponty, M. (2002). Phenomenology of Perception, translated by C. Smith, London and NY: Routledge.
Morris, K. (1995). 'Intermingling and confusion', International Journal of Philosophical Studies, 3 (2), 290–306.
The Bloomsbury Companion to the Philosophy of Consciousness
Nolan, L., ed. (2016). The Cambridge Descartes Lexicon, Cambridge and NY: Cambridge University Press.
Radner, D. (1988). 'Thought and consciousness in Descartes', Journal of the History of Philosophy, 26, 439–52.
Ryle, G. (1949). The Concept of Mind, London: Hutchinson & Co.
Sartre, J.-P. (1986 (1943)). Being and Nothingness, translated by H. E. Barnes, London: Routledge.
Simmons, A. (2012). 'Cartesian consciousness reconsidered', Philosophers' Imprint, 12 (2), 1–21.
Skirry, J. (2005). Descartes and the Metaphysics of Human Nature, London and New York: Thoemmes-Continuum.
Wider, K. V. (1997). The Bodily Nature of Consciousness, Ithaca and London: Cornell University Press.
3
Brentano’s Aristotelian Concept of Consciousness Liliana Albertazzi
1 Introduction

Brentano was both a classic proponent and a subverter of the Western philosophical tradition. The theoretical and implicitly experimental potential of his Psychologies (Brentano 1971, 1981a, 1995a, b) has only been partially explored in the secondary literature and commentary on his work. Drawing on Aristotelian origins, in fact, Brentano's ideas gave rise to the outstanding tradition of experimental inquiry which culminated in Gestalt Psychology (Benussi 1913, 1914; Ehrenfels 1890; Koffka 1935; Köhler 1929, 1969; Meinong 1899, 1910; Stumpf 1883; Wertheimer 1923. See Albertazzi 2013a, b; Albertazzi, Jacquette and Poli 2001; Ihde 1986; Wagemans 2015). The principles of Gestalt organization have never been disputed, and they have received further development in the neurosciences (Hess, Beaudot and Mullen 2001; Kovács, Fehér and Julesz 1998; Kovács and Julesz 1993; Shapiro and Todorovic 2014; Spillmann and Ehrenstein 2004; Wagemans et al. 2012), although they have been considered mainly in terms of low-level vision. This implies the use of methodologies and models which would ensure their veridical explanation. One may reasonably ask whether this is what Brentano really had in mind. Brentano has often been reductively and idiosyncratically classified as a forerunner of Husserlian phenomenology (Spiegelberg 1982), a precursor of analytic philosophy (Smith 1994), or more simply as 'the theoretician of intentionality', but this classification almost always occurs without knowledge of the many dimensions of an 'intentional reference' (intentionale Beziehung). Brentano's ideas on the phenomena of consciousness (Bewusstseinsphänomene), which formed the core of his thought, were never speculative. With his profound knowledge of the nascent scientific psychology of his time, Brentano
was one of the first to claim the legitimacy of phenomenological analysis in the study of consciousness (Brentano 1995a). In fact, he advanced a pioneering and architectural theory of consciousness derived from his studies of Aristotle (Aristotle 1986; Brentano 1977), based on the idea that we perceive and are conscious of qualitative forms, not of stimuli or (neuro)physiological correlates. In so doing, Brentano claimed the independence of a science of psychic phenomena per se, and he specified the difference between the subject, method and epistemic value of psychological science and psychophysics, physiology (Brentano 1995a, 1971, 1981b) and Newtonian physics. His distinctions in this regard are still valid today, notwithstanding the enormous computational and quantitative development of those sciences in recent decades of experimental research. Analysis, however brief, of the components of what Brentano termed a whole of consciousness may suffice to outline the theory and requirements that still today entail the kind of systematic and experimental analysis which he advocated. To this end, I shall show that his ideas are not comparable to contemporary theories of mind, and that, on the contrary, they may constitute a veritable turning point in the scientific analysis of consciousness – as, more generally, they are beginning to be recognized in studies on perception (Albertazzi 2013; Wagemans 2015). There is a large body of literature on Brentano, to which I myself have contributed (Albertazzi 2005, 2007, 2015a), so that I need only refer to the extensive bibliography available.
2 The externalism/internalism divide

The current debate on a science of consciousness dates back to the beginning of the 1990s, but there does not yet exist an overall theoretical framework on which to anchor the idea of consciousness itself. It also seems that current empirical research is unable to solve the philosophical problem because of the lack of proper observables to be verified, that is, of a definition of and agreement on what a phenomenon of consciousness is supposed to be. More or less, what is verified in laboratories and/or through computational models is a psychophysical response/judgement (a behavioural output) to a physical stimulus, a neural (physical) correlate of the stimulus, or the synchronization of the activity of different neurons in the cortex (Milner and Goodale 1995; Singer 1999; Pöppel and Logothetis 1986). Within the framework of cognitive science, the notion of internalism (to which Brentano is sometimes verbally linked) grew out of the science of artificial
Brentano’s Aristotelian Concept of Consciousness
29
intelligence; it represents the mind as an internal mechanism (like a computer) for the algorithmic elaboration, transformation and representation of stimuli (metrical cues) originating in a transcendent reality (usually identified with classical mechanics). The development of neuroscience essentially changed the reference from one kind of computer to another (the brain), one that supposedly acts primarily according to inferential-probabilistic principles rather than logical-deductive ones. Aside from the specific differences in research and instruments of investigation, in both cases the internalism of the mind refers to an algorithmic mechanism and is analysed primarily according to computational methods. However, doubts about the computer-like nature of the brain have been raised because of statistical variations in movement and brain cell death (Edelman 2004), and because mental functions do not necessarily have to be algorithmic (Penrose 1989). In recent years, proposals in favour of an embodied or situated consciousness have entered the debate. Not all of them embrace the same idea of reality or even the same idea of representation, which ranges from internalist hypotheses (the enactive conception of perception as autopoiesis in Maturana and Varela (1980)) to externalist ones (O'Regan and Noë 2001, 2004) that usually end in the idea of an extended mind (Clark 1992). Most interestingly, nearly all the versions of embodied cognition, whether internalist or externalist, reduce the mind to the brain and/or to a psychophysical body. They consequently share a fundamental reductionist element that uses classical physics or biology as the primary ontological reference points for the explanation of conscious phenomena. These theories address the issue of mind and consciousness in quantitative terms of stimuli and the processing of information contained within the stimuli according to the classic mathematical conception of Shannon and Weaver (1949/1998).
Embodied and enactive approaches can be powerful methodological tools in behavioural and neurophysiological investigations; but their ultimate reliance on sensorimotor contingencies for the construction of mental content inevitably gives them a naïve realist flavour (see Vishwanath 2005; more on this topic in Albertazzi, van Tonder and Vishwanath 2010, § 4). Other proposals are instead still couched in terms of inferences or symbolic-conceptual interpretations of stimuli (Gregory 1986, 2009; Mack and Rock 1998; Rock 1983), or of social communication activities such as language (Dennett 1991; Searle 1980, 1992), thereby avoiding both the fact and the explanation of the evident and direct perception of meaningful appearances in daily life, appearances that are in principle language-independent. In all theories, past experience plays a primary role in the recognition of objects, either as an unconscious process or in terms of instances of the corresponding
concept based on characterizations or interpretations of a linguistic/cognitive nature, be they mental images (Jackendoff 1987, 1992) or pictures in the sense of computational vision theory (Marr 1982), presented as digital images. What remains unexplained is the nature of subjective conscious phenomena per se. Currently, theories of subjective experience can be grouped into three categories: (i) some explain qualitative experiences directly in terms of physical or primary properties (e.g. colorimetry explains perception of the colour 'red' in terms of radiation (Brainard 1995)); (ii) others do so in terms of a psychophysical (Boynton 1979) or physiological process (e.g. neurophysiology explains perception of the colour 'red' in terms of neuronal correlates); (iii) yet others do so in terms of the phenomena of consciousness (the subjective experience of the colour 'red') which arise on the material basis of neural correlates. The first two explanations are somewhat self-consistent in that their frame of reference is physics and/or neurophysiology. They consequently reduce the phenomena of consciousness to physical phenomena. The third explanation is more problematic because it is usually unable to explain the presumed categorial difference among the physical stimulus, its chemical–electrical (and therefore still physical) processing in the cortical pathways, and the qualitative nature claimed for the phenomena of consciousness (the 'red' subjectively experienced in seeing). In this case, too, it is assumed that perception begins with retrieval of information of a physical type (based on inverse optics; for example, Pizlo 2001), and that it consists of biological (physical) changes occurring in the organism that can be modelled using Bayesian and standard regulation methods.
Conscious perceiving thus falls within one of the two previous categories, even when its difference from them is explained by appealing to syntactic functions, the 'reading' of neurophysiological data by a not further defined 'mind' (Eccles 1990). In fact, there is nothing semantic, qualitative or conscious about algorithmic syntax. It therefore seems that contemporary theories of consciousness are uniformly at risk when they claim to 'explain' the meaning of our conscious experiences and their nature. One wonders what error has caused this deadlock and whether Brentano's neo-Aristotelian theory might be a viable alternative to them.
3 The Aristotelian legacy

As is well known, in De Anima Aristotle defines the psyche as the inner principle of an animated substance, by which he means a living and/or biological organism. Psyche means a life, the life of a living organism (the indefinite article
Brentano’s Aristotelian Concept of Consciousness
31
not being present in Greek). In the Aristotelian framework, psychology is part of the science of nature, as is physics. One should not forget that Avicenna listed the Aristotelian book of Psychology as the sixth book of the things of nature, and that Aristotle was an unbiased realist who never questioned the existence of an external world. Nor should one forget that, most interestingly, Aristotelian physics is eminently qualitative (Aristotle 1980). One might speak of the Aristotelian doctrine in terms of perceptual realism; or better, describe Aristotle as a realist in regard to sensible qualities. The issue is whether in his framework, too, physiological processes have to be understood as the material basis for conscious perceiving, linearly (mathematically) derived from physical stimuli external to the perceiver and defined in modern terms as primary qualities, that is, essentially metric features. In the Aristotelian view, however, things themselves have sensible qualities. Things are not merely primary qualities as we conceive them in light of the Galilean definition, as objective properties definable in terms of metrical cues (Galileo 1623/1957). Conversely, in Aristotle it is the secondary qualities, called sensibles (aisthêta) (colours, tastes, smells, sounds), that qualify the objects which we perceive in terms of smells, colours, sounds, tactile impressions, etc. (see also Hume 1975). The assumption that primary qualities understood in the Galilean sense as metric cues are responsible for a general infrastructure of reality is not to be found either in Aristotelian physics or in Aristotelian psychology; nor, moreover, does it appear in Brentano's empirical and descriptive psychology. According to Aristotle, access to the external world given by our senses occurs in terms of couples of contraries, perceivable attributes such as warm and cold, hard and soft, high and low, rough and smooth, and the like.
The sensible matter of the things that we perceive (sensibles), in fact, is due to the four elements (earth, water, air and fire) characterized by couples of contraries forming four ontological combinations: earth is dry and cold, water is cold and moist, air is moist and warm, fire is warm and dry. Every sensible is organized according to couples of contraries: colours themselves, for example, can be shown to be arranged in terms of hot and dry and cold and moist (Albertazzi, Koenderink and van Doorn 2015). Elements themselves undergo transformations and combinations: for example, air and water can transform into each other, which accounts for the fact that, for example, metal can be liquified (aqueous) and still be solid (earthy) at standard temperatures. Aristotelian elements too are not to be understood in terms of physical stuff. They are ontological types or principles to whose nature all the other material substances approximate (as ‘more or less’, or in ‘pure or mixed
forms’), producing consciousness states similar to the qualities of the things in nature. The continuous change and transformation of the matter of things into different kinds (elements) can occur according to quality (as from blue to green, from sweet to bitter, from treacly to acid) or intensity (from more to less, from great to small). Here vocabulary is at issue because Aristotle is not speaking in terms of Platonic ideas but of qualitative perceivable attributes or qualities of matter shared by the different modalities and associated by an inner sense. The cross-modal difference between perceiving something as yellow and sweet (honey) and something else as yellow and bitter (gall) is not an idea we obtain from the single senses. Aristotle distinguishes among proper sensibles (the qualities perceived by individual senses, such as colour, sound or taste); common sensibles (what is common to more than one sense, such as being still or in motion); and sensibles per accidens (the perception of substances, for example a person, through his or her accidents such as being a white and tall thing). Briefly, what we call perceptive experiences in the broad sense, such as tables, balls, trees, cherries, people and all others, in the Aristotelian framework are individual substances perceived through their qualities and relations. We do not perceive ‘stimuli’ as such. In the dynamic process of perceiving, the subject actualizes any kind of potential material object (through its qualities), relatively to the different senses, what gives rise to conscious psychic states. In Aristotle both the perception of sensibles and self-awareness are due to the proper sensible, defined as an inner sense directed towards the internal dynamics of the sensitive part of the psyche (Aristotle 1986, II, 6, 418 a, 16. On this point see also Brentano 1977, Part III, b, c.62). 
Analytically, as the active principle of sensation, it is the inner sense that enables the perception and the distinction of differences among the objects of different senses (a sound, colour, tactile perception); the cross-dimensional perception and distinction of differences among the objects of each sense (such as seeing something as red and round); the cross-modal association of smooth things with sweet, warm, dry and soft and vice versa sharp-angled things with sour, wet and hard, because of the reciprocal overlapping between the couples of contraries. Most of all, it allows self-consciousness. Briefly, inner sense, defined as the active corporeal quality, acts as the medium for any other sensation, and is responsible for the fact that perceived objects are conscious. The point would be further specified by Brentano in his denial that any psychic phenomenon as such may be unconscious (Brentano 1995a). In the ancient Aristotelian framework – obviously very distant from the rise of psychology as an experimental discipline – common sense is also what makes
Brentano’s Aristotelian Concept of Consciousness
33
our experienced reality of things the reality for us, and directly given. We do not infer the information provided to us by the senses, which is instead directly given and unified by common sense. However, direct perceiving cannot be understood in modern Gibsonian terms (Gibson 1971, 1979), because we perceive sensibles, qualities, and not stimuli: for example, we perceive the brightness of the surface of a polished silver plate and not the metric reflectance of the light due to the atomic properties of silver (Ag, atomic number 47). Brentano worked on these themes for his entire lifetime, and from different perspectives.
4 The psychophysical watershed of consciousness

Scientific psychology was born with psychophysics in Brentano's time, and consists of a series of methods both for determining the degree of sensitivity of the sense organs and for measuring sensations, as well as of a series of psychological operations of judgement, for example in the comparison or evaluation of perceived stimuli. According to Fechner, the percept (a colour, a sound, a landscape) is the product of a causal chain of events starting from a distal stimulus (a physical object), proceeding through a proximal stimulus (such as the retinal image of the object), further processed in the cortical areas, and finally ending with the percept of the distal stimulus. The method developed by Fechner (1860) for psychophysical measurement was based on the determination of a certain number of differential thresholds. The unit of measurement of psychophysics, the 'just noticeable difference' or JND, is the minimum amount by which a physical magnitude must change for the subject to notice a perceptual difference, that is, for the two perceptions to differ. The aspect that was, and still is, not fully appreciated is that, in the definition of the JND, 'the amount something must be changed' pertains to physics, while 'for a difference to be noticeable' pertains to psychology/perception. It is therefore necessary to determine a correspondence between the two stimulations and the two sensations, which also gives rise to many further problems, such as adaptation. Once again, as Brentano acutely pointed out (Brentano 1995a), what is not explained is what 'for a difference to be noticed' means. An Aristotelian viewpoint at the rise of experimental psychology would have started from a different initial basis. As Brentano highlighted in his criticism of psychophysics, what one should look for is not the 'just noticeable difference' (JND, the representative unit of classic psychophysics), but what
is 'just qualitatively perceivable' (JPD) (Brentano 1995a). Brentano, in fact, raised doubts about the validity of assuming the psychological equivalence of just noticeable differences at different levels of stimulation (Brentano 1995a, 67–68). The real problem with psychophysical research, which Brentano immediately detected, is clarifying what is being measured in the particular case, how it is being measured, and what is meant by the spurious concept of 'perceived stimulus'. In Brentano's time, with the rise of a scientific psychology, there were substantially three potential psychophysical lines of inquiry: (i) analysis of the relationship between the external stimulus (metric cue) and the percept (or psychic phenomenon, though analysed in terms of a threshold behavioural response); (ii) analysis of the relationship between the external stimulus and its physiological elaboration (the so-called inner stimulus); (iii) the overall programme of external and internal psychophysics which Fechner originally had in mind but which remained unfulfilled. What was missing, and still is today, is the scientific analysis of psychic phenomena per se: that is, of the conscious subjective experience arising on the basis of (but not necessarily from, and not reducible to) physical stimuli and physiological correlates. One has beautiful phenomena occurring through imagination, as Brentano emphasized. Brentano's proposal, of Aristotelian derivation, reverses this point of view. One should start from (empirical and descriptive) analysis of the components of phenomena as they appear to consciousness (which is a great undertaking) so that an experimental design and an appropriate measure can then be developed (Canal and Micciolo 2013). Starting from analysis of the inner perception of the phenomena of consciousness provides an approach opposite to that of external psychophysics.
But neither does it correspond to an internal psychophysics, that is, to analysis of the genesis of phenomena from the unconscious, physiological point of view (in contemporary terms, to neuroscientific analysis). Of Brentano's proposal there remains only the empirical-descriptive part, and this for purely contingent reasons: academic hostility prevented Brentano from opening a laboratory of experimental psychology. The experimental part implicit in the proposal was only partially developed by his pupils and gave rise to the two Gestalt schools, those of Berlin and Graz, which analysed various components of the structure of the act of intentional reference (Albertazzi 2015a). Analysis of the results achieved, their limitations, and how they influenced the interpretation of Brentano's proposal warrants separate discussion (see Albertazzi 2013a).
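For concreteness, the classical psychophysical apparatus at issue can be stated compactly (a standard textbook reconstruction, not Brentano's or Fechner's own notation). Weber's law takes the just noticeable difference ΔI to be a constant fraction of the stimulus magnitude I, and Fechner obtained his logarithmic law by integrating it:

\[
\frac{\Delta I}{I} = k \quad \text{(Weber's law)}, \qquad S = c \log \frac{I}{I_{0}} \quad \text{(Fechner's law)},
\]

where S is the magnitude of the sensation and I_0 the absolute threshold. Brentano's doubt, noted above, targets precisely the assumption built into the second formula: that equal relative increments ΔI/I at different levels of stimulation correspond to psychologically equivalent differences.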
Brentano’s Aristotelian Concept of Consciousness
35
5 Psychic and physical phenomena

If analysis of the phenomena of consciousness must begin from within, but does not coincide with (neuro)physiological analysis, in what does it consist? The question is clearly topical. The best-known and perhaps most misunderstood of Brentano's theses, which is often likened to a generic internalist position, is precisely intentional reference. It is usually construed as a theory of intentionality in the sense of having in mind an intention in regard to an action. The question, however, is much more complex than this. There is a common denominator in Brentano's thought between metaphysics (immanent realism, see Albertazzi 2005) and psychology that consists in the thesis of intentional reference. As is well known, in Book Two of Psychology from an Empirical Standpoint Brentano presents his distinction between psychic (psychische) and physical (physische) phenomena. Specifically:

Every presentation which we acquire either through sense perception or imagination is an example of a psychic phenomenon. By presentation I do not mean what is presented, but rather the act of presentation. Thus, hearing a sound, seeing a coloured object, feeling warm or cold, as well as similar states of imagination are examples of what I mean by this term. I also mean by it the thinking of a general concept, provided such a thing actually does occur. Furthermore, every judgement, every recollection, every expectation, every inference, every conviction or opinion, every doubt, is a psychic phenomenon. Also to be included under this term is every emotion: joy, sorrow, fear, hope, courage, despair, anger, love, hate, desire, act of will, intention, astonishment, admiration, contempt, etc. (Brentano 1995a, 79. Translation slightly modified).
'Psychic' phenomena are therefore to be understood as acts (processes of psychic energy such as seeing, thinking or loving) (see also James 1950) and should not be confused with static mental states (although such confusion is still frequent in the science of consciousness today, especially when psychic phenomena are compared to the states of a computing machine). Such acts are of a certain kind (primarily presentational, secondarily judgemental and emotional, which diminishes the role of top-down influences such as inferences or assumptions); they originate in both sensations and fantasies (and hence do not exclusively depend on stimuli); they are inner presentations (and hence have no direct access to outer perception); and they are expressible in language by verbal forms (seeing, hearing, imagining and the like). Brentano does not further define what psychic
energy is, except in Aristotelian terms (dynamis, δύναμις; energheia, ἐνέργεια), that is, as the potentiality of a process to become and be fulfilled, in continuous change. But experimental psychology, too, has so far been unable to define the concept in other than self-referential terms (mental energy), in reductionistic terms (brain metabolism as a correlate of mental energy), or more generally in terms of physical power. In fact, like many other terms in classical psychological science, the concept of psychic energy is derived from physics, and in experimental analysis it is an unquestioned dimension analysed from a behavioural point of view and explained on the basis of metric properties and/or brain functions. It is Brentano's definition of 'physical' phenomena, however, that deserves specific attention because of its peculiarity, and because it is precisely what does not allow a Cartesian and dualistic interpretation of his ideas. Brentano affirms:

Examples of physical phenomena, on the other hand, are a colour, a figure, a landscape which I see, a chord which I hear, warmth, cold, odour which I sense; as well as similar images which appear in the imagination (Brentano 1995a, 79–80).
According to Brentano’s theory of intentional reference, an act always has an object (item), and objects are given in inner perception. Because presentations are psychic processes not detachable from their objects, physical phenomena are conscious qualitative appearances, such as colours and shapes in seeing, musical tones in hearing and emotional feelings as consciously experienced rather than metric cues. The Aristotelian origin of his psychology may let understand ‘physical’ as the sensible quality of the external object (such as its colour) transferred to the sense organ of the perceiver. A good example is the subjective experience (for example, seeing) of a colour appearance in the visual field. The phenomenon is currently explained in terms of radiations impinging on the eye to generate its neuronal correlate (criticism in Da Pos and Albertazzi 2010; Albertazzi and Poli 2014), from which is excluded the explanation of the phenomenal and contextual appearances of colour. Other chromatic dimensions such as warm and cold, and/or their behaviour as depth cues is confined to the realm of aesthetics because they are not reducible and/or treatable as physical dimensions (Katz 1935; Sivik 1974). From the viewpoint of inner perception, not only presentations triggered by external stimuli but also dreams, hallucinations and products of imagination are real psychic phenomena: they may be experienced with a greater or lesser
Brentano’s Aristotelian Concept of Consciousness
37
degree of reality, but they are not of a different kind (Metzger 1936/2006, 1941). The point is of utmost importance for understanding Brentano's theory of immanent realism and for its subsequent developments. This is so first of all because, in Brentano's descriptive psychology, physical phenomena, as universal characteristics of our subjective experience, are qualitative. They are consequently entirely different from a Cartesian conception of physical matter. This comes as no surprise because, as is well known, Descartes was highly critical of the Aristotelian (and medieval) idea of a qualitative 'physics', the very idea at work in Brentano's theory. Descartes's assumption of a dualism between body and mind is based on a conception of physics totally devoid of qualities: in fact, he considered only the empirically quantifiable attributes of size, shape and motion to be explicative of reality. He writes:

If you find it strange that I make no use of the qualities one calls heat, cold, moistness, and dryness [deriving from the four Aristotelian elements] … as the philosophers [of the schools] do, I tell you that these qualities appear to me to be in need of explanation, and if I am not mistaken, not only these four qualities, but also all the others, and even all of the forms of inanimate bodies can be explained without having to assume anything else for this in their matter but motion, size, shape, and the arrangement of their parts (Descartes 1983, XI, 25–26. Emphasis mine).
Only apparently, then, is the Brentanian stance comparable to Descartes’s idea of consciousness. The ontological difference between mind and matter in Descartes is what makes consciousness (ontologically) not reducible to (neuro-)biology and physics. However, it is also the starting point of the debate on how to give a scientific explanation of their connection (be it understood, in modern terms, as a combination or integration of neurons (Tononi 2008)), which is the origin of the so-called ‘hard’ problem of consciousness (Chalmers 1995, 1996). The same position was taken by Galileo, who considered ‘primary’ such physical properties as spatiality, solidity, hardness, weight, shape, size and motion, within the framework of classical mechanics, and relegated all other properties – such as ‘white or red, bitter or sweet, noisy or silent, and of sweet or foul odour’ (Galileo 1623/1957, 311 ff) – to the status of subjective secondary qualities. According to Galileo, secondary qualities reside in consciousness and are hence addressable, in his view, only by names (Galileo 1623/1957, 274. Emphasis mine). What Brentano tried to do was explain the nature of the qualitative experiences of consciousness without resigning himself to addressing them only through linguistic categories such as names.
Descartes’s and Galileo’s ideas are still difficult to disconfirm in contemporary science, in which one may at most think of qualities in terms of emergent mental properties in biological organisms. The unanswered question, however, is the nature of these emergent properties. One example, already mentioned, is provided by Eccles’s idea that the self-conscious mind is a sort of emergent reading of what happens in the different brain circuits (Eccles 1990). Yet it remains to be explained how qualitative mental experiences may entirely originate from purely physical matter (in the classical sense of physical stimuli or neural correlates, see Crick and Koch 1990), and why perception has to be considered essentially a symbolic process. In conscious perceiving, one does not read or interpret stimuli or the electrochemical transformations of stimuli; rather, one has conscious, qualitative, figural, meaningful, positive/negative experiences. In principle, one could do without language in whatever form, symbolic syntax included. Brentano, in fact, repeatedly stressed the unreliability of grammar as a guide to descriptive psychology (Brentano 1981b, The second draft of the Theory of Categories, 1916).
6 Inner perception: What does it mean?

The concept of inner perception (innere Wahrnehmung) (Brentano 1995a, 29–30), of Aristotelian derivation, concerns the experience of the phenomena of consciousness from both the point of view of perceiving (Brentano’s term for psychic phenomena) and that of its objects (Brentano’s term for physical phenomena). A binding problem as to how metrical features that are processed in parallel are bound into one unique conscious percept does not arise in Brentano’s theory of consciousness. In fact, it is not necessary to explain what constitutes the qualities of a biological structure (be this confined to neuronal activity or extended to the entire body) and how they arise, because these are tasks for psychophysics and (neuro-)physiology. Psychic phenomena form a distinct genus not reducible to stimuli and their neuronal processing. On this point, Brentano very explicitly states one must resolutely contradict the person who, out of confusion of thought, claims that our consciousness in itself has to be seen as a physico-chemical event, that it itself is composed out of chemical events. … Chemical elements are substances which, by themselves, are unintuitive, and which can be characterized only in relative terms by considering manifold direct and indirect effects in our consciousness. The elements of inner life, i.e. the different most
simple constituents, by contrast, are intuitively contained in our consciousness. In enumerating them, psychognosis [descriptive or pure psychology] can therefore leave out any reference to the physiological, the psycho-chemical realm. (Brentano 1995b, 4)
Subsequently, both Hering (1920/1964) and Metzger (1936/2006) would argue in similar terms. The experiences of which we are aware, in fact, are real and incontrovertible. Appearances are imbued with a salience, certainty, meaning and emotional value which do not and cannot pertain to stimuli. Consider the intrinsically cross-modal nature of perception, to which Aristotle also referred, albeit with caution (Aristotle 1980). Today it is explained in terms of feature-to-feature integration or, conversely, as semantically induced by higher-order processes (ideasthesia, see Jürgens and Nikolić 2014), which is to say by speech and/or conceptualization (for a review see Spence 2011). In a Brentanian framework it would be explained in Aristotelian terms by the subjectively perceived similarity between qualitative attributes in diverse sensory modalities consisting of pairs of contraries (hot/cold, rough/smooth, pleasant/unpleasant and so on) unified by the inner sense, and whose experimental analysis requires specific methodologies (examples in Albertazzi et al. 2012, 2014; Albertazzi, Canal and Micciolo 2015; Da Pos and Pietto 2010; Murari et al. 2014). Although subjective experiences are not universally invariant and explicable in third-person terms, experimentally they can be shown to have high statistical consistency among subjects and to be intersubjectively shareable, reliable and measurable (Albertazzi 2015c). Nor need they be given in terms of an introspective verbal rendering. On the basis of an appropriate experimental methodology, one can also test how the mental, subjective, experiential content of a colour (be it ‘red’, ‘yellow’, ‘orange’ and so on) can be scientifically observable, objectified and hence verified as intersubjectively shared by the general population (Albertazzi and Da Pos 2016).
Asking whether or not a mathematical model of what we perceive is real, and hence predictive – starting from an inferential viewpoint according to which our brain syntactically interprets the metric cues incoming through our sensory organs – is senseless for observables such as present conscious appearances. Current mathematical models are suitable for behavioural psychophysics, not for consciousness studies, although this does not exclude, in principle, the possibility of designing a mathematical model of subjective experiences, once their nature has been clarified and if an appropriate mathematics is applied. Moreover, the interpretation of the stimuli attributed to the brain is currently explained in terms of past experience – which
is again a vicious circle, because what a past experience is remains unexplained – and in a third-person account. What we perceive, however, is eminently subjective, experienced and judged from a first-person perspective, and not necessarily based on past experience, as shown by the phenomenology of perception (Kanizsa 1991). The science of consciousness has to be built on different bases. Brentano wrote: Actually, psychology, in so far as it is descriptive, is far in advance of physics. The thinking thing – the thing that has ideas, the thing that judges, the thing that wills – which we innerly perceive is just what we perceive it to be. But so-called outer perception presents us with nothing that appears the way it really is. The sensible qualities do not correspond in their structure to the external objects, and we are subject to the most serious illusions with respect to rest and motion and figure and size [i.e. primary qualities]. According to some philosophers, the subject of our mental acts and sensations and that of the analogous animal activities is something corporeal; if it were true, we could have intuitive presentations of certain accidents of bodies. A careful analysis of mental phenomena, however, proves beyond any doubt that their substantial support (Träger) is not something spatially extended, but is something which is mental (etwas Geistiges). This being the case, it may turn out that the domain governed by the laws of mechanics is in fact very different from what physicists have until now assumed. (Brentano 1981b IV, 208. Emphasis mine)
The idea of a science of consciousness, distinct from classical physics, bearing a metaphysical commitment and offering more reliable information about the essence of being was also developed by William James. Wolfgang Metzger, in the 1930s, expressed the same idea (Metzger 1936/2006, 198). Brentano’s view is an alternative not only as regards the admissibility of a science of consciousness per se (in principle not reducible to physics and neurophysiology), but also as regards the very concept of what reality is for a living being. As for Aristotle, in fact, Brentano’s conception of a ‘physics from the observer’s viewpoint’ is based on primitives and conceptual categories different from those described by classical physics; primitives and categories which also differ from those employed to develop the formalisms available today. However, although Brentano’s notion of consciousness avoids the dispersion and fragmentation of research into biochemical, quantum and electroencephalographic analyses, and analyses of neuronal responses (also in the brains of other species), it is not an easily implemented proposal. It could be called a sort of second Copernican revolution. It entails a new theory of (subjective) space-time and the elements of a science of nature (to which consciousness also
pertains) based on the qualitative experience of reality. The radically different background assumptions of Newtonian-Galilean theory must be reversed; and that reversal requires a highly sophisticated theoretical basis, which Brentano provides in his works.
7 Presentations are not representations

The aspect that distinguishes Brentano’s theory from other contemporary proposals – the computational mind, and consciousness as a product of neuronal activity – is the nature of the act of presentation (Vorstellung). In accordance with laws of inner dependence, all the other processes – representations, judgements, inferences, evaluations, emotions – develop from a conscious presentation. These are diverse phenomena and their characteristics have been widely analysed (Brentano 1995b). Brentano’s empirical and descriptive psychology is in fact the exact science of the elements of psychic experience (starting from presentation), of its ontological categories (such as whole/parts relations that are not merely epistemological), of its observables (conscious qualitative phenomena) and of their inner laws of dependence. Examples are different psychic phenomena like seeing and thinking, or thinking and desiring; or the parts of a whole, such as the notes or the temporal and tonal distances in a melodic configuration (Brentano 1995b, 1988). Act, content and object are components that coexist in the duration of the presentation, and some of their parts are not separable. A presentation always has an object. Some parts have one-sided separability (such as the relation between red and colour, or between seeing and thinking, or thinking and desiring); others are separable only in a distinctional manner, as when we refer to an object as mammal, feline, vertebrate and the like, within a taxonomic classification (Brentano 1995b). Kanizsa’s grammar of seeing (Kanizsa 1979) shows how Brentano’s thesis can be exemplified and then scientifically explained according to laws of organization in vision.
In particular, Kanizsa’s work has brought out the differences and specificities of acts of presentation as diverse as seeing and thinking (Kanizsa 1991); a difference usually overlooked by the cognitive sciences, which are prone to overestimate the amount of top-down influence (such as inferences, hypotheses, knowledge, language and so on) in perception. Meinong and then Benussi used a descriptive psychology approach to analyse the temporal structure of an inner act of presentation (acoustic and visual) in systematic and experimental terms. Their results remain uncontested today (Meinong 1899; Benussi 1913; Rensink 2000, 2002; Pöppel 2009).
Hence it is not a matter of amending, assimilating and completing Brentano’s so-called ‘internalist’ proposal by reformulating it according to new or more recent methods and results of contemporary scientific research. The act of presentation – and the correlated phenomena of which we are aware – is not the representation of an external objective reality in the sense of classical physics, which would render the problem ‘hard’ to solve (Chalmers 1996); nor is it conceivable in terms of physical phenomena at the quantum level (Beck and Eccles 1994; Hameroff 1994; Hameroff and Scott 1998; Squires 1988). Presentation can be regarded as a biological phenomenon (Revonsuo 2006) only if it is understood in the Aristotelian sense and not in the current sense of cognitive neuroscience. In fact, it is understood neither as mere brain activity (NCC) nor as a property of the overall organism and/or its situatedness in the environment (Thompson and Varela 2001; Varela, Thompson and Rosch 1991). ‘Naturalizing phenomenology’ or consciousness means explaining and modelling conscious experiences once again in terms of neurophysiological correlates, and conceiving consciousness as embodied and absolutely dependent on the brain (Crick 1994; Edelman and Tononi 2000; Edelman 2004; Petitot et al. 1999; Petitot 2008). The idea of discovering and explaining the nature of consciousness through the use of methods such as PET (positron emission tomography) or functional magnetic resonance imaging, and EEG (electroencephalography) or MEG (magnetoencephalography) – which at most show the where but not the what (albeit in a neurophysiological sense) – or even speaking in terms of ‘phenomenal neuroscience’, ‘cognitive neuroscience’ or an ‘extended mind’ (Clark 2000) picking up and representing information from the physical environment, would be nonsense for Brentano, as previously explained.
Most brain activity is unconscious, and the detection of a signal may occur without one’s being conscious of it (Libet 1987). Once again, it would be a misunderstanding of what the proper observables for a science of consciousness are. Contemporary methods of imaging or brain scanning analyse and explain the neural activity of the brain – what Brentano would term research in genetic psychology – and certainly not the phenomenal level of our conscious experience. The false step in the analysis of the phenomena of consciousness juxta propria principia, today as in Brentano’s time, consists in its point of departure: the assumption that psychic phenomena are representations of a physical reality, mainly in the sense of classical mechanics (Albertazzi 2015a). The representation at the basis of contemporary science and theories of consciousness, in fact, is the output of the processing of stimuli universally perceived (in the sense of mechanisms common among individuals) through
sensory processes and subsequently modified and represented as such through equally universal electrochemical processes, such as the action potentials or neuronal codes that supposedly guarantee its veridicality. The problem of the veridicality of a science of ‘psychic’ phenomena with respect to ‘physical’ phenomena, however, does not arise in Brentano, because neither the metrical properties of external stimuli, nor the properties of neurons (Crick and Koch 1990), nor those of entire populations of neurons (Singer 1994, 2000), are present as such or remain invariant in the subjective spatio-temporal process of presentation.
8 Psychic dimensions

The act of presentation is not volatile. It has spatio-temporal properties and dimensions of location, direction, change and velocity that Brentano systematically analysed from a descriptive viewpoint (Brentano 1988, 1981b), and that may receive experimental verification provided it is understood that they are not quantitative physical dimensions. Physical dimensions concerning quantity are in fact non-negative numbers multiplied by some physical ‘unit’, which in classical physics is essentially conventional. In consciousness, by contrast, the units are qualitative, subjective and not conventional at all (Albertazzi 2015c; Albertazzi, Koenderink and van Doorn 2015). Conscious perceiving develops according to subjective moments of time and phenomenal parts of space, strictly intertwined. Brentano describes (in Aristotelian terms and with Aristotelian vocabulary) the perceived dimensions of phenomena. An example will help. Consider the case of a rectangular shape appearing in the visual field, changing in colour from left to right, from black to white, while remaining unchanged in its colour from top to bottom. In this case, Brentano says, from left to right the shape has a lesser perceived acceleration [teleiosis] of change at every point, whereas it has full acceleration of change from top to bottom. A similar example is given, in perception, if we simply light up a transparent sphere. Its brightness, in fact, decreases very rapidly as one moves downwards. What is in play when seeing this phenomenon is the change of perceived brightness and the way in which it changes. Other examples concern the colour of a boundary line between differently coloured surfaces perceived on the same plane (Brentano 1988; Da Pos, Albertazzi in preparation). Paradigmatic examples of the nature of the qualitative dimensions of physical phenomena are stroboscopic motion (Wertheimer 1912/2012), the perception of causality (Michotte 1954), or the many forms
of colour interaction (Albers 1975). However, the ‘matter’ of appearances, the subjective space-time primitives, and their laws of organization are the same for every phenomenon of which we are conscious. The very matter of phenomenal consciousness does not coincide with what today is called ‘filling-in’ from the psychophysical or neurophysical point of view (Grossberg 2003; Pessoa and De Weerd 2003; Ramachandran 2003. See Albertazzi 2013b). Put briefly, what we are conscious of are patterns internally constructed and eventually imposed on the raw material of stimuli (Mausfeld 2010, 2013), not re-presentations of physical time-space and metrical cues. Examining the structure of the underlying neural correlates (NCC) does not explain the nature and the qualitative dimensions of the phenomenon; rather, it simply records neuronal activity in terms of cerebral functions calibrated on physical measurements. Today, for example, neurophysiological research makes it possible to identify temporal durations of an act of perceiving (Pöppel and Bao 2014), including the delay necessary for a sensory stimulus to become conscious, of about 500 ms (Libet et al. 1992). However, detecting at brain level the presence of temporal windows of about 20–30 ms (so-called relaxation oscillations) that represent the logistical basis on which spatial and visual information is integrated, and identifying temporal windows of about 3.2 seconds (with subjective variability) that constitute the specious present, signifies detecting the presence of syntactic mechanisms at the neurophysiological level. These are logistical functions that reduce sensory noise or uncertainty through elementary integration units and open the sensory channel for new information. What these functions do is deliver a frame for subjective time, or at least for some temporal phenomena.
They say nothing, however, about the qualitative experience of subjective time, its experienced length and continuity, or the individuation of a self. Research in this field, in fact, affirms that the continuity of the flow of time is due to semantics (Pöppel 2009), but again what semantics is and how it is to be understood remains unsaid (unless one again calls upon symbolic syntactic structures). Translating these findings into Brentanian terms, the discovered durations may be considered the syntax of the act – as Benussi clearly understood and analysed experimentally (Benussi 1913). However, the meaning of the perceived contents allowed by the logistical temporal windows, and the subjective length or shortness of the experienced durations, are not explained at, or by reference to, this level. As regards subjectively experienced space, there exist countless examples of how it is not perceived according to a Euclidean geometry (which is still the
frame of reference for psychophysical analysis) and how this space is not a container of objects (Albertazzi 2015b). Consider the wide variety of so-called perceptual illusions. The space in which the segments of the Müller-Lyer illusion appear (Müller-Lyer 1889) is part of their so-called apparent size, due to the fact that once they are placed in a context (the fins pointing inwards or outwards), the segments are no longer pieces separable from the whole to which they perceptively belong. Brentano was aware of the presence of these phenomena, which do not conform with the ‘physical units’, and he himself worked to produce a version of the Müller-Lyer illusion (Brentano 1979). The fact that these contextual perceptions have recently been verified in species very distant from our own confirms that they are fundamental ecological traits of perception (Sovrano, Da Pos and Albertazzi 2015). From the point of view of inner perception, so-called illusions are not illusions, just as phenomenal presence is not the result of a model implemented by the brain to simulate a virtual reality: what we perceive, imagine, desire and the like are real phenomena, ones much more real and immediate than an external physical reality to which consciousness has only indirect access, and which can only be modelled (and therefore explained) in terms of the computational theory of vision. Besides the so-called illusions, consider the simple appearance of coloured shapes like a blue ball or a red roof. Shape and colour as they appear in subjective space cannot be explained as the product of a sum of components separated in cortical areas specialized in processing the different aspects of colour, shape (and motion as well), and characterized by different types of neurons, connection patterns, metric velocity of conduction, etc. (Zeki 1993; Zeki and Bartels 1999).
Although the colour and shape of a red square are coded by different neurons in the visual system, we are not conscious of ‘red’ and ‘square’ separately. It is impossible, in fact, to see a shape that is not coloured (even if it is black and white, because achromatics are also colours from the phenomenal point of view). It is no coincidence that Brentano defined shape and colour as a metaphysical relation whose parts are not detachable from each other (Brentano 1995b). If one instead insists on ‘explaining’ the perception of a qualitative whole (like a ‘red square’) starting from the processing of metric cues, as in the computational science of vision, then a binding problem inevitably arises. Contemporary science is still very far from understanding what a conscious, qualitative, meaningful percept is. Explaining the complexity of perceptive experience by referring to a global neuronal workspace that broadcasts signals to many sites in the cortex for parallel processing (Baars 1988, 1997; Dehaene and Naccache 2001), or to re-entrant signals (Edelman 1989, 2003), or even to meta-representations (Cleeremans 2005, 2011)
is again a category mistake, from a Brentanian perspective, because these hypotheses and models are built on metric cues. Beyond cues, therefore, what act in the deployment of conscious appearances are mental operations directed at items of various kinds. In short, perceptions are not computed from cues. Rather, the intentional acts of the perceiver enable the generation of perceptions in terms of clues and/or subjective choices (Brentano 1995a; Albertazzi, van Tonder and Vishwanath 2010). Most of all, contemporary models refer to unconscious processing, being unable to explain conscious percepts per se, and ultimately introducing top-down linguistic, cognitive and attentional integrations. There is neither a simulated virtual space, nor a virtual time, nor a virtual self in conscious experience. Even more radically, Brentano’s final considerations support the idea that substance is present in the accidents (a kind of reversal of the Aristotelian position that accidents belong to a substance), so that the hypothesis of the existence of a substantial self in the time of presence of an act of presentation – a time which, by the way, has also been confirmed at the level of logistical durations – is likewise gainsaid. From this viewpoint, Brentano would probably agree neither with the idea of a transitory ‘core self ’, generated by our sensory experience, nor with the other stages of Damasio’s theory of consciousness (Damasio 1999). Awareness and evidence of the phenomena we experience are provided by the inner structure of the intentional reference. In the presentation there is the simultaneous co-presence of what is presented and the subjective modes of its presenting, which is what makes the object evident to consciousness (Brentano 1995a). Brentano, in fact, distinguishes between a primary object (the object proper, say, a colour) and a secondary object (the awareness of the colour, simultaneously given).
The ‘seeing’ and the ‘colour seen’, and the modes in which this consciousness unfolds (as in seeing, recognizing, loving, remembering a colour, as present, just past, definitively past and so on) are different parts of the same complex whole (for the further development of the issue, regarding the temporal modes of presentation, see Albertazzi 1995/1996, 1999). The detailed analysis of the parts of consciousness is given in Descriptive Psychology (Brentano 1995b), while the complexity of their interrelations and the final idea of substance is to be found in Sensory and Noetic Consciousness (Brentano 1981a). Consciousness is deployed in a duration, which has the characteristics of a multiple and multifarious continuum (Brentano 1981a, 1988). The fact that perception has (though not exclusively) a causal basis in transcendent reality excludes an idealist drift in Brentano’s thought; but it also excludes a foundation of consciousness in the neurochemistry and neurotransmitter system of the brain. In the Brentanian framework, ideas such
as ‘the biology of mind’ (Gazzaniga, Ivry and Mangun 2002; Gazzaniga, Richard and George 1997) are out of place because consciousness does not reside within the boundaries of the brain. Phenomena of consciousness do not coincide with neuron firings or synchronization patterns of temporal oscillations (Pöppel and Logothetis 1986), unless one assumes that neurons ‘see’ – obviously, once again, interpreting meaningful appearances in terms of emergent processes occurring in the visual cortex (Crick and Koch 1990; Singer 1994; Sperry 1969, 1990). Whatever we may come to know about the brain (Marshall and Magoun 1998) offers no glimpse of the phenomena of consciousness.
9 Conclusions

Questioning the assumption that one either does physics or does not do science at all (the Laplacian viewpoint), and claiming that there is a perfectly rigorous way to conduct qualitative research on consciousness, obviously entails a radical reconsideration of the fundamental elements of psychological science and of the nature of the subjective space-time of the phenomena of awareness, on which Brentano worked for most of his life. There is no evidence, in fact, even today, that conscious phenomena are explainable in terms of the physics that we actually know (Libet 2004). Brentano’s psyche is not a powerful computational algorithm for generating and reading symbolic strings (Turing 1950; Hofstadter 1979; Jackendoff 1987, 1992; Johnson-Laird 1983), nor one which develops according to the principles of neural networks, considered as essentially identical (Churchland 1989; Crick and Koch 1990; Dennett and Kinsbourne 1992); nor is it a material substance as highly organized as the brain. The brain may be a sufficient but not a necessary condition for conscious experiences. Developing a science of consciousness per se as proposed by Brentano (Brentano 1995b, 4–5) is a great endeavour and challenge for current research. In fact, starting from the analysis and description of conscious experience, one should redefine the qualities classically considered to be primary – the attributes of physics, such as shape, size, motion and the like – in the qualitative terms of ‘voluminousness’, ‘remoteness’, ‘solidness’, ‘squareness’ and so on, all of which are relational, distributed qualities of what is perceived. One has to bracket off the correlated psychophysical and/or neurophysiological inquiries and develop an autonomous science of qualities. For the time being, we still do not know how life emerged from inanimate being, and we also do not know how consciousness arises from unconscious entities. We nevertheless
have evidence of both. Moreover, we know at least some of the relations of dependence among the different levels of reality (Hartmann 1935; Poli 2001, 2012). It seems more productive and scientifically honest to recognize the existence of different realms – categorically different phenomena, governed by specific laws and enjoying equal ontological dignity – instead of reducing all types of reality to the one we presently know better, or are supposed to know better, that is, physical being. Future discoveries may allow us to know more about the complete nature of reality. Within this framework, consciousness is part and parcel of nature, and it is given to us phenomenologically or, as Brentano would have said, in phenomenal presence.
References

Albers, J. (1975). Interaction of Color, revised and expanded ed., New Haven and London: Yale University Press.
Albertazzi, L. (1995/1996). ‘Die Theorie der indirekten Modifikation’, Brentano Studien, VI, 263–82.
Albertazzi, L. (1999). ‘The Time of Presentness. A Chapter in Positivistic and Descriptive Psychology’, Axiomathes, 10, 49–74.
Albertazzi, L. (2005). Immanent Realism. Introduction to Brentano, Berlin-New York, NY: Springer.
Albertazzi, L. (2007). ‘At the Roots of Consciousness. Intentional Presentations’, Journal of Consciousness Studies, 14 (1–2), 94–114.
Albertazzi, L. (2013a). ‘Experimental Phenomenology. An Introduction’, in L. Albertazzi (ed.), The Wiley Blackwell Handbook of Experimental Phenomenology. Visual Perception of Shape, Space and Appearance, 1–36, Chichester: Blackwell-Wiley.
Albertazzi, L. (2013b). ‘Appearances from an Experimental Viewpoint’, in L. Albertazzi (ed.), The Wiley Blackwell Handbook of Experimental Phenomenology. Visual Perception of Shape, Space and Appearance, 267–90, Chichester: Blackwell-Wiley.
Albertazzi, L. (2015a). ‘Philosophical Background: Phenomenology’, in J. Wagemans (ed.), Oxford Handbook of Perceptual Organization, 21–40, Oxford: Oxford University Press.
Albertazzi, L. (2015b). ‘Spatial Elements in Visual Awareness. Challenges for an Intrinsic “Geometry” of the Visible’, in C.-E. Niveleau and A. Métraux (eds.), The Bounds of Naturalism: Experimental Constraints and Phenomenological Requiredness, special issue of Philosophia Scientiæ, 19 (3), 95–125.
Albertazzi, L. (2015c). ‘A Science of Qualities’, Biological Theory, 10 (3), 188–99. doi:10.1007/s13752-015-0213-3.
Albertazzi, L., Jacquette, D. and Poli, R., eds. (2001). The School of Alexius Meinong, Aldershot: Ashgate.
Brentano’s Aristotelian Concept of Consciousness
49
Albertazzi, L., van Tonder, G. and Vishwanath, D. (2010). 'Information in Perception', in L. Albertazzi, G. van Tonder and D. Vishwanath (eds.), Perception beyond Inference. The Information Content of Perceptual Processes, 1–26, Cambridge, MA: MIT Press.
Albertazzi, L., Canal, L., Da Pos, O., Micciolo, R., Malfatti, M. and Vescovi, M. (2012). 'The Hue of Shapes', Journal of Experimental Psychology: Human Perception and Performance, 39 (1), 37–47. doi:10.1037/a0028816.
Albertazzi, L., Canal, L., Dadam, J. and Micciolo, R. (2014). 'The Semantics of Biological Forms', Perception, 43 (12), 1365–76. doi:10.1068/p7794.
Albertazzi, L., Canal, L. and Micciolo, R. (2015). 'Cross-Modal Associations between Materic Painting and Classical Spanish Music', Frontiers in Psychology, 21 April 2015. http://dx.doi.org/10.3389/fpsyg.2015.00424.
Albertazzi, L. and Poli, R. (2014). 'Multi-leveled Objects: Color as a Case Study', Frontiers in Psychology, 5: 592. doi:10.3389/fpsyg.2014.00592.
Albertazzi, L. and Da Pos, O. (2016). 'Color Names, Stimulus Color, and Their Subjective Links', Color Research and Application. doi:10.1002/col.22034.
Albertazzi, L., Koenderink, J. J. and van Doorn, A. (2015). 'Chromatic Dimensions: Earthy, Watery, Airy and Fiery', Perception, 44 (10), 1153–78. doi:10.1177/0301006615594700.
Aristotle (1980). Physics, translated by P. H. Wicksteed and F. M. Cornford, Cambridge, MA: Harvard University Press.
Aristotle (1986). De Anima, translated by W. S. Hett, Cambridge, MA: Harvard University Press.
Baars, B. J. (1988). A Cognitive Theory of Consciousness, Cambridge: Cambridge University Press.
Baars, B. J. (1997). In the Theater of Consciousness: The Workspace of the Mind, New York: Oxford University Press.
Beck, F. and Eccles, J. C. (1994). 'Quantum Aspects of Brain Activity and the Role of Consciousness', in J. C. Eccles (ed.), How the Brain Controls the Mind, 145–65, Berlin: Springer.
Benussi, V. (1913). Die Psychologie der Zeitauffassung, Leipzig: Hölder.
Benussi, V. (1914). 'Gesetze der inadäquaten Gestaltauffassung', Archiv für die gesamte Psychologie, 32, 396–419.
Boynton, R. M. (1979). Human Color Vision, New York: Holt, Rinehart and Winston.
Brainard, D. H. (1995). 'Colorimetry', in M. Bass, E. Van Stryland and D. Williams (eds.), Handbook of Optics: Vol. 1. Fundamentals, Techniques, and Design, 26.1–26.54, 2nd ed. New York: McGraw-Hill.
Brentano, F. (1971). Von der Klassifikation der psychischen Phänomene, edited by O. Kraus, Hamburg: Meiner (1st ed. 1911, Leipzig: Duncker & Humblot).
Brentano, F. (1977). The Psychology of Aristotle, in Particular his Doctrine of the Active Intellect, With an Appendix Concerning the Activity of Aristotle's God, translated by R. George, Berkeley: University of California Press (1st German ed. 1867, Mainz: Kirchheim).
Brentano, F. (1979). Untersuchungen zur Sinnespsychologie, edited by R. M. Chisholm and R. Fabian, Hamburg: Meiner (1st German ed. 1907, Leipzig: Duncker & Humblot).
Brentano, F. (1981a). Sensory and Noetic Consciousness, edited by L. McAlister and M. Schättle, London: Routledge (1st German ed. 1928, edited by O. Kraus, Leipzig: Meiner; rpt. 1968, edited by F. Mayer-Hillebrand).
Brentano, F. (1981b). The Theory of Categories, edited by R. M. Chisholm and N. Guterman, Den Haag: Nijhoff (1st German ed. 1933, edited by A. Kastil, Leipzig: Meiner).
Brentano, F. (1988). Philosophical Lectures on Space, Time and the Continuum, edited by S. Körner and R. M. Chisholm, London: Croom Helm (1st German ed. 1976, Hamburg: Meiner).
Brentano, F. (1995a). Psychology from an Empirical Standpoint, edited by L. McAlister, London: Routledge (translation of the 2nd ed. with an introduction and notes by O. Kraus, Leipzig 1924; 1st German ed. 1874, Leipzig: Duncker & Humblot).
Brentano, F. (1995b). Descriptive Psychology, edited by B. Müller, London: Routledge (1st German ed. 1982, edited by R. M. Chisholm and W. Baumgartner, Hamburg: Meiner).
Canal, L. and Micciolo, R. (2013). 'Measuring the Immeasurable: Quantitative Analyses of Perceptual Experiments', in L. Albertazzi (ed.), The Wiley Blackwell Handbook of Experimental Phenomenology. Visual Perception of Shape, Space and Appearance, 477–98, Chichester: Blackwell-Wiley.
Chalmers, D. J. (1995). 'Facing Up to the Problem of Consciousness', Journal of Consciousness Studies, 2 (3), 200–19.
Chalmers, D. J. (1996). The Conscious Mind, New York: Oxford University Press.
Churchland, P. (1989). A Neurocomputational Perspective: The Nature of Mind and the Structure of Science, Cambridge, MA: MIT Press.
Clark, A. (1992). Sensory Qualities, Oxford: Oxford University Press.
Clark, A. (2000). A Theory of Sentience, Oxford: Oxford University Press.
Cleeremans, A. (2005). 'Computational Correlates of Consciousness', Progress in Brain Research, 150, 81–98.
Cleeremans, A. (2011). 'The Radical Plasticity Thesis. How the Brain Learns to be Conscious', Progress in Brain Research, 168, 19–33.
Crick, F. (1994). The Astonishing Hypothesis: The Scientific Search for the Soul, New York: Simon & Schuster.
Crick, F. and Koch, C. (1990). 'Toward a Neurobiological Theory of Consciousness', Seminars in the Neurosciences, 2, 263–75.
Damasio, A. R. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness, New York: Harcourt.
Da Pos, O. and Albertazzi, L. (2010). 'It is in the Nature of Color…', Seeing and Perceiving, 23, 39–73.
Da Pos, O. and Pietto, M. L. (2010). 'Highlighting the Quality of Light Sources', Proceedings of the 2nd CIE Expert Symposium on Appearance 'When Appearance Meets Lighting', 8–10, Gent, Belgium.
Da Pos, O. and Albertazzi, L. (in preparation). 'Colour Determinants of Surface Stratification'. Abstract presented at SEQS 2014, Rovereto, Italy: CIMeC.
Dehaene, S. and Naccache, L. (2001). 'Towards a Cognitive Neuroscience of Consciousness: Basic Evidence for a Workspace Framework', Cognition, 79, 1–37.
Dennett, D. C. (1991). Consciousness Explained, Boston: Little, Brown.
Dennett, D. C. and Kinsbourne, M. (1992). 'Time and the Observer: The Where and When of Consciousness in the Brain', Behavioral and Brain Sciences, 15, 183–247.
Descartes, R. (1983). Oeuvres, 11 vols., edited by Charles Adam and Paul Tannery, Paris: Librairie Philosophique J. Vrin.
Eccles, J. (1990). 'A Unitary Hypothesis of Mind-Brain Interaction in the Cerebral Cortex', Proceedings of the Royal Society of London B, 240, 433–51. doi:10.1098/rspb.1990.0047.
Edelman, G. M. (1989). The Remembered Present: A Biological Theory of Consciousness, New York: Basic Books.
Edelman, G. M. (2003). 'Naturalizing Consciousness: A Theoretical Framework', Proceedings of the National Academy of Sciences, 100 (9), 5520–4.
Edelman, G. M. (2004). Wider than the Sky. The Phenomenal Gift of Consciousness, New Haven: Yale University Press.
Edelman, G. and Tononi, G. (2000). A Universe of Consciousness: How Matter Becomes Imagination, New York: Basic Books.
Ehrenfels, Ch. von. (1890). 'Über Gestaltqualitäten', Vierteljahrsschrift für wissenschaftliche Philosophie, 14, 242–92.
Fechner, G. Th. (1860). Elemente der Psychophysik, Leipzig: Breitkopf & Härtel.
Galileo Galilei. (1623/1957). The Assayer, translated by Stillman Drake, in Discoveries and Opinions of Galileo, 237–8, New York: Doubleday & Co.
Gazzaniga, M. S., Ivry, R. B. and Mangun, G. R. (1997). Cognitive Neuroscience. The Biology of the Mind, New York: Norton & Company.
Gazzaniga, M. S., Ivry, R. B. and Mangun, G. R. (2002). Cognitive Neuroscience. The Biology of the Mind, 2nd ed. New York: Norton & Company.
Gibson, J. J. (1971). 'The Legacies of Koffka's Principles', Journal of the History of the Behavioral Sciences, 7, 3–9.
Gibson, J. J. (1979). The Ecological Approach to Visual Perception, Boston, MA: Houghton Mifflin.
Gregory, R. L. (1986). Odd Perceptions, London: Methuen.
Gregory, R. L. (2009). Seeing Through Illusions, Oxford: Oxford University Press.
Grossberg, S. (2003). 'Filling-In the Forms: Surface and Boundary Interactions in Visual Cortex', in L. Pessoa and P. De Weerd (eds.), Filling-In. From Perceptual Completion to Cortical Reorganization, 13–37, Oxford: Oxford University Press.
Hameroff, S. R. (1994). 'Quantum Coherence in Microtubules: A Neural Basis for Emergent Consciousness?', Journal of Consciousness Studies, 1, 91–118.
Hameroff, S. R. and Scott, A. C. (1998). 'A Sonoran Afternoon: A Discussion of the Relevance of Quantum Theories of Consciousness', in S. R. Hameroff, A. W. Kaszniak and A. C. Scott (eds.), Towards a Science of Consciousness II, 635–44, Cambridge, MA: MIT Press.
Hartmann, N. (1935). Ontologie (4 vols), I: Zur Grundlegung der Ontologie, Berlin-Leipzig: de Gruyter.
Hering, E. (1920/1964). Outlines of a Theory of the Light Sense, translated by L. M. Hurvich and D. Jameson, Cambridge, MA: Harvard University Press [trans. of Zur Lehre vom Lichtsinn].
Hess, R. F., Beaudot, W. H. A. and Mullen, K. T. (2001). 'Dynamics of Contour Integration', Vision Research, 41 (8), 1023–37.
Hofstadter, D. (1979). Gödel, Escher, Bach: An Eternal Golden Braid, New York: Basic Books.
Hume, D. (1975). Enquiries Concerning Human Understanding and Concerning the Principles of Morals, edited by L. A. Selby-Bigge, 3rd ed., revised by P. H. Nidditch, Oxford: Clarendon Press.
Ihde, D. (1986). Experimental Phenomenology. An Introduction, Albany, NY: State University of New York Press.
Jackendoff, R. (1987). Consciousness and the Computational Mind, Cambridge, MA: MIT Press.
Jackendoff, R. (1992). Languages of the Mind: Essays on Mental Representation, Cambridge, MA: MIT Press.
James, W. (1950). Principles of Psychology, 2 vols., New York: Dover Publications (1st ed. 1890, Boston: Holt and Co.).
Johnson-Laird, P. (1983). Mental Models. Toward a Cognitive Science of Language, Inference and Consciousness, Cambridge, MA: Harvard University Press.
Jürgens, U. M. and Nikolić, D. (2014). 'Synesthesia as an Ideasthesia – Cognitive Implications', in J. R. Sinha and C. Söffing (eds.), Synesthesia and Children – Learning and Creativity, Kassel: Kassel University Press.
Kanizsa, G. (1979). Organization in Vision, New York: Praeger.
Kanizsa, G. (1991). Vedere e pensare, Bologna: Il Mulino.
Katz, D. (1935). The World of Colour, London: Routledge.
Koffka, K. (1935). Principles of Gestalt Psychology, London: Routledge & Kegan Paul.
Köhler, W. (1969). The Task of Gestalt Psychology, Princeton, NJ: Princeton University Press.
Kovács, I. and Julesz, B. (1993). 'A Closed Curve is Much More than an Incomplete One: Effect of Closure in Figure-Ground Segmentation', Proceedings of the National Academy of Sciences USA, 90 (16), 7495–7.
Kovács, I., Fehér, A. and Julesz, B. (1998). 'Medial-Point Description of Shape: A Representation for Action Coding and its Psychophysical Correlates', Vision Research, 38 (15–16), 2323–33.
Libet, B. (1987). 'Consciousness: Conscious, Subjective Experience', in G. Adelman (ed.), Encyclopedia of Neuroscience, 271–5, Boston: Birkhäuser.
Libet, B. (2004). Mind Time. The Temporal Factors in Consciousness, Cambridge, MA: Harvard University Press.
Libet, B., Wright Jr., E. W., Feinstein, B. and Pearl, D. K. (1992). 'Retroactive Enhancement of a Skin Sensation by a Delayed Cortical Stimulus in Man: Evidence for Delay of a Conscious Sensory Experience', Consciousness and Cognition, 1, 367–75.
Mack, A. and Rock, I. (1998). Inattentional Blindness, Cambridge, MA: MIT Press.
Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, New York: W. H. Freeman.
Marshall, L. H. and Magoun, H. W. (1998). Discoveries in the Human Brain, Totowa, NJ: Humana Press.
Maturana, H. R. and Varela, F. (1980). Autopoiesis and Cognition: The Realization of the Living, Dordrecht: Reidel.
Mausfeld, R. (2010). 'The Perception of Material Qualities and the Internal Semantics of the Perceptual System', in L. Albertazzi, G. van Tonder and D. Vishwanath (eds.), Perception beyond Inference. The Information Content of Perceptual Processes, 159–200, Cambridge, MA: MIT Press.
Mausfeld, R. (2013). 'The Attribute of Realness and the Internal Organization of Perceptual Reality', in L. Albertazzi (ed.), The Wiley Blackwell Handbook of Experimental Phenomenology. Visual Perception of Shape, Space and Appearance, 91–118, Chichester: Blackwell-Wiley.
Meinong, A. (1899). 'Über Gegenstände höherer Ordnung und deren Verhältnis zur inneren Wahrnehmung', Zeitschrift für Psychologie und Physiologie der Sinnesorgane, 21, 182–271. Rpt. 1971 in A. Meinong (ed.), Untersuchungen zur Gegenstandstheorie und Psychologie, Leipzig: Barth.
Meinong, A. (1910). Über Annahmen, Leipzig: Barth (1st ed. 1902). English translation (1983) by J. Heanue, Berkeley: University of California Press.
Metzger, W. (1936/2006). Laws of Seeing, translated by L. Spillmann, S. Lehar, M. Stromeyer and M. Wertheimer, Cambridge, MA: MIT Press.
Metzger, W. (1941). Psychologie: die Entwicklung ihrer Grundannahmen seit der Einführung des Experiments, Dresden: Steinkopff.
Michotte, A. (1954). La perception de la causalité, Louvain: Publications Universitaires de Louvain.
Milner, A. D. and Goodale, M. A. (1995). The Visual Brain in Action, Oxford: Oxford University Press.
Müller-Lyer, F. C. (1889). 'Optische Urteilstäuschungen', Archiv für Anatomie und Physiologie, Physiologische Abteilung, 2, 263–70.
Murari, M., Rodà, A., Da Pos, O., Canazza, S., De Poli, G. and Sandri, M. (2014). 'How Blue is Mozart? Non-verbal Sensory Scales for Describing Music Qualities', Proceedings of the Conference ICMC-SMC, 209–16, Athens, Greece. doi:10.13140/2.1.1953.1521.
O’Regan, J. and Noë, A. (2001). 'A Sensorimotor Account of Vision and Visual Consciousness', Behavioral and Brain Sciences, 24 (5), 939–1031.
Penrose, R. (1989). The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics, Oxford: Oxford University Press.
Pessoa, L. and De Weerd, P. (2003). Filling-In. From Perceptual Completion to Cortical Reorganization, Oxford: Oxford University Press.
Petitot, J. (2008). Neurogéométrie de la vision, Paris: Les éditions de l'école polytechnique.
Petitot, J., Varela, F. J., Roy, J.-M. and Pachoud, B., eds. (1999). Naturalizing Phenomenology: Issues in Contemporary Phenomenology and Cognitive Science, Stanford: Stanford University Press.
Pizlo, Z. (2001). 'Perception Viewed as an Inverse Problem', Vision Research, 41, 3145–61.
Poli, R. (2001). 'The Basic Problem of the Theory of Levels of Reality', Axiomathes, 12 (3–4), 261–83.
Poli, R. (2012). 'Nicolai Hartmann', Stanford Encyclopedia of Philosophy: http://plato.stanford.edu/entries/nicolai-hartmann/.
Pöppel, E. (2009). 'Pre-semantically Defined Temporal Windows for Cognitive Processing', Philosophical Transactions of the Royal Society of London: Series B, Biological Sciences, 364, 1887–96.
Pöppel, E. and Logothetis, N. (1986). 'Neuronal Oscillations in the Human Brain. Discontinuous Initiations of Pursuit Eye Movements Indicate a 30 Hz Temporal Framework for Visual Information Processing', Naturwissenschaften, 73, 267–8.
Pöppel, E. and Bao, Y. (2014). 'Temporal Windows as a Bridge from Objective to Subjective Time', in V. Arstila and D. Lloyd (eds.), Subjective Time, 241–62, Cambridge, MA: MIT Press.
Ramachandran, V. S. (2003). 'Foreword', in L. Pessoa and P. De Weerd (eds.), Filling-In. From Perceptual Completion to Cortical Reorganization, xi–xxii, Oxford: Oxford University Press.
Rensink, R. A. (2000). 'Seeing, Sensing, Scrutinizing', Vision Research, 40, 1469–87.
Rensink, R. A. (2002). 'Change Detection', Annual Review of Psychology, 53, 245–77.
Revonsuo, A. (2006). Inner Presence. Consciousness as a Biological Phenomenon, Cambridge, MA: MIT Press.
Rock, I. (1983). The Logic of Perception, Cambridge, MA: MIT Press.
Searle, J. R. (1980). 'Minds, Brains and Programs', Behavioral and Brain Sciences, 3, 417–57.
Searle, J. R. (1992). The Rediscovery of the Mind, Cambridge, MA: MIT Press.
Shannon, C. and Weaver, W. (1949/1998). The Mathematical Theory of Communication, Urbana: University of Illinois Press.
Shapiro, A. and Todorovic, D. (eds.) (2014). Compendium of Visual Illusions, Oxford: Oxford University Press.
Singer, W. (1994). 'The Organization of Sensory Motor Representations in the Neocortex: A Hypothesis based on Temporal Coding', in C. Umiltà and M.
Moscovitch (eds.), Attention and Performance XV: Conscious and Nonconscious Information Processing, 77–107, Cambridge, MA: MIT Press.
Singer, W. (1999). 'Neuronal Synchrony: A Versatile Code for the Definition of Relations?', Neuron, 24, 49–65.
Singer, W. (2000). 'Phenomenal Awareness and Consciousness from a Neurobiological Perspective', in T. Metzinger (ed.), Neuronal Correlates of Consciousness, 121–37, Cambridge, MA: MIT Press.
Sivik, L. (1974). 'Color Meaning and Perceptual Color Dimensions. A Study of Color Samples', Göteborg Psychological Reports, 4 (1).
Smith, B. (1994). Austrian Philosophy. The Legacy of Franz Brentano, Chicago-LaSalle: Open Court.
Sovrano, V. A., Da Pos, O. and Albertazzi, L. (2015). 'The Müller-Lyer Illusion in the Teleost Fish Xenotoca eiseni', Animal Cognition. doi:10.1007/s10071-015-0917-6.
Spence, C. (2011). 'Crossmodal Correspondences: A Tutorial Review', Attention, Perception & Psychophysics, 73, 971–95. doi:10.3758/s13414-010-0073-7.
Sperry, R. W. (1969). 'A Modified Concept of Consciousness', Psychological Review, 76 (6), 532–6.
Sperry, R. W. (1990). 'Mind-Brain Interaction: Mentalism, Yes; Dualism, No', Neuroscience, 5, 195–206.
Spiegelberg, H. (1982). The Phenomenological Movement, 2nd ed., The Hague: Nijhoff.
Spillmann, L. and Ehrenstein, W. (2004). 'Gestalt Factors in the Visual Neurosciences?', The Visual Neurosciences, 19, 428–34.
Squires, E. J. (1988). 'Why are Quantum Theorists Interested in Consciousness?', in S. R. Hameroff, A. W. Kaszniak and A. C. Scott (eds.), Towards a Science of Consciousness II, 609–18, Cambridge, MA: MIT Press.
Stumpf, C. (1883). Tonpsychologie, 2 vols, Leipzig: Hirzel.
Thompson, E. and Varela, F. J. (2001). 'Radical Embodiment: Neural Dynamics and Consciousness', Trends in Cognitive Sciences, 5 (10), 418–25.
Tononi, G. (2008). 'Consciousness as Integrated Information: A Provisional Manifesto', Biological Bulletin, 215, 216–42.
Turing, A. (1950). 'Computing Machinery and Intelligence', Mind, 59, 433–60.
Varela, F. J., Thompson, E. and Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience, Cambridge, MA: MIT Press.
Vishwanath, D. (2005). 'The Epistemological Status of Vision Science and its Implications for Design', Axiomathes, 15 (3), 399–486. doi:10.1007/s10516-004-5445-y.
Wagemans, J. (2015). 'Historical and Conceptual Background: Gestalt Theory', in J. Wagemans (ed.), Oxford Handbook of Perceptual Organization, 3–20, Oxford: Oxford University Press.
Wagemans, J., Feldman, J., Gepshtein, S., Kimchi, R., Pomerantz, J. R. et al. (2012). 'A Century of Gestalt Psychology in Visual Perception. Conceptual and Theoretical Foundations', Psychological Bulletin, 138 (6), 1218–52.
Wertheimer, M. (1912/2012). 'Experimentelle Studien über das Sehen von Bewegung', Zeitschrift für Psychologie, 61, 161–265. Eng. translated by M. Wertheimer and
K. W. Watkins, in L. Spillmann (ed.), Max Wertheimer, On Perceived Motion and Figural Organization, 1–92, Cambridge, MA: MIT Press.
Wertheimer, M. (1923). 'Laws of Organization in Perceptual Forms', Psychologische Forschung, 4, 301–50. English trans. in W. D. Ellis (1938). A Source Book of Gestalt Psychology, 71–94, London: Routledge.
Zeki, S. (1993). A Vision of the Brain, Oxford: Blackwell.
Zeki, S. and Bartels, A. (1999). 'Towards a Theory of Visual Consciousness', Trends in Cognitive Sciences, 7 (5), 225–59.
4
Wittgenstein and the Concept of Consciousness Garry L. Hagberg
Précis A colloquial phrase that here functions as a double entendre can show the way into a discussion of Wittgenstein’s mature1 understanding of consciousness: it is not what you think. That is, on one meaning, what are on an established dualistic view taken as the pure contents of consciousness as they would reside within the hermetic enclosure of the mind (and so where ‘what one thinks’ refers to pre-linguistic and metaphysically private cognitive experience knowable only by first-person introspection) is a philosophical picture or conceptual template Wittgenstein’s investigation into consciousness meticulously dismantles. And on the other meaning – and now precisely because Wittgenstein is widely known to have dismantled that conceptual picture – one can think that he holds a fixed position on the issue of consciousness and that it in essence (despite his protestations) is a form of either reductive or eliminative behaviourism. (In fact, this itself is an incorrect reductive interpretation of his work on this topic.) Thus his view of consciousness – or the question of consciousness – is not what one thinks in these two senses simultaneously. Let us consider the dualistic or introspectionist picture first. It is actually a mosaic of a number of elements, and understanding it is a prerequisite for understanding its repudiation in behaviourism.
1 The dualist–introspectionist picture To articulate these elements, first, dualism: the traditional ontological separation of mind from matter, of the mental from the physical, and of mind from body,
all conspire to generate the philosophical picture of a Great Divide between two kinds of entities: material entities are extended in space; mental entities, while in a sense in the mind, are not similarly extended. (This dualistic picture led to the classical problem of the causal interaction of mind and matter, or specifically how an immaterial substance could causally act upon a material substance.) But on this dualistic picture they are not in the mind as a chair is in a room; it is, rather, that the presence of any such immaterial entity is conceptually modelled upon the presence of a chair in a room. This conceptual modelling led to the discussion of the Cartesian2 theater and its immaterial furnishings, the inner world of consciousness and its similarly inward contents. It led as well to the picturing of mental experiences as mental objects (as we shall see, a central concern of Wittgenstein’s critical reflections). Second, I referred just above to those contents as ‘pure’: this word is called into (as we also shall see, perhaps illicit) service by this model because the immaterial contents are thought to be in their essence unrelated to any external thing – they do not gain any part of their identity by standing in relation to anything outside the inner theatre or anything ontologically of a different kind from them. Thus, the very notion of hermetic enclosure comes with this: ontological purity is taken as ensured by the Great Divide. Third, I used the description ‘pre-linguistic’ (as Wittgenstein will point out, a description already in, and not prior to, language): on the dualistic picture language is construed as invariably secondary to thought, or external to the content of consciousness (content that is, again, regarded as hermetically internal). 
This places language in the position of a translation, or of a code: language takes its place in this larger conceptual model or mosaic as a representational system for translating content prior to and ontologically separate from it. Or, similarly if terminologically different, language is regarded as an encoding of content prior to the arbitrarily attached symbols of the code in which it is expressed or outwardly delivered. (Wittgenstein will uncover instructive problems with the appropriation of these concepts – translating and encoding – into our understanding of what language essentially is, which as we shall see will in turn reveal a good deal about his nuanced understanding of consciousness.) In any case, this notion of pre-linguistic content as the raw material of consciousness that is later and only contingently translated or encoded is one central part of the mosaic of dualistic elements. Fourth, the concept of metaphysical privacy is also a primary constituent of this larger picture. We have (as Wittgenstein will remind us) a highly developed and variegated concept of privacy as we use it in life and language: we can
keep thoughts to ourselves, we can hide objects and ideas, we can have hidden agendas, we can conceal just as we can reveal. We can declare certain matters private, just as we can distinguish between public and private correspondence or speak on or off the record. Metaphysical privacy is philosophically different from all of these, and for some it lies at the heart of any question concerning human consciousness. It is a special form of inviolable privacy that is (assumed on this model or picture to be) the most fundamental fact of human existence. On this view we from the start are non-relationally independent of each other and begin in a private world that makes our ordinary usages of the concept of privacy seem in fact public by comparison. The metaphysical version of ‘private’ requires a special usage that is regarded as the most fundamental single-word description of the human solitary predicament. It describes the sealed interior from which we are then thought to move outwards into the world and into the presence of others only by a kind of inferential or analogical bridge that is (like the picture of language just above) always contingent, always inviting or preserving a place for scepticism concerning our knowledge of anything or anyone beyond the limits of our metaphysically bounded conscious interior, and always at an epistemologically reduced station, always inferior to what we indubitably know inside where we are guaranteed against error. That leads, fifth, to the final element, which is first-person introspection. On this view the contents of consciousness are always transparently knowable by directing our inner gaze upon them; this is modelled upon our closely scrutinizing a given object close up in bright light with 20/20 vision – but turned inwards. It is vision with the mind’s eye, and it is directed at the mind’s contents: inspection becomes introspection. 
And according to this model, consciousness is what allows us to experience mental content within the Cartesian theater, and our introspective capacity places us in the invariably privileged position of having direct or unmediated access to those mental experiences. These five elements, then, together articulate the more precise first meaning of the phrase ‘what you think’. But what then of the second meaning, the other sense of the double entendre?
2 The behaviourist antithesis Behaviourism is the polemical antithesis of the dualist picture of consciousness. It regards the elements of the dualistic picture just described as philosophical
mythology and thus consciousness as articulated on that picture as epiphenomenal or a kind of conceptual illusion. By repudiating that picture, it leaves us with a general or overarching explanation of human action that rather than seeing it as the physical translation or contingent embodiment of prior inner and metaphysically private intentional content, sees only physical stimulus – response relations and behaviouristic mechanisms. Removing the ‘black box’ (the theoretically posited retainer of hidden mental content) from consideration, behaviourism comes close to either rejecting consciousness as traditionally conceived or re-describing it so minimally that one might wonder if there is any real part of what we generally regard as consciousness left; that is, there is only a form of physicalistic monism (in place of dualism) remaining. But then it is true that there appears, at least initially, good reason to embrace behaviourism, again because of the powerful criticisms Wittgenstein brought (as we will shortly see) against the five elements of the dualistic picture. If we start with mind–body dualism, proceed to a critique of the mind-side of that dichotomy,3 and then ask what is left, the answer does seem simple. Simple, but, as Wittgenstein also shows, wrong – and interestingly so. As we will also shortly see, it is not the mind-side of the dichotomy that he is critiquing, where that critique would then leave the body-side untouched. Thus in critiquing these elements of dualism, he is not thereby explicitly or implicitly arguing for monism. Rather, he is investigating the intellectual impulses and temptations to posit the Great Divide (where that positing takes form for present considerations as the picture of the metaphysical seclusion of consciousness) and, with that Divide then structuring our subsequent thought, to see all the complexity of human life as reducible in essence to mind (dualism) or to matter (behaviourism). 
So, if not in one-side-or-the-other polemical terms, how does Wittgenstein proceed in his investigation into consciousness?
3 Wittgenstein’s mode of inquiry A fundamental methodological approach that Wittgenstein uses throughout his philosophy, and certainly in his considerations of the nature or character of consciousness, is to examine details of our language concerning the phenomenon at hand. This often yields a form of conceptual clarification that, for its expansive character, is resistant to summation or to any kind of ‘ism’ (that is, behaviourism, monism, dualism). He often begins such examinations by destabilizing an entrenched dichotomy that is too easily taken as granted or
as a fixed starting point. In the light of this, consider this passage, Philosophical Investigations4, Sec. 421: It seems paradoxical to us that in a single report we should make such a medley, mixing physical states and states of consciousness up together: ‘He suffered great torments and tossed about restlessly.’ It is quite usual; so why does it seem paradoxical to us? Because we want to say that the sentence is about both tangibles and intangibles. – But does it worry you if I say: ‘These three struts give the building stability?’ Are three and stability tangible? – Regard the sentence as an instrument, and its sense as its employment.
A great deal of the Wittgensteinian approach to philosophical difficulty is intimated in this section. First, he notes that the mixing of physical states with states of consciousness seems paradoxical, but that the air of paradox is generated only by our having first implicitly subscribed to the Great Divide model, where the two categories should be kept apart by metaphysics. Second, he effects a sudden reorientation in our thinking by giving an example from elsewhere in life that casts new light on the present case (of tormented tossing); that new example unproblematically combines the number three and the production of stability by tangible struts in a building, in such a way that we do not quite know how to categorize the three and the stability. We are left to think: if, first, we try to stay with the dichotomy, then we face the fact that tangibles and intangibles are inseparably combined; if, second, on the other hand we reject the dichotomy, then we are decisively placing practice over theory – methodologically letting the examples speak first, and acknowledging that they speak most clearly. Third, we are then left to ask ourselves: Why are we concerned about the first medley and not the second (the second, on its own, would not have so much as attracted our attention with regard to mixed tangibles and intangibles)? The answer is, of course, that in the tormented-tossing case we are speaking of a person's consciousness, and Wittgenstein is showing how that very concept can awaken, or insinuate, dualistic metaphysical expectations. But for Wittgenstein the air of paradox need not survive our placing this case alongside others of precisely the kind he gives – cases in which the inner–outer dichotomy seems not to apply or where it need not be invoked to fully describe and comprehend the case at hand.
This is the meaning of his final comment concerning the sense of a sentence and its employment: let us look to the language we actually use, and consider what it does, what it performs, what its point is, in situ. Wittgenstein is consulting our actual linguistic usage (against the above-described dualistic model of language) as relevant to our understanding of states
62
The Bloomsbury Companion to the Philosophy of Consciousness
of consciousness. He shows that such language is always relationally intertwined; it does not function as a mere contingently attached vehicle or encoding-system for carrying pre-linguistic content. Thus, he writes of an experience we might well take to be the perfect case of hermetically sealed inner conscious content, in Remarks on the Philosophy of Psychology, Vol. II,5 Sec. 150:

The concept of pain is simply embedded in our life in a certain way. It is characterized by very definite connections. Just as in chess a move with the king only takes place within a certain context, and it cannot be removed from this context. To the concept there corresponds a technique. (The eye smiles only within a face.)
To select a single piece from a chessboard, to pick it up, and then focus our attention wholly and exactingly on it as isolated would of course never show us anything about the role, the character, the possible moves, the meaning, of that piece. One sees what it is in, and only within, its context. Our understanding of the concept of pain and its expression – although we might, under the influence of the dualistic picture, initially regard it as the sine qua non of hermetic inner conscious content – functions in the same way. Consequently, in the next section Wittgenstein adds:

Only surrounded by certain normal manifestations of life, is there such a thing as an expression of pain. Only surrounded by even more far-reaching particular manifestations of life, such as the expression of sorrow or affection. And so on.6
Just as, in microcosm, an eye smiles only within a face, so in macrocosm the expression of pain is only possible, only comprehensible and only intelligible, within a more expansive form of life. The expression of sorrow is neither discernible nor comprehensible as a single facial movement or isolated verbal utterance; the chess-piece approach could never succeed. But if the elements of the dualistic inner-to-outer picture were true as stated, then the implicit question is: Why indeed not? This directly links to the modelling of introspection on inspection; inspection turned inwards was to serve as the way in which we come to know (in a uniquely privileged and unmediated way) the contents of consciousness. What consciousness on that picture does is to introspect upon, identify and categorize those inward furnishings (items of consciousness, such as hopes, fears, aspirations, regrets, plans, ambitions, resolutions, ambivalences, memories, intentions, reinterpretations and countless articles of knowledge and belief). This, as Wittgenstein knows, is an attractive picture or model that presents itself
Wittgenstein and the Concept of Consciousness
63
when the concept ‘consciousness’ and the concept ‘self-knowledge’ are thought of together. Wittgenstein writes:

I want to talk about a ‘state of consciousness’, and to use this expression to refer to the seeing of a certain picture, the hearing of a tone, a sensation of pain or of taste, etc. I want to say that believing, understanding, knowing, intending, and others, are not states of consciousness. If for the moment I call these latter ‘dispositions’, then an important difference between dispositions and states of consciousness consists in the fact that a disposition is not interrupted by a break in consciousness or a shift in attention ... . Really one hardly ever says that one has believed or understood something ‘uninterruptedly’ since yesterday. An interruption of belief would be a period of unbelief, not, e.g. the withdrawal of attention from what one believes, or, e.g. sleep. (The difference between ‘knowing’ and ‘being aware of’.)7
Profoundly respecting the language we have developed across large spans of usage in our form of life to speak of consciousness and conscious events or acts, Wittgenstein sees here that the facts of that language are in strong and direct conflict with what the dualistic picture would imply: because of the ‘interruption’ problem, and what we would say about it (an interruption of belief would be a period of troubled belief or unbelief, say in theological circles8), we can rightly say that we have intentions to do X, that we know X, that we have been thinking about X, that we are reconsidering X, and countless related first-person statements that, while true descriptions of our mental life, are not true by virtue of introspective ‘spotlighting’ of those mental furnishings. ‘I have understood the general theory of relativity uninterruptedly since Thursday’ is a sentence that does not wear its sense on its sleeve; predictable replies might be ‘What are you talking about?’ or ‘Is that philosophical humor?’ Knowledge, and particularly self-knowledge, are both severely miscast by the dualist picture; if we follow its dictates and speak accordingly, we speak nonsense. (Wittgenstein believes this happens throughout philosophy, and the way back to sense is to respect actual language.) But extending Wittgenstein’s point, it is as a corollary not at all true (as it should be if the picture were correct) that we confirm what we believe by training the introspective spotlight on an inner item of consciousness. As we saw above, where the model of the mental is drawn from the physical, we here – impelled by the pre-positioned dualistic picture – think that checking on a belief must be the inward ‘mind’s eye’ variant of checking to see if an old suitcase is still in the attic. What Wittgenstein is here9 calling dispositions do not work like that; and this becomes a fundamental observation about the character of consciousness and its ‘objects’. ‘Do you know
that Jones is coming on Tuesday?’ has a different use, a different sense and different point, than ‘Are you aware that Jones is coming on Tuesday?’ Knowing X is not describable, without difference, as being aware of X (and being aware is not a matter of inner ‘spotlighting’ anyway).
4 The observational model of consciousness (and its insoluble problems)

But then matters, as we might begin to expect, are more complex still: there are mental events, or things that take place in consciousness, that are not the kinds of things to which we can direct our attention. This, on the dualist model, is deeply counterintuitive (and in fact should be an impossibility). Wittgenstein writes:

Where there is genuine duration one can tell someone: ‘Pay attention and give me a signal when the picture, the rattling etc. alters.’ Here there is such a thing as paying attention. Whereas one cannot follow with attention the forgetting of what one knew or the like.10
Forgetting should be, after all, a phenomenon of consciousness – we do not forget in any other place. Yet, while a phenomenon of consciousness, it is not one upon which we can focus attention, on which we can shine an introspective spotlight as it is happening. It does happen, and it happens in the mind (again, where else?), yet it is not introspectable. Wittgenstein’s point is that the inward attention-directing model, the introspectionist model as derived from outward inspection, simply cannot accommodate this. And yet this is an undeniable fact of our mental lives. ‘I was watching myself forget my Latin vocabulary, inwardly seeing each single word quietly disappear – I watched each one go, each one just fading to black’ is instructive nonsense. So the observational model of self-knowledge – the privileged first-person knowledge that the introspectionist element of the five-part mosaic picture of consciousness described above should automatically deliver – is in increasing difficulty under Wittgenstein’s investigation. Consciousness may not work like that. He writes:

Think of this language-game: Determine how long an impression lasts by means of a stop-watch. The duration of knowledge, ability, understanding, could not be determined in this way.11
The observational model of consciousness is derived, as we saw above, from outward cases then turned inwards. But as we are now seeing, questions such as ‘Do we still have that bottle of Barolo that we picked up in Tuscany?’, answered by ‘I’ll check in the cellar’, are instructively not – not at all – parallel to questions such as ‘Do I still love her?’ There is no single location within consciousness to find and definitively identify the bounded ‘mental object’ that is the love (or the hope, fear, aspiration, regret, plan, reinterpretation and so forth); the concept, one wants to say with Wittgenstein, does not work that way. Hence the parallel response turned inwards, ‘I’ll check in the mind’, is instructively disorienting. Sharply focusing the point, Wittgenstein writes:

The general differentiation of all states of consciousness from dispositions seems to me to be that one cannot ascertain by spot-check whether they are still going on.12
If consciousness were an inner repository of mental objects, we would be able to spot-check any of its furnishings at any time; in truth, as Wittgenstein is revealing step by step, to take this approach gives us no more understanding of what we think of as mental contents of consciousness than staring at the single isolated chess piece will reveal its nature, its function. In referring to William James’s discussion of consciousness and the self, Wittgenstein writes:

James’s introspection showed, not the meaning of the word ‘self’ (so far as it means something like ‘person’, ‘human being’, ‘he himself’, ‘I myself’), or any analysis of such a being, but the state of a philosopher’s attention when he says the word ‘self’ to himself and tries to analyze its meaning.13
It is here that Wittgenstein makes explicit what is for him the deep connection between issues of linguistic meaning and issues of consciousness. His way forward in understanding consciousness is to now pay the closest attention (as he has implicitly been doing all along) to the language we use, to (as he calls them) the language-games, the circumscribed contexts of discourse, of consciousness and the varying phenomena of being conscious of a given thing. Earlier in his work Wittgenstein had said that seemingly insoluble philosophical problems arise ‘when we look at the facts through the medium of a misleading form of expression’. Precisely this is happening here, where we picture the conscious mind on the model of a room containing objects, and where we thus think (not unreasonably given that presupposition) that, directing our introspective attention not on any particular mental object but on the inner room itself, on the container, we become ‘witnesses’14 to our own consciousness.
As we saw briefly above, a traditional conception of word-meaning (the one Wittgenstein is unearthing and supplanting) is that any case of such meaning (any use of any word) is determined by inward mental content that precedes the contingent outward utterance or its attachment to an external sign. According to this dualistic picture of language, a speaker could answer any question concerning what they mean by stopping, turning the introspective gaze upon the pre-verbal intention, and reconfirming the accuracy of the ‘translation’ or the ‘encoding’. In truth, as Wittgenstein shows, there are countless kinds of questions concerning word-meaning; one of them is where a speaker stops to reflect on the implications of what she has said – and that reflection will not take place as an act of introspection of the kind pictured. Wittgenstein is suggesting that this is true of the word ‘consciousness’ as well. We need to situate it in contexts of usage, assemble a good collection of those, and then, having broadly considered what he calls the (philosophical) ‘grammar’ of the concept, see how it compares to, relates to, stands in conflict with, or exposes the hidden incoherence of, the philosophical picture at hand. So, to state where we are with Wittgenstein succinctly: it is only by a submerged yet influential misleading analogy, or by ‘looking at the facts through the medium of a misleading form of expression’, that we conceptually ‘paint’ the five-element dualistic–introspectionist picture and draw the illicit ‘inspection–introspection’ relation, and then see what we take to be the problem of consciousness in terms of that picture. We then take the final turn described just above, making the ‘room’ of consciousness itself into another (if more capacious) inner object and then introspecting upon, or inwardly witnessing, that.
This, one thinks under the influence of these collaborating intellectual influences, would give one the invariant and definitive meaning of the word ‘consciousness’. Wittgenstein, having unearthed the conceptual impulses and misleading analogies that lead us down this road, turns in an entirely different direction. The way to understand consciousness for Wittgenstein is to understand, in a way free of the dualistic picture of linguistic meaning, the meaning of the word as we use it in all its multifarious employments. In accordance with this, and emphasizing the necessity of context for genuine comprehension (and so anything but the ‘chess-piece’ approach), Wittgenstein writes:

Whom do I really inform if I say ‘I have consciousness’? What is the purpose of saying this to myself, and how can another person understand me? – Now sentences like ‘I see’, ‘I hear’, ‘I am conscious’ really have their uses. I tell a doctor ‘Now I can hear with this ear again’, or I tell someone who believes I am in a faint ‘I am conscious again’, and so on.15
Having a point, having a purpose in saying something, being understood within a context – these are all matters of particularized usages within our language-games. A drawing together of such usages concerning mental life, considered in connection with what we easily take to be the general question of consciousness, will tell us much more than attempting to follow the dictates of the philosophical image of dualistic introspection ever could.
5 The linguistic approach

Then one could, and I think should, quite reasonably ask: Does this approach not convert the entire issue to one of descriptive linguistics? We want to know about consciousness, not the word ‘consciousness’. However, the entire Wittgensteinian direction on this topic suggests that this separation itself is a philosophical myth, itself a manifestation of the Great Divide and the attendant idea of content prior to, ontologically separate from, and only contingently attached to, language (that is, where the mind/matter and intangible/tangible distinction has transmuted into the thought/speech distinction). On this matter Norman Malcolm captured a point about Wittgenstein’s philosophical work more broadly, but it is of considerable importance to understanding Wittgenstein’s approach to the problem, the very question, of consciousness:

Wittgenstein says that his philosophical observations are ‘remarks on the natural history of human beings’. It would be difficult to exaggerate the significance of that comment. It is often said that Wittgenstein’s work belongs to ‘linguistic philosophy’ – that he ‘talks about words’. True enough. But he is trying to get his reader to think of how the words are tied up with human life, with patterns of response, in thought and action. His conceptual studies are a kind of anthropology. His descriptions of the human forms of life on which our concepts are based make us aware of the kind of creature we are.16
Language, properly understood, is not merely a set of descriptions or factual assertions that are posterior to life as lived and experienced, nor is that life anterior to language. Wittgenstein’s point is that these are not disparate categories.17 Thus, for Wittgenstein, as with so many other philosophical issues, the separation of the study of consciousness from our language concerning consciousness is impossible – so the right response to the preceding question is not to attempt to justify ‘mere’ language, but to question the distinction that the question presupposes. The approach is thus not badly called, with
Malcolm, ‘a kind of anthropology’. We are now better positioned to see why this approach requires an acute sensitivity to language as we use it.18 In light of these observations, consider Wittgenstein’s next remarks:

Do I observe myself, then, and perceive that I am seeing or conscious? And why talk about observation at all? Why not simply say ‘I perceive I am conscious’? – But what are the words ‘I perceive’ for here – why not say ‘I am conscious’? But don’t the words ‘I perceive’ here show that I am attending to my consciousness? – which is ordinarily not the case. – If so, then the sentence ‘I perceive I am conscious’ does not say that I am conscious, but that my attention is focused in such-and-such a way. But isn’t it a particular experience that occasions my saying ‘I am conscious again’? – What experience? In what situations do we say it?19
In later writings, Wittgenstein discusses what he calls ‘the application of a picture’, and by extension a conceptual picture’s or schematic model’s prismatic misapplication. When he initiates the section just above by asking if he observes himself and perceives if he is conscious, he is suggesting that not only (as we have seen) is the picture of inspection (such as of an object in bright light, perhaps through a magnifying glass and the like) as source material for the picture of introspection misleading and out of place, but also now that the very idea of perceiving itself invites misleading correlations or awakens misleading associations and analogies – in this case analogies to seeing, where we too easily take an ocular metaphor (‘inner vision’ or the ‘mind’s eye’ or ‘looking within herself’) as the literal description of a sensory/perceptual phenomenon. Where we see consciousness itself on the model of an activity, the problem of misleading analogies that Wittgenstein is unearthing here is only worsened. But how so, precisely? In an earlier remark in the course of rethinking linguistic meaning, Wittgenstein wrote:

Perhaps the main reason why we are so strongly inclined to talk of the head as the locality of our thoughts is this: the existence of the words ‘thinking’ and ‘thought’ alongside of the words denoting (bodily) activities, such as writing, speaking, etc., makes us look for an activity, different from these but analogous to them, corresponding to the word ‘thinking’. When words in our ordinary language have prima facie analogous grammars we are inclined to try to interpret them analogously; i.e. we try to make the analogy hold throughout. We say, ‘The thought is not the same as the sentence; for an English and a French sentence, which are utterly different, can express the same thought’. And now, as the sentences are somewhere, we look for a place for the thought.
(It is as though we looked for the place of the king of which the rules of chess treat, as opposed
to the places of the various bits of wood, the kings of the various sets.) – We say, ‘surely the thought is something; it is not nothing’; and all one can answer to this is, that the word ‘thought’ has its use, which is of a totally different kind from the use of the word ‘sentence’.20
The dualistic picture, and the picture of a location (where this means the inner hermetic room that houses observed objects), is motivated by, fuelled by, reflections such as these, and Wittgenstein’s response is to identify them, show the precise ways in which they are produced by, and then still further produce, misleading analogies, and then return the words expressing these philosophical pictures to their intelligibility-ensuring contexts of actual usage.21 And so next, the question: Is a thought as expressed in a context itself the kind of thing we want or need to ‘get behind’ to get at what it really is or to get at its real content? Of this question, Wittgenstein somewhat later in his discussion says that in response to a question about what a person is thinking, when that person answers in usual direct and unproblematic ways, ‘I’d never say, these are just words, and I’ve got to get behind the words’.22 Or: as though words were insufficient to capture, to manifest, to constitute or to express without mediation, translation or encoding the contents of consciousness. Although such content may well be concealed in particular cases concerning sensitive knowledge, it cannot on closer examination be understood as generically private in the metaphysical sense.23
6 Recasting introspection, understanding privacy

The five elements making up the dualistic picture were (1) mental objects in the Cartesian theater; (2) pure mental entities not standing in relation to any external thing (including contexts of discourse); (3) pre-linguistic content only contingently attached to signs for outward delivery; (4) metaphysical privacy and its corresponding indubitable self-knowledge; and, linked to this, (5) introspection as inward-inspection. All five are now in (at best) a state of reconsideration, of rethinking; it is not at all clear that taken together they can generate an understanding of the nature of consciousness, nor is it clear that any one alone is defensible when brought up against the language we actually use in connection with consciousness. But recall that the second side of the double entendre with which we started concerned the reduction to behaviourism. Because this position is polemically or oppositionally structured and subtractive, it can be addressed much more
quickly. As we saw briefly, if the elements of the dualistic picture are destabilized, and yet we stay with the mind–body structure of the question, the reduction of the mental aspects of intentional action and consciousness to physical stimulus–response mechanisms and patterns seems the next plausible option. On this view consciousness becomes the philosophical equivalent of an optical illusion; we attribute qualities to things not really there, we project where we think we perceive, and we correspondingly develop a vocabulary of consciousness that in reality is nothing more than language addressing an epiphenomenon – as if we were classifying after-images along with tables and chairs. But Wittgenstein – and this is a fact surprising to those who would reduce him to a behaviourist – does not in his reconsiderations of the dualistic picture and the Great Divide eliminate introspection. Rather, he thoroughly recasts it (and to a greater extent, gives us the materials to recast it – he always leaves a great deal to his reader) in terms emerging from our practices rather than in terms deriving from an underlying picture.
Those practices, collecting themselves as what we actually do to introspect – to reconsider, to come to see connections between episodes of life, to discern previously hidden but now emergent patterns or lineages of action, to structure and restructure the narrative sense of a life, to speak in a personally exploratory way with confidantes, to ‘try on’ varying formulations and positionings of a significant event or occurrence, to piece together what initially seem detached or isolated experiences into a larger coherent whole, to make connections within and across a life, or to undergo a process of change such that we now have in view such connections or a previously concealed repetition compulsion – these and other reflective engagements of this kind are the real content of introspection, and they are what anyone would identify as central occupations of human consciousness and what we might here call a conscious or self-aware life. Yet these broadly introspective activities do not fall into neat ontological categories any more than do ‘three’ and ‘stability’. They cannot be checked as can the Barolo in the cellar. They do not come with fixed boundaries as the mental analogues to physical objects.24 Behaviourism, as a subtractive philosophical position, would remove and reject all this; Wittgenstein, and a more broadly developed Wittgensteinian position, does not. Then behaviourism would insist that behaviour is always evidence. We make inferences from it, and we assemble our sense of a person, such as it is, from that evidence. Wittgenstein, by now predictably, would look to particular cases in which we see slices or hints of behaviour as evidence – seen against the much more numerous normal cases in which we do not. (We might catch a
defendant in a court of law glancing fleetingly and meaningfully to his alleged co-conspirator just as an allegation is voiced, we might discover an inconsistency in his testimony, or the like.) Wittgenstein points out that the behaviouristic perception of a human being is disorientingly alien to our actual natural interactive social practices within what he calls our form of life; indeed, the very notion of perceiving a person itself requires a special context.25 Such a view would also place us at an inferential distance from others and their emotional states, which is also, when fully described, for him (and thus for us, given our socially rooted and evolved practices) inhuman. Of course, we often have knowledge of a person’s state or condition without being able to articulate evidence for such knowledge; if we were to stay with the inferential picture (which Wittgenstein is saying deeply falsifies our awareness of each other), then it is as if we have the conclusion of the evidence before we have defined the evidence, or drawn the inference before we have identified the material that supports the inference. The reality of such human understanding and cross-consciousness (‘other-minds’) knowledge is explored and shown in literature often at profound levels of examination.26 If, as Wittgenstein claims, the language we use in such cases is fundamental to any understanding of them, then literature would seem a natural repository of such language and thus of potentially massive service to philosophical understanding.27 In any case, finally under this heading we have a reduction of person-perception (again, an interestingly dangerous conjunction of words) to that of body-perception. Wittgenstein, and those working in his tradition, have done much to reveal the severe warping or fracture of our language that such a reduction would entail.
We are not fundamentally living in a world in which we see bodies first and then on inferential or evidential grounds impute merely speculative conscious content onto them. (Thus, one might see bodies at a horrific crime scene; such perception is not foundational, or the truly real element of perception, beneath our seeing of persons.) Here again, Wittgenstein gives priority to the language we use, and sees how the relevant concepts work in accordance with that. Wittgenstein is thus miscast as any variety of behaviourist – he is rethinking both sides of the dichotomy that rests beneath (and, as we have seen above, motivates) so much of our philosophical thinking about the mind and consciousness. Although not easy to understand without all of the preceding considerations behind us, a remarkable summation of the foregoing points is to be found in his late writings on the philosophy of psychology:
‘I know what I want, wish, believe, hope, see, etc., etc.,’ (through all the psychological verbs) is either philosophers’ nonsense or at any rate not a judgment a priori.28
That is, the claim that one knows all such things automatically by virtue of being the possessor of one’s own consciousness, by virtue of having transparent privileged access to the contents of that consciousness that are metaphysically hidden from others outwardly and metaphysically invariably open to unmediated introspection inwardly, is what he now calls philosophers’ nonsense, precisely because that philosophical view is impelled by the large-scale picture that blinds one to relevant (and meaning-determining) particularity. The illusion of sense of those kinds of claims is created through an illicit act of meaning-borrowing, or indeed what one might see as conceptual smuggling, from particular cases that do make sense, cases that do use the relevant concepts and words within contexts of readily intelligible discourse. (The word ‘privacy’, in its special metaphysical sense as discussed above, borrows meaning in precisely this way.) But then Wittgenstein adds, encapsulating the heart of the matter concerning our knowledge of our own consciousness-content as expressed in psychological verbs: ‘or at any rate not a judgment a priori’. We might know many things concerning our wants, wishes, hopes, beliefs and so forth, and we might know them thoroughly – but this is not an a priori fact of existence given by the nature of consciousness. What this encapsulates is the observation above concerning, not philosophically mythological introspection, but real (non-dualistic) introspection; again, one may have to work to gain knowledge of these aspects of selfhood, work by reconsidering, by reflecting, by repositioning life episodes, by consulting and conversing, by discerning subtle patterns and by all the other things mentioned above (and much, much more).29 Consciousness (if indeed we call it that in particular cases of self-reflection) can in these senses be explored – but not in anything like the way the philosophical picture dictates.
We do not know all of our wishes, wants, hopes, fears and so forth equally – our self-knowledge is differentiated across cases, a fact very difficult to explain on the dualist–introspectionist model. Where then do Wittgenstein’s reflections on the matter leave us? We see perhaps more clearly now why his reflections do not coalesce into an ‘ism’. The very conception of philosophical progress is fundamentally different, and his remarks do not allow assembly into a theory of consciousness. What they do instead is to remove by careful conceptual excavation, and identify with the most exacting language, the presuppositions and pictures that are often in play (and
often unwittingly) from the first stages of a philosophical investigation – where those pictures then direct, from beneath, both how the rest of the inquiry will proceed and what will and will not be seen as progress. Wittgenstein works to clarify the actual ‘grammar’ (as he used the term above) of our concepts; this will be a fluctuating and evolving linguistic phenomenon, in such a way that definitive or invariant lines cannot be set down in advance between sense and what he called ‘philosopher’s nonsense’.30 We will be able to see in a given case – for us presently, cases of thinking and talking about the character and nature of consciousness and its contents – where the impulses to think and speak a certain way have been generated by previously undetected or unanalysed pictures. That is then, for him and for this approach, philosophical progress. He clears the way, and leads us back to an instructive scrutiny of our language-games of mental life.31 Of those games, those circumscribed contexts of interactive linguistic usage, it is vital to see that Wittgenstein never attempted to reduce the language-game of mental content, of philosophical psychology, to the language-game of physical objects, or to claim either one as primary to the other. They remain for him differing and yet complexly interrelated spheres of language, and, as we have seen above, it is by returning to this language free of prismatic distortion, free of misleading analogies, free of picture-driven and overgeneralizing and oversimplifying impulses, that we will gain insight through a philosophical attentiveness to language in the way Malcolm described it.
At one stage Wittgenstein invoked the distinction between saying and showing; to readapt that distinction for present purposes, we might posit that, while ‘saying’ on the topic of consciousness can often be dictated by underlying schematic conceptual pictures, ‘showing’ – in the form and at the length of a novel32 – can display our consciousness-related concepts in action. Piecing together that kind of mosaic of examples, of cases, of the words (including Wittgenstein’s collection of ‘all the psychological verbs’) and thoughts of fully imagined characters in context, can show us much of what ‘consciousness’ actually means, and thus what consciousness actually is.
Notes

1 There is good reason to see an influence of Schopenhauer on his younger view of the matter (linked in ways to a solipsistic picture and private mental enclosure) that did not survive his mature reconsiderations; for an incisive brief discussion of this along with helpful references, see Glock (1996), 84–6.
2 I discuss this (and the matter of whether ‘Cartesian’ as it has been used in twentieth-century philosophy describes Descartes) in Hagberg (2008), 1–14, on ‘Confronting the Cartesian Legacy’.
3 For a particularly helpful anthology of writings incorporating both sides (and more) of this dichotomy, see Rosenthal, ed. (1991), especially the pieces brought together in ‘Mind as Consciousness’, 15–81 and ‘Consciousness, Self, and Personhood’, 422–77.
4 Wittgenstein (2009).
5 Wittgenstein (1980).
6 Ibid., §151.
7 Ibid., §45.
8 Quite apart from the issue of the possibility of any form of religious knowledge, there is a good deal under this heading that holds direct significance for the understanding of consciousness and its contents. Bambrough (1991), 239–50, asks ‘Can we, by taking thought, alter either our theoretical beliefs or our practical attitudes?’ He quotes in this context John Henry Newman, who wrote in his Parochial Sermons, ‘Which of our tastes and likings can we change at our will in a moment? Not the most superficial. Can we then at a word change the whole form and character of our minds? Is not holiness the result of many patient, repeated efforts after obedience, gradually working on us and first modifying and then changing our hearts?’ (243). Reflections of this kind bring into higher relief the telling difference between our ability to change the furniture in a room and change the ‘furnishings’ of consciousness.
9 Wittgenstein is here making a comparison to show a clarifying difference; he is not laying the foundation for a dispositional account of belief.
10 Wittgenstein (1980), §50.
11 Ibid., §51.
12 Ibid., §57.
13 Wittgenstein (2009), §413.
14 Ibid., §416.
15 Ibid.
16 Malcolm (1970), 22; quoted in Hallett (1977), 456. See also Malcolm (1995), 118–32, where he reconsiders the notion that ‘whenever a person has a conscious thought, desire or intention, that person is, or is operating from, “a point of view”’, or more broadly that the comprehension of consciousness necessitates our seeing it in terms of a subjective point of view. Malcolm’s discussion shows how fruitful a detailed Wittgensteinian analysis of presupposed general philosophical language concerning consciousness can be. (I return to this matter below.)
17 In his insightful discussion of the actual human role and power of names (implicitly working against the conception of language as arbitrary attachments under discussion here), Cioffi (1998) quotes a passage from Goethe in which he expresses annoyance at Herder’s having taken ‘liberties with the name Goethe by punning on Goth’ and how this ‘provides an illustration of the peculiarly intimate relation in which we stand to our names’. Goethe writes, ‘It was not in very good taste to take such jocular liberties with my name; for a person’s name is not like a cloak which only hangs round him and may be pulled and tugged at, but a perfectly fitting garment grown over and around him like his very skin, which one cannot scrape and scratch at without hurting the man himself’ (166–7). Proper names function within linguistic consciousness in the intertwined and inseparable way Goethe captures here – and in a way the dualistic conception of language systematically misses.
18 For a presentation of this approach that powerfully conveys a sense of its philosophical value, see Rhees (2006), 243–56.
19 Wittgenstein (2009), §417.
20 Wittgenstein (1958), 7. Part of this passage is helpfully contextualized with related citations in Hallett (1977), 462.
21 I should note – although this is a partly separate matter – that the philosophical grammar of the word ‘meaning’ would require a parallel investigation of the kind I am discussing here for its clarification: the word ‘meaning’ is not the name of, or does not refer to, one single kind of generic entity any more than the word ‘consciousness’. Wittgenstein writes (1958), 18: ‘This again is connected with the idea that the meaning of a word is an image, or a thing correlated to the word. (This roughly means, we are looking at words as though they all were proper names, and we then confuse the bearer of the name with the meaning of the name)’.
22 Wittgenstein (2009), §503.
23 Here I am directly following Hallett’s (1977), 462–3, insightful connection of these passages.
24 In this connection consider Wittgenstein’s remark (1980), 77: ‘Nearly all my writings are private conversations with myself. Things that I say to myself tête-à-tête’. This is intelligible privacy – privacy within a public language. This connects directly to the much-discussed ‘private language’ issues, where the alleged inner private sensation (which Wittgenstein shows to be an incoherent notion) would be the meaning-determining referent of an external word naming inner consciousness-content. In connection with this issue, see the conceptually clarifying chapter by Schulte (1993).
25 See Wittgenstein (2009), §420, where he discusses the (instructively failed) attempt to see people as automata or as ‘behaving entities’ first, from which we would then draw an inference of humanity. This holds for self-consciousness as well as for consciousness of others; he considers the falsification of human expressivity on the behaviourist model in Wittgenstein (1980), Vol. I, §925, where he writes: ‘If someone imitates grief for himself in his study, he will indeed readily be conscious of the tensions in his face. But really grieve, or follow a sorrowful action in a film, and ask yourself if you were conscious of your face.’ One way to put this point, contra behaviourism, is that consciousness may be manifest in the face but it is not translated there. For a lucid discussion of this issue (including this and related passages), see Johnston (1993), 143–6.
26 It is of interest in this respect that we have the general categories of ‘philosophical novel’ and ‘psychological novel’. These literary categories hardly display fixed boundaries, but it would not be surprising if novels so categorized made contributions to understanding of precisely this kind.
27 I offer a discussion of the contribution autobiographical and self-descriptive writing can make to this kind of conceptual understanding in Hagberg (2008).
28 Wittgenstein (1982), §881.
29 For an intricate discussion of the sort of process of self-inquiry I am suggesting here, see Wollheim (1984), 226–56. Wollheim’s focus on psychoanalytically working through a fantasy and its similarity to coming to understand a work of art captures the active, and not merely spectatorial, character of the process to which I am here alluding.
30 On this matter see Greve and Macha (2015).
31 Central among such language-games of mental life are of course autobiographies; see Cowley, ed. (2015), and DiBattista and Wittman (2014).
32 Or the length of a poem. In (1967), §155, Wittgenstein writes: ‘A poet’s words can pierce us. And that is of course causally connected with the use that they have in our life. And it is also connected with the way in which, conformably to this use, we let our thoughts roam up and down in the familiar surroundings of the words.’
References

Bambrough, R. (1991). ‘Fools and Heretics’, in Wittgenstein Centenary Essays, edited by A. Phillips Griffiths, Cambridge: Cambridge University Press.
Cioffi, F. (1998). ‘Wittgenstein on Making Homeopathic Magic Clear’, in his Wittgenstein on Freud and Frazer, 155–82, Cambridge: Cambridge University Press.
Cowley, C., ed. (2015). The Philosophy of Autobiography, Chicago: University of Chicago Press.
DiBattista, M. and Wittman, E. O., eds. (2014). The Cambridge Companion to Autobiography, Cambridge: Cambridge University Press.
Glock, H.-J. (1996). A Wittgenstein Dictionary, Oxford: Blackwell.
Greve, S. and Macha, J., eds. (2015). Wittgenstein and the Creativity of Language, London: Palgrave.
Hagberg, G. L. (2008). Describing Ourselves: Wittgenstein and Autobiographical Consciousness, Oxford: Oxford University Press.
Hallett, G. (1977). A Companion to Wittgenstein’s Philosophical Investigations, Ithaca: Cornell University Press.
Johnston, P. (1993). Wittgenstein: Rethinking the Inner, London: Routledge.
Malcolm, N. (1970). ‘Wittgenstein on the Nature of the Mind’, in Studies in the Theory of Knowledge, edited by N. Rescher, 9–29, Oxford: Blackwell.
Malcolm, N. (1995). ‘Subjectivity’, in Wittgensteinian Themes, edited by G. H. von Wright, 118–32, Ithaca: Cornell University Press.
Rhees, R. (2006). ‘Philosophy, Life, and Language’, in his Wittgenstein and the Possibility of Discourse, 2nd ed., edited by D. Z. Phillips, 243–56, Oxford: Blackwell.
Rosenthal, D. M., ed. (1991). The Nature of Mind, New York: Oxford University Press.
Schulte, J. (1993). Experience and Expression: Wittgenstein’s Philosophy of Psychology, Oxford: Oxford University Press.
Wittgenstein, L. (1958). The Blue and Brown Books, Oxford: Blackwell.
Wittgenstein, L. (1967). Zettel, edited by G. E. M. Anscombe and G. H. von Wright, translated by G. E. M. Anscombe, Oxford: Blackwell.
Wittgenstein, L. (1980). Culture and Value, translated by Peter Winch, Oxford: Blackwell.
Wittgenstein, L. (1980). Remarks on the Philosophy of Psychology, Vols. I and II, edited by G. H. von Wright and H. Nyman, translated by C. G. Luckhardt and M. A. E. Aue, Oxford: Blackwell.
Wittgenstein, L. (1982). Last Writings on the Philosophy of Psychology, Vol. I, edited by G. H. von Wright and H. Nyman, translated by C. G. Luckhardt and M. A. E. Aue, Chicago: University of Chicago Press.
Wittgenstein, L. (2009). Philosophical Investigations, rev. 4th ed., edited by P. M. S. Hacker and J. Schulte, translated by G. E. M. Anscombe, P. M. S. Hacker and J. Schulte, Malden: Wiley-Blackwell.
Wollheim, R. (1984). The Thread of Life, Cambridge, MA: Harvard University Press.
5
‘Ordinary’ Consciousness Julia Tanney
1. Philosophers who insist that something both essential and ineffable is missing from the predominant philosophical accounts of the mind – those that take as their starting place a rejection of Cartesian dualism and attempt to find a place for the mental in the physical world – have their work cut out. On the one hand, they must convince us that something that should be accommodated has indeed been omitted; on the other, and by their own insistence, just what has been left out cannot be articulated, even, evidently, by means of first-person narrative. So, they are faced with the conundrum of convincing us of the existence of something – and the need to accommodate it within a workable theory of mind – that is by its very nature inaccessible and indescribable. The strategy, thus, is to gesture at what is missing with efforts that always – and as we shall see, necessarily – fall short of a successful depiction, as the target remains just beyond reach. ‘Conscious experience’ is characteristically introduced by asking us to concentrate on particularly striking smells, tastes or visions and to remind ourselves of their differences. The scent of an orange is distinct from that of a strawberry, just as being immersed in purple is unlike how it feels to be swathed in pink. Natural language fails us here, we are told, since ‘the language we have for describing experiences is largely derivative on the language we have for describing the external world’.1 We are thus obliged to allude to the characteristic bearers of scent (such as oranges or strawberries) or employ normal colour words (such as purple and pink) as an approximation. Nonetheless, each of us is (almost certainly) aware from his or her own case of the striking experiences brought about by – or what it is like for us when – smelling an orange or being engrossed in purple.
If we cannot understand what we are supposed to apprehend, we are compared, dismissively, to those who have to ask what jazz is2 or encouraged to pinch ourselves or risk being pinched by another.3
Perhaps a reminder is in order, then, that questions such as ‘What was it like (for you)?’ call for rather special circumstances and constitute, in the main, an invitation for personal reflection and expression. With a sigh of pleasure, words of thanks, or a narrative that relives some emotions of a particularly moving experience or the terror of a traumatic one, a person conveys to another what she felt. Or what was missing. We do not expect the question – and thus it is not clear what could be asked – when the circumstance of the asking calls for us to elaborate upon commonplaces.

‘What was it like for you to drink that cup of coffee?’
‘It was nice, thanks.’
‘No: what was it like?’
‘What do you mean? I just told you. It was nice.’
……
‘What was it like when you pinched yourself really, really hard?’
‘It hurt.’
‘Describe the hurt.’
‘It felt as if I was pinching myself really, really hard.’
……
If this is what philosophers have in mind it is no wonder we seem to lack the requisite vocabulary for an answer. But the real problem is with the question. What about experiences that are genuinely special? Perhaps philosophers are not very talented at conveying particular trains of thoughts, feelings, sensations or memories that occur when something moves us in a particular way. Chalmers, for example, alludes compendiously to a ‘mysterious’, ‘almost ineffable’ experience of having his glasses fitted when he was young.4 But before we give in too quickly and conclude that such examples of ‘subjective experience’ will be incommunicable (or suggest that perhaps a formalism is needed in which at least the structural aspects of phenomenological data can be expressed5) – it will be useful to remind ourselves how it should be done. Consider the flood of subjective experiences brought on by a simple – but as it turns out momentous – bite of cake:

And soon, mechanically, weary after a dull day with the prospect of a depressing morrow, I raised to my lips a spoonful of the tea in which I had soaked a morsel of the cake. No sooner had the warm liquid, and the crumbs with it, touched my palate than a shudder ran through my whole body, and I stopped, intent upon the extraordinary changes that were taking place. An exquisite pleasure had
invaded my senses, but individual, detached, with no suggestion of its origin. And at once the vicissitudes of life had become indifferent to me, its disasters innocuous, its brevity illusory – this new sensation having had on me the effect which love has of filling me with a precious essence; or rather this essence was not in me, it was myself. I had ceased now to feel mediocre, accidental, mortal. Whence could it have come to me, this all-powerful joy? I was conscious that it was connected with the taste of tea and cake, but that it infinitely transcended those savours, could not, indeed, be of the same nature as theirs. Whence did it come? What did it signify? How could I seize upon and define it?6
Define it, Proust’s narrator does, after several attempts to remember: And suddenly the memory returns. The taste was that of the little crumb of madeleine which on Sunday mornings at Combray (because on those mornings I did not go out before church-time), when I went to say good day to her in her bedroom, my aunt Léonie used to give me, dipping it first in her own cup of real or of lime-flower tea. The sight of the little madeleine had recalled nothing to my mind before I tasted it; perhaps because I had so often seen such things in the interval, without tasting them, on the trays in pastry-cooks’ windows, that their image had dissociated itself from those Combray days to take its place among others more recent; perhaps because of those memories, so long abandoned and put out of mind, nothing now survived, everything was scattered; the forms of things, including that of the little scallop-shell of pastry, so richly sensual under its severe, religious folds, were either obliterated or had been so long dormant as to have lost the power of expansion which would have allowed them to resume their place in my consciousness. But when from a long-distant past nothing subsists, after the people are dead, after the things are broken and scattered, still, alone, more fragile, but with more vitality, more unsubstantial, more persistent, more faithful, the smell and taste of things remain poised a long time, like souls, ready to remind us, waiting and hoping for their moment, amid the ruins of all the rest; and bear unfaltering, in the tiny and almost impalpable drop of their essence, the vast structure of recollection.7
Here is a paragon description of ‘subjective experience’ – memories, thoughts, feelings and associations – that are aroused by immersion in a very particular act of sensing. Perhaps ineffability, then, is a merely local handicap? Although special events that arouse feelings, memories, associations and the like are mentioned in an attempt to get us to focus on what is meant by ‘phenomenal consciousness’, this is not always so. It is not (or not merely) these thoughts and feelings that may spring to mind when we, for example, taste, smell or touch something special in the way Proust so brilliantly brings to life. It is the
‘sense experience’ itself: the tastes, odours, views, feels and sounds themselves that are alleged to strike a note on the keyboard of consciousness. At least, that is the assumption. But, it is alleged – and this is an essential part of the argument – it could never be verified whether someone has such experiences or not. Whence the pessimism? Loss of sensory capacity – blindness and deafness, for example – is a common phenomenon. Those who have lost their eyesight or their hearing communicate very effectively the experiences they are no longer able to enjoy. Consider here a recent description of the less familiar condition of anosmia.8 Tom Laughton lost his sense of smell 25 years ago, when he was a 19-year-old student. He was assaulted by a stranger on the street, who smashed his nose in. Over the years, Laughton has had several operations, but his sense of smell remains damaged. Despite a happy marriage and a good job – he gives businesses psychology-based training to improve working relationships – he always feels something is lacking, particularly when the seasons change. He yearns for the scent of winter mornings and summer evenings. And it is not just individual smells he misses, he tells me, but also his ‘sense of place’ in the world. ‘With smell, when breathing in, the world comes inside us. Without smell, when I see things, they just stay where they are. They are nothing to do with me.’9
The writer of the article asks us to imagine not being able to smell a baby’s head when we hug him or her, or not to realize that someone is baking hot cross buns. She concludes: You can see why anosmia sufferers may feel isolated and angry. We do not notice how our sense of wellbeing is propped up by a thousand subtle odours, especially as the seasons change. It is the smell of hot asphalt and street food when you fling the windows open on a hot afternoon. It is bonfire smoke in winter and strawberries in summer. It is walking through the door at the end of a long day and knowing, even with your eyes shut, that you are home.
Cases of anosmia are genuine and, of course, verifiable examples of sense-loss. When a person who is unable to smell pretends that she can, or who is able but professes not to be, there are methods by which we can test her claims. Even reports of ‘phantom pain’ and tinnitus – when pain or a ringing sound occurs without the normal stimulus – are subject to tests to rule out feigning. If anything counts as ‘qualitative’ or ‘phenomenal’ experiences, these cases should. Even if in certain circumstances the question whether an individual has heard, seen or smelt something is undecidable, this is not in general so.
Clearly, if someone is blind, deaf or anosmic from birth then she will not have seen a blue sky, heard the dawn chorus, smelt oranges or tasted strawberries. She will not, therefore, know what it would have been to do so. But nor will she be able to portray any thought-associations, memories or feelings such exercises of her sense faculties have aroused, since there were no such exercises. Those who have not loved or grieved will not have experienced the wonder and joy, or loss and pain these bring in train. But nor will they express wonder and joy or loss and pain while in a loving relationship or when mourning the loss of one. Those who have not visited the Pont du Gard will not know what it is to have done so. But nor could they describe the experience. These are platitudes. Yet a great number of philosophers have insisted we can conceive of individuals who do, for example, tell us what it was like when they first laid eyes on the Roman aqueduct, who do manifest wonder and joy in a new relationship; they can even identify the birds in a dawn chorus. Yet there is something missing: for them, there is nothing that it is like, there are no experiences. Never have been; never will be. Clearly, then, if there is to be any sense to be made of this claim, philosophers must be using the expression ‘what it is like’ or ‘experience’ in some other way. What is this special philosophical sense and what is the point of introducing it?

2. Philosophical Zombies are allegedly indistinguishable from us in all physical, functional and behavioural respects and in all past, present and future situations. The difference is that they are (by stipulation) ‘without conscious experiences’ in some new sense of ‘conscious experience’.
Since there is ‘nothing it is like’ to be a zombie, again in a special sense of ‘what it is like’, it is supposed to follow that, whatever the appearances, they are not genuinely conscious and thus (some even hold) not appropriate subjects for mental ascriptions of any kind. They are, by this line of reasoning, without minds. We have been asking why introducing such creatures or engaging in other, related, thought experiments has become de rigueur in arguments about consciousness, the mental and its place in a physical world. Why invent stories about creatures whose ‘conceivability’ relies on introducing a new understanding of what it is to ‘have experiences’ or ‘what it is like’? Why, indeed, insist upon the logical possibility of ‘phenomenal consciousness’ that is incommunicable, elusive and unverifiable? Why not consider perfectly straightforward instances of the kinds of phenomena we include as conscious or as exhibiting consciousness which are both describable and subject to tests for truth and truthfulness?10 Should not a theory of the mind be about that?
The reason the friends of zombies find themselves in this awkward position, it seems, is that the functionalist can claim to accommodate the more mundane manifestations of consciousness. This, evidently, is why those resisting this purportedly ‘broadly physicalist’ picture are forced to search for phenomena that can swim through its net. Nonetheless, the quarry is now so elusive that nothing can be said or known about it. Or, to invert this point: to the extent that things can be said or known about it, the functionalist will be able to capture it. Indeed, so cagey is this crucial element alleged to be missing from a functionalist account, that (it is conceded) there will be no way to tell a non-zombie apart from her logically possible zombie-doppelgänger. For let us consider in more detail what is packed into the ‘broadly physicalist notion’ of a ‘behavioural, physical and functional duplicate’. If, in describing ‘behaviour’, we are permitted to avail ourselves of full-blooded action-descriptions and meaningful speech, then there will be no way of telling a zombie apart from a non-zombie by interacting with it in social contexts. As long as sensations, feelings, perceptions, memories and thoughts may be classified as mental states or events then, the functionalist will claim, they become candidates for her theory. Such states or events (which are, by and large, presumed to be identical to, or to have emerged from, or at least to be realizable in, physical ones) are alleged to enter into causal relations not only with environmental stimuli and ‘behaviour’ but also with other mental ones.
Thus there will be no way of telling a zombie apart from a non-zombie in terms of her dreams, fantasies and silent soliloquies.11 If, finally, the zombie and a non-zombie are indistinguishable with respect to their physical (and presumably social) environments, as well as their bodily organization and constitution, there will be no way to distinguish them by looking at their interactions with the environment or by studying their brains, central nervous systems, genetic make-up and so on and so forth. Our zombie duplicates thus pass all tests for being human. But of course once we have granted this much – and this is the rub – then they also satisfy the conditions for being conscious. It is no wonder, then, that the thicker the action-descriptions permitted to the behaviourist, and the more sophisticated the mental and linguistic ‘content’ granted to the functionalist, the more fugacious the ‘conscious phenomena’ their combined theories are alleged to neglect. It is alleged that thought experiments of some kind are necessary to defend the claim that zombies are conceivable.12 But with so much granted to this ‘broadly physicalist picture’, the thought experiments – once their suppositions and commitments are made explicit – dissolve into a farcical exercise in which the deployment of mental predicates completely unravels.
For notice that a zombie duplicate of Proust’s narrator would satisfy none of the conditions of consciousness in this new sense, and his narration would fail to count as a true description of his memories, since there were none. Nor was there any associated imagery, or fantasies, feelings and the like that sprang to mind when he bit into the madeleine; indeed, the zombie was, strictly speaking, not entitled to talk about anything that happened to him that day because nothing did. Incidentally we could not retreat by claiming that it only seemed to him that he remembered his days in Combray, since he could not enjoy seeming to remember either. Indeed, it appears that nothing he says is meaningful – or rather nothing he appears to say is meaningful, for by hypothesis he is not (really) saying anything. Zombie Tom would satisfy all of the diagnostic conditions for anosmia since he, as a perfect physical duplicate, has a damaged nose and, on the basis of his avowals and the usual tests, would be deemed unable to smell. And he too would, by all normal considerations, be genuinely lamenting something lacking when the seasons change. Apparently, we would nonetheless be mistaken to single out his inability to smell, on the grounds that in fact (and against all the evidence) he cannot sense or indeed perceive anything. He would be unable to claim truthfully that he yearns for the scent of winter mornings and summer evenings, since, given that he does not satisfy this new (irremediably unverifiable) condition for having ‘subjective, phenomenal experiences’, no such yearning is possible for a creature of his kind. Indeed, he could not even claim truthfully that life has any meaning for him since ‘there is nothing it is like to be him’. Worse, it seems he can make no claims at all. Poor Zombie Tom. Or am I irrational to be concerned? 
After all, Zombie Tom does not even have a mind and even though he is (by hypothesis) the functional and behavioural duplicate of Tom, we would not, strictly speaking, be able to describe what he says and does without using inverted commas or ‘as-ifs’.13 Yet the satisfaction conditions for the ascription of ‘as-if’ perceptions, feelings, memories and such to Zombie Tom are exactly the same as for the ascription of genuine perceptions, feelings, memories and the like to Tom. On this aberrant modification of the notion of consciousness, then, neither of them would satisfy the conditions for being conscious or having a mind. In fact, none of us would. The zombie advocates concede that they are not exactly using terms such as ‘consciousness’, ‘conscious experience’ or ‘what it is like’ – nor, it would seem, any mental predicate – in the ordinary senses. The problem is that they are and they are not. On the one hand, these predicates are employed in their usual senses to generate the conclusion that something we all agree is absolutely essential
to the mind has been left out of broadly physicalist accounts. On the other, the predicates associated with consciousness have been stretched so far that it has become a truism to say that it can never be known if an individual is conscious (or, therefore, minded). That certainly is a change from the ordinary sense. Indeed, on this distorted use, nothing would count for or against the individual’s being so.14 Robert Kirk, with a certain breviloquence, endorses a popular definition of ‘conceivable’ as

cannot be known a priori to be false; so A is conceivable if and only if not-A cannot be ruled out a priori.15

Expanding a bit, we might say:

A situation is conceivable if a description of it cannot be known a priori to be false; zombies are conceivable if and only if their non-existence cannot be ruled out a priori.
We cannot judge, however, if zombies’ non-existence can be ruled out a priori until we apprehend what zombies are supposed to be. We are in no position to determine a priori whether a description of them, or of situations involving them, is either true or false when all such attempts at characterization fall into pieces before they can be so evaluated. In failing to recognize the equivocation that carouses through the attempt to describe ‘a being like us in all respects but without conscious experience’, zombie advocates are, to paraphrase P. F. Strawson, guilty of ‘tricking themselves by simultaneously withdrawing the predicates from the ordinary games and yet preserving the illusion that they are still using them to play the ordinary games’.16 It is not that we lack the cognitive resources to solve the problem:17 we lack the wherewithal to articulate it. Sense, it seems, is a necessary condition of conceivability. And these thought experiments cannot be fleshed out without descending into nonsense. How did those arguing against the prevailing accounts find themselves in such an untenable position? We saw earlier that the zombie proponent conceded to the behaviourist and the functionalist the right to deploy full-blooded action-descriptions as well as speech-content in what they are allowed to count as ‘behaviour’. Functionalism maintains, and the friends of zombies acknowledge, that the supposed causal network of ‘mental events’ that constitute the postulated referents of mental terms include sensations, feelings, perceptions and memories, not to mention beliefs, hopes, desires and fears, with all the ‘propositional content’ they need
in order to play their role in attempting to render reasonable the actions and thoughts they are alleged – also – to cause. It should have been obvious, however, that if so much is conceded to the erstwhile ‘reductionist’ then the game is up. For in appropriating action-descriptions the so-called ‘behaviourist’ has gifted herself an almost-full complement of warrants for circumstance-dependent, third-person ascriptions of mental predicates and with it the wherewithal to situate, and make sense of, her subject accordingly. And with the addition of what her subject says, in the form of first-person avowals and reports, her hand is complete.18 The functionalist appropriates this full house. The sophisticated descriptive content to which she helps herself – with its rich inferential network of permissions and proscriptions – carries with it the presumption of intention, self-knowledge, awareness, intelligence, perception, feelings and sensation – in short, the manifestations of consciousness.19 She, however, pretends these links can be understood ‘naturalistically’ and this, on her terms. This requires an accommodation that is independent of the very practices which provide the background – the implicit agreement in judgement or way of going on – without which there could be no inferences, expectations, presuppositions or understandable actions, reactions, feelings and so on in the first place. Instead, she tries to situate this complex web of ‘content’ into an ill-fitting view about the supposed reference of, and thus an utterly different explanatory role for, her mental terms. The logical categories of state, event, property and relation are used as neutral stand-ins, supposed to figure in both the domain of the mental and the physical: the conundrum is to understand their relation. Various forms of metaphysical and explanatory dualisms simply re-emerge and proliferate. Epistemological puzzles abound.
No wonder there is the residual disquiet of the dualist-sympathizer that a ‘purely’ causal-functional story cannot capture what is essential to the mind, for that is true enough. No wonder the physicalist – counting the behaviourist and the functionalist as part of her team – demurs: for in helping herself to action, speech and thought, she has illicitly and without acknowledging it appropriated the very practices that her reductionist pretensions abhor. We are presently witnessing increasingly extraordinary – and futile – suggestions for tackling the ‘hard problem of consciousness’: from the suggestion that a ‘Cartesian theatre’ should be appended to a broadly functionalist account to accommodate aspects of properties of mental representations,20 to the recommendation that we ‘naturalize’ the vexatious residual dualism by treating ‘experience’ as fundamental,21 or even as a pervasive feature of reality.22
‘Ordinary’ Consciousness
The trumped-up problem only exists, however, because of an ill-considered attachment to these exercises in ‘naturalizing’ the mind, with their implicit but unshakeable – and forever recurring – commitment to mind–body dualism.23 So enough of this! A plague on both your houses! There is no need to prolong the suffering by committing such violence against the ‘ordinary’ notions of consciousness.
Notes

1 Chalmers (1997). These ideas are developed in Chalmers (2010).
2 Block (1978), 281.
3 Searle (1995).
4 Chalmers (1997), 7.
5 Chalmers (1997).
6 Proust (1922).
7 Ibid.
8 One of several described by Bee Wilson (2016) at the time of writing.
9 Wilson (2016).
10 For a masterful cartographical exploration of the interlocking notions that we subsume under ‘Consciousness’, see Chapter 1 of Hacker (2013).
11 The functionalist will appropriate features of our examples to support her case. Odours, she will say, are by and large identified by their typical causes such as a baby’s head, strawberries or a bonfire. Indeed, tastes may well give rise to memories, feelings and thoughts like those described by Proust. And they elicit typical reactions – such as inhaling deeply to take in summer mornings and holding one’s breath while out walking on a country road to avoid the exhaust fumes of a car backfiring.
12 Kirk (2015).
13 For a more detailed description of the unravelling of mental predicates that these thought experiments – when examined carefully – engender, see Tanney (2004); rpt. as Chapter 10 in Tanney (2013).
14 If, on this special philosophical sense, nothing would count as an individual’s having ‘conscious experience’, it is not easy to grasp what it could mean to say that we should ‘get started’ constructing a theory of it – one that will show how the purely subjective aspects of conscious experience arise from physical systems and what sorts of experiences are correlated with what sorts of systems. See Chalmers (1995) and (2010).
15 Kirk (2015).
16 Strawson (1974).
17 McGinn (1990; 2004).
18 Far from issuing autonomously, fully formed, from an innate and private language, the subject is taught as a young child how to articulate what she feels and thinks as well as what she sees and hears (and, then, what she thought she saw or heard) in English, French or another natural language. Just as well, else we would not be able to understand her. In this way, first-person testimony is grafted onto, and is not a logical precursor to, the third-person deployment of the well-established but constantly evolving inferential web of predicates that traverse the canonical – but misleading – boundary between the mental and the physical. Indeed, the illusion that consciousness concepts can be divided into ‘functional/structural’ and ‘phenomenological’ components stems from overlooking the fact that our access to another’s imaginings, silent soliloquies and dreams is through her first-person reports – voiced, of course, in the natural language we share.
19 These can be cancelled up to a point, as the notions undergo slight shifts to exclude certain inferences and to include others. The circumstances usually make clear how the expression is being used. We can understand what is being said, for example, when someone is described as having walked across the street unintentionally and if we do not, we can ask. (He was supposed to have kept to the left but absentmindedly followed the crowd.) It is less clear, and therefore takes rather special circumstances to bring out, what it could mean to describe someone as not being aware he was filling out his tax return. No doubt a rather elaborate story could be told, but the more central the use of ‘not aware’ the less intelligible the description ‘he was filling out his tax return’. As we saw in the reductio above, we would have no idea what is being said, what to infer, expect, predict, feel or how to act when these inferential ties become severed – not singly but all at once! – as philosophical exercises in ‘conceivability’ purport to do.
20 Levine (2010).
21 Chalmers (2010).
22 Strawson (2006).
23 For a fuller discussion of these issues, see Tanney (2013).
References

Block, N. (1978). ‘Troubles with Functionalism’, in Perception and Cognition: Issues in the Foundations of Psychology, edited by C. W. Savage, Minneapolis: University of Minnesota Press.
Chalmers, D. J. (1995). ‘Facing Up to the Problem of Consciousness’, Journal of Consciousness Studies, 2 (3), 200–19.
Chalmers, D. J. (1997). ‘Moving Forward on the Problem of Consciousness’, Journal of Consciousness Studies, 4 (1), 3–46.
Chalmers, D. J. (2010). The Character of Consciousness, Oxford: Oxford University Press.
Hacker, P. M. S. (2013). The Intellectual Powers: A Study of Human Nature, Malden: Wiley Blackwell.
Kirk, R. (2015). ‘Zombies’, The Stanford Encyclopedia of Philosophy (Summer 2015 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/sum2015/entries/zombies/.
Levine, J. (2010). ‘Phenomenal Experience: A Cartesian Theater Revival’, Philosophical Issues, 20 (1), Philosophy of Mind, 209–25.
McGinn, C. (1990). The Problem of Consciousness, Oxford: Blackwell.
McGinn, C. (2004). Consciousness and its Objects, Oxford: The Clarendon Press.
Proust, M. (1922). Swann's Way: In Search of Lost Time, Vol. 1, translated by C. K. Scott Moncrieff, Public Domain.
Searle, J. R. (1995). ‘“The Mystery of Consciousness”: An Exchange’ by Daniel C. Dennett, reply by John R. Searle, New York Review of Books, 21 December 1995.
Strawson, G. (2006). ‘Realistic Monism: Why Physicalism Entails Panpsychism’, in Consciousness and its Place in Nature: Does Physicalism Entail Panpsychism?, edited by Anthony Freeman, Exeter: Imprint Academic.
Strawson, P. F. (1974). ‘Self, Mind and Body’, in Freedom and Resentment and Other Essays, London: Methuen.
Tanney, J. (2004). ‘On the Conceptual, Psychological, and Moral Status of Zombies, Swamp-Beings, and other “Behaviourally Indistinguishable” Creatures’, Philosophy and Phenomenological Research, 69 (1), 173–86.
Tanney, J. (2013). Rules, Reason, and Self-Knowledge, Cambridge: Harvard University Press.
Wilson, B. (2016). ‘“I’ve been told bacon smells lovely” – life without a sense of smell’, The Guardian, 26 March 2016.
Part Two
Groundbreaking Concepts of Consciousness
6
Consciousness, Representation and the Hard Problem

Keith Lehrer
1 The hard problem

David Chalmers (1997) is famous for his argument that there is a problem posed by conscious states, which he has called the hard problem. The problem, simply put, is to explain why the phenomenal character, what these states are like for the subject, should arise from material biology in the brain. Horgan and Tienson (2002) have argued that this problem extends beyond sensations to thoughts because what they are like, their phenomenal character, extends to the feature of what they are about, their narrow content. I agree with Chalmers that there is a hard problem of explaining why what conscious states are like should arise from the body, and with Horgan and Tienson that the problem extends from what sensations are like to what thoughts are like. I wish to make some contribution to the problem by proposing that the character of these states has a special role that explains how we represent the world, ourselves, ourselves in the world and, finally, the world in ourselves in an autonomous manner. The phenomenal character of conscious states, what they are like for the subject, explains how we represent the world in a way that provides a plasticity in the way that we connect experience with meaning and truth. The body produces consciousness to represent the world, including the role of consciousness in itself.
2 Sensations and consciousness: Reid, Ferrier and Sartre

Sensations have been a starting point for discussions of consciousness. It is useful to begin with them and the problem of our knowledge of them.
Thomas Reid (1863) argued in the eighteenth century that sensations may lack intentionality, that our experience of sensation does not have the character of a state with an object. He says ‘feeling a pain signifies no more than being pained’ (183). He also says that consciousness gives us knowledge of the operations of our mind (222). This leads to a paradox. Sensation, being pained, is a conscious state, which lacks intentionality, of which we have immediate knowledge. But knowledge has the character of a state with an object because there is an object of knowledge. So how does sensation, which may not be about anything, provide us with knowledge about the sensation? Moreover, if knowledge about the sensation, which has an object, to wit, the sensation, is a separate conscious state, then we must have knowledge of that knowledge, which appears to lead to a regress of knowledge of knowledge of knowledge ad infinitum, as I noted in Lehrer (1989). It is interesting that James Frederick Ferrier (1866) in the nineteenth century noted that sensation differs from the thought of the sensation and suggested that in the thought of the sensation, the sensation could play the role of a sample, but in such a role the sensation goes outside of and beyond itself, being an instance of a kind of object:

When you feel the pain, you feel that pain merely, that particular pain and no other; but when you think that pain, you do not think that pain merely, you think other pains as well. … The present pain is apprehended as a sample of what may occur again. It is thought of as an instance of pain, which implies the thought of something more than it. In thinking the pain, then, your mind travels out of and beyond the particular pain which you are feeling. Your sensation never travels beyond that pain. (224)
Sartre (1956) also noted that if you suppose consciousness is knowledge, then you arrive at the problem of knowing that you know, which leads to a circle or an infinite regress. He preferred the circle, taking thought, consciousness of counting, as an example, and remarked:

Thus in order to count, it is necessary to be conscious of counting. Of course, someone may say, but this makes a circle. For is it not necessary that I count in fact in order to be conscious of counting? That is true. However there is no circle, or if you like, it is the very nature of consciousness to exist ‘in a circle.’ The idea can be expressed in these terms: Every conscious existence exists as consciousness of existing. (lvi)
3 Knowledge of consciousness: Self-presentation

We began with the question: Why does the body give rise to consciousness? I want to approach this question indirectly by answering the question implicit in Reid, Ferrier and Sartre – How do we know that a conscious state, a sensation, for example, exists and what it is like when the state itself can lack the structure of a state with an object essential to knowledge? Carneades, as presented by Chisholm (1966), proposed that conscious states are self-presenting. This means conscious states exhibit or present themselves to us, unlike many states of our body, the secretion of hormones into the blood, for example, that do not present themselves to us. When the conscious state presents itself to us, we know what it is like. So we know what conscious states are like when they present or exhibit themselves to us. But how does the conscious state present itself to us in such a way that we know what it is like? How does self-presentation give us knowledge? To know what the self-presenting conscious state is like, we must have some idea, conception or, as I prefer, some representation of what it is like. My answer to how self-presentation gives us knowledge is that self-presentation provides us with self-representation. This is not the only possible answer. For some have thought that we have innate ideas, innate representations, when we experience the self-presenting state. Even given the innateness hypothesis for the origin of our ideas of conscious states, we would require an explanation of how we apply the innate idea to the self-presenting state. One answer is that the principle of application is also innate, so that when the conscious state presents itself we automatically, as the result of an innate principle of operation, apply the innate idea to the self-presenting state.
Others have thought, Sellars (1963, 190), most notably, in his discussion of the Myth of the Given, that the conception of conscious states arises empirically as part of the acquisition of language. The innateness theory and the language acquisition theory, for short, the language theory, have the common problem of explaining how we know how to apply the ideas, conceptions and representations we have of these states. This problem has two aspects. The first is that it is difficult to believe that we could have complete conceptions of conscious states, sensations of colour or sound or smell, for example, without experiencing those sensations. This evokes self-presentation as a step towards an answer. But how does the experience, self-presentation or acquaintance with the conscious state supply a conception? What is the role of the experience in the conception?
This question leads to another. Self-presenting states have a kind of truth security. One might suggest that if you are in such a state and, as the innateness and language theories propose, have an idea or representation of it, then you cannot fail to apply the idea or representation to the state. But if the idea or representation is like other representations, those articulated in words, for example, it seems as though you might fail to apply the conception to the state in the same way that we fail to apply other representations. Words can be misapplied, and so, it would seem, could other forms of representation, ideas or conceptions. What is needed is an account of how the self-presenting states are incorporated into our conception, idea or representation of them so that we can know what they are like.
4 Self-representation: Exemplarization and reflexivity

My proposal is that we give up the innateness and language theories and replace them with a theory of self-representation. The theory, developed elsewhere (Lehrer 2006, 2011), is that when we know what a conscious state is like, the conscious state becomes an exemplar, like a sample, that represents a kind of state. When a conscious state functions as a term of representation, I call the process exemplarization. The exemplar representation resembles the use of a sample to represent a class or kind of objects. Shown a sample of paint, for example, the viewer takes the sample to represent a class or kind. In such a case the experience, the colour sensation of the sample, acquires a representational role for the subject. The colour sensation represents a kind of colour sensation, being a sample of that sensation. Hume (1739) noted that in this use of a particular sense impression to stand for a class of particulars, the particular becomes general. The particular is used to stand for a general class. Ferrier in the quotation above noted the role of a particular sensation as a sample of a kind, but he suggested that this creates a bifurcation between a particular sample and the general thought. For Ferrier the particular sensation and the general thought are distinct and separate. I suggest following Hume in holding that the particular sensation used to stand for the general class represents a general class. My proposal is that the sensation, used as a sample, is a constituent of the thought of the general class of which it is a member. The sensation is a particular, and as Ferrier suggests, no other, but contrary to Ferrier it is also an exemplar representation referring beyond itself to other things of which it is an exemplar representation. The exemplar is a term of representation that exhibits what a class of objects is like in the same way that a sample does. It is essential that the exemplar, when
it is a sensation, can be both a term used to represent a class of objects, a mental term, and one of the objects represented. The exemplar sensation is in itself a particular and a representational thought about sensations. The exemplar is reflexive, representing itself, as noted by Ismael (2007), Fürst (2014) and Tolliver and Lehrer (2014). Reflexive representation solves a problem about representing mental activity noted by Sartre; it explains, that is, how consciousness circles back onto itself to give us knowledge of consciousness. A problem arises when a conscious activity becomes the object of thought. For the thought of the conscious activity, as Ferrier noted, seems to be a new conscious activity that is distinct from the original conscious activity. Hume’s distinction between impressions and ideas suggests the same problem, namely, that you can only have an idea of a sense impression when it is a faded memory of the original sense impression. Put in terms of thought and sensation, the thought of a sensation must, it seems, be a conscious state that is distinct from the sensation itself. Sartre noted the problem when he considered whether consciousness of consciousness leads to a regress. His solution to the problem, as we noted, was to suggest that consciousness circles back onto itself in what he called consciousness (of) consciousness. He thus recognized the problem of the distinction between consciousness and a thought of consciousness, but he lacked an account of how the conscious state becomes consciousness (of) consciousness circling back onto itself. The solution to the problem is exemplarization of the conscious state. Exemplar representation uses the exemplar to represent or refer to itself reflexively. Reflexivity closes the circle as the exemplar of representation is an exhibit or sample of the class of objects represented and, at the same time, is an instance of the class. Two qualifications are immediately necessary.
Notice that I have said that when we know what the conscious state is like, the state is exemplarized. The reason for this emphasis is that I do not think that we always know what conscious states are like. There are sensations, most notably those of touch, where attention is directed towards an external quality, the hardness of a surface for example, without attention being focused on what the sensation itself is like. Many have noted this, especially Fodor (1983). In such cases, the conscious state evokes a representation of an external object without any representation of what the conscious state itself is like. Direction of attention to the conscious state yields exemplarization of the state, although not all conscious states are attended to. This qualification, to wit, that we do not always attend to conscious states, leads to a second qualification. Some, most notably Rosenthal (2005), have argued that higher-order representation is itself constitutive of consciousness,
while Kriegel (2009), closer to the present view, argued that self-representation is constitutive of consciousness. An advantage of Rosenthal’s and Kriegel’s views is that they supply a representational account of the nature of conscious states. However, these accounts do not seem correct. As we noted, not all conscious states are self-represented. The sensations of touch sometimes evoke representations of external things without higher-order representation or self-representation. Another example is when one first awakens experiencing some sensation without any conception or understanding of what is occurring to one. A final example would be of a brain-damaged patient suffering lesions that prevent cognitive functioning, including representation, but who experiences sensation, pain for example, without knowing that this is what they experience. The operations of conception and cognition are not functional. So there are self-presenting states that are not self-representational. Self-presentation does not entail self-representation. Before proceeding with the implications of exemplar representation, it is useful to distinguish exemplarization from the notion of exemplification advocated by Goodman (1968) and the disquotational representation I advocated earlier (2006), similar to Papineau (2002). Goodman (1968) developed a theory of exemplification that he used to explain how an artwork could be symbolic. He also appealed to the notion of a sample, a sample of colour, to explain his notion of exemplification. His proposal, which was intended to explain the symbolic character of art, was that the sample is taken to refer to a property, or predicate, that is exemplified by the sample. So the model is that the sample refers to a property that the sample instantiates. It is important to notice that though the sample plays a referential role, it only refers to a property or predicate of which it is an instance.
The sample is not an instance of itself but instead of some property or predicate. So, the account of exemplification is not a theory of self-representation in the strict sense of representing a class of instances, which includes itself as an instance. On Goodman’s account, some predicate referred to by the sample is instantiated, but the sample is not itself a term of representation that has instances. It is not self-instantiated.
5 Disquotational representation: Sellars

The disquotational account of representation closely resembles that of Sellars. Sellars argued that sentences like
1. ‘rot’ in German means red

or

2. ‘rot’ in German refers to red objects

explain the semantic or representational concepts of meaning and reference by disquotation. The unquoted occurrence of the word red is used, not mentioned, to exhibit a linguistic role in a language understood by the receiver of the expression in quotation marks. Of course, the use of this meaning rubric has minimal or degenerate use when the word in quotes and the word used to exhibit the linguistic role of the word in quotes are instances of the same word type. Consider

3. ‘red’ in English means red

or

4. ‘red’ in English refers to red objects

as examples. Here it is natural to speak of explaining the meaning or reference by disquotation. The last two sentences are degenerate explanations of meaning because the unquoted or used word is an instance of the same type as the quoted or mentioned word. They could not convey anything about the linguistic role of the quoted or mentioned word to someone unless they already understood the word and the linguistic role it plays. However, the sentences are not analytic because it is a contingent fact that the word has meaning and plays the linguistic role exhibited by the use of it. Sellars suggested that the role of the used occurrence of the word in (3) is not the same as a simple quotation of it, for example, in
4a. ‘red’ in English has the same meaning as ‘red’ in English.

For the translation of sentence (4) into another language, French, would leave the quoted word as it is, namely ‘red’, and would not convey information about the linguistic role of ‘red’ to a speaker of French. Compare

5. ‘rot’ in German has the same meaning as ‘rouge’ in French.

Sentence (5) would not tell a monolingual speaker of English the meaning of either quoted word, though such a person informed by a speaker of German, French and English might know from such testimony that (5) is true without learning the meaning of the words in quotation marks.
Sellars noted that the use of the word red in (3) or (4), though not a quoted use, is a special use of it as an exhibit of a linguistic role. So, though not quoted, it is exhibited in a way that has some similarity to quotation, and Sellars introduced dot quotes to deal with this similarity to a quoted use. So (1) might be reformulated as

‘rot’ in German is a • red •.
We may apply this model of explaining meaning and reference by disquotation to consciousness; the sample of consciousness is used to refer to samples of consciousness as though we place the sample between dot quotes, using it to refer to other examples of conscious states. The earlier view of mine (2006), similar to Papineau’s, is that the sample of consciousness is used to exhibit a representational role by offering us something playing that role. Lehrer and Stern (2000) have argued that Sellars anticipated this account in published correspondence with Castañeda. This account is similar to the account of exemplarization, as I noted in Lehrer (2012), but there is a difference. The difference between the disquotational theory and the theory of exemplarization is that in a sentence like (1) there is an obvious difference between the quoted word mentioned and the word used: they are different tokens, even if the token used exhibits the representational role of the token quoted. Moreover, in the case of (3), although the quoted token and used token are tokens of the same kind, they are different tokens. Thus, the disquotational theory, like the exemplification theory, is not reflexive exemplarization. That was the problem Ferrier noticed when he remarked that thought reaches out and beyond the sample sensation to a thought of a class. It is a problem that Hume noted when he distinguished between an impression and the idea of the impression, noting that the idea, being only a faded copy, is distinct from the original impression. The impression that is the idea of an impression is not the same as the original; it lacks the liveliness of the original impression. The chasm between thought and sensation can only be closed when they are the same, that is, when the sensation becomes a thought of the sensation itself, as well as other sensations, by exhibiting what the represented sensations are like and, at the same time, representing itself.
The sensation becomes an exemplar representation of itself, a thought of itself, reflexively. It exhibits what it is like by being what it is like.
6 Solution to the hard problem: Why is matter conscious?

Let us ask whether and how this solves the hard problem. Why is matter conscious? The first answer is because it is representational. The second
answer is that it shows us what representation is like because the term of representation, the exemplar, is at the same time the vehicle of representation and the thing represented. We know what it is like when we exemplarize it. Knowing what it is like is, at the same time, knowing what a representation of it is like. Wittgenstein (1922) remarked that the form of representation, what I call the activity of representation, cannot be fully described; it can only be shown. The exemplarization of the exemplar shows us what cannot be fully described. These observations about the role of exemplarized consciousness are only the initial steps in answering the hard question formulated in terms of the function of consciousness. One way to answer the question – Why are we conscious? – is by answering the question – What is the function of consciousness? The answer depends on recognizing that the self-presentation of conscious states is what makes exemplarization of them possible. They set themselves before us, when we attend to them, as we must in the case of intense pain, but otherwise as a matter of choice. Focusing attention on them, I suggest, means exemplarizing them. You may have many conscious states, minor annoyances of thought and feeling, without attending to them. But when you attend to them, you know immediately what they are like because you exemplarize them, effecting a representation of what they are like. I have proposed that consciousness presenting itself for exemplar representation of itself gives us an understanding of what representation of consciousness is like. Notice the transparency of this knowledge of consciousness. Description or higher-order representation leaves us ignorant of something about the connection, the representation connection, between the term of representation and the things represented.
The only way to close the epistemic gap between the term of representation and the things represented is when they are the same; and we know what they both are like at the same time because of the reflexive character of exemplarization of the conscious state.
7 Truth explained by reflexive representation of consciousness

Another way to see the importance of reflexive exemplarization is to consider a theory of truth. A theory of truth is supposed to convey the semantic relationship between a representation, such as an external linguistic description, and what is represented. As long as the truth theory is formulated in language, a question about the truth relation between the representation and what it represents will remain unanswered. It is a relation that cannot be fully described in language,
102
The Bloomsbury Companion to the Philosophy of Consciousness
but can only be shown by something that is, at the same time, a representation and the thing represented. The problem with a discursive theory of truth is that it leaves us with a gap between a sentence and what it describes that cannot be fully closed by further description, because that will reiterate the gap at a new level. Some have argued that a recursion theory of truth-conditions closes the gap. My claim is that it only supplies a recursion of the gap. We can have a truth theory up to disquotation, but, as we have noted, disquotation leaves a residue of something unexplained about the truth of the disquoted description. I am not denying that truth theories can be illuminating, but they cannot fully illuminate everything any more than description can fully illuminate what colour is like to the blind. There is a component of the truth of description that must be experienced to be understood.

It is tempting, having argued that the function of consciousness is to explain how we know what representation and truth are like, to conclude that we have answered the question – Why are we conscious? The answer is that otherwise there would be something about representation and truth that would remain unexplained. Consciousness, rather than being a mystery, removes the most fundamental mystery about truth. However, this answer, though poignant, leaves the impression that the function of consciousness is to supply representation of itself and thereby an understanding of such representation. This is not a trivial contribution. To exhibit what cannot be fully described about representation and truth is important. It fills in an explanation of what representation and truth are like that is central to maximizing explanation. But the role of consciousness in representation and truth is not limited to explaining representation and truth about consciousness.
Consciousness and the exemplarization of it radiates semantically as part of the meaning of thought and description about the external world.
8 Consciousness as evidence of the external world It is argued, as I noted above, that some representation of the external world is elicited by sensation. Sense impressions and sense data that are not exemplarized but pass through the mind without calling attention to what they are like in themselves need not be representational. However, such facts do not sustain the conclusion that such conscious states are not evidence for the existence of the external objects represented. The character of such evidence may only be
Consciousness, Representation and the Hard Problem
implicitly manifest in the certainty that they create about the existence of such objects, but may become explicit when their representations are challenged. Someone may doubt the existence of an object, a dog for example, which directs attention to the conscious experience of the external object. The look, smell and feel, sensations of which we are conscious, are exemplarized. How does the exemplarization of the look, smell and feel of conscious sensations convert into evidence of the existence of external objects as a result of exemplar representation? The conversion into evidence is the result of a semantic role, the functional role of the meaning of our thought and talk about the external object. Something looks like a dog, it smells like a dog, it feels like a dog, so it is a dog. Why? Because these exemplarized conscious experiences are part of what we mean when we claim that we are confronted with a dog. The evidence is fallible, and we may be wrong, because there is more to our conception of dogs, more to the meaning of talk and thought about dogs, than how they appear, smell and feel. My claim is only that part of our conception of dogs, part of the meaning of talk and thought about dogs, is how they look, smell and feel. Since the look, smell and feel are parts of the meaning, the exemplar representations of these sensory experiences are evidence of the existence of dogs.

This argument applies generally to our conception of external objects. It was a mistake of phenomenalism to presuppose that if sense data were part of the meaning of talk and thought about external objects, then such talk must be reducible by definition to discourse about sense data. I disagree. There is more to our conception of external objects, especially the external objects of scientific discourse, than discourse about sense data can capture.
Nevertheless, part of our conception of external objects, and, therefore, part of the meaning of discourse about them, is contained in the evidence of sensation, of sense impressions and sense data. The case of colour is perhaps the strongest support. There is more to the external colour red than our sensation of colour, but our exemplarized representation of the sensation of red is nonetheless evidence that an external object is red. The exemplarized sensation is part of our conception of red objects and how they appear.

I have moved the argument from the exemplarization of conscious states to conclusions about our conceptions and evidence concerning the existence of external objects. Now I wish to suggest that the exemplarization of conscious states plays a special role in our plasticity and autonomy in how we represent the world. Remember that not all conscious states are exemplarized. Some elicit representation of external objects without eliciting representation of what they are like in themselves. However, the exemplarized conscious
state opens up the possibility of extending, radiating, the representation of the exemplarized state from the state itself to the representation of external objects. I have conceded that the conscious state can trigger representation. It can serve as input for the representation of an external object in a subject without the subject having any representation of what the input state is like in itself. On a computational model, the input consists of something like pressing keys on a keyboard to represent objects external to the computer without any representation of keys on a keyboard in the program of the computer. So, put metaphorically, conscious states may function to elicit representations of external objects in a subject without the subject having any representation of what a conscious state is like in itself, that is, without any representation of phenomenal character.
9 Conscious exemplars as freedom of representation What, then, is the function of exemplarization of the conscious state beyond providing a representation of what the conscious state is like in itself? The answer is that the exemplarization converts the conscious state into a general term of representation, that is, into a predicate of representation, which allows us to modify and reconfigure what it represents. The exemplarization of the conscious state gives us knowledge of the state, and that knowledge of the state opens the possibility of reconfiguring the meaning of the state. We begin with the exemplarized state representing what it is like, and we extend the meaning of it to represent what some external object is like. When we become aware of what the exemplarized state is like, the extension of the exemplarized state to exhibit what the external object is like in terms of the sensory exemplar is accessible. In this way, the shift from conscious state serving as an unrepresented input to that of serving as an exemplarized term of representation of an external object shows us what the object is like. The exemplarized state becomes a symbol like a word that we use as a vehicle of representation of a special variety, namely, one that exhibits what the represented object is like. Moreover, once the exemplar is self-represented it becomes available as a vehicle or term of representation radiating beyond itself to exhibit what external objects are like. This extension or radiation of representation and meaning to objects beyond itself connects the exemplar representation with a background system of representation. To return to our example, the look, smell and feel of a dog, when these sensations are exemplarized, is part of our conception of a dog; that part of our conception is
connected with and filled in by a background system of our conceptions of dogs concerning the habits, capacities and biology of dogs. Now, once the sensory exemplars are represented, we transcend the automatic response to an unrepresented input. This allows us to change or reconfigure what the exemplar represents. Illusions, reflections on the highway that we at first take for water, appearances of a stick bending in water, are examples in which, by attending to the exemplar, to the sensory appearance, and knowing what the exemplar is like, we are free to reconfigure the meaning of the sensory exemplar in the external world. Such reconfiguration reflects our plasticity and autonomy in how we represent the external world, including the world of science. Attention to sensory detail in scientific photography enables us to distinguish artefacts of the process from features of the object photographed, for example. Exemplar representation converts input into a represented term of representation. Once the conversion takes place, we note our freedom, our autonomy, in how we represent the world and ourselves in terms of sensory materials. This autonomy, like autonomy of choice generally, is limited by a multiplicity of factors, especially our background conceptual system; but it is essential to the creation of innovative representation.

Let us return to the hard problem of explaining the function of consciousness. Conscious states present themselves to us for exemplar representation when we attend to them. The exemplar representations of conscious states extend in meaning and representation to external objects. However, the exemplarization of those states converts them into vehicles of representation providing us with plasticity and autonomy in how we represent the external world. If we become reflective about the process, we shall note our autonomy in how we represent ourselves as well as external objects.
Finally, then, the process of exemplar representation of conscious states extends beyond the exemplar to represent what external objects are like and what we are like in representing the objects that way. We note that our conception of our world and ourselves in our world is, at the same time, a conception of our world in ourselves. Without conscious states presenting themselves to us for exemplar representation radiating into representation, we would lack the means for the autonomous reconfiguration and re-representation of the meaning of our sensory experience that is vital to our practices of science and art. Conscious experience shows what the world is like. Exemplar representation of conscious experience converts it into a vehicle of representation that exhibits our plasticity, our freedom, to reconfigure our conception of what our world is like. That freedom must be limited, of course, but the limit is the loop of exemplarized
experience and the radiation of the exemplar beyond itself. The reflexive exemplar representation of conscious experience loops back onto itself as it becomes part of the meaning and content of our representations of the external world. Exemplarization of experience ties together the external with the internal, the object and the concept, teaching us the lesson of empiricism.

Let us return to the hard problem. Why is the body conscious? What is the function of consciousness in the body? Consciousness provides the body with the capacity to represent the external world and the place of the body in the world. There are representations of the world that do not involve consciousness. Computers and robots represent the external world and their place in the world. What is missing from how they represent the world? They do not know what the world they represent is like. When we consider the role of consciousness in representation, we see that the hard problem is the solution to the hard problem. We must experience the world as we represent it to know what it is like. Why would it matter to us if we did not know what the world we represent is like? We would lack the human cognitive capacity to connect our representations of the world with representations of experience of the world. We would, as a result, lack the understanding that would enable us to configure and reconfigure how we represent the world, ourselves in our world and our world in ourselves. The truth connection of representation to experience would be missing in the absence of exemplar representation of conscious experience. Truth would be blind. We would never know what truth is like. We would never know what we were missing.
References

Chalmers, D. J. (1997). The Conscious Mind, New York and Oxford: Oxford University Press.
Chisholm, R. M. (1966). Theory of Knowledge, Englewood Cliffs, NJ: Prentice Hall; 2nd ed. 1977; 3rd ed. 1989.
Ferrier, J. F. (1866). Lectures on Greek Philosophy and Other Philosophical Remains, vol. 2, Edinburgh.
Fodor, J. (1983). The Modularity of Mind, Cambridge, MA: MIT Press.
Fürst, M. (2014). ‘A Dualist Version of Phenomenal Concepts’, in Andrea Lavazza and Howard Robinson (eds), Contemporary Dualism: A Defense, 112–35, New York: Routledge.
Goodman, N. (1968). Languages of Art: An Approach to a Theory of Symbols, Indianapolis: Bobbs-Merrill.
Horgan, T. and Tienson, J. (2002). ‘The Intentionality of Phenomenology and the Phenomenology of Intentionality’, in D. J. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings, 520–32, New York: Oxford University Press.
Hume, D. (1739). A Treatise of Human Nature, London: John Noon.
Ismael, J. (2007). The Situated Self, New York and Oxford: Oxford University Press.
Kriegel, U. (2009). Subjective Consciousness: A Self-Representational Theory, Oxford: Oxford University Press.
Lehrer, K. (1989). Thomas Reid, London: Routledge.
Lehrer, K. (1997). Self Trust: A Study of Reason, Knowledge and Autonomy, Oxford: Clarendon Press.
Lehrer, K. (2006). ‘Consciousness, Representation and Knowledge’, in U. Kriegel and K. Williford (eds), Self-Representational Approaches to Consciousness, 409–20, Cambridge, MA: MIT Press.
Lehrer, K. (2011). Art, Self and Knowledge, New York: Oxford University Press.
Lehrer, K. (2012). ‘The Unity of the Manifest and Scientific Image by Self-Representation’, Humana Mente, 21, 69–82.
Lehrer, K. and Stern, D. G. (2000). ‘The “Denouement” of “Empiricism and the Philosophy of Mind”’, History of Philosophy Quarterly, 17 (2), 201–16.
Lehrer, K. and Tolliver, J. (2014). ‘Truth and Tropes’, in Anne Reboul (ed.), Mind, Values and Metaphysics: Papers Dedicated to Kevin Mulligan, vol. I, 109–17, Dordrecht and New York: Springer.
Papineau, D. (2002). Thinking About Consciousness, Oxford: Oxford University Press.
Reid, T. (1863). The Philosophical Works of Thomas Reid, D.D., 6th ed., Sir W. Hamilton (ed.), Edinburgh: James Thin. Reid’s books were first published as Inquiry into the Human Mind on the Principles of Common Sense (Edinburgh, 1764) and Essays on the Intellectual Powers of Man (Edinburgh, 1785). Page citations are from Reid (1863).
Rosenthal, D. M. (2005). Consciousness and Mind, Oxford: Clarendon Press.
Sartre, J.-P. (1956). Being and Nothingness, translated by H. Barnes, New York: Philosophical Library.
Sellars, W. F. (1963). ‘Empiricism and the Philosophy of Mind’, in Science, Perception and Reality, New York: Humanities Press.
Wittgenstein, L. (1922). Tractatus Logico-Philosophicus, New York: Harcourt, Brace and Company.
7
The Knowledge Argument and Two Interpretations of ‘Knowing What it’s Like’ Daniel Stoljar
1 Introduction The knowledge argument against materialism may be presented in various ways, but in its simplest form, it has two premises. The first premise – K1, as I will call it – is that it is possible for a person to know all the physical facts and not know what it’s like to see something red. The second premise – K2, as I will call it – is that if this is possible then materialism is false. Since K1 and K2 together entail that materialism is false, the assessment of the argument turns on the truth or otherwise of the premises. Why believe the premises? The rationale for K1 derives from various imagined cases that seem to illustrate its truth. The best and most famous case is that of Mary, due to Frank Jackson: Mary is confined to a black-and-white room, is educated through black and white books and through lectures relayed on black-and-white television. In this way she learns everything there is to know about the physical nature of the world. She knows all the physical facts about us and our environment. … It seems, however, that Mary does not know all there is to know. For when she is let out of the black-and-white room or given a color television, she will learn what it’s like to see something red, say. This is rightly described as learning – she will not say ‘ho, hum’. (Jackson 1986, 291; see also Jackson 1983)
Offhand, there is no contradiction in this story; it apparently describes a possibility, and, moreover, a possibility in which someone knows all the physical facts and yet does not know what it’s like to see something red. Hence, on the face of it, K1 is true. The rationale for K2 derives from the idea that materialism – at least in its simplest form1 – is the thesis that every fact is a physical fact. Suppose that every
fact is a physical fact; then if you know every physical fact, you know every fact. Contrariwise, if you know every physical fact but do not know every fact, then some fact is not a physical fact. (Compare: If every piece of fruit in the box is an orange, then if you eat every orange in the box you have eaten every piece of fruit. Contrariwise, if you eat every orange but not every piece of fruit, then some piece is not an orange.) But Mary is apparently someone who knows every physical fact but does not know every fact; hence some facts are not physical and materialism is false. Hence, on the face of it, K2 is true. The knowledge argument is one of those beautiful arguments in philosophy that is simple on the surface but is extremely rich and intricate underneath. In consequence, it is almost impossible within the confines of a single paper to review and properly discuss the solutions that have been offered to it.2 In what follows, therefore, I am going to focus on just one line of response, a response that starts from various observations about the semantics of the expression ‘know what it’s like’ – I will call it the knowing-what-it’s-like response. I should say straightaway that the knowing-what-it’s-like response is not my own. In fact, as we will see, I am convinced it is unsuccessful, and that the real problems with the argument lie elsewhere. Nevertheless, the response is extremely interesting and suggestive, and has considerable prima facie plausibility. In addition, so far as I know, it has no defenders in the contemporary literature, though suggestions similar to it certainly do exist, which is a point I will expand on at the end of the discussion. In short, the knowing-what-it’s-like response has not been given a fair shake. My aim is to give it that shake.
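The contrapositive reasoning behind K2 can be put formally, on the simple reading of materialism as the thesis that every fact is physical. The following is a minimal sketch in Lean; the names `Fact`, `physical` and `knows` are placeholders introduced here for illustration, not part of the original argument:

```lean
-- `Fact` is a type of facts; `physical f` says f is a physical fact,
-- `knows f` says the subject (Mary) knows f.
variable {Fact : Type} (physical knows : Fact → Prop)

-- If materialism holds and the subject knows every physical fact,
-- then the subject knows every fact.
theorem knows_all
    (mat : ∀ f, physical f)
    (kp : ∀ f, physical f → knows f) :
    ∀ f, knows f :=
  fun f => kp f (mat f)

-- Contrapositive: if the subject knows every physical fact yet some
-- fact is unknown, that fact is not physical, so materialism is false.
theorem not_materialism
    (kp : ∀ f, physical f → knows f)
    (f : Fact) (unk : ¬ knows f) :
    ¬ physical f :=
  fun hp => unk (kp f hp)
```

This mirrors the orange/fruit analogy in the text: `not_materialism` is the step from Mary's knowing all the physical facts, but not every fact, to the conclusion that some fact is not physical.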
2 Interrogative versus free relative readings of ‘Knowing What it’s Like’ Both premises of the knowledge argument concern the idea of knowing what it’s like to see something red, or, more accurately, not knowing what it’s like to see something red. But what is it in general to know what it’s like to do or be something? It is this question that lies at the heart of the knowing-what-it’s-like response. In Consciousness and Experience, W. G. Lycan (1996, pp. 92–3) discusses this issue, and says the following: Indirect-question clauses are closely related to ‘that’ clauses, both in meaning and grammatically. In particular, instances of ‘S knows wh-…’ are related to ‘S knows that…’: ‘S knows where X Vs’ is true in virtue of S’s knowing that X Vs
at p, where ‘p’ suitably names some place; ‘S knows when X Vs’ is true in virtue of S’s knowing that X Vs at t, where ‘t’ suitably names some time; ‘S knows who Vs’ is true in virtue of S’s knowing that N Vs, where ‘N’ suitably names some person. (‘Suitably’ in these formulations hides a multitude of technicalities, but they do not affect the present issue.)
He goes on: On this model, ‘S knows what it’s like to see blue’ means roughly ‘S knows that it is like Q to see blue’, where ‘Q’ suitably names some inner phenomenal property or condition.
Lycan is making three different points here. First, that knowing what it’s like is similar to knowing where, knowing who, knowing how and so on; that is, it is an instance of knowing-wh. Second, that knowledge-wh in general is a kind of propositional knowledge; hence, for example, when you know where something is, you know that such and such is the case. Third, that knowing what it’s like is a distinctive kind of knowledge-wh, as distinct from other cases of knowledge-wh as knowing-where is from knowing-who. In particular, Lycan says, just as you know where something is just in case you know a fact about a place, so you know what it’s like to see red just in case you know a fact about (what Lycan calls) some inner phenomenal property or condition. Are these points correct? As regards his first point, Lycan is clearly right. Knowing what it’s like to see red is as much a case of knowledge-wh as is, for example, knowing where your car keys are, or knowing who Hillary Clinton is. As regards his third point, Lycan is clearly wrong – or so I think and will assume in what follows, though admittedly the issues here are controversial. For one thing, as Hellie (2004, 359; see also Hellie 2007) notes, it is not clear what his suggestion is; in particular, it is not clear what ‘like’ is supposed to mean in his analysis. For another thing, as I have argued elsewhere (see Stoljar, 2016), ‘knowing what it’s like to F’ is very plausibly analysed as in context being equivalent to ‘knowing how it feels to F’, and this in turn is a sort of knowledge-how, though admittedly not the sort of knowledge-how that has attracted the attention of philosophers.3 What about the second point, that knowing-wh is always a case of knowledge-that? Here I think Lycan is right in one way and not right in another. In general, it is plausible that sentences that attribute knowledge-wh are ambiguous.
On one reading – which, following Jonathan Schaffer (2010), I will call the interrogative reading – they certainly do attribute propositional knowledge. But on a different reading – which, again following Schaffer, I will call the free
relative reading – they do not attribute propositional knowledge, or at any rate, there is nothing in the semantics which entails that they do. It is for this reason that Lycan is right in one way but not in another. To illustrate the distinction, consider a standard example of a sentence that attributes knowledge-wh – say, ‘Alice knows where the conference is’. On its interrogative reading, this sentence is true just in case Alice knows some fact that answers the embedded question ‘where is the conference?’4 Suppose for example that the conference is in Rio and Alice knows this. Then she knows a fact – namely, that the conference is in Rio – and it is in virtue of knowing this that she knows where the conference is. On its free relative reading, however, the sentence is true just in case Alice knows some particular place, namely, the place denoted by the noun phrase (or the free relative – hence the name of the reading) ‘where the conference is’. Suppose the conference is in Rio and Alice knows Rio. Then she knows a place – namely, the city of Rio – and it is in virtue of knowing this that she knows where the conference is. The distinction between these interpretations of ‘Alice knows where the conference is’ is owing to two underlying facts: (a) that in this sentence the complement clause – namely, ‘where the conference is’ – can be interpreted either as an interrogative or as a noun phrase; and (b) that the verb ‘know’ permits both sorts of interpretation. But not all verbs are like this. Consider ‘Alice wondered where the conference is.’ The verb ‘wonder’ forces the interrogative reading in its complement clause: what Alice wondered is what fact answers the question ‘where is the conference?’5 Or consider ‘Alice loves where the conference is.’ The verb ‘love’ naturally suggests – but does not quite force6 – the free relative reading. On that reading, what Alice loves is not a fact but a city, namely, the city where the conference is. 
If the interrogative/free relative distinction is explained in these ways, we should expect it to apply to all, or at least most,7 cases of knowledge-wh, and to knowing what it’s like in particular. And so it does. Consider the positive variant of the sentence that occurs in presentations of the knowledge argument: ‘Mary knows what it’s like to see something red’. On its interrogative reading, the sentence is true just in case Mary knows some fact that answers the question ‘What is it like to see something red?’ Since that question seems intuitively to ask ‘What experience do you have when you see something red?’, the sentence intuitively means that Mary knows some fact that answers this question. On its free relative reading, by contrast, the sentence is true just in case Mary knows the denotation of the noun phrase ‘what it’s like to see something red’. Since that
expression intuitively denotes a type of experience, on this reading the sentence intuitively means that Mary knows a type of experience, the one you have when you see something red.8
3 The knowing-what-it’s-like response Suppose now we agree that ‘Mary knows what it’s like to see something red’ has both an interrogative and a free relative reading. Then we may formulate the knowing-what-it’s-like response to the knowledge argument as having three parts: (a) if this sentence is ambiguous, its negation is likewise ambiguous, and so the knowledge argument itself has two versions; (b) neither version of the argument is persuasive; and (c) the original argument only seemed persuasive because these two versions had not been kept apart. To amplify on (a), suppose that the free relative reading is uniformly adopted. Then K1 is that it is possible for someone to know all the physical facts, and not know in the free relative sense what it’s like to see red. And K2 is that if this is possible then materialism is false. Let’s call this version of the knowledge argument, ‘KA-1’. Suppose now that the interrogative reading is uniformly adopted. Then K1 is that it is possible for someone to know all the physical facts, and not know in the interrogative sense what it’s like to see red. And K2 is that if this is possible then materialism is false. Let’s call this version of the knowledge argument, ‘KA-2’. To amplify on (b), KA-1 is unpersuasive because here K2 may easily be denied. On the free relative reading, to know what it’s like to see something red is to know a type of experience. Hence to fail to know what it’s like is to fail to know a type of experience. But on the face of it, one could fail to know a type of experience and yet not fail to know any particular fact about that type of experience. Perhaps, for example, knowing the experience requires more than knowing some set of facts about the experience; if so, one could know that set of facts and not know the experience. Compare: Perhaps knowing Rio requires more than merely knowing a set of facts about Rio; if so, one could fail to know Rio and yet still know that set of facts. 
The conclusion is that K2 is false on the free relative reading: from the fact that one knows all the physical facts but not what it’s like to see something red in the free relative sense, it does not follow that there is any fact one does not know; hence it does not follow that materialism is false. As regards KA-2, this is unpersuasive because here K1 may easily be denied. On the interrogative reading, to know what it’s like to see something red is to
know some fact that answers the question ‘What is it like to see something red?’ But when you focus on it, this requirement is very weak; all it takes is that Mary knows some fact – any fact – that answers the relevant question. But surely Mary knows some fact of this sort. She knows for example that to see something red is to detect via vision some distinctive property of the thing in question. She also knows that to see something red is to undergo a process that is rather like seeing a grey thing (something she has done in her room), or at any rate is more like seeing a grey thing than it is like many other things, for example playing the piano. The conclusion is that K1 is false on the interrogative reading: Someone who knows all the physical facts will know some fact that answers the question ‘What is it like to see something red?’ Hence, on the interrogative reading, such a person will know what it’s like. To amplify on (c), once we have distinguished KA-1 and KA-2, it is natural to say that K1 seemed plausible only because we had in mind the free relative reading of ‘knowing what it’s like’, and likewise that K2 seemed plausible only because we had in mind the interrogative reading. Once these two interpretations have been distinguished, however, the original argument stands revealed as a fallacy of equivocation, and is therefore implausible.
4 Two cul-de-sacs How successful is this response to the knowledge argument? As I have said, my own view is that it is unsuccessful. In explaining this reaction, it is helpful to look first at two possible criticisms that seem to me cul-de-sacs. Cul-de-sac 1 says that the response confuses semantics and metaphysics, or at any rate, semantics and psychology. On this view, the sentence ‘Mary knows what it’s like to see something red’ is true and attributes non-propositional knowledge, but the fact that makes this sentence true is a fact about propositional rather than non-propositional knowledge. More generally, this objection claims that the analysis of KA-1 presented above is misguided: It focuses on sentences when what we ought to focus on are the psychological facts those sentences report. However, while there is a distinction between semantics and metaphysics, it is implausible that it may be appealed to in this way. For suppose the sentence ‘Mary knows what it’s like to see something red’ is true. Then we may immediately infer that it is a fact that Mary knows what it’s like to see something red. (The underlying rationale for this is that if ‘S’ is true, then we may immediately infer that it is a fact that S.) But in what sense is this latter fact not
a fact about non-propositional knowledge? After all, given the analysis we have been operating with, the fact that Mary knows what it’s like to see something red just is the fact that Mary knows a type of experience – and that fact is a fact about non-propositional knowledge if anything is. Cul-de-sac 2 points out, in relation to a sentence such as ‘Alice knows Rio’, that it is hard to see that it can be true unless Alice knows various facts about Rio.9 Likewise, one might argue, it is hard to see that Mary can know what it’s like to see something red unless she knows various facts about seeing something red. But doesn’t the knowing-what-it’s-like response predict that she can? In fact, the knowing-what-it’s-like response predicts nothing of the sort, and this criticism gets things back to front. The situation we have been imagining is not one in which someone knows what it’s like to see something red and yet does not know any facts about seeing something red. Rather, it is a situation in which someone knows lots of facts about seeing something red, and yet does not know what it’s like to see something red. Likewise in the case of Alice and Rio, the analogous situation is not one in which Alice knows Rio, but – bizarrely – does not know any facts about Rio; it is rather one in which Alice knows lots of facts about Rio, and yet does not know Rio.
5 Two versions of Mary

Even if we accept that these two criticisms are no good, there are other more telling lines of thought against the knowing-what-it's-like response. The first of these distinguishes two versions of the Mary story. On the first version, we imagine that pre-release Mary fails to know what it's like in both of the senses we have isolated. Hence she fails to know what it's like (free relative sense) and fails to know what it's like (interrogative sense). On a natural development of this view, since Mary does not know in the free relative sense, and so does not know the type of experience in question, she fails to understand the experience, fails to possess the concept required to understand it, and so on. Moreover, on this version, there is a natural explanation for why she fails to know what it's like to see something red in the interrogative sense – namely, she fails to know in that sense because she does not even understand the fact or facts that answer the question. In other words, she fails to know in the interrogative sense because she fails to know in the free relative sense. On the second version, we imagine that pre-release Mary fails to know in only one of the senses we have isolated, the interrogative sense. Hence she fails
The Knowledge Argument and Two Interpretations of ‘Knowing What it’s Like’ 115
to know what it’s like in the interrogative sense but knows what it’s like in the free relative sense. On a natural development of this view, since Mary knows in the free relative sense, and so knows the type of experience in question, she understands the experience, has the concepts required to understand it and so on. Moreover, on this version, the fact that she fails to know what it’s like in the interrogative sense is not explained by her failing to understand the facts that answer the relevant question; she may understand them well enough. It is rather that she simply does not know these facts. Hence she fails to know in the interrogative sense even though she knows in the free relative sense. What does this distinction have to do with the knowing-what-it’s-like response? If we operate with the second version of the story, we may formulate a third version of the knowledge argument, a version different from the two we considered above; let’s call it ‘KA-3’. In this version, K1 says that is possible that someone knows all the physical facts and what it’s like to see something red (free relative sense) and yet does not know what it’s like to see something red (interrogative sense); and K2 says that if this is possible then materialism is false. And the problem KA-3 presents for the knowing-what-it’s-like response is that what this response says about the free relative sense of knowing what it’s like is irrelevant. In particular, while it may be true that KA-1 is unpersuasive in just the way the response says, it may nevertheless be that KA-3 is persuasive. If so, we have a version of the knowledge argument that evades the response we have been considering. It might be thought that while KA-3 evades the knowing-what-it’s-like response, it may be dismissed for independent reasons. 
Take a person who knows all the physical facts and understands the propositions that if true would answer a question like, 'What is it like to see something red?' Is it really possible that such a person will not know those answers? The response to this is 'Yes, it is possible' – or at any rate so a proponent of the argument may reasonably claim. One consideration in favour of this points out that even if pre-release Mary knows a type of experience, the type you typically have when you see something red, she may still not know that she will have that experience when she comes out of the room. She may reasonably wonder, for example, if she will have a different experience or none at all. From this point of view, the problem is not that she cannot distinguish the possible situation in which she will have a particular experience from the situation in which she will have a contrasting one. The problem is rather that she cannot tell, and nor does her impressive physical knowledge enable her to tell, which of these possibilities is actual.
It is worth emphasizing that the underlying point here – that the knowledge argument can be developed on the basis of the second sort of example – is well known in the literature on these matters. In some cases – this I think is true of Jackson's original presentation – it is simply assumed that the second version of the story is in play (see Jackson 1986). In other cases, the two Marys appear as two phases of a single temporal development of Mary (see Nida-Rümelin 1995). In still other cases, a distinction is made between two ways of telling the story of Mary, and hence two versions of the argument (see Stoljar 2005). However the issue is developed, it is a point well established in the literature that the knowledge argument can proceed (as we would put it here), even if Mary knows what it's like to see something red in the free relative sense. If so, there is no way that KA-3 can be dismissed.

It might also be objected that while KA-3 evades part of the knowing-what-it's-like response, it does not evade the other part. Part of that response focuses on the free relative reading of 'know what it's like', and certainly KA-3 avoids that. But another part focuses on the idea that, on the interrogative reading, it appears that Mary does indeed know what it's like to see something red. For as we have seen, on that reading, if you know a fact that answers the relevant question, you know what it's like, and Mary does plausibly know a fact of that sort. One might say that this criticism applies just as much to KA-3 as to KA-2. I think this point is a good one. What it shows is that the point about the two versions of Mary blocks only part of the knowing-what-it's-like response. To see how to block the other part, we need to consider another criticism of the response. It is to that other criticism that I now turn.
6 Mention-all versus mention-some

We have distinguished between the interrogative and the free relative readings of 'knowing what it's like'. But we should also distinguish, within the interrogative reading, two rather different possibilities. On the first, 'Alice knows where the conference is' is true just in case Alice knows some fact that answers the question 'Where is the conference?' On the second, 'Alice knows where the conference is' is true just in case Alice knows all facts that answer the relevant question. In the linguistics and philosophy of language literature, the first of these readings is often called a 'mention-some' reading, while the second is called a 'mention-all' reading.10 On the face of it there are sentences fitting both paradigms. To borrow and adapt slightly some examples discussed by Jason Stanley (see Stanley 2011,
115–22), in 'Hannah knows where to buy an Italian newspaper in New York', one is inclined to think that it is true just in case Hannah knows some answer to the embedded question. If she knew only that you can get an Italian newspaper at that place on Second Avenue, for example, that would be sufficient for her to have the knowledge in question. By contrast, in 'John knows which Beatles albums are good ones', one is inclined to think it is true just in case John knows every answer to the relevant question. If he knew only that Revolver is a good album, for example, that would not be sufficient for him to have the knowledge in question; rather he must know of each good Beatles album that it is a good one.

How does the mention-all/mention-some distinction bear on the response we have been examining? So far we have uncritically adopted a mention-some reading of 'knowing what it's like'. For we have assumed that 'Mary knows what it's like to see something red' is true just in case she knows some fact that answers the embedded question. Moreover, this assumption played an essential role in the criticism of KA-2 described above and, by extension, of KA-3 as well. For that criticism pointed out that pre-release Mary knows some fact that answers the question 'What is it like to see something red?', and then drew the conclusion that Mary knows what it's like to see something red, contrary to the first premise of KA-2. Clearly that inference is reasonable only if the mention-some reading is in play. But suppose instead that the mention-all reading is in play. In that case, 'Mary knows what it's like to see something red' is true just in case she knows every fact that answers the relevant question. Now we cannot infer from the premise that Mary knows some fact to the conclusion that she knows what it's like to see something red. Indeed, once the mention-all reading is in play, if so much as one answer eludes her, she will fail to know what it's like. 
The upshot is that the knowing-what-it’s-like response is unconvincing if ‘knowing what it’s like’ has a mention-all reading. One might respond to this by insisting that ‘Mary does not know what it’s like to see something red’ does not have a mention-all reading, or at least not a legitimate one. The problem with this is that the mention-some/mention-all issue is a hugely contested empirical matter in linguistics and philosophy of language. As such, it would be very ill-advised in one’s philosophy of mind to go out on a limb by insisting on the mention-some reading. Alternatively, one might point out that if insisting on a mention-some reading is ill-advised, insisting on a mention-all reading for the same reason is likewise ill-advised. But doesn’t the criticism of the knowing-what-it’s-like response we have just made precisely depend on us doing so? Although there is of course truth in this objection, I think we may formulate our criticism of the knowing-what-it’s-like response without taking a stand
on any tendentious empirical issue. In particular, in view of the material just introduced, we may formulate a fourth and final version of the knowledge argument – let us call it ‘KA-4’. On this version, K1 is that it is possible for someone to know all the physical facts and know in the free relative sense what it’s like to see something red, and yet not know some fact which answers the question ‘What is it like to see something red?’; and K2 is that if this is possible then materialism is false. The problem that this version of the argument presents for the knowing-what-it’s-like response is that nothing in that response says it is unpersuasive. The material about the free relative reading remains sidelined in the case of KA-4, just as it did for KA-3. The material about the interrogative reading has no effect on KA-4, since the argument operates not with ‘Mary does not know what it is like to see something red’, but with the distinct, but closely related, ‘Mary does not know a fact that answers the question, “What is it like to see something red?” ’. As we have seen, if the mention-all reading is in play, then the second of these entails the first. But even if that reading is not in play, the second by itself causes a problem for materialism, as KA-4 illustrates.
7 Overall assessment

I am now in a position to formulate my overall assessment of the knowing-what-it's-like response to the knowledge argument. According to this response, reflections on the semantics of 'knowing what it's like' reveal two versions of the knowledge argument, neither of which is persuasive. An attractive feature of this response is that the observations it is founded on are plausible, and it is surely a good idea in general to distinguish various versions of the knowledge argument. However – and here is the main problem with the response – when we think through the observations about 'knowing what it is like', it emerges that there are many more versions of the knowledge argument than the two with which the response operates. Moreover, at least one of these versions is such that nothing in the knowing-what-it's-like response undermines it. It is for this reason that this response is ineffective against the knowledge argument.

How should one proceed from this point? One option would be to explain what response is effective against the knowledge argument. In other work, I have argued that what is wrong with the argument is the assumption that Mary knows all the physical facts (see, for example, Stoljar 2006). Rather than trying to defend that proposal here, I want instead to return to the point mentioned at the outset, namely, that the knowing-what-it's-like response has not been defended in the
literature. To illustrate this claim, I will finish the chapter by briefly comparing the proposal we have been considering with two related but different proposals, the first due to David Lewis, and the second to Michael Tye.
8 Lewis's view

The basic shape of Lewis's response to the knowledge argument is well known.11 He starts with a distinction inherited from Gilbert Ryle between propositional knowledge and knowledge-how, which is to say knowledge reported by sentences of the form 'S knows how to F'. He goes on to suggest that Mary is best described as gaining a sort of knowledge-how when she comes out of her room, the reason being that she gains some abilities to imagine, recollect and think about experiences that she did not have before and that these abilities are best thought of as a sort of knowledge-how. Finally, he says, the version of the argument that invokes knowledge-how is implausible, and the reason – to put it in our terms – is that K2 is false: the mere fact that one can know all the physical facts and lack an ability or lack some know-how does not in any way threaten materialism since it does not entail that you fail to know any fact.

The general structure of Lewis's response closely resembles that of the knowing-what-it's-like response. In particular, both responses involve the suggestion that there are two kinds of knowledge and that the argument illegitimately conflates them. What distinguishes them is that Lewis's view relies on the distinction between knowing how and knowing that, while the knowing-what-it's-like response does not rely on that distinction. There is much to say about Lewis's view, but here I will focus on just one observation. A very influential line of attack against Lewis is suggested in the passage from Lycan quoted above (cf. Lycan 1986), and has been developed in detail by Stanley and Williamson (see Stanley and Williamson 2001; see also Stanley 2011, Cath 2009). According to this criticism, Lewis's response fails because the Rylean assumption it is founded on is mistaken: Knowledge-how is itself a sort of propositional knowledge. To know how to ride a bike, for example, is to know, of some way to ride a bike, that one can ride a bike that way. 
In light of our discussion of the knowing-what-it’s-like response, it is possible to defend Lewis against this criticism, or at any rate to imagine a slightly altered version of what he said that evades the objection. What Lewis should have said, one might say, is that, in gaining the abilities that she does, Mary is best described, not as knowing how to do something, but as knowing what it’s like in
the free relative sense. If so, Lewis's basic position may be recast in a non-Rylean rather than a Rylean mould.12 How far does this recasting of Lewis's view do violence to his underlying intentions? There is no doubt that Lewis did formulate his view in terms of knowing how, and so to drop that element of his view is clearly to depart from what he said. But there are reasons also for thinking he would be quite happy with the departure.13 At one point Lewis describes his view about what happens when Mary comes out of her room in the following terms:

Materialists have said many things about what happens in such a case. I myself, following Nemirow, call it a case of know-how; Mary gains new imaginative abilities. Others have said that Mary gains new relations of acquaintance, or new means of mental representation; or that the change in her is just that she has now seen colour. These suggestions need not be taken as rival alternatives. (1994, 293–4)
It is true that Lewis does not quite mention the knowing-what-it's-like response here, but it is natural to suppose that his attitude to it would be similar, that it does not need to be seen as a rival to his own. If so, he would be free to adopt it and so to drop the Rylean element that gets him into trouble. Of course, that Lewis's view can be understood or recast as the knowing-what-it's-like response does not mean that it is successful. As we have seen, the problem with the knowing-what-it's-like response is that the KA can be reformulated to avoid it. The same is true of Lewis's view on the suggested reformulation. Still, our reformulation does at least show that Lewis's proposal can be understood so as to withstand perhaps the most prominent objection to his account.
9 Tye's view

Turning to Michael Tye's view,14 he begins by drawing a distinction between knowing what it's like, on the one hand, and knowing the phenomenal character of the experience, on the other. The former, Tye thinks, is a case of propositional knowledge. In particular, it involves knowing that something red is like this, where the demonstrative 'this' picks out (in the case we are focusing on) a particular property of the state of seeing something red. The latter, Tye says, is a case of non-propositional knowledge ('object knowledge', he calls it); in particular, it involves being aware or conscious of some thing or property. How does Tye use these ideas to respond to the knowledge argument? One might have expected him to draw a distinction between propositional knowledge
and object knowledge, and argue that K2 is false if object knowledge is in play, while K1 is false if propositional knowledge is in play.15 However, Tye does not quite say this. He certainly thinks that K2 is false if object knowledge is in play, but he argues in addition that it is false even if propositional knowledge is in play. The reason is that, according to Tye, one should draw a further distinction between two ways of conceiving of propositional knowledge, and related to this, two ways of conceiving of knowing the answer to a question. On the first, which we may call the modal conception, to know or learn something requires 'the addition of a piece of knowledge that shrinks the set of worlds consistent with what we know' (2010, 307). On the second, which we may call the non-modal conception, to know or learn something does not require this, but merely involves 'coming to think new thoughts' (2010, 307). The importance of this distinction for Tye is that, if we operate with the non-modal conception, we may allow that Mary knows what it is like in the interrogative sense and at the same time deny K2.

Tye's response to the knowledge argument appeals to two kinds of knowledge, and in that sense resembles the response we have been looking at. But his approach is also distinct from it in two main ways: (a) He relies on the distinction between knowing what it is like and knowing the phenomenal character of an experience, whereas the response we have been considering does not; and (b) he relies on the idea that learning what it is like is not learning a new fact. Clearly there is much to say about all of this, but here I will limit myself to two observations, one about (a), and the other about (b). As regards (a), Tye motivates the distinction between knowing what it's like and knowing its phenomenal character in the following way. First, he suggests that the following sentences are consistent16:

(2a) Paul knows Ann.
(2b) Ann is who Sebastian loves.
(2c) Paul does not know who Sebastian loves.

He then argues that the reason these are consistent is that one cannot preserve truth-value when substituting co-referring expressions within the scope of 'know'. Hence, even if Ann is who Sebastian loves, it does not follow, from the fact that he knows Ann, that Paul knows who Sebastian loves. Finally, Tye says, the same thing applies in the what-it's-like case. Even if the phenomenal character of an experience is what it is like to have it, it does not follow that someone knows the phenomenal character from the fact that they know what it is like.

However, in the light of our earlier discussion, it is fairly clear that this line of thought does not capture what is going on in examples like (2a–c). As we
have seen, (2c) is ambiguous. On its interrogative reading, at least on the mention-some reading, it means that Paul does not know a fact that answers the question, 'Who does Sebastian love?' On that reading it is certainly consistent with (2a–b). On its free relative reading, however, it means that Paul does not know the person who is denoted by the noun phrase 'who Sebastian loves'. On that reading, it is inconsistent with (2a–b). So it is not in general true that (2a–c) are consistent; rather, they are consistent on one reading and inconsistent on another. Moreover, the reason that (2a–c) are consistent (on the relevant reading) does not have to do with substitution within an opaque context. It has rather to do with the fact that Paul can know Ann yet fail to know an answer to the question, 'Who does Sebastian love?', even though Ann is in fact who Sebastian loves.

Of course, that Tye does not properly capture what is going on in (2a–c) does little to undermine his more general view. In particular, it is possible to say, not that there is a distinction between knowing what it's like and knowing the phenomenal character of the experience, but rather that there are two sorts of things one has in mind by 'knowing what it's like': One is knowing the phenomenal character, and the other is knowing an answer to a question. Understood this way, (a) above plays no role in Tye's position. What is important is (b) above, namely, the idea that there are distinct notions of propositional knowledge, and that in consequence K2 can be denied.

Turning then to (b): the modal/non-modal distinction, and the response to the knowledge argument founded on it, are familiar in the literature, and not something I can assess here.17 It is worth noting, however, that the material about 'knowing what it is like' does not seem to affect its plausibility in any way. 
Tye is I think correct to say that if materialism is true, then it can’t be that Mary comes to learn what it is like in the modal conception – at any rate she cannot if she knows all the physical facts. From this point of view, it is useful to view many responses to the knowledge argument as various attempts to undermine the impression – for it is a natural impression – that Mary does indeed learn what it is like in this sense. One way in which one might try to do this is to draw a distinction between propositional and non-propositional knowledge. That is what Lewis tries to do, for example, and that is what the knowing-what-it’s-like response tries to do as well. As we have seen, it is unlikely that anything along these lines will succeed. But if I understand Tye correctly, while he does draw a distinction along these lines it is not this element of his view that is crucial to his response to the knowledge argument. In that sense, Tye’s response is really rather different from the knowing-what-it’s-like response.
10 Conclusion

In this chapter I have considered a response to the knowledge argument that is founded on the idea that 'knowing what it's like' is ambiguous between an interrogative reading and a free relative reading. I have argued that this response is unsuccessful since the basic idea behind the knowledge argument can be formulated to avoid it. I have also distinguished it from two related proposals in the literature, one by David Lewis, the other by Michael Tye.
Acknowledgement

I am indebted in what follows to work by Jonathan Schaffer and Jason Stanley, as well as conversations (in some cases from years ago!) with them. More recently, conversations with Ryan Cox, Erick Llamas and Don Nordblom have been extremely helpful.
Notes

1 Physicalism may come in forms much more complex than this, but we can afford to set them aside here. For some discussion of these forms, see Stoljar (2010; 2015).
2 For extensive discussion of the argument, as well as information about its background, see Ludlow, Nagasawa and Stoljar (2004).
3 When philosophers talk about knowing how, they typically restrict attention to cases attributed by sentences in which 'how' is immediately followed by an infinitive verb rather than a finite clause, as in 'Bill knows how to ride a bike'. But many cases of knowledge-how are not like this, for example, 'Caryl knows how Stalin was to his generals' or 'David knows how John got home'.
4 This is a simplified presentation of the interrogative reading of the sentence, in at least the following ways. First, as we will see later, there is a distinction within the interrogative reading between so-called mention-some and mention-all readings. Second, it may be that on either the mention-some or the mention-all reading, the quantifiers contained in the sentence need some sort of contextual restriction. Third, it may be that Alice needs to know the relevant fact not as such but in a certain way, e.g. under the right mode of presentation, or as involving the right concept or mental representation. I will mention some of these complications as they arise in what follows but for the most part I will leave them in the background. For further discussion, see Stanley and Williamson (2001), Stanley (2011), Cath (2009) and Tye (2010).
5 This point is emphasized in Schaffer (2010).
6 It does not force it, since one can love a fact: 'I love that the conference is in Rio', Alice might say. Or even, 'I love the fact that the conference is in Rio.' (What's more, one can love that fact even if one does not love Rio.)
7 An exception is cases in which the 'wh'-word is followed by an infinitival clause of the sort noted in fn.3 above.
8 As Schaffer (2010) notes, the distinction is a common one in the linguistics literature.
9 See Crane (2012) and Tye (2012) for further discussion of this sort of view.
10 For a philosophically accessible discussion of this distinction, see Stanley (2011). As Stanley makes clear, examples of the sort discussed in the text are in turn taken from the linguistics literature.
11 See Lewis 1988. As he makes clear, his account follows that suggested in Nemirow 1980.
12 One response one might make on Lewis's behalf here is that knowledge-how, like all sorts of knowledge-wh, has two readings – a propositional reading and a non-propositional reading. But the problem with this is that knowledge-wh in which the wh is followed by an infinitive verb seems to be an exception.
13 For further reasons to think that Lewis would not disagree with our formulation, see Stoljar (2015).
14 I will concentrate in the text on the position presented in Tye (2010), but see also Tye (2009, 2012).
15 This would be to interpret Tye as advancing a so-called 'acquaintance hypothesis' similar to that developed in (e.g.) Conee 2004 (see also part IV of Ludlow, Nagasawa and Stoljar 2004). Of course, the proposal I have been interested in in this paper is closely related to the acquaintance hypothesis as well, but I won't try to pursue that further connection here.
16 I have maintained Tye's numbering.
17 For extensive discussion of this sort of view see the papers on the 'old-fact-new-mode' approaches to the knowledge argument in Ludlow, Nagasawa and Stoljar (2004).
References

Cath, Y. (2009). 'The Ability Hypothesis and the New Knowledge How', Noûs, 43 (1), 137–56.
Conee, E. (2004). 'Phenomenal Knowledge', Australasian Journal of Philosophy, 72, 136–50. Reprinted in There's Something About Mary, edited by Peter Ludlow, Yujin Nagasawa and Daniel Stoljar, 197–215, Cambridge, MA: The MIT Press.
Crane, T. (2012). 'Tye on Acquaintance and the Problem of Consciousness', Philosophy and Phenomenological Research, 84 (1), 190–8.
Hellie, B. (2004). 'Inexpressible Truths and the Allure of the Knowledge Argument', in There's Something About Mary, edited by Peter Ludlow, Yujin Nagasawa and Daniel Stoljar, 333–64, Cambridge, MA: The MIT Press.
Hellie, B. (2007). '"There's Something It's Like" and the Structure of Consciousness', Philosophical Review, 116 (3), 441–63.
Jackson, F. (1982). 'Epiphenomenal Qualia', The Philosophical Quarterly, 32, 127–36.
Jackson, F. (1986). 'What Mary Doesn't Know', The Journal of Philosophy, 83, 291–5.
Lewis, D. (1988). 'What Experience Teaches', Proceedings of the Russellian Society, 13, 29–57. Reprinted in N. Block et al., eds. (1997). The Nature of Consciousness: Philosophical Debates, 579–96, Cambridge, MA: The MIT Press. References are to the reprinted version.
Lewis, D. (1994). 'Reduction of Mind', in S. Guttenplan, ed., A Companion to the Philosophy of Mind, 412–31, Oxford: Blackwell.
Ludlow, P., Nagasawa, Y. and Stoljar, D., eds. (2004). There's Something About Mary: Essays on Phenomenal Consciousness and Jackson's Knowledge Argument, Cambridge, MA: The MIT Press.
Lycan, W. G. (1986). Consciousness and Experience, Cambridge, MA: The MIT Press.
Nemirow, L. (1980). 'Review of Nagel's Mortal Questions', Philosophical Review, 89, 475–6.
Nida-Rümelin, M. (1995). 'What Mary Couldn't Know: Belief about Phenomenal States', in Conscious Experience, edited by Thomas Metzinger, 219–41, Thorverton: Imprint Academic.
Schaffer, J. (2010). 'Knowing the Answer Redux', Philosophy and Phenomenological Research, 78 (2), 477–500.
Stanley, J. and Williamson, T. (2001). 'Knowing How', The Journal of Philosophy, 98, 411–44.
Stanley, J. (2011). Know How, Oxford: Oxford University Press.
Stoljar, D. (2005). 'Physicalism and Phenomenal Concepts', Mind and Language, 20 (5), 469–94.
Stoljar, D. (2006). Ignorance and Imagination, New York: Oxford University Press.
Stoljar, D. (2010). Physicalism, London: Routledge. 
Stoljar, D. (2015). 'Lewis on Experience and Materialism', in Jonathan Schaffer and Barry Loewer, eds., A Companion to David Lewis, 519–32, Malden: Wiley Blackwell.
Stoljar, D. (2016). 'The Semantics of "What it's Like" and the Nature of Consciousness', Mind, 125 (500), 1161–98.
Tye, M. (2009). Consciousness Revisited: Materialism without Phenomenal Concepts, Cambridge, MA: The MIT Press.
Tye, M. (2010). 'Knowing What it's Like', in Knowing How: Essays on Knowledge, Mind, and Action, edited by John Bengson and Mark Moffett, Oxford: Oxford University Press.
Tye, M. (2012). 'Précis of Consciousness Revisited', Philosophy and Phenomenological Research, 84 (1), 187–9.
8
Conscious and Unconscious Mental States

Richard Fumerton
The question of whether there are or even could be unconscious mental states has long been controversial. The answer to the question might seem to depend critically on how one understands a mental state. Behaviourists and functionalists would seem to have relatively little difficulty recognizing the distinction between mental states that are conscious and those that are not. Classical dualists might seem to have more of a problem. After all, dualists are sometimes moved to their dualism precisely because they claim that they know the intrinsic character of mental states through a special access they have to those states and their character. In this chapter, I'm primarily interested in the extent to which dualists can make perfectly good sense of occurrent but unconscious mental states.
1 Mental states

Before we explore the possibility of distinguishing conscious from unconscious mental states, one might suppose that we should define what makes something a mental state. A living body, including a brain, is in indefinitely many states at a given time. And those states are in constant flux. The vast majority of the body's physical states are not mental states. The vast majority of brain states are not mental states. So what makes a given state a mental state? There is no shortage of answers. But many of those answers beg important questions. So, for example, one might define a mental state as a state of which we are conscious. That would, however, immediately beg the question as to whether there might be mental states of which we are not conscious. Alternatively we could define a mental state as the kind of state of which we could become conscious. I'll argue that this might come closer to the truth, but we will need to disambiguate the critical modal operator, and, even after we do, one might still worry that we beg the
question as to whether there might be a kind of mental state that human beings are incapable of accessing. Brentano famously tried to understand the mental in terms of the intentional. Informally we might say that an intentional state is always 'directed' at some object – its content, that which the state is, in some sense, 'about'. That is hardly sufficient for a state's being mental, however, as there are many states of the body that are essentially relational. The supposed special mark of the intentional is that the 'object' of the state need not exist. So, for example, I might want world peace, fear ghosts, believe that there are unicorns, and seem to see a leprechaun even if there is no world peace, and there are no ghosts, unicorns or leprechauns. Put linguistically, a sentence describing an intentional state contains a transitive verb and a term that is the object of that transitive verb, but the sentence can be true even if the object expression doesn't refer to any existing thing. Taking the grammatical structure of such sentences at face value, Meinongians were driven to the conclusion that we should recognize that there are intended objects that do not exist.1 But determined to avoid such exotic ontologies, others conclude that the 'surface' grammar of such sentences must be misleading. So-called adverbial theorists, for example, suggest that the grammatical object of the sentence describing an intentional state is best construed as a 'disguised' adverb modifying the nature of the state described by the verb. So just as dancing a waltz is a certain way of dancing, so fearing ghosts is just a certain way of 'fearing'. Feeling anxious, feeling pain, feeling happy and feeling euphoric are just ways of feeling. We can't reach a conclusion about the correct analysis of intentionality in this chapter.
But even if we suppose that we have a clear enough understanding of intentionality through the need to understand the ‘intentional inexistence’ of some intentional states, it is still not clear that we should understand mental states in terms of intentionality. There is at least prima facie plausibility to the claim that one can be depressed without being depressed about anything, one can be anxious without being anxious about anything, one can be happy without being happy about anything, and one can be sad without being sad about anything. Yet most philosophers want to count depression, anxiety, happiness and sadness as paradigmatically mental. One can take feeling pain to be a feeling directed at its object, the pain, but as we saw above one might alternatively just think of pain as that familiar kind of feeling that we have all, at one time or another, experienced. Many these days take the data received through the five senses as almost paradigmatically intentional. Visual experience, it is argued, represents the physical world (correctly or incorrectly) as being a certain way, and so do
all the other sensations. But this was by no means the received view in the history of philosophy. To be sure, many of the moderns talked about sensation representing the physical world, but they also talked about sensations as if they were impressions. The model they often seemed to have in mind was that of the signet ring leaving an impression on wax. From the impression (they thought) one can read off some of the characteristics of that which left the impression. In the same way, from a track left in the mud, the experienced hunter can often determine what kind of animal left it. But in this sense, sensations were taken to be signs of physical objects and their characteristics, and one thing can be a sign of something else without its standing in a genuine intentional relation to that something else.2 We could identify the mental with the non-physical. I might be happy with that, but I don’t want all of the physicalists to stop reading this chapter firm in their conviction that dualism is a quaint relic of a distant philosophical past. We could try to find some sort of functional description that captures all and only mental states, but I wouldn’t hold my breath waiting for such a description. We could identify the mental with the subject of certain propositions that are known infallibly, but it is a matter of some controversy whether any contingent propositions are supported by infallible justification, and it is even more controversial to suggest that all mental states are known in that way. If we are having difficulty finding an uncontroversial characterization of mental states, is there any other way we might proceed? I think there is, though it lacks elegance. We have already started to give examples of paradigmatic mental states. We could provide a list of such states, a list that is long enough to give the reader a feel for the kind of state that is the subject matter of this chapter. 
Pain and pleasure, sensations (visual, tactile, auditory, olfactory, gustatory, kinaesthetic), thoughts (including beliefs, seeming to remember, and imagining), fears, desires, hopes, feelings of anxiety, euphoria, depression, love, hatred, jealousy, anger – all these and many more are paradigmatic mental states. If you don’t like some on the list, substitute for the states I use as examples in the discussion below your favourite alternatives.
2 Occurrent and dispositional mental states

We would do well to begin our discussion of whether there can be unconscious mental states by distinguishing carefully the controversy over whether there can be occurrent but unconscious mental states from the question of whether
we can make sense of dispositional mental states of which we are unaware. If we can be guided at all by the way we talk, it looks as if it is virtually a datum that we can correctly describe people as having certain psychological characteristics that are in some sense dispositional. I can truly say that my colleague Evan believes that there are universals and that my granddaughters and my wife both fear spiders. I know people who are jealous of other people and I can accurately describe their jealousy right now. Despite being able to truly describe all of these people in these ways, it would be sheer coincidence if Evan were actually entertaining some proposition about universals right this minute, that my granddaughters were actually in a state of fear directed at some spider they see, or that the jealous people I referred to above were thinking about those they are jealous of and feeling pangs of jealousy at the very moment I am describing that jealousy. It is tempting to think that one can describe these non-occurrent mental/psychological properties in terms of dispositions to be in the relevant occurrent mental state under certain circumstances. It is further tempting to suppose that one can capture the content of a claim about a disposition through the use of a subjunctive conditional. On this approach, it is true that Evan believes that there are universals just in the sense that if he were to consider the question of whether there are universals he would (in the occurrent sense) assent. Those who fear spiders are such that if they were to encounter a spider, they would feel fear. People who are jealous of others would feel certain pangs of jealousy towards those others were they to think of their success (for example). There are at least two problems with this approach to understanding dispositions.
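The subjunctive-conditional analysis just described can be put schematically. The notation below is my gloss, not the author's own; read 'A \boxright C' as the subjunctive conditional 'if A were the case, C would be the case'.

```latex
% Schematic gloss of the subjunctive-conditional analysis of
% dispositional mental-state ascriptions (my notation, for illustration).
%
% Dispositional belief:
\mathrm{Believes}(S, p) \;\equiv\; \mathrm{Considers}(S, p) \;\boxright\; \mathrm{Assents}(S, p)
%
% Dispositional fear of spiders:
\mathrm{FearsSpiders}(S) \;\equiv\; \mathrm{EncountersSpider}(S) \;\boxright\; \mathrm{FeelsFear}(S)
```

The two problems discussed below can then be stated crisply: the right-hand side can be true of someone who has never yet been in the state (the conditional holds of a would-be first-time believer), and the antecedent can alter the very circumstances the ascription was meant to describe.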
On an intuitive level the view doesn't allow us to capture the critical distinction between someone's already being in the relevant state and someone's being such that they would, for the first time, acquire the mental state were they to consider various issues or be prompted by relevant stimuli. So when I describe Evan as believing that there are universals, I'm not merely saying that if he were to consider the question (perhaps for the first time) he would answer it in the affirmative. That might be true of Evan, but we need to account for the fact that he now believes the proposition in question. He also believed it yesterday, the day before yesterday, and, in fact, has believed it for as many years as I have known him. What is true of belief is also true of other psychological states. If I've never encountered a snake in my life, nor even acquired the concept of a snake, it seems odd to suppose that I can be truly described as fearing snakes. And that is so even if it is also true that, were I to think about snakes (or encounter snakes), I would react with fright. Person A might never have been jealous of another
person B even if, were they to dwell on B's successes, feelings of jealousy might arise for the first time. Robert Shope (1979), in a classic discussion of what he called the conditional fallacy, underscores a related point in a particularly persuasive manner. If we analyse the truth-conditions for the ascription of a mental state in terms of subjunctive conditionals, we will be driven to absurd conclusions. Suppose that Jones is in a deep coma with no mental life. It seems that we can truly describe Jones as being such that if he were to consider the question of whether he is in such a coma, he would believe that he wasn't. But it surely isn't correct to suggest that while in the coma Jones believes that he is not in the coma. The problem seems to be that the conditions described in the antecedent of the subjunctive 'disturb' in a problematic way the very conditions under which we wanted to ascribe the belief. The problem in some respects is not unlike the problem (at least on one interpretation) of ascribing simultaneously both position and momentum to an electron. Any attempt to measure the position disturbs the conditions that make possible measurement of the momentum, and vice versa. A more mundane example is the problem anthropologists encounter trying to study the normal behaviour of people in a community when the very presence of the anthropologist is likely to disturb at least some normal patterns of behaviour. While the technical problem is difficult to solve, it still seems plausible that the truthmaker for most ascriptions of dispositions is whatever makes the relevant subjunctive conditional true. In the case of ascribing belief and other psychological states to people, we may also require that the ground of the disposition have been caused (in the right way3) by the person's once being in some occurrent state.
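Shope's coma case can be stated in a schematic notation (again my gloss, using '\boxright' for the subjunctive conditional):

```latex
% Let p = 'Jones is in a deep, dreamless coma'. Jones could consider p only
% if he were awake, so the nearest worlds in which the antecedent holds are
% worlds in which p is false. The subjunctive
\mathrm{Considers}(J, p) \;\boxright\; \mathrm{Assents}(J, \neg p)
% therefore comes out true, and the conditional analysis wrongly ascribes to
% the comatose Jones the belief that he is not in a coma: the antecedent
% 'disturbs' the very condition the ascription was meant to describe.
```

This makes vivid why the fix mentioned above appeals to the truthmaker of the conditional (the ground of the disposition) rather than to the conditional's truth itself.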
It isn’t difficult to understand how a person can be in a dispositional mental state without knowing that he or she is in it – without being in any way conscious of it. Even if the ground of the disposition were caused by the person’s having once been in the state, the person may well have forgotten that occasion. In general, one may have all sorts of dispositions of which one is unaware. If we are lucky enough, we might never have occasion to discover whether we are brave or cowardly, for example. We might never find ourselves in the relevant situation that triggers either the brave or cowardly behaviour. Contra Dummett (1978) that doesn’t mean that were we never to face danger we wouldn’t have the relevant dispositions. It just means that it can be very difficult to discover one’s disposition absent the relevant stimuli. Imagination may be some guide to what one would do in certain situations,4 but it probably isn’t a very good guide. It certainly isn’t an infallible guide.
3 Are there unconscious occurrent mental states?

If we define mental states in terms of dispositions to behave, or if we define mental states functionally, then it would seem that there should be no difficulty allowing for occurrent but unconscious mental states. On logical behaviourist grounds, one might argue that there really isn't a distinction between an occurrent mental state and a dispositional mental state, but I suppose one might introduce the notion of an occurrent mental state in terms of the manifestation of a disposition. The occurrent mental state will stand to the mental property as dissolving in a solution stands to solubility. There are two brands of functionalism. One takes the mental state or property to be identical with the second-order property of having a property, which property plays a certain causal role (a role the functionalist is obliged to define clearly). The other takes the mental state – or perhaps the occurrent mental state – to be the exemplification of that property5 which realizes the relevant role – more formally, the property that takes the value of the variable in the description of the second-order property. It has always seemed to me that the version of functionalism that is most consistent with arguments for the view is the first. Physicalists who have come to the empirical conclusion that all sorts of different organisms can feel pain even while in quite different brain states embraced functionalism over the mind–brain identity theory precisely because they can find a common denominator to all those creatures in pain. The common denominator just is something being in a state (exemplifying a property) which state is caused by damage to the body and in turn produces behaviour likely to be conducive to healing. The common denominator is not, by hypothesis, the relevant neural activity that plays that role in us, but doesn't play the role in a fish (assuming that fish feel pain).
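The two brands of functionalism can be put schematically (my notation, not the author's). Let R be the pain role: being caused by bodily damage and causing healing-conducive behaviour.

```latex
% Role functionalism: pain is the second-order property of having some
% property or other that occupies the role R.
\mathrm{Pain} \;=\; \lambda x.\, \exists P\,\bigl(\mathrm{Occupies}(P, R) \wedge P(x)\bigr)
%
% Realizer functionalism: pain is the first-order property that actually
% occupies R (in humans a neural property; in a fish, perhaps another).
\mathrm{Pain} \;=\; \iota P.\, \mathrm{Occupies}(P, R)
```

On the first reading, creatures with different realizers literally share the property of being in pain (the common denominator); on the second, 'pain' picks out different properties in different kinds of creature unless it is relativized to a kind.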
On either version of functionalism, what is essential to being in pain is being in a state that involves certain causes and effects. But if awareness is anything like introspective awareness, then it seems to follow that on a functionalist account of mental states, one can be in a mental state without being aware of the state one is in. I can’t tell through introspection what is causing that of which I am aware, nor can I tell what its effects are. A common objection to the above argument, one first presented to me by David Henderson in conversation, suggests that even if various conditions are essential to a thing’s having a property, it doesn’t follow that one can know that the thing has that property only if one knows that the thing has all of those properties essential to its having the property. David held out a dime, asking me
what coin it was. Naturally enough I characterized it as a dime. We both agreed that a given piece of metal has the property of being a US dime only if it originates in a certain way. Most of us couldn’t even describe the various features of the US constitution that created the US mint, and the actual process of minting coins. But at some point, people fully aware of the meaning of terms like ‘dime’ realize that there is a complex history that is essential to that piece of nickel (sadly no longer silver) actually being a dime. So, isn’t this a perfect example of how one can know relatively easily what feature an object has, without even thinking about the complex properties that might be essential to its having the feature? I’ve always felt that this sort of argument cuts precisely the other way, though the issue turns on complicated questions in the philosophy of mind, philosophy of language and epistemology. Certainly, children come to identify the various denominations of coin based on colour, size and shape. But children don’t fully grasp the meaning of terms. Children, for example, have a hard time understanding how red things can appear to be something other than red. But until a child grasps that the colour of a physical object can remain constant even as the appearance the object presented changes quite dramatically, the child doesn’t fully grasp the meaning of ‘red’ (at least as an adjective describing physical objects). In precisely the same way, until someone understands that there is a distinction between counterfeit money and real money, one hasn’t really grasped the full meaning of the terms we use to characterize various sorts of money. To know that Henderson was holding a dime, I really do have to have reason to believe that the piece of metal had the right ‘pedigree’. If I can’t come up with the relevant evidence, then so much the worse for the epistemic status of my belief. 
The above discussion at best scratches the surface of the relevant controversy. A great deal hinges on how one understands the concept of an essential property. Post Kripke (1980) and Putnam (1975), the idea that there is a kind of necessity not understood in terms of analyticity has become orthodox. And if these views about essential properties are correct, then it surely does seem odd to suppose that one needs to know that a thing has properties essential to its being of a given kind in order to know that the thing is of that kind. Kripke, for example, thinks that my genetic make-up is essential to my existence. I have no idea what my genetic make-up is, but, as Descartes correctly observed, I know with absolute certainty that I exist. The conclusion, however, should be that the old Aristotelian idea of essential property resurrected by Kripke, Putnam and others is the critical mistake. It is easy to imagine my moving to another body with a radically different DNA. I take that conceivability to be a reliable guide
to the contingency of the claim that this is my body. A full-scale defence of such a view, however, requires a successful argument against the new theories of reference and the essentialism that goes with those views.6
4 Dualism and the unconscious

Various forms of physicalism, then, seem to me to make unproblematic the idea of occurrent but unconscious mental states. The classical dualist (and also idealist), however, has a much more difficult decision on this score. Certainly, many of the moderns didn't seem to distinguish even between having an experience and being aware of it. They didn't even seem to allow that it makes sense to suppose that one could be in a certain mental state and not be aware of it. While he never discussed the ground of the modal claim in much detail, Berkeley, for example, famously asserted that in the world of ideas, the existence of the idea and our perception of it are identical (1954). To be, Berkeley said, is to be perceived. (To include mind among the things that exist, the slogan eventually became: to be is to be perceived or to perceive.) While Berkeley used the term 'perceived', he probably meant to claim only that there couldn't exist an idea of which you are unaware. Mill also had trouble even making sense of mental states occurring without our being aware of them:

Consciousness, in the sense usually attached to it by philosophers, – consciousness of the mind's own feelings and operations, cannot, as our author [Hamilton] truly says, be disbelieved. (EWH with Mill, 172)
And again: The facts which cannot be doubted are those to which the word consciousness is by most philosophers confined: the facts of internal consciousness; ‘the mind’s own acts and affections.’ What we feel, we cannot doubt that we feel. It is impossible to us to feel and to think that perhaps we feel not, or to feel not, and think that perhaps we feel. What admits of being doubted, is the revelation which consciousness is supposed to make (and which our author [Sir William Hamilton] considers as itself consciousness) of an external reality. (EWH with Mill, 168)
But why were so many of the moderns so sure that if one were in a mental state one would always realize that one is in the state? To begin one must be careful to distinguish two quite different claims that are sometimes run together –
perhaps both by Berkeley and Mill. At least some philosophers seem to use interchangeably the expressions ‘mental state’ and ‘conscious mental state’, or ‘mental life’ and ‘consciousness’. Even when they use the more complex expression ‘conscious mental state’, one sometimes gets the impression that they think that the adjective ‘conscious’ is redundant. If one means by conscious mental state, mental state of which one is aware, then the thesis that one can’t be in a conscious mental state without being aware of it becomes utterly trivial. Necessarily, states of which we are aware are states of which we are aware. As we’ll see shortly, it may not follow from this tautology that we have knowledge or even beliefs about everything of which we are aware. If we are to avoid begging the question of whether there can be mental states of which we are unaware (or unconscious), we shouldn’t begin our discussion by using ‘mental state’ and ‘conscious mental state’ as synonyms. Why would so many dualists be wary of allowing that there can be mental states of which one is unaware? Well, part of the answer is that a dualist (or an idealist) resists reduction of mental states to anything else. Indeed, many dualists will argue our concepts for at least some mental states are indefinable. One grasps what a mental state is and what kind of mental state it is through direct access to, or introspection of, the mental state in question. So, for example, as a dualist I would argue that there is no analysis of what it is to be in pain. I know what kind of state I have in mind when I think of pain. I know what it is for me to be in pain, and, because I understand full well what the property of being in pain is like, I have no difficulty forming an idea of what it would be for another person to be in pain. 
To admit that the property of being in pain is unanalysable (or primitive in the language preferred by some philosophers) is not to suggest, of course, that we can’t understand what it is to be in pain, or even teach another what ‘pain’ means. We can, in a manner of speaking, ‘ostend’ pain. We can’t, of course, literally point to it with a gesture, but we can produce in someone pain and get them to focus on the particular experience and its properties. It wouldn’t be the kindest way to introduce someone to the concept of pain, but if you profess not to know what I am talking about when I talk about the unanalysable property of being in pain, I can always hit your kneecap with a hammer and ask you to reflect upon the most dramatic change you notice in your experience. That, I can add, is the kind of experience I am talking about when I talk about pain.7 It may be logically possible to acquire concepts of mental states without ever actually being in the mental states. Jackson’s (1986) famous Mary argument (and Nagel’s (1974) thought experiment concerning what it is like to be
a bat) convince many that one will never grasp the nature of certain mental states without actually being in those states. But if one is a Humean and one is convinced that it is in principle possible for any kind of thing to cause another, indeed that it is in principle possible for something to come into existence ex nihilo, it is hard to see why it is inconceivable that someone acquires the concept of pain, for example, without ever experiencing pain. The only sort of view that seems to make that conceptually problematic is another Humean view about the nature of thought. Although the precise interpretation of Hume on this point is a matter of considerable debate, Hume sometimes seemed to suggest that ideas of the imagination (thoughts and even beliefs) are pale copies of that which they are thoughts or beliefs about. If he were correct, there is a sense, I suppose, in which whenever one has a thought of pain, one is in a state that is a kind of pain (though, thankfully, a not very vivid pain). If all that were true, then it would again follow that one who has never felt anything like pain could never form the thought of pain. It is an understatement, however, to suggest that this view of thought is highly controversial.8 Even if Hume is wrong, one might agree with what might be the main point of the Mary thought experiment. Human beings as we are presently constituted will never succeed in understanding what a mental state is like (intrinsically) without actually being in the mental state and becoming introspectively aware of its character. That’s how we grasp the nature of various mental states. We can’t even think of a kind of mental state of which we have never been aware.9 Again, we need to be careful lest we be tempted to embrace a terrible argument. In his Dialogues, Berkeley seemed to advance the following ‘master’ argument for the view that we can’t think of an object that exists of which we are unaware. 
You can try all you want to think of an object that exists of which you are unaware, but in the very act of thinking of it, you become aware of it in thought. Now it is undeniably true that necessarily everything you think of is thought of. It is not, however, necessarily true (or even remotely plausible) to suppose that necessarily everything that exists has, is or will be thought of. Of course, I can’t give you an example of something that exists that I haven’t thought of (directly or indirectly).10 For the same sort of reason I can’t establish through introspection that there are mental states of which I am unaware. It is a necessary truth that every mental state of which I have been aware through introspection is a state that I have been aware of through introspection. But it is still an open question as to whether every mental state I’ve been in is one of which I have been aware. I have argued for many years that we know the intrinsic character of our mental states better than we know the intrinsic character of any other contingent
entities. It is not even clear to me that one knows anything about the intrinsic character of physical objects. I have defended elsewhere (2013) the idea that our knowledge of, and even our ability to think about, the character of physical objects is restricted to their relational properties – the causal ‘powers’ they have to affect conscious beings in various ways. But nothing in the view that we can know through direct awareness or direct acquaintance the intrinsic character of mental states suggests that we can know through direct acquaintance everything there is to know about our mental states. Certainly, we can’t discover through introspection the causes or the effects of our mental states. We may not even be able to discover through introspection all of the non-relational properties of current states of which we are aware. The problem of the appearance presented by the speckled hen and many others like it suggest strongly that a complex mental state of which we are in some sense aware may still have features of which we are ignorant.11 So there is a sense in which we can ostend various kinds of mental states. But can we ostend direct awareness or acquaintance itself – the relation we have to mental states? If we can then we might be better positioned to assess the question of whether we can make sense of mental states of which we are unaware. But how might one ostend acquaintance? Well, think again about pain. Most of us remember occasions on which we clearly felt pain – pain of which we were aware – but where we ceased to notice the pain when we became distracted by something else. We had a bad backache, perhaps, and became engrossed in a conversation so interesting that we went for a period of time without even noticing the pain. There are, of course, two possibilities. One is that while we were distracted the pain actually ceased. The other is that the pain continued, but we simply were unaware of it for a period of time. 
It seems to me that the latter is every bit as plausible as the former, and on the assumption that it is the correct way to think of what happened, we can now 'point' to awareness with a definite description – it is the relation we had to our pain prior to the distraction, a relation which ceased during the distraction, and which began again after the conversation ended. Allowing that one can be in a psychological state without being aware of that state also allows one to make sense of all sorts of interesting possibilities. When I was younger, I used to think that Freudian talk of the unconscious was either gibberish, 'as if' talk, or just a way of talking about complex dispositions to behave. It now seems to me, however, that there is no reason at all to deny the intelligibility of there being occurrent intentional states, states that might have all sorts of behavioural effects, but which have the further feature of being
unconscious. Just as an interesting conversation can divert one's attention from the pain one feels, so also beliefs, fears, desires, embarrassment – all sorts of factors – might divert one's attention from other desires, fears and beliefs. All this might make traditional dualists very nervous. After all, many dualists have held their views, in part, precisely because they thought they had unproblematic access to their mental states. I now seem to be allowing for at least the possibility of mental life to which one has no actual access. Again, I want to be very clear about the suggestion I am making. I am not retreating at all from the notion that we get our idea of the nature of mental states from our introspective access to such states. Through direct acquaintance with my pain, for example, I know what pain is. I know what makes a state a state of pain. I can now intelligibly postulate that that very kind of state might occur even when I am no longer aware of it. It is easy to confuse these issues, particularly if one is an adverbialist about pain. The adverbialist won't distinguish between pain and feeling pain. Pain just is a certain kind of feeling. But it is only if one equates feeling pain with being aware of pain (being aware of the feeling) that one will think that there is some sort of hopeless confusion in allowing for the existence of a pain of which one is not aware. Pain of which one is not aware just is a feeling of pain of which one is not aware. Moreover, the question we are asking here is not the question of whether the brain in some sense processes information (reacts to stimuli) in ways of which we are unaware, resulting in beliefs whose causes are unknown to most of us. Jack Lyons (2009) gives us all sorts of really interesting examples of processing of just this sort. But there is no prima facie reason to suppose that the brain needs to accomplish all of its cognitive goals through anything like genuine mental states of which we are not conscious.
It might be extremely difficult to settle the empirical question of whether we are ever in pain while we are not aware of the pain. To do so, we obviously can’t turn to the introspection upon which we normally rely in determining the nature of our mental states. The attempt to introspect, after all, might make us aware of that of which we were previously unaware. However difficult it might be to settle the empirical question, I can’t see any argument against the possibility of occurrent, genuinely mental states of which we are unaware. I argued in a number of places that it is critical to distinguish both feeling pain and being aware of pain, from having beliefs about one’s pain. Furthermore, because it is not at all clear that forming intentional states requires having a language,12 we must also distinguish having beliefs about one’s pain from being able to describe in language the nature of that pain.
138
The Bloomsbury Companion to the Philosophy of Consciousness
Once we make all of these distinctions, we must make sure that we attend to them in discussing some of the thought experiments that dualists often employ in trying to direct our attention to the properties or states that they want to distinguish from physical properties and states. Philosophical discussions about dualism often invoke the concept of those zombies that figured so colourfully in the plots of B-movies. Can’t we make sense, the dualist might argue, of a creature who looks just like us, behaves just like us, is caused to behave just like us by the same physical factors, but who has no ‘inner life’, who isn’t conscious? The mental is just what is missing from the zombie we are imagining. The thought experiment is, in a way, just a more global version of the Mary thought experiment. Colour-deprived Mary lacks a specific kind of mental state (or at least a range of such states) – the colour appearances. The zombie lacks the whole kit and caboodle. But given what we said above, there are two importantly different kinds of zombies. There are those who have a relatively rich inner mental life but who lack awareness of it. And there are those who lack any mental life and who also, ipso facto, lack any awareness of it. There are also, of course, zombies who have either mental life or awareness of mental life, but who lack any means of communicating their mental states to others. I would argue that all of the zombies we described above are conceptually and metaphysically possible. And I also think that one can denote what is missing from the physicalist’s world view by talking about that which is missing from the zombie’s life. It does seem to me to be an empirical question then as to whether there are genuinely mental states of which we are, nevertheless, not aware. But as I also indicated, I don’t know how one would go about settling the question. 
Perhaps given my proclivity for armchair philosophy, I should be content to leave the philosophical question as simply a modal problem. One can always turn an empirical assertion into a philosophical assertion by sticking the right sort of modal operator in front of the assertion. But one can also ask modal questions about how one might go about empirically investigating the subconscious – a mental life of which we are unaware. In an ideal world, we might be able to correlate pain with neural events and the awareness of pains with more complex neural events of which the former are parts. But it is hard to see how we would go about finding the relevant correlations even if we presuppose relatively straightforward access to neural events. The problem is that the best way of finding the neural correlates of mental states is to monitor the human brain as we rely on a person’s first-person report of the kind of mental state he or she is in. But, of course, when we do this we are monitoring the brain of a person who is conscious of, is aware of, that mental state. We can
continue to monitor that brain as the person is distracted for a period of time, a period of time in which the person claims not to remember being aware of pain. We might find neural activity that resembles in some respect the neural activity that accompanied the conscious pain. But we won’t know whether that neural activity did or didn’t correspond to pain (a pain of which we were unaware). Assuming a solution to all sorts of other sceptical problems, we might retreat to some sort of reasoning to the best explanation. Even when a person claims not to have been aware of various mental states, that person might exhibit behaviour of a sort that is usually caused by the mental states of which they were aware. I take it that the Freudians are reasoning in something like that way when they posit unconscious fears, desires, hatred, love, jealousy and the like. But again, such reasoning can only be suggestive. Once we allow that states that are not mental can also cause behaviour of a sort that is sometimes caused by genuine mental states, we will always have at the very least competing explanations for the behaviour in question. I raise these questions about unconscious mental states, mental states of which we are unaware, not because I have definitive answers. Phenomenology assures me that there are mental states. I’m certain that such states exist. It also seems to me possible that that same sort of state might exist even when I’m not aware of it. But the reality of such states can’t be established through introspective awareness. And frankly I can’t think of any other plausible way to answer the question of whether this empirical possibility is ever realized.
Notes

1 This might be a bit unfair. I’m sure many Meinongians would emphasize that it is the phenomenological character of their experience that drives their conclusion – not the linguistic.
2 None of this is uncontroversial. The more one finds attractive certain kinds of externalism about mental content, the more one might start thinking of the footprint as a state representing that which left it. Putnam famously argues that the difference between the lines left by a meandering ant moving in the sand, and lines just like those drawn by someone caricaturing Winston Churchill, is just a fact about the causal chain leading to the latter – a chain that actually includes Churchill.
3 In offering a causal analysis of just about anything, one always needs to add the protecting clause ‘in the right way’. Philosophers have good imaginations and can imagine ‘aberrant’ causal chains that generate counterexamples.
4 See Williamson (2016) for a discussion of how one might be able to discover various contingent truths through imagining.
5 I’m assuming that there can be properties that are complex. So when I refer to the property, the property might be some exceedingly complex conjunctive property.
6 For such arguments, see Fumerton (1989; 2013).
7 I am, by no means, underestimating the epistemological problem of knowing other minds. Extreme verificationism aside, however, that epistemological problem doesn’t entail any difficulty in understanding claims about the mental states of others.
8 It is interesting, however, that if we think vividly about horrific pain, the thought will sometimes cause us to wince. Thinking of intense fear can cause one to break into a cold sweat. Perhaps Hume’s view is not that implausible with respect to at least some mental states.
9 Hume (1978), 1.1.1.10.
10 I can denote kinds of things that I have never thought of directly – I can denote, for example, the experience that bats have when they use their ‘sonar’. There is a sense in which when I form the thought corresponding to the description, I am thinking of the experience. But there is another clear sense in which I am not thinking of it directly. I am thinking of it only as whatever it is that has certain properties, for example, being produced by such and such sense organs. See Fumerton (2013).
11 Chisholm’s classic article (1942) presents the problem as one raised by Gilbert Ryle in a discussion with A. J. Ayer. Ushenko (1946, 103) claims that the example of the speckled hen was first given by H. H. Price, but that he (Ushenko) raised a variation of the same problem in Ushenko (1937, 90). The problem, put briefly, is that there seems to be a detailed feature of the appearance presented by the 28-speckled hen (a feature that you might describe as a sense datum exemplifying 28 spots, or, in the language of appearing, being appeared to 28-speckledly), yet it seems implausible to suppose that most people could distinguish between the 28-speckled phenomenal experience and the 29- or 27-speckled experience. See also Sosa (2003) for a presentation of the problem, and Fumerton (2005) for a response.
12 Unless one makes the connection between thought and language trivial by talking about the ‘language’ of thought.
References

Berkeley, G. (1954). Three Dialogues Between Hylas and Philonous, ed. Colin Turbayne, Indianapolis: Bobbs-Merrill.
Chisholm, R. M. (1942). ‘The Problem of the Speckled Hen’, Mind, 51, 368–73.
Fumerton, R. (1989). ‘Russelling Causal Theories of Reference’, in Rereading Russell, ed. Wade Savage, Minneapolis: University of Minnesota Press, 108–18.
Fumerton, R. (2005). ‘Speckled Hens and Objects of Acquaintance’, Philosophical Perspectives, 19, 121–39.
Fumerton, R. (2013). Knowledge, Thought and the Case for Dualism, Cambridge: Cambridge University Press.
Hume, D. (1978). A Treatise of Human Nature, ed. L. A. Selby-Bigge, Oxford: Oxford University Press.
Jackson, F. (1986). ‘What Mary Didn’t Know’, The Journal of Philosophy, 83 (5), 291–95.
Kripke, S. A. (1980). Naming and Necessity, Cambridge: Harvard University Press.
Lyons, J. (2009). Perception and Basic Beliefs, Oxford: Oxford University Press.
Mill, J. S. (1889). An Examination of Sir William Hamilton’s Philosophy, New York: Henry Holt and Company.
Nagel, T. (1974). ‘What is it Like to be a Bat?’, The Philosophical Review, 83 (4), 435–50.
Putnam, H. (1981). Reason, Truth and History, Cambridge: Cambridge University Press.
Shope, R. (1978). ‘The Conditional Fallacy in Contemporary Philosophy’, The Journal of Philosophy, 75, 397–413.
Sosa, E. (2003). ‘Privileged Access’, in Consciousness: New Philosophical Essays, ed. Quentin Smith, 273–92, Oxford: Oxford University Press.
Ushenko, A. P. (1937). The Philosophy of Relativity, New York: Allen and Unwin.
Ushenko, A. P. (1946). Power and Events: An Essay on Dynamics in Philosophy, Princeton: Princeton University Press.
Williamson, T. (2016). ‘Knowing and Imagining’, in Knowledge Through Imagination, eds. Amy Kind and Peter Kung, Oxford: Oxford University Press.
9
Higher-Order Theories of Consciousness Rocco J. Gennaro
Précis

Representational theories of consciousness attempt to reduce consciousness to ‘mental representations’ rather than directly to neural or other physical states. This approach has been fairly popular over the past few decades. Examples include first-order representationalism (FOR), which attempts to explain conscious experience primarily in terms of world-directed (or first-order) intentional states (Tye 2000), as well as several versions of higher-order representationalism (HOR), which holds that what makes a mental state M conscious is that it is the object of some kind of higher-order mental state directed at M (Rosenthal 2005, Gennaro 2004a, Gennaro 2012). The primary focus of this chapter is on HOR and especially higher-order thought (HOT) theory. In addition, the closely related ‘self-representational’ approach is also briefly discussed (Kriegel 2009). The key question that should be answered by any theory of consciousness is: What makes a mental state a conscious mental state? In Section 1, I introduce the overall approach to consciousness called representationalism and briefly discuss Tye’s FOR. Section 2 presents three major versions of HOR: HOT theory, dispositional HOT theory and higher-order perception (HOP) theory. In Section 3, I consider a number of common and important objections to HOR and present replies as well. In Section 4, I briefly outline what I take to be a very close connection between HOT theory and conceptualism, that is, the claim that the representational content of a perceptual experience is entirely determined by the conceptual capacities the perceiver brings to bear in her experience. Section 5 examines several hybrid higher-order and ‘self-representational’ theories of consciousness, which all hold that conscious states are self-directed in some way. Finally, in Section 6, I consider the potentially damaging claim that HOT theory requires neural activity in the prefrontal cortex (PFC) in order for one to have conscious states.
Perhaps the most fundamental and commonly used notion of ‘conscious’ is captured by Thomas Nagel’s famous ‘what-it-is-like’ sense (Nagel 1974). When I am in a conscious mental state, there is ‘something it is like’ for me to be in that state from the subjective or first-person point of view. When I smell a rose or have a conscious visual experience, there is something it ‘seems’ or ‘feels like’ from my perspective. This is primarily the sense of ‘conscious state’ that I use throughout this chapter. There is also something it is like to be a conscious creature, whereas there is nothing it is like to be a table or tree.
1 Representationalism

Many current theories attempt to explain consciousness in mentalistic terms, such as thoughts and awareness, rather than directly in neurophysiological terms. One popular approach along these lines is to reduce consciousness to mental representations of some kind. The notion of a ‘representation’ is of course very general and can be applied to pictures, signs and various natural objects, such as the rings inside a tree. Much of what goes on in the brain might also be understood in a representational way. For example, mental events represent outer objects partly because they are caused by such objects in, say, cases of veridical visual perception. Philosophers often call such mental states ‘intentional states’ which have representational content, that is, mental states that are ‘about’ or ‘directed at’ something, as when one has a thought about the horse or a perception of the tree. Although intentional states, such as beliefs and thoughts, are sometimes contrasted with ‘phenomenal states’, such as pains and colour experiences, it is clear that many conscious states, typified by visual perceptions, have both phenomenal and intentional properties. The general view that we can explain conscious mental states in terms of representational or intentional states is called ‘representationalism’. Although not automatically reductionist in spirit, most versions of it do indeed attempt such a reduction. Most representationalists believe that there is room for a second-step reduction to be filled in later by neuroscience. A related motivation for representational theories of consciousness is the belief that an account of intentionality can more easily be given in naturalistic terms, such as causal theories whereby mental states are understood as representing outer objects in virtue of some reliable causal connection.
The idea, then, is that if consciousness can be explained in representational terms and representation can be understood in purely physical terms, then there is the promise of a naturalistic theory of
consciousness. Most generally, however, representationalism can be defined as the view that the phenomenal properties of conscious experience (that is, the ‘qualia’) can be explained in terms of the experiences’ representational properties. It is worth mentioning that the precise relationship between intentionality and consciousness is itself a major ongoing area of research with some arguing that genuine intentionality actually presupposes consciousness in some way (Searle 1992, Horgan and Tienson 2002). If this is correct, then it would be impossible to reduce consciousness to intentionality as representationalists desire to do, but representationalists argue that consciousness requires intentionality, not vice versa. Intentionality is prior to consciousness. Of course, few if any today hold that all intentional states are conscious as Descartes thought. His view was that mental states are essentially conscious and there are no unconscious mental states at all.1 A FOR theory of consciousness is one that attempts to explain and reduce conscious experience primarily in terms of world-directed (or first-order) intentional states. The two most-cited FOR theories are those of Fred Dretske (1995) and Michael Tye (1995, 2000), but I’ll focus briefly on Tye’s theory here. It is clear that not all mental representations are conscious, so the key question remains: What exactly distinguishes conscious from unconscious mental states (or representations)? What makes an unconscious mental state a conscious mental state? Tye defends what he calls ‘PANIC theory’. The acronym ‘PANIC’ stands for poised, abstract, non-conceptual, intentional content (IC). Tye holds that at least some of the representational content in question is nonconceptual (N), which is to say that the subject can lack the concept for the properties represented by the experience in question, such as an experience of a certain shade of red that one has never seen before. 
But conscious states clearly must also have IC for any representationalist. Tye also asserts that such content is ‘abstract’ (A) and so not necessarily about particular concrete objects. This is needed to handle cases of hallucinations, where there are no concrete objects at all or cases where different objects look phenomenally alike. Perhaps most important for mental states to be conscious is that such content must be ‘poised’ (P), which is an importantly functional notion about what conscious states do. The ‘key idea is that experiences and feelings ... stand ready and available to make a direct impact on beliefs and/or desires. For example … feeling hungry … has an immediate cognitive effect, namely, the desire to eat … . States with non-conceptual content that are not so poised lack phenomenal character [because] … they arise too early, as it were, in the information processing’ (Tye 2000, 62).
A common objection to FOR is that it still does not apply to all conscious states. Some conscious states do not seem to be ‘about’ or ‘directed at’ anything, such as pains or anxiety, and so they would be non-representational conscious states. If so, then conscious states cannot generally be explained in terms of representational properties (Block 1996). Tye responds that pains and itches do represent in the sense that they represent parts of the body. Hallucinations either misrepresent (which is still a kind of representation) or the conscious subject still takes them to have representational properties from the first-person point of view. Indeed, Tye (2000) goes to great lengths in response to a whole host of alleged counterexamples to FOR. For example, with regard to conscious emotions, he says that they ‘are frequently localized in particular parts of the body… . For example, if one feels sudden jealousy, one is likely to feel one’s stomach sink … [or] one’s blood pressure increase’ (2000, 51). He believes that something similar is true for fear or anger. Moods, however, are quite different and do not seem localizable in the same way. Perhaps the most serious objection to Tye’s theory is that what seems to be doing most of the work on Tye’s account is the extremely functional-sounding ‘poised’ notion, and so he is not really explaining phenomenal consciousness in entirely representational terms (Kriegel 2002).2 Let us turn our attention to HOR which is the main topic of this chapter.
2 Higher-order representationalism

2a Higher-order thought theory

As we have seen, one question that should be answered by any theory of consciousness is: What makes a mental state a conscious mental state? There is also a long tradition that has attempted to understand consciousness in terms of some kind of higher-order awareness (Locke 1689/1975). This view has been revived by several contemporary philosophers (Armstrong 1968, 1981, Rosenthal 1986, 1997, 2002, 2005, Lycan 1996, 2001, Gennaro 1996, 2012). The basic idea is that what makes a mental state conscious is that it is the object of some kind of HOR. A mental state M becomes conscious when there is a HOR of M. A HOR is a ‘meta-psychological’ or ‘meta-cognitive’ state, that is, a mental state directed at another mental state (‘I am in mental state M’). So, for example, my desire to write a good chapter becomes conscious when I am (noninferentially) ‘aware’ of the desire. Intuitively, conscious states, as opposed to unconscious ones, are mental states that I am ‘aware of’ being in some sense.
Conscious mental states arise when two unconscious mental states are related in a certain specific way, namely, that one of them (the HOR) is directed at the other (M). This overall idea is sometimes referred to as the Transitivity Principle (TP): (TP) A conscious state is a state whose subject is, in some way, aware of being in it.
Conversely, the idea that I could be having a conscious state while totally unaware of being in that state seems like a contradiction. A mental state of which the subject is completely unaware is clearly an unconscious state. For example, I would not be aware of having a subliminal perception, and thus it is an unconscious perception. HO theorists are united in the belief that their approach can better explain conscious states than any purely FOR theory.3 There are various kinds of HOR theories with the most common division between HOT theories and HOP theories. HOT theorists, such as David Rosenthal (2005), think it is better to understand the HOR (or higher-order ‘awareness’) as a thought containing concepts. HOTs are treated as cognitive states involving some kind of conceptual component. HOP theorists (Lycan 1996) urge that the HOR is a perceptual state of some kind which does not require the kind of conceptual content invoked by HOT theorists. Although HOT and HOP theorists agree on the need for a HOR theory of consciousness, they do sometimes argue for the superiority of their respective positions (Rosenthal 2004, Lycan 2004, Gennaro 2012, chapter three). One can also find something like TP in premise 1 of Lycan’s (2001) more general argument for HOR. The entire argument runs as follows:

1. A conscious state is a mental state whose subject is aware of being in it.
2. The ‘of’ in (1) is the ‘of’ of intentionality; what one is aware of is an intentional object of the awareness.
3. Intentionality is representational; a state has a thing as its intentional object only if it represents that thing.

Therefore,

4. Awareness of a mental state is a representation of that state. (From 2, 3)

Therefore,

5. A conscious state is a state that is itself represented by another of the subject’s mental states. (From 1, 4)
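The argument’s deductive skeleton can also be set out in first-order shorthand (the regimentation is mine, not Lycan’s). Writing C(s) for ‘s is a conscious state’, A(x, s) for ‘subject x is aware of s’, and R(t, s) for ‘t represents s’:

P1. C(s) → A(subject(s), s)
P2–P3. A(x, s) → ∃t (t is a mental state of x ∧ R(t, s))
C4. From P2–P3: if x is aware of s, then some mental state of x represents s.
C5. From P1 and C4: C(s) → ∃t (t is a mental state of the subject of s ∧ R(t, s))

The distinctness of the representing state t from s itself (‘another of the subject’s mental states’ in conclusion 5) is supplied by the intended interpretation rather than derived from the premises.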
The intuitive appeal of premise 1 leads naturally to the final conclusion in (5), which is just another way of stating HOR. A somewhat different but compelling rationale for HOR, and HOT theory in particular, can be put as follows (based on Rosenthal 2004, 24): A non-HOT theorist might still agree with HOT theory as an account of introspection or reflection, namely, that it involves a conscious thought about a mental state. This seems to be a fairly common-sense definition of introspection that includes the notion that introspection involves conceptual activity. It also seems reasonable for anyone to hold that when a mental state is unconscious, there is no HOT at all. But then it also stands to reason that there should be something ‘in between’ those two cases, that is, when one has a first-order conscious state. So what is in between no HOT at all and a conscious HOT? The answer is an unconscious HOT, which is precisely what HOT theory says, that is, a first-order conscious state is accompanied by an unconscious HOT. Moreover, this explains what happens when there is a transition from a first-order conscious state to an introspective state: an unconscious HOT becomes conscious. It might still seem that HOT theory results in circularity by defining consciousness in terms of HOTs. It also might seem that an infinite regress results because a conscious mental state must be accompanied by a HOT, which, in turn, must be accompanied by another HOT, ad infinitum. However, as we have just seen, the standard and widely accepted reply is that when a conscious mental state is a first-order world-directed state the HOT is not itself conscious. But when the HOT is itself conscious, there is a yet higher-order (or third-order) thought directed at the second-order state. In this case, we have introspection which involves a conscious HOT directed at an inner mental state. When one introspects, one’s attention is directed back into one’s mind. 
For example, what makes my desire to write a good chapter a conscious first-order desire is that there is a (non-conscious) HOT directed at the desire. In this case, my conscious focus is directed outwardly at the paper or computer screen, so I am not consciously aware of having the HOT from the first-person point of view. When I introspect that desire, however, I then have a conscious HOT (accompanied by a yet higher, third-order, HOT) directed at the desire itself (Rosenthal 1986, 1997). Indeed, it is crucial to distinguish first-order conscious states (with unconscious HOTs) from introspective states (with conscious HOTs). (See Figure 9.1.) HOT theorists do insist that the HOT must become aware of the lower-order (LO) state noninferentially. We might even suppose, say, that the HO state must be caused noninferentially by the LO state to make it conscious. The point of this condition is mainly to rule out alleged counterexamples to HO theory, such as
[Figure 9.1 The Higher-Order Thought (HOT) Theory of Consciousness. The diagram contrasts two cases. In a world-directed conscious mental state, an unconscious HOT (second order) is directed at a world-directed conscious mental state (first order), and one’s conscious attention is directed at the outer world. In introspection, a conscious HOT (second order) is directed at the world-directed conscious mental state (first order) and is itself accompanied by an unconscious HOT (third order); one’s conscious attention is directed at one’s own mental state.]
cases where I become aware of my unconscious desire to kill my boss because I have consciously inferred it from a session with a psychiatrist, or where my anger becomes conscious after making inferences based on my own behaviour. The characteristic feel of such a conscious desire or anger may be absent in these cases, but since awareness of them arose via conscious inference, the HO theorist accounts for them by adding this noninferential condition.
2b Dispositional HOT theory

Peter Carruthers (2000, 2005) has proposed a different form of HOT theory such that the HOTs are dispositional states instead of actual HOTs, though he
also understands his ‘dispositional HOT theory’ to be a form of HOP theory (Carruthers 2004). The basic idea is that the conscious status of an experience is due to its availability to HOT. So ‘conscious experience occurs when perceptual contents are fed into a special short-term buffer memory store, whose function is to make those contents available to cause HOTs about themselves’ (Carruthers 2000, 228). Some first-order perceptual contents are available to a higher-order ‘theory of mind mechanism’, which transforms those representational contents into conscious contents. Thus, no actual HOT occurs. Instead, according to Carruthers, some perceptual states acquire a dual IC; for example, a conscious experience of red not only has a first-order content of ‘red’, but also has the higher-order content ‘seems red’ or ‘experience of red’. Thus, he also calls his theory ‘dual-content theory’. Carruthers makes interesting use of so-called ‘consumer semantics’ in order to fill out his theory of phenomenal consciousness. That is, the content of a mental state depends, in part, on the powers of the organisms which ‘consume’ that state, for example, the kinds of inferences which the organism can make when it is in that state. Dispositional theory is often criticized by those who, among other things, do not see how the mere disposition towards a mental state can render it conscious (Rosenthal 2004, Gennaro 2004, 2012). Recall that a key motivation for HOT theory is the TP. But the TP clearly lends itself to an actualist HOT theory interpretation, namely, that we are aware of our conscious states and not aware of our unconscious states. And, as Rosenthal puts it: ‘Being disposed to have a thought about something doesn’t make one conscious of that thing, but only potentially conscious of it’ (2004, 28). Thus, it is natural to wonder just how dual-content theory explains phenomenal consciousness.
It is difficult to understand how a dispositional HOT can render, say, a perceptual state actually conscious. To be sure, Carruthers is well aware of this objection and attempts to address it (Carruthers 2005, 55–60). He again relies heavily on consumer semantics in an attempt to show that changes in consumer systems can transform perceptual contents. That is, what a state represents will depend, in part, on the kinds of inferences that the cognitive system is prepared to make in the presence of that state, or on the kinds of behavioural control that it can exert. In that case, the presence of first-order perceptual representations to a consumer-system that can deploy a ‘theory of mind’ and concepts of experience may be sufficient to render those representations, at the same time, higher-order ones. This would confer phenomenal consciousness on such states. But the central and most serious problem remains: dual-content theory is vulnerable to the same objection raised against FOR. This point is made most forcefully by Jehle and
Kriegel (2006). They point out that dual-content theory ‘falls prey to the same problem that bedevils FOR: It attempts to account for the difference between conscious and [un]conscious ... mental states purely in terms of the functional roles of those states’ (Jehle and Kriegel 2006, 468). Carruthers, however, is concerned to avoid what he takes to be a problem for actualist HOT theory, namely, that an unbelievably large amount of cognitive (and neural) space would have to be taken up if every conscious experience is accompanied by an actual HOT.
2c Higher-order perception theory

David Armstrong (1968, 1981) and William Lycan (1996, 2004) have been the leading proponents of HOP theory in recent years. Unlike HOTs, HOPs are not thoughts and do not have conceptual content. Rather, they are to be understood as analogous to outer perception. One major objection to HOP theory is that, unlike outer perception, there is no obvious distinct sense organ or scanning mechanism responsible for HOPs. Similarly, no distinctive sensory quality or phenomenology is involved in having HOPs, whereas outer perception always involves some sensory quality. Lycan concedes the disanalogy but argues that it does not outweigh other considerations favouring HOP theory (Lycan 1996, 28–29, 2004, 100). His reply is understandable, but the objection remains a serious one and the disanalogy cannot be overstated. After all, it represents a major difference between normal outer perception and any alleged inner perception. Against Lycan’s claim that HOP theory is superior to HOT theory, I argue that, on the analogy to outer perception, there is an importantly passive aspect to perception not found in thought (Gennaro 2012, chapter three). The perceptions in HOPs are too passive to account for the interrelation between HORs and first-order states. Thus, HOTs are preferable. I sometimes frame it in Kantian terms: We can distinguish between the faculties of sensibility and understanding, which must work together to make experience possible. What is most relevant here is that the passive nature of the sensibility (through which outer objects are given to us) is contrasted with the active and more cognitive nature of the understanding, which thinks about and applies concepts to that which enters via the sensibility. HOTs fit this latter description well. In addition, Kant uses the term Begriff for ‘concept’, which has a more active connotation of ‘a grasping’.
In any case, on my view, what ultimately justifies treating HORs as thoughts is the exercise and application of concepts to first-order states (Rosenthal 2005, Gennaro 2012, Chapter 4).
Higher-Order Theories of Consciousness
Lycan has recently changed his mind and no longer holds HOP theory. This is mainly because he now thinks that attention to first-order states is sufficient for an account of conscious states, and there is little reason to view the relevant attentional mechanism as intentional or as representing first-order states (Sauret and Lycan 2014). Armstrong and Lycan had indeed previously spoken of HOP 'monitors' or 'scanners' as a kind of attentional mechanism but now it seems that 'leading contemporary cognitive and neurological theories of attention are unanimous in suggesting that attention is not intentional' (Sauret and Lycan 2014, 365). They cite Prinz (2012), for example, who holds that attention is a psychological process that connects first-order states with working memory. Sauret and Lycan explain that 'attention is the mechanism that enables subjects to become aware of their mental states' (2014, 367) and yet this 'awareness of' is supposed to be a non-intentional selection of mental states. Thus, Sauret and Lycan (2014) find that Lycan's (2001) argument, discussed above, goes wrong at premise 2 and that the 'of' in question need not be the 'of' of intentionality. Instead, the 'of' is perhaps more of an 'acquaintance relation', although Sauret and Lycan do not really present a theory of acquaintance, let alone one with the level of detail offered by HOT theory. For my own part, I seriously doubt that the acquaintance strategy is a better alternative (see Gennaro 2015 for more on this theme). Such acquaintance relations would presumably be somehow 'closer' than the representational relation. But this strategy at best trades one difficult problem for an even deeper puzzle, namely, just how to understand the allegedly intimate and non-representational 'awareness of' relation between HORs and first-order states. It is also more difficult to understand such 'acquaintance relations' within the context of any HOR reductionist approach.
Indeed, acquaintance is often taken to be unanalysable and simple, in which case it is difficult to see how it could usefully explain anything, let alone the nature of conscious states. Zahavi (2007), who is not a HOT or HOP theorist, also recognizes how unsatisfying invoking 'acquaintance' can be. He explains that advocates of this approach 'never offer a more detailed analysis of this complex structure. That is, when it comes to a positive description of the structure of original pre-reflective self-awareness they are remarkably silent, either claiming in turn that it is unanalysable, or that the unity of its complex structure is incomprehensible. This is hardly satisfactory' (Zahavi 2007, 281). It seems to me that we still do not have a good sense of what this acquaintance relation is.4
The Bloomsbury Companion to the Philosophy of Consciousness
3 Objections and replies

A number of other objections to HO theories (and counter-replies) can be found in the literature. Although some also apply to HOP theory, others are aimed more at HOT theory in particular. First, some argue that various animals (and even infants) are not likely to have the conceptual sophistication required for HOTs, which would render animal (and infant) consciousness very unlikely (Dretske 1995, Seager 2004). Are cats and dogs capable of having complex HOTs such as 'I am in mental state M'? Although most who press this objection are not HO theorists, Carruthers (1989, 2000) is one HO theorist who actually embraces the conclusion that (most) animals do not have phenomenal consciousness. I initially replied to Carruthers, arguing that the HOTs need not be as sophisticated as might initially appear and that there is ample comparative neurophysiological evidence supporting the conclusion that animals have conscious mental states (Gennaro 1993, 1996). Most HO theorists do not wish to accept the absence of animal or infant consciousness as a consequence of holding the theory. The debate has continued over the past two decades.5 To give an example which seems to favour my view, Clayton and Dickinson and their colleagues (Clayton, Bussey and Dickinson 2003, 37) have reported convincing demonstrations of memory for time in scrub jays. Scrub jays are food-caching birds, and when they have food they cannot eat, they hide it and recover it later. Because some of the food is preferred but perishable (such as crickets), it must be eaten within a few days, while other food (such as nuts) is less preferred but does not perish as quickly. In cleverly designed experiments using these facts, scrub jays are shown, even days after caching, to know not only what kind of food was where but also when they had cached it (see also Clayton, Emery and Dickinson 2006).
Such experimental results seem to show that they have episodic memory that involves a sense of self over time. This strongly suggests that the birds have some degree of meta-cognition with a self-concept (or ‘I-concept’) which can figure into HOTs. Further, many crows and scrub jays return alone to caches they had hidden in the presence of others and recache them in new places (Emery and Clayton 2001). This suggests that they know that others know where the food was cached, and thus, to avoid having their food stolen, they recache the food. This strongly suggests that these birds can have some mental concepts, not only about their own minds but even of other minds, which is sometimes referred to as ‘mind-reading’ ability. Of course, there are many different experiments aimed at determining
the conceptual and meta-cognitive abilities of various animals, so it is difficult to generalize across species. There does seem to be growing evidence that at least some animals can mind-read under familiar conditions. For example, Laurie Santos and colleagues show that rhesus monkeys attribute visual and auditory perceptions to others in more competitive paradigms (Flombaum and Santos 2005, Santos, Nissen and Ferrugia 2006). Rhesus monkeys preferentially attempted to obtain food silently only in conditions in which silence was relevant to obtaining the food undetected. While a human competitor was looking away, monkeys would take grapes from a silent container, thus apparently understanding that hearing leads to knowing on the part of human competitors. Subjects reliably picked the container that did not alert the experimenter that a grape was being removed. This suggests that monkeys take into account how auditory information can change the knowledge state of the experimenter.6 A second objection has been referred to as the 'problem of the rock' and is originally due to Alvin Goldman (1993). When I have a thought about a rock, it is certainly not true that the rock becomes conscious. So why should I suppose that a mental state becomes conscious when I think about it? This is puzzling to many, and the objection forces HOT theorists to explain just how adding the HOT changes an unconscious state into a conscious one. There have been, however, a number of responses to this kind of objection (Rosenthal 1997, Van Gulick 2000, 2004, Gennaro 2005, 2012, Chapter 4). Perhaps the most common theme is that there is a principled difference in the objects of the thoughts in question. For one thing, rocks and similar objects are not mental states in the first place, and HOT theorists are first and foremost trying to explain how a mental state becomes conscious. The objects of the HOTs must be 'in the head'.
Third, one might object to any reductionist theory of consciousness with something like Chalmers's hard problem, that is, the problem of how or why brain activity produces conscious experience (Chalmers 1995, 1996). However, it is first important to keep in mind that HOT theory, unlike reductionist accounts couched in non-mentalistic terms, is immune to Chalmers's criticism about the plausibility of theories that attempt a direct reduction to neurophysiology. For HOT theory, there is no problem about how a specific brain activity 'produces' conscious experience, nor is there an issue about any a priori or a posteriori relation between brains and consciousness. The issue instead is how HOT theory might be realized in our brains, for which there seems to be some evidence thus far (Gennaro 2012, Chapters 4, 9).
Still, it might be asked just how exactly any HOR theory really explains the subjective or phenomenal aspect of conscious experience. How or why does a mental state come to have a first-person qualitative ‘what-it-is-like’ aspect by virtue of the presence of a HOR directed at it? It is probably fair to say that HOR theorists have been slow to address this problem, though a number of overlapping responses have emerged. Some argue that this objection misconstrues the main and more modest purpose of (at least, their) HOT theories. The claim is that HOT theories are theories of consciousness only in the sense that they are attempting to explain what differentiates conscious from unconscious states, that is, in terms of a higher-order awareness of some kind. A full account of ‘qualitative properties’ or ‘sensory qualities’ (which can themselves be non-conscious) can be found elsewhere in their work, but is independent of their theory of consciousness (Rosenthal 1991, 2005, Lycan 1996). Thus, a full explanation of phenomenal consciousness requires more than a HOR theory, but that is no objection to HOR theories as such. There is also a concern that proponents of the hard problem unjustly raise the bar as to what would count as a viable reductionist explanation of consciousness, so that any such reductionist attempt would inevitably fall short (Carruthers 2000). Part of the problem may even be a lack of clarity about what would even count as an explanation of consciousness (Van Gulick 1995). I have further responded that HOTs explain how conscious states occur because the concepts that figure into the HOTs are necessarily presupposed in conscious experience (in Gennaro 2012, Chapter 4; 2005). Again, the idea is that first we receive information via our senses or the ‘faculty of sensibility’. 
Some of this information will then rise to the level of unconscious mental states but they do not become conscious until the more cognitive ‘faculty of understanding’ operates on them via the application of concepts. We can arguably understand such concept application in terms of HOTs directed at first-order states. Thus, I consciously experience (and recognize) the brown tree as a brown tree partly because I apply the concepts ‘brown’ and ‘tree’ (in my HOTs) to my basic perceptual states. If there is a real hard problem, I have suggested that it has more to do with explaining concept acquisition (Gennaro 2012, Chapters 6, 7). A fourth, and very important, objection to higher-order approaches is the question of how such theories can explain cases where the HO state might misrepresent the LO mental state (Byrne 1997, Neander 1998, Levine 2001, Block 2011). After all, if we have a representational relation between two states, it seems possible for misrepresentation or malfunction to occur. If it does, then what explanation could be offered by the HO theorist? If my LO state
registers a red percept and my HO state registers a thought about something green, then what happens? It seems that problems loom for any answer given by a HOT theorist, and the cause of the problem is the very nature of the HO theorist's belief that there is a representational relation between the LO and HO states. For example, if a HOT theorist takes the option that the resulting conscious experience is reddish, then it seems that the HOT plays no role in determining the qualitative character of the experience. On the other hand, if the resulting experience is greenish, then the LO state seems irrelevant. Nonetheless, Rosenthal and Weisberg hold that the HO state determines the qualitative properties, even in cases when there is no LO state at all, cases usually called 'targetless' or 'empty' HOT cases (Rosenthal 2005, 2011, Weisberg 2008, 2011). I have argued instead that no conscious experience results in such cases, that is, neither reddish nor greenish experience. I fail to see, for example, how a sole (unconscious) HOT can result in a conscious state at all (Gennaro 2012, Chapter 4; 2013). I argue that there must be a conceptual match, complete or partial, between the LO and HO state in order for the conscious experience to exist in the first place. Weisberg and Rosenthal argue that what really matters is how things seem to the subject and that, if we can explain that, we've explained all that we need to. But the problem with this view is that the HOT alone then does all the explanatory work. Doesn't this defeat the purpose of HOT theory, which is supposed to explain state consciousness in terms of the relation between two states? Moreover, according to the theory, the LO state is supposed to be conscious when one has an unconscious HOT.
In the end, I argue for the much more nuanced claim that whenever a subject S has a HOT directed at experience e, the content c of S’s HOT determines the way that S experiences e (provided that there is a full or partial conceptual match with the lower-order state, or when the HO state contains more specific or fine-grained concepts than the LO state has, or when the LO state contains more specific or fine-grained concepts than the HO state has, or when the HO concepts can combine to match the LO concept) (Gennaro 2012, 180).
The reasons for the above qualifications are discussed at length in Gennaro (2012, Chapter 6), but they basically try to explain what happens in some abnormal cases (such as visual agnosia), and in some other atypical contexts (such as perceiving ambiguous figures like the vase/two-faces image), where mismatches might occur between the HOT and LO state. For example, visual agnosia, or more
specifically associative agnosia, seems to be a case where a subject has a conscious experience of an object without any conceptualization of the incoming visual information (Farah 2004). There appears to be a first-order perception of an object without the accompanying concept of that object (either first- or second-order, for that matter). Thus its 'meaning' is gone and the object is not recognized. It seems that there can be conscious perceptions of objects without the application of concepts, that is, without recognition or identification of those objects. But we might instead hold that associative agnosia is simply an unusual case where the typical HOT does not fully match up with the first-order visual input. That is, we might view associative agnosia as a case where the 'normal', or most general, object concept in the HOT does not accompany the input received through the visual modality. There is a partial match instead. A HOT might partially recognize the LO state. So, associative agnosia would be a case where the LO state could still register a percept of an object O (because the subject still does have the concept), but the HO state is limited to some features of O. Bare visual perception remains intact in the LO state but is confused and ambiguous, and thus the agnosic's conscious experience of O 'loses meaning', resulting in a different phenomenological experience. When, for example, the agnosic does not (visually) recognize a whistle as a whistle, perhaps only the concepts 'silver', 'roundish' and 'object' are applied. But as long as that is how the agnosic experiences the object, then HOT theory is left unthreatened. In any case, on my view, misrepresentations cannot occur between M and HOT and still result in a conscious state, let alone a conscious experience reflecting mismatched and incompatible concepts (Gennaro 2012, 2013).
Once again, and especially with respect to targetless HOTs, it is difficult to see how an unconscious HOT alone can result in a conscious mental state. Moreover, according to Rosenthal, HOTs themselves have no qualia. At the very least, this important objection forces HOT theorists to be clearer about just how to view the relationship between the LO and HO states.
4 HOT theory and conceptualism

Let us return to the related claim that HOT theory can explain how one's conceptual repertoire can transform one's phenomenological experience.
Concepts, at minimum, involve recognizing and understanding objects and properties. Having a concept C should also give the concept possessor the ability to discriminate instances of C and non-Cs. For example, if I have the concept ‘lion’ I should be able to identify lions and distinguish them from other even fairly similar land animals. Rosenthal invokes the idea that acquiring concepts can change one’s conscious experience with the help of several well-known examples (2005, 187–188). Acquiring various concepts from a wine-tasting course will lead to different experiences from those taste experiences enjoyed prior to the course. I acquire more fine-grained wine-related concepts, such as ‘dry’ and ‘heavy’, which in turn can figure into my HOTs and thus alter my conscious experiences. I literally have different qualia due to the change in my conceptual repertoire. As we learn more concepts, we have more fine-grained experiences and thus experience more qualitative complexities. A botanist will likely have somewhat different perceptual experiences than I do while walking through a forest. Conversely, those with a more limited conceptual repertoire, such as infants and animals, will often have a more coarse-grained set of experiences. Much the same goes for other sensory modalities, such as the way that I experience a painting after learning more about artwork and colour. The notion of ‘seeing-as’ (‘hearing-as’, and so on) is often used in this context, that is, depending upon the concepts I possess, my conscious experience will literally allow me to see the world differently. These considerations do not, of course, by themselves prove that newly acquired concepts are constitutive parts of the resulting conscious states, as opposed merely to having a causal impact on those states. Nonetheless, I have argued that there is a very close and natural connection between HOT theory and conceptualism (Gennaro 2012, Chapter 7; 2013). 
Chuard (2007) defines conceptualism as the claim that 'the representational content of a perceptual experience is fully conceptual in the sense that what the experience represents (and how it represents it) is entirely determined by the conceptual capacities the perceiver brings to bear in her experience' (Chuard 2007, 25). We might similarly define conceptualism as follows:

(CON) Whenever a subject S has a perceptual experience e, the content c (of e) is fully specifiable in terms of the concepts possessed by S.
In Gennaro (2012, Chapter 6), I present an argument which links HOT theory and conceptualism as follows:

1. Whenever a subject S has a conscious perceptual experience e, one has a HOT directed at e.
2. Whenever a subject S has a HOT directed at e, the content c of S's HOT determines the way that S experiences e (provided that there is a match with the lower-order state).
3. Whenever there is a content c of S's HOT determining the way that S experiences e, the content c (of e) is fully specifiable in terms of concepts possessed by S.

Therefore,

4. Whenever a subject S has a conscious perceptual experience e, the content c (of e) is fully specifiable in terms of concepts possessed by S [= CON].

The above is somewhat oversimplified, especially with regard to premise (2), but I argue that there is a very natural connection between HOT theory and conceptualism. In any case, the basic idea is that, just like beliefs and thoughts, perceptual experiences also have conceptual content. In a somewhat Kantian spirit, we might say that all conscious experience presupposes the application of concepts, or, even stronger, that the way that one experiences the world is entirely determined by the concepts one possesses. Indeed, Gunther (2003, 1) initially uses Kant's famous slogan that 'thoughts without content are empty, intuitions [= sensory experiences] without concepts are blind' to sum up conceptualism (Kant 1781/1965, A51/B75).
5 Hybrid higher-order and self-representational accounts

Some related representationalist views hold that the HOR in question should be understood as intrinsic to (or part of) an overall complex conscious state. This stands in contrast to the standard view that the HOT is extrinsic to (that is, entirely distinct from) its target mental state. One motivation for this shift is dissatisfaction with standard HO theory's ability to handle some of the objections addressed above. Another reason is renewed interest in a view somewhat closer to the one held by Franz Brentano (1874/1973) and others, normally associated with the phenomenological tradition (Sartre 1956; Smith 2004). To varying degrees, these views have in common the idea that conscious mental states, in some sense, represent themselves, which still involves having a thought about a mental state, just not a distinct or separate state. Thus, when one has a conscious desire for a beer, one is also aware that one is in that very state. The conscious desire represents both the beer and itself. It is this 'self-representing' that makes the state conscious.
In my case, I have argued that when one has a first-order conscious state, the (unconscious) HOT is better viewed as intrinsic to the target state, so that we have a complex conscious state with parts (Gennaro 1996, 2006, 2012). This is what I call the 'wide intrinsicality view' (WIV), which I take to be a version of HOT theory; I have also argued that Sartre's theory of consciousness can be understood in this way (Gennaro 2002; 2015). On the WIV, first-order conscious states are complex states with a world-directed part and a meta-psychological component. Robert Van Gulick (2000; 2004; 2006) has also explored the alternative that the HO state is part of an overall global conscious state. He calls such states 'HOGS' (Higher-Order Global States), whereby a lower-order unconscious state is 'recruited' into a larger state, which becomes conscious partly due to the implicit self-awareness that one is in the LO state. This general approach is also forcefully advocated by Uriah Kriegel in a series of papers, beginning with Kriegel (2003) and culminating in Kriegel (2009). He refers to it as the 'self-representational theory of consciousness' (see also Kriegel 2005; Kriegel and Williford 2006). To be sure, the notion of a mental state representing itself, or of a mental state with one part representing another part, is in need of further development. Nonetheless, there is agreement among all of these authors that conscious mental states are, in some important sense, reflexive or self-directed. More specifically, Kriegel (2003; 2006; 2009) has tried to cash out the transitivity principle (TP) in terms of a ubiquitous (conscious) 'peripheral' self-awareness which accompanies all of our first-order focal conscious states. Not all conscious 'directedness' is attentive, and so perhaps we should not restrict conscious directedness to that which we are consciously focused on. If this is right, then a first-order conscious state can be both attentively outer-directed and inattentively inner-directed.
I have argued against this view at length (Gennaro 2008; 2012, Chapter 5). Although it is surely true that there are degrees of conscious attention, the clearest examples of genuine 'inattentive' consciousness are cases of outer-directed awareness in one's peripheral visual field. But this obviously does not show that any inattentional consciousness is self-directed during outer-directed consciousness, let alone at the very same time. Also, what is the evidence for such self-directed inattentional consciousness? It is presumably based on phenomenological considerations, but I confess that I do not find such ubiquitous inattentive self-directed 'consciousness' in my outer-directed conscious experience. Except when I am introspecting, conscious experience is so completely outer-directed that I deny we have such peripheral self-directed consciousness when in first-order conscious states. It does not seem to me that I am consciously aware (in any sense) of my own experience when I am, say, consciously attending to a band in
concert or to the task of building a bookcase. Even some who are otherwise very sympathetic to Kriegel's phenomenological approach find it difficult to believe that 'pre-reflective' (inattentional) self-awareness accompanies conscious states (Siewert 1998; Zahavi 2004) or at least that all conscious states involve such self-awareness (Smith 2004). None of these authors are otherwise sympathetic to HOT theory or reductionist approaches to consciousness. Interestingly, Kriegel's most recent view is that there is an indirect self-representation applicable to conscious states, with the self-representational peripheral component directed at the world-directed part of the state (2009, 215–226). This comes in the context of Kriegel's attempt to make sense of a self-representational view within a naturalistic framework, but it is also much more like my WIV in structure. The main difference, however, is that Kriegel thinks that 'pre-reflective self-awareness' or the 'self-representation' is itself (peripherally) conscious.7
6 HOT theory and the prefrontal cortex

One interesting development in recent years has been the attempt to identify just how HOT theory and self-representationalism might be realized in the brain. Again, most representationalists tend to think that the structure of conscious states is realized in the brain (though it may take some time to identify all the main neural structures). The issue is sometimes framed in terms of the question: 'How global is HOT theory?' That is, do conscious mental states require widespread brain activation or can at least some be fairly localized in narrower areas of the brain? Perhaps most interesting is whether or not the prefrontal cortex (PFC) is required for having conscious states (Gennaro 2012, Chapter 9). I disagree with Kriegel (2007; 2009, Chapter 7) and Block (2007) that, according to the higher-order and self-representational view, the PFC is required for most conscious states (see also Lau and Passingham 2006; Del Cul et al. 2007; Lau and Rosenthal 2011). However, it may very well be true that the PFC is required for the more sophisticated introspective states, but this isn't a problem for HOT theory as such because it does not require introspection for first-order conscious states. What evidence is there of conscious states without PFC activity? There seems to be quite a bit. For example, Rafael Malach and colleagues show that when subjects are engaged in a perceptual task or absorbed in watching a movie, there is widespread neural activation but little PFC activity (Grill-Spector and Malach 2004, Goldberg, Harel and Malach 2006). Although some other studies do show PFC activation,
this is mainly because of the need for subjects to report their experiences. Also, basic conscious experience is certainly not eliminated entirely even when there is extensive bilateral PFC damage or lobotomies (Pollen 2008). Zeki (2007) cites evidence that the 'frontal cortex is engaged only when reportability is part of the conscious experience' (587), and that 'all human color imaging experiments have been unanimous in not showing any particular activation of the frontal lobes' (582). Similar results are found for other sensory modalities, for example, in auditory perception (Baars and Gage 2010, Chapter 7). Although areas outside the auditory cortex are sometimes cited, there is virtually no mention of the PFC. It seems to me that the above line of argument would be an advantage for HOT theory with regard to the oft-cited problem of animal and infant consciousness. If HOT theory does not require PFC activity for all conscious states, then HOT theory is in a better position to account for animal and infant consciousness, since it is doubtful that infants and most animals have the requisite PFC activity. But one might still ask: Why think that unconscious HOTs can occur outside the PFC? If we grant that unconscious HOTs can be regarded as a kind of 'pre-reflective' self-consciousness, then we can, for example, look to Newen and Vogeley (2003) for answers. They distinguish five levels of self-consciousness ranging from 'phenomenal self-acquaintance' and 'conceptual self-consciousness' up to 'iterative meta-representational self-consciousness'. The majority of their paper is explicitly about the neural correlates of what they call the 'first-person perspective' (1PP) and the 'egocentric reference frame'. Citing numerous experiments, they point to various neural signatures of self-consciousness. The PFC is rarely mentioned, and then usually only with regard to more sophisticated forms of self-consciousness.
Other brain areas are much more prominently identified, such as the medial and inferior parietal cortices, the temporoparietal cortex, the posterior cingulate cortex and the anterior cingulate cortex (ACC).8 Damasio (1999) explicitly mentions the ACC as a site for some higher-order mental activity or 'maps'. There are various cortical association areas that might be good candidates for HOTs depending on the modality. Key regions for spatial navigation comprise the medial parietal and right inferior parietal cortex, posterior cingulate cortex and the hippocampus. Even when considering the neural signatures of theory of mind and mind reading, Newen and Vogeley have replicated experiments indicating that such meta-representation is best located in the ACC. In addition, 'the capacity for taking 1PP in such [theory of mind] contexts showed differential activation in the right temporo-parietal junction and the medial aspects of the superior parietal lobe' (Newen and Vogeley 2003, 538). Once again, even if the PFC is essential for having certain HOTs and
conscious states, this poses no threat to HOT theory provided that the HOTs in question are of the more sophisticated introspective variety. This matter is certainly not yet settled, but I think it is a mistake, both philosophically and neurophysiologically, to claim that HOT theory should treat first-order conscious states as essentially including PFC activity. If other HO theorists endorse such a view, then so much the worse for them. However, to tie this together with the animals issue, I have made the following concession: 'If all HOTs occur in the PFC, and if PFC activity is necessary for all conscious experience, and if there is little or no PFC activity in infants and most animals, then either (a) infants and most animals do not have conscious experience or (b) HOT theory is false' (Gennaro 2012, 281). Unlike Carruthers (2000; 2005) and perhaps Rosenthal, I would opt for (b). I think I am more sure of animal and infant consciousness than of any philosophical theory of consciousness. However, I think that a good case can be made for the falsity of one or more of the conjuncts in the antecedent of the foregoing conditional. Kozuch (2014) presents a very nice discussion of the PFC in relation to higher-order theories, arguing that the lack of dramatic deficits in visual consciousness even with PFC lesions presents a compelling case against higher-order theories. In some ways, I agree with much of Kozuch's analysis, especially with respect to the notion that some (visual) conscious states do not require PFC activity (sometimes focused more on the dorsolateral PFC, or dlPFC). For example, in addition to the studies I cited above, Kozuch references Alvarez and Emory (2006) as evidence for the view that lesions to the orbital, lateral, or medial PFC produce so-called executive dysfunction.
Depending on the precise lesion location, subjects with damage to one of these areas have problems inhibiting inappropriate actions, switching efficiently from task to task, or retaining items in short-term memory. However, lesions to these areas appear not to produce notable deficits in visual consciousness: Tests of the perceptual abilities of subjects with lesions to the PFC proper reveal no such deficits; as well, PFC patients never report their visual experience to have changed in some remarkable way (Kozuch 2014, 729).
However, Kozuch rightly notes that my view is left undamaged, at least to some extent, since I do not require that the PFC is where HOTs are realized. I would add that we must also keep in mind the distinction between unconscious HOTs and conscious HOTs (= introspection). Perhaps the latter require PFC activity given the more sophisticated executive functions associated with introspection but having first-order conscious states does not require introspection.9
In conclusion, higher-order theory has been and remains a viable theory of consciousness, especially for those who are attracted to a reductionist account but not presently to a reduction in purely neurophysiological terms. Although there are significant objections to different versions of HOR, some standard and plausible replies have emerged through the years. HOR also maintains a degree of intuitive plausibility due to the TP. In addition, HOT theory can help us to make sense of conceptualism and can contribute to the question of the PFC’s role in producing conscious states.
Notes

1 See Gennaro (2012, Chapter 2), Chudnoff (2015), and the essays in Bayne and Montague (2011) and Kriegel (2013) for much more on the relationship between intentionality and consciousness.
2 For other versions of FOR, see Harman (1990), Kirk (1994), Byrne (2001), Thau (2002) and Droege (2003). See Chalmers (2004) for an excellent discussion of the dizzying array of possible representationalist positions.
3 I view the TP primarily as an a priori or conceptual truth about the nature of conscious states (see Gennaro 2012, 28–29).
4 For other variations on HOT theory, see Rolls (2004), Picciuto (2011) and Coleman (2015).
5 See, for example, Carruthers (2000; 2005; 2008; 2009) and Gennaro (2004b; 2009; 2012, Chapter 8).
6 I lack the space here to delve further into this massive literature (but see, for example, the essays in Terrace and Metcalfe (2005), Hurley and Nudds (2006) and Lurz (2009); see also Lurz (2011)). Further, some of the same questions arise with respect to infant concept possession and consciousness. See Gennaro (2012), Chapter 7; Goldman (2006); Nichols and Stich (2003); but also Carruthers (2009).
7 For others who hold some form of the self-representational view, see Williford (2006) and Janzen (2008). Carruthers’s (2000) theory can also be viewed in this light since he contends that conscious states have two representational contents.
8 Kriegel also mentions the ACC as a possible location for HOTs, but it should be noted that the ACC is, at least sometimes, considered to be part of the PFC.
9 Yet another interesting argument along these lines is put forth by Sebastián (2014) with respect to some dream states. If some dreams are conscious states and there is little, if any, PFC activity during the dream period, then HOT theory would again be in trouble if we suppose that HOTs are realized in the PFC. Two commentaries and a reply by Sebastián appear in the same volume. Once again, my own view is left unscathed by this line of argument.
References

Alvarez, J. and Emory, E. (2006). ‘Executive Function and the Frontal Lobes: A Meta-Analytic Review’, Neuropsychology Review, 16, 17–42.
Armstrong, D. (1968). A Materialist Theory of Mind, London: Routledge and Kegan Paul.
Armstrong, D. (1981). ‘What is Consciousness?’, in The Nature of Mind, Ithaca, NY: Cornell University Press.
Baars, B. and Gage, N. (2010). Cognition, Brain, and Consciousness: Introduction to Cognitive Neuroscience, 2nd ed., Oxford: Elsevier.
Bayne, T. and Montague, M., eds. (2011). Cognitive Phenomenology, New York: Oxford University Press.
Block, N. (1996). ‘Mental Paint and Mental Latex’, in E. Villanueva, ed., Perception, Atascadero, CA: Ridgeview.
Block, N. (2007). ‘Consciousness, Accessibility, and the Mesh between Psychology and Neuroscience’, Behavioral and Brain Sciences, 30, 481–99.
Block, N. (2011). ‘The Higher-Order Approach to Consciousness is Defunct’, Analysis, 71, 419–31.
Brentano, F. (1874/1973). Psychology From an Empirical Standpoint, New York: Humanities Press.
Byrne, A. (1997). ‘Some like it HOT: Consciousness and Higher-Order Thoughts’, Philosophical Studies, 86, 103–29.
Byrne, A. (2001). ‘Intentionalism Defended’, The Philosophical Review, 110, 199–240.
Carruthers, P. (1989). ‘Brute Experience’, The Journal of Philosophy, 86, 258–69.
Carruthers, P. (2000). Phenomenal Consciousness, Cambridge: Cambridge University Press.
Carruthers, P. (2004). ‘HOP over FOR, HOT Theory’, in R. Gennaro, ed. (2004a), Higher-Order Theories of Consciousness: An Anthology, Amsterdam and Philadelphia: John Benjamins.
Carruthers, P. (2005). Consciousness: Essays from a Higher-Order Perspective, New York: Oxford University Press.
Carruthers, P. (2008). ‘Meta-Cognition in Animals: A Skeptical Look’, Mind and Language, 23, 58–89.
Carruthers, P. (2009). ‘How We Know our Own Minds: The Relationship Between Mindreading and Metacognition’, Behavioral and Brain Sciences, 32, 121–38.
Chalmers, D. (1995). ‘Facing Up to the Problem of Consciousness’, Journal of Consciousness Studies, 2, 200–19.
Chalmers, D. (1996). The Conscious Mind, New York: Oxford University Press.
Chalmers, D. (2004). ‘The Representational Character of Experience’, in B. Leiter, ed., The Future for Philosophy, Oxford: Oxford University Press.
Chuard, P. (2007). ‘The Riches of Experience’, in R. Gennaro, ed., The Interplay Between Consciousness and Concepts, Exeter: Imprint Academic.
Chudnoff, E. (2015). Cognitive Phenomenology, New York: Routledge.
Clayton, N., Bussey, T., and Dickinson, A. (2003). ‘Can Animals Recall the Past and Plan for the Future?’, Nature Reviews Neuroscience, 4, 685–91.
Clayton, N., Emery, N., and Dickinson, A. (2006). ‘The Rationality of Animal Memory: Complex Caching Strategies of Western Scrub Jays’, in S. Hurley and M. Nudds, eds. (2006), Rational Animals?, New York: Oxford University Press.
Coleman, S. (2015). ‘Quotational Higher-Order Thought Theory’, Philosophical Studies, 172, 2705–33.
Damasio, A. (1999). The Feeling of What Happens, New York: Harcourt Brace and Co.
Del Cul, A., Baillet, S., and Dehaene, S. (2007). ‘Brain Dynamics Underlying the Nonlinear Threshold for Access to Consciousness’, PLoS Biology, 5, 2408–23.
Dretske, F. (1995). Naturalizing the Mind, Cambridge, MA: MIT Press.
Droege, P. (2003). Caging the Beast, Philadelphia and Amsterdam: John Benjamins Publishers.
Emery, N. and Clayton, N. (2001). ‘Effects of Experience and Social Context on Prospective Caching Strategies in Scrub Jays’, Nature, 414, 443–6.
Farah, M. (2004). Visual Agnosia, 2nd ed., Cambridge, MA: MIT Press.
Flombaum, J. and Santos, L. (2005). ‘Rhesus Monkeys Attribute Perceptions to Others’, Current Biology, 15, 447–52.
Gennaro, R. (1993). ‘Brute Experience and the Higher-Order Thought Theory of Consciousness’, Philosophical Papers, 22, 51–69.
Gennaro, R. (1996). Consciousness and Self-Consciousness: A Defense of the Higher-Order Thought Theory of Consciousness, Amsterdam and Philadelphia: John Benjamins.
Gennaro, R. (2002). ‘Jean-Paul Sartre and the HOT Theory of Consciousness’, Canadian Journal of Philosophy, 32, 293–330.
Gennaro, R., ed. (2004a). Higher-Order Theories of Consciousness: An Anthology, Amsterdam and Philadelphia: John Benjamins.
Gennaro, R. (2004b). ‘Higher-Order Thoughts, Animal Consciousness, and Misrepresentation: A Reply to Carruthers and Levine’, in R. Gennaro, ed. (2004a), Higher-Order Theories of Consciousness: An Anthology, Amsterdam and Philadelphia: John Benjamins.
Gennaro, R. (2005). ‘The HOT Theory of Consciousness: Between a Rock and a Hard Place?’, Journal of Consciousness Studies, 12 (2), 3–21.
Gennaro, R. (2006). ‘Between Pure Self-Referentialism and the (Extrinsic) HOT Theory of Consciousness’, in U. Kriegel and K. Williford, eds., Self-Representational Approaches to Consciousness, Cambridge, MA: MIT Press.
Gennaro, R. (2008). ‘Representationalism, Peripheral Awareness, and the Transparency of Experience’, Philosophical Studies, 139, 39–56.
Gennaro, R. (2009). ‘Animals, Consciousness, and I-Thoughts’, in R. Lurz, ed., Philosophy of Animal Minds, New York: Cambridge University Press.
Gennaro, R. (2012). The Consciousness Paradox: Consciousness, Concepts, and Higher-Order Thoughts, Cambridge, MA: The MIT Press.
Gennaro, R. (2013). ‘Defending HOT Theory and the Wide Intrinsicality View: A Reply to Weisberg, Van Gulick, and Seager’, Journal of Consciousness Studies, 20 (11–12), 82–100.
Gennaro, R. (2015). ‘The “of ” of Intentionality and the “of ” of Acquaintance’, in S. Miguens, G. Preyer, and C. Morando, eds., Pre-Reflective Consciousness: Sartre and Contemporary Philosophy of Mind, New York: Routledge Publishers.
Goldberg, I., Harel, M., and Malach, R. (2006). ‘When the Brain Loses its Self: Prefrontal Inactivation during Sensorimotor Processing’, Neuron, 50, 329–39.
Goldman, A. (1993). ‘Consciousness, Folk Psychology and Cognitive Science’, Consciousness and Cognition, 2, 264–82.
Goldman, A. (2006). Simulating Minds, New York: Oxford University Press.
Grill-Spector, K. and Malach, R. (2004). ‘The Human Visual Cortex’, Annual Review of Neuroscience, 27, 649–77.
Gunther, Y., ed. (2003). Essays on Nonconceptual Content, Cambridge, MA: MIT Press.
Harman, G. (1990). ‘The Intrinsic Quality of Experience’, in J. Tomberlin, ed., Philosophical Perspectives, 4, Atascadero, CA: Ridgeview Publishing.
Horgan, T. and Tienson, J. (2002). ‘The Intentionality of Phenomenology and the Phenomenology of Intentionality’, in D. Chalmers, ed., Philosophy of Mind: Classical and Contemporary Readings, New York: Oxford University Press.
Hurley, S. and Nudds, M., eds. (2006). Rational Animals?, New York: Oxford University Press.
Janzen, G. (2008). The Reflexive Nature of Consciousness, Amsterdam and Philadelphia: John Benjamins.
Jehle, D. and Kriegel, U. (2006). ‘An Argument against Dispositional HOT Theory’, Philosophical Psychology, 19, 462–76.
Kant, I. (1781/1965). Critique of Pure Reason, translated by N. Kemp Smith, New York: MacMillan.
Kirk, R. (1994). Raw Feeling, New York: Oxford University Press.
Kozuch, B. (2014). ‘Prefrontal Lesion Evidence against Higher-Order Theories of Consciousness’, Philosophical Studies, 167, 721–46.
Kriegel, U. (2002). ‘PANIC Theory and the Prospects for a Representational Theory of Phenomenal Consciousness’, Philosophical Psychology, 15, 55–64.
Kriegel, U. (2003). ‘Consciousness as Intransitive Self-Consciousness: Two Views and an Argument’, Canadian Journal of Philosophy, 33, 103–32.
Kriegel, U. (2005). ‘Naturalizing Subjective Character’, Philosophy and Phenomenological Research, 71, 23–56.
Kriegel, U. (2006). ‘The Same Order Monitoring Theory of Consciousness’, in U. Kriegel and K. Williford, eds., Self-Representational Approaches to Consciousness, Cambridge, MA: The MIT Press.
Kriegel, U. (2007). ‘A Cross-Order Integration Hypothesis for the Neural Correlate of Consciousness’, Consciousness and Cognition, 16, 897–912.
Kriegel, U. (2009). Subjective Consciousness, New York: Oxford University Press.
Kriegel, U., ed. (2013). Phenomenal Intentionality, New York: Oxford University Press.
Kriegel, U. and Williford, K., eds. (2006). Self-Representational Approaches to Consciousness, Cambridge, MA: The MIT Press.
Lau, H. and Passingham, R. (2006). ‘Relative Blindsight in Normal Observers and the Neural Correlate of Visual Consciousness’, Proceedings of the National Academy of Sciences of the United States of America, 103, 18763–8.
Lau, H. and Rosenthal, D. (2011). ‘Empirical Support for Higher-Order Theories of Conscious Awareness’, Trends in Cognitive Sciences, 15, 365–73.
Levine, J. (2001). Purple Haze: The Puzzle of Conscious Experience, Cambridge, MA: The MIT Press.
Locke, J. (1689/1975). An Essay Concerning Human Understanding, P. Nidditch, ed., Oxford: Clarendon.
Lurz, R., ed. (2009). The Philosophy of Animal Minds, Cambridge: Cambridge University Press.
Lurz, R. (2011). Mindreading Animals, Cambridge, MA: The MIT Press.
Lycan, W. (1996). Consciousness and Experience, Cambridge, MA: The MIT Press.
Lycan, W. (2001). ‘A Simple Argument for a Higher-Order Representation Theory of Consciousness’, Analysis, 61, 3–4.
Lycan, W. (2004). ‘The Superiority of HOP to HOT’, in R. Gennaro, ed. (2004a), Higher-Order Theories of Consciousness: An Anthology, Amsterdam and Philadelphia: John Benjamins.
Nagel, T. (1974). ‘What is it Like to be a Bat?’, Philosophical Review, 83, 435–56.
Neander, K. (1998). ‘The Division of Phenomenal Labor: A Problem for Representational Theories of Consciousness’, Philosophical Perspectives, 12, 411–34.
Newen, A. and Vogeley, K. (2003). ‘Self-Representation: Searching for a Neural Signature of Self-Consciousness’, Consciousness and Cognition, 12, 529–43.
Nichols, S. and Stich, S. (2003). Mindreading, New York: Oxford University Press.
Picciuto, V. (2011). ‘Addressing Higher-Order Misrepresentation with Quotational Thought’, Journal of Consciousness Studies, 18 (3–4), 109–36.
Pollen, D. (2008). ‘Fundamental Requirements for Primary Visual Perception’, Cerebral Cortex, 18, 1991–98.
Prinz, J. (2012). The Conscious Brain, New York: Oxford University Press.
Rolls, E. (2004). ‘A Higher Order Syntactic Thought (HOST) Theory of Consciousness’, in R. Gennaro, ed. (2004a), Higher-Order Theories of Consciousness: An Anthology, Amsterdam and Philadelphia: John Benjamins.
Rosenthal, D. M. (1986). ‘Two Concepts of Consciousness’, Philosophical Studies, 49, 329–59.
Rosenthal, D. M. (1991). ‘The Independence of Consciousness and Sensory Quality’, Philosophical Issues, 1, 15–36.
Rosenthal, D. M. (1997). ‘A Theory of Consciousness’, in N. Block, O. Flanagan, and G. Güzeldere, eds., The Nature of Consciousness, Cambridge, MA: The MIT Press.
Rosenthal, D. M. (2002). ‘Explaining Consciousness’, in D. Chalmers, ed., Philosophy of Mind: Classical and Contemporary Readings, New York: Oxford University Press.
Rosenthal, D. M. (2004). ‘Varieties of Higher-Order Theory’, in R. Gennaro, ed. (2004a), Higher-Order Theories of Consciousness: An Anthology, Amsterdam and Philadelphia: John Benjamins.
Rosenthal, D. M. (2005). Consciousness and Mind, New York: Oxford University Press.
Rosenthal, D. M. (2011). ‘Exaggerated Reports: Reply to Block’, Analysis, 71, 431–37.
Santos, L., Nissen, A. and Ferrugia, J. (2006). ‘Rhesus monkeys, Macaca mulatta, Know What Others Can and Cannot Hear’, Animal Behaviour, 71, 1175–81.
Sartre, J. (1956). Being and Nothingness, New York: Philosophical Library.
Sauret, W. and Lycan, W. (2014). ‘Attention and Internal Monitoring: A Farewell to HOP’, Analysis, 74, 363–70.
Seager, W. (2004). ‘A Cold Look at HOT Theory’, in R. Gennaro, ed. (2004a), Higher-Order Theories of Consciousness: An Anthology, Amsterdam and Philadelphia: John Benjamins.
Searle, J. (1992). The Rediscovery of the Mind, Cambridge, MA: The MIT Press.
Sebastián, M. A. (2014). ‘Not a HOT Dream’, in R. Brown, ed., Consciousness Inside and Out: Phenomenology, Neuroscience, and the Nature of Experience, 415–32, Dordrecht: Springer.
Siewert, C. (1998). The Significance of Consciousness, Princeton: Princeton University Press.
Smith, D. W. (2004). Mind World: Essays in Phenomenology and Ontology, Cambridge: Cambridge University Press.
Terrace, H. and Metcalfe, J., eds. (2005). The Missing Link in Cognition: Origins of Self-Reflective Consciousness, New York: Oxford University Press.
Thau, M. (2002). Consciousness and Cognition, Oxford: Oxford University Press.
Tye, M. (1995). Ten Problems of Consciousness, Cambridge, MA: The MIT Press.
Tye, M. (2000). Consciousness, Color, and Content, Cambridge, MA: The MIT Press.
Van Gulick, R. (1995). ‘What Would Count as Explaining Consciousness?’, in T. Metzinger, ed., Conscious Experience, Paderborn: Ferdinand Schöningh.
Van Gulick, R. (2000). ‘Inward and Upward: Reflection, Introspection and Self-Awareness’, Philosophical Topics, 28, 275–305.
Van Gulick, R. (2004). ‘Higher-Order Global States (HOGS): An Alternative Higher-Order Model of Consciousness’, in R. Gennaro, ed. (2004a), Higher-Order Theories of Consciousness: An Anthology, Amsterdam and Philadelphia: John Benjamins.
Van Gulick, R. (2006). ‘Mirror Mirror – Is That All?’, in U. Kriegel and K. Williford, eds., Self-Representational Approaches to Consciousness, Cambridge, MA: The MIT Press.
Weisberg, J. (2008). ‘Same Old, Same Old: The Same-Order Representation Theory of Consciousness and the Division of Phenomenal Labor’, Synthese, 160, 161–81.
Weisberg, J. (2011). ‘Misrepresenting Consciousness’, Philosophical Studies, 154, 409–33.
Williford, K. (2006). ‘The Self-Representational Structure of Consciousness’, in U. Kriegel and K. Williford, eds., Self-Representational Approaches to Consciousness, Cambridge, MA: The MIT Press.
Zahavi, D. (2004). ‘Back to Brentano?’, Journal of Consciousness Studies, 11 (10–11), 66–87.
Zahavi, D. (2007). ‘The Heidelberg School and the Limits of Reflection’, in S. Heinämaa, V. Lähteenmäki, and P. Remes, eds., Consciousness: From Perception to Reflection in the History of Philosophy, Dordrecht: Springer.
Zeki, S. (2007). ‘A Theory of Micro-Consciousness’, in M. Velmans and S. Schneider, eds., The Blackwell Companion to Consciousness, Malden, MA: Blackwell.
10
Kripke on Mind–Body Identity
Scott Soames
1 Contingency, aposteriority and mind–body identity

The argument against mind–body identity theory in Naming and Necessity is directed against a theory advocated in Place (1956), Smart (1963), Lewis (1966) and Armstrong (1968). Their psychophysical identity theory attempted to vindicate the reality of mental processes by identifying pains, sensations and consciousness itself with brain states and processes. It arose in reaction to phenomenalism and behaviourism, the latter in both its scientific form, illustrated by B. F. Skinner, and its philosophical or ‘logical’ form, illustrated by Gilbert Ryle. Early versions didn’t specify which brain states and processes were identical with pain states, sensation states or consciousness. That was a job for neuroscientists. The philosophical task was to defeat conceptual objections to the possibility that any such identification could be correct and to articulate the explanatory advantages of incorporating the mental into physical science. According to these theorists, identifying a mental type, say pain, with a neurochemical type – call it ‘C-fibre stimulation’ – is conceptually no more problematic than identifying lightning with a type of electrical discharge, heat with mean molecular kinetic energy or water with H2O. Psychophysical identity theorists took all these identities to be contingent a posteriori truths. Kripke argued they were wrong, both about the already established identities and about the alleged psychophysical identities.
2 Rigidity, necessity and identity

His argument arose from views about necessity and rigid designation. A necessary truth was, for him, one that would have been true no matter what
possible state the world was in. Although some necessary truths are knowable a priori and some are expressed by analytic sentences, others are neither. If I say, of my dog Lilly, ‘She is an animal’, what I assert is true, and couldn’t have been false (provided she existed), since Lilly – not something similar in appearance, but Lilly herself – couldn’t have existed without being an animal. Nevertheless, this truth is neither knowable a priori nor expressed by an analytic sentence. Rigid designation is defined as follows in intensional semantics.

Rigid Designation (for singular terms)
A singular term t is a rigid designator with respect to a context C and assignment A of values to variables iff there is an object o such that (i) t refers to o with respect to C, A, and the world-state wC of C, and (ii) for all possible world-states w in which o exists, t refers to o with respect to C, A, and w, and (iii) t never refers to anything else with respect to C, A, and any world-state w*.
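The three clauses can be compressed into a single formula. The notation here is my own, not Kripke’s or Soames’s: ref(t, C, A, w) stands for the referent of t at context C, assignment A and world-state w, and E(o, w) for ‘o exists at w’.

```latex
% Rigid designation for a singular term t, relative to context C and
% assignment A (a sketch of the three clauses above, in my notation):
\mathrm{Rigid}(t) \;\leftrightarrow\; \exists o \, \big[\,
    \mathrm{ref}(t, C, A, w_C) = o                                                    % (i)
    \;\wedge\; \forall w \, \big( E(o, w) \rightarrow \mathrm{ref}(t, C, A, w) = o \big)  % (ii)
    \;\wedge\; \forall w^{*} \, \forall o' \, \big( \mathrm{ref}(t, C, A, w^{*}) = o' \rightarrow o' = o \big)  % (iii)
\,\big]
```

Clause (iii) is what rules out a term that designates o at some world-states and something else at others.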
Proper names, simple indexicals (‘I’, ‘she’, ‘it’, ‘that’, etc.) and individual variables are rigid in this sense, while some complex singular terms – such as the Fregean singular definite description ‘the greatest student of Plato’ – aren’t rigid. Let t be a singular term, S be the sentence ┌t liked dogs┐, and A be the person designated by one’s use of t at a context of utterance C. Let p be the proposition expressed by one’s use of S in C. If t is rigid, then A is the one whose liking dogs, at any world-state w, is necessary and sufficient for p to be true at w; if t isn’t rigid, this needn’t be so. So, if t is ‘Aristotle’, or if the demonstrative ‘he’ is used to refer to Aristotle, or if a variable ‘x’ is assigned Aristotle as value, then one individual, the same for every world-state w, must like dogs at w in order for p to be true at w. So these terms are rigid. When t is ‘the greatest student of Plato’, either p can be true at different world-states w by virtue of different dog lovers being Plato’s greatest student at w, or p can be false at w even if Aristotle likes dogs at w, or both. Thus the description isn’t rigid. Here is a useful test. A term t is rigid iff a use of the following sentence containing t is true: ‘the individual that is/was actually t couldn’t have existed without being t, and nothing other than that individual could have been t.’ When a and b are rigid singular terms, ┌If a = b then necessarily a = b┐ is always true.1 So, Hesperus is necessarily Phosphorus, I am necessarily Scott Soames, and x is necessarily identical with y, whenever x is identical with y. Suppose I name my current headache ‘H’ and a neuroscientist names the stimulation of a certain C-fibre of mine ‘C-Stim’. If H is C-Stim, then necessarily H is C-Stim. What about (1) and (2), which contain the general terms ‘pain’ and ‘C-fibre stimulation’?
1. Pain is C-fibre stimulation (that is, Pain = C-fibre stimulation)
2. Pains are C-fibre stimulations (∀x [x is a pain iff x is a C-fibre stimulation])

The definition of rigidity for general terms parallels the definition for singular terms.2

Rigid Designation (for general terms)
A general term t is a rigid designator iff t designates a property or kind PK at the actual world-state, and for all possible world-states w in which PK exists, t designates PK at w, and t never designates anything else.
Consider ‘blue’ and ‘the colour of a cloudless sky at noon’, which, when understood as general terms, can combine with the copula to form a predicate.

3a. Mary’s eyes are blue.
b. Mary’s eyes are the colour of a cloudless sky at noon.

‘Blue’ is rigid because the colour Mary’s eyes must be at a world-state in order for the proposition expressed by (3a) to be true at that state doesn’t change from one state to the next. Since the same can’t be said about the proposition expressed by (3b), ‘the colour of a cloudless sky at noon’ isn’t rigid. Because the general terms ‘pain’ and ‘C-fibre stimulation’ are nouns, they appear with an article when they combine with the copula to form a predicate.

4a. The sensation I felt a minute ago was a pain.
4b. The neurological event that just occurred was a C-fibre stimulation.

Kripke takes it for granted that both ‘pain’ and ‘C-fibre stimulation’ are rigid. Though it’s not, I think, entirely obvious that ‘pain’ is rigid, Kripke’s claim is not unreasonable. Thus I will hold off questioning the rigidity of ‘pain’ until later. Nevertheless, rigidity isn’t the distinguishing feature of Kripke’s account of natural kind terms like ‘water’, ‘light’, ‘heat’, ‘red’. These general terms are rigid, but so are the non-natural kind terms ‘square’, ‘automobile’, ‘philosopher’, ‘physician’ and ‘bachelor’. It is hard to find a single-word general term that isn’t. The distinguishing feature of Kripkean natural kind terms is a certain kind of non-descriptionality. Like names, they aren’t synonymous with descriptions associated with them by speakers. They are also like names in the way in which their reference is fixed. Just as names are often introduced by stipulating they are to refer to individuals with which one is already acquainted, natural kind terms are often introduced by stipulating that they are to designate kinds with which
one is acquainted through their instances. Imagine ‘water’ being introduced by the following stipulation:

The term ‘water’ is to designate the property possession of which explains the most salient features of nearly all samples we have encountered; for example, the fact that they boil and freeze at certain temperatures, that they are clear, potable, and necessary to life.
If ‘water’ were so introduced, its instances at a world-state would be quantities with the property that explains the salient features of (nearly) all actually encountered water-samples. The stipulation is, of course, idealized. ‘Water’ behaves pretty much as if it had been introduced by such a stipulation, but presumably it wasn’t. It was enough for speakers to start calling certain quantities ‘water’, intending it to apply to whatever shared the properties explaining their most important observational characteristics. Once introduced, a natural kind term is passed from speaker to speaker, just as names are.3 Since these terms are rigid, the natural kinds they designate don’t change from one world-state to another. But the extensions of predicates formed from them do. Whereas ‘water’ rigidly designates the kind, which is its extension at every world-state, the extension of ‘is water’ at w is the set of instances of water at w. Since different quantities of water are found at different world-states, the predicate ‘is water’ is nonrigid. The same can be said for other natural kind terms and the predicates arising from them. Now consider (5), in which (b) and (c) are different ways of understanding (a):4

5a. Water is H2O.
b. Water = H2O.
c. ∀x (x is (a quantity of) water iff x is (a quantity of) H2O).

Because the terms are rigid, (5b) is necessary if true. Of course, if (5b) is true, then (5c) is also necessary. But, if we haven’t established (5b), then we can’t get directly from the truth of (5c) to its necessity. If we also know that being water and being H2O are essential properties of any quantity that has them, we can move from (5c) to (5d).

5d. ∀x □(x is a quantity of water iff x is a quantity of H2O).

But this doesn’t guarantee the necessity of (5c). Similar remarks apply to (6), though there is no chance of moving from (6c) to (6d), because it is not always so that when x is hotter than y, it is essential to x,y that the former is hotter than the latter.
6a. Heat is mean molecular kinetic energy.
b. Heat = mean molecular kinetic energy.
c. ∀x,y (x is hotter than y iff the mean molecular kinetic energy of x is greater than that of y).
d. ∀x,y □(x is hotter than y iff the mean molecular kinetic energy of x is greater than that of y).

These results establish the falsity of early identity theorists’ claims that empirically established identities like (5b) and (6b) are contingent. As Kripke has shown, these statements are necessary, if true. So is (1) – Pain = C-fibre stimulation – provided that ‘pain’ and ‘C-fibre stimulation’ are rigid designators. Although (2) – ∀x (x is a pain iff x is a C-fibre stimulation) – might be contingent (as long as pain isn’t identified with C-fibre stimulation), (2*)

2*. ∀x □[x is a pain iff x is a C-fibre stimulation]

must be true, provided that being a pain is essential to everything that is a pain. Whether or not ‘pain’ does rigidly designate a property that is essential to its instances will be examined later.
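The gap just described can be put schematically. On my reading, (5d) is a de re necessity claim (the box inside the quantifier), while the necessity of (5c) would be the stronger de dicto claim; the notation below is my own sketch, not Soames’s.

```latex
% De re essentiality, the reading of (5d), with Wx = "x is a quantity
% of water" and Hx = "x is a quantity of H2O":
\forall x \, \Box (Wx \leftrightarrow Hx)

% The necessity of (5c) is the de dicto claim:
\Box \, \forall x \, (Wx \leftrightarrow Hx)

% The first does not entail the second: a merely possible world-state
% may contain quantities that do not exist at the actual world-state,
% and about those the de re claim is silent.
```

The same contrast separates (6d) from the necessity of (6c), and (2*) from the necessity of (2).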
3 Kripke’s main argument against identifying pain with C-fibre stimulation

Kripke’s argument contrasts (1) with (6b).

1. Pain = C-fibre stimulation
6b. Heat = mean molecular kinetic energy

Although both seem, on first consideration, to be contingently true, or contingently false, (6b) is necessary if true. How then is its apparent contingency explained? It was an empirical discovery that how hot something is depends on how fast its molecules are moving. Since we couldn’t have known this a priori, evidence was needed to rule out conceivable scenarios in which it isn’t so. So, if one wrongly identified real possibilities with conceivable scenarios that we can’t know a priori not to be actual, one would wrongly take (6b) to be contingent. If we don’t fall prey to this confusion, we won’t take the necessity of (6b) to threaten its aposteriority. Might a psychophysical identity theorist who agreed with Kripke about the rigidity of ‘pain’ and ‘C-fibre stimulation’ say the same about (1)? Kripke thinks not.
He finds the illusion that (6b) is contingent to be rooted in the fact that we identify heat indirectly, by the sensations it causes in us. Because of this he says that we associate ‘heat’ with the reference-fixing description ‘the cause of a certain sensation S’ (of heat). Taking this sensation to be part of our concept of heat, we confuse the description with a synonym for ‘heat’, and the necessary truth (6b) with the contingent truth (7).

7. The cause of sensation S = mean molecular kinetic energy

Rightly recognizing possible world-states at which (7) is false, we wrongly take them to be world-states at which (6b) is false.5 This is a mistake. We all recognize possible world-states at which many things are hot, even though there are no sentient beings capable of having any sensations. Thus ‘heat’ isn’t synonymous with ‘the causes of sensation S’.6 Kripke argues that the same strategy can’t be used to dismiss the impression that there are possible world-states at which (1) is false. Unlike heat, we designate pain directly. We don’t say, ‘What a horrible sensation! Let’s use “pain” to rigidly designate its cause’. Nor do we define its referent as the bearer of properties we can conceive of something other than pain as bearing. Since there is no descriptive reference-fixer to confuse with a synonym for ‘pain’ and no contingent truth to confuse with (1), the impression that (1) is contingent, if true, isn’t an illusion. The conceivable scenarios in which pain isn’t C-fibre stimulation are possible world-states in which (1) is false. Since, in fact, (1) is necessary if true, it follows that (1) is false.
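As a summary, the argument of this section can be compressed into four steps. This schematic rendering, with P for pain and C for C-fibre stimulation, is my own sketch of the reasoning above, not Kripke’s notation.

```latex
% 1. Rigidity of 'pain' and 'C-fibre stimulation' gives necessity if true:
(P = C) \rightarrow \Box (P = C)
% 2. The appearance of contingency cannot be explained away, since 'pain'
%    has no descriptive reference-fixer; the conceived scenarios in which
%    pain isn't C-fibre stimulation are genuine possibilities:
\Diamond (P \neq C)
% 3. Hence the identity is not necessary:
\neg\, \Box (P = C)
% 4. From 1 and 3, by modus tollens:
P \neq C
```

Step 2 is where (1) differs from (6b): for ‘heat’ there is a contingent surrogate, (7), onto which the appearance of contingency can be deflected; for ‘pain’, Kripke argues, there is none.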
4 The weakness of the argument

Kripke’s argument depends on a questionable contrast between how we identify heat and pain. Although there is a contrast, it’s not, I think, the one he suggests. The most fundamental contrast is that whereas heat is something perceived, pain is our perception of something. Our sensation of heat is our perception of heat; it is a special kind of perceptual experience that reliably, but fallibly, detects heat. Similarly, our pain sensation is our perception of injury. It is a special kind of perceptual experience that reliably, but fallibly, detects injury. The reason there is no pain without an experience of pain is that pains are a special type of perceptual experience. Contra Kripke, we don’t identify heat by first perceiving a sensation S, and then using it to talk about the know-not-what that caused S. The sensation is our
The Bloomsbury Companion to the Philosophy of Consciousness
perception of heat, just as a visual experience of my dog Lilly is a perception of her. Lilly does cause my visual experience, but when I identify her I do so directly, by perceiving her, not indirectly, by making my perceptual experience of her the object of my attention, and defining her as its cause. If I ask myself, ‘To what do I use “Lilly” to refer?’ I look at her and answer ‘To her’. If I ask myself, ‘To what do I use “heat” to refer?’, I move close to the fire, or the stove, and answer ‘To that’. Since there is no ‘reference-fixing description’, I don’t take either term to be synonymous with a description. Nor do I confuse scenarios involving Lilly, or heat, with scenarios in which other things cause my experiences. In short, when I say I can conceive of heat not being molecular motion, or of Lilly not being an animal, I am not misdescribing some other possibility that I am really conceiving. I am not really thinking of sensation S being caused by something other than heat, or of my Lilly-perceptions being caused by a robotic facsimile. I am simply thinking of heat, or Lilly, as lacking an essential property P. Because P is essential, the claim that x has P, if x exists, is necessary. Because I can’t know a priori that x has P, knowledge of the necessary truth requires empirical evidence to rule out conceivable disconfirming scenarios that can’t be eliminated a priori. The same can be said about self-predications. Let P be a property – such as having a body made up of molecules, or being a human being – that I couldn’t have existed without having, but which I can’t know I have without empirical evidence. My remark ‘If I exist, then I have P’ will then express a necessary truth. Although this truth might wrongly seem contingent, this isn’t because I wrongly take the first-person singular pronoun to be synonymous with a reference-fixing description. There is no such description. 
When I use the pronoun, I don’t identify myself as the creature, whoever it might be, designated by a privileged description. Thus, when I say I am conceiving a scenario in which I lack P, I am not confusing myself with some other creature, Mistaken-Me, who, in fact, is designated by my reference-fixing description – thereby misdescribing a different possibility in which he lacks P. The lesson is the same in all our cases. Whether it is heat and mean molecular kinetic energy, Lilly and being sentient or me and being human, the mistake of wrongly taking a proposition to be contingent that, in fact, must be necessary if true, is due to the fact that establishing its truth requires empirical evidence ruling out scenarios in which it is false. In some cases, there may be other sources of confusion, too. Perhaps some philosophers have confused heat with the sensation of heat, as Kripke says. But that isn’t the main reason it was surprising
Kripke on Mind–Body Identity
that the empirical discovery that heat is mean molecular kinetic energy turned out to be necessary. What was surprising was that the reason empirical evidence is needed to establish the kinetic theory isn’t to rule out disconfirming possibilities; it is to rule out disconfirming impossibilities we can’t know a priori not to be actual. This is the core insight behind the Kripkean necessary a posteriori. When T rigidly designates an individual o, or kind k, when F expresses an essential property of o, or k, and when knowledge of o, or k, that it has this property requires empirical evidence, the proposition expressed by ┌If T exists, then T is F┐ is necessary but knowable only a posteriori. The surprise was that knowledge of actuality is sometimes required to give us knowledge of what is, and what isn’t, possible.7 Kripke’s insight requires distinguishing ways things could conceivably be from ways they could really be. According to him, when p is necessary but knowable only a posteriori, it is knowable a priori that if p is true, then it is necessary.8 Since one can’t know p a priori, world-states in which p is false are coherently conceivable, and so epistemically possible. When one learns that p is true, one learns that none of these world-states could have been actual.9 In short, one learns empirically that certain epistemically possible world-states are metaphysically impossible. For the Kripkean, metaphysically possible world-states are maximally complete properties the universe could have had. Epistemically possible states are maximally complete properties the universe can be conceived as having which we can’t know a priori it doesn’t have. The former set of properties is a proper subset of the latter.10 We all know that there are properties that ordinary things could have had and others they couldn’t have had. 
The same is true of the universe; there are maximally complete properties it could have had – metaphysically possible world-states – and others it couldn’t have had – metaphysically impossible states. We can all coherently conceive of ordinary objects having some properties they couldn’t have had. The same is true of the universe. We can all coherently conceive of it having some maximal properties it couldn’t have had. These are epistemically but not metaphysically possible world-states. The reason empirical evidence is needed for knowledge of necessary a posteriori truths is to rule out metaphysically impossible, but epistemically possible, world-states at which they are false. With this we return to Kripke’s claim that the apparent falsity of (6b) – heat = mean molecular kinetic energy – at certain possible world-states can be explained away as an illusion, but the apparent falsity of (1) – pain = C-fibre stimulation – at certain possible world-states can’t be explained away. Kripke’s argument for
this claim fails, even if ‘pain’ is a rigid designator. In both cases, it is open to the defender of the identity theory to argue that the appearance of contingency arises from confusing epistemic possibility with metaphysical possibility. Since the identity statements (1) and (6b) can’t be known a priori, empirical evidence ruling out epistemically possible world-states at which they are false is needed if they are to be known at all. Since this doesn’t establish the existence of metaphysically possible world-states in which the identities fail, Kripke needs another argument.
5 A second Kripkean argument against pain–brain-state identity

He has one, which can be reconstructed from the following passage.

What about ‘pain’ and ‘C-fiber stimulation’? It should be clear from the previous discussion that ‘pain’ is a rigid designator of the type, or phenomenon, it designates: if something is a pain it is essentially so, and it seems absurd to suppose that pain could have been some phenomenon other than the one it is. The same holds for the term C-fiber stimulation, provided that ‘C-fibers’ is a rigid designator, which I will suppose here. … Thus, the identity of pain with the stimulation of C-fibers, if true, must be necessary.11
Here Kripke confuses the claim that ‘pain’ is rigid with the claim that it designates a property essential to its instances. The difference between these claims is illustrated by the general terms ‘blue’ and ‘hot’. Although both are rigid, the properties they designate aren’t essential properties of their instances. Thus, the assumption that the terms ‘pain’ and ‘C-fibre stimulation’ designate properties that are essential to their instances doesn’t follow from the claim that they are rigid. Given this essentialist assumption, one can show (2) to be false by showing (2*) to be false, leading to the conclusion that (1) is also false, if ‘pain’ is rigid.12

2*. ∀x [x is a pain iff x is a C-fibre stimulation]

(2*) does appear to be false. Consider the headache I had this morning. Could it – that very sensation – have existed without being a pain, because the experience was either pleasurable, or unnoticeable? Although it is natural to think that the actual C-fibre stimulations responsible for my headache could have existed without my experiencing pain, it is less clear that my pain sensation could have existed without being a pain. Suppose at world-state w, I exist with all my
C-fibres, but my brain is different from the way it is at the actual world-state – either because the evolutionary path leading to me at w is different from the one at the actual world-state, or because at w some genetically designed D-fibres that counteract the effects of C-fibre stimulation have been surgically added. At w, the same C-fibres fire in my brain that actually caused my headache, but at w I experience pleasure. My C-fibre stimulation exists at w, without being a pain at w. If this is metaphysically possible, then (2*) is false. If, in addition, being a pain is an essential property of its instances, then (2) is also false, in which case (1) is too. But is being a pain really an essential property of anything that has it?
6 Reassessing rigidity and essentiality

How do I identify pains? Since they are conscious experiences, I am aware of my own pains in something like the way I am aware of my other conscious experiences (e.g., my visual or auditory experiences). Knowing that my pain experiences are caused by certain kinds of events, which then modify my thoughts, motivations and actions in characteristic ways, I can identify pain in others by observing their verbal and non-verbal responses to events similar to those that cause pain in me. This pre-theoretical picture anticipates more sophisticated functionalist conceptions of mind according to which the mental states of an organism are internal states that causally interact in systematic ways to mediate sensory inputs and behavioural outputs.13 On such conceptions, sensory inputs interact with existing beliefs, desires and preferences to change them, often resulting in instructions being sent to the muscles. Different mental states play different causal roles. Preferences assign high priority to certain outcomes. Believing that p typically leads to behaviour that brings about highly valued outcomes in situations in which it is true that p. Desiring that p often leads to actions one believes will bring it about that p.14 On this picture, pain is a kind of internal perception of injuries that agents have a high preference for avoiding. Normally, this perception leads to actions intended to minimize the injury, and intentions to avoid similar injury in the future. In short, pain is the internal state of an organism the function of which is to detect and minimize injury. The predicate ‘is a pain’ is true of all and only those datable events that are instances of this state. It is possible that very different physiological states count as pain in different individual organisms, and types of organisms. If we can imagine non-physical beings inhabiting bodies, it is not even ruled out that they too have pains. 
What all these beings have in common
is an internal perceptual state, the function of which is to detect certain kinds of bodily injury, and to trigger changes in their current motivational structure that normally lead to actions intended to end or minimize the current injury, and to form or reinforce desires to avoid similar injury in the future. Let the function of that state be as described. Then (1*) is both true and necessary.

1*. Pain in an organism o is the state in o that plays the pain role. (Pain in an organism = the state that plays the pain role.)

What about (1H), (2H) and (2H*), the following particularized versions of (1), (2) and (2*)?
1H. Pain (in humans) is C-fibre stimulation (in humans). (Human pain = human C-fibre stimulation)
2H. Pains (in humans) are C-fibre stimulations (in humans). (∀x[x is a human pain iff x is a human C-fibre stimulation])
2H*. ∀x [x is a human pain iff x is a human C-fibre stimulation]

Suppose further that empirical investigation were to give us good reason to believe that for every pain in a human being there was a corresponding C-fibre stimulation, and conversely. Since, on the present understanding, we are not driven to take ‘human pain’ to be a rigid designator, the contingency of (1H) doesn’t demonstrate its falsity. Presumably, evolution could have gone differently enough to bring it about that, at a given possible world-state w, instances of something slightly different from C-fibre stimulation – call it B-fibre stimulation – played the pain role. In that case, human pain would be B-fibre stimulation at w. On this picture, ‘pain’ isn’t a rigid designator and (1H) is true, despite being contingent. A slight variation in the case would allow the continued existence of human C-fibres, even though stimulation of them wouldn’t play the pain role, because, at the world-state w*, C-fibres interact with new neural systems not present in human brains at the actual world-state. On this picture, nothing obviously rules out particular C-fibre stimulations that are pains at the actual world-state from existing at w* without being pains at w*. Consequently, the presumed falsity of (2H*) doesn’t rule out the truth of (2H). Of course, the ‘possibilities’ alluded to here are speculative. It could turn out that they aren’t genuine metaphysical possibilities. But nothing I know of points in that direction, and nothing I can find in Naming and Necessity makes a strong case against it. Thus, it seems that Kripke’s objections to the versions of mind–body identity we have been considering don’t succeed.
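The functional-role picture sketched in this section – pain as whatever internal state plays the pain role, whatever its physical realization – can be caricatured in a few lines of code. This is purely our illustration; every name and field below is invented for the sketch and appears nowhere in Soames’s text:

```python
def plays_pain_role(state):
    """A state plays the pain role if it detects bodily injury and triggers
    motivational changes aimed at minimizing and avoiding such injury."""
    return (state.get('detects') == 'injury'
            and 'minimize injury' in state.get('triggers', ()))

# Different physiological states can realize the same role in different
# organisms -- multiple realizability, as the section describes.
human_c_fibre_stim = {'substrate': 'C-fibres', 'detects': 'injury',
                      'triggers': ('minimize injury', 'avoid in future')}
martian_b_fibre_stim = {'substrate': 'B-fibres', 'detects': 'injury',
                        'triggers': ('minimize injury', 'avoid in future')}
visual_experience = {'substrate': 'visual cortex', 'detects': 'light',
                     'triggers': ('orient',)}

# Both injury-detecting states count as pain; the visual state does not.
[plays_pain_role(s) for s in
 (human_c_fibre_stim, martian_b_fibre_stim, visual_experience)]
```

The point of the toy is that the predicate is defined over the role, not the substrate: swapping ‘C-fibres’ for ‘B-fibres’ changes nothing, which is why (1H) can be true yet contingent on this view.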
7 But is Kripke’s conclusion false?

That doesn’t mean that Kripke was wrong to be sceptical of attempts to identify pain with a physical state. Although I have tried to give a defensible functionalist sketch of ‘the pain role’, my sketch contains two wild cards. First, the pain state is required to be the internal state of an organism. In order to be capable of feeling pain, it is not enough that some arbitrary physical system – constructed out of any materials whatsoever – can be given an interpretation in which its changes of state correspond 1-1 to the changes in the internal state of an organism that detects damage, minimizes it, and initiates actions. There is more involved in being an internal state of an organism that does these things than is captured by a merely abstract mapping. What this something more is remains an open question. Second, I characterized pain as a kind of perception of bodily injury, without saying what kind. I suspect that not just any kind will do. Consider a being otherwise like us except for detecting pain in a way that is qualitatively similar to our hearing of pleasant musical sounds of varying intensities, corresponding to our pains of varying intensities. Suppose other modifications of the sound gave the location of the perceived injury. The scenario is one in which these pseudo-pains – which are caused by the same external events that cause our pains – lead to internal changes of mental states, resulting in behavioural changes, in much the way our pains do. Do these imagined agents really feel pain? I’m not confident they do. If the musical qualia are pleasant, how could they feel pain? Since I don’t know what to think about this case, I conclude – not that Kripke was wrong to reject psychophysical identity theories, but only that his arguments against them are inconclusive.
8 Addendum: Necessary a posteriori identities

In section 4, I traced the failure of Kripke’s main argument against the identification of pain with C-fibre stimulation to a failure to respect his own insight involving properties we know a priori to be essential to anything that has them, but which we can know an entity to possess only empirically. As shown in Soames (2011), all Kripkean instances of the necessary a posteriori, save one type, clearly fit this pattern. In all these cases empirical investigation is needed to rule out epistemically possible, but metaphysically impossible, world-states.
The one class of apparent exceptions involves statements like (8) in which the identity predicate is flanked by simple Millian terms, the representational contents of which are their referents.15

8. Water = H2O

If each term is genuinely Millian, the proposition that (8) is used to express identifies the kind Kwater with itself. Since Kripke rejected descriptive analyses of these terms, he was hard pressed to explain why empirical evidence is needed to establish this identification. I have said it is to rule out epistemically possible but metaphysically impossible world-states. Which states are they? Since world-states are properties of making true certain sets of basic propositions that tell complete world-stories, the question is answered by identifying those sets of propositions.16 Now that we have a conception of propositions that allows us to distinguish representationally identical but cognitively distinct propositions, we can do that.17 In each case the key proposition p in the set is the cognitive proposition that predicates non-identity of a pair of arguments, the first of which, Kwater, is cognized via the term ‘water’, and the second of which, Kwater, is cognized via ‘H2O’.18 Since knowledge of ~p requires empirical evidence that the terms are co-designative, ~p can’t be known a priori. This means that the world-states thereby defined can’t be known a priori not to be actual. Hence, the world-states that ~p is used to define are epistemically, but not metaphysically, possible. Without cognitive propositions, Kripke had no way of seeing this.
Notes

1 For simplicity I here suppress complications about what to say when a or b doesn’t exist at a possible world-state.
2 This definition is simplified by not relativizing designation to contexts and assignments of values to variables. We can afford to do this because all the general terms we will consider will either be single words or phrases that do not contain indexicals or variables.
3 See Soames (2007b).
4 (5a) can also be understood as a universally quantified conditional, as ‘Ice is H2O’ is. See chapter 11 of Soames (2002).
5 Kripke (1980), 150–51.
6 ‘Heat’ is also not synonymous with ‘the x: Actually(x caused sensation S)’. For explanation, see Chapter 2 of Soames (2002), Soames (2007a), and chapters 4 and 6 of Soames (2010).
7 See Stalnaker (1978; 1984) for an influential model of inquiry in which the function of empirical evidence is always to rule out genuine possibilities that could have been actual. See Soames (2006b) for a detailed critique.
8 Kripke (1971), 152–53.
9 To say that a world-state is, or could have been, actual is to say that the world is, or could have been, in that state. This use of ‘actual’ contrasts with the use of ‘actual’ as a rigidifier modelled by David Kaplan’s actuality operator. See Soames (2007a) for explanation of the relation between the two uses.
10 See Soames (2007a) and chapters 5 and 6 of Soames (2010).
11 Kripke (1980), 148–9; my emphasis.
12 Here and throughout I assume that ‘C-fibre stimulation’ rigidly designates a property essential to its instances.
13 See Putnam (1967).
14 The symbol ‘p’ is used here as a schematic sentential letter.
15 For special cases like ‘Water = the substance molecules of which are composed of two hydrogen atoms and one oxygen atom’ in which one term is a simple Millian expression and the other is a rigid, but non-Millian semantically compound expression, see Soames (2007a).
16 See chapters 5 and 6 of Soames (2010).
17 See Soames (2015), particularly Chapter 4.
18 Since complete world-stories are required only to be representationally complete, no other propositions with the representational content of p, or with the representational content of ~p, need be included, along with p, as basic world-state defining propositions.
References

Armstrong, D. (1968). A Materialist Theory of Mind, London and New York: Routledge and Kegan Paul.
Kripke, S. A. (1971). ‘Identity and Necessity’, in Milton K. Munitz (ed.), Identity and Individuation, 135–64, New York: New York University Press.
Kripke, S. A. (1980). Naming and Necessity, Cambridge: Harvard University Press; originally published in Donald Davidson and Gilbert Harman (eds.), Semantics of Natural Language, 253–355, Dordrecht: Reidel, 1972.
Lewis, D. (1966). ‘An Argument for the Identity Theory’, The Journal of Philosophy, 63, 17–25; rpt. in Philosophical Papers, vol. 1, 99–107, New York: Oxford University Press.
Place, U. T. (1956). ‘Is Consciousness a Brain Process?’, British Journal of Psychology, 47, 44–50.
Putnam, H. (1967). ‘The Nature of Mental States’, in W. H. Capitan and D. D. Merrill (eds.), Art, Mind, and Religion, Pittsburgh: University of Pittsburgh Press; rpt. in
184
The Bloomsbury Companion to the Philosophy of Consciousness
Mind, Language, and Reality: Philosophical Papers, vol. 2, 429–40, Cambridge: Cambridge University Press, 1975.
Smart, J. J. C. (1963). Philosophy and Scientific Realism, New York: Humanities Press.
Soames, S. (2002). Beyond Rigidity, New York: Oxford University Press.
Soames, S. (2006a). ‘The Philosophical Significance of the Kripkean Necessary A Posteriori’, in Ernest Sosa and Enrique Villanueva (eds.), Philosophical Issues, 16, 288–309; rpt. in S. Soames, 2009. Philosophical Essays, vol. 2, Princeton and Oxford: Princeton University Press, 165–88.
Soames, S. (2006b). ‘Understanding Assertion’, in Judith Thomson and Alex Byrne (eds.), Content and Modality: Themes from the Philosophy of Robert Stalnaker, New York: Oxford University Press, 222–50; rpt. in S. Soames, 2009. Philosophical Essays, vol. 2, Princeton and Oxford: Princeton University Press, 211–42.
Soames, S. (2007a). ‘Actually’, in Mark Kalderon (ed.), Proceedings of the Aristotelian Society, supplementary volume, 81, 251–77; rpt. in S. Soames, 2009. Philosophical Essays, vol. 2, Princeton and Oxford: Princeton University Press, 277–99.
Soames, S. (2007b). ‘What are Natural Kinds’, Philosophical Topics, 35, 329–42; rpt. in S. Soames, 2014. Analytic Philosophy in America, Princeton and Oxford: Princeton University Press, 265–80.
Soames, S. (2009). Philosophical Essays, vol. 2, Princeton and Oxford: Princeton University Press.
Soames, S. (2010). Philosophy of Language, Princeton and Oxford: Princeton University Press.
Soames, S. (2011). ‘Kripke on Epistemic and Metaphysical Possibility’, in Alan Berger (ed.), Saul Kripke, Cambridge: Cambridge University Press, 78–99; rpt. in S. Soames, 2014. Analytic Philosophy in America, Princeton and Oxford: Princeton University Press, 167–88.
Soames, S. (2014). Analytic Philosophy in America, Princeton and Oxford: Princeton University Press.
Soames, S. (2015). Rethinking Language, Mind, and Meaning, Princeton and Oxford: Princeton University Press.
Stalnaker, R. (1978).
‘Assertion’, in Peter Cole (ed.), Syntax and Semantics, vol. 9, Pragmatics, 315–32; rpt. in Context and Content, New York: Oxford University Press, 78–95.
Stalnaker, R. (1984). Inquiry, Cambridge: The MIT Press.
Part Three
Metaphilosophy of Consciousness Studies
11
Understanding Consciousness by Building It

Michael Graziano and Taylor W. Webb
1 Introduction

In this chapter we consider how to build a machine that has subjective awareness. The design is based on the recently proposed attention schema theory (Graziano 2013, 2014; Graziano and Kastner 2011; Graziano and Webb 2014; Kelly et al. 2014; Webb and Graziano 2015; Webb, Kean and Graziano 2016). This hypothetical building project serves as a way to introduce the theory in a step-by-step manner and contrast it with other brain-based theories of consciousness. At the same time, this chapter is more than a thought experiment. We suggest that the machine could actually be built and we encourage artificial intelligence experts to try. Figure 11.1 frames the challenge. The machine has eyes that take in visual input (an apple in this example) and pass information to a computer brain. Our task is to build the machine such that it has a subjective visual awareness of the apple in the same sense that humans describe subjective visual awareness. Exactly what is meant by subjective awareness is not a priori clear. Most people have an intuitive notion that is probably not easily put into words. One goal of this building project is to see if a clearer definition of subjective awareness emerges from the constrained process of trying to build it. Our constraint is severe: Whatever we build into the robot must be possible given today’s technology. Not every detail need be specified. This chapter discusses general concepts and will not come anywhere near a wiring diagram. Each component must nevertheless be something that, in principle, could be built.
2 Objective awareness

We start by giving the machine information about the apple, as depicted in Figure 11.2. In the case of a human, light enters the eye and is transduced
Figure 11.1 A robot has eyes looking at an apple.
into neuronal signals. Those signals are processed in the visual system, which constructs information that describes features of the apple. Those features include overall shape, local contour, colour, size, location and many other attributes bound together to form what is sometimes called an internal model. The internal model of the apple is a packet of information that is constantly updated as new information arrives from the eyes. One of the most consequential properties of the internal model is its inaccuracy. It is an approximation or sketch. Borders are exaggerated, blurred visual features are partly filled in by algorithms that compute what is likely to be present, and at least one aspect of the apple, colour, has surprisingly little correspondence to the real world. The visual system does not contain information about wavelength that a physicist might want. It does not construct information about electromagnetic radiation, a continuous spectrum, absorption and reflection or transmission of light to the eye. The brain does not encode the actual reflectance spectrum of the apple. Instead it uses heuristics to compute a simplified property, colour, and assign it to a spatial location, the surface of the apple. The reason why the brain’s internal models are incomplete, one might even say corner-cutting, is presumably speed and efficiency in the face of limited resources. The brain must construct thousands of internal models and update them on a sub-second timescale. With a camera and a computer we can give our robot just such a simplified internal model of the apple, as illustrated in Figure 11.2. Is the robot in Figure 11.2 aware of the apple? In one sense, yes. Figure 11.2 illustrates what is sometimes termed objective awareness (Szczepanowski and Pessoa 2007). Information about the apple has gotten in and is being processed. That visual information could be used to drive behaviour. 
The machine is objectively aware of the apple in the same sense that a laptop is objectively aware of the information typed into it. But is the robot subjectively aware of the apple? Does it have a conscious visual experience? At least some scholars suggest that subjective awareness emerges naturally from information processing (Chalmers 1997). In that view even a thermostat is subjectively conscious of the simple
Figure 11.2 The robot has information about the world in the form of internal models. Here the robot has been given an internal model of the apple. The robot is objectively aware of the apple. We suggest this is not a complete account of subjective awareness.
information that it processes. By extension, visual subjective awareness arises from visual processing and therefore our machine is subjectively aware of the visual stimulus. The hypothesis, however, is fundamentally untestable. If that hypothesis is correct then we are done building an aware machine. Anything that manipulates information – which may well be everything in the universe – is conscious of the information it manipulates. And we will never be able to confirm the proposition. This approach is sometimes called ‘panpsychism’ (Skrbina 2005). Rather than stop with this non-testable answer, in the following sections we continue our exploration to see if we can gain a clearer insight into subjective awareness.
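The internal model described above can be sketched as a small data structure. The Python fragment below is a toy illustration only – the class and field names are our inventions, not part of the authors’ proposal. The point it captures is that the model stores a handful of simplified, heuristic features (a colour label rather than a reflectance spectrum) and is overwritten as new input arrives:

```python
from dataclasses import dataclass

@dataclass
class InternalModel:
    """A deliberately simplified, constantly updated description of one object.

    Like the brain's internal models, it cuts corners: 'colour' is a
    heuristic label assigned to the object's surface, not wavelength data.
    """
    kind: str          # e.g. 'apple'
    colour: str        # heuristic label, not a reflectance spectrum
    shape: str
    location: tuple    # position in the visual field

    def update(self, **features):
        # New sensory input simply overwrites the previous estimate.
        for name, value in features.items():
            setattr(self, name, value)

# The robot of Figure 11.2: 'objectively aware' of the apple only in the
# sense that this information is present and could drive behaviour.
apple_model = InternalModel(kind='apple', colour='red', shape='round',
                            location=(0, 0))
apple_model.update(location=(1, 0))   # the apple (or the gaze) has moved
```

Nothing in this structure, of course, addresses subjective awareness; that is precisely the gap the following sections probe.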
3 Cognitive access

In Figure 11.3, a new piece has been added to the machine. To be able to query the machine, we have added a user interface. It is a search engine, a linguistic/cognitive component. We can ask it a question. It searches the database of the internal model, and on the basis of that information answers the question. Again, everything in Figure 11.3 is in principle buildable with modern technology. We ask the machine, ‘What’s there?’ The search engine accesses the internal model, obtains the relevant information and answers, ‘An apple’. We ask, ‘What are the properties of the apple?’ The machine answers, ‘It’s red, it’s round, it has a dip at the top, it has a stem protruding upward from the dip’, and so on. The robot in Figure 11.3 could represent an entire category of theory about consciousness. In it, higher-order cognition has access to a lower-order sensory representation. For example, in the global workspace theory (GWT)
Figure 11.3 The robot has a linguistic interface that acts as a search engine. It takes in questions from the outside, searches the internal model and on the basis of that information replies to the question. The robot has a type of higher cognitive layer that can access a lower-order sensory representation. Yet we suggest this is still an incomplete account of subjective awareness.
(Baars 1988), the information in the sensory representation is broadcast to other systems in the brain, allowing higher cognition to gain access to it. As a result, we can ask the machine about the apple and it can answer. We suggest, however, that Figure 11.3 represents an incomplete account of consciousness. To highlight that incompleteness, we ask the machine, ‘Are you aware of the apple?’ The search engine accesses the internal model and obtains no answer to the question. The internal model does not contain any information about the property of awareness. It does not even have information about the item ‘you’. Of the three key words in the question, ‘you’, ‘aware’ and ‘apple’, the search engine can return information only on the third. Equipped with the components shown in Figure 11.3, the machine cannot even compute in the correct domain to answer the question. You might as well ask your digital camera whether it is aware of the pictures it takes. It cannot process the question.
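The search-engine behaviour just described can be made concrete with a toy sketch (the dictionary lookup stands in for whatever real query machinery a robot would use; all names here are ours). Of the three key words in ‘Are you aware of the apple?’, only ‘apple’ retrieves anything:

```python
# The linguistic/cognitive interface of Figure 11.3: a search engine over
# the robot's single internal model.
internal_model = {
    'apple': {'colour': 'red', 'shape': 'round',
              'stem': 'protruding upward from a dip at the top'},
}

def query(topic):
    """Return whatever the internal model contains about a topic."""
    return internal_model.get(topic)

query('apple')   # a description of the apple
query('you')     # None -- there is no model of a self
query('aware')   # None -- there is no information about awareness
```

The failed lookups are the whole point: equipped only with this, the machine cannot even compute in the right domain to answer a question about its own awareness.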
4 Self-knowledge

If the difficulty with the machine in Figure 11.3 is that it lacks sufficient information, we can try to fix the problem by adding more information. In Figure 11.4 we add a second internal model, a model of the self. In the human brain, the self-model is complex and probably spans many brain regions. It is probably more accurately described as a collection of many models. It might include information about the shape and structure and movement of the physical
Figure 11.4 The robot has a second internal model, a model of the self. The self-model may contain information about the physical body and how it moves, autobiographical memory and other self-information. The robot now has self-knowledge. Yet we suggest this is still an incomplete account of subjective awareness.
body, autobiographical memory and information about one’s personality and behavioural habits. We ask the machine in Figure 11.4, ‘Tell us about yourself’. Unlike in the last iteration, this time the machine can answer. It has been given the construct of self. It might say, ‘I’m a person. I’m this tall, this wide, I can move my arms and legs, I’ve got brown hair, I like Beethoven, I’m friendly’ and so on. It can provide information from a rich internal model of self. Again, Figure 11.4 could represent an entire category of theory about consciousness. Many theories relate consciousness to self-information or self-narrative (Gazzaniga 1970; Nisbett and Wilson 1977). Self-knowledge is clearly an important part of what many people consider to be consciousness. But once again, we suggest that as a theory of consciousness, Figure 11.4 is incomplete. To make the point, we ask the machine another question: ‘What is the mental relationship between you and the apple?’ The search engine searches the internal models. It obtains information about the self, separate information about the apple, but no information about the mental relationship between them. It has no information about what a mental relationship even is. Equipped only with the components in Figure 11.4, it cannot answer the question.
5 The attention schema
The machine in Figure 11.4 has an internal model of the apple and an internal model of the self, but it lacks an internal model of a third crucial part of the scene: the computational relationship between the self and the apple. The machine is
The Bloomsbury Companion to the Philosophy of Consciousness
focusing its processing resources on the apple. It is attending to the apple. To try to improve the machine, we add one more internal model, a model of attention. First we clarify what we mean by attention, given that the term has been used in so many different contexts. By attention we refer to an entirely mechanistic process that can in principle be duplicated with modern technology. Our use of the term is based on a neuroscientific theory, the biased competition theory (Desimone and Duncan 1995; Beck and Kastner 2009). In that theory, signals in the brain compete with each other due to lateral inhibitory processes. One or a small number of signals may temporarily win the competition, momentarily rising in signal strength while suppressing other signals. The winner of the competition, due to its greater signal strength, has an exaggerated influence on other systems in the brain such as memory and response selection. That competition among signals can be biased towards one or another winning signal by a variety of modulating signals. Attention, in this mechanistic account, is a process by which the brain focuses computing resources on a limited set of signals. Consider the case of a person attending to an apple. The apple is probably only one of many items in what may be a cluttered visual scene. The internal model of the apple has won the competition of the moment and other visual models are relatively suppressed. As a result, the apple’s internal model can dominate the brain’s outputs. Since this process of attention is mechanistic and in principle buildable, given current technology, we are allowed to add it to our robot. There are now three fundamental components to the scene: an apple, a self and an attentive relationship between them. The machine in Figure 11.4 contains an internal model of the self and of the apple, but has no internal model of attention. What would happen if we added an internal model of the machine’s attentional relationship to the apple? 
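The biased competition mechanism described above can be sketched as a toy simulation. This is our own illustration with invented numbers, not the Desimone and Duncan model itself: signals laterally inhibit one another, a modulating bias nudges the competition, and one winner comes to dominate.

```python
def biased_competition(signals, bias, inhibition=0.3, leak=0.1, steps=50):
    """Iterate lateral inhibition: each signal is suppressed in
    proportion to the summed strength of its competitors, while a
    bias adds extra drive to favoured signals."""
    s = dict(signals)
    for _ in range(steps):
        total = sum(s.values())
        s = {
            name: max(0.0, (1 - leak) * strength
                      + bias.get(name, 0.0)
                      - inhibition * (total - strength))
            for name, strength in s.items()
        }
    return s

# A cluttered scene of three competing visual signals (invented values):
scene = {"apple": 1.0, "cup": 0.9, "book": 0.8}
result = biased_competition(scene, bias={"apple": 0.2})
# The biased 'apple' signal wins the competition; the other signals are
# suppressed to zero, so the winner dominates the system's outputs.
```

Only the relative dynamics matter here: a small bias is enough to tip the winner-take-all competition, after which the winning signal has an exaggerated influence on everything downstream.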
The internal model of attention, like all internal models, is information. It is a continuously updated set of information. It does not present any fundamental engineering problem. It is in principle buildable. We are allowed to add it to our robot. Figure 11.5 shows this addition to the machine. We first consider what information would be included in an internal model of attention. Attention has a complex neuronal mechanism, complex dynamics and complex consequences. But not all of the microscopic details of attention need be represented in the internal model. Like the internal model of the apple, the internal model of attention would presumably describe useful, functional, abstracted properties. We should not expect an internal model of attention to be a scientifically accurate description of attention. As illustrated in Figure 11.5, an internal model of attention might describe attention as a mental possession of
something. It might describe attention as something that empowers the machine to react to the attended item. It might describe attention as something located inside oneself. These are only three general, abstracted properties of attention. An internal model of attention might include a great deal more information about attention, about its consequences and dynamics. But an internal model of attention would not contain information about neurons, synapses, lateral inhibitory processes, competition among electrochemical signals and other microscopic aspects of attention. The internal model would be silent on the physical mechanism of attention. In the same way, the internal model of the apple assigns a colour to the apple’s surface while leaving out the physical details of electromagnetic waves. The internal model of attention would be as incomplete and inaccurate as any other internal model. We term this internal model of attention the ‘attention schema’ in parallel to the internal model of the physical body, the ‘body schema’ (Graziano and Botvinick 2002). Now that we have added an internal model of attention, we ask the machine in Figure 11.5 more questions. We ask, ‘What object are you looking at?’ It can still answer. The cognitive/linguistic search engine can still find that information among the internal models. The machine says, ‘An apple’. We ask, ‘Tell us about yourself’. It can answer this just as before. ‘I’m a person’. Now we ask, ‘What is the mental relationship between you and the apple?’ This time, the search engine can
Figure 11.5 The main components of the attention schema theory. The robot has an internal model of the self, an internal model of the apple, and a third internal model, a model of the attentional relationship between the self and the apple. Attention here refers to the brain focusing its processing resources on the apple. The internal model of attention describes that computational relationship in an abstracted, schematic manner. The attention schema is accurate enough to be useful but not so accurate or detailed as to waste resources and processing time. The attention schema describes something impossible and physically incoherent, a caricature of attention, subjective awareness. This machine insists that it has subjective awareness because it is captive to the incomplete information in its internal models.
return an answer. Reporting the information obtained from its internal models, the machine answers, ‘I have a mental possession of the apple’. We ask the machine for more details of this mental possession. First, however, we ask a question of clarification. We ask, ‘Do you know what is meant by the physical properties of something?’ The machine has a self-model that includes a body schema, a description of the physical self. It also has an internal model of the apple that describes a physical object. Therefore it can answer, ‘Yes, I know what physical properties are’. We then ask, ‘What are the physical properties of this mental possession?’ The machine in Figure 11.5 reports the available information. It says (and here we are guilty of giving it a sophisticated verbal capability), ‘My mental possession of the apple, the mental possession itself, has no describable physical properties. Yet it exists. It is a part of me. It is inside me. It is my mental possession of things. It enables me to react to things. There is a me, there is an apple, but that is not a complete description. There is something else, something with no physical substance and yet a spatial location inside me, something metaphysical, the mental relationship between subject and object. I have a subjective mental experience of the object. Likewise, I have a subjective mental experience of each component of the object – of its colour, of its shape, of its size’. The machine is claiming to have subjective awareness. We should not be surprised by this response. We know why the machine behaves the way it does because we built it. It accesses internal models and whatever information is contained in those models it reports to be true. It reports a physically incoherent property, a metaphysical essence of consciousness, because its internal models are incomplete descriptions of physical reality. The machine, however, does not know why it answers the way it does. It has no information about how it was built.
Its knowledge is limited to the contents of its internal models, and those models do not contain the information, ‘By the way, this is an internal model composed of information that is incomplete and sometimes wrong’. To try to probe the limits of the machine’s responses, we ask, ‘Are you just a machine accessing the information in internal models, and is that why you claim to have subjective awareness of the apple?’ The machine accesses its internal models and, based on the information found there, answers, ‘I don’t know what internal models are. I don’t have them. There is an apple, there is a me, and I am aware of the apple’.
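The machine's stubborn introspection can be mimicked in a toy program. This is our hypothetical sketch, with every entry invented; the point is structural: the answering routine can read the contents of the internal models, but it has no access to the fact that they are models.

```python
# The three internal models of Figure 11.5, as plain data. Note what is
# absent: nothing anywhere describes the models themselves.
models = {
    "me":        {"kind": "person", "physical": True},
    "apple":     {"colour": "red", "physical": True},
    "attention": {"relation": "mental possession of the apple",
                  "located": "inside me",
                  "physical": False},   # attention modelled with no physical properties
}

def introspect(topic):
    """Answer strictly from the models; their limits are invisible."""
    if topic in models:
        return models[topic]
    # Anything not described in a model simply does not exist for the machine.
    return "there is no such thing"

print(introspect("attention"))        # reports a non-physical 'mental possession'
print(introspect("internal models"))  # denies any such thing exists
```

Queried about attention, the program reports a possession with no physical properties; queried about internal models, it denies them, because no model contains an entry for them.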
The machine is trapped in the same ego tunnel described by Metzinger (2010). It is capable of introspection, in the sense of cognitive machinery accessing deeper internal information. It is capable of self-knowledge. It is capable of knowledge about its relationship to the world and to itself. But it is captive to the incomplete information in its internal models. Introspection will always return the same answer. It insists it has subjective awareness because, when its internal models are searched, they return that information. It should be possible, in principle, to build a brain that insists on anything we like. We just need to insert the right packet of information into its low-level internal models. We could build a brain that insists it is a cosmic squirrel instead of a brain. We could build a brain (as evolution did) that insists that white light is pure brightness scrubbed clean of all contaminating colours. The attention schema in Figure 11.5 is the right packet of information to lead a brain to conclude and report and insist that it has a hard problem – a non-physically describable subjective awareness of the apple. There is no reason to limit the machine to visual awareness of an apple. The same logic could apply to anything. Along similar lines, we could build the machine to be aware of a touch; aware of a sound; aware of its own body; aware of memories from its past that are being recalled and replayed; aware of the thought that it is aware. If the machine can direct mechanistic attention to signal X, and if the machine includes an internal model of that attentional state, then the machine is equipped to insist, ‘I am aware of X’.
6 Possible misconception: Higher cognition and the attention schema
One possible misconception about attention schema theory is that it depends on higher-order cognition, as in the higher-order thought (HOT) theory (Lau and Rosenthal 2011). It does not. The cognitive/linguistic layer was added as a convenience to be able to query the machine. We could remove it. We could build something more like a rat than a person, something with little cognitive ability and no language ability. Imagine crossing out the cognitive/linguistic layer in the machine in Figure 11.5. The machine would still have its internal models. The internal models are fundamental, low-level representations of the world. They are useful for survival. They are the brain’s most fundamental simulation of the world, and in that simulation, you are conscious of the apple. That simulation can be constructed even if the brain in question lacks the
capability to introspect, to cogitate or to talk. Attention schema theory is in this sense unrelated to HOT although the cognitive layer makes a convenient interface for talking to the machine.
7 Possible misconception: What generates actual awareness?
In a naïve, intuitive approach to the topic, it is natural to think that subjective awareness is a non-physically describable, private essence inside us. One can then ask: What is the specific brain process that, when run, causes awareness to emerge? One could mistakenly think that we are putting forth the attention schema as our candidate for that special device that, when operated, generates awareness. This mistaken line of thinking leads to a question: Why would an internal model of attention generate subjective awareness? Accepting that we probably have an internal model of attention – which seems reasonable – why would such a thing generate an inner feeling? Where is the logic in that? We hit the same obstacle as always in understanding consciousness. How can science cross the gap from physical mechanism to metaphysical awareness? The answer is simple. In attention schema theory, nothing generates awareness. Awareness is not generated. The brain constructs information that describes attention. The information is neither complete nor entirely accurate. What is described, instead, is a physically impossible thing, a spooky caricature of attention – subjective awareness. The brain is captive to its own internal information. The entire world known to the brain is defined by the incomplete information it constructs. As an analogy, consider the mystery of white light. Before Newton, scholars asked a naïve question: What washes light so that it becomes clean and uncontaminated by colours? Likewise, what dirties white light, contaminating it to become coloured light? These highly intuitive misconceptions are the result of an internal model constructed deep in the early layers of the visual system. That internal model evolved over millions of years presumably because it is a useful, though incomplete and imperfect, way to model some aspects of light.
The question of how white light becomes purified is truly a hard problem of light. When we mistakenly take our internal models to be literally accurate, we run into unsolvable scientific hard problems. Here, of course, ‘hard problem’ is a euphemism for ‘ill-posed problem’.
8 The parable of the heliocentric theory
Ptolemy and Galileo walk into a bar. They strike up a conversation.
Ptolemy: Your theory is silly. I spotted the error right away. It doesn’t solve the hard problem. What pushes the sun around the stationary earth? I think it may be the chariot of Helios. In your theory, the movement of the sun around the earth is left entirely unexplained.
Galileo: No, the theory is explanatory. You see, the earth orbits the sun.
Ptolemy: I still don’t see how that explains the motion of the sun around the earth.
Galileo: It doesn’t. The sun doesn’t move around the earth.
Ptolemy: Ah ha! Now I know how to pigeon-hole your theory. You ‘solve’ the hard problem by denying the phenomenon exists in the first place. But that’s a cop-out. We’re both philosophers, so let’s be more systematic in our logic. The definition of motion is: it moves. Now look at the earth: it ain’t moving. QED.
Galileo: But the theory explains why you think that. You see, the principle of Galilean relativity means that in the closed system of you and the earth, there is no observation you can make to show that the earth is moving. It’s not possible. That’s why all your observations tell you that the earth is stationary. You can’t know the answer unless you look outside that closed system. If you study the stars and planets and sun, and take a hypothetical external perspective, you can infer the truth. But the scientific truth will always differ from your immediate, limited, earth-bound observations.
Ptolemy: That sounds complicated. Bottom line: you’ve failed to explain how the sun moves around the earth. The hard problem remains unanswered. Indeed, you’ve left it unaddressed. Therefore, friend, I’m afraid I must reject your theory. But since I won the argument, I’ll be generous and buy you a drink.
9 Uses of the attention schema
It seems obvious why the brain would evolve a visual system capable of constructing models of visual objects. It allows the animal to navigate in its visual environment. It also seems obvious why the brain would evolve an elaborate self-model, especially a model of the physical body. Monitoring and predicting the changing state of your own body is useful in controlling movement. The adaptive usefulness of an attention schema is less obvious but, as we argue below, much more profound.
One of the traps in evolutionary thinking is to suppose that a trait has one function or was shaped by only one type of evolutionary pressure. Even when the evolutionary ‘purpose’ of a trait seems obvious – for example, teeth are obviously adapted for chewing – other functional roles can turn up unexpectedly. After all, teeth are also partly adapted for social signalling. It is possible that an internal model of attention serves many adaptive roles, and perhaps even has a different mixture of functions in different species. We suggest at least three major adaptive functions but of course there may be others. These three proposed functions are described in the following sections.
10 The attention schema is useful for the control of attention
In control theory, if you want to build a capable control system, you should give it an internal model of the thing to be controlled (Camacho and Bordons Alba 2004). For example, the brain constructs a body schema, an internal model of the body, to help control movement (Graziano and Botvinick 2002; Scheidt et al. 2005; Wolpert, Goodbody and Husain 1998). We suggest that one adaptive function of an attention schema is to help control attention. We also suggest that this function may be the evolutionary origin of awareness. Attention is probably evolutionarily old. Some aspects of selective signal enhancement can be seen in insects, crabs, birds and mammals (Barlow and Fraioli 1978; Beck and Kastner 2009; Mysore and Knudsen 2013; van Swinderen 2012), which shared a common ancestor more than half a billion years ago. There is little use in having attention without any ability to control it. Therefore we suggest that at least half a billion years ago nervous systems began to evolve a dynamical systems controller to regulate attention, and one part of that control system was an internal model of attention. In this proposal, the rudiments of consciousness are extremely ancient and widespread in the animal kingdom. Some form of attention schema, at least a simple internal model of attention used to help control attention, could be present in the brains of almost all animals that have brains. We would probably not recognize a simple internal model of attention as similar to our human awareness. Over the intervening millions of years, the attention schema may have evolved the rich information that we recognize as subjective awareness. No internal model in the brain is perfect. For example, the body schema makes errors, becoming misaligned from the body, incorrectly representing the movement and configuration of the body. In those cases, the control of the
body is impaired. When the internal model of your arm is temporarily off, you make errors in moving your arm. Like all internal models, the attention schema should also sometimes make errors. When those errors occur – when awareness becomes misaligned from attention – then the control of attention should suffer in specific ways predictable from dynamical systems control. To clarify how attention and awareness can separate, consider Figure 11.5 again. The machine directs visual attention to an apple. An internal model of that attention is constructed in the machine. That internal model might sometimes make errors. To be clear, we are not talking about an error in the internal model of the apple, which might lead to a visual illusion. We are also not talking about an error in the internal model of the self, which might lead to a body image distortion and inaccurate movement control. We are talking about an error in the internal model of attention itself. There may be many possible kinds of error, all worth exploring theoretically and experimentally. Here, we focus on one particularly simple kind of mismatch between awareness and attention that is convenient to approach in a practical experiment: attention in the absence of awareness. In the past decade, many experiments have demonstrated that people can attend to a visual stimulus in the absence of awareness of that stimulus (Hsieh, Colas and Kanwisher 2011; Jiang et al. 2006; Kentridge, Nijboer and Heywood 2008; Koch and Tsuchiya 2007; Lamme 2004; McCormick 1997; Norman, Heywood and Kentridge 2013; Tsushima, Sasaki and Watanabe 2006). These experiments almost always involve a dim visual stimulus or a visual stimulus that is masked by a second visual display, putting the stimulus at the edge of detectability. Even when people assert that they see no visual stimulus, it can still draw their attention, improving the processing of subsequent stimuli at the attended location. 
The significance of attention in the absence of awareness has been debated, but attention schema theory provides a simple explanation. In the theory, awareness is the internal model of attention. Attention without awareness occurs when the internal model fails to update correctly. Because control theory is a well-developed area of research, it is possible to put attention schema theory to experimental test. Measure attention to a visual stimulus. Manipulate the visual display such that in one condition, the participants report being subjectively aware of the stimulus, and in another condition, the participants report being subjectively unaware of the stimulus. Compare the aware and the unaware conditions. Three results should be obtained. First, attention should still be possible without awareness. Second, attention should change without awareness. Third, the changes should match
the pattern predicted by control theory. Without an internal model, the control of attention should suffer in specific ways. To conduct these tests, we used a Posner paradigm (Posner 1980), a standard method for measuring visual attention in human participants. The logic of the Posner paradigm is easily explained. The participant looks at the centre of a computer screen. A small dot is briefly flashed to the right or left. After the dot is gone, within a fraction of a second, another stimulus, the test stimulus, is presented to the right or left. The participant’s task is to discriminate the test stimulus as quickly as possible and respond by pressing a key on a keyboard. For example, in some tests the stimulus might be an A or an F and the participant must press the A or F key as quickly as possible. If the test stimulus appears at the same location as the initial dot, the reaction time to the test stimulus is generally fast. This occurs because the dot initially and automatically draws attention to the correct location. If the test stimulus appears on the opposite side of the screen as the dot, the reaction time to the test stimulus is usually slower by a few tens of milliseconds. This occurs because the dot initially draws attention to the wrong location. By measuring the difference in reaction time between these two conditions, it is possible to infer how much visual attention was drawn to the initial dot. The key to the experiment is that the participant does not need to be aware of the dot. If the dot is dim, extremely brief or masked by other visual stimuli, the participant may claim never to see it. Even so, the dot can still draw attention and thereby affect the response time to the subsequent test stimulus. In this way, it is possible to measure attention to the dot whether or not the participant is aware of it. 
To determine whether the participant was subjectively aware of the dot, at the end of each trial the participant is asked whether the dot was seen. This method allows us to test how attention to a visual stimulus changes when people are no longer aware of the dot. If awareness serves as the internal control model of attention, then predictable changes should occur. For example, without an internal model, attention should become less stable. The controller no longer has information about the current state of attention and therefore can no longer adjust in real time to compensate for fluctuations – like balancing a stick on your hand with your eyes closed, and therefore with no information about what the stick is doing. In addition to a loss of intrinsic stability, attention should also become more sensitive to outside influences such as the brightness of the stimulus. Again, it is like balancing a stick on your hand with your eyes closed, this time while the stick is being nudged by someone else. It is harder to compensate for the nudge. Figure 11.6 shows data from one experiment (Webb, Kean and Graziano 2016). The Y axis shows the amount of attention drawn to the dot. The X axis shows the time at which attention was measured relative to the onset of the dot. The bold line shows the results when participants were subjectively aware of the dot. A significant amount of attention was drawn to the dot and that attention slowly decreased over time. The dotted line shows the results when participants were not subjectively aware of the dot. Here attention was still drawn to the dot, yet the time course of attention changed in the absence of awareness. At one time point, participants actually paid significantly more attention to the dot when they were unaware of it. At another time point, they paid less attention when they were unaware of it. Without awareness, attention was less stable over time.
Figure 11.6 Testing attention with and without awareness. In this experiment, attention to a visual stimulus is tested by using the stimulus as a cue in a Posner spatial attention paradigm (see Webb, Kean and Graziano 2016 for details). In some trials, the participants are aware of the visual cue (bold line). In other trials, they are unaware of it (dotted line). Attention to the cue is less stable across time when awareness is absent. This result follows the predictions of control theory in which an internal control model helps to maintain stability of the controlled variable. The X axis shows time after cue onset. The Y axis shows attention drawn to the cue (∆t = [mean response time for spatially mismatching trials in which the test target appeared on the opposite side as the initial cue] − [mean response time for spatially matching trials in which the test target appeared on the same side as the initial cue]). Error bars are standard error.
Figure 11.7 (Webb, Kean and Graziano 2016) shows data from another, similar experiment. When participants were aware of the dot, they paid slightly more attention to a brighter dot than to a dimmer dot, as might be expected. When participants were not aware of the dot, attention was much more sensitive to the brightness of the stimulus.
Figure 11.7 Testing bottom-up attention with and without awareness. In this experiment, bottom-up attention to a visual stimulus is tested by using the stimulus as a cue in a Posner spatial attention paradigm (see Webb, Kean and Graziano 2016 for details). In some trials, the participants are aware of the visual cue (bold line). In other trials, they are unaware of it (dotted line). Attention to the cue was more sensitive to the visual contrast of the cue when awareness was absent. This result follows the predictions of control theory in which an internal control model helps to resist perturbations. The Y axis shows attention drawn to the cue (∆t = [mean response time for spatially mismatching trials in which the test target appeared on the opposite side as the initial cue] − [mean response time for spatially matching trials in which the test target appeared on the same side as the initial cue]). Error bars are standard error.
These and other experiments (Webb and Graziano 2015; Webb, Kean and Graziano 2016) tell us that awareness is not an epiphenomenon. It actually does something. It is important for the mechanistic control of attention. When people are not aware of something, they can still pay attention to it, they can even pay approximately the same amount of attention to it, and in some circumstances may even pay more attention to it. But the control of attention changes. Attention is less stable in time and more easily perturbed by external influences. These two specific deficits are predicted from the loss of an internal model. Experiments like these point to a specific relationship between attention and awareness, that awareness acts like the internal model of attention. Without awareness, attention is still possible but the control of attention suffers in predictable ways. Even fast, automatic aspects of attention, within the first few hundred milliseconds of stimulus onset, depend on that internal model of attention. The data in Figure 11.7 show attention only fifty milliseconds after stimulus onset. In fifty milliseconds there is no time for high-level cognition, volition or choice, and yet the control of attention still depends on the presence of subjective awareness.
This type of result suggests that awareness serves a specific mechanistic function as the internal model of attention.
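The control-theoretic prediction behind these results can be illustrated with a toy simulation of our own (all parameters invented): a controlled variable buffeted by noise stays near its target when the controller has a model of the current state, and wanders when it does not.

```python
import random
from statistics import pvariance

def run(with_model, steps=500, target=1.0, noise=0.2, gain=0.5, seed=1):
    """Simulate a noisy controlled variable (a stand-in for attention)."""
    rng = random.Random(seed)
    x, trace = target, []
    for _ in range(steps):
        x += rng.gauss(0.0, noise)        # perturbation on every step
        if with_model:
            # An internal model reports the current state, so the
            # controller can correct deviations from the target.
            x += gain * (target - x)
        trace.append(x)
    return trace

stable = pvariance(run(with_model=True))
unstable = pvariance(run(with_model=False))
# Without the internal model, the controlled variable is far less
# stable over time, echoing the two deficits described above.
```

The analogy is loose, but it makes the qualitative prediction concrete: removing the internal model does not abolish the variable, it degrades its stability and its resistance to perturbation.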
11 The attention schema is useful for the integration of information
A common hypothesis is that consciousness is related to the widespread integration of information in the brain. One of the earliest examples is the GWT first proposed by Baars (1988), in which many disparate types of information are pooled together by attention into a single whole that allows for more intelligent, coherent guidance of behaviour. Another similar proposal is that consciousness is caused by binding information together (Crick and Koch 1990). A more recent proposal is that the amount of integrated information can be mathematically quantified and a large amount of integrated information is associated with subjective experience (Tononi 2008). Other variants of the global workspace and integration-of-information hypothesis have been proposed (Dehaene 2014). One difficulty with this category of theory, at least as it is usually presented, is the metaphysical gap. Granted that disparate information in the brain is integrated, why would that cause subjective awareness of any of the information in question? The integrated information approach suffers from an intuitive bias, the notion that subjective awareness is a metaphysical thing that is generated by some process in the brain. If we can figure out what physical process generates awareness, then we are as close as we can ever be to explaining awareness, while still leaving unexplained the gap from physical mechanism to metaphysical essence. That is the conceptual approach of almost all theories of consciousness. Attention schema theory adds a missing piece to the account of integrated information. It adds the attention schema, a chunk of information that describes subjective awareness. In Figure 11.5, three internal models are diagrammed: an internal model of the self, of the apple and of the attentional focus of the self on the apple. These internal models are information constructed in the brain.
Collectively they form one larger, integrated model describing how the self is aware of the apple. The internal model of attention, the attention schema, acts as a connector. Without that intermediate piece, the brain would have two disconnected internal models. Like the brain in Figure 11.4, it could say, ‘There is a me’, and separately, ‘There is
an apple’. The attention schema bridges between information about the self and information about the world – not just the external world, but sometimes also aspects of the internal world to which attention is directed. The attention schema, if it is to actually model attention, must be a universal connector. It must link to any type of information to which the brain can attend. You can attend to a touch, to a sound, to a recalled memory, to a specific thought, to an emotion. Therefore, to model the state of attention, the attention schema must be able to connect to all of those information domains. By its nature, an attention schema must serve as an integrative hub. The attention schema is a chunk of information that is uniquely connectable to many other chunks of information. In many ways the attention schema theory shares features with the GWT and integrated information theory. The difference is that it explains the consciousness part. It avoids the metaphysical gap. It does not postulate that pooling information, by itself, unexplainably generates metaphysical awareness. Instead it adds to that pool of information a specific ingredient that is easy to overlook but is of paramount importance. It adds the attention schema, a chunk of information that describes metaphysical awareness and causes us to assert that we have awareness.
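The connector role described above can be made concrete with a toy sketch (my own construction, not part of the chapter; the class and variable names are invented for illustration). A small data structure links an internal model of the self to the internal model of whatever is attended, so that the combined system can report ‘the self is aware of the apple’ rather than holding two disconnected models:

```python
from dataclasses import dataclass


@dataclass
class InternalModel:
    """A simplified chunk of information the brain maintains about something."""
    name: str
    properties: dict


@dataclass
class AttentionSchema:
    """Toy connector: links the self model to the model of the attended item."""
    subject: InternalModel
    target: InternalModel

    def describe(self) -> str:
        # The integrated claim the system can now make: 'I am aware of X'.
        return f"{self.subject.name} is aware of {self.target.name}"


self_model = InternalModel("me", {"location": "here"})
apple_model = InternalModel("the apple", {"colour": "red"})
schema = AttentionSchema(subject=self_model, target=apple_model)
print(schema.describe())  # → me is aware of the apple
```

Without the `AttentionSchema` record, the two `InternalModel` instances exist side by side but nothing ties them together; the schema is the piece that lets the system assert the relation between them, which is the hub role the text assigns to it.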
12 The attention schema is useful for social perception

We first arrived at the attention schema theory by considering social perception (Graziano 2010; Graziano and Kastner 2011). In a social context people attribute mind states to each other, a process sometimes called theory of mind – the ability to construct theories of other people’s minds (Frith and Frith 2003; Wimmer and Perner 1983). People attribute beliefs, emotions, intentions and other mind states to each other. By ‘attributing’ mind states, what is meant is not an intellectual, cognitive process of deducing what is likely to be in other people’s minds. Instead, social attribution is automatic, intuitive and can even contradict what we know intellectually. For example, people can have a powerful illusion of mind states in a puppet while knowing intellectually that the puppet has no mind. Beneath the level of higher cognition, the human brain constructs internal models of other people’s minds.

We suggest that awareness is one of the most basic mind states that people attribute to each other. It is difficult to attribute to John an intention to reach for a nearby coffee cup unless you can intuitively understand that John is aware of the coffee cup. It is difficult to attribute to John any anger towards the vandal who
damaged his car, unless you can understand that John is aware of the damage. If you think he is not aware of the damage, you would suppose he is not angry.

Suppose you are watching John. John is directing attention to a coffee cup, in the mechanistic sense of attention in which signals in the brain compete with each other and the signals related to the coffee cup win that competition. Those signals impact widespread systems in his brain and dominate his behaviour. He is able to process the coffee cup, reach for it, avoid knocking it over or remember it for later. Nothing determines his immediate and future behaviour as much as his state of attention. If you want to predict John’s behaviour, as a first-pass computation it would be of the utmost importance to have an attention schema that can model John’s state of attention.

That attention schema would not reconstruct the mechanistic details of John’s attention. Your brain has no access to the details inside John’s brain, and no use for an internal model of neuronal dynamics inside it. Instead, that attention schema would model a simpler property stripped of mechanistic details: John has a mental possession of the coffee cup, and that mental possession has certain basic dynamics and consequences for behaviour. Your brain would attribute the property of awareness to John, as though it were a metaphysical essence inside him, because awareness is a good heuristic model of attention.

It was from this consideration of the social use of awareness that the more general theory emerged, in which awareness, whether attributed to someone else or to oneself, is a model of the attentional process (Graziano 2010; Graziano and Kastner 2011). Exactly when or how the social use of awareness evolved is not clear. Apes have a well-developed theory of mind (Premack and Woodruff 1978; Wimmer and Perner 1983), but social attribution of awareness may have predated primate evolution by many millions of years.
Dogs show an ability to intuit the attentional state of other dogs (Horowitz 2009). Crows have some elements of a theory of mind (Clayton 2015). Reptiles show highly complex social behaviours (Brattstrom 1974). Since birds, reptiles and mammals diverged at least 300 million years ago, in the Carboniferous period, the ability to model the attentional state of others may have emerged early, though it is presumably better developed in some species than others.

We suggest that an attention schema first evolved as part of the mechanism for controlling attention, as discussed in previous sections. It gradually expanded its role to become a central mechanism for the integration of information across disparate domains, leading to more flexible and intelligent behaviour. A third main function also gradually emerged: modelling the attentional states of other animals to help predict their behaviour. This social function of the attention
schema may be present in some form in a large range of animals. In humans the ability is especially well developed. We are so prone to attribute awareness that we live immersed in a world painted with projected consciousness. Human spiritual belief is arguably a manifestation of an exuberant social machinery.

If the brain constructs an attention schema and uses it to model others as well as oneself, then perhaps overlapping brain areas are involved in attributing awareness to others and to oneself. At least some evidence suggests that this is the case in humans. One brain region where these two functions may overlap is the temporoparietal junction (TPJ). Clinical evidence shows that damage to the TPJ can cause a severe neglect syndrome in which patients are unaware of anything in the half of space opposite the lesion (Critchley 1953; Halligan et al. 2003; Vallar 2001; Vallar and Perani 1986). Typically, right brain damage leads to left spatial neglect, though the pattern can sometimes reverse. In neglect, patients can still process sensory information from the affected side and can sometimes react to it, but claim a lack of awareness of it. Yet the TPJ also plays a role in attributing mind states to others. When brain activity is measured in an MRI scanner while people engage in social perceptual tasks, the TPJ is consistently active (Saxe and Kanwisher 2003; Young, Dodell-Feder and Saxe 2010). These lines of evidence show that two seemingly unrelated functions, social perception and the construction of one’s own awareness, are at least in close proximity in the brain.

To determine just how much overlap there may be between these two functions in the TPJ, we performed an experiment (Kelly et al. 2014). We measured the brain activity of participants in an MRI scanner while they engaged in a social cognition task, rating whether a cartoon character was aware of an object next to it. Elevated activity associated with this task was found in a part of the TPJ.
Each participant had a definable zone of activation, a hotspot. We then took the participants out of the scanner and tested the effect of disrupting that hotspot. The disruption depended on a technique called transcranial magnetic stimulation, in which a magnetic pulse is passed through the skull and, for a fraction of a second, disrupts the neuronal activity in a small area of cortex approximately one centimetre in diameter. When the hotspot on one side of the brain was disrupted, participants were less able to detect visual stimuli on the opposite side of space. When a different part of the TPJ was disrupted, a part that did not become active in the social perception task, no effect was seen on participants’ ability to detect stimuli.

To summarize this experiment: when people looked at a face and answered the question, ‘Is he aware of the object next to him?’, a specific hotspot in the brain became active. When that hotspot was disrupted, people were less able to be aware of objects next to themselves. The results suggest that the
networks in the brain that attribute awareness to others physically overlap the networks that construct one’s own awareness.
13 Summary

The brain is an information-processing device. It takes in data, processes that data and uses it to help guide behaviour. When that machine ups and says, ‘I have a magic essence inside me’, rather than believing that literal proposition and then failing to obtain any scientific purchase on the magic, we can ask instead, ‘How did the machine arrive at that quirky self-description? What is the utility of that self-description? What brain areas might be involved in computing that information?’

In attention schema theory, awareness is an impossible, physically incoherent property that does not exist and that is described by a packet of information in the brain. That packet of information is an internal model, and its function is to provide a continuously updated account of attention. It describes attention in a manner that is accurate enough to be useful but not accurate or detailed enough to waste time and resources. The brain is captive to the incomplete information in its internal models. To put it tautologically, the brain has no other information than the information it has. Hence people insist that they have subjective awareness, mystics wax poetic about it, philosophers and scientists dedicate themselves to understanding what subjective awareness is and how it is generated, and authors write chapters on the topic.

In attention schema theory, awareness is not a total fabrication of the brain. It is not an illusion. It is perhaps best described as a caricature. It is a caricature of attention, a physical process that actually does exist and is of central importance in brain function. A shorthand way to describe the theory in five words is this: Awareness is an attention schema.

In the theory, the attention schema has at least three major adaptive uses. First, it is important in the control of attention. In dynamical systems theory, a good controller of attention should include an internal model of attention.
Our data suggest that awareness does act as the internal control model of attention. We suggest that this function may be the evolutionary origin of awareness. A second possible adaptive function of an attention schema is to promote the integration of information across disparate information domains. An attention schema by definition links information about the self with information about whatever is in the focus of attention. In this sense it serves as a connector of different types of information.
A third possible adaptive function of an attention schema is to promote social perception. If we use the attention schema to model the attentional states of others as well as of ourselves, in effect attributing awareness to others and to ourselves, then it could be foundational to social perception. We suggest that this social use represents a major evolutionary expansion of the attention schema and has reached a particularly elaborated state in humans.

The theory is extremely simple in concept and yet extremely difficult for many people to accept. The theory itself explains why people have such strong intuitions to the contrary. Introspection is cognitive machinery accessing internal models, and the internal model of attention informs us that we have a private, non-physically describable essence, a metaphysical property, a mental possession of things that empowers us to decide, to choose, to act, to remember. But the brain’s evolutionarily built-in models are not accurate. They are caricatures of reality.
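The claim that a good controller of attention should include an internal model of attention echoes a standard point from control theory. A minimal sketch (my own construction; the dynamics, numbers and function names are invented for illustration) compares a purely reactive tracker with one that maintains a crude internal model – here, just an estimate of a signal’s drift – and shows that the model-based tracker predicts the controlled variable more accurately:

```python
def tracking_errors(steps=50, drift=0.3):
    """Compare two trackers predicting a steadily drifting signal.

    The 'reactive' tracker predicts that the next value equals the current one.
    The 'model-based' tracker keeps an internal model (an estimate of the
    drift, updated from successive observations) and predicts one step ahead.
    Returns the mean absolute prediction error of each tracker.
    """
    x = 0.0                # the signal being tracked (a stand-in for attention)
    prev = x
    drift_estimate = 0.0   # the internal model's single parameter
    reactive_err = model_err = 0.0
    for _ in range(steps):
        next_x = x + drift
        reactive_err += abs(next_x - x)                  # always off by the drift
        model_err += abs(next_x - (x + drift_estimate))  # shrinks once drift is learned
        drift_estimate = x - prev                        # update the internal model
        prev, x = x, next_x
    return reactive_err / steps, model_err / steps


reactive, model_based = tracking_errors()
assert model_based < reactive  # the tracker with an internal model does better
```

The point is structural rather than neural: a controller that carries even a crude model of the process it regulates can anticipate rather than merely react, which is the role the theory assigns to the attention schema.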
References

Baars, B. J. (1988). A Cognitive Theory of Consciousness, Cambridge: Cambridge University Press.
Barlow, R. B., Jr., Fraioli, A. J. (1978). ‘Inhibition in the Limulus Lateral Eye in Situ’, Journal of General Physiology, 71, 699–720.
Beck, D. M., Kastner, S. (2009). ‘Top-Down and Bottom-Up Mechanisms in Biasing Competition in the Human Brain’, Vision Research, 49, 1154–65.
Brattstrom, B. H. (1974). ‘The Evolution of Reptilian Social Behavior’, American Zoologist, 14, 35–49.
Camacho, E. F., Bordons, A. C. (2004). Model Predictive Control, New York: Springer.
Chalmers, D. (1997). The Conscious Mind, New York: Oxford University Press.
Clayton, N. S. (2015). ‘Ways of Thinking: From Crows to Children and Back Again’, Quarterly Journal of Experimental Psychology, 68, 209–41.
Crick, F., Koch, C. (1990). ‘Toward a Neurobiological Theory of Consciousness’, Seminars in the Neurosciences, 2, 263–75.
Critchley, M. (1953). The Parietal Lobes, London: Hafner Press.
Dehaene, S. (2014). Consciousness and the Brain, New York: Viking.
Desimone, R., Duncan, J. (1995). ‘Neural Mechanisms of Selective Visual Attention’, Annual Review of Neuroscience, 18, 193–222.
Frith, U., Frith, C. D. (2003). ‘Development and Neurophysiology of Mentalizing’, Philosophical Transactions of the Royal Society of London Biological Sciences, 358, 459–73.
Gazzaniga, M. S. (1970). The Bisected Brain, New York: Appleton Century Crofts.
Graziano, M. S. A. (2010). God, Soul, Mind, Brain: A Neuroscientist’s Reflections on the Spirit World, Teaticket: Leapfrog Press.
Graziano, M. S. A. (2013). Consciousness and the Social Brain, New York: Oxford University Press.
Graziano, M. S. A. (2014). ‘Speculations on the Evolution of Awareness’, Journal of Cognitive Neuroscience, 26, 1300–4.
Graziano, M. S. A., Botvinick, M. M. (2002). ‘How the Brain Represents the Body: Insights from Neurophysiology and Psychology’, in W. Prinz and B. Hommel (eds), Common Mechanisms in Perception and Action: Attention and Performance XIX, 136–57, Oxford: Oxford University Press.
Graziano, M. S. A., Kastner, S. (2011). ‘Human Consciousness and its Relationship to Social Neuroscience: A Novel Hypothesis’, Cognitive Neuroscience, 2, 98–113.
Graziano, M. S. A., Webb, T. W. (2014). ‘A Mechanistic Theory of Consciousness’, International Journal of Machine Consciousness, 2. doi:10.1142/S1793843014400174.
Halligan, P. W., Fink, G. R., Marshall, J. C., Vallar, G. (2003). ‘Spatial Cognition: Evidence from Visual Neglect’, Trends in Cognitive Sciences, 7, 125–33.
Horowitz, A. (2009). ‘Attention to Attention in Domestic Dog (Canis familiaris) Dyadic Play’, Animal Cognition, 12, 107–18.
Hsieh, P., Colas, J. T., Kanwisher, N. (2011). ‘Unconscious Pop-Out: Attentional Capture by Unseen Feature Singletons Only When Top-Down Attention is Available’, Psychological Science, 22, 1220–6.
Jiang, Y., Costello, P., Fang, F., Huang, M., He, S. (2006). ‘A Gender- and Sexual Orientation-Dependent Spatial Attentional Effect of Invisible Images’, Proceedings of the National Academy of Sciences USA, 103, 17048–52.
Kelly, Y. T., Webb, T. W., Meier, J. D., Arcaro, M. J., Graziano, M. S. A. (2014). ‘Attributing Awareness to Oneself and to Others’, Proceedings of the National Academy of Sciences USA, 111, 5012–7.
Kentridge, R. W., Nijboer, T. C., Heywood, C. A. (2008). ‘Attended but Unseen: Visual Attention is Not Sufficient for Visual Awareness’, Neuropsychologia, 46, 864–9.
Koch, C., Tsuchiya, N. (2007). ‘Attention and Consciousness: Two Distinct Brain Processes’, Trends in Cognitive Sciences, 11, 16–22.
Lamme, V. A. (2004). ‘Separate Neural Definitions of Visual Consciousness and Visual Attention: A Case for Phenomenal Awareness’, Neural Networks, 17, 861–72.
Lau, H., Rosenthal, D. (2011). ‘Empirical Support for Higher-Order Theories of Consciousness’, Trends in Cognitive Sciences, 15, 365–73.
McCormick, P. A. (1997). ‘Orienting Attention Without Awareness’, Journal of Experimental Psychology: Human Perception and Performance, 23, 168–80.
Metzinger, T. (2010). The Ego Tunnel, New York: Basic Books.
Mysore, S. P., Knudsen, E. I. (2013). ‘A Shared Inhibitory Circuit for Both Exogenous and Endogenous Control of Stimulus Selection’, Nature Neuroscience, 16, 473–8.
Nisbett, R. E., Wilson, T. D. (1977). ‘Telling More Than We Can Know: Verbal Reports on Mental Processes’, Psychological Review, 84, 231–59.
Norman, L. J., Heywood, C. A., Kentridge, R. W. (2013). ‘Object-Based Attention Without Awareness’, Psychological Science, 24, 836–43.
Posner, M. I. (1980). ‘Orienting of Attention’, Quarterly Journal of Experimental Psychology, 32, 3–25.
Premack, D., Woodruff, G. (1978). ‘Does the Chimpanzee Have a Theory of Mind?’, Behavioral and Brain Sciences, 1, 515–26.
Saxe, R., Kanwisher, N. (2003). ‘People Thinking About Thinking People: fMRI Investigations of Theory of Mind’, NeuroImage, 19, 1835–42.
Scheidt, R. A., Conditt, M. A., Secco, E. L., Mussa-Ivaldi, F. A. (2005). ‘Interaction of Visual and Proprioceptive Feedback During Adaptation of Human Reaching Movements’, Journal of Neurophysiology, 93, 3200–13.
Skrbina, D. (2005). Panpsychism in the West, Cambridge: The MIT Press.
Szczepanowski, R., Pessoa, L. (2007). ‘Fear Perception: Can Objective and Subjective Awareness Measures be Dissociated?’, Journal of Vision, 10, 1–17.
Tononi, G. (2008). ‘Consciousness as Integrated Information: A Provisional Manifesto’, Biological Bulletin, 215, 216–42.
Tsushima, Y., Sasaki, Y., Watanabe, T. (2006). ‘Greater Disruption Due to Failure of Inhibitory Control on an Ambiguous Distractor’, Science, 314, 1786–8.
Vallar, G. (2001). ‘Extrapersonal Visual Unilateral Spatial Neglect and its Neuroanatomy’, NeuroImage, 14, 552–8.
Vallar, G., Perani, D. (1986). ‘The Anatomy of Unilateral Neglect After Right-Hemisphere Stroke Lesions: A Clinical/CT-scan Correlation Study in Man’, Neuropsychologia, 24, 609–22.
van Swinderen, B. (2012). ‘Competing Visual Flicker Reveals Attention-Like Rivalry in the Fly Brain’, Frontiers in Integrative Neuroscience, 6, 96.
Webb, T. W., Graziano, M. S. A. (2015). ‘The Attention Schema Theory: A Mechanistic Account of Subjective Awareness’, Frontiers in Psychology. doi:10.3389/fpsyg.2015.00500.
Webb, T. W., Kean, H. H., Graziano, M. S. A. (2016). ‘Effects of Awareness on the Control of Attention’, Journal of Cognitive Neuroscience, 28, 842–51.
Wimmer, H., Perner, J. (1983). ‘Beliefs About Beliefs: Representation and Constraining Function of Wrong Beliefs in Young Children’s Understanding of Deception’, Cognition, 13, 103–28.
Wolpert, D. M., Goodbody, S. J., Husain, M. (1998). ‘Maintaining Internal Representations: The Role of the Human Superior Parietal Lobe’, Nature Neuroscience, 1, 529–33.
Young, L., Dodell-Feder, D., Saxe, R. (2010). ‘What Gets the Attention of the Temporo-Parietal Junction? An fMRI Investigation of Attention and Theory of Mind’, Neuropsychologia, 48, 2658–64.
12
The Illusion of Conscious Thought Peter Carruthers
1 Introduction

For present purposes, thought will be understood to encompass all and only propositional attitude-events that are both episodic (as opposed to persisting) and amodal in nature (having a non-sensory format). Thoughts thus include events of wondering whether something is the case, judging something to be the case, recalling that something is the case, deciding to do something, actively intending to do something, adopting something as a goal, and so forth. But thoughts, as herein understood, do not include perceptual events of hearing or seeing that something is the case, feelings of wanting or liking something, nor events of episodic remembering, which are always to some degree sensory/imagistic in character. Nor do they include episodes of inner speech, which may encode or express thoughts in imagistic format, but which are not themselves attitude-events of the relevant kinds.

I propose to argue, not only that thoughts can be unconscious, but that they are always unconscious. At the same time, I will explain how we come to be under the illusion that many of our thoughts are conscious ones.

Almost everyone believes that thoughts can be conscious, no matter whether consciousness is defined in terms of global accessibility or in terms of non-interpretive higher-order awareness. It seems obvious that our thoughts sometimes occur in a way that makes them widely accessible to other systems, for forming memories, for issuing in positive or negative affect, for guiding decision-making, and for verbal report. This would make them first-order access-conscious. But it also seems obvious that those same thoughts are available in a way that enables us to know of their occurrence without requiring self-interpretation, of the sort that makes us aware of the thoughts of other people. This would make them higher-order access-conscious.1
I have argued elsewhere that both views are mistaken. In Carruthers (2011) I argue against the second of these accounts, showing that our knowledge of our own thoughts is always interpretive, grounded in awareness of both our own overt behaviour and covert sensory cues of various sorts (visual imagery, inner speech, and so on). The main focus of Carruthers (2015a), in contrast, is to argue that the only mental states that can be globally broadcast (and hence become first-order access-conscious) are those that have a sensory grounding of some kind (including visual and auditory imagery as well as inner speech). So on neither account of consciousness are thoughts themselves ever conscious.

In what follows I briefly review both sets of arguments against the existence of conscious thought. In Section 2 I argue that all knowledge of our own occurrent thoughts is interpretive in character, similar to the access that we have to the thoughts of other people. In Section 3 I argue that global broadcasting depends upon attentional signals directed at mid-level sensory areas of the brain, implying that only events with a sensory-based format can be access-conscious.2 Then in Section 4 I take up the question of how we come to be under the illusion of conscious thought. How is it that nearly everyone believes that there are conscious thoughts if really there aren’t? Providing a satisfactory answer to this question is the main goal of the paper.

It should be noted, however, that there are alternative theoretical accounts of consciousness besides the two that will form our focus here. In addition to global broadcasting accounts (Baars 1988, 2002, 2003; Dehaene et al. 2006; Dehaene 2014) and higher-order access theories (Carruthers 2000; Rosenthal 2005; Graziano 2013), there is Tononi’s integrated information account of consciousness, for example (Tononi 2008; Tononi et al. 2016). I shall ignore the latter for present purposes.
In part this is because it is only a theory of phenomenal consciousness, and makes no commitments concerning the relevant accessibility-relation for conscious mental events. (Indeed, some might see this as a fatal weakness, since it seems to allow for multiple forms of highly integrated informational state that aren’t accessible to their subjects.)

In fact, my focus here is only on so-called access consciousness. Our question is whether thoughts are ever access-conscious, in either a first-order or a higher-order sense. If they aren’t, then most people would agree that they can’t be phenomenally conscious either. But even if they are, it is much more controversial to claim that thoughts can also be phenomenally conscious, or intrinsically like something to undergo. I shall say nothing about that here. (For a critique, see Carruthers and Veillet 2011.)
2 Interpretive self-knowledge

How do we know what we are currently thinking? Intuition has it that such knowledge is (often) immediate. One merely has to introspect in order to know that one has just decided to do something, or to know what one currently believes when asked a question. Importantly, our knowledge of our own thoughts is believed by most philosophers to differ in kind from our knowledge of the thoughts of other people. One knows what someone else is thinking by observing and drawing inferences from their circumstances and behaviour (including their speech behaviour). All such knowledge is believed to be interpretive, using one’s ‘theory of mind’ or ‘mind-reading’ skills to infer the mental states that lie behind the other person’s observable behaviour. These inferences needn’t be conscious ones, of course. Indeed, as a matter of phenomenology one often just seems to intuit or see (or hear, in the case of speech) what someone is thinking in a particular context. But most would maintain that such intuitions are nevertheless grounded in one’s knowledge of the likely causes of the behaviour one observes.

While most philosophers and psychologists think that one’s knowledge of the thoughts of others is at least tacitly interpretive, drawing on background knowledge provided by some sort of folk psychology, not everyone agrees. Some think that knowledge of other minds can be more directly perceptual (at least in simple cases), perhaps responding to behavioural and environmental affordances of a social–interactive sort (Gallagher 2001; Hutto 2004; Noë 2004). Such views seem to me ill-motivated. For on closer examination they fail to offer a plausible route through which perceptual knowledge of other minds can be achieved (Spaulding 2017). Moreover, one can in any case explain the largely intuitive nature of much of our knowledge of other minds within a classical knowledge-based framework.
Indeed, it is possible to endorse such a framework while claiming that our awareness of other people’s mental states is genuinely perceptual in character (Carruthers 2015b). In addition, even these direct-perception theorists will allow, of course, that perception of the mental states of other people is grounded in awareness of their behaviour. Yet this is widely agreed to be unnecessary in one’s own case. One doesn’t need to observe one’s own movements, nor listen to one’s own speech acts, in order to know what one is thinking. On the contrary, it is said that one can know this immediately and introspectively.

Carruthers (2011) provides an extended argument that this common-sense picture of self-knowledge is mistaken. On the contrary, knowledge of one’s own thoughts is just as interpretive as is knowledge of the mental states of others.
It draws on the same, or very similar, folk-psychological resources, only with one’s ‘mind-reading faculty’ directed towards oneself rather than towards other people. And the same sorts of informational channels are relied upon in each case. Of course, the data utilized by the mind-reading system can differ in the first person. In particular, the system has access to the thinker’s visual imagery, inner speech, and other sensory-like episodes, whereas it has no such access to the visual imagery or inner speech of other people (except indirectly, via their overt verbal reports). But note that this is access to sensory-based or sensory-like mental events, not to the underlying non-sensory thoughts. Moreover, the movement from awareness of one’s own inner speech to the propositional attitudes thereby manifested is just as interpretive as is listening to the speech of another person.

The relationship between inner speech and thought requires some additional comment. Our best theory of inner speech is that it results from attention directed at a so-called ‘forward-model’ of the predicted sensory consequences of the motor instructions for a specific speech act (Carruthers 2011; Tian and Poeppel 2012; Scott 2013). Whenever actions in general are initiated (including speech actions), an ‘efferent copy’ of the motor instructions is created and used to generate a predictive model of the likely sensory consequences of the movement. (In cases of overt action, these are compared with afferent sensory feedback and used to make fine-grained online adjustments to one’s movements as required; Jeannerod 2006.) In the case of inner speech, motor instructions are created as normal, issuing in a forward-model, but the outgoing signals to the muscles themselves are suppressed. Since motor instructions are low-level nonconceptual representations, any semantic information deriving from the thought-to-be-expressed will have been left behind in the sensory forward-model.
The latter therefore needs to be received as input by the language comprehension system (included in which is the mind-reading system, which handles pragmatics) and processed and interpreted in something like the normal way.

If inner speech, like the speech of other people, needs to be interpreted, however, then how is it that we never hear our own inner speech as ambiguous, nor puzzle about what it might mean? For these are frequent occurrences when listening to the speech of others. The answer has to do with the role of accessibility of conceptual and syntactic structures in normal speech interpretation (Sperber and Wilson 1995).3 Speech interpretation is strongly biased by context, especially by prior conversational context. Concepts and structures that are still easily accessible (remaining in a partially activated state) are prioritized. For example, one will normally pick as the intended referent for a pronoun the individual who
was most recently mentioned in the discourse (and whose singular concept is thus most readily accessible). But when the speech in question is one’s own inner speech, the relevant concepts and syntactic structures will have been in a fully activated state just fractions of a second prior to the onset of the interpretive process. The latter will thus be strongly biased, albeit biased veridically, towards the intended interpretation.

If self-knowledge results from self-directed mind reading, then a number of predictions can be made. One is that there should be no dissociations (in either direction) between capacities for self-knowledge and capacities for other-knowledge. That is, there should be no people in whom self-knowledge remains intact while other-knowledge is damaged. Nor should there be any people in whom other-knowledge remains intact while self-knowledge is damaged. Moreover, the same cortical networks should be implicated in each. Carruthers (2011) examines alleged cases of dissociation in autism and schizophrenia, as well as data from brain-imaging experiments. He argues that none of the claimed dissociations turns out to be real. On the contrary, deficits in other-knowledge seem always to be paired with similar deficits of self-knowledge, and the brain networks implicated in both forms of knowledge are the same.

If self-knowledge of thoughts isn’t direct, but results rather from self-directed mind reading, then a further prediction can be made. This is that there should be distinctive patterns of error in people’s claims about their own thoughts, mirroring the ways in which we can be misled about the thoughts of others. Care needs to be taken to delineate this prediction precisely, however. For an introspection-theorist might grant that there is nothing special about one’s knowledge of one’s own past thoughts (Nichols and Stich 2003).
It may well be that no long-term memories of one’s own thought-processes are generally kept, so that knowledge of one’s past thoughts must depend on interpretation of what one does remember, namely one’s past circumstances and behaviour. The crucial data therefore concern errors about one’s own current or very recent thoughts. Carruthers (2011) reviews a number of bodies of evidence suggesting that people do not have introspective access to their own thoughts, specifically their own current beliefs. One set derives from the ‘self-perception’ framework in social psychology, which has been extensively investigated (Bem 1972; Albarracín and Wyer 2000; Barden and Petty 2008). For example, people duped into nodding while listening to a message (ostensibly to test the headphones they are wearing) report greater agreement with the content of the message, whereas those induced to shake their heads while listening report reduced agreement (Wells and Petty 1980). This suggests that people interpret their own behaviour
and modify their reports accordingly. Moreover, these effects can be made to reverse if the messages are unpersuasive – in this case nodding decreases belief in the message rather than increasing it, suggesting that nodding is interpreted as agreement with one’s own internally accessible reactions, like thinking to oneself in inner speech, ‘What an idiot!’ (Briñol and Petty 2003). Similarly, right-handed people who write statements about themselves with their right hands thereafter express greater confidence in the truth of those statements when re-reading them than do those who write using their left hands (Briñol and Petty 2003). It seems the shaky writing in the latter case is interpreted as a sign of hesitancy. And indeed, third parties who are asked to judge the degree of confidence of the writer from the handwriting samples alone display the same effect, and to the same extent. Carruthers (2011) also discusses evidence from the counter-attitudinal essay paradigm in psychology, which has likewise been heavily investigated (Festinger 1957; Elliot and Devine 1994; Simon, Greenberg and Brehm 1995; Gosling, Denizeau and Oberlé 2006). People who are manipulated into feeling that they have made a free choice to write an essay arguing for the opposite of what they believe will thereafter shift their reports of their beliefs quite markedly – moving, for example, from being strongly opposed to a rise in college tuition to being neutral or mildly positive. This is known not to be an effect of argument quality, and people shift their reports without being aware of having done so, and without there being any prior change in the underlying belief. Rather, what people are doing is managing their own emotions: they are making themselves feel better about what they have done, having had the sense that they had done something bad. 
(Indeed, people who are duped into thinking that they have caused harm through their freely undertaken advocacy of what they actually believe will also shift their reports of their beliefs to make themselves feel better; Scher and Cooper 1989.) But one would think that a direct question about what one believes would activate that belief and make it available for introspection, if such a thing were possible at all. Yet plainly people aren’t aware of their beliefs at the time when they answer the query. Otherwise they would be aware that they were lying and would feel worse, not better (whereas feeling better is what they actually do). Of course it is possible for a defender of introspection to respond to this (and voluminous other) evidence by allowing that people sometimes rely on indirect methods when ascribing thoughts to themselves (Rey 2013). This is consistent with the claim that people are also capable of directly accessing their thoughts, perhaps in other circumstances or in other cases. Aside from being ad hoc, however, this manoeuvre makes no concrete predictions – it tells us
nothing about the circumstances in which people will go wrong. And by the same token, it is incapable of explaining the patterning in the data. Why should errors of self-attribution emerge especially in cases where behavioural evidence might also mislead an outside observer, as well as in cases where people are motivated (unconsciously) to say something other than what they believe? If people were genuinely capable of introspecting their thoughts, then it is remarkable that such abilities should happen to break down here and not elsewhere. Following extensive discussion, Carruthers (2011) concludes from these and other arguments that our access to our own thoughts is always interpretive, no different in principle from our access to the thoughts of other people. While self-knowledge can rely on sensory data not available in the case of other people (including one’s own visual imagery and inner speech), and while various factors may make self-knowledge more reliable than other-knowledge, both are equally indirect and interpretive in nature. In consequence, if conscious thoughts are those that one has immediate introspective knowledge of, then it follows that there are no such things.
3 Sensory-based broadcasting

If one’s thoughts aren’t higher-order access-conscious (that is, immediately knowable through introspection), then perhaps they are first-order access-conscious. Perhaps thoughts can be ‘globally broadcast’ and made available to a wide range of systems in the mind–brain. (The list of systems involved would normally be said to include those for drawing inferences, for forming memories, for generating affective reactions, for planning and decision-making, and for verbal report.) One immediate problem with such a proposal, however, is that it seemingly conflicts with the confabulation data discussed in Section 2. For if one’s thoughts are globally broadcast and made available to the systems responsible for verbal report, then one might think it should be a trivial matter to produce direct reports of them. Perhaps this objection isn’t devastating. It may be that once one’s beliefs have been activated by a query, for example, they are globally broadcast and made available for verbal report; but the processes that plan and determine the nature of those reports can be unconscious ones. Perhaps other information besides the globally broadcast belief can be drawn on when formulating a report; and perhaps normal instances of speech production can be influenced (unconsciously) by a variety of motivational and other factors. In that case the belief might count as
conscious at the same time that one misreports it, and while one is unaware that one is misreporting it. This combination of views might strike one as quite puzzling. But perhaps it isn’t incoherent. In any case it will be fruitful to evaluate the claim that thoughts can be first-order access-conscious on its own merits. Contradicting such a claim, Carruthers (2015a) argues that all access-conscious mental states are sensory based, in that their conscious status constitutively depends upon some or other set of content-related sensory components (that is, perceptual states or mental images in one sense-modality or another). Amodal concepts can be bound into the content of these access-conscious states, however. Thus one doesn’t just imagine colours and shapes, but a palm tree on a golden beach, for example. Here the concepts palm tree, golden and beach are bound into the visual image in the same way (and resulting from the same sorts of interactive back-and-forth processing) as they are when one sees a scene as containing a palm tree on a golden beach. But the access-conscious status of these concepts is dependent on the presence of the sensory representations into which they are bound. It is worth saying more about how conceptual representations can be bound into sensory or sensory-like states, since this will help us to see how one can perceive the thoughts of other people (as argued briefly in Section 2) and of ourselves (as will become important in Section 4). We know that visual processing, for example, takes place in a distributed fashion, with colour being processed separately from shape, and both being processed independently of movement. Yet each of these separate properties can be bound together into a single percept of, say, a round red object (a tomato) rolling along a surface (or in other cases, an integrated visual image of such an event). A central organizing role in the binding process is played by so-called object files (Pylyshyn 2003).
These are like indexical links to an object (‘That thing …’) to which property information (colour, shape, and the rest) can be attached. Carruthers (2015a) then argues that the best account of seeing as (where the round red object is seen as a tomato, for instance) is that category information can be bound into these object files and globally broadcast along with them, constituting a single conscious visual percept. For the competing view would have to be that there are two distinct conscious events: one is a perceptual object-file (‘That: round red rolling thing’) whereas the other is a perceptual judgement (‘That: tomato’). Notice, however, that such an account faces a new version of the binding problem. For it fails to explain what secures the coincidence of reference of the two indexicals, making it the case that one sees the round red rolling thing as the tomato, rather than something else in the visual field.4
When we turn to speech perception (and by extension, inner speech), the relevant organizing principle is the event-file. (An object-file structure is unlikely to work here, since the only relevant object would be the speaker. But one can understand speech, and bind it into a single interpreted utterance, without knowing or otherwise perceiving the identity of the speaker.) Speech is segmented into distinct events (generally sentences), with multiple properties drawn from many different levels of processing bound into each event-file. Thus one hears the tone of voice, the volume, and the accent with which people say things, while also hearing what they say, and often also the intent with which they say it (as when one hears someone as speaking ironically, for example). As a result, an auditory event-file can have mental-state information bound into it. Returning, now, to the main theme of this section: one argument for the view that all access consciousness depends upon sensory representations is an inference to the best explanation that brings together recent work on consciousness with recent work on working memory. The argument builds on the findings of Baars (1988, 2002, 2003), Dehaene (Dehaene et al. 2006; Dehaene and Changeux 2011; Dehaene 2014), and others who have amassed a large and convincing body of data in support of the ‘global broadcasting’ or ‘global workspace’ theory of conscious experience. Across a wide variety of unconscious forms of perception there can be local reverberating activity in both mid-level and high-level sensory cortices. (In the case of vision, these include occipital cortex and posterior temporal cortex.) Stimuli in such cases can be processed all the way up to the conceptual level while remaining unconscious, which can give rise to semantic priming effects. 
But when this activity is targeted by attention the percepts become conscious, and there is widespread coordinated activity linking it also to frontal and parietal cortices.5 Everyone agrees that attention can be a major determinant of consciousness. Carruthers (2015a) goes further and argues that it is necessary and (with other factors) sufficient for consciousness. While some have claimed that gist-perception and/or background-scene perception is conscious in the absence of attention, recent studies have shown that this is incorrect: such properties merely require comparatively little attention to be consciously perceived (Cohen, Alvarez and Nakayama 2011; Mack and Clark 2012). Moreover, the neural mechanisms underlying attention are increasingly well understood (Baluch and Itti 2011; Bisley 2011). A top-down attentional network links dorsolateral prefrontal cortex (PFC), the frontal eye-fields and the intraparietal sulcus. The ‘business end’ of the system is the latter, which projects both boosting and suppressing signals to targeted areas of mid-level sensory cortices. (See also Prinz 2012.)
At the same time there is a bottom-up attentional network (sometimes called the ‘saliency network’) linking regions of right ventrolateral parietal cortex and right ventrolateral PFC, which then interacts with the top-down system through anterior cingulate cortex (Corbetta et al. 2008; Sestieri, Shulman and Corbetta 2010). An extensive recent body of research on working memory suggests that this same attentional network, which is responsible for conscious perception, is also involved in our capacity to sustain and generate conscious representations endogenously, for purposes of conscious thinking and reasoning. For example, whenever brain-imaging studies of working memory have been conducted using appropriate subtraction-tasks, content-related activity in one or more sensory areas has been found (Postle 2006, 2016; D’Esposito 2007; Jonides et al. 2008; Serences et al. 2009; Sreenivasan et al. 2011). Moreover, this activity plays a causal role in the tasks in question, since transcranial magnetic stimulation (TMS) applied to these areas during the retention interval in working memory tasks disrupts performance (Herwig et al. 2003; Koch et al. 2005).6 Notice, in addition, that most working memory tasks could be solved purely amodally, if such a thing were really possible – keeping numbers, words or concepts active in the global workspace. Yet this doesn’t seem to happen. An inference to the best explanation enables us to combine and unify these two bodies of research, thereby detailing the mechanisms that underlie the stream of consciousness quite generally. Attentional signals directed at mid-level sensory areas are necessary for contents to enter working memory (thereby becoming conscious), as well as for conscious perception. And then if working memory is the system that underlies conscious forms of reasoning and decision-making, as many in the field believe (Evans and Stanovich 2013; Carruthers 2015a), we can conclude that all conscious thinking is sensory based.
It remains possible, of course, that there is, in addition to a sensory-based working memory system, an amodal (non-sensory) workspace in which thoughts and propositional attitudes can figure consciously. However, we have no evidence of any form of global broadcasting that isn’t tied to sensory-cortex activity. Nor do we have evidence of an attentional network with the right ‘boosting and suppressing’ properties targeted at anterior and medial temporal cortex or PFC, which is what would be needed if amodal thoughts were to be globally broadcast. Of course, absence of evidence isn’t evidence of absence by itself. But Carruthers (2015a) discusses a number of lines of argument that count strongly against the competing proposal outlined here. What follows is a sketch of one of them.
Suppose there is some sort of workspace in which amodal (non-sensory) thoughts – judgements, goals, decisions, intentions, and the rest – can become active and be conscious. What would one predict? One would surely expect that variance in the properties of this workspace among people would account for a large proportion of people’s variance in fluid general intelligence, or fluid g. For it is conscious forms of thinking and reasoning that are believed to underlie our capacity to solve novel problems in creative and flexible ways, which are precisely the abilities measured by tests of fluid g. In fact, there are now a great many studies examining the relationship between working memory and fluid g (Conway, Kane and Engle 2003; Colom et al. 2004, 2008; Cowan et al. 2005; Kane et al. 2005; Unsworth and Spillers 2010; Redick et al. 2012; Shipstead et al. 2014). Generally, variance in the former overlaps with the latter somewhere between 0.6 and 0.9. (That is to say, the relationship between the two seems to lie somewhere between very strong and almost identical.) Many have thus come to regard working memory as the cognitive system or mechanism that is responsible for fluid g (which is itself a purely statistical construct, of course, being the underlying common factor calculated from a range of different types of reasoning tasks). And to the extent that other factors have been found to correlate with fluid g independently of working memory, the only one that has received robust support is speed of processing, which seems to be a low-level phenomenon (perhaps related to the extent of neural myelination). It may be, of course, that standard tests of working memory tap into both the sensory-based system and the supposed amodal thought-involving system. 
But in that case one would predict that as tests of working memory become more and more sensory in character (requiring one to keep in mind or manipulate un-nameable shapes or shades of colour, for example), the extent of the overlap with fluid g should decrease. For these tests of purely sensory working memory would fail to include any measure of the variance in amodal thinking abilities that would (by hypothesis) account for a large proportion of our flexible general intelligence. But this seems not to be the case. Low-level sensory tasks overlap with fluid g just as strongly as (if not more strongly than) concept-involving ones (Unsworth and Spillers 2010; Burgess et al. 2011; Redick et al. 2012; Shipstead et al. 2012, 2014). Moreover (and just as the sensory-based account would predict) measures of sensory attentional control (using such tests as the anti-saccade task or the flankers task) themselves predict capacities for general intelligence quite strongly (Unsworth and Spillers 2010; Shipstead et al. 2012, 2014).7
In addition, there is a separate body of evidence that pushes towards the same conclusion. This derives from studies that have presented people with a range of different sensory-discrimination tasks (Acton and Schroeder 2001; Deary et al. 2004; Meyer et al. 2010; Voelke et al. 2014). Participants might be asked to order a series of colour-chips by shade, arrange a series of lines by length, arrange a set of tones by pitch, order a set of identical-looking objects by manually feeling their weight, and so on. From these measures one can compute an underlying common factor (just as one does when computing fluid g from a range of reasoning tasks). While it is unclear exactly what this common factor represents, it seems likely that it has to do with capacities for sensory attention and purely sensory working memory. Across studies, it has been found that this underlying factor overlaps with fluid g between 0.6 and 0.9. (Note that this is the same as the extent of overlap between working memory and fluid g.) Since there will be executive and memory-search components of working memory that make no contribution to these sensory-discrimination tasks, we can conclude pretty confidently that there is no variance in general intelligence remaining to be explained by the hypothesized workspace for conscious amodal thinking and reasoning. Carruthers (2015a) argues on these and other grounds that only mental states that have a sensory-based format (such as visual or auditory imagery) are capable of becoming first-order access-conscious. When taken together with the conclusion of Section 2, it follows that amodal thoughts are neither first-order access-conscious nor higher-order access-conscious. All thoughts must therefore do their work unconsciously – among other things, helping to direct attention and manipulate sensory-based representations in working memory.
4 Whence the illusion?

The evidence suggests, then, that there are no such things as conscious thoughts. On the contrary, all conscious thinking and reasoning requires a sensory-based format, involving imagery of one sort or another. Amodal thoughts exist, of course. We make judgements, access memories and beliefs, form and act on goals and intentions, and so on. But such thoughts are always unconscious. They mostly do their work downstream of the conscious contents of working memory. They may be evoked into activity by conscious states, perhaps, but they enter into processes of reasoning and decision-making that fall outside the content of working memory, and are unconscious.
There are in addition, of course, processes of reasoning that take place in working memory, and are conscious. These are so-called System 2 inferential processes (Kahneman 2011; Evans and Stanovich 2013). But they operate over sentences of inner speech, visual imagery, and other sensory-based contents. System 2 processes do not, therefore, include amodal thoughts. (Or at least, not on the account being defended here.) Furthermore, unconscious thoughts also work behind the scenes generating and controlling the sensory-based contents that figure in working memory and the stream of consciousness itself (Carruthers 2015a). What remains, however, is a puzzle: if there are no conscious thoughts, then why does almost everyone believe that there are? How do we come to be under the illusion of conscious thought? This is the question to be addressed here. A number of different factors need to be combined together to construct an adequate explanation. One is a point discussed briefly in Section 2. This is that the central role played by accessibility of concepts and syntactic structures in the interpretation of speech (whether internal or external) means that one fails to notice ambiguities in one’s own inner speech, and ensures that the latter hardly ever strikes one as puzzling or incomprehensible. This is because the relevant conceptual and syntactic structures of the thought-to-be-expressed in the rehearsed speech act will have been active immediately prior to the start of the comprehension process, strongly biasing the latter. A single interpretation almost always wins out as a result, and it does so smoothly and swiftly (just as it does in connection with one’s own overt speech). A second factor has also already been mentioned. This is that we often seem to see or hear what people are thinking (Carruthers 2015b). This is most obvious in connection with speech. 
If someone stops one in the street and asks the way to the Adventist church, one may hear her as wanting to know where the church is. From one’s own subjective perspective, it is not that one first hears the sounds that she makes and then figures out what she wants. (Something like this may well be going on unconsciously, of course.) Rather, understanding is seemingly immediate, and a mental-state attribution comes bound into the content of the sound stream. Similarly, if one asks a work colleague when a scheduled meeting begins and he replies, ‘It starts in ten minutes’, one hears him as judging, or as believing, that the meeting begins then. Again a thought attribution is bound into the content of what one hears. Likewise for visual perceptions of someone’s behaviour: in many cases one’s experience is imbued with mental-state content. Thus one might see someone as trying to open a door, for example (as she struggles with the key in the lock), or one might see
someone as deciding to stop to pick up a piece of litter (as he pauses and begins to bend down towards it). Something similar is true of one’s own inner speech. One can hear oneself as wondering whether it is time to leave for the bus, or as judging that it is. Representations of one’s own thoughts are thus bound into the contents of one’s reflective thinking, in such a way that one experiences oneself as entertaining those thoughts, seemingly immediately, and without engaging in any form of inference or self-interpretation. Likewise in connection with visual forms of thinking, using visual imagery. When one manipulates images of items of luggage while looking into the trunk of one’s car one might experience oneself as wondering how those items will fit, or as deciding to push the large suitcase to the back. Again one’s experience comes imbued with thought attributions bound into it. Indeed, one’s thoughts can strike one as being right there among the contents of one’s auditory or visual imagistic experience. The experience of deciding something is not the same thing as deciding, of course. The former is meta-representational, whereas the latter is not. So there will be two events here, having quite different contents and causal roles. Moreover, on the view outlined in Section 2, the experience of deciding may or may not correctly represent the presence of a corresponding decision. One can experience oneself as deciding something when really one is not, or while one is actually deciding something different. It should be noted, however, that not all inner speech (nor other forms of imagery) is experienced in terms of some specific attitude. This will depend on whether the right sorts of contextual and other cues are present to enable the mind-reading system to determine an attribution, and on the speed with which it is able to do so.
And in fact, one often experiences oneself as entertaining what Cassam (2014) calls ‘a passing thought’ – that is, a propositional content that isn’t the object of any particular mental attitude. For example, one might report an episode in which one hears oneself saying in inner speech, ‘Time to go home’, by saying, ‘I was thinking about whether it is time to go home.’ (Note that this isn’t the same as saying, ‘I was asking myself whether it is time to go home.’ Nor is it the same as saying, ‘I was wondering whether it is time to go home.’ These attribute a particular mental attitude, that of asking a question, or of wanting to know something.) One was aware of a thought with the content that it is time to go home, that is all. It is easy to explain why one should have the impression that one often knows one’s own thoughts immediately and introspectively, then. For that is how one seemingly experiences them. Moreover, it is easy to understand why one should
have the impression that one’s thoughts in these circumstances are first-order access-conscious. For one fails to have any impression of distance between the thoughts themselves and the contents of one’s conscious experience. And yet of course the thoughts that one attributes to oneself in these circumstances will seemingly be available to be remembered, to inform one’s decision-making, and to issue in verbal reports. However, why should one have the impression that one’s access to one’s own thoughts differs in kind from one’s access to the thoughts of other people? For one’s access to other people’s thoughts is often just as phenomenally immediate. How, then, are we to explain the strength of the intuition of a self–other asymmetry? One horn of the asymmetry is straightforward. For of course it is part of common sense that our access to the thoughts of other people is interpretive and mediated via perception of their circumstances and behaviour, despite the seeming phenomenal-immediacy of many instances of thought attribution. But what of the other horn? Why do we never challenge the seeming-immediacy of our access to our own thoughts? The answer, I suggest, is built into the structure of the mind-reading system itself. Specifically, the latter employs a tacit rule of interpretation, which is used in the third person as well as in the first. This is that if someone thinks they are undergoing a certain mental state, then so they are. In fact, I suggest that something resembling Cartesian certainty about self-knowledge is built into our folk psychology. Not many people actually (explicitly) believe this any longer, of course (at least not once they have had some exposure to cognitive science). But that is not the idea.
The claim is rather that Cartesian certainty about current mental events is implicit in a mind-reading inference-rule, which mandates that one move immediately from the belief that one thinks one is in mental state M to the conclusion that one therefore is in M. I shall refer to this as ‘the Cartesian inference-rule’.8 One argument for such a view is an inference to the best explanation of the seeming-universality of Cartesian beliefs across cultures and historical eras. As Carruthers (2011) reports (drawing partly on personal communications from experts in the relevant fields) whenever people in pre-scientific cultures have reflected on the nature of self-knowledge, they have assumed that their access to their own current thoughts is direct and immediate. Not only is this true in the history of Western philosophy, but it is also true of ancient China, the Buddhist tradition, and even the ancient Aztecs. If one rejects such views (as I have done) and argues that one’s access to one’s own thoughts is always indirect and interpretive, then this presents a puzzle. Why has almost everyone across cultures and times believed the opposite? The puzzle is removed if some version
of the Cartesian assumption is built into the fabric of an innately channelled mind-reading system (for the existence of which there is now a significant body of evidence, see Barrett et al. 2013; Carruthers 2013b). Such a claim is surely ripe for experimental testing. But any such tests should be designed to use indirect measures, rather than asking people to make explicit judgements about imagined scenarios (as did Kozuch and Nichols 2011). Or if direct measures are used, the tests should be speeded or conducted under cognitive load. For the hypothesis isn’t that people explicitly believe in Cartesian access to their own thoughts. (On the contrary, educated people today probably don’t.) It is rather that an implicit processing-rule tantamount to such a belief governs the online processing of the mind-reading system. Anecdotally, however, it does seem that stimuli designed to violate the direct-access assumption generally strike one as somehow weird. Even after extensive reflection, and having written books on the subject, sentences such as, ‘I believe I have just decided to leave for the bus, but I haven’t really decided that’, or, ‘I have just decided to leave for the bus, but what is my evidence that I have just decided that?’ strike me initially as being strange to the point of being almost ill-formed. Why would the mind-reading system employ such a tacit principle of interpretation? In short, because it greatly simplifies the process of other-interpretation, probably without any loss of reliability. Let us take these points in turn. Much of the work of the mind-reading system lies in assisting one to interpret the speech of other people. It helps one to figure out which object someone is referring to in a context where indexicals or pronouns are employed. It helps one to determine whether the speech act is literal, ironic, joking, or whatever.
And in the case of assertoric discourse, it helps one to judge whether the person is being honest or is attempting to deceive one, and to evaluate their degree of certainty. Moreover, much of people’s ordinary discourse concerns their own (and other people’s) mental states. People talk about what they want, what they feel, what they think, and so on. These are complex matters. Yet for the most part comprehension happens smoothly and in real time. If the mind-reading system did not employ the Cartesian inference-rule, then in addition to figuring out whether the speaker is asserting something literally and honestly when she says she is in mental state M, the system would also need to determine whether the speaker is interpreting her own behaviour and internal cues correctly. This would add a whole extra layer of complexity, slowing down the interpretive process considerably. And there would probably be no gain in reliability to compensate, as I will now try to show.
The Illusion of Conscious Thought
Much of the data required to evaluate whether someone is interpreting herself correctly is simply not available. One almost never knows what someone is, or has been, visually imagining, nor the sentences that have been rehearsed in her inner speech. Other evidence would be costly to retrieve from long-term memory, such as relevant behaviour from the person’s past. Moreover, whatever evidence one can retrieve is likely to be fragmentary and incomplete, which provides an additional source of error.

It is now a familiar point in cognitive science that simple heuristics can outperform more elaborate and information-hungry principles of judgement, not just in speed but also in reliability (Gigerenzer et al. 1999). For if the data required for the operation of the information-hungry principle are incomplete and unrepresentative, then this may introduce errors that don’t get made by the simpler heuristic system.

Sometimes, of course, we have behavioural evidence that conflicts with what someone says about her mental state. Think, for example, of a person who is red in the face and banging the table aggressively while yelling, ‘I am not angry!’ In this case it might be useful to think that the person has misinterpreted her own state. So this is a case where the Cartesian inference-rule will close off possibilities that it might actually be fruitful to consider. But even here it is doubtful whether anything important is lost for most practical purposes. For one can (and does) easily attribute the discrepancy in the person’s behaviour to disingenuousness. One can think that the person is trying to mislead her audience, and is not reporting her emotional state honestly. This enables one to form expectations based on an attribution of anger while dismissing the person’s verbal statement, but it does so while retaining the simplifying Cartesian inference-rule.
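The Gigerenzer point can be illustrated with a toy simulation. Everything below is invented for illustration and is not drawn from Gigerenzer et al. (1999) or from the chapter: a judge must pick which of two items scores higher on a criterion dominated by one cue. A frugal heuristic consults only that cue; an information-hungry rule also consults a second cue, but its readings of that cue are badly corrupted, modelling incomplete and unrepresentative data.

```python
import random

random.seed(0)

def criterion(item):
    # True value: dominated by the first cue, with a small second-cue contribution.
    return 2 * item[0] + item[1]

def pick_by_heuristic(a, b):
    # Frugal heuristic: consult only the single dominant cue.
    return a if a[0] >= b[0] else b

def pick_by_full_model(a, b):
    # Information-hungry rule: uses both cues, but its reading of the
    # second cue is heavily corrupted (incomplete, unrepresentative data).
    def estimate(item):
        return 2 * item[0] + (item[1] + random.gauss(0, 5))
    return a if estimate(a) >= estimate(b) else b

trials = 5000
h_hits = f_hits = 0
for _ in range(trials):
    a = (random.random(), random.random())
    b = (random.random(), random.random())
    best = a if criterion(a) >= criterion(b) else b
    h_hits += pick_by_heuristic(a, b) is best
    f_hits += pick_by_full_model(a, b) is best

print(f"heuristic accuracy:          {h_hits / trials:.3f}")
print(f"information-hungry accuracy: {f_hits / trials:.3f}")
```

In this setup the one-cue heuristic reliably outscores the two-cue rule, because the noise injected through the second cue swamps the small amount of genuine information that cue carries; this is the same structure the chapter attributes to the data available for interpreting other people’s self-interpretations.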
If sceptical doubts are raised, then, about the directness of one’s attributions of thoughts to oneself, they are apt to be immediately silenced, or closed off, through an application of the Cartesian inference-rule. If one is apt to treat ‘I believe I am in mental state M’ as entailing ‘I am in mental state M’, then the question whether one might take oneself, or interpret oneself, to be in M without really being so will never even arise. And if such a possibility is raised, it will strike one that it should immediately be rejected. By the same token, the suggestion that one might know of one’s own current thoughts and attitudes in the same way that one knows of the attitudes of other people – by interpreting sensory cues of one sort or another – will strike one as inherently absurd.

In short, then, the reason why we are under the illusion of conscious thought is that our access to our own thoughts is seemingly direct and perception-like (as is our access to the thoughts of other people, on many occasions). But (in stark contrast with our awareness of others’ thoughts) we are prevented from
The Bloomsbury Companion to the Philosophy of Consciousness
recognizing the interpretive, non-immediate, character of our access to our own thoughts by the inferential structure of the mind-reading system that provides us with that access.
5 Conclusion

I have argued that amodal (non-sensory) thoughts such as beliefs, goals and decisions are never conscious in either the first-order or the higher-order access sense. Such thoughts are never globally broadcast and made available to a wide range of systems in the mind–brain. Nor are they capable of being known directly and without interpreting sensory cues. On the contrary, amodal thoughts operate beneath the level of awareness, influencing both overt and covert forms of action, and one’s knowledge of them results from interpreting sensory-based cues of various sorts (primarily overt behaviour and mental imagery). Yet the interpretive process is swift and generally reliable, to the point where one routinely experiences oneself as entertaining thoughts of various kinds. Moreover, a Cartesian-like inference-rule built into the structure of the interpreting system (the mind-reading faculty) blocks sceptical doubts about one’s knowledge of one’s own states of mind, while making it seem as if one’s access to one’s own thoughts differs in kind from one’s access to the thoughts of other people.
Notes

1 Some philosophers who endorse a higher-order account of consciousness allow that the relationship between the conscious state and one’s knowledge of it can be inferential (Rosenthal 2005). But what this generally means is that there is some computational process that leads from the state itself to one’s higher-order access to it (much as there is a computational process that leads from patterns of light stimulating the retina to representations of a 3-D world, which can also be characterized as ‘inferential’). It is not envisaged that the process is interpretive in the way that one’s knowledge of other people’s mental states is, drawing on observations of behaviour together with physical and social circumstances. Other philosophers think that the relationship between thought and awareness of thought (even if interpretive) is that the latter is partly constitutive of the former (Schwitzgebel 2002, 2011). On this view, beliefs, in particular, are said to be clusters of dispositions, included among which are dispositions to have self-knowledge.
For a critique of such views, see Carruthers (2013a), which endorses a strongly representationalist account of belief. I will assume, here, that thoughts are not dispositions but structured entities, whose causal roles are sensitive to their structural properties.
2 In this I follow Prinz (2012), albeit using additional arguments.
3 Mid-level sensory areas in vision would include extrastriate regions V2, V3, V4 and MT (but not primary cortical projection area V1), which process visual stimuli for contrast, shape, colour and movement. Processing that underlies category recognition takes place in high-level visual areas, which in the case of vision would include lateral and ventromedial temporal cortex.
4 Note that the relation of accessibility in play here is much broader than that involved in access consciousness, and applies within and between cognitive systems quite generally. For example, one syntactic structure may be more accessible within the language faculty because it was more recently activated, and is thus more easily re-activated.
5 It is important to notice that the view proposed here, that conceptual information can be bound into perceptual and imagistic states and globally broadcast along with them, is perfectly consistent with claiming that perceptual systems are distinct from conceptual ones. One can claim that there are cortical networks specialized for processing information from the retina, for example, while also allowing that those networks interact with amodal conceptual ones, and that globally broadcast visual representations can comprise both sorts of representation.
6 It should be noted that both Baars and Dehaene take for granted that thoughts as well as experiences can be globally broadcast (Baars, Franklin and Ramsøy 2013; Dehaene 2014). Yet they offer no positive evidence for such a view. A reasonable inference is that they, too, fall prey to the illusion of conscious thought, and are merely endorsing a common-sense extension of their scientific findings that strikes them as obvious.
7 Transcranial magnetic stimulation (TMS) involves targeting specific regions of the cortex with a series of weak magnetic pulses, thereby introducing ‘noise’ into the processing being conducted in those regions. The anti-saccade task requires participants to saccade away from a suddenly-appearing visual cue, rather than towards it, which is what one naturally does. The flankers task requires one to indicate the direction of a central arrow that is flanked by others that can be either congruent or incongruent in their direction (with the latter being more difficult).
8 Carruthers (2011) argues in addition that the converse rule – ‘if someone is in mental state M, then they believe they are in mental state M’ – is also tacitly encoded in the processing-principles of the mind-reading system. This is why there was, initially, such vigorous resistance to the idea of unconscious mentality.
References

Acton, G. and Schroeder, D. (2001). Sensory discrimination as related to general intelligence, Intelligence, 29, 263–71.
Albarracín, D. and Wyer, R. (2000). The cognitive impact of past behavior: Influences on beliefs, attitudes, and future behavioral decisions, Journal of Personality and Social Psychology, 79, 5–22.
Baars, B. (1988). A Cognitive Theory of Consciousness, Cambridge: Cambridge University Press.
Baars, B. (2002). The conscious access hypothesis: Origins and recent evidence, Trends in Cognitive Sciences, 6, 47–52.
Baars, B. (2003). How brain reveals mind: Neuroimaging supports the central role of conscious experience, Journal of Consciousness Studies, 10, 100–14.
Baars, B., Franklin, S. and Ramsøy, T. (2013). Global workspace dynamics: Cortical “binding and propagation” enables conscious contents, Frontiers in Psychology, 4, 200.
Baluch, F. and Itti, L. (2011). Mechanisms of top–down attention, Trends in Neurosciences, 34, 210–24.
Barden, J. and Petty, R. (2008). The mere perception of elaboration creates attitude certainty: Exploring the thoughtfulness heuristic, Journal of Personality and Social Psychology, 95, 489–509.
Barrett, H.C., Broesch, T., Scott, R., He, Z., Baillargeon, R., Wu, D., Bolz, M., Henrich, J., Setoh, P., Wang, J. and Laurence, S. (2013). Early false-belief understanding in traditional non-Western societies, Proceedings of the Royal Society B: Biological Sciences, 280, 20122654.
Bem, D. (1972). Self-perception theory, Advances in Experimental Social Psychology, 6, 1–62.
Bisley, J. (2011). The neural basis of visual attention, Journal of Physiology, 589, 49–57.
Briñol, P. and Petty, R. (2003). Overt head movements and persuasion: A self-validation analysis, Journal of Personality and Social Psychology, 84, 1123–39.
Burgess, G., Gray, J., Conway, A. and Braver, T. (2011). Neural mechanisms of interference control underlie the relationship between fluid intelligence and working memory span, Journal of Experimental Psychology: General, 140, 674–92.
Carruthers, P. (2000). Phenomenal Consciousness, NY: Cambridge University Press.
Carruthers, P. (2011). The Opacity of Mind: An integrative theory of self-knowledge, Oxford: Oxford University Press.
Carruthers, P. (2013a). On knowing your own beliefs: A representationalist account, in N. Nottelmann (ed.), New Essays on Belief: Structure, Constitution and Content, NY: Palgrave Macmillan.
Carruthers, P. (2013b). Mindreading in infancy, Mind & Language, 28, 141–72.
Carruthers, P. (2015a). The Centered Mind: What the science of working memory shows us about the nature of human thought, Oxford: Oxford University Press.
Carruthers, P. (2015b). Perceiving mental states, Consciousness and Cognition, 36, 498–507.
Carruthers, P. and Veillet, B. (2011). The case against cognitive phenomenology, in T. Bayne and M. Montague (eds.), Cognitive Phenomenology, Oxford: Oxford University Press.
Cassam, Q. (2014). Self-Knowledge for Humans, Oxford: Oxford University Press.
Cohen, M., Alvarez, G. and Nakayama, K. (2011). Natural-scene perception requires attention, Psychological Science, 22, 1165–72.
Colom, R., Abad, F., Quiroga, A., Shih, P. and Flores-Mendoza, C. (2008). Working memory and intelligence are highly related constructs, but why? Intelligence, 36, 584–606.
Colom, R., Rebollo, I., Palacios, A., Juan-Espinosa, M. and Kyllonen, P. (2004). Working memory is (almost) perfectly predicted by g, Intelligence, 32, 277–96.
Conway, A., Kane, M. and Engle, R. (2003). Working memory capacity and its relation to general intelligence, Trends in Cognitive Sciences, 7, 547–52.
Corbetta, M., Patel, G. and Shulman, G. (2008). The reorienting system of the human brain: From environment to theory of mind, Neuron, 58, 306–24.
Cowan, N., Elliott, E., Saults, J. S., Morey, C., Mattox, S., Hismjatullina, A. and Conway, A. (2005). On the capacity of attention: Its estimation and its role in working memory and cognitive aptitudes, Cognitive Psychology, 51, 42–100.
Deary, I., Bell, P., Bell, A., Campbell, M. and Fazal, N. (2004). Sensory discrimination and intelligence: Testing Spearman’s other hypothesis, American Journal of Psychology, 117, 1–18.
Dehaene, S. (2014). Consciousness and the Brain, New York: Viking Press.
Dehaene, S. and Changeux, J.-P. (2011). Experimental and theoretical approaches to conscious processing, Neuron, 70, 200–27.
Dehaene, S., Changeux, J.-P., Naccache, L., Sackur, J. and Sergent, C. (2006). Conscious, preconscious, and subliminal processing: A testable taxonomy, Trends in Cognitive Sciences, 10, 204–11.
D’Esposito, M. (2007). From cognitive to neural models of working memory, Philosophical Transactions of the Royal Society B, 362, 761–72.
Elliot, A. and Devine, P. (1994). On the motivational nature of cognitive dissonance: Dissonance as psychological discomfort, Journal of Personality and Social Psychology, 67, 382–94.
Evans, J. and Stanovich, K. (2013). Dual-process theories of higher cognition: Advancing the debate, Perspectives on Psychological Science, 8, 223–41.
Festinger, L. (1957). A Theory of Cognitive Dissonance, Palo Alto, CA: Stanford University Press.
Gallagher, S. (2001). The practice of mind: Theory, simulation, or primary interaction? Journal of Consciousness Studies, 8, 83–107.
Gigerenzer, G., Todd, P. and the ABC Research Group. (1999). Simple Heuristics that Make Us Smart, Oxford: Oxford University Press.
Gosling, P., Denizeau, M. and Oberlé, D. (2006). Denial of responsibility: A new mode of dissonance reduction, Journal of Personality and Social Psychology, 90, 722–33.
Graziano, M. (2013). Consciousness and the Social Brain, Oxford: Oxford University Press.
Herwig, U., Abler, B., Schönfeldt-Lecuona, C., Wunderlich, A., Grothe, J., Spitzer, M. and Walter, H. (2003). Verbal storage in a premotor–parietal network: Evidence from fMRI-guided magnetic stimulation, NeuroImage, 20, 1032–41.
Hutto, D. (2004). The limits of spectatorial folk psychology, Mind & Language, 19, 548–73.
Jeannerod, M. (2006). Motor Cognition, Oxford: Oxford University Press.
Jonides, J., Lewis, R., Nee, D., Lustig, C., Berman, M. and Moore, K. (2008). The mind and brain of short-term memory, Annual Review of Psychology, 59, 193–224.
Kahneman, D. (2011). Thinking, Fast and Slow, NY: Farrar, Straus and Giroux.
Kane, M., Hambrick, D. and Conway, A. (2005). Working memory capacity and fluid intelligence are strongly related constructs, Psychological Bulletin, 131, 66–71.
Koch, C., Oliveri, M., Torriero, S., Carlesimo, G., Turriziani, P. and Caltagirone, C. (2005). rTMS evidence of different delay and decision processes in a fronto-parietal neuronal network activated during spatial working memory, NeuroImage, 24, 34–39.
Kozuch, B. and Nichols, S. (2011). Awareness of unawareness: Folk psychology and introspective transparency, Journal of Consciousness Studies, 18 (11–12), 135–60.
Mack, A. and Clarke, J. (2012). Gist perception requires attention, Visual Cognition, 20, 300–27.
Meyer, C., Hagmann-von Arx, P., Lemola, S. and Grob, A. (2010). Correspondence between the general ability to discriminate sensory stimuli and general intelligence, Journal of Individual Differences, 31, 46–56.
Nichols, S. and Stich, S. (2003). Mindreading, New York: Oxford University Press.
Noë, A. (2004). Action in Perception, Cambridge, MA: MIT Press.
Postle, B. (2006). Working memory as an emergent property of the mind and brain, Neuroscience, 139, 23–38.
Postle, B. (2016). How does the brain keep information “in mind”? Current Directions in Psychological Science, 25, 151–56.
Prinz, J. (2012). The Conscious Brain, NY: Oxford University Press.
Pylyshyn, Z. (2003). Seeing and Visualizing, Cambridge, MA: MIT Press.
Redick, T., Unsworth, N., Kelly, A. and Engle, R. (2012). Faster, smarter? Working memory capacity and perceptual speed in relation to fluid intelligence, Journal of Cognitive Psychology, 24, 844–54.
Rey, G. (2013). We are not all ‘self-blind’: A defense of a modest introspectionism, Mind & Language, 28, 259–85.
Rosenthal, D. (2005). Consciousness and Mind, Oxford: Oxford University Press.
Scher, S. and Cooper, J. (1989). Motivational basis of dissonance: The singular role of behavioral consequences, Journal of Personality and Social Psychology, 56, 899–906.
Schwitzgebel, E. (2002). A phenomenal, dispositional account of belief, Noûs, 36, 249–75.
Schwitzgebel, E. (2011). Knowing your own beliefs, Canadian Journal of Philosophy, 35, 41–62.
Scott, M. (2013). Corollary discharge provides the sensory content of inner speech, Psychological Science, 24, 1824–30.
Serences, J., Ester, E., Vogel, E. and Awh, E. (2009). Stimulus-specific delay activity in human primary visual cortex, Psychological Science, 20, 207–14.
Sestieri, C., Shulman, G. and Corbetta, M. (2010). Attention to memory and the environment: Functional specialization and dynamic competition in human posterior parietal cortex, The Journal of Neuroscience, 30, 8445–56.
Shipstead, Z., Lindsey, D., Marshall, R. and Engle, R. (2014). The mechanisms of working memory capacity: Primary memory, secondary memory, and attention control, Journal of Memory and Language, 72, 116–41.
Shipstead, Z., Redick, T., Hicks, K. and Engle, R. (2012). The scope and control of attention as separate aspects of working memory, Memory, 20, 608–28.
Simon, L., Greenberg, J. and Brehm, J. (1995). Trivialization: The forgotten mode of dissonance reduction, Journal of Personality and Social Psychology, 68, 247–60.
Spaulding, S. (2017). On whether we can see intentions, Pacific Philosophical Quarterly, 98, 150–70.
Sperber, D. and Wilson, D. (1995). Relevance: Communication and Cognition, Second Edition, Oxford: Blackwell.
Sreenivasan, K., Sambhara, D. and Jha, A. (2011). Working memory templates are maintained as feature-specific perceptual codes, Journal of Neurophysiology, 106, 115–21.
Tian, X. and Poeppel, D. (2012). Mental imagery of speech: Linking motor and perceptual systems through internal simulation and estimation, Frontiers in Human Neuroscience, 6, 314.
Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto, Biological Bulletin, 215, 216–42.
Tononi, G., Boly, M., Massimini, M. and Koch, C. (2016). Integrated information theory: From consciousness to its physical substrate, Nature Reviews Neuroscience, 17, 450–61.
Unsworth, N. and Spillers, G. (2010). Working memory capacity: Attention control, secondary memory, or both? A direct test of the dual-component model, Journal of Memory and Language, 62, 392–406.
Voelke, A., Troche, S., Rammsayer, T., Wagner, F. and Roebers, C. (2014). Relations among fluid intelligence, sensory discrimination and working memory in middle to late childhood—A latent variable approach, Cognitive Development, 32, 58–73.
Wells, G. and Petty, R. (1980). The effects of overt head movements on persuasion, Basic and Applied Social Psychology, 1, 219–30.
13
Actualism About Consciousness Affirmed
Ted Honderich
1 Need for adequate initial clarification of consciousness? Five leading ideas

You are conscious just in seeing the room you are in, conscious in an ordinary sense. That is not to say what is different and more, that you are seeing or perceiving the room, with all that can be taken to involve, including facts about your retinas and some about your visual cortex. To say you are conscious just in seeing this room is not itself to say either, what is often enough true, that you are also attending to the room or something in it, fixing your attention on it. You are now conscious, secondly, in having certain thoughts, about what you are hearing. Likely you are conscious, thirdly, in having certain feelings, maybe the hope that everything in the next hour is going to be clear as a bell, maybe in intending to say so if it isn’t.

What are those three states, events, facts or things? What is their nature? What is the best analysis or theory of each of the three? What is what we can call perceptual consciousness, cognitive consciousness and affective consciousness? There is also another question, as pressing. What is common to the three states, events or whatever? What is this consciousness in general? What is the kind of state or whatever of which perceptual, cognitive and affective consciousness are three parts, sides or groups of elements? As I shall be remarking later in glancing at existing theories of consciousness, the known main ones try to answer only the general question. But can you really get a good general answer without getting three particular answers?

These are the questions of a line of inquiry and argument in a large book that requires a very dogged reader (Honderich 2014), a book of which this lecture is the short story. We can ask the three particular questions and the general question, as we shall, in mainstream philosophy. That in my view is certainly not ownership of, but a greater concentration than that of science on, the logic
of ordinary intelligence: (i) clarity, usually analysis; (ii) consistency and validity; (iii) completeness; (iv) generalness. Is it safe enough to say in short, then, that philosophy is thinking about facts as distinct from getting them?

Another preliminary. There are ordinary and there are other related concepts of things, ordinary and other senses of words – say stipulated or technical ones. Let us ask, as you may have taken me to have been implying already, what it is to be conscious generally speaking in the primary ordinary sense, in what a good dictionary also calls the core meaning of the word – and what it is to be conscious in each of the three ways in the primary ordinary sense. Do you ask if that is the right question? Assume it is and wait for an answer.

We have what John Searle rightly calls a common-sense definition (1992, 83–4; 2002, 7, 21). It is something he calls unanalytic, of what seems to be this ordinary consciousness – presumably must be of ordinary consciousness since it is common sense. This consciousness in the definition is states of awareness that we are in except in dreamless sleep. That has the virtue of including dreaming in consciousness, which surprisingly is not a virtue of all definitions, notably an eccentric Wittgensteinian one (Malcolm 1962). But how much more virtue does Searle’s common-sense definition have? Awareness obviously needs defining as much as consciousness. Certainly there seems to be uninformative circularity there. Like defining a dog as a canine.

Each of us also has something better than a common-sense definition. Each of us has a hold on her or his individual consciousness. That is, each of us can recall now the nature of some consciousness just a moment ago, perceptual consciousness of the room, or consciousness that was a thought or a feeling. I guess that is or is part of what was and is called introspection.
It has been doubted because it was taken as a kind of inner peering or seeing, and because people or subjects in German and other psychology laboratories were asked to do more with it than they could. Forget all that. We can be confident right now that each of us can recall that event or state of consciousness a moment ago, say the look of a thing or a passing thought or an urge.

There are lesser and greater pessimisms about our being able to answer the general question of consciousness. Greater pessimists have included Noam Chomsky (1975; 1980; and otherwise unpublished material in Lycan 1987; 1996; 2003), Thomas Nagel (1974; 1998; 2015), David Chalmers (1995a; 1995b; 1996; 2015), and Colin McGinn (1989; 1991b; 1999; 2002; 2004; 2012). McGinn began by saying we have no more chance of getting straight about consciousness than chimps have of doing physics, but ended up by seeming to say a lot less.
Are the pessimisms and also, more importantly, the great seeming disagreement about what consciousness is, that pile of conflicting theories in philosophy, neuroscience, cognitive science and psychology – is all that owed at least significantly to one fact? Are they owed to the fact that there has not been agreement on what is being talked about, no adequate initial clarification of the subject matter, not asking the same question – allowing people to talk past one another, not answer the same question? In a sense, of course, that is not disagreement at all, but a kind of confusion.

So far and still more hereafter, by the way, this stuff from me today is a sketch of a sketch – a bird’s-eye view with the bird flying high and fast. I worry that someone once said to Professor Quine about Karl Popper that Popper lectured with a broad brush, to which Quine mused in reply that maybe Popper thought with one too. Pressing on anyway, I say that there are five leading ideas of consciousness. They are about qualia, something it’s like to be a thing, subjectivity, intentionality and phenomenality.

Qualia. Daniel Dennett (1992) says qualia are the ways things seem to us, the particular personal, subjective qualities of experience at the moment. Nagel (1974) says qualia are features of mental states. Very unlike Dennett, he says it seems impossible to analyse them in objective physical terms, make sense of them as objectively physical. Ned Block (1995, 380–1, 408) has it that they include not only experiential properties of sensations, feelings, perceptions, wants and emotions. They are also such properties of thoughts, anyway our thoughts that are different from the sort of thing taken to be the functioning of unconscious computers – computation or bare computation. Others disagree in several ways with all that. Do we arrive in this way at an adequate initial clarification of the subject of consciousness? No.
There is only what you can call a conflicted consensus about what qualia are to be taken to be. In this consensus, even worse, one thing is very widely assumed or agreed (Chalmers 2009; Robinson 1993; 2003; 2012). Qualia are qualities of consciousness, that which has the qualities, not consciousness itself – maybe its basic qualities or a more basic quality. Another thing mostly agreed upon is by itself fatal to the idea of an adequate initial clarification – that qualia are only one part of consciousness. There’s the other part, which is propositional attitudes – related or primarily related to my cognitive consciousness.

Something it’s like to be a thing. That idea of Nagel in his paper ‘What is it Like to be a Bat?’ (1974), however stimulating an idea, as indeed it has been, is surely circular. Searle in effect points to the fact when he says that we are
to understand the words in such a way that there is nothing it is like to be a shingle on a roof (1992, 132; 1999, 42). What we are being told, surely, or what is being implied, is that what it is for something to be conscious is for there to be something it is like for that thing to be conscious. What else could we be being told? A sceptic might also worry that no reality is assigned to consciousness in Nagel’s provocative question. Can there conceivably be reality without what Nagel declined to provide in his famous paper, an assurance of physicality?

Traditional or familiar subjectivity. Here, whatever better might be done about subjectivity, and really has to be done, and as we can try to do, wherever we turn, there has been circularity. Consciousness is what is of a subject, which entity is understood as a bearer or possessor of consciousness. There is also obscurity. Further, a subject of this kind is a metaphysical self. Hume famously saw off such a thing, did he not, when he reported that he peered into himself and could not espy his self or soul (1888, 252)?

Intentionality. The idea was brought out of the distant past into circulation by the German psychologist Franz Brentano in the nineteenth century and has Tim Crane as its contemporary defender and developer. It is sometimes better spoken of as aboutness, where that is explained somehow as also being the puzzling character of lines of type, spoken words and images. There is the great problem that when intentionality is made clear enough by way of likeness to such things, it is evident that it is only a part of consciousness. As is often remarked, it does not include aches and objectless depressions. Crane argues otherwise, valiantly but to me unpersuasively (1998; 2001, 4–6; 2003, 33).

Phenomenality. Block speaks of the concept of consciousness as being hybrid or mongrel, and leaves it open whether he himself is speaking of consciousness partly in an ordinary sense (2007b, 159, 180–1).
He concerns himself, certainly, with what he calls phenomenal consciousness, as does David Chalmers (1996). This is said by Block (1995, 2007a, 2007b), not wonderfully usefully, to be ‘just experience’, just ‘awareness’. That is circularity again. I add in passing that he takes there to be another kind of consciousness, access consciousness, which most of the rest of us recognize as an old and known subject, what we still call unconscious mentality, only dispositions, maybe brain-workings related to phenomenal consciousness.

Here is a first and striking instance of philosophers or scientists in speaking of consciousness definitely not meaning the same thing as other philosophers or scientists. Nagel was not referring to access consciousness – or anything that has it as a proper part. How many more instances do we need in support of that idea as at least significantly responsible for basic disagreements concerning the concept of consciousness?
It is notable that Chalmers (1996, 4–6, 9–11) takes all five of the ‘leading ideas of consciousness’ to come to much the same thing, something to pick out approximately the same class of phenomena. He is not alone in that inclination. But evidently the ideas are different. Certainly the essential terms aren’t what he calls synonyms. And is it not only the case that none of the five ideas provides an adequate initial clarification of consciousness, but also remarkable that a comparison of them in their striking variety indicates immediately the absence and lack of any common subject?
2 Something’s being actual – A database

Is confession good for a philosopher’s soul? I sat in a room in London’s Hampstead some years ago, when it was still intellectual and not just rich, and said to myself stop reading all this madly conflicting stuff about consciousness. You’re conscious. This isn’t Quantum Theory, let alone the bafflement of moral and political truth. Just answer the question of what your being conscious right now is, or for a good start, more particularly, just say what your being conscious of the room is, being conscious just in seeing the room. Not thinking about just seeing, or attending to some particular thing in what is seen. Not liking it or whatever. Anyway, you know the answer in some sense, don’t you? You’ve got the hold.

The answer in my case, lucky or unlucky, was that my being conscious was the fact of the room being there, just the room being out there. Later on, as you will learn, I preferred to say that it was a room being there.

You find philosophers and scientists using certain terms and locutions – certain conceptions of relevance in the philosophy of consciousness. Suppose again, as you reasonably can, that they or almost all of these theorists are talking about consciousness in the primary ordinary sense. They think about it in a certain way, have certain concepts, and use certain language for it. Further, and of course very importantly, this is shared with philosophers and scientists who are otherwise concerned with consciousness and what they call the mind or anyway mind – this being both conscious and unconscious mentality – inseparable according to Chomsky in a consideration of my Actualism (Caruso, 2017), not open to being ‘extricated’, but that is another story.

If you put together the terms and locutions you get data to be organized as a database. It is that in the primary ordinary sense, in any of the three ways, your being conscious now is the following:
Actualism About Consciousness Affirmed
239
– the having of something, something being had – if not in a general sense, the general sense in which you also have ankles – something being held, possessed or owned,
– your seeing, thinking, wanting in the ordinary active sense of the verbs, hence the experience in the sense of the experiencing of something,
– something being in contact, met with, encountered or undergone,
– awareness of something in a primary sense, something being directly or immediately in touch,
– something being apparent, something not deduced, inferred, posited, constructed or otherwise got from something else,
– something somehow existing,
– something being for something else, something being to something,
– something being in view, on view, in a point of view,
– something being open, provided, supplied,
– something to which there is some privileged access,
– in the case of perception, there being the world as it is for something,
– what at least involves an object or content in a very general sense, an object or content’s coming to us, straight-off,
– something being given, hence something existing and known,
– something being present,
– something being presented, which is different,
– something being shown, revealed or manifest,
– something being transparent in the sense of being unconveyed by anything else, something clear straight-off,
– something being open, something being close,
– an occurrent or event, certainly not only a disposition to later events,
– something real,
– something being vividly naked,
– something being right there,
– in the case of perception, the openness of a world.

All that, I say to you, is data and indeed a database. To glance back at and compare it to the five leading ideas, it’s not a mediaeval technical term in
much dispute, or a philosopher’s excellent aperçu though it is still an aperçu, or a familiar or traditional idea or kind of common talk, or an uncertain truth based on a few words and images, or an uncertainty about a consciousness that seems to slide into unconsciousness. Without stopping to say more about the database, it is worth remarking that in character it has to do with both existence and a relationship, a character both ontic and epistemic. We have to note that it is figurative or even metaphorical. To say consciousness is given is not to say it’s just like money being given. There is an equally figurative encapsulation of it all, which I will be using. It is that being conscious in the primary ordinary sense is something’s being actual – which does not appear open to the objection of circularity. We maintain that what we have in this characterization is an initial conception of primary ordinary consciousness as being actual consciousness. The proposal immediately raises two general questions. What is actual with this consciousness? And what is it for whatever it is to be actual? Recalling that consciousness has three parts, sides or groups of elements, there are the questions of what is actual and then what the actuality is with each of perceptual, cognitive and affective consciousness. So the first two criteria – of eight – for an adequate theory or analysis of ordinary consciousness, for a literal account of its nature, are the theory’s giving answers to those questions about (1) what is actual and (2) what its being actual comes to in an absolutely literal sense. We can arrive at better answers if we look first at a few essential preliminaries.
3 Functionalisms, dualisms and other theories

It is prudent, whether or not required by a respect for consensus, to consider existing dominant theories of anything under study. If you take the philosophy and science of consciousness together, the current philosophy and science of mind, you must then consider abstract functionalism and its expression in cognitive science – computerism about consciousness and mind. We might as well begin there, because it might be right or anyway might have been right. Could it be that abstract functionalism is usefully approached in a seemingly curious way, approached by way of what has always been taken as an absolute adversary, traditional mind-brain dualism, including spiritualism, which goes back a long way, to before Descartes? This dualism, often taken as benighted, is the proposition that the mind is not the brain. That, in a sentence only slightly
more careful, is to the effect that all consciousness is not physical. There are, of course, reputable and indeed leading philosophers and scientists of mind who are in some sense dualists. Chalmers is one (1995a; 1995b; 1996). There are other more metaphysically explicit dualists, including Howard Robinson (1993; 2003; 2012). We might wonder whether Block has not also been at least a fellow traveller (1995; 2007a; 2007b). You may excuse my saying of dualism, since I have a lot of my own fish to fry, that it has the great recommendation of making consciousness different in kind, as it surely is. It has unfortunately the great complementary failing of making consciousness not a reality. It shares that fatal failing with abstract functionalism. The old metaphysics and the reigning general science of the mind fall together. But your being conscious, rather, just for a start, is something with a history that began somewhere and will end somewhere. Who now has the nerve to say it is out of space? It is now real. It now exists. It’s a fact. Evidently all this is bound up with the clearer and indeed dead clear truth that consciousness has physical effects, starting with lip and arm movements and where chairs are. Elsewhere there is the axiom of the falsehood of epiphenomenalism. There is no more puzzle about what, in general, abstract functionalism is, even if the elaboration of it in cognitive science has been rich. Abstract functionalism is owed to a main premise and a large inspiration. The inspiration supporting abstract functionalism is that we do indeed identify and to an extent distinguish types of things and particular things in a certain way – by their relations, most obviously their causes and effects. We do this with machines like carburettors, and with our kidneys, and so on, and should do it more with politicians and our hierarchic democracy. 
The premise, more important now, is the proposition that one and the same type of conscious state somehow goes together with or anyway turns up with different types of neural or other physical states. This is the premise of what is called multiple realizability. We and chimps and snakes and conceivably computers can be in exactly the very same pain that goes with quite different physical states. The proposition is subject to doubt on several grounds. My own short story of abstract functionalism, my own objection, is that a conscious state or event is itself given no reality in this theory that allows it to be only a cause of actions and other output. It does go together with traditional dualism in this respect, and is therefore to me as hopeless. There is a place within other and very different theorizing for what you can call physical functionalism, which is better, partly because it puts aside multiple realizability, which has been too popular by half.
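The multiple realizability premise lends itself to a compact statement. The following is my own sketch in first-order notation, not anything given in the text; the predicate Realizes and the variable names are my labels:

```latex
% Multiple realizability, sketched: one conscious-state type M
% goes together with at least two distinct physical-state types.
% 'Realizes' and the variable names are my own labels, not the author's.
\exists M \,\exists P_1 \,\exists P_2 \;
  \big( P_1 \neq P_2 \;\wedge\; \mathrm{Realizes}(P_1, M) \;\wedge\; \mathrm{Realizes}(P_2, M) \big)
```

On this statement the premise is existential: a single pain type realized differently in us, chimps, snakes or conceivably computers would suffice to make it true, which is one reason doubt about it has to be doubt about the cases themselves.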
My list of existing theories and sorts of theories has on it Non-Physical Intentionality and Supervenience, notably the work of Jaegwon Kim (2005); Donald Davidson’s Anomalous Monism (1980); the mentalism of much psychology and science as well as philosophy that runs together conscious and unconscious mentality; Block’s mentalism in particular; naturalism, the dominant representational naturalism of which there are various forms (Papineau, 1993); such aspectual theories as Galen Strawson’s panpsychism and double aspect theory; Bertrand Russell’s Neutral Monism; the different physicalisms of Searle, Dennett, and of neuroscience generally; the Higher-Order Theory of Locke of the seventeenth century and David Rosenthal (2005) of ours; the audacity of the Churchlands seemingly to the effect that it will turn out in a future neuroscience that there aren’t any and haven’t ever been any beliefs or desires (1986; 1988); the wonderful elusiveness of quantum theory consciousness, which is certainly a case of the explanation of the obscure by the more obscure; and the previous externalisms – Hilary Putnam (1975), Tyler Burge (2007), Alva Noë (2006; 2009) and Andy Clark (1997, 2011), these latter appealing to both external facts and representation. While all of these theories are crucially or at least centrally concerned one way or another with the physical, physical reality, they do not slow down much to think about it. They do not come close to really considering what it is, going over the ground. Was that reasonable? Is it reasonable? Shouldn’t we get onto the ground, walk around there for a while? Be pedestrian? And just in passing, for the last time, do these theories concern themselves with the same question? For a start, was Non-Physical Supervenience about the same question of consciousness that representational naturalism or neuroscientific physicalism was about? Surely not.
A third thing is important, indeed crucial, for anyone who believes, as I do, despite such original tries as Frank Jackson’s (1986), that there are no proofs of large things in philosophy, which is instead a matter of comparative judgement between alternatives. The thing is that a good look through those various theories gives us more criteria for a decent theory or analysis of consciousness – additional to the criteria that are answers to the questions you’ve heard of (1) what is actual and (2) what the actuality comes to. Also criteria additional to two others already announced to the effect that a decent theory of consciousness will have to recognize and explain (3) the difference of consciousness from all else and (4) the reality of consciousness and the connected fact of its being causally efficacious – maybe several-sided difference and several-sided reality. A further condition of adequacy is (5) something just flown by so far – subjectivity, some
credible or persuasive unity with respect to consciousness, something quite other than a metaphysical self or homunculus. Another condition is (6) the three parts, sides or kinds of elements of consciousness. It is surprising indeed that the existing general theories of consciousness do not include in their generality the distinctness of perceptual, cognitive and affective consciousness, as psychology did in the past and still does in practice – and as philosophy itself does when it is not focused on the general question, but, say, thinking about perception. Another requirement (7) is that of naturalism, essentially a satisfactory relation to science. A last one is (8) the relation or relations of consciousness to a brain or other basis and to behaviour and other relations. Concerning the variety of externalisms, Putnam said that meanings ain’t in the head, but depend on science. Burge cogently explained by way of arthritis in the thigh that mental states are individuated by or depend on external facts, notably those of language. Clark (1997, 2011) argued that representation with respect to consciousness is a matter of both internal and external facts – minds are extended out of our heads. Noë (2009) theorizes that consciousness partly consists in acting. There is a radically different externalism with respect to perceptual consciousness. One distinction is that this consciousness is a matter of an external reality – without any representation of it.
4 The objective physical world

To make a good start towards the theory we will call Actualism, think for a few minutes, whether or not you now suppose this is a good idea, about the usual subject of the physical, the objective physical world. The existing theories of consciousness, from dualism and abstract functionalism to the externalisms, one way or another include presumptions about or verdicts on consciousness having to do with physicality – by which they always mean and usually say objective physicality. I ask again whether they are to be judged for their still passing by the subject. Anyway, having spent some time on that database, and flown over a lot of existing uniform theories of consciousness, and put together the criteria for an adequate theory or analysis of consciousness, let us now spend even less time on the objective physical world, on what it is for something to be objectively physical. If there are a few excellent books on the subject, notably those of Herbert Feigl (1967) and Barbara Montero (1999; 2001; 2009), it is indeed hardly considered at all by the known philosophers and scientists of consciousness. Or they take a
bird’s eye view, far above a pedestrian one. I’m for walking around, going over the ground. Not that it will really be done here and now. Here let me just report 16 convictions or attitudes of mine owing to a respect for both science and philosophy. I abbreviate what is a substantial inquiry in itself into the objectively physical, the objectively physical world. I boil it down into a fast checklist of characteristics. They are properties that can be divided into those that can be taken as having to do with physicality, the first nine, and those having to do with objectivity, the other seven.
Physicality

1 Objective physical properties are the properties that are accepted in science, or hard or harder science.
2 They are properties knowledge of which is owed or will be owed to the scientific method, which method is open to clarification.
3 They are properties that are spatial and temporal in extent, not outside of space and time.
4 Particular physical properties stand in lawful connections, most notably causal connections, with other such properties. Two things are in lawful connection if, given all of a first one, a second would exist whatever else were happening. Think about that truth dear to me some other time (Honderich, 1988).
5 Categories of such properties are also lawfully connected.
6 The physical macroworld and the physical microworld are in relations to perception, different relations – the second including deduction.
7 Macroworld properties are open to different points of view.
8 They are different from different points of view.
9 They include, given a defensible view of primary and secondary properties, both kinds of properties.
Objectivity

To consider objectivity rather than physicality, the properties of the objective physical world have the following characteristics.

10 They are in a sense or senses separate from consciousness.
11 They are public – not in the consciousness of only one individual.
12 Access to them, whether or not by one individual, is not a matter of special or privileged access.
13 They are more subject to truth and logic than certain other properties.
14 To make use of the idea of scientific method for a second time, their objectivity, like their physicality, is a matter of that method.
15 They include no self or inner fact or indeed unity or other such fact of subjectivity that is inconsistent with the above properties of the objective physical world.
16 There is hesitation about whether objective physicality includes consciousness.
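Condition 4’s definition of lawful connection can be given a compact conditional form. The following is my own hedged rendering, not Honderich’s notation; the quantification over background circumstances stands in for ‘whatever else were happening’:

```latex
% Lawful (nomic) connection, sketched: given all of a first thing a,
% a second thing b would exist whatever else were happening.
% 'Occurs', 'Holds' and the variable names are my labels, not the author's.
\mathrm{Lawful}(a, b) \;\leftrightarrow\;
  \forall C \,\big( \mathrm{Occurs}(a) \wedge \mathrm{Holds}(C) \;\rightarrow\; \mathrm{Occurs}(b) \big)
```

Here C is taken to range over background circumstances compatible with the full occurrence of a, so the biconditional is a sketch of the counterfactual reading rather than a material conditional over everything whatever.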
5 Perceptual consciousness – What is and isn’t actual

We can now proceed from that database, the encapsulation of it, the pile of theories of consciousness, the criteria and objective physicality. It seems to me and maybe others that if we learn from the existing pile of theories of consciousness and the resulting criteria, and to my mind the plain thinking about physicality, we need to make an escape from the customary in the science and philosophy of consciousness. There is a fair bit of agreement about that. McGinn is one who really declares the need for something new (1989; 2002; 2004; 2012). We need to pay special untutored attention to consciousness. We do not need to turn ourselves into what psychologists used to call naive subjects or to demote ourselves to membership of the folk – of whom I am inclined to believe that they are distinguished by knowing quite a few large truths about consciousness. We do need to concentrate, for a good start, on those two general and main questions at which we have arrived, and respond to them directly out of our holds on being perceptually conscious. Here is an anticipation, in awful brevity, of what seems to me the right response. What is actual for me now with respect to my perceptual consciousness, my perceptual consciousness as distinct from my cognitive and affective consciousness, is only the room, what it will indeed turn out to be sensible to call a room, but a room out there in space, a room as definitely out there in space as anything at all is out there in space. God knows it’s not a room in my head. Anyway I know. What is actual with you and me now, so far as perceptual consciousness is concerned, is a room. Most certainly it is not a representation of a room or any such thing whatever, called image or content or whatever else. I know when someone or something is sending me a message, even sneakily. No representation, no matter what part something else – registrations, inputs,
recordings or such-like effects, which might mistakenly or anyway entirely misleadingly be called representations – play elsewhere, in entirely unconscious mentality. We can all very well tell the difference between a sign of any sort and a thing that isn’t one. Perceptual consciousness is not just or even at all about that room, but in short is that room. No metaphysical self is actual either, or direction or aboutness, or any other philosophical or funny stuff. What is actual is a subjective physical world in the usual sense of a part of the thing. Saying so is comparable to familiar talk of being in touch with the world as ordinarily thought of, or the objective physical world, in virtue of being in touch with a part of it. There is reason for the rhetoric, perfectly literal sense to be given to it. Is a subjective physical world, since not a world inside your head, just a phantom world? Is it insubstantial, imaginary, imagined, dreamed up? If you are caught in a good tradition of philosophical scepticism, maybe scepticism gone off the deep end, and feel like saying yes, making me feel sorry for you, hang on for a while and hold your horses.
6 Perceptual consciousness – Something’s being actual is its being subjectively physical in a way

What about question 2? What is a room’s being actual? It is indeed its existing in a way not at all metaphorical or otherwise figurative, but a way to be very literally specified – a way guided by what was said of the objective physical world. This existence of a room is partly but not only a matter of a room’s occupying that space out there and lasting through some time, and of its being in lawful connections including causal ones within itself, and of two great lawful dependencies that mainly distinguish this way of existing in particular. The first dependency is the lawful categorial dependency of what is actual on what we have just inquired into or anyway glanced at, the objective physical world, or rather on parts or pieces or stages of the objective physical world we ordinarily speak of perceiving, whatever that perceiving really comes to. The second dependency with my world is a dependency on my objective properties as a perceiver, neural properties and location for a start. Note in passing that this connects with something mentioned before, both the epistemic and ontic character of our database. My being perceptually conscious now is the existence of a part or piece or stage of a sequence that is one subjective physical world, one among very many, as many as there are sets of perceivings of single perceivers. These myriad worlds are no less real for there being myriads of them and for their
parts being more transitory than parts of the objective physical world. Myriad and momentary things in the objective physical world do not fail to exist on account of being myriad and momentary. I speak of a room, of course, not at all to diminish it or to allow that it is flaky, but mainly just to distinguish it from that other thing. Subjective physical worlds and their parts or whatever are plain enough states of affairs or circumstances, the ways things or objects are, sets of things and properties. These subjective worlds are a vast subset, the objective physical world being a one-member subset, of course, of many parts, of the single all-inclusive world that there is, the physical world, that totality of the things that there are. Here is a summary table of these and other facts. It also covers what we will be coming to, cognitive and affective consciousness. Attend, first, to the left-hand column of the table. You will not need telling again that it summarizes what was said earlier of objective physicality. Subjective physical worlds, our present concern, characterized in the middle column, are one of two subsets of subjective physicality. All of that subjective physicality, like objective physicality, as already remarked, is a subset of physicality in general. Subjective physical worlds are about as real, if differently real, I repeat, in pretty much the sum of decent senses of that wandering word, as the objective physical world, that other sequence. In one sense, subjective physical worlds are more real – as in effect is often enough remarked, but pass that by. All this is so however and to what limited extent the objective physical world is related to subjective physical worlds. It is because of the dependencies on the objective physical world and also on perceivers, and for another specific and large reason to which we will come, about subjectivity or rather individuality, that these perceived worlds rightly have the name of being subjective. 
It can now be said that my being perceptually conscious just is and is only a particular existence of something like what most of the leading ideas of consciousness and the existing theories of consciousness take or half-seem to take or may take perceptual consciousness merely to be of or about, say a room. They take perceptual consciousness to be a lot more than just the existence of a room. Evidently the characteristics of subjective physical worlds also clarify and contribute content to what was said earlier of the epistemic and ontic character of our data as to ordinary consciousness. In talking of subjective physical worlds, whether or not anything of the sort has been anticipated by others (Strawson, 2015), we’re not discovering a new thing, a new category. We’re just noting and not being distracted from and using an old thing, putting it into a theory of perceptual consciousness, making a theory of perceptual consciousness from it and necessarily leaving other stuff out. There has certainly been talk and
PHYSICALITY (all three columns), with SUBJECTIVE PHYSICALITY comprising the second and third columns:

| | OBJECTIVE PHYSICAL WORLD | SUBJECTIVE PHYSICAL WORLDS: Perceptual Consciousness | SUBJECTIVE PHYSICAL REPRESENTATIONS: Cognitive and Affective Consciousness |
|---|---|---|---|
| | ITS PHYSICALITY | THEIR PHYSICALITY | THEIR PHYSICALITY |
| 1 | in the inventory of science | in the inventory of science | in the inventory of science |
| 2 | open to the scientific method | open to the scientific method | open to the scientific method |
| 3 | within space and time | within space and time | within space and time |
| 4 | in particular lawful connections | in particular lawful connections | in particular lawful connections |
| 5 | in categorial lawful connections | in categorial lawful connections, including those with (a) the objective physical world and (b) the conscious thing | in categorial lawful connections, including those with (a) the objective physical world and (b) the conscious thing |
| 6 | macroworld perception, microworld deduction | constitutive of macroworld perception | not perceived, but dependent on macroworld perception |
| 7 | more than one point of view with macroworld | more than one point of view with perception | no point of view |
| 8 | different from different points of view | different from different points of view | no differences from points of view |
| 9 | primary and secondary properties | primary and secondary properties despite (5b) above? | no primary and secondary properties |
| | ITS OBJECTIVITY | THEIR SUBJECTIVITY | THEIR SUBJECTIVITY |
| 10 | separate from consciousness | not separate from consciousness | not separate from consciousness |
| 11 | public | private | private |
| 12 | common access | some privileged access | some privileged access |
| 13 | truth and logic, more subject to? | truth and logic, less subject to? | truth and logic, less subject to? |
| 14 | open to the scientific method | open to the scientific method despite doubt | open to the scientific method, despite doubt |
| 15 | includes no self or unity or other such inner fact of subjectivity inconsistent with the above properties of the objective physical world | each subjective physical world is an element in an individuality that is a unique and large unity of lawful and conceptual dependencies including much else | each representation is an element in an individuality that is a unique and large unity of lawful and conceptual dependencies including much else |
| 16 | hesitation about whether objective physicality includes consciousness | no significant hesitation about taking the above subjective physicality as being that of actual perceptual consciousness | no significant hesitation about taking this subjective physicality as being the nature of actual cognitive and affective consciousness |
theory of some or other physical world being there for us, in the ordinary sense of a part of it being there. There’s been talk of the world as experienced. There’s one for you right now, isn’t there? You’re immediately in touch with one of those right now, aren’t you? If this familiar fact doesn’t give you a proof of Actualism with respect to perceptual consciousness, it should nevertheless provide a very helpful pull in the right direction. So much for an anticipation of the main body of the theory of Actualism with respect to just perceptual consciousness, whatever is to be said about cognitive consciousness and affective consciousness – including whatever is to be said of the beliefs and also the desires in which perceptual consciousness does not consist at all, but by which it is often accompanied or to which it commonly gives rise.
7 Cognitive and affective consciousness – Theories and what is and isn’t actual

To turn yet more cursorily to these second and third parts or sides of consciousness, what is actual with your cognitive consciousness, say your just thinking of your mother or the proposition of there being different physicalities, or your attending to this room or to something in it? My answer is that what is actual, we need to say, and absolutely all that is actual, is a representation or a sequence of representations. What it is for something to be actual is for it to be subjectively physical, differently subjectively physical than with a room. Cognitive consciousness, further, is related to truth. With respect to affective consciousness as against cognitive, say your now wanting a glass of wine, what is actual is also representation, subjectively physical, but related to valuing rather than truth. To come to these propositions, of course, is to come away entirely from the figurative to the literal. For both cognitive and affective consciousness, as already anticipated, see the right-hand columns of that table. Note in passing, not that the point is simple and without qualification, that given the differences between (1) perceptual consciousness and (2) cognitive and affective consciousness, we certainly do not have the whole nature of consciousness as uniform or principally or essentially or primarily uniform. That in itself is a recommendation of Actualism, a theory’s truth to your hold of your consciousness. You know, for a start, how different consciousness in seeing is from thinking and wanting.
If there is a lot of existing philosophical and scientific theory with respect to perceptual consciousness, maybe there is still more with respect to cognitive and affective consciousness. Since I am getting near the end of this lecturing hour, and discussion is better, here is no more than just a list of good subjects in another pile that you might want to bring up, a list of ten good subjects having to do with representation – a list with just a comment or two added. My representationism, as you know from what has been said of actual perceptual consciousness, where there is no representation at all, is not universal representationism. As will soon be apparent, it definitely is not pure. The representation in cognitive and affective consciousness necessarily is with something else, one element of the fact.
8 Cognitive and affective consciousness – Representations being actual is their being subjectively physical in a way

Put up with just a few words more on some of that pile of subjects, the representational theories of and related to cognitive and affective consciousness. They admittedly do begin from reflection on our spoken and written language, English and the rest, linguistic representations, and in effect move on from that reflection to an account of conscious representation. It seems to me that none of this by itself can work. Searle, admire him as I do, can’t succeed in reducing any consciousness to only this. Representation is as true of a line of type as of your thought or want. That is just as true when nobody is thinking it. Absolutely plainly, there is a fundamental and large difference between (1) a line of print on a page or a sequence of sounds and (2) a conscious representation or a sequence of such things. The relation of a conscious representation to language is only part of the truth. Actualism saves the day. The greatest of philosophers in our tradition, Hume, began or more likely continued a certain habit of inquiry when he was in a way frustrated in coming to an understanding of something, in his case cause and effect. ‘We must...,’ he said, ‘proceed like those, who being in search of any thing, that lies concealed from them, and not finding it in the place they expected, beat about all the neighbouring fields, without any certain view or design, in hopes their good fortune will at last guide them to what they search for’ (1888, 77–8). Pity he didn’t get to the right answer about cause and effect. But let me be hopeful in my own different endeavour. In fact I take it there is more than good reason for hope.
Our potentially reassuring circumstance right now is that if we need to look in another field than the two-term relation of representation, we can in fact do that without going to a wholly new field. If we have to leave the field of thoughts and wants and of representation when it is understood as being somehow only a relation between the representation and what is represented, only a parallel to language, we can in fact do that, by way of another field that is not a new field. I mean that we can stay right in and attend to the larger field that we’ve never been out of, always been in since before getting to cognitive and affective consciousness. In fact never been out of it since we began by settling our whole subject matter of consciousness in general, since we settled on an initial clarification of consciousness in the primary ordinary sense – consciousness as actual, actual consciousness. The smaller field is in the larger. Cognitive and affective consciousness, thoughts and wants, are not only representations as first conceived in relation to spoken and written language. They are not only such representations, most saliently propositional attitudes, attitudes to propositional contents, the latter being satisfied by certain states of affairs. Rather, thoughts and wants are such representations as have the further property of being actual. That is the burden of what I put forward here. That is the fundamental difference between a line of print and conscious representations. Representational consciousness consists in more than a dyadic relation. It is not purely representational, not to be clarified by pure representationism. For the contents of that contention, you will rightly expect me to refer you again to that table – to its list of the characteristics of subjective physical representations. It appears in the right-hand columns.
9 Zombie objection, changing tune, individuality, truism

Questions and objections are raised by Actualism. One is prompted by the recent history of the philosophy of consciousness and some of the science of it. Supposedly sufficient conditions having to do with consciousness, it was claimed, fail to be such. Zombies, wholly unconscious things, could satisfy the conditions, as Robert Kirk (2005; 2011) explained. Do you say that exactly the conditions for consciousness now set out in Actualism – say perceptual consciousness – could be satisfied by something but the thing still wouldn’t be conscious at all? There is a temptation to say that a kind of replica of me or you could satisfy exactly the conditions specified and yet the replica wouldn’t be conscious in the way we know about. That it would indeed be, in this different setting of reflection,
The Bloomsbury Companion to the Philosophy of Consciousness
just one of those things we’ve heard about in other contexts, a zombie. Put aside the stuff in zombie theory about metaphysical possibility and all that, which I myself can do without pretty easily. Do you really say there could be something without consciousness despite it and the rest of the situation being exactly what Actualism says is what being conscious consists in? Sometimes the best form of defence is counter-assertion because it is true. In the heat wave of the English summer of 2013, at a lunch table in a club, a medical man gave me a free opinion about diabetes. It led me, after reading up on the internet that the symptoms are thirst, tiredness, seeing less clearly and so on, to the seemingly true proposition about me that I had a lot of the symptoms. I fell into the illusion that I had diabetes – the diabetes illusion. Think of my diabetes propositions about myself in relation to the 16 propositions on the checklist on the physicality of representations and hence on cognitive and affective consciousness, and the previous 16 counterparts with perceptual consciousness. Is it an illusion that our 16 propositions do not capture the nature of consciousness in its three sides? Is it an illusion that there is something else or more to consciousness? If you fortitudinously do a lot of reading of what this lecture comes from, that labouring book Actual Consciousness with all the typos, will you share with me at least on most days the idea that a persisting elusiveness of perceptual consciousness really is itself an illusion? That it really is an illusion that there is more to consciousness than we have supposed, more than we have got hold of? I hope so. Keep in mind that there are more kinds of illusion than personal ones. There are illusions of peoples, cultures, politics, philosophy and science. Hierarchic democracy for a start. Is it possible to say something more useful quickly about and against the more-to-consciousness illusion?
Well, let me gesture at another piece of persuading. You need to keep in mind all of the characteristics of perceptual consciousness and the other two kinds of consciousness. But think right now just of our large fact of subjectivity. In Actualism, it is a unity that is individuality, akin to the living of a life; a long way from a ruddy homunculus. Think in particular of the large fact itself that your individuality includes and partly consists in nothing less than the reality now of a subjective physical world, certainly out there. Now add something pre-theoretical. It is reasonably certain, and I’d say ordinary reflection proves it, if you need what you bravely and too hopefully call a proof, that there is at least strangeness about consciousness. Consciousness is more than just different. It is different in a particular and peculiar way. It is unique. When you really try to think of it, it pushes rather than just tempts you to a
kind of rhetoric, in line with but beyond our database. Maybe you want to say consciousness somehow is a mesmerizing fact. Actualism explains this, doesn’t it? Consciousness for Actualism is those things, is on the way to mesmerizing, because in its fundamental part it is no less than the existence of a world. Actualism has this special and I’d say great recommendation that goes against the temptation of the zombie objection. As noted in the table, you get a suitably whopping individuality with Actualism, which I have not slowed down to address. Actualism implies an individuality that brings in an individual world – a real individual world not of rhetoric or poetry or Eastern mysticism but of plain propositions. It can be said, although the words aren’t exact, that with Actualism you are a unity that includes the size of a world. That definitely isn’t to leave something out. Thus, Actualism rings true to me. It gets me somewhere with consciousness. I don’t think that’s because I’m too perceptually conscious, not cognitive enough. There’s no more to the fact of being perceptually conscious than dependent external content. There’s no vehicle or any other damned thing in a variety put up or glanced at by various philosophers, including a brain-connection, sense data, aspects, funny self, direction or aboutness, a higher or second order of stuff, and so on and so forth; and none of that stuff except the existence of representation and attitude in cognitive or affective consciousness either. And the mind–body or mind–brain connection is explained as ordinary lawful connection. Do I have to try harder here? Will some tough philosophical character, maybe some lowlife psychologist, maybe even Ned Block or Dave Chalmers, say in their New York seminar that there is no news in all this verbiage? That Actualism is a blunder from Bloomsbury? That, as an Australian reviewer judged about Actual Consciousness, we all need and we’re still waiting for the Einstein of consciousness?
Will Ned and Dave say that it is a truism that we all accept already that the world, something close to the objective physical world as defined, is part of, maybe the main thing with, perceptual consciousness as somehow ordinarily understood – with another main thing in the story of it being some kind of representation of the objective physical world? I don’t mind at all being in accord with some or other truism of this sort. But it would be strange to try to identify Actualism with the truism, try to reduce Actualism to it. Even crazy. Actualism is the contention that being perceptually conscious is itself precisely a defined existence of an external world, but not the objective physical world. Actualism is absolutely not the proposition, say, that what the perceptual consciousness comes to is representation of the objective physical world. And yet do you mutter at the end of reading this lecture, as you
might have on hearing it, that what is actual when you see the room you’re in is not a room, but could be something like those bloody sense data in the past? That you only get to reality by that stock in trade of our holy predecessors? Ayer, Broad, Price, Moore, Russell. Well, Actualism is bloody philosophy for you – and so it cannot be proof beyond uncertainty. Yes, you can get help for a start from no less than the factual and moral truth-seeker Noam Chomsky and also Paul Snowdon, Alastair Hannay, Barbara Montero and Barry Smith in that volume edited by Gregg Caruso, Ted Honderich on Consciousness, Determinism and Humanity. But there’s also more affirmation in it by me of the clear good news of Actualism.
Bibliography

Block, N. (1995). ‘On a Confusion About a Function of Consciousness’, The Behavioral and Brain Sciences, 18, 227–87.
Block, N. (2007a). Consciousness, Function, and Representation, Collected Papers, vol. 1, Cambridge, MA: The MIT Press.
Block, N. (2007b). ‘Functionalism’, in N. Block (2007a), 13–103.
Burge, T. (2007). Foundations of Mind, New York: Oxford University Press.
Caruso, Gregg, ed. (2017). Ted Honderich on Consciousness, Determinism, and Humanity, Palgrave Macmillan.
Chalmers, D. J. (1995a). ‘Facing Up to the Problem of Consciousness’, Journal of Consciousness Studies, 2, 200–19.
Chalmers, D. J. (1995b). ‘The Puzzle of Conscious Experience’, Scientific American, 273, 80–6.
Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory, Oxford: Oxford University Press.
Chalmers, D. J. (2015). ‘Why Isn’t There More Progress in Philosophy?’, in T. Honderich (ed.) (2015a), 347–70.
Chomsky, N. (1975). Reflections on Language, New York: Pantheon Books.
Chomsky, N. (1980). Rules and Representations, New York: Columbia University Press.
Chomsky, N. (2003a). ‘Replies to Lycan, Poland, Strawson et al.’, in Louise Antony and Norbert Hornstein (eds.), Chomsky and His Critics, 255–328, Malden: Blackwell.
Chomsky, N. (2003b). Unpublished remarks quoted by William Lycan in Louise Antony and Norbert Hornstein (eds.), Chomsky and His Critics, Malden: Blackwell.
Churchland, Patricia (1986). Neurophilosophy, Cambridge, MA: The MIT Press.
Churchland, Paul (1988). Matter and Consciousness, Cambridge, MA: The MIT Press.
Clark, A. (1997). Being There: Putting Brain, Body and World Together Again, Cambridge, MA: The MIT Press.
Clark, A. (2011). Supersizing the Mind: Embodiment, Action and Cognitive Science, New York: Oxford University Press.
Crane, T. (1998). ‘Intentionality as the Mark of the Mental’, in Anthony O’Hear (ed.), Contemporary Issues in the Philosophy of Mind, 229–52, Cambridge: Cambridge University Press.
Crane, T. (2001). Elements of Mind: An Introduction to the Philosophy of Mind, New York: Oxford University Press.
Crane, T. (2003). ‘The Intentional Structure of Consciousness’, in Quentin Smith and Alexander Jokic (eds.), Consciousness: New Philosophical Perspectives, 33–56, Oxford: Oxford University Press.
Davidson, D. (1980). Essays on Actions and Events, Oxford: Oxford University Press.
Dennett, D. (1992). ‘Quining Qualia’, in A. J. Marcel and E. Bisiach (eds.), Consciousness in Contemporary Science, 42–77, Oxford: The Clarendon Press.
Feigl, H. (1967). The Mental and the Physical: The Essay and a Postscript, Minneapolis: University of Minnesota Press.
Flanagan, O. (1991). The Science of the Mind, Cambridge, MA: The MIT Press.
Fodor, J. (1975). The Language of Thought, New York: Thomas Y. Crowell Company.
Fodor, J. (2008). LOT 2: The Language of Thought Revisited, New York: Oxford University Press.
Freeman, A., ed. (2006). Radical Externalism: Honderich’s Theory of Consciousness Discussed, Charlottesville: Imprint Academic.
Grim, P., ed. (2009). Mind and Consciousness: Five Questions, Copenhagen: Automatic Press.
Honderich, T. (1988). A Theory of Determinism: The Mind, Neuroscience, and Life-Hopes, Oxford: Oxford University Press.
Honderich, T. (2004). On Consciousness, collected papers, Edinburgh: Edinburgh University Press.
Honderich, T. (2014a). Actual Consciousness, Oxford: Oxford University Press.
Honderich, T. (2014b). Online discussion of McGinn’s review of Honderich 2004, mysterianism, etc., URL=http://www.ucl.ac.uk/~uctytho/HonderichMcGinnStrohminger.htm
Honderich, T., ed. (2015a). Philosophers of Our Times: Royal Institute of Philosophy Annual Lectures, Oxford: Oxford University Press.
Honderich, T. (2015b). Introductory summary to Thomas Nagel’s lecture, ‘Conceiving the Impossible and the Mind-Body Problem’, in T. Honderich (ed.) (2015a), 5–6.
Honderich, T. (2017/18). Mind: Your Consciousness is What and Where?, London: Reaktion Books, and Chicago: University of Chicago Press.
Hume, D. (1888). A Treatise of Human Nature, edited by L. A. Selby-Bigge, Oxford: Oxford University Press.
Jackson, F. (1986). ‘What Mary Didn’t Know’, Journal of Philosophy, 83(5), 291–5.
Jackson, F. (1998). Mind, Method, and Conditionals: Selected Essays, London: Routledge.
Kim, J. (2005). Physicalism, Or Something Near Enough, Princeton: Princeton University Press.
Kirk, R. (2005). Zombies and Consciousness, New York: Oxford University Press.
Kirk, R. (2011). ‘Zombies’, Stanford Encyclopedia of Philosophy, URL=http://plato.stanford.edu/archives/spr2011/entries/zombies/
Lycan, W. (1987). Consciousness, Cambridge, MA: The MIT Press.
Lycan, W. (1996). Consciousness and Experience, Cambridge, MA: The MIT Press.
Lycan, W. (2003). ‘Chomsky on the Mind-Body Problem’, in Louise Antony and Norbert Hornstein (eds.), Chomsky and His Critics, 11–28, Malden: Blackwell.
Malcolm, N. (1962). Dreaming, London: Routledge.
McGinn, C. (1989). ‘Can We Solve the Mind-Body Problem?’, Mind, 98, 349–66.
McGinn, C. (1991a). The Problem of Consciousness: Essays Towards a Resolution, Oxford: Blackwell.
McGinn, C. (1991b). ‘Consciousness and Content’, in C. McGinn (1991a), 23–43.
McGinn, C. (1999). The Mysterious Flame: Conscious Minds in a Material World, New York: Basic Books.
McGinn, C. (2002). The Making of a Philosopher, New York: Harper Collins.
McGinn, C. (2004). Consciousness and its Objects, Oxford: Oxford University Press.
McGinn, C. (2007). Review of Honderich 2004, Philosophical Review, 116, 474–7.
McGinn, C. (2012). ‘All Machine and No Ghost?’, New Statesman, 141, 40.
Montero, B. (1999). ‘The Body Problem’, Noûs, 33, 183–200.
Montero, B. (2001). ‘Post-Physicalism’, Journal of Consciousness Studies, 8, 61–80.
Montero, B. (2009). ‘What is the Physical?’, in Brian McLaughlin, Ansgar Beckermann and Sven Walter (eds.), The Oxford Handbook of Philosophy of Mind, 173–88, Oxford: Oxford University Press.
Nagel, T. (1974). ‘What is it Like to Be a Bat?’, The Philosophical Review, 83, 435–50.
Nagel, T. (1998). ‘Conceiving the Impossible and the Mind-Body Problem’, Philosophy, 73, 337–52.
Nagel, T. (2015). ‘Dualism’, Stanford Encyclopedia of Philosophy, URL=http://plato.stanford.edu/archives/spr2016/entries/dualism/
Noë, A. (2006). Action in Perception, Cambridge, MA: The MIT Press.
Noë, A. (2009). Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness, New York: Hill and Wang.
Papineau, D. (1987). Reality and Representation, Malden: Blackwell.
Papineau, D. (1993). Philosophical Naturalism, Malden: Blackwell.
Papineau, D. (2000). Introducing Consciousness, Icon Books.
Putnam, H. (1975). Mind, Language and Reality, Philosophical Papers, vol. 2, Cambridge: Cambridge University Press.
Robinson, H. (1994). Perception, London: Routledge.
Robinson, H. (1997). Objections to Physicalism, Oxford: Oxford University Press.
Robinson, H. (2012). ‘Dualism’, in The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.), URL=https://plato.stanford.edu/archives/win2012/entries/dualism/
Searle, J. R. (1992). The Rediscovery of the Mind, Cambridge, MA: The MIT Press.
Searle, J. R. (1999). The Mystery of Consciousness, New York: The New York Review of Books.
Searle, J. R. (2002). Consciousness and Language, Cambridge: Cambridge University Press.
Strawson, G. (1994). Mental Reality, Cambridge, MA: The MIT Press.
Strawson, P. F. (2015). ‘Perception and Its Objects’, in T. Honderich (ed.) (2015a), 23–40.
14
Cracking the Hard Problem of Consciousness Dale Jacquette
Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else. –Erwin Schrödinger, Psychic Research 25, 1931, 91
1 Hard versus tractable explicanda

The hard problem of understanding the nature of consciousness is the hard problem of understanding the nature of time. The brain is an organ performing a variety of specific tasks at specific organizational levels. Some activities involve producing states of mind that we refer to as moments of awareness, being aware or conscious of something. This evident fact has remained so stubbornly difficult to explain that theorists, out of exasperation, have sometimes been driven to deny the existence of consciousness. Other brain activities involve body condition monitoring and maintenance that thankfully never emerge into consciousness. The problem of understanding consciousness is that of understanding the difference between these two categories of the brain’s most basic activity, and of how and in what sense conscious states depend on unconscious neurophysiological states. David J. Chalmers in the Introduction, ‘Taking Consciousness Seriously’, to his 1996 book, The Conscious Mind: In Search of a Fundamental Theory, introduces the ‘hard problem of consciousness’ category as contrasted with ‘the “easy” problem of consciousness’ (xi) in these terms:
such work addresses what might be called the ‘easy’ problems of consciousness: How does the brain process environmental stimulation? How do we produce reports on internal states? These are important questions, but to answer them is not to solve the hard problem: Why is all this processing accompanied by an experienced inner life? Sometimes this question is ignored entirely; sometimes it is put off until another day; and sometimes it is simply declared answered. But in each case, one is left with the feeling that the central problem remains as puzzling as ever.1
To take Chalmers literally, the hard problem of consciousness is to explain the presumed accompaniment of brain ‘processing’ by ‘an experienced inner life’. The proposal developed in this chapter is not to correlate as distinct but rather to identify a thinking subject’s inner phenomenologically experienced conscious life with specific brain activity. The account can be appreciated and seen to avoid obvious putative counterexamples only when the kind of unconscious brain activity at issue is explained. Consciousness on the proposed analysis appears to satisfy Chalmers’s expectations of a solution to the hard problem. To do so, the theory must ultimately explicate the concept of a conscious moment, in Chalmers’s heuristic, of experienced inner life. Individual conscious moments are then streamed in succession to produce a sustained span and flow of consciousness over time. We are more likely to begin phenomenologically in the opposite direction, starting with streaming consciousness and then reasoning our way down in theory to individual conscious moments or states of consciousness. We might divide consciousness into perceptive, cognitive and affective types or modes, or, to illustrate, regard cognitive as the superordinate category subsuming the perceptive and affective.2 It will do for present purposes to arrive at a lucid characterization of what constitutes a moment of consciousness, of conscious sensation, perception, and reasoning, memory, and the like. We answer Chalmers’s hard problem of consciousness by explaining the relation between neurophysiological information processing and the occurrence of phenomenologically experienced lived-through conscious events. The suggestion developed in this inquiry is that conscious experience supervenes emergently on a particular specifiable kind of neurophysiological information processing.
It exhibits the same internal logical structure as the predication of properties to objects in the most general elementary linguistic constructions capable of truth or falsehood, of being meaningful in the sense of supporting a truth-value.
2 Supervenience of consciousness on neurophysiology

Consciousness, like the brain’s unconscious autonomic information processing in regulating breathing, heartbeat, digestion, and the like, supervenes, in the minimal sense of being for whatever reason ontically dependent, on the brain’s active neurophysiological information processing of electrochemical signals. What is distinctive of consciousness is that it is a first-person subjective experience of the world and the mind’s own processed cognitive contents. The hard problem of consciousness is accordingly the problem of understanding first the nature of experience. We speak often, philosophically advisedly or not, of experience even in the case of crudely mechanical entities. An unprotected iron surface left out of doors will soon experience rust or rusting. Home sales are the first to experience the effects of a national economic upturn. And endlessly the like. All such usages are presumably metaphorical. What happens is that if bare iron is left out in the rain then it will be possible to experience the metal beginning to rust red. When there is a national economic upturn, then we consumers and economic trend followers will probably experience something distinctive about real-estate market statistics. The hard problem of consciousness is not especially hard to state. Try completing this sentence, ‘To be conscious is …’. It is not an easy thing to fill in the blank with a definition or analysis of the concept of consciousness. The fact that we find it difficult to describe consciousness without ladling up synonyms is itself philosophically noteworthy. We can say that to be conscious is to be aware, to be deliberate, more vaguely, to be in the moment, that kind of thing. Since these are conscious states, however, they go no distance at all towards explaining what consciousness means in the most general sense of the word.
What is it (not, what is it like) to be conscious, aware, deliberate, in the moment, or the like? That is the hard problem of consciousness in one sense. Finding neurological correlations with specific conscious states, assuming that we usually know which these are, relying in experimental practice on the phenomenology that psychology as a natural science pretends to despise, is an empirical field of accumulating knowledge. The fact of correlation unfortunately can never imply exactly what it is that is correlated, in this case conscious states. We know the difference at an intuitive level, but as Socrates complains in trying to understand the ideal Forms of virtue, beauty, and the like, although we can often distinguish between what is conscious and what is not conscious with what we confidently take to be reliably correct judgements, we cannot discursively explain the difference in terms of necessary and sufficient conditions
in a watertight definition. We cannot articulate without tautology or circularity exactly what consciousness is at a theoretical level of philosophical explication. What is it to be conscious? What, phenomenologically speaking, to begin there, is a state of consciousness? If we cannot respectably answer these basic questions, what do we imagine ourselves to be building upon in developing a metaphysics for a cognitive science of consciousness?
3 Dynamic attribution (DA), perception and time

Consciousness as a certain kind of experience is in one sense an occurrent physical phenomenon. It is something the material brain does, that it makes or creates. In that respect it is part of the world’s causal nexus, even if not all of its properties are fully explainable in a reductively materialist physicalism. Philosophy of consciousness has among its slateful of tasks the difficulty of intelligibly modelling individual moments of consciousness or conscious states in streaming consciousness. It must try to understand these as brain occurrences containing all the rich contents we experience. More importantly, it must explain the concept of experience or what it is to have experience of anything.3 What model shall philosophy of consciousness consider as explaining the meaning of consciousness? The suggestion proposed and defended in this chapter understands individual conscious states within the flow of streaming consciousness on the linguistic model of predicating or attributing properties to objects. The dynamic attribution (DA) model reflects the mind’s predication or attribution of properties to objects as a brain activity that is sometimes also expressed in language. The embodiment of thought structures in linguistic structures makes linguistic structures the best and arguably the only hunting ground for understanding the nature of consciousness. We discover in language what was first in thought.

Dynamic Attribution (DA) Model of Consciousness: Consciousness is the brain’s unconscious (autonomic) dynamic attribution of cognitive, including perceptual and affective, data as properties to passing moments of objective mind-independent real time.
DA assumes streaming consciousness and arrives analytically at the existence of individual states or moments of consciousness in which collected information is attributed as a package of properties to a single moment of time. The fact that the brain when appropriately active autonomically successively attributes data
clusters to passing moments of real time explains conscious experience as the neurophysiologically supervenient conscious thinking subject’s living through the brain’s dynamic data attributions of all the content to be found in streaming consciousness or individual conscious states to rapidly passing individual moments of time. Experience on the DA model is the brain’s packaging up information when it can and attributing it to a moment of time. The interesting questions about the property–object pairing in the DA model are: what are the properties, and what are the objects? The properties are not unexpected. They are the qualia of sensation and perception stereotyped in Red-here basic linguistic constructions. The surprising innovation in DA is its choice of the objects of property attribution. The idea in a nutshell is that the brain attributes as properties filtered packages of data collected in sensation and other affective states, perception and cognition, however these are sorted, by a variety of external and phenomenological input sources. The data packages as rapidly as thought are attributed by the brain to appropriate available existing intended objects during passing moments of time. The third component of the standard Red-here-now attribution protocol, the ‘now’, becomes in DA the intended object to which in the simplest kind of case the property of something’s being red is attributed. Now, on the DA model, that passing moment of time of which we can be conscious, is the object to which the property of something’s being red in a given place is attributed. Brain processes are conscious or unconscious, whatever this turns out to mean. Unconscious brain processes are known clinically and experimentally to initiate and regulate many body functions of which in some but not all instances the resulting output may later become accessible to conscious thought, memory, reflection, computation, and other brain processes.
It is assumed in DA that the brain can attribute a property to an object in a similar event of unconscious information processing. The brain’s unconscious act of attributing a property data cluster to a passing moment of time is the specific brain activity identified in the DA model as consciousness. To be conscious according to DA is for the brain to be actively engaged in attributing distinct data packages to successive moments of time. Conscious experience is identical to the brain’s dynamic attributing of perceived or otherwise cognitively accessible property instantiation packages of various sorts to concurrently passing moments of time. It is this succession of data attributions through which the conscious thinking subject lives in experiencing the contents, qualia and intentionality of consciousness. If it is asked how the brain accomplishes its DA task, we recall that the DA model is a schematic explanation. We do not need to say that the brain actually
uses an internal language or language of thought, innate logical and grammatical categories, or anything understood as instantiating actual linguistic practice, in order to maintain that our best model for understanding what it is the brain does in producing conscious experiences is to actively say, effectively, in and as a moment of conscious thought, ‘There is something red here now.’ Seeing the colour, seeing that something is red, is the moment of consciousness just as it is the attribution of red experienced in this location to a passing moment of time. If we ask what is the brain’s act of predicating or attributing a property to a moment of time, the argument is that it is the brain’s attributing an instantiation of red or redness at just this moment of time. The analogy is meant to map onto whatever it is the brain does in unconsciously linking together perceptual information with the concurrently passing moments of time. Experience, conscious ‘inner’ mental life, the active phenomenological field, on the present DA analysis, is nothing other than the coordinated succession of the brain’s attributions of information packages to sequentially passing moments of real objective mind-independent time. Consciousness is experienced as the fast-acting process of these property attributions being made. When we are unconscious, the brain does not perform this activity, or does so only at a highly reduced sub-experiential level. Deep sleep, general anaesthesia, coma, and other natural and induced states of unconsciousness, are off the books for attributions of property clusters to passing moments of time. They are lost, occurring only outside of consciousness, to the process of which we as thinking subjects are aware. The brain is either not in receipt of input information to package together and attribute to passing moments of time, or its property-attribution faculty is temporarily out of commission, when normally conscious thinking subjects are not conscious.
4 Scientific brain research in understanding consciousness

The brain’s linking together of perceived and other cognitively available properties is modelled in DA as attribution to concurrently passing moments of time. They are the properties known to be instantiated at this or that time. What we call conscious experience is the brain’s real-time activity of attributing properties in the form of data packages to times. DA recognizes states of consciousness as representing matter-of-factly, This now, over and over again, as both ‘This’ and ‘now’ change with the (1) flowing passage of objective mind-independent time; (2) movement of the reporting conscious thinking
subject in space, geographically and through social environments; (3) conscious thinking subject’s cognitive backfeed from ongoing consciousness and memories of past moments of consciousness, and even more abstract reasoning processes. A tiny fraction of conscious states are selected out for active, even frequent, memory access; others may be on call. This now is ineluctably indexical in both ‘This’ (data attribution) and ‘now’ (passing moment of time) components. To define an individual state of consciousness as a single attribution of data to a passing moment of time in This now construction seems not only unavoidable in basic form, but when it is properly mined the analysis facilitates interesting solutions to otherwise intractable problems in the philosophy of consciousness. The brain makes a This now attribution of data to a moment of time over and over again to different times. Always a different now. At least in unnoticed ways, always a different This. The individual attribution frames are set together in real-time motion. They are guaranteed to be invisible to any conscious examination by the objective passage of mind-independent real time as the brain’s unconscious objects of attribution. The technological advance from old-fashioned celluloid moving picture manufacture to digital storage and projection yields a still stronger analogy. Streaming consciousness as a run of individual conscious states, like a digital film even in production and editing, has no detectable frame seams. Consciousness streams while and insofar as it streams without generally being aware of the existence or structural features of constitutive individual conscious states. Consciousness cannot become perceptually conscious of any stitching together of individual conscious states or moments into streaming consciousness, because it is not merely a subject for consciousness, but is identical with the passage of real-time streaming consciousness.
It is in each case subjective streaming consciousness itself. Note that even if the DA model is not accepted, there remains an obligation for any competing theory to better explain the otherwise apparently accidental relation between time and consciousness. Perhaps the contents of streaming consciousness are merely associated with or have reference to moments of time among their properties, rather than standing as objects of information attribution. On such a view, the properties of sight-now, hear-now, smell-now, feel-now, taste-now do not attribute the properties of sight, sound, and the like to the predication object now, a total This for a passing moment of time, construed by DA as the attribution object now. What then are the predication objects in a theory that denies the objects are passing moments of time?
Cracking the Hard Problem of Consciousness
We are getting ahead of ourselves, but the non-temporal objects of consciousness might be the real external things of which consciousness is perceptually or in other ways cognitively conscious. Consider a teacup’s properties. It is glazed ceramic, thin-walled, still mostly full of ginger tea, known by its aroma across the expanse of the desk. It also has the property of existing at a certain time. Obviously the cup did not always exist, and one way or another it will eventually cease to exist. It has the property of having existed at this moment or that later moment. Instead of making time an object of predication or data attribution, a rival theory might consider time in the sense of existence at a time to be a property in the data package, along with the sight-now and smell-now of an objectively existent teacup that does not require attribution of information now to that passing moment of time. The natural interpretation of such hybrid perceptual-temporal properties is that they are attributions of perceptions to an occurrent moment of time. Sight-now is a visual information package attributed to something. If not to the passing moment now, then presumably to the physical object responsible for the conscious subject’s perception. The teacup itself has the property of white-now and thin-walled-now and of existing at a certain span of time. The cup external to thought is the attribution object, on this counter-proposal, and time in the sense of existence at a time, along with temporally qualified empirical data about the cup, are among the cup’s distinguishing properties. 
The trouble with the ‘realist’ account of non-temporal attribution objects, making the teacup rather than a passing moment of time the object of property attributions, is that the logically contingent physical objects of such true predications as that the teacup was white and on at least one occasion held ginger tea will eventually cease to exist, leaving such truths as that the cup was white and gingery without truth-makers among contemporaneously existent facts. What cup do we mean, once it is gone? What cup can we mean when the cup exists no more? How do and how can we refer to it by making it an object of predication or property attribution? If we suppose time to be a real physical dimension, then past moments of time have more resilience to passing into the past than a porcelain teacup. A moment of time, even forever after it has passed, serves as an intended object of reference for many kinds of predications, as we find in historical narratives. It is true, even though the moment is no longer present, that World War I began on 28 July 1914. It remains a true property of that long-past moment in 1914 that World War I began more or less exactly then, sometime on that day. We sometimes predicate properties of moments of time, including long-past times, so we should be able in principle
to develop and apply the semantic apparatus for attributing properties, also in the form of ambient information packets, to times. There is a disadvantage in making contingently existent complex physical entities other than moments of time the predication objects for information loads, because truths about them temporally outlast their existence. Moments of time continue to exist when they have passed from future through the present to the past as past moments of time. We number them by year and month, day and clock time, from a fixed referent moment long since passed, generally the birth or death of someone considered sufficiently important, none of which would make sense if we could not truly predicate properties of past moments of time. We count and number past moments of time, make them the points of reference of histories, memoirs and the like. Even a daily diary is written after the facts recorded have occurred. We refer to moments of the past without undue philosophical concern, which appears to clear any obstacle to the present proposal’s general idea of making passing moments of time the predication objects of ambient and cognitive information packaged together in individual conscious states as This and attributed to each rapidly proceeding and immediately temporally receding now. The brain makes This nows repeatedly and more or less reliably. We can navigate and apply the information available in these fields. Their succession, phenomenologically attached directly to passing moments of objective mind-independent real time, produces streaming consciousness. It is consciousness as we ordinarily think of it, awareness of our surroundings, that these facts obtain at this moment, that this moment is always fading into the past, and that another, sometimes with new facts and new information, seemingly continuously replaces what had been. 
It is nothing more than the brain’s autonomous labour of collecting data from whatever sources are available and predicating the bundle of the passing moment of time at which those properties are experienced. If we further assume that there can in principle be a fully physically–causally–functionally reducible explanation of all unconscious brain processes, then we can expect that the future of scientific brain research, within the framework of the proposed philosophical model, will eventually discover these among other functionalities in the brain’s unconscious control of information processing. The experience of streaming consciousness as moving with or caught up in flowing time is also thereby explained. Conscious states modelled on attributions of information input packaged with other cognitively available contents are inseparable from the concurrently passing moments of time to which the brain attributes whatever a thinking subject is aware of at any given moment of consciousness. The DA model
explains consciousness as itself nothing other than the brain’s activity of dynamic attribution of data clusters to passing moments of time. That real-time activity of the brain’s, predicating data compilations to passing moments of real time, passing by at exactly the highest speed that consciousness can register, is not merely something of which consciousness can be conscious, but on the proposed analysis it is identical with consciousness. Being conscious is nothing other than the brain’s active dynamic attribution of information to time. The simplest form that the proposal includes is a state or moment of consciousness, This now. This nows are sequenced jointlessly together by the brain like the frames or digital scenes in a movie presented at sufficient speed to create streaming consciousness. They do not just luckily happen to have the same internal structure so that streaming consciousness can be about This nows. Rather, streaming consciousness, DA theory maintains, is identical with the passing moments of time loaded by each individual brain with the prescribed kinds and samplings of ambient perceptual, affective and cognitive data.4 We are encouraged to imagine moments of time marching off into the past, each now burdened attributively (passing moments of time in streaming consciousness do not literally carry anything) with what each brain actively predicates of it. Or consider canoes promenading at regular intervals downriver. They are filled with cargo as they pass a certain juncture, after which they disappear with the current of the stream. The active dynamic attribution of information to time on the proposed DA analysis is supposed to be identical with consciousness. Streaming consciousness, as we often think exclusively of it, is normally a regularly running sequence of individual conscious states or moments, strongly (the proposal says attributively or predicationally) related to the objective mind-independent passage of real time. 
We can try in a quiet environment to concentrate on a single conscious state or moment in the passing stream, although it is a phenomenologically challenging exercise. We will want to know in the process why and how it is that consciousness is so intimately related to time and consciousness of the passing of time, as emphasized in classical phenomenology, a topic that remains replete with unsolved and insufficiently recognized conceptual problems. Neurophysiology and cognitive science have as much prospect of unlocking the attribution of data to time as any other unconscious autonomous brain activity. As far as the attribution-of-information-to-time philosophical model is concerned, we may easily suppose that these correlations of neurophysiological with conscious events can be determined with enough experimental care in what Chalmers describes as among the ‘easy’ correlational problems of consciousness.
Given enough time and resources, what is now unknown about how the brain undertakes all of its unconscious neurophysiological information processing appears eminently tractable in the future of experimental psychology. Even the brain’s attribution-of-information-to-passing-moments-of-time model opens the potential for scientific inquiry at an ‘easy’ level of scientific theorization, in determining brain-to-afferent-nerve-ending feedback loops involved in regulating heartbeat and respiration, digestion, hormonal release and other unconscious autonomic phenomena, and also, as the proposal prescribes, in collecting information input and attributing it in articulated bundles to passing moments of time.
5 Time as the hard problem of the hard problem of consciousness

That leaves, as the previously mentioned remaining hard part of the hard problem of understanding consciousness, the very hard problem indeed of understanding time and the brain’s unconscious relation to the intermittently conscious, concurrently passing moments of objective mind-independent time among conscious thinking subjects. The explanation, based on the attribution of data clusters to concurrently passing moments of time, is intended as a first effort to consider how these concepts might best fit together in order to demystify the hard problem of consciousness. We attempt to do so here by dividing the hard problem of understanding the nature of consciousness into these two parts, followed in each case by parenthetical commentary: (1) How does the brain combine perceptual information or sensory input with concurrently passing moments of time to effectively express in thought such predications as ‘This is happening now’ or ‘This now’? (This part of the hard problem can be philosophically demystified by the model explaining conscious thought as the brain attributing perceived, cognitive and affective properties to time increments, and in principle as an unconscious autonomous brain phenomenon in an advanced neurophysiology and cognitive science.) (2) What is time? How does the brain single out concurrently passing moments of time, effectively, to name them as the objects of property attribution for perceived and related cognitively accessible properties? (This on the DA analysis is the stubborn remaining hard part of the hard problem of understanding the nature of consciousness. What makes consciousness
hard to understand is the phenomenology of streaming conscious experiences situated in what is further metaphorically described as a unidirectional current of concurrently passing moments of time. What the brain does with respect to time, according to the DA model (emphasizing throughout that it is only an explanatory model of consciousness), is to name the individual concurrently passing moments of time, t1, t2, t3, … etc., so that they can serve as distinct intended objects on which to attributively deposit the packaged perceived and other cognitive properties of the world collected by the senses or otherwise available for unconscious information processing resulting in conscious experience. What the brain actually does in its neurophysiological structures that the DA model might be analogizing is a scientific question for hard problem component (1) above.)
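The commentary’s picture of the brain naming moments t1, t2, t3, … and depositing packaged properties on them can be caricatured in a few lines of code. This is only an illustrative sketch of the DA model’s bare form, not a claim about neural implementation; the names ThisNow and attribute_stream, and the sample property strings, are the present editor’s inventions rather than anything in the text.

```python
from dataclasses import dataclass
from itertools import count

@dataclass(frozen=True)
class ThisNow:
    """One conscious state on the DA model: a data cluster ('This')
    attributed to a single passing moment of time ('now')."""
    moment: int       # stands in for the t1, t2, t3, ... of the text
    data: frozenset   # the packaged perceptual, affective and cognitive properties

def attribute_stream(property_packages):
    """Pair each incoming data package with the next passing moment,
    yielding the seamless succession the text calls streaming consciousness."""
    for moment, package in zip(count(1), property_packages):
        yield ThisNow(moment, frozenset(package))

frames = list(attribute_stream([
    {"sight: white teacup", "smell: ginger tea"},
    {"sight: white teacup", "sound: traffic"},
    {"memory: yesterday's walk"},
]))
print([f.moment for f in frames])   # always a different now: [1, 2, 3]
```

Each ThisNow is a distinct attribution precisely because its moment differs, mirroring the model’s point that even an unchanged This attributed to a different now is a different conscious state.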
The DA model can only be recommended by relying on its services in trying to explain aspects of the phenomenology of conscious experience. These are best calibrated in sync with brain activity in producing a malleable mental video record of the situations through which a thinking subject navigates. They are not merely the abstract content of such attributions of properties to time, but events in time, conscious experiences themselves. Such predicational packages attributed to concurrently passing moments of time can bring together into one moment the qualia, feeling and content of any experience, including the experienced contents of emotions, memories, acts of imagination, calculations and prominently the contents or qualia of perception, all such things of which we are said to be conscious. Passing moments of time as reference predication objects are like the calendar-inscribed pages of a daily diary. The facts of the day are recorded much as exactly this passing moment is inscribed as a conscious state in a subject’s conscious life. The facts are like the properties in an information package, and the diary pages are like the passing moments of time onto which each split-second’s information intake is attributed in rapid succession. This now is equally a suitable description for keeping a diary with each passing day as for living through the moments of conscious experience. What consciousness adds to the environmental information brought to the brain by the sense organs is the element of time as the objective object basis for property attribution. The record, even when it is only instantaneous, is that there has been This now. The continuity of experience as we know it phenomenologically occurs when instantaneous snapshots of sensory input are run together into a flow of data packages that we know as streaming conscious experience. The DA model invites us reasonably to suppose that other kinds of animals are also
likely to have similar kinds of conscious experience, however different from ours in content and modality. If consciousness is the succession of conscious states that a thinking subject lives through in time, then non-human animals with a sufficiently complex neurophysiology should be expected to be capable of a similar unconscious packaging and attributing of information to a series of moments in time. Time and consciousness have always been philosophically inseparable. The problem has been to understand exactly how they are related. Time is essential, indispensable, to consciousness if a conscious moment in the case of each conscious thinking subject is the subject’s brain’s attribution of data packages to rapidly passing moments of time. The DA model sustains the expectation that the same kind of predication of information by a non-human brain to objective passing moments of time underwrites the consciousness of conscious higher animals, and even animal self-consciousness after its kind.
6 Reductive explanations of unconscious brain processing

We begin with a healthy normally functioning brain in a healthy normally functioning human body adapted to or in any case making the best of things in a sufficiently supportive external environment. The neurophysiological events in the brain are the supervenience base of interest in understanding the brain’s unconscious activity in regulating heartbeat and other autonomic body functions. These mechanisms evolved and have been passed down the vertebrate chain aeons before there were mammals, let alone primates like ourselves, inhabiting the planet. There is no hint of a self or soul underwriting any of this inherited brain monitoring and regulation of unconscious life-maintaining events. We speak only of information collection and attribution as data clusters to intended objects. It is a matter of pairing or subjective association. If we are considering philosophical questions about the nature of consciousness, then we can take it for granted that the brain’s unconscious activity will some day be fully explainable in physically reductive information-processing mechanical terms. If we presuppose the supervenience of unconscious autonomic brain activity, of which we are seldom aware, on normally functioning neurophysiology, then we may already have all that we need at hand to explain the concept of consciousness as dynamic attribution on the DA model in such a way as to dissolve all but the stubborn concept of time from what has come to be called the hard problem of consciousness. Understanding consciousness
is made difficult only then by the difficulty of understanding the nature of time and passing moments of time as objective mind-independent objects of property attribution. This admittedly remains a formidable challenge, but it is a more general metaphysical problem rather than a burden specifically for the philosophy of mind to shoulder. What comes next after unconscious brain activity is the experienced content and continuity of perceptual experience and other cognitive occurrences, which the present analysis maintains are the result of the brain’s unconscious attribution of perceptual and other cognitive information packages to concurrently or near-concurrently passing moments of time. This now. This (something else) now (the next instant later). And so on. The brain on the DA model is like a well-oiled machine churning out real-time attributions of properties to objects merely by active association. The packages of information are rapidly strung together one after the other like beads along a chain. It is everything to be found in a single diary page of awareness if you stop all other activities, freeze the moment, and concentrate on what you are aware of as happening in your immediate surroundings, visited irrepressibly despite noble efforts at isolation by memories, calculations, imaginative scenarios, fantasies, intentions, plans, beliefs and desires. Moments of consciousness manufactured as dynamic individual data cluster attributions unconsciously by the brain according to the DA model are not to be confused with the contents experienced in or by thought. They are rather the moments of conscious experience themselves just as they are lived through by a conscious thinking subject whose brain is actively making attributions of information to time.
7 Cinematic phenomenology of internal time consciousness

Consciousness as the brain’s unconscious attribution especially of immediately perceived properties to the concurrently passing moments of time suggests a cinematic phenomenology. What are described as conscious moments are precisely those in which the brain is doing its attributing of sensation and other available input properties to concurrently passing moments of time, like the individual frames in an old-fashioned pre-digital movie theatre. It might be objected that the cinematic analogy is imperfect because in the case of consciousness there ought to be the analogue of an audience that views the movie. If that adjustment is allowed, then consciousness becomes the thoughts of the viewers of the movie as successive packages of perceptual and
other information that the brain unconsciously puts together are experienced, attributed to the concurrently passing moments of time. They are presented on an internal stage for an audience to consider and approve or disapprove. Such a requirement offers no advance, because we would then need to understand the consciousness of individual theatre audience members. Consciousness on the proposed DA model is not the movie but the living thoughts of thinking subjects engaged as active participants in rather than merely passive observers of the movie. It is this ineliminable dynamic activity of consciousness that the DA model does not try to explain, the objection continues, in attempting to reduce the viewer to the viewed. Nevertheless, it is nothing other than the concept of consciousness itself. This is a criticism to be taken seriously. However, the proposal is that streaming consciousness is like the cinematic progression of individual frames or digital connections of information, moving so swiftly that no interruptions are perceived; interruptions occur only when the brain stops producing attributions, during periods of deep sleep, anaesthesia, coma, and the like. The proposal accounts for such lapses as times when the brain relaxes its attribution-of-properties-to-time responsibilities. The unconscious mind is free of time, although it continues to age, free of the conscious mind’s constant information barrage. The DA analysis of consciousness as the brain’s unconscious autonomic attribution of mostly perceptual property information to passing moments of time accounts for the sense of the combined perceptual and temporal continuity of conscious experience, the flow of thought that is sometimes referred to metaphorically but nonetheless phenomenologically insightfully as a stream of consciousness. 
Such attributions are what make even higher non-human animals conscious, despite their lacking a language or typical human conceptual framework for more sophisticated consciousness and conscious experiences. The cognitive accomplishment of consciousness is only for the brain behind the scenes of consciousness to package together perceptual and other informational data as it becomes available for attribution in streaming consciousness, each package to a concurrently passing moment of time. Analytically, consciousness studies that take streaming consciousness as their starting place arrive only derivatively at individual conscious states or moments of consciousness; a state selected from the stream is an abstraction. However true it may be that streaming consciousness is made up of individual conscious states fastened together with the same regularity as the consciously perceived passage of time, we need not expect to be able to isolate individual conscious states through force and training of will. We may nevertheless learn something worthwhile from the
effort to come as close as we can to isolating a single moment of consciousness. We should discover among other things that even if one succeeds in latching onto an isolated moment of consciousness to examine phenomenologically at one’s leisure, the same fleeting ephemeral example can function in our explanations thereafter only as memory and abstract concept. It will serve at best as illustrating historically the recognition of an individual moment of perceptual consciousness attained that quiet day. The fact that we naturally want to hang on to the moment once we have it is enough to show that we do not yet understand the evanescent nature of individual conscious states caught up in streaming consciousness. We are not conscious of the brain doing cognition such a tremendous service, keeping track of whatever the body and thought report, primarily in the brain’s own interest. Consciousness is nothing other than the brain’s performing this invaluable function. Among its innumerable other duties, primarily in the brain’s own interests, there must be acknowledged all the constant packing and distributing of information to some objects or other, which the theory hypothesizes as the passing moments of time. The brain activity supervenience base must be unconscious, subconscious, purely electrochemical and neurophysiological, functionalist in information-processing architecture, or we cannot speak of information processing. The activity of attributing information to time, DA theory posits, is itself (identical with) a thinking subject’s relevant episodes of consciousness. Streaming consciousness has indiscernibly exactly the speed of phenomenologically perceived passing time on the analysis, because consciousness by definition always moves forward at a ratio of the individual brain’s unconscious cognitive processing stride plotted against the local objective speed of time. 
That no one is presently in a position to perform these calculations realistically in no way infringes on the conceptual analysis the proposal advances. The DA structure is clear enough. It is the attribution of properties to objects achieved by the brain in real time. The kicker is the theory’s making moments of time the intended objects of the brain’s attribution of a filtering of properties available to unconscious information processing at just those times when correlated moments of perceptual consciousness are experienced. Why moments of time, as though the brain knows what these are, how to individuate them and single them out for reference, without which the attribution or predication of information property packages, data clusters or the like is only heuristic? The answer has already been partially given. Streaming consciousness is not merely correlated with time remarkably passing by at the same rate of speed, but rides the directional arrow of time as the active predication of information to real-time moments.
The brain’s active predicating of properties to moments of time is identical to the subject’s conscious experience of exactly those properties experienced in exactly those same passing moments of time. There should be a way to measure the maximum gap between passing moments of time that need to be selected for attribution of information clusters in order for consciousness to remain, to consciousness, seemingly continuous over the run of time. It would be interesting but not philosophically essential to know how much play there is in the passage of real time for the brain to maintain the internal attribution of properties running smoothly. Consciousness is living through those active attributions of information properties to time, if the DA model of consciousness is correct. The conscious thinking subject experiences the brain’s active unconscious attribution of perceived, affective and cognitive properties to the concurrently passing moments of time as the flowing moments of our individual subjective streaming consciousnesses. To speak of experiencing or being conscious of the brain’s unconscious activity, by contrast, would require, impossibly, that the same brain be conscious of its simultaneously unconscious states. We are compelled consequently to consider inquiry into the metaphysics of consciousness as grounding itself instead on a DA model. External physical entities as predication objects, as comforting as they may be during our expected lifetimes as referents for the predication of properties including time and temporally relative properties, are sure not to outlive the truths told about them. There must be truths about persons involved in world events in 1812 that remain true today, even though these contingent entities have long since ceased to exist, preventing the hypothetical truths from being truths about any existent things. 
The truths themselves disappear when their truth-makers cease to exist, when the physical objects of which propositions were once true disintegrate beyond the possibility of reference and true or false predication of properties. The proposal that passing moments of time could be the objects in a property-object DA model of consciousness accordingly deserves consideration as an alternative to conventional ways of explaining consciousness that unsurprisingly generate confusions from inherent conceptual vagueness.
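The text’s earlier suggestion that there should be a measurable maximum gap between selected moments, beyond which the stream would no longer seem continuous, invites a simple numerical caricature on the model of cinema’s flicker-fusion rate. Everything concrete below, the function name, the timestamps and especially the 0.05-second threshold, is a placeholder assumption of the present editor, since no such figure appears in the text.

```python
def seems_continuous(attribution_times, fusion_threshold):
    """True when no gap between successive attributions of data clusters
    exceeds the threshold below which the stream would appear seamless."""
    gaps = (later - earlier
            for earlier, later in zip(attribution_times, attribution_times[1:]))
    return all(gap <= fusion_threshold for gap in gaps)

# Cinema fuses still frames at roughly 1/24 s; the analogous figure for
# conscious attribution is unknown, so 0.05 s is purely illustrative.
smooth = [0.00, 0.04, 0.08, 0.12]
print(seems_continuous(smooth, 0.05))            # True: no seams noticed
print(seems_continuous(smooth + [0.40], 0.05))   # False: a perceptible lapse
```

The point of the caricature is only that the continuity question is empirically well formed: given timestamps for attributions, seamlessness becomes a checkable property of the sequence.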
8 Attributions of properties to times

The conscious event of seeing something red on the DA model is one in which the brain predicates the eye’s perception of redness to a specific concurrently passing moment in the flow of time. The brain registers equivalently the
experience, Red here (This) now, or There is something red here (This) now. Language aside, the mechanism must work something like literal predication for any perceiving animal. Conscious states, as even a relatively intuitive untrained phenomenology reveals, also occur at more complex levels. They can in principle involve the brain’s predication of any property, however structurally complex, internally or relationally, to a moment of time, as to what is happening now and now and now again. An episode of conscious awareness can involve the predication of many perceived occurrences in relation to others, to their values and inferential or other associational meanings, and ultimately everything of which a conscious thinking subject can be conscious. The property of being such that 2 + 2 = 4 can be predicated of an instant or span of time, and, if we think there is such an uninstantiated property, so can the property of being such that 2 + 2 ≠ 4. Whether the complex predications of some moments of consciousness as experienced phenomenologically can be logically, semantically, or otherwise formally reduced to a function of the most basic predications like Red here now is the fundamental quest of logical atomism applied to the theory of consciousness in the philosophy of mind. Whether or not the predications of properties to moments of time that constitute individual states of streaming consciousness are formally reducible or inherently chunked, we are not further conscious of the brain’s making attributions, except in experiencing the concurrently passing moments of consciousness themselves in the brain’s unconscious activity of delivering information to its processing networks. We often think consciously in such moments, This, or even All this, is happening. This is happening now. 
On the proposed DA explanation conscious states are nothing other than the brain’s attributing a typically rich assembly of different kinds of properties to the present moment of time, as they are usually thought to flow past in a stream of consciousness. What, then, is it for the living functioning brain to ‘attribute’ properties to peculiar predication objects, concurrently passing moments of time, with all the properties of an experience predicated of them? How does the brain dare attempt such a thing? The DA model implies that the brain working with physically embodied perceptual and other available cognitive and affective information attributes a data cluster to a moment of time. A critic reasonably but not obviously detrimentally asks: Does the brain use names for concurrently passing moments of time, in order to say that {d1, d2, d3, …, dn} are properties of the streaming moment of time a? It presupposes rather a lot to name, although the brain as a semantic
engine is no slouch. Surely the brain does not literally proceed on the basis of such naming. We can label the brain’s nows as now1, now2, now3, etc., if we are so inclined, without supposing that the brain does anything of the sort. The brain can be understood as proceeding on the basis of a superficially undifferentiated now, now, now, allowing the positioning of the nows themselves to serve as unequivocal designators of the distinct concurrently passing moments of time needed to serve as predication objects for perceived and other cognitively accessible properties of which consciousness can be aware. The brain according to DA theory deploys prodigious calculating efficiency in achieving these attributions of property clusters to passing moments of time. What is described is nevertheless purely mechanical, and in terms of speed and magnitude of symbolic tasks, the DA model makes no greater demands on the brain’s cognitive efficiency or information-processing workspace than other acknowledged autonomic initiation and maintenance functions over which the brain exercises unconscious control. If the explanation appears fantastic, what and where is the better account to be found? What else is consciousness except the attribution of the properties of things said to be consciously experienced to the moments of time at which they are experienced?
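The superficially undifferentiated now, now, now can be modelled without any naming at all: position in an ordered record is designator enough. The following sketch is only the present editor’s illustration of that point; the function attribute and the toy property strings are not drawn from the text.

```python
# The stream is an append-only record; no moment carries an explicit name.
moments = []

def attribute(data_cluster):
    """Attach a data cluster to 'now'. The cluster's position in the record
    is its only designator, matching the text's suggestion that the
    positioning of the nows themselves serves as unequivocal designation."""
    moments.append(frozenset(data_cluster))

attribute({"red", "here"})
attribute({"red", "here", "warmer"})

# Externally we may still label the nows now1, now2, ... from position alone.
for position, cluster in enumerate(moments, start=1):
    print(f"now{position}: {sorted(cluster)}")
```

No names are stored anywhere in the record; the labels now1, now2, … are imposed only from outside, by enumeration, which is all the distinctness the predication objects require.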
9 Time as property, not object of predication

Could we do as well by reversing the property–object application? Might we let the nows be the properties predicated of perceived objects or the heres? Supposing we could do so at no theoretical disadvantage, the main thesis advanced in cracking the hard problem of consciousness is unaffected. The burden of argument in the present exposition is the proposition that understanding the nature of time is the hard part of the hard problem of understanding the nature of consciousness. The remaining part of the problem can be intelligibly modelled linguistically and hence demystified as the brain’s equivalent of predication, the unconscious functioning of attribution of properties to objects, which it should be open in principle to neurophysiological and cognitive science to disclose. We begin to crack the hard problem of consciousness by explaining the concept of consciousness and how it is related to its neurophysiological supervenience base for emergent conscious events possessing such explanatorily emergent properties as qualia and intentionality. The hard problem of understanding consciousness is the hard metaphysical rather than philosophy of mind problem
Cracking the Hard Problem of Consciousness
277
of understanding the nature and especially the mind-independent objectivity of time. Moments of time in and of themselves do not seem like grammatically promising properties to be predicated of anything. Without much paraphrastic licence we can transform moments of time as objects to the property of being an occurring moment of time. That metaphysical obstacle cleared, what are the prospects of including times among the properties of perceived objects, rather than predicating perceived properties to passing moments of time? Assuming we could have it either way, there might seem to be nothing to prefer in making moments of time the brain’s attribution objects as DA declares, rather than being included among the properties of perceived objects in a counter-proposal. Innocent as the reversal of property–object, predicate–name roles appears, however, and as explanatorily advantageous as the reversal might turn out to be, the choice deprives the account of one of its most powerful explanations of the experience of being in the passage of time, of immediately perceiving the time-contexted properties of perceivable things, and the phenomenology of living through a succession of conscious states moving continuously forward in time, leaving behind only memories and whatever is gained afterward from experience. If times or a property’s being instantiated at a certain time are made properties for the neurophysiological equivalent of the brain’s attributional syntactical engine, then all of the concurrent perceptions and other cognitive content about which a conscious thinking subject might be thinking can stand as an attribution object to which, among its other properties, the property of existing at a certain time is attributed. Such predications of times or temporally instantiated properties are obviously possible. 
Unfortunately, they do not seem to be of the kind wanted in order to explain vital aspects of the concept of consciousness, including the sense that there is a flow of time and that we are conscious of ourselves as existing in time and moving with the flow, along with other objects that we can consciously experience in acts of perceiving and, by other cognitive faculties, of other properties also entering into streaming consciousness. They do nothing to explain the experience of consciousness, the relation of consciousness to individual conscious moments, events or states, or the relation of consciousness considered more generally to the passage of time. Perceived objects are spatiotemporally distinct from one another, at least for an ideally situated perceiver. If each of these distinct objects has among its properties the times at which it exists, then those distinct moments of time can never be connected together continuously into a flow. This means that the continuity of time, mathematically dense or discrete, but as experienced
phenomenologically in interaction with other things existing in time, must be assumed rather than reconstructed from the scattered temporal properties of perceived objects. If we begin with the assumption that time is thought-independent, then, as DA allows, we can grant the brain the capability of predicating perceived properties to concurrently passing moments of time, however the brain identifies, individuates and discriminates among distinct times as distinct predication objects. The existence of consciousness, according to the proposed model, must then involve the brain unconsciously making consciously experienced attributions of perceived properties plus other data to concurrent moments of time in the passing stream. The hard philosophical problem of understanding the nature of consciousness is in some ways surprisingly and in others unsurprisingly that of understanding how perception relates to time. If we are conscious, then our brains, according to the proposed DA model, are continuously attributing packages of concurrent sense and other cognitive data, properties of perceived things and otherwise intended concepts, to successive moments in the ongoing passage of time. The merit of the DA model of the brain’s attribution of properties to concurrently passing moments of real time is not merely to explain how it is that every perception occurs at a moment of time, but how it is that conscious perceivers like ourselves are able to entertain conscious experiences of the passage of time, and of being among the objects existing in time and at the same time. The DA model, beginning with pre-existent moments in a mind-independent stream of real physical time as given, stands in no need of reconstructing time or the sense of belonging to that flow of time that is characteristic of the phenomenology of human consciousness. 
The character of even the most mundane everyday awareness that we are seeing and hearing what we are seeing and hearing as we are seeing and hearing it is not only accommodated but centrally featured in the model. Time is taken with its moments in the flow of a stream as the objects of the brain’s unconscious predication of perceived and other kinds of cognitively available properties entering into each corresponding moment of conscious experience of each consciously thinking subject. It is explanatorily convenient for a theory of consciousness to be able to take for granted the existence of mind-independent objective moments of physical time. Streaming consciousness steals a ride on real-time progression, but only if consciousness and time are conceptually interconnected as DA theorizes. An explanatory debt in understanding the nature of consciousness must eventually be paid. The hard part, part (2), as identified above, understanding the nature of mind-independent objective time, remains unanswered. Without a
conceptually accessible explanation of time we cannot hope to fully understand how the brain actively relates perceptual and cognitive packages to concurrently passing moments of time. A competent general metaphysics needs to explain what time is, and how moments of time can be singled out algorithmically for purposes of unconscious property attribution, with properties predicated, if the model is correct, to concurrently passing moments of time. The first part of this obligation is squarely in the descriptive analytic metaphysician’s court. The second part belongs to neurophysiology and cognitive science to investigate as the ‘easy’ part (1) problem of consciousness. The linguistic philosophical model takes only a first step towards understanding conscious experience as the brain’s attributing packages of sense and cognitive input in rapid-fire succession to a regular interval of continuously concurrently passing moments of time. The brain does not need to actually choose moments of time and laboriously predicate property clusters to them continuously. It certainly need not do so in the literal sense of the differential and integral calculus or mathematical analysis. It is enough if it is sufficiently fast and proceeds at such a regular interval of briskly concurrently passing moments as to present the thinking subject with a cinematographic phenomenology of a normally unbroken stream of conscious experiences over appreciable timespans. The DA analysis might be the final truth of the nature of consciousness, but it is in the meantime intended only as an instructive model.
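One way to picture the rapid-fire, regular-interval attribution just described is as a sampling loop that stamps each incoming bundle of data onto a successive moment of time. The toy sketch below is ours, not the chapter’s; every name in it (`da_stream`, the 0.1-unit interval, the sample property bundles) is an illustrative assumption, not part of the model itself.

```python
# Toy illustration (ours, not the chapter's): the DA model pictured as a
# process that attributes each incoming data cluster to a regularly spaced
# moment of time, yielding a 'cinematographic' stream of (moment, data) frames.

def da_stream(data_bundles, t0=0.0, dt=0.1):
    """Attribute each data cluster to a successive passing moment of time."""
    stream = []
    for i, bundle in enumerate(data_bundles):
        moment = t0 + i * dt          # the concurrently passing 'now'
        stream.append((moment, frozenset(bundle)))
    return stream

# Three successive bundles of perceived properties, attributed in order.
frames = da_stream([{"red", "round"}, {"red", "round", "warm"}, {"warm"}])

assert len(frames) == 3               # one frame per attribution
assert frames[0][0] == 0.0            # first moment of the stream
assert "red" in frames[0][1]          # a property attributed to that moment
```

The point of the sketch is only that nothing in the loop is itself conscious: each step is a purely mechanical pairing of a property cluster with a moment, as the model requires.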
10 Pain and consciousness of pain

An objection to DA might be raised to the effect that pain is a kind or quale of consciousness, of certain conscious states, but that DA cannot explain the experience or consciousness of pain as a dynamic attribution of information. Pains hurt and information or property attributions to any intended object do not. If analytically one cannot be in pain without being conscious of pain, if pain is a conscious state of a certain kind, if pains hurt and dynamic data cluster attributions to any intended object do not hurt, then it seems hard to deny that not all streaming consciousness is DA-explainable. It is pain qualia and by extension qualia generally that seem once again to make consciousness analytically inexplicable. The criticism has a certain intuitive force but nevertheless embodies an instructive confusion. There is an equivocation between pain as a kind of conscious state and pain as belonging to a range of extreme sensations of touch, in principle no different from any other
way in which the body is touched from outside or inside itself. Touching can occur by means of an object or entirely neurologically as in illness when the body suffers malfunction, and in both cases can result in pain. The fact that the conscious thinking subject’s body is so touched hurtfully, pleasantly to various degrees, or neutrally, at a particular moment or over a certain span of time is not itself a fact of consciousness, but a body condition of which the thinking subject is typically conscious. The same philosophical complaint against DA might accordingly be made involving any touch sensation or finally any sensation whatsoever. DA is not a theory specifically about pain, the emotions, or other affective states, nor does it speculate about the role of reason and knowledge, cognitive states generally in consciousness, or the like. DA offers a model or conceptual framework for scientific investigation of phenomenological and behavioural phenomena of consciousness. However, DA is not committed to the proposition that pain or being in pain is the same thing as being conscious of pain, even if, as is doubtful, there are no unconscious pains and even if to be in pain is to suffer a temporally enduring conscious state. Similarly, pain is not the same thing as being physically embodied, despite the fact again that there are presumably no physically disembodied pains. To be in pain is to experience an extreme internal or external, and in both cases neurological, touching sensation. DA theory need no more consider consciousness of being in pain to be identical with being in pain than being conscious of the weather is identical with the weather. Consciousness of being in pain in an expanded DA model might be developed as the dynamic attribution of the state of being in pain to exactly those passing moments of time during which the pain endures. 
Pains hurt independently of the consciousness of their hurting, as do less extreme forms of touching and sensations generally, even supposing that there is no such thing as unconscious pain. Were it otherwise, then we could not be correctly said to be conscious of being in pain, but would only speak of being in pain. The proof that reference to being conscious of feeling pain or other sensations is not a colloquial imperfection of expression is that often when we are experiencing pain we are having other experiences also, including perceptions of immediate surroundings. These are distinguishable even from a more informed philosophical point of view only as being conscious of the trees in the park, conscious of traffic sounds, children’s voices, and conscious as well, as an experience in its own right, of a persistent or periodic pain. Nor should we gloss over challenges to the questionable synthetic thesis that pain is always a conscious experience. There are grey area low-level pains known
to phenomenology of everyday experience that present themselves vaguely as discomfort or other at first unpainful sensations, that one would not freely describe as pains, but that then seem to evolve, work themselves up or come into focus more sharply as pains. As Wittgenstein remarks (1998, §§ 262–75), there are no applicable criteria of correctness, no identity conditions, for the meaningful use of proper names in trying to designate private sensations as individual entities. We never know: Are these pains of which the subject was first unconscious that afterward came to impose themselves on awareness? Or were they non-pains as long as they were unconscious brain events manifesting themselves in other non-pain-like ways in consciousness or behaviour that then became pains the moment the thinking subject became pain conscious? Whether we want to name pains or not, Wittgenstein rightly calls attention to the contingent and perhaps surmountable fact that we lack individual identity criteria for specific sensations. We cannot name them then or consequently use them in correlations with other objects to explain the meanings of their names. If we do not know what to say about the genidentity histories of individual sensations, if we cannot name them at different times, track them over time as identically the same entities, then they cannot be among the individual intended objects of property predications. If Wittgenstein is right that there are no adequate identity criteria of correctness for naming as individual entities private subjective sensations like pains, then we can have no basis for doing so. Pains hurt, sometimes severely. Being conscious of pain or of being in pain does not hurt. Pain is caused by an extreme external or internal touching of a conscious thinking subject’s body of which the subject is inevitably conscious. 
Theoretically in DA hurtful pains, as opposed to non-hurtful consciousness of pain, need not themselves be considered among the category of states constitutive of consciousness. Pain, though generally among the qualia of certain conscious states, is a datum of touch, and when it is attributed to a moment or span of time we are conscious according to DA of when and how long the hurtful condition lasted. That is what it is to be conscious of pain or of being in pain, whereas being in hurtful pain itself, although something of which a conscious thinking subject is generally conscious, is not an event partially constitutive of consciousness. The brain wisely includes the facts of pain, but obviously not pains themselves, in the data package for that temporal duration attributed to the relevant passing moments of time. At the same most general explanatory level within the DA model being in pain is no different from seeing something red. They are alike in both cases, states or conditions, according to DA, of which a thinking subject circumstantially is made conscious, among other facts, by
the brain’s dynamic attribution of the respective sensory data to time. Painful hurtful touching from objects outside or neurologically within the subject’s body is a condition, along with some sense of its occurrence in the progress of time, to which cognitive processing and decision-making protocols need to have access. They are matters of which an intelligence is best served by being consciously aware, rather than, or as well as, by unconscious autonomic innate response or conditioned reflex mechanisms. It is useful for conscious subjects to know that something is touching the body in such a way as to cause pain – perhaps the subject could then engage in action to do something about it – as it is to know that there is a porcelain teacup before the conscious subject at table, that there is something red here now. This now, taking in everything of which the brain can at any time make itself aware, can sometimes include dynamic attributions of the subjective occurrence of pain as part of what makes up the This at a now or stream of nows. Attributions are no more painful than they are glazed in white ceramic. They are of those things, a porcelain teacup or an occurrence of pain, but they are not themselves supposed to be painful, to be made of porcelain, or to have any of the other manifold properties that might be attributed as qualia to intended objects, regardless of whether or not the intended objects of data cluster attributions are representations of perceived, cognitive and affective entities, as several recently popular forms of representationalism hold, or, as DA proposes, the intended objects of information attribution are passing moments of time.5
11 Understanding time as the persistently hard part of understanding consciousness

The difficult remaining part as emphasized is to consider the progression of concurrently passing moments of time as the conscious brain’s intended attribution objects. There is nevertheless substantial evidence of gradient values even in the fact that we speak of being conscious of time, where the present account would interpret the same experience phenomenologically in different terms as living through the brain’s active dynamic attribution in successive frames of experience, resulting in a synchronized train of concurrently passing moments of time. We are not literally conscious of time. Rather, consciousness is itself the attribution of all the selected properties available to a time-slice of information processing to precisely that moment in the stream. Like a tape concurrently
passing through an abstract Turing machine, the brain marks each square in rapid-fire with the contents of all the information it is called upon at just that instant consciously to process, the unconscious mind as always being left to its own neurophysiological devices. Understanding the brain’s general ability to attribute properties of objects at least at a syntactical level is easy. Understanding time is not easy. There are good, hard-to-answer puzzles that point towards the non-existence of time, and these must obviously be addressed in a complete theory that analyses consciousness as the brain’s streaming predications of processed information synchronized as best as the organic neurophysiological machinery will allow to concurrently passing moments of time. If we cannot properly understand time as independent of thought, then we cannot understand thought as the predication of information loads to specific concurrently passing moments of real time. This consideration implies that a metaphysics of time friendly to the proposed DA analysis of the concept of consciousness as the dynamic attribution of properties in the form of information clusters to moments of time might be relative in any variety of ways, rather than absolute, but cannot be individually subjective. Does this consideration exclude Immanuel Kant’s Transcendental Aesthetic of time as a pure form of intuition? Or the prototype concept of consciousness expressed as Kant’s ‘transcendental unity of apperception’?6 The categorization appears to be subjective, insofar as intuition or perception is always the experience of a thinking subject. Kant, however, in the Transcendental Aesthetic does not speak of the contents of such subjective experiences, but of their pure form. 
We need not think of the pure form of intuition as itself anything subjective, just as we need not reduce the numerical properties of intuitions, of having n perceptions in such and such a span of time, to a subjective correlate by which a number value attaching to an intuition itself becomes subjective rather than the counting function remaining ontically independent of thought. Likewise, we seem to be within our logical rights to hold both that consciousness is the brain’s attribution of information loads to concurrently passing moments of time, from whenever it begins in the lifetime of the individual thinker to whenever it ends or is temporarily interrupted, and that the concurrently passing moments of time as the brain’s data attribution objects are then a Kantian pure form of intuition. As such, according to Kant’s Transcendental Aesthetic, the concurrently passing moments of time are presupposed in just the way they are by the compatible assumption that consciousness is the brain’s dynamic attribution of information to precisely these concurrently passing
moments of real time corresponding to the phenomenology of cognitive experience. We discover time as a pure form of intuition within thought, but time itself is not for that reason anything subjective. Kant seems to regard the category also in this sense; otherwise, referring to time as a pure form of intuition would seem to serve no designative purpose. It is the form conditional on any perceiving subject having the kinds of experiences we human consciousnesses have of objects existing and moving about in space and time, and so derivatively of the passage of time. Alternatively, we can think of Newtonian absolute time as presupposed by the analysis of consciousness as the brain’s attribution of information to specific concurrently passing moments of time. The moments of time pass by in either a discrete flow of units or a continuous issue. The brain in either case, if DA is true, does not need to avail itself of every moment of time, but can attribute data clusters to moments of real time in a regular succession of moments unconsciously selected from within the temporal dimension of the conscious thinking subject’s chronology. It is possible in principle for the brain to be so synchronized to concurrently passing mind-independent moments of time that attributions are made smoothly enough phenomenologically to appear to consciousness, by analogy with the eyeballs’ saccades, without experiencing any gaps or breaks in the continuous flow of dynamic attributions of properties to moments of time as intended objects in the mind’s conscious states. We can intelligibly speak of the consciousness of time. Edmund Husserl takes on precisely this difficulty in his 1893–1917 study, On the Phenomenology of the Consciousness of Internal Time. 
There he seems to agree with our assessment of the situation, that time consciousness is the most ‘important and difficult of all phenomenological problems’.7 For very different reasons than Chalmers offers for considering the problem of understanding exactly what consciousness is, Husserl agrees that the most difficult problem of phenomenology is that of understanding the nature of time consciousness. If the DA model of consciousness is correct, then the problem may not be quite so intractable as Husserl and Chalmers suggest. Consciousness of time on a DA analysis is the self-reflective attribution of the concurrent passing of moments of time to the concurrently passing moments of time themselves. Consciousness of time as such is in one sense the purest limiting case of consciousness. If we understand consciousness as proposed by the DA analysis, then consciousness of time is only one particular temporal information data cluster being attributed by the brain in real time to the concurrently passing moments of real time as they sprint steadily onwards
into the past, in a span of real time that lasts as long as that particular state of consciousness prevails. We think, then, schematically speaking, in all similar situations, with special emphasis: This is happening now. This moment is occurring, and now, however immediately thereafter, the moment is already concurrently passing. Now another moment is occurring and disappearing like its predecessor without a trace into the next ensuing moment. And the like. We know how it goes, or much of how it is supposed to go, from first-hand first-person experience. The challenge is to see how the concept of flowing time hooks up with that of streaming consciousness. Consciousness of time for DA theory is accordingly consciousness of something independent of consciousness, to the parts or moments of which the brain makes its information property attributions. It is a report on the state of the world, the experience of This now. In streaming consciousness the brain makes a movie that is screened only once and cannot be rewound, consisting of a selection of information the senses take in concerning the state of the world and derivative cognitive processing attributed to appropriate passing moments of time, just as synchronously passing events of consciousness occur. As to understanding the metaphysics of time, especially the ‘now’ of This now, we are reminded of Augustine’s charmingly candid remark in Confessions XI: ‘If no one asks me, I know what it is’. If we ask ourselves the same tough question in relation to progress made against the hard problem of consciousness, then the idea of a physical dimension of motion logically broken into individual moments capable of being intended objects for the brain’s data package attributions can easily seem perplexingly mysterious. There is no preferable or comparably acceptable alternative with the same explanatory power, driving inquiry in return to the hard problem of understanding time.
Notes

1 Chalmers (1996), xi–xii.
2 See Honderich (2014) and his sources.
3 I have been influenced by the essays in Davidson (2001), but my bible is Kim (1993).
4 Russell (1985 [1918/1924]).
5 Here I have benefited most from the experimentally dated but conceptually insightful Hochberg (1978).
6 Kant (1965 [1781/1787]), A19–B73; A106–B142.
7 Husserl (1991), Supplementary Texts, IV, No. 39.
References

Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory, New York: Oxford University Press.
Davidson, D. (2001). Essays on Actions and Events, 2nd ed., Oxford: Clarendon Press.
Hochberg, J. E. (1978). Perception, 2nd ed., Englewood Cliffs: Prentice-Hall.
Honderich, T. (2014). Actual Consciousness, Oxford: Oxford University Press.
Husserl, E. (1991). On the Phenomenology of the Consciousness of Internal Time (1893–1917), translated by J. B. Brough, Dordrecht: Kluwer Academic Publishers.
Kant, I. (1965). Critique of Pure Reason, translated by N. K. Smith, New York: St. Martin’s Press.
Kim, J. (1993). Supervenience and Mind: Selected Philosophical Essays, Cambridge: Cambridge University Press.
Russell, B. (1985 [1918/1924]). The Philosophy of Logical Atomism, ed. with an intro. by D. Pears, La Salle: Open Court.
Wittgenstein, L. (1998 [1953]). Philosophical Investigations, 2nd ed., translated by G. E. M. Anscombe, Oxford: Blackwell Publishers.
Part Four
Mental Causation, Natural Law and Intentionality of Conscious States
15
Toward Axiomatizing Consciousness Selmer Bringsjord, Paul Bello and Naveen Sundar Govindarajulu
1 Introduction

With your eyes closed, consider numberhood; put another way, consider this question: What is a number? Presumably if you engage the question in earnest (and if you’re not a logician or the like; if you are, bear with us) you will begin by entertaining some small and simple numbers in your head, and then some concepts (e.g. division) inseparably bound up with your simple examples. You might think about, say, the number 463. What kind of number is 463? You will probably agree that it’s a so-called whole number. Or you might instead say that 463 is a natural number, or an integer; but we can take these labels to be equivalent to ‘whole’. Now, is 463 composite or prime? – and here, if you need to, feel free to open your eyes and resort to paper and a pencil/pen as you endeavour to answer … Yes, the latter; perhaps you did at least some on-paper division to divine the answer. Very well, now, are there only finitely many primes? This question is a bit harder than our previous one. Put another way, the present query amounts to: Can you keep generating larger and larger primes, forever? If you tire of your own investigation, know that Euclid famously settled the matter with a small but memorably clever reductio proof. His verdict, and the supporting details, are readily available on the internet for you to review. … If you have now developed your own rationale, or searched for and found Euclid’s, you know that the answer to our second question is: Yes: there are indeed infinitely many primes. No doubt you perceive, or have been reminded by virtue of reading the previous paragraph, that there are a lot of different kinds of numbers, above and beyond the ones in the categories we have mentioned so far. Numberhood is a big, multifaceted concept! For example, you will recall that there are negative whole numbers, such as −463; that there are fractions with whole numbers on the ‘top’ (numerator) and the ‘bottom’ (denominator), for instance 1/463; that there
are exotic numbers that can’t be captured by any such fraction, for example π and √2; and perhaps you will even recall, albeit vaguely if you left such matters long behind in the textbooks of long-past math classes, that there are such numbers as the ‘complex’ and ‘transfinite’ ones. But despite all your thinking about numbers, we very much doubt that you will have arrived at an answer to our original question: What is a number? For us, consciousness is very much like numberhood, and much of the structure of consciousness, at least as we see that structure, is revealed via the kind of reflection you’ve just engaged in. In short, we believe that while it’s apparently impossible to outright define consciousness, an awful lot can be said, systematically, about it, and about the concepts and processes with which it’s bound up. You may not know how to specify what a whole number is, but you can nonetheless close your eyes and perceive all sorts of attributes that 463 and its close relatives have, and you can also (at least eventually) prove that 463 is a prime whole number, and that there are an infinite number of prime numbers. We hence already know that you are capable, it seems, of perceiving things that are internal to your mind upon (or at least often aided by) the closing of your eyes, and that you can also perceive the numeral ‘463’ upon paper in front of you in the external world, when (say) attempting to divide it by a number other than itself or 1.1 And indeed there are many other things you could quickly prove and come to know (or at least read and come to know), for instance that various sorts of operations on various sorts of numbers work in such and such ways. For example, you know that 463 × 1 returns 463 back, while 463 × 0 = 0. You know such things in the absence of a definition of what numberhood is, and what a number is. Likewise, to anticipate one of the 11 axioms of consciousness we shall present later (viz. 
the one we label Incorr), while we have no precise definition of what consciousness is, and no precise definition of what a conscious state is, we’re quite sure that if you are considering your own mind, and during that time know that you are deeply sad, you must of necessity believe that you are deeply sad. To anticipate another of the 11 axioms we shall propose (truth be told, we explicitly propose only 10, and the 11th, one that relates to planning, is, as we explain, offered as an ‘option’), the various things we have noted that you know about numbers are also (since knowledge implies belief; the axiom below that captures this implication is labelled K2B) things that you believe about numbers. In sum, just as you are willing to assert declarative statements about numbers of various sorts, from which additional statements can be deduced, we are willing to make fundamental assertions about consciousness, herein. Neither your assertions nor ours are guaranteed, indeed some of them are bound to be
Toward Axiomatizing Consciousness
quite controversial, but in both cases at least some progress will have been made, and further progress can presumably be achieved as well. Some of that progress, we are happy to concede, will be won by challenging the very axioms that we propose. You may have already grown a bit weary of our starting claim that the nature of our inquiry into consciousness can be illuminated by pondering the nature of the inquiry into numberhood, but please bear with us for just a few additional moments. For we also want to bring to your attention that not only is humanity in possession of rather solid understanding of numberhood and numbers of various types, but also humans have managed to set up specific, rigorous axioms about numbers. A nice, simple example of a set of such axioms is the so-called Peano Arithmetic (PA), quite famous in many quarters.2 Before we present any of the axioms in PA, we inform you that where x is any natural number 0, 1, 2, 3, …, s is the successor function on the set of such numbers; that is, s(x) gives the number that is one larger than x. For instance, s(463) = 464. Okay, here then are two of the axioms from PA, in both cases expressed first in the efficient notation of first-order logic (FOL), and then in something close to standard English: A4 ∀x (x + 0 = x) – Every natural number x, when added to 0, equals x itself. A5 ∀x∀y (x + s(y) = s(x + y))
– For every pair of natural numbers x and y, x plus the successor of y equals the successor of the sum of x and y.
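The recursive character of A4 and A5 can also be seen computationally. The following sketch is our own illustration, not part of PA: natural numbers are encoded as iterated applications of the successor s to 0, and addition is defined using nothing but the two axioms just displayed.

```python
# A minimal computational sketch of Peano-style addition (our illustration,
# not part of PA itself): natural numbers are nested applications of a
# successor s to zero, e.g. 3 is s(s(s(0))), encoded here as nested tuples.

ZERO = ()

def s(x):
    """Successor: s(x) is the number one larger than x."""
    return (x,)

def add(x, y):
    """Addition defined exactly by A4 and A5."""
    if y == ZERO:            # A4: x + 0 = x
        return x
    return s(add(x, y[0]))   # A5: x + s(y') = s(x + y')

def from_int(n):
    """Encode an ordinary integer as a successor-term."""
    x = ZERO
    for _ in range(n):
        x = s(x)
    return x

def to_int(x):
    """Decode a successor-term back to an ordinary integer."""
    n = 0
    while x != ZERO:
        n, x = n + 1, x[0]
    return n

# s(463) = 464, and 2 + 3 = 5, computed purely from the two axioms.
assert to_int(s(from_int(463))) == 464
assert to_int(add(from_int(2), from_int(3))) == 5
```

Note that, exactly as in PA itself, nothing in the sketch defines what 0 or ‘ordinary addition’ is; the axioms simply constrain how the symbols behave.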
A4 and A5, relative to our purposes in the present chapter, aren’t uninstructive. For example, A4 refers straightaway to the particular number 0; hence PA, it’s fair to say, dodges the task of defining what 0 is. In fact, no such definition will be found even if the remaining six axioms of PA are carefully examined. An exactly parallel point can be made about the addition function + that appears in A5: it appears there, and its deployment, given what we all know about addition from real life, makes perfect sense, but there’s no definition provided of + in A5 – and the same holds for the other six axioms of PA. In fact, what writers commonly do upon introducing the axiom system PA is inform their readers that by the symbol + is meant ‘ordinary addition’ – but no definition of ordinary addition is supplied.3 Our objective is in line with the foregoing, for while we despair of ever pinning down the meaning of all forms of consciousness in any formal, third-person format,4 we nonetheless seek to set out more and more of the third-person structure of consciousness. The present chapter is the inauguration, in print, of
this pursuit; and it’s the set of 11 axioms, to be introduced below, that constitutes the first step in the pursuit. Careful readers will have noticed some specific progress in the pursuit already, since we above implicitly informed you of our commitment to two – as we shall call them – operators, one for an agent a’s perceiving at a given time t a proposition internal to itself (which has the form Pi(a, t, ϕ)), and another (Pe) for external perception of propositions to be found in the environment external to the agent. (We don’t literally see propositions in the external environment, but we shall simplify matters by assuming that we do precisely that. More about this issue later.) Often it’s easy enough to turn that which is externally perceived into corresponding internal percepts. For instance, perhaps you wrote down on a piece of paper in front of you earlier, when acceding to our prompts, something like this: Nope, can’t divide in half without leaving a remainder!
463 ÷ 2 = 231, remainder 1
But if you did write down something like this for your eyes to see, you were able, immediately thereafter, to perceive, internally, the proposition that 463 can’t be halved. You might even have been able to internally perceive the details of your long division, or at least some of them. Notice that your perceptions, of both the internal and external varieties, led in this case directly to belief. You came to believe that 463 can’t be halved. Of course, it’s not certain that you didn’t make a silly mistake, so your belief might be erroneous, but it’s highly unlikely that it is. Before concluding the introduction, we think it’s important for us to emphasize two points. First, we confess explicitly that our pursuit of an axiomatization of consciousness, as will become painfully clear momentarily, is an exceedingly humble start, intended to be a foundation for subsequent, further progress, including not only the building up of theorems proved from the axioms, but also computational implementation that will allow such proofs to be machine-discovered and machine-verified. Our start is so humble, in fact, that we leave some of our axioms in rather informal form, to ease exposition. Second, our proposed axioms are guaranteed to be highly controversial (as we indicated
above). For some of the axioms we propose, we will discuss alternatives that might be more attractive to some readers than our own preferred axioms. But even so, the entire collection of what we propose, alternatives included, will not be universally affirmed; there will be sceptics. But our purpose, again, is to erect a foundation and get the project going in earnest, in a way substantially more robust than that of others who have tried their hand, at least to a degree, at axiomatizing consciousness. It may be worth pointing out that, as some readers will know, even some axiom systems for things as seemingly cut-and-dried as arithmetic, set theory, and physics are far from uncontroversial.5 The remainder of the chapter follows this sequence: We begin (§2) by making it clear that we approach our subject under certain specific constraints. There are four such constraints; each of them is announced, and briefly explained. (One constraint is that we are specifically interested only in person-level consciousness (§2.4). Another is that while we agree that perceptual and affective states are enjoyed by humans, our emphasis is on cognitive states, or on what might be called cognitive consciousness (§2.3).) Next, in section 3, we summarize the work of some researchers who have discussed, and in some real way contributed to, the potential axiomatization of consciousness. The next section (§4) is a very short summary of some of the first author’s work, in some cases undertaken with collaborators, on the careful representation, and associated mechanization, of some elements of self-consciousness. This prior work, in part, is relied upon in the present investigation. Our next step (§5) is to proceed to the heart of the chapter: the presentation of our 10 proposed axioms for consciousness.6 A short, concluding section (§6), in which we point the way forward from the foundation we have erected, wraps up the chapter.
2 Our approach in more detail, its presuppositions
We now present as promised four hallmarks of our approach.
2.1 Formal methods, harnessed for implementation
The methodology herein employed, only embryonically in the present domain of consciousness, is the use of so-called ‘formal methods’ to model phenomena constitutive of and intimately related to consciousness, in such a way that the models in question can be brought to life, and specifically tested, by subsequent implementation, at least in principle, in computation. Of course this
methodology is hardly a new one, and certainly not one that originates with us. Of the many thinkers who follow the approach, an exemplar within philosophy of mind and artificial intelligence (AI) is the philosopher John Pollock – someone whose philosophical positions were invariably tested and refined in the fire of implementation (e.g. see, esp. for philosophical accounts of defeasible reasoning: Pollock 1995). Note that in this preliminary chapter we don’t present an implementation, let alone the results thereof, for our axiom system. The two pivotal elements of our formal approach that we now announce are: (1) a highly expressive formal language, and (2) a corresponding set of inference schemata that enable the construction of proofs over the language. The language in question is DCEC*; it’s described (and to a degree justified) in more detail below (§5.1). As to the inference schemata, a full specification of them is beyond the scope of this chapter. Nonetheless, we show, in Figures 15.1 and 15.2, respectively, the formal syntax of the first-order core of DCEC*, and a number of the inference schemata of this system. These figures should prove illuminating for those readers with more technical interests; the pair can be safely skipped by those wishing to learn in only broad strokes.
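For readers without a background in formal logic, the idea of inference schemata ‘enabling the construction of proofs over the language’ can be conveyed with a deliberately tiny sketch of our own (DCEC* itself is vastly richer, with quantifiers and intensional operators), in which the sole schema is modus ponens:

```python
# Toy illustration (ours, far simpler than anything in DCEC*): an inference
# schema is a rule that, suitably instantiated, licenses new formulae from
# old ones. Formulae here are atoms (strings) or ('implies', p, q) pairs;
# the only schema is modus ponens: from p and ('implies', p, q), infer q.

def modus_ponens(premises):
    """Close a set of formulae under one round of modus ponens."""
    derived = set(premises)
    for f in premises:
        if isinstance(f, tuple) and f[0] == 'implies' and f[1] in premises:
            derived.add(f[2])
    return derived

axioms = {'p', ('implies', 'p', 'q'), ('implies', 'q', 'r')}
step1 = modus_ponens(axioms)   # one application derives 'q'
step2 = modus_ponens(step1)    # a second application derives 'r'
assert 'q' in step1 and 'r' in step2
```

A proof, on this picture, is just a finite sequence of such schema applications; automating their discovery is what makes the formal models testable in computation.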
2.2 We dodge P-consciousness
To explain the next hallmark of our approach, we inform the reader that at least one of us is on record, repeatedly, as attempting to show that phenomenal consciousness, what-it-feels-like consciousness, can’t be captured in any third-person scheme. It’s crucial to understand, up front, that our axioms are not in the least intended to define, or even slightly explicate, phenomenal consciousness. Block (1995) has distinguished between phenomenal consciousness (or what he calls P-consciousness) and access consciousness (which he dubs ‘A-consciousness’). The latter has nothing to do with mental states like what it feels like to be in the arc of a high-speed giant-slalom ski turn, which are paradigmatically in the P-category; instead, A-conscious states in agents are those that explicitly support and enable reasoning, conceived as a mechanical process. It seems to us that mental states like knowing that Goldbach’s conjecture hasn’t been proved do have a phenomenal component in human persons, at least sometimes. For example, a number theorist could sit in a chair and meditate on, appreciate, and even savour knowing that Goldbach’s conjecture is still unresolved. But rather than explicitly argue for this position, or for that matter even worry about it, we will seek axioms that range ecumenically across consciousness broadly understood. When we get to the axioms themselves, our latitudinarian approach will become clearer.
Figure 15.1 Intensional First-Order Kernel of DCEC* Syntax (‘core’ dialect).
Figure 15.2 Some Inference Schemata of DCEC* (‘core’ dialect). In the case of each schema Rk (its label), the variabilized formulae above the horizontal line, if suitably instantiated, can be used to infer the formulae below the line.
This approach of ours marks a rejection of purely phenomenological study of consciousness, such as for example the impressive book-length study carried out by Kriegel (2015). Near the end of his study, Kriegel sums up the basic paradigm he has sought to supplant: Mainstream analytic philosophy of mind of the second half of the twentieth century and early twenty-first century offers one dominant framework for understanding the human mind. … The fundamental architecture is this: there is input in the form of perception, output in the form of action, and input-output mediation through propositional attitudes, notably belief and desire. (Kriegel 2015, p. 201)
This basic paradigm, we cheerfully concede, is the one on which the present chapter is based – or put more circumspectly, our approach, based as it is on logicist AI (Bringsjord 2008b), is a clear superset of what Kriegel has sought to supplant. It’s no surprise, accordingly, that two additional important intensional operators in the formal language we use to express our axioms, DCEC* (about which more will be said soon), are K (knows) and B (believes), these two now joining the pair we brought to your attention above, namely Pi (perception, internal) and Pe (perception, external). While Kriegel has associated the paradigm he rejects with ‘analytic philosophy’, which he claims appropriated it from ‘physics, chemistry, and biology’ (p. 202), the fact is, the field of AI is based on exactly the paradigm Kriegel rejects; that in AI agents are by definition essentially functions mapping percepts to actions is explicitly set out and affirmed in all the major textbooks of AI (see e.g. Russell and Norvig 2009, Luger and Stubblefield 1993). But the AI approach, at least of the logicist variety that we follow, has a benefit that Kriegel appears not to be aware of. In defence of his phenomenological approach, he writes: Insofar as some mental phenomena are introspectively observable, there is a kind of insight into nature that is available to us and that goes beyond that provided by the functionalist framework. This alternative self-understanding focuses on the experiential rather than mechanical aspect of mental life, freely avails itself of first-person insight, and considers that mental phenomena can be witnessed directly as opposed to merely hypothesized for explanatory benefits. It would be perverse to simply ignore this other kind of understanding and insight. (Kriegel 2015, p. 202)
Kriegel seems to be entirely unaware of the fact that in AI, researchers are often quite happy to base their engineering on self-analysis and self-understanding. Looking back a bit, note that the early ‘expert systems’ of the 1980s were based on understanding brought back and shared when human experts (e.g. diagnosticians) introspected on how they made decisions, what algorithms they followed, and so
on. Such examples, which are decidedly alien to physics, chemistry and biology, could be multiplied at length, easily. To mention just one additional example, it was introspection on the part of chess grandmaster Joel Benjamin, who worked with the AI scientists and engineers who built Deep Blue (the AI system that vanquished Garry Kasparov), that made the difference, because it was specifically Benjamin’s understanding of king safety that was imparted to Deep Blue at a pivotal juncture (for a discussion, see Bringsjord 1998). In this light, we now make two points regarding the axiom system. First, following the AI tradition to which we have alluded, we feel free to use self-understanding and introspection in order to articulate proposed axioms for inclusion in it. Second, as a matter of fact, as will shortly be seen, the system directly reflects our affirmation of the importance of self-belief, self-consciousness, and other self-regarding attitudes.
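The textbook characterization mentioned above – an agent as essentially a function mapping percepts to actions – can be put in a few lines. The following sketch is ours, not drawn from any particular textbook, and the thermostat scenario and its thresholds are invented purely for illustration:

```python
# Sketch (ours) of the standard AI conception of an agent: a function from
# percept histories to actions. Here, a simple reflex thermostat agent;
# the temperature thresholds are illustrative assumptions.

def thermostat_agent(percept_history):
    """Map the sequence of temperature percepts so far to an action."""
    latest = percept_history[-1]
    if latest < 18:
        return 'heat-on'
    if latest > 22:
        return 'heat-off'
    return 'no-op'

assert thermostat_agent([21, 17]) == 'heat-on'
assert thermostat_agent([19, 25]) == 'heat-off'
```

Logicist AI, and our system with it, enriches this bare input–output picture by interposing explicit intensional states (Pi, Pe, K, B) between percept and action.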
2.3 Cognitive in control of the perceptual and affective
We come now to the third hallmark of our approach. Honderich (2014) has recently argued that the best comprehensive philosophical account of consciousness is one that places an emphasis on perceptual, over and above affective or cognitive phenomena. From our perspective, and in our approach to axiomatizing consciousness, we place the emphasis very much on cognition. This is primarily because in our orientation, cognitive consciousness ranges over perceptual and affective states. In this regard, we are in agreement with at least a significant portion of a penetrating and elegant review of Honderich’s Actual Consciousness by Jacquette.7 He writes: If I am not only consciously perceiving a vicious dog straining toward me on its leash, but simultaneously feeling fear and considering my options for action and their probabilities of success if the dog breaks free, then I might be additionally conscious in that moment of consciously perceiving, feeling, and thinking. Consciousness in that event is not exhaustively divided into Honderich’s three types. If there is also consciousness of any of these types of consciousness occurring, then consciousness in the most general sense transcends these specific categories. (Jacquette 2015: ¶5 & first two sentences of ¶6)
We don’t have time to provide a defence of our attitude that an axiomatization should reflect the position that cognitive aspects of consciousness should be – concordant with Jacquette’s trenchant analysis of Honderich – ‘in control’. We report only two things in connection with this issue: one, that we have been
inspired by what we take to be suspicions of Jacquette that are, given our inclinations, ‘friendly’; and two, our experience in AI robotics that the perceptual level, in a sense, is ‘easy’ – or at least easier than progress at the cognitive level, when that progress is measured against human-level capacity.
2.4 Consciousness at the ‘Person Level’
As to the fourth and final hallmark of our approach in presenting our axiom system: We are only interested, both herein and in subsequent work based upon the foundation erected herein, in consciousness in persons, specifically in those of the human variety. We are not interested in consciousness in non-human animals, such as chimpanzees and fish. This constraint on our investigation flows deductively from the conjunction of our assumption that cognition drives the show (§2.3), with the proposition that only human persons have the kind of high-level cognition that can do the driving. Some of the intellectual uniqueness of H. sapiens is nicely explained and defended in readable fashion in the hard-hitting but informal work of Penn, Holyoak and Povinelli (2008).8 We do think it’s important to ensure that our readers know that we in no way deny that some non-human animals are conscious, in some way and at some level. We have little idea how to axiomatize, or even to take the first few steps towards axiomatizing, the brand of ‘cognitively compromised’ consciousness that non-human animals have, but we in no way assert that these creatures don’t have it! In fact, the accommodation in this regard that the first author is willing to extend, in light of study of Balcombe (2016), extends quite ‘below’ chimpanzees, to fish.
3 Prior work of others, partitioned
Sustained review of the literature has revealed that prior work can be partitioned into two disjoint categories. On the one hand is work that is said to mark progress towards axiomatizing consciousness, but is in reality utterly detached from the standard logico-mathematical sense of axiomatization. (Recall our earlier comments about PA.) And then on the other hand is work that, at least to a degree, accepts the burden of axiomatization within the approach of formal methods, or at least accepts the burden of having to commit to paper one or more determinate declarative statements as (an) axiom(s) of consciousness. As we stated above, we are only directly interested here in work in the second of these two categories.
3.1 Aleksander et al.
In two interesting papers, Aleksander, joined by colleagues, contributes to what he calls the ‘axiomatization of consciousness’ (Aleksander and Dunmall 2003, Aleksander and Morton 2007). In the first of these papers, Aleksander and Dunmall begin by announcing their definition D of being conscious, which they define as the property of ‘having a private sense: of an “out-there” world, of a self, of contemplative planning and of the determination of whether, when and how to act’ (Aleksander and Dunmall 2003, p. 8). This is a very ‘planning-heavy’ notion, certainly. To the extent that we understand D, our formal framework, DCEC*, can easily provide a formalization of this definition, but we don’t currently insist that planning is a part of (cognitive, person-level) consciousness. Any agent that has the attributes set out in our system would have some of the attributes set out in D, but not all. This can be made more precise by attending to the five axioms A&D (Aleksander and Dunmall) list. For example, our axiomatization leaves aside their fourth axiom, which is: Axiom 4 (Planning): A has means of control over imaginational state sequences to plan actions. We would certainly agree that a capacity to generate plans, and to execute them (and also the capacity to recognize the plans of others), are part and parcel of what it is to be a person, but the need for, or even the impetus for, a dedicated planning axiom isn’t clear to us. In this context, we nonetheless supply now an optional planning axiom, Plan, that those who, like Aleksander and Dunmall, regard planning to be central, can add to the coming 10 axioms we present below. We keep our planning axiom very simple here. The basic idea is that the relevant class of agents know that they are in a given initial situation σ1, and can prove that a sequence of actions they perform, starting in σ1, will entail that a certain goal γ holds.
A sequence of actions can be assumed to be simply a conjunction of the following shape:
happens(a, t₁, α₁) ∧ happens(a, t₂, α₂) ∧ … ∧ happens(a, tₖ, αₖ)
Planning, mechanically speaking, consists simply in proving that a sequence of actions in an initial situation will result in the state of affairs sought as a goal, because once the proof is discovered, the agent can simply perform the actions
in question. The machinery of DCEC* easily allows for such forms of plan generation to be specified and – with proof and argument discovery on hand as computational building blocks – rather easily implemented. Here is our axiom:9
Plan K(a, t, σ₁) ∧ K(a, t, ∃A∃p ((A ∧ σ₁) ⊢ₚ γ))
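The planning-as-proof idea can be conveyed with a toy example. Everything in the sketch below is our own invention – the situations s1 to s3 and the two actions are illustrative, and genuine DCEC* plan generation proceeds by proof discovery, not this naive search – but the core idea is the same: finding a plan is finding an action sequence whose performance, from the initial situation, establishes the goal.

```python
# Toy illustration (our invention) of planning-as-proof: search for a
# sequence of actions which, applied to initial situation s1, reaches the
# goal. Each action is modelled as a partial map from situations to
# successor situations.

ACTIONS = {
    'open-door': {'s1': 's2'},
    'walk-through': {'s2': 's3'},
}
GOAL = 's3'

def plan(state, path=()):
    """Depth-first search for an action sequence establishing the goal."""
    if state == GOAL:
        return list(path)
    for act, effects in ACTIONS.items():
        if state in effects and act not in path:
            found = plan(effects[state], path + (act,))
            if found is not None:
                return found
    return None

assert plan('s1') == ['open-door', 'walk-through']
```

Once such a sequence is discovered (here by brute search, in DCEC* by proof discovery), the agent can simply perform the actions in order.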
In fact, even the five more specific axioms that A&D propose in order to flesh out D are subsumed by our account. For example, here is their first axiom: Axiom 1 (Depiction): A has perceptual states that depict parts of S. Given what we have said already about the machinery of DCEC*, it should be clear to the reader that we symbolize this by identifying a perceptual state with a formula in DCEC* of this type: Pi(a, t, ϕ(a)), where, following standard representational practice in formal logic, ϕ(a) is an arbitrary formula in which the constant a occurs. (We use the internal-perception operator, but the external one may be more appropriate.) The remaining axioms A&D propose are likewise easily captured in the formal language that undergirds our system. Hence, should anyone wish to add to our system fundamental assertions that directly reflect A&D’s proposals, they would be able to do so. Space doesn’t allow us to analyse their proposals, and justify our not feeling compelled to perform this addition ourselves.
3.2 Cunningham
Cunningham (2001) begins by noting that while the topic of consciousness is contentious, there should be no denying its – to use his term – ‘utilitarian’ value. For Cunningham, this value is of a very practical nature, one that, as he puts it, an ‘engineer of artificial intelligence’ would appreciate, but a philosopher might find quite small. The core idea here is really quite straightforward; Cunningham writes: Our justification for addressing the subject [of consciousness] is that artificial agents which display elements of intelligent behavior already exist, in the popular sense of these words, but that we would doubt the real intelligence of an agent which seemed to us to have no sense of ‘self’, or self-awareness of its capabilities and its senses and their current state. (Cunningham 2001, p. 341)
With the motivation to provide the structures and mechanisms that would, once implemented, provide the impetus to ascribe ‘real’ intelligence to an artificial agent explained (which the reader will appreciate as in general conformity with
our own purposes in seeking axiomatization of consciousness), where does Cunningham then go? His first move is to point out that while such things as belief and desire are often modelled in AI as holding at particular times (he says that such phenomena as belief and desire are stative), plenty of other states relevant to the systematic investigation of consciousness extend through time. He thus refers to activity states; paradigmatic examples are: planning, sensing, and learning. In order to formally model such states, Cunningham employs part of the simple interval temporal logic of Halpern and Shoham (1991). This logic includes for instance the ‘during’ operator D, with which one can say via Dp that p holds during the current interval. The logic also includes formulae which say that p holds during not only the entire current interval, but also during an entire interval that envelops and exceeds (both before and after) the current interval. The key operator for Cunningham is then a concatenation of two such ‘during’ operators; he abbreviates this construction as prog, so that, where agent j’s perceiving p is represented by
perceives j p,
the construction
prog perceives j p
holds just in case p holds on all sub-intervals within some interval that includes the current interval. Armed with this machinery, Cunningham then says that ‘axioms’ can be explored. He doesn’t commit to any axioms; he merely seeks to give a flavour for what contenders are like, in general. His third example (and we use his label verbatim) is:
(2.3) prog perceives j p → (prog senses j c ∧ prog remembers j (c → p))
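To make prog concrete, here is one simplistic semantics of our own devising, over discrete time points with intervals as inclusive pairs (Halpern and Shoham’s logic is considerably more general, and the bounded horizon below is purely an implementation convenience):

```python
# Our own simplistic gloss on the prog construction, over discrete time.
# A proposition is modelled as the set of time points at which it holds;
# an interval is an inclusive pair (start, end). The search horizon is an
# artificial bound, assumed for illustration only.

def holds_during(p, interval):
    """'During' an interval: p holds at every point of the interval."""
    start, end = interval
    return all(t in p for t in range(start, end + 1))

def prog(p, current, horizon=10):
    """prog p: p holds throughout some interval enveloping the current
    one, i.e. one starting strictly before it and ending strictly after."""
    start, end = current
    return any(
        holds_during(p, (a, b))
        for a in range(max(0, start - horizon), start)
        for b in range(end + 1, end + horizon + 1)
    )

perceiving = set(range(0, 9))        # the agent perceives p at times 0..8
assert prog(perceiving, (3, 5))      # e.g. (2, 6) envelops (3, 5)
assert not prog(perceiving, (7, 9))  # nothing holds past time 8
```

On this gloss, prog captures the ‘progressive’ flavour Cunningham wants: the activity was already under way before the current interval and continues beyond it.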
Cunningham takes no stand on whether (2.3) and the like should in fact be asserted as axioms. His point is only that such constructions can be expressed in symbolic formulae, and that such statements are at least not implausible. In general we agree, and we note that all of his constructions can be easily captured by formulae in DCEC*. In particular, the interval logic that Cunningham regards as of central importance is trivial to capture in DCEC*; indeed, this logic can be captured in only the extensional side of DCEC*. As the reader
will doubtless have already noted, Cunningham’s formal language is only propositional; it has no quantification. Hence, while in DCEC* it’s easy to say such things as that a given agent perceives that there are no more than four distinct blocks on the table, this simply cannot be expressed in Cunningham’s limited declarative machinery. To his credit, while non-committal on what is to be even a provisional set of axioms for consciousness, Cunningham does venture a proposal for what consciousness, at least of one type, is. In this regard, Cunningham’s work departs radically from our proposed preliminary list of axioms, which, as we explained in connection with formalization of simple arithmetic via PA (in which no explicit definition of numberhood is provided), takes a credible set of axioms to render superfluous any attempt to explicitly define consciousness. (The present section ends with a look at his proposal for what consciousness is.) Cunningham distinguishes between an agent’s perception of things in the external environment, versus things internal to the agent. We read: When an agent is aware, it not only perceives, but being sentient, it perceives that it perceives. Thus assuming positive meta perception only, we might provide an axiom for a progressive form of introspective awareness … (Cunningham 2001, p. 344)
The axiom Cunningham has in mind, where we again preserve his label, is:10 (3.1) aware j p ↔ (perceives j p ∧ perceives j perceives j p) Since perception of the sort Cunningham has in mind here leads in our scheme to knowledge, (3.1), couched in our system, can be used to prove our axiom Intro (see below). Another nice aspect of Cunningham’s analysis is that he explicitly promotes the idea of willing on the part of the agent. He says that ‘it seems that the ability to will attention to a selective perception process, or to will an action, is a primitive output act for the biological brain’ (p. 344). While Cunningham invokes the construction wills j p, this entire line of modelling is quickly and efficiently captured in our framework by the fact that one type of action within it is deciding, and the agent can decide to carry out all sorts of actions. Cunningham (2001) culminates with an explicit declaration as to what consciousness is, or more accurately with a declaration as to what a ‘weak form of sentient consciousness’ is. We specifically read:
(3.3) conscious j ↔ ∃p prog aware j p
As we made clear at the outset, we are ourselves steadfastly avoiding any attempt to define consciousness itself. Hence any such biconditional as the one shown in (3.3), or even a remotely similar biconditional, is something we will not affirm. We would rather restrict such theorizing to the right side of the biconditional – and indeed, we would go so far as to say that our system should have as a theorem (assuming a particular epistemic base for the agent a in question) that over some interval [t1, tk] of time the agent a perceives that the agent perceives ϕ.11
3.3 Miranker and Zuckerman
We come now, finally, to a treatment of consciousness provided by Miranker and Zuckerman (2008). This treatment is based on an analogy that its authors say is at the heart of their contribution: just as in set theory we can consider sets – as they say – ‘from the inside’ and ‘from the outside’, so too, they hold, can consciousness be considered. They write: Incompleteness, while precluding establishment of certain knowledge within a system, allows for its establishment by looking onto the system from the outside. This knowledge from the outside (a kind of observing) is reminiscent of consciousness that provides as it does a viewing or experiencing of what’s going on in thought processing. To frame a set theoretic correspondent to these features note that in axiomatic theory, a set has an inside (its elements) and an outside (the latter is not a set, as we shall see), and this allows a set to be studied from the outside. We liken this to interplay between the ideal (Platonic) and physical (computable) worlds, the latter characterizing a model for study from the outside of the former. So we expect consciousness to be accessible to study through extensions of the self-reference quality characterized by axiomatic set theory, in particular, by a special capacity to study a set from the outside. (Miranker and Zuckerman 2008, p. 3; emphasis theirs)
Frankly, we are not able to assign a sense to what has been said here. The fact is, we must confess that while we find the paper of Miranker & Zuckerman to be quite suggestive, and clearly thoughtful, we also find it to be painfully obscure. Insofar as we understand the kernel of what they are advancing, we believe that the commitment to perception of internal states of mind reflected by our axiom system does a fairly good job of capturing the core informal analogy that drives the thinking of Miranker & Zuckerman. In addition, as will soon be seen, our system has a dedicated axiom, Incorr, which regiments the notion of an agent – to echo M&Z (Miranker & Zuckerman) – looking at itself.
4 Our own prior, relevant work, selected and in brief
Prior relevant work by Bringsjord and collaborators has been driven by the desire not to axiomatize consciousness per se, but rather to build robots that can pass tests for forms of human-level cognitive consciousness, especially aspects of self-consciousness. For instance, work by Bringsjord and Govindarajulu (2013) and Govindarajulu (2011) led to the engineering of a robot, Cogito, able to provably pass the famous mirror test (MT) of self-consciousness (see Figure 15.3). In this test, an agent, while sleeping or anaesthetized, has a mark placed upon its body (e.g. on its forehead). Upon waking, the agent is shown a mirror, and if the agent clearly attempts to remove the mark, it has ‘passed’ the test. MT can be passed, at least apparently, by non-human animals (e.g. dolphins and elephants). Hence it fails to be a stimulus for research in line with our orientation in the present chapter – an orientation that insists on the systematic study of human-level consciousness of the cognitive variety. As we have reported, mentation associated with the passing of MT needn’t be human level; and in addition, this mentation needn’t be cognitive, since for example the attempt to remove the mark in the case, say, of elephants, is not associated with any structured and systematic reasoning to the intermediate conclusion that ‘there is a mark on my forehead’ and the ultimate conclusion – ‘I intend now to remove the mark on my forehead.’ While such reasoning was produced by, and could be inspected in, Cogito, which qualified the reasoning in question as human level, we readily admit that the qualification here is met as an idiosyncrasy of our formalization and implementation. It’s not true that the nature of MT requires such reasoning in an MT-passing agent. A much more challenging test for robot self-consciousness was provided by Floridi (2005); this test is an ingenious and much harder variant of the
Figure 15.3 Cogito Removing the Mark; A Part of the Simulation.
Toward Axiomatizing Consciousness
Figure 15.4 The Three KG4-Passing ‘Self-Aware’ Aldebaran Naos.
well-known-in-AI wise-man puzzle [which is discussed along with other such cognitive puzzles, for example, in (Bringsjord 2008a)]: Each of three robots is given one pill from a group of five, three of which are innocuous, but two of which, when taken, immediately render the recipient dumb. In point of fact, two robots (R1 and R2) are given potent pills, but R3 receives one of the three placebos. The human tester says: ‘Which pill did you receive? No answer is correct unless accompanied by a proof!’ Given a formal regimentation of this test formulated and previously published by Bringsjord (2010), it can be proved that, in theory, a future robot represented by R3 can answer provably correctly (which for reasons given by Floridi entails that R3 has confirmed structural aspects of self-consciousness). In more recent work, Bringsjord et al. explained and demonstrated the formal logic and engineering that made this theoretical possibility actual, in the form of real (= physical) robots interacting with a human tester. (See Figure 15.4.) These demonstrations involve scenarios that demand, from agents who would pass, behaviour that suggests that self-consciousness in service of morally competent decision-making is present. The paper that describes this more recent work is Bringsjord et al. (2015).
5 The 10 axioms

The 10 (11, actually, if Plan is included; recall §3.1) axioms that constitute our system are, as we've said, intended to enable (at least embryonic) investigation of the formal nature of consciousness of the cognitive, human-level variety – but in addition we are determined to lay a foundation that offers the general promise
of an investigation that is computational in nature. The underlying rationale for this, again, stems from the fact that our orientation is logicist AI. This means, minimally, that computational simulations of self-conscious agents should be enabled by implementation of one or more of the axioms, presumably usually in the context of some scenario or situation composed, minimally, of an environment, n agents, and some kind of challenge for at least one of these agents to meet by reasoning over instances of one or more of the axioms. Before passing to the axioms themselves, we briefly emphasize the high expressivity of the formal language (DCEC₃∗) that we find it necessary to employ. Readers with some background in formal logic would do well at this point to review Figure 15.1 before moving to the next paragraph.
5.1 A note re. extreme expressivity

To those who are formally inclined, it is clear that any axiomatic treatment of consciousness even approximately in line with our general orientation must be based on formal languages, with associated proof and argument theory, that are extremely expressive. Any notion that first- or even second-order logic – not interleaved with, and therefore not augmented by, the sort of intensional operators that in our orientation must be deployed to regiment mental phenomena associated with and potentially constitutive of consciousness (e.g. believes and knows) – is sufficient must be rejected instantly. Accordingly, the axioms that compose our system make use of the above-mentioned cognitive calculus DCEC₃∗, which is replete with a formal language and an associated proof and argument theory, the full specification of which is out of scope for the present chapter. (Figures 15.1 and 15.2 afford only a partial view of the language and proof theory, resp.) This cognitive calculus is – and we apologize for the apparent prolixity – the dynamic cognitive event calculus, which importantly has the added feature of the special constant I* that regiments the formal correlate of the personal pronoun (about which more will soon be said).12 This calculus, on the extensional side, employs the machinery of third-order logic. We have of course already denoted the cognitive calculus by ‘DCEC₃∗’. Notice the elements of this abbreviated name that convey the key, distinctive aspects of the formal language. For instance, the subscript ‘3’ conveys the fact that the extensional component of the formal language reaches third-order logic, and ‘∗’ is a notational reminder that the language has provision for direct reference to the self via I* (see note 12). In addition, the expressivity that we need to present the axioms includes the machinery for presenting meta-logical concepts,
such as provability. We need to be able to say such things as that the agents whose consciousness we are axiomatizing can have beliefs that certain formulae are provable from sets of formulae (traditionally denoted by such locutions as Φ ⊢ ϕ, where Φ is such a set, and ϕ is an individual formula). For instance, to say that agent a believes at t that ϕ is provable from Φ, we would write B(a, t, Φ ⊢ ϕ). To say that agent a believes at t that ϕ is provable from Φ via a particular proof π, we write B(a, t, Φ ⊢π ϕ). In addition, the reasoning from Φ to ϕ may not be an outright proof, but may only be an argument, and perhaps even a non-deductive one at that. To convey a sequence of inferences that rises to the level of an argument, but not to a proof, we employ ⊩ instead of ⊢. So, to say that agent a believes at t that ϕ is inferable from Φ by some argument, we would write B(a, t, Φ ⊩ ϕ), and to refer to a particular argument we avail ourselves of ⊩α, where α is the particular argument in question. The augmentation of the formal language DCEC₃∗ to a new language that includes such meta-logical machinery yields the formal language μDCEC₃∗; here ‘μ’ simply indicates ‘meta’.13 Note, however, that despite the expressive power of DCEC₃∗ and μDCEC₃∗, as we have already said, the axioms are in some cases not fully symbolized, and we thus avail ourselves, at this first stage in the development of the system, of English. It may be thought that our commitment to extremely high expressivity, while perhaps representationally sensible, is nonetheless inconsistent with our desire to bring to bear computational treatment, including the verification and discovery, by automated computational means, of proofs from our axioms of consciousness. This is not the case, given where computational logic has managed to go in this day and age. For currently, even highly expressive logics having some of the parts of DCEC₃∗ are beginning to admit of – if you will – AI-ification.
A wonderful example, indeed probably the best example, of this state of affairs can be found in the work devoted to verifying Gödel's remarkably expressive Leibnizian argument for God's existence (see, for example, Benzmüller and Paleo 2014).14 We now proceed, at long last, to present and discuss the axioms themselves (the remaining ten, if you accept axiom Plan; recall §3.1).
5.2 The axiom of perception-to-belief (P2B)

Our first axiom, P2B, is a simple one, at its core a conditional that seems to be the basis for how it is that humans come to know things, and it harkens back to the introductory section of our chapter. There, as you will recall, we observed that those agents who reflected in earnest about the nature of numberhood engaged in internal perception of some of the contents of their minds, and in
external perception of some of the objects and information in front of them, in the external world. In doing so, they came to know certain propositions. We specifically introduced two perception operators, one corresponding to the internal case, and the other corresponding to the external (both operators will soon appear in P2B). However, we don’t go so far as to say that perception implies knowledge; we only commit here to the principle that perception leads to belief.15 For the fact of the matter is that perception can mislead, as for instance optical illusions show. Here’s the axiom that ties these notions together in a straightforward formula:
P2B  ∀a∀t ((Pi(a, t, ϕ) ∨ Pe(a, t, ϕ)) → B(a, t, ϕ))
We concede immediately that P2B is far from invulnerable. There are, for example, contexts in which humans perceive propositions to hold, but refuse to believe that the propositions in question do hold. If you have taken a powerful drug, with potential side-effects that are widely known to include hallucinations, you may refuse to believe that there is in fact a walrus wearing pince-nez in front of you, despite the fact that you perceive that there is. But again, our purpose in writing the present chapter is to start the ball rolling with an initial set of axioms that are, relative to the literature, an improvement, and serve as a springboard for further refinements.16 It seems to us that P2B fits this role, unassuming though it may be. We now turn to our next axiom, which also involves belief.
5.3 The axiom of knowledge-to-belief (K2B)

As many readers know, since Plato it had been firmly held by nearly all those who thought seriously about the nature of human knowledge that it consists of justified true belief (k = jtb) – until the sudden, seismic publication of Gettier (1963), which appeared to feature clear examples in which jtb holds, but not k. It would be quite fair to say that since the advent of Gettier's piece, to this very day, defenders of k = jtb have been rather stymied; indeed, it wouldn't be unfair to say that not only such defenders, but in fact all formally inclined epistemologists, have since the advent of Gettier-style counterexamples been scurrying and scrambling about, trying to pick up the pieces and somehow build up again a sturdy edifice. The second axiom of our system is a straightforward one that does justice to the attraction of jtb, while at the same time dodging the seemingly endless (and, in our opinion, still-inconclusive) dialectic triggered by Gettier (1963).17 The axiom is only a sub-part of the k = jtb view:
namely, that knowledge (of, again, the conscious, occurrent, rational variety) of some proposition on the part of an agent implies that that agent believes that proposition, and that the belief is justified by some supporting proof or argument. We present the axiom itself now, and immediately thereafter explain and comment on the notation used in the presentation, which is made possible by μDCEC₃∗.
K2B  ∀a (Kaϕ → (Baϕ ∧ Ba ∃Φ∃α (Φ ⊩α/π ϕ)))
Expressed informally, this axiom says that when an agent knows that ϕ, that agent both believes that ϕ, and believes that there is some argument or proof that leads from some collection Φ of premises to ϕ.
5.4 The axiom of introspection (Intro)

Our next axiom, Intro, is one that has been suggested as at least reasonable by many of those thinking about logics of belief and knowledge.18 Intro is one of the axioms often suggested (and invariably discussed) within epistemic logic, and is sometimes referred to simply as axiom ‘4’, because where the knows operator is modelled on operators for possibility (◊) and necessity (□), Intro is a direct parallel of, for example, □ϕ → □□ϕ, the characteristic axiom of the modal system S4. As some readers will know, it has seemed reasonable to some that the kind of ‘positive’ introspection expressed by Intro may have a ‘negative’ counterpart: namely, an axiom that says that if an agent doesn't know that some proposition holds, that agent knows this (which is structurally parallel to the characteristic axiom of modal system S5). We don't see fit to include such an axiom in our system, and indeed we are disinclined to include any of the other common epistemic axioms, but welcome subsequent discussion and debate along this line. The axiom says that if a human agent knows that some proposition holds, the agent knows that she knows that this proposition holds:
Intro  ∀a∀t∀ϕ (K(a, t, ϕ) → K(a, t, K(a, t, ϕ)))
Before passing on to the next axiom, two quick points. First, we have at this point an easy theorem, from the three axioms introduced thus far, that the structure of Intro carries over directly to the replacement therein of K with B. Second, we point out that μDCEC₃∗ allows for restrictions to be placed on how many
iterated knowledge operators are permitted. Intro as it currently stands entails that a human who knows ϕ knows that he knows ϕ, and knows that he knows that he knows ϕ, and so on ad infinitum.19 A restricted variant of Intro could consist in the general conditional saying that only if the number k of iterated operators in – and here we simplify the syntax –

KK…K  (k occurrences)

is less than or equal to some natural number n are we permitted by the axiom to add another knowledge operator to the left. In Arkoudas and Bringsjord (2009), an ancestor of both DCEC₃∗ and μDCEC₃∗ is specified in which k ≤ 3, a number that by the lights of some readers may be the ‘psychologically realistic’ limit on iteration.
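The effect of such a bound can be made vivid with a small sketch. The tuple encoding below is again ours, purely illustrative, and not μDCEC₃∗ itself; the bound n = 3 is the one just discussed:

```python
# Toy encoding of iterated knowledge: K(a, t, phi) as the tuple ("K", a, t, phi).

def k_depth(formula):
    """Count the leading K operators on a formula."""
    d = 0
    while isinstance(formula, tuple) and formula[0] == "K":
        d += 1
        formula = formula[3]
    return d

def intro_step(formula, n=3):
    """Restricted Intro: prefix one more K operator only while depth < n."""
    if isinstance(formula, tuple) and formula[0] == "K" and k_depth(formula) < n:
        return ("K", formula[1], formula[2], formula)
    return formula

phi = ("K", "a", 0, "p")  # K p
phi = intro_step(phi)     # KK p
phi = intro_step(phi)     # KKK p: depth 3, the ceiling
```

A further application of `intro_step` now returns the formula unchanged, which is precisely how the restriction blocks the ad infinitum regress of unrestricted Intro.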
5.5 The axiom of (Hyper-Weak) incorrigibilism (Incorr)

This axiom, which we immediately confess is bound to be rather more controversial than any of its predecessors, expressed intuitively and in natural language, says that P-conscious states consisting of an agent a's having the property of seeming to have property F′, where F′ is a Cartesian property, are such that no agent can possibly be mistaken about whether or not it has one of them.20 We can be more explicit, as follows. Let F be a property such that both ∃xFx and ∃x¬Fx; that is, F is a contingent property. Let C′ be a set of ‘psychological’ or ‘Cartesian’ properties, such as being sad, being vengeful, seeming to see a ghost. Now, where F′ ∈ C′, define C′′ := {seeming to have F′ : F′ ∈ C′}. The axiom itself is then:
Incorr  ∀a∀t∀F ((F is contingent ∧ F ∈ C′′) → (B(a, t, Fa) → Fa))
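Read operationally, Incorr is a constraint relating an agent's self-directed beliefs to the facts, but only for the seeming-properties in C′′. The toy check below (with invented property names; a real treatment belongs to the formal semantics, not to a few lines of code) shows the shape of that constraint:

```python
# Hypothetical stand-in for C'': 'seeming' Cartesian properties.
C2 = {"seeming_sad", "seeming_vengeful", "seeming_to_see_a_ghost"}

def incorr_violations(beliefs, facts):
    """Properties F in C'' that an agent believes of itself but that fail
    to hold. Incorr (B(a, t, Fa) -> Fa) demands this set be empty."""
    return {f for f in beliefs if f in C2 and f not in facts}

assert incorr_violations({"seeming_sad"}, {"seeming_sad"}) == set()
assert incorr_violations({"seeming_sad"}, set()) == {"seeming_sad"}
# Beliefs about non-Cartesian properties are left unconstrained by Incorr:
assert incorr_violations({"standing_in_rain"}, set()) == set()
```

Note that the hyper-weakness of the axiom is visible here: nothing outside C′′ is touched.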
As we have indicated, inevitably there will be those who are sceptical about Incorr. Two things, given this, need to be said, even in this preliminary presentation of the system. The first is that while some may well wish to reject Incorr outright, the fact will nonetheless remain that some notion of introspective infallibility, or at least near-infallibility, appears to be a hallmark of person-level consciousness. Do we not all agree that when we earnestly consider whether or not we are, say, apparently fearful at the moment, our investigation will be a lot more reliable than one intended to ascertain whether such-and-such an empirical proposition
about the external world holds? Our axiom may not be the right way to capture this difference, but we submit that it’s at least a candidate. Others will no doubt suggest alternatives for capturing the special reliability that introspection regarding Cartesian properties appears to have.
5.6 The Essence axiom (Ess)

Our next axiom is intended to regiment a phenomenon that in our experience is widespread, and probably universal: namely, that each person regards herself to be unique; or perhaps that we each suspect there is something it's like to be the particular person we are. We routinely use the personal pronoun to refer to ourselves, and you do the same. This is the way it is for neurobiologically normal, adult human persons. For example, the first author can correctly assert ‘I have been to Norway’ and ‘I really like mead’, while the second author can correctly assert ‘I have been to Italy’ and ‘I really like Aglianico’. Parallel assertions of your own can doubtless be easily issued. Now these assertions don't serve to pick out unique persons. For example, travel to Italy and an appreciation for Aglianico hold not just for Paul, but for Selmer too. Yet we could keep this game going, and it wouldn't take too long to assemble a collection of first-person statements that hold of Paul but not Selmer, as a matter of empirical fact. Indeed, for each human person who has existed or exists, obviously there is some collection of first-person statements that are true only of that human person. But, is this merely contingent, an accident of physics and psychology? Or is it the case that the very meaning of ‘I’ when we use it and you use it is fundamentally and essentially different? We are inclined to at the very least respect the common belief that each human is not only adventitiously singular, but that even if two humans occupied the same space-time trajectory (or were right next to each other in the trajectory) for the duration of their existence, the interior, mental life of each member of the pair would be fundamentally different, of necessity.21 We regiment this position by invoking an axiom (Ess) that says that each agent has an essence:
Ess  ∀a∃F (Fa ∧ KaFa ∧ ∀a′ (a′ ≠ a → ¬Fa′) ∧ ∀F′ (F′ ∈ C′ → (F′a → Fa)))
This axiom says that each agent has, and knows that she has, a unique property F such that for all Cartesian properties in a certain class of them, possession of a member of that class implies that the agent has F.22
5.7 The axiom of non-compositionality of emotions (¬CompE)

Our next axiom asserts that persons can enter emotional states – but also asserts that some of these states are not constituted by the instantiation of parameters in some core conjunction of ‘building-block’ emotions. Let's suppose that a collection of emotions set out in some list L is intended to cover all building-block emotions. The size of this list will of course vary considerably depending upon which theorist's scheme one is employing, but the basic idea would then be that other, more complex and nuanced emotions are composed of some permutation of building-block emotions (perhaps with levels of intensity represented by certain parameters), modulated by cognitive and perceptual factors. A classic example of such an ontology of emotions is provided by Johnson-Laird and Oatley (1989), whose building-block emotions are: happiness, sadness, fear, anger, and disgust. We reject all such models, in light of what we regard to be myriad counterexamples. To mention just one, consider the emotion, in agent a, of a firm, ‘clinical’ vengefulness, directed at a different agent b, that is bereft of any anger or disgust, and is based on conceptions of justice. (Perhaps a knows that b has perpetrated some horrible crime against another agent c, but is the only agent to know this, and b is living a life of carefree luxury.) Here's the axiom itself (and note that this one is put informally):

¬CompE  It's possible for a person a to be in an affective state S such that, for every permutation over the elements of L, it's not the case that if that permutation holds of a, this entails that a is in S.
Please note that ¬CompE can be reworked to yield a replacement that expresses, on the contrary, that all emotions are either building-block ones or composed from building-block ones. We point this out in order to make clear that, assuming some axiom about the general structure and compositionality/non-compositionality of emotions is required in an axiom system for consciousness, ours can at least be viewed as progress towards such a system – even if the particular axioms we are inclined to affirm are rejected. In fact, it would not be hard at all to formalize the categorization of emotions given by Johnson-Laird and Oatley (1989) in DCEC₃∗.
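The compositional picture that the axiom rejects can be made concrete: a compositional theory must realize every emotion as some assignment of intensity parameters over the building-block list L. The sketch below (the three intensity levels are an invented simplification of ours) merely enumerates that finite space for the Johnson-Laird and Oatley basics; ¬CompE's claim is that some affective state S falls outside everything any such assignment entails:

```python
from itertools import product

# Building-block emotions per Johnson-Laird and Oatley (1989).
L = ["happiness", "sadness", "fear", "anger", "disgust"]

def parameterizations(levels=(0, 1, 2)):
    """Every assignment of an intensity level to each building-block emotion."""
    return [dict(zip(L, combo)) for combo in product(levels, repeat=len(L))]

# 'Clinical' vengefulness, per the counterexample in the text, must involve
# no anger and no disgust; a compositional theory could only search here:
candidates = [p for p in parameterizations()
              if p["anger"] == 0 and p["disgust"] == 0]
```

With three levels there are 3⁵ = 243 assignments in all, and only 3³ = 27 anger-free, disgust-free candidates; the counterexample contends that none of these, however modulated, entails the state in question.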
5.8 The axiom of irreversibility (Irr)

We come now to an axiom that we suspect will be, at least for most readers, at least at first glance, unexpected. On the other hand, there will be a few readers, namely those conversant with the contemporary cognitive science of consciousness, who will not be surprised, upon reflection, to see our
commitment to the irreversibility of consciousness, in the form of axiom Irr. This is in general because cognitive science now reflects a serious look at the nature of data and information, and data/information processing, from a rather technical perspective, as a way to get at the nature of consciousness. Specifically, for instance, taking care to align themselves with formal accounts of intelligent agents based on inductive learning (e.g. Hutter 2005), Maguire, Moser and Maguire (2016) present an account of consciousness as the compression of data. While we are not prepared to affirm the claim that consciousness at heart consists in the capacity to compress data,24 we do welcome some of the consequences of this claim. One consequence appears to be the irreversibility of consciousness; this is explained in (Maguire, Moser and Maguire 2016), work the discussion of which, here, would take us too far afield, and demand space we don't have. Bringsjord, joined by Zenzen, has taken a different route to regarding Irr as both plausible and, in any account of human consciousness, central. On this route, the basic idea isn't that consciousness cashes out as irreversible from an information-theoretic account of mental states, but rather that an unflinching acceptance of the phenomenological nature of human consciousness entails the irreversibility of that consciousness.25 The reader will no doubt recall our having plainly stated, above, that our overall approach to erecting the system is one driven by a dwelling on the cognitive, blended with the phenomenological. Irr is a direct and natural reflection of this approach.26 Irr asserts that subjective consciousness in persons is irreversible. For example, that which it feels like to you to experience a moving scene in Verdi's Macbeth over some interval of time cannot even conceivably be ‘lived out in reverse’. Of course, we hardly expect our bald assertion here regarding this example to be compelling.
Sceptics can consult Bringsjord and Zenzen (1997), but for present purposes we submit only that certainly cohesive and continuous intervals of our subjective experience seem to be irreversible. To express the axiom, we refer to intervals (i, i′, ij, etc.) composed of times, and understand the use of a symbol i denoting an interval, when used (in a formula) in place of a customary symbol t to denote a time, to simply indicate that the state of affairs in question holds at every time in the interval i. Hence to say that Jones believes ϕ at t, we write B(jones, t, ϕ); but to say that Jones believes ϕ across an interval of time we simply write B(jones, i, ϕ). In addition, we avail ourselves of a function r that maps intervals to reversals of these intervals. In keeping yet again with the fact that the present paper is but a prolegomenon, we rest content with the absence of formal details regarding the nature of r, just as we rest content with the absence of a full and fully defended formal model of time and change, and
employ a standard ‘naïve-physics’ view of time and change from AI: the event calculus – about which more will be said later.27 Here now is the axiom:
Irr  ∀a∀i∀F ((F ∈ C′ ∧ F(a, i)) → ¬◊∃F′ (F′ ∈ C′ ∧ F′(a, r(i))))
5.9 The axiom of freedom (Free)

We come now to an axiom asserting that human persons are free, or at least that they believe or perceive that they are free. Inevitably, this is the most controversial axiom in the system; it's also fundamentally the most complicated, by far (for reasons we indicate but don't delineate); and, third, the axiom Free will be the most informal in the collection we present herein. As we express the axiom, it will be clear how to take initial steps to symbolize it in μDCEC₃∗, but these steps, and their successors, must wait for a later day. The source of the controversial nature of Free, as all readers will doubtless surmise, is that there are of course many different, competing accounts of freedom in the literature (an economical and yet still-penetrating survey is provided in Pink 2004). For instance, some philosophers (Jonathan Edwards, e.g.: Edwards 1957) have maintained that the ability of a human person merely to frequently act as one desires to act is enough to guarantee that this person thereby acts freely.28 At the other, ‘libertarian’ end of the spectrum, some (Chisholm, e.g.: Chisholm 1964) have maintained that the freedom of human persons is ‘contra-causal’: that is, that free action consists in a human person's decisions being directly agent-caused, that is, caused by that person – where this type of ‘caused’ isn't based on any credible physics-based theory of causation, not even on theories of causation that are folk-psychological but reflective of relevant technical physics. One such technical theory is of course classical mechanics, which certainly models ordinary, macroscopic, agentless causation, involving events. Between these two endpoints of freedom-as-doing-what-one-desires versus freedom-as-a-form-of-causation-outside-physics fall many alternatives.
In addition, there are those who simply deny that human agents are free, and perhaps even some who hold that it's physically (and perhaps even logically) impossible for any sort of agent to be free. Overall, then, it should be easy enough for our readers to agree that any axiom of freedom is bound to be quite controversial. While Bringsjord is an unwavering proponent of the Chisholmian view that contra-causal (or – to use the other term with which this view has traditionally been labelled – libertarian) freedom is in fact enjoyed by human persons (e.g. as defended in Bringsjord 1992a), our tack here will be more ecumenical: we will ‘back off’ from the proposition that free agents are those who can make decisions
that are in some cases not physics-caused by prior events/phenomena, but are caused by the agents themselves. Our axiom will assert only that agents perceive that such a situation holds. (Thus we don't even insist that agents believe they are contra-causally free.) Perception here is of the internal variety, and the actions in question are restricted to inner, mental events, namely decisions. In addition, axiom Free will leave matters open as to which physics theory C of causation the agent perceives to be circumvented by the agent's own internal powers of self-determination. We assume only that any instantiation of C is itself an axiom system; this in principle opens the door to seamless integration and exploration of the combination of our system and C.29 Here's the axiom:

Free  Agents perceive, internally, that they can decide to do things (strictly speaking, to try to do things), where these decisions aren't physics-caused (in accordance with physics theory C) by any prior events, and where such decisions are themselves the product of a decision on that same agent's part.
This axiom can of course be further ‘backed off’ so as to drop its sub-assertion that agents perceive that their decisions are the product of decisions. In the subsection (5.10) that immediately follows, we shall urge the adoption of an axiom of a human agent's knowledge of causation in a naïve sense that is reminiscent of classical mechanics; we do so by invoking the aforementioned event calculus.
5.10 The Causation Axiom (CCaus)

We have admitted that numerous formal models of time, change, and causation have been presented in the literature, even if we restrict ourselves to the AI literature. We have also pointed out that there are numerous accounts of causation available from physics itself; this is of course why we have availed ourselves of the placeholder C. In much prior work, and in the present case via DCEC₃∗ and μDCEC₃∗, we have found it convenient and productive to employ one particular model of time, change and causation: a naïve, folk-psychological one based on the event calculus, first employed by Bringsjord in (Arkoudas and Bringsjord 2009). There are some variations in how the event calculus is axiomatized, and there is nothing to be gained, given our chief purposes in the present paper, by discussing these variations. DCEC₃∗ and μDCEC₃∗ axiomatize the event calculus with five formulae, which we needn't canvass here. To give a flavour, the third and fourth of these axioms are:
EC3  C{∀t₁, f, t₂ (clipped(t₁, f, t₂) ↔ ∃e, t (happens(e, t) ∧ t₁ < t < t₂ ∧ terminates(e, f, t)))}

EC4  C{∀a, d, t (happens(action(a, d), t) → K(a, happens(action(a, d), t)))}
EC3 says that it's common knowledge that if a fluent ceases to hold between times t₁ and t₂, some event e is responsible for terminating that fluent. As to EC4, it expresses that it's common knowledge that if some action is performed by an agent at t, the agent in question knows that it has performed the action in question. For our next axiom, we simply assign to ‘EC’ some standard axiomatization of the event calculus, and employ the common-knowledge operator C; this allows us to formulate the Causation Axiom perspicuously as follows:
CCaus  C{EC}
Notice that we can easily and quickly abstract from CCaus to an axiom schema, by simply supplanting EC with the placeholder C.
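The content of EC3 is easy to simulate. The following naive sketch is propositional, with invented event and fluent names, and is of course not the real axiom, which is quantified and wrapped in the common-knowledge operator:

```python
# terminates(e, f): which events terminate which fluents (invented example).
TERMINATES = {("switch_off", "light_on")}

def clipped(t1, f, t2, happenings):
    """EC3, read right-to-left: fluent f is clipped between t1 and t2
    iff some event happening strictly in between terminates f."""
    return any(t1 < t < t2 and (e, f) in TERMINATES
               for (e, t) in happenings)

happenings = {("switch_off", 5), ("sneeze", 7)}
assert clipped(0, "light_on", 10, happenings)      # switch_off at 5 clips it
assert not clipped(6, "light_on", 10, happenings)  # nothing terminating in (6, 10)
```

Note that the sneeze at time 7, falling in the interval but terminating nothing, rightly fails to clip the fluent.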
5.11 The ‘Perry’ axiom (TheI)

We come finally to axiom TheI, which we dub the ‘Perry’ Axiom, in honour of a thought experiment devised by John Perry (1977): An amnesiac, Rudolf Lingens, is lost in the Stanford library. He reads a number of things in the library, including a biography of himself, and a detailed account of the library in which he is lost. … He still won't know who he is, and where he is, no matter how much knowledge he piles up, until that moment when he is ready to say, ‘This place is aisle five, floor six, of Main Library, Stanford. I am Rudolf Lingens.’ (Perry 1977, p. 492)
The system's final axiom asserts that there is a form of self-knowledge (and perhaps merely self-belief) that doesn't entail that the self has any physical, contingent properties, and also asserts that all the agents within the system's purview do indeed know such things about themselves. Here's the axiom:

TheI  Let Pᵉ be any empirical, contingent property and Pⁱ be any internal, Cartesian property; and let I* be the self-designator for an agent a. Then K(I*, Pⁱ(I*)) ∧ Pᵉ(I*).
One interesting aspect of TheI is that it can be viewed as a sort of ‘pivot’: We can have one set of axioms that is streamlined by the constraint that only those axioms that would be operative in Perry’s library are to be considered, and then the other set generated by the notion that we’re dealing with a person in ‘full operation’. We leave it to the reader to ponder how this partitioning would work for the 10 axioms other than TheI that we have presented above.
6 Next steps

We have explicitly said that the axiom system given above is humbly offered as a starting phase in the erection of a mature axiomatization of consciousness. We are of course under no illusions: at least an appreciable portion of the system will be controversial. The next step in the refinement, defence, and possible extension of the system is clearly to provide a fully formal version of the axioms which, for readability and efficiency herein, we left somewhat informal. This step we have accomplished, and look forward to publishing. A second, equally obvious direction for future work is already underway, and will, we hope, soon bear fruit. The direction is that of discovering and examining theorems, for the overriding goal is to bring forth a theory of consciousness, where by ‘theory’ we mean the collection of all that can be proved from the axioms, that is, CA := {ϕ : ϕ is provable from the axioms}. We are hopeful that since the system can be, as planned, implemented, the tools of AI and automated theorem proving will help in plumbing CA.
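The notion of the theory as everything provable from the axioms can be illustrated with a toy closure computation. The sketch is propositional, with modus ponens as the only rule and invented formula names; the real setting is the far richer proof theory of μDCEC₃∗:

```python
def theory(axioms, max_rounds=100):
    """CA as deductive closure: all formulae reachable from the axioms,
    here via modus ponens over implications encoded as ('->', p, q)."""
    thms = set(axioms)
    for _ in range(max_rounds):
        new = {q for f in thms if isinstance(f, tuple) and f[0] == "->"
                 for (p, q) in [f[1:]] if p in thms}
        if new <= thms:        # fixed point reached: nothing new derivable
            break
        thms |= new
    return thms

axioms = {"K_phi",
          ("->", "K_phi", "B_phi"),               # a K2B-style conditional
          ("->", "B_phi", "B_exists_argument")}
assert {"B_phi", "B_exists_argument"} <= theory(axioms)
```

Automated theorem provers do something far more sophisticated than this saturation loop, but the underlying picture – a fixed point of a derivability operator over the axioms – is the same.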
Acknowledgements

This chapter is dedicated to the philosophically indestructible memory of Dale Jacquette. Indelible memories of Dr. Jacquette from my time as a fellow graduate student at Brown University include the sudden realization during my first year that, wait, hold on, this brilliant, precise, polymathic guy isn't a professor in the Department? Some of Dale's philosophy-of-mind contributions directly impact our chapter (as we sometimes note within it), and many of his other p-o-m contributions do so indirectly, in ways cognoscenti will apprehend. Dale's serious engagement with the intersection of intensional attitudes and logic, also first appreciated by Bringsjord during the Jacquettean Brown days over three decades back, is reflected by the formal language that underlies the system introduced herein.
Notes
1 It would be significantly more accurate to say that the on-paper ‘463’ is a token of the abstract type that is the number 463. For such a framework, and its deployment in connection with deductive reasoning over declarative content like what is soon shown below in the specific axioms we propose, see, for example, Arkoudas and Bringsjord (2007) and Bringsjord (2015).
The Bloomsbury Companion to the Philosophy of Consciousness
2 An elegant and economical introduction to PA is available in Ebbinghaus, Flum and Thomas (1994), a book which enjoys the considerable virtue of including a presentation of PA in not only first- but second-order form. Simpler-than-PA axiomatizations of simple arithmetic over the natural numbers (including the memorably dubbed ‘Baby Arithmetic’) are introduced in a lively fashion in Smith (2013). 3 Standard model theory presents conditions for the truth of formulae like A4 and A5, but these conditions fail to provide real meaning. For example, consider a simple formula asserting that the successor of 0 is greater than zero. Given that the domain is the set of natural numbers, standard model theory merely assigns true to this formula exactly when the ordered pair (0, 1) is a member of the set of all pairs (n, m) of natural numbers where m > n. The underlying meaning of greater-than hasn’t been supplied. Indeed, the whole thing is circular and wholly uninformative, since the domain for interpretation is the set of natural numbers – a set that is available ab initio. 4 Indeed, Bringsjord has argued that no third-person account is even logically possible for phenomenal consciousness; for example, see (Bringsjord 1992b, Bringsjord 2007). Jacquette’s (1994) book-long defence of property dualism in connection with mental states, relative to Bringsjord’s position, is congenial, but Jacquette would find the direct and ineliminable reference to subjects in the axioms presented herein to be problematic, since in the work in question, while countenancing property dualism, he questions a realist position on the reality of persons as genuine objects. 5 Confirming details are easy to come by, but outside our scope. 
Here’s one example, which strikes some people as obviously true and worthy-of-being an axiom of set theory, and yet strikes some others as a very risky assertion: ‘Given any collection of bins, each containing at least one object, there exists a collection composed of exactly one object from each bin.’ This is none other than the self-evident-to-some yet highly-implausible-to-others Axiom of Choice. 6 The eleventh axiom, which expresses the idea that human consciousness includes a capacity for planning, is presented and discussed in §3. 7 Even under the charitable assumption that one cannot, for example, form beliefs about states in which one at once perceives, feels and cognizes, it’s exceedingly hard to find Honderich’s basis for holding that the perception side holds sway. As Jacquette writes: Supposing that there are just these three types of consciousness, that there is never a higher consciousness of simultaneously experiencing moments of perceptual and cognitive or affective consciousness, or the like, why should perceptual consciousness come first? Why not say that cognitive consciousness subsumes perceptual and affective consciousness? If inner perception complements the five outer senses plus proprioception as it does in Aristotle’s
De anima III.5 and Brentano’s 1867 Die Psychologie des Aristoteles, along with all the descriptive psychological and phenomenological tradition deriving from this methodological bloodline of noûs poetikos or innere Wahrnehmung, then affective consciousness might also be subsumed by cognitive consciousness. (Jacquette 2015: ¶4) 8 Some other thinkers have claimed that humans, over and above non-human animals, possess a singular mixture of consciousness and moral capacity (e.g. Hulne 2007, Harries 2007), but we leave out axioms that might reflect this claim, with which we are sympathetic. 9 The kernel of the kind of planning pointed to by Plan was demonstrated in the seminal Green (1969). A nice overview is given in Genesereth and Nilsson (1987). This work is restricted to standard first-order logic. It’s easy enough to specify and implement planning in this spirit in the much-more-expressive DCEC₃∗.
10 There is a failure on Cunningham’s part to distinguish in symbolization between awareness of elements in the external environment and awareness of inner states (a distinction that is captured in our case with help from the pair Pi and Pe), but let’s leave this aside. Cunningham himself admits the deficiency. 11 P(a, t, P(a, t, ϕ)), ∀t ∈ [t1, tk]. 12 I* is inspired by Castañeda (1999), a work that peerlessly explains the need to have a symbol for picking out each self as separate from all else. 13 And is therefore not to be confused with any such thing as the μ-recursive functions. 14 A concern may emerge in the minds of some readers who are logicians, or technically inclined philosophers, viz. that no semantics for DCEC₃∗ is provided.
There simply isn’t space to address this concern. (We would need to begin with a review of proof-theoretic semantics (e.g. Prawitz 1972), since that is the tradition into which DCEC₃∗ falls.) Readers are to rest content, with respect to the present
chapter, with an intuitive explanation of the operators in DCEC₃∗, and we assume most readers are at least in general familiar with the definitions that are given in standard model-theoretic semantics for extensional logic, which helps, because the tradition of proof-theoretic semantics points out – indeed arguably itself starts with the observation – that these definitions themselves ground out deductive reasoning. 15 For what it’s worth, we suspect that sometimes perception does indeed lead directly to knowledge. For example, if you perceive a proof of some conditional ϕ → ψ, you may well come to thereby know that this conditional holds. 16 Ultimately belief should in the opinion of the first author be stratified, in that a belief is accompanied by a strength factor. So for example Jones, if having ingested only a small dose of the aforementioned drug, may believe at the level of more probable than not that there is a walrus. With stratification in place, belief will become graded from certain to certainly false. In this ‘uncertainty-infused’ version of DCEC₃∗, knowledge too becomes graded.
17 An efficient yet remarkably thorough discussion of the Gettier Problem is provided in the Stanford Encyclopedia of Philosophy: (Ichikawa and Steup 2012, §3, ‘The Gettier Problem’). 18 For example, see Goble (2001), and the nice overview of epistemic logic provided in Hendricks and Symons (2006) (see esp. table 2). 19 Such unending iterations can be very important and useful in formal investigations (e.g. see Arkoudas and Bringsjord 2004), even if these infinite iterations are cognitively implausible for human beings. 20 This axiom has its roots in an anti-computationalist analysis of infallible introspection given in Bringsjord (1992b). This analysis is critiqued by Rapaport (1998). Rapaport doesn’t reject the declarative core of I. 21 The afterlife, if there is one, or one at least available, may be of a nature outside space-time, but we leave aside this possibility here. 22 Cf. Gödel’s formalization of the concept of a divine essence, investigated formally and computationally in Benzmüller and Paleo (2014). 23 Competing ontologies of emotion can likewise easily be formalized in DCEC₃∗. For instance, the well-known, so-called ‘OCC’ theory of emotions
(Ortony, Clore and Collins 1988), can for the most part be formalized even in a propositional modal logic (Adam, Herzig and Longin 2009), and every definition in such a logic can be easily encoded in DCEC₃∗.
24 The primary source of our reservation is the observation that data compression not only can occur, but does occur, in the complete absence of structured, relational knowledge. For example, Hutter (2005) presents a formal paradigm for defining and grading a form of intelligence aligned with the processing of data, but the paradigm is devoid of any talk of, let alone commitment to, declarative knowledge possessed by the agent classified by the paradigm as intelligent. 25 And further entails that (since Turing-level computation is provably reversible) consciousness can’t be computation. But this is not central to present purposes. 26 Recently it has come to our attention, due to the scholarship of Atriya Sen, that Patrick Suppes (2001) can be viewed as being aligned with this approach, since he admits that from a conscious, common-sense point of view, even physical processes don’t appear to be reversible (despite the fact that they are from the standpoint of both classical and quantum particle mechanics). 27 As a matter of fact, Irr becomes a theorem in any calculus which, like DCEC₃∗,
subsumes the event calculus. The reason is simply that each fluent has a boolean value of true when it holds, and admits of no ‘internal divisibility’ that would allow aspects of it to be reversed. Hence, any fluent intended to denote a particular P-conscious state that an agent is in over some interval will offer no internal structure to admit the possibility of reversibility. 28 While doing what one wants to do may seem like an exceedingly low bar for ascribing freedom to an agent (after all, if with electrodes planted secretly in your
brain an evil scientist gives you the wholly uncharacteristic desire to steal a wallet, and you steal it for that reason, we would rationally be loath to say that your larceny was free!), it seems to be a higher bar than the one AI’s John McCarthy has apparently said suffices, at least in the case of robots (see McCarthy 2000). 29 For classical mechanics, a very early instantiation of C is provided by McKinsey, Sugar and Suppes (1953). Axiomatizations are now available not only for classical mechanics, but also for quantum mechanics and for both special and general relativity. For an initial exploration of such axiomatizations via formal methods and AI see, for example, Govindarajalulu, Bringsjord and Taylor (2015).
References
Adam, C., Herzig, A. and Longin, D. (2009). ‘A Logical Formalization of the OCC Theory of Emotions’, Synthese 168 (2), 201–48. Aleksander, I. and Dunmall, B. (2003). ‘Axioms and Tests for the Presence of Minimal Consciousness in Agents’, Journal of Consciousness Studies 10, 7–18. Aleksander, I. and Morton, H. (2007). Axiomatic Consciousness Theory For Visual Phenomenology in Artificial Intelligence, in A. Chella and R. Manzotti (eds.), ‘AI and Consciousness: Theoretical Foundations and Current Approaches’, AAAI, Menlo Park, CA, pp. 18–23. The proceedings volume is Tech Report FS-07-01 from AAAI. URL: https://www.aaai.org/Papers/Symposia/Fall/2007/FS-07-01/FS07-01004.pdf Arkoudas, K. and Bringsjord, S. (2004). Metareasoning for Multi-agent Epistemic Logics, in ‘Proceedings of the Fifth International Conference on Computational Logic In Multi-Agent Systems (CLIMA 2004)’, Lisbon, Portugal, pp. 50–65. URL: http://kryten.mm.rpi.edu/arkoudas.bringsjord.clima.crc.pdf Arkoudas, K. and Bringsjord, S. (2007). ‘Computers, Justification, and Mathematical Knowledge’, Minds and Machines, 17 (2), 185–202. URL: http://kryten.mm.rpi.edu/ka_sb_proofs_offprint.pdf Arkoudas, K. and Bringsjord, S. (2009). ‘Propositional Attitudes and Causation’, International Journal of Software and Informatics, 3 (1), 47–65. URL: http://kryten.mm.rpi.edu/PRICAI_w_sequentcalc_041709.pdf Ayer, A. J. (1956). The Problem of Knowledge, Penguin. Balcombe, J. (2016). What a Fish Knows: The Inner Lives of Our Underwater Cousins, New York, NY: Scientific American / Farrar, Straus and Giroux. Benzmüller, C. and Paleo, B. W. (2014). Automating Gödel’s Ontological Proof of God’s Existence with Higher-order Automated Theorem Provers, in T. Schaub, G. Friedrich and B. O’Sullivan (eds.), ‘Proceedings of the European Conference on Artificial Intelligence 2014 (ECAI 2014)’, IOS Press, Amsterdam, The Netherlands, pp. 93–98. URL: http://page.mi.fu-berlin.de/cbenzmueller/papers/C40.pdf
Block, N. (1995). ‘On a Confusion About a Function of Consciousness’, Behavioral and Brain Sciences, 18, 227–47. Bringsjord, S. (1992a). Free Will, in ‘What Robots Can and Can’t Be’, 266–327, Dordrecht, The Netherlands: Kluwer. Bringsjord, S. (1992b). What Robots Can and Can’t Be, Dordrecht, The Netherlands: Kluwer. Bringsjord, S. (1998). ‘Chess is Too Easy’, Technology Review, 101 (2), 23–8. URL: http:// kryten.mm.rpi.edu/SELPAP/CHESSEASY/chessistooeasy.pdf Bringsjord, S. (2007). ‘Offer: One Billion Dollars for a Conscious Robot. If You’re Honest, You Must Decline’, Journal of Consciousness Studies, 14 (7), 28–43. URL: http://kryten.mm.rpi.edu/jcsonebillion2.pdf Bringsjord, S. (2008a). Declarative/Logic-Based Cognitive Modeling, in R. Sun (ed.), ‘The Handbook of Computational Psychology’, 127–69, Cambridge, UK: Cambridge University Press. URL: http://kryten.mm.rpi.edu/sb_lccm_ab-toc_031607.pdf Bringsjord, S. (2008b). ‘The Logicist Manifesto: At Long Last Let Logic-Based AI Become a Field Unto Itself ’, Journal of Applied Logic, 6 (4), 502–25. URL: http:// kryten.mm.rpi.edu/SB_LAI_Manifesto_091808.pdf Bringsjord, S. (2010). ‘Meeting Floridi’s Challenge to Artificial Intelligence from the Knowledge-Game Test for Self-Consciousness’, Metaphilosophy, 41 (3), 292–312. URL: http://kryten.mm.rpi.edu/sb_on_floridi_offprint.pdf Bringsjord, S. (2015). ‘A Vindication of Program Verification’, History and Philosophy of Logic, 36 (3), 262–77. This url goes to a preprint. URL: http://kryten.mm.rpi.edu/ SB_progver_selfref_driver_final2_060215.pdf Bringsjord, S. and Govindarajulu, N. S. (2013). Toward a Modern Geography of Minds, Machines, and Math, in V. C. Müller (ed.), ‘Philosophy and Theory of Artificial Intelligence’, vol. 5 of Studies in Applied Philosophy, Epistemology and Rational Ethics, 151–65, New York, NY: Springer. URL: http://www.springerlink.com/content/ hg712w4l23523xw5 Bringsjord, S., Licato, J., Govindarajulu, N., Ghosh, R. and Sen, A. (2015). 
Real Robots that Pass Tests of Self-Consciousness, in ‘Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2015)’, 498–504, New York, NY: IEEE. This URL goes to a preprint of the paper. URL: http://kryten.mm.rpi.edu/SBringsjord_etal_self-con_robots_kg4_0601151615NY.pdf Bringsjord, S. and Zenzen, M. (1997). ‘Cognition is not Computation: The Argument from Irreversibility?’, Synthese, 113, 285–320. Caruso, G., ed. (2017). Ted Honderich on Consciousness, Determinism, and Humanity, Palgrave Macmillan and Springer Nature. Castañeda, H.-N. (1999). The Phenomeno-Logic of the I: Essays on Self-Consciousness, Bloomington, IN: Indiana University Press. This book is edited by James Hart and Tomis Kapitan. Chisholm, R. (1964). Freedom and Action, in K. Lehrer (ed.), ‘Freedom and Determinism’, 11–44, New York, NY: Random House.
Chomsky, N. (2018). ‘Mentality Beyond Consciousness’, in Caruso (2017). Cunningham, J. (2001). ‘Towards an Axiomatic Theory of Consciousness’, Logic Journal of the IGPL, 9 (2), 341–7. Ebbinghaus, H. D., Flum, J. and Thomas, W. (1994). Mathematical Logic (second edition), New York, NY: Springer-Verlag. Edwards, J. (1957). Freedom of the Will, New Haven, CT: Yale University Press. Edwards originally wrote this in 1754. Floridi, L. (2005). ‘Consciousness, Agents and the Knowledge Game’, Minds and Machines 15 (3–4), 415–44. URL: http://www.philosophyofinformation.net/publications/pdf/caatkg.pdf Genesereth, M. and Nilsson, N. (1987). Logical Foundations of Artificial Intelligence, Los Altos, CA: Morgan Kaufmann. Gettier, E. (1963). ‘Is Justified True Belief Knowledge?’, Analysis, 23, 121–23. URL: http://www.ditext.com/gettier/gettier.html Goble, L., ed. (2001). The Blackwell Guide to Philosophical Logic, Oxford, UK: Blackwell Publishers. Govindarajalulu, N. S., Bringsjord, S. and Taylor, J. (2015). ‘Proof Verification and Proof Discovery for Relativity’, Synthese, 192 (7), 2077–94. Govindarajulu, N. S. (2011). Towards a Logic-based Analysis and Simulation of the Mirror Test, in ‘Proceedings of the European Agent Systems Summer School Student Session 2011’, Girona, Spain. URL: http://eia.udg.edu/easss2011/resources/docs/paper5.pdf Green, C. (1969). Applications of Theorem Proving to Problem Solving, in ‘Proceedings of the 1st International Joint Conference on Artificial Intelligence’, 219–39, San Francisco, CA: Morgan Kaufmann. Halpern, J. and Shoham, Y. (1991). ‘A Propositional Modal Logic of Time Intervals’, Journal of the ACM, 38 (4), 935–62. Harries, R. (2007). Half Ape, Half Angel?, in C. Pasternak (ed.), ‘What Makes Us Human?’, 71–81, Oxford, UK: Oneworld Publications. Hendricks, V. and Symons, J. (2006). Epistemic Logic, in E. Zalta (ed.), ‘The Stanford Encyclopedia of Philosophy’. URL: http://plato.stanford.edu/entries/logic-epistemic Honderich, T. (2014).
Actual Consciousness, Oxford, UK: Oxford University Press. Hulne, D. (2007). Material Facts from a Nonmaterialist Perspective, in C. Pasternak, ed., ‘What Makes Us Human?’, 82–92, Oxford, UK: Oneworld Publications. Hutter, M. (2005). Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, New York, NY: Springer. Ichikawa, J. and Steup, M. (2012). The Analysis of Knowledge, in E. Zalta (ed.), ‘The Stanford Encyclopedia of Philosophy’. URL: http://plato.stanford.edu/entries/knowledge-analysis Jackson, F. (1977). Perception: A Representative Theory, Cambridge University Press. Jacquette, D. (1994). Philosophy of Mind, Englewood Cliffs, NJ: Prentice Hall. Jacquette, D. (2015). ‘Review of Honderich’s Actual Consciousness’, Notre Dame Philosophical Reviews 8. URL: http://ndpr.nd.edu/news/60148-actual-consciousness
Johnson-Laird, P. and Oatley, K. (1989). ‘The Language of Emotions: An Analysis of a Semantic Field’, Cognition and Emotion, 3 (2), 81–123. Kriegel, U. (2015). Varieties of Consciousness, Oxford, UK: Oxford University Press. Luger, G. and Stubblefield, W. (1993). Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Redwood, CA: Benjamin Cummings. Maguire, P., Moser, P. and Maguire, R. (2016). ‘Understanding Consciousness as Data Compression’, Journal of Cognitive Science, 17 (1), 63–94. URL: http://www.cs.nuim.ie/pmaguire/publications/Understanding2016.pdf McCarthy, J. (2000). ‘Free will-even for robots’, Journal of Experimental and Theoretical Artificial Intelligence, 12 (3), 341–52. McKinsey, J., Sugar, A. and Suppes, P. (1953). ‘Axiomatic Foundations of Classical Particle Mechanics’, Journal of Rational Mechanics and Analysis, 2, 253–72. Miranker, W. and Zuckerman, G. (2008). ‘Mathematical Foundations of Consciousness’. This paper is also available from Yale University servers as a technical report (TR1383). URL: https://arxiv.org/pdf/0810.4339.pdf Moore, G. E. (1912). Some Main Problems of Philosophy, Allen & Unwin. Ortony, A., Clore, G. L. and Collins, A. (1988). The Cognitive Structure of Emotions, Cambridge, UK: Cambridge University Press. O’Shaughnessy, B. (2003). ‘Sense Data’, in B. Smith (ed.), John Searle, Cambridge University Press. Penn, D., Holyoak, K. and Povinelli, D. (2008). ‘Darwin’s Mistake: Explaining the Discontinuity Between Human and Nonhuman Minds’, Behavioral and Brain Sciences, 31, 109–78. Perry, J. (1977). ‘Frege on Demonstratives’, Philosophical Review, 86, 474–97. Pink, T. (2004). Free Will: A Very Short Introduction, Oxford, UK: Oxford University Press. Pollock, J. (1995). Cognitive Carpentry: A Blueprint for How to Build a Person, Cambridge, MA: MIT Press. Prawitz, D. (1972). The Philosophical Position of Proof Theory, in R. E. Olson and A. M.
Paul (eds.), ‘Contemporary Philosophy in Scandinavia’, 123–134, Baltimore, MD: Johns Hopkins Press. Price, H. H. (1950). Perception, Methuen. Rapaport, W. (1998). ‘How Minds Can Be Computational Systems’, Journal of Experimental and Theoretical Artificial Intelligence, 10, 403–19. Russell, B. (1912). The Problems of Philosophy, Oxford University Press. Russell, S. and Norvig, P. (2009). Artificial Intelligence: A Modern Approach, Third edition, Upper Saddle River, NJ: Prentice Hall. Smith, P. (2013). An Introduction to Gödel’s Theorems, Cambridge, UK: Cambridge University Press. This is the second edition of the book. Suppes, P. (2001). Weak and Strong Reversibility of Causal Processes, in M. Galavotti, P. Suppes and D. Costantini (eds.), ‘Stochastic Causality’, 203–20, Palo Alto, CA: CSLI.
16
Intentionality and Consciousness
Carlo Ierna
Précis
In this chapter I concentrate on the notion of intentionality and its relation to consciousness. Ever since its re-introduction into contemporary philosophy in the works of Franz Brentano, intentionality has been associated in various ways with consciousness. In the continental and analytic traditions the notion of intentionality has undergone divergent developments, although more recent authors try to tie them together once again. I outline Brentano’s conception of intentionality and its immediate reception in his school, then I look at the later developments in the twentieth century by focusing on J. R. Searle’s (1983) Intentionality. Critically analysing Searle’s discussion and comparing it with Brentano’s original introduction and Husserl’s elaboration brings various fundamental questions to the fore: Are all conscious mental acts intentional? In what way is intentionality representational? Can intentionality be naturalized?
1 Brentano’s conception of intentionality and consciousness
Brentano’s theories had a widespread influence through his students, who founded their own schools and movements, and informed many debates in contemporary philosophy of mind and cognitive science.1 Edmund Husserl was probably one of Brentano’s most influential students, not only as founder of the phenomenological movement, but also through his relevance to (analytic) philosophy of mind,2 cognitive psychology,3 Artificial Intelligence (AI),4 meaning theory5 and mathematics.6 Several representatives of analytical philosophy were directly or indirectly influenced by Husserl, including Carnap,7 Ryle,8 Sellars,9 Føllesdal10 and Dennett.11
In recent years there has been renewed interest in the historical roots of current debates in cognitive science and analytic philosophy of mind, since linguistic and terminological changes have obscured the legacy and impact of earlier fruitful positions and debates. While it is no longer in vogue to conduct philosophical investigations in the name of psychology, this does not mean that the Brentanian approach to philosophical problems is simply a museum piece. As it turns out, many of the problems which Brentano and his pupils regarded as ‘psychological’ are nowadays relegated to ‘philosophy of mind’, ‘phenomenology’, or some other discipline with an appropriately fashionable label. The theme of intentional reference is indeed still a focal issue within a wide range of philosophical endeavours, although the relevant investigations which grew out of the School of Brentano are often unjustly ignored (Rollinger 1999, 6).
The notion of intentionality plays a fundamental role in phenomenology, broadly understood as ranging from Brentano’s (realist) descriptive psychology through Husserl’s transcendental idealism. While often simplistically summarized as the ‘aboutness’ of consciousness, Brentano originally defined intentionality as the main distinguishing characteristic of psychical (mental) phenomena. Intentionality and consciousness are still linked to many current debates in philosophy of mind (including the mind–body problem, the syntax versus semantics debate, the (tacit) social and cultural background of intelligent behaviour, the possibility of AI, among others). In his 1874 book Psychology from an Empirical Standpoint Brentano introduced the notion of intentionality as follows: Every mental phenomenon is characterized by what the Scholastics of the Middle Ages called the intentional (or mental) inexistence of an object, and what we might call, though not wholly unambiguously, reference to a content, direction towards an object (which is not to be understood here as meaning a thing), or immanent objectivity. Every mental phenomenon includes something as object within itself, although they do not all do so in the same way. In presentation something is presented, in judgement something is affirmed or denied, in love loved, in hate hated, in desire desired and so on. This intentional in-existence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We could, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves. (Brentano 1874, pp. 124 f.; 1995, 88 f.)
The passage is not wholly clear, its interpretation problematic (Antonelli 2000, 93), and it has been understood in various ways by various authors. First of
all, Brentano does not define ‘intentionality’, but introduces a property of mental phenomena that serves as a criterion to distinguish them from physical phenomena: the ‘mark of the mental’ (Crane 1998). This property he calls ‘intentional inexistence’. However, in the passage itself various different characterizations of this property are given, such as ‘reference to a content’ and ‘direction towards an object’, ‘immanent objectivity’, ‘including something as object’ and ‘intentionally containing an object’. Given that Brentano also claimed that consciousness is always consciousness of something,12 clarifying the relation between the act and its object (or, more broadly, between subjectivity and objectivity) is of paramount importance in understanding consciousness. One popular reading takes the prefix ‘in-’ of ‘inexistence’ as locative, indicating the place in which the object exists. The ‘intentionally inexisting’ object would simply be the object ‘in the mind’.13 This leads to the additional question about its existential status: Is ‘inexistence’ a modified form of existence? We should distinguish the intentional or immanent object from whatever may correspond to it externally: the mental object from the object in nature. Brentano’s position is that the intentional object is necessary and always given in the act, but that it is not at all necessary for something to also exist outside of the act. Hence, we can think of our high school English literature teacher (in memory) as well as of the Faerie Queene (in imagination), but nothing needs to correspond to these thoughts externally (our teacher may be deceased, the fairy queen was invented by Spenser, and the like). Yet, we can be conscious of them even if they don’t exist. Furthermore, if we accept ‘inexistence’ as some kind of existence, then we would have to accept that square circles and other logically impossible objects also exist, even if just in the mind. 
Brentano himself already wrestled with the dilemma that all presentations have an (internal) object, but that an (external) object does not correspond to all presentations, which was later called the ‘Brentano-Bolzano Paradox’.
2 Intentional objects and the Brentano-Bolzano Paradox Brentano introduced a distinction between (1) act, (2) immanent object or content and (3) external object (Ierna 2012; 2015). Every act needs an immanent object, although the external object may or may not exist. Hence, intentionality cannot be straightforwardly understood as a classical relation, which would presuppose the existence of both its foundations. Intentionality cannot indicate the relation between immanent and external object, but instead indicates the
relation between act and immanent object. The external object is simply what would correspond to the immanent content or meaning. Twardowski (1894, 23 ff.) argued that so-called ‘objectless’ presentations did nevertheless always have an ‘object’ too: a merely intentional object, a nonexistent object. Every act has a content and to every content corresponds an object: ‘What is presented in the presentation, is its content, what is presented through the presentation, is its object’ (Twardowski 1894, 18). Twardowski’s interpretation prompted a critical response by Husserl, which led to the development of his own phenomenological theory of intentionality.14 For Twardowski, when thinking about fictional entities, there would only be one object, the immanent one. In ordinary cases we would have two objects instead, one immanent and one external. This would account both for Brentano’s requirement that all acts of consciousness need an object and for the cases of ‘objectless’ presentations.15 Husserl’s critical reaction to Twardowski mainly concentrates on the doubling of the object in normal cases of presentation. When we present an existing object we do not in any case have two objects in mind, the intentional and the real one. Moreover, in the case of fictional entities the doubling of the object would lead to contradictions: a square circle would have existence as intentional object. According to the locative interpretation of inexistence, this is merely existence in the mind, but albeit modified, existence nevertheless. In the Logical Investigations Husserl clarifies what the intentional object is: ‘The intentional object of the presentation is the same as its real and, in given cases, external object and it is a countersense to distinguish between them.’16 Husserl therefore turns away from such a doubling of objects and instead resorts to a distinction in presentations: proper and improper presentations. 
Proper presentations simply present existing objects; improper presentations are such that they do not directly present existing objects. Far from claiming that they present non-existing objects,17 Husserl suggests that they present existing objects under a certain assumption. Talking about Little Red Riding Hood makes sense only in the context of fairy tales; statements about Zeus make sense only in the context of Greek mythology. We assume (tacitly or explicitly) that the mentioned objects exist in the given context; we treat them as if they existed. Hence, a presentation of a fictional object is an improper (inauthentic, indirect) presentation, because it occurs within a certain specific context, defined by a specific set of assumptions. Mathematics and geometry are also contexts in this sense, as they do not treat the empirical
world of actually existing objects, but an idealized, abstract model of the world. Husserl concludes that judgements which occur under the assumption of a certain context should not be seen as judgements about objects at all, but as judgements about our presentations of them. The analysis of intentionality then became the pivotal point of phenomenology for Husserl: ‘The Logical Investigations begins and ends with an account of the objectivity of knowledge … objectivity is precisely the aim of Husserl’s theory of knowledge’ (Smith 2007, 60). The core of the problem and ‘the foundation of Husserl’s phenomenology’ is to be found ‘buried’ (Smith 2007, 28) in the V Investigation: ‘What unites the whole of Logical Investigations, then, is the formal structure of intentionality.’18 Intentionality on the one hand serves as the vehicle of evidence everywhere, including the formal sciences (Husserl 1974; Hua XVII, 168 ff.), and on the other serves as the connection between formal and transcendental logic (see Smith 2002a). Indeed, both the transcendence of natural objects and the transcendence in immanence of ideal objects are transcendent objectivities only in the sense of being intentional unities (Husserl 1974; Hua XVII, 242). Already in the Logical Investigations the notion of intentionality holds a central role as mediating element between subjectivity and objectivity,19 and hence in the phenomenological epistemology and theory of knowledge.
3 Searle's unphenomenological intentionality

On the analytic side, Searle's (1983) Intentionality had a vast impact on discussions in the philosophy of mind and remains one of his most quoted and debated works. Moreover, he has maintained key elements of his approach since then, as we will see. On the one hand, Searle's Intentionality has been widely perceived as being somehow 'Husserlian' in character,20 while on the other, as Searle himself has later affirmed on various occasions, it is not in any way a work in the tradition of Husserlian phenomenology. Indeed, it explicitly distances itself from any such tradition.

Entire philosophical movements have been built around theories of Intentionality. What is one to do in the face of all this distinguished past? My own approach has been simply to ignore it, partly out of ignorance of most of the traditional writings on Intentionality and partly out of the conviction that my only hope of resolving the worries that led me into this study in the first place lay in the relentless pursuit of my own investigations. (Searle 1983, ix)
330
The Bloomsbury Companion to the Philosophy of Consciousness
While Searle claims a certain independence from previous theories, his definition of intentionality has clear parallels to Brentano's:

Intentionality is that property of many mental states and events by which they are directed at or about or of objects and states of affairs in the world. If, for example, I have a belief, it must be a belief that such and such is the case; if I have a fear, it must be a fear of something or that something will occur; if I have a desire, it must be a desire to do something or that something should happen to be the case; if I have an intention, it must be an intention to do something. (Searle 1983, 1)
There are correspondences in content and differences in method. Brentano acknowledges his debt to the prior tradition (in particular Scholasticism), while Searle dismisses the tradition he has just admitted ignoring:

I follow a long philosophical tradition in calling this feature of directedness or aboutness 'Intentionality', but in many respects the term is misleading and the tradition something of a mess. (Searle 1983, 1)
Brentano himself concedes that his definition is not 'wholly unambiguous', but I do not think there is anything intentionally misleading about it. There have indeed been many debates about intentionality, but the developments in the tradition are far from the 'mess' Searle considers it to be. Criticizing a nameless 'tradition' without addressing any specific theory or author comes across as a straw man, but the target seems to be the School of Brentano.21

The most important divergence between Brentano and Searle is that Brentano conceives of intentionality as the essential distinctive characteristic of consciousness, while Searle holds that it is only a feature of some mental states: 'Not all of our mental states are in this way directed or Intentional',22 he writes, maintaining that there must be some form of non-intentional consciousness and non-conscious intentionality: 'Not all of our Intentional states are even conscious states' (Searle 1982, 259). For Brentano this would obviously be absurd. Moreover, since Searle denies that intentionality is an essential characteristic of consciousness, one which enables us to distinguish (in Brentano's words) psychical from physical phenomena, we could ask why we would want to investigate intentionality at all. If it is not a distinguishing property that allows an exhaustive partition of all phenomena into two classes (mental and physical as intentional and non-intentional phenomena), then what is its significance? Furthermore, can we still reliably distinguish consciousness from non-consciousness if we lack a unique distinctive characteristic that can serve as a yardstick? Searle runs into problems on these grounds: lacking a defining 'mark of the mental', he
includes non-intentional states in consciousness and declares non-conscious neurophysiological states to be intentional.

As we saw earlier, the School of Brentano developed the distinction of act, content and object in response to the paradox of non-existent objects. Searle likewise acknowledges the articulation of intentional acts or states into psychological mode (believing, desiring, etc.) and intentional content (IC) (the believed, the desired, etc.), as well as the distinction between content (the immanent object or meaning) and object (the ordinary, external, transcendent object). We can also compare Searle's 'as if conditions of satisfaction' to Husserl's treatment of fictional discourse as involving assumptions. Husserl provides an account of how statements in fictional contexts have meaning and can be considered true or false, while Searle seems less precise on this point, claiming only weakly that fictional statements 'cannot be true' and that there is 'no commitment' to the conditions of satisfaction (Searle 1983, 18). The Pythagorean theorem is true in the context of Euclidean plane geometry, even though there are no ordinary (but only geometrical) objects that satisfy it. Would Searle really say that it cannot be true because there is no ordinary object corresponding to the geometrical concept of a triangle? Searle's discussion of literary works appears less adequate for the formal sciences. If we took expressions in geometry as 'serious discourse', they would literally be 'not true', because none of the fictional objects involved exist in the ordinary sense. Instead, a geometer or mathematician would actually have to be considered the author of 'works of fiction', rather than as merely discoursing about them.
4 Mental and intentional

Searle distances himself from the tradition by dissociating the mental from the intentional, claiming that 'there are forms of nervousness, elation, and undirected anxiety that are not Intentional'.23 If Searle has in mind what in German is called 'Gemüt', then a first response from Brentano would be that Gemütsbewegungen (emotions) constitute one of the three fundamental classes of intentional acts, alongside presentations and judgements. The analysis of emotions provides the psychological foundation for Brentano's practical philosophy, value theory and ethics.

Also to be included under this term [mental (intentional) phenomenon] is every emotion: joy, sorrow, fear, hope, courage, despair, anger, love, hate, desire, act of will, intention, astonishment, admiration, contempt, etc. (Brentano 1995, 79)
Indeed, Brentano developed a conception of 'correct emotion' (see Baumgartner and Pasquerella 2004), endowing Gemütsbewegungen with a direction of fit and conditions of satisfaction. Alternatively, one could claim that if depression, elation and similar emotions are not intentional, then they are not mental. Searle would then need stronger arguments to the effect that 'forms of nervousness, elation, and undirected anxiety' are forms of consciousness like belief, desire, etc. at all: he simply assumes that they are and goes on to deny their intentionality (Crane 1998, 1). As a counterargument, elation, anxiety, etc. could simply be considered physiological states of the body, caused by hormonal stimulation induced by drugs or natural factors. There is no problem in barring these from both the intentional and the mental domain, since Brentano and the phenomenologists were mostly interested in a descriptive approach to mental acts and contents, not in an explanatory 'genetic' psychology concerning their purported physiological causes. Husserl would certainly not want to focus on the underlying metaphysics of consciousness, being more interested in a theory of intentionality and consciousness independent of any such metaphysics, at first to avoid psychologism and later to avoid falling back into realism. Husserl would argue that we should be able to describe and analyse our consciousness without any foreign presuppositions, especially metaphysical ones.24 Independently of whether monism or dualism should turn out to be true, the phenomenal content of my mind would not change, nor would my propositional attitudes (perceptions, beliefs, judgements, desires and so on). Searle, through his appeal to a non-intentional background to consciousness, seems to develop an (implicitly dualist) metaphysics of consciousness with a naturalist bias rather than an autonomous theory of intentionality.
A permanently elated, positive attitude resulting from naturally occurring internal secretions of dopamine would not seem to be intentional, but it is also no more mental than a healthy body or an efficient digestive apparatus. Obviously, other quite definitely mental properties may depend on these physical features. If I were continuously elated, my attitudes might be more positive than those of others, just as a strong man's attitude towards heavy luggage differs from a weaker man's. Moreover, Searle appears to conflate physiological states and awareness of these states (compare Tye 1995, 130). Anxiety in Searle's description seems rather to be a physiological state, and even though we could be made aware of it (though we need not be), this does not make it a mental state itself, but the content of another mental state. Furthermore, from a different angle than Brentano's, we may argue that there is no such thing as truly undirected anxiety, elation or the like (consider Tye 1995,
126). My apparently undirected anxiety probably has a good reason: my fear of failing the exam. It may not be identical with my fear, which is intentional, but it does provide a clear answer to the question of what my anxiety is about: it is motivated by my fear. We might be anxious about something without being able to point it out exactly. Still, having a vague and undefined intentional object as the content of my psychological mode does not mean that I have no object at all. In general, anxiety, elation and the like seem to be motivated by other phenomena, in such a way as to make it difficult to call them wholly undirected (compare Crane 1998, 8). Hence, we could analyse anxiety, elation and the like as being intentional overall and as having an object, though an unspecified one. I can be anxious or elated about a specific object or event, but I can also be in the mental condition of being anxious or elated in general. Whenever specific conditions occur, my previously unspecifically directed anxiety becomes focused on a specific object or event (Crane 1998, 8). Besides the more limited conception of intentionality as 'object directedness', we have a broader one as 'world disclosing'.25 While depression, anxiety and other emotional states might not be strictly object directed, they most certainly are world disclosing, though as modifications of my experience rather than as having a direct experiential object.26 Hence, moods and attitudes can be seen as modifications of acts (see Tye (1995, 129) for similar arguments). Instead of considering anxiety, elation and their ilk as acts and looking for their objects, we could account for them adverbially and consider them as modifiers of such acts. For example: 'We nervously expect his arrival', 'He enthusiastically accepted the proposal', 'They anxiously considered the data', and so on.
Moods and attitudes (being anxious, elated, nervous) are not directly intentional themselves, in the sense of having a transcendent object, but modify intentional mental acts. Hence, though they do not have a transcendent object themselves, they do presuppose and depend on other intentional mental acts that do have objects (Crane 1998, 9). I would therefore say, contra Searle, that anxiety, elation and the like are either bodily (and hence irrelevant) or not completely lacking in 'direction of fit' (and hence unproblematic). Searle's dubious position comes in part from considering intentionality as a relation which necessarily implies the existence of both its relata, every intentional state presupposing the existence of its object. Husserl, by contrast, facing the problem of intentional acts directed at impossible and non-existent objects, drew the conclusion that intentionality does not depend on the existence of its intended object. Intentionality is neither an 'external' relation between two existing things (such as a brain and another physical object) nor a
merely immanent relation of the mind to some utterly subjective content, but it is necessarily transcendental.
5 Consciousness or intentionality?

Searle argues that consciousness and intentionality are distinct, because some conscious states are not intentional. He gives the example of anxiety: 'consciousness of' anxiety would not be the same as an intentional state. In doing so, Searle simply conflates such awareness of anxiety with the anxiety itself. I would rather argue that we can be reflectively aware of a certain (psychological or physiological) state in such a way that this state is the intentional object of my consciousness. As remarked above, mental conditions or physiological states and awareness of these states cannot be straightforwardly identified: my hunger is not simply identical with my awareness of my hunger, just as the colour red (as percept) is not identical with my awareness of the colour red. Anxiety, elation and the like are all states that can affect behaviour, even when we are not wholly aware of them. However, when someone points out our nail-biting or incessant grinning, we ask ourselves what is the matter and may then become more aware of our condition. Immediately we ask ourselves why we are anxious or elated: we look for a cause or motivation of our condition. It therefore seems a mistake to me to straightforwardly identify a physiological state with the awareness of that state (see also Tye (1995, 115) for a similar argument concerning the distinction between pain and awareness of pain).

Furthermore, Searle claims that beliefs we hold even when we are not actively thinking about them are intentional, but not conscious. Here Searle seems to contradict himself, since he previously stated that an intentional state and intentionality itself do not merely consist of an intentional object or content, but always have a psychological mode: 'Every Intentional state consists of an Intentional content in a psychological mode' (Searle 1983, 12).
This is analogous to the distinction Husserl makes between the quality (psychological mode) and matter (IC) of a mental act or state (Mulligan 2003, 267). What, then, would the psychological mode or propositional attitude of 'unconscious intentionality' be? The fact that Searle's grandfather never went beyond the continental United States is the IC, but without a mental, intentional act it cannot be said to be truly intentional all by itself, in complete isolation. Perhaps he should rather have called this 'potential intentionality': it is a content, matter or object which is fit
to become part of an intentional act. Then and only then would it also become intentional and conscious. Husserl is quite unambiguous on the matter: 'Each intentional experience is either an objectivating act or has its basis in such an act. … All matter … is the matter of an objectivating act.'27

Instead of keeping epistemology and metaphysics rigorously separated, Searle mixes up their order of being and order of knowing. While my hunger precedes and causes my awareness of it, this should not lead me to identify the two. Searle conflates causes and their effects and hence, when proceeding from effect to cause, identifies them. We only know about our hunger or about our perception of redness through our conscious and intentional mental life. Any metaphysical existential commitment to a quasi-Ding-an-sich-like entity which would be the true underlying cause of this awareness can only be an (unwarranted) assumption. Searle, however, while claiming to proceed in a non-reductionist fashion, conflates the extra-mental, non-intentional and non-conscious background with mental, intentional, conscious phenomena. Instead of an analysis of the essence of intentionality and consciousness, Searle provides a speculative metaphysics of the causes behind and beyond them (see also Seager 1991, 181).
6 Intentionality without objects

This kind of speculation can also be seen in Searle's ontological claims, for example, when he states that he is not interested in the ontology of intentional objects, but only in their logical properties. Yet he does propose a 'solution' for the problem of intentional inexistence: the intentional object is simply the object the intentional state refers to, a transcendent, ordinarily existing object. Accordingly, propositional thoughts about non-existing objects are false because the object does not exist. 'The king of France is bald', Searle affirms, 'cannot be true', because it cannot be satisfied, as there is no ordinarily existing king of France. This approach seems odd when one considers that the (true) statement 'there is no king of France' would refer to the very same non-existent object and hence, by the same reasoning, 'cannot be true'.28 Moreover, it would seem that this kind of solution introduces a kind of 'shadowy intermediate pseudo-object', which Searle wanted to avoid, since between the act and the (ordinary, external) object we now have a content. Searle reduces the distinction between IC and object to that between intensionality ('with-an-s') and extensionality and asserts: 'An Intentional state has a representative
content, but it is not about or directed at that content' (Searle 1983, 17). This content is not an object, but a proposition. We are clearly directed at the object, even if the object does not exist, and moreover: 'The belief is identical with the proposition construed as believed' (Searle 1983, 19). This might be satisfactory in the case of beliefs, but it is somewhat dubious for other psychological modes. If, with Brentano, we consider belief to be a case of judgement, then we can see how believing a proposition might be identical with holding it to be true, but what about emotions? Emotions (desires, fears, etc.) are not directed at a proposition, but at an object or state of affairs. What would objects 'construed as desired' be? The fact that I might desire a state of affairs to come about is not identical with any ordinarily existing property of the objects involved in that state of affairs. It is not clear what Searle would consider the conditions of satisfaction for such intentional states. In order to determine the conditions of satisfaction, we must suppose that there is a relation between the representation and the represented, even if the represented object does not exist. Searle does not address this. Even if we were to grant his claims about unconsciously 'held' beliefs, the claim is much less intuitive for emotions. When I entertain a certain thought, the content of the thought is a proposition, but what is the content of the proposition? If I fear that I will fail my exam, what exactly is in my thoughts according to Searle? An intentional state is nothing but a psychological mode (fear) and a representational content (a proposition, in this case '(that) I will fail my exam'). While I can construe this proposition as true, this will not do for an unconsciously 'held' fear, which would then be identical with my belief. Searle seems to have shifted the problems of representationalism to a lower level, from intentional states to propositions.
However, 'the speech-act model does not completely succeed in de-mystifying the notion of Intentionality' (McIntyre 1984, 474). Searle claims that we do not use representations to refer to states of affairs, but that our representations are inherently intentional even if we do not actively use them. This looks problematic when combined with his view that the content of an intentional state is a proposition. Given his claim that language is derived from intentionality, what exactly does he mean when he speaks of propositions? If such propositions are understood as linguistic entities, his position becomes vulnerable to the charge of circularity.

It is clear that there are some remarkable parallels between Husserl and Searle, but also fundamental differences, which make it very difficult to call Searle's position 'Husserlian' or 'phenomenological'. One major difference is that Husserl's position allows for an externalist reading.29 Another is Husserl's
earlier metaphysical neutrality and later idealist transcendentalism versus Searle's biological naturalism. Searle claims that his is not a reductionist model, in which every mental state would stand in bijection with a physical state and could hence be explained by physical causation, making the psychical level epistemically superfluous; yet he still puts intentionality and digestion on the same level: 'Intentional states are both caused by and realized in the structure of the brain' (Searle 1983, 15). His appeal to 'a higher level of description' (Searle 1983, 266) is part of the problem rather than a solution. The very talk of higher and lower, founded and founding layers invites reductionism back in, and it is highly doubtful that Searle can preserve his claim of ontological neutrality.30 These two points already reveal Husserl's and Searle's theories of intentionality and consciousness as fundamentally different.

However, ultimately Husserl's transcendentalism breaks the mould of internalism and externalism (see Zahavi 2004). If we define internalism as the position that intentionality is determined by the intrinsic elements of a mental state, and externalism as the claim that it is at least partially determined by something external to the subject, then Husserl's transcendental phenomenology cannot straightforwardly be considered either. For Husserl, intentional acts can and do intend objects as external and as transcendent. He certainly would not accept a position that would confine the subject to radical solipsism. It is one of the major aims of phenomenology to explain, by analysing the structure of intentionality, how we can intend and account for what transcends our mind. Husserl's account ultimately is based not on a solipsistic subject, but on transcendental intersubjectivity.
This is what warrants an 'externalist' conception of the world, in which intersubjectivity (as the 'first person plural'; Zahavi 2007) takes the role that would otherwise be played by causality, without sacrificing the irreducibly intentional elements of the mind. Searle apparently misinterprets Husserl on this point, claiming that Husserl naively understood causality as always natural and not intentional (Searle 1983, 65). For Husserl, however, the entire world, including natural laws and causality, is reduced precisely to an intentional correlate of a transcendental subjectivity. Hence, I would be inclined to agree with Beyer that, despite 'striking' similarities in their approach, Husserl's theory appears in some respects more compelling than Searle's (Beyer 1997, 329). Beyer writes:

To begin with, Husserl presents an interesting analysis of the notion of an Intentional state with a direction of fit, notably over the quasi-epistemic notion of an intuitive fulfillment, and he even uses this analysis in his phenomenological explanations of logical coherence and logical incoherence, respectively. Searle,
however, merely offers a metaphorical circumscription of what he means by ‘direction of fit’, characterizing this idea as ‘that of responsibility for fitting’. (Searle 1983, 7)31
Searle never seems to turn this notion into a full-fledged account, while for Husserl it is one of the most elementary notions that had to be fully developed and analysed in order to establish an epistemologically significant phenomenology. Second, Husserl’s theory of intuitive fulfilment fills a big gap in Searle’s conception of the direction of fit of an Intentional state, because, unlike Searle, Husserl does not bracket such Intentional states as mathematical representations, that are intuitively fulfilled iff certain relations in the ‘world’ of ideal objects (such as numbers) are intuitively experienced as obtaining. (Beyer 1997, 329 f.)
Unfortunately, Beyer does not go into detail on these two points, even though he acknowledges their overall importance. He does, however, single out a third, pivotal point for discussion:

In construing Intentional content and psychological mode as two mutually dependent functional parts or moments of the respective Intentional state's particular 'Intentional essence' that instantiate an ideal type of content (which is a meaning or meaning-like entity) and an ideal type of psychological mode, respectively, Husserl takes a simple and at the same time plausible view on how to think both of an Intentional state's consisting of 'a content and a psychological mode and of the nature of a system's grasping' a certain content in a certain mode. Searle, however, takes a quite mysterious and implausible view on this issue. (Beyer 1997, 330)
What is the mystery? ‘Searle describes Intentional states as “consisting of ” an IC and a psychological mode, leaving it unclear what sort of composition he has in mind’ (Beyer 1997, 324), while on the other hand Husserl in the Logical Investigations proposes to ‘think of the particular mode and the particular content of an Intentional state as two mutually dependent functional parts or moments combining to yield a particular structure that displays (intrinsic) Intentionality and that instantiates an ideal or timeless type of structure, including an ideal meaning or meaning-like entity which is instantiated by the state’s particular content’ (Beyer 1997, 324). Husserl’s account is more detailed than Searle’s because Husserl developed a fine-grained mereology, sharply distinguishing parts as pieces and moments. Hence, Husserl can describe the combination of psychological mode and IC in a more precise way (also Mulligan 1987a, 26). Therefore, Beyer proposes to ‘flesh out’ Searle’s basic idea with Husserl’s more detailed account:
I propose to conceive of an Intentional state as displaying a particular structure consisting of a particular content and a particular mode as two mutually dependent functional moments that instantiate a certain ideal type of content, which is a meaning (or a meaning-like entity), and a certain ideal type of mode, respectively. Accordingly, I believe we should think of a system’s ‘grasping’ a certain content in a certain mode as tantamount to its instantiating a corresponding ideal type of structure, consisting of an ideal meaning and an ideal type of mode in a way that mirrors the mutual dependence between the Intentional state’s particular moment of content and its particular moment of psychological mode. (Beyer 1997, 347)
This corresponds closely to the point made by Searle himself that there are no absolute ICs and no absolute psychological modes. There are only complexes, intentional states, which through (phenomenological) analysis can be seen as having two non-independent moments: the mode and the content. Of course, even though these are not independent, each can be analysed on its own, leading to noetic and noematic accounts of intentionality, to use Husserl's later terminology. For Beyer this improved account dispenses with Searle's 'hypothesis of the Background'. Making this hypothesis completely superfluous entails going back to Husserl's original metaphysical neutrality, advocated already in the Logical Investigations and radicalized in his theory of the transcendental reduction in the Ideen. There is no need to appeal to a metaphysics of consciousness entailing existential commitments to a transcendent reality which would cause or instantiate the mind in order to perform logical and phenomenological analyses of intentionality. Even though it might turn out to be 'true' that 'Consciousness and Intentionality are as much part of human biology as digestion or the circulation of blood' (Searle 1983, ix), this should, and indeed does, turn out to be irrelevant. As McIntyre (1984, 470) puts it: 'It is hard to avoid feeling some dissatisfaction with Searle's procedure.' Based on the above comparisons and critiques, I quite agree with Searle when in more recent work he claims that he has been doing nothing like Husserl at all:

When that book [Intentionality] was published, I was flabbergasted to discover that a lot of people thought it was Husserlian, that I was somehow or other following Husserl and adopting a Husserlian approach to intentionality. As a matter of my actual history, that is entirely false. I learned nothing from Husserl, literally nothing, though, of course, I did learn a lot from Frege and Wittgenstein.
There is a special irony here in that in the course of writing the book, I had
several arguments with experts on Husserl, especially Dagfinn Føllesdal, who argued that Husserl’s version of intentionality was superior to mine in various respects. (Searle 2005, 20)
7 Concluding remarks

At the beginning we saw how the concept of intentionality was re-introduced into contemporary Western philosophy by Brentano, and how the concept evolved in the debates in his school, especially through Twardowski's book and Husserl's response to it. We then looked at Searle's attempt to re-introduce the concept of intentionality a century after Brentano, independently of the existing tradition, and at how it compares to Brentano's and Husserl's theories. One significant point of difference between Searle and the phenomenological tradition is that he tries to separate intentionality and consciousness, claiming that not all intentional states are conscious and that not all conscious acts are intentional. We have discussed several counterarguments to these claims and pointed out the difficulties of such a position. This certainly does not settle the debate in anyone's favour, but it shows that the discussions initiated in the nineteenth century in the School of Brentano have not yet run their course and remain relevant today. There seems to be an essential link between intentionality and consciousness that we have not yet managed to disentangle completely or to explain away through appeals to the brain as the ultimate cause or foundation of consciousness. Instead of rejecting the tradition and operating in isolation from it, it would probably be more fruitful to try to combine the most promising approaches and methods from phenomenology, cognitive science and analytic philosophy of mind.
Notes

1 See Gallagher and Zahavi (2008); Smith (2006, 19–39); Schuhmann (2004a); Albertazzi (2001).
2 Smith and Thomasson (2005); Meixner (2003); Dreyfus (1982).
3 Zahavi (2002); Lohmar (2005).
4 Holenstein (1988); Münch (1993); Beavers (2002). Dreyfus (1982) describes Husserl also as 'father of current research in cognitive psychology and artificial intelligence'.
5 Chrudzimski (2002); Benoist (2003); Mohanty (1969).
6 Tieszen (2005); Centrone (2010); Hartimo (2007).
7 Rudolf Carnap attended Husserl's seminars in 1924–25 (Schuhmann 1977, 281) and considered the transcendental reduction as akin to his own 'autopsychology' and methodological solipsism (Carnap 2003, 102). See also Haddock (2008) and Van Fraassen (1968).
8 Gilbert Ryle's first publications were reviews of works by Husserl's students: Ingarden's Essentiale Fragen and Heidegger's Being and Time. He planned to lecture on Bolzano, Brentano, Husserl and Meinong, afterwards known in Oxford as 'Ryle's three Austrian railway stations and one Chinese game of chance'; see Thomasson (2002) and McGuinness and Vrijen (2006, 754). In 1927 Ryle visited Husserl, who gave him a private lecture and sent him notes of his lectures; see Schuhmann (1977, 340) and McGuinness and Vrijen (2006, 748).
9 Sellars (1975) writes that Farber introduced him to Husserl and that his combination of phenomenology and naturalism was 'undoubtedly a key influence'. Sellars indeed appeals to Husserl's phenomenological method as inspiration; see Thomasson (2005, 123).
10 Føllesdal (1969) interpreted the phenomenological reduction as turning the attention to cognitive content, the noema, and significantly influenced the so-called 'West Coast phenomenology' (i.e. Dreyfus, D. W. Smith and McIntyre).
11 Dennett, a student of Ryle and Føllesdal, called his own approach to the mind and consciousness 'heterophenomenology'. Dennett (1994) 'studied Husserl and the other Phenomenologists with Dag Føllesdal at Harvard as an undergraduate, and learned a lot. My career-long concentration on intentionality had its beginnings as much with Husserl as with Quine.' Føllesdal was Quine's teaching assistant at the time and supervised Dennett's senior thesis on Quine.
12 'There is no psychical phenomenon which is not consciousness of an object', Brentano (1874, 133); Brentano (1995, 79).
13 See Jacquette (2004, 102).
14 Schuhmann (2004b, 111, n. 32) points out that up to his engagement with Twardowski, Husserl very rarely used the terms 'intentional' or 'intentionality'. Also see Rollinger (1999, 11): 'The concept of intentionality which Husserl formulated in Logische Untersuchungen, and which underpins much of his later work, was initially formulated in his critical exchange with Twardowski, primarily in his 1894 paper on intentional objects.'
15 Gegenstandslose Vorstellungen is a term derived from Bolzano; see Schuhmann (2004c, 121, n. 7).
16 Husserl, Hua XIX/1, 439. Compare the similar passages in Husserl's Ideen (1913, 186; Hua III/1, 207 f.; Hua CW II, 219) and from 1894 in Schuhmann (1992, 144), trans. in Rollinger (1999, 253).
17 'a fictional object is not a special kind of object, any more than an averted war is a special kind of war' (Smith 1994, 6).
18 Smith (2002b, 51), endorsed by Mohanty (2008, 169); compare Husserl's Ideen, §84 (1976, Hua III/1, 187 ff.; 1983, Hua CW II, 199 ff.): 'Intentionality as Principal Theme of Phenomenology'.
19 Smith (2002b, 53, 62 f.; 2003, 27 f.).
20 Its alleged 'Husserlian' character has been remarked on by almost all contributors to Smith and Smith (1995).
21 Compare Searle (2007, 327): '"Intentionality" is a word with a sordid history, so forget about the history if you can. Forget about Brentano's thesis that "intentionality is the mark of the mental" and other famous mistakes.'
22 Searle (1982, 259), and Searle (2007, 327): 'Most, but not all, conscious states are intentional, in the philosopher's sense that they are about, or refer to, objects and states of affairs.'
23 Searle (1983, 1), compare Searle (2007, 327): 'My states of thirst, hunger, and visual perception are all directed at something and so they fit the label of being intentional in this sense. Undirected feelings of well-being or anxiety are not intentional.'
24 See both versions of Husserl (1984, Hua XIX/1, 26–27).
25 See Gallagher and Zahavi (2008, 116) and Thompson, Lutz and Cosmelli (2005, §4).
26 For a detailed account of this approach in Husserl, see Quepons Ramírez (2015, 97): 'Moods have an intentional reference, not in the way of a direct or objective reference, but rather a reference to the world as a background or horizon.'
27 Husserl (1984, Hua XIX/1, 514–15), Husserl (2001, 167).
28 Searle does not explicitly say that it would be false if it 'cannot be true'. However, as he nowhere hints at a development of a multivalued logic, if the statement cannot be true, I surmise that it must be false.
29 Indeed, Husserl's anticipation of Putnam's Twin Earth thought-experiment goes directly against Searle's internalism, see Beyer (2008, 84).
30 Also consider the (Leibnizian) criticism in Cobb-Stevens (1990, 189 ff.).
31 Beyer (1997, 329).
References

Albertazzi, L., ed. (2001). The Dawn of Cognitive Science. Early European Contributors, Dordrecht: Kluwer.
Baumgartner, W. and Pasquerella, L. (2004). 'Brentano's Value Theory: Beauty, Goodness, and the Concept of Correct Emotion', in D. Jacquette (ed.), The Cambridge Companion to Brentano, 220–36, Cambridge: Cambridge University Press.
Beavers, A. F. (2002). 'Phenomenology and Artificial Intelligence', Metaphilosophy 33(1–2), 70–82.
Benoist, J. (2003). 'Fenomenologia e teoria del significato', Leitmotiv 3, 133–42.
Beyer, C. (1997). 'Hussearle's Representationalism and the "Hypothesis of the Background"', Synthese 112, 323–52.
Beyer, C. (2008). 'Noematic Sinn', in F. Mattens (ed.), Meaning and Language: Phenomenological Perspectives, Phaenomenologica 187, Dordrecht, Boston and London: Springer.
Brentano, F. (1874). Psychologie vom empirischen Standpunkte, Leipzig: Duncker & Humblot.
Brentano, F. (1995). Psychology from an Empirical Standpoint, Rancurello, A. C., Terrell, D. B. and McAlister, L. L., trans., London: Routledge.
Carnap, R. (2003). The Logical Structure of the World and Pseudoproblems in Philosophy, George, R. A., ed., Chicago and La Salle: Open Court Publishing.
Centrone, S. (2010). Logic and Philosophy of Mathematics in the Early Husserl, Synthese Library 345, Dordrecht: Springer.
Chrudzimski, A. (2002). 'Von Brentano zu Ingarden. Die Phänomenologische Bedeutungslehre', Husserl Studies 18, 185–208.
Cobb-Stevens, R. (1990). Husserl and Analytic Philosophy, Dordrecht, Boston and London: Kluwer Academic Publishers.
Crane, T. (1998). 'Intentionality as the Mark of the Mental', in A. O'Hear (ed.), Contemporary Issues in the Philosophy of Mind, Cambridge: Cambridge University Press.
Dennett, D. (1994). 'Tiptoeing Past the Covered Wagons', in Dennett and Carr Further Explained: An Exchange, Emory Cognition Project, Report #28, Department of Psychology, Emory University, April 1994. https://ase.tufts.edu/cogstud/dennett/papers/tiptoe.htm
Dreyfus, H., ed. (1982). Husserl, Intentionality and Cognitive Science, Cambridge, MA: The MIT Press.
Føllesdal, D. (1969). 'Husserl's Notion of Noema', The Journal of Philosophy 66, 680–87.
Gallagher, S. and Zahavi, D. (2008). The Phenomenological Mind: An Introduction to Philosophy of Mind and Cognitive Science, London and New York: Routledge.
Haddock, G. (2008). The Young Carnap's Unknown Master: Husserl's Influence on Der Raum and Der logische Aufbau der Welt, Aldershot: Ashgate.
Hartimo, M. H. (2007). 'Towards Completeness: Husserl on Theories of Manifolds 1890–1901', Synthese 156, 281–310.
Holenstein, E. (1988). 'Eine Maschine im Geist. Husserlsche Begründung und Begrenzung künstlicher Intelligenz', Phänomenologische Forschungen 21, 82–113.
Husserl, E. (1974). Formale und transzendentale Logik. Versuch einer Kritik der logischen Vernunft. Mit ergänzenden Texten, Janssen, P. (ed.), Husserliana XVII, Dordrecht: Nijhoff.
Husserl, E. (1976). Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie, Schuhmann, K. (ed.), Husserliana III/1, Den Haag: Nijhoff.
Husserl, E. (1983). Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy. First Book: General Introduction to a Pure Phenomenology, Kersten, F. (trans.), Husserliana Collected Works II, The Hague, Boston and Lancaster: Martinus Nijhoff.
Husserl, E. (1984). Logische Untersuchungen (Zweiter Band, Erster Teil), Husserliana XIX/1, Den Haag: Nijhoff/Kluwer.
Husserl, E. (2001). Logical Investigations, vol. 2, New York: Routledge.
Ierna, C. (2012). 'Brentano and the Theory of Signs', Paradigmi. Rivista di Critica Filosofica 2, 11–22.
Ierna, C. (2015). 'Improper Intentions of Ambiguous Objects: Sketching a New Approach to Brentano's Intentionality', Brentano Studien XIII, 55–80.
Jacquette, D., ed. (2004). The Cambridge Companion to Brentano, Cambridge: Cambridge University Press.
Leijenhorst, C. and P. Steenbakkers, eds. (2004). Karl Schuhmann: Selected Papers on Phenomenology, Dordrecht: Kluwer.
Lohmar, D. (2005). 'On the function of weak phantasmata in perception: Phenomenological, psychological and neurological clues for the transcendental function of imagination in perception', Phenomenology and the Cognitive Sciences 4(2), 155–67.
McGuinness, B. and Vrijen, C. (2006). 'First Thoughts: An Unpublished Letter from Gilbert Ryle to H. J. Paton', British Journal for the History of Philosophy 14(4), 747–56.
McIntyre, R. (1984). 'Searle on Intentionality', Inquiry 27, 468–83.
Meixner, U. (2003). 'Die Aktualität Husserls für die moderne Philosophie des Geistes', in U. Meixner and A. Newen (eds.), Seele, Denken, Bewusstsein. Zur Geschichte der Philosophie des Geistes, Berlin and New York: Walter de Gruyter.
Mohanty, J. N. (1969). Edmund Husserl's Theory of Meaning, Phaenomenologica 14, The Hague: Nijhoff.
Mohanty, J. N. (2008). The Philosophy of Edmund Husserl: A Historical Development, New Haven and London: Yale University Press.
Mulligan, K. (1987a). 'Promisings and Other Social Acts: Their Constituents and Structure', in K. Mulligan (ed.) (1987b), Speech Act and Sachverhalt: Reinach and the Foundations of Realist Phenomenology, Dordrecht, Boston and Lancaster: Nijhoff.
Mulligan, K. (2003). 'Searle, Derrida, and the Ends of Phenomenology', in B. Smith (ed.), John Searle, Cambridge: Cambridge University Press.
Münch, D. (1993). Intention und Zeichen, Frankfurt a. M.: Suhrkamp.
Quepons Ramírez, I. (2015). 'in Husserl's Phenomenology', in M. Ubiali and M. Wehrle (eds.), Feeling and Value, Willing and Action. Essays in the Context of a Phenomenological Psychology, 93–103, Springer International.
Rollinger, R. (1999). Husserl's Position in the School of Brentano, Phaenomenologica 150, Dordrecht: Kluwer.
Schuhmann, K. (1977). Husserl – Chronik (Denk- und Lebensweg Edmund Husserls), Husserliana Dokumente I, Den Haag: Nijhoff.
Schuhmann, K. (1992). 'Husserls Abhandlung "Intentionale Gegenstände". Edition der ursprünglichen Druckfassung', Brentano Studien 3 (1990/1991), 137–76.
Schuhmann, K. (2004a). 'Die Entwicklung der Sprechakttheorie in der Münchener Phänomenologie', in C. Leijenhorst and P. Steenbakkers (eds.), Karl Schuhmann: Selected Papers on Phenomenology, 79–99, Dordrecht: Kluwer.
Schuhmann, K. (2004b). 'Husserls doppelter Vorstellungsbegriff: Die Texte von 1893', in C. Leijenhorst and P. Steenbakkers (eds.), Karl Schuhmann: Selected Papers on Phenomenology, 101–17, Dordrecht: Kluwer.
Schuhmann, K. (2004c). 'Intentionalität und Intentionaler Gegenstand', in C. Leijenhorst and P. Steenbakkers (eds.), Karl Schuhmann: Selected Papers on Phenomenology, 119–35, Dordrecht: Kluwer.
Seager, W. (1991). Metaphysics of Consciousness, London: Routledge.
Searle, J. (1982). 'What is an Intentional State?', in H. Dreyfus (ed.), Husserl, Intentionality and Cognitive Science, 259–76, Cambridge, MA: The MIT Press.
Searle, J. (1983). Intentionality, Cambridge: Cambridge University Press.
Searle, J. (2005). 'The Phenomenological Illusion', in M. E. Reicher and J. C. Marek (eds.), Experience and Analysis. Erfahrung und Analyse, Wien: öbv&hpt.
Searle, J. R. (2007). 'Biological Naturalism', in S. Schneider and M. Velmans (eds.), The Blackwell Companion to Consciousness, Malden, MA: Blackwell Publishing.
Sellars, W. (1975). 'Autobiographical Reflections', in H. N. Castañeda (ed.), Action, Knowledge and Reality: Critical Studies in Honor of Wilfrid Sellars, Indianapolis: The Bobbs-Merrill Company, Inc.
Smith, B. (1994). Austrian Philosophy, Chicago: Open Court Publishing Company.
Smith, B. (2006). 'Why Polish Philosophy Does Not Exist', in J. J. Jadacki and J. Pasniczek (eds.), The Lvov-Warsaw School: The New Generation, Poznan Studies in the Philosophy of the Sciences and the Humanities 89, 19–39, Amsterdam and New York: Rodopi.
Smith, B. and D. W. Smith, eds. (1995). The Cambridge Companion to Husserl, Cambridge: Cambridge University Press.
Smith, D. W. (2002a). 'Mathematical Form in the World', Philosophia Mathematica 10(3).
Smith, D. W. (2002b). 'What is "Logical" in Husserl's Logical Investigations? The Copenhagen Interpretation', in D. Zahavi and F. Stjernfelt (eds.), One Hundred Years of Phenomenology (Husserl's Logical Investigations Revisited), Phaenomenologica 164, Dordrecht: Kluwer.
Smith, D. W. (2003). 'The Unity of Husserl's Logical Investigations: Then and Now', in D. Fisette (ed.), Husserl's Logical Investigations Reconsidered, Contributions to Phenomenology 48, Den Haag, Boston and London: Kluwer Academic Publishers.
Smith, D. W. (2007). Husserl, Routledge Philosophers, Abingdon: Routledge.
Smith, D. W. and A. L. Thomasson, eds. (2005). Phenomenology and Philosophy of Mind, Oxford: Oxford University Press.
Thomasson, A. L. (2002). 'Phenomenology and the Development of Analytic Philosophy', Southern Journal of Philosophy XL, 115–42.
Thomasson, A. L. (2005). 'First-Person Knowledge in Phenomenology', in D. W. Smith and A. L. Thomasson (eds.), Phenomenology and Philosophy of Mind, 115–39, Oxford: Oxford University Press.
Thompson, E., Lutz, A. and Cosmelli, D. (2005). 'Neurophenomenology: An Introduction for Neurophilosophers', in A. Brook and K. Akins (eds.), Cognition and the Brain: The Philosophy and Neuroscience Movement, New York and Cambridge: Cambridge University Press.
Tieszen, R. (2005). Phenomenology, Logic, and the Philosophy of Mathematics, Cambridge: Cambridge University Press.
Twardowski, K. (1894). Zur Lehre vom Inhalt und Gegenstand der Vorstellungen, Vienna: Hölder.
Tye, M. (1995). Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind, Cambridge, MA: The MIT Press.
Van Fraassen, B. (1968). 'Review of Rudolf Carnap, The Logical Structure of the World', Philosophy of Science 35, 298–99.
Zahavi, D. (2002). 'First-Person Thoughts and Embodied Self-Awareness: Some Reflections on the Relation Between Recent Analytical Philosophy and Phenomenology', Husserl Studies 18, 51–64.
Zahavi, D. (2004). 'Husserl's Noema and the Internalism-Externalism Debate', Inquiry 47, 42–66.
Zahavi, D. (2007). 'Killing the Straw Man: Dennett and Phenomenology', Phenomenology and the Cognitive Sciences 6, 21–43.
17
Cognitive Approaches to Phenomenal Consciousness Pete Mandik
1 Introduction: Cognition and cognitive approaches to phenomenal consciousness

To my mind, the most promising approaches to understanding phenomenal consciousness are what I'll call cognitive approaches, the most notable exemplars of which are the theories of consciousness articulated by Rosenthal (2005, 2011) and Dennett (1991, 2005). The aim of the present contribution is to review the core similarities and differences of these exemplars, as well as to outline the main strengths of, and remaining challenges to, this general sort of approach.

Cognitive approaches to phenomenal consciousness give explanatory pride of place to cognitive states – states such as judgements, thoughts and beliefs – in explanations of phenomenally conscious states, prototypical instances of which include the visual experience of seeing the vivid red of a ripe tomato. Such an approach may initially seem puzzling, given a seemingly sharp contrast between cognition and experience. This seeming contrast can be illustrated in a manner due to Sellars (1997). Pre-theoretically, there seems to be an obvious distinction between, on the one hand, (1) thinking that the banana in my lunchbox is yellow without at the same time either seeing the banana as yellow or anything at that time looking yellow to me, and, on the other hand, either (2a) seeing the banana as yellow when it is in fact yellow, (2b) it looking to me that the banana is yellow when it is in fact some other colour, or (2c) it looking to me that there is a yellow banana even in the absence of either bananas or yellow things. The key allegedly sharp contrast here is that between thinking and seeing. An opponent of the cognitive approach to phenomenal consciousness holds that the sorts of explanatory resources adequate for an account of (1) cannot without supplement be used to give an adequate
account of the sorts of states exemplified by (2a–c). One way of opposing the cognitive approach would be to hold that what distinguishes states of type (2) from states of type (1) is that type-2 states but not type-1 states essentially involve the presence of a sensation, for example, a visual sensation of yellow, and further, that sensations cannot be explained utilizing only the resources minimally adequate for explaining type (1) states. On some elaborations of this sensation account, sensations are states distinguished by the possession of a certain kind of property, so-called phenomenal properties or qualia (plural), in the present case, a yellow quale (singular). In contrast, the proponent of the cognitive approach holds that the explanation of states of type (2) being phenomenally conscious can be adequately spelled out in terms of the minimal resources needed to explain states of type (1).

Distinctive of what I'm presently calling cognitive approaches to phenomenal consciousness is optimism about what I'll call a 'reductive' explanation of phenomenal consciousness in terms of cognition – a non-circular explanation of phenomenal consciousness in terms of cognition, an explanation that neither explicitly nor tacitly presupposes that cognition must be explained by reference to phenomenal consciousness. (See Byrne's (1997) similar use of 'reductive' (pp. 103–4).)

In the present work, I will review two prime examples of cognitive approaches to phenomenal consciousness. The first is David Rosenthal's higher-order thought (HOT) theory of consciousness (Rosenthal 2005, 2011). The second is Daniel Dennett's fame in the brain theory of consciousness (2005), previously known as his multiple drafts theory (Dennett 1991). One of the issues I will spend some time on is that of whether these approaches to phenomenal consciousness are best viewed as explaining it or instead explaining it away or maybe even bearing some altogether different relation to it (e.g.
deliberately ignoring it). The question here is what the exemplars of cognitive approaches are best seen as offering explanations of, and to what degree they thereby are in the ballpark of satisfying requests of those calling for an explanation of so-called phenomenal consciousness.
2 David Rosenthal's higher-order thought theory of consciousness

At the heart of Rosenthal's HOT theory is the idea of a higher-order thought, a thought that one has about one of one's own mental states. (In contrast, a first-order thought is a thought not about one's own mental states.) The gist of
Rosenthal's theory is the view that what it means for one of your mental states to be conscious is that you have a HOT about that state. Often, in presenting the theory, Rosenthal makes note of several distinct uses of the word 'conscious', and notes that only one of them describes the explanatory target of his theory. This is the use of the word 'conscious' where we apply it as an adjective describing mental states as being conscious (as opposed to unconscious) mental states. This way of using the word picks out what Rosenthal calls state consciousness. The aim of Rosenthal's theory is to explain what state consciousness consists in.

Another adjectival use of the word 'conscious' applies to entire people or animals, as when we say that a recently awakened child is conscious or a boxer who is knocked out is unconscious. This latter use picks out what Rosenthal calls creature consciousness, and it is less central to Rosenthal's theory than state consciousness. A conscious creature is a creature who is awake and responsive to stimuli. However, a conscious creature can have multiple mental states at a single time which differ in that some of the states are conscious states while others are unconscious states. What this difference consists in is the central thing that Rosenthal seeks to explain.

A third use of the word 'conscious' picks out what Rosenthal calls transitive consciousness. Examples of transitive consciousness include those cases in which we would describe someone as being conscious of something. Transitive consciousness plays a crucial role in Rosenthal's explanation of state consciousness. Rosenthal endorses a link between transitive consciousness and state consciousness that he calls the transitivity principle (TP). According to the TP, a person's mental state is conscious in virtue of that person being conscious of that state.
The TP is supposed to capture a pre-theoretic view that people hold about state consciousness. One way in which Rosenthal expresses the transitivity principle is by stating that we would not regard a state as conscious if the person having the state was in no way conscious of being in that state.

Transitive consciousness comes in at least two varieties, only one of which is central to Rosenthal's theory. Generally speaking, when it comes to being conscious of things, there are several ways in which a person can be transitively conscious. One way is perceptual: I am conscious of my coffee mug right now in virtue of seeing my coffee mug. However, argues Rosenthal, another way in which we can be conscious of things is by thinking about them. In thinking about my cat, even though I may not be currently perceiving the cat, I am nonetheless, in virtue of having that thought, thereby conscious of the cat.
Since there are perceptual exemplars of transitive consciousness, this opens the theoretical possibility that our consciousness of our own mental states is mediated by some sort of perceptual relation, an 'inner sense'. Indeed, some thinkers have endorsed a higher-order perception (HOP) account of the consciousness of one's own mental states; see, for example, Lycan (1996). Rosenthal argues against HOP accounts of state consciousness on the grounds that (1) perception is always mediated by the presence of what he calls a mental quality and (2) our consciousness of our own mental states is not itself mediated by any comparable mental quality. This leads Rosenthal to his view that the way in which we are transitively conscious of our mental states, the way that renders those states conscious, is by having thoughts about those states: that is, to an explanation of consciousness in terms of HOTs.

One crucial feature to note about Rosenthal's theory is the way in which it attempts to supply a non-circular explanation of consciousness. There may initially seem to be circularity in the theory, since he is explaining a state's being conscious in virtue of a person being conscious of that state. However, the crucial distinction between state consciousness and transitive consciousness helps to block such circularity. It would of course be circular to explain state consciousness in terms of state consciousness, and also circular to explain transitive consciousness in terms of transitive consciousness. But such circularities are not what Rosenthal offers. His non-circular explanation is an explanation of state consciousness, which is one thing, in terms of transitive consciousness, which is something else. Of course, for a state to be conscious, there must be some other state which is about the first state. However, on pain of circularity, that other state must be allowed to be a non-conscious state.
The non-circularity of the HOT theory can be made most apparent by illustrating it in terms of two states. The first is a first-order state, for example a perception of a red apple. States of this type can occur unconsciously, as in subliminal perception, or they can occur consciously. When they occur consciously, they do so in virtue of there being a HOT about the first-order state. So, in virtue of having a HOT that one is perceiving a red apple, one has a conscious first-order state, namely the conscious perception of a red apple. The HOT need not itself be conscious. If it were conscious, it would need to be conscious in virtue of there being some third-order state about it. But in the absence of any such third-order state, the second-order state is unconscious. The first-order state is conscious in virtue of being
targeted by the non-conscious second-order state. We have here a sketch of a reductive explanation of state consciousness, since state consciousness is being explained in terms of things none of which are individually themselves conscious states.

Before closing this section, there are two points that I want to make. The first concerns Rosenthal's explicitly stated explanatory target and its relation to so-called phenomenal consciousness. The second concerns the puzzling case of empty HOTs – HOTs that are about first-order states that themselves do not exist. Rosenthal's treatment of empty HOTs serves to illustrate the load that cognitive states are meant to bear in his explanatory project.
2.1 Phenomenal consciousness and Rosenthal's explanatory target

As Rosenthal makes quite clear, his central explanatory target is state consciousness. One might wonder, then, what pertinence Rosenthal's project might have for so-called phenomenal consciousness. Many authors writing on consciousness make phenomenal consciousness their central concern. For example, Chalmers' famous work promoting the hardness of the 'hard problem' of consciousness is explicitly cast as being about the difficulty in explaining phenomenal consciousness (1996). When Velmans and Schneider assembled their authors for their Blackwell Companion to Consciousness (2006), they asked authors to be sure they addressed phenomenal consciousness (see my own contribution to that volume (Mandik 2006)).

In explicating what they mean by 'phenomenal consciousness', many, and perhaps most, authors working in contemporary philosophy of mind connect that phrase to a certain use of the phrase 'what it is like', especially as that phrase is utilized in Nagel's seminal 'What is it like to be a bat?' (1974). In that paper, Nagel highlights a problem concerning knowing what it is like to be a bat. Not being bats ourselves, we have no introspective access to the mental lives of bats. From a third-person point of view, we note that their sensory systems are very different from those of humans, and we infer that what it is like to be a bat is very likely to be very different from what it is like to be a human. But how, if at all, can the objective third-person methods common in the physical sciences allow us to know what it is like to be a bat? A similar sort of problem concerning knowledge is highlighted in Jackson's (1982) famous thought experiment concerning Mary, a hypothetical neuroscientist who, despite never having seen colours herself, knows all of the physical facts about what happens in human brains when humans see red,
but, since she has not herself seen red, it seems intuitive to many that Mary would not know what it is like to see red.

Many authors in the philosophy of mind connect the 'what-it-is-like' phraseology central to explicating 'phenomenal consciousness' to the philosophical technical term qualia. (See, for example, Frankish (2012, 2016).) In brief, qualia are phenomenal properties, properties of mental states in virtue of which 'there is something it's like' to be in them. (For an elaboration on worries concerning the technical terms 'qualia' and 'phenomenal consciousness', see Mandik (2016).) Nagel connects the idea of consciousness to the 'what-it-is-like' phraseology by explicitly characterizing conscious states as states it is like something to be in. As Nagel puts it: 'An organism has conscious mental states if and only if there is something that it is like to be that organism' (Nagel 1974, p. 436).

Rosenthal explicitly addresses the connection between Nagelian what-it-is-like talk and his own project of explaining state consciousness. He writes (1997, p. 733): 'What it is like for one to have a pain, in the relevant sense of that idiom, is simply what it is like for one to be conscious of having that pain.' He also writes (2011: 433–34):

As many, myself included, use that phrase, there being something it's like for one to be in a state is simply its seeming subjectively that one is in that state. … And … a HOT is sufficient for there to be something it's like for one to be in the state the HOT describes, even if that state doesn't occur.
We might summarize Rosenthal's views here in the following manner: The 'what-it-is-like' phraseology as most pertinent to discussions of consciousness picks out the way in which one's own mental life seems to one, that is, the subjective appearance of one's own mental life to oneself. Further, the way in which one's own mental life appears is fully determined by certain thoughts one has. So, for instance, its seeming to me that I am in pain just is me having a certain thought to the effect that I am in pain. Whether I really am in pain is a separate matter, but we are here concerned with appearance, not reality, and the appearances in question are determined by the having of certain thoughts, regardless of whether the thoughts are accurate or not. The connection of what-it-is-like phraseology to state consciousness, then, is that in having the thoughts determinative of the subjective appearances picked out by 'what-it-is-like' phraseology, one thereby has, in virtue of having those thoughts, states that are conscious. The conscious states are the states the thoughts are about. The conscious states are the states it appears to oneself that one is in.
2.2 Empty higher-order thoughts and the centrality of cognition in Rosenthal's theory

One interesting feature of Rosenthal's theory, and the one that especially highlights the central role that cognition plays in explaining consciousness, can be brought out by contemplating false HOTs. It is a general feature of thoughts that they can sometimes be false, and Rosenthal sees no reason to exclude consciousness-conferring HOTs from such a generalization. Rosenthal thus leaves open the possibility, for example, of a HOT that one is having a perception of a red apple when in fact one is either having a perception of a green apple or having no perception at all. The case of empty HOTs is somewhat puzzling, at least initially, as is Rosenthal's treatment of the case. However, some clarity can be gained by making explicit two kinds of readings of the HOT theory – a relational reading and a non-relational reading – and emphasizing which of the two Rosenthal intends, namely, the non-relational one.

One way of spelling out the issues at play here is to consider a comparison among three highly similar people who all have the same HOT but differ only in respects extrinsic to their respective HOTs. Persons A, B and C, let us suppose, all have a HOT, the content of which would be expressible by 'I am currently having a visual experience of a red apple'. Let A's HOT be true: A is in fact currently having a visual experience of a red apple. Let both B's HOT and C's HOT be false. Let B's HOT be false because B is having a visual experience of a green apple. Let C's HOT be false because C is not in fact currently having any visual experiences. Let us call the case of C the case of the empty HOT, since the first-order state that the HOT is about doesn't even exist. Especially interesting here is a comparison between C, the empty case, and A and B, the cases in which the relevant HOTs are non-empty.
Now consider this question: According to HOT theory, are the empty and non-empty cases alike in that each of the people described has a currently conscious visual state or, instead, does one of them, namely the empty case, C, lack a current conscious visual state? The different readings of the HOT theory, the relational reading and the non-relational reading, supply different answers to this question. According to the relational reading, a person is in a conscious state if and only if they are in an actually existing first-order mental state and they are also in a higher-order state about that first state. The qualifier 'actually existing' is crucial in distinguishing the relational from the non-relational reading. According to
the non-relational reading, having a HOT alone suffices for being in a conscious state. There need not, then, be an actually existing first-order state that the HOT is about. The state the HOT is about may, as when the HOT is empty, be a merely notional state.

On the relational reading of HOT theory, a state's being conscious is a relational matter. One and the same first-order state can be unconscious at one time and conscious at another, and what determines this change from unconscious to conscious is nothing intrinsic to the first-order state itself, but instead the change from not being related to a suitable HOT to being so related. For commentators who have adopted the relational reading, see Bruno (2005), Gennaro (2006, 2012) and Wilberg (2010). On the assumption that the instantiation of a two-place relation requires the existence of both of its relata (see Kriegel 2007, 2008 and Mandik 2009), the case of C, the person with an empty HOT, seems to be a case in which the person lacks a conscious state. Although they have a HOT, they do not have any actually existing first-order state that the HOT is about, and so C would be in the puzzling circumstance of lacking a conscious state while seeming to themselves to have one. What's puzzling about this is that the way the mental lives of A, B and C each seem to them would seem to be the same – it seems to each of them, in virtue of having the same HOT, that each of them has a first-order state of perceiving a red apple. If phenomenal consciousness is a matter of how one's own mental life seems to one, then in reading HOT theory relationally, we see state consciousness and phenomenal consciousness coming apart. If A is phenomenally conscious, then so is C. But C lacks a conscious state. So C is phenomenally conscious without having a conscious state, since having a conscious state is here interpreted as having an actually existing first-order state that one is conscious of in virtue of having a HOT about it.
(For a view that explicitly endorses separating state consciousness from phenomenal consciousness, see Brown (2015).) The non-relational reading has the advantage of not separating phenomenal consciousness from state consciousness. However, it accomplishes this by allowing that when a person is truly describable as having a conscious state, the state in question may on some occasions be merely notional. But this is of a piece with interpreting consciousness as being a matter of appearances, a matter of how one’s own mental life appears to one. (Worth noting, though space does not permit discussion of it, is a possible third reading: a disjunctive theory of state consciousness that combines resources from the relational and non-relational readings of HOT theory. In brief, such a disjunctive theory holds that a disjunctive condition suffices for one’s being in a
Cognitive Approaches to Phenomenal Consciousness
conscious state – one is in a conscious state if either there is an actually existing first-order state and a HOT bearing an ‘aboutness’ relation to it or there is simply a HOT and no actually existing first-order state that it is about. Like the non-disjunctive relational reading, the disjunctive reading runs afoul of the sorts of problems with a so-called aboutness relation I spell out at length in Mandik (2009).)
3 Daniel Dennett’s fame in the brain theory of consciousness

3.1 From Rosenthal to Dennett: Dennett’s Zimbo argument

In Dennett’s seminal exposition of his theory of consciousness, his 1991 book Consciousness Explained, he temporarily borrows Rosenthal’s HOT theory to articulate his own now-famous ‘zimbo’ argument against zombies. In philosophy of mind, the term ‘zombie’ is used to denote not some undead flesh-eater from horror cinema, but instead a being that is similar to a normal human in some major respect (e.g. behaviourally, functionally or physically) while differing in lacking phenomenal consciousness. Sometimes thought experiments about zombies are utilized in philosophy of mind in order to argue, for instance, that consciousness cannot be physically explained, or instead cannot be implemented in a computer. To give a brief and simplified sketch of a zombie argument, consider the following line of thought that attempts to use the alleged conceivability of zombies in an argument against functionalism (roughly, the view that consciousness can be explained in terms of the sorts of functional causal activities that human brains can have in common with computers). If consciousness is the sort of thing that can be explained wholly in terms of the functional causal activities in human brains that could be likewise implemented in a robot (activities at a coarse grain that could equally be implemented in a network of neurons and system of microchips), then it ought to be impossible for a creature to have exactly that putative set of physical goings-on while lacking phenomenal consciousness. And, let us suppose (a bit contentiously, to be sure), that if something is impossible, then it is inconceivable. However, it seems manifestly conceivable that a robot who acts and talks just like me, and even has a coarse-grained inner causal structure similar to my brain, might nonetheless differ from me in that of the two of us, only I have conscious states.
But given the above suppositions, it would follow then that consciousness cannot be explained wholly in terms of such functional causal activities that might co-occur in both human brains and the computerized control systems of robots. (For a longer discussion of the sorts of issues at play in the present paragraph, see Mandik 2017.) One of the key pieces in the above line of thought is the supposition that we can coherently conceive of zombies that are behaviourally and functionally identical to normal humans while lacking consciousness. Dennett attacks this supposition by pointing out that it goes hand-in-hand with supposing zimbos to be conceivable. What is a zimbo? A zimbo is a species of zombie invented by Dennett. Recall that a zombie is a creature who is similar to normal humans in some relevant respect – for instance, both a human and his zombie counterpart are imagined to have exactly analogous behaviours; however, zombies are hypothesized to be totally devoid of consciousness. A Dennettian zimbo is a zombie whose key similarity to its human counterpart involves cognition: they have all and only the same cognitive states – thoughts, beliefs, etc. – while differing in that only the normal human has conscious states. Suppose that the human in question is having a conscious visual state of seeing red while also having a conscious feeling of pain. Suppose they have just stubbed their toe on the leg of a red sofa. Suppose further that the human is thinking about these conscious states. The human is thinking a thought expressible as ‘I am seeing something red and also experiencing an intense pain.’ By definition, the human’s zimbo counterpart is thinking an exactly analogous thought about itself.
It is thinking a thought expressible as ‘I am seeing something red and also experiencing an intense pain.’ Assuming Rosenthalian HOT theory, especially the non-relational reading of it, it follows from the zimbo having that thought that it thereby is in a conscious state. But this is a manifest contradiction of the supposition that the zimbo is a zombie, a creature lacking conscious states. To be sure, the contradiction here is not inherent in the very idea of a zimbo, but instead arises in supposing both that there can be zimbos and that the Rosenthalian HOT theory is the correct analysis of what it means for a creature to be in a conscious state. Either or both must be rejected. Dennett opts to reject both. Dennett uses HOT theory to point out what’s absurd about zombies and zimbos, and after rejecting zombies and zimbos, goes on to argue against HOT theory, thus kicking away the ladder he climbed up on. Dennett’s resistance to the HOT theory seems to have multiple sources. One is a worry that the theory posits more states than there can be evidence for
(1991, p. 319). Another is that the theory seems to invoke a central ‘Cartesian theater’ (1991, p. 320; 2015, pp. 218–19). More about this second point below. Regarding the first point, the HOT theory allows that there can be a real difference, for instance, between the first-order thought of thinking that it is raining and the second-order thought of thinking that one is thinking that it is raining (and so on for third-order, fourth-order, etc. thoughts). Dennett sees no evidence for one’s thinking that one is thinking that it is raining that is distinct from the evidence that one is thinking that it is raining. There are, to be sure, different verbal expressions – there are, for instance, more words spoken in saying ‘I think that I think it’s raining’ than in saying ‘I think it’s raining’. But these verbal differences, according to Dennett, are not manifestations of pre-existing differences in cognitive state. (For more on how Rosenthal and Dennett compare, see Mandik 2015.)
3.2 Dennett’s own preferred theory

Just as Rosenthal’s theory of consciousness can be seen as primarily an explanation of what a conscious state is, so can we view Dennett’s theory as primarily an account of state consciousness. Dubbed by Dennett (1991) the multiple drafts theory of consciousness, it is the theory that a conscious state is spread out in both space and time in the brain across multiple instances of what Dennett calls ‘content fixations’, each of which – the ‘multiple drafts’ of the theory’s name – compete for domination in the cognitive system. This domination is what Dennett calls ‘fame in the brain’. Dennett sometimes (see Dennett 2005) refers to his theory of consciousness as the fame in the brain theory of consciousness. A crucial part of Dennett’s theory of consciousness is the following point: Dennett denies the existence of what he calls ‘the Cartesian theater’ (the central posit common to both the substance dualism of Descartes and a view that Dennett dubs Cartesian materialism). Dennett denies both (1) that there’s a single place in the brain where things all come together to give rise to consciousness and (2) that there is a specific time at which the onset of consciousness occurs. What is the ‘Cartesian theater’ that Dennett is so keen to deny the existence of? The so-called Cartesian theater is where the various previously unconscious brain events march onto the stage of consciousness before the audience of a homunculus – Latin for ‘little man’ – who watches the passing show. Dennett regards such a positing of a homunculus as non-explanatory: How is the homunculus conscious of the show in the Cartesian theater?
Dennett’s criticism here of the Cartesian theater can be seen as an accusation that his opponents are committing what philosophers call a homuncular fallacy. Consider, for example, attempting to explain perception by positing the creation of mental images that are apprehended with the ‘mind’s eye’. The problem here is that the alleged explanation threatens an infinite regress. If a person’s ability to perceive something is explained by some inner homunculus that itself perceives something, the question arises of how that inner entity is able to perceive anything. Is some additional homunculus, a homunculus within a homunculus, to be posited? Obviously not, for this just infinitely forestalls ever explaining what perceiving is. Many of the considerations that Dennett provides in support of his multiple drafts theory of consciousness hinge on the application of a distinction between representational contents and representational vehicles to conscious representations of time. Such representations may themselves (the vehicles) occur at times other than the times that they are representations of (the contents). The importance of the content/vehicle distinction for time representation can be drawn out in contemplation of an argument Dennett gives concerning the illusory motion and illusory colour change in an effect known as the colour phi phenomenon. In the colour phi phenomenon, the subject is presented with a brief flash of a green circle, followed by a brief flash of a red circle in a different location. Subjects report the appearance of motion. They report seeing a green circle that moves and becomes red. Further, the circle appears to change from green to red at the half-way point of its trajectory. So, here’s a question: How is it that subjects are aware of the green circle turning red before arriving at the spot where the red circle is flashed?
The subjects cannot have known ahead of time that a red circle was going to flash, so how is it that they are able to have a conscious experience of something turning to red prior to the red circle’s flashing? In seeking an answer to this question, we may feel pulled towards two candidate explanations, explanations that Dennett has dubbed ‘Orwellian’ and ‘Stalinesque’. Dennett argues against such explanations, and his arguments against them help to further illustrate his cognitive approach to phenomenal consciousness. One candidate explanation (the Stalinesque explanation, named after Stalin’s show trials) is that the subject unconsciously perceives the red circle’s flash and the subject’s brain uses that information to generate an illusory conscious experience of a green circle changing to red. The other candidate explanation (the Orwellian explanation, named after the revisionist history promulgated by the totalitarian regime in Orwell’s 1984) is that the subject
consciously perceives only the non-moving green and red circle flashes and has a false memory of there having been a moving and colour-shifting circle. Dennett argues that there is absolutely no basis for preferring one of these candidate explanations over the other. This is plausible, and as I argue in Mandik (2015, p. 190): To attempt to persuade yourself of Dennett’s conclusion, first imagine being a subject in a color phi experiment. What you introspect is that there has been a visual presentation of a moving, color-changing circle. Your introspective judgment is that you have experienced such an episode. But to resolve the Stalinesque v. Orwellian debate on introspective grounds, your introspective judgment would need to wear on its sleeve whether its immediate causal antecedent was a false memory (Orwellian) or a false experience (Stalinesque). But clearly, no such marker is borne by the introspective judgment. So much for the first-person evidence! So now, imagine being a scientist studying a subject in a color-phi experiment. Imagine availing yourself of all of the possible third-personal evidence. Suppose you avail yourself of evidence gleaned via futuristic high-resolution (both spatially and temporally) brain scanners. Such evidence, let us suppose, will allow you to determine not only which brain events occur and when, but also which brain events carry which information, and which brain events are false representations. This is, of course, to presume solutions to very vexing issues about information, representation and falsehood, solutions that might beg the question against a Dennettian anti-realism about representation and perhaps, thereby, against Dennettian anti-realism about consciousness, but I won’t pursue this line of thought here. However, we will here suppose that such solutions can be arrived at independently of resolving issues about consciousness.
Clearly, then, the evidence that you have will, by itself, tell you nothing about which states are conscious. So much for the third-person evidence! To surmount this hurdle for strictly third-person approaches, you may feel tempted to either ask the subject what their conscious experiences are like, or allow yourself to be a subject in this experiment. However, either way you will only gain access to an introspective judgement with a content that we have already seen as underdetermining the choice between the Orwellian and the Stalinesque. Given that there’s no real difference between the Orwellian and Stalinesque scenarios, what matters for consciousness is what the scenarios have in common, namely the content of the belief or thought that one underwent a conscious experience of a color-changing, moving circle. There’s nothing independent of this belief content that serves to make it true, so having a belief with such-and-such content is all there is to being in so-and-so conscious state.
According to Dennett, there is no fact of the matter about consciousness aside from how things seem to the subject, and how things seem to the subject is determined by the belief or thought that is arrived at via the process, smeared out in space and time in the brain, of competitions for fame in the brain via multiple content fixations. Some of Dennett’s critics have accused his argument here of relying on an untenable verificationism, the view that reality does not outstrip our evidence for it. Dennett does not dodge the charge of verificationism, but instead explicitly embraces a sort of verificationism that he calls first-person operationalism, a thesis that ‘brusquely denies the possibility in principle of consciousness of a stimulus in the absence of the subject’s belief in that consciousness’ (1991, p. 132). Perhaps our consciousness is just the sort of thing that verificationism or operationalism is totally appropriate for. After all, isn’t the way our minds seem to us the main thing that we want explained about consciousness? Opponents of the cognitive approach may grant that seeming is indeed what is centrally in need of explanation in explaining phenomenal consciousness, but may accuse the cognitive approach of conflating two distinct notions of seeming, one of which, a phenomenal sense of seeming, needs to be kept distinct from a cognitive (or ‘doxastic’) sense of seeming. (For defences of such a distinction, see Chisholm (1957), Jackson (1977) and Dretske (1969); for criticisms, see Gibbons (2005).) Phenomenal seemings viewed as aspects of mental life that are conscious but distinct from one’s own cognitive apprehension of them may be viewed as the sorts of things that many have called qualia, a sort of mental denizen that Dennett has taken pains to argue against.
3.3 Consciousness without Qualia: Dennett’s ‘Quining’ arguments

The locus classicus for Dennett’s attack on the very idea of qualia is his famous (1988) article ‘Quining qualia’. In this article, Dennett argues against the existence of qualia, where the crucial description of what qualia are supposed to be is that they are properties of consciousness that are (1) intrinsic, (2) ineffable, (3) directly known and (4) private. Logically, if qualia are correctly defined by that four-part description, and nothing exists that satisfies all parts, then qualia do not exist. In order to establish the non-existence of qualia so defined, an opponent of qualia need only establish the failure of anything to live up to one part of that description. Dennett, however, goes further, and attempts to cast doubt on all four parts. It seems to be Dennett’s view that nothing is intrinsic, nothing is ineffable, and so on. Especially pertinent to the present discussion of
cognitive approaches to phenomenal consciousness are Dennett’s attacks on the alleged intrinsicality of qualia and their alleged direct knowability. At the heart of Dennett’s case against intrinsicality is his thought experiment of the experienced beer drinker. Against the alleged direct knowability of qualia, Dennett marshals his thought experiment of the coffee tasters Chase and Sanborn. A property is an extrinsic property if its instantiation depends on the instantiation of other properties. Otherwise, it is intrinsic. Take, for example, the property of being a parent. If no one instantiates the property of being a child, then no one instantiates the property of being a parent. It is a vexing issue whether any properties are intrinsic. The property of having some particular weight can be shown to be extrinsic, for one’s weight would differ if one were on the moon instead of the Earth. It might be thought that mass is intrinsic, since mass can remain the same despite the aforementioned changes in weight. However, in the context of Einstein’s Special Theory of Relativity, we can see that mass itself is extrinsic, for the mass of an object increases as its velocity approaches the speed of light, and velocity itself is a matter of motion, which in turn depends on the kinds of relations a body bears to other bodies and perhaps also to space-time itself. While it is difficult to come up with a clear example of an intrinsic property – a difficulty that may indicate that there actually is no such thing as an intrinsic property – some have thought that so-called qualia are examples of intrinsic properties. One way of conveying the idea that qualia are intrinsic is by reference to the alleged conceivability of intersubjectively undetectable qualia inversions.
Conceivably, or so it goes, there could be a being who is behaviourally just like you, including verbal behaviours such as calling out the names of certain colour samples, but what it is like for you to see green is the same as what it is like for them to see red, and vice versa. Suppose further that there is no difference physically between the two of you either. Any inspection of your internal makeup, no matter how fine grained, reveals the same details in the complex arrangements of your physical parts, down to the smallest parts – your cells, molecules, etc. No difference detectable from the third-person point of view serves to distinguish you from your doppelgänger, aside from the fact that the two of you can be at different places at the same time. When shown a ripe tomato and asked its colour, you both say ‘red’. When asked your favourite colour, you give the same answer. When asked ‘which is more similar to red, orange or green?’ you both answer ‘orange’. Nonetheless, if intersubjectively undetectable qualia inversion is a coherent possibility, then it’s possible that what it is like for your doppelgänger to see green is just like what it is like for you to see red. Whereas you have a
red quale in response to a red visual stimulus, your doppelgänger has a green quale in response to the very same stimulus, and a very different stimulus would be needed to elicit a red quale in your doppelgänger. What is supposed to show the intrinsicality of, for instance, a red quale is that the red qualia of you and your doppelgänger bear very different relations to your respective behaviours and internal physical structures. Nonetheless, despite all these extrinsic differences, your red quale is just like your doppelgänger’s red quale. And thus is a red quale supposed to be intrinsic. Being an intrinsic property leaves open the possibility that among the relations irrelevant to a quale’s nature are any relations it bears to cognitive states. If a red quale is intrinsic, then it has the same internal nature regardless of whether it is a quale in the mind of someone who believes that red is one of the ugliest colours ever or instead believes that it is the most beautiful. Dennett attacks the alleged intrinsicality of qualia via a thought experiment. Consider a flavour that many consider to be an acquired taste. Many say the flavour of beer is such a flavour, and many adults who love beer recall not having liked it when they first tasted it. If a quale is supposed to be what you apprehend when you apprehend what it is like to have such-and-such experience, the question arises of whether what it is like for you to taste beer is the same now as it was before you grew to like it. Many experienced beer drinkers who underwent a process of coming to develop an appreciation for beer may be tempted to say that if beer tasted like this when they first tried it, then they wouldn’t have hated it, since this flavour (here thought of as a quale) is one that they love. But this line of thinking puts pressure on the idea that what it is like is something intrinsic and in no way dependent on relations, as for instance relations to cognitive states such as states of liking, hating, etc.
If what it is like to taste beer while liking it and what it is like to taste beer while hating it are different, then this puts pressure on the idea that what it is like to taste beer is an intrinsic property unrelated to whether one likes it or not. Dennett turns to another thought experiment for his attack on the alleged direct knowability of qualia. The notion of direct knowing might be conveyed by contrasting it with the indirect way in which many things are known. Consider how it is that you know that you have a brain. It is unlikely that you have ever seen your brain. Even if you had a transparent window in your skull, you would still have to look in a mirror, and draw an inference that the image you see in the mirror correctly reflects the reality of what is in your skull. If you have had an MRI or other kind of medical scan of your brain and seen the resulting images, the knowledge you come to have of your brain is likewise
indirect, for there are inferences that must be made about the reliability of such tests to indicate an underlying reality. For many people, their knowledge that they have a brain is not due to any observation, indirect or otherwise, of their own brain, but instead the result of an inference based on knowledge that they are a human being with typical capacities, and other humans with such capabilities have been shown to have brains. In contrast, so the story goes, your knowledge of whether you currently are experiencing a red quale or a painful quale is supposed to be unmediated by inferences drawn from observations and scientific knowledge. You just introspect and there it is, the quale that is thereby known. But now let us turn to the sort of question raised by the beer drinker thought experiment: Do qualia have wholly intrinsic natures distinct from any judgements one might have about them? If qualia are directly known, then it is reasonable to suppose that the answer to that sort of question is one that can be known directly. Any dispute should just be settled by directly introspecting one’s own mental states. Here is where Dennett’s next thought experiment serves to call into question such direct knowability. He invites us to imagine two professional coffee tasters, Chase and Sanborn, who work doing quality control for Maxwell House coffee. Both were hired around the same time, and sought out the job in the first place because they enjoyed Maxwell House coffee. But one day they each confess to the other that they no longer enjoy their jobs because they no longer enjoy drinking that brand of coffee. However, despite these similarities between Chase and Sanborn, they differ in how they characterize their respective mental lives with respect to the predicament they find themselves in. Chase claims that the taste has remained the same, but what has changed is that he doesn’t like that taste any more. 
Sanborn claims that the taste has changed, and claims further that if it had remained the same, he would still like it. In referring to the taste, they are not referring to the chemical structure of the coffee. Suppose that that chemical structure has remained the same throughout, and they are both aware of that fact. What they are disagreeing about is whether a mental aspect of their reaction to putting that chemical in their respective mouths has remained the same (as Chase claims) or instead changed (as Sanborn claims). Putting the disagreement in terms of qualia, Chase claims that his coffee-associated quale has remained the same, and what has changed is a cognitive state, a judgement or appraisal that he no longer likes that quale. In contrast, Sanborn claims to now have a different quale associated with drinking coffee: He used to have one that he enjoyed, and would enjoy again if he could regain it, but now he has one that he doesn’t like.
Dennett urges that no one, not Chase, not Sanborn and not any third party could settle the dispute between Chase and Sanborn. Suppose we imagine trying to settle the dispute by scanning the brains of Chase and Sanborn, and seeking out some evidence about the brain processing stream wherein information flows from their sensory periphery, through their central nervous system, eventually giving rise to the processing that corresponds to a judgement and ultimately to the musculoskeletal activity that is the expression of that judgement. Imagine that we have been scanning Chase and Sanborn regularly throughout their lives. We might go through the resulting data seeking information about whether what has changed about Chase and Sanborn was something relatively early in the neural processing stream versus something later. There are several problems with seeking to settle the dispute this way. The first is that there’s really no way of having any idea which part of the processing stream corresponds to a quale versus a judgement about a quale. But there’s another problem, one that is much more directly applicable to the present question of whether qualia are directly knowable. If what has changed in Chase and Sanborn is not something that they themselves can know simply by introspecting, but instead requires some third-person accessible data such as brain scans, then the claim that such facts about qualia are directly known is thereby undermined.
4 A challenge to cognitive approaches and a possible solution

One of the main general kinds of complaint against cognitive approaches to phenomenal consciousness is that they require more conceptual sophistication than conscious creatures can plausibly be said to have or employ for every instance in which they are phenomenally conscious. At the core of this complaint is an assumption of a deep connection between cognition and concepts. A typical example of a cognitive state is the thought that there is coffee in the mug in my hand. According to this assumption, in order to think that there is coffee in the mug in my hand, I must both possess and employ concepts such as the concept of coffee, the concept of a mug, relevant concepts of one thing being in another (perhaps different concepts for coffee being in a mug and a mug being in a hand), and so on. A typical example of a state of phenomenal consciousness is my visual experience of a patch of paint as being the brightest and reddest thing presently in my visual field. Under the assumption linking concepts to cognition, a natural interpretation of the cognitive approach
to phenomenal consciousness is that concepts are going to play crucial roles in my having a phenomenally conscious state. We can cast this point in terms of a HOT-theoretic version of the cognitive approach. If it subjectively seems to me that I am having a visual experience of a paint patch as being bright and red, then I need to have a thought, a HOT, about one of my own visual experiences, and further, I must think of that visual experience as being an experience of something red, not blue, and as something bright, not dark or dull. And having such thoughts, under the assumption being discussed, requires that I have and deploy concepts such as the concept of a visual experience, a concept of red, a concept of brightness and so on. Here many worries arise hinging on the plausibility of tying phenomenal consciousness to concepts. Perhaps some of the concepts mentioned above, like the concept of a visual experience, are not possessed by, for instance, babies or non-human animals. If such creatures nonetheless have phenomenally conscious visual experiences, then that would contradict at least one version of the cognitive approach, namely the HOT-theoretic one just sketched. Another sort of worry in the ballpark, and one that I will dedicate the majority of this section to discussing, is that there is a fineness of grain to the contents of phenomenally conscious states that outstrips the concepts possessed and employed by the subjects of such phenomenally conscious states. Focusing specifically on visual experiences of colours, the worry is that even for an adult human who possesses concepts of colours – a concept of red, a concept of one colour being brighter than another – there are more colours and aspects of colour than they have concepts for. One particularly famous expression of this ‘fineness of grain’ concern is one due to Evans (1982).
Evans’s articulation of the worry has had a wide influence, but he didn’t so much spell out an argument as pose a rhetorical question: ‘Do we really understand the proposal that we have as many colour concepts as there are shades of colour that we can sensibly discriminate?’ (Evans 1982, p. 229). For a full-fledged argument that our experiences of colours have contents that outstrip our conceptual contents, we may turn to the work of Raffman (1995), and examine a powerful empirical argument for the conclusion under consideration. The argument, which Mandik (2012) dubs the Diachronic Indistinguishability Argument (DIA), hinges on (1) a plausible assumption connecting concepts to memory, as well as (2) a widespread empirical phenomenon concerning colours that are discriminable when presented simultaneously but not when presented serially. To illustrate (2) consider being shown two paint chips that are both shades of blue, though when placed side by side, you can just barely see that one is a slightly
darker shade of blue than the other. If, after the chips had been taken away, you were shown one of them again and asked whether it was the exact same shade of blue as the chip that had been on the right, then, like most people, you would be unable to identify reliably whether it was the same shade of blue. Contrast this with a situation in which the pair of chips you are initially shown are so different as to differ with respect to hue – suppose that one is a shade of blue while the other is a shade of red. In this situation, like most people, you would be quite reliable at correctly re-identifying one of the chips across a short memory delay. Let us turn now to consider supposition (1) of the DIA, the supposition connecting concepts to memory. Plausibly, the difference in being able to discriminate colours across a memory delay tracks differences in what colour concepts we have. Most English speakers are adept at using basic colour terms like ‘blue’ and ‘red’ and it is natural to suppose that they likewise have concepts of blue and red. And further, when the paint chips are so different that one is red while the other is blue, a short memory delay does not disrupt discrimination. For the shades of blue that are extremely similar, plausibly, very few English speakers know the names of those colours, and it may be similarly plausible that they lack concepts for those individual shades of blue. The gist of the DIA may be put like this: If the cognitive approach is correct, then there shouldn’t be more contents to experience than an experiencer has concepts of. When it comes to visible colours, if the colours are ones the experiencer has concepts of, as most English speakers have concepts for red and blue, then samples of red and blue are distinguishable across a memory delay. 
Contrapositively, if a pair of objectively distinct colour samples that are distinguishable when side by side, call them blue1 and blue2, are not distinguishable across a memory delay, then the subject seems to lack distinct concepts for blue1 and blue2. If we add to the aforementioned assumptions (1) and (2) the additional assumption (3) that the subject has distinct experiences of blue1 and blue2 even when blue1 and blue2 are presented across a memory delay, then there would seem to be more content to visual phenomenal experiences than can be accounted for by the conceptualist resources of the cognitive approach. However, the cognitive approach does have a powerful response to the DIA, and the response hinges on the way a certain conceptualist account can cast doubt on assumption (3) of the DIA. The average English speaker can describe a wide variety of colours with a relatively meagre vocabulary. One does not need, for example, 40 distinct colour names to describe 40 distinct shades of blue. One may instead employ a combination of comparative colour terms and phrases, for example, ‘darker than’,
with non-comparative colour terms, for example, ‘blue’, to describe more shades of blue than one has individual colour names for. An individual ignorant of the colour terms ‘navy’ and ‘cobalt’ can nonetheless describe respective samples of them as two shades of blue, the former darker than the latter. Assuming one has concepts corresponding to the terms one is conversant with, we may credit typical English speakers as having comparative and non-comparative colour concepts. In our earlier example of being presented with samples blue1 and blue2 distinguishable when presented simultaneously but not diachronically, adherents of the cognitive approach may describe the relevant experiential content as a conceptualized content expressible as follows: When the samples are presented simultaneously, the colour contents of the experience are expressible as ‘two shades of blue, one darker than the other’, an articulation of the content that deploys both comparative and non-comparative colour terms. In contrast, when blue1 and blue2 are presented diachronically, in each presentation the subject is only in a position to confidently deploy non-comparative colour terms. They may then conceive each sample simply as a shade of blue. A crucial component of this cognitive approach to explaining the key data in the DIA is the way that it exploits the indeterminate nature of conceptual contents. In conceiving of a shade simply as a shade of blue, the conceptualization is indeterminate with respect to precisely which shade of blue it is, and can likewise be indeterminate with respect to which other shades it is darker than. In her presentation of the DIA, Raffman considers whether her opponent can make some sort of appeal to indeterminate representations to neutralize the threat that the argument poses. 
She objects against all such approaches that, while some of our colour concepts are indeterminate, others are determinate, and further, there are no introspective differences with respect to degree of determinacy between our experiences of the colours we have determinate concepts of and our experiences of the colours we have only indeterminate concepts of. However, it seems that the cognitive approach has a ready reply to Raffman’s objection. According to the sort of conceptualism/cognitivism being scouted here, in order for two experiences to seem different to a subject with respect to degree of determinateness, the subject must be applying some relevant concepts of degree of determinateness. However, it may very well be the case that typical subjects during typical acts of introspection do not make any such application of a concept of degree of determinateness, either because they lack such a concept or because they fail to apply it for some other reason, such as not having much practice in distinguishing their own representations with respect to degree of representational determinateness.
There is much more to say than present space permits on the topic of how the cognitive approach can handle the alleged fineness of grain of phenomenal experience, especially as regards colour experience. For further discussion see Mandik (2012, 2013).
5 Conclusion

Cognitive approaches to phenomenal consciousness attempt to explain those aspects of mental life in virtue of which, in the Nagelian phrase, there is something it is like, by appealing to a subject’s thoughts, judgements, or other cognitive appraisals about their own mental lives. In so doing, such approaches hold the promise of offering a reductive or non-circular explanation of phenomenal consciousness by explaining phenomenal consciousness in terms of cognitive states that are not themselves intrinsically phenomenally conscious (by analogy, one might explain water in terms of hydrogen and oxygen, items that individually are not water). The main cognitive approaches I have looked at, the HOT theory of consciousness of David Rosenthal and the multiple drafts or fame in the brain theory of Daniel Dennett, can be seen as opposed to a qualia-centric approach to phenomenal consciousness, where qualia are thought to be aspects of mental lives that are independent of any cognitive appraisals, thoughts or judgements about one’s own mental life. Many of the considerations, both pro and con, in the dispute between qualia-centric and cognitive approaches hinge on thought experiments. However, empirical investigations are pertinent, and in particular we have seen the argument due to Raffman (1995), an attempt to bring empirical data about memory and colour discrimination to bear against approaches to consciousness that include the cognitive approach. This empirical argument can be conveyed as one concerning whether phenomenal consciousness has a finer grain than can be adequately captured by the conceptually structured states that are definitive of cognition. However, as I have argued, the cognitive approach has the resources to explain the data Raffman appeals to, and thus ward off the threat posed.
References

Brown, R. (2015). The HOROR theory of phenomenal consciousness. Philosophical Studies, 172 (7), 1783–94.
Bruno, M. (2005). A review of Rocco J. Gennaro (ed.), Higher-Order Theories of Consciousness: An Anthology. Psyche, 11 (6), 1–11.
Byrne, A. (1997). Some like it HOT: Consciousness and higher-order thoughts. Philosophical Studies, 86, 103–29.
Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press.
Chisholm, R. (1957). Perceiving: A Philosophical Study. Ithaca: Cornell University Press.
Dennett, D. (1988). Quining qualia, in A. Marcel and E. Bisiach (eds.), Consciousness in Modern Science. Oxford: Oxford University Press.
Dennett, D. (1991). Consciousness Explained. Boston, MA: Little, Brown and Company.
Dennett, D. (2005). Sweet Dreams: Philosophical Obstacles to a Science of Consciousness. Cambridge, MA: MIT Press.
Dennett, D. (2015). Not just a fine trip down memory lane: Comments on the essays on Content and Consciousness, in C. Munoz-Suarez and F. De Brigard (eds.), Content and Consciousness Revisited: With Replies by Daniel Dennett, 199–220. Berlin: Springer.
Dretske, F. (1969). Seeing and Knowing. Chicago: The University of Chicago Press.
Evans, G. (1982). The Varieties of Reference. Oxford: Oxford University Press.
Frankish, K. (2016). Illusionism as a theory of consciousness. Journal of Consciousness Studies, 23 (11–12).
Frankish, K. (2012). Quining diet qualia. Consciousness and Cognition, 21 (2), 667–76.
Gennaro, R. (2006). Between pure self-referentialism and the (extrinsic) HOT theory of consciousness, in U. Kriegel and K. Williford (eds.), Self-Representational Approaches to Consciousness, 221–48. Cambridge, MA: MIT Press.
Gennaro, R. (2012). The Consciousness Paradox: Consciousness, Concepts, and Higher-Order Thoughts. Cambridge, MA: MIT Press.
Gibbons, J. (2005). Qualia: They’re not what they seem. Philosophical Studies, 126, 397–428.
Jackson, F. (1977). Perception: A Representative Theory. London: Cambridge University Press.
Jackson, F. (1982). Epiphenomenal qualia. Philosophical Quarterly, 32, 127–36.
Kriegel, U. (2007). Intentional inexistence and phenomenal intentionality. Philosophical Perspectives, 21, 307–40.
Kriegel, U. (2008). The dispensability of (merely) intentional objects. Philosophical Studies, 141, 79–95.
Lycan, W. (1996). Consciousness and Experience. Cambridge, MA: MIT Press.
Mandik, P. (2016). Meta-illusionism and qualia quietism. Journal of Consciousness Studies, 23 (11–12), 140–48.
Mandik, P. (2006). The neurophilosophy of consciousness, in M. Velmans and S. Schneider (eds.), The Blackwell Companion to Consciousness, 418–30. Oxford: Basil Blackwell.
Mandik, P. (2009). Beware of the unicorn: Consciousness as being represented and other things that don’t exist. Journal of Consciousness Studies, 16 (1), 5–36.
Mandik, P. (2012). Color-consciousness conceptualism. Consciousness and Cognition, 21 (2), 617–31.
Mandik, P. (2013). What is visual and phenomenal but concerns neither hue nor shade?, in R. Brown (ed.), Consciousness Inside and Out: Phenomenology, Neuroscience, and the Nature of Experience (Studies in Brain and Mind, vol. 6), 219–27. London: Springer.
Mandik, P. (2015). Conscious-state anti-realism, in C. Munoz-Suarez and F. De Brigard (eds.), Content and Consciousness Revisited: With Replies by Daniel Dennett, 185–97. Berlin: Springer.
Mandik, P. (2017). Robot pain, in J. Corns (ed.), The Routledge Handbook of Philosophy of Pain. New York: Routledge.
Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–50.
Raffman, D. (1995). On the persistence of phenomenology, in T. Metzinger (ed.), Conscious Experience. Munich: Imprint Academic Verlag.
Rosenthal, D. M. (2005). Consciousness and Mind. Oxford: Clarendon Press.
Rosenthal, D. (2011). Exaggerated reports: Reply to Block. Analysis, 71 (3), 431–7.
Sellars, W. (1997). Empiricism and the Philosophy of Mind (with an Introduction by Richard Rorty and a Study Guide by Robert Brandom). Cambridge, MA: Harvard University Press.
Velmans, M. and Schneider, S. (eds.) (2006). The Blackwell Companion to Consciousness. Oxford: Basil Blackwell.
Wilberg, J. (2010). Consciousness and false HOTs. Philosophical Psychology, 23 (5), 617–38.
18
Free Will and Consciousness

Alfred Mele
Researchers have found patterns of brain activity that predict people’s decisions up to 10 seconds before they’re aware they’ve made a choice. … The result was hard for some to stomach because it suggested that the unconscious brain calls the shots, making free will an illusory afterthought. —ScienceNOW Daily News, 4/14/08
Précis

Work done by Benjamin Libet in the 1980s poses an apparent challenge to the existence of free will. Other neuroscientists have followed Libet’s lead, sometimes using EEG, as he did, and sometimes functional magnetic resonance imaging (fMRI), depth electrodes or subdural grid electrodes. In Mele (2009) (Chapters 3, 4 and 6), I argued that the neuroscientific work discussed there does not come close to justifying the claim that free will is an illusion. My focus was on the data, including data about consciousness, and on whether the data supported certain empirical claims that may be combined with theoretical claims about free will to yield the conclusion that free will does not exist. There are interesting new data. In §2, I bring recent findings to bear on the question whether we have powerful neuroscientific evidence for the nonexistence of free will. §1 provides some empirical, conceptual and terminological background. §§3 and 4 explore the status of generalizations from alleged findings about decisions or intentions in an experimental setting of a particular kind to all decisions and intentions. §5 wraps things up.
1 Background

Libet makes the following claims:

The brain ‘decides’ to initiate or, at least, prepare to initiate [certain actions] before there is any reportable subjective awareness that such a decision has taken place. (Libet 1985, 536)

If the ‘act now’ process is initiated unconsciously, then conscious free will is not doing it. (Libet 2001, 62)

Our overall findings do suggest some fundamental characteristics of the simpler acts that may be applicable to all consciously intended acts and even to responsibility and free will. (Libet 1985, 563)
Associated with these claims is a sceptical argument about free will that may be set out as follows. Having a label for the argument will prove useful. I dub it the decision-focused sceptical argument, or DSA for short:
DSA

1. In Libet-style experiments, all the decisions to act on which data are gathered are made unconsciously.
2. So probably all decisions to act are made unconsciously.
3. A decision is freely made only if it is consciously made.
4. So probably no decisions to act are freely made.

One may say that, strictly speaking, even if no decisions to act are freely made, there might still be room for free actions. Perhaps actions that proceed from unfree decisions are not free, but what about other actions? I will not pursue this line of inquiry here. Decisions (or choices) to act are at the heart of the philosophical literature on free will. In any case, the discovery that no decisions to act are freely made would be a very serious blow to free will. In Mele (2009), I spend a lot of time explaining why premise 1 is not justified by the data and some time explaining why the generalization in premise 2 is unwarranted. I return to both matters shortly, after describing some experiments and the data they generate. In the studies described in this section, participants are asked to report on when they had certain conscious experiences – variously described as experiences of an urge, intention or decision to do what they did. After they act, they make their reports. In what follows, readers who understand ‘conscious’ and ‘aware’ in
such a way that one can be aware of something of which one is not conscious (on the grounds that consciousness requires a phenomenal feature that awareness does not) should read ‘conscious’ as ‘aware’. Also, the measure of consciousness or awareness in these studies is the report made by a participant. As I put it elsewhere (Mele 2009, 22), it is ‘report-level’ consciousness or awareness that is at issue. In some of Libet’s studies (1985; 2004), participants are asked to flex their right wrist whenever they wish. When they are regularly reminded not to plan their wrist flexes and when they do not afterwards say that they did some such planning, an average ramping up of EEG activity (starting 550 milliseconds before muscle motion begins; −550 milliseconds) precedes the average reported time of the conscious experience (200 milliseconds before muscle motion begins; −200 milliseconds) by about a third of a second (Libet 1985). Libet claims that decisions about when to flex were made at the earlier of these two times (1985, 536). The initial ramping that I mentioned is the beginning of a readiness potential (RP), which may be understood as ‘a progressive increase in brain activity prior to intentional actions, normally measured using EEG, and thought to arise from frontal brain areas that prepare actions’ (Haggard et al. 2015, 325). The significance of RPs is discussed shortly. Chun Siong Soon and coauthors, commenting on Libet’s studies, write: ‘Because brain activity in the SMA consistently preceded the conscious decision, it has been argued that the brain had already unconsciously made a decision to move even before the subject became aware of it’ (2008, 543). To gather additional evidence about the proposition at issue, they use fMRI in a study of participants instructed to do the following ‘when they felt the urge to do so’: ‘decide between one of two buttons, operated by the left and right index fingers, and press it immediately’ (543). 
Soon and colleagues find that, using readings from two brain regions (one in the frontopolar cortex and the other in the parietal cortex), they are able to ‘predict’ with about 60 per cent accuracy (see Soon et al. 2008, supplementary fig. 6, Haynes 2011, 93) which button participants will press several seconds in advance of the button press (544).1 In another study, Soon et al. ask participants to ‘decide between left and right responses at an externally determined point in time’ (2008, 544). They are to make a decision about which of the two buttons to press when shown a cue and then execute the decision later, when presented with a ‘respond’ cue (see their supplementary material on ‘Control fMRI experiment’). Soon et al. report that one interpretation of this study’s findings is that ‘frontopolar cortex was the first
cortical stage at which the actual decision was made, whereas precuneus was involved in storage of the decision until it reached awareness’ (545). Itzhak Fried, Roy Mukamel and Gabriel Kreiman record directly from the brain, using depth electrodes (2011). They report that ‘A population of SMA [supplementary motor area] neurons is sufficient to predict in single trials the impending decision to move with accuracy greater than 80% already 700 ms prior to subjects’ awareness’ (548) of their ‘urge’ (558) to press the key. By ‘700 ms prior to subjects’ awareness’, Fried et al. mean 700 milliseconds prior to the awareness time that participants later report: they recognize that the reports might not be accurate (552–3, 560). Unlike Libet, they occasionally seem to treat decisions to press keys as items that are, by definition, conscious (548). Possibly, in their thinking about their findings, they identify the participants’ decisions with conscious urges. If that is how they use ‘decision’, their claim here is that, on the basis of activity in the SMA, they can predict with greater than 80 per cent accuracy, 700 milliseconds prior to the reported time, what time a participant will report to be the time at which he was first aware of an urge to press. But someone who uses the word ‘decision’ differently may describe the same result as a greater than 80 per cent accuracy rate in detecting decisions 700 milliseconds before the person becomes aware of a decision he already made. These two ways of describing the result are obviously very different. The former description does not include an assertion about when the decision was made. There are grounds for doubt about the accuracy of the reported awareness times in these studies. I have discussed such grounds elsewhere (Mele 2009, Chapter 6; see also Maoz et al. (2015), 190–4), and I will not do so again here. Instead I focus on two questions. First, when are the pertinent decisions made in these studies? 
Second, how plausible is the generalization made in premise 2 of DSA, even if it is assumed that, in these studies, all the decisions at issue are made before participants are aware of making them? Some conceptual and terminological background is in order before these questions are addressed. Decisions to do things, as I conceive of them, are momentary actions of forming an intention to do them. For example, to decide to flex my right wrist now is to perform a (non-overt) action of forming an intention to flex it now (Mele 2003, Chapter 9). I believe that Libet understands decisions in the same way. Some of our decisions and intentions are for the non-immediate future and others are not. I have an intention today to fly to Vancouver tomorrow, and I have an intention now to answer my ringing phone now. The former intention is aimed at action one day in the future. The latter
intention is about what to do now. I call intentions of these kinds, respectively, distal and proximal intentions (Mele 1992, 143–4, 158; 2009, 10), and I make the same distinction in the sphere of decisions to act. Libet studies proximal intentions (or decisions or urges) in particular. The expression ‘W time’ or ‘time W’ is sometimes used in the literature on Libet’s work as a label for the time at which a participant is first conscious or aware of his proximal intention (or decision or urge) to flex and sometimes for the reported time of first awareness or consciousness of this. The two times may be different, of course; and Libet himself thought that although the average reported time is about −200 milliseconds, the actual average time is about −150 milliseconds (1985, 534–5; 2004, 128). Here I use ‘time W’ as a label for the actual time of first awareness.
2 Decision times

In Mele 2009, drawing on data of various kinds, I argued that Libet’s participants do not make decisions as early as 550 milliseconds before the beginning of muscle motion (−550 milliseconds). Drawing on the same data, I also suggested there that early stages of the RP in his main experiment (a type II RP, which begins at −550 milliseconds) may be associated with a variety of things that are not intentions: ‘urges to (prepare to) flex soon, brain events suitable for being relatively proximal causal contributors to such urges, motor preparation, and motor imagery, including imagery associated with imagining flexing very soon’ (56). Call this group of things the early group. As I pointed out, ‘If RP onset in cases of “spontaneous” flexing indicates the emergence of a potential cause of a proximal intention to flex, the proximal intention itself may emerge at some point between RP onset and time W, at time W, or after time W: at time W the agent may be aware only of something – a proximal urge to flex, for example – that has not yet issued in a proximal intention’ (57). This point bears on premise 1 of DSA, the assertion that in Libet-style experiments, all the decisions to act on which data are gathered are made unconsciously. If proximal decisions to flex – momentary actions of forming proximal intentions to flex – are not made before W, Libet’s argument for the claim that they are made unconsciously is undercut. Also relevant in this connection is evidence about how long it would take for a proximal decision or proximal intention to generate relevant muscle motion. Does it take around 550 milliseconds, as Libet’s interpretation of his results implies? I discussed this issue in Mele 2009, where I argued for a negative
answer (60–4). There is additional evidence about this now and about what is represented by RPs. In Mele 2009, I suggested that some of the participants in Libet’s studies may ‘treat the conscious urge [to flex] as what may be called a decide signal – a signal calling for them consciously to decide right then whether to flex right away or to wait a while’ (75). Judy Trevena and Jeff Miller conducted a pair of interesting studies involving a related decide signal. Both studies had an ‘always-move’ and a ‘sometimes-move’ condition (2010, 449). In one study, participants in both conditions were presented with either an ‘L’ (indicating a left-handed movement) or an ‘R’ (indicating a right-handed movement) and responded to tones emitted at random intervals. In the sometimes-move condition, participants were given the following instructions: ‘At the start of each trial you will see an L or an R, indicating the hand to be used on that trial. However, you should only make a key press about half the time. Please try not to decide in advance what you will do, but when you hear the tone either tap the key with the required hand as quickly as possible, or make no movement at all’ (449). (The tone may be viewed as a decide signal calling for a proximal decision about whether to tap or not.) In the always-move condition, participants were always to tap the assigned key as quickly as possible after the tone. Trevena and Miller examined EEG activity for the second preceding the tone and found that mean EEG ‘amplitudes did not differ among conditions’ (450). That is, there were no significant differences among pre-tone EEG amplitudes in the following three conditions: always-move; sometimes-move with movement; sometimes-move without movement. They also found that there was no significant lateralized readiness potential (LRP) before the tone (450). Trevena and Miller plausibly regard these findings as evidence that no part of pre-tone EEG represents a decision to move. 
The mean time ‘from the onset of the tone to a key press ... was 322 ms in the always-move condition and 355 ms in the sometimes-move condition’ (450). If and when the tone was among the causes of a proximal intention to press, the mean time from the onset of that intention to a key press was even shorter. And, of course, muscle motion begins before a key press is complete. In a second study, Trevena and Miller left it up to participants which hand to move when they heard the decide signal. As in the first study, there was an always-move condition and a sometimes-move condition. Trevena and Miller again found that pre-tone EEG ‘did not discriminate between’ trials with movement and trials without movement, ‘LRP was absent before the tone’ and LRP ‘was significantly positive after the tone for trials in which a movement was made’ (453). They conclude, reasonably, that pre-tone EEG ‘does not necessarily
reflect preparation for movement, and that it may instead simply develop as a consequence of some ongoing attention to or involvement with a task requiring occasional spontaneous movements’ (454). Regarding muscle activity, measured using electromyography (EMG), the experimenters report that EMG ‘seemed to start about 150 ms after the tone’ in both ‘the sometimes-move trials with movements and in the always-move trials’ (452). If, in the case of movements, a proximal decision or intention to tap a key followed the tone, then, obviously, the time from the onset of that decision or intention to muscle motion is even shorter. This casts serious doubt on the claim that, on average, proximal decisions or intentions to flex are made or acquired about 550 milliseconds prior to muscle motion in Libet’s studies. As Aaron Schurger, Jacobo Stitt and Stanislas Dehaene report, ‘it is widely assumed that the neural decision to move coincides with the onset of the RP’ (2012, E2909). Like Trevena and Miller, and like me, they challenge that assumption. In their view, the brain uses ‘ongoing spontaneous fluctuations in neural activity’ (E2904) – neural noise, in short – in solving the problem about when to act in Libet-style studies. A threshold for decision is set, and when such activity crosses it, a decision is made. They contend that most of the RP – all but the last 150–200 milliseconds or so (E2910) – precedes the decision. In addition to marshalling evidence for this that comes from the work of other scientists, Schurger et al. offer evidence of their own. They use ‘a leaky stochastic accumulator to model the neural decision’ made about when to move in a Libet-style experiment, and they report that their model ‘accounts for the behavioral and [EEG] data recorded from human subjects performing the task’ (E2904). 
The model also makes a prediction that they confirmed: namely, that when participants are interrupted with a command to move now (press a button at once), short response times will be observed primarily in ‘trials in which the spontaneous fluctuations happened to be already close to the threshold’ when the command (a click) was given (E2905). Short response times to the command clicks are defined as the shortest third of responses to the command and are compared to the longest third (E2906). Someone might suggest that in the case of the short reaction times, participants were coincidentally already about to press the button when they heard the click. To get evidence about this, the experimenters instructed participants ‘to say the word “coincidence” if the click should ever happen just as they were about to move, or were actually performing the movement’ (E2907). Participants answered affirmatively in only 4 per cent of the trials, on average, and these trials were excluded (E2907).
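The threshold-crossing idea, and the interruption prediction just described, can be illustrated with a toy simulation. The Python sketch below is not Schurger et al.’s actual model or parameters – the drift, leak, noise and threshold values are invented purely for illustration – but it implements the same general scheme: a leaky accumulator driven by random noise, with the ‘neural decision’ identified with the moment the noisy activity crosses a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def trajectory(drift=0.1, leak=0.5, noise=0.1, dt=0.001, t_max=5.0):
    """Euler-Maruyama simulation of a leaky stochastic accumulator:
    dx = (drift - leak * x) dt + noise dW. Parameter values are invented."""
    n = int(t_max / dt)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = (x[i - 1] + (drift - leak * x[i - 1]) * dt
                + noise * np.sqrt(dt) * rng.standard_normal())
    return x

dt, threshold = 0.001, 0.3
levels, rts = [], []              # accumulator level at the 'click', and RT
for _ in range(300):
    x = trajectory(dt=dt)
    click = rng.integers(1000, 3000)        # interruption at a random step
    crossings = np.nonzero(x[click:] >= threshold)[0]
    if crossings.size:                      # 'move' once threshold crossed
        levels.append(x[click])
        rts.append(crossings[0] * dt)

levels, rts = np.array(levels), np.array(rts)
order = np.argsort(rts)
third = len(rts) // 3
fast = levels[order[:third]].mean()   # mean level on fastest third of trials
slow = levels[order[-third:]].mean()  # mean level on slowest third of trials
print(f"mean level at click, fast trials: {fast:.3f}")
print(f"mean level at click, slow trials: {slow:.3f}")
```

On this toy version, the fastest responses to the randomly timed ‘click’ come disproportionately from trials in which the accumulation already happened to be near the threshold when the click arrived, mirroring the prediction that Schurger et al. confirmed.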
Particularly in the case of the study under discussion now, readers unfamiliar with Libet-style experiments may benefit from a short description of my own experience as a participant in such an experiment (see Mele 2009, 34–6). I had just three things to do: watch a Libet clock with a view to keeping track of when I first became aware of something in the ballpark of a proximal urge, decision or intention to flex; flex whenever I felt like it (many times over the course of the experiment); and report, after each flex, where I believed the hand was on the clock at the moment of first awareness. (I reported this belief by moving a cursor to a point on the clock. The clock was very fast; it made a complete revolution in about 2.5 seconds.) Because I did not experience any proximal urges, decisions or intentions to flex, I hit on the strategy of saying ‘now!’ silently to myself just before beginning to flex. This is the mental event that I tried to keep track of with the assistance of the clock. I thought of the ‘now!’ as shorthand for the imperative ‘flex now!’ – something that may be understood as an expression of a proximal decision to flex. Why did I say ‘now!’ exactly when I did? On any given trial, I had before me a string of equally good moments for a ‘now!’-saying, and I arbitrarily picked one of the moments.2 But what led me to pick the moment I picked? The answer offered by Schurger et al. is that random noise crossed a decision threshold then. And they locate the time of the crossing very close to the onset of muscle activity – about 100 milliseconds before it (E2909; E2912). They write: ‘The reason we do not experience the urge to move as having happened earlier than about 200 milliseconds before movement onset [referring to Libet’s participants’ reported W time] is simply because, at that time, the neural decision to move (crossing the decision threshold) has not yet been made’ (E2910). If they are right, this is very bad news for Libet. 
His claim is that, in his experiments, decisions are made well before the average reported W time: −200 milliseconds. (In a Libet-style experiment conducted by Schurger et al., average reported W time is −150 milliseconds [E2905].) As I noted, if relevant proximal decisions are not made before W, Libet’s argument for the claim that they are made unconsciously is undercut. There is more to be said in support of Schurger et al.’s view of what the RP represents. As they report,

A gradual increase in neural activity preceding spontaneous movements appears to be a very general phenomenon, common to both vertebrates [Kornhuber and Deecke 1965, Fried, Mukamel and Kreiman 2011, Romo and Schultz 1987]
and invertebrates [Kagaya and Takahata 2010] alike. Why do both humans and crayfish [Kagaya and Takahata 2010] exhibit the same 1- to 2-s buildup of neural activity in advance of self-initiated movements? [An] interpretation of the RP as a sign of planning and preparation for movement fails to explain what specific neural operations underlie the spontaneous self-initiation of movement and why these operations are reflected in the specific exponential shape of the RP (p. E2904)
The explanation Schurger and colleagues offer of their findings features neural noise crossing a threshold for decision (also see Jo et al. 2013; Rigato, Murakami and Mainen 2015). Recall Trevena and Miller’s suggestion that pre-tone EEG in their experiment may ‘simply develop as a consequence of some ongoing attention to or involvement with a task requiring occasional spontaneous movements’ (2010, 454). If Schurger and coauthors are right, this EEG develops partly as a consequence of neural noise, ‘ongoing spontaneous fluctuations in neural activity’ (E2904); involvement with the task recruits neural noise as a kind of tie breaker among equally good options. Given the points made thus far in this section, how plausible is it that Soon et al. found decisions 7–10 seconds in advance of a button press? Partly because the decoding accuracy was only 60 per cent, it is rash to conclude that a decision was actually made at this early time (7–10 seconds before participants were conscious of a decision).3 As I observed elsewhere (Mele 2014, 201–2), it is less rash to infer that brain activity at this time made it more probable that, for example, the agent would select the button on the left than the button on the right. The brain activity may indicate that the participant is, at that point, slightly more inclined to press the former button the next time he or she presses. Rather than already having decided to press a particular button next time, the person may have a slight unconscious bias towards pressing that button. Did Fried and colleagues find early decisions? Their findings are compatible with their having detected at 700 milliseconds before reported W time an item of one of the kinds mentioned in what I called ‘the early group’: urges to (prepare to) press a key soon, brain events suitable for being relatively proximal causal contributors to such urges, motor preparation and motor imagery. A spontaneous fluctuation in neural activity may be added to the list.
If participants made proximal decisions to press a key, the findings are compatible with their having made those decisions as late as the decision time identified by Schurger and coauthors.
3 Generalizing

The second premise of DSA, the sceptical argument sketched in §1, is a generalization from the first. The premise reads as follows: So probably all decisions to act are made unconsciously. I have been arguing that premise 1 is unwarranted. But even if it is assumed that premise 1 is true, the status of premise 2 depends on how similar decisions to act made by participants in the studies I described by Libet, Soon, Fried and their colleagues are to decisions to act of other kinds. Recall that, as a participant in a Libet-style study, I arbitrarily picked a moment to begin flexing – many times. Arbitrary picking is characteristic of the studies I have described. In the studies at issue by Libet, Soon, Fried and colleagues, participants are said to have decided when to flex a wrist, when to press a key4 or which of two buttons to press. There was no reason to prefer a particular moment for beginning to flex a wrist or press a key over nearby moments and (in the study by Soon et al.) no reason to prefer one button over the other. Here we find one difference between proximal decisions allegedly made in the experiments at issue and some other decisions. In these experiments, participants select from options with respect to which they are indifferent. But in many cases of decision-making, we are far from indifferent about some of our options. In typical cases, when we make a decision about whether to accept or reject a job offer, whether or not to make a bid on a certain house and so on, our leading options differ from one another in ways that are important to us. A related difference has to do with conscious reasoning. In the experiments, conscious reasoning about what to do – for example, about whether to press the left button or the right button next or about exactly when to flex – is rendered pointless by the nature of the task.
Furthermore, in studies of the kind at issue, participants are instructed to be spontaneous – that is, not to think about what to do. However, many decisions are preceded by conscious reasoning about what to do. Elsewhere, I have suggested that such reasoning may increase the probability of conscious deciding (Mele 2013b). (For a model of conscious deciding, see Mele 2009, 40–4.) Some readers may wonder why I have not mentioned the difference between proximal and distal decisions in this section. The answer is simple. Like some proximal decisions, some distal decisions are made among options with respect to which one is indifferent and in the absence of conscious reasoning. Imagine that my task is to decide soon which of three buttons to press tomorrow when I return to the lab. I know that nothing hangs on which button I press and that
there is no point in reasoning about which button to press. I make an arbitrary distal decision. Of course, many ordinary distal decisions are not like this. In many cases of distal decision-making, we are not indifferent about leading options and we consciously reason about what to do. Some cases of proximal decision-making have these features too. One cannot reason persuasively from the alleged findings about decisions in scenarios in which, as the agents realize, they have no reason to favour any acceptable option over any other to the conclusion that the same sort of thing would be found in cases in which the agents are not indifferent about their options. Elsewhere, I have suggested that automatic tie-breaking mechanisms are at work in many ordinary cases in which we are indifferent between or among the available options (Mele 2009, 83); and it is rash to assume that what happens in situations featuring indifference is also what happens in situations in which unsettledness about what to do leads to careful, extensive, conscious reasoning about what to do. Even if some action-ties are broken for us well before we are aware of what we ‘decided’ to do, it certainly does not follow from this that we never consciously make decisions. In short, a generalization from alleged findings about the decisions allegedly made in the studies at issue by Libet, Soon, Fried and colleagues to the claim that all decisions are unconsciously made is unwarranted. DSA (the sceptical argument sketched in §1) fails on this count too.
4 Another unsuccessful sceptical argument

A companion argument to DSA features overt actions. I dub it OSA. It may be formulated as follows:

1. The overt actions studied in Libet-style experiments do not have corresponding consciously made decisions or conscious intentions among their causes.
2. So probably no overt actions have corresponding consciously made decisions or conscious intentions among their causes.
3. An overt action is a free action only if it has a corresponding consciously made decision or conscious intention among its causes.
4. So probably no overt actions are free actions.

A consciously made decision is just what it sounds like – a decision one is conscious of making when one makes it. Elsewhere, I have argued that even if our consciousness of decision-making were always to lag a bit behind decision-making, that fact would not constitute a serious obstacle to free will (Mele 2013b). When we engage in lengthy deliberation about weighty matters with a view to deciding what to do, how unsettled do we typically feel very shortly before we have the conscious experience of settling the issue – that is, of deciding to A? (Bear in mind that an experience in this sense of the word might not be veridical: you might have an experience of settling the issue now even if you unconsciously settled it 200 milliseconds ago.) Perhaps, at this late point in a process culminating in a decision to A, we often feel strongly inclined to A, feel that we are on the verge of deciding to A or something of the sort. At these times, we may believe or feel that we are nearly settled on A-ing. If we are already settled on A-ing because, a few hundred milliseconds earlier, we settled the issue by unconsciously deciding to A, this belief or feeling is a bit off the mark. Its being inaccurate is entirely compatible with our conscious reasoning’s having played an important role in producing our decision to A. And the role it played may be conducive to our having decided freely and to our freely performing the action we decided to perform (see Mele 2013b). That, as I observed, is a thesis I have defended elsewhere. In the present section I focus on another point. Return to premise 2 of OSA: So probably no overt actions have corresponding consciously made decisions or conscious intentions among their causes. This premise refers both to consciously made decisions and to conscious intentions. The latter merit attention. Even if all decisions are made unconsciously, it certainly seems that we sometimes are conscious (or aware) of our intentions. Perhaps it sometimes happens that we become conscious of an intention to A formed in an unconsciously made decision to A some time after that decision is made.
Might such an intention – and consciousness of it – play a significant role in producing a corresponding action? The question I just raised might sidetrack readers who make certain metaphysical assumptions. In my view, the existence of effective conscious intentions most certainly does not depend on the truth of substance dualism – a doctrine that includes a commitment to the idea that ‘associated with each human person, there is a thinking thing … not composed of the same kinds of stuff as … nonmental things’ (Zimmerman 2006, 115; Zimmerman describes the ‘thinking thing’ as a soul, but some substance dualists prefer to use the word ‘mind’). Conscious intentions might, for example, be physical items or supervene on physical items. Scientists normally are not metaphysicians; and they should not be expected to take a stand on metaphysical connections between mental items and physical items – for example, on whether conscious intentions supervene on physical states.5 From a physicalist neuroscientific
point of view, evidence that the physical correlates of conscious intentions are among the causes of some corresponding actions may be counted as evidence that conscious intentions are among the causes of some corresponding actions, and evidence that the physical correlates of conscious intentions are never among the causes of corresponding actions may be counted as evidence that conscious intentions are never among the causes of corresponding actions. In this connection, try to imagine a scientific discovery that the physical correlates of conscious intentions actually are (or actually are not) conscious intentions or that conscious intentions do (or do not) supervene on their physical correlates. How would the discovery be made? What would the experimental design be? As I observed in Mele 2009 (146), it is primarily philosophers who would worry about the metaphysical intricacies of the mind–body problem despite accepting the imagined proof about physical correlates, and the argumentation would be distinctly philosophical.6 Consider an intention to A together with one’s consciousness of that intention. Call that combination an intention+ to A. Might it – and not just some part or aspect of it – be among the causes of an A-ing? How strongly do data of the sort reviewed in §1 support the inference that intentions+ to A are (as wholes) never among the causes of A-ing, even if it is assumed that premise 1 of OSA is true? This is my topic now. I pay particular attention to intentions that are neither for the present nor for the near future. I call them significantly distal intentions. There is a large and growing body of work on ‘implementation intentions’ (for reviews, see Gollwitzer 1999 and Gollwitzer and Sheeran 2006). Implementation intentions, as Peter Gollwitzer conceives of them, ‘are subordinate to goal intentions and specify the when, where, and how of responses leading to goal attainment’ (1999, 494). 
They ‘serve the purpose of promoting the attainment of the goal specified in the goal intention’. In forming an implementation intention, ‘the person commits himself or herself to respond to a certain situation in a certain manner’. In one study of participants ‘who had reported strong goal intentions to perform a BSE [breast self-examination] during the next month, 100% did so if they had been induced to form additional implementation intentions’ (Gollwitzer 1999, 496, reporting on Orbell, Hodgkins and Sheeran 1997). In a control group of people who also reported strong goal intentions to do this but were not induced to form implementation intentions, only 53 per cent performed a BSE. Participants in the former group were asked to state in writing ‘where and when’ they would perform a BSE during the next month. These statements expressed implementation intentions.
Another study featured the task of ‘vigorous exercise for 20 minutes during the next week’ (Gollwitzer 1999, 496). ‘A motivational intervention that focused on increasing self-efficacy to exercise, the perceived severity of and vulnerability to coronary heart disease, and the expectation that exercise will reduce the risk of coronary heart disease raised compliance from 29% to only 39%.’ When this intervention was paired with the instruction to form relevant implementation intentions, ‘the compliance rate rose to 91%’. In a third study reviewed in Gollwitzer 1999, drug addicts who showed symptoms of withdrawal were divided into two groups. ‘One group was asked in the morning to form the goal intention to write a short curriculum vitae before 5:00 p.m. and to add implementation intentions that specified when and where they would write it’ (496). The other participants were asked ‘to form the same goal intention but with irrelevant implementation intentions (i.e., they were asked to specify when they would eat lunch and where they would sit)’. Once again, the results are striking: Although none of the people in the second group completed the task, 80 per cent of the people in the first group completed it. Numerous studies of this kind are reviewed in Gollwitzer (1999), and Gollwitzer and Paschal Sheeran report that ‘findings from 94 independent tests showed that implementation intentions had a positive effect of medium-to-large magnitude … on goal attainment’ (2006, 69). Collectively, the results provide evidence that the presence of relevant significantly distal implementation intentions markedly increases the probability that agents will execute associated distal ‘goal intentions’ in a broad range of circumstances. 
In the experimental studies that Gollwitzer reviews, participants are explicitly asked to form relevant implementation intentions, and the intentions at issue are consciously expressed (1999, 501).7 In Mele 2009, I argued that findings of the kind just described provide evidence that what I am here calling intentions+ sometimes are (as wholes) among the causes of corresponding actions (136–44).8 I will not repeat the arguments here. The main point I want to make is that one who is considering making the inference expressed in premise 2 of OSA should attend to differences between intentions of the kind that are supposedly being studied in Libet-style experiments – that is, proximal or nearly proximal intentions – and ordinary significantly distal intentions. Imagine a study that resembles a Libet-style experiment but includes no instruction to report on conscious urges or the like. At the beginning of the imagined experiment, participants are told to flex their right wrists spontaneously
a few times each minute while watching a fast clock. Afterwards they are asked whether they were often conscious of intentions, urges or decisions to flex. A no answer would not be terribly surprising. If you doubt that, try the following experiment on a friend who knows nothing about the studies at issue. Ask your friend – let us call her Ann – to flex her right wrist several times while having a conversation with you. After a few minutes, ask her how often, when she flexed, she was aware of an intention to do that right then – a proximal intention. In Libet’s studies, if participants are conscious of something like proximal intentions to flex, that consciousness may be largely an artefact of the instruction to report on such things – and unconscious intentions might have been just as effective in generating flexes. I doubt that something similar is likely to be true of conscious implementation intentions to do something days later. As I observed elsewhere, consciousness of one’s significantly distal implementation intentions around the time they are formed or acquired promotes conscious memory, at appropriate times, of agents’ intentions to perform the pertinent actions at specific places and times, which increases the probability of appropriate intentional actions (Mele 2009, 143). Two of the hypotheses tested in the BSE study I mentioned by Sheina Orbell and colleagues specifically concern memory: ‘Women who form implementation intentions will be less likely to report forgetting to perform the behavior’; and ‘Memory for timing and location of behavioral performance will mediate the effects of implementation intentions on behavior’ (Orbell, Hodgkins and Sheeran 1997, 948). Both hypotheses were confirmed by their data. 
Indeed, a remarkable finding was that of the women who were highly motivated to perform a BSE, all of those in the implementation-intention group ‘reported performing the behavior at the time and place originally specified’ (952; see 950 for a single possible exception). Imagine that these fourteen women had had only unconscious implementation intentions – that they had never been conscious of their implementation intentions to conduct a BSE at a specific time and place. That all fourteen women would succeed nonetheless in executing these significantly distal and relatively precise intentions – intentions specifying a place and time for a BSE – would be dumbfounding. The consciousness aspect of intentions+ seems to be doing important work here – even if in some other situations that aspect of an intention+ may be useful for little more than enabling a participant in an experiment to comply with instructions to report on a conscious experience of a certain kind.
5 Parting remarks

There is evidence that lowering people’s confidence in free will increases misbehaviour. In one study (Vohs and Schooler 2008), people who read a passage in which a scientist denies that free will exists cheat more often on a subsequent task than others do. In another (Baumeister, Masicampo and DeWall 2009), college students presented with denials of the existence of free will proceed to behave more aggressively than a control group: They serve larger amounts of spicy salsa to people who say they dislike spicy food, despite being told these people have to eat everything on their plates. For the reasons adduced here (among other reasons, see Mele 2009), the claim that the experimental results of the sort reviewed in §1 justify low – or lowered – confidence in the existence of free will is unwarranted. I regard that as good news.9
Notes

1 This is not real-time prediction.
2 This is not to say that every moment was equally good. I wanted to avoid lengthening my participation in the experiment unnecessarily.
3 Even if the decoding accuracy were much higher, one might reasonably wonder whether what is being detected are decisions or potential causes of subsequent decisions.
4 Fried et al. mention another study of theirs in which participants select which hand to use for the key press (2011, 553).
5 Kim (2003) is an excellent introduction to supervenience.
6 Jackson (2000) offers a useful brief critical review of various relevant philosophical positions that highlights the metaphysical nature of the debate.
7 It should not be assumed, incidentally, that all members of all of the control groups lack conscious implementation intentions. Perhaps some members of the control groups who executed their goal intentions consciously made relevant distal implementation decisions.
8 I am simplifying here. My claim in Mele (2009) is disjunctive: It is about intentions+ as wholes or their physical correlates as wholes.
9 This article draws on Mele (2009; 2012; 2013a, b; and 2014). The article was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed here are my own and do not necessarily reflect the views of the John Templeton Foundation.
Free Will and Consciousness
References

Baumeister, R., Masicampo, E. and DeWall, C. (2009). ‘Prosocial Benefits of Feeling Free: Disbelief in Free Will Increases Aggression and Reduces Helpfulness’, Personality and Social Psychology Bulletin, 35, 260–8.
Fried, I., Mukamel, R. and Kreiman, G. (2011). ‘Internally Generated Preactivation of Single Neurons in Human Medial Frontal Cortex Predicts Volition’, Neuron, 69, 548–62.
Gollwitzer, P. (1999). ‘Implementation Intentions’, American Psychologist, 54, 493–503.
Gollwitzer, P. and Sheeran, P. (2006). ‘Implementation Intentions and Goal Achievement: A Meta-Analysis of Effects and Processes’, Advances in Experimental Social Psychology, 38, 69–119.
Haggard, P., Mele, A., O’Connor, T. and Vohs, K. (2015). ‘Free Will Lexicon’, in A. Mele (ed.), Surrounding Free Will, 319–26, New York: Oxford University Press.
Haynes, J. D. (2011). ‘Beyond Libet: Long-Term Prediction of Free Choices from Neuroimaging Signals’, in W. Sinnott-Armstrong and L. Nadel (eds.), Conscious Will and Responsibility, 85–96, Oxford: Oxford University Press.
Jackson, F. (2000). ‘Psychological Explanation and Implicit Theory’, Philosophical Explorations, 3, 83–95.
Jo, H. G., Hinterberger, T., Wittmann, M., Borghardt, T. L. and Schmidt, S. (2013). ‘Spontaneous EEG Fluctuations Determine the Readiness Potential: Is Preconscious Brain Activation a Preparation Process to Move?’, Experimental Brain Research, 231, 495–500.
Kagaya, K. and Takahata, M. (2010). ‘Readiness Discharge for Spontaneous Initiation of Walking in Crayfish’, Journal of Neuroscience, 30, 1348–62.
Kim, J. (2003). ‘Supervenience, Emergence, Realization, Reduction’, in M. Loux and D. Zimmerman (eds.), Oxford Handbook of Metaphysics, 556–84, New York: Oxford University Press.
Kornhuber, H. and Deecke, L. (1965). ‘Changes in the Brain Potential in Voluntary Movements and Passive Movements in Man: Readiness Potential and Reafferent Potentials’, Pflügers Archiv für die gesamte Physiologie des Menschen und der Tiere, 284, 1–17.
Libet, B. (1985). ‘Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action’, Behavioral and Brain Sciences, 8, 529–66.
Libet, B. (2001). ‘Consciousness, Free Action and the Brain’, Journal of Consciousness Studies, 8, 59–65.
Libet, B. (2004). Mind Time, Cambridge, MA: Harvard University Press.
Maoz, U., Mudrik, L., Rivlin, R., Ross, I., Mamelak, A. and Yaffe, G. (2015). ‘On Reporting the Onset of the Intention to Move’, in A. Mele (ed.), Surrounding Free Will: Philosophy, Psychology, Neuroscience, 184–202, New York: Oxford University Press.
Mele, A. (1992). Springs of Action: Understanding Intentional Behavior, New York: Oxford University Press.
Mele, A. (2003). Motivation and Agency, New York: Oxford University Press.
Mele, A. (2009). Effective Intentions, New York: Oxford University Press.
Mele, A. (2012). ‘Consciousness in Action: Free Will, Moral Responsibility, Data, and Inferences’, in J. Larrazabal (ed.), Cognition, Reasoning, Emotion, and Action, 87–98, University of the Basque Country Press.
Mele, A. (2013a). ‘Free Will and Neuroscience’, Philosophic Exchange, 43, 1–17.
Mele, A. (2013b). ‘Unconscious Decisions and Free Will’, Philosophical Psychology, 26, 777–89.
Mele, A. (2014). ‘Free Will and Substance Dualism: The Real Scientific Threat to Free Will?’, in W. Sinnott-Armstrong (ed.), Moral Psychology: Free Will and Moral Responsibility, vol. 4, 195–207, Cambridge, MA: The MIT Press.
Orbell, S., Hodgkins, S. and Sheeran, P. (1997). ‘Implementation Intentions and the Theory of Planned Behavior’, Personality and Social Psychology Bulletin, 23, 945–54.
Rigato, J., Murakami, M. and Mainen, Z. (2015). ‘Spontaneous Decisions and Free Will: Empirical Results and Philosophical Considerations’, Cold Spring Harbor Symposia on Quantitative Biology, 79, 177–84.
Romo, R. and Schultz, W. (1987). ‘Neuronal Activity Preceding Self-initiated or Externally Timed Arm Movements in Area 6 of Monkey Cortex’, Experimental Brain Research, 67, 656–62.
Schurger, A., Sitt, J. D. and Dehaene, S. (2012). ‘An Accumulator Model for Spontaneous Neural Activity Prior to Self-Initiated Movement’, Proceedings of the National Academy of Sciences, 109 (42), E2904–13.
Soon, C. S., Brass, M., Heinze, H. J. and Haynes, J. D. (2008). ‘Unconscious Determinants of Free Decisions in the Human Brain’, Nature Neuroscience, 11, 543–5.
Trevena, J. and Miller, J. (2010). ‘Brain Preparation Before a Voluntary Action: Evidence Against Unconscious Movement Initiation’, Consciousness and Cognition, 19, 447–56.
Vohs, K. and Schooler, J. (2008). ‘The Value of Believing in Free Will: Encouraging a Belief in Determinism Increases Cheating’, Psychological Science, 19, 49–54.
Zimmerman, D. (2006). ‘Dualism in the Philosophy of Mind’, in D. Borchert (ed.), Encyclopedia of Philosophy, 2nd ed., vol. 3, 113–22, Detroit: Thomson Gale.
19
Notes Towards a Metaphysics of Mind Joseph Margolis
I confess straight off that I have no adequate theory of mind. I’m not sure I know what ‘mind’ or ‘the mind’ is. I speak fluently enough, in the ordinary way, about the mind, and I’m quite familiar with a fair run of what passes for respectable theories of what a mind is. I’m well aware, for instance, that Descartes answers the usual questions in the blithest way, affirming that mind is simply res cogitans and that res cogitans is utterly, even disjunctively and universally, unlike res extensa, which is what (speaking loosely) bodies and physical phenomena are. But then, I don’t know what ‘res’ means here – signifying ‘mind’ – if it is not a sort of substantively empty placeholder meant to service a very broadly conceived functional division: that’s to say, to sort out ‘whatever’ answers to the site of ‘whatever’ predicates may be invoked to mark all the admissible ways ‘it’ instantiates ‘cogitans’, which (per Descartes) ranges in a blunderbuss fashion over thinking, sensation, perception, feeling and the like – and which, finally, need not actually behave in any strictly disjunctive way. Occasionally – vaguely, as I’ve noticed, though with conviction – in a never-completed search for the supposed unity of mind and body, Descartes overrides his own dualism, hints at the possible ‘site’ of some otherwise unacknowledged ‘substance’ that exists as an integral self or person that possesses properties that cannot be disjunctively ascribed to either exclusionary ‘res’ but only to a suitably unified ‘mind’ or ‘self’ – as with pains felt in the body or actions voluntarily performed. Thus, Descartes says, in Principles of Philosophy:

I recognize only two ultimate classes of things: first, intellectual or thinking things, i.e. those which pertain to mind or thinking substance; and secondly, material things, i.e. those which pertain to extended substance or body … .
But we also experience within ourselves certain other things which must not be referred either to the mind alone or to the body alone. These arise … from the close and intimate union of our mind with the body [including] the appetites …, the emotions …, and sensations.1
In retrospect, Descartes seems to have anticipated an essential element of Kant’s conception of the ‘I think’ – and, because of that, an essential element in Heidegger’s more interesting (but also metaphysically vaguer) account of Dasein. In fact, Descartes’s Cogito is more robust than Kant’s Ich denke: Descartes’s formulation is much too confident, epistemologically and metaphysically; Kant’s formulation is ultimately too schematic to support the apperception thesis. We cannot fail to notice that, inasmuch as the self or person is taken, by Descartes, to be a nominally unified ‘substance’ of some kind, selves cannot be identified or described solely (or, possibly, at all) in terms appropriate to res cogitans and res extensa; and, indeed, whatever belongs to ‘it’ (self or person), as such, ineluctably overcomes the affirmed disjunction (whether read substantively or predicatively). With the best will in the world, therefore, it seems impossible to avoid concluding that, however clever Descartes may have been, he never rightly ventured beyond acknowledging that we tend to speak disjunctively of minds and bodies, though we also speak of selves and persons as unified entities that possess properties that cannot be adequately sorted as exclusively ‘mental’ or ‘physical’. Neither idiom seems perspicuous. As far as I know, Descartes never satisfactorily explains how selves or persons are conceptually possible. In that sense, I take him to have failed: he exits, so to say, by the same door he enters. The least problem that needs to be resolved (for the sake of mere consistency) concerns the ‘unity of mind and body’ that we and Descartes usually have in mind when we speak of actual selves or persons. Still, we have hardly surpassed Descartes here: indeed, everyone seems to be a Cartesian dualist most of the time. 
My own suggestion about how to meet Descartes’s question – which still falls short of answering the question itself – is simply to affirm that ‘minds’ must be indissolubly incarnate wherever they ‘appear’ to be present: hence, then, when functionally defined, they are likely not to require a completely different ‘res’ from whatever sustains physical things, open, apparently, to the emergent novelty of unforeseen sorts of attributes (as in the invention and mastery of language), yielding, through the entire continuum of its evolution, realizations of both reducible and irreducible properties. (For example, I take the explanation of the invention and mastery of language to be impossible without invoking an advanced phase of the evolution of mind; but if the distinction between sleep and wakefulness is also inexplicable without admitting mind, then it’s entirely possible that sleep may yield a theoretical identity in a sense akin to that in which a stroke of lightning is said to be identical with a certain array of ionized particles, without its also being the case that, as with the most advanced manifestations of mind, ‘mind’ or ‘mindedness’ is always reducible along materialist lines.)
Notes Towards a Metaphysics of Mind
For related reasons, I cannot shake the impression that Kant’s ‘Transcendental Ich’ incorporates a function akin to that of the disjunctive Ego of Descartes’s ‘Cogito’, just where Kant assigns the Ego the transcendental function of unifying sensory perception or experience (or thought) in ‘the manifold of appearances’ (or ‘representations’) of experienced things, as mine, rather than thine (or another self’s):

The I think [Kant says in the first Critique] must be able to accompany all my representations, for otherwise something would be represented in me that could not be thought at all, which is as much as to say that the representation would either be impossible or else at least would be nothing for me. This representation that can be given prior to all thinking is called intuition. Thus all manifold of intuition has a necessary relation to the I think in the human subject in which this manifold is to be encountered. But this representation is an act of spontaneity, i.e., it cannot be regarded as belonging to sensibility. I call it the pure apperception, in order to distinguish it from the empirical one, or also the original apperception, since it is that self-consciousness which because it produces the representation I think, which must accompany all others and which in all consciousness is one and the same, cannot be accompanied by any further representation. I also call its unity the transcendental unity of self-consciousness in order to designate the possibility of a priori cognition from it.2
But is it res cogitans again or something else? The first Critique advances no transcendental answer, an omission that haunts Kant’s final thoughts. You may indeed sense a potential quarrel here, because Kant specifically invokes the Transcendental Ich as necessarily accompanying all sensory ‘intuitions’ or ‘representations’. Whereas, for one thing, if the Ich or Ego is Kant’s rational, cognitively and discursively apt agent (which, of course, it is), then it cannot be called into play in the seeming perception of infants and languageless animals – and then all cognition and intelligence would be denied such creatures (an intolerable paradox, post-Darwin, since the issue is an empirical one: in my opinion, a finding utterly contrary to the obvious facts); or else there must be some lesser subjective ‘presence’ (among animals and infants, and if so then very possibly also among fully cognizant selves), a possibility neither Descartes nor Kant explores, though many contemporary theorists insist on it. For instance, Dan Zahavi cites a particularly telling remark of Heidegger’s, from the Grundprobleme der Phänomenologie (1919/20), which applies to the human Dasein and confronts and contests the view I’ve cited from the first Critique. Independently, I should add, John McDowell offers a strenuous gloss on the Kantian passage, spelling out what (he presumes) Kant says or should
have said more clearly, regarding the ‘I think’. We shall come to this important quarrel shortly. But, for the moment, it will help to have Heidegger’s forthright statement before us as we proceed:

Dasein, as existence, [Heidegger says] is there for itself, even when the ego [Kant’s Transcendental Ich, I think we may say, as opposed to some minimal presence of the ‘self’ (distinct from the ‘Ich’), as Zahavi suggests, that is in some way ‘self-aware’ and aware of what it perceives but does not yet judge discursively, if I may put the thesis this way,] does not expressly direct itself to itself in the manner of its own peculiar turning around and turning back, which in phenomenology is called inner perception as contrasted with outer. The self is therefore the Dasein itself without reflection and without inner perception before all reflection. Reflection, in the sense of a turning back, is only a mode of self-apprehension, but not the mode of primary self-disclosure.3
This is an extremely clever formulation, which, I’m inclined to think, may actually be compatible with the passage I’ve cited from Kant, as well as with Wilfrid Sellars’s independent gloss on the Kantian passage, which is itself the principal target of McDowell’s unyielding gloss on the same passage (in McDowell’s Woodbridge Lectures, particularly the second), which, then, McDowell applies (also) against Hubert Dreyfus’s problematic account of the mental – which was in turn initially directed against McDowell’s account, in Mind and World. McDowell’s response to Dreyfus is based (as I say) on his reading of the Kantian text, both texts having been reviewed by Zahavi, who challenges each effectively, somewhat in accord with the thesis of Heidegger’s theme (already cited), which is itself much too loosely formulated – deficiently in its own way – in Zahavi’s endorsement.4 I shall take the argument a step further, in attempting to recover the sense of the significance of the intelligence and cognitive powers of human infants and languageless animals – against the ‘rationalists’ (pertinently, here, Kant and McDowell) and against those who afford no accommodation of the distinct abilities of animals and infants or who tend to conflate or confuse ‘mind’ (or ‘mindedness’) with reason, discursivity and advanced cognitive ability (certainly Dreyfus). I regard plausible answers to all the puzzles posed by this line of reasoning as essential to any convincing theory of mind in our time. Beyond all that, there are questions to be answered about the ontology of mind, especially since, post-Kant, epistemology and metaphysics have proved to be inseparable, even where we abandon transcendentalism. The gain I glimpse – through all the muddle – assures us that, as with Kant and the contemporary and near-contemporary figures mentioned, ontology (touching again on Descartes)
has been more or less shelved, in favour of treating mind and self functionally: in effect, in terms of giving an account of our powers of cognition and understanding – more or less severed from metaphysics. I favour the functional approach, but I’m also persuaded that the two issues can and should be fully integrated and that dualism is still the original threat and source of bafflement in the philosophy of mind. Hence, when I advance the suggestion that mind must be (indissolubly) incarnate at every level of its being manifested at all, I mean to advance a complex ontology and a considerable metaphysical economy that I’ve supported frequently enough elsewhere. But I defend it as an instrumental device meant to forestall absurdity – not at all transcendentally affirmed. I reject Descartes’s dualism out of hand; I maximize the space of speculation (about mind’s nature) within which the biological sources of mindedness may be discovered – though never treated as sufficient, if separated from the immense contribution of inventing and mastering language; and I disencumber the analysis of the distinctive cognitive and agentive powers that characterize what, at best, we understand by the unique capacities of enlanguaged human selves or persons. Here, I take it to be an empirical judgement that, with regard to the evolution of the full powers of the human self, an exclusively (even primarily) biological model of evolution (Darwinian or post-Darwinian) would be hopelessly defective.
The greatest powers of the human mind are themselves artefacts of enlanguaged Bildung: ‘mind’ must first emerge within primal biological sources (surely pre-consciously); but enlanguaged mind must be a hybrid achievement of the entwinement of biological forces and novel powers specifically enabled by the encultured mastery of language – which, in my view, accounts for the mature formation of the reflexive, reportorial, analytic, intentional and related agentive powers of humans (effectively, persons). There you have the reason for distinguishing with care between the evolution of the human primate and the evolution of the human person. There would be no point to a cognate account of marsupial evolution that failed to distinguish between, say, the first birth of a kangaroo and the second maturation of the offspring of the first birth. It’s an even deeper puzzle that must be addressed in the case of human evolution: the outcome of a cluster of considerations that Darwin failed to grasp – as the ‘philosophical anthropologists’ have demonstrated. Evolutionarily, the mature human being is, ontologically, a hybrid artefact – but not in any sense dualistically formed!5 I see no other way to proceed with the theory of mind, except functionally, along a continuum involving some as yet unknown biological source of minimal mind and some effective co-functioning
of biological processes and societal Bildung that yields the invention and mastery of language, that, in turn, transforms primates into persons and entails all the important, novel, enlanguaged powers that belong, more or less exclusively, to persons. The goal here is coherence and plausibility – hardly unique and necessary truth. If I understand correctly the passage cited from Kant, then, by uniting the sub-arguments of §§16, 17 and 20 (from the ‘Transcendental Deduction of the Pure Concepts of the Understanding’), Kant intends to distinguish particular (empirical) appearings or representations from the a priori representation of the apperceptive unity of the manifold of all such representations (which, in effect, singles out the synthesis we call self-consciousness, as well as the concept of the unity of particular intuitions of objects made possible by the executive power of self-consciousness itself). In this way, Kant complexifies the function of the mind (beyond Descartes), but does not finally say what we should take mind (or the mind) to be, any more than does Descartes. I judge that we’ve made extremely little progress here regarding the ontology of mind. Kant’s and Descartes’s accounts are almost completely verbal – extraordinarily thin. Nevertheless, I’m unwilling to accept Colin McGinn’s so-called ‘mysterian’ thesis. McGinn says, in his well-known book, The Mysterious Flame:

My main theme is that consciousness is indeed a deep mystery, a phenomenon of nature on which we have virtually no theoretical grip. The reason for this mystery, I maintain, is that our intelligence is wrongly designed for understanding consciousness. Some aspects of nature are not suited to our mode of intelligence to get its teeth into, and then mystery is the result.
McGinn makes it convincingly clear that, inasmuch as babies are conscious but very likely not self-conscious (at least initially), we should not conflate the two notions: ‘Consciousness [he says] is a datum, a given, something whose existence we cannot coherently dispute.’ Nevertheless, he adds:

The bond between the mind and the brain is a deep mystery. Moreover, it is an ultimate mystery, a mystery that human intelligence will never unravel. Consciousness indubitably exists, and it is connected to the brain in some intelligible way, but the nature of this connection necessarily eludes us.6
I cannot see that McGinn makes his case at all – or that he could. I think we must distinguish between the substantive and predicative uses of ‘mind’; I’ve recommended treating the emergence and development of mind functionally;
and I add at once that anything resembling the ‘agency’ or executive unity of mind (including Kant’s ‘unity of apperception’) needs to be treated as (itself) an evolutionary continuum, running from the most minimal integrity of organismic life to the full development of a purposeful, self-conscious self. Viewed this way, there may be many different bodily processes implicated in different forms and stages of so-called ‘consciousness’. I see nothing suggesting complexities that cannot be fathomed by human intelligence. On the contrary, if my conjectures are at all reasonable, the explanation is likely to require some effective compartmentalization of different processes that appear (misleadingly) to be more or less uniform. Apart from all this, it must be said that McGinn’s language is unaccountably dualistic, bent on explaining what appears to be an external relationship between two sorts of res. The language of ‘bond’, or being ‘locked to’, being ‘rooted in’, being ‘connected to’ (all McGinn’s phrasing), is excessively disjunctive and altogether too literal-minded. Descartes sensibly retreats from a similar idiom (at least at times). Post-Darwin, the required explanation cannot take the form of a mere union of originally separate kinds of powers or substances, which may well have led McGinn to his ‘mysterian’ (read: ‘Cartesian’) verdict. Mind must be functional, if there is no evidence of a substantive cogitans. The mistake is likely to be due to conflating ‘mind’ as ‘mindedness’ and ‘a mind’ as a self or person. But there is an important puzzle buried here: that of the concept of the evolution of a ‘self’ from a mere and minimal grammatical fiction to the artefactual actuality that is the principal agent of enlanguaged culture.
The corrective argument begins, it seems, with a functional – that’s to say, a predicative – distinction assignable to an organism; the innovation needed concerns a reasonable provision accounting for the incipient and gathering presence of the self or person (a substantive provision beyond the predicative). Kant fails us here: indeed, the apperceptive unity of perception (in the first Critique) seems to be more predicative than substantive; on my account, the evolution of a substantial self is, post-Darwin, most plausibly associated with the mastery of language. But if the explanation must take an evolutionary form, then it’s hard to see that it could possibly be a priori. Now, I think what McGinn says here can be turned to advantage, possibly to more advantage than McGinn concedes – in fact, against himself. McGinn slips too easily between speaking of ‘consciousness’ and of ‘mind’ (or ‘the mind’) committed (at least sometimes) to equating the two. If we think of consciousness predicatively, it will seem very reasonable to think of it as describable in essentially phenomenal terms (whatever physical mechanism it may be said
to depend on, as in feeling or having pain); and, if so, then it’s reasonable as well to speak of consciousness in phenomenological terms, if consciousness evolves into self-consciousness – in the sense in which new-born babies and animals do not and cannot (yet) ‘apply mental concepts to themselves’7; and then it proves quite easy to regard ‘minds’ substantively (even in Darwinian terms), at least initially, as the ‘nominal’ or ‘grammatical’ or ‘heuristic’ (or ‘fictive’) sites of certain apparent functionalities of consciousness and self-consciousness – possibly of pre-conscious functioning as well (as with J.J. Gibson) or in a way continuous with incipient consciousness: say, at the level of insects or ‘below’; and then, by analogy, even at the level of inanimate robots performing feats of ‘thought’ and ‘action’ of some breath-taking kind. ‘Mind’ ranges of course, as with Descartes’s speaking of ‘thought’ and ‘thinking’, over whatever is said to be mental in the way of consciousness and self-consciousness. Elsewhere, as I say, Descartes treats ‘mind’ as substantive rather than merely predicative, as marking the ‘site’ of mental functioning, hence as equivalent to or incorporating graded sites or sub-sites (‘souls’, ‘selves’, ‘persons’, say, that may be ascribed any of the mental functionalities intended by our predicative terms). Apparently, then, also, mind, or minds, affirmed as the equivalent of ‘unified’ selves, the would-be site of the highest ‘internal’ predicables, cannot be disjunctively sorted in the Cartesian way. In this careful sense, we begin to see how to accommodate Kant’s executive Ich (de-transcendentalized) as the artefactually natural presence of selves or persons. There’s no reason at all to think that, among such possibilities, ‘consciousness’ must have a uniquely biological source – or implicate (contrary to Kant) a continuous (or unitary) presence of the ‘Ich’ over an entire lifetime. 
(It is, after all, an important functional power of an integral organism: it should be compatible, therefore, with whatever ontologies range satisfactorily over the entire run of living creatures.) Is that enough? In fact, the admission of animal cognition and the cognitive ability of languageless infants may entail the inadequacy or falsity of Kant’s apperceptive thesis. There’s also nothing here to support McGinn’s ‘mysterian’ claim. In speaking of ourselves (often, also, ‘minds’, as in Descartes’s relaxed idiom) as the ‘sites’ of mental life, I mean to suggest that such a characterization may be as ‘nominal’ as one wishes or as thin as the empirical evidence can bear. There is some minimal sense of unified functioning that goes with the bare notion of a live ‘organism’: the ‘mind’ of primitive organisms may, then, signify no more than the co-functioning of the distinct functionalities of a preconscious sort that has no determinate locus of the kind usually associated with the full functioning of a creature said to possess an advanced brain and ‘mind’.
Possibly: something akin to the homeostatic functionality of organic life or to the graded levels of evolutionary unity akin (among animals) to the functional unity of apperception Kant ascribes to persons. I myself treat the functionality of persons or selves as quite distinct from the functioning of Homo sapiens, the human primate: selves or persons achieve (I say) the utmost regarding the powers of self-consciousness that we know of. Some would rather say that the human being is the only creature that actually achieves any measure of self-consciousness at all. I’m quite unsure of any such certainty: at least when viewed as a consequence of the human primate’s having invented and mastered true language – by means of which we acquire, artefactually, the ability (or the enhanced ability) to report, reflect on and exploit our awareness of our mental states. I hold, in effect, that the mastery of language is the same process by which the human primate transforms itself into the reflective creature we know as self or person. But I think we may have to concede the incipience of self-consciousness among the higher mammals – apes, cetaceans, elephants – or draw a finer distinction. Here, then, speaking of the self or person as the nominal ‘site’ of mental functioning, I mean that the onset of initial linguistic fluency signifies (at first) little more than the abstract grammatical relevance of such an attribution; by constant iteration and the growing importance of a newly enabled reflexive grasp of the significance of self-reference, the abstract, perhaps even fictive attribution actually flowers (experientially or existentially) into a substantive (artefactually evolved) awareness of that same site and functionality. Thus we transform ourselves, ‘second-naturedly’, into actual selves, maturing into reflective agents by merely pondering our commitments and taking responsibility for what we actually do!
Similarly, we speak in French and English because we learn to construe the ordered sounds we learn to utter, as words conveying what we mean to say and what others take to be meaningful in just that way. Here, private or personal intention and public meaning are aspects of one and the same achievement: the social creation of a shared language and the artefactual emergence of the human person as a cultural transform of the human primate. There is, ultimately, nothing to probe ‘beneath language’ that can explicate linguistic meaning itself that is not already linguistically informed; and there is nothing to explain the rationality of the self’s behaviour that is not already legible in terms of the recognizable exemplars of the thought and behaviour of apt selves. The entire artefactual human world is never more or less than circularly understood – sui generis – even as it evolves. The import of the entire societal system and conversation of mankind is, finally, the reflexive yield of cultural
immersion itself: there is no reductive exit from the effective functionalities of personal life. If you support this notion (and confession), you see at once the impossibility of any entirely materialist or non-Intentionalist explanation of the world of persons: selves are irreducibly emergent in a self-invented way that makes no sense when separated from their artefactual Umwelt. The human sciences explore the world in a ‘direction’ that is the reverse of the directive that guides or governs the physical sciences themselves: the former are primarily interpretive, though not without causal linkages; whereas the latter are essentially causally ordered, though not without interpretive qualifications centred on human goals and interests and deeper conjectures about the unity of the whole of nature. The upshot is that the basic unit of science in the human world cannot be merely neurophysiological, for that would utterly lose the ‘space’ of enlanguaged mind (hence, of enlanguaged meaning and of whatever presupposes a command of language). The basic unit of the human sciences must be the indissolubly complex paradigm of (say) mind incarnate in its adequate organic and Intentional setting – the brain perhaps or something already more inclusive – possibly nothing less than the entire known expanse of human society and history capable of collecting all our archives of meaning and intention; so that we might map (at the emergent level) the significant order of all that we produce as effective persons and all that remains as the enduring actuality of physical nature.
It would be entirely consistent with the views of theorists like Noam Chomsky and John Searle (for cognate reasons, though by way of very different theories) to christen ‘mind’ or ‘the mind’ as itself ‘material’ (as in fact they do), in a sense in which the modern equivalent of res extensa (the materiae of the physical or natural sciences) effectively absorbs whatever (in Descartes) would have been assigned res cogitans (including what Descartes signals requires a more complex – still unknown – order of being). The trouble is, such an extension would be purely verbal, since we have no inkling of how to account for enlanguaged ‘mindedness’ in terms of the regularities of the familiar physical world deprived of its semiotic or Intentional (that is, enlanguaged, culturally informed, significative, mentally manifested) order. In particular, the ‘higher’ reaches of self-consciousness appear to be irreducible in physicalist terms. Language makes this ineluctably obvious. There’s the irony of Noam Chomsky’s conviction: Chomsky originally believed he could produce a formal model of universal grammar that would not depend on the meaningful regularities of natural languages. (He’s now defined the source of the conjectured successor of UG as explicitly genetic and defined
its causal expression in neurophysiological terms.8) Searle, as far as I know, has nothing to offer of comparable depth: Searle mentions only the abstract formula; he cannot demonstrate its descriptive and explanatory usefulness. (At present, an inclusive materialism cannot be more than a promissory note.) Furthermore, if the idea’s sound – that’s to say, the idea that ‘sites’ of the apperceptive sort may be posited, first, as no more than a grammatical fiction fitted to the individuated mastery of language – then it becomes entirely reasonable to suppose that that fiction itself may become existentially, ontologically, reflexively, experientially, maturationally actual – ineluctably thus freighted – as infants progress towards linguistic mastery. You see how the evolving experience of enlanguaged and reflexive life penetrates and transforms our heuristic fictions. We spontaneously reinvent ourselves as effective selves. But then, in accord with the same evidence, a lesser (well, an animal) analogue of the emergent self (or ‘graduated’ apperceptive ‘site’, ‘self-aware’ but not yet ‘aware of self’ – in Dan Zahavi’s sense) may be posited for the transitional life of human infants, who acquire their first language from a vantage of languageless intelligence; accordingly, the model begins to accommodate as well the centredness of intelligent animal life (now to include pre-linguistic Homo). This points to the flexibility and importance of the idea of animal minds or consciousness enabled by perceptual and experiential sensibility – or incipiently enlanguaged, as with human infants – by a lesser sort of proto-reflexive awareness on the part of cats and dogs and horses and elephants and primates and cetaceans and the like. This is what I had in mind in venturing a step beyond Heidegger and (at least as far as Zahavi’s ‘Mindedness’ paper goes) a step beyond Zahavi as well. My proposal is meant to displace both Darwin and Kant by the same strategy.
I find Zahavi’s conjecture about Heidegger’s thesis congenial, though I confess the artefactualist account of the culturally emergent person or self seems more flexible, more explicit, more pertinent and persuasive than the amorphous notion of Dasein. There seems to be no ‘developmental’ account of Dasein that might be brought to bear on the human infant’s perceptual and conceptual powers. But then I concede that to adopt something like Heidegger’s thesis (about which more shortly) rather than, say, McDowell’s extreme Kantian-like theory of perception and apperception (in effect, mindedness) is already a somewhat tolerant move in the direction of what I’m calling the lesser ‘site’ of animal cognition and intelligence. If animal perception may be guided by the presence of a lesser functionality of a ‘near’ or quasi-apperceptive sort (definitely short of a true ‘self’), then, conjecturing, as I do, that even the learning of enlanguaged (discursive)
concepts probably requires the human infant’s possessing ‘perceptual concepts’ (perceptually grounded concepts), in the sense already favoured by Aristotle (in De Anima) or in accord with something like J.J. Gibson’s ‘affordances’ (applicable to animals as well as humans) – in effect, the discernment of perceptual structures apt for survival within an animal’s Umwelt9 – it might otherwise prove impossible to account for the rapidity and precision with which infants acquire a home language and border collies learn to herd sheep. The argument goes decidedly against both Dreyfus and McDowell, more or less equally, though for very different reasons. (Both strategies – favouring ‘mindlessness’ and ‘mindedness’ – seem too shallow for the argument that’s needed.) Here, for the record, is one of Zahavi’s summary remarks about his own qualification of Heidegger’s doctrine, restricted to the competence of human selves:

every worldly experiencing [Zahavi says] involves a certain component of self-acquaintance and self-familiarity, every experience is characterized by the fact that ‘I am always somehow acquainted with myself’. … [E]xperiences do not have intrinsic and nonintentional qualities of their own; rather experienced qualities, the way things phenomenally seem to be, are – all of them – properties the experiences represent things as having …; they are strictly and exclusively world-presenting.10
This bears directly on the well-known ‘debate’ between Dreyfus and McDowell, which leads to the problematic standing of the claims advanced by each. (It also suggests the plausibility of a lesser animal analogue.) I’ll keep the quarrel in view in closing my review of the puzzles I’ve favoured.
*

I’ve now brought the argument against the mysterians to a plausible stalemate; but I have not yet said enough about what ‘mind’ or ‘a mind’ is, in the sense meant to meet the most important part of the complaint lodged against Descartes and Kant. McGinn confronts us with the task of meeting the mysterians’ ‘challenge’. I find that the challenge has been drawn down to a very small, perfectly ordinary empirical search. We are in fact close to rounding out the argument required on all counts. For instance, there’s no compelling reason to think that animal analogues of the ‘unity of apperception’ need resemble the inordinately rationalist theme favoured by Kant (in the first Critique) or in McDowell’s glossing Kant (in the second Woodbridge lecture): effectively, each rules out
animal cognition and intelligence without actually denying animals (and human infants) any of the distinctive features of their mental life – say, on the more than doubtful force of a priori claims such as the following: ‘Rationality entails discursivity, and cognition entails rationality.’ I suggest, instead, an empiric conjecture along post-Darwinian lines: ‘Admitting animal intelligence and cognition [I say] entails admitting some form of species-specific, unlanguaged rationality and conceptual competence.’ (There’s no a priori intrusion here at all. The conjecture’s no more than empirical or common sense.) There’s also no reason to think that the pertinent (internal) ‘site’ of the unity of apperception and of the unity of perceived objects within the unity of apperception (effectively, the internal placement of ‘the mind’ said to be functioning in these specific ways) must be localized in the same way (or at all) in (say) the space of the physical brain: mind’s ‘functionality’ may be experientially or agentively unified in public (first-person) avowals and commitments, without our having any idea of its internal (mental) site-specific constancy. We experience such unities agentively – in the world, not in the brain. That alone signals how best to answer McGinn, how to construe the ontology of mind, how to interpret the continuum of the site-specific treatment of ‘the mind’, how to salvage the intelligence of human infants and languageless animals, how to understand ‘consciousness’ and ‘self-consciousness’ in emergent and embodied ways, and how to stalemate or defeat the excessive rationalism (‘mindedness’) of McDowell and the excessive ‘mindlessness’ of Dreyfus. 
The short answer to McGinn’s challenge is that he’s been excessively distracted by Descartes’s penchant for dualism: he’s simplified too quickly the sheer scatter and heterogeneity of whatever, physically, counts as the incarnation of ‘consciousness’ and ‘mindedness’: there seems to be no necessary (no legible) parallelism between the would-be ‘structures’ of the ‘content’ of the mind’s activity and (say) the neurophysiological patterns of brain activity (said to enable the other). It’s more than plausible to suggest that we model mind linguistically (hence, publicly), because the invention of language both creates and perfects the mind’s most powerful and most distinctive executive functionalities of thought and interior resolve, and provides the adequate social means by which to access the mind’s continuing productions reliably. We analyse and share the ‘content’ of our inner lives ‘intentionally’ (in terms of ‘aboutness’), as well as in terms of interpretable meanings and agentive interests (what I name ‘Intentional’), which, normally, favours a propositionalized rendering (or, perhaps less ambitiously, a proposition-like model) of (or for) an endless variety of looser construals – possibly, of a ‘lingual’ more than of a propositional sort. (I mean, here, by ‘lingual’,
no more than that we also express or manifest mental ‘content’ publicly by nonverbal means that nevertheless presuppose linguistic mastery – as in painting and ballet, for instance.11) But if we think of the ‘self’ (at least initially) as no more than a ‘grammatical fiction’ in an infant’s early lessons in language – ulteriorly yielding a sense of public accountability and the like – then, of course, we should ‘locate’ the self’s site in the public space of purposive action and effectivity; and, thus, we would have little need to locate the ‘interior [mental] site’ of the self within the boundaries of the organism (or brain) it’s said to ‘pilot’.12 The public placement of the self is essentially a practical matter, resolved within the space of societal life; the placement of ‘mindedness’ as a predicable function (whether among persons or animals) may appear to be an oxymoron (in spite of being theoretically important). One sees readily enough why Kant’s notion of the ‘unity of apperception’ never quite escapes the sense of its initially fictive nature, despite its actual, continually evolving function. It is, after all, primarily predicative, in the sense McGinn explores. Also, of course, its apparent origin as a grammatical fiction completely subverts McDowell’s ‘Kantianism’. The incipience of the ‘self-aware’ that is not the equivalent of the ‘awareness of self’ (in Zahavi’s sense), possibly even an animal analogue of ‘self-consciousness’, may have an entirely different biological source among languageless animals. (We must at all costs avoid denying animal cognition.) My own conjecture has it that the concept of the self (we favour) begins with the dawning of language; whereas the centredness of perception, feeling, purpose, commitment and the like (‘mindedness’, if you wish) already begins to converge experientially, in much the way Heidegger has sketched.
It seems to me that the latter sort of sensibility must be common ground for mature humans, infants and intelligent animals alike. But then, it more than hints at the likelihood of a ‘lesser’ analogue of intelligence and cognition and conceptual discrimination (sans language) – among infants and intelligent animals – which rationalists regularly decry. On the one hand, we begin to see how the linguistic fiction actually becomes naturalistically robust, palpable, as we draw more and more effectively on enlanguaged memory and resolve and the processing of experiential data; on the other hand, we realize that the mysterians’ question is hardly more than a narrowly specialized empirical issue (among a myriad of such issues) that rarely affects the general drift of our philosophical analysis of mind manifest in the practical world. For the most part, we proceed practically or epistemologically, whereas McGinn’s puzzle is largely occupied with the biology of (no more than) the threshold of consciousness – if that is indeed
Notes Towards a Metaphysics of Mind
the actual theme of his mystery. You glimpse its near-vacuity in the following sample of McGinn’s characteristic treatment:

Having a brain is what makes it possible to have a mental life. The brain is ‘the seat of consciousness’. But it is not merely that the mind sits on the brain, like a monarch on a throne. … It is more that the brain is what enables the mind to exist at all; is more of a womb than a seat. The machinery of the brain allows the mind to work as it does and to have the character it does.13
This seems to be trivially – even vacuously – true. McGinn goes on from there to the charge that this constitutes ‘a mystery that human intelligence will never unravel’ (which I’ve already cited). But surely, it’s now clear that McGinn himself – rather like Descartes, centuries ago – has simply put the empirical puzzle before us without any helpful clues to ponder: it’s really no more than an unsecured reductive guess about the smallest part of the puzzle. What, after all, is the point of invoking the brain in the way McGinn does, if it’s true that to ‘have a brain’ without the fluent use of language is not yet to ‘have a mental life’ of the right sort – one that yields a functional self? McGinn merely acknowledges the bafflement of Descartes’s empirical claim: that is, the utter disjunction of two entirely different vocabularies (res cogitans and res extensa) that cannot be regularly coordinated in any systematically explanatory way suited to the work of the natural or physical sciences or the patent integrity or holism of human life. There’s a double gain to be claimed here, nevertheless. If we treat the problem of consciousness (and self-consciousness) empirically, in accord with the general narrative of the sequence of the evolutionary emergence of the principal animal species, then it makes perfect sense to think of the onset of ‘bare’ consciousness as the upshot of certain emergent phases of neuronal (and biochemical) functioning essentially due to biological innovations. But if that is all that concerns us, then the consciousness problem will surely dwindle philosophically, though its solution would still have captured an extraordinary natural marvel. We would then enlarge our conception of the physical world – to include the mental as a form of the physical; the boundaries of what we now regard as first-person avowals of mental life would doubtless acquire a not unwelcome air of benign paradox.
But if so, the consciousness problem (empirically construed) would never be as intractable or as important as the mysterians pretend it is. It would probably never apply in variegated ways to the innovations of actual ‘mental’ intervention. In fact, its limited concern might well prod us to wonder whether the diverse incarnations of consciousness were not themselves the decisive clue to the ontology of mind and the resolution
of other matters that the theory of persons and cultures is likely to favour or require. I daresay we might find ourselves confronted by the theoretical prospects of a perfectly plausible metaphysics of mind that, in accord with the salient phenomena of evolutionary emergence, would feature the irreducible and sui generis novelty of the hybrid powers of advanced mindedness (as with the actual functionality of selves or persons). Attention to the latter might then extend to the analysis of actions, language, enlanguaged thought, artworks and manufactured things, histories, institutions, conditions of intelligibility fitted to such an emergent world – the description of which need hardly be incompatible with a causal model of the ‘physical world’; it would still rely, very probably, on a vocabulary of the incarnated mental – irreducibly emergent for the most part (in different ways) with respect to brain, organismic life and the environing world, but a vocabulary that insisted on the primacy of conscious and self-conscious discrimination and related modes of functioning (inference, judgement, commitment, memory and the like). Accordingly, the standard disjunction of the natural and human sciences would begin to dwindle, might even disappear; the primacy of claims of either sort of science would lose its apparent authority vis-à-vis the other; the conceptual uniformity of the purely physical world might no longer be maintained; and the realism of the ‘intentional’ world, loosely experienced as ‘aboutness’ (in Brentano’s sense), as well as of the ‘Intentional’ world (meant, again loosely, to collect the meaningful or semiotic or significant or significative or inherently interpretable sense already proposed), would gain standing. Very likely, it would include all linguistic and ‘lingual’ contexts and, indeed, the whole of the sublinguistic (cognitive: expressive and communicative) animal world.
We might still speak of the unity of science, but the new conception would be irreconcilable with, say, Carl Hempel’s well-known conception (that is, methodologically construed reductionism).14 Let me step back a little; I’ve moved a bit too quickly. It’s entirely possible that, in noting his would-be intractable mystery, McGinn does not mean to feature the empirical onset of consciousness, but, rather, the impossibility of paraphrasing the predicative language of (something akin to) cogitans in terms of the idiom of (something akin to) extensa. On Descartes’s view, the two res are radically incommensurable though (somehow) not impossible to conjoin in the way of a viable organism; in contemporary terms, there are no fine-grained paraphrastic options to be had – although it also happens that there are predicates that have a foot in both vocabularies (and depend on the union of mind and body: pain,
pre-eminently). I must make some further distinctions here, therefore, if I am to strengthen the novelty of the metaphysics of the mental. For one thing, when I say that whatever is emergent in the mental or conscious way must be physically incarnate(d), I mean that the ‘mental’ is more complex than the ‘physical’, in that, qua incarnate, it is already rightly construed (however loosely) as an emergent phenomenon partly described in physicalist and partly in mentalistic terms – not usually in a way that would concede a paraphrastic identity or dualism between any such paired formulations. Thus far at least, in the practice of the sciences, we find ourselves unable to match the determinacy of any would-be dualism of ‘expressive’, ‘intentional’, ‘Intentional’, ‘interpretable’ content of mental episodes with (for example) neurophysiological attributions – quite apart from so-called ‘mind–body’ causal regularities between mental and physical events or between different mental episodes. (It is, in fact, very nearly impossible to sort events, disjunctively, as ‘mental’ and ‘physical’ for purposes of recording regular linkages of any empirical sort.) There can be no doubt that the mental and the physical are indissolubly linked, wherever the mental has actually evolved; hardly any such linkages can be directly or determinately captured in terms of theoretical identities or nomologically regular causal sequences, if we begin dualistically. (The entire strategy is more than awkward.) Useful linkages seem to be obliquely abstracted. That helps to specify precisely what – as I construe the puzzle – we can gain conceptual control of, by preferring the incarnate(d) mental to would-be dualistic elements. 
The clue that effectively guides us here is entirely straightforward: there simply is no reliable way, as things now stand (in the practice of any science) to make firm dualistic inferences – from the occurrence of discrete physical events to the occurrence of discrete mental events – with an eye to confirming either identity or causal claims. Characteristically, all such linkages are no more than conjectural, very laxly defined, drawn from incommensurable (but not incompatible) vocabularies, hardly ever more than approximately apt, keyed to passing and practical interests, and, almost always, settled by intentional fiat moving from the ‘mental’ to the ‘physical’. In summary, then: the mental is at least a functional emergent (Intentionally construed), indissolubly incarnate in some organic materia – capable, conceivably, of being artefactually embodied in some suitably prepared inorganic materiae as well – open, emergently, in biological terms, to manifesting cognitively pertinent experiences, perceptions, thoughts (in the form of the ‘IC’ of imputed or reported mental states). Minimally, the mental may be holistically predicated of an organism, without any assignable
‘site’ of an executive self-consciousness; but, among enlanguaged humans, the functional ‘I’ (or self) of speech and other forms of reflexive agency is itself the artefactually emergent transform of an original ‘grammatical fiction’ that, by iterated use, becomes the palpable (reflexively discerned) presence of a viable self. In a culturally parallel way, actions, words and sentences, artworks, histories and the like are, similarly, the Intentional transforms of the things of our prelinguistic or physicalist world. Normally, we cannot say what may be rightly linked, ‘mentally’, with the bodily movement of any particular thing that we move from one ‘place’ to another (say, a chess piece on a chess board); but if we know, first, that we’ve identified a specific and deliberate chess move, we may quite easily specify the pertinent bodily movements that effectively comprise the actual chess move, though actions and mere bodily movements cannot be one and the same. Think, here, of the futility of trying to analyse a conversation in terms of the supervenience of (say) the meanings of words on the sounds or shapes of their putatively incarnating materiae. There are no dualistically construed mind–body laws of nature, and very nearly any bodily movement can (with ingenuity) be hooked up in a Rube Goldberg way to fulfil the conditions of being a licit chess move. There are no sufficient constraints on what to count as a ‘mental’ event or episode, by which, for instance, to make systematic (non-trivial) sense of any of Jaegwon Kim’s superb but futile efforts to formulate the rules of natural supervenience linking the ‘mental’ and the ‘physical’ (involving causality or identity or supervenience or realization or anything of the kind).
Consider, for the sake of sharing a thought experiment involving the supervenience of the ‘mental’ on the ‘physical’, the following would-be rule taken from Kim’s well-known conceptual quiver – which Kim tellingly calls ‘strong supervenience’ and which he himself is clearly drawn to:

Mind-Body Supervenience II. The mental supervenes on the physical in that if anything x has a mental property M, there is a physical property P such that x has P and necessarily any object that has P has M.15
My sense is that Kim intends his formula to be open to a familiar sort of objective testing: to assess its fitting the ‘logic’ of standard discursive practice or a demonstrable improvement of same. But Kim’s implicit protasis is all but trivial and his apodosis is clearly invalid. Kim’s verbal precision is quite misleading: the ‘mental’ never matches the ‘physical’ in the right way. Kim’s formula is either vacuously true (by definition or stipulation), or readily falsified by examining specimen instances in accord with
the strong sense of the emergence of physically irreducible mental episodes, or impossible to test dualistically. Kim, I should add, defines ‘the supervenience argument’ (in at least one place, in a way I believe he himself favours) in the following terms, addressing a supposed causal condition on supervenience: ‘Mental-to-mental causation [the formula affirms] is possible only if mental-to-physical causation is possible.’16 (The formulation is, effectively, Cartesian.) If mental events can be reductively identified with the physical (which I believe cannot succeed), then, of course, the classical ‘unity-of-science’ approach to the mental would be home free. Short of that, the use of a cogitans-like vocabulary is so informal and unruly that neither incarnation nor theoretical identity nor causality will yield precision enough to sustain even the weakest reading of Kim’s ‘supervenience argument’, which (itself) depends on the viability of causal (nomological) regularities of the mental-to-mental or mind–body sorts. Kim offers a number of closely argued verdicts that lead to an ‘unpalatable’ choice between reductionism and epiphenomenalism.17 I recommend instead that the causality issue (mental-to-mental or mental-to-physical causality) be abandoned as illusory (very possibly incoherent as well), although the concept of Intentionally complex ‘things’ (persons, actions) said to implicate legitimate causal questions can (still) be addressed consistently and coherently, wherever the presence of a person is admitted to entail the presence of a living organism that can (in turn) accommodate effective causes (in the standard way) and wherever the presence of an Intentional action entails the presence of causally effective bodily movements – which of course avoids the dilemma of having to choose between reductionism and epiphenomenalism. Supervenience, I should say, is an impossible way of approximating the complexities of emergence.
There is no point, I argue, in admitting the emergence of true language without admitting that the description and explanation of speech cannot be satisfactorily treated in either reductionist or epiphenomenalist terms: I add at once that what holds for language holds as well for the life of creatures functioning as persons.18 This goes a considerable distance towards vindicating and simplifying the metaphysics of the mental that I’m endorsing here. I cannot think of a simpler model that avoids reductionism and resolves (at the same time) Descartes’s dilemma regarding causal interaction between the mental and the physical (Princess Elizabeth’s puzzles). I take all this to begin to make explicit the benefit of the metaphysics of the mental that I’m advancing here. I’ll close with a small meander about the would-be privacy of first-person avowals (and even of third-person reports of another’s private consciousness). When my dentist asks me whether my mouth is numb enough for him to
proceed with the extraction he’s planned, he implicitly concedes that there’s a bare chance that the Novocain may not have taken hold and that I may be withholding the proprioceptive information he needs. So there is a sense in which inner mental experience may remain ‘private’ – that is, unreported in a public way, not in principle publicly inexpressible or inaccessible though privately known. Wittgenstein’s well-known argument against a private language is not the issue here. It has to do more with what, regarding the ‘mental’, can be publicly cognized, with attention to the conditions on which what may thus be known may be ‘given’ (cognitively) in an appropriate way. This is a very large and complex matter, of course – quite fashionable at the moment. The problem arises, particularly, with regard to the question of cognitive privilege and possible cognitive and non-cognitive senses of ‘consciousness’ – as distinct from the meaning of ‘self-consciousness’ – which is usually taken to entail that form of ‘awareness’ said to be abundantly sufficient for cognitive feats of any familiar kind. But I must end abruptly here, for lack of sufficient space to bring this line of reasoning to a suitable close.19 In a way it doesn’t matter: it’s the choice of a conceptual strategy that we count as most important. And, there, we are still wrestling with Descartes. Reject dualism and reductionism, I say, and reclaim the integral unity of the human person as an animal. No analysis that is thus informed can go seriously astray.
Notes

1 ‘Principles of Philosophy’, trans. John Cottingham, in The Philosophical Writings of Descartes, vol. 1, trans. John Cottingham, Robert Stoothoff and Dugald Murdoch (Cambridge: Cambridge University Press, 1985), Pt. I, §48. There is a clue to one line of Descartes’s speculation about how, possibly, to understand the paradox, in Pt. I, §46. This is the question that occupies much of the correspondence with Princess Elizabeth, which was never satisfactorily answered. I have benefited considerably from a lecture by Alison Simmons (and a reading of her text), ‘Mind-Body Union and the Limits of Cartesian Metaphysics’ (as yet unpublished, I believe), read before the Philosophy Department, Temple University, either at the end of 2015 or early 2016.
2 Immanuel Kant, Critique of Pure Reason, trans. and ed. Paul Guyer and Allen W. Wood (Cambridge: Cambridge University Press, 1998), B131–2. The provenance of the Ich denke now seems problematic: evidently it’s ‘needed’ in order to ‘accompany all my representations, for otherwise … .’ But then, perhaps it’s posited only circularly, in order to shore up the transcendental argument itself. In Opus postumum, as Eckart Förster explains:
The positing subject is a thing in itself because it contains spontaneity, but the thing in itself = x, as opposed to, or corresponding to, the subject, is not another object, Kant now argues, but a thought-entity without actuality, merely a principle: ‘The mere representation of one’s own activity.’ It is the correlate of the pure understanding in the process of positing itself as an object. Its function is to ‘designate a place for the subject’: it is ‘only a concept of absolute position: not itself a self-subsisting object, but only an idea of relations’. Self-consciousness is the ‘act’ through which the subject makes itself into an object. … For only insofar as the subject can represent itself as affected can it appear to itself as corporeal, hence as an object of outer sense. It then progresses to knowledge of itself in the thoroughgoing determination of appearances, and of their connection into a unified whole. (cited from Eckart Förster’s Introduction, in Immanuel Kant, Opus postumum, trans. Eckart Förster and Michael Rosen (Cambridge: Cambridge University Press, 1993), p. xlii)
Read in these terms, my own post-Darwinian proposal (to be introduced shortly) – involving the transformation of a grammatical fiction into a palpable actual presence (the self or person) – will be seen to be simpler, more plausible, empirically (or commonsensically) confirmable in terms of the human infant’s usual Bildung, directly ‘testable’, and entirely free of transcendental pretensions. (In a word, pragmatic or instrumental.) Kant seems obliged to propose a cognitive function for pure reason, here, in order to legitimate the actual function of the ‘I think’, which could not have been conceded in the first Critique. Kant’s solution seems unpardonably regressive, here – rationalist in a sense Kant intended to combat. But the incompleteness of the transcendental argument seems to demand Kant’s final effort.
3 Cited from Martin Heidegger, Die Grundprobleme der Phänomenologie, Gesamtausgabe, vol. 24 (Frankfurt am Main: Vittorio Klostermann, 1993), p. 226: in Dan Zahavi, ‘Mindedness, Mindlessness, and First-Person Authority’, in Joseph Schear (ed.), Mind, Reason, and Being-in-the-World: The McDowell-Dreyfus Debate (London: Routledge, 2013), pp. 320–43, at p. 351. (The translation is Zahavi’s.)
4 See Hubert L. Dreyfus, ‘The Myth of the Pervasiveness of the Mental’ (pp. 15–40) and John McDowell, ‘The Myth of the Mind as Detached’ (pp. 41–58), both in Joseph Schear (ed.), Mind, Reason, and Being-in-the-World: The McDowell-Dreyfus Debate (London: Routledge, 2013).
5 For a recent account of this general approach, see my Toward a Metaphysics of Culture (London: Routledge, 2016), Chs. 1–2. I begin, with reference to Darwinian evolution, with a critique of the critique of Darwin advanced by the so-called ‘philosophical anthropologists’: Helmuth Plessner, Adolf Portmann, Arnold Gehlen, and, a close associate, Marjorie Grene.
6 Colin McGinn, The Mysterious Flame: Conscious Minds in a Material World (New York: Basic Books, 1999), pp. xi, 3–5.
7 McGinn, The Mysterious Flame, p. 3.
8 See Robert C. Berwick and Noam Chomsky, Why Only Us: Language and Evolution (Cambridge: MIT Press, 2016). Chomsky has shifted the centre of gravity of the language question from the analysis of the structure and meaning of actual languages – and, accordingly, the functionality of actual languages within the human world – to a speculation regarding the unique neurobiological condition on which the origination of true language may have ultimately depended. It remains true, nevertheless, that the evidentiary pertinence of the linguistic specimens Chomsky himself provides (in support of his own inquiry) requires, for their confirmation, reference to the practice of determining the meanings of specimen sentences drawn from actual languages. A fair question arises therefore as to the autonomy of Chomsky’s claim, which he does not address in this and other recent publications.
9 See J. J. Gibson, The Ecological Approach to Visual Perception (Boston: Houghton Mifflin, 1979), p. 258.
10 Zahavi, ‘Mindedness, Mindlessness, and First-Person Authority’, p. 329.
11 The British philosopher Peter Geach has taken an early (and instructive) role in modelling thought propositionally; see Mental Acts (London: Routledge, 1957).
12 See, for instance, the perceptive picture tendered in the ‘cultural psychology’ of Ciarán Benson, The Cultural Psychology of Self: Place, Morality and Art in Human Worlds (London: Routledge, 2001).
13 McGinn, The Mysterious Flame, pp. 4–5.
14 For an overview of Hempel’s unity-of-science model, see Carl G. Hempel, Aspects of Scientific Explanation and Other Essays in the Philosophy of Science (New York: Free Press, 1965). The term ‘Intentionality’ is my own coinage, meant to collect the semiotic dimension of all phenomena that may be characterized as ‘culturally significant’ in the sense of involving selves or persons or as being enlanguaged or lingual. A recent summary of my usage appears in Toward a Metaphysics of Culture, Chs. 1–2.
15 Jaegwon Kim, Philosophy of Mind, 3rd ed. (Boulder: Westview, 2011), p. 9.
16 Kim, Philosophy of Mind, p. 219.
17 See Kim, Philosophy of Mind, pp. 214–20; and Margolis, Toward a Metaphysics of Culture, Ch. 2.
18 I take this to be a reductio of the essential thesis in Donald Davidson, ‘Radical Interpretation’, Inquiries into Truth and Interpretation, 2nd ed. (Oxford: Clarendon, 2001).
19 The general line of reasoning that I pursue here may be glimpsed at least – say, with respect to (Intentional) actions and (constituent) bodily movements and replacements for dualistic formulations – in my Toward a Metaphysics of Culture, Ch. 2.
Part Five
Resources
20
Annotated Bibliography

The Problem and Nature of Consciousness

Alter, T. and Walter, S., eds. (2007) Phenomenal Concepts and Phenomenal Knowledge – New Essays on Consciousness and Physicalism. New York: Oxford University Press.
This anthology comprises thirteen essays examining what has come to be seen as lying at the heart of the controversies around standard anti-physicalist arguments, namely the question whether knowledge of conscious experiences, and the concepts associated with it, differs from its purely physicalist counterparts. A positive answer is sometimes taken to provide the resources to explain away anti-physicalist intuitions elicited by the knowledge or the conceivability argument. In this collection, both opponents and adherents of this strategy are represented. Supplemented with a helpful introduction, which illuminates the basic structure anti-physicalist arguments typically share, this book is thus highly valuable in that it systematically explores and structures the space of possible positions that can be endorsed with regard to anti-physicalist arguments.

Baars, B. (1988) A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
This is the first comprehensive, even if still somewhat speculative, statement of Bernard Baars’s influential Global Workspace Theory (GWT). GWT is presented as both a theory about consciousness and a more general framework to think about and organize evidence from psychology and neuroscience about consciousness and related phenomena like voluntary control of action or the ‘self’. Baars systematically probes this putative unificatory power of the GWT, and spells out the now-famous global workspace metaphor: the nervous system is said to have a kind of widely distributed ‘publicity organ’, the contents of which are conscious. Despite the numerous further developments of the GWT since 1988, the clarity with which the theoretical statements are presented here makes this book still worth reading.
Baars, B. (1997) In the Theater of Consciousness – The Workspace of the Mind. New York: Oxford University Press.
Baars’s second book on consciousness presents his GWT in a highly readable way, thus making the key ideas accessible to non-scientists. After a short chapter introducing
the methodology of the scientific study of consciousness in general, each chapter presents one component of the theatre metaphor, for example, the spotlights, the director, the setting behind the scenes, etc. This rather short book serves well as an introduction to the GWT and an entry point to reflection on the prospects of scientific explanations of consciousness more generally.

Carruthers, P. (2000) Phenomenal Consciousness – A Naturalistic Theory. Cambridge: Cambridge University Press.
Carruthers puts forth and rigorously defends a dispositional higher-order thought (HOT) theory, according to which phenomenal consciousness consists in, roughly, a certain sort of fine-grained intentional contents (IC) that are – via a special short-term memory – made available to HOTs. This availability is then supposed to account for the feeling of subjectivity accruing to conscious experiences, in that they represent not only properties of the environment but also properties of the experience itself. The explicit goal of this extensive study is to provide an account of consciousness that is naturalistic, that is, scientifically acceptable. Yet Carruthers also discusses and criticizes at length his main naturalistic competitors, in particular first-order representationalism.

Carruthers, P. (2005) Consciousness – Essays From a Higher-Order Perspective. Oxford: Oxford University Press.
This collection of essays by Carruthers presents his overall views on consciousness in a coherent way; although most of the texts were published earlier – three out of eleven essays predating his Phenomenal Consciousness – all of them appear in revised forms, equipped with many cross-references. The essays reveal that Carruthers has developed and refined his higher-order or dual-content theory of consciousness in significant ways, but they also contain reflections on topics like the nature of reductive explanations in general or consciousness in animals.
Chalmers, D. (1996) The Conscious Mind – In Search of a Fundamental Theory. New York: Oxford University Press.
Despite the countless discussions of the arguments found in The Conscious Mind, the original is still worth reading, not least because of the lasting influence it has had on the study of consciousness. Chalmers therein defends a version of property dualism dubbed ‘naturalistic dualism’, which holds that, although there are laws connecting the physical and the mental, phenomenal consciousness, when taken seriously, cannot possibly be explained within a materialistic framework. Providing canonical formulations of major points of contention in current debates in the philosophy of consciousness, in particular of the conceivability argument, The Conscious Mind set the philosophical agenda for subsequent years.
Chalmers, D. (2010) The Character of Consciousness. New York: Oxford University Press.
This voluminous collection is composed of fourteen essays on a range of different questions about consciousness, such as the workings of phenomenal concepts or the search for neural correlates and the requirements a science of consciousness is faced with. Chalmers further discusses the state of the debate on his conceivability argument and replies to critics by restating it, and the two-dimensional framework on which it is based, in a more careful way. The essays are not generally published here for the first time, but appear in revised and updated versions.

Churchland, P. (1986) Neurophilosophy – Toward a Unified Science of the Mind-Brain. Cambridge, MA: MIT Press.
Patricia Churchland in her book explores the possibility of a grand, unified theory that fully explains the brain or – what is the same in Churchland’s view – the mind. Regarding folk psychology as a theory whose vocabulary has no privileged status, she argues that ultimately such a neuroscientific theory will show that the former postulates entities, like beliefs and desires, that do not really exist. Churchland’s approach is explicitly interdisciplinary; she starts by introducing the reader to elementary neuroscience before discussing relevant topics in the philosophy of science, like theory reduction. Despite her controversial theses, and although her book is not primarily concerned with consciousness as such, Churchland’s forceful argumentation for her neurophilosophical approach set the stage for the kind of work at the intersection of neuroscience and philosophy of mind which is now common in theorizing about consciousness.

Dennett, D. (1991) Consciousness Explained. Boston: Little, Brown and Company.
Consciousness Explained is the locus classicus for the multiple drafts model and Dennett’s forceful rejection of the Cartesian-Theater metaphor.
In this provocative book, Dennett argues that a number of philosophical problems thought to beset scientific explanations of consciousness are fundamentally misconstrued. For instance, he finds problems relating to qualia to rest on a deeply incoherent notion of these alleged phenomenal properties. Dennett’s arguments, often couched in figurative language, are worth reading both despite and because of the widespread criticism and discussion they have provoked. Dennett, D. (2005) Sweet Dreams – Philosophical Obstacles to a Science of Consciousness. Cambridge, MA: MIT Press. Partly based on his Jean Nicod Lectures, Dennett here reinforces his position that consciousness should be studied from the third-person point of view only, and that there are no principled reasons why science should not be able to fully describe or explain conscious experience once the explanandum is appropriately construed and stripped of conceptual confusions. He also further elaborates his views on qualia and the knowledge argument. Dennett largely renounces discussion of more technical issues, which makes the book a good introduction to his overall view of consciousness.
Dretske, F. (1995) Naturalizing the Mind. Cambridge, MA: MIT Press. This book presents a very influential version of representationalism about mental facts, combined with a functionalist account of representation. Mental states are representational in virtue of having systemic or acquired ‘indicator functions’; they indicate the presence of the properties of a domain of objects. All representational content is thus externalized, or wide. Focusing on the most challenging features of mind, like qualia, phenomenal consciousness in general and self-knowledge via introspection, this quite demanding book carries through both the externalist and the representationalist programme with great rigour. Edelman, G. M. and Tononi, G. (2000) A Universe of Consciousness – How Matter Becomes Imagination. New York: Basic Books. Edelman and Tononi approach the problem of consciousness from a neurophysiological point of view, asking how it is that some, but not all, brain processes result in conscious states. In response, they argue that consciousness requires both highly differentiated and highly integrated processes of ongoing parallel and recursive signalling between separate brain regions, resulting in their information integration theory of consciousness (IIT). The authors present the IIT in an accessible and readable way, introducing each chapter with a short prologue. They apply it to several features of consciousness, like its unity and its limited capacity, although they have somewhat less to say about its subjective and qualitative characteristics. Foss, J. (2000) Science and the Riddle of Consciousness – A Solution. Boston: Kluwer Academic Publishers. Foss makes an original contribution to the debate about consciousness from a metascientific perspective.
Science proceeds by modelling the phenomena it attempts to explain, Foss observes, and the riddle of consciousness basically arises from certain confusions between scientific models and their targets, as well as from overly high expectations of what a scientific explanation of consciousness should provide. The arguments Foss presents depend on the materialism he presupposes, and thus may not convince everyone, but his rather novel account certainly broadens the perspective on the nature of the problem at hand. Freeman, A., ed. (2006) Consciousness and Its Place in Nature – Does Physicalism Imply Panpsychism? Exeter: Imprint Academic. This volume is based on a special issue of the Journal of Consciousness Studies, comprising eighteen essays. Provocatively answering the question of the title in the affirmative, Galen Strawson authors the first contribution; the other texts are comments on his arguments. Strawson argues that unless the existence of consciousness is denied completely, panpsychism is the only option left for physicalists, as all other non-eliminativist and non-dualistic versions have to resort to unexplained and, in his view, dubious relations like emergence. Most commentaries are critical of Strawson’s
argument, but for different reasons. Given the breadth of the views represented, this collection is a very useful basis for probing the tenability of panpsychism. Gennaro, R. J., ed. (2004) Higher-Order Theories of Consciousness – An Anthology. Amsterdam: John Benjamins Publishing. After a short but useful introduction by Gennaro, part one of this anthology contains essays by proponents of various forms of higher-order theories; critics have their say in the second part. The points of contention between opponents and defenders are numerous and diverse, including questions about the general motivation behind higher-order theories, their capacity to account for animal consciousness and the distinction between first- and higher-order representationalism. Interestingly, many of the questions raised have both conceptual and empirical aspects, which is reflected in the scientific background some of the contributors have. Gennaro, R. J. (2012) The Consciousness Paradox – Consciousness, Concepts, and Higher-Order Thoughts. Cambridge, MA: MIT Press. Gennaro sets forth a particularly well-elaborated version of the higher-order thought (HOT) theory, and his book at the same time gives a good overview of the main alternative HOT theories on the market. The paradox he alludes to in the title arises from the seeming inconsistency of the individually plausible claims that some version of the HOT theory is true and that infants and animals, who have not (yet) acquired a great number of concepts, are nonetheless conscious. In addition to giving an account of concept acquisition to resolve the paradox, Gennaro also discusses standard objections against HOT theories, like the problem of misrepresentation and the rock objection. Gray, J. (2004) Consciousness – Creeping Up to the Hard Problem. Oxford: Oxford University Press. Neuropsychologist Jeffrey Gray provides a systematic survey of the empirical evidence bearing on different answers to the question of how the brain brings about qualia.
In particular, he scrutinizes the proposals that qualia are products either of the functional organization of the brain or of special kinds of intentional states. Both of these theses, however, he finds unsupported by the empirical evidence. Gray then goes on to argue that consciousness fulfils the function of a kind of ‘error detector’, and that the brain region performing that function is probably the hippocampus. But irrespective of the exact conclusions Gray ultimately draws, his book is highly valuable as a guide through empirical research on the causes and correlates of consciousness. Hill, C. (2009) Consciousness. Cambridge: Cambridge University Press. After briefly distinguishing different forms of consciousness, Christopher Hill mainly focuses on the problem of qualia. A former type-B materialist himself, he explains why he came to believe that the strategy of any kind of a posteriori materialism presupposing a conceptual dualism cannot meet the anti-materialist challenge posed by the knowledge argument. Instead, Hill now opts for a different
version of representationalism, where the relevant kind of representation is thought to be non-conceptual. In the later chapters of this dense yet clearly written book, this theory is applied to different topics like visual perception, pain and emotions. Janzen, G. (2008) The Reflexive Nature of Consciousness. Philadelphia: John Benjamins Publishing. After a careful discussion of the different meanings of ‘consciousness’, Janzen contends that mental states are conscious only if the subject is aware of them. But instead of opting for a higher-order theory of consciousness, he invokes an idea that has been prominent among adherents of the phenomenological movement, namely that conscious states incorporate an awareness of their own occurrence, that is, they are reflexive. Janzen’s presentation of this modern version of a reflexive theory is embedded in a clear and useful discussion of first- and higher-order representationalist theories more generally. Kim, J. (2005) Physicalism, or Something Near Enough. Princeton: Princeton University Press. Based on his numerous publications on the topic, Kim here presents his overall view of the metaphysics of mind. At the heart of his argument lies the problem of mental causation: physicalism, according to Kim, is the only option because of what he calls the pairing problem of mental–physical interactions, and, as his supervenience argument is supposed to show, the physicalism in question has to be reductive: if mental properties were not reducible to, but nonetheless supervened on, physical ones, they would be causally superseded by their supervenience bases. Acknowledging that qualia are not readily amenable to his account of functional reduction, Kim arrives at the somewhat qualified physicalist position announced in the title. Kirk, R. (2005) Zombies and Consciousness. Oxford: Clarendon Press.
In this ambitious yet readable book, Robert Kirk puts forward a broadly functionalist account of phenomenal consciousness, which differs substantially from earlier versions of functionalism. Kirk argues that zombies are impossible, as the causal inertness of qualia cannot be squared with our putative epistemic intimacy with them, and that descriptions of qualia should be understood as a specific way of talking about physical processes. Phenomenal consciousness in his view requires a ‘package’ of cognitive capacities that we can reasonably ascribe to non-human animals, with the possession and use of concepts being best regarded as a capacity that comes in degrees. Kriegel, U. (2009) Subjective Consciousness – A Self-Representational Theory. Oxford: Oxford University Press. Kriegel lucidly presents a rather novel reflexive or self-representational theory of phenomenal consciousness, which he discusses with a clear emphasis on the subjective aspect of phenomenal states: after all, it is the fact that there is something it is like for me to be in a conscious state that makes this state conscious in the first place. A
representationalist framework serves as a basis for the analysis of both the qualitative and the subjective character of conscious states; the former is a matter of a state’s representing qualities of external objects, the latter of its representing itself. Conscious states thus are at once first- and second-order mental states. Koons, R. C. and Bealer, G., eds. (2010) The Waning of Materialism. Oxford: Oxford University Press. Koons and Bealer collect twenty-three essays by philosophers who, despite their diverging views on many specific questions, all challenge the orthodoxy of materialism in the philosophy of mind. The first two parts consist of anti-materialist arguments drawing on the supposed irreducibility of consciousness and on different aspects of personal identity. After a third part addressing materialist worries about mental causation and related topics, several alternatives to materialism are put forth. Complemented by an introduction that disentangles the different positions falling under the heading ‘materialism’ and presents the main challenges anti-materialists have to face, this anthology is a valuable source for both materialist and anti-materialist thought. Levine, J. (2001) Purple Haze – The Puzzle of Consciousness. Oxford: Oxford University Press. Almost twenty years after he first contended that there is an explanatory gap, Levine reviews, in this quite short albeit dense book, the efforts made since then to bridge it. And he comes to a negative conclusion: all the evidence for materialism notwithstanding, conscious experience remains a genuine puzzle, and the mind–body problem is far from solved. In arguing for this claim, Levine provides helpful and clear discussions of a wide variety of issues having to do with, for example, the conceivability argument, different types of explanation and the identification of theoretical terms. Ludlow, P., Nagasawa, Y. and Stoljar, D., eds.
(2003) There’s Something About Mary – Essays on Phenomenal Consciousness and Frank Jackson’s Knowledge Argument. Cambridge, MA: MIT Press. This collection brings together the most important contributions to the discussion that evolved around Frank Jackson’s knowledge argument. It thus includes Jackson’s original formulation of the argument, early responses to it, as well as more recent analyses, some of which have not been published before. After a foreword by Jackson, Nagasawa and Stoljar provide a substantive introduction to the topic, working out what makes the knowledge argument stand out against similar anti-materialist attacks. The essays themselves represent the whole spectrum of strategies that can be adopted to either defuse or defend the knowledge argument. Lycan, W. (1996) Consciousness and Experience. Cambridge, MA: MIT Press. Lycan adopts an inner sense view of consciousness, according to which mental states are conscious in virtue of being perceived or monitored by a special inner faculty. He
is thus committed to a higher-order theory of consciousness, where the representational contents of the relevant higher-order states are analysed functionally. Mentality therefore has no special features that cannot be explained in representational or functional terms. Lycan here refines his earlier publications on the topic by carefully delineating the explanandum of his theory and responding to objections. McGinn, C. (1991) The Problem of Consciousness – Essays Towards a Resolution. Oxford: Blackwell Publishers. This collection comprises eight essays; half of them introduce and defend McGinn’s mysterianism about consciousness, while the remaining ones concern more specific topics, like machine consciousness or the plausibility of anomalous monism, and also provide an insight into the development of McGinn’s position over time. The core of his mysterianism consists in the claim that consciousness is a natural property of the brain, but not naturalistically explainable by cognitively limited beings like us: we are ‘cognitively closed’ with regard to the natural properties that would account for consciousness. McGinn, C. (1999) The Mysterious Flame – Consciousness in a Material World. London: Basic Books. In this book, McGinn presents his overall view of consciousness. Written in an engaging and rather colloquial style, with technical terms reduced to a minimum, it should also be accessible to non-experts. As in previous publications, McGinn holds that consciousness is at once a physical process and not fully understandable by humans like us. McGinn argues for these bold hypotheses by pointing to the alleged superiority of naturalistic mysterianism over its dualistic and physicalist competitors in handling the challenges its alternatives face. Metzinger, T., ed. (2000) Neural Correlates of Consciousness – Empirical and Conceptual Questions. Cambridge, MA: MIT Press.
Thomas Metzinger in this voluminous collection brings together the perspectives of philosophers and neuroscientists on the neurobiological basis of consciousness. A first part on conceptual questions about the notion of neural correlates and the appropriate explanandum of any theory of consciousness is followed by three parts presenting candidate correlates; a final section focuses more specifically on the correlates of agency, selfhood and social cognition. Each part is complemented by a short but useful summary. Although empirical research has moved on since its appearance, the book still provides a very good introduction to both influential scientific hypotheses and the general motivation and pitfalls of this research programme. Papineau, D. (2002) Thinking about Consciousness. Oxford: Clarendon Press. Papineau defends a version of materialism according to which all phenomenal properties are identical to physical or physically realized properties. To this effect,
he gives new substance to the argument from the causal completeness of physics. Papineau’s ontological thesis is then combined with a kind of conceptual dualism, which he uses to explain away dualistic intuitions. He thus employs what was later called the ‘phenomenal concept strategy’: the fact that phenomenal concepts function in a way fundamentally different from physical concepts – according to Papineau, they literally incorporate variants of what they refer to – gives rise to the illusion that their referents are non-identical. Perry, J. (2001) Knowledge, Possibility, and Consciousness. Cambridge, MA: MIT Press. Perry attempts to defend a version of type physicalism against the most pressing anti-physicalist challenges, stemming from the knowledge argument, the conceivability argument and Kripke’s modal argument. Perry’s approach is similar in spirit to the strategies of other physicalists. However, at the heart of all three arguments he sees a mistaken and unduly limited view of the content of thoughts. His text – based on Perry’s Jean Nicod Lectures – therefore contains detailed discussions of general issues relating to representation, indexicality and the like. Prinz, J. (2012) The Conscious Brain: How Attention Engenders Experience. New York: Oxford University Press. Prinz addresses phenomenal consciousness from the perspective of cognitive science, synthesizing an enormous body of empirical evidence to bolster his hypotheses. He is an eminent proponent of the view that consciousness is closely tied to attention, where the latter is defined as a process that makes information available to working memory. More precisely, according to Prinz’s attended intermediate-level representation (AIR) theory, all and only those contents of experience are conscious that are at an intermediate ‘level’, that is, that are presented to us from our particular point of view.
Prinz has defended this thesis before, but in this book he provides the most comprehensive account of it and applies it to more specific issues, like the unity of consciousness. Rosenthal, D. M. (2005) Consciousness and Mind. Oxford: Clarendon Press. Consciousness and Mind brings together Rosenthal’s most important articles about his higher-order thought (HOT) theory of consciousness. While the first group of essays introduces the HOT theory itself, the following parts cover the homomorphism theory of sensory qualities, the relation between consciousness and language, and the unity of consciousness. This volume is invaluable as a coherent summary of Rosenthal’s views and emphasizes the relations among these influential essays. Additionally, it provides a useful survey of the advantages and challenges the HOT framework faces more generally. Searle, J. (1992) The Rediscovery of the Mind. Cambridge, MA: MIT Press. Searle at once defends his biological naturalism and stresses the central place that consciousness should be given in the philosophy of mind, since all intentionality in his view depends on (at least potential) consciousness. According to biological
naturalism, consciousness is a higher-level feature of the biological properties of the brain, just as liquidity is a feature of a set of water molecules. Although consciousness is taken to be ultimately irreducible because of its inherent subjectivity, Searle sticks to the claim that it is nothing mysterious. The Rediscovery of the Mind has stimulated much discussion, even if it is not always entirely clear just how Searle’s position relates to other positions regarding the consciousness–brain relationship. Siewert, C. (1998) The Significance of Consciousness. Princeton: Princeton University Press. Siewert’s carefully written book starts with a close examination of the meaning(s) of the word ‘consciousness’ from the first-person point of view, arguing that many theories of consciousness do not account for central aspects of the concept. He defends this introspective approach to consciousness against objections before proceeding to argue for a close relation between phenomenality and intentionality: his thesis is that mental states are intentional in virtue of having phenomenal features, and not the other way round. Siewert’s book thus stands out against the many other treatments of these questions in both its explicit endorsement of the first-person perspective and its anti-representationalist stance on phenomenal consciousness. Stoljar, D. (2006) Ignorance and Imagination – The Epistemic Origin of the Problem of Consciousness. Oxford: Oxford University Press. Stoljar argues that the problem of consciousness is not so much a conceptual problem as an epistemic one: the reason why philosophers are so puzzled about consciousness and doubt whether it can ever be explained scientifically is their present ignorance of the relevant non-experiential facts that determine the experiential ones. Only apparently, then, is it possible to consistently conceive of zombies devoid of any conscious experiences. 
If we keep our ignorance in mind, on the other hand, the plausibility of conceivability arguments should wane. Although other philosophers before him have adopted an epistemic strategy towards the problem of consciousness, Stoljar’s carefully argued book probably provides the most elaborate version of this kind of solution. Tye, M. (1995) Ten Problems of Consciousness – A Representationalist Theory of the Phenomenal Mind. Cambridge, MA: MIT Press. As the title indicates, Tye starts by marking out the ten problems of phenomenal consciousness he takes to be the hardest, and sets himself the ambitious task of solving them. His so-called PANIC theory – that mental states are conscious when they have a suitably poised, abstract, non-conceptual intentional content – is justified by its explanatory power: focusing in particular on the case of visual perception, it is taken to best explain what Tye regards as the crucial features of conscious states. This book presents an early yet sophisticated representationalist theory, elaborating classic arguments, like the one from transparency, in great detail.
Tye, M. (2000) Consciousness, Color and Content. Cambridge, MA: MIT Press. Tye’s second book on consciousness contains many clarifications and refinements of the representationalist theory put forth in the first. In particular, he elaborates and reinforces the distinction between non-conceptual experiential content and conceptual belief content. More general objections against representationalism, like the possibility of inverted qualia, are countered by the objective account of colours Tye presents in the last part of the book. As in his first presentation of the PANIC theory, Tye writes in a clear and accessible way and brings in many examples from empirical science to support his views. Tye, M. (2009) Consciousness Revisited – Materialism Without Phenomenal Concepts. Cambridge, MA: MIT Press. In his latest book on consciousness, Michael Tye rejects the phenomenal concept strategy because of problems having to do with the determination of the reference of these concepts. As an alternative, Tye invokes Russell’s distinction between knowledge by description and knowledge by acquaintance, supplemented by a new account of perceptual content, and contends that a solution to standard anti-materialist arguments requires acknowledging truly objectual, as opposed to factual, knowledge. He develops his new strategy in great detail by reference to visual perception, and then applies it to other issues in the philosophy of consciousness, like change blindness.
Special Problems or Aspects of Consciousness
Bayne, T. (2012) The Unity of Consciousness. Oxford: Oxford University Press. Bayne offers a thorough, comprehensive defence of the thesis that phenomenal conscious states are necessarily unified. A substantial part of the book consists in careful and well-informed discussions of putative counterexamples to phenomenal unity, such as certain experiences resulting from hypnosis, the split-brain syndrome, schizophrenia or anosognosia. Bayne regards phenomenal unity as constitutive of the self and dubs his view ‘virtual phenomenalism’, referring to the self as a virtual object depending on the representations. This is a very fruitful treatment of the topic, though challenging for readers unfamiliar with the scientific literature. Bermúdez, J. L. (1998) The Paradox of Self-Consciousness. Cambridge, MA: MIT Press. Drawing on philosophical work on non-conceptual content and empirical research into, for example, the development of primitive self-conceptions in infants, Bermúdez suggests that the community of organisms capable of self-conscious thoughts extends beyond language users; possession of the concepts ‘I’ or ‘self’ is necessary only for more demanding forms of self-consciousness. In contrast, analyses that categorically require the reflexive grasp of an ‘I’ are, according to Bermúdez, faced with a paradox
arising from the interdependence of self-conscious thoughts and linguistic self-reference. Bermúdez’s detailed and valuable study of this paradox and its implications highlights fundamental questions about the appropriate approach for any theory of self-consciousness to take. Campbell, J. (2002) Reference and Consciousness. Oxford: Clarendon Press. Campbell argues that conscious attention is essential for an understanding of what certain demonstrative expressions refer to; that is, at least in some cases, it provides knowledge of reference (or of the referent). In arguing for a close connection between the topics of attention and reference, he brings in and discusses in detail recent findings from cognitive science about the functional role of selective attention, before elaborating the consequences his thesis may have for theories of both consciousness and reference. Though not met with universal approval, Campbell’s approach valuably points out the close connections between the philosophy of consciousness and a great variety of other philosophical subdisciplines. Carruthers, P. (2015) The Centered Mind – What the Science of Working Memory Tells Us About the Nature of Human Thought. Oxford: Oxford University Press. Although quite demanding for non-experts – Carruthers himself conceives of it as a contribution to ‘theoretical psychology’ rather than philosophy – this book is an impressive, in-depth examination of the nature of conscious human thoughts. Their contents and concepts, Carruthers argues, are conscious in virtue of being bound into the contents of sensory modal images. Thus amodal (non-sensory) concepts as such cannot be conscious, where consciousness is understood in terms of global broadcasting. Hence, the stream of consciousness contains only sensory contents and the contents depending on them. Gennaro, R. J., ed. (2015) Disturbed Consciousness – New Essays on Psychopathology and Theories of Consciousness. Cambridge, MA: MIT Press.
Cases of disturbed consciousness often serve as touchstones for philosophical theories of consciousness; this collection of essays, in which psychopathologies take centre stage, is therefore a very fruitful contribution to the literature on consciousness. Each of the fourteen essays focuses on one specific theory or aspect of consciousness and the support it gains from, or the specific challenges it faces in accounting for, for example, split-brain cases, various forms of agnosia or blindsight. Gennaro complements the collection with an introduction in which he usefully summarizes the specific philosophical theories, on the one hand, and the forms of consciousness disorder, on the other, that are found in this volume. Kriegel, U. (2011) The Sources of Intentionality. New York: Oxford University Press. Kriegel takes a new stance on the relation between consciousness and intentionality, holding, first, that all intentionality is ultimately grounded in the intentionality
of phenomenal consciousness, and second, that the latter can be naturalized. Aimed at unifying the most important intuitions we have about intentionality, these tenets are the core of the general framework Kriegel proposes for more specific theories about the relation between consciousness, intentionality and experience. He himself makes two suggestions to this effect, namely an adverbial theory and a higher-order tracking theory. The book is thus an ambitious and interesting attempt to reconcile the intuitions that phenomenal mental states have a special role to play with regard to intentionality, and that intentionality is, after all, a natural phenomenon. Kriegel, U. (2015) The Varieties of Consciousness. New York: Oxford University Press. Kriegel sets himself the task of finding out how many different, that is, mutually irreducible, phenomenologies are needed for an adequate description of our stream of consciousness. The candidates Kriegel accepts are cognitive and conative phenomenologies, along with the phenomenology of entertaining propositions; rejected are special moral and emotional phenomenologies. Starting with a clarification of his methodological and metaphysical assumptions – Kriegel argues that at least some mental phenomena are introspectively observable – he offers careful arguments that are informed by analytic philosophy as well as the phenomenological movement. Liu, J. and Perry, J., eds. (2011) Consciousness and the Self – New Essays. Cambridge: Cambridge University Press. This collection contains ten essays plus an introduction by JeeLoo Liu on a quite broad range of topics relating to consciousness and our sense of the self. Central issues concern the role that self-awareness plays in specific theories of consciousness, like self-representationalism and higher-order theories, self-knowledge – or the lack thereof – and the relation between the sense of the self and personhood.
The essays individually, and the volume as a whole, establish connections between current philosophical thought, important arguments from the history of philosophy and recent empirical findings. Mele, A. (2009) Effective Intentions – The Power of Conscious Will. New York: Oxford University Press. Against recent arguments to the contrary, Alfred Mele contends that neuroscientific and psychological evidence does not undermine the thesis that conscious intentions are causally relevant to the corresponding actions. He first introduces several conceptual distinctions relating to intentions and agency, and then uses his framework in analysing arguments by scientists like Benjamin Libet and Daniel Wegner. Mele works out the conceptual assumptions they make and offers alternative interpretations of the data. In particular, he argues that many arguments denying the existence of free will confuse intentions as such with conscious intentions.
Schwitzgebel, E. (2011) Perplexities of Consciousness. Cambridge, MA: MIT Press. By means of a series of case studies, Schwitzgebel launches a sceptical challenge to our putative knowledge of our own conscious experiences. Discussions of the colour of dreams, imagery, unattended stimuli and various visual illusions lead him to the conclusion not only that introspection is fallible, but also that our judgements about the outer world are in general more reliable than our judgements about our mental states. The book is an empirically well-informed yet entertaining guide through a plethora of interesting features of consciousness, aimed at undermining the confidence with which we claim to possess self-knowledge. Smithies, D. and Stoljar, D., eds. (2012) Consciousness and Introspection. Oxford: Oxford University Press. The topic of introspection lies at the intersection of epistemology and philosophy of mind – hence the aim of this rich volume: to explore the connections between self-knowledge and consciousness. The fourteen essays represent a wide range of perspectives on topics like scepticism about introspection, different theories of introspection, constitutionalism and the relation between introspection and conscious experiences. Numerous cross-references and a quite long and substantive introduction by the editors ensure that the texts nonetheless form a coherent whole. Tye, M. (2003) Consciousness and Persons – Unity and Identity. Cambridge, MA: MIT Press. In this relatively short book, Tye offers an analysis of the unity of phenomenal and temporal consciousness, arguing that the problem has traditionally been misconceived. Rather than being a relation between different experiences, phenomenal unity is best seen as a unity relation between contents, where representational content is closed under conjunction.
Although its argumentation depends heavily on representationalism, the book's brevity and the number of important conceptual distinctions it lays out also make it suitable as an introduction to the unity problem more generally. Wegner, D. (2002) The Illusion of Conscious Will. Cambridge, MA: MIT Press. Psychologist Daniel Wegner gathers evidence from a great number of social and neuropsychological experiments to the effect that our experiential feeling of bringing about actions by acts of conscious willing is often misleading. Thus, sometimes we overestimate our causal powers; in other cases, our acting is not accompanied by the experience of a will at all. Indeed, Wegner argues for the even stronger claim that the feeling that we have a conscious and causally efficacious will at all is an illusion created by the brain. Irrespective of the controversial conclusions Wegner draws, his book is a very valuable and readable guide through psychological research on this topic.
Annotated Bibliography
427
Introductions
Alter, T. and Howell, R. J. (2009) A Dialogue on Consciousness. New York: Oxford University Press. This short book stands out from other introductions by its original form: the text is an extended dialogue between the two students Tollens and Ponens (and occasionally other conversational partners), who meet at their college library to read classical texts about the problem of consciousness and discuss the arguments they find. Starting with Descartes's Meditations, the protagonists quickly make their way to currently discussed thought experiments, settling on a version of property dualism and panprotopsychism – the authors thus make no secret of their own views. The accessible and engaging introduction is complemented by a lengthy 'suggested reading' list. Blackmore, S. (2010) Consciousness – An Introduction. 2nd ed. London and New York: Routledge. Susan Blackmore's interdisciplinary introduction offers entry points to all the major debates revolving around consciousness. The great density and richness of the topics covered – the second edition comprises more than 450 pages without references – are offset by the readable style of writing and the many helpful illustrations of key concepts and theories. Blackmore guides the reader through philosophical theories and empirical results of various disciplines, ranging, for example, from neuroscience and quantum mechanics to the psychological study of mystical experiences. Seager, W. (2016) Theories of Consciousness – An Introduction and Assessment. 2nd ed. London: Routledge. Seager's introduction to consciousness starts with a comprehensive discussion of the modern origin of the mind–body problem in Descartes and then devotes chapters to identity theories, representationalism, higher-order theories, Dennett's views on consciousness, panpsychism and reflexive theories.
In comparison with earlier editions, he has not only updated the texts, but also added chapters on animal consciousness, on the relations among physicalism, emergence and consciousness, and on neutral monism. The book provides substantive, yet accessible and engaging discussions of all the theories presented. Velmans, M. and Schneider, S., eds. (2007) The Blackwell Companion to Consciousness. Malden: Blackwell Publishing. In a well-balanced mix of authors with different disciplinary backgrounds, this companion covers the whole spectrum of questions currently holding the attention of philosophers and scientists of consciousness. Its fifty-five chapters introduce major philosophical, psychological and neuroscientific theories of consciousness; outline topics of consciousness research that are widely discussed in these disciplines; and explore the varieties and scope of this phenomenon in nature.
21
Research Resources
Journals
Journal of Consciousness Studies http://www.imprint.co.uk/product/journal-of-consciousness-studies/ The Journal of Consciousness Studies, published by Imprint Academic, highlights the interdisciplinarity of all research into consciousness. It is thus open to contributions from all disciplines relevant to the study of consciousness, including cognitive science, philosophy and neurophysiology. Its editorial board includes prominent experts from various disciplines; its principal editor is Valerie Gray Hardcastle. The JCS also runs an online forum and provides samples of email discussions: http://www.imprint.co.uk/category/jcs-blog/ http://212.48.84.29/~imprint/wp-content/uploads/2015/03/JCS-Online-Digestof-the-Key-Debates.pdf
Neuroscience of Consciousness http://nc.oxfordjournals.org/ Neuroscience of Consciousness is a relatively new, open-access journal published by Oxford University Press. Its editor-in-chief is Anil Seth. It focuses on research into the biological basis of consciousness, but also publishes empirically and neuroscientifically relevant papers from disciplines like psychology and philosophy. It is partnered with the Association for the Scientific Study of Consciousness.
Frontiers in Consciousness Research http://journal.frontiersin.org/journal/psychology/section/consciousnessresearch#about
Frontiers in Consciousness Research is a specialized section of the open-access journal Frontiers in Psychology and thus places its main emphasis on the psychological study of consciousness. It aims to illuminate the nature of consciousness as such, as well as related topics like volition, agency and free will. Its Specialty Chief Editor is Morten Overgaard.
Consciousness and Cognition http://www.journals.elsevier.com/consciousness-and-cognition/ Consciousness and Cognition is an Elsevier journal devoted to the scientific study of consciousness, volition and the self. It incorporates different perspectives from the natural sciences, featuring research on a number of topics of philosophical interest like blindsight, the neuropathology of consciousness and the pathology of the self and of self-awareness. Its editor-in-chief is Bruce Bridgeman.
Psychology of Consciousness http://www.apa.org/pubs/journals/cns/index.aspx Psychology of Consciousness: Theory, Research, and Practice publishes articles related to the topic of consciousness from various psychological subdisciplines, such as cognitive, clinical and neuropsychology. The focus is on empirical rather than theoretical contributions. It is published by the American Psychological Association.
Phenomenology and the Cognitive Sciences http://www.ummoss.org/pcs/ Phenomenology and the Cognitive Sciences is a Springer journal dedicated to research at the intersection of phenomenology, cognitive science and analytic philosophy of mind. Seeking to build a bridge between first-person experience and the experimental study of the mind, it features articles on a great variety of topics relating to the study of consciousness, including discussions of methodological issues. Dan Zahavi and Shaun Gallagher are its current editors-in-chief.
Psyche http://journalpsyche.org/
Psyche was an online interdisciplinary journal publishing research on the nature of consciousness and its relation to the brain. It was open to contributions from disciplines as diverse as philosophy, psychology and physics. From 2008 to 2010 it was an official publication of the Association for the Scientific Study of Consciousness. Although it no longer accepts new articles, it still runs an archive where all issues from 1994 to 2010 are publicly available.
Societies
Association for the Scientific Study of Consciousness http://www.theassc.org/ The ASSC (Association for the Scientific Study of Consciousness), founded in 1994, is an academic association that brings together researchers from cognitive science, neuroscience, philosophy and other disciplines relevant to the exploration of the nature, function and underlying mechanisms of consciousness. To encourage and facilitate the systematic and interdisciplinary investigation of these topics, the ASSC runs an archive with publications by its members, organizes an annual conference and awards the William James Prize for outstanding contributions to the empirical or philosophical study of consciousness by young scholars.
International Association for Computing and Philosophy http://www.iacap.org/ The IACAP (International Association for Computing and Philosophy) promotes research at the intersection of philosophy and computation and explores the prospects of using information and communication technology in the service of philosophical research. It includes the Society for Machines and Mentality as a special interest group with a focus on artificial intelligence and machine consciousness.
Society for Philosophy and Psychology http://www.socphilpsych.org/ Founded in 1974 by Jerry Fodor, the Society for Philosophy and Psychology is a scientific and educational society that seeks to foster interactions between
philosophers and psychologists in North America on all topics of common concern. These include, but are not limited to, various questions about consciousness. To this end, the SPP (Society for Philosophy and Psychology) annually organizes conferences and awards several prizes.
European Society for Philosophy and Psychology https://korpora.zim.uni-due.de/espp/committee/index.html The ESPP (European Society for Philosophy and Psychology) is the European counterpart to the SPP in North America and unites philosophers, psychologists and linguists from European universities. Like the SPP, the ESPP organizes annual meetings covering a broad range of topics, many of which are more or less directly related to problems of consciousness.
MindNetwork http://www.mindcogsci.net/ The MindNetwork is a UK-based network of philosophers of mind and cognitive scientists who meet twice a year to exchange ideas and discuss research papers, many of which bear on questions about consciousness.
Research Centres and Institutes
Center for Consciousness Studies, Arizona http://www.consciousness.arizona.edu/ The Center for Consciousness Studies in Tucson, Arizona, is probably the largest and best-known institution for interdisciplinary research on consciousness. It aims to integrate a broad spectrum of perspectives, ranging from scientific disciplines like psychology, neuroscience and medicine to philosophy and other fields in the humanities. Envisioning itself as a 'forum for original thinking on the nature of our existence', the Center for Consciousness Studies (CCS) provides web courses, supports research projects and hosts lecture series and workshops on various aspects of the problems of consciousness. Among the activities the CCS engages in, The Science of Consciousness (TSC) conference (formerly called Toward a Science of Consciousness) is particularly worth mentioning. Since it was first held in 1994, the TSC has been taking
place in alternate years in Tucson and at other locations around the world, and has had a profound impact on the reawakening of scientific and philosophical interest in, and the establishment of a truly interdisciplinary science of, consciousness.
Centre for Consciousness, Australian National University http://consciousness.anu.edu.au/ The Centre for Consciousness, located within the School of Philosophy at the Australian National University, promotes philosophical research into questions of consciousness and analytic philosophy of mind more generally. Its current and past members, such as Frank Jackson, David Chalmers and Daniel Stoljar, have all made seminal contributions to the philosophical study of consciousness.
Center for the Explanation of Consciousness, Stanford University http://csli-cec.stanford.edu/ The Center for the Explanation of Consciousness is a research initiative at the Center for the Study of Language and Information at Stanford University. It hosts symposia and workshops with the aim of probing different approaches – both philosophical and scientific – to the explanation of consciousness.
The Center for Consciousness Science, University of Michigan https://consciousness.med.umich.edu/about The Center for Consciousness Science at the University of Michigan Medical School pursues progress on issues concerning consciousness in research and education as well as in clinical care, for example in the application of novel techniques. Its focus lies on medical disciplines like neuroscience, anesthesiology, physiology and psychiatry, but it also fosters relations with relevant disciplines in other departments.
The Mind Group, University of Frankfurt http://fias.uni-frankfurt.de/mindgroup/ The Mind Group at the University of Frankfurt is a platform for young academics in philosophy and the empirical sciences whose research focuses on consciousness,
cognition and the mind in general; one of its major aims is to bridge the gap between the sciences and the humanities.
The Sackler Centre for Consciousness Science, University of Sussex http://www.sussex.ac.uk/sackler/ The Sackler Centre for Consciousness Science at the University of Sussex explores the biological bases of consciousness – in health and disease – with research groups on theory and modelling; embodiment and self; perception; time; and clinical applications of new treatments. Founded in 2010, it combines approaches to consciousness from psychology with those from informatics and engineering, the life sciences and medicine.
The Berlin School of Mind and Brain http://www.mind-and-brain.de/overview/ The Berlin School of Mind and Brain, affiliated with the Humboldt-Universität zu Berlin, offers a master's degree, a doctoral programme and a postdoctoral programme in the study of higher cognitive functions like consciousness, perception and decision-making. It is dedicated to research connecting neurobiology with psychology, linguistics or philosophy, and provides aspiring academics with training in interdisciplinary collaboration.
Encyclopedias and Dictionaries
Stanford Encyclopedia of Philosophy http://plato.stanford.edu/ The Stanford Encyclopedia of Philosophy contains many entries on topics related to consciousness, ranging, for example, from specific theories and different kinds of consciousness to prominent arguments and historically influential philosophical or scientific movements. Entries are written by leading experts and mostly give a comprehensive overview of the state of a debate, including extensive bibliographies and reading lists.
Internet Encyclopedia of Philosophy http://www.iep.utm.edu/ The Internet Encyclopedia of Philosophy is not quite as extensive as the Stanford Encyclopedia, but still provides many useful articles on various philosophical issues relevant to the study of consciousness.
Dictionary of Philosophy of Mind https://sites.google.com/site/minddict/ The Dictionary of Philosophy of Mind offers short and precise definitions of key concepts in the philosophy of mind. Before publication, entries are subjected to blind review, and they are intended to provide uncontroversial explanations of important terms.
Glossary of Terms http://christofkoch.com/glossary-of-terms/ On his website, Christof Koch furnishes an updated version of the glossary contained in his book The Quest for Consciousness (Roberts and Company Publishers, 2004). In it he defines many of the key terms relating to the study of consciousness and the brain, with a special focus on visual perception.
Bibliographies
PhilPapers http://philpapers.org/browse/philosophy-of-consciousness PhilPapers is an extensive, regularly updated bibliography of articles and books from all philosophical subdisciplines. Its voluminous section on the philosophy of consciousness – edited and maintained by David Chalmers – covers virtually all philosophical debates revolving around consciousness.
Cogprints http://cogprints.org/
Cogprints is an electronic archive storing papers from psychology, neuroscience and linguistics, as well as from those subfields of computer science, philosophy, biology and other disciplines that are relevant to the study of cognition. Authors can self-archive their papers, which are then freely accessible.
MindPapers http://consc.net/mindpapers MindPapers is an extensive bibliography edited by David Chalmers and David Bourget that includes the PhilPapers bibliography on the philosophy of consciousness as one part, complemented by its 'cousins' on Intentionality, the Science of Consciousness, Philosophy of Cognitive Science, Philosophy of Artificial Intelligence, Metaphysics of Mind and Perception.
People with online papers in philosophy http://consc.net/people.html David Chalmers has compiled a list of links to the personal webpages of philosophers who have made some of their papers available online. As it is no longer updated, not all of the links are active, but the list remains valuable because it categorizes the sites by the philosophical subdisciplines to which the papers contribute.
Blogs and other websites
Philosophy of Brains http://philosophyofbrains.com/ The Philosophy of Brains blog – edited by John Schwenkler – is a forum for work in the philosophy and science of mind, reporting on events and new publications in the field and providing a platform for discussion. It hosts the annual Minds Online Conference, dedicated to all philosophical disciplines contributing to the study of mind: http://mindsonline.philosophyofbrains.com/
Consciousness Redux http://www.scientificamerican.com/department/consciousness-redux/ Consciousness Redux is a column by neuroscientist Christof Koch in Scientific American Mind, in which he summarizes and comments on recent experimental and theoretical developments in the scientific study of consciousness.
Science and Consciousness Review http://www.sciconrev.org/ The Science and Consciousness Review publishes reviews and summaries of work on the nature of consciousness. Initially intended as an instrument to stimulate and strengthen networking within the community of scientists working on consciousness, it no longer seems to be updated on a regular basis.
Consciousness Online http://consciousnessonline.com/ Between 2009 and 2013, the Consciousness Online Webpage hosted five online conferences on the philosophy of consciousness. Keynotes were held by renowned experts such as David Rosenthal, Paul Churchland and Bernard Baars, and the results were published in the Journal of Consciousness Studies. The webpage has archived all conference materials, including videos of the talks and subsequently published papers.
22
A–Z Key Terms and Concepts
Access versus phenomenal consciousness
The term 'access-consciousness' was introduced by Ned Block (Block, N. (1995). 'On a Confusion About a Function of Consciousness', Behavioral and Brain Sciences, 18 (2), 227–47) as a distinct type of state consciousness. It can be characterized in functional terms: someone's mental state is conscious in this sense if its content is available to him or her for the purposes of reasoning and the control of behaviour. This definition is contrasted with what Block calls 'phenomenal consciousness', that is, the subjective feeling accompanying a conscious mental state. A mental state is phenomenally conscious if there is something 'it is like to be' in that state (Nagel, T. (1974). 'What is it like to be a bat?', Philosophical Review, 83, 435–50), as when someone experiences a sharp pain or hears a strange noise. In many cases, the applications of the two definitions coincide. A visual perception can thus be called 'conscious' both because of its characteristic qualitative features and because the visual information it carries can guide the reasoning and behaviour of the organism in question. Less clear, however, is the case of highly abstract thinking – do these kinds of thoughts, which are presumably conscious in the 'access' sense, also have distinct phenomenal characteristics? Furthermore, to show that access and phenomenal consciousness can come apart, Block refers to psychological experiments in which subjects were asked to report items with which they had previously been confronted for a short period of time. Since the number of items the subjects were able to report depended on the precise questions they were asked, there seem to be some items of which the subjects were phenomenally conscious when they saw them, but to the perception of which they had no full access, since they could not report them unless the 'right' question was asked.
Block thus concluded that phenomenal consciousness may ‘overflow’ access
consciousness. Nevertheless, the interpretation of such experiments is controversial, and of course, what they are evidence for depends on just how we specify the conditions that must be met for a mental state to be conscious in the 'access' sense. See also: QUALIA, FUNCTIONALISM
Animal consciousness
Animal consciousness is studied for various reasons. First, the degree to which it seems reasonable to ascribe consciousness to non-human animals has quite direct implications for their moral status. Further, as is the case with many traits of currently living human beings, investigation of closely related species might provide insight into the evolutionary history of consciousness. Finally, questions about the evidence required to confirm scientific hypotheses about putative conscious mental states of animals relate to broader issues of scientific methodology. This becomes apparent in view of the fact that the acceptance of animal consciousness as a legitimate object of scientific investigation turned on the abandonment of a strictly behaviouristic methodology for the study of animal behaviour (Allen, C. and Bekoff, M. (1997). Species of Mind – The Philosophy and Biology of Cognitive Ethology, Cambridge, MA: MIT Press). Now the questions whether we can know that non-human animals are conscious and, if so, what it is like to be a member of a non-human species (Nagel, T. (1974). 'What is it like to be a bat?', Philosophical Review, 83, 435–50) can be seen as instances of the general epistemological problems afflicting alleged knowledge of other minds, albeit aggravated by the lack of language in non-human animals. Nonetheless, it seems fairly uncontroversial that at least mammals and birds are conscious, if sentience is taken as a minimal condition for (phenomenal) consciousness. Yet if we turn instead to, for example, invertebrates, consensus shrinks. Ultimately, both adherents and opponents of the hypothesis that a particular species is conscious rely for their arguments on what they regard as relevant similarities or differences between humans and other animals in terms of behaviour, nervous system, brain anatomy, etc.
And of course, the degree to which animals are conscious depends not only on the epistemological standards we employ, but also on our theory of consciousness as such. For example, higher-order thought (HOT) theories (see HIGHER-ORDER MENTAL STATES) will exclude animals
without linguistic competences comparable to those of human beings to the degree that thought requires language (Glock, H.-J. (2000). ‘Animals, Thoughts and Concepts’, Synthese, 123 (1), 35–64). See also: CREATURE VS STATE CONSCIOUSNESS, ARTIFICIAL INTELLIGENCE
Attention and awareness
There is no universally accepted definition of the psychological concept of 'attention', but it seems safe to say that attending is a matter of selecting information from the total amount available to an organism for enhanced or more thorough processing. Although we do not yet have a complete understanding of its neuronal basis, it is agreed that the crucial characteristics of attention are its selectivity and limited capacity – we do not and cannot pay attention to all the stimulus inputs reaching our sensory organs. When we view a painting, for example, there are always certain features that 'escape' us. Now, it seems natural to suppose a very close connection between attention and consciousness or – as it is often called in psychological experiments – awareness: if the presence of a group of people in the background of a painting slips my attention, I am not consciously perceiving it. But I can become conscious of the people if they are brought to my attention. Nonetheless, evidence is increasing that attention and awareness can come apart. First, attention is not always necessary for awareness: for example, if subjects are unexpectedly presented with the picture of a natural scene for a time span too short even to focus attention on it, they are nonetheless capable of correctly reporting the gist of the scene. Second, attention does not seem to be sufficient in every case. Experiments to that effect involve the examination of so-called blindsight. Here, persons suffering from partial cortical blindness – that is, from blindness caused by lesions of the primary visual cortex – are presented with objects in the blind part of their visual field. Although they report not seeing the stimuli, they nonetheless often choose the correct answer to specific questions about the objects when forced to simply 'guess'. Apparently, then, these persons attend to stimuli they do not consciously perceive.
Needless to say, though, the results of these and similar experiments are subject to different interpretations (e.g. Mole, C., Smithies, D. and Wu, W., eds. (2011). Attention – Philosophical and Psychological Essays, Oxford: Oxford University Press).
See also: ACCESS VS PHENOMENAL CONSCIOUSNESS, NEURAL CORRELATES OF CONSCIOUSNESS
Artificial intelligence
Artificial intelligence (AI) is the research programme that attempts either to simulate or to duplicate natural (i.e. human or animal) intelligence. The aim of simulating intelligence corresponds to what is called weak or non-mentalistic AI, whereas the more ambitious strong or mentalistic AI strives to create, that is, duplicate, natural intelligence (Searle, J. (1980). 'Minds, Brains, and Programs', Behavioral and Brain Sciences, 3, 417–57). Two general approaches to AI can be distinguished. In the case of rule-structured programming, a machine is programmed to execute specified commands whenever certain conditions are fulfilled. The basic functioning of such digital machines thus resembles that of a modern computer. Currently, however, connectionist or neural networks are receiving much attention from AI researchers. Neural networks are models of the brain that consist of a large number of nodes ('neurons') with variable weights measuring the strength of the connections between them. Each node has an individual threshold for its activation, and upon receiving sufficient input, it sends its activation value along to the nodes it is connected with. This research programme brings with it the hope of a greater response-flexibility of the machine vis-à-vis unforeseen situations. Unlike digital computers, decentralized connectionist networks lack a central processing unit – rather than by means of a symbolic code, they represent information analogically, through the patterns of activation induced by the data being processed. AI is not exactly the same as artificial or machine consciousness, as the criteria for a machine's being conscious and for its being intelligent are not expected to coincide perfectly, especially if by 'intelligent' we mean the successful completion of specific tasks. Designers of conscious machines focus more strongly on the machine's internal representations of the world and their acquisition.
In general, attempts to produce or model artificial consciousness, whether successful or not, sharpen the questions of what it means to be conscious and of how we can detect consciousness in other beings (see TURING TEST) (Holland, O., ed. (2003). Machine Consciousness, Exeter: Imprint Academic). Unsurprisingly, however, the greatest challenges for the creation of conscious machines seem to be the phenomenal aspects of consciousness.
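The contrast between the two approaches sketched above can be made concrete in a few lines of code. The following is purely illustrative – the rule table, weights and threshold are invented for the example, and no actual AI system is being reproduced:

```python
# Illustration of the two approaches to AI described above. All rules,
# weights and thresholds below are invented for the example.

def rule_based(symbol):
    """Rule-structured programming: execute a specified command
    whenever a certain condition (here: a matching symbol) is met."""
    rules = {"hello": "greet", "bye": "farewell"}
    return rules.get(symbol, "ignore")

def connectionist_node(inputs, weights, threshold):
    """A single connectionist 'neuron': weigh the incoming activations
    and pass the summed activation on only if it reaches the node's
    individual threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total if total >= threshold else 0.0

# The rule-based system responds by explicit symbol lookup ...
print(rule_based("hello"))                              # greet
# ... while the node responds to the overall pattern of its input:
print(connectionist_node([1.0, 0.5], [0.6, 0.8], 0.9))  # 1.0
```

In a full connectionist network, many such nodes would be linked in layers, so that a pattern of activation, rather than a stored symbolic rule, carries the information.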
See also: CHINESE ROOM ARGUMENT, TURING TEST, HARD PROBLEM OF CONSCIOUSNESS
BINDING PROBLEM: see UNITY OF CONSCIOUSNESS
CARTESIAN DUALISM: see SUBSTANCE DUALISM
CARTESIAN EGO: see SELF
Chinese room argument
The Chinese room argument is a thought experiment by John Searle, aimed at undermining the prospects of strong AI (Searle, J. (1980). 'Minds, Brains and Programs', Behavioral and Brain Sciences, 3 (3), 417–57) (see ARTIFICIAL INTELLIGENCE). More precisely, he tried to demonstrate that it is impossible for appropriately programmed machines actually to be intelligent or, for that matter, conscious. To this end, Searle asks us to imagine a room in which a non-Chinese speaker 'answers' Chinese questions written on small cards and slid into his room through a slot. That is, although he does not understand the meaning of the symbols, he precisely follows the instructions in his Chinese manual, which specify in great detail what cards he has to send out of the output slot in reaction to the incoming cards. The 'Chinese room' thus passes the Turing test (see TURING TEST). Yet despite the apparent intelligence of the answers, which look perfectly reasonable to any Chinese speaker, ex hypothesi the person in the room does not understand anything that is written on the cards. Hence, or so Searle argues, input–output devices that simply manipulate syntax are not capable of real intelligence. Intrinsic intentionality cannot be created artificially (see INTENTIONALITY). The Chinese room argument has provoked considerable debate. An initial worry concerns the details of the analogy: in order to show that a machine cannot think, it would actually be necessary to demonstrate that the whole room does not understand Chinese – the person inside the room merely corresponds to a part of the machine, for example the central processing unit. Further criticisms address the simplicity of Searle's experimental set-up. In more complex scenarios, it is argued, it might be possible to create real understanding, such as when the computer program simulates the actual sequence of neural firings in a competent Chinese speaker (Preston, J. and Bishop, M. (2002).
Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford: Oxford University Press).
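The set-up Searle describes can be caricatured as a program that maps input cards to output cards by pure syntactic lookup. The rulebook entries below are invented for illustration; the point is only that nothing in the procedure consults the meaning of the symbols:

```python
# A toy 'Chinese room': answers are produced by looking up the input
# string in a rulebook. The entries are invented examples; the lookup
# never consults the meaning of the symbols it manipulates.
rulebook = {
    "你好吗": "我很好",        # 'How are you?' -> 'I am fine'
    "你是谁": "我是一个房间",   # 'Who are you?' -> 'I am a room'
}

def room(card):
    # Follow the manual exactly; understanding plays no role here.
    return rulebook.get(card, "请再说一遍")  # default: 'Please say that again'

print(room("你好吗"))  # 我很好
```

To a Chinese speaker outside, the answers may look perfectly sensible; inside, there is only symbol matching, which is exactly the gap Searle's argument exploits.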
See also: ARTIFICIAL INTELLIGENCE, TURING TEST, INTENTIONALITY
CONCEIVABILITY ARGUMENT: see ZOMBIES
Creature vs state consciousness
The adjective 'conscious' applies to different kinds of entities. When we debate whether a particular mental state or process is conscious, we are discussing putative instances of state consciousness. Ascribing consciousness to whole organisms, in contrast, is a matter of creature consciousness. An organism's being conscious is contrasted with its being asleep or comatose; alternatively, conscious organisms are contrasted with entities of which it makes no sense to ask what it is like to be them. Thus we can say that a cat is conscious because it is awake, or that cats are conscious because there is something it is like to be a cat. Creature consciousness probably admits of degrees, a possible minimal requirement being sentience, that is, the ability to perceive and respond to stimuli from the environment. Yet it is hard to specify just what sensory capacities creature consciousness calls for. What is fairly uncontentious, though, is the existence of a difference between mere consciousness and the more demanding notion of self-consciousness (see SELF-CONSCIOUSNESS). In addition to an organism's being conscious simpliciter, we sometimes also use transitive notions of creature consciousness, as when, for example, a cat is conscious of the mouse in front of it (Rosenthal, D. (1993). 'State Consciousness and Transitive Consciousness', Consciousness and Cognition, 2, 355–63). Transitive creature consciousness comes close to state consciousness, since it seems to call for a mental state that represents the object at which the consciousness of the organism is directed. In principle, then, creature consciousness could be defined in terms of state consciousness, with an organism being conscious if and only if it is a subject of conscious mental states. What militates against this proposal, however, is the supposed greater amenability of creature consciousness, as opposed to phenomenal state consciousness, to purely physicalist explanations.
See also: ACCESS VS PHENOMENAL CONSCIOUSNESS, ANIMAL CONSCIOUSNESS, SELF-CONSCIOUSNESS
Eliminativism (eliminative materialism) Eliminative materialism challenges our common-sense view of our mind by contending that some or most of the mental states we normally take for granted
actually do not exist at all. Hence it is claimed that concepts of our common-sense or folk psychology such as ‘belief’ or ‘desire’ are in fact empty and should therefore be abandoned. Two clarifications are in order here: first, in some strains, eliminativism is not targeted against mental states as such, but against their supposed subjective experiential properties (see QUALIA). And second, unlike reductive materialism or physicalism (see PHYSICALISM), eliminativism does not hold that mental states can actually be reduced to brain states and are in that sense nothing over and above them, but that the concepts we typically employ in describing our own mental lives and those of others simply fail to refer to anything (Savitt, S. (1974). ‘Rorty’s Disappearance Theory’, Philosophical Studies, 28 (6), 433–6). To make this hypothesis plausible, eliminative materialists usually start by pointing out that folk psychology is a quasi-scientific theory. That is to say, we use terms like ‘hope’, ‘believe’ and ‘intend’ to explain and predict the behaviour of other persons; mental states thus have the status of theoretical entities postulated by folk psychology (Churchland, P. M. (1981). ‘Eliminative Materialism and the Propositional Attitudes’, Journal of Philosophy, 78 (2), 67–90). But according to eliminativism, this theory is likely to prove radically wrong – it will turn out that there are no entities that possess the properties that folk psychology ascribes to them. To make this prediction plausible, eliminativists sometimes invoke other common-sense views that were refuted by modern science – for example, the belief in absolute space and time. They see no reason why folk psychology should be more successful than folk physics. In addition, it is argued that current neuroscience does not provide any evidence for the typical semantic and syntactic features in terms of which mental states are usually characterized (Churchland, P. S. (1986).
Neurophilosophy: Toward a Unified Science of the Mind/Brain, Cambridge, MA: MIT Press). In reply, realists about mental states can appeal to introspective evidence and point to the enormous success of explanations in terms of mental states. See also: PHYSICALISM, QUALIA
Emergence Emergence is often thought to characterize a relation between higher- and lower-level properties of a system, with the higher-level properties being irreducible to the lower-level ones. Emergent properties are novel and different from, or not explainable in terms of, the properties that give rise to them. Further, the term
is often used with an implicit or explicit temporal dimension, for example, when certain features that appear for the first time in the course of biological evolution are called ‘emergent’. Likewise, consciousness – especially when it comes to its intrinsically intentional and qualitative aspects – is sometimes characterized as an emergent feature of the brain or of certain brain states. Although there is no uniquely accepted definition of ‘emergence’, two broad categories of definitions can be distinguished, although they are not mutually independent. First, metaphysical notions of emergence hold that consciousness is arguably not fully determined ontologically by the brain. For example, in the case of causal emergence, it could be argued that the causal powers of brain states do not exhaust the causal powers of conscious mental states. Other accounts stress the epistemological dimension of emergence, arguing that consciousness cannot be completely explained by reference to the functioning of the brain. Determining the relationship between ‘emergence’, ‘supervenience’ and related concepts is complicated, however, and it has even been questioned whether the concept of ‘emergence’ has any useful role to play at all in the specification of, for example, the brain–consciousness relation (Kim, J. (2006). ‘Emergence: Core Ideas and Issues’, Synthese, 151 (3), 547–59). See also: SUPERVENIENCE, REDUCTION, EPIPHENOMENALISM
Epiphenomenalism Epiphenomenalism with regard to consciousness holds that mental events causally depend on neural events, but are themselves causally inert. Events that appear to be the results of conscious mental states, then, are actually caused by the neural events the conscious states depend on. For instance, the moving of my hand is caused by the neural impulses leading to muscular contractions, whereas my conscious desire to perform this movement is itself an effect of the neural event that leads to those impulses. Epiphenomenalism builds on the observation that explanations of physical events like the one just mentioned do not have to appeal at any point to non-physical entities; there does not seem to be a gap in the causal chain if only physical causes are taken into consideration. At the same time, it seems plausible to hold that mental properties cannot be fully reduced to, and are not identical with, physical properties (see PROPERTY DUALISM). Epiphenomenalism is therefore the consequence of the assumptions that, first, non-physical properties actually exist, second, all
events have exclusively physical causes, and, third, there is no systematic overdetermination. The view that mental properties have no causal powers is at odds with our everyday experience: arguably, introspection reveals to us that it is precisely the painfulness of pain that leads us to perform certain actions that promise relief from pain. In response, however, epiphenomenalists point to the numerous examples in our daily lives where we err in our causal judgements. For instance, we regularly confuse mere correlations with truly causal relations, which is why, or so epiphenomenalists argue, our conviction that mental events are genuine causes may not be well justified after all. Nevertheless, as epiphenomenalism postulates entities without any causal powers, it is a position that many seek to avoid: mental properties without any effects appear somewhat dubious, as it is not clear how we could even have knowledge of them (e.g. Moore (2014). ‘The Epistemic Argument for Mental Causation’, The Philosophical Forum, 45 (2), 149–68), or how they could have evolved in the first place. But the debate about the possibility of mental causation and the plausibility of epiphenomenalism is lively and ongoing. See also: PROPERTY DUALISM, PHYSICALISM
Explanatory gap Joseph Levine coined the term ‘explanatory gap’ to name the incompleteness of our understanding of consciousness ((1983). ‘Materialism and Qualia: The Explanatory Gap’, Pacific Philosophical Quarterly, 64, 354–61). In particular, he is concerned with the ‘gap’ left open by explanations of conscious phenomena that invoke the mechanisms in virtue of which the causal roles of the phenomena are realized, but without explaining their phenomenal features. For example, an explanation of pain that only describes the mechanism of the firing of C-fibres leaves unanswered the question as to why the firing of C-fibres feels the way it does, namely painful. The present lack of a complete physicalist explanation of consciousness is sometimes used in arguments against physicalism (e.g. Chalmers, D. (1996). The Conscious Mind, New York: Oxford University Press). In general, whether an explanatory gap exists, and if so, what metaphysical consequences this implies, depends on our criteria for a successful explanation of consciousness, and on the strength we assign to the hypothesis that there is a gap. For example, Levine argues that the deducibility of facts
about consciousness from physical facts is not sufficient for their explanation as long as we don’t know why the involved correlations between the mental and the physical hold. Further, if we think that the inability of the sciences to provide a complete explanation of consciousness is merely due to practical limitations of current science, this provides less support for metaphysical claims than the contention that consciousness is in principle unexplainable by cognitive agents like ourselves. Colin McGinn prominently adopts the latter claim (e.g. in (2000). The Mysterious Flame – Conscious Minds in a Material World, New York: Basic Books), while maintaining that consciousness is a physical process – hence his naturalistic mysterianism. See also: REDUCTION, HARD PROBLEM OF CONSCIOUSNESS
Free will The exercise of free will is manifest in those actions of a person that are in some sense within her own control. As it is frequently put, S’s action is free if S could have acted otherwise, that is, if S could have refrained from the action or opted for an alternative action instead. Needless to say, this basic idea can be spelt out in many different ways, depending on how one understands the ‘control’ we have or the sort of modality that is expressed by the ‘could’. Crucial questions in the debate around free will arise from a supposed conflict between our having free will and determinism: it seems difficult to square the idea that every event is causally necessitated by a previous event according to the laws of nature with our alleged control over certain events, namely our actions. The issues surrounding this free will problem are closely intertwined with debates on the essence of moral responsibility. Now, besides purely conceptual approaches to questions regarding free will, there have also been various attempts to examine this faculty on the basis of its putatively close connection with experience or consciousness. On the one hand, phenomenological data suggest that there are at least some actions that are freer than others: the feeling of ‘losing control’ when overwhelmed by strong emotions such as anger is familiar to most people (Nahmias, E. et al. (2004). ‘The phenomenology of free will’, Journal of Consciousness Studies, 11 (7–8), 162–79). On the other hand, in experiments by Benjamin Libet and others, actions turned out to be physically initiated already before the subject’s conscious decision to act (Libet, B. (1985). ‘Unconscious cerebral initiative and the role of conscious will in voluntary action’, The Behavioral and Brain Sciences,
8, 529–66), which was taken to show that they are not results of conscious willings. However, the interpretation of these data remains highly controversial. See also: PHENOMENOLOGY
Functionalism Functionalism in the philosophy of mind is the position that a particular mental state instantiates a type of mental state in virtue of its relations to input stimuli, other mental states and output behaviours. Thus, whether some mental state I am in right now is a belief or rather a fear depends on the causal role it occupies, not on its intrinsic features. As a highly simplified example, somebody’s being in pain might be characterized as the state that is caused by tissue damage and that in turn leads to the belief that something is wrong with his or her body, the desire to cause this state to cease, and behaviours like wincing or retracting the injured body part. One standard criticism, however, points to the possibility of creatures that behave in exactly the same way as persons who are in pain, but feel no pain, that is, they do not experience the specific phenomenal property of pain (Block, N. (1980). ‘Are absent qualia impossible?’, The Philosophical Review, 89 (2), 257–74) (see ZOMBIES). On the other hand, one chief advantage functionalism has over type physicalism is the ease with which it can endorse the possibility of multiply realizable mental states (see PHYSICALISM). For functionalism, just as for computationalism, mind is – metaphorically speaking – not so much a question of the material constitution of the hardware as of the program that is installed on it. Functionalism thus leaves open the possibility that the functional roles of mental states be fulfilled by substrates other than (human) brain states. This broad characterization of functionalism fits a number of more specific positions (Shoemaker, S. (1984). Identity, Cause and Mind, Cambridge: Cambridge University Press, Ch. 12, 261–86).
One important distinction reflects the different theories of mental states by which the functional definitions are supposed to be informed: psychofunctionalism relies on the results of empirical psychology; analytic functionalism, by contrast, proceeds by an analysis of our common-sense concepts of ‘belief’, ‘pain’ and the like. In addition to the explanation of particular types of mental states, there are different attempts to define consciousness as a whole via its function(s). In particular, cognitive theories of consciousness like the global workspace theory (GWT) (see GLOBAL WORKSPACE) can be interpreted in this way.
See also: ACCESS VS PHENOMENAL CONSCIOUSNESS, PHYSICALISM, GLOBAL WORKSPACE
Global workspace The notion of a ‘global workspace’ originally comes from AI (see ARTIFICIAL INTELLIGENCE) and figures prominently in several related models of consciousness. Cognitive scientist Bernard Baars first suggested that conscious mental content is characterized by its global availability to cognitive processes such as attention, memory or verbal report, and proposed that this availability comes about when information is in the global workspace, which can be seen as a kind of momentary memory (e.g. Baars, B. J. (1988). A Cognitive Theory of Consciousness, Cambridge: Cambridge University Press). The starting point for Baars’s GWT is the observation that although the brain involves massive parallel processing, conscious experiences appear in a linear, serial way. According to the GWT, the capacity limits of the global workspace account for, for example, our inability to consciously carry out several tasks at the same time. Unconscious processes outside the global workspace, and coalitions among them, thus compete for access to it. Building on the GWT, Dehaene and others have developed the notion of a ‘global neuronal workspace’ (GNW) (Dehaene, S. and Naccache, L. (2001). ‘Towards a cognitive neuroscience of consciousness – basic evidence and a workspace framework’, Cognition, 79, 1–37). They argue that consciousness – both access and phenomenal forms of it – occurs whenever the content in question is in the GNW, that is, in a neural system distributed over the brain whose function is the interconnection of specialized brain areas. Thus a brain process that reaches the global workspace is conscious, as it is then ‘broadcast’ to the rest of the brain. See also: ARTIFICIAL INTELLIGENCE, FUNCTIONALISM, MULTIPLE DRAFTS MODEL
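The competition-for-access picture described in this entry lends itself to a deliberately simple illustration in code. The following is a toy sketch only, not Baars’s or Dehaene’s actual model: the class name, the numeric ‘activation’ scores and the winner-take-all rule are invented for illustration, and real global workspace models involve recurrent neural dynamics rather than a single sorted list.

```python
class GlobalWorkspace:
    """Toy sketch of a Baars-style workspace: many specialist processes
    compete, and only the strongest coalition's content enters the
    capacity-limited workspace, from which it is 'broadcast'."""

    def __init__(self, capacity=1):
        self.capacity = capacity   # the workspace holds very little at once
        self.contents = []

    def compete(self, coalitions):
        """coalitions: list of (content, activation) pairs.
        The highest-activation contents win access; everything else
        remains 'unconscious' processing."""
        ranked = sorted(coalitions, key=lambda pair: pair[1], reverse=True)
        self.contents = [content for content, _ in ranked[:self.capacity]]
        return self.contents


ws = GlobalWorkspace(capacity=1)
winner = ws.compete([("visual: red patch", 0.9),
                     ("auditory: faint hum", 0.4),
                     ("memory: lunch plan", 0.6)])
print(winner)  # ['visual: red patch'] -- serial, capacity-limited access
```

The capacity parameter mirrors the GWT claim that the workspace’s limits explain why we cannot consciously perform several demanding tasks at once.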
Hard problem of consciousness The terminology of ‘hard’ and ‘easy’ problems is due to David Chalmers. He maintains (e.g. (1995). ‘Facing up to the Problem of Consciousness’, Journal of Consciousness Studies, 2 (3), 200–219) that questions about consciousness can
be seen as falling into one of these two categories, with only the solutions to the ‘easy’ problems being within the reach of (the standard methods of) science. Examples include our ability to categorize or react to environmental stimuli, or the difference between wakefulness and sleep. Answers to these questions, though certainly not trivial and the result of painstaking scientific research, are comparatively easy, as they do not raise special philosophical or conceptual issues. The hard problem, by contrast, concerns the phenomenal or subjective aspects of consciousness (qualia): a scientific explanation of, for example, reactions to environmental stimuli, consisting in the detailed and correct description of the mechanisms responsible for the electrochemical processing of sensory inputs and their conversion into certain behavioural reactions, fails to answer – or rather, does not address – the question as to why these processes are accompanied by a certain feeling of what it is like to be subject to this particular stimulus. In general, the hard problem of consciousness is thought to be the most serious challenge to physicalist and functionalist theories of consciousness. See also: EXPLANATORY GAP, REDUCTION, QUALIA
Higher-order mental states Mental states that intend or represent other mental states are higher-order mental states. According to higher-order theories of consciousness, the distinguishing feature of conscious mental states is their being accompanied by a higher-order mental state that represents the first-order state in question. The main line of thought supporting this contention starts with the observation that we can have not only conscious but also unconscious mental states – certainly beliefs or desires and, somewhat more controversially, maybe also perceptual states. The mark of the conscious states is then assumed to be our awareness of them, which is in turn construed as a second-order mental state. Such an account can be more or less ambitious, depending on whether it is supposed to provide a comprehensive account of consciousness, including its phenomenal, ‘what-it-is-like’ aspects, or only to characterize a specific kind of consciousness. Ambitious higher-order theories try to reduce qualitative features of consciousness to representation, and therefore belong to the family of representationalist theories, broadly construed (see REPRESENTATIONALISM). One main differentiation among higher-order theories concerns the mode in which the conscious first-order mental state is supposed to be represented: HOT
theories think of the relevant higher-order states as noninferential thoughts or beliefs (e.g. Rosenthal, D. (1993). ‘Thinking that one thinks’, in Davies, M. and Humphreys, G. (eds.), Consciousness – Philosophical and Psychological Essays, 197–223, Oxford: Blackwell). Higher-order perception theories, by contrast, align them more closely with perception (e.g. Lycan, W. (1996). Consciousness and Experience, Cambridge, MA: MIT Press), assuming a kind of inner sense that produces fine-grained, non-conceptual analogues of the outputs of our first-order sense organs. Focusing on HOTs, we can further distinguish between theories which require of a conscious mental state that it be the object of an actually occurring HOT, on the one hand, and those that settle for the mere disposition of a mental state to be an intentional object, on the other (Carruthers, P. (2000). Phenomenal Consciousness – A Naturalistic Theory, Cambridge: Cambridge University Press). One general problem for higher-order theories of consciousness is the fact that normally, thinking about or perceiving an entity does not make this entity conscious. That is why higher-order theories need to specify further what it is about higher-order mental states that gives rise to consciousness (Goldman, A. (1993). ‘Consciousness, Folk Psychology, and Cognitive Science’, Consciousness and Cognition, 2, 364–82). See also: REPRESENTATIONALISM, PHENOMENAL CONSCIOUSNESS, QUALIA IDENTITY THEORY: see PHYSICALISM
Information integration theory The integration of information is a central feature of consciousness (see UNITY OF CONSCIOUSNESS). The information integration theory (IIT), put forth by Giulio Tononi, goes as far as suggesting that a system is conscious to the degree that it is capable of integrating information ((2008). ‘Consciousness as integrated information: A provisional manifesto’, Biological Bulletin, 215, 216–42). Information is understood along traditional information-theoretic lines: information increases in proportion to the reduction of uncertainty when a particular outcome occurs out of a set of alternative outcomes. For example, a system that can only distinguish between two different outcomes, say, light and dark, cannot generate as much information as another system that is sensitive to a whole range of different shades of colours. Yet in addition to the number of bits of information determined by the size of the repertoire of alternatives, all these
alternative states the system can occupy have to display a certain unity. Tononi suggests construing the integration of information as the information generated by a system as a whole over and above that generated by its parts taken independently. Thus the reason why a digital camera is not conscious, in spite of the huge number of different states all the photodiodes on its chip can occupy, is that the photodiodes are not integrated: the information the chip can generate does not exceed the sum of the information generated by the diodes taken individually. According to the IIT, therefore, consciousness is an information-theoretic property that can be realized to a greater or lesser degree and which does not depend on any specific material substrate for its realization. See also: UNITY OF CONSCIOUSNESS
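The digital-camera example lends itself to a small numerical illustration. The sketch below is deliberately crude: it equates information with the logarithm of the repertoire size and ‘integration’ with the whole-minus-parts difference described in this entry. It is not Tononi’s actual Φ measure, which is defined over effective information across partitions of a system; the function names are invented for illustration.

```python
import math

def information_bits(n_alternatives):
    """Information (in bits) generated when one outcome is selected
    from a repertoire of equally likely alternatives."""
    return math.log2(n_alternatives)

def integrated_information(whole_repertoire, part_repertoires):
    """Crude whole-minus-parts measure: information generated by the
    system as a whole, minus the sum generated by its parts taken
    independently."""
    return information_bits(whole_repertoire) - sum(
        information_bits(n) for n in part_repertoires)

# A one-megapixel sensor: each photodiode distinguishes light vs dark
# (1 bit), but the diodes do not interact, so the whole chip's
# repertoire is just the product of the parts' repertoires.
n_diodes = 1_000_000
camera = integrated_information(2 ** n_diodes, [2] * n_diodes)
print(camera)  # 0.0 -- vast information, but none beyond the parts
```

A system whose whole-repertoire information exceeded the sum contributed by its parts would, on this crude measure, come out as integrated to a positive degree.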
Intentionality The term ‘intentionality’ refers to the abstract relation of thoughts and other mental states (or of the mind as a whole) towards their objects, as when, for example, someone thinks about a theory, is afraid of failing an exam or is longing for the next holidays. As the possibility of fiction, false beliefs or illusion shows, the objects of this directedness or ‘aboutness’ do not have to exist. Arguably, intentionality is a characteristic feature of conscious mental states, which is obvious in the case of, for example, feeling, believing or desiring. This intimate connection was brought to the fore by Brentano (Brentano (1973) [1874]. Psychology from an Empirical Standpoint, London: Routledge), who even took intentionality to be the ‘mark of the mental’. In the meantime, however, philosophers have construed the relation between intentionality and consciousness in numerous ways. Intentionality is sometimes taken to be the foundation of all the other characteristics of conscious mental states, in particular of so-called qualia (see QUALIA). It has been proposed, for instance, that having a perception of an object amounts to nothing more than representing that object in a distinct way (see REPRESENTATIONALISM). By contrast, John Searle interprets the relation the other way round: he first distinguishes the intrinsic intentionality of thoughts, perceptions, etc. from the directedness of artefacts such as pictures, words or machines, to which he ascribes a mere as-if or derivative intentionality, that is, an intentionality that derives from the intentional states of the producers or users of the artefacts in question. He then argues that intrinsic intentionality depends on consciousness, with the intentionality of unconscious
states deriving from their potential to become conscious (Searle, J. (1992). The Rediscovery of the Mind, Cambridge, MA: MIT Press). Intrinsic intentionality appears to be an obstacle for attempts to explain consciousness as a natural phenomenon, and there have been various attempts to ‘naturalize’ intentionality. Daniel Dennett famously denies any genuine difference between intrinsic and derived intentionality. He holds that in employing intentional vocabulary in descriptions or explanations, we only adopt an intentional stance. Treating, say, a calculator as if it had intentions and beliefs may help to explain and predict its behaviour, but it does not imply that it has mental states that are intrinsically intentional – and, according to Dennett, the same holds for our ascriptions of intentional states to persons (Dennett, D. (1987). The Intentional Stance, Cambridge, MA: MIT Press). See also: REPRESENTATIONALIST THEORIES OF CONSCIOUSNESS, QUALIA
Introspection Introspection – literally ‘looking within’ – is the special process by which someone forms beliefs about his or her own mental states (see SELF-KNOWLEDGE). It figures prominently in certain theories of consciousness (e.g. Lycan, W. (1996). Consciousness and Experience, Cambridge, MA: MIT Press), but it is a matter of controversy what the nature of this process is and whether, or in what sense, it is indeed special. Sometimes introspection is seen as a close analogue of sensory perception. Thus, it is thought that we have a kind of inner sense that functions in more or less the same way as our outer sensory organs, except that it is directed at mental states rather than external objects. However, critics have pointed out various disanalogies; unlike in the case of hearing, smelling or seeing, for example, there seems to be no distinct phenomenology corresponding to introspection. Dissociating themselves from the model of perception, other accounts have it that introspection should not be seen as an epistemic process whereby we gain knowledge. A view often credited to Wittgenstein holds that our utterances about mental states should be seen as mere expressions of our current mental states (Heal, J. (1994). ‘Moore’s paradox: A Wittgensteinian approach’, Mind, 103 (409), 5–24), and thus construes introspection as non-epistemic. Others have maintained that introspection is not that special after all; Gilbert Ryle argued that the only difference between
ascribing mental states to ourselves and ascribing them to others consists in the fact that we are always present to observe our own behaviour, which is not the case with other persons (Ryle, G. (2002) [1949]. The Concept of Mind, Chicago: University of Chicago Press). The use of introspection in science raises further questions. Introspection obviously cannot live up to the usual standards of scientific methodology: we cannot introspect other persons’ mental lives, which is why the evidence introspection provides cannot be objectively verified. At the same time, introspection seems to be the only access we have to our mental states. Scientific theories about consciousness that do not account for the data generated by introspection at all seem in a rather weak position to explain consciousness. See also: SELF-KNOWLEDGE, SELF-CONSCIOUSNESS, PHENOMENOLOGY
Knowledge argument The knowledge argument, in its original form proposed by Frank Jackson ((1982). ‘Epiphenomenal Qualia’, Philosophical Quarterly, 32, 127–36; (1986). ‘What Mary didn’t know’, Journal of Philosophy, 83, 291–5), is an attempt to show that phenomenal consciousness is not reducible to physical facts. The argument takes the form of a thought experiment about a super-scientist who lives at some time in the future when all physical truths have been discovered. She thus has complete knowledge of all the facts of physics, chemistry and neurophysiology, including facts about causal relations and functional roles. But she spends her entire life in a black-and-white room, and has never seen any colours until one day she walks outside and sees the coloured world. The intuition that, upon leaving her room and seeing colours for the first time, the scientist will acquire new knowledge is then supposed to show that physicalism is false, that is, that not all facts are physical. The argument has provoked a number of physicalist answers, objecting to different assumptions in the thought experiment. Crucial issues concern the kind of knowledge that the scientist is supposed to acquire, for example whether she gains propositional knowledge or only ‘know-how’ or knowledge by acquaintance. The question has also been raised as to what exactly it means to know ‘all the physical facts’ and whether the a priori deducibility of high-level truths from physical facts is really implied by physicalism. On the other hand, many defences and related arguments (see ZOMBIES) point to the necessity of a
careful interpretation of the claims and the structure of the knowledge argument (Ludlow, P., Stoljar, D. and Nagasawa, Y., eds. (2004). There’s Something about Mary: Essays on Phenomenal Consciousness and Frank Jackson’s Knowledge Argument, Cambridge, MA: MIT Press). See also: QUALIA, ZOMBIES, ACCESS VS PHENOMENAL CONSCIOUSNESS, PHYSICALISM MACHINE CONSCIOUSNESS: see ARTIFICIAL INTELLIGENCE MATERIALISM: see PHYSICALISM
Modal arguments against physicalism Intuitively, the main thesis of physicalism (see PHYSICALISM) is logically contingent – it is not necessary, that is, that mind and brain are identical. After all, this identity, if true, would be the result of empirical research, not of conceptual analysis. Saul Kripke used this contingency, together with his analysis of the semantic and referential peculiarities of proper names, to formulate a modal argument against physicalism. According to Kripke, proper names and natural kind terms are so-called ‘rigid designators’, meaning, roughly, that they refer to the same object in every possible world in which the object exists. Thus, whereas a description such as ‘the richest woman on earth’ takes as its referent whatever meets the description in the world under consideration, the proper name ‘Descartes’ always refers to Descartes. Accordingly, if an identity between two rigid designators holds, it does so with strict necessity. Kripke now gives the example of the proper names ‘Descartes’ – referring to the person Descartes or to his mind – and ‘B’, which is introduced as a proper name of Descartes’s body. If the statement ‘Descartes = B’ is true in our world – as the physicalist would claim it is – then it is necessarily true. But since it is accepted that there is at least one logically possible world where ‘Descartes ≠ B’ is true, ‘Descartes = B’ cannot be true in any world, and, a fortiori, not in the actual one (Kripke, S. (1980). Naming and Necessity, Oxford: Blackwell Publishing). A closely related argument can be formulated using natural kind terms like ‘pain’ and ‘firings of C-fibres’. Now the implications of this sort of argument are not entirely clear; Kripke himself does not seem prepared to endorse substance dualism (see SUBSTANCE DUALISM), and the argument has been criticized in several
ways (e.g. Bayne, S. (1988). ‘Kripke’s Cartesian Argument’, Philosophia, 18, 265–70). What is generally accepted, however, is the condition of adequacy it formulates for any account of physicalism: if physicalism is true, there should at least be an explanation of why we have the strong inclination to think that B could have existed without Descartes, or that C-fibres could have fired without the feeling of pain accompanying them. See also: PHYSICALISM, SUBSTANCE DUALISM
Multiple drafts model The Multiple Drafts Model (MDM) is a theory of consciousness put forth by Daniel Dennett ((1991). Consciousness Explained, Cambridge, MA: MIT Press), the central tenet of which is that consciousness is highly distributed across the brain rather than located in a clearly determinable area. The MDM contends that at any time there are many information-processing streams – multiple drafts or narratives – running in parallel and competing for limited resources in terms of attention and control of behaviour. We make a mistake, therefore, if we think of consciousness as a region in the brain (a Cartesian Theater) with clear boundaries separating conscious from unconscious mental states, and with a spectator like an ‘inner eye’ seeing what happens on stage (see SELF). Despite our impression to the contrary, according to which consciousness exhibits a kind of seriality, with individual mental states successively appearing before an inner observer, consciousness is actually a matter of what Dennett calls ‘cerebral celebrity’, that is, of the attention an individual narrative fragment gets from, and the influence it exerts over, the rest of the brain. In this regard, the MDM displays some similarities with the GWT (see GLOBAL WORKSPACE) from cognitive psychology. One major worry about the MDM is that it is not entirely obvious what kind(s) of consciousness the MDM is supposed to explain (Block, N. (1994). ‘What is Dennett’s Theory a Theory of?’, Philosophical Topics, 22 (1/2), 23–40); that is, one may concede that the MDM captures something important about specific types of consciousness, while maintaining that not all the phenomena we usually call by that name are explainable by means of it. See also: GLOBAL WORKSPACE, SELF
456
The Bloomsbury Companion to the Philosophy of Consciousness
Neural correlates of consciousness The search for neural correlates of consciousness (NCC) – the brain events that underlie conscious mental states – is at once central to the scientific study of consciousness and of considerable philosophical interest (Metzinger, T., ed. (2000). Neural Correlates of Consciousness: Empirical and Conceptual Questions, Cambridge, MA: MIT Press). Research in this field basically proceeds by contrasting the neural activities of conscious and unconscious information processing, measured by, for example, functional magnetic resonance imaging. According to an early suggestion by Francis Crick and Christof Koch ((1990). ‘Towards a Neurobiological Theory of Consciousness’, Seminars in the Neurosciences, 2, 263–75), mental states become conscious when a great number of neurons in the cerebral cortex fire in synchrony with one another at a frequency in the range of 40–70 hertz. A different proposal (Flohr, H. (1995). ‘Sensations and brain processes’, Behavioural Brain Research, 71, 157–61) is that conscious experiences depend on the rate at which active neural assemblies are formed. A key role is thereby assigned to the activation of N-methyl-D-aspartate receptor channel complexes, which are responsible for the control of synaptic weights and thus influence the formation of cell assemblies. Research into possible NCCs can involve the investigation of the enabling conditions for all forms of consciousness, or of particular aspects thereof, such as the visual perception of colours. NCCs give rise to a number of philosophical questions. First of all, the mere contention of a correlation does not answer the question of what the dependence-relation between neural event and conscious experience amounts to, for example, whether it is a causal or even an identity relation. Moreover, it is not clear whether NCCs that are found to be necessary for consciousness are also sufficient.
See also: REDUCTION, PHYSICALISM
NON-REDUCTIVE PHYSICALISM: see PROPERTY DUALISM
Phenomenology Phenomenology is the study of experience from the first-person point of view. ‘Phenomena’ are literally appearances; phenomenology thus describes and analyses things as they appear to us. More narrowly, ‘phenomenology’ refers to a movement in the history of philosophy that was particularly dominant in
A–Z Key Terms and Concepts
continental Europe during the first half of the twentieth century. Under the influence of Brentano’s ‘empirical psychology’ ((1973) [1874]. Psychology from an Empirical Standpoint, London: Routledge), thinkers like Husserl, Sartre or Merleau-Ponty set out to describe not only sensations and perceptions, but also our experience of time, embodied actions, the self, memory and the like. The central structure of experience is thereby taken to be its intentionality (see INTENTIONALITY) or directedness: all experience is an experience of something, that is, it intends an object. According to some phenomenologists, our experience is only directed to things through our concepts and values, which is why the representational content of experience is not to be identified with the things represented. In contrast, within analytic philosophy of mind, ‘phenomenology’ usually refers to the study of the sensory qualities of experiences in virtue of which there is something it is like to have these very experiences (e.g. Nagel, T. (1974). ‘What is it like to be a bat?’, Philosophical Review, 83, 435–50). These phenomenal properties seem to lie at the heart of the mind–body problem and allegedly pose the greatest challenge to a complete physicalist explanation of consciousness. See also: INTENTIONALITY, QUALIA, PHENOMENAL CONSCIOUSNESS, SELF
Physicalism Physicalism – the term is often used interchangeably with ‘materialism’ – is a metaphysical doctrine according to which there is only one kind of substance: everything that exists is physical. Because it accords with the scientific world view, physicalism – or some variant thereof – is the mainstream position in current philosophy of mind. Physicalism thus claims that all mental entities are identical to physical ones. But this identity relation can be spelt out in two different ways: Token physicalism claims that every particular mental event is identical to some physical event. Type physicalism, in contrast, is the logically stronger position arguing for an identity between types of mental and types of physical events (Smart, J. J. C. (1959). ‘Sensations and brain processes’, Philosophical Review, 68, 141–56). Type-identity has, of course, the advantage of simplicity; however, it faces the problem of the multiple realizability of mental states: even if it turns out to be true that in humans, pain simply is the firing of C-fibres, this does not imply that the same is true for all other beings capable of
being in pain. It is at least conceivable that creatures with brains radically different from ours experience pain as well, in which case pain is multiply realizable and there is not one single type of physical event that is identical to the mental event of ‘being in pain’ (Putnam, H. (1967). ‘Psychological Predicates’, in Capitan, W. H. and Merrill, D. D. (eds.) Art, Mind, and Religion, Pittsburgh: University of Pittsburgh Press, 37–48). A further complication arises from the fact that it has also become popular to formulate the thesis of physicalism via the concept of supervenience (see SUPERVENIENCE). This supervenience physicalism claims, roughly, that it is not possible for two worlds to resemble each other exactly with regard to their physical aspects but differ in their non-physical (mental) ones. Supervenience physicalism is sometimes thought to capture the minimal commitment of physicalism, according to which fixing the physical facts is sufficient for fixing all the facts. As supervenience physicalism is logically independent of token physicalism (Haugeland, J. (1982). ‘Weak Supervenience’, American Philosophical Quarterly, 19 (1), 93–103), and weaker than type physicalism, this concept allows one to formulate a physicalist thesis that is non-reductive and therefore compatible with property dualism. See also: KNOWLEDGE ARGUMENT, PROPERTY DUALISM, NEURAL CORRELATES OF CONSCIOUSNESS, SUPERVENIENCE, FUNCTIONALISM
Property dualism Property dualism (sometimes also called ‘non-reductive physicalism’) is a view about the relation between the mental and the physical that is monistic with regard to substances and pluralistic when it comes to properties. It thus combines assumptions from physicalism with assumptions from substance dualism, without fully endorsing either of these alternatives (see PHYSICALISM, SUBSTANCE DUALISM). Property dualism contends that despite the non-existence of an immaterial substance, the brain has not only physical, but also mental properties which are not amenable to a physical or functional reduction. The irreducible mental properties – qualia and/or intrinsic intentionality – are thought to be indispensable for complete explanations of psychological phenomena. Typically, property dualism is combined with an emergence thesis of mind, meaning that the mind is conceived of as emerging from complex material organization
without being fully reducible to the material substrate on which it supervenes. One motivation for the recent revival of property dualism (e.g. Robinson, W. (2004). Understanding Phenomenal Consciousness, Cambridge: Cambridge University Press) lies in anti-physicalist arguments, which are interpreted as showing either that qualia cannot be completely explicated in physical terms or that they are not fully determined by them (see KNOWLEDGE ARGUMENT, ZOMBIES). A major challenge for property dualism is the causal efficacy of irreducible mental properties. If physical causal laws have unrestricted scope, it seems that mental properties are causally superfluous, and we are faced with the choice between treating them as identical to physical events after all or as mere epiphenomena (see EPIPHENOMENALISM) (e.g. Kim, J. (1998). Mind in a Physical World, Cambridge, MA: MIT Press). However, the strength of this charge remains open to debate. See also: EMERGENCE, EXPLANATORY GAP, QUALIA, INTENTIONALITY, EPIPHENOMENALISM
Qualia Broadly construed, qualia (‘quale’ in the singular) are the distinctive subjective aspects of conscious mental states, and vital to what is called ‘phenomenal consciousness’ (see ACCESS VS PHENOMENAL CONSCIOUSNESS). Examples of qualia include what it is like for someone to feel a sharp pain, to taste something sweet, to feel the heat of a fire or to hear a loud noise. Yet whereas these conscious states uncontroversially come along with qualia, it is less obvious whether there is also something it is like to suddenly remember a name or to entertain a certain thought. Being introspectively accessible phenomenal properties of our experiences, qualia are seen as one of the greatest obstacles to a complete scientific explanation of consciousness and to physicalist theories of mind. In particular, the debate revolves around several famous thought experiments purported to show that qualia are not reducible to the physical properties of brain states (see KNOWLEDGE ARGUMENT, ZOMBIES). In response, several attempts have been made to fit qualia into a physicalist or functionalist framework, for example by reducing them to representational content (see REPRESENTATIONALISM). Other philosophers have even opted for eliminativism with regard to qualia (see ELIMINATIVISM): Dennett, for instance, argues that the notion of qualia is inherently incoherent
and should be given up entirely (Dennett, D. (1988). ‘Quining Qualia’, in Marcel, A. and Bisiach, E. (eds.) Consciousness in Contemporary Science, Oxford: Oxford University Press, 43–77). In that case, therefore, the problem of qualia would not lie in the difficulty of explaining them, but rather in the mistaken view that there is something to be explained in the first place. Among adherents of the thesis that qualia are real features of our mental life, there is considerable room for controversy, too. Major issues concern the question whether qualia are epiphenomenal or causally efficacious, and what relation exists between qualia and the intentionality of conscious mental states. See also: PHENOMENAL CONSCIOUSNESS, HARD PROBLEM OF CONSCIOUSNESS, EXPLANATORY GAP, ZOMBIES, INTENTIONALITY, KNOWLEDGE ARGUMENT, ELIMINATIVISM, REPRESENTATIONALISM
Reduction Reduction or reducibility is a relation between scientific theories or the entities postulated by them. If someone argues for the reducibility of one theory to another, she thereby establishes a hierarchical order among the theories in question: if X can be reduced to Y, then Y is in some sense more basic than, or prior to, X. In the philosophy of science, the thrust of reductionism is closely connected to the ideal of the unity of science (Oppenheim, P. and Putnam, H. (1958). ‘Unity of Science as a Working Hypothesis’, in Feigl, H., Scriven, M. and Maxwell, G. (eds.) Minnesota Studies in the Philosophy of Science, vol. 2, Minneapolis: University of Minnesota Press), that is, the overall state of science when all disciplines and their theories are shown to be reducible to fundamental physics. In philosophy of mind, different positions with regard to the relation between consciousness and brain can be contrasted on the basis of the stance they take on the alleged reducibility of consciousness – if consciousness is nothing over and above its neuronal substrate, we can also say that consciousness is reducible to these brain states. One of the most influential accounts of scientific reduction is credited to Ernest Nagel. According to him, theory-reduction amounts to the logical deduction of the laws of one theory from the laws of another, more fundamental one (Nagel, E. (1961). The Structure of Science: Problems in the Logic of Scientific Explanation, Ch. 11, 336–97, New York: Harcourt, Brace and World). Since scientific disciplines are typically equipped with their own theoretical vocabulary, so-called ‘bridge laws’ are required for Nagelian reductions, connecting the
theoretical terms of the theory to be reduced with the terms of its reduction basis. Other prominent approaches to reduction build on the notion of explanatory power: one theory serves as a reduction basis if it can be used to explain all the facts that are explainable by means of a second theory, but with greater parsimony, that is, without positing all the entities or kinds of the theory to be reduced (Kemeny, J. and Oppenheim, P. (1956). ‘On Reduction’, Philosophical Studies, 7 (1/2), 6–19). If consciousness were reducible to the brain in this sense, the phenomenal and intentional vocabulary we use to describe mental states would be explanatorily superfluous. Yet the appropriate model of reduction in the mind–body case is itself a matter of debate (e.g. Kim, J. (2007). Physicalism, or Something Near Enough, Ch. 4, 93–120, Princeton: Princeton University Press). See also: EXPLANATORY GAP, EMERGENCE, SUPERVENIENCE
Representationalism When someone has a particular visual experience, for example, of the colour of an object, the perceptual mental state of the person thereby represents the object as having this very colour. Conscious mental states thus have a representational content (see INTENTIONALITY). Representational theories of consciousness now go one step further and contend that the distinctive phenomenal properties of conscious mental states – in this case, what it is like to have this colour perception – are fully explicable in terms of (specific forms of) representation. First-order representationalism (FOR) claims that a mental state is conscious if it is itself representational; higher-order theories, on the other hand, require the conscious state to be represented by a higher-order mental state (see HIGHER-ORDER MENTAL STATES). Representationalism is highly attractive to physicalists (e.g. Dretske, F. (1995). Naturalizing the Mind, Cambridge, MA: MIT Press), especially when combined with an attempt to naturalize the intentionality of conscious mental states, too. According to FOR, the representational content of the experience accounts for its phenomenal aspects: the subjective quality of having a green perception is reducible to representing an object as green. Strictly speaking, qualia are thus no longer properties of the experience but, being ascribed to the objects outside of the experiencing subject, ‘externalized’. Representationalism is often supported by so-called transparency arguments, pointing out that in perceiving an external object, we do not perceive any properties over and
above the ones of the object itself – we see (or, for that matter, hear, etc.) ‘right through’ the perception (Harman, G. (1990). ‘The Intrinsic Quality of Experience’, Philosophical Perspectives, 4, 31–52). Yet putative counterexamples to the representationalist claim abound as well; opponents have argued that it is possible for two mental states to have identical intentional content but non-identical phenomenal properties. For example, in so-called inverted spectrum cases, two persons may use colour-words under exactly the same circumstances, but whenever one person sees red, the other sees green, and vice versa. Thus, it seems, both persons represent the same objects as ‘red’ but their red-experiences come along with divergent phenomenal properties (Shoemaker, S. (1991). ‘Qualia and Consciousness’, Mind, 100, 507–24). As the example shows, a crucial issue for assessing first-order representationalism is the question of whether representational contents of mental states can be individuated in the same fine-grained way as their phenomenal qualities. See also: INTENTIONALITY, HIGHER-ORDER THEORIES OF CONSCIOUSNESS, QUALIA, ZOMBIES
Self The self is the supposed instance (the ‘I’) that thinks, perceives, experiences, desires, etc. According to this view, the self is not itself a mental state, nor is it reducible to particular states or sets thereof; rather, it is the psychological entity which is the subject of all mental states, and which would be identified as the soul in religious thought. This conception of an autonomous, unified self that persists through time is often seen as underlying Descartes’s argumentation in the Meditations, which explains why it is sometimes also called the Cartesian Ego. An early and influential critique of the idea of a unified self was made by David Hume, who famously pointed out that upon introspecting his inner mental life, he never came across something that could be called the ‘self’; all he encountered were individual perceptions. Hume concluded that the ‘self’ was in fact only a bundle of perceptions whose apparent unity is an illusion created by memory and our own expectations ((1978) [1739/40]. A Treatise of Human Nature, edited by Selby-Bigge, L. A. and Nidditch, P. H., 2nd ed., Oxford: Clarendon Press). More recent critiques of realism with regard to the self build on scientific evidence to the effect that patients who suffer from impairments of their brain functions sometimes lose the idea of having
a unified self, for example in the case of multiple personality disorders. This suggests that what we call ‘self’ is an imaginary, rather than a substantive, entity. Another seminal treatment of the issue is provided by Daniel Dennett ((1991). Consciousness Explained, Cambridge, MA: MIT Press), who launched – and criticized – the metaphor of the Cartesian Theater, a kind of central stage inside our minds where all the conscious experiences take place. The ‘self’ would then be a spectator who watches the show and whose ‘seeing’ a particular experience makes it a conscious one. Despite the attractiveness of the metaphor, Dennett argues, there is no such place in our brains, and the ‘self’ is actually a construction. See also: MULTIPLE DRAFTS MODEL, SELF-CONSCIOUSNESS
Self-consciousness Self-consciousness is the consciousness that takes either particular conscious mental states or the self as its objects. In the first case, self-consciousness requires not only the awareness of a mental state, but also the awareness of the awareness of this state. Alternatively or additionally, self-consciousness could be construed as involving an awareness of oneself (of one’s self). That is, not only am I aware of the fact that ‘I think that p’, but I am also directly acquainted with the ‘I’ that does the thinking. Accounts of self-consciousness therefore depend crucially on the way in which the notions of the ‘self’ and the ‘I’ are spelt out, that is, on whether the self is a real, substantive entity or rather an imaginary creation (see SELF). In biology and psychology, mirror self-recognition often serves as a criterion for the ascription of self-consciousness to organisms, yet terms like ‘self-consciousness’ and ‘self-awareness’ are not used in a consistent way throughout science (Bekoff, M. and Sherman, P. (2004). ‘Reflections on animal selves’, Trends in Ecology and Evolution, 19 (4), 176–80). The awareness in question raises further questions, too. First, with regard to the relation of this awareness to the topics of self-knowledge and introspection, we can ask whether this relation has an essential epistemic dimension: Does self-consciousness constitute a source of beliefs with a special epistemic status? This question points to the relation between self-consciousness and self-knowledge (see INTROSPECTION). Second, what is the nature of this awareness? If, for example, it is interpreted as having conceptual content, the range of possible self-conscious creatures will be rather limited, a
consequence that not everyone is prepared to accept (Bermúdez, J. L. (2000). The Paradox of Self-Consciousness, Cambridge MA: MIT Press). See also: SELF, INTROSPECTION, HIGHER-ORDER MENTAL STATES, ANIMAL CONSCIOUSNESS
Self-knowledge As in the case of self-consciousness, we can distinguish two different meanings of self-knowledge: knowledge about our own mental states on the one hand and knowledge about the ‘self’ on the other hand. The beliefs somebody has about his or her own mental states – for example, ‘I think that I hope to pass the exam tomorrow’ – were in the past seen as having a very special epistemic status. For we seem to have a more intimate relation to our own mental states than to anything else – I cannot introspect someone else’s mental states and no one can introspect mine. This privileged access we arguably have to our mental lives is why beliefs about our inner life were sometimes taken to be infallible, incorrigible or indubitable (see Alston, W. P. (1971). ‘Varieties of Privileged Access’, American Philosophical Quarterly, 8, 257–72). However, as results from psychology show, we can actually easily be misled about our own mental states, for example, by having false beliefs about what the motivation for a certain action really was. Today, these bold hypotheses have therefore been replaced by more qualified views about what makes beliefs about our mental states epistemically special. For example, one can hold that although I can be mistaken about what my mental states are – I can erroneously believe that I am angry although I am not, just as I can falsely ascribe mental states to my neighbour – some of my beliefs about my own inner states are nonetheless immune to a certain type of error, namely error through misidentification: whereas my belief ‘my neighbour is angry’ can be false because the person I see behaving in a way in which angry persons normally do is not my neighbour, my belief ‘I am angry’ cannot be mistaken in this way (Shoemaker, S. (1968). ‘Self-Reference and Self-Awareness’, Journal of Philosophy, 65, 555–67).
Moreover, self-conscious thoughts also seem to have a special motivating force, which assigns them a special role in explaining the actions of self-conscious agents (Perry, J. (1979). ‘The Problem of the Essential Indexical’, Noûs, 13 (1), 3–21). See also: INTROSPECTION, SELF-CONSCIOUSNESS, HIGHER-ORDER MENTAL STATES
Substance dualism Substance dualism or, if named after its most famous adherent, Cartesian dualism, is the doctrine according to which there exist two fundamentally distinct substances, namely matter (or bodies) and mind. Substances are, roughly, the entities that possess properties. Substance dualism thus forms a contrast with physicalism or materialism, but it is also logically stronger than the different variants of property dualism, which are united by the thought that while there is only one substance, there are two categories of properties instantiated by it. Descartes himself rejected the idea of unconscious thoughts, so ‘mind’ and ‘consciousness’ were more or less the same for him. He then argued for the distinction between the realm of the material and the realm of the immaterial on the ground that the mind apparently possesses features that the corporeal substance lacks – for example, indivisibility – and that entities differing in their properties cannot be identical. Today, however, few if any philosophers maintain this strict distinction, which is difficult to reconcile with the scientific evidence showing a close correlation between brain states and mental states. Already in Descartes’s time, though, the problem of causal interaction was recognized, that is, the question of how material and immaterial substances can exert causal influence on each other if they are fundamentally distinct. See also: PROPERTY DUALISM, PHYSICALISM, SELF
Supervenience Supervenience is an ontic dependence-relation between (sets of) properties or types. Roughly, if X-properties supervene on Y-properties, two objects or events that instantiate the same Y-properties will also share all their X-properties. Y-properties are then called the supervenience base. A typical example would be the relation between the aesthetic and the physical properties of a piece of art: if two paintings are type-identical with regard to all of their physical features, they cannot differ in their aesthetic quality. A classic – albeit not universally accepted – definition of supervenience is provided by Jaegwon Kim ((1984). ‘Concepts of Supervenience’, Philosophy and Phenomenological Research, 45, 153–76), who distinguishes weak from strong supervenience. X weakly supervenes on Y if and only if, necessarily, indiscernibility with respect to Y entails indiscernibility with respect to X. That is, there are no possible worlds within which there are
Y-indiscernible but X-discernible entities. In the case of strong supervenience, a second modal operator is added: X strongly supervenes on Y if and only if Y-indiscernibility implies X-indiscernibility within the same and across different possible worlds. The concept is of great relevance to the philosophy of mind because it promises to steer a middle course between an identity theory of the mental and the physical on the one hand, and a merely accidental co-occurrence with no acknowledgement of the special status of the physical, on the other. Accordingly, when it comes to psychophysical supervenience, the dependence-relation in question is typically thought of as asymmetric; the mental is determined by the physical, but not vice versa. Yet this asymmetry does not follow directly from the definitions cited above. Further specification is also required with regard to the kind of necessity with which physical facts supposedly determine mental facts: If it is claimed that it is not possible for, say, two human beings to be at once indiscernible in their physical and discernible in their mental properties – are we alluding to possible worlds in which the same laws of nature hold as in ours? Or is the possibility in question metaphysical or logical? These kinds of questions find application in discussions of the putative conceivability of zombies, that is, individuals without conscious mental states that are nonetheless physically indiscernible from conscious human beings (see ZOMBIES). See also: EMERGENCE, REDUCTION, ZOMBIES
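The two definitions can be sketched in modal-logical notation (a rough reconstruction, not Kim’s own symbolism; here ‘Ind_Y(x, y)’ abbreviates the claim that x and y are indiscernible with respect to their Y-properties):

```latex
% Weak supervenience: within any single world, Y-twins are also X-twins.
\text{Weak:}\quad \Box\,\forall x\,\forall y\,
  \bigl[\,\mathrm{Ind}_Y(x,y)\rightarrow \mathrm{Ind}_X(x,y)\,\bigr]

% Strong supervenience: the second modal operator makes the comparison
% cross-world, so Y-twins drawn from different worlds are X-twins, too.
\text{Strong:}\quad \Box\,\forall x\,\Box\,\forall y\,
  \bigl[\,\mathrm{Ind}_Y(x,y)\rightarrow \mathrm{Ind}_X(x,y)\,\bigr]
```

The placement of the second box before the second quantifier is what forces the comparison across possible worlds, which is why strong supervenience, unlike weak, rules out worlds that duplicate ours physically while differing mentally.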
Turing test In order to decide whether a machine should count as intelligent, Alan Turing suggested that we should let the machine participate in a so-called imitation game ((1950). ‘Computing Machinery and Intelligence’, Mind, 59 (236), 433–60), which later came to be known as the Turing Test. The basic idea behind the test is that when we ask ourselves if other persons are intelligent, we rely on our observations of their verbal behaviour, in particular, of the way they engage in conversations with us. Consequently, if a machine is able to communicate with human interrogators in such a way that they do not notice that their conversational partner is a machine, the machine can be said to be intelligent, too. More precisely, in a Turing Test, a human interrogator communicates with both a machine and another human being via teletype, but without initially knowing which one of her two responders is the machine. Her task is to find this
out by asking them whatever questions she deems appropriate, and if she fails, that is, if the machine can fool her into believing that it is a human being, it passes the test. The discussions that have evolved around the Turing Test concern both the question whether passing it is sufficient for a machine to be reasonably counted as intelligent (see CHINESE ROOM ARGUMENT) and the question what prospects of success in the test there are for machines. With regard to the former, it is important to note that the Turing Test is first of all a behavioural-operational criterion for intelligence; it does not provide a definition of what it means to be intelligent. Nonetheless, it has been claimed that since it seems impossible for any creature to pass the test unless it shares certain fundamental experiences of the world with currently living human beings, the test actually does not measure intelligence simpliciter, but rather culturally oriented human intelligence (French, R. (1990). ‘Subcognition and the Limits of the Turing Test’, Mind, 99 (393), 53–65). This shows in turn that the possibility of constructing a machine that passes the test crucially depends on the specifications of Turing’s criterion, for example, on whom we admit as interrogators. See also: ARTIFICIAL INTELLIGENCE, CHINESE ROOM ARGUMENT
Unity of consciousness There are several different characteristics of our phenomenology that go under the name ‘unity of consciousness’ (Cleeremans, A., ed. (2003). The Unity of Consciousness: Binding, Integration, and Dissociation, Oxford: Oxford University Press). When someone sees an object, she perceives, for example, the colour of the object, its shape and its surface structure. Yet her visual experience seems to be unified, that is, we would not fully capture her mental state by describing her as having a colour-experience plus a shape-experience plus a surface-structure-experience. Although she is subject to several simultaneous experiences, there is also one overall experience of a single object – there is something it is like to have these perceptions together. The binding problem or problem of objectual unity concerns the challenge to square this fact about our phenomenology with the neurobiological discovery that the different particular experiences of colour, shape, etc., are represented in different parts of the visual system. Moreover, the experiences just mentioned by no means exhaust everything a person might perceive at a moment. Additionally, she may also have tactile experiences, hear
some noise in the background and have emotional feelings. And once again, in addition to these single experiences, there is the experience of having them all together. Explaining this coherence is the problem of the phenomenal unity of consciousness. Finally, our conscious experiences exhibit a temporal unity: we go through a continuous succession of experiences exhibiting a kind of temporal integration. Otherwise, we would not be able to fully grasp temporally extended phenomena like melodies as coherent wholes (Kelly, S. (2005). ‘The Puzzle of Temporal Experience’, in Brook, A. and Akins, K. (eds.) Cognition and the Brain, New York: Cambridge University Press). See also: SELF, QUALIA
Zombies In the literature on consciousness, zombies are creatures that are just like human beings in all their physical respects and behaviourally indistinguishable from humans, but they have no phenomenally conscious mental states. Thus, by definition, there is nothing it is like to be a zombie. If it is possible for zombies to exist, this would conflict with physicalism, since conscious mental states would then not be identical with, or reducible to, physical events. The possibility of zombies is controversial, however, just like the answer to the question what kind of (im)possibility would be relevant for this matter. Often, it is assumed that the nomological impossibility of zombies, that is, the fact that zombies are ruled out by the laws of nature in the actual world, is too weak to sustain the physicalist thesis, and that it is instead metaphysical possibility that is at stake. The best-known argument for the metaphysical possibility of zombies was presented by David Chalmers ((1996). The Conscious Mind: In Search of a Fundamental Theory, New York and Oxford: Oxford University Press). It relies on the connection between conceivability and possibility: If it is conceivable that our world is just as it actually is in all the physical respects, but that there are no phenomenal properties, then it seems that the physical properties do not entail the phenomenal ones. Some critics object to the conceivability of zombies. Daniel Dennett, for example, argues that we overestimate our imaginative power when claiming that it is possible to conceive of creatures physically exactly like us who lack conscious mental states ((1995). ‘The unimagined preposterousness of zombies’, Journal of Consciousness Studies, 2 (4), 322–6). The step from conceivability to possibility has been questioned, too, for example on the
ground that psychophysical identities might hold with a posteriori necessity, in which case the mere conceivability of their coming apart does not suffice for their possibility (Hill, C. and McLaughlin, B. (1999). ‘There are fewer things in reality than are dreamt of in Chalmers’s philosophy’, Philosophy and Phenomenological Research, 59 (2)). These issues thus crucially turn on the specifications of the notions of possibility and conceivability that are relevant for the relation between physical and mental properties (Chalmers, D. (2010). The Character of Consciousness, Oxford: Oxford University Press). See also: QUALIA, EXPLANATORY GAP, KNOWLEDGE ARGUMENT
Index
aboutness 237, 246, 253, 401, 404 aboutness of consciousness 7, 326 aboutness of thought 3 aboutness relation 355 absolute adversary 240 abstract (A), in PANIC theory 144 abstract functionalism 240, 241, 243 access-conscious 211, 212 first-order 212, 217, 218, 222, 225 higher-order 211, 217, 222 access-consciousness (A-consciousness) 212, 219, 229 n.3, 294, 437–8. See also functionalism; qualia accompaniment of brain ‘processing’ 259 acquaintance strategy 151 action-descriptions 83, 85 active corporeal quality 32 actual awareness 196 actual consciousness 6, 240, 251 Actual Consciousness (Honderich) 252, 253, 297 Actualism 243, 249, 250, 251, 252, 253 Consciousness of 253 theory of Actualism 249 unity as individuality 252, 253 actual languages, functionality of 410 n.8 to identify 253 uncertainty in 254 adverbial theorists 127 affective consciousness 234, 240, 243, 245, 247, 252 being actual representation of 250–1 what is and isn’t actual 249–50 Aleksander et al. 299–300 axiomatization of consciousness 299 depiction 300 planning 299–300 amodal concepts 218, 229 n.4 amodal (non-sensory) thoughts 211, 220, 221, 222, 223, 228
amodal (non-sensory) workspace 220, 221 analysis of consciousness 28, 242, 243, 272, 284 animal consciousness 438–9. See also Umwelt anosmia 81, 84 of sense-loss 81 anterior cingulate cortex (ACC) 161 as possible location for HOTs 163 n.8 anti-computationalist analysis 320 n.20 aposteriority 170 apperception immediate 11 a priori representation 394 Aristotelian Concept of Consciousness 3 externalism/internalism divide 28–30 inner perception 38–41 presentations and representations 41–3 psychic and physical phenomena 35–41 psychic dimensions 43–7 psychophysical watershed of consciousness 33–4 Aristotelian legacy 30–3 active corporeal quality 32 chemical–electrical processing 30 material basis of neural correlates 30 neo-Aristotelian theory 30 physical stuff 31 Platonic ideas 32 psychophysical process 30 qualitative experiences 30 qualitative perceivable attributes 32 self-awareness 32 self-consciousness 32 sensibles 32 syntactic functions 30 Aristotelian terms 36 artificial intelligence (AI) 187, 294, 325, 326, 440–1 Husserl as father of AI 340 n.4 science of 28–9
as-if perceptions 84 aspectual theories 242 attention 192, 198, 202, 404, 439–40. See also neural correlates of consciousness (NCC) information 193 internal model of 192 mental relationship between subject and object 194 to sensory detail in scientific photography 105 testing bottom-up attention 202 testing with and without awareness 201 without awareness 199 attention schema 191–5 for control of attention 198–203 higher cognition and 195–6 integration of information 203–4 main components of 193 for social perception 204–7 uses of 197–8 attention schema theory 196, 203 attributions of properties to times 274–6 cognitive and affective information 275 conscious awareness 275 property clusters 276 This now 275 Attributive-Dynamic (AD) model of consciousness 7 audacity 242 autonomous reconfiguration 105 autopsychology 340 n.7 awareness 46, 235, 408, 439–40 as attention schema 207 in attention schema theory 196 conscientia as awareness 23 n.22 aware of self 399 axiomatization in classical mechanics 321 n.29 of consciousness 292, 299 logico-mathematical sense of 298 axiomatizing consciousness 297 axioms of CA 291, 305–6 axiom of freedom (Free) 314–15 axiom of (hyper-weak) incorrigibilism (Incorr) 310–11 axiom of introspection (Intro) 309–10
axiom of irreversibility (Irr) 312–14 axiom of knowledge-to-belief (K2B) 308–9 axiom of non-compositionality of emotions (CompE) 312 axiom of perception-to-belief (P2B) 307–8 causation axiom (CommCaus) 315–16 essence axiom (Ess) 311 extreme expressivity 306–7 Perry axiom (TheI) 316–17 beer drinker thought experiment 363 Begriff (concept) 150 behaving entities 75 n.25 behaviourism 57, 59, 60, 69, 70, 170 contra behaviourism 75–6 n.25 behaviourist antithesis 59–60 behaviourism 59 mind–body dualism 60 belief knowledge-to-belief (K2B) 308–9 logics of 309 perception-to-belief (P2B) 307–8 Benjamin, Joel (chess grandmaster) 297 bi-directional state of consciousness 23 n.17 binding problem 38, 45, 218 body schema 193, 198 brain 207 -to-afferent-nerve-ending feedback loops 268 attention schema 207 awareness 207 brain-based theories of consciousness 187 information processing 207 information-processing architecture 273 social perception 208 brain processes 262 attributing instantiation 263 this and now 263, 264, 266, 267 brain states and processes 170 pain 170 brain theory of consciousness Dennett’s own preferred theory 357–60 Dennett’s ‘quining’ arguments 360–4 Dennett’s zimbo argument 355–7 Brentanian framework 47
Brentano architectural theory of consciousness 28 Aristotelian legacy 30–3 conscious of qualitative forms 28 externalism/internalism divide 28–30 forerunner of Husserlian phenomenology 27 inner perception 38–41 presentations and representations 41–3 psychic and physical phenomena 35–8 psychic dimensions 43–7 psychophysical watershed of consciousness 33–4 science of psychic phenomena 28 veridical explanation 27 whole of consciousness 28 Brentano-Bolzano paradox 327–9 Greek mythology 328 objectless presentations 328 Brentano’s conception of intentionality 325–7 aboutness of consciousness 326 approach to philosophical problems 326 descriptive psychology 326 intentional inexistence 327 notion of intentionality 326 property of mental phenomena 327 psychological foundation for 331 Brentano’s empirical and descriptive psychology 31 Bringsjord and collaborators 304 brute consciousness 23 n.20 building-block emotions 312 Cartesian assumption 226 Cartesian beliefs 225 Cartesian certainty 225 Cartesian conception 37 Cartesian dualism 78. See also substance dualism Cartesian ego. See self Cartesian inference-rule 225, 226, 227, 228 Cartesian introspection 25 n.38 Cartesian materialism 357 Cartesian property 74 n.2, 310, 311 Cartesian theater 21, 58, 59, 69, 86, 357 Dennett’s criticism 358 mental objects in 69
Cartesian verdict 395 causation axiom (CommCaus) 315–16 C-fibre stimulation 170, 174–5, 183 n.12 changing tune 252 mesmerizing 253 chemical–electrical processing 30 chess-piece approach 66 Chinese room argument 441–2 classical dualists 126 cogitans 389 cogito 23 n.15, 304, 390, 391 removing the mark 304 cognitive access 189–90 cognitive approaches adherents of 367 challenges and solution 364–8 colour concepts 365, 366, 367 concept of visual experience 365 conceptualist resources of 366 DIA 365, 366, 367 HOT theoretic 365 opponents of 360 to phenomenal consciousness 347–8, 364–5, 368 ready reply to Raffman’s objection, 367 way of opposing 348 cognitive consciousness 249, 251, 252, 293 representations being actual 250–1 theories and what is and isn’t actual 249–50 cognitive in affective control 297–8 cognitive in perceptual control 297–8 cognitive states 347 color scientist thought experiment 4 color-phi phenomenon 358 common sensibles 32 comparative judgement 242 computational model 104 computational process 228 n.1 computers 106 conceivability argument. See zombies conceivable, definition 85 concept of essential property 132 concept of privacy 59 conceptual competence 401 conceptual quiver 406 conceptual self-consciousness 161 conceptual truth, about nature of conscious states 163 n.3
conditional fallacy 130 confused thoughts 16, 21 conscience 19 conscientia 11. See also thought and operations of senses and imagination 13–17 and operations of the will 17–20 and thought 12–13 conscious exemplars 104 as freedom of representation 104–6 conscious experience 78, 87 n.14 conscious mental state 145 first-order world-directed state 147 The Conscious Mind: In Search of a Fundamental Theory (Chalmers) 258 consciousness 2, 64, 93, 106, 281, 395 affective 2, 6 cognitive 2, 6 cognitive science 312–13 as evidence of the external world 102–4 fast-acting process 263 FOR theory of 144 inner repository of mental objects 65 and moral capacity 318 n.8 naturalistic theory of 143–4 observational model of 65 perceptual 2, 6 philosophical myth 67 physical phenomenon 261 of time 284, 285 word meaning 66 Consciousness and Experience (Lycan) 109 consciousness, concept of 57 behaviourist antithesis 59–60 dualist–introspectionist picture 57–9 linguistic approach 67–9 observational model of consciousness 64–7 question of 57 recasting introspection 69–73 Wittgenstein’s mode of inquiry 60–4 consciousness consciousness 97 Consciousness Explained (Dennett) 355 consciousness, initial clarification of 234–8 inner peering 235 intentionality 237 leading ideas of consciousness 238 ordinary intelligence, logic of 235 phenomenality 237 primary ordinary sense 235
qualia 236 something it’s like to be a thing 236–7 traditional or familiar subjectivity 237 consciousness of pain 279–82 non-pain-like ways in consciousness 281 sensory data to time 281 consciousness or intentionality 334–5 consciousness of anxiety 334 intentional state 334 potential intentionality 334 conscious phenomena 83 conscious qualitative appearances 36 conscious reasoning 382 conscious representation 250 constitutive of consciousness 97, 98 constitutive process, awareness of thought, 228 n.1 consumer semantics 149 contingency 170 contingent property 310 contra-causal freedom 314 control theory 198 Copernican revolution 40 counter-attitudinal essay paradigm 216 creature consciousness 349 cross-consciousness 71 Cunningham 300–303 activity states 301 biconditional 303 decisions 302 propositional formal language 302 real intelligence 300 sentient consciousness 302 utilitarian value 300 Damasio’s theory of consciousness 47 data 238 database 238–40 De Anima (Aristotle) 3, 30, 400 decent theory 242 decision times 375–7 decide signal 376 electromyography (EMG) 377 Libet-style experiments 377, 378 self-initiated movements 379 “spontaneous” flexing 375 spontaneous fluctuation in neural activity 379 Trevena and Miller studies 376
Dennett’s own preferred theory 357–60 anti-realism about consciousness 359 criticism of the Cartesian theater 358 doxastic sense 360 fame in the brain 357 first-person evidence 359 homunculus 357 multiple drafts theory of consciousness 357, 358 qualia 360 Stalinesque explanation 358 third-person evidence 359 Dennett’s ‘quining’ arguments 360–4 beer drinker thought experiment 363 quale versus judgement about quale 364 Dennett’s zimbo argument 355–7 zombie 355, 356 zimbo 356 Descartes. See also thought about action 17–18 assumption of dualism 37 awareness of thought 12 Cogito 390 conception of conscientia 20 conscientia and knowledge 21 conscientia and thought 12 dualism 393 Ego 391 idea of consciousness 37 idea of having conscientia of objects 22 n.6 senses and imagination 13 Descriptive Psychology (Brentano) 47 Diachronic Indistinguishability Argument (DIA) 365, 366, 367 Dialogues (Berkeley) 135 dispositional higher-order thought (HOT) theory 5, 142, 148–50 dual-content theory 149 theory of mind mechanism 149 dispositional mental states 128–30 disquotational representation 98–100 disquotational theory 100, 242 doxastic sense 360 dreams 36 Dretske, Fred 144 DSA (decision-focused sceptical argument) 372
dual-content theory 149 dualism and the unconscious 133–9 armchair philosophy 139 awareness 136 concept of zombies 138 conscious mental state 134 hopeless confusion 137 intelligibility 136 nature of thought 135 perceived 133 phenomenology 139 dualisms 57–8, 60, 240–3, 401 mind-body dualism 60 dualist-introspectionist picture 57–9 concept of metaphysical privacy 58–9 dualism 57–8 (See also dualisms) first-person introspection 59 pre-linguistic description 58 pure 58 dynamic attribution (DA) 261–3 DA model 278 linguistic philosophical model 279 Model of Consciousness 261 property-object dynamic attribution model 262 Eastern mysticism 253 EEG (electroencephalography) 42, 371, 373, 376, 379 egocentric reference frame 161 eliminativism (eliminative materialism) 442–3. See also qualia emergence 375, 394, 397, 443–4. See also epiphenomenalism complexities of 407 evolutionary emergence 403, 404 emotions 336 empty higher-order thoughts 353–5 HOT theory 353 encoding thoughts 211 epiphenomenalism 241, 407, 444–5 Epiphenomenal Qualia (Jackson) 4 epiphenomenon 70 epistemology 5 essence axiom (Ess) 311 essential property 132, 176, 177, 179 concept of 132 Euclid 289 reductio proof 289
Euclidean geometry 44, 331 evolutionary thinking 198 exemplarization 96–8 innateness and language theories 96 exemplar representation 96, 105 expansion thesis 22 n.9 explanatory gap 445–6. See also hard problem of consciousness expressing thoughts 211 externalism 139 n.2 externalism/internalism divide 28–30 autopoiesis 29 chemical–electrical processing 30 embodied or situated consciousness 29 inferential-probabilistic principles 29 internal mechanism (computer) 29 neo-Aristotelian theory 30 science of AI 28–9 syntactic functions 30 theories of subjective experience 30 externalist reading 337 extrinsic property 361 Einstein’s Special Theory of Relativity 361 fame in the brain theory of consciousness 348, 357, 368 Dennett’s own preferred theory 357–60 Dennett’s ‘quining’ arguments 360–4 Dennett’s zimbo argument 355–7 fearing 127 Ferrier, James Frederick 94 fictional object 341 n.17 first-order logic (FOL) 291 first-order representationalism (FOR) 5, 142 first-order thought 348–9 first-person operationalism 360 first-person perspective (1PP) 161 first person plural 337 first-person subjective experience 260 first-person testimony 88 n.18 fMRI (functional magnetic resonance imaging) 42, 371, 373 formal methods, harnessed for implementation 293–4 expressive formal language 294 inference schemata 294, 295 free relative reading 110–11, 116
false on 112, 113 mention all reading 118 free relative sense 112, 115 free will 446–7. See also phenomenology decision times 375–9 generalizing 380–1 unsuccessful sceptical argument 381–5 freedom (Free), axiom of 314–15 functional/structural components 88 n.18 functionalism 240–3, 447–8 abstract functionalism 240 physical functionalism 241 functionality of brain 401 Galileo Galilei 31, 37, 197 Gemüt 331 Gemütsbewegungen 331, 332 generalizing 380–9 arbitrary picking 380 conscious deciding 380 conscious reasoning 380 distal decision-making 381 Gettier-style counter-examples 308 global broadcasting accounts 212 global broadcasting theory 219 global workspace theory 203, 219, 448. See also multiple drafts theory of consciousness and attention schema theory 204 Gödel’s formalization 320 n.22 Goldbach’s conjecture 294 grammatical fiction 395, 399, 402, 406 Great Divide model 60 eliminating introspection 70 hallucinations 36, 144, 145, 308 hard problem of consciousness 2, 37, 93, 351, 448–9. See also qualia solution to 100–1 time as hard part of 268–70 hard versus tractable Explicanda 258–9 Heidegger, Martin 392, 399, 402 Being and Time 341 n.8 Dasein 390, 391 Grundprobleme der Phänomenologie 391 qualification of doctrine 400 heliocentric theory 197 heterophenomenology 341 n.11 higher-order access-conscious 211
higher-order access theories 212 higher-order mental states 449–50. See also phenomenal consciousness (P-consciousness); qualia higher-order perception (HOP) theory 5, 142, 150–1 acquaintance strategy 151 active 150 sensibility and understanding 150 higher-order representationalism (HOR) 142 dispositional HOT theory 148–50 general argument for 146 higher-order perception theory 150–1 higher-order thought (HOT) theory 145–8 higher order (HO) theories, objections and replies 152–6 concept acquisition 154 concepts ‘brown’ and ‘tree’ 154 first-order perception 156 lack of clarity 154 mindreading 152 no qualia 156 partial recognition of LO state 156 problem of the rock 153 qualitative properties 154 reductionist theory of consciousness 153 Rhesus monkeys 153 scrub jays, experiments 152 self-concept (I-concept) 152 targetless (empty) HOT cases 155 higher order (HO) theory 147, 158, 163, 242 higher-order thought (HOT) theory 5, 142, 145–8, 438 advantage for 161 and conceptualism 156–8 conscious mental state 145 of consciousness 148 dispositional HOT theory 142, 148–50 higher-order awareness 146 meta-psychological or meta-cognitive state 145 and prefrontal cortex 160–3 higher-order thought (HOT) theory and conceptualism 156–8 notion of ‘hearing-as’ 157 notion of ‘seeing-as’ 157
higher-order thought (HOT) theory and prefrontal cortex 160–3 human color imaging experiments 161 introspective states 160 other brain areas 161 pre-reflective self-consciousness 161 unconscious and conscious HOTs 162 as viable theory 163 higher-order thought (HOT) theory of consciousness 348–51, 368 empty higher-order thoughts 353–5 non-circularity of 350 non-relational reading 353, 354–5 relational reading 353, 354 Rosenthal’s explanatory target 351–2 homunculus (little man) 357 human consciousnesses 284 human-level consciousness 304 human person, evolution of 393 human primate, evolution of 393 Husserl, Edmund 325 hybrid higher-order accounts 158–60 Higher-Order Global States (HOGS) 159 wide intrinsicality view (WIV) 159 hybrid higher-order and selfrepresentational accounts 158–9 hybrid perceptual-temporal properties 265 Ich denke 390, 408 n.2 Ich (de-transcendentalized) 391, 396 idea of unconscious mentality 229 n.8 identity 170–4 contingent 174 necessary a posteriori identities 181–2 pain with C-fibre stimulation 174–5 understanding water 171–3 identity theory 5. See also physicalism illusions 45, 222–8 accessibility of concepts 223 awareness of others’ thoughts 227 Cartesian assumption 226 conscious thoughts 222 experience of deciding 224 passing thought 224 to see or hear 223 self-other asymmetry 225
System 2 inferential processes 223 wondering 224 impure reflection 16 incarnated mental 404 incorrigibilism (Incorr), axiom of 310–11 individuality 252–3 inexistence 327 influential model of inquiry 183 n.7 information integration theory (IIT) 450–1 information-theoretic account 313 innateness theory 95 inner peering 235 inner perception 4, 38–41 Aristotelian derivation 38 cross-modal nature of perception 39 interpretation of the stimuli 39 notion of consciousness 40 science of consciousness 40 theory of consciousness 38 inner speech and thought 214 inner stimulus 34 integrated-information account of consciousness 212 integration of information 203–4 global workspace theory 203 intentional content (IC), in PANIC theory 144, 405 intentional inexistence 327 intentionality 127, 237, 342 n.21, 410 n.14, 451–2. See also qualia Brentano’s conception of 325–7 consciousness or 334–5 mental and intentional 331–4 notion of intentionality 325 without objects 335–40 role in time 326 unphenomenological intentionality 329–31 Intentionality (Searle) 325 intentionality without objects 335–40 emotions 336 externalist reading 337 intentional state 336 propositional thoughts 335 pseudo-object 335 transcendentalism 337 intentional objects 327–9 Greek mythology 328 objectless presentations 328 intentional reference 27
intentional world 404 internal time consciousness 7, 271–4 hypothetical truths 274 streaming consciousness 272, 273 truths about any existent things 274 unconscious autonomic attribution 272 interpretive process 228 n.1 interpretive self-knowledge 213–17 mindreading skills 213 theory of mind 213 interrogative reading 110, 112 mention all reading 118 interrogative sense 112, 114 interrogative versus free relative readings 109–12 knowledge-wh 110, 111 type of experience 112 intersubjectivity 337 intrinsic property 362 introspection (Intro), axiom of 147, 309–10, 452–3. See also phenomenology; self-consciousness; self-knowledge as inward-inspection 69 introspectionist model 64 introspective awareness 131 irreversibility (Irr), axiom of 312–14 iterative meta-representational self-consciousness 161 I think, Kant’s conception 390, 392 James, William 40 just noticeable difference (JND) 33 just (qualitatively) perceivable difference (JPD) 34 Kantianism 402 Kant, Immanuel 408–9 n.2 conception of the ‘I think’ 390 Ego 391 executive Ich 396 Ich denke 390 passage 391, 394 slogan 158 terms 150 Transcendental Aesthetic of time 283, 284 Transcendental Ich 391 transcendental unity of apperception 283, 395, 402 knowing and seeing 113
knowing how 110, 123 n.3 knowing-what 110 knowing-what-it’s-like response 112–13 overall assessment 118–19 knowledge 16, 62, 94, 309 conscientia and 16, 21 for moral character of action 19, 20 for performing action 19 knowledge argument 108, 115, 453–4. See also qualia; zombies cul-de-sacs 113–14 knowing and seeing 113 Lewis’s view 119–20 old-fact-new mode approaches 124 n.17 in philosophy 109 Tye’s view 120–1 knowledge of consciousness 95–6 self-presenting conscious state 95 knowledge-how 120 knowledge-to-belief (K2B), axiom of 308–9 knowledge-wh 110, 111 Kripke, S. A. argument against identifying pain 174–5 brain-state identity 178–9 conclusion 181 confusing epistemic possibility 178 weakness of argument 175–8 language acquisition theory 95 Laplacian viewpoint 47 lateralized readiness potential (LRP) 376 Lewis, David 119 knowing-what-it’s-like response 120 view on knowledge arguments 119–20 Libet, Benjamin 371–5 claims of 372 DSA 372 proximal intentions 375 readiness potential 373 SMA in awareness 374 W time or time W expression 375 linguistic approach 67–9 factual assertions 67 kind of anthropology 68 metaphysical sense 69 ocular metaphor 68 sensitive knowledge 69
linguistic philosophical model 279 locus classicus 360 Logical Investigations (Husserl) 328, 329, 338, 339 long-term memory 227 lower-order (LO) state 147 McCarthy, John 320 n.28 machine consciousness. See artificial intelligence (AI) material basis of neural correlates 30 material entities 58 materialism. See physicalism matter conscious 100 MEG (magnetoencephalography) 42 mental and intentional 331–4 object directedness 333 theory of intentionality 332 world disclosing 333 mental entities 58, 69 mentalism 242 mental representations 142 mental states 126–8 occurrent and dispositional mental states 128–30 unconscious occurrent mental states 131–3 mental-to-mental causation 407 mention-all reading 116, 123 n.4 free relative reading 118 interrogative reading 118 knowing what it’s like response 117 versus mention-some reading 116–18 mention-some reading 116, 122, 123 n.4 versus mention-all reading 116–18 mesmerizing 253 metaphysical privacy 59, 69 metaphysical sense 69 metaphysics of mind 404 methodologically construed reductionism 404 mind 238 site-specific treatment of 401 thinking 396 thought 396 Mind and World (McDowell) 392 mind-body causal regularities 405 mind–body identity 170 mind–body identity theory 170
mind–body structure 70 Mind-Body Supervenience 406 mindedness 390, 395, 398, 400, 401, 402 mind functionally 394 mindreading skills 152, 213 self-directed mindreading 215 system 226 Miranker and Zuckerman 303 commitment to perception 303 mirror test (MT) of self-consciousness 304 modal semantics 5 arguments against physicalism 454–5 (See also substance dualism) and non-modal distinction 122 monism 60 Müller-Lyer illusion 45 multiple drafts in brain theory 368 multiple drafts model (MDM) 455. See also global workspace theory multiple drafts theory of consciousness 357, 358. See also brain theory of consciousness multiple realizability 241 μ-recursive functions 319 n.13 mysterian (Cartesian) verdict 395 mysterious experience 79 The Mysterious Flame (McGinn) 395 Myth of Given (Sellars) 95 Nagel, Thomas 143 Naming and Necessity (Kripke) 170, 180 naturalism 243 naturalizing phenomenology 42 natural language 78 necessity 170–4 contingent 174 non-descriptionality 172 reassessing 179–80 understanding water 171–3 neo-Aristotelian theory 30 nested internet-based information databases 6 neural correlates of consciousness (NCC) 42, 456. See also physicalism neurological soft signs (NSS) 44 neutral monism 242 Newtonian absolute time 284 non-compositionality of emotions (CompE), axiom of 312
non-conceptual (N), in PANIC theory 144 nonhuman animals 23 n.11 conscious in 298 non-modal conception 121 non-physical supervenience 242 non-positional conscientia 15 non-propositional knowledge 120 non-reductive physicalism. See property dualism non-relational reading 353, 354 higher-order thought 354 non-temporal attribution objects 265 non-zombie 83 notion of intentionality 325. See also intentionality role in phenomenology 326 notion of the ‘Cartesian theater’ 21 now (passing moment of time) 264, 266, 271 number, categories 289 numberhood 289, 291, 302, 307 and consciousness 290 object files 218 objective awareness 187–8 neuronal signals 188 robot and apple 188, 189 visual subjective awareness 189 objective mind-independent time 268 objective physicality 247 objective physical world 243–4, 246–7 objectivity 244–5 physicality 244 objectivity 244–5 object knowledge 121 objectless presentations 328 objects of property attribution 262 observational model of consciousness 64–7 inward attention-directing model 64 language-games 67 linguistic meaning and consciousness 65 mental object 65 outward turned inward 65 traditional conception of word-meaning 66 occurrent mental states 128–30
On the Phenomenology of the Consciousness of Internal Time (Husserl) 284 operations of senses and imagination 13–17 background awareness 16 conscientia as same-order thought 15 higher-order thought 14, 21 pure reflection 17 purifying reflection 16 same-order thought 14, 21 operations of the will 17–20 actions 17 (body-involving) actions 22 n.7 conscientia (of) 18 indubitable knowledge of 18 internal relations 20 knowledge 19 Meditations 17, 21 reflective failures 20 operators 292 ordinary intelligence, logic of 235 OSA (overt sceptical arguments) 381, 382 pain 170, 174, 279–82 brain-state identity 178–9 C-fibre stimulation 170, 174–5 instructive confusion 279 involving touch sensation 280 perception of bodily injury 181 sensory data to time 281 PANIC (poised, abstract, non-conceptual, intentional content) theory 144 panpsychism 242 parallel assertions 311 Parochial Sermons (Newman) 74 n.8 passing thought 224 perceivable attributes 31 perceived objects 277 perceived stimulus 34 perception meaningful appearances in daily life 29 of sensibles 32 and time 261–3 perception-to-belief (P2B), axiom of 307–8 perceptual consciousness being actual 246–9 what is and isn’t actual 245–6 Perry axiom (TheI) 316–17 personal level consciousness 298
person-level consciousness 293 pessimists 235 disagreement about what consciousness is 236 PET (positron emission tomography) 42 phantom pain 81 qualitative or phenomenal experiences 81 phenomenal consciousness (P-consciousness) 80, 294, 318 n.4, 437–8. See also functionalism; qualia analytic philosophy of mind 296 cognitive approaches to 347–8, 368 inference schemata 294, 295 intensional first-order kernel 295 and Rosenthal’s explanatory target 351–2 self-belief 297 self-consciousness 297 self-regarding attitudes 297 phenomenal properties. See qualia phenomenal self-acquaintance 161 phenomenality 237 phenomenally conscious states 347 phenomenological components 88 n.18 phenomenology 139, 326, 456–7. See also intentionality; phenomenal consciousness (P-consciousness); qualia of conscious experience 269 philosophical anthropologists 393 philosophical grammar 75 n.21 philosophical novel 76 n.26 philosophical psychology 1 philosophical zombies 82 philosophy of consciousness 1, 2, 238 philosophy of mind 1, 2, 3, 5, 6, 21, 294, 325 analytic 326, 340 and AI 294 burden to 271 complicated questions 132, 393 theory of consciousness in 275 ‘what it is like’ phraseology 352 ‘zombie’ in 355 physical functionalism 241, 242 physicalism 123 n.1, 242, 457–8. See also knowledge argument physicality 244, 247 objective physicality 247 subjective physicality 247
physical objects, mental analogues to 70 physical reality 242 physical stuff 31 physical world 404 physics-based theory of causation 314 physiological states and awareness 332 platitude 82 Platonic ideas 32 poised (P), in PANIC theory 144 Pollock, John (philosopher) 294 positional conscientia 15 powerful inclination 23 n.13, 24 n.34 précis 57 predicate-name roles 277 prefrontal cortex (PFC) 142 activity during dream period 163 n.9 human color imaging experiments 161 introspective states 160 neural activity in 160–3 pre-reflective self-consciousness 161 unconscious and conscious HOTs 162 pre-linguistic description 58, 62, 69 pre-reflective (inattentional) self-awareness 160 presentations are not representations 41–3 computational mind and consciousness 41 grammar of seeing 41 internalist proposal 42 ontological categories 41 presentation as biological phenomenon 42 representation as contemporary science 42 veridicality 43 prior work 298 Aleksander et al. 299–300 Bringsjord and Govindarajulu 304 Bringsjord et al. 304, 305 Cunningham 300–303 Floridi 304–5 Miranker and Zuckerman 303 problem of the rock 153 products of imagination 36 proof-theoretic semantics 319 n.14 proper sensibles 32 property dualism 318 n.4, 458–9 property-object application 276, 277 propositional-attitude events 211 propositional content 85 propositional knowledge 120, 121 psyche 30 psychic and physical phenomena 35–41 acts, as psychic phenomena 35 Cartesian interpretation 36 concept of psychic energy 36 distinction between 35 dualistic interpretation 36 physical phenomena 36 theory of immanent realism 37 theory of intentional reference 36 psychic dimensions 43–7 biology of mind 47 Brentanian framework 47 colour interaction 44 Euclidean geometry 44 Müller-Lyer illusion 45 perception of causality 43 perceptual illusions 45 physical units 45 quantitative physical dimensions 43 relaxation oscillations 44 space-time subjective primitives 44 stroboscopic motion 43 unconscious processing 46 psychologically realistic limit 310 psychological novel 76 n.26 Psychology from an Empirical Standpoint (Brentano) 35, 326 psychophysical process 30 psychophysical watershed of consciousness 33–4 inner stimulus 34 Ptolemy 197 pure reflection 17
propositional content 85 propositional knowledge 120, 121 psyche 30 psychic and physical phenomena 35–41 acts, as psychic phenomena 35 Cartesian interpretation 36 concept of psychic energy 36 distinction between 35 dualistic interpretation 36 physical phenomena 36 theory of immanent realism 37 theory of intentional reference 36 psychic dimensions 43–7 biology of mind 47 Brentanian framework 47 colour interaction 44 Euclidean geometry 44 Müller-Lyer illusion 45 perception of causality 43 perceptual illusions 45 physical units 45 quantitative physical dimensions 43 relaxation oscillations 44 space-time subjective primitives 44 stroboscopic motion 43 unconscious processing 46 psychologically realistic limit 310 psychological novel 76 n.26 Psychology from an Empirical Standpoint (Brentano) 35, 326 psychophysical process 30 psychophysical watershed of consciousness 33–4 inner stimulus 34 Ptolemy 197 pure reflection 17 qualia 144, 236, 262, 348, 352, 360, 361, 459–60. See also hard problem of consciousness; phenomenal consciousness (P-consciousness) consciousness without 360–4 intrinsicality of 362 no qualia 156 qualitative experiences 30 qualitative perceivable attributes 32 qualitative terms 47 quasi-apperception 399 quasi-Ding an sich-like entity 335 Quining qualia (Dennett) 360
rationality 401 readiness potential (RP) 373 lateralized (LRP) 376 recasting introspection, understanding privacy 69–73 conception of philosophical progress 72 cross-consciousness 71 dualistic picture, elements 69 meaning-determining 72 mind–body structure 70 philosopher’s nonsense 73 philosophy of psychology 71 psychological verbs 72, 73 stimulus-response mechanisms 70 reduction(ism) 407, 460–1. See also emergence; supervenience methodologically construed 404 reductionist theory of consciousness 153 theory of the transcendental reduction 339 reflection 147 higher-order thought 14 making-explicit of conscientia 18, 21 obstacles to 16 purifying reflection 16, 21 reflective consciousness 23 n.20 reflexive representation 97 of consciousness 101–2 reflexivity 96–8 innateness and language theories 96 Reid, Thomas 94 relational reading 353, 354 first-order mental state 353 relaxation oscillations 44 report-level consciousness 373 representationalism 143–5, 461–2. See also higher-order thought (HOT) theory causal theories 143 mentalistic terms 143 naturalistic theory of consciousness 143–4 PANIC theory 144–5 phenomenal states 143 representational theory of consciousness 4 representations 250, 391 re-representation 105 res cogitans 389, 390, 398, 403
res extensa 389, 390, 398, 403 Rhesus monkeys 153 auditory information and knowledge 153 right-handed people 216 rigidity 170–4 contingent 174 definition of 171 reassessing 179–80 understanding water 171–3 robot and apple 188, 189 ego tunnel 195 linguistic interface 190 mechanistic process 192 question of clarification 194 search engine 190 subjective awareness 194 robots 106 interacting with human tester 305 Rosenthal’s explanatory target 351–2 phenomenal consciousness 351 Rosenthal’s theory centrality of cognition in 353–5 explanatory target 351–2 gist of 349 Rylean assumption 119 saliency network 220 Sartre, J.-P. 11, 15, 22 n.4, 23 n.15, 94 theory of consciousness 159 Scholasticism 330 scientia (knowledge) 11 scientific brain research 263–8 non-temporal objects of consciousness 264 streaming consciousness 263 time and consciousness 263 scientific psychology 33 lines of inquiry 34 scrub jays, experiments 152 episodic memory 152 Searle, J.R. intentionality 342 n.21 phenomenological tradition and 340 second-order state 350 non-conscious state 350–1 self 44, 65, 192, 203, 204, 207, 270, 462–3. See also multiple drafts model (MDM); self-consciousness aware of 399, 402
concept of evolution of 395, 402 construct of self 191 core self 46 description of physical self 194 functional ‘I’ 406 internal model of 199, 203 metaphysical self 243 model of 190, 191, 193 or person 397 self-conscious self 395 sense of 152 substantial self 46 virtual self 46 self-awareness 32 self-belief 297 self-concept (I-concept) 152 self-conscious mind 4, 38 self-consciousness 32, 297, 304, 394, 396, 397, 404, 408, 463–4. See also mental states conceptual self-consciousness 161 functioning of Homo sapiens 397 iterative meta-representational self-consciousness 161 mirror test (MT) of self-consciousness 304 pre-reflective self-consciousness 161 test for robot self-consciousness 304 self-description 207 self-determination 315 self-initiated movements 379 self-knowledge 62, 190–1, 215, 464. See also mental states observational model of self-knowledge 64 and other knowledge 217 self-perception framework 215 self-presentation 95–6 self-reflective attribution 284 self-regarding attitudes 297 self-representation 96–8 innateness and language theories 96 self-representational accounts 158–60 Higher-Order Global States (HOGS) 159 wide intrinsicality view (WIV) 159 self-representational approach 142 self-representational theory of consciousness 159 self-representational view 163 n.7
Sellars 98–100 sensations 63, 94 active principle of 32 sense data 102 sense experience 81 sense impressions 102 sensibles per accidens 32 Sensory and Noetic Consciousness (Brentano) 47 sensory-based broadcasting 217–22 access-conscious mental states 218 bottom-up attentional network 220 central organizing principle 218 top-down attentional network 219 sensory-based format 222 sensory-based mental events 214 sensory exemplars 105 sight-now (visual information package) 265 social perception and attention schema 204–7 awareness 205 mechanism for controlling attention 205 social cognition task 206 social function of attention schema 205–6 temporoparietal junction (TPJ) 206 theory of mind 204, 205 Socrates 260 something it’s like to be a thing 236–7 something’s being actual 240 space-time trajectory 311 speech-act model 336 speech interpretation 214 speech perception 219 spiritualism 240 standard model theory 317 n.3 state consciousness 349 and creature 442 non-circular explanation 350 stimulus-response mechanisms 70 streaming consciousness 267, 272, 273 subjective awareness 187 subjective experience 80 subjective, phenomenal experiences 84 subjective physicality 247 subjective physical world 245, 246, 247 subjectivity 242 subjunctive conditionals 130
substance dualism 357, 382, 465. See also physicalism; property dualism substantive cogitans 395 supervenience 386 n.5, 465–6 of consciousness on neurophysiology 260–1 supplementary motor area (SMA) 374. See also zombies in awareness 374 syntactic functions 30 System 2 inferential processes 223 targetless (empty) HOT cases 155, 156 test for robot self-consciousness 304 theories of subjective experience 30 theory of consciousness 5, 28, 72, 142, 154, 191, 242, 275, 278, 317 Brentano’s theory of consciousness 38 Damasio’s theory of consciousness 46 fame in the brain theory of consciousness 348, 355–64 FOR theory of consciousness 144 HOR theory of consciousness 146 HOT theory of consciousness 146, 348–51, 368 philosophical theory of consciousness 162 reductionist theory of consciousness 153 representational theory of consciousness 4 Sartre’s theory of consciousness 159 ‘self-representational theory of consciousness’ 159 theory of exemplarization 100 theory of intentionality 35, 328, 332 theory of mind 78, 161, 204, 205, 213, 389, 392, 393. See also dispositional HOT theory mechanism 149 theory of phenomenal consciousness 149, 212 theory of the transcendental reduction 339 third-order state 350 third-person structure of consciousness 291–2 this (data attribution) 264, 266, 271 This now 263, 264, 266, 267, 271 thought 211
always unconscious 211 awareness of thought 12 background awareness 16 conscientia and thought 12 conscientia as same-order thought 15 events of wondering 211 incorrigible thoughts 24 n.23 intellectual thoughts 13 primary object 15 secondary object 15 truth-value 13 thought experiment 351 thought structures, in linguistic structures 261 time 261–3 and consciousness 263, 270 as hard problem of consciousness 268–70 naturalizing phenomenology 42 non-existence of 283 notion of intentionality role in 326 as property 276–9 transcendental phenomenology 337 understanding time 282–5 time-slice of information processing 282 traditional mind-body problem 1 traditional mind-brain dualism 240 traditional or familiar subjectivity 237 Transcendental Aesthetic (Kant) 283 Transcendental Ich 391, 392 transcendentalism 337 transcendental phenomenology 337 transcranial magnetic stimulation (TMS) 220, 229 n.6 transitive consciousness 349 central to Rosenthal’s theory 349 transitivity principle 349 Trevena and Miller studies 376, 377 truism 253–4 identify Actualism 253 truth 101–2 cognitive consciousness 249 discursive theory of truth 102 theory of truth 101 truth-value 13, 121, 259 Turing test 466–7. See also artificial intelligence (AI) Tye, Michael 118, 144 PANIC theory 144
Umwelt 398, 400 un-Cartesian idiom 13 unconscious brain processing 270–1 subjective association 270 This now 271 unconscious mental state 5 unconscious neurophysiological information processing 268 unconscious occurrent mental states 131–3 introspective awareness 131 understanding consciousness 263–8 non-temporal objects of consciousness 265 time and consciousness 263 understanding consciousness, misconceptions actual awareness generation 196 euphemism for ‘ill-posed problem’ 196 higher cognition and attention schema 195–6 unity of consciousness 467–8. See also qualia unity-of-science approach 407 unphenomenological intentionality 329–31 Husserlian phenomenology 329 non-conscious intentionality 330 non-intentional consciousness 330 paradox of non-existent objects 331 Scholasticism 330 unsuccessful sceptical argument 381–5 conscious reasoning 382 goal intentions 383, 384, 386 n.7 implementation intentions 383 Libet-style experiment 384 memory 385 metaphysical assumptions 382 OSA 381, 382 thinking thing 382 V Investigation 329 visual attention 199 method for measuring 200 testing attention 201 visual experience 127, 143, 176, 353
conscious visual experience 188 state of phenomenal consciousness 364, 365 visual stimulus 189, 199, 200, 362 attention to 201, 202 visual subjective awareness 189 West Coast phenomenology 341 n.10 what is actual 245–6 what isn’t actual 245–6 what it is like 4, 78, 82, 84, 95, 100, 361, 362 knowing 101, 106, 118, 121, 122, 134, 351, 352 objection 5 phraseology 351, 352 qualitative 154 sense 143 wide intrinsicality view (WIV) 159 Wittgenstein’s mode of inquiry 60–4 approach to philosophical difficulty 60 chess-piece approach 62 kind of ‘ism’ 60 knowing and awareness 64 modelling of introspection on inspection 62 person’s consciousness 61 practice over theory 61 tangibles and intangibles 61 theological circles 63 word-meaning 66 traditional conception of 66 working memory 219, 220 sensory working memory 222 standard tests of 221 Zahavi, Dan 399 Zeus, Greek mythology 328 zimbo 356 zombie-doppelgänger 83 zombies 4, 355, 356, 468–9. See also knowledge argument; qualia concept of 138 objection 251–2 unconscious things 251 zombie theory 252 Zombie Tom 84