Natural Language and Possible Minds: How Language Uncovers the Cognitive Landscape of Nature [1st ed.] ISBN 9789004344204, 9789004316652


Natural Language and Possible Minds

Value Inquiry Book Series

Founding Editor: Robert Ginsberg
Executive Editor: Leonidas Donskis
Managing Editor: J.D. Mininger

VOLUME 303

Cognitive Science
Edited by Francesc Forn i Argimon

The titles published in this series are listed at brill.com/vibs and brill.com/cosc

Natural Language and Possible Minds
How Language Uncovers the Cognitive Landscape of Nature

By

Prakash Mondal

LEIDEN | BOSTON

The cover illustration is used with permission from Shutterstock. The Library of Congress Cataloging-in-Publication Data is available online at http://catalog.loc.gov

Typeface for the Latin, Greek, and Cyrillic scripts: “Brill”. See and download: brill.com/brill-typeface.

ISSN 0929-8436
ISBN 978-90-04-31665-2 (paperback)
ISBN 978-90-04-34420-4 (e-book)

Copyright 2017 by Koninklijke Brill NV, Leiden, The Netherlands. Koninklijke Brill NV incorporates the imprints Brill, Brill Hes & De Graaf, Brill Nijhoff, Brill Rodopi and Hotei Publishing. All rights reserved. No part of this publication may be reproduced, translated, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission from the publisher. Authorization to photocopy items for internal or personal use is granted by Koninklijke Brill NV provided that the appropriate fees are paid directly to The Copyright Clearance Center, 222 Rosewood Drive, Suite 910, Danvers, MA 01923, USA. Fees are subject to change. This book is printed on acid-free paper and produced in a sustainable manner.

To Mahamaya, my divine mother



Contents

Preface ix
Acknowledgements xi
1 Introduction 1
 1.1 On Minds and Mental Structures 2
 1.2 A Note on the Methodology 13
 1.3 Why Natural Language? 14
 1.4 Summary 31
2 Natural Language and the Linguistic Foundations of Mind 34
 2.1 Language as a Window onto Thought and Reasoning 36
 2.2 Language as Conceptualization 44
 2.3 Language as a Mental Tool 55
 2.4 The Expressive Power of Natural Language and Ineffability 61
 2.5 Summary 66
3 Possible Minds from Natural Language 68
 3.1 Linguistic Structures and Mental Structures 70
 3.2 Mental Structures and the Forms of Possible Minds 106
 3.3 Summary 143
4 Natural Language, Machines and Minds 144
 4.1 Machines and Minds 147
 4.2 Computation and Natural Language 159
 4.3 Summary 173
5 Possible Minds and the Cognitive 174
 5.1 Summary 205
6 Conclusion 206
References 217
Index 231

Preface

This book is part of an attempt to understand the nature and form of mentality in a vaster range of beings than is currently acknowledged in mainstream thinking in cognitive science. While the nature of human mentality is not under suspicion, explorations into the nature of other kinds or types of mentality often raise eyebrows. Set against this backdrop, this book tries to trace the very foundations of minds and, in doing so, seeks to project a vista within which a tapestry of different types of minds can be delineated. Perhaps the only way the whole project here differs from other avenues of thinking in certain quarters of cognitive biology and cognitive semiotics is that this book attempts to project a picture of distinct types of mentality across organisms by drawing up a formalism extracted from natural language within which the abstract components of mentality can be described in biological substance-independent terms. This formalism is thus descriptive rather than explanatory. The formalism tries to grapple with the problem of describing the structural forms of possible types of mentality across the organismic spectrum. This is necessitated by the consideration that predicting or determining what other non-human organisms (can) do on the basis of what they have in their inner realms is still beyond our grasp. This is so because we do not yet have a handle on the question of whether other non-human species have anything approximating to mentality. Hence a more fruitful way of approaching the forms of possible types of mentality across the spectrum of various organisms or species is first to settle the descriptive problem of expressing what there is that can be individuated inside the inner realms of non-human beings. With this goal set, the book undertakes to understand possible minds from natural language.
Far from injecting an anthropomorphic bias into an otherwise exploratory account, natural language turns out to carry certain advantages that have never been harnessed. Additionally, it emerges that this has surprising consequences for a description of the type of mentality we can attribute to machines. The nature of computation vis-à-vis natural language is also explored in this connection. Overall, this lends credence to the idea that natural language has limitless potential for the task of unraveling many as yet unresolved riddles and puzzles surrounding language, minds and computation. Nevertheless, I also think there is a danger inherent in any attempt at uncovering the mental world of the other—an issue which deserves careful consideration as well as caution. But as the discussion proceeds, I’ve tried to show that some, if not all, of the concerns can be adequately addressed if the sources
of the confusion and vagueness are identified. The problems are not simply theoretical; they are empirical too. In fact, one of the underlying assumptions adopted in the current work is that the cognitive landscape of nature is far more variegated and kaleidoscopic than we may ever conceive in the safest corners of our chambers of intellectual inquiry. No matter how the details of the proposal presented here unfold, one thing that seems clear is that even a description of distinct types of possible mentalities grounded in the natural world will require a stupendous amalgamation and aggregation of facts, ideas and insights drawn from various modes, fields and aspects of intellectual inquiry. This book is a mere fragment of this rather daunting venture. Finally, I invite readers of all stripes to explore the book and decide for themselves what they need to seriously understand about minds in nature and the relation of natural language to minds.

5 January 2017
Hyderabad

p.m.

Acknowledgements

I thank the two anonymous reviewers of this book project who have provided constructive and corrective criticisms of some of the lines of thinking developed in the book. In this connection, I express my gratitude to Francesc Forn i Argimon, the editor of the book series Cognitive Science, for shaping the work in a way desired for the emergence of a more mature rendering of some of the contents of the book. I’m also indebted to Eric van Broekhuizen, the acquisitions editor at Brill, for immediately understanding the significance of the project. Thanks also go to Bram Oudenampsen for taking the book project to its final stage of production, and to Jarno Florusse for facilitating the rest of the process quite smoothly and patiently. Finally, I thank Avishek Chakraborty and Kiran Pala for all the discussions we have had, and for the support they have supplied unhesitatingly.

chapter 1

Introduction

Natural language has an intimate connection to the nature of minds. Ideas, thoughts, feelings and other nebulous entities attributed to minds are often expressed and represented in natural language. One also comes closer to understanding what the other person entertains in the mind by using natural language. It may now become clearer that natural language is here taken to be human language. But the link between natural language and the nature of minds still possesses an inchoate character because the precise nature of the relationship between natural language and minds is still obscure, and thus has to be explicitly articulated. To be clearer, by saying that natural language has an intimate connection to the nature of minds we do not mean to simply assert that the use of natural language in actual circumstances helps discern and decipher the intricate patterns of human thoughts, intentions, plans, mental strategies, emotions etc. The present context in fact demands much more than this, although it is tempting to stick to this kind of behaviorist exploitation of natural language for explorations into the mental territory. The complex system of linguistic structures manifest in any natural language unpacks a host of assemblies of mental organization that reliably correspond to patterns of linguistic structures. In other words, a range of systematic patterns of linguistic structures reveals an ensemble of ‘mental structures’ which can be taken to be the representational properties of the underlying mental organization. For instance, in English the sentence ‘He pushed the bottle towards her’ expresses a mental assembly of conceptual relations which is to be distinguished from that expressed by the sentence ‘He crushed the bottle for her’. Thus, a variety of linguistic structures express a variety of mental structures that can be traced to the internal representational and/or encoding machinery of the mind.
In this sense, variations in the representational properties of the underlying mental organization can be correlated with variations in the forms of minds. At this juncture, it appears that the notion of forms of minds must be such that it helps track variations in representations of thoughts across diverse sections of humans. This is indeed the case, insofar as the focus is restricted to the members of Homo sapiens. But the present book raises the question of whether possible forms of minds across a range of non-human organisms and/or systems can be tracked by marking patterns of variations in a vaster gamut of mental structures extracted and projected from natural language. As far as the whole enterprise of the inquiry undertaken by the present book goes, the answer provided here is resoundingly yes.

© Koninklijke Brill NV, Leiden, 2017 | doi 10.1163/9789004344204_002

As the current level of inquiry into the nature of the mind penetrates into the human cognitive machinery, it is striking that the structure of the human mind is often explored by looking into what natural language reveals about our cognition. This appears to induce an anthropomorphic bias when we go about figuring out what other non-human types of mentalities look like. The reason this is so is that the mental structures revealed by examining natural language(s) are considered to be intrinsically human, given that the structures of human language, as opposed to the expressions of formal languages, are the sources of significant insights into the nature of our minds. This is the cornerstone of this line of inquiry. The current work proposes to turn around the idea behind this inquiry by showing that the humanness of natural language does not introduce an anthropomorphic bias. Far from it, linguistic constructions and phenomena can actually tell us a lot about the structure and form of other possible types of mentality in other non-human creatures and plausibly even in some plants. One reasonable way of approaching this is to incorporate the idea that the mental structures that can be tapped through natural language can yield the mental structures that cannot possibly be tapped through natural language once the former is subtracted from the collection of all possible mental structures which is independently postulated on the basis of cogent generalizations. Crucially, this book aims to demonstrate that there is nothing in the existing hypotheses and theories on the relationship between natural language and cognition that prevents mental structures from being realized in other non-human organisms or creatures and plausibly in some plants. Before we proceed further to see what the book has to offer by way of understanding the character of mentalities through natural language, some of the basic concepts employed throughout the book need to be clarified.
1.1 On Minds and Mental Structures

So far the discussion has conveyed the impression that the notion of minds deployed in the present context is special. But one may now wonder in what substantial way it is actually special. We know for sure that minds have a special status in the realm of biological entities. But a notion of minds that is flexible enough to be tailored to diverse biological contexts of various organisms and species is, of course, desirable as mental structures are postulated as the cognitive ingredients of mentality in a general sense. For one thing, all that matters when one speaks about minds is whether it allows for invisible capacities that are regarded as the central facets of any mind, whereas what we require here is
a handy conception of minds that does not simply attach to any unique form of the biological substrate but rather links to many realized forms of substance found in the biological space. It may be noted that not much over and above mental capacities and/or processes needs to be considered when speaking of minds just in the same way as not much over and above biological processes needs to be taken into account when talking about life. In this sense, minds are ways of speaking of certain entities and processes just like life is a way of speaking of biological processes (see Mayr 1982). Plainly, minds are not physical objects made up of certain types of substance. Therefore, from a certain perspective, the contention that the boundaries of life are exactly the boundaries of minds may seem justified. On this view, one has to look no further than the domain of living entities to delimit the possibilities of minds. But it carries with it the presupposition that the living world reliably coincides with the world of properties we usually or naturally ascribe to minds. The problem is that such a view, sensible though it may seem, inherits a weak form of anthropomorphism, for the properties we usually ascribe to minds are to be discerned from our own case that we usually or naturally observe and theorize about. It is also worthwhile to note that simply stating that the boundaries of life are identical to the boundaries of minds trivializes the very notion of what minds really are. This is so because minds are then plainly and somewhat grossly equated with emergent forms of life. This issue deserves a bit of elucidation here. On the one hand, we cease to get a handle on how to demarcate minds in such a way that minds are recognizably specified with respect to multiple forms of biological substance found in the realm of life, for if all life forms have minds, all life forms have an equally consequential stake in the business of mentality.
On the other hand, this notion is not fine-grained enough to tell birds apart from plants, for example, or for that matter, humans apart from other primates. This seems to indicate that there is something over and above the mere demarcation of minds that goes beyond an identification of the boundaries of minds. Rather, this suggests that demarcation is not sufficient for the characterization of what kinds of things minds are. But note that this cannot be taken to imply that the idea that the boundaries of life are the boundaries of minds is inherently misguided. As the arguments in this book unfold, we shall see that this idea forms the background scaffolding of what is to be developed in the succeeding chapters of the book. That is, this idea can constitute the background assumption so that this can be fine-tuned, suitably modified and sharpened further to yield the desired concept of what minds are. For all we know about minds, it is apparent that they cannot be simply identified with the body or any part of the body including the nervous architecture
or with anything akin to the nervous architecture having similar functions. This means that it would be too simplistic to reduce minds to bodies and brains or any such forms of biological substance because this invites the problem of Cartesian dualism that consists in positing two distinct ontological domains for the mental stuff and the biological substrate. A view of dualism may carry with it the danger that one may view the matter with a reasonable amount of skepticism in thinking that an anthropomorphic bias is imperceptibly being passed on to the characterization of what minds are. The reason this may appear to be so is that only those cognitive abilities and capacities that are usually championed or regarded as great achievements in the mental world of humans seem to be disengaged from the biological substance. Reasoning, thinking or cognizing and so on are the exalted candidates under this category, whereas eating, smelling, seeing, feeling etc. are usually not thought to fall under this category. This is largely rooted in the Cartesian bifurcation between (human) minds and non-minds either in inert objects or in non-humans. In any case, what seems clear is that the case for substance-independence of minds has to appeal not to a trivial notion of independence from any substance whatsoever, but rather to potential independence from different forms or types of substance which minds can be linked to. In other words, it would be wrong to simply say that minds are substance-independent to the extent that it inherits the dualistic segregation of minds and biological substance, thereby nullifying the case for distinct types of mentality in non-human organisms and creatures simply because mental types cannot, then, be said to vary depending appropriately on the type of biological substance chosen. 
Hence minds can be said to be substance-independent only insofar as the postulated independence is taken to be a kind of non-unique dependence, but not a general across-the-board type of independence. That is to say that the required notion of independence is cashed out positively in terms of a kind of dependence which is actually non-unique in nature. This leads to the view that minds cannot be linked to any single substance; rather, they can be linked to multiple forms of substance. Thus, mind’s substance-independence is a kind of logical or potential independence but not a kind of ontological independence that requires non-trivial metaphysical commitments. Now at this stage, it may seem that this is just another functionalist argument from multiple realizability which consists in the claim that a higher level (cognitive) function can be realized in multiple forms of substance. But this is mistaken for various reasons. First, multiple realizability in its essence derives from the computationalist view that a certain computable function can be realized in a number of hardware systems, thereby supporting a one-to-many mapping from computable functions to tokens of hardware (see Fodor 1975;
Pylyshyn 1984; but see Putnam 1988 and Polger and Shapiro 2016 for a sustained critique). In simpler terms, this springs from the idea that a software system can be realized in many different types of hardware. A view of non-unique dependence, albeit apparently compatible with multiple realizability, differs from it in that multiple realizability does not really ‘care’ about the type of hardware chosen, while non-unique dependence is genuine dependence, only not on any one single substrate. To put it in another way, a given type of substance matters for non-unique dependence precisely because it is a case of dependence in each particular instance or condition of something depending on a given substance, whereas multiple realizability is not a case in which individual instances of realization in various types (or even tokens) of hardware are instances of dependence since the relevant computable function need not be realized anyway. That is, mind’s non-unique dependence on diverse types of substance demands that minds become anchored to at least two distinct types of substance because only then can we state that minds are not linked to any single substance. But the same cannot hold for multiple realizability, crucially because it is not even necessary that computable functions are realized in any single substance. Second, it may be noted that mind’s substance-independence is a kind of logical or potential independence, as stated above, whereas multiple realizability cannot be simply a case of logical or potential independence—it is more than that. Since multiple realizability may obtain even when no function is actually realized in a substance, multiple realizability may be a case of ontological independence plus logical or potential independence.
This distinguishes multiple realizability from non-unique dependence in a striking fashion as multiple realizability is a super-order concept a part of which is shared by non-unique dependence. This is because everything that is a case of logical or potential independence may not automatically be a case of multiple realizability (a noncomputable function which is logically disengaged from any realizing physical system cannot be said to be multiply realizable, and the same is true of something like the largest prime number). Third, the underlying basis of multiple realizability is the very concept of realizability which can hold in many forms of substance, but the essence of non-unique dependence lies in the notion of dependence which warrants a relation logically distinct from that of realization. If A depends on B and B depends on C, we can always infer that A depends on C, but if A is realized in B and then B is realized in C, it does not follow that A is realized in C. Thus, for example, if a time calculation algorithm is realized in a digital wall clock which is in turn realized (in the sense of being embedded) in the concrete structure of a wall, it would be absurd to say that the time calculation algorithm is also realized in the wall.
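The logical contrast just drawn can be put schematically. The following is only a minimal restatement in standard notation, where the predicates D (‘depends on’) and R (‘is realized in’) are introduced purely for illustration: dependence is transitive, whereas the analogous inference for realization is invalid.

```latex
% Dependence is transitive:
\forall A\,\forall B\,\forall C\;\bigl(D(A,B) \land D(B,C)\bigr) \rightarrow D(A,C)

% Realization is not; the corresponding schema fails to hold in general:
\neg\,\Bigl[\forall A\,\forall B\,\forall C\;\bigl(R(A,B) \land R(B,C)\bigr) \rightarrow R(A,C)\Bigr]
```

On this rendering, the wall clock case supplies a counterexample to the second schema: R(algorithm, clock) and R(clock, wall) both hold, while R(algorithm, wall) does not.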

Having clarified the notion of substance-independence we have targeted in the present context of the discussion on the relation between mind and its substrate, we may feel that the characterization of what kinds of entities minds are is achieved. As a matter of fact, this is not fully right. Specifying the relation between mind and its substrate does not simply amount to specifying what minds are any more than specifying the relation between a given equation and its realization in the mind or even on a piece of paper amounts to specifying what that equation really is in its fundamental nature. In fact, we require something more than this. But we may wonder whether minds are really the entities that can be characterized the way numbers, for example, are characterized. After all, it is quite plausible that minds conceived in a more general sense which is couched in broader terms do not fall within the phenomenal limits of the organismic envelope. On the one hand, the existence of minds within a generally broad organismic envelope is neither entailed nor made viable by considerations of intentionality, as expanded on in Section 1.3, or of consciousness and various kinds of cognitive processing. In particular, this is not guaranteed by consciousness in that consciousness is not a simple unitary phenomenon whose character can be reliably and appropriately fractionalized such that the separable parts can be mapped, in distinct combinations, onto distinct types of possible minds. Moreover, a general characterization of consciousness which is demarcated independently of specific biological substances and yet applicable to many organisms across the organismic spectrum is hard to come by. Thus, for example, sensory consciousness, which is characterized by different degrees of integration of sensory features and qualities of the world through increasing hierarchical layers of abstraction, can be a candidate for a general level of consciousness (see Feinberg and Mallatt 2016).
But the problem here is that it unjustifiably excludes living entities (such as plants) that may not have any kind of sensory integration through hierarchically organized neural structures. Thus, an appeal to consciousness to help build demarcations within the space of possible types of mentality would end up being too restrictive. Besides, the existence of minds is likewise not made necessary by the individuation of the instantiation relation between mind and its substrate, for no instantiation relation between minds and the kinds of substance they are instantiated in can unequivocally determine the boundaries of (possible) minds. This is the case by virtue of the fact that the required instantiation relation between minds and various kinds of substance may be many-to-many. We cannot prima facie stipulate that the human biological substance can instantiate only its own type of mentality reserved only for Homo sapiens—it is plausible that many other types of mentality are themselves embedded within what we may recognize as the human type of mentality, however characterized.

But then one may believe that minds conceived in a general sense—the sense which is warranted in the present context—can be approached by stripping the human type of mentality, as we understand it, of all its intellectualist attributes and then appealing to those aspects of interactions with the environment that are bottom-level or low-level processes to be postulated as characteristic of minds. The candidate that comes closest to fulfilling this goal is the basic perceptual process, which seems to capture much of the low-level territory of mind’s operations and processes. Since all organisms and plants interact with the outer environment and act upon certain properties, features and resources of the environment they are in constant touch with, it is the perceptual configuration in various creatures with respect to the perceptible or perceived world that appears to project a general version of mentality which can range over an ensemble of organisms, simple or complex. That the perceptual mind seems to be the basic format of minds can also be traced to the view that non-human organisms live in a world of immediate perception beyond which their world ceases to exist (see Dummett 1993b, p. 123, for example). Now regardless of whether or not perceptual abilities in various organisms are attuned to the immediate world, it seems clear that perceptual abilities do not immediately lend themselves to being molded into the basic texture of minds. The reason is that what kinds of systems minds are cannot be entailed by the detection of perceptual abilities. Consider, for instance, the interaction of two magnets whose opposite poles attract but whose like poles repel each other. On the face of it, a minimal form of account that does not presuppose the understanding of physics or behavior of magnets as physical objects may bestow perceptual capacities on magnets. But this is, of course, nonsensical.
The problem emanates from the behavioristic criteria attaching to the way perceptual capacities are recognized. That is, it is only by looking at the behavior of a certain creature or even an inert object that one may surmise that the creature or the object concerned has perceptual capacities. But as soon as we recognize that behavior does not automatically underwrite the mechanisms or structures lying within, it is hard to see how one can drive home a general conception of minds by taking perceptual capacities to be the basic format. One may attempt to resist this conclusion by insisting that we already know for sure that a creature is a living entity, while an inert object is a non-living entity. This argument misses the point altogether, for it is not the question of whether one has the required familiarity with the entity in question—rather, it is the question of whether one can really read back from mere observation of perceptual abilities to the recognition of a mind-like system. In fact, in many cases we may not even possess a rudimentary form of knowledge of or familiarity with even many living entities, whether they are plants or microbial
organisms, and then we may treat them as inert objects rather than living entities by reading much into apparent observations. Our familiarity with the taxonomy of living and non-living entities cannot dictate what kinds of things minds really are. The point raised here does not, however, imply that we cannot understand anything at all about the structure of minds by working back from perceptual abilities to the system within that generates the behaviors which can be predicated on the perceptual abilities concerned. As a matter of fact, we can capture insightful glimpses into the structure of minds by appreciating the significance of non-perceptual contents of minds as they interface with perceptual capacities across organisms. This brings us to a point raised by Bermúdez (2003) who thinks that an otherwise justified restriction to perceptual capacities leaves out of consideration many non-perceptual processes and their contents which may well exist in many creatures, and possibly in other smaller organisms. The caching behavior of scrub-jays, the courtship behavior of European starlings, passerine birds’ behavior of bringing food to the eggs in the nest in anticipation, the nest building behavior of many birds in anticipation of eggs, tool uses in different primates such as chimpanzees, the hiding behavior in cats etc. and also many relevant behaviors of various creatures in unfamiliar situations illustrate cases where the actions in question are not simply attached to the immediate sensory-perceptual environment. Although it is reasonable to believe that non-perceptual capacities entail perceptual capacities, but not vice versa, and thus it is safer to postulate perceptual capacities for minds because they constitute the broader category of mental capacities and abilities, this is problematic for two reasons. 
First, the mere fact that perceptual capacities constitute a broader category than non-perceptual capacities does not mean that this minimalist orientation gains, by fiat, a purchase on the character of minds. The minimalist orientation turns out to be nugatory, on the grounds that it is not fine-grained enough to motivate subtler distinctions among different types of mentality as it grossly brings together all organisms or species under its ambit. This suffers from the same defect that was pointed out above for the view that the boundaries of minds are the boundaries of life despite its significance on certain other grounds. The same consideration applies to any appeal made to Morgan’s Canon that bans ascriptions of higher mental capacities to a behavior that can simply be interpreted as the outcome of some lower mental capacity (Morgan 1894). Second, when it is the case that perceptual capacities are entailed by non-perceptual capacities since the former form a broader kind, it is also the case that non-perceptual capacities are not entailed by perceptual capacities. Now this means that we cannot necessarily infer the existence of non-perceptual capacities from the presence of perceptual capacities. That is to say that perceptual
capacities need not imply the presence of non-perceptual capacities. And if this is so, what justifies the restriction to only perceptual capacities for the conceptualization of minds in a general sense? After all, many possible minds can have non-perceptual capacities along with perceptual capacities. So, for example, even if one hundred organisms out of a thousand do not possess any non-perceptual capacities but have perceptual capacities, we need to have an account of those nine hundred organisms whose mentalities are constituted by both non-perceptual capacities and perceptual capacities. Surely this cannot be a matter of quantitative weights that can be read off from the statement that the existence of non-perceptual capacities cannot necessarily be inferred from the presence of perceptual capacities. Hence the logical relation between non-perceptual capacities and perceptual capacities cannot be cashed out, at least in a straightforward way, in terms of biologically significant relations among the mental capacities across species. It is noteworthy that Bermúdez (2003) has gone on to offer an account of the non-perceptual contents in non-human creatures by proposing a non-linguistic (or simply non-propositional) version of semantics called success semantics aimed at recasting beliefs and desires inherent in goals in terms of certain external conditions. According to him, beliefs have as their contents utility conditions—conditions, or rather states of affairs that make a belief true, and desires have satisfaction conditions which are construed as states of affairs that match desires to actions, thereby terminating the desire. While this may well wedge non-perceptual and yet non-propositional contents into the mental machinery conceived in a non-human way, this transfers the burden of non-perceptual contents to the external world and borders on a behavioristic way of individuating contents. The present proposal aims to approach this in a quite different way.
Given the vagaries in articulating what minds really are, it is far more appropriate to pose the question in a way that is tailored to meet the requirements of the present inquiry without running into the sort of problems delineated above. That is, instead of asking what minds really are, we may now ask what constitutes what we recognize as minds conceived in the customary general sense of the term. Or simply, what constitutes mentality? The answer the present proposal aims to advance is that it is mental structures that constitute the basic texture of what we may usually discern as mentality. Mental structures are the ingredients of mental types, but not of mental tokens. In fact, mental structures can suitably replace that which we ascribe to organisms as possessing as part of the resources that allow them to perceive, represent, interact with, or act upon the world. Now it may appear that this formulation is circular because minds or mentality is characterized or defined on the basis of some structures which are true
or characteristic of minds. This apparent circularity is dissolved once we realize that the formulation above is not so much a definition or a reduction as a substantive (re-)description that is indicative of the fabric of mentality. That is to say that even if we substitute some other term, say, m-structures or even x-structures for the phrase ‘mental structures’, there is no loss of the substantive sense attaching to the phrase ‘mental structures’. Thus, the formulation above does not ride on a linguistic reduction or characterization of terms like ‘minds’ or ‘mentality’. More will be said on this later on, and this issue will be further explicated in Chapter 3. Suffice it to say for now that what is important about mental structures is that they have two different dimensions or modes. One is that they underlie linguistic expressions, and the other is that they are not just representations or reified structures that abstract away from the biological substrate. They may be grounded as internal states either in a nervous architecture or within the bodily system of an organism as a whole.1 In this respect, mental structures are not to be aligned with basic perceptual processes if conceptualized in line with what Burge (2010) appears to think. He takes perception to be a quasi-algorithmic process that does not have a representational character, and hence it can be said to be implemented in non-humans and also infants who may lack representational or higher-order cognitive capacities. However, the problem, as one may see with his views on perception, is that he considers it to be largely modular, which does not comport well with the present view of mental structures which may be embedded within the biological constitution of a species, or simply, within the internal states of the body as a whole. 
For similar yet slightly distinct reasons, Gauker’s (2011) view that many non-human creatures can have imagistic representations is also not ripe for the development of species-general structures individuating types of mentality. Imagistic representations, just like perceptual representations, do not allow for unique or partial decompositions the way mental structures can, by virtue of the fact that they will have the logical structure of relations in the mathematical sense (as will be formalized in Chapter 3). The mentally organized image of a tree, for example, cannot be logically linked to the part of the image for the trunk—only whole images rather than parts of them count. This prevents imagistic representations from being subject to partial virtual manipulations for reassembly, re-integration, and creation of new combinations from parts. Quite aside from that, imagistic representations cannot be postulated for organisms such as plants that do not have any perceptual apparatus configured in terms of sensory-motor organs. This does not, however, impose any ban on imagistic representations being linked to or fed into mental structures, or vice versa, especially for organisms that possess the perceptual apparatus. It is reasonable to think that perceptual or imagistic representations can often index and shape mental structures when certain perceptual or imagistic representations are evoked more than once for the recognition of objects and features via re-identification and re-extraction. Thus, for example, when the features of certain food items in an organism’s environment are perceived and gradually evoked over and over again for the reification of a sign-like form linked to the set of food items naturally found, the relevant perceptual or imagistic representations can give rise to and thereby shape mental structures that are fine-grained enough to determine if some arbitrary item is the same as some member of the set of items found, or is of the same category. But this cannot be taken to imply that mental structures are in themselves perceptual or imagistic representations, for mental structures do not directly interface with the actual world. In this connection, it is noteworthy that perceptual or imagistic representations regarded as signs that (may) map themselves onto goals or actions in the form of responses these signs stand for enter into a causal or semiotic relation that directly transforms something extracted from outside into responses/actions which in turn conserve such causal or semiotic relations over many instances of events. Once perceptual or imagistic representations assume sign-like forms, it is mental structures that determine which sign relations are to be deployed in each given situation, thereby paving the way for the emergence of new and novel sign relations in unfamiliar settings and situations. 

1 But see Chapter 4 for a slightly distinct way of implementing mental structures in connection with the relationship between mental structures and machine cognition.
This is made viable by the open-ended form of mental structures which, by virtue of not projecting onto the world directly, can link to multiple sign relations between inner needs and actions within and across organisms. Thus, there is nothing that can stop a certain sign relation that obtains in food gathering, for example, from being employed for hunting or even playing. This has ramifications that will be developed as we proceed to formulate mental structures for kinds of organisms in Chapter 3. From another perspective, the exact relationship between mental structures as specified above and linguistic expressions deserves elucidation. When we state that mental structures are interpreted structures that underlie linguistic expressions, the underlying idea is that mental structures can be revealed by examining linguistic structures. That is, linguistic structures serve to disclose mental structures that are structures having no meaning in themselves and are pre-interpreted within the contextual constitution of the exercise of various capacities and actions of organisms. To give an example, the mental structure that can be uncovered from the sentence ‘He danced with her but never sang for her’, for instance, can be said to be pre-interpreted in the sense that it is an
abstract structure which is internally accommodated and assimilated by the encoding mechanisms of neural networks and bodily processes that engage in the relevant actions associated with the mental structure at hand (in this case, dancing and singing). In other words, the relevant mental structure has to be assimilated and integrated into the system of neural and other bodily processes (responsible for motor, proprioceptive, kinesthetic interactions) for later evocation, deployment and iterative reuse. Note that mental structures taken in this sense can be encoded representations or embodied structures or both at the same time. This means that the two dimensions or modes of mental structures correspond to two different scales—the link to linguistic expressions forms the abstract higher-order scale (which we figure out by applying our meta-cognitive abilities) while mental structures as internal states become part of the scale of physiological configurations. Beyond that, it is crucial to understand that mental structures are not determined by linguistic structures, and in being so, they do not stand in a relation that can be cashed out in terms of an enabling or mirroring relation. Rather, linguistic structures bear certain logical relations which evince mental structures. This can be taken to mean that linguistic structures do not enable mental structures, in that enabling is ultimately a weakened or diluted causal relation and hence it inherits relations of a causal chain which does not harmonize with the character of mental structures. Mental structures cannot be either caused or enabled by linguistic structures because they can stand alone independently of linguistic structures. 
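To make the relational character of mental structures concrete, the following minimal sketch models the mental structure behind ‘He danced with her but never sang for her’ as a relation in the mathematical sense, that is, a set of tuples. The tuple layout (predicate, arguments, polarity) and the function names are assumptions of this sketch, not the formalism to be developed in Chapter 3; the point is only that such a structure, unlike a whole-only image, admits partial decomposition and recombination of parts.

```python
# Illustrative sketch only: a "mental structure" rendered as a relation in
# the mathematical sense (a set of tuples). The (predicate, argument,
# argument, polarity) layout is an assumption of this sketch.

# Mental structure uncovered from 'He danced with her but never sang for her'
mental_structure = {
    ("dance", "he", "her", True),   # he danced with her
    ("sing", "he", "her", False),   # he never sang for her
}

def decompose(structure, predicate):
    """Partial decomposition: extract the sub-relation for one predicate."""
    return {t for t in structure if t[0] == predicate}

# Unlike a whole-only image, the relation can be taken apart, reassembled,
# and recombined into new wholes from its parts.
dancing = decompose(mental_structure, "dance")
recombined = dancing | {("sing", "he", "her", True)}  # a new combination

print(dancing)
print(recombined)
```

Nothing in this toy encoding depends on language-specific syntax; the same set-of-tuples layout could in principle encode a structure that no linguistic expression happens to express, in keeping with the claim that mental structures can stand alone.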
Likewise, mental structures cannot be mirrored by linguistic structures because mirroring demands a kind of isomorphism between the object that is mirrored and the image itself which has to be preserved over all transformations of the mirroring object, that is, over transformations linguistic structures may undergo.2 This does not hold true for mental structures since a transformation of linguistic structures may alter the mapping to the mental structures concerned. Moreover, logically speaking, mental structures may or may not be compositional relations, and hence they cannot be simply predicative. While all predicative relations (e.g. F(x), where F is true of x) are compositional as the composition of predicates gives rise to a relation which can be traced to the way the given predicates have been syntactically combined, mental structures may not always be characterized this way and be non-compositional as well because mental structures may contain elements which cannot be syntactically combined. This distinguishes mental structures from linguistic structures which are generally compositional as far as syntax goes. This will be elaborated on in Chapter 3 when a formalization of mental structures is presented. Also, more will be said on the linguistic relation to mental structures in Section 1.3. With this in place, we are now geared up to offer some remarks on the legitimacy of, and justification for, the methodology of the current work.

2 This point makes reference to the way mirroring physically works. In mirroring, no matter what the size of the image on the mirror is (that is, magnified or reduced), there must be an isomorphism between any points in the mirror image and the corresponding points in the actual object mirrored. Now if the mirror is tilted or placed at another angle with respect to the object mirrored, the isomorphism has to be preserved. This obtains even for the equivalence between the distance from the mirror surface to the object mirrored and that between the mirror image (which virtually stands behind the mirror surface) and the mirror itself.

1.2 A Note on the Methodology

It is vital to understand that the present study is a modest attempt to unravel the intricately knitted complex that minds are by having them decomposed into their basic structural forms. The proposal to be advanced is that (possible) mentalities are, at least in part, structurally constituted by mental structures which can be uncovered from linguistic structures. Note that this does not in itself ban any investigations and explorations into the psychological procedures and mechanisms that may be postulated as part of the machinery of minds as well. Insofar as this is so, any experimental and/or comparative ethological studies on various non-human organisms and creatures may serve to complement the development of the formalism of mental structures in the present context. That is to say that any experimental or ethological studies on other non-human organisms and creatures may discover more about the variations in psychological procedures and mechanisms realized in non-­human species, and can in turn feed these insights into the present framework for their assimilation into the formalism of mental structures. In this respect, one important advantage of the present study is that it employs a descriptive apparatus which is to be framed by way of the articulation of a logical formalism of mental structures that can easily accommodate experimental and ethological studies. This is so because the formalism to be developed is supposed to be neutral with respect to its extrapolation to experimental and ethological findings. Thus, the formalism of mental structures in later chapters, especially in Chapter 3, will be deployed to see how a range of mental structures fits into experimental and ethological findings on cognitive capacities across species. Plus the descriptive formalism of mental structures will also be employed to figure out what can be said about the character of a type of mentality that can be attributed to computing machines (in Chapter 4). 
Surely this part of the exploration into
non-human types of mentality when considering machines in particular cannot simply be a matter of experimental investigations because no amount of study of machines can decide either in favor of or against the contention that machines have minds. This point will be taken up in Section 1.3. In this respect, the descriptive formalism of mental structures will have an edge over other competing proposals, in that it makes no claim as to whether experimental investigations into machine computations can reveal the nature of machines’ type of mentality. This will be further clarified in Chapter 4. In a nutshell, the present study will apply a top-down approach in tackling the problem of finding out other types of mentality in non-humans. That is, it will first attempt to solve the problem of description of other possible minds in non-humans and then get down to understanding how this can be squared with experimental and ethological explorations into non-human organisms’ cognitive abilities and capacities. We are aware that experimental and ethological studies have been the conventional type of studies in understanding nonhuman creatures’ behaviors and cognitive abilities. Significant as this approach is, this cannot adequately address the question of how to describe various types of minds other than the human kind. The problem is much more severe than is commonly recognized since no amount of experimental and ethological studies can unequivocally demonstrate that other non-humans have distinct mentalities. The central goal of the present approach is to get a handle on the problem of description of non-human types of minds first and then to see what we can learn about the cognitive mechanisms that can act upon mental structures to realize cognitive behaviors. With this goal as part of the methodology, this book sets out to examine the unique connection between natural language and naturally possible minds. 
While the conceptual apparatus required for later discussions has now been refined, we have not yet addressed the question of why natural language is so special. Section 1.3 will assess the merit of this question by contextualizing it in the wider domain of the investigation into the very nature of intelligence, whether biological or otherwise. In this connection, various other proposals that approximate to the exploration of non-human mentalities will also be evaluated so as to check how the case for mental structures can be reinforced once the weaknesses of these approaches in getting to grips with the non-human type(s) of mentalities are shown.

1.3 Why Natural Language?

Investigations into the nature of the mind have proceeded with the supposition that an understanding of the structure of the mind can offer insights not
only into the properties of mentality but also into the very possibilities of having a mind. One of the ways of examining the structure of the mind is to study the mental structures that human language as a cognitive organization gives rise to. On the other hand, a way of understanding the mind itself is to understand the nature of intelligence which seems to encapsulate everything we tend to associate with a cognitive system that evinces aspects of mentality. The former naturally lends itself to being made into a linguistic inquiry, insofar as it relates to an aim of understanding the mental structures behind the linguistic structures and representations. But the latter extends to a vaster intellectual territory within which the nature and form of intelligence of humans, different types of machines and other creatures in substance-independent terms is examined from computational, philosophical, biological and perhaps anthropological perspectives. Even though these two threads of natural inquiry in its general sense have different natures, goals and methodologies, they have a lot in common. It is not quite hard to see that an inquiry that projects a window onto the hidden texture of cognitive structures, insofar as it is revealed by an inspection of linguistic structures, can also reveal something about the form of intelligence. This is so because the cognitive structures underlying linguistic structures connect and shape what cognitive systems operate on, manipulate and exploit in any activity that counts as intelligent in some demarcated manner. In fact, the latter inquiry is often linked to what is usually done in artificial intelligence (AI). It may be noted that an inquiry into the nature and form of intelligence of humans, different types of machines and other creatures in substance-independent terms subsumes, rather than forms a part of, the study of AI per se. 
It is not unreasonable to argue that the underlying raison d’être behind the study of AI is the quest for other possible forms of mind. And it is this aspect that informs the inquiry that delves into the nature and form of intelligence of humans, machines, other creatures and even plants. At the same time, it is also vitally important to recognize that the quest for other possible forms of mind can make sense only if the necessary and sufficient properties of minds are adequately understood. Clearly the marks of what it is to be mental have some substantive connection to the language capacity, on the grounds that the language capacity makes viable certain cognitive structures, especially certain kinds of thoughts that we as humans entertain. However, we have reason to believe that this cannot be the whole story in itself, for creatures other than humans such as dogs, cats, parrots, crows, pigeons, dolphins or other primates do not possess the kind of language capacity humans are endowed with. Be that as it may, there seems to be something irreducibly linguistic in any conception of what it is for something to be mental, and by virtue of this, a conception of intelligence that borrows something
from this conceptualization of cognition is bound to incorporate aspects and properties of the linguistic organization of intelligence. Most significantly, when Alan Turing, the father of modern computer science, came up with the concept of a test that would count as the operational diagnostic for the inspection of the marks of intelligence in digital computers, the test which is known as the Turing Test (Turing 1950) was described essentially as a linguistic test. The test involves a computer and a human both hidden behind a screen or veil on the other side of which sits a human who as the judge scrutinizes the linguistically framed responses from both the computer and the human in reply to questions posed by him/her. Both the computer and the human are certainly indistinguishable to the judge, since the judge does not know which response comes from whom. All that the judge will have to do is check the verbal responses in response to his/her queries in order to tell the machine apart from the human. Note that the entire test has been designed in a fashion that involves natural language conversations which humans have to verify with a view to determining whether the responses come from a machine or from a human. Regardless of whatever demerits the test in itself has (see for a relevant discussion, Proudfoot 2011), the test has a lot to say about the connection between natural language and (natural) intelligence. The foremost question is: why did Turing think of natural language conversations when designing a test that could decide the case for machine’s intelligence? After all, there is no logical reason why the test as such could not have involved a task such as generating visual images or analyzing sounds or moving things around or even drawing a picture. 
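The purely linguistic character of the test can be made vivid with a small schematic sketch. Everything here—the channel labels, the placeholder respondent functions, and the judge—is invented for illustration; the only point carried over from Turing's setup is that the judge receives nothing but linguistically framed responses from the two hidden parties.

```python
import random

# Placeholder respondents: in the real test these would be a machine and a
# human; here both simply return text, which is all the judge ever sees.
def machine_respond(question: str) -> str:
    return "I would need a moment to think about that."

def human_respond(question: str) -> str:
    return "I would need a moment to think about that."

def imitation_game(questions, judge):
    """The judge gets only verbal responses from two hidden channels
    and must name the channel that conceals the machine."""
    channels = {"A": machine_respond, "B": human_respond}
    transcript = {label: [respond(q) for q in questions]
                  for label, respond in channels.items()}
    return judge(transcript)

# With identical verbal behavior on both channels, a judge restricted to
# the text can do no better than guess.
verdict = imitation_game(["Can you write me a sonnet?"],
                         judge=lambda t: random.choice(sorted(t)))
print(verdict)
```

The sketch makes the screening assumption explicit: whatever non-verbal capacities either party has are simply not part of the transcript the judge sees.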
The question that bothered Turing is ‘Can machines think?’ Since no definition of thinking that will be appropriate enough to conform to well-demarcated specifications applicable in diverse scientific contexts or to the normal use of the word ‘thinking’ can be formulated, Turing replaced that question by another relatively unambiguous and precisely framed question which asks whether a machine can play what he called an ‘imitation game’. Clearly, when thinking of natural language conversations, Turing had something in mind, as he says:

The new problem has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man. … We do not wish to penalise the machine for its inability to shine in beauty competitions, nor to penalise a man for losing in a race against an aeroplane (pp. 434–435).

It is clear from the passage above that Turing differentiated cognitive capacities and processes from mere physical capacities of humans, given that physical
capacities are insignificant and irrelevant when the goal is to test machines on the capacity for thinking. Furthermore, while considering potential objections to his proposed test, Turing also thought it appropriate to take into account possible disadvantages that machines could face during the performance of the test. For example, humans can pretend to be machines and this action may weigh heavily against the machine involved in the test, for humans are not good at many tasks computers are good at (such as mathematical calculations) and hence computers can be easily caught. For the simple reason that this could put machines at a disadvantage, he also considers the following objection:

May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection (p. 435).

It should be noted that Turing acknowledges that the objection to engaging in natural language conversations in the test is a strong objection indeed, although he does not say anything concrete so as to weaken or eliminate any possible problems the objection in itself may carry. Let’s consider the objection because the objection in question constitutes the crux of the matter this book will be concerned with. Suppose a task other than engaging in natural language conversations is fixed as the task which both the machine and the human involved will have to perform. Thus, for example, machines can carry out the task of analyzing images pixel-by-pixel which humans never do. But then this will disadvantage the human engaged in the test, since humans cannot, without any external aid, execute this task anyway. Plus it is not clear how the task of analyzing images pixel-by-pixel can be equated with thinking. Let’s then consider the other possibility. 
What if we pick up a task which is different from what humans do and also from what computers do, along with the condition that the task chosen should be taken to be thinking in some sense? This question seems meaningless, on the grounds that some activity or task that can be taken to be thinking in some sense cannot plausibly be disjoint from what both computers and humans do. If a task or activity can be reckoned to be thinking in some sense, why can it not be performed by humans, regardless of whether it can be performed by machines or not? Without doubt, there can be some potential tasks or activities that both computers and humans do not or cannot do. For instance, traveling backward and forward in time, gazing at all stars in the universe at once, playing with gigantic physically located buildings, running at the speed of light and so on are not the kind of tasks that
machines or humans do. Even if any of these tasks is benchmarked for the test of machine intelligence, it is not clear whether these tasks have the marks of mentality. Nor do we know whether these activities can be identified with the process of thinking per se, although they may require and also involve planning and appropriate processes of reasoning on the part of the agents that may engage in such activities. Faced with such a difficulty, we may attempt to fix the definition of thinking so as to let it apply to a certain range of cases reasonably constrained. However, this is a pointless task, as Turing also recognized. Whatever way we may try to settle the question of what constitutes thinking per se, we undertake to reflect upon a different question, that is, the question of how the language capacity connects to the way we characterize something as mental so that we can figure out what possible minds may look like. We may now explore two possibilities that may help us get a handle on the complexity of the issue at hand, depending on whether we take into account natural language or not. So let’s first suppose that the nature and form of possible minds can be investigated by not postulating natural language as an intrinsic element of cognitive systems. If we adopt this possibility, we can explore the structure of possible minds that do not possess the cognitive capacity language affords. At this juncture, it appears that we will have to take into account the mind-like properties of languageless creatures such as ants, cats, snails, dogs, squirrels, pigs and so on. 
While the uniqueness of natural language may dispose many people to include other animals for the exploration of the question of how to shed light on the nature of mentality minus natural language, one may, with a reasonable degree of certainty, impugn the statement that the word ‘natural’ in natural language should be reserved only for humans, unless one has stipulated the demarcation of the meaning of the word only for humans (see for a related discussion, Lyons 1991). Assuming that the denotation of the word ‘language’ can be made to incorporate the systems of signs—however developed or impoverished—that are rudimentary enough to be used by different animals, we may see how we can make certain conjectures about the nature of possible forms of mentality. If we follow Luuk (2013) in this regard, a whole hierarchy of referential properties of symbolic systems emerges, provided that we accept that the word ‘symbol’ can be reliably interpreted to have developed from the broader-level category specified by signs in the Peircian system, which has a tripartite organization structured around icons, indexes and symbols.3 The hierarchy can be represented as follows:

(1) Signs → Denotation → Paradigmatic Connotation → Syntagmatic Connotation → Definition

Monadic signs are simplex signs that depend on various stimulus–response relations. Alarm calls of vervet monkeys or crows can be of this kind. Denotation, in Luuk’s formulation, is a relation between a sign and its conceptual content, that is, the mental image of the entity referred to. Paradigmatic connotation depends on the logical-conceptual relations between signs; for example, part-whole relations, type-token relations, inclusion/exclusion relations etc. fall under this category. The same holds true for the predicate-argument relations; thus ‘red’ predicated of ‘cars’ instantiates a relation between an argument ‘cars’ and the predicate ‘red’. Syntagmatic connotation consists in the combinatorial relations that obtain among different signs. That prepositions, for example, precede the nouns in English, as in the prepositional phrase ‘in the garden’, is a matter of syntagmatic connotation. Finally, definition is a higher-order relation which is by its very nature parasitic upon other relations. Thus, for example, if we define heat as the motion of molecules of matter, the notion of molecular motion depends on certain other relations involving molecules and motion, and so on. Luuk claims that the human symbolic capacity is distributed among the interpretative correlates of all these five elements such that in any instance of interpretation of a sign any subset of the five symbolic interpretative potentialities can be utilized. Most importantly, Luuk argues that the hierarchy corresponds to the evolutionary trajectory of the symbolic capacity in the biological world, and that animals such as vervet monkeys, gray parrots, bottlenose dolphins, bonobos etc., may possess the first two symbolic capacities, namely sign-making and denotation in the hierarchy shown in (1). This raises some important questions for us.

If we grant that sign-making and denotation are (also) part of the symbolic capacity characterizing the human linguistic capacity, it is not immediately clear how we can make sense of the question of throwing light on the nature of mentality minus natural language when we turn to other animals, assuming of course that we are referring to human language when talking about natural language. This is so because what is in essence a part of the human linguistic capacity (that is, sign-making and denotation) does not plainly detach itself from the extension or conception of what human language actually is. That is to say that sign-making and denotation cannot be independently characterized once for non-humans and then

3 Icons bear an imagistic (physical) resemblance to the object which an icon is an icon of (for example, pictures). Indexes have some natural or causal or sensory connections to the object an index is an index of; for instance, smoke is an index of fire. Finally, symbols are signs that are arbitrary and stimulus-free, that is, are used in the absence of the object denoted, and they do not bear any physical resemblance to the object (words in natural language, for instance).
for humans. And if this is so, we cannot ‘frame’ the notion of mentality minus natural language because the traces of human language linger on even when we attempt to formulate the notion of mentality minus natural language, especially for other animals. The framing itself, in virtue of involving sign-making and denotation, carries over properties and conceptions of human language. On the other hand, if we are ready to accede to the proposal that other animals (may) have different systems of signs, regardless of whether or not they have certain overlaps with, or are subsumed by, the entire repertoire of human symbolic capacities, some glimpse into the structure of possible minds can be gained. Note that this proposal appears to obviate the anthropomorphic bias, so long as we are inclined to think that animals have their species-specific independent systems of signs and that such systems of signs may look impoverished when compared to the human system of signs just as humans’ auditory or olfactory capacity is impoverished when compared to that of dogs or lions. In such a case, we understand a lot about the structure of possible minds minus human language. We come to observe that dependencies involving stimulus–response relations and mental imagery in denotational reference making portray possible minds with such capacities as systems that have a rich, finely tuned low-level visual faculty along with a minimally structured memory attuned to the empirically perceived world out there. Such minds can make stimulus-bound responses, possibly categorize different types of stimuli, and conceptually differentiate between different tokens of stimuli. Additionally, having mental imagery requires episodic memory, semantic memory and perhaps a type of intermediate-term memory4 which can organize the experiences and interactions with the outside world, thereby also facilitating learning construed in its generic sense. Importantly, such kinds of minds may also be able to have a minimal theory of mind in the sense formulated by Butterfill and Apperly (2013). That is, such minds may have capacities of goal-directed action, encountering and plausibly a form of mental registration. The first two do not require any form of mental representation, in that a goal-directed action is cashed out in terms of a function which specifies an outcome or a goal to achieve which an agent must engage in certain activities, and encountering is simply a non-representational relation between an object and an agent, while the third does require representations because mental registration requires not merely a triadic relation between an object, an agent and a location, but

4 The intermediate-term memory is sandwiched between the working memory and the full-fledged long-term memory; it remains active over an extended period but does not thereby harden into long-term associations (see Donald 2001). The memory formed during symbolic communications can be of such a kind.
Importantly, such kinds of minds may also be able to have a minimal theory of mind in the sense formulated by Butterfill and Apperly (2013). That is, such minds may have capacities of goal-directed action, encountering and plausibly a form of mental registration. The first two do not require any form of mental representation, in that a goal-directed action is cashed out in terms of a function which specifies an outcome or a goal to achieve which an agent must engage in certain activities, and encountering is simply a non-representational relation between an object and an agent, while the third does require representations because mental registration requires not merely a triadic relation between an object, an agent and a location, but also the mental representations of each. Pigeons, dogs, cats, snakes, chimpanzees and crocodiles may have such forms of minds. However, this is again problematic on the grounds that possible minds are thus interpreted to have some sort of language-like capacities manifest in the systems of signs such minds possess. Any systems of signs are ultimately bound to be language-like, irrespective of how such systems of signs are rendered. If this is what the whole thing turns out to be, understanding the nature and structure of possible minds minus language broadly conceived can be more daunting than can be naturally supposed. Plus we certainly do not seek to understand possible minds in theoretical terms only in the animal world—we aim to understand possible minds in a broader sense in computers, machines of different kinds, and other artifacts designed by humans. Perhaps a better approach towards this problem can be taken by considering a broadly construed ontology of representational levels, as is articulated in Bickhard (1998), which specifies a series, or rather a hierarchy, of differentiated levels of representations that emerge through rich matrices of interactions obtaining between agents and the world out there. Many of these levels in the hierarchy do not presuppose the existence of representational capacity in the internal states of the systems concerned. For the purpose at hand, we may imagine that these systems could be systems implicit in machines, animals or maybe even in plants. Thus we cast our net wide enough in order to capture a gamut of possible minds which is as broad as possible. It needs to be stressed that the ontological hierarchy specifies various kinds of representational ascriptions that may be grounded in different intentional ontologies.

4 The intermediate-term memory is sandwiched between the working memory and the full-fledged long-term memory; it remains active over an extended period but does not thereby harden into long-term associations (see Donald 2001). The memory formed during symbolic communications can be of such kind.
What this means is that our ascriptions of representational capacities to some system-internal properties in a machine, for example, can have various interpretations that depend on exactly where in the ensemble of different intentional ontologies we locate such representational capacities. Thus, for example, any intentional stance that takes a system to be about or oriented toward something can be cashed out in terms of a minimal ontology or even no ontology. That is, we may show no commitment to any restriction that determines whether it is machines or animals or whatever on which we assume an intentional stance. Under this construal, even a plant can have belief-like states when a plant such as a pitcher plant is oriented towards an insect, or when a machine such as an elevator is oriented towards what it elevates, namely humans and other objects. Plus various kinds of presuppositions and representational constraints may also emerge through interactions of systems with the world located outside of the systems themselves. For instance, an ordinary fan is built in such a way that the functional presupposition that there would be air around and that there would not be any blockage that may prevent its wheeling is not to be found inside the machinery of the fan—it is simply presupposed by way of the interaction of fans with the world outside. Many representational constraints that determine how a system will operate and which final state it will end up in are not built into the system as such; rather, they are part of the interactive potentialities determined by the functional relationships obtaining between a system and the environment in which it operates. Possible minds in automata, many animals including primates and plausibly some plants may be described in these terms. Note that this way of approaching the question of how to explore the nature of mentality minus natural language is more appropriate and suitable for the examination of the formal structure of possible minds across a wider range of entities—machines, animals and plants. For, even on this proposal, language and consciousness are higher-order cognitive phenomena or forms of representational capacities which arise from a much more enriched and specialized ontology of interactive dynamics involving the agent's internal systems, the environment and the social world. Thus, understanding the question of how to explore the nature of mentality minus natural language boils down to understanding the form of the virtual mind-like emergence of interactive possibilities that give rise to representational constraints as well as to certain layers of functional presuppositions which both facilitate and constrain learning during such interactions but are not explicitly encoded or present anywhere in the organism/system concerned. Whatever merits or advantages this proposal may have, it does not advance our understanding of our question anyway.
Although it needs to be made clear that the hierarchy of ontologies of different kinds of representation is not exactly intended to be deployed for the exploration of possible minds vis-à-vis natural language, it is doubtful that the form of the virtual mind-like emergence of interactive possibilities will (ever) gain a purchase on the nature of self which constitutes the core of mentality. This holds even if interactive possibilities afforded by a system within its environment may approach and thereby approximate to the contours of many properties of what Deacon (2012) calls 'ententional' phenomena, which are intrinsically incomplete by virtue of being related to, or constituted by, something which is not intrinsic to those phenomena in question. Note that such ententional phenomena are phenomena having certain properties that are other than what their (physical) constitution entails, and include, for example, functions which require satisfaction conditions, thoughts which have contents, purposes which have goals or even subjective experiences that presuppose the existence of a subjective self. Given this characterization of ententional phenomena, the functional roles and organization of the systems in artifacts, tools or plants with respect to their respective environments can certainly give rise to functions or goal-like states by way of the interactive potentialities manifest in the systems concerned. However, there are certain problems that we need to consider here. Functions and goal-like states notwithstanding, non-biological systems, at least in artifacts or tools, may not have well-developed self-like states which may go on to constitute subjective experiences. The plain reason is that any self-like states that may emerge in such systems cannot be fully autonomous and agentive, although the systems may, whether now or in the future, exhibit properties of self-repair, self-production or self-reconstitution which characterize self-organizing processes found in nature. It needs to be made clear that self-organizing processes found in different organisms are in essence marked by the physical and chemical processes in organisms that draw energy, materials or other resources from nature and create and sustain themselves.5 From this perspective, it is in a sense reasonable to hold that plants and other animals can have self-like states characterizing and constituting subjective experiences in virtue of being autonomous and agentive. The form of subjectivity in plants and animals may arise not merely from the generation of functions, representations and goal-like states, but also from the subjective constitution of a self that propagates its organization which may well be called 'teleodynamic', to borrow a term from Deacon. Teleodynamic processes are those processes that represent within themselves their own dynamical tendencies by having the whole produced from the parts and then having the parts produced from the whole, thereby generating a self that continually creates, renews, preserves and interprets itself. Teleodynamic processes—which emerge from and ride on simple self-organizing processes, that is, Deacon's morphodynamic processes—give rise to properties that qualify as sentience and the locus of subjectivity.
Most importantly, teleodynamic processes are characterized by the dynamical constraints of their organization which encapsulate a restricted space of possible degrees of freedom that the internal systems of organisms causally generate when organisms take energy from the environment, converting it into something necessary for growth, metabolism and reproduction. Now it may be emphasized that plants and other animals can have a form of subjectivity that allowably fits a mind-like entity if we grant the assumption that various self-like properties of the mind derive from the dynamical organization of different absences. And if this is the case, we can justifiably say that the mind-like entities of plants and other animals can possess non-representational and/or perhaps minimally representational capacities in perceptual, motor and sign-making activities. Such minds can detect objects in the vicinity, recognize the relevant predators and the members of the same species, make certain reliable categorizations of kinds of prey, feel pain, and sense and also transmit certain signals necessary for survival. Even though we can perhaps figure out what these kinds of minds look like or really are, there is perhaps a lot that is whisked off from the ground. As Dennett (1996a) points out, the commonalities between kinds of minds are easier to discern and possibly discover as they come under the same recognizable larger envelope, while it becomes harder and harder to determine the finer details of differences that can help track the characteristic cognitive differences among kinds of minds in substantive terms. Teleodynamic processes of minds identify general characteristics of a larger envelope under which various kinds of minds of plants and other animals can be brought together, but it is not clear to what extent and how this can in itself reveal substantive cognitive differences among different kinds of possible minds. Teleodynamic processes are just that—expecting something more than that is not what the form of such processes warrants. Given these problems in characterizing in formally explicit terms the substantive cognitive differences among different kinds of possible minds, we can perhaps do much better by looking into the properties of natural language phenomena within and across languages. Natural language is important for many reasons. Our sensory systems organize experiences in terms of the sensory qualities that are combined or simply collapse by forming manifolds of percepts, and then language imposes its own organization on these percepts or organized forms of sensory experiences.

5 Importantly, Deacon calls such self-organizing processes 'morphodynamic' processes, which are such that organisms having morphodynamic processes (such as bacteria) take from nature what they need and then create order by incessantly producing new structures inside or outside the boundaries of their physical organization.
Language is thus a second-order cognitive system that formats forms of sensory experiences and moulds linguistic representations that build on those forms of sensory experiences and creates more and more complex abstract representations divorced from their sensory origins. In a sense, language is perhaps the only cognitive system that projects a window for us onto the interior space of our own minds, and it is not clear whether any other cognitive faculty (such as the faculty of memory or the motor system or even the attention system) has this capacity6 (see, for a discussion, Torey (2009)). Even the human emotive system, which is phylogenetically older than the linguistic system, cannot be exactly said to come closer to this, although emotions are ways of feeling one's self rooted in the body (Slaby 2008). Additionally, higher-order emotions (such as shame, embarrassment etc.) often rest on the intricate interplay between the linguistic system and the symbolically grounded cultural praxis, as language and emotion develop in close harmony shaping one another's cognitive representations (see Mondal 2013). Most importantly, a diverse variety of linguistic structures with all the complexities and idiosyncrasies can offer insights into the mental structures that correspond to those linguistic structures. Assuming that a number of such mental structures may be shared by members of other species, regardless of whether or not these species have the means of articulating them or encoding them in expressions that can match the complexity of syntactic structures in human language, we can propose to investigate the nature of possible minds by extrapolating from what widely diverse types of linguistic structures across natural languages reveal. This has some crucial advantages that cannot be overlooked. First, the harder-to-determine aspects of mentality conceived in its general sense can be tapped if the form and structure of possible minds is explored by figuring out what a wide variety of linguistic structures reveals about an assortment of possible minds. The only caveat in this proposal is that the syntactic structure of natural language is not taken to be the pivotal point for the extrapolation from various types of linguistic structures to ranges of possible minds. Rather, it is the corresponding mental structures that can be inferred from various types of linguistic structures which will constitute the fulcrum of the proposed extrapolation. The talk of mental structures corresponding to various kinds of linguistic structures is, to a great extent, in tune with Jackendoff's (1983, 1990, 2007) conceptual structures, which within the theory of Conceptual Semantics characterize what humans conceptualize—the language-independent mental representations that are structured around linguistic constructions. Since mental structures corresponding to various kinds of linguistic structures can be even human language-independent, this does not also invite the anthropomorphic bias. Second, the projection of the space of possible minds through other sensory-cognitive systems is bound to run into severely paralyzing problems. It may be noted that other sensory-cognitive systems in humans are not as well-developed as they are in many other species on earth, while the faculty of language is developed in Homo sapiens to an extent which is perhaps unparalleled in the entire animal kingdom. Given that this gives rise to a complementary distribution of the relative differences in cognitive capacities of humans with respect to other species or of other species with respect to humans, it would be in any event biased to look into the nature of possible minds through the lens of whatever cognitive system/faculty we pick up. In fact, this can tip the balance in favor of other sensory-cognitive systems since humans do not appear to distinctively excel in all cognitive capacities except in the memory capacity and the linguistic capacity (see Tulving (1985), especially for episodic memory; see Chomsky (1985) for the linguistic capacity). Many other cognitive capacities including the capacity for socio-cultural cognition that can be said to be uniquely present in humans are in some sense or the other co-dependent or co-developing capacities of the memory capacity and/or the linguistic capacity (see Holtgraves and Kashima (2008); Fitch, Huber and Bugnyar (2010); but see Cheney and Seyfarth (2007), who think the linguistic capacity has arisen from the capacity for social cognition—which is not exactly at odds with the co-dependent development of the capacity for social cognition and the linguistic capacity). Overall, all co-emerging cognitive capacities are in a sense unique in humans, and most sensory-cognitive systems other than those co-emerging cognitive capacities/systems are shared with other species and may well have had a common homologous origin. Moreover, language being the prime cognitive capacity that helps humans to look inside themselves and also to engage in various kinds of thoughts that can be entertained, it would be more reasonable and appropriate to make an attempt to understand the question of exploring possible minds through natural language.

6 In many cases and in significantly relevant respects the faculty of memory or even the attention system is shaped by the linguistic system, inasmuch as linguistic labels help index and thereby track items and experiences stored in and recalled from memory, and, in addition, the linguistic system also helps enclose diverse ranges of experienced items within a constrained space of our attentional focus. But this is not to deny that the faculty of memory or the attention system also facilitates the functioning of the linguistic system, especially during language processing.
As a matter of fact, it is hard to imagine how approaching the question of exploring possible minds through the window of other sensory-cognitive systems minus the linguistic capacity can even make sense, for without the linguistic capacity no cognitive capacity in any creature can get the theorizing itself off the ground in the first place. Third, the point made just above readily relates to the problem of intentionality vis-à-vis (natural) language. Intentionality is a property of mental states, objects or events which characterizes aboutness or directedness at objects or states of affairs (Searle 1983; Lycan 1999). In other words, intentional states are directed at the world in virtue of the specific kind of relationship that obtains between intentional states and things (either in the mind or in the world out there). More significantly, Brentano (1874) hypothesized that all mental states are intentional states. What this means is simply that all mental phenomena involve directedness or aboutness toward objects or entities or states of affairs.7 At this juncture, it appears that it would be worthwhile to look into the question of exploring possible minds by examining the nature of intentionality, primarily because even natural language—especially linguistic meaning—can be supposed to have derived from intentionality which was probably present in the earliest life forms, as Searle believes. Thus it seems reasonable to investigate the question of exploring possible minds by verifying whether something possesses intentionality or not. However, this way of formulating the question has some crippling disadvantages. Checking whether or not something, say, X, rather than Y, possesses the property of intentionality cannot be done by checking the internal parts of either X or Y, for intentional states cannot be directly seen within a system or a living entity. Intentional states are inferred from the outward behavior of an entity or from the outputs of cognitive processes. Plus the ascription of intentional states to non-living things is fraught with a number of deep conundrums. So it is not even clear whether we can include machines, should we wish to, while we investigate the question of exploring possible minds. In addition, the philosophical debates on the question of whether humans' intentionality is intrinsic or machines' intentionality is derived also vitiate, if not entirely eliminate, the prospect of applying intentionality as a good test for exploring possible minds. On the one hand, ascribing intrinsic intentionality to machines risks having a blithe disregard for the relevant facts, for, if machines had intrinsic intentionality, machines would have been able to perform all sorts of intentional acts, for example, intending, pretending, believing, making commitments, lying, guessing, wanting etc. So far as we know, machines do not engage in all these.
Moreover, if machines can generate, on their own, algorithms or rather programs to repair themselves or even 'reproduce' in accordance with 'goals' and 'purposes' that machines set on their own, this can indeed be taken to be a good test for the possession of intrinsic intentionality in machines. However things come about, there is more to it than meets the eye. Now, on the other hand, if humans had derived intentionality rather than intrinsic intentionality, this would invite a problem of infinite regress. If the intentionality of humans is derived from something else, say, from evolution, as Dennett (1996b) believes, then what is the intentionality of evolution derived from? And so on ad infinitum. One cannot, on any metaphysical grounds, maintain that the nature of the intentionality of evolution does not need to be traced to anything else save itself.8 Arguing that the intentionality of evolution is fundamental is a non sequitur, since we have already allowed machines to have derived intentionality, and it is not clear why we should not ascribe derived intentionality to evolution as well. Why should evolution have a privileged status over machines in this regard? After all, the process of evolution is also machine-like or algorithm-like, as Dennett himself claims. Therefore, the argument does not go through. Regardless of whether or not humans' intentionality is intrinsic—and in fact the present discussion does not hinge on whether or not it is so9—the question of exploring possible minds can be approached in a more sensible way. The range of possible minds can be reasonably construed and so constrained to include machines' potential form of mentality, especially if we inspect the intricacies of mental structures hidden behind natural language constructions. This is because mental structures hidden behind natural language constructions cannot be solely possessed by human minds even if humans uniquely possess the linguistic capacity. It needs to be emphasized that it is not the mental structures behind natural language constructions per se that have meaning; rather, natural language constructions/expressions have meaning (also noted in Davis (2003)). And if so, mental structures of various sorts that constitute the contents of linguistic expressions can be conceived of in human mind-independent terms, as mental structures behind natural language constructions do not in themselves possess meanings for humans, or, for that matter, for other entities.

7 While the Brentano thesis has received support from Crane (2001), the thesis has been criticized by Millikan (1984) and Nes (2008) on the grounds that the feature of intentionality is also true of many non-mental phenomena (for instance, the directedness of the stomach toward (digestion of) food). Additionally, the absence of directedness of pain experiences is adduced to counter the claim that all mental phenomena are intentional.
Mental structures concealed beneath natural language constructions or expressions may thus be projected for possible minds of animals, machines and also plants when the range of possible minds is explored in terms of such mental structures. This idea is, however, different from the view of language adopted by Sperber and Wilson (1995), who consider language to be a medium for storing and processing information and thus think that other animals must have languages.10 This view more than trivializes the notion of language, and hence nothing appears to prevent any other cognitive faculty—insofar as it stores and processes information—from being reckoned to be a language. In fact, it is pointless to tinker with the demarcation of what may be called language, and any hypothesis that turns on tinkering of such kind seems like a play on words without much substantive import. Note that mental structures concealed beneath natural language constructions/expressions are hence independent of and outside the boundaries of the object of the human semantic interpretation, given that the mental structures are not in themselves part of the human semantic interpretation which belongs in the domain of the human mind. This is, however, not to deny that mental structures can be modulated and shaped by natural language expressions. But at the same time, this cannot also prevent such mental structures, however structured and shaped by natural language expressions, from being possibly manifest or realized in minds other than those ascribed to humans. This is precisely because mental structures as may be shaped by linguistic expressions are not an intrinsic property of any mind. Consider, for example, the mental structure underlying the sentence in (1).

(1) John will travel across Australia no matter what it involves.

The sentence in (1) involves a mental structure that contains two contrasting thoughts. The matrix clause 'John will travel across Australia' introduces the thought that will definitely be the case or is bound to obtain, whereas the thought that presents a conflicting condition is provided by the subordinate clause 'no matter what it involves'.

8 Even though evolution is not supposed to have any goal or aim and, for that matter, a form of intentionality, appealing to evolutionary grounds for making claims about the fundamentality of the design of evolution is circular. The reason is that the act of appealing to evolutionary grounds for claiming that the design process of evolution is fundamental is done in order to argue that the human intentionality cannot be intrinsic, but at the same time, the argument that the human intentionality cannot be more intrinsic than the intentionality of cats, for example, is adduced in order to establish that the design process of evolution is fundamental, and that evolution cannot be said to have a form of intentionality.

9 The view that will be postulated later in Chapter 4 is that humans' intentionality is intrinsic, fundamentally primitive and may well be grounded in the human body, for it is even impossible to talk about intentionality in other entities in the absence of humans' intentionality, which ascribes intentionality to other entities by means of inferences, as the very act of ascribing intentional states to other entities is always inferential.
On the one hand, it is evident that the mental structure of such constructions is structured by the structural organization of the expressions concerned, and hence we cannot get the same mental structure if we say, for instance, 'what it involves no matter John will travel across Australia', which is not a well-formed expression in English. But, on the other hand, there is nothing inherent in the mental structure in itself that can logically prevent it from being realized in, say, machines or even animals. If, for the sake of argument, mental structures are characterized as having meanings, nothing in principle can stop the meanings of mental structures from having their own meanings and so on ad infinitum, thereby triggering an infinite regress of meanings all the way down the hierarchy of embeddings. Furthermore, the mental structure can be identified with a mental state which can otherwise be distinguished from a mental structure constituting the contents of a linguistic expression by the specific kind of abstraction of mental structures onto an inter-subjective level of knowledge.11 Likewise, the mental structure of sentences such as the following cannot also be the exclusive property of the human mind.

(2) The kids swim as well as they dance.

The mental structure of (2) cannot in itself be constituted by the expression in (2); rather, it is constituted by the thought that the kids' swimming and their dancing are equally good. And hence there is nothing wrong in having the possibility that the mental structure in question can be projected for other possible minds. Thus the word 'mental' in mental structures has to be cashed out in terms of its substance-independent properties. Overall, what is important is that natural language constructions or linguistic expressions, in virtue of being subject to the human mind's interpretative constraints, cannot be projected outside the domain of the human mind, inasmuch as the language capacity in humans is unique. Simply put, at least some mental structures behind natural language constructions/expressions can be ontologically located in many possible brains and systems, and they cannot be the exclusive property or part of the human mind because they are not (intended) to be mapped to further semantic structures in the first place. Mental structures behind natural language constructions/expressions are thus metaphysically autonomous entities in this sense, and, this being so, they can be searched for in many places and the hope is that they can be found too.

10 They have considered a linguistic system to be a cognitive system, as opposed to a communicative system, instantiated in terms of information processing, and hence, insofar as this is so, they deem that other animals and machines can have such systems.
We have so far considered various ways of understanding the nature of minds with respect to the linguistic capacity or its structural system. It turns out, as this book will argue, that we have to look no further than the domain of linguistic structures themselves to assimilate the building blocks of possible types of mentality across the spectrum of diverse organisms and creatures. Linguistic structures are not in themselves components of minds. Rather, the mental structures that capture the organization of expressions within linguistic structures can be the potential candidates of mental types. This can be looked at from another perspective. Linguistic structures vary as a function of what mental structures they express that capture the semantic relations in a construction or across a range of constructions. As mental structures are conceived of as something linguistic structures express, it appears that the more variation in linguistic structures we find, the more possible mental structures we may find. But the entire range of variations of mental structures riding on the variations of linguistic structures cannot simply be assumed to instantiate variations in mental types across different species. This would be a fallacious interpretation of what mental structures are supposed to capture as part of the building blocks of minds. The claim in the present context is not that the variations in mental structures corresponding to the variations in linguistic structures are identical to the variations in mental types across species. This would conflate all possible mental types within the exclusive envelope of mental structures as found in different linguistic structures (across languages) on the one hand, and distribute various mental structures of human languages among distinct categories of organisms and creatures on the other. Rather, the point of the whole exercise implicit in the line of reasoning employed above is to show that certain, if not the whole range of, mental structures can be extracted from natural language constructions themselves in order to test their viability for other organisms as well. Some such mental structures may turn out to be more general, while some others may prove to be more restrictive. The adequate testing ground here will, of course, be the ethological or cognitive-behavioral contexts of different species. If this line of reasoning is on the right track, we can, of course, expect mental structures to be related to minds in a special way.

11 See Mondal (2012), who has argued that linguistic knowledge, recognized as the knowledge of linguistic expressions and mental structures plus their correspondence possibilities, can be said to exist at an inter-subjective level of individual language speakers by way of abstraction from individual minds, even though the states of individual minds can also instantiate properties of such knowledge.
As we have explored the nature of mental structures as they relate to minds, we have come to understand that the relationship between mental structures and minds is subtler than may be supposed. In the present context, minds are ways of talking about mental structures, although minds can have domain-specific processes over and above mental structures. Although the relation between natural language and minds has been touched upon in this section, the foundational assumptions and the theoretical contexts underlying that relation have not so far been examined. This is what we turn to in Chapter 2.

1.4 Summary

This book will thus examine the extent to which the relationship between natural language and the range of possible minds can be intimate. The way in which this relationship can be intimate will also be a part of the inquiry this book


will engage in. We shall observe that natural language and the extrapolation of a range of possible minds are inextricably intertwined. Researchers in ai and cognitive science have not given serious consideration to understanding this connection as deeply as possible. Attention has mostly been focused either on developing theories of intelligence or on debating the nature of human minds so as to say how humans differ from machines in various cognitive capacities. Significant as these issues are, I believe they bypass the fundamental question, that is, the question of whether we can unravel something about possible minds by examining the nature of mental structures revealed by different assortments of natural language constructions within and across languages. If the line of inquiry this book will undertake has anything to unlock, it is the immense potential of natural language for cognitive science and possibly beyond. The better we understand this, the more we understand about the properties of mentality and the nature of intelligence in general. This can be appraised in view of the fact that current biological theories and philosophical hypotheses do not help much in understanding the structure and form of other possible types of mentalities. This is not because our biological and philosophical understanding of other possible mental types is limited by the internal inadequacies of the existing theories and hypotheses. Rather, it is because the available tools of biology and philosophy cannot reach into the realms of mental phenomena in other non-human organisms or systems, and even plants, by studying some intermediary object that can take us inside the domain of other possible minds. 
Language, being the sine qua non of our cognitive capacities, equips us with the exact intermediary object which can offer glimpses not only into the realms of mental phenomena that are biologically instantiated but also into the structures and representations that can be brought to bear upon the question about the form of mentality in non-biologically grounded systems (such as computing machines). Overall, this book attempts to show how to integrate the biological understanding of animals and plants, the philosophical understanding of mentality, and the linguistic understanding of the nature of mental structures hidden beneath linguistic expressions into a whole that uncovers the nature of possible forms of mentality. Needless to say, the present book will be interdisciplinary in drawing upon insights from disciplines as diverse as linguistics, philosophy, anthropology, computer science, psychology, neuroscience and biology in general. Hence every attempt will be made to keep to a common discourse so that the discussion reaches a larger audience across the widest possible spectrum of the cognitive sciences. With this perspective in mind, I strongly hope that lay people can also partake of the discussion the book will engage in; many of the issues to be pondered over and thus dealt with are everybody’s concern, as far as one may reasonably believe.


The book is organized as follows. It is divided into six chapters. Chapter 2 will focus on the linguistic foundations of minds, as the affinity between language and the character of mind needs to be scrutinized and looked at from various perspectives in order to see which conception is handy enough for the present context. In Chapter 3 the descriptive formalism of mental structures will be formulated, and then the form of various kinds of possible minds in distinct species or organisms will be specified after the formalism is fleshed out with reference to a plethora of natural language phenomena. Chapter 4 will develop further connections to machine cognition, and Chapter 5 will explore plausible consequences for everything that can be reckoned to be cognitive and also the connection between the cognitive and possible types of mentality. Finally, relevant concluding remarks as they follow from the entire discussion in the book will be made in Chapter 6. Even if some chapters, especially Chapter 1 and Chapter 2, may be read on their own, there will be a certain degree of continuity from Chapter 3 to Chapter 6. Confident readers may thus move directly over to Chapter 3 and follow the threads of the narrative as it unfolds, while other curious readers may track the flow of the arguments involved right from the beginning of the book if they wish to do so. With this we may now turn to Chapter 2 to find out what it can tell us about the relation between natural language and the (natural) foundations of minds.

chapter 2

Natural Language and the Linguistic Foundations of Mind

The nature of our linguistic capacity can, at least in part, be traced to the form and structure of natural language(s). From this perspective, we can understand a lot about the capacity for language by investigating the structural details and the formal patterns of natural language phenomena. But a question that crops up here is this: why should we care about natural language at all when we go about figuring out the nature of mentality? Or even, in what sense is language special here? These questions have been touched upon in Chapter 1 with a view to understanding how to approximate to a description of other types of mentality in non-humans. The point to be emphasized at this stage is that language contributes to the character and constitution of the mind—the kind of mind that we all recognize among ourselves, to be sure—but it is not yet clear how language can be so special at all. Without a thorough grasp of this specialness, one may wonder why language should be privileged over other cognitive abilities or systems at all. This is, of course, a valid concern that needs to be taken into consideration. It therefore leaves one significant question open. Taking this question up, one may advance another objection. If the special status of language with respect to the structure of the mind that we are familiar with is given a place of prominence, one may wonder why this should not lead to a handicap rather than a favorable condition. The crucial point in this objection is that if the special status of language with respect to the revealing of the structure of the mind is something that we come to recognize only by exploring, studying, understanding and interacting with human minds, this is bound to inflate the anthropomorphic language-mind relation. 
Therefore, this may also be supposed to end up spoiling the current program of projecting the spectrum of other possible minds from certain structures manifest in natural language. This objection is not trivial, nor can it be brushed off without due consideration of the concerns it raises. However, it is not hard to turn the central points of this objection into a line of reasoning that rather coincides with the central goal of this book. Before we proceed to offer the line of reasoning that can stave off the quandary, let’s examine the objection a bit more closely. The objection consists in the concern that whatever we know of the contribution of language to the structuring of minds is essentially derived from, or is in some way identical to, our appreciation of the contribution of language to

© koninklijke brill nv, leiden, 2017 | doi 10.1163/9789004344204_003


the structuring of the human mentality. In short, it is like saying that whatever role language plays in configuring and structuring minds is a role that is reserved for the human mind, since we have no idea what role language can play in a mental world where the kind of language we use does not exist. Thus, any special contribution to the character of mind is nothing more than a contribution to the human type of mind. Although this objection is strong enough, as has been acknowledged above, the pivotal argument this objection employs can be turned on its head. Let’s see how. If the special status of natural language for the human type of mentality is exclusively significant for the molding of the human mind rather than for any other type of mind, we can do better by exploiting the uniquely special properties of the language-mind relationship in humans to draw valid inferences about the character of mentality devoid of the human type of language. That is, by understanding the contribution of the presence of language to the molding of the human mind, we can understand the nature of minds that receive no contribution from any language of the type we have. To put it another way, understanding the presence of something can certainly help understand the absence of it. This is but a step to realizing the role of the absence of something in some other thing within which the absence is located. The character of mind minus language is something that can be projected by way of valid inferences from the character of mind plus language. In more precise terms, if X has a role to play in Y, we can infer what Y would be or look like without X, given that we already know what kind of role X plays in Y. The crucial part of this line of reasoning is that the linguistic foundations of cognition in humans, far from being the bane, can be a boon. This is exactly the point which, rather than detracting from the motivation of the present study, advances it further. 
To this end, this chapter undertakes to examine the ways in which language can be taken to be special for the configuration and constitution of the mind. What is interesting in this connection is that an investigation of this nature has the potential to tell us something about the mental organization of other cognitive faculties/capacities such as vision, memory, thoughts, reasoning, action and perhaps emotion. This understanding has enabled significant generalizations about the foundations of the superstructure of mind, as a great deal of work on language-cognition relations shows us why language is, after all, special. This chapter will, therefore, make an attempt to examine these generalizations in order to see how they may fare in the context of the question that this book raises. There are a number of well-formulated accounts of many such generalizations which bear crucial as well as fundamental insights into the nature of the mind, however conceived. Needless to say, these insights have helped us to tap the foundational basis of the mind from a linguistic perspective which


is relevant to the present study, insofar as we extrapolate from the actual mind to the range of possible minds. Once the linguistic foundations of mind are fleshed out with pertinent connections to the question of exploring the nature of possible minds, we may undertake to make adequate formulations to capture the range of possible minds restricted by the constraints of natural language constructions. This exercise must be accompanied by a caveat, though. The caveat revolves around the question of whether an understanding of the linguistic foundations of our actual mind can help unlock anything about the range of many other possible minds. This question has also been touched upon in Chapter 1. Note that understanding the linguistic foundations of our actual mind can give us at least a clue as to the plausible description of what our kind of minds is. The reason the clue is about the plausible description of our kind of minds rather than about other kinds of minds is that we cannot have direct access to the foundations of our own minds, and any understanding of our minds mediated by natural language is bound to be a description. But, when our goal is to find something out about the range of other kinds of possible minds, this description may be inadequate on several grounds which have been discussed in some detail in Chapter 1. As we proceed, we shall see that this is indeed so. However, on the present proposal a description that characterizes and thereby isolates mental structures per se from linguistic expressions will seek to balance the inadequacies of an approach targeted on the description of our kind of minds. Before we turn to our proposal in Chapter 3, we may proceed to scrutinize the ways in which the linguistic foundations of mind can be formulated.

2.1 Language as a Window onto Thought and Reasoning

Thought and language are considered to be the unique characteristics of the cognition of Homo sapiens. But, of course, other primates have thoughts, albeit in a rudimentary form—animals, especially baboons, great apes and our closest relatives, the chimpanzees, have an elementary form of theory of mind which allows them to entertain and understand what other conspecifics have as their goals and plans (Aitchison 1996). This appears to suggest that they have some form of thinking, although it is not as enriched as that of humans (Call and Tomasello 2005; Tomasello 1999). Moreover, it is also believed that other primates (apes and chimpanzees, for instance) use a sort of inferential/ostensive communication,1 which consists in using communicative behaviors in order to

1 In fact, the term ‘inferential/ostensive communication’ was originally used in Sperber and Wilson (1995).


draw the addressee’s attention to an entity by bringing it within the attention of the addressee (Gómez 1998). Unquestionably, natural language can serve as a reliable cognitive mark of thinking and reasoning. When someone speaks a language, we unconsciously repose our confidence in the apparently indisputable belief that the person in question is a thinking being. This belief is not merely an ordinary belief like the thousands of other beliefs we tend to form about other things. Rather, it is part of our species-specific cognitive make-up. In fact, the language capacity integrates so seamlessly into our cognitive machinery that this has led Dennett (1991) to posit that language is like a virtual machine—say, a kind of executable file—that runs on the neural hardware at the lowest level and gives rise to intentional activities, including speaking, when we act in the real world. Additionally, conscious thinking, for Dennett, involves the manipulation of words or language-like representations in a central workspace of the mind where different representations may jostle for space. It needs to be stressed that this view of language as a kind of software running on the brain hardware borrows the computational metaphor from a presumed hardware-software distinction, which as a distinction in its substantive sense may well be ungrounded. How software is instantiated in the computer hardware is hard to determine, given that the question of whether something is the software at the higher level or the computer hardware at the bottom is a matter of how the object in question is construed to be. For example, a computer server can be thought of as a kind of hardware (as a physical computing device with the cpu) or a kind of software (as a part of the ensemble of network applications). 
Overall, the identification of language with a kind of virtual machine that reorganizes/reprograms the brain’s mode of representations and computations rather than implementing a wholly different computational architecture in the brain, at least in a restricted sense, obliterates the distinction between language (or even conscious thinking), which is supposed to be installed on the brain hardware, and the hardware itself. Moreover, it needs to be made clear in this connection that adopting the hypothesis that our (conscious) thinking is in the form of natural language sentences (as in Carruthers (1996), for example) does not presuppose a virtual machine that is reckoned to characterize conscious thinking. However, it is evident that this hypothesis takes thoughts, especially propositional thoughts, to require language as a form of medium in order to be manifest in consciousness, and in virtue of this, it seems as if it is thought which is the virtual machine installed on or realized in the linguistic hardware. Because thinking on this proposal is situated at the personal level at which we as intentional beings intend, act, feel, believe, desire, infer and so on, natural language sentences which are imaged as thoughts in the mind are thus available to introspection. And if so, natural


language recognized as such cannot be the mode of hardware on which thinking can run, on the grounds that natural language sentences so imaged will require interpretation in order that they become meaningful, so that the corresponding thoughts will also have/inherit the respective meanings. This cannot be possible, since if (propositional) thoughts have/possess meanings corresponding to the meanings of the respective natural language expressions, either thoughts will themselves need another level of interpretation by the mind (at a certain sub-personal level of the mind) for the thoughts to come to possess meanings, or thoughts cannot be taken to be identical to meanings because thoughts will have to inherit meanings from natural language expressions. In either case (propositional) thoughts will oscillate between a software state and a hardware state—which is absurd, for the hardware level is located beneath the personal level of the mind, thereby vitiating the personal-level character of (propositional) thoughts. There is perhaps a way the whole idea can be saved if and only if we hold that the natural language sentences in which (propositional) thoughts are framed need to be mapped to some representations at the sub-personal (tacit) level of the mind, as Davis (1998) seems to believe. But what could these representations be? Davis thinks that these representations can be sentences in what is called the Language of Thought (lot) or Mentalese (Fodor 1975, 2008), which is postulated to be independent of any natural language but at the same time to have properties typical of natural language such as systematicity, compositionality and productivity. 
It is by virtue of the property of systematicity that we can say ‘John likes Mary’ as well as ‘Mary likes John’; compositionality requires the meaning of a linguistic expression to be derived as a function of the meanings of the parts of the whole expression, for example, the meaning of ‘John likes Mary’ is derived from the meanings of ‘John’ and ‘likes Mary’, the latter being in turn derived from the meanings of ‘likes’ and ‘Mary’. Finally, productivity rests on the capacity to produce an unbounded number of linguistic expressions from a finite inventory of expressions. It is quite well-known that the lot hypothesis invites the problem of infinite regress in requiring natural language expressions to be translated into the sentences of lot and back from lot into natural language expressions, in that lot sentences need to be translated for interpretation into the sentences of another mental language, and then again these sentences have to be translated into those of yet another mental language, and so on ad infinitum (see also Horst (1996), Dennett (1998) and Davis (2003) for relevant criticisms of the lot hypothesis). Whatever lot may actually look like, it is claimed to be represented at the sub-personal level of the mind where lot representations are claimed to be physical configurations of informational processing states that individuate mental computations. lot


representations at the sub-personal level of the mind are thus defined over the local or syntactic, rather than semantic, properties of the representational vehicles involved, and if so, lot sentences may or may not bring thoughts into consciousness. But the point to be driven home is that the computational property of lot sentences, understood in terms of the syntactic character of the forms over which lot is defined, is at odds with the personal-level property of natural language sentences which correspond to (propositional) thoughts. The reason this is so is that natural language sentences (tokens) are to be interpreted by some conceptual interpretative system the character of which may well be largely non-computational.2 And if so, the interpretation of natural language sentences (which identify (propositional) thoughts) rendered in lot can lead to a deleterious incongruence in computational properties which may not ultimately contribute to the causal role of mental representations in mental phenomena (such as beliefs having a role to play in perceptions and actions). It may also be added that Chomsky’s (1993, 1995) formulation of the computational system of the language faculty makes it well-nigh impossible for natural language sentences that embody thoughts to square with lot sentences when such sentences are ‘interpreted’ and thus ‘understood’ at the Conceptual-Intentional (C-I) interface connected to Logical Form (lf). Crucially, this is because the C-I system keyed to lf may have nothing whatsoever to do with the computational properties of lot sentences that are supposed to assign interpretations to natural language sentences. 
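The compositionality attributed above to both natural language and lot can be given a minimal illustration. The following sketch is purely illustrative and is my own, not any theorist’s formalism: it treats the ‘meaning’ of a binary-branching expression as a value computed from the meanings of its parts (the uppercase strings are arbitrary stand-ins for denotations).

```python
# Illustrative sketch only: compositionality treats the meaning of a whole
# expression as a function of the meanings of its parts and their combination.

def meaning(expr):
    """Compute a stand-in 'meaning' for a binary-branching expression."""
    if isinstance(expr, str):
        return expr.upper()      # atomic meaning: a placeholder denotation
    part1, part2 = expr          # binary branching: [part, part]
    return f"{meaning(part1)}({meaning(part2)})"

# 'John likes Mary' analyzed as [John [likes Mary]]:
print(meaning(["John", ["likes", "Mary"]]))  # JOHN(LIKES(MARY))
# Systematicity: the same parts recombine to yield 'Mary likes John':
print(meaning(["Mary", ["likes", "John"]]))  # MARY(LIKES(JOHN))
```

The point of the sketch is only that the value of the whole is computed from the values of ‘John’ and ‘likes Mary’, the latter in turn from the values of ‘likes’ and ‘Mary’.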
In the Minimalist architecture of the language faculty, lexical items are drawn from the Lexicon (the set of such selected lexical items is called the Numeration) and then the selected items undergo what is called Merge, which is a binary operation that concatenates two syntactic objects; for example, for the sentence ‘The money came rolling in’, the lexical items ‘rolling’ and ‘in’ are Merged, then ‘rolling in’ is Merged with ‘came’, and finally, ‘came rolling in’ is Merged with ‘the money’ (which is itself a result of the Merging of ‘the’ and ‘money’).3 Thus the structure built is shipped to pf (Phonological Form) for sound representations and to lf (Logical Form) for the syntactic representations of meanings. pf is accessible to the sm system/interface (responsible for articulation or perception), and lf is mapped to the C-I system/interface (mediating conceptual representations, discourse-related

2 This is a conclusion which Fodor (2008) also seems to vouch for when he thinks that many mental processes may be non-computational.
3 Readers who are familiar with the technical details of the Minimalist program may note that for simplicity I’ve omitted functional categories such as T (Tense) and little v (Light Verb) from the example provided here.


properties, beliefs, intentions etc.). The Minimalist architecture is sketched out in full below (see figure 1). Importantly, only the portion in the architecture that consists in the mapping of Numeration to representations at pf and lf (excluding the sm and C-I interfaces) is called the core computational system of the language faculty, which stores linguistic information used for instructions to the sm and C-I interfaces for articulation/perception and interpretation. The core syntactic system of the language faculty being the sole computational system, it is hard to see how natural language sentences that individuate thoughts can be assigned lot sentences, insofar as the computational character of lot and the presumably non-computational character of the C-I interface are plainly not concordant. More perplexingly, Chomsky (1993) also believes that there is no reason to suppose that there can be a ‘natural’ mapping between the products of the core computational system and the cognitive interfaces, namely the sm and C-I interfaces, because he thinks language is not designed for communication, and additionally, language may not be usable at all. Rather, the core syntactic system has evolved as a perfect solution to an optimization problem for a mapping between the sm interface and the C-I interface. In fact, the representational products of the abstract computational system (involving the mapping of Numeration to representations at pf and lf) need not be expressed either for articulation or for interpretation at all. Thus, for example, the sentences in (3–4) cannot be expressed for articulation.

(3) John seems to __ (John) be busy these days.
(4) What does he think John has drawn up __ (what)?

Figure 1: The architecture of the language faculty in the Minimalist program (Chomsky 1995). [Diagram: the Lexicon (Numeration) feeds Merge; at Spell-Out the derivation splits into Phonological Form, read by the Sensory-Motor System (sm), and Logical Form, read by the Conceptual-Intentional System (C-I).]
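The successive applications of Merge described above for ‘The money came rolling in’ can be rendered schematically. The following is an illustrative simplification of my own, not the Minimalist formalism itself: syntactic objects are modeled as ordered pairs, and labels, features and functional categories are omitted, just as in the book’s own example.

```python
# Illustrative sketch: Merge as a binary operation that combines two
# syntactic objects into a single larger syntactic object.

def merge(a, b):
    """Merge two syntactic objects; modeled here simply as pairing."""
    return (a, b)

# Building 'The money came rolling in' step by step:
rolling_in   = merge("rolling", "in")         # ('rolling', 'in')
came_rolling = merge("came", rolling_in)      # ('came', ('rolling', 'in'))
the_money    = merge("the", "money")          # ('the', 'money')
sentence     = merge(the_money, came_rolling)
print(sentence)
# (('the', 'money'), ('came', ('rolling', 'in')))
```

Each application is binary, and the derivation yields the nested constituency that, on the Minimalist picture, Spell-Out ships to pf and lf.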


The gaps in the examples above indicate the position of the displaced items, which are postulated to have copies in the position of the gaps indicated—the italicized words in parentheses. It is maintained that these copies are interpreted for meaning, but they cannot be articulated. That is, had natural language been designed not to be expressed for articulation, these copies would have remained where they are interpreted. Hence it is believed that the connection between the syntactic representations of natural language sentences and their externalization for articulation is a contingent affair. On the other hand, natural language sentences can have infinitely embedded syntactic representations which can be linearized at pf but may, however, not be usable for interpretation, or may simply not have any interpretation. The infinitely long center-embedded sentence in (5) is of this kind.

(5) The woman the girl the guy … picks up visits plucks flowers.

All in all, if this is the case, the possibility that natural language sentences are to be necessarily ‘interpreted’ and thus ‘understood’ at the Conceptual-Intentional (C-I) interface is further weakened (but see Hinzen 2013 for a different view). Beyond that, it is also not clear what the interpretation and understanding of natural language sentences amount to in substantive terms for the relevant natural language expressions to count as thoughts. Do the interpretation and understanding of natural language sentences necessitate another homunculus mind sitting inside the brain? Or do they designate abstract generalizations that qualify something as interpreted and/or understood? If the abstractions of the interpretation and understanding of natural language sentences are said to add something to natural language sentences, which turn into thoughts only when this addition is made, where do these abstractions, which are not believed to be identical to thoughts, come from? 
Or where on earth do the abstractions of the interpretation and understanding of natural language sentences hold—in the mind or outside the mind? It is worthwhile to observe that this problem inexorably brings in the problem of infinite regress, whichever option we choose. On the one hand, if the interpretation and understanding of natural language sentences require another hidden understander/interpreter for the relevant natural language expressions to count as thoughts, nothing stops us from positing another hidden understander/interpreter for the first hidden understander/interpreter, and so on ad infinitum. On the other hand, if we are inclined to affirm that the interpretation and understanding of natural language sentences are, after all, abstractions just like mathematical objects (lines, points, sets etc.), we are forced to conclude that our (propositional) thoughts clothed in natural language sentences require abstractions of intentional acts or states


(that is, interpretation and understanding) which may be constituted by other plausibly relevant thoughts, which would in turn warrant another level of interpretation and understanding of natural language sentences, and so on ad infinitum. Worse than that is the possibility that our (propositional) thoughts clothed in natural language sentences will cease to be thoughts. The reason is that the thoughts in question become thoughts only in virtue of being brought forth through some presumably viable activation condition that obtains between the abstractions of interpretation and understanding and natural language sentences, and in virtue of this, the activation condition recognized as such cannot in any sense make the (propositional) thoughts alive in the mind for online and/or offline manipulation in inferences and other forms of reasoning. Abstractions are just that—they can be reified entities, but, in a more substantive sense, they cannot drive mental processes any more than the very generalization that light bends in gravitational fields can itself make light bend into a black hole. Abstractions which become reified through our interpretation and understanding (assumptions manifest in scientific knowledge, for example) can certainly affect and shape our beliefs about things, but they cannot in themselves be causally active in minds in any substantive sense. What may well be represented in the mind is not the abstraction as such, for any abstraction is ultimately a description, which is a product of our intentional acts; rather, it is the mental state which an abstraction affects or is affected by that can be said to be represented in the mind (see Mondal (2014a) for details). In the face of such conundrums, we may be hard pushed to see how natural language sentences in which (propositional) thoughts are rendered can be ‘interpreted’ as well as ‘understood’. 
The other problem is that the case for a computational analogy for conscious thoughts that may run as virtual machines on the parallel neural hardware—as far as the picture projected by the Chomskyan architecture of the language faculty is concerned—becomes more tenuous, simply because it is the uninterpreted natural language sentences that are subject to syntactic and hence computational manipulations and constraints, while the thoughts which natural language sentences come to express are not. Therefore, thoughts do not appear to be computational processes. Far from it: natural language constructions are constituted by computational processes, as Chomsky (1993, 1995) believes. In this connection, it may also be observed that Chomsky’s thinking on the relation between language and thought harmonizes well with this conclusion, since he looks at thoughts through a point of view in which the abstract representations of natural language structures are ‘interpreted’ at the C-I interface for the expression of thoughts. This seems to place language on a higher level


of ontological and possibly analytical significance with respect to thoughts. There are, in fact, a number of divergent views on this issue. Dummett (1973, 1993a) goes for a view of mutual dependence of thought and language, albeit through the priority of language over thought. Grice (1989) has advanced the opposite view supporting an epistemological priority of thought over language. Still it can be said that the ontological nature of thought and language is indeterminate (Moravcsik 1990). Whatever way the priority issue is thrashed out, not much hinges on whether thought is prior (ontologically and/or analytically) to language or not. The reason is that this cannot say much about the way we can approach the question of exploring possible minds through natural language constructions. First, if language is given priority over thoughts, the anthropomorphic bias for human language may block our understanding of the question we propose to investigate. And if the notion of language is set in so broad a context as to include the communicative or sign-making systems of other animals and potential machines, the whole idea of understanding possible minds breaks apart, for the notion of language is tiresomely trivialized and the terminological jugglery with the extension of what language is cannot help us make sense of what language really is, let alone understand the structure and form of possible minds. Second, if we claim thought as something that can be given priority over language, it appears that we can do justice to the question we have set out to deal with because the priority of thought over language may have us believe that we can discern thoughts in many possible organisms, animals and systems which do not possess the language capacity. 
However, this option renders nugatory the hope of ever being able to check whether the nature of other possible minds can be looked into via natural language, since any direct method of tapping thoughts in other organisms, animals and systems always has to be based on our inferences, and additionally, bringing forward natural language expressions so as to uncover the mental structures beneath seems to go against the very character of the (analytical/epistemological) priority of thought over language. Most importantly, detecting thoughts in other organisms, animals and systems cannot be done in a vacuum because thoughts have to be anchored in a base or substrate for us to even make any inferences about the form of possible minds; otherwise those inferences are doomed to flounder. In other words, thoughts cannot be directly observed in other organisms, animals and systems in the way quantum effects, for instance, can be observed in physical systems, and so in order to make headway towards our goal we need to do more than think of thought as prior to language. In the present context, the aim of uncovering mental structures beneath natural language constructions has the advantage of seeing the analytical/epistemological priority of language over thought against the ontological priority of thought over


chapter 2

language. That is, we combine the benefits of both the priority of language over thought and that of thought over language, albeit in different ways.

2.2 Language as Conceptualization

There is another way of understanding linguistic structures so that they lead us into the interior space of the mind. Cognitive linguistic approaches (Lakoff 1987; Langacker 1987, 1999; Talmy 2000) advocate the view that semantic structures are not derived or mapped (only) from syntactic structures. Rather, semantics is a wholly different domain whose units are realized as conceptualizations in the mind. Syntactic-phonological units are considered to be symbolic units which are mapped onto representations of conceptualizations. Also, such conceptualizations are believed to be grounded in sensory-motor, perceptual processes. That is, the understanding is that conceptualizations are fundamentally and formally derived from aspects of perception, memory and categorization. Thus, it seems as if mental representations and processes underlying thoughts and reasoning are exactly what are or come to be expressed in symbolic units of natural language expressions. The direction of fit appears to be from mental processes and thoughts to natural language expressions. In this way, cognitive structures underlying linguistic expressions are structured in terms of how they are conceptualized in the mind. Aspects of embodiment derived from and anchored in sensory-motor experiences often determine the range of possible meanings corresponding to those cognitive structures. For instance,

(6) The donkey stood in the flower-bed.
(7) The donkey stood in the river.

Sentence (6) makes sense so long as we make reference to our spatial experience and knowledge of flower-beds. But (7) does not have the meaning in which the donkey’s feet are in contact with the surface of a river, since the surface of the river cannot support the donkey (unless, of course, the donkey swims), and the donkey’s legs cannot thus be in contact with the surface of a river.
What blocks this meaning, or rather the cognitive structure underlying this meaning, is not intrinsic to the sentence alone, and does not thereby come from within the sentence. Rather, such an otherwise possible meaning is blocked by aspects of our spatial experience. If this explanation is cashed out in terms of Langacker’s (1987, 1999) theory, we may say that the landmark (lm), which forms the spatial background and is a river in this context (but a flower-bed in (6)),


cannot be in contact with and support the trajector (tr), which is the donkey, the focal entity. Viewed this way, linguistic constructions which express a range of conceptualizations allow us to tap and make reference to the mental representations and processes, given that possible linguistic meanings can be constrained by the cognitive structures latent in our sensory-motor-perceptual domains. So meanings are not always a function of the constraints that are imposed by grammar/syntax. In fact, the approach that is perhaps most suitable for our purpose is Jackendoff’s (1983, 2002) theory of Conceptual Semantics within the framework of the broader cognitive approach to semantics, which supports a wider range of facts about meanings. Conceptual Semantics proposes that it is conceptual structure (CS) which allows us to connect to the world out there via the world projected within the mind. Hence conceptual structure is a mental structure that encodes the world as human beings conceptualize it, as Jackendoff thinks. Plus it is independent of syntax or phonology, but connected to them by interfaces that have interface rules which consist of words, among other things, that connect conceptual structures to syntactic and phonological structures. Under this view, no real distinction between linguistic rules and words is drawn, in that linguistic rules and words form two opposite poles on a continuum. For instance, idioms such as ‘kick the bucket’, ‘take X for granted’ etc. are not words. They are constituted by words, but the formation of such idioms is also governed by the syntactic combinatory rules in English. This permits us to say ‘take the fact for granted’, for example, but not ‘granted for take the fact’. Conceptual structure, being an independent level of thought and reasoning, builds structures in a combinatorial manner out of conceptually distinct ontological categories such as Object, Place, Direction, Time, Action, Event, Manner, Path etc.
Combinatorial structures built out of such categories encode category membership, predicate-argument structure etc. The conceptual structures of the sentences (8–9) are represented in (8’–9’) in the following fashion.

(8) The army stormed into the building.
(9) The balls were in the bucket.

(8’) [Event GO(+MANNER) ([Object ARMY], [Path TO ([Place IN ([Object BUILDING])])])]
(9’) [Past & State BE ([Object BALLS], [Place IN ([Object BUCKET])])]

Note that the conceptual structures of (8–9) have been rendered in terms of the relevant ontological categories, which label the bracketed constituents (set as subscripts in Jackendoff’s notation). The primitive conceptual functions such as GO, BE, IN, TO apply to the conceptual representations denoted by the lexical items under the ontological category Object. The representations have been simplified for ease in understanding. What is important
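The combinatorial, recursively embedded character of such conceptual structures can be made concrete with a small sketch. The following Python fragment is my own illustrative toy encoding, not Jackendoff’s formalism: the class name, field names and rendering conventions are invented for this example. It builds category-labeled constituents recursively and prints bracketed analogues of (8’) and (9’) (ignoring tense):

```python
from dataclasses import dataclass, field
from typing import List

# A toy rendering of Jackendoff-style conceptual structure (CS).
# Each constituent pairs an ontological category (Object, Place, Path,
# Event, State ...) with either a lexical concept or a conceptual
# function (GO, BE, IN, TO ...) applied to further constituents.
@dataclass
class CS:
    category: str                    # ontological category label
    head: str                        # lexical concept or conceptual function
    args: List["CS"] = field(default_factory=list)
    modifiers: List[str] = field(default_factory=list)   # e.g. MANNER

    def render(self) -> str:
        mods = "".join(f"(+{m})" for m in self.modifiers)
        inner = ", ".join(a.render() for a in self.args)
        body = f"{self.head}{mods}" + (f" ({inner})" if inner else "")
        return f"[{self.category} {body}]"

# (8') The army stormed into the building.
army_storm = CS("Event", "GO",
    [CS("Object", "ARMY"),
     CS("Path", "TO", [CS("Place", "IN", [CS("Object", "BUILDING")])])],
    modifiers=["MANNER"])

# (9') The balls were in the bucket (tense omitted here).
balls = CS("State", "BE",
    [CS("Object", "BALLS"),
     CS("Place", "IN", [CS("Object", "BUCKET")])])

print(army_storm.render())
# [Event GO(+MANNER) ([Object ARMY], [Path TO ([Place IN ([Object BUILDING])])])]
print(balls.render())
# [State BE ([Object BALLS], [Place IN ([Object BUCKET])])]
```

The point of the sketch is simply that conceptual functions such as GO or IN behave as function-like nodes taking category-labeled constituents as arguments, so that category membership and predicate-argument structure fall out of the data structure itself.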


is that the manner component encoded in the verb ‘storm’ is indicated within parentheses placed adjacent to the conceptual function for movement GO (following Jackendoff (2002, 2007)), since storming into a place incorporates the concept of moving into the place forcefully and suddenly. Conceptual structure is linked to another mental structure called spatial structure (SpS), where various collections of information from vision, the haptic system, the auditory system, the motor system, the olfactory system, the kinesthetic system, the somatosensory system etc. are said to converge. Spatial structure is thus a kind of mental level where correspondences between conceptual structures and pieces of information from different sensory-motor-perceptual systems are established. This enables spatial structure to encode different sensory-motor-perceptual features (such as shape, depth, index, color, dimension etc.) of objects, entities and space in language. This bipartite organization helps establish relations between language and the world out there via a series of levels of mental organization. The architecture of the mind in Jackendoff’s view can thus be schematized below (see figure 2). It is apparent that this view of the relation between language and thoughts and reasoning reflects the understanding that language recognized as such can sort of concretize weightless, invisible and intangible thoughts and reasoning. For Jackendoff, it is phonology, which is the least abstract component of grammar (among syntax, semantics, morphology and phonology), that brings our nebulous thoughts into our focal awareness, or rather into what he calls fringe awareness, a type of awareness that is often unconscious but can be conscious too. However, we also cannot avoid the often-felt certainty that most of the time thought actually operates independently of language (Jackendoff 2007; Burling 2005). Most of our thoughts about making quick decisions, effortless reflex-like

Figure 2: The architecture of the mind in Jackendoff (2002). [The original diagram links Phonology, Syntax, Conceptual Structure and Spatial Structure (labeled Language and Cognition) with Articulation, Audition, Vision, the Haptic system, the Motor system and the Proprioceptive system (labeled Perception/Action).]

actions, and higher levels of reasoning are independent of language. Recent findings also suggest that spontaneous thought processes and goal-directed reasoning have similar brain activations independent of language (Christoff, Ream and Gabrieli 2004). Even Millikan (2005) argues for such a possibility, stating that thought does not always need language and vice versa, because she believes that, as parallel systems, they have different types of intentionality. What she means to say in this case is that the intentionality of language is different from that of thought, on the grounds that the intentionality of language resides in how linguistic forms serve their linguistic functions by means of their form, while the intentionality of thought relates to the way intentional attitudes (such as thinking, believing, desiring, proposing, promising, feeling etc.) serve their functions of correlating with structures in the outer world. These conventional functions of cognitive capacities are also termed proper functions. In this connection, Millikan also emphasizes that these proper functions, which are akin to biological functions (the function of the heart, for example) in many respects, have no direct connection to individual human intentions or thoughts. Rather, they have a connection to language users’ utterances, which can have a generally descriptive character, given that utterances are to be understood in a generic sense. If this is the case, we may reasonably believe that it is the form of proper functions that actually binds together the cognitive capacities underlying both language and thought, even though the relevant proper functions for language and thought are not in themselves defined by virtue of a reference to human thoughts and reasoning.
Importantly, she does not discount the possibility of conceptual thoughts being formulated through the medium of language, as well as of proper functions of language being described by making a reference to human thoughts and reasoning. From this it becomes evident that this position attributes at least some of our conceptualizations to language, and that the conventional uses of language rather than linguistic structures in themselves can make reference to human thoughts and reasoning. This is, however, not to say that linguistic structures cannot be described in virtue of a reference to human thoughts and reasoning. Overall, it is clear that our conceptualizations of objects, properties, processes, events and other abstractions owe their character, at least in part, to linguistic expressions which may or may not have truth-conditions but must have some proper function(s). Understood in this sense, this idea paves the way for the non-human realization of many possible conceptualizations in other organisms, animals and systems, since such conceptualizations can have different kinds of proper functions in other organisms, animals and systems. There are, however, other perplexing problems that revolve around the degree of modulation that language can exert on thoughts and reasoning. Note that


if we are ready to acknowledge that language shapes our conceptualizations of many things around us and scores of abstractions, this does not necessarily commit us to the view that language (fully) determines the nature of thoughts and reasoning. This issue deserves special consideration, given that Whorf (1956) has taken the position that it is a specific language that determines a specific way of thinking—a thesis which is known as the linguistic relativity hypothesis or the Sapir-Whorf Hypothesis (named after the linguists Benjamin Lee Whorf and Edward Sapir). Whorf came to this conclusion after studying the Hopi language and, in particular, the Eskimo language, in which different words for different types and shades of snow are found[4] (see Pullum 1991 for a different view). This led him to hypothesize that the specific languages humans speak determine the structure of human thoughts as well as the reality our thinking and behavior assume. There is independent evidence that this thesis can be at least weakly supported. For example, Kay and Kempton (1984) have shown that color perception and categorization, especially the perception of focal colors, can be shaped by the color terms extant in languages, although Berlin and Kay (1969) much earlier attempted to establish that the universal cognitive mechanisms of color perception determine the character of (focal) color terms across languages, organized in terms of levels, since all languages have the same underlying structure of basic color terms. In a similar vein, Slobin’s (2003) study of motion verbs in languages including Spanish and English concludes that thoughts about motion are determined by the way languages encode the conceptualizations of motion.
For instance, languages like Spanish incorporate the conceptualization of path in motion verbs, while languages like English incorporate manner in verbs of motion (such as ‘slide’ or ‘roll’), and this is supposed to induce Spanish speakers to visually interpret path more easily, or conversely, to induce English speakers to tend to fall into a salient visual interpretation of manner. Likewise, Nisbett’s (2003) study of thinking styles in Europeans, Americans and Asians driven by grammatical features seems to strengthen the same point. Be that as it may, in the present context this issue is severely muddled owing to the weakly manifest anthropomorphic bias injected into the thesis, even if we put aside other criticisms that impugn the validity of the linguistic relativity 4 Although Whorf was not the first person to bring into focus the discussion on the Eskimo lexicography that involved different words for ‘snow’ (it was Franz Boas who raised this matter first in his 1911 book The Handbook of North American Indians), it is Whorf who highlighted the possibility that the multiplicity of snow-words can be linked to the multiplicity of language-related concepts, and he presented this idea in his 1940 article ‘Science and Linguistics’ (see Pullum (1991), p. 276).


hypothesis (see for discussion, Pinker (2007)). That specific languages we speak influence and determine the thoughts we have and entertain appears to fix the point for entry into the domain of humanly realized thoughts and reasoning. But this is misleading on several grounds. First, the lens-like nature of specific languages allowing for differences in thoughts and reasoning that assume differentially perceived realities is itself a thought. And if so, we are not sure whether this thought has itself been constructed through the medium of the English language (since Benjamin Lee Whorf as well as Edward Sapir were native speakers of English). Plus it is not clear why we should be disposed to think it is language rather than, say, the human memory or the human competence for social cognition that can exert the power of influencing and determining thoughts and reasoning, for after all the human memory or the human competence for social cognition is also unique in humans. If, on the other hand, language is thought to be the sine qua non of human cognitive capacities, this prevents (at least some) kinds of mental structures that underlie linguistic expressions in different languages from being possibly manifest in other organisms, animals and machines. This cannot be the case because many mental structures in humans underlying linguistic expressions may well have an overlapping extension shared with other animals and perhaps plants, since many such mental structures may be induced by the common physical properties of the external world that we all inhabit. For example, the physical properties of rivers and rocks in areas that have both rivers and rocks may induce in the animals (and possibly plants) living or located in such areas mental structures attuned to the properties and salient features of these rivers and rocks. 
There is practically nothing inherent in such mental structures that can stop them from being externalized or sort of concretized through human language and, for that matter, through other plausible expressions that animals and plants may have or could possibly have possessed. Second, the whole thing surrounding the hypothesis is centered on the idea that we can gain entry into the territory of human thoughts and reasoning by examining the structures of specific languages that we humans speak. It seems reasonable to assume that this supposition risks taking language to be the entry point rather than an entry point for the exploration of human thoughts and reasoning. As a matter of fact, many cognitive consequences that are claimed to ensue from the language-specific conceptualizations of number, color categories, motion, space and a plethora of other things and categories can be traced to the properties of our cognitive organization itself. In many cases, language-specific conceptualizations may also be at odds with the dictates of our cognitive organization. For instance, take the case of conceptualizations of path and manner of motion. It may be observed and checked that in the online processing of visual encounters in our day-to-day lives, the manner of motion and the path of motion may have contextually grounded salience effects which are in part due to the nature of our conceptualizations of the manner of motion and the path of motion, and in part due to the properties of physical events and motions in the physical world. Thus, for example, if the parents see their baby crawling under a table, it is the path of the motion that may be more perceptually salient for the parents concerned than the exact manner of motion involved in the crawling, because crawling is what babies usually do (unless, of course, the baby suddenly shows an aberrant behavior in crawling not noticed before by the parents). Then, if one encounters a snake slithering down the porch of the house, the path of the motion of the snake may again be more salient than the manner of motion. If, on the other hand, a car comes hurtling towards some passersby, the passersby may have reason to be worried by the manner of motion of the car rather than by the exact path of motion of the car, because the car’s slow pace in the same path would not, more plausibly, alarm the passersby. Beyond that, in the case of pictures, paintings or images of the motion of moving objects, animals or humans, what is important to note is that the manner of motion can be inferred from, rather than visually experienced in a direct manner in, the static representation of motion as it is sort of frozen in static representations. But no matter how frozen the manner of motion in static representations is, it would not surprise us to find that the manner of motion may be, at least in most pictures, paintings or images, as perceptually salient as the path of motion of the entity or entities shown in the image, picture or the painting in question.
The reason for this is that both the manner of motion and the path of motion become abstractions that have to be inferred from the static representations of dynamic events anyway, although the manner of motion is by its very nature more dynamic than the path of motion. Unless paintings or pictures are created in such a manner as to generate a perceptual bias in favor of either the manner of motion or the path of motion, it is unlikely that one can be more salient than the other in most pictorial representations. Given this understanding of the relationship between visual perception and the conceptualizations of the manner of motion and the path of motion, it would be worthwhile to inquire whether language users’ contextualized use of manner of motion verbs rather than verbs of path of motion, or of verbs of path of motion rather than verbs of manner of motion, in various tasks of what Slobin calls ‘thinking for speaking’ (speaking, writing, listening, reading, viewing, understanding, imaging, remembering etc.) is due to the very conceptualizations of the manner of motion or the path of motion formed and shaped by the particular languages of language users. Even if we grant that the particular


languages language users speak, in virtue of having shaped the conceptualizations of the manner of motion or the path of motion, induce language users to tend to use a greater number of verbs of path of motion (in French speakers, for example) or verbs of manner of motion (in Dutch speakers, for example), it does not follow that the language-based conceptualizations of the manner of motion or the path of motion cause the language users to saliently use a greater number of verbs of path of motion or verbs of manner of motion when they use the specific languages in a task, say, the reporting of mental imagery. It is quite plausible that the actual conceptualizations of the manner of motion and the path of motion constructed during the language users’ engagement in such tasks are equally salient in their minds, and it is the linguistic expressions produced that appear to be rough markers or paraphrases of the actual conceptualizations, thereby conveying the impression that the underlying cognitive representations are modulated by the relevant properties of particular languages. This is because language users have no way other than producing the specific linguistic expressions their languages allow for. This may have nothing whatever to do with the actual and exact forms of mental representations or conceptualizations of the manner of motion and the path of motion. This is certainly not to deny that language-based conceptualizations of the manner of motion or the path of motion exist in language speakers’ mental repertoires. Rather, this is to reject the idea that language-based conceptualizations of the manner of motion or the path of motion do the whole job when language users engage in diverse tasks of thinking for speaking.
In many respects, an ensemble of language-based conceptualizations and the constellation of actually formed mental representations or conceptualizations constructed during language users’ engagement in various tasks may naturally be pitted against one another. For instance, neither the path of motion nor the manner of motion is encoded in a language like Bengali—a verb like ‘dourano’ (‘to run’) in Bengali encodes neither the path of motion nor the manner of motion, and no specific running verb for a specific manner of motion (as in ‘jog’ in English) or for a specific path of motion (as in ‘pasar’ in Spanish) exists in the language. But from this one cannot conclude that the language-based conceptualization of running in speakers of Bengali will always be aligned with the actual conceptualizations of the manner of motion and/or the path of motion formed during any activity involving mental imagery or language understanding. Note that the assumption here is that if the actual conceptualizations of the manner of motion and the path of motion are supposed to be identical to the language-based conceptualizations deployed in tasks of thinking for speaking, language users of Bengali may not be expected to diverge from this one-to-one mapping or the identity of the actual


conceptualizations of the manner of motion and the path of motion with the language-based conceptualizations. But, if we assume that the language-based conceptualizations may not accord with, or may simply be quite distinct from, the actual conceptualizations of the manner of motion and the path of motion employed by language users, language users of Bengali cannot be expected to cling to any particular type of encoding—the encoding of the manner of motion or the encoding of the path of motion or even the encoding of the direction of motion. Clearly these are two conflicting assumptions. One may, however, urge that the postulation of non-linguistic conceptualizations of the manner of motion or the path of motion is redundant, and add that only the language-based conceptualizations may be said to actually permit both possibilities specified in the two assumptions. That is, one could insist that the language-based conceptualizations, being underspecified in the case of Bengali speakers, may either cause Bengali speakers to remain neutral in choosing predicates of motion in tasks of thinking for speaking, or bias them in favor of any particular type of encoding in predicates of motion. But then this argument seems to ride on the close commensurateness of language-based conceptualizations with the structure of the type of encoding in predicates of motion used by language speakers in activities of thinking for speaking. This does not make sense. If we buy this argument, we will also have to state that the bias for the manner of motion verbs in speakers of English, or for the path of motion verbs in speakers of Spanish in activities of thinking for speaking is closely commensurate with the language-based conceptualizations grounded in the types of motion verbs of the respective languages. This is misleading, given that languages such as Spanish or Italian actually have a mixture of manner of motion verbs and verbs of path of motion (see Levin, Beavers and Tham (2009)).
And if this is the case, why should there be a bias for the path of motion verbs in speakers of Spanish, for example, in activities of thinking for speaking? If all that matters is the intimate commensurateness of language-based conceptualizations with the structure of the type(s) of encoding in predicates of motion in a language, Spanish language users should be expected to be biased for both the manner of motion verbs and the verbs of path of motion. In the face of this dilemma, one may again have recourse to the argument that the proportional magnitudes of specific types of motion verbs in a language may induce a bias in activities of thinking for speaking for that specific type of motion verbs which has a greater proportion than other types. Thus, one may maintain that the greater magnitude of the proportion of the motion verbs of path in Spanish is responsible for the bias for this specific type of encoding. Again, this misses the point, for proportions are after all statistical quantities that depend on measurements from outside. Conceptualizations or


types of conceptualizations represented in the mind, whether language-based or not, are intrinsically independent of statistical regularities or quantities. There is no mind-internal measurement that determines the preponderance of one type of conceptual encoding rather than another. Only behavioral processes are sensitive to patterns of statistical regularities or quantities. So if our behavioral processes induce a bias in favor of or against any type of conceptual encoding, there is no sense in immediately leaping to the conclusion that this bias springs from the language-based conceptualizations recognized as such. More of X in the mind may, but need not, yield more of X in the outputs of cognitive processes. To put it in other words, there does not necessarily exist any statistically governed smooth transition from conceptualizations or types of conceptualizations represented in the mind to behavioral outputs of linguistic activities in thinking for speaking. Hence, for example, if a language has more transitive verbs (two-place predicates such as ‘kill’, ‘do’ etc. and/or three-place predicates such as ‘give’, ‘put’ etc.) than intransitive verbs (one-place predicates such as ‘die’, ‘cry’), it does not thereby follow that conceptualizations of events in speakers of this language will be particularly oriented towards events having more participants rather than towards events having just one participant. At this juncture, it needs to be stated that this line of reasoning does not turn on arguments for a specific degree of interaction between thought and language within which the assumption that language influences specific ways of thought is consonant with a diluted form of the linguistic relativity hypothesis. Rather, the present work raises a question about how, but not why, thoughts or conceptualizations accepted as such can be said to interact with linguistic representations, regardless of whether the interaction is intimate or moderate or even discordant.
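The claim that a preponderance of some encoding type in the mental lexicon need not surface as a matching preponderance in behavioral output can be illustrated with a deliberately artificial sketch. Everything below (the verb lists, the 80/20 lexicon split, the 70/30 event distribution and the selection rule) is invented purely for illustration and makes no empirical claim:

```python
import random

# Toy lexicon: transitive verbs heavily outnumber intransitives (8 vs. 2),
# mimicking "more of X in the mind".
LEXICON = {
    "transitive": ["kill", "do", "give", "put", "hit", "make", "take", "see"],
    "intransitive": ["die", "cry"],
}

def conceptualize_event(rng: random.Random) -> str:
    # Event conceptualization is driven by what happens in the world,
    # not by lexicon proportions: here, one-participant events dominate.
    return "one-participant" if rng.random() < 0.7 else "two-participant"

def speak(rng: random.Random) -> str:
    # Verb choice follows the conceptualized event, not lexicon statistics.
    event = conceptualize_event(rng)
    verb_class = "intransitive" if event == "one-participant" else "transitive"
    return rng.choice(LEXICON[verb_class])

rng = random.Random(0)  # fixed seed for reproducibility
utterances = [speak(rng) for _ in range(1000)]
share_intransitive = sum(u in LEXICON["intransitive"] for u in utterances) / 1000

# Although intransitives make up only 20% of the lexicon, they dominate
# the output, with a share close to 0.70.
print(f"intransitive share in output: {share_intransitive:.2f}")
```

Here verb choice is driven by the conceptualized event rather than by lexicon proportions, so intransitives dominate the output even though they are a small minority of the stored lexicon; this is one way the transition from mental store to behavioral output can fail to be statistically smooth.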
In this connection, the problem of description of conceptualizations looms large. Jackendoff’s conceptual categories that build the combinatorial elements of conceptual structures are as descriptive as Wierzbicka’s (1996) semantic primes can be. The problem of description of conceptualizations exists thanks to the rendering in symbolic forms of what (plausibly) exists in the mind. This seems to invoke the problem of increasing levels of interpretations of such symbolic forms as we scale up the order of our descriptions. That is, it is a question of what ‘interprets’ the symbolic forms that encode the mental contents. We may try to dodge this apparently naïve issue by saying that the symbolic forms are mere notations or descriptive markers to be interpreted only by us, just as volume, temperature or other physical quantities are measured in terms of notations devised and interpreted by us. Then, this appears to imply that there is nothing intrinsic in the symbolic forms in themselves that (can) determine the real structure or organization of thoughts and conceptualizations. From another related perspective, the


question of how the conceptualizations are represented in the mind is also connected to the representational capacity of symbolic forms. For instance, Wierzbicka believes that the rules which determine various possible combinations of semantic primitives mapping the realms of conceptualizations are universal and intuitively verifiable, but do not constitute a kind of innately specified unverifiable calculus. This raises another concern that can help pry what is intentional apart from what possibly is not. Irrespective of whether or not the set of rules for the combination of all semantic primitives can be formalized in terms of a calculus which is verifiable, or even of whether or not the primitives are innate, the intuitions that are supposed to support verifiability cannot be part of the mentally embedded conceptualizations. Intuitive verifiability is called upon to substantiate the case for the ‘syntactic’ rules for combinations of semantic primitives that are supposed to structure conceptualizations. What this means is that conceptualizations are in themselves independent of these intuitions. Rather, intuitions about linguistic meanings are to provide evidence for the representational relation that obtains between the rules of combination of these semantic primitives and the structures of conceptualizations. Therefore, it can be observed that intuitions about linguistic meanings provide evidence for the representational relation obtaining between the rules of combination of the semantic primitives and the structures of conceptualizations, but not for the conceptualizations themselves—that is, for what is really represented in the mind. Had intuitions about linguistic meanings provided evidence for the conceptualizations, the conceptualizations would end up as the intentional object towards which intuitions are (directly) oriented. 
Thus, if conceptualizations become the intentional object at which intuitions are directed, intuitions about linguistic meanings would have to answer to what is really present in the mind. But then this makes the whole thing circular, since conceptualizations are believed to be what is really represented in the mind, and on the basis of this, it follows that conceptualizations also answer to intuitions about linguistic meanings because nothing prevents intuitions about linguistic meanings from being conceptualizations. And hence the evidence that intuitive verifiability provides for the semantic primitives or their combinations does not necessarily uncover what is actually conceptualized in the mind. In addition, there cannot exist a unique representational relation obtaining between the rules of combination of the semantic primitives and the structures of conceptualizations, in that the structure and form of conceptualizations will vary appropriately in accordance with the choice of the symbolic forms that stand for the semantic primitives. Because there can be several possible choices for the exact semantic primitives or ontological categories actually selected, there

Natural Language and the Linguistic Foundations of Mind

55

can also be several corresponding forms of conceptualizations. That is, what is really present in the mind may remain invariant, whereas the structures of conceptualizations will vary depending on the exact semantic primitives or ontological categories chosen. This problem, recognized as such, in part emanates from the metalinguistic fallacy inherent in most attempts to characterize the structural form of conceptualizations. That is, attempts to describe linguistic meanings or conceptualizations in terms of natural language structures and/or some logical symbols/formulas end up widening the distance between what is actually conceptualized in the mind and the representational devices. It is nonetheless inevitable that we are bound to express through some symbolic form or the other anything that is linguistically conceptualized. It is virtually impossible for humans to move out of the symbolic confinements, and then straightforwardly describe or make reference to linguistic symbols and meanings. Despite this seemingly insurmountable difficulty, the distance that is widened owing to the use of linguistic symbols or some other kind of symbols can perhaps be reduced. If we are able to do this, we can at least describe possible mental structures in independent terms. This can only be achieved by sort of narrowing down the representational gap between what is actually conceptualized in the mind and the representational devices. This will be attempted in Chapter 3.

2.3 Language as a Mental Tool

Language can also be regarded as a kind of scaffolding that supports the infrastructure of thinking and reasoning. This view upholds a principle that attributes cognitive powers to language as a mental system which transforms the very structure of the mental machinery. Many patterns and forms of thoughts and reasoning are thus believed to be supported rather than determined by linguistic structures. This view has been advanced by Clark (1997). Only in this sense does this view jibe with Dennett's (1991): both believe that the changes brought about by language do not drastically alter the neural architecture. Rather, the changes are superficial, and for the most part, involve cognitive processes that interface with language. Importantly, this view differs from the Whorfian linguistic relativity hypothesis, in that thoughts and reasoning on this proposal are not supposed to be filtered through particular languages and, additionally, natural languages are not, strictly speaking, conceived of as cultural artifacts. What is significant in the present case is this supposition: language as a cognitive capacity universally specified changes the structure of the brain's organization, thereby augmenting the capacities of cognitive processes
underlying numerical calculations, categorization, reasoning, learning, believing, imagining, spatial cognition, attention etc. For example, memory processes are greatly augmented when linguistic labels help trace and keep track of the mentally represented items they stand for, or when these labels allow items or concepts to remain alive in the working or long-term memory. Such processes play a constitutive role, especially in mathematical calculations performed in the head. The linguistic labels for numbers hugely simplify the representations on which online procedures operate when we add, subtract, multiply or divide numbers. Likewise, many kinds of objects in the environment can be properly categorized or recognized if they come with linguistic labels. Words such as cream, soap, towel or toothbrush help demarcate the objects these words denote from a host of other objects such as piano, car, table, star etc. Even many abstract concepts that cannot otherwise be seen or touched are easily conceptualized when formulated in linguistic terms. Concepts of helping, hospitality, believing, sincerity or even the content expressed in the well-known sentence by William Wordsworth ‘The child is father of the man.’ can be recognized, understood and categorized as they are expressed in language. Furthermore, many of our plans in making a speech or a presentation, traveling, playing games, doing teamwork etc. are rehearsed in natural language expressions. Even spatial navigation is facilitated by the linguistic formulation and conceptualization of directions, maps and movements to places. No doubt a number of our cognitive processes are structured, amplified, facilitated, enhanced and reinforced by language which is taken to be a cognitive system. What is crucial in this view is the idea that cognitive processes which are constituted by computations do not occur just within the boundaries of our cranium. 
Rather, cognitive processes are claimed to go beyond the confinements of our brains. Linguistic expressions, when externalized onto objects and entities in the world, tie together the operations within the brain and the entities out there in the world (see Wilson 2004). For example, when we write down our ideas in books, papers, diaries, computers or on walls, posters etc., the mental processes that underlie and accomplish these tasks are thus said to interface with the objects that are located outside of the brain. Thus, any computations that the brain performs in executing tasks of such kind are supposed to encompass a domain that includes the brain, body and the world. Central to this conception of the mental processes of thinking and reasoning is the representational role of linguistic expressions. The underlying idea does not warrant that the mental processes of thinking and reasoning will have to be implemented in linguistic expressions when thoughts and reasoning are said to be scaffolded by linguistic expressions. This is largely in accord with Stainton (2006), who holds that the content of thoughts which is grasped or
understood in the mind is not expressed by any (unique) sentence(s) in the mind because many subsentences or linguistic structures with parts of expressions omitted (as in ellipsis constructions such as 'I can play rugby, but they cannot.') are good enough to express the contents of thoughts. This is plausible on the grounds that other mental faculties for perception, inferences and reasoning fill in the details left blank by gapped linguistic expressions. It seems as if other mental faculties are lumbered with the task of accessing, retrieving and manipulating the linguistic representations on the basis of the relevant information drawn from perceptual, inferential and other integrative mental processes. If this is the case, mental computations—whatever they actually turn out to be—bind linguistic conceptualizations and a host of perceptual, inferential and other integrative mental processes underlying thinking and reasoning. Significantly, this seems to extend the boundaries not only of cognition but also of computation recognized as something that underlies cognitive operations. The computational theory of mind (see Fodor 1975; Pylyshyn 1984; Chalmers 2012) espouses a view of cognition within which cognitive operations are deemed to be intrinsically computational in their form and character. Initially this view was defended in order to capture and account for the properties and features of the whole of cognition. This optimism has, however, been abandoned in Fodor (2000), who now believes that many of the perceptual, inferential and other integrative mental processes cannot plausibly be computational.
If perceptual, inferential and other integrative mental processes become interspersed with linguistic representations and the operations that process such representations in thinking and reasoning, we have reason to aver that mental computations that can be said to ground thinking and reasoning in the neural architecture come to constitute processes of thinking and reasoning. If so, this flounders on a number of paralyzing problems and dilemmas. First, if (at least many of) the perceptual, inferential and other integrative mental processes cannot be computational (qua Fodor), then mental computations cannot constitute processes of thinking and reasoning, given that perceptual, inferential and other integrative mental processes become interspersed with linguistic representations and the operations that manipulate them in thinking and reasoning. Second, recent formulations of the computational character of mental operations resist the idea that all of cognition can be computational, or that formal properties of computation are always relevant to the explanation of what is cognitive or how cognition works. For example, Milkowski (2013) advocates a view of computation that exceeds what representations in themselves warrant. For him, what representations are can be cashed out in terms of representational mechanisms that identify the objects represented as well as
the information about those objects, and assess the epistemic value of the information picked up about those objects. The conversion of the input information into representations within and for a system with internal states attuned to the external world is mediated by computations. Hence all representations a representational mechanism exploits are viable only because of computations, whereas computations can be without any representational content (as in a Turing machine that has halted). That is, representations cannot be representations without computations, but computations can obtain without representations. Crucially, on this proposal representational mechanisms are not identical to formal models of computation. Rather, they implement such formal models of computation, which are supposed to obtain at a level isolated from contextualized interactions with the world and also independent of the level at which we may describe the components and inter-component interactions within a system constituting a mechanism. If this view of computation vis-à-vis representation is adopted, the presupposition that mental computations constitute processes of thinking and reasoning becomes both substantively and explanatorily vacuous, as formal properties of computation are isolated from contextualized interactions with the world as well as from the constitutive properties of the neural architecture. One may, of course, try to circumvent these dilemmas by not clinging to any version of the computational view of mind, whether Fodorian in spirit or not. But then this does not square with the implicit belief that mental operations are computations which extend beyond the brain and are augmented by language. Nor does it accord with the supposition that the content of thoughts can be cashed out in terms of representations on which computations (can) operate.
The problems and dilemmas noted above cannot simply be translated into the by now familiar tensions that exist between the classical model of computationalism in cognitive science (which adheres to a view of mental computation that is circumscribed within the individual) and the new embodied cognitive science (which extends the domain of mental computations into the world). To see that this is so, we have to dig a bit deeper. Note that if the line of reasoning pursued in the paragraph above does not seem to be adequate, one may also contend that mental computations that (may) extend beyond the confinements of the brain do not constitute but rather influence or modulate processes of thinking and reasoning, precisely because the coupling of internal cognitive processes to the environment outside does not guarantee the constitution of a whole cognitive process consisting of the internal cognitive processes and the interacting objects along with the environment. In other words, the argument may run like this: if a whole cognitive process cannot be constituted merely by the coupling of the internal cognitive processes,
the interacting objects and the environment all put together, mental computations in virtue of that coupling cannot be said to constitute extended cognitive mechanisms or processes. Thus, one may suppose, even if the coupled system (with the internal cognitive processes and the interacting environment) implements thinking and reasoning in an extended space, this coupling can at most have mental computations modulate but not constitute thinking and reasoning—which does not require that mental computations move out of the brain's boundaries. To give one illustrative example, linguistic processes in a speaker's head giving rise to language production may modulate or influence the processes of comprehension inside the head of the listener, but it is odd to state that the speaker's language processing constitutes or is a part of the listener's comprehension processes. Simply speaking, mental computations are here kept from being identified with or being a part of thinking and reasoning, which may in turn influence but not constitute mental computations. Apparently, this accomplishes two tasks: (i) mental computations are isolated from the external world's interactions; (ii) mental processes of thinking and reasoning which can have an extension outside the brain may influence or be modulated by brain-internal mental computations. Aspects of this argument rest on a well-known critique of the thesis of extended cognition that uncovers what is called the 'coupling-constitution fallacy' (Adams and Aizawa 2008; Rupert 2009; but see also Kagan and Lassiter 2013). It is thus said that the coupling of internal mental processes to external objects and processes is not sufficient for the constitution of a cognitive system the parts of which are the internal cognitive processes and the external objects and processes. Ironically enough, the argument expressed in this critique turns out to be a boon rather than the bane of the thesis of extended cognition.
This is, nevertheless, not quite right. Regardless of whether or not internal cognitive processes, external entities and the world are coupled to one another, or even constitute a whole cognitive mechanism/process, it is not clear how mental computations can operate on linguistic representations by remaining confined within the brain and yet underlie or support the mental processes of thinking and reasoning which can be said to go beyond the brain's boundaries. It looks as if the mental processes for language are disjoint from those of thinking and reasoning—which is profoundly misleading, at least for humans. Besides, given that the coupling of internal cognitive processes and external entities does not suffice for the extension of cognitive processes outside the boundaries of the brain (see Clark 2008), such coupling does not then appear to have anything to do with mental computations which are believed to underlie cognitive processes. To put it in a different way, mental computations come to constitute and underlie only those mental processes
that involve neither a coupling of internal cognitive processes and external entities without any extension of cognitive processes into the world nor actually possible extensions of cognitive processes. One example of such a mental process could be imagining the meaning of a phrase or a sentence without any external aid. But there are no principled grounds for foisting mental computations on cognitive processes that do not engage in coupling or any kind of cognitive extension, and for snatching mental computations away from cognitive processes that do obtain via coupling or some kind of cognitive extension, for computations are intrinsically substance-independent anyway. The world-brain boundary is not a natural cut for erecting a wall between mental computations and other non-computational coupled and extended mental processes. From a different perspective, that language as a cognitive system gradually moulds thinking as the language capacity develops in children, thereby transforming the structure of the cognitive machinery, reflects a commitment to a view that ascribes representational properties to the mental structures shaped by language. This is overall in tune with the idea that language acts as a complementary cognitive system augmenting the cognitive operations of the brain. In fact, Vygotsky's work (1962) has shown that thought and language initially develop almost separately in the first few months of children's development, but as soon as language starts developing in the ontogenetic pathway, it influences the growth of both private and cultural thinking through a kind of 'private speech'. This kind of private speech also helps configure cognitive niches through the manipulation of the representational properties in the brain. Such cognitive niches can include experiences with the objects in the vicinity, the human beings around and also the environment in which children learn to do things (playing, for example).
The Piagetian correspondences between sensory-motor schemas and linguistic constructions constitute another example of this sort. This is thus compatible with mental processes extending beyond the boundaries of the brain to include the body, external symbols and the world. But in a more significant sense, what may be noted for the present purposes is that thinking which is akin to entertaining thoughts needs to be distinguished from the thoughts themselves. The shaping of thinking by language is due in large measure to the mental processes language gives rise to rather than to the representations per se. Thus, if various contextualized ways of thinking are modulated by linguistic representations and processes, the procedural interpretation of the linguistic system in the mind can be a good candidate mechanism supporting a unique type of thinking. Linguistic processing operating in terms of a sequence of procedures acting upon words and/or phrases and sentences seems to bestow the human type of mentality on our cognitive
apparatus. But in no way does this prevent animal communication systems from altering the structure of cognitive operations in animal brains. The only difference resides in whether organisms or systems have a language-like communication system or not. Take, for example, a plant or a digital computer. In this case, we can say with a reasonable degree of certainty that the organism or the system in the plant or the machine does not have a usable language-like communication system. Then all that is conceivably inherent in the plant or the machine is an ensemble of representations and biologically/physically plausible mental structures. Whether these representations are deployed for entertaining thoughts is, to all intents and purposes, beyond our grasp. Also, it is patently hard to determine what such representations and/or mental structures may look like when we are aware that examining potential cognitive processes in plants and other organisms is confined to mostly checking the inputs and outputs of such processes. The case of machines is, however, a bit different, since we can figure out how the internal parts and processes that link them operate in machines. What is evident is that thinking cannot be determined in other animals or organisms, plants and machines without the representational structures of thoughts, while the representational structures of thoughts can at least be inferred even in the absence of any visible marks of thinking processes. Trying to determine thinking in other animals or organisms, plants and machines without having recourse to the representational structures of thoughts is like trying to understand what motion is like without any idea of the properties of objects that can move (for example, mass).
Overall, it appears that mental processes of thinking and reasoning and the thoughts entertained are, by and large, complementary in their cognitive roles, although determining the grounding of the cognitive role of one with respect to that of the other is not a demonstrably well-balanced affair. Indeed, this needs to be put to the test. We shall have more to say on this when we turn to the discussion in Chapter 3.

2.4 The Expressive Power of Natural Language and Ineffability

If there is anything that makes natural language profoundly different in character and form, it must be its expressivity. Languages allow humans to express many things that are perhaps unthinkable without the very existence of natural language. We can talk about the houses around us, trees, animals, light, the moon, the sun, our own feelings, scores of distant stars and thousands of abstract ideas and things in bewilderingly complex ways that linguistic constructions permit. However, linguistic expressivity is constrained by the limitations
of our sensory, cognitive and affective systems. Just as there are myriads of things we can express using natural language, there are also countless things or abstractions that we cannot and perhaps will never be able to express. They may be ineffable ideas, abstractions, feelings or thoughts. In this context, one may also point to legions of inexpressible meanings that do not often get out of the grammatical machinery. For example, the possible meanings that could be manifest in an ungrammatical English sentence such as 'Do not before really laying me know I' or 'I wonder what whether he chose' may never see the light of day through the linguistic expressions in which they are couched. Thus, it seems that this is also an example of ineffability, by virtue of the fact that the idea conveyed in such sentences cannot be expressed in that specific natural language (that is, English). Whatever way one chooses to deal with this, the present concern is quite different. In the current context, we bother about ineffable abstractions, feelings or thoughts that cannot at all be expressed in any language, simply because the perceptions felt or the abstractions grasped are below the recognition threshold of our sensory, cognitive and affective systems that underlie many of our thoughts, inferences and reasoning. Language-specific constraints on the expressivity of natural language are grounded in the properties of language itself, while limitations of our sensory, cognitive and affective systems in expressing certain abstractions or perceptions that can actually be expressed in natural language derive from the way our sensory, cognitive and affective systems interface with the linguistic system at the fundamental level of cognitive organization. Besides, the expressive power of natural language also rests on how much logical structure natural languages can express in different linguistic constructions in virtue of the intrinsic properties of natural language.
Thus different languages can have different types of constructions corresponding to different logical structures, while other languages can have fewer such types of constructions, thereby encapsulating fewer logical properties and relations (see Keenan 2009 for details). For example, many languages such as Hindi, Italian etc. drop subjects, while languages like English do not. In such a case, languages like English express more logical structure than languages like Italian or Hindi by obligatorily expressing the subject argument on the surface. But this is not the kind of linguistic expressivity we aim to explore in this section. Note that language is a second-order cognitive system that operates on representations and/or informational regularities extracted and derived by our sensory, cognitive and affective systems from interactions with the environment. Even if our perceptions and feelings may be continuous, natural language may impose its own discontinuous organization on our experiences and perceptions. It needs to be emphasized that the discontinuity of natural
language refers to the exact form or shape of the symbolic vehicles used in natural language such as sounds, words, phrases etc., but not to the meanings conveyed by those linguistic objects. Thus, many relevant features and aspects of the environment as well as of what we really perceive and feel may be lost, especially when the representations and/or informational regularities extracted by our sensory, cognitive and affective systems from the relevant interactions with the environment are transduced into linguistic expressions. A case from sensory perception can clarify the point made here. Much of sensory perception can be characterized as mathematically non-transitive. Thus, if we find that X and Y are perceptually indistinguishable, and that Y and Z are also indistinguishable, it is not necessary that X and Z will also turn out to be indistinguishable (see van Deemter 2010). This can obtain because there can be degrees of differences between X and Y on the one hand and Y and Z on the other which are not detected by our perceptual mechanisms, and ultimately, these differences, by being added up, end up as a sizable difference that becomes perceptually identifiable. This can be illustrated by means of a simple example, as discussed in Mondal (2016). Suppose we have three indistinguishable pictures a, b and c of the same size, material and form. Now let's say that the difference between the pictures a and b is only at the microscopic level of roughness or color distribution which is not detectable through our eyes. Add to this the difference between the pictures b and c which can be a bit higher than the difference between the pictures a and b. Despite that, if this difference between b and c is still not detected by our eyes, the difference between the pictures a and b on the one hand and the difference between the pictures b and c on the other add up to a recognizable or identifiable difference detectable by our eyes.
Finally, this can lead us to detect a difference between the pictures a and c. This condition may apply to other kinds of sensory perception including tactile, olfactory, haptic perceptions, and also to affective feelings. Affective experiences may vary from one individual to another. However, there is something general about affective experiences which may make affective experiences aligned with perceptions (see Roberts 1995; Prinz 2006). Thus, for instance, if a person affectively experiences two things, say, X and Y, which are not identical in their feel in the relevant emotional context, it is quite possible that the person concerned feels something, say Z, which is intermediate between X and Y in its feel such that the person finds X and Z on the one hand and Z and Y on the other equivalent in their respective emotional contexts. Thus that person may end up feeling a difference between X and Y, even though X and Z on the one hand and Z and Y on the other are equivalent for the person in the respective emotional contexts. So I may be happy about my spending 100
bucks for a nice shirt, but not about the seller's receiving a sum of 100 bucks from me in exchange for the shirt. Now it is possible that I consider my spending 100 bucks for the shirt and the exchange of the money from my possession to the seller to have the same feel for me. Likewise, I also consider the exchange of the money from my possession to the seller and the seller's receiving a sum of 100 bucks from me to have the same feel in my affective experience. If this is the case, I may find that my spending 100 bucks for the shirt and the seller's receiving a sum of 100 bucks from me do not produce the same feel of happiness. What is important for our purposes is that the relevant minuscule differences in sensory perception or in affective phenomenology may be felt or discerned but cannot be linguistically expressed. It is these tiny differences that are not and cannot be expressed in language, any more than differences in neuronal firings or differences in filtration rates in the two kidneys can be subjectively felt and thereby expressed by us. Although one may well concoct wildly fantastic stories about these otherwise undetectable differences in our sensation, perception and cognition, this does not thereby make these tiny differences expressible in natural language, for expressivity in natural language requires at least the mental registration of the features or aspects of the entity or event or abstraction in question but not confabulatory or imaginatively crafted stories of felt emotions. It may also be observed that the features or aspects of neuronal firings or of differences in filtration rates of kidneys are not transparent to any form of mental registration at all. Similarly, many bodily events and changes within the internal organs of our body are not transparent to the mind.
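The additive sub-threshold differences at issue in both the perceptual and the affective case can be sketched as a simple threshold model of indistinguishability. This is a minimal illustration, not a claim from the text: the scalar "intensity" values, the fixed threshold, and the function name are all invented for the sake of the example.

```python
# A toy threshold model of non-transitive indistinguishability.
# Assumption: a perceiver registers a difference between two stimuli
# only when it exceeds a fixed threshold; the numbers are illustrative.

THRESHOLD = 1.0  # smallest difference the perceiver can detect

def indistinguishable(x: float, y: float) -> bool:
    """Two stimuli count as 'the same' if their difference is sub-threshold."""
    return abs(x - y) < THRESHOLD

# Hypothetical microscopic roughness values for the three pictures a, b, c.
a, b, c = 0.0, 0.7, 1.4

print(indistinguishable(a, b))  # True: 0.7 is below threshold
print(indistinguishable(b, c))  # True: 0.7 is below threshold
print(indistinguishable(a, c))  # False: the two sub-threshold gaps add up to 1.4
```

The same structure models the affective case (X and Z feel alike, Z and Y feel alike, yet X and Y do not). The relation defined here is reflexive and symmetric but not transitive, so it is not an equivalence relation, which is precisely the mathematical point behind the examples.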
The commonality that underpins such bodily events and minuscule differences in sensory perception or in affective phenomenology has to do with the ineffability of such events and sensory-cognitive differences. But the fundamental difference between such bodily events and the multifarious minuscule differences in sensory perception or in affective phenomenology lies in the fact that the bodily events are not cognizable, while the multifarious minuscule differences in sensory perception or in affective phenomenology may be cognizable but are not expressible in natural language. Perhaps this has partly to do with the opacity in the specific mapping performed between the sensory, cognitive and affective systems and the linguistic system. Imperceptible bodily events which are not cognizable do not belong in the domain of sensory-cognitive faculties in the first place. If we are geared up to accept that many such potentially expressible features and aspects of the specific mapping between the sensory, cognitive and affective systems and the linguistic system are ultimately ineffable, we can appreciate that many otherwise possible but not linguistically expressible mental
structures derived from the specific mapping between the sensory, cognitive and affective systems and the linguistic system exist in humans. Significantly, if they exist in humans, there is no reason to suppose that other animals, machines and plants cannot have a lot more mental structures of such types. Let's think over this for a while. Many more mental structures of various types plausibly exist in other animals, machines and plants, only if the relevant features and aspects of the inner and outer worlds registered within the sensory, cognitive and affective systems that are detected, appraised and perhaps felt by other animals, or recognized by machines and plants, are distilled into mental structures which can remain intrinsically ineffable. In other words, there is nothing that can prevent relevant features and aspects of the perceptions, sensations and possible forms of mentation mediated by the sensory, cognitive and affective systems of other animals, machines and plants from constructing mental structures that can potentially be expressed in some symbolic medium but are not actually expressible. Let's consider, for example, the case of animals such as birds, dogs, cats, reptiles as well as other primates. The sensory, cognitive and affective systems of such animals are attuned to food and its sources, features of the surrounding environment, conspecifics, members of other species including their prey etc. Many relevant aspects and features of perceptions, sensations and possible forms of mentation the sensory, cognitive and affective systems of such animals generate and mediate are simply not expressible because no plausible mapping between such perceptions, sensations and possible forms of mentation and some cognitive system that can externalize them exists.
However, animal vocalizations in birds, dogs, cats and reptiles as well as other primates can externalize at least some of these perceptions, sensations and possible forms of mentation, especially those indicating immediate excitement, fear, grief, joy and playfulness. But it is not clear whether such vocalizations can be part of a mental faculty evincing a cognitive dimension rather than just a plainly motor component, although it is not necessary that perceptions, sensations and possible forms of mentation in animals be mapped onto a full-blown symbolic cognitive system. In a sense, this irrefutably holds true for plants, which have no inherent mental faculty either for vocalizations or for symbolic expressions, and in part for machines, which can learn possible symbolic rules for expressions, unless, of course, one interprets any symbols on the display screen to be the outputs of the mapping of possible forms of mentation onto a symbolic system.5

5 In a digital computer, such a symbolic system may consist of compilers, the machine language and the programming languages.


Overall, it is clear that mental structures can be realized in various forms in diverse kinds of organisms, systems and plants, and may also remain ineffable, irrespective of whether these structures are derived from perceptions, sensations and possible forms of mentation or from conceptualizations constructed and/or shaped by linguistic expressions. And if mental structures by their very nature are such that they can be dissociated from and independent of a symbolic or linguistic system for expressivity, we have reason to contend that mental structures are not intrinsically a part of any symbolic or linguistic system for expressivity. But from this we cannot immediately jump to the conclusion that no symbolic or linguistic system has any intrinsic connection to mental structures, since any symbolic or linguistic system, whether biologically grounded or non-biologically instantiated, is parasitic on minds, meanings and interpretation. Note that any symbolic or linguistic system has certain symbols that have meanings which are what they are only by virtue of interpretations assigned to the symbols. This assignment may be mental or parasitic on mental processes. The distinction between mental states and the content of mental states touched upon in Chapter 1 is significant here. In a biologically grounded symbolic or linguistic system the assignment of meanings to symbols and/or expressions relies on the mental states in question, whereas in a non-biologically instantiated symbolic or linguistic system this depends on the mental contents derived from many minds or constructed at an inter-subjective level of interpretation. Therefore, a more adequate way of exploring the structure of possible minds by examining natural language structures would be to approach this question by probing into mental structures through linguistic structures rather than into linguistic structures through mental structures.
The reason is that once the mental structures that can be tapped through natural language are deducted from the collection of all possible mental structures, what remains is precisely the set of mental structures that cannot possibly be tapped through natural language. Let's see how we can go about this task.

2.5 Summary

In this chapter, four different ways of looking at the relationship between language and the foundations of mind have been presented. Each way has been carefully evaluated and reviewed to see how it fits the present mission. No single way of tracing the linguistic foundations of mind has turned out to be both sufficient and necessary for the present goal, because each is fraught with a number of formidably crippling problems and dilemmas. Suffice it to say, the current question has been further sharpened and refined by way of close scrutiny of these problems and dilemmas. This will help us pursue the goal we set out in Chapter 1. An important emerging thread connecting the present goal to the relationship between language and cognition is that this relationship is peppered with far more confounding intricacies than is usually assumed, and these intricacies have opened up new vistas for our goal. As we fine-tune the proposal in the next chapter (Chapter 3), we can clarify how the proposal this book advances can be fleshed out with connections germane to various fundamental issues in ai in particular and in cognitive science in general. The most important challenge will be to develop the proposal in a manner that avoids the dilemmas and conundrums we have so far encountered while examining various ways of looking into the linguistic foundations of mind. This is something we shall now take up.

chapter 3

Possible Minds from Natural Language

Natural language structures are as varied as the points of variation linguistic phenomena across and within languages permit. Linguistic structures are not simply representations that carry meanings, and hence variations in linguistic structures are not simply variations in meanings. Rather, linguistic structures are expressions having complexly derived correspondences between the components of a linguistic system (syntax, semantics, morphology and phonology), regardless of how or where (in the individual brain or in some inter-subjective space) this linguistic system is realized. Although linguistic structures do not always admit of fully correlated structures of forms and meanings (as in 'He looks at flying birds with his binoculars', which is ambiguous between the person looking at birds that fly with binoculars and the person using binoculars for the act of looking at flying birds), linguistic structures are so structured as to offer a glimpse into the mental structures which we get a handle on by understanding the meanings expressed in such structures. Note that the distinction between the outputs of the linguistic system and the linguistic system in itself is crucial here. Uncovering the mental structures beneath linguistic structures is not so much about digging into the linguistic system for mental structures as about analyzing linguistic structures which project a window onto the structures mentally available for linguistic expressivity. Thus it is quite possible that what the linguistic system as a cognitive faculty reveals about mental structures may be independent of what linguistic structures as outputs of the cognitive faculty of language reveal. Mental structures so understood are structures only insofar as they are not just a kind of nebulous and amorphous stuff present in the mind which gets activated when linguistic expressions grab it. Plainly, what is in the mind may look nebulous from outside.
This is the outsider's perspective. But in a way this springs from the insider's illusion: we cannot actually tell what we really have in the mind from what we believe we have in the mind as we describe it. This insider's illusion is quite undetectably projected onto the outsiders' minds. Most importantly, the assumption in the present case is that whatever becomes linguistically expressed has some form which can be deduced from the structural details of linguistic expressions. The form of mental structures, at least in part, derives from the form of the linguistic expressions that express the mental structures in question as far as the intrinsic character of mental structures is concerned. But this does not entail that the intrinsic form of mental structures is bound to borrow or parallel the form of linguistic expressions, regardless of whether or not there exist linguistic expressions that express a range of mental structures. As discussed in Chapter 2, language-based conceptualizations may constitute certain mental structures, but they are not all there is in the whole gamut of mental structures. In other words, many mental structures that may or may not be linguistically expressed are not language-like. It is fallacious to assume that mental structures are themselves language-like, on the grounds that language-like mental structures must then have to be interpreted to have meanings. This possibility is nonsensical, as has been argued in the previous two chapters. Even if we simply grant that mental structures, in virtue of being expressed in natural language, inherit at least some aspects and features of linguistic constructions, this does not seem to make a strong case for the intrinsically linguistic character of mental structures. After all, mental structures may be expressed in maps, pictures, paintings, codes, diagrams and scores of other things. Does this fact make one inclined to say that mental structures, in virtue of being expressed in maps, pictures, paintings, codes, diagrams etc., inherit at least some features of the respective things in which mental structures are expressed? This is certainly odd. Another way of reinforcing the idea that mental structures have an intrinsically linguistic character is to argue that aspects of natural language semantics construct and constitute mental structures, and hence mental structures incorporate aspects of linguistic semantics. This is a strong argument. Indeed, many aspects of linguistic semantics can have a reflex in mental structures that are expressed in natural languages.

© koninklijke brill nv, leiden, 2017 | doi 10.1163/9789004344204_004
For example, the verb 'khaoa' (meaning 'to eat') in Bengali is used for both eating and drinking, while separate verbs for eating and drinking are used in English. This is an aspect of language-specific linguistic semantics. In addition, natural language semantics has some invariant features that prevent a verb in any natural language that has the meaning of flying, for instance, from incorporating the aerodynamic properties and/or details of the act of flying. That is, many properties of flying are not encoded in any verb of a language that expresses the meaning of flying. Many aspects of non-linguistic semantics are thus said to be inferred from the appropriate encoding of linguistic semantics. But this is not simply a matter of reading enriched meanings off from linguistic constructions through pragmatic aspects of linguistic structures. Rather, it involves the employment of other mental structures that complement those mental structures formed by means of linguistic semantics. Consider, for example, the difference between the sets of sentences below.


(10) a. We gave a jacket to the man sitting in the corner.
     b. We gave the man sitting in the corner a jacket.

(11) a. We donated/presented a jacket to the man sitting in the corner.
     b. *We donated/presented the man sitting in the corner a jacket.

It can be observed that the mental structures underlying both (10) and (11) may be similar. Furthermore, the mental structures that are formed by means of linguistic semantics may warrant that the location and time of any act of giving need to be specified in these mental structures. However, what we see here is that the linguistic semantics implicit in these constructions is in conflict with what can be potentially encoded as part of the mental structures that may underpin these constructions. In other words, the potential mental structures that may underpin these constructions turn out to be distinct from those that are actually encoded in the linguistic semantics of (10–11), and no manipulation by the former can make the sentence in (11b) grammatical. This shows that mental structures need not be identical to those that are formed by means of linguistic semantics. Had this been the case, we would not have found a tension between the potential mental structures that may underpin the constructions in (10–11) and the ones that are actually manifested in (10–11). The long and short of it is that the mental structures formed by way of linguistic semantics actually make both (10) and (11) viable, while some potential mental structures do not even find a way into the sets of linguistic expressions in (10–11). Overall, it is evident that mental structures have a lot in themselves that can be marshaled for the exploration of what can be said to exist in various kinds of possible minds of other animals, machines and plants, for mental structures accepted as such may have nothing whatever to do with linguistic expressions, although a subset of possible mental structures can be structured by linguistic semantics. In fact, this goes all the way towards a sort of shaping of possible mental structures by other forms of non-linguistic natural semantics that can be manifest in other animals, organisms and plants. We shall explore this further as we proceed.

3.1 Linguistic Structures and Mental Structures

We may now probe into the form of mental structures as they emanate from diverse classes of linguistic constructions within and across languages. Thus a range of linguistic structures covering various linguistic phenomena such as existential constructions, clausal embedding, anaphora, conditionals, counterfactuals, coordination, ellipsis, modality, negation, tense/aspect and information structure will be examined in order to see what they uncover about the mental structures beneath the linguistic expressions that encode aspects of the relevant mental structures. The characterization of mental structures underlying linguistic expressions has to be such that it permits its extrapolation to a wider spectrum of entities encompassing other animals, machines and plants. This characterization can serve at least three goals: (i) it must not ride on humanly unique cognitive structures/representations; (ii) it must not introduce any species-specific or system-specific bias; and (iii) it must allow mental structures to have cognitive plausibility. In order to achieve these goals, we need to come up with a viable formalism that can express the requirements of this characterization. Let's see how we can undertake to accomplish this task. First of all, we require some discrete structures that can constitute the minimal building blocks of the mental structures that can be uncovered from linguistic structures. What may fill this role can be sourced from the lexicon of a language. The lexicon of a natural language contains all lexical items, that is, words. In a relevant sense, the lexicon of any natural language is the stock of idiosyncratic and irregular pieces of linguistic information that contribute to the building of linguistic structures (Bloomfield 1933). However, this does not necessarily mean that the lexicon is not rule-governed. Quite a number of morphological, phonological and syntactic regularities can be found in the lexicon of a language. Take idioms such as 'face the music' and 'bite the dust', which are whole chunks of linguistic expressions that are syntactically constructed. They are not semantically composed, in the sense that the meanings of 'bite' and 'the dust', for example, do not combine to make up the meaning of the verb phrase 'bite the dust'.
Even if we assume that such idioms are listed in the lexicon, they comply with the syntactic rules of English. Thus we do not have idioms like 'the face music' instead of 'face the music' or 'dust the bite' instead of 'bite the dust'. Likewise, the past and past participle forms of verbs such as 'sing', 'wring' and 'ring' have to be listed as such in the lexicon, and we cannot help but observe a rule-like commonality in the set of these forms. This also applies to verbs like 'breed' and 'bleed'. In a significant sense, there is no denying that the lexicon is the least abstract system within the complex ensemble of linguistic systems including syntax, semantics, morphology and phonology. Significantly, the lexicon of a natural language is closest to human cultural conventions, contingencies of language use and the outside world. It can be noted that a user of a natural language may know thousands of words of that language, but the entire lexicon of a language cannot be said to be located within the confines of that person's brain. Rather, the lexicon of a language resides in the collective inter-subjective space of a linguistic community (see for related discussion Mondal (2012)). At this juncture, it seems as if the discrete structures that the lexicon of a language comprises are actually grounded in a humanly defined linguistic system or in the humanly manifest inter-subjective space of a linguistic community. And if this is the case, it appears unclear how lexical items can help characterize mental structures that can be applied to other animals, machines and even plants. This apparent problem disappears once we appraise the pivotal aspects of the present proposal, in which the relationship between the lexical items of a language and mental structures is viewed in a new light. On the present proposal, the lexical items of a language, which can be conceived of as the building blocks of substance-independent mental structures, are indeed humanly manifest. The semantic or conceptual contents of lexical items are here considered to be part of the contents of substance-independent mental structures, while the lexicon of a language contains and specifies disparate pieces of information that incorporate and integrate phonological, syntactic, semantic and possibly pragmatic properties. If so, there is no escaping the fact that the discrete structures constituting the minimal building blocks of mental structures are humanly realized in communities of language users. Additionally, it needs to be emphasized that the semantic or conceptual contents of lexical items may be identified with the conceptual contents in the minds of language users or can be public and thus inter-subjective. This underpins the humanly realized character of the lexical items as well. Hence the minimal building blocks of substance-independent or organism-independent mental structures are humanly realized, but these mental structures in themselves are not.
This is in stark contrast with the biological picture, in which the common molecules of proteins are shared across a wide gamut of life forms including animals, various organisms and plants, but the larger organism-specific emergent structures built out of these common protein molecules are unique to particular organisms. Thus, for example, a particular venom is unique to a particular species of snake, spider or jellyfish. Similarly, cells are the shared biological building blocks of various types of organisms, even though the specific biological structures composed of cells are unique to specific organisms. For example, wings are especially found in birds; peacocks have specifically colorful tail feathers; octopuses have unique nervous systems localized both in the brain and in their eight arms. If we wish to take into consideration non-biological systems, common material elements such as carbon are shared by humans, animals, plants and machines, although the specific larger carbon-based structures found in humans, other animals, plants or machines are unique in each case. Therefore, the relation between the directionality of composition of larger structures from smaller elements or from the minimal building blocks found in nature, and the growth or formation of unique structures, is reversed in the case of cognition. That is, when we look at mental structures, it is the mental structures that can be shared among humans, other animals, plants and machines, while the building blocks of these mental structures can be unique in particular cases. The directionality of composition of larger structures from smaller elements and the constitution of unique structures are positioned in opposite orientations in the realm of cognition. This is especially contrary to the familiar hypothesis that lexical items are the atomic elements that can be shared among humans, other animals and perhaps machines, whereas the structures built out of lexical items crossing the boundary between lexical items and other functional items (for example, prepositions) are unique to humans (see Miyagawa et al. 2014). It needs to be made clear that lexical items, when taken to be atomic elements as part of a formal system, are standardly deemed to be conceptually empty minimal items in their formal characterization. This allows lexical items to be shared among humans, other animals and organisms, but keeps the structures built from them from being so shared. On the present proposal, the building blocks of substance-independent mental structures are not devoid of conceptual or semantic contents, since mental structures carry contents. And if mental structures carry semantic contents, the contents must come from somewhere. The conceptual or semantic contents of lexical items are in fact the source of the contents of the mental structures to be formally characterized shortly below. Thus, on the present proposal, the structural forms of lexical items have come to express mental structures, some subsets of which are shared by humans, plants, other animals and organisms, while others are unique to humans.
This indicates that the shared mental structures are more primary and may have evolved before the unique forms of lexical items did. Although it seems counterintuitive to suppose that the minimal building blocks of larger built structures (mental structures, in the present context) can emerge later than the built structures themselves, it may be noted that mental structures will be characterized in terms of the meanings or contents, rather than the forms, of lexical items. So it is not the meanings or contents of lexical items that appeared later; rather, it is the forms of the humanly realized lexical items which came to express mental structures that may have appeared later. Once this is fully appreciated, the apparent antinomy indicated above immediately evaporates. Now we shall devote our attention to formulating the characterization of mental structures in terms of the meaning relations to be specified below.




Let's suppose that the lexicon of a natural language is a set Lex = {LI1, …, LIn}, where the indices 1…n in the set Lex are indices of the lexical items or lexico-morphological forms in the lexicon of a language.1 The indices do not necessarily impose any order on the lexical items in Lex, which are, by definition, unordered. Rather, the indices (help) individuate the exact lexical items involved in the construction of any mental structures. Take the case of 'bat', for example. It can mean an implement with a handle for playing cricket or the nocturnal mammal that flies. In such a case, the index of the lexical item involved in a given mental structure can tell us which meaning figures in that mental structure. For the sake of simplicity, these indices will be omitted in the formulation of mental structures, on condition that they are assumed to be present. Since the lexical items in Lex are not just placeholders, the information contained in or generated from each of the lexical items in Lex may be rooted in phonological, syntactic, morphological and semantic features. This is certainly not to deny that such information may also have links to perception, actions, language use and properties of the outside world. Many meaning relations R1, …, Rk ⊆ Lex × Lex, where k is an arbitrary number, can be constructed by taking elements from the set Lex. Now we propose that mental structures are to be characterized in terms of relations drawn from among the infinitely many relations defined on {Lex, R1, …, Rk}. Hence these infinitely many relations have the form R1, …, Rk, Rk+1, …, R∞, where Rk+1, …, R∞ are higher-order relations. Thus R1, …, Rk are constructed from the Cartesian product of Lex with itself. An example can make this clearer.
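For readers who find a computational rendering helpful, the set-theoretic machinery just introduced can be given a concrete, if deliberately simplistic, sketch. The following Python fragment is my own illustration, not the book's notation: the LexItem structure, the example items and their sense glosses are all assumptions. It models Lex as a set of indexed lexical items and a first-order meaning relation as a subset of Lex × Lex, with the index doing the disambiguating work described above for 'bat'.

```python
# A toy rendering (my own, for illustration only) of Lex and a first-order
# meaning relation R1 ⊆ Lex × Lex.
from dataclasses import dataclass

@dataclass(frozen=True)
class LexItem:
    index: int   # individuates the item, e.g. the two senses of 'bat'
    form: str    # the lexico-morphological form
    sense: str   # a gloss standing in for the item's conceptual content

# A tiny fragment of Lex; the indices keep the two senses of 'bat' apart.
bat_implement = LexItem(1, "bat", "implement with a handle for playing cricket")
bat_mammal    = LexItem(2, "bat", "nocturnal flying mammal")
wooden        = LexItem(3, "wooden", "made of wood")

Lex = {bat_implement, bat_mammal, wooden}

# A first-order meaning relation: the pair picks out the cricket-implement
# sense of 'bat' via its index, not the mammal sense.
R1 = {(wooden, bat_implement)}

# R1 is indeed a subset of the Cartesian product Lex × Lex.
assert all(x in Lex and y in Lex for (x, y) in R1)
```

Nothing hinges on the particular data structure; the point is only that indices, not forms, individuate the items entering a meaning relation.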
For instance, if we want to construct the mental structures represented in the phrase 'an unusually brilliant scientist' from the lexicon of English, the lexical items 'an', 'unusually', 'brilliant' and 'scientist' from Lex can be related to one another in terms of meaning relations (among the four lexical items) which come to individuate mental structures. Thus one meaning relation individuating a mental structure obtains between 'an' and 'scientist', and another holds between 'unusually' and 'brilliant'. Additionally, a relation individuating another mental structure can be constructed by pulling in 'scientist' along with the meaning relation constructed with 'unusually' and 'brilliant'. This is a second-order meaning relation which individuates a complex mental structure. Finally, we can also construct a meaning relation between 'an' and the already constructed meaning relation for 'unusually brilliant scientist', which individuates a wholly different complex mental structure. Each of these relations Ri will have the form Ri = {(x1, y1), …, (xn, yn)}, where n ≥ 1 and either x or y can itself be a relation. It needs to be stressed that the defining of meaning relations on Lex does not have anything to do, in a direct way, with the way syntactic relations are defined on the hierarchy of a tree, although in certain cases a meaning relation may well correspond to the way lexical items are syntactically combined. For example, the meaning relation between 'an' and 'scientist' does not form any syntactically defined constituent, yet it constitutes a meaning relation. Crucially, the present formulation of meaning relations, in having nothing to do with compositionality per se, is quite distinct from any compositional model of meaning, in that meaning relations are here conceptually constrained, regardless of whether certain features of words relevant to their meanings match or not. Hence 'an' and 'unusually' in the phrase 'an unusually brilliant scientist' do not form a meaning relation precisely because this relation is conceptually vacuous. One may now wonder what sense one can make of the notion of a relation being conceptually vacuous. One way of determining whether or not some relation is conceptually vacuous can be offered.

1 It appears that having a set of lexical items may create a problem for languages like Chinese, Japanese and Korean, since there does not exist in these languages any one-to-one correspondence between logographic characters and words, and such characters, often equivalent to single morphemes, seamlessly come together to form words and phrases. However, what matters for us is not how the characters in such languages can be defined to form words; rather, the possibility of having discrete word-like entities by imposing a certain organization, conceptual or otherwise, on the string of characters is all that matters. This consideration, with a slight modification, also applies to languages like Turkish, in which a word can potentially be very long due to the chaining of affixes in a sequence. Given that the possibilities of combination of such affixes are myriad, we may hold that the possibility of having discrete lexico-morphological forms, including both bound and free forms, with identifiable and isolable meanings is all that concerns us in these languages.
In order to check this, one may look into the logic of the expressions concerned. In the present case, it may be noted that the determiner 'an' specifies the content of a nominal expression, whereas an adverbial such as 'unusually' modifies an expression (an adjective or a sentence) and thus reduces the cardinality of the set of entities characterized by the expression concerned (for example, the set of individuals who are brilliant must be greater than the set of individuals who are unusually brilliant). When we form a relation between these two words, the resulting relation does not lead to a harmony in the logical structures of the words involved. Since the logical structures of words can go beyond what the syntax of a language permits, meaning relations are ultimately grounded in logically possible and yet conceptually constrained relations between words. Similarly, in the sentence 'Rini knows that the man is sick' we cannot make a conceptually significant relation between 'that' and 'the', since their logical structures do not go on to make up a conceptually viable relation.
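The nesting of relations sketched above, where a constructed relation can itself enter into a higher-order relation, can likewise be rendered computationally. The following fragment is again my own illustration under stated assumptions: the strings stand proxy for lexical items, and the vacuity check is a crude stand-in for the logical-compatibility constraint just discussed, not an implementation of it.

```python
# A toy rendering (my own) of the first- and higher-order meaning relations
# for the phrase 'an unusually brilliant scientist'.
an, unusually, brilliant, scientist = "an", "unusually", "brilliant", "scientist"

# First-order meaning relations between lexical items.
R1 = frozenset({(an, scientist)})          # determiner specifies the nominal
R2 = frozenset({(unusually, brilliant)})   # adverb modifies the adjective

# Higher-order relations: a relation may stand in a relation with an item
# or with another relation, as in Rk+1, ..., R∞ above.
R3 = frozenset({(R2, scientist)})          # 'unusually brilliant' + 'scientist'
R4 = frozenset({(an, R3)})                 # 'an' + 'unusually brilliant scientist'

# A stand-in for conceptual vacuity: the pair ('an', 'unusually') is listed as
# vacuous because their logical structures do not harmonize.
VACUOUS = {frozenset({an, unusually})}

def viable(x, y):
    """Crude viability check: a pair is viable unless listed as vacuous."""
    return frozenset({x, y}) not in VACUOUS

assert viable(an, scientist)
assert not viable(an, unusually)
```

The use of frozensets merely makes relations hashable so that they can themselves be members of higher-order relations; any real criterion of conceptual viability would of course have to be stated over the logical structures of the words, as in the text.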


But this must not be taken to mean that what is logically possible or viable in a grammatical context is always conceptually possible. For instance, in the sentence 'The sick need help' no conceptually viable relation can be formed between 'the' and 'sick' even though English syntax allows them to form a noun phrase. While this sentence is syntactically legitimate, additional conceptual resources are pulled in to make sense of the noun phrase 'the sick', which refers to the set or collection of sick individuals. Since meaning relations do not supervene on the syntax of a language, 'the' and 'sick' do not form a conceptually viable relation precisely because the nominal specification role of 'the' does not mesh with the nominal-concept-constraining expression 'sick'. If we at all need to form a meaning relation for the noun phrase 'the sick', a non-conceptual relation which is not a meaning relation has to be embedded within a meaning relation. That is, we shall have something like Ri = {(X, (x, y))}, where Ri is a meaning relation with an arbitrary number i, X denotes the meaning of the whole noun phrase 'the sick', and x, y designate the lexical items 'the' and 'sick'. Here the pair (x, y) must be part of some non-conceptual relation which is not a meaning relation but a syntactic relation. Therefore, meaning relations are those relations that constitute conceptually viable elaborations or associations of linguistic contents expressed in words. Conceptually viable relations are those that either conform to logical compatibility or enrich semantic links by means of conceptual combinations. In other words, meaning relations are those that are instantiated by conceptually constrained associations of a set of given words. Thus the present formulation of mental structures does not make reference to the syntactic structures of natural language, which are unquestionably human and cannot be expected to be manifest in other organisms, plants and even machines.
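The embedding of a non-conceptual relation within a meaning relation, as in Ri = {(X, (x, y))} for 'the sick', can be rendered in the same computational sketch. The fragment below is my own illustration; in particular, the English glosses standing in for the whole-phrase meanings X are assumptions made for readability, not the book's analyses. The same wrapping scheme anticipates the treatment of idioms and expletive constructions developed in the discussion that follows.

```python
# A toy rendering (my own) of a meaning relation that embeds a non-conceptual
# (syntactic) relation over lexical items, paired with the whole meaning X.

# 'the sick': no conceptually viable relation holds between the parts, so the
# phrase meaning X_sick is paired with the purely syntactic pair of items.
X_sick = "the set or collection of sick individuals"   # gloss assumed by me
R_i = {(X_sick, ("the", "sick"))}

# An idiom is syntactically composed but not semantically composed, so its
# idiomatic meaning is likewise paired with the tuple of lexical items.
X_idiom = "idiomatic meaning of 'face the music'"      # gloss left schematic
R_j = {(X_idiom, ("face", "the", "music"))}

# Expletive subjects get the same treatment: 'It rains'.
X_rain = "meaning of the whole sentence 'It rains'"    # gloss left schematic
R_m = {(X_rain, ("it", "rains"))}

# Each relation holds a single pair: (whole meaning, syntactic parts).
for rel in (R_i, R_j, R_m):
    (whole, parts), = rel
    assert isinstance(whole, str) and isinstance(parts, tuple)
```

The inner tuple is deliberately not a meaning relation: it records only which lexical items the syntax combined, while the conceptual content lives entirely in X.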
Importantly, it can also be observed that the present notion of meaning relations is way different from the relations that can be constructed, as in model-theoretic syntax, for nodes in a tree (such as precedence or dominance relations) and for categories such as np (Noun Phrase), vp (Verb Phrase), S (Sentence) etc. which are properties of nodes (see for details, Pullum (2013)). In fact, the relations R1, …, Rk, Rk+1, …, R∞ encompass many dimensions (such as string adjacency, precedence, dominance, parent-of relations etc.) in terms of which linguistic constructions can be characterized. Besides, cases of idioms which seem to be built on syntactic combinations without any corresponding semantic composition can be cashed out in terms of a meaning relation that incorporates the relevant lexical items and builds the relation. So for an idiom ‘face the music’, for example, the lexical items ‘face’, ‘the’ and ‘music’ will be brought forward in order to construct a binary relation, say, Rj = {(X, (x, y, z))}, when X is the meaning of the whole idiom and x, y, z denote the individual­ lexical items (‘face’, ‘the’ and ‘music’). Here the relation Rj as a whole designates

Possible Minds from Natural Language

77

a mental structure conveyed by the idiom ‘face the music’. Cases having syntactic items with no viable contribution to the meaning of a phrase/sentence can be treated in a similar manner, insofar as a sentence such as ‘It rains’ can be assigned a meaning relation which can be written as Rm = {(X, (x, y))}, where X denotes the meaning of the whole sentence, and x, y designate the lexical items. The mental structure that the sentence ‘It rains’ signifies can thus be designated by means of the meaning relation Rm. It should also be noted that the relevant meaning relations are not simply reducible to dependency relations, as in Dependency Grammar (Tesnière 1959). For example, for a sentence like ‘He cannot hide his unfeigned disgust with such type of films’ a meaning relation involving ‘he’ and a relation constructed for ‘his unfeigned disgust with such type of films’, or a meaning relation comprising ‘he’ and a relation for ‘such type of films’ can be constructed. It is clear that these are not dependency relations because dependency relations themselves in many ways ride on constituency relations. That is, ‘he’ and ‘his unfeigned disgust with such type of films’ do not form a linguistically viable constituent, nor do ‘he’ and ‘such type of films’,2 although either of these pairs can form a meaning relation individuating a mental structure. Therefore, meaning relations on the present proposal go beyond the encapsulation of continuous and discontinuous linguistic structures, both of which can be handled in terms of dependency relations.3 In fact, any relation in R1, …, Rk can be practically constructed by means of a relevant

2 Note that both ‘his unfeigned disgust with such type of films’ and ‘such type of films’ are (nominal) constituents, in that we can replace either of them with ‘it’, for example, and then say ‘He cannot hide it’ or even ‘He cannot hide his unfeigned disgust with it’. We can also ask Wh-questions by saying ‘What cannot he hide?’ or ‘What cannot he hide his unfeigned disgust with?’ These are some of the well-known diagnostics for the detection of linguistically possible constituents.
3 A discontinuous linguistic structure such as the following from Warlpiri, an Australian language, can be handled in terms of dependency relations.
wawirri kapi-rna panti-rni yalumpu
kangaroo aux-sub spear-nonpast that
‘I will spear that kangaroo.’ (Speas 1990)
[aux = auxiliary, sub = subject]
It may be observed that the demonstrative ‘yalumpu’ is actually a part of a noun phrase meaning ‘that kangaroo’, but ‘wawirri’ and ‘yalumpu’ are located far apart from one another. In terms of dependency relations, ‘yalumpu’ is a dependent of ‘wawirri’, regardless of whether they appear continuously (as in ‘that kangaroo’ in English) or discontinuously. All that matters to a dependency relation is an asymmetry between the dependent and the item it is a dependent of, since an item can have more than one dependent or even no dependents. For instance, a noun phrase ‘John’ has no dependent.

78

chapter 3

formulation of a relation on a subset of Lex, thereby yielding the desired flexibility in defining a wide range of possible mental structures individuated by such relations. Equipped with this flexibility, we may now examine a range of linguistic structures across and within languages to check how the present formulation can be deployed to uncover mental structures of various kinds and forms of complexity. The viability of the current formulation can then be tested against the description of a heterogeneous complex of behaviors, abilities and capabilities in an ensemble of non-human organisms. Readers who wish to get down to the nitty-gritty of mental structures applying to a diverse range of non-human organisms without bothering about the extraction of mental structures from various linguistic phenomena can move directly to Section 3.2. Before we proceed further, a clarification is in order. The indices that serve to token different meaning relations must not be taken to be constant across the contexts of discussion of different linguistic phenomena or of various mental structures of different organisms considered below—they have to be interpreted anew each time a new linguistic phenomenon or a group of organisms is introduced. We shall first look at existential constructions. Existential constructions express something about the existence of someone or something. They involve a unique syntactic form which may be associated with specialized morphological structures as well. Most existential constructions involve an expletive such as ‘there’ or ‘it’ (which is semantically empty) and a pivot, the main element around which the proposition of an existential construction is structured. Below are some examples from English.
(12) There are universities with campuses around the world.
(13) There is a man sleeping on the street.
As can be observed in (12–13) above, existential constructions can have just a pivot (as in 12) or a pivot with accompanying material that modifies the pivot, called a coda (as in 13). The expressions ‘universities with campuses around the world’ in (12) and ‘a man’ in (13) are the pivots, and the expression ‘sleeping on the street’ is the coda in (13). From this it appears that codas are optional and expletives are obligatory, but in fact expletives as well as existential predicates (such as copulas like ‘be’ verbs or ‘have’ verbs or even the verb ‘exist’) are optional across languages. Let’s also have a look at the following examples.
(14) hay un hombre en la habitación
is a man in the room
‘There is a man in the room.’ (Rodríguez-Mondoñedo 2005)
(15) may malaki-ng disyerto sa Australya.
exist big desert loc Australia
‘There is a big desert in Australia.’ (Sabbagh 2009)
[loc = locative marker]
(16) he aitua i runga i te huarahi i te ata nei
a accident at top at the road in the morning this
‘There was an accident on this road this morning.’ (Bauer 1993)
The Spanish example in (14) and the Tagalog existential construction in (15) show that expletives are not required. On the other hand, the existential construction from Maori in (16) shows that existential predicates are not even necessary. Another crucial property of existential constructions is that the pivot has to be lacking in definiteness, and hence definite determiners such as ‘the’ do not go with the pivot. This allows us to see why it is not possible to say either ‘There is the man in the room’ or even ‘There are all people in the room’ (see Milsark 1977). The relevant mental structures that can be found in existential constructions can be specified by linking the pivot either with the coda or with some existential predicate in terms of appropriate meaning relations. Thus, for example, a mental structure from the example in (16) can be specified by means of a meaning relation R3 = {(R1, R2)}, where R1 = {(he, aitua)} and R2 is a complex higher-order relation for ‘i runga i te huarahi’. Furthermore, a mental structure from the sentence (12) can also be specified by means of the relation R5 = {(are, R4)}, where R4 is a complex higher-order relation for ‘universities with campuses around the world’. It may be noted that a mental structure that is constructed by incorporating an expletive is essentially human in character since expletives are irreducibly syntactic devices, whereas mental structures that can be specified by incorporating either the pivot or the pivot along with the coda(s) may not be specifically human.
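The relations just specified for existential constructions can be given a toy rendering in Python. The encoding below is my own illustrative convenience (frozensets stand in for relations so that one relation can nest inside another; the capitalized labels are stubs for the complex higher-order relations, not part of the proposal):

```python
# Meaning relations for the existential constructions discussed above.
# frozenset is used because a relation must be hashable to occur as a
# member of another relation.

# For the Maori example in (16): R1 = {(he, aitua)} pairs the article
# with the pivot; R2 stands in for the complex higher-order relation
# for the locative phrase (stubbed here with a placeholder label).
R1 = frozenset({("he", "aitua")})
R2 = frozenset({("LOCATIVE-PHRASE",)})  # stub for the locative material
R3 = frozenset({(R1, R2)})

# For (12): R5 = {(are, R4)} incorporates the existential predicate
# 'are', an irreducibly syntactic device; R4 stands in for the
# higher-order relation for the pivot noun phrase.
R4 = frozenset({("PIVOT-PHRASE",)})     # stub for the pivot
R5 = frozenset({("are", R4)})

# R3 is built only from the pivot and further descriptive material,
# whereas R5 incorporates an existential predicate -- the contrast
# between structures that need not be specifically human and those
# that are.
assert (R1, R2) in R3
assert ("are", R4) in R5
```

Nothing hinges on the particular container types; what matters is that R5, unlike R3, has a syntactic device as one of its coordinates.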
This distinction will turn out to be crucial for the present purpose as we proceed to explore the structure of other possible minds. There is an important linguistic phenomenon that bestows uniqueness on natural language. It is clausal embedding. Clausal embedding allows for the iteration of a sentence by way of the embedding of clauses within clauses. Thus the propositions expressed by clauses are also embedded within clauses, thereby leading to complex higher-order propositions. Thus the sentences in (17–18) express highly complex propositions that require a complex mind.
(17) They know that Ray believes that Sandy is out.
(18) She feels that Max feels that both of them know they have drifted away from one another.
Even though such constructions are uniquely human by virtue of the specific syntactic mechanisms implicit in intensional verbs (‘know’, ‘believe’, ‘say’, ‘feel’ etc.) that pave the way for such iteration, the relevant mental structures that can be specified in such constructions may or may not be characterized independently of the intensional predicates involved. One of the humanly realized mental structures from (17) can be specified by the relation R1 = {(they, know, R2)}, where R1 is a 3-ary relation and R2 is a complex relation designating a proposition. In fact, R2 can be modeled in the form: R2 = {(Ri, …, Rn)}; here Ri, …, Rn are different meaning relations that can be identified within a proposition, simple or complex (where i, n ≠ 2). Thus, for example, one relation from among Ri, …, Rn can be a relation, say, R3 = {(Ray, believe, R4)}, where R4 is a relation constructed for the proposition in ‘Sandy is out’, and likewise, another relation from among Ri, …, Rn can be a relation R5 = {(Ray, R4)}. Now what is important to observe is that some mental structures via meaning relations can be constructed independently of the intensional verbs involved. R5 is indeed one such relation specifying a mental structure, and so is the relation R6, which can be characterized as R6 = {(they, R2)}. The mental structures that R5 and R6 individuate link individuals and/or agents to certain states of affairs or events that specific propositions express. Note that this way of characterizing the mental structures that are independent of the intensional predicates involved in clausal embedding is separate from the issue of whether natural languages allow syntactically independent or embedded clauses. For example, Warlpiri allows syntactically adjoined dependent clauses, as Legate (2009) argues. The following example is of this kind.
(19) Jakamarra-rlu-ju yimi-ngarru-rnu kuja Japanangka-rlu
Jakamarra-ERG-1sgOBJ speech-tell-PAST comp Japanangka-ERG
marlu pantu-rnu
kangaroo spear-PAST
‘Jakamarra told me that Japanangka speared a kangaroo’ (Granites et al. 1976)
[erg = ergative marker, obj = object, past = past tense, comp = complementizer]
Since in (19) the proposition expressed in the dependent clause ‘Japanangka-rlu marlu pantu-rnu’ is believed to be true, regardless of whatever the matrix
predicate dictates for what is within its scope, the dependent clause is supposed to be adjoined rather than embedded within the matrix clause. Instead of detracting from anything that has been characterized thus far, such cases strengthen the case for mental structures that can be specified independently of the intensional predicates involved, since the relevant propositions expressed in these dependent clauses can be related to the speaker (to ‘ju’ in (19), for example). Another significant issue that links natural language grammar to its context-sensitive properties is the phenomenon of anaphoric reference. Anaphoric reference is distinct from other kinds of reference in grammar in the sense that it requires the establishment of either local or non-local referential dependencies, much of which hinges on the flow of information in the linguistic context.
(20) John always carries two tools when he sets out for the field.
(21) A man has come to dinner. He is a renowned mathematician.
(22) Every scientist who uses a computer spoils it.
Clearly, the cases in (20–22) can be identified with the various ways of syntactically linking a pronominal to some referent who has already been introduced. The sentence in (20) presents ‘he’ as the anaphora, the most salient antecedent of which is supposed to be ‘John’, although the possibility of having a different referent for the pronoun cannot be ruled out (‘he’ may refer to somebody else not introduced in the sentence). Even though the anaphora and the plausible antecedent ‘John’ are in the same sentence, they are located in different clauses. The case (21) contains two sentences; the first contains the antecedent ‘a man’ and the second the anaphora ‘he’, which saliently refers to the person mentioned in the first sentence.
The sentence in (22) is a more complex case in which the anaphora ‘it’ refers to the computer introduced in the relative clause ‘who uses a computer’, but the meaning of ‘a computer’ in this sentence is not referential. That is, ‘a computer’ does not simply pick out some computer that exists; rather, it has a quantificational force that comes with or inherits the quantificational properties of the universal quantifier in ‘every scientist’. Hence the meaning of (22) is more like: for every x: x is a scientist and for every y: y is a computer, if x uses y, then x spoils y. One of the most important theoretical problems associated with anaphora-antecedent linking is the problem of variable binding, given that the pronominal is treated as a variable which is bound by the (quantificational) antecedent noun phrase. In (20) the antecedent is a proper name (that is, ‘John’), and so ‘he’ is bound by ‘John’. But (21) appears to lead to a problem in that the variable that encodes the anaphora ‘he’ and the (salient) antecedent ‘a man’ are located
in different sentences, given that variable binding is supposed to be a local clause-internal or sentence-internal property. Note that variable binding that implements the co-reference of the anaphora with the antecedent unifies the variable-like features of the antecedent with those of the anaphora. This makes it possible for the anaphora and the antecedent to be identified with two different instances of the same variable bound under the same operator. Hence the question of whether to treat the antecedent of ‘it’ in (22) as a universal quantifier or as an existential quantifier rests on how the antecedent is to be interpreted within the sentence as a whole and how the anaphora is to be interpreted with respect to that interpretation. Clearly, the choice between having variables to represent the special type of referential dependency between an anaphora and the right antecedent, and doing away with variables altogether has distinct consequences for the description of the cases at hand. The former has been thrashed out in Hamm and Zimmermann (2002) and is also part of the approach adopted by Discourse Representation Theory (Kamp and Reyle 1993), while the latter approach either reduces to the traditional Russellian approach that treats pronouns as definite descriptions or is rendered, as in the variable-free semantics approach (Jacobson 1999), in terms that make reference only to truth-functional semantic values without any essential association with variables and indexes. Both of these two broader possibilities have their own problems.
On the one hand, the Russellian approach does not always yield the right interpretation, as in a case like ‘As a child finds a friend, she begins to grow up with her’, where one would be hard-pressed to paraphrase ‘her’ as ‘the friend who a child finds’, and it is not clear how the variable-free semantics can handle cases like that of (21), where the compositional derivation of semantic values in a variable-free way has to range over the entire discourse. On the other hand, a variable-containing approach has to deal with empirical facts that are in conflict with the idea that a variable represented by the (pronominal) anaphora is just a formal entity not loaded with content. Consider the following example from Bengali.
(23) chhele-gulo ekta lathi tullo ota/seta/ta niye
boy-PLU a stick picked it-distal/it-mental/that with
khela-r jonnyo
play-GEN for
‘The boys picked up a stick to play with it/that.’
[plu = plural marker, gen = genitive marker]
The problem this gives rise to is this: given that ‘ota’ (or ‘seta’ or ‘ta’) and ‘lathi’ are to be treated as two instances of the same variable bound by an existential quantificational operator, ‘ota’ (or ‘seta’) does not behave as though it is a variable, while ‘ta’ is perfectly variable-like. Note that the anaphora ‘ota’/‘seta’ is referentially dependent on the antecedent ‘ekta lathi’ and is also supposed to be co-referential with it, but at the same time the antecedent ‘ekta lathi’ is dependent on the anaphora ‘ota’/‘seta’ for the exact mode of reference (ways of making a reference). Crucially, ‘ota’ or ‘seta’ intrinsically contains the content in virtue of which it can act as a variable, but in order to act as a variable it cannot manifest this content because it is not transparent to, and thus cannot be retrieved from, variable binding. In fact, the sentence (23) would sound odd with ‘ota’, for instance, if the stick is located very close to the speaker. Although content does not matter to formally defined variables, the present case demands that something semantic over and above formal identity be made explicit. Simply put, variable binding for such co-reference yields only a coarse-grained representation. In other words, ‘ota’—or for that matter ‘seta’—contains an irreducibly inherent component of reference from a perspective which is lost or erased when it is rendered as a variable, if it is supposed that the antecedent ‘ekta lathi’ in the matrix clause and the anaphora in the non-finite dependent clause ‘ota/seta niye khelar jonnyo’ can undergo variable binding.
This can help steer clear of many of the problems with anaphoric referential relations discussed so far. Thus ‘John’ and ‘he’ in (20) can form a meaning relation individuating a mental structure, and this can be expressed as R1 = {(John, he)}. Likewise, ‘a computer’ and ‘it’ in (22) can also be incorporated in a meaning relation individuating a distinct mental structure. Let’s represent this relation as R3 = {(R2, it)}, where R2 is a meaning relation for ‘a computer’. Moreover, to capture the specific quantificational domain of meaning comprising ‘every scientist’ and ‘a computer’, we may also construct a relation R5 = {(R4, R2)}, where R4 is a meaning relation for ‘every scientist’. The case in (23) does not pose any problem either. Each of the three pronouns—‘ota’, ‘seta’ and ‘ta’—will form a distinct relation with the antecedent ‘ekta lathi’, thereby instantiating three distinct mental structures, each of which varies depending on the exact choice of the pronoun involved. Let’s now move over to conditionals, which are constructions that set up and manipulate the space of possibilities in a linguistically formulated frame.
Conditionals generally have two parts; while one part (that is, the antecedent) sets up the condition or the imagined scenario, the consequent describes the consequence that would follow from the antecedent. The interactions between the antecedent and the consequent in any conditional are pivotal to understanding the structurally anchored motivations for different aspects of various meaning relations. We may look at the following cases.
(24) If someone removes the table, the vase will fall down.
(25) If Sally is right, we may see a price hike soon.
(26) If the professor has come to meet the team, then he has lost something.
The examples in (24–26) are associated with different relations between the antecedent and the consequent. The example (24) is a clear case of material implication in that the event described in the antecedent is supposed to have a causal connection to the event that may follow should the event in the antecedent materialize. The sentence (25) is an indicative conditional in the sense that the antecedent states some possibility such that if it holds true, the consequent may also be taken to be true. Note that (25) does not exactly present a case in which the antecedent describes a cause the effect of which is expressed in the consequent. Rather, it is simply a case of (indicative) implication—the truth of the antecedent implies the truth of the consequent. The case in (26) is a bit different from the other two. It does not express the usual type of implication by virtue of which one infers the effect from the cause. Rather, it encapsulates an epistemic space in which the cause (as described in the consequent) is deduced from the effect which is already known to the speaker (see also Dancygier and Sweetser 2005). Needless to say, many of the complexities involved in conditionals are in part governed by the rules of grammar and in part by the human mental capacities.
For instance, the conundrums material implication in conditionals gives rise to are well-known. Thus one may say something like ‘If Milton is not the author of Paradise Lost, zero plus zero equals zero’ without violating any rules of logic, since according to the rules of logic a conditional is true either if the antecedent is false or if the consequent is true, and false if the antecedent is true and the consequent is false. Playing upon this obvious irrelevance of the antecedent to the consequent has been tantamount to juggling truth-functional truth conditions with human intuitions about events, things and mental states. The problem is that if this juggling gains ground, a disjuncture begins to appear between the actual rules of logic and the psychologically organized rules that go by a different logic. Thus, if one strongly believes that the antecedent is false, does the person also believe what the entire conditional
construction expresses? Or conversely, if the person strongly believes that the antecedent is true, does the person then disapprove of the content expressed in the conditional construction? It is difficult to answer this question without bringing in matters of probability vis-à-vis our way of engaging in reasoning in natural language (see for discussion, Jackson 1987). But there is reason to believe that most cases of material implication in conditionals have implicit representations of modality (that is, concepts of necessity and possibility) such that the if-clause can be taken to restrict or simply modify the meaning that a modal operator can be said to convey (see Kratzer 2012). On this proposal, the meaning of (24), for example, can be paraphrased roughly as ‘it is necessary that the vase will fall down if someone removes the table’. The mental structures that can be defined with respect to conditionals are thus bound to be fairly complex. Thus for conditionals we require a sort of general schema by means of which the relevant mental structures can be spelled out. This general schema for a mental structure in conditionals will look like R = {(IF, RA, RC)},4 where RA and RC are higher-order relations for the antecedent clause and the consequent clause respectively, and the element ‘IF’ in the ordered triple may be taken to signify the element that introduces an antecedent clause in a conditional. Furthermore, there can be another mental structure individuated by the meaning relation R’ = {(RA, RC)}. What is important here is that each of the cases in (24–26) will have a distinct pair of R and R’ in order that the exact relationship between the antecedent and the consequent in any conditional construction can be faithfully reflected.
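The general schema R = {(IF, RA, RC)} and its counterpart R’ = {(RA, RC)} can be sketched in Python. Again, the encoding is an illustrative convenience of my own (strings and frozensets stand in for the higher-order antecedent and consequent relations):

```python
# A toy rendering of the general schema for conditional mental
# structures: R = {(IF, R_A, R_C)} and R' = {(R_A, R_C)}, where R_A
# and R_C are higher-order relations for the antecedent clause and
# the consequent clause.

def conditional_relations(if_marker, r_a, r_c):
    """Return the pair (R, R') for one conditional construction."""
    r = frozenset({(if_marker, r_a, r_c)})
    r_prime = frozenset({(r_a, r_c)})
    return r, r_prime

# Stubs for (24): 'If someone removes the table, the vase will fall down.'
R_A = frozenset({("someone", "removes", "the table")})
R_C = frozenset({("the vase", "will fall down")})

R, R_prime = conditional_relations("if", R_A, R_C)

# Since 'unless', 'since' or 'when' may also introduce the antecedent,
# varying the first coordinate of the triple yields a distinct mental
# structure for the same antecedent-consequent pair.
R_unless, _ = conditional_relations("unless", R_A, R_C)
assert R != R_unless
```

Each of (24–26) would receive its own distinct pair (R, R’) by supplying different antecedent and consequent relations to the same schema.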
The specification of mental structures for conditionals relates rather directly to the case of counterfactuals which express relations between alternative states of affairs seen against the actual or present states of affairs, and the consequences and/or situations that could obtain. The counterfactual sentence from Hungarian in (27) and the one from Modern Hebrew in (28) exemplify two different ways of expressing counterfactuals.

4 It needs to be made clear that conditionals can also be constructed by means of other syntactic devices. Note that unless-clauses, since-clauses, when-clauses or even provided-clauses may also introduce the antecedent in a conditional. Plus certain coordinate constructions can have a conditional reading as well. For example, if one says ‘He drops the ball and I will rip him off’, the first conjunct ‘he drops the ball’ does not exactly make a statement. Rather, the sentence reads like ‘If he drops the ball, I will rip him off’. Hence such cases will have the exact syntactic device—that is, ‘since’ or ‘unless’ or even ‘and’—incorporated in the relevant type of meaning relations that individuate mental structures for these conditional constructions.
(27) ha holnap el-indul-na, a jövő hétre oda-ér-ne
if tomorrow away-leave-CF the following week.onto there-reach-CF
‘If he left tomorrow, he would get there next week.’
(28) im Dani haya ba-bayit, hayinu mevakrim oto
if Dani be.PAST.3SING in-home be.PAST.1PLU visit.PTC.PLU he.ACC
‘If Dani were home, we would have visited him.’

(Karawani 2014)

[cf = counterfactual marker, sing = singular, ptc = participle, acc = accusative case marker]
While Hungarian employs a distinct counterfactual morpheme ‘-na’, the past tense form ‘haya’ in Modern Hebrew does the same job. English seems to have more than one strategy for the marking of counterfactual constructions. The examples in (29–31) are all valid ways of forming counterfactuals in English.
(29) If my father were alive, I would have married a princess.
(30) If my father had been alive, I would have married a princess.
(31) Had my father been alive, I would have married a princess.
While (29) and (30) employ a specific tense and/or aspect, (31) contains auxiliary inversion. Most importantly, there is a significant difference in meaning between constructions that employ the grammatical strategy in (29) and those that employ the grammatical mechanism in (30–31). The former can be used to describe imaginary situations that have no intrinsic temporal connection, whereas the latter is also used to express alternative counterfactual scenarios that could have happened or obtained in the past. Compare (32) to (33) below.
(32) If I were rich (*in the year 2000), I would travel the world.
(33) If I had been rich (in the year 2000), I would have traveled the world.
While the sentence in (32) describes an unrealized present condition in the antecedent, the antecedent in (33) describes an unrealized past condition, that is, something that could have happened if the speaker’s wish had materialized. The possibility of having the adjunct ‘in the year 2000’ with (33) but
not with (32) provides evidence for this conclusion. Overall, it is clear that in most cases languages employ either a particular counterfactual marker or a specific form of tense-aspect marking in order to indicate the counterfactual event. It may now be observed that the mental structures that can be specified in counterfactuals have obvious similarities to those that have been specified for conditional constructions. The reason for this is clear—both types of constructions involve a dependent antecedent clause and an independent consequent clause. Crucially, a construction of the kind in (31) requires a meaning relation of the type R’ = {(RA, RC)}, because the conditional ‘if’ is not present in (31). There is an important sense in which we can contend that animals and organisms other than humans do not possess counterfactual mental structures, although it is possible that other animals, especially some species of vertebrates including primates, have certain forms of conditional mental structures. We shall discuss this further in the next section. So we may now turn our attention to a different linguistic phenomenon. Coordinate structures in language are an important source of insights into the combinatorial possibilities natural language offers. Linguistic expressions of unbounded length can be formed by means of coordinate structures. Thus noun phrases (np), verb phrases (vp), adjective phrases (ap), prepositional phrases (pp) or whole clauses (S) can form coordinate structures. In most general cases across languages, the grammatical device that helps accomplish this is a coordinating linker that connects two elements of equal type. Thus ‘a bag (np) and a rope (np)’ is fine, but ‘a bag (np) and in the well (pp)’ is not. Likewise, ‘A knife (np) or a sickle (np) will do’ is fine, but ‘A knife (np) or very important (ap) will do’ is not.
Such coordinating linkers in English (such as ‘and’, ‘or’) are sandwiched between the elements they connect, as also shown in the examples below.
(34) a. The sweet girl danced and played guitar.
b. *The sweet girl played guitar, danced and.
(35) a. I shall write this book or he will compose his music.
b. *Or he will compose his music, I shall write this book.
Languages can also duplicate the marking or the form that represents the coordinating linker, as the examples from the Pama-Nyungan language Djabugay (in 36) and Dutch (in 37) show.
(36) yaba-mba(-nggu) nyumbu-mba(-nggu) djama du:-ny
brother-and-ERG father-and-ERG snake kill-PAST
‘Brother and father together killed a snake.’ (Patz 1991)
(37) hij is noch snel, noch precies.
he is neither fast, nor meticulous
‘He is neither fast nor meticulous.’ (Vries 2005)

Additionally, coordinate structures without any coordinating linker are also possible. Maranungku, an Australian language, exhibits this property, as shown in (38).
(38) mereni kalani ŋeni kili-nya awa
brother uncle my eat-3PLU meat
‘My brother and uncle ate the meat.’ (Tryon 1970)
[ERG = ergative marker]
The mental structures that can be constructed for coordinate structures can be specified in a manner similar to the one for conditionals. We shall require certain general schemas that can accommodate the relevant pieces in meaning relations. Thus a relation R = {(LC, Xi, Xj)} can be constructed, where LC is the coordinating linker, the variable X represents any phrasal or clausal element, and it may be the case that i = j.5 Besides, another relation R’ = {(Xi, Xj)} designating a different mental structure can be constructed for cases like (38). But an obvious question undoubtedly arises at this juncture. Is there a way of distinguishing between the mental structure for a case of coordination like that in (38) and a mental structure that can be specified by defining a meaning relation comprising only the elements that are conjoined in a language like English? Given what has been discussed so far, this question also pertains to the question of identifiability of mental structures distilled from linguistic constructions. Note that a mental structure that can be specified by defining a meaning relation having only the elements that are conjoined by ‘and’ or ‘or’ in a language like English can have a property different from that which can be found in another mental structure formed by incorporating the coordinating linker along with the elements conjoined. Suppose that we have a relation, say, Ri = {(John, Max)} constructed from a sentence like ‘John and Max hate each other’, where Ri denotes a relation of hating. In this case the relation Ri = {(John, Max)} and another relation Rj = {(and, John, Max)} are indeed distinct in their conceptual as well as formal properties.
Hence a relation Rk = {(mereni, kalani)} designating a mental structure for the coordinate structure in (38) is also different from Ri, insofar as Ri is a relation of hating and Rk is simply a relation of pairing. However, if the

5 This stipulation is necessary, given that we can have coordinating structures like ‘Sid and Sid and Sid…’.

Possible Minds from Natural Language


relation Ri is defined in such a manner as to represent a relation of pairing of ‘John’ and ‘Max’ in the sentence ‘John and Max hate each other’, it is then true that Ri = Rk. What this suggests is that two mental structures that are otherwise formally similar (in terms of the exact value of n in n-tuples) can differ based on the exact contents of the mental structures in question. There is more that we can say about this matter as we proceed.

We now turn to constructions that leave out some piece(s) of structure by analogy with something that is already present in the construction formed. The general term that is used for such constructions is ellipsis. There can be various types of ellipsis depending on which portion of structure is left out. For example, a verb phrase ellipsis leaves out the verb phrase, or a noun phrase ellipsis omits the noun phrase. Here are some examples that can illustrate the point better.

(39) I would clean up the floor if there was a good reason to __.
(40) Kitty likes all the wonders of the world, and this man does __ too.
(41) The boy loves to study these heavy books and Jim __ too.

Here, the exact portions that have been retained in the sentences have been marked in bold, and the gaps have been indicated by dashes. In (39) the verb phrase ‘clean up the floor’ is missing after ‘to’. Interestingly, (40) contains the auxiliary verb ‘does’ that sort of re-presents the portion that is missing, whereas (41) does it without the auxiliary ‘does’. But there are also other cases of ellipsis in which smaller or bigger portions are found to be absent. Let’s look at the examples below.

(42) Ron is passionate about cricket and Jane __ about soccer.
(43) The children can pick up the chairs and the adults __ the tables.
(44) I will feel these violet flowers more than she will __ those green plants.
(45) Jack saw something over there, but I don’t know what __.
(46) Her son suddenly fell down, but I can’t figure out how __.
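In relational terms, what ellipsis resolution amounts to can be pictured as the gap reusing the antecedent's relation. The sketch below is a hypothetical illustration based on (40); the segmentation into tuples and all names are assumptions of this sketch, not the book's analysis.

```python
# Hypothetical sketch: VP-ellipsis as reuse of the antecedent VP relation.

def relation(*tuples):
    """Build a meaning relation from one or more n-tuples."""
    return frozenset(tuples)

# (40) 'Kitty likes all the wonders of the world, and this man does __ too.'
R_vp = relation(("likes", "all the wonders of the world"))  # antecedent VP

R_first = relation(("Kitty", R_vp))      # overt first conjunct
R_second = relation(("this man", R_vp))  # gap resolved by reusing R_vp

# The two conjuncts literally share the same VP relation.
assert ("Kitty", R_vp) in R_first
assert ("this man", R_vp) in R_second
```

The point of the sketch is only that the elided clause can be specified without duplicating any material: the same relation object serves both conjuncts.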
In (42) the entire predicate minus the prepositional phrase is missing; this differs from the earlier cases in that constructions of the type in (39–41) do not leave behind the adjunct prepositional phrase that is part of a verb phrase. Consider, for example, a sentence like ‘Ron is passionate about cricket and Jane too’, where the entire verb phrase that includes the adjunct prepositional phrase is missing. Such cases as those in (42–46) are said to be a subclass of cases of ellipsis, and the term that is generally used is gapping. The sentence


chapter 3

(43) is also similar to (42). One of the important points to take into consideration is that gapping leaves a remnant (‘about soccer’ in (42) and ‘the tables’ in (43)) which contrasts in meaning with its counterpart in the other clause. Thus, ‘about soccer’ in (42) contrasts with ‘about cricket’, and hence ‘about soccer’ is the focus when compared to ‘about cricket’. The example in (44) is a different case; it leaves out only the main verb ‘feel’ in the dependent adjunct clause ‘more than she will those green plants’. Since the portion that undergoes ellipsis does not carry the auxiliary, this looks like gapping but is not really a clear case of gapping. Hence this is called pseudogapping (see Levin 1979). On the other hand, (45–46) show that portions as big as the entire clause6 can also be absent. This is called sluicing.

Quite apart from cases of ellipsis that involve VPs and clauses, we can also think of cases of ellipsis that involve NPs. NP-ellipsis is attested in English with and without the use of ‘one’. Thus we can have the following.

(47) Three boxes arrived, but two __ were in a broken condition.
(48) As for your papers, I like the ones you have coauthored with Smith.

As can be observed above, the noun phrase without the quantifier in (47) or without the genitive/determiner in (48) constitutes the portion that is common across a sentence, as indicated by the fact that the part marked in bold is identical to one instance of this common piece when the other instance is dropped. The second strategy, that is, the strategy with one-insertion, is also found in Spanish and Italian. The following is an example from Spanish.

6 In the Generative literature TP (Tense Phrase) is the counterpart of what is taken to be a clause here. TP is the phrase that includes the NP and the VP. The phrase that dominates TP is the Complementizer Phrase (CP). Within Generative Grammar, the structure of a sentence looks like the following.

[CP C [TP NP/DP T [vP v [VP V NP/DP]]]]

The heads of phrases in this schema are marked in bold. The VP in this structure is supposed to be enclosed within another verb phrase called the Light Verb Phrase (vP). Thus the two verb phrases form a shell. Additionally, for convenience the Determiner Phrase (DP), which is a more inclusive functional phrase incorporating the traditional NP, has been placed alongside NP in order to indicate that either of the phrases can be taken to be present there in accordance with one’s choice of terminology within the domain of the appropriate theoretical interpretation. What is important for us is that not much hinges on whether we adopt this schema for the present purpose. So we will continue to use our terminology, which is in a sense more familiar to readers.


(49) a. un libro grande esta encima de la mesa
        a book big is on the table
        ‘A big book is on the table.’
     b. uno grande esta encima de la mesa
        a big is on the table
        ‘A big one is on the table.’
     (Bernstein 1993)

But German licenses NP-ellipsis without one-insertion with the indefinite article present, as shown in (50).

(50) Peter hat ein altes Auto gekauft. Hat Maria auch ein-es gekauft?
     Peter has an old car bought. Has Mary also a-SING bought
     ‘Peter has bought an old car. Did Mary also buy one?’
     (Lobeck 1995)

Overall, the common thread running through different kinds of ellipsis is that some element is left intact while a portion goes missing, and the element that is left intact can be the complement of a predicate and/or the functional element that marks either tense and aspect (in the case of predicates) or the quantificational/determiner properties (in the case of NPs). So in the case of VP-ellipsis ‘to’ or the tense-marking form (‘does’, for instance) is the remnant; in sluicing the wh-phrase is the remnant; in NP-ellipsis the determiner is the remnant, and so on.

We may now concentrate on the mental structures that can be derived from cases of ellipsis. One advantage of the formulation of meaning relations is that it can easily express both the elements that undergo ellipsis in (39–41), which can be analyzed as phrases (that is, as verb phrases), and those parts undergoing ellipsis in (42–46), which cannot be taken to be (verb) phrases. The specification of the relevant mental structures follows straightforwardly from this. First, let’s look at the cases of NP-ellipsis. For (47) a meaning relation R1 = {(R2, R3)} can be defined such that R2 = {(three, boxes)} and R3 = {(two, boxes)}. The case in (48) is a bit tricky, since there is some material present (that is, ‘ones’) that has taken the place of the noun phrase.
In such a scenario, the noun ‘papers’ must form a relation with another relation, say, R4 constructed for ‘the ones you have coauthored with Smith’. Call this relation R5, which individuates a mental structure. Now we may specify another mental structure individuated by the relation R6 = {(R7, R4)}, where R7 = {(your, papers)}. For (50) we shall require a relation R8 = {(R9, R10)} in which R9 = {(eines, R11)} such that R11 = {(altes, Auto)}, and R10 is a relation for ‘ein altes Auto’. A slightly different but otherwise similar strategy will be applicable in (49). Let’s define a relation R12 = {(R13, R14)},


where R13 is a relation for ‘un libro grande’ and R14 = {(libro, RN)}, where RN is a relation for ‘uno grande’. So R12 instantiates a mental structure that allows the conceptualization of ‘uno grande’ in the context of NP-ellipsis.

Turning to the other cases of ellipsis involving VPs and clauses, we can specify other types of mental structures. For example, for the case in (39) a relation R15 individuating a mental structure can be defined such that R15 = {(R16, R17)}, where R16 is a relation for ‘a good reason’ and R17 is a relation for ‘clean up the floor’. Likewise, for (40) a relation R18 can be defined such that R18 = {(R19, R20)}, where R19 denotes a relation constructed for ‘this man’ and R20 is a relation for ‘likes all the wonders of the world’. Now for a case like (42) we need a relation R21 such that R21 = {(R22, R23)}. Here R22 = {(R24, R25)}, given that R24 = {(is, passionate)} and R25 = {(about, cricket)}, and R23 = {(R24, R26)}, where R26 = {(about, soccer)}. Similar considerations apply to (43–44). Finally, the relevant mental structure in the cases of sluicing can be specified by defining a relation R27 = {(wh-element, RS)}, where RS denotes a higher-order relation for the clause that undergoes sluicing.

Now we shall look at modality. Modality in language is associated with the linguistic representation of the notions of necessity and possibility. Modal verbs like ‘can’, ‘must’, ‘may’, ‘might’ are so called because they linguistically express modal concepts. It does not seem hard to recognize that a sentence like ‘I may finish off this task’ conveys something different from what the sentence ‘I must finish off this task’ conveys. The pivotal difference cannot be ascribed to anything other than what the difference between ‘may’ and ‘must’ contributes to.
Modality is another element in linguistic expressions that, at least in part, relates to and also reflects the human capacity to think of or imagine possibilities that are not actualized or have not materialized or are judged to be right in terms of some standards or moral codes. Some pertinent examples are provided below in order to state clearly what we can gather from expressions of modality.

(51) gel-me-meli-siniz
     come-NEG-MOD-2PLU
     ‘You ought not to come.’

(52) mne nado idti v voksal
     I-DAT must go-INF to station-ACC
     ‘I must go to the station.’
     (De Haan 2006)

[NEG = negation, INF = infinitive marker, DAT = dative case, MOD = modality marker]

The examples above illustrate how modality can be expressed through some affix in Turkish, as in (51), or via a different (adverbial) form in Russian, as


in (52). Importantly, even though modality markers are embedded within sentences, they do not generally specify anything about the time or aspect of an event. Rather, modality modifies the proposition expressed in a clause.7 Thus (51), for example, states that it is necessary for the agent not to come. The example in (52) also expresses the necessity of the agent’s going to the station. If interpreted in the normal context, such modals express what is called deontic modality, which reflects notions of desirability and acceptability in terms of some (ethical) standards, principles, or codes of etiquette. But it is to be borne in mind that what a certain modal form expresses is not always clear. The English modal ‘must’ is ambiguous between a deontic reading and an epistemic reading. For example, the sentence in (53) does not express a necessity in terms of some standards, moral or otherwise. Rather, it describes the state of the speaker’s knowledge about some state of affairs. Hence the term used is epistemic modality. And (54) is a clear case of deontic modality in that the speaker permits the recipient of the message to send him/her the letter tomorrow.

(53) The culprit must have stayed in the room.
(54) You may send me the letter tomorrow.

On the one hand, nothing keeps the modal ‘may’ from having an epistemic reading, as in the following sentence.

(55) This group of volunteers may be stranded in the mountains.

On the other hand, there are also other ways of expressing epistemic modality. Expressions like ‘probably’, ‘likely’, ‘maybe’, ‘certainly’, ‘I think’ in English achieve the same effect, in that they convey the impression that the speaker expresses his/her judgment about certain states of affairs in the world. It needs to be emphasized that it is difficult to restrict modals to a particular syntactic category.
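The ambiguity of a modal form such as 'must' can also be pictured relationally: a bare pairing of the modal element with the propositional relation underdetermines the reading. The sketch below is hypothetical; the reading labels are descriptive glosses added here, not part of the text's notation.

```python
# Hypothetical sketch: one modal relation, two candidate construals.

def relation(*tuples):
    """Build a meaning relation from one or more n-tuples."""
    return frozenset(tuples)

# (53) 'The culprit must have stayed in the room.'
RP = relation(("the culprit", "have stayed in the room"))
R_modal = relation(("must", RP))  # pairing of modal element and proposition

# Candidate readings of the same relation; only context picks one out.
readings = {
    "epistemic": "the speaker infers that the culprit stayed in the room",
    "deontic": "it is required that the culprit have stayed in the room",
}

# The relation itself is one object shared by both construals, so
# disambiguation must come from context, not from the tuple itself.
assert ("must", RP) in R_modal
assert set(readings) == {"epistemic", "deontic"}
```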
Speaking in cross-linguistic terms, it is not always easy to impose only a specific interpretation on modals since a certain modal element within a

7 But there are modals that can affect the event structure and so modify the event as well. For instance, if one says ‘I can change the rules’, the modal auxiliary ‘can’ does not exactly modify the proposition expressed in ‘I change the rules’. Here the modal ‘can’ is to be interpreted as implying something about the capacity or ability of the agent participant in the event concerned. Such modals are said to belong to the category called dynamic modality (Palmer 1986). A language like Japanese grammaticalizes this distinction between modals that affect the proposition and those that affect the internal structure of the event in a clause.


single sentence may receive a range of modal interpretations. Languages apply various kinds of forms—affixes, particles, auxiliaries, main verbs, adverbs, adjectives or even phrases—to mark modality.

We may now proceed to explore the mental structures that articulate with the expressions of modality. In most cases modals can be construed to be split between an eventive interpretation and a propositional-level interpretation. So we shall require two broadly conceived general schemas for these two types. Cases of what has been called dynamic modality (Palmer 1986) more often than not are restricted to an eventive interpretation (see note 7). Modal elements such as ‘can’, ‘be able to’, ‘need to’, ‘have to’ or even ‘must’ allow for such interpretation. For such cases, we formulate a relation Ri = {(ME, RE)}, which instantiates a mental structure for the eventive modal interpretation, where ME may be taken to denote a modal element (simple or complex) or a relation that expresses modality, and RE is the relation for the specific expression that describes an event. Thus, for example, a sentence like ‘I can win the game’ will have a relation R1 specified as R1 = {(can, RE)}, where RE is a relation between ‘I’ and another relation for ‘win the game’. In a similar manner, a relation Rj = {(ME, RP)} instantiating a mental structure for the propositional-level modal interpretation can be specified. Notice that here RP is the relation for the specific expression that describes a proposition. It needs to be made clear that RP is essentially an n-ary relation when n ≥ 2, but this is not generally true of RE. So the sentence in (51), for example, which demands a propositional-level modal interpretation, can have us define a relation R2 such that R2 = {(meli, RP)}, where RP represents a relation for ‘gel-me-siniz’ which can be specified as RP1 = {(R’, R”)} when R’ = {(me, gel)} and R” = {(siniz, R’)}. To give another example from above, let’s take the case in (53).
The sentence ‘The culprit must have stayed in the room’ contains ‘must’, which is the ME here. Therefore, a relation R3 specifying a mental structure for the relevant modal interpretation in this sentence can be defined such that R3 = {(must, RP)}, where RP stands for a relation for the entire expression ‘the culprit has stayed in the room’ which can be written as RP2 = {(R’, R”, R”’)} when R’ = {(R1, R2)} as R1 = {(the, culprit)} and R2 = {(has, stayed)}, R” = {(R2, R3)} given that R3 is a relation for ‘in the room’, and finally, R”’ = {(R1, R3)}. Similar considerations apply to the other cases cited here.

Just as modality modifies the (modal) structure of propositions or events, negation affects the internal formal structure of propositions or events. Logically speaking, if a proposition P is true, not-P makes it false. Or conversely, if a proposition P is false, that is, if it is the case that not-P, the negation of this statement makes P true. On the other hand, if negation applies to events, it simply states that the event in question did/does/will not occur or hold. For example, if someone says ‘Do not move’, the speaker urges that the action/event


of the hearer moving should not occur. In other words, negation is like an operator that changes the valence of some expression it applies to. The linguistic expression of negation can be widely varied, but its availability across languages in some form or the other is widely attested. Thus it would not be surprising if negation is found to be a universal property of natural language. In fact, what sets it apart from other elements like modal elements or conjunctions is that negation is not exactly a property that has a physical grounding, in that negation cannot be traced to any physical organization of processes or to the occurrence of events, while modal elements or conjunctions have a reflex in the regularities of physical events or even in the organization of psychological states. Nor is it a kind of grammaticalization of language users’ psychological states and processes, as anaphoric reference or information structure in language is. Rather, it is an abstract property of symbolic expressions that can be traced to logical and/or linguistic contexts of the expressions in which negation applies.

Negation in languages is expressed through a variety of means, and it is not always the case that the linguistic forms of negation will match the negative operators in their meanings. A familiar example comes from African American English (AAE), a variety of English, which uses double negatives without leading to a really doubly negated meaning in the constructions.

(56) I don’t have no cars.

This sentence does not mean that the speaker has a car or cars; instead, it means that the speaker has no cars. But this is not just restricted to the AAE variety of English—this can be found in languages such as Italian, Afrikaans, French etc.

(57) Gianni non ha telefonato a nessuno
     Gianni NEG has called to nobody
     ‘Gianni did not call anybody.’

(58) hy is nie moeg nie
     he is NEG tired NEG
     ‘He is not tired.’

(59) Marie ne mange pas
     Marie NEG eats NEG
     ‘Marie does not eat.’

(Zeijlstra 2008)

The Italian example in (57), the Afrikaans example in (58) and the French example in (59) show that two forms of negation can be used in the same


construction without any accompanying interpretation in which two negatives, by canceling each other out, give rise to a positive interpretation. This is generally called negative concord, because the two negative elements are supposed to agree in terms of the grammatical features of negation. It needs to be made explicit that the distinction between double negatives as in AAE and negative concord is not quite straightforward, as there is considerable overlap between these two cases, on the grounds that both do not ensure logical faithfulness in expressing negative operators. In this sense, it may be argued that logical expressions and linguistic expressions of logic may have certain well-grounded differences. Hence this raises questions about the semantic composition of sentences, given that one of the negative markers is semantically inert despite the presence of two negative markers. But the presence of two or more negative elements can in other ways be thought to heighten the semantic negativity—a possibility entertained in Jespersen (1924). This is evident in the case of (56). In fact, the presence of double negatives is not that rare even in the standard variety of English, especially in colloquial speech. Consider the pleonastic double negation in (60) below.

(60) I shouldn’t be surprised if Bob didn’t get upset.

The sentence means that the speaker would not be surprised if Bob got upset. Here is a case of double negatives that does not exactly instantiate concord, that is, grammatical agreement. Such pleonastic cases of double negatives abound in other languages such as West Flemish (Horn 2010). Therefore, multiple instances of negation do not correspond to multiple instances of semantically expressed negative operators. This will turn out to be important for the specification of mental structures for negation.
Like modality, negation can be marked by means of affixes, particles, independent words or even phrases, and many languages apply a combination of these grammatical strategies. But there are few languages that can express negation by means of tone, that is, by not applying any lexico-morphological form. An example can be given from Mbembe, a Niger-Congo language in Nigeria.

(61) a. mó-tá
        3-FUT-go
        ‘He will go.’
     b. mò-tá
        3-NEG-go
        ‘He will not go.’

(Dahl 2010)


Note that in the example above the acute accent (´) indicates a high tone and the grave accent (`) a low or falling tone. This shows that the meaning of negation can also be implicit in the intonation particular expressions carry. Apart from that, negation also interacts with other elements in sentences. That is, negation, just like modality, may be interpreted in different positions in linguistic constructions. The following examples illustrate this well.

(62) The interns may not come tomorrow.
(63) All students did not turn up.
(64) Some student did not turn up.
(65) Kat does not believe that Jakka is in town.

In (62) negation interacts with the modal element ‘may’ in the sense that negation is interpreted relative to the modal element ‘may’ or vice versa. This interaction of one element with respect to another element for semantic interpretation is called semantic scope. Thus on one interpretation the sentence in (62) means that it is possible that the interns will not come tomorrow (the interns may come the day after tomorrow). Here the modal element ‘may’ takes scope over the negation. But there could be another plausible interpretation of this sentence on which it means that it is not the case that the interns may come tomorrow (perhaps the interns may not come any day). Here the negation present in the construction takes scope over the modal element ‘may’. Similarly, the universal quantifier ‘all’ interacting with the negation in (63) gives rise to two different interpretations. On one interpretation (‘all’ taking scope over the negation), the sentence means that all students were absent, while on the other interpretation (the negation taking scope over ‘all’) it implies that some students, but not all, turned up. But (64) does not have a reading on which it can mean that it is not the case that some student or the other turned up. Rather, it simply means that some particular student did not turn up.
In other words, the negation does not take scope over the existential quantifier ‘some’. Finally, (65) allows an ambiguity in that the sentence may mean that Kat believes that Jakka is not in town, while it may also mean that it is not the case that Kat believes that Jakka is in town. In the first case the negation is interpreted within the embedded clause and in the second it is within the matrix clause. This is also called NEG-raising because the location of interpretation of the negation appears to have shifted from within the embedded clause to the matrix clause.

Mental structures expressed in constructions involving negation can be specified by having it related either to the proposition or to the event


expressed.8 Hence, on the one hand, we can have a relation R = {(N, XP)}, where N denotes the negative element and XP designates a relation for the clause N negates. Thus, for a sentence like ‘Rani did not believe this’, the clause N negates is ‘Rani believed this’. But for cases of negative concord or multiple negation we require more than one element of negation. That is, for cases in (57–60), the general schema needs some modification. So a relation instantiating a mental structure in such constructions would look like R = {(N1, … Nn, XP)}, where N1, … Nn stand for different negative elements within the same construction and n must be some arbitrary finite number. On the other hand, for cases involving the negation of events expressed in constructions, we can define a relation R’ = {(N, XE)}, where XE designates a relation for the expression of the event N negates. For example, the interpretation of (64) warrants that negation should negate the event expressed. Hence in such a case XE will represent a relation for ‘turned up’. Having said that, we cannot rule out cases where relations individuating mental structures for negated propositions and relations instantiating mental structures for negated event expressions interpenetrate. Such a possibility can indeed be found in (62–63), for instance. As a matter of fact, many negated expressions across languages can be of this type.

Finally, for the kind of negated expression in (61b), something other than the schemas specified above is required. Note that the negated expression in (61b) is not constructed in terms of some lexical or morphological form—it is expressed through some non-segmental means. Here is a case where the same form pronounced in a different tone gives rise to a different meaning. Given that this is the case, the whole expression in (61b) along with the associated tone has to be part of the relation individuating a mental structure for the negated expression.
In other words, a meaning relation installing a mental structure for the whole clause has to be defined on (61b). Now we turn to expressions of tense/aspect in order to explore a distinct type of mental structures that can be uncovered from them. Tense and aspect are two related terms that have something to do with time in events. Both relate events to temporality. The difference between them can be traced to the distinction between temporal relations and the mode of temporality. In clearer terms, tense has to do with the relations between temporal

8 There exists a special affinity between negation and polarity-sensitive items such as ‘any’, ‘ever’ etc. Consider, for instance, the difference between ‘*He has any car’ and ‘He does not have any car’, or between ‘*He has ever been to Austria’ and ‘He has not ever been to Austria’. For such cases, the relevant mental structure must incorporate a relation between the negating element and the polarity-sensitive item concerned.


events, whereas aspect9 refers to the internal composition or constitution of temporal events. Thus the sentences ‘A man walks’ and ‘A man is walking’ differ not in event time, but rather in terms of the aspectual profile of the event in question. That is, the event of a man’s walking is the common background theme for both sentences, but ‘A man walks’ expresses a neutral aspect while ‘A man is walking’ isolates the portion of temporality within which the continuity of an ongoing activity has to be singled out for identification. Similarly, the sentence ‘A man has walked’ refers to the finished part of the activity of walking which is analyzed from within the event that the man’s walking encompasses. The following sentences show how tense and aspect interact in complex sentential constructions.

(66) We thought they would be away from home.
(67) She believes Jack went out early.
(68) The committee has decided that Shan is the best candidate for the job.
(69) Philip saw a lot of kids going inside the museum.
(70) The match referee is calling the umpire to decide on the outcome.

All the examples in (66–70) are complex sentences, each having an embedded clause within its scope. The sentence in (66) describes a past event in the matrix clause, and the event expressed in the embedded clause can be located only with reference to the past event in the matrix clause because the tense in the matrix clause takes scope over that in the embedded clause. Another way of putting this is that the event in the embedded clause refers to a time in the past, but that time point has to be located within the (epistemic) modal context described by the matrix event. The reason is that the speaker(s), probably on the basis of inferences and knowledge gained from some piece of evidence, thought that they would be away from home. The sentence (67) as a whole

9 Linguists often differentiate between grammatical aspect and lexical aspect. The latter term is especially used to refer to aspectual properties encoded in specific verbs or classes of verbs. For example, verbs like ‘walk’, ‘eat’ describe activities; verbs like ‘build’, ‘paint’ express accomplishments which have a definite endpoint; verbs like ‘arrive’, ‘win’, ‘die’ refer to achievements which are punctual in having no component of a process; and finally, verbs like ‘know’, ‘relate’, ‘love’ describe states which are by their very nature unbounded in time. For our purpose, the term ‘aspect’ will be used to designate either grammatical aspect or lexical aspect depending on the exact context of use.


expresses an event in the present time, but the event described in the embedded clause refers to a past event and is to be situated with reference to the event time in the matrix clause. Importantly, the event in the matrix clause is not bounded in its aspectual profile, whereas the matrix event in (66) is bounded as it happened in the past. Hence the past event in the embedded clause of (67) is under the scope of an eventuality of unbounded aspect. The case in (68) is different in that the matrix event is an event that has happened in the recent past, having a perfective aspect. That is, the event has a bounded aspectual dimension. Here again, the state described in the embedded clause refers to a present state of affairs, and the content of the statement ‘Shan is the best candidate for the job’ is co-temporal with the matrix event. What this means is that if the committee had taken the decision in the distant past, it could be quite possible that somebody other than Shan was then the best candidate.

Such referential dependencies of tenses in linguistic expressions of events can also be rendered in anaphoric terms—a proposal elaborated on in Higginbotham (2009). Just as an anaphor refers back to some entity already mentioned or introduced, tenses also refer (back) to each other, thereby making viable the relations between events and/or states. This is strikingly clear in example (69), where the non-finite and hence tenseless event of a lot of kids going inside the museum can only be situated with respect to the past event of Philip’s seeing. Significantly, the aspect of the embedded event is not bounded in time due to the non-perfect/progressive aspectual profile of the event, although the matrix event is bounded in the past.
Finally, (70) contains a matrix present event in its non-perfect unbounded aspectual profile with the event in the embedded clause in a non-finite tenseless form, and this embedded event which has not yet materialized is parasitic upon the matrix event for its future or potential manifestation of tense and aspect. Needless to say, tense and aspect interact with one another in intricate ways across languages. As also observed above, they can also express modality. In fact, the intimate relationship between modality and tense is observed in the nominal encoding of tense in Somali, an Afro-Asiatic language spoken in Somalia.

(71) nín-ka cáan-ka ah ee búug-gani qoray (waa Shákisbíir)
     man-DEF.M fame-DEF.M be and book-DEF.M.DEM.PAST wrote
     ‘The famous man who wrote this book (is Shakespeare).’
     (Lecarme 2008)
(DEF = definiteness marker, M = masculine gender)

Possible Minds from Natural Language

101

In (71) ‘-a’ in ‘nín-ka’ or ‘cáan-ka’ and ‘-i’ in ‘búug-gani’ are the tensed definiteness markers. While ‘a’ refers to the non-past tense, ‘i’ refers to the past tense. As Lecarme (2008) argues, these markers are also markers of modality, especially when these tensed definiteness markers are found to induce modal readings (in terms of epistemic modality and the reference to non-actual possibilities). Aside from that, languages can also go tenseless by exhibiting no tense marking (for discussion, see Lin 2012). Mandarin Chinese is one such case.

(72) a. wǒ shuāiduán-le tuĭ
        I break-PERF leg
        ‘I broke my leg (it’s still in a cast).’
     b. wǒ shuāiduán-guo tuĭ
        I break-PERF leg
        ‘I broke my leg (it has healed since).’  (Smith 2008; cited from Chao 1968)

Here, both ‘-le’ and ‘-guo’ are markers of the perfective aspect. The difference between them resides in whether one form refers to an event that occurred at some unspecified point of time or to an event that took place prior to a given time. The former interpretation goes with ‘-le’ and the latter with ‘-guo’. It becomes clear that fine-grained differences in the aspectual profiles of events can also contribute to the interpretation of temporal relations of events. It is like having the interpretation of tense piggyback onto the expressed modes of eventualities. Thus tense and aspect are not two independent categories that have independent reified correlates within and across languages. Rather, they can often be mixed. The take-home message that we get from this exercise is that the mental structures that can express the properties of tense and aspect cannot be defined without reference to the verbs that constitute the pivot around which eventualities are structured. Hence meaning relations instantiating mental structures for temporal and aspectual properties of eventualities have to be specified in a manner that makes reference to clausal and inter-clausal temporal structures.
For sentences that encode tense/aspect only in verbs, the relevant relation individuating a mental structure can be schematized as RT = {(VD, R1, …, Rn)}, where VD is a set of forms for the verbal domain and R1, …, Rn are relations between the items in VD and other elements in the rest of the sentence. For instance, in a sentence like ‘Gandy has been playing for an hour or so’ VD contains ‘has’, ‘been’ and ‘playing’ as members, and R1, …, Rn will represent relations that can be constructed between the members of VD, and ‘Gandy’ and a relation for ‘for an hour or so’. This formulation can also
cover the case of Mandarin Chinese—in one case the relevant item in VD is ‘shuāiduán-le’ and in the other it is ‘shuāiduán-guo’. For complex sentences such as those in (66–70), each instance of a relation RT from one clause has to be brought into a higher-order relation with the other instance of RT from another clause. Thus we can have R = {(RT1, …, RTk)}, where R is the higher-order relation for other relations RT1, …, RTk expressed in clauses. But for languages having two different domains for tense marking, we require another set ND apart from VD. In such a case RT = {(ND, VD, R1, …, Rn)}, where R1, …, Rn are relations between the members of ND, the members of VD and other expressions in the sentence.

Lastly, we shall look at information structure in language. The notion of information structure in linguistic constructions is characterized in terms of the uncertainty/familiarity associated with linguistic items. The difference between topic and focus in a linguistic structure answers to the informational significance demanded by linguistic structures used in specific contexts of use. For example, in the sentence ‘Daisy left the party first’ the focus is ‘Daisy’, while the topic is ‘the party’, chiefly because the fact that it is Daisy rather than anybody else who left the party first attracts our attention in adding a new piece of information, whereas the context of the party talked about in the sentence is a familiar or old piece of information—it forms the background. A focus is thus more informative than the topic, and hence it is often, if not always, stressed. This does not, of course, imply that what a focus is in a certain construction remains fixed as a focus in another related construction that follows the former; after all, any focus can become a topic in another construction because its surprise value or newness can ultimately wear off. Below are some more examples of topic-focus structures from English.

(73) As for John, he went to meet Genelia.
(74) Peter needs a car, while Bob needs a gun.
(75) This fabulous book, I have been looking for __.
(76) Prax ate only vegetables in the restaurant.
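Stepping back briefly to the tense/aspect schema: the relations RT = {(VD, R1, …, Rn)}, the higher-order R = {(RT1, …, RTk)}, and the nominal-tense variant RT = {(ND, VD, R1, …, Rn)} can be given a minimal computational rendering. The sketch below is only illustrative; the tuple-and-set encoding, the function names, and the sample relations are our own assumptions, not part of the proposal itself.

```python
# Illustrative encoding of the clausal tense/aspect relations.
# RT = {(VD, R1, ..., Rn)}: VD is the set of verbal forms of a clause;
# R1 ... Rn relate those forms to other elements of the sentence.
# R = {(RT1, ..., RTk)} is the higher-order relation over clausal RTs.

def make_RT(VD, *relations):
    """A clausal relation: a set containing one tuple over VD and R1..Rn."""
    return {(frozenset(VD),) + relations}

def make_RT_nominal(ND, VD, *relations):
    """Variant for languages (e.g. Somali) with a nominal tense domain ND."""
    return {(frozenset(ND), frozenset(VD)) + relations}

def make_R(*RTs):
    """The higher-order relation holding the clausal RTs together."""
    return {tuple(frozenset(rt) for rt in RTs)}

# 'Gandy has been playing for an hour or so':
# VD = {has, been, playing}; one relation links VD to the subject,
# another to the durative adverbial.
rt_matrix = make_RT({"has", "been", "playing"},
                    ("subject", "Gandy"),
                    ("durative", "for an hour or so"))

# A complex sentence contributes one RT per clause, and R relates them,
# mirroring the inter-clausal dependencies discussed for (66)-(70).
rt_embedded = make_RT({"left"}, ("subject", "Daisy"))
r_complex = make_R(rt_matrix, rt_embedded)
```

On this encoding, the Mandarin cases simply swap the member of VD (‘shuāiduán-le’ versus ‘shuāiduán-guo’), which matches the observation that the aspectual difference lives in the verbal form itself.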

All the items in focus are marked in bold. The sentence in (73) illustrates how topic and focus can establish their relations in the same construction. ‘John’ is the topic since it is already in the discourse, and ‘Genelia’ is the focus since it reflects the instantiation of a new piece of information the surprise value of which is much greater. In other words, the occurrence of ‘Genelia’ in the sentence is more informative than that of ‘John’ within the constraints introduced by the discourse context. The material that excludes the focus but may include
the topic (that is, the portion ‘As for John, he went to meet X’) is called the presupposition, because this is what is presupposed to be known in the discourse, whereas the portion excluding the topic with the focus included (that is, the portion ‘As for X, X went to meet Genelia’) is the comment, as something surprising or unexpected and new is introduced in this portion. The example (74), on the other hand, shows that two elements in focus can coexist in the same construction, but this obtains in a contrastive linguistic environment. Note that ‘a car’ semantically contrasts with ‘a gun’, inasmuch as the former is needed by somebody called Peter and the latter by someone called Bob. Thus ‘Peter’ and ‘Bob’ can be the topics against which the focus values of ‘a car’ and ‘a gun’ are determined. Hence this is called contrastive focus. The case in (75) applies a different grammatical mechanism in order to bring a topic into a more pronounced position. This strategy, also called topicalization or the displacement of a (noun) phrase from its canonical position, derives a construction in which the topic rather than the focus is marked. The gap in which the topicalized noun phrase ‘this fabulous book’ is to be interpreted is indicated by a dash. This fronting of a phrase which is actually a topic anyway seems to be a more marked strategy, given that elements can assume the status of topic even without recourse to this kind of fronting. But languages have ways of enforcing certain grammatical mechanisms that determine whether something can be a focus or a topic. The example (76) exemplifies this situation for focus, in that the occurrence of ‘only’ in the construction warrants that some element in its scope must be the focus. Hence ‘vegetables’ is the focus under the scope of ‘only’, which can be assumed to exclude the complement of the set of items that Prax ate in the restaurant.
That is, if Prax ate only vegetables in the restaurant, he did not eat meat or items cooked with meat, cakes, chocolates etc. Even though English, which is known to be a morphologically impoverished language, does not have special markers for topic and focus, information structure mostly being contextually interpreted, there are languages that mark elements that contribute to information structure. Japanese is one such language. The following data show this well.

(77) Taroo-mo hon-o katta.
     Taro-also book-ACC bought
     ‘Taro also bought a book.’
(78) Taroo-wa piza-o tabeta
     Taro-TOP pizza-ACC ate
     ‘As for Taro, he ate pizza.’

(Miyagawa 2010)

As shown above, ‘-mo’ is the focus marker and ‘-wa’ is the topic marker. It may be noted that the focus marker ‘-mo’ has the meaning of ‘also’ as well. As Miyagawa (2010) argues, this particle has a double function in Japanese. Further, Japanese is a language that can also apply fronting to position the focused element in a pronounced location. Cleft constructions and extraposition in English are also other ways of imposing the focus value on an element. Consider the pairs of examples in (79) and (80) below.

(79) a. I hate mushrooms.
     b. It is mushrooms that I hate.
(80) a. I like a dark chocolate.
     b. What I like is a dark chocolate.

There are also well-known diagnostics that can help detect focus in a certain construction. For example, a Wh-question can be used to determine what exactly the focus is. Suppose we say ‘John waited for Mary in the hall’. We can then ask questions such as (i) Who waited for Mary in the hall? (ii) What did John do? (iii) Who did John wait for in the hall? (iv) What happened? If the answer to (i) is ‘John’, ‘John’ is the focus; if the answer to (ii) is ‘waited for Mary in the hall’, this verb phrase is the focus; if the answer to (iii) is ‘Mary’, ‘Mary’ is the focus; and if the answer to (iv) is ‘John waited for Mary in the hall’, the entire clause is the focus. Beyond that, there can be languages in which the word order reflects the placement of topic and focus. Hungarian is such a language, since in Hungarian it is the information-structural demands that determine what is to be placed where. The examples below illustrate this.

(81) Janos Imret mutatta be Zsuzsanak
     John Imre-ACC introduced to/by Susan.DAT
     ‘John introduced to Susan Imre.’
(82) Zsuzsanak Janos mutatta be Imret
     Susan.DAT John introduced to/by Imre.ACC
     ‘To Susan Imre was introduced by John.’  (É. Kiss 1998)

The focused elements are marked in bold, as has been done throughout. In (81) the topic is ‘Janos’ as the sentence is about John, but in (82) the topic is ‘Zsuzsanak’ as it is about Susan. As we carefully examine the pattern in (81–82), we can see that the topic is always located at the front, and the focus is positioned immediately before the verb. In other words, the topic precedes the focus in
the information-structural configuration of Hungarian. Hence the placement of the subject and object co-varies with the placement of the topic and focus, but not vice versa. Thus, in (81) ‘Janos’ is the subject placed at the front, whereas in (82) ‘Zsuzsanak’ is placed at the front but it is the indirect object of the verb. Information structure in this language is thus grammaticalized. To put it in other words, it seems as if contextual properties of information flow, which is mostly non-linguistic, have found a concrete or formalized manifestation in Hungarian. Mental structures that can be specified for information structure in language must reflect the properties of topic and focus as well as their relations. With this in mind, we may define a relation Ri = {(FX, TX)}, where FX denotes either a singleton form (‘John’, for example) which is the focus or a relation for an expression that constitutes the focus (a noun phrase or a verb phrase or an entire clause, for example), and similarly, TX represents either a singleton form which is the topic or a relation for an expression that is the topic. Thus Ri instantiates a mental structure for the categories of topic and focus that constitute a part of the information structure within a construction. To give an example, for the case in (81) Ri = {(Imret, Janos)}. But for cases where either the topic or the focus is absent (including those cases in which topics, which are by nature familiar pieces of information, can be dropped, as in a language like Chinese), we can define a relation Rk = {(ISX, RPC)}, where ISX stands for any information-structural category—either the topic or the focus—and RPC represents a relation either for the presupposition (in the case of focus-only constructions) or for the comment (in the case of topic-only constructions). In order to illustrate this particular case, we can choose a suitable example.
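Before turning to that example, the two schemas just defined, Ri = {(FX, TX)} and Rk = {(ISX, RPC)}, can be given a small computational rendering. This is only a sketch under our own encoding assumptions; the function names and the way forms and relations are represented are illustrative, not part of the proposal.

```python
# Toy encoding of the information-structure relations.
# Ri = {(FX, TX)}: FX is the focus (a form or a relation), TX the topic.
# Rk = {(ISX, RPC)}: one information-structural category plus a relation
# for the presupposition or the comment; RPC may be empty.
# The encoding below is an assumption made for illustration.

EMPTY = frozenset()          # stands for the empty relation (the symbol ∅)

def make_Ri(focus, topic):
    """Topic-and-focus mental structure: a set holding one (FX, TX) pair."""
    return {(focus, topic)}

def make_Rk(is_category, rpc=EMPTY):
    """Mental structure when only one of topic/focus is present."""
    return {(is_category, rpc)}

# Hungarian (81): focus 'Imret', topic 'Janos'.
ri_hungarian = make_Ri("Imret", "Janos")

# 'It is dark': focus 'dark', no topic; RPC is a relation for 'it is'.
rk_dark = make_Rk("dark", ("expletive", "it", "is"))

# Whole-sentence focus ('John waited for Mary in the hall'): RPC = ∅.
rk_sentence = make_Rk(("clause", "John waited for Mary in the hall"))

print(ri_hungarian)          # -> {('Imret', 'Janos')}
```

The topic-drop case (as in Chinese) falls out directly: the focus occupies the ISX slot of make_Rk and the topic is simply never supplied.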
Suppose someone utters ‘It is dark’ when entering an unknown room; it can be assumed that the adjective (phrase) ‘dark’ is the focus. The sentence does not have any topic, precisely because an expletive (such as ‘it’ or ‘there’) is semantically empty and hence cannot have any referent. In such a case, the relevant relation individuating a mental structure will be Rk = {(dark, RPC)}, where RPC is a relation for ‘it is’.10 If, on the other hand, a whole sentence is the focus—and we have encountered such an example above (‘John waited for Mary in the hall’)—the ISX in Rk = {(ISX, RPC)} will be a relation for ‘John waited for Mary in the hall’, and RPC will be ∅. The formulation can therefore be easily applied to Chinese, a topic-dropping language. Overall, the formulation of mental structures for information structure is flexible enough to express various properties of focus-topic interactions.

10  It is possible to have the whole sentence ‘It is dark’ interpreted as a focus, and some stage which may well be conceived of as a setting requiring contextualized notions of location, time etc. can be reckoned to be the topic (see Erteschik-Shir 2007). Thus for the present sentence the stage would require time and location. If we adopt this proposal, what we then have is this: Ri = {(FX, TX)}, where FX is a relation for the entire sentence, and TX is a relation that defines parameters of a stage in which ‘It is dark’ is informationally relevant.

In this section, we have presented linguistic data from within and across languages in order to show the viability of the notion of meaning relations instantiating mental structures distilled from linguistic structures. We have considered various linguistic phenomena to demonstrate that this is indeed possible. Wherever it has been possible, relevant suggestions, if sketchy, concerning the realization of such mental structures have been offered. The next section will explore this issue more deeply and then develop an account of the mental structures of plants, other organisms, and animals. Thus varieties of mental structures will be brought into correspondence with forms of different possible minds. This will also help spell out the mental structures that can be formulated for machines. This task will be done in the next chapter (Chapter 4), which will place this in the wider context of discussion on machine cognition.

3.2 Mental Structures and the Forms of Possible Minds

The nature of mental structures can illuminate the forms of possible minds, inasmuch as it can be reasonably believed that mental structures are constitutive of the structural or formal organization of minds. But there is no denying that this matter is fraught with its own theoretical and empirical difficulties. The ascription of mentality or a mind-like character to non-human entities invites the vexing question of whether other animals and organisms have minds or possess cognitive properties. On the one hand, it can indeed be held that the process of life itself can be equated with cognition—that is, the process or the property of living constitutes what can be reckoned to be cognitive ­(Maturana and Varela 1980; Thompson 2010). On this view, the formal and organizational properties, relations and processes that mark what is cognitive are to be taken to be fundamental to life. Thus the interactions in which a cognitive system engages in the process of cognition and in maintaining itself become manifest in the processes of acting and behaving in the world with a reason or goal. But, on the other hand, we must admit that there is a huge gulf between cognition and behavior/actions—something that cannot be easily bridged by simply postulating axiomatic principles relating the two. Approaching cognition by gradual approximations to accommodate the behavioral processes of an
organism does not guarantee that there is something really cognitive we are tapping into. If it were so, we would be inclined to say that meandering rivers, moving glaciers, rotating wheels, bacteria, fungi, viruses, artifacts all evince qualities fundamental to cognition, just because these entities exhibit a certain form of behavior in common once we fix the notion of behavior as a type of interaction with the outer environment. Note that we are not denying that some entities from the list just mentioned have cognitive abilities and capacities. Rather, what we are gainsaying is the statement that the cognitive abilities and capacities of these entities, if any, will necessarily be penetrated by means of an inspection of the behaviors of these entities once the notion of behavior is appropriately fixed. Because an ant and a bacterium can under certain circumstances behave in a similar manner, this must not license the conclusion that the underlying cognitive mechanisms of both organisms are also similar. In particular, capacities for perception, memory, learning, and categorization are thought to be cognitive capacities. But this is not the only way of understanding how cognitive marks are to be discerned or detected in an entity whose status as an entity that can have cognitive abilities is in dispute. It is arguable that cognitive capacities can merge into low-level biological processes such as metabolism or immunological processes, given that the capacities that allow organisms to control their own behavior so that organisms can cope with the contingencies of environmental complexity can be considered to be cognitive capacities (see Godfrey-Smith 2002). If so, it is not clear how a well-motivated distinction can be drawn between possessing mind-like properties and not possessing mind-like properties. 
Perhaps more crucial to this issue is the problem of sketching out a principled way of distinguishing between having a mind and exhibiting cognition, if one urges that having a mind and exhibiting cognition cannot be one and the same thing. At this point, it should be noted that our purpose is to show that mental structures are constitutive of the structural or formal organization of various kinds of minds in other animals and organisms. It is certainly true that this contention carries the presupposition that other animals and organisms can have minds or at least something that approximates to a mind-like thing. And if that is the case, this position may be contested on the grounds that minds or cognitive properties are handed out to a constellation of animals and organisms without seriously making the relevant distinctions and divisions that can make latent the dichotomies which are considered evident. Be that as it may, it needs to be emphasized that the issue of whether other animals and organisms have minds or cognition cannot be prejudged by stipulating that non-human forms of minds or cognition are not possible in nature. Plus no amount of delving into the capacities for perception, memory, learning, and
categorization in other animals and organisms by way of careful and direct observation of their cognitive behaviors can decide whether other animals and organisms possess cognition but not minds (or even vice versa) (see also, Shettleworth 2012). Maybe the distinction between having a mind and exhibiting cognition, if there is any, is the product of our theoretical artifacts. Hence it is more judicious on many counts to attempt to make logically valid inferences about the formal organization of diverse kinds of minds in other animals, organisms by verifying the evidence provided by various capacities and abilities of such organisms and animals, if there are any, regardless of whether or not such entities are judged to possess (pre)theoretically categorized denominations like ‘mind’ or ‘cognition’. The minimal assumption that we make is simple: we can check if other animals and organisms (and perhaps also machines) possess mental structures by scrutinizing the evidence for various capacities, actions and abilities of such entities, if there are any, provided that the capacities, actions and abilities of such entities display something that descriptively and perhaps explanatorily fits the ascription of a certain mental structure or a range of mental structures to them. What this implies is that our inquiry has to go along with the condition that the ascription of a certain mental structure or a range of mental structures must have the descriptive and explanatory power to account for what the capacities, actions and abilities of other animals and organisms (and perhaps also machines) reveal. Overall, the fit between the ascription of mental structures and what the capacities, actions and abilities of other animals, organisms evince must come out logically as well as empirically. 
Before we go on to trace mental structures in organisms other than human beings, it is necessary to mention that meaning relations as formulated within the context of the proposed conceptual formalism are what mediate, but are not in themselves, sign relations within the constructed inner world in the sense of Uexküll’s (1921, 1982) Umwelt of organisms. Mental structures are not thus in themselves sign relations because a sign stands in a relationship with something other than itself, whereas mental structures are interpreted structures that do not turn on the relation that obtains between a sign and what that sign stands for. Interpreted structures do not engage in another process of interpretation precisely because they are already saturated. They can, of course, mediate sign relations simply because mental structures mediate the manifestation of these sign relations in the sense that they are the part of the inner constructed models in having a structure that is distilled from the signals or needs on the one hand and customized by the effects or responses on the other. Here the relevant sign relations obtain between the signals or needs and the effects or responses (Kull 2000). It turns out that with this conceptual clarification of what is perceived inside organisms and what counts as an output
following from what is perceived, we can include plants within the ambit of our framework, although Uexküll did not, strictly speaking, consider plants to construct or possess the Umwelt. In a nutshell, mental structures serve to make the boundary conditions for biosemiosis obtain when a multitude of signals in animals and organisms including plants are transduced into semiotic forms within the bodies of animals and organisms. More particularly, mental structures mark the boundary conditions for biosemiosis because any sign relation in organisms has to stem from some internal structures which give rise to the very sign relation by virtue of determining the form of signs and what they stand for. For instance, in a certain organism a mental structure that specifies the internal state of sensing some food object in the surrounding environment helps constitute the sign of a hunger state and the response the sign can stand for or give rise to. The relevant sign relations in question can be constituted by complex one-to-many and/or many-to-one patterns of relations between signals or needs and effects or responses in animals and other organisms. But one may be interested in determining what it takes to implement a mental structure in an organism or in something that can bear a mental structure. This question is crucial since we have not said anything substantial on this thus far, although this has been briefly discussed in Chapter 1. Since in the present context it has been emphasized that the notion of substance-independence of minds must appeal to a conception of non-unique dependence, as pointed out in Chapter 1, mental structures, too, cannot be linked to any single form of substance. Rather, given appropriate conditions, mental structures can be instantiated in many types of substance, biological or non-biological.
Some of these conditions include the criterion that for a given substance the range of mental structures at hand must enter into a consolidated dependence relation with respect to that particular substance. Thus, for the biological substance it has been stressed that mental structures must be instantiated either as internal states in the whole body of the organism or as reified structures in the nervous system. Unless this holds, the case for non-unique dependence cannot be made. Plus such conditions will also consist in the requirement that mental structures be segregated from what are usually recognized as representations so that mental structures do not turn out to be stabilized or fixed structures in organisms. This is so because representations are by their very nature dislodged from what they represent, and hence they can be isolated from the context of their origin and also misrepresent. But mental structures in the present context can even be transient or fluctuating bodily states locked to the context from where they originated. But, also, mental structures may be segregated from the sensory-perceptual interfaces as they are gradually solidified and thus reified. This indicates that mental structures may either be locked
to the features or things with which they co-vary or remain disconnected from the context or the entities from which they arose in the first place. Thus conceived, mental structures are more than mere representations. But then, if mental structures can be either representations or non-­ representational internal states, what then prevents mental structures from being assigned to artifacts like thermometers or calculators? The worry is resolved once we realize that the assignment of mental structures is constrained by ethological or generally contextual considerations of the behaviors and other abilities of the organisms or potential systems concerned. Although mental structures themselves are not, strictly speaking, so constrained, it is the ascription of mental structures to arbitrary entities that is constrained by such considerations. More importantly, mental structures in their intrinsic sense are interpreted structures—structures that are pre-interpreted within the biosemiotic complex of an organism or system. Arbitrary artifacts/systems like thermometers or water pumps cannot be said to possess mental structures internally pre-interpreted in the same way. Mental structures are biosemiotically pre-interpreted inside the body of an organism or creature exactly when an internal state within the organism (or a system) captures or encodes the contents of a meaning relation that individuates the given mental structure. Thus, the mental structure specifying the presence of some danger, for instance, in a species must capture the contents of some meaning relation relativized to the (bio)logical context of the danger for the species. It is by virtue of being interpreted this way that mental structures can activate sign relations—the sign relation between the perception of the dangerous entity or property and the desirable response being such a case in the present example. 
Unless mental structures are biosemiotically pre-interpreted this way, what meaning relations individuate cannot even be supported by, or embedded within, the bodily context of organisms (which include biochemical and other physiological processes in the case of a biological being). Interpreted this way, mental structures are biosemiotically pre-interpreted when the internal mechanisms and body parts of an organism (or system) support and embed meaning relations individuating mental structures. Note that this characterization is broad enough to include machines or some artifacts. But what does or can make the difference in the case of arbitrary artifacts is how mental structures so interpreted within the electrical circuits of the system create further (‘bio’)semiotic regularities and possibilities or law-like contingencies involving a chain of complex sign relations. This issue will receive a nuanced elucidation in Chapter 4 where the issue of machine cognition (especially in digital computing machines) will be dealt with.

Interestingly, one significant upshot of the whole discussion above is that the considerations put forward easily line up with the sensory-perceptual-conceptual continuum that can be related to mental structures in a more systematic way. As we have observed, mental structures are meaning relations, and by virtue of being relations per se, they have the granularity appropriate for different levels of ‘scaling’. That is, simplex mental structures (which do not involve an embedding of relations) can in general approximate to sensory-perceptual representations that are extracted and then reified over the lifetime of individual organisms, whereas more complex mental structures, in involving deeper levels of embedding, can approximate to conceptual representations. Now one may wonder whether all simplex mental structures are alike in approximating to sensory-perceptual representations. The answer to this would be that it need not be so. Different simplex mental structures will be ethologically and also biologically different in different contexts of diverse creatures, and insofar as this is so, some simplex mental structures may be more removed from immediate sensory-perceptual consequences even though they are naturally tuned to sensory-perceptual representations. An example could be the recognition of one of the conspecifics as an arbitrary or regular member of the same kind—which may not have immediate sensory-perceptual consequences because it may not demand immediate perceptions/emotions of danger or fear. On the other hand, the more complex mental structures become, the more geared they become to assume the form of what we usually recognize as conceptual representations. Conceptual representations originate from sensory-perceptual representations but are gradually divorced from them as they form categories, kinds, universals etc. of a higher order.
Now it becomes immediately clear that the logical structure of embedded mental structures fits the structure of categories, kinds, universals etc., since such mental structures can no longer deal with instances or particulars tied to first-order representations. This does not, of course, carry the presupposition that sensory-perceptual representations cannot form kinds and universals—many sensory-perceptual representations go on to constitute clusters of representations in a category, as in categorical perception. But the point to be driven home here is that at least some, if not all, conceptual representations are distinguished by their very character of being removed from sensory-perceptual representations in not being selectively sensitive to instances or particulars. The conceptual representation of tallness is one such case—while a group of objects having the height of a perceived value can give rise to sensory-perceptual representations true of all the members belonging to that group, the representation of tallness relative to that group of objects cannot be a matter of a
straightforward extraction of the features of height from the group of objects or even from the individual objects concerned. Overall, the formalism of mental structures ensconces the sensory-perceptual-conceptual continuum in the system of its structural possibilities. To come back to the relation between the ascription of mental structures and the capacities, actions and abilities of other animals or organisms, it is worthy of note that any existing accounts that touch on the issue at hand are restricted to describing, much less explaining, various behavioral abilities and capacities in organisms other than humans. The current proposal aims to complement, rather than replace, these accounts by developing a richer formalism of mental structures, which are a sort of ‘intermediary’ between behavioral outputs that are observable and the latent capabilities which are manifestly internalized. Thus the present formulation advances a wholly new and radical hypothesis that furnishes an enriched description of the internal structures that mediate the transformation of inner abilities into behaviors or vice versa. This is what the present account attempts to do, and in setting this as a goal, it marks a departure from almost all existing accounts of ‘natural’ cognition understood in its substance-independent sense. At this juncture, it must be borne in mind that the selection of different species or organisms and creatures will by no means be exhaustive; rather, representative examples from across a spectrum of different kingdoms will be chosen in order to illustrate the relevant mental structures to be specified for them. This will permit cross-selection of mental structures on the one hand and systematic comparisons on the other. The order of presentation will be from simpler organisms (across kingdoms) to more complex creatures (within a given kingdom). With this in mind, we proceed to examine the case of the simplest creatures such as amoeba or paramecium.
Both the amoeba and the paramecium are single-celled eukaryotic creatures found in water. These simplest creatures thus do not have any nervous systems, but this need not warrant the conclusion that the absence of nervous systems equates to the absence of cognitive capacities. So, putting this question aside since we have already addressed it, we can consider what these creatures are capable of doing. Amoebae move by using what are called pseudopodia, which resemble limbs when the cytoplasm of an amoeba is extended in space for movement. Amoebae survive on various species of plankton, which they find in the nearby surroundings. In order to take in planktonic organisms, amoebae usually move towards the targeted item(s) so that they can engulf and then drag the item(s) inside for digestion. The most important part of this is the act of sensing that amoebae have to engage in. How does an amoeba sense its food in its immediate environment? And when it does sense the food, how does it extend its cytoplasm, forming a shape

Possible Minds from Natural Language


that fits the shape of the food item so that the food item can be grabbed in the right way? Clearly, these actions require at least a minimum form of sensory-motor coordination, although amoebae do not possess the senses or any kind of motor system found in many animals. Similar observations also hold for paramecium cells, which move by means of their cilia, tiny hair-like projections that surround the cell body. It is these cilia that paramecium cells use to grab bacteria, algae etc. But again, paramecium cells execute the task of detecting bacteria and algae in their close surroundings effortlessly. Additionally, paramecium cells, while swimming in the water, can detect temperature variations and then change swimming directions on the basis of such detection. Such capacities may be genetically coded in the cell bodies of these creatures, given that there are certain protein channels and/or receptors in the cell membranes of these creatures that can detect the presence of certain elements that are necessary for survival. But the real act of sensing and taking the relevant actions on the basis of that act of sensing requires regular interactions with the environment. All of this cannot be pushed inside the genome of such creatures. For example, in any given situation what an amoeba or a paramecium cell senses or will sense cannot be pre-decided or pre-coded. The internal states of these creatures change with the varying circumstances in which they live or into which they happen to fall. Given these considerations, it is not unreasonable to believe that certain entities (food items, for example) or environmental features such as temperature or lighting conditions are somehow registered within the cell bodies of these creatures. Thus there is reason to assume that this registration of something in the nearby region leads to a kind of indexation of that something.
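The 'registration and indexation' capacity just described can be given a toy rendering in code. The sketch below is purely illustrative: the class and method names are our own assumptions, and nothing in it is meant as a claim about the actual chemistry of amoebae or paramecia.

```python
# Toy sketch of registration-and-indexation in a single-celled creature.
# All names here (Creature, register, indexed) are illustrative assumptions.

class Creature:
    def __init__(self):
        self.indexed = {}   # index -> the registered 'something'
        self._next = 0

    def register(self, something):
        """Register a sensed entity or condition and return its index."""
        idx = self._next
        self.indexed[idx] = something
        self._next += 1
        return idx

amoeba = Creature()
food = amoeba.register("plankton")          # a nearby food item
cold = amoeba.register("temperature-drop")  # an environmental condition
```

On this picture, the only commitment is that *something* gets bound to an internal index; nothing requires that many items be held simultaneously or that the index persist as a memory.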
Here the supposition is not that if something is registered and thereby indexed in such creatures, many items can also be simultaneously registered and indexed or kept in a kind of memory. The only thing that we can say on the basis of our understanding of the abilities of these single-cell creatures is that something, say, a variable X is registered and thereby indexed in such creatures. Without this minimum capacity such creatures cannot even move towards the food items that are targeted, much less grab them. This applies equally well to E. coli and other bacteria. Therefore, it can be postulated that these creatures must have a mental structure that reflects the presence of something. A meaning relation found in existential constructions can serve to individuate this mental structure. For the present purpose, this can be schematized as Something-Exists = {(is, X)}, where X stands for that something—whether a food item or some environmental condition. A clarification is in order here. Note that the mental structure instantiated by the relation Something-Exists is not a representation, for it does not have any semantic properties typical of representations. Nor is it a thought


because thoughts may have a truth-conditional import. As has been discussed in Chapter 1, the mental structure specified for single-cell creatures and other bacteria is what is registered in such creatures when they sense the presence or existence of something around them. Since it is simply a structure that evinces a mental character typical of internal states, it does not have a symbolic quality. Rather, the description of this mental structure in terms of a meaning relation Something-Exists can have a symbolic import. But it is only in a trivial sense that one can hold that our descriptions—and as a matter of fact, all descriptions anywhere in any kind of discourse—are symbolic. The clarification can also help formulate kinds of mental structures for other kinds of minds.11

We shall now turn to the capacities and abilities that plants exhibit. Plants are autotrophic organisms because they construct their own food by taking inorganic materials (light, for example) from nature and converting them into organic elements. Traditionally, plants are understood to be inert living entities that exhibit nothing other than plain stimulus–response regularities. Stimulus–response regularities are not generally thought to bear the mark of what can be called cognitive, because these regularities are entirely describable in mechanical terms. However, the supposition that plants do not have cognition is an offshoot of the human-based theory of cognition, for it has not yet been demonstrated that plants do not have cognition. Instead, there is strong evidence that plants have something akin to the nervous structure in animals, although the architecture of the nervous structure in plants could be substantively different. Bose (1902, 1906) was perhaps the first to demonstrate that plants give off excitatory electric impulses when they are stimulated by electric currents, and this was found in virtually every part of a plant.
Furthermore, the protoplasmic excitation response patterns were found to vary in different types of plants depending on the appropriate strength of the induced currents. For example, leaves of Biophytum plants, a genus of herb-type plants, were found to generate very feeble electric impulses, while leaves of cucumber trees were found to produce strong responses. On the basis of plenty of evidence of this kind, Bose (1926)

11 As highlighted above, it must be borne in mind that mental structures in the present formulation are not representations, on the grounds that representations by their very nature are decoupled or distant from what they co-vary with both causally and statistically (Orlandi 2014). Mental structures may or may not be decoupled/distant from what they co-vary with, since many mental structures, especially those in the smallest organisms, are like tracking states in remaining tightly linked to what they co-vary with. However, mental structures can be different from mere tracking states, in that mental structures can be re-encoded, preserved and (re)evoked within various creatures, thereby rendering them largely segregated from what gave rise to the mental structures in the first place.


demonstrated that it is the phloem tissue in plants (responsible for the transportation of the nutrients which photosynthesis yields) that is the conductor of electricity, just as nerves are the conductors of electricity in animal bodies. Phloem tissues are tube-like tissues that form bundles flowing from the stem into the leaves. He argued that this network of phloem bundles throughout the inner region of plants must be taken to be the nervous system of plants. The issue of whether the network of phloem bundles in plants really is a nervous system is not yet resolved, partly because the question of how to relate nervous structures of such kind to cognitive processes and/or capacities is critical to understanding such processes and capacities. But from this it does not follow that plants lack cognitive capacities. In fact, Trewavas (2003, 2004) has strongly argued, by drawing upon a plethora of pieces of evidence from experimental research in botany, that plants exhibit all the necessary cognitive capacities such as memory, learning, spatial ability etc. For instance, plant roots amass signals from different sources about the soil quality, the distribution of water, calcium or nitrate, the presence of gases like carbon dioxide or even nitrous oxide etc. The roots of plants are also sensitive to gravity and change their projecting trajectories based on the nature of the path through which the roots project—in many cases roots project horizontally rather than vertically when an obstacle is sensed. Environmental signals such as temperature, light and touch are detected by the shoots and leaves of plants and are integrated in the root as well. In fact, Charles Darwin believed that the root is the brain of plants and that the erect growing structure of the stem along with its leaves is the body.
As Trewavas (2003) has shown, exposure to moderately low temperatures or a reduced water supply (in drought, for example) can induce changes in the plant that are not just sensed but also retained as a memory for later environmental challenges; it is hard to imagine how this could be done in plants just by means of stimulus–response regularities, without some form of retention and retrieval capacities, whatever the actual mechanism is. Beyond that, recent research also vindicates Bose's results; for example, Volkov et al. (2009) have found that the trapping structure of the Venus Flytrap, which traps insects attracted to it, is sensitive to electrical stimulation. They argue that the Venus Flytrap must have a kind of short-term electrical memory that helps it track the varying stimuli in its close spatial setting that the trap can grab. What is, however, a moot point is whether plants can be deemed to be individual wholes if plants really show cognitive capacities (see Firn 2004). Although there are various local mechanisms such as auxin canalization supporting leaf venation, light-induced development of (parts of) plants and also flowering that depend on the precisely modulated sensing of light and


temperature conditions, there is reason to believe that these processes are driven by at least low-level sensory-perceptual-motor processes which may be coordinated throughout the whole of a plant. For such processes, however local they may be, are sensitive to the spatial layout of the environment in which they obtain in terms of plant-environment interactions, and they cannot be entirely genetically determined even though they certainly are genetically modulated.12 This is mostly true of cognitive processes. Tropism, that is, plant movement, is another source of evidence for the kind of sensory-perceptual-motor processes that are distinctively unique in plants. Heliotropism, or rather phototropism, in sunflowers is a familiar example. The direction of sunlight and its brightness are tracked by sunflowers. This type of tracking can be executed online without the need for any kind of off-line manipulation of the perceived information. Calvo and Keijzer (2009) argue that leaf heliotropism in plants is more complex in requiring representations of features of the outer environment. For example, the leaf laminas of Cornish mallow (Lavatera cretica) exhibit anticipatory behavior concerning the sensing of the direction of sunrise. In addition, the laminas also keep facing the sun in the same direction for a few days even in the absence of the sun, that is, even if the laminas are not tracking the sun. Or, for instance, Cuscuta plants have been experimentally observed to grow to parasitize the host plants that offer greater amounts of food resources rather than just any hosts across the host species, and this depends on whether the right host has been found in the right order, which provides strong evidence for foraging in plants (see Kelly and Horning 1999; Koch, Binder and Sanders 2004). Clearly, these actions demand some degree of sophistication on the part of the sensing/signaling and recognition systems built into the plant (Chamovitz 2012).
The interesting behavior of touch-me-not plants (Mimosa pudica) in showing sensitivity to any kind of touch by contracting their leaves parallels animals' muscle contraction behaviors. Of course, this can be described by means of complex chemical actions that involve calcium concentrations in cellular processes. What is crucial here is that the mechanism responsible for this touch sensitivity is partly driven by the internal circadian system that keeps track of time, since this plant also contracts its leaves as soon as twilight begins

12 Such processes at the molecular level are certainly genetically coded in the sense that genes code for the specific receptor proteins. But genetic modulation is a weaker concept in that it applies to any physiological process that arises out of codependent genetic contributions via genetic coding. For example, developmental (cognitive) disorders are genetically modulated, but they are not fully genetically determined, as these disorders have complex gene-environment correlations.


to appear. Besides, this mechanism makes the relevant categorization between its own leaves and stalk and the different types of things that make contact with the leaves of the plant—a reflex of the self-other distinction. Add to that the complex predatory behaviors of plants such as orchids, pitcher plants and the Venus Flytrap. These plants prey on different kinds of insects and arachnids. But the important point is that there is some degree of sophistication in the categorization capacities of these plants, which certainly discriminate between raindrops or the leaves of neighboring plants and the prey. If it were not so, the predatory behavior of these plants would be entirely ineffective and energy-consuming. Clearly, this is not the case. Consider the case of the Venus Flytrap, for example. It has hairs on the edge of the trapping flaps which need to be touched twice within 30 seconds for the trap to close, and this mechanism betrays the presence of a short-term memory (see Shepherd 2012). So the capacity to differentiate between a raindrop and the prey is mediated by this minimal form of memory mechanism in the Venus Flytrap. On the basis of these considerations, we may now proceed to explore the mental structures that can exist in plants. Plainly, the mental structure individuated by the relation Something-Exists is present in plants, since plants detect the presence of something. It can be a pollinator or another tree or the sun or any environmental variable (temperature, soil condition, light etc.) or the prey. What is noteworthy is that the part of a plant that catches light projects upwards in the centrifugal direction, and that the root which absorbs water by penetrating into the soil projects downwards. Hence there must be at least two things that plants sense and interact with the whole time.
A mental structure that can express this can be specified by the relation Conjunction-of-Things = {(and, X, Y)}, where X and Y are two variables that can stand for any two things or conditions. In a normal condition, this relation may look like Conjunction-of-Things = {(and, light, water)} or like Conjunction-of-Things = {(and, sun, soil)}. For sunflowers, for example, this relation will be Conjunction-of-Things = {(and, sun, water)}. Note that the representational significance of the symbols in 'Conjunction-of-Things' is for us, but not for the plants. It should be borne in mind that we are not making the claim that any representational correspondence between these symbols and what they stand for is built into plants. Rather, what is stressed is that a mental structure instantiated by the relation Conjunction-of-Things is sensed in plants, especially when two things are sensed together. These two things can be any two entities or conditions which are incorporated in a meaning relation which is just represented by the symbols in 'Conjunction-of-Things' for our understanding. What 'Conjunction-of-Things' reflects is essentially an emergent implicit structure within plants (and plausibly in other organisms as well) in the sense


that this structure does not gain recognition as an emergent structure within plants, although the structure itself in its emergent form is decidedly sensed within plants. Nor does it achieve a separate level of encoding that is discerned by plants as such to be dissociated from the very elements (water, sunlight etc.) that gave rise to them. This contrasts with explicit emergent structures which assume a system of encoding (in linguistic or other representational formats) distinct from the lower-level structures and/or elements which give rise to the emergence in question (see for details, French and Thomas 2015). As a matter of fact, for plants, the two entities or conditions are what plants actually sense, and the relation Conjunction-of-Things instantiating a mental structure merely incorporates these sensed elements in an ordered triple. Apart from that, the relation Conjunction-of-Things is not intrinsically a 3-ary relation; it can indeed be expanded into an n-ary relation when more than two things are sensed together by plants (here n > 3). This is crucial for plants such as Cornish mallow. Another mental structure that is more vital from our perspective is the one which encodes the quality or feature of things such as the bright feature of light, the soil quality, the moisture of the ambience, the heat or coldness of temperature or the location of something. So we may formulate a relation that instantiates this mental structure. Let's call this relation Thing-with-a-Feature = {(X, feature)}, where X stands for an entity. Thus, for example, if the light that a plant senses is bright, the relation will look like Thing-with-a-Feature = {(light, bright)}. Likewise, if a plant senses a danger in the form of herbivores or pests, the indexing of the danger is vital for the plants, and in this case the relation can be written as Thing-with-a-Feature = {(X, dangerous)}, where X can stand for the entity that poses the danger concerned.
If it is an herbivore, then this can be specified as Thing-with-a-Feature = {(herbivore, dangerous)}. In fact, the signaling mechanisms that fend off such dangers in plants can be quite complex. For instance, acacia plants sense tannin in the leaves of the neighboring plant(s)—which is a sign of dangers posed by herbivores (see Mithöfer and Boland 2012). In such situations, the location of this sign, or rather the origin of this sign in the neighboring plant(s), is crucial so long as the sign is exploited in order to index something as dangerous. What is required for this aspect is Thing-with-a-Feature = {(X, loc)}, where X stands for whatever acts as a sign for the plant and loc is the place where the sign, or rather the index, issues from. For example, tannin is the sign that is found in the neighboring plant(s) of the same species; so this relation will then be Thing-with-a-Feature = {(tannin, conspecifics)}. The mental structure individuated by Thing-with-a-Feature can also serve to characterize the inarticulate representation that plants such as Cornish mallow employ. Finally, a


mental structure that captures the relevant aspects of categorization in plants is also necessary. A first step towards categorization in plants is the distinction between the self and anything other than the self. And the next thing is a fairly advanced form of categorization that is primarily found in predator plants such as the Venus Flytrap. Both these capacities require a mental structure which can be individuated by the relation Different = {(X, Y)}, where X and Y stand for two entities or things. But it may be observed that this is a more general relation for detecting some difference between any two entities, which is perhaps underspecified for the special difference between the inwardly pregiven self and anything other than the self. Hence we may specify a special unary relation Different-from-Self = {(X)}. A touch-me-not plant may, for example, have the mental structure Different-from-Self = {(object)}, when any object other than itself touches it. It is reasonable to think that the mental structure instantiated by the relation Different is grounded in the more primitive mental structure instantiated by the relation Different-from-Self. Thus it is likely that the former has evolutionarily emanated from the latter. These are some of the mental structures that can be deemed to be necessarily viable in view of the (bio)logical considerations that have been marshaled. However, this is not to say that other mental structures cannot possibly exist in plants. We leave it open for further exploration. Moving away from plants, we shall now look at invertebrates, which are heterotrophic creatures. This means that they feed on plants and/or other heterotrophic creatures and generate or release inorganic materials (such as carbon dioxide or water) from the metabolized food. Invertebrates include insects, mollusks, worms and also some marine animals such as starfish.
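Before moving on, the plant-level relations discussed above (Something-Exists, Conjunction-of-Things, Thing-with-a-Feature, Different, Different-from-Self) can be rendered, purely for illustration, as Python tuples and predicates. The encoding below is our own sketch, not part of the formalism itself; the helper names, the SELF token and the sample values are assumptions.

```python
# Illustrative encoding of the plant-level mental structures as tuples.
something_exists      = {("is", "light")}             # Something-Exists = {(is, X)}
conjunction_of_things = {("and", "light", "water")}   # Conjunction-of-Things
thing_with_a_feature  = {("light", "bright"),         # Thing-with-a-Feature
                         ("tannin", "conspecifics")}

SELF = "self"  # an assumed token for the plant's own pre-given self

def different(x, y):
    """Different = {(X, Y)}: some difference between two entities."""
    return x != y

def different_from_self(x):
    """Different-from-Self = {(X)}: X is anything other than the self."""
    return x != SELF

def features_of(structure, entity):
    """Collect the features recorded for an entity in Thing-with-a-Feature."""
    return {f for (e, f) in structure if e == entity}
```

Note that, as the text stresses, nothing here represents anything *for* the plant; the symbolic reading of these tuples is for us.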
Thus far we have devoted our attention to characterizing inarticulate mental structures in plants which may qualify as representations in some sense. However, there is an implicit danger involved in treating mental structures, in a general sense, as representations, for representations require a correspondence between the thing that represents and what is represented, and this correspondence has to be interpreted. Millikan (2004) has argued that mental representations are intentional representations on the grounds that perceptual and cognitive systems that produce certain representations have been selected for producing representations conforming to rules and regularities to which interpreters adjust. She adverts to the Gibsonian theory of affordances (Gibson 1979) while explicating the role of mental representations. Importantly, mental representations, as she maintains, can have two roles—(i) identifying or recognizing an object or a class of objects and (ii) determining what to do with it or how to use it or how to behave with respect to it. The second role derives from the concept of affordances—possible uses and purposive actions that objects afford or allow for (a cup affords holding it,


for example). Mental representations in this sense can be articulate or inarticulate. It is clear that the mental structures specified for plants are inarticulate, but whether or not they are representations for plants is something that has to be determined by the behavioral and cognitive roles for plants that these mental structures play. On the other hand, mental representations in invertebrates can certainly be articulate, as many invertebrates (such as flies, honeybees or locusts) produce audible signals or vocalizations. What is perhaps harder is the specification and determination of the differences, if any, between the mental representations employed in one species and those employed in another. On the other hand, tendencies to regard mental representations across various species simply as the data structures which information processing mechanisms operate on and manipulate end up trivializing the cognitive character of representations in animals and other organisms. This also runs the risk of blunting the possibility of having well-grounded and differentiated cognitive descriptions of certain phenomena that are otherwise describable in more generic terms. Be that as it may, it is possible to strike a balance between these two conflicting positions. The present approach that seeks to explore the kinds of mental structures other animals and organisms may possess both avoids overly general descriptions adopted by the information processing approach and appreciates some of the potential differences in the contents of the representations other animals and organisms may have. One of the most important issues surrounding the mental lives of other animals and organisms, especially invertebrates, concerns the difficulty in making a choice between an account of the apparent cognitive behavior of other animals in terms of stimulus–response conditioning and another account which appeals to mental representations. 
Note that it is not necessary that an account which appeals to mental representations has to import all the characteristics of what are taken to be mental representations from the human perspective (see also Carruthers 2009). For animals, Millikan's (2004) proposal grants only those mental representations that are locked to the present or at most to the imminent future. Nor are these mental representations said to involve an explicit encoding of the animals' own roles in their own inner representations. For example, a honeybee, when collecting nectar from flowers with other honeybees, is not supposed to say to itself 'Many honeybees have come along with me, but many others who have not are home'. In fact, most invertebrates deploy a range of sensory-motor strategies that betray the presence of perceptual capacities in such animals in varied circumstances that demand something more than a mere association of various stimuli and their related target responses. Spatial exploration is one of the capacities that can be reckoned to be fairly developed in most species of invertebrates. Most


invertebrates including flies, bees or ants have to learn the spatial relations of one object to another object or many other objects that are located in their ambience. Foraging activities are certainly a part of the daily activities that invertebrates engage in, because animals have to collect food in order to live and the sources of such food are not always found in areas nearby. Hence recognizing the spatial relations of the location of food to the nest is vital, insofar as the goal of the animal in question is to bring the food or a portion of it back to the nest. But this may not always be sufficient for invertebrates, chiefly because recognizing the spatial relations of a location to another object or to any other location does not guarantee that the animal in question will be able to successfully guide its moving or flying behavior towards the target or back to the nest. This is because many environmental contingencies or changes can blur or obliterate certain distinctions in spatial relations on the route that have been memorized. In order to overcome this challenge, many animals also identify and figure out the spatial relation of outside objects to themselves. As a matter of fact, the waggle dance of bees is a suitable example that demonstrates this well in the sense that the dance not only guides other bees towards the location of nectar but also tells bees how to go in the right direction. That is, the waggle dance serves to represent two things—the location and an appropriate action that needs to be accomplished. Hence these representations have both 'indicative' and 'imperative' characteristics (see Millikan 2004). What is important to note is that the procedural aspect of bee dance, insofar as it is deemed to represent something which is perceived by bees, requires the spatial relation of bees to outside objects. Without this, bees will not be able to calibrate their flight with respect to the direction in which they are supposed to go.
In fact, bees can also pick up shortcuts between a location already known and the location which is indicated by a waggle dance (Menzel et al. 2005). Moreover, bees can also detect the correct homing direction even if they are placed at the wrong place (Menzel et al. 1998). In another sense, this spatial capacity of honeybees is related to the general spatial exploration function of invertebrates such as bees, flies, wasps etc. Note that any exploratory activity of flying invertebrates has to be tuned to the perception of the external environment as an animal moves away from a certain known location. In other words, the animal's perception has to be tuned to the changes in the outer environment along the path, since the physical view of the environment will look different as the animal flits from one place to another.13 On the other hand,

13 This derives from the capacity to perform what is called path integration, which consists in continuously integrating the signals about the path that an animal travels as it moves away from a point of departure.


the animal in question may also get some internal proprioceptive feedback or signal that gives it the sense of flitting. Without this, the animal may perceive fluctuating physical changes in the same object (Thinus-Blanc 1995). In many cases, bees, flies, wasps use the sun as a reference point, as well as using their internal feedback mechanisms. This kind of fine-tuned spatial calibration is also found in many species of ants, especially desert ants (Gallistel 1989). The question of whether these invertebrates have in them some sort of spatial map of the landmarks and paths while they take a certain route, especially when they are diverted from their route, is a moot point (see Bennett 1996). It may be noted that spatial cognitive maps, whatever they (may) look like, seem to do away with the demand for representing the spatial relation of the animals to outside objects. That is, these maps are of the spatial relations of outside objects with respect to one another. It is clear that having only spatial cognitive maps to guide movement would be a costly affair because the animal has to refer to its inner representation continuously when information gathered from the route can actually be processed fast based on the recognition of known landmarks. Regardless of whether or not such maps are represented in invertebrates, the crucial point is that the sensory-motor capacities of invertebrates can be perceptually responsive both to the spatial relations of outside objects with respect to one another and to the spatial relations of other objects in the outer environment to themselves. Therefore, besides the mental structures specified for single-cell creatures and plants, invertebrates, especially bees, wasps, flies and ants, must have mental structures that can encode these two aspects of spatial relations. Thus we may formulate a relation Things-Near, which instantiates a mental structure encoding the nearness of two or more locations. 
Let this relation be written as Things-Near = {(near, X, Y)}, where X and Y denote two variables for locations or landmarks or any objects. Being coarse-grained enough, this mental structure can encode spatial relations of two things being adjacent or of one thing being around or beside or in front of or even above another. But many species of invertebrates, especially bees, memorize the spatial arrangement or sequence of certain locations as well (Giurfa and Menzel 2013). This is possible only when the animals concerned index the locations on an abstract line or plane. So a relation Things-in-a-Sequence can be constructed such that Things-in-a-Sequence = {(along, X, …, Z)}, where X, …, Z are variables for locations or landmarks or any objects that are arranged in some sequence. The individuated mental structure is general enough to cover quite a number of independently variable sequential arrangements of locations or landmarks or objects. Take, for instance, spatial relations of one thing being on top of another thing which is already on top of something else, or of one thing being situated before

Possible Minds from Natural Language

123

another thing before which stands something else. All of these can be covered by the relation Things-in-a-Sequence. Now mental structures that can specify the spatial relations of other objects in the outer environment to the animals concerned can be considered. If the spatial relations of outside objects to the animals involve the self, the mental structures must also incorporate a notion of self, however minimal it may be. So one relevant mental structure can be instantiated by the relation Movement-of-Self-from = {(from, X)}, where X may stand for any location or landmark or object. The converse of this relation specifying another mental structure is then Movement-of-Self-to = {(to, X)}. Suppose a bee moves away from its home and flies toward a flower; in this particular case we have Movement-of-Self-from = {(from, home)} and Movement-of-Self-to = {(to, flower)}. Both these mental structures are for specifying the spatial relations of the self of an invertebrate from and to/towards something. Crucially, these mental structures are not, at least in a necessary sense, to be considered explicitly represented structures in the memory. There is nothing in the formulation of mental structures that guarantees that this has to be so. These mental structures may well be temporary perceptual and probably non-conceptual structures that are constructed on the spot, sensed and identified as such, although such mental structures can be preserved as intermediate-term active conceptualized structures utilizable for guiding behavioral strategies and interactions with the conspecifics (Donald 2001). More important than these mental structures are those in invertebrates that contribute to the capacities for categorization and generalization.
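To make the set-theoretic notation concrete, the relations introduced above can be transcribed directly as set-of-tuples data structures. The following Python sketch is purely illustrative; the function names are conveniences introduced here, not part of the formalism itself.

```python
# Illustrative transcription of the relational notation for mental structures:
# each mental structure is a set of tuples whose first element is the
# distinguished relational marker ('near', 'from', 'to').

def things_near(x, y):
    """Things-Near = {(near, X, Y)}: two nearby locations, landmarks or objects."""
    return {("near", x, y)}

def movement_of_self_from(x):
    """Movement-of-Self-from = {(from, X)}: movement of the self away from X."""
    return {("from", x)}

def movement_of_self_to(x):
    """Movement-of-Self-to = {(to, X)}: the converse relation, movement toward X."""
    return {("to", x)}

# The bee example from the text: a bee moves away from its home toward a flower.
bee_from = movement_of_self_from("home")    # {('from', 'home')}
bee_to = movement_of_self_to("flower")      # {('to', 'flower')}
```

Nothing hinges on the choice of Python here; the point is only that these structures are coarse-grained relational records, not rich representations.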
These mental structures in invertebrates can be supposed to augment the cognitive abilities of invertebrates in the sense that the capacities for categorization and generalization free animals from the necessity to refer to the exact physical stimuli that were observed in the first place. Moreover, this reduces the informational burden on the nervous structures of invertebrates, in that a rule or some sort of generalization for the categorization of objects existing in the ambience does not necessitate checking the details of all objects in a given domain in order to assess what a certain object is. Grasping a rule or generalization makes things far easier: one simply applies the rule or generalization to any particular entity to verify whether or not it can be classified as something or the other, and thereby finally ascertains that the generalization applies to all entities of that kind. For example, if we have a generalization for the categorization of tigers (dark stripes over a reddish-orange background color), this can be applied to tigers of all kinds all over a given demarcated region, and this spares us from having to scrutinize all animals in the region to check whether some animal can be a tiger or not. Bees are known to be able to recognize certain visual patterns such as symmetric vs. asymmetric shapes and to generalize this


chapter 3

to novel stimuli. This capacity may well be driven by the ability to make certain distinctions based on the relational concepts of sameness vs. difference, above vs. below etc., which certainly cannot be identified with the physical qualities and features of objects (Giurfa and Menzel 2013). Also, wood crickets are often found to hide under leaves only when they interact with experienced wood crickets in the presence of predators (Coolen, Dangles and Casas 2005). What is noteworthy in this context is that such wood crickets have the complex ability not only to differentiate between experienced conspecifics and non-experienced conspecifics, but also to recognize the relation between the experienced conspecifics and the predators. These are cases of learning in which some mental structures have been gradually internalized. That is, the mental structures have been embedded in some form of memory. Although Things-Near can handle many cases of spatial relations between objects, the conceptual aspects of sameness must be specified by means of a distinct mental structure. Let’s first construct a relation that individuates the mental structure responsible for taking two entities to be similar; let’s call this relation Similar = {(X, Y)}, where X and Y are variables for two entities and X ≠ Y (because the notion of something being identical to itself cannot be ascribed to invertebrates without sufficient grounds for doing so). The notion of similarity in terms of some parameters or dimensions (color, shape, quality or the indexical ‘feel’ or mark of something) which can be applied to another entity to check if it conforms to the given notion of similarity may be specified by a higher-order mental structure which can be instantiated by the relation Sameness = {(Similar, X)}, where X is the entity which is to be checked for its conformity to the relation Similar.
What this implies is that the ability to sense sameness in itself as an abstract relation consists in recognizing the relation between Similar and something which is to be judged as belonging to some category in terms of its similarity to something else. It is worth noting that the mental structure specified by the relation Different has already been recognized as something that not only plants but also insects can have. This does not necessarily mean, however, that the negation of Different amounts to Similar, on the grounds that the logical procedure of deriving converses of relations may not be valid in the animal world. That is, when X and Y are considered not different, it may not ipso facto be the case that for certain animals X and Y are then similar to one another. There may be some latitudes in the sensory-perceptual capacities of certain species that do not effect such logical transformations in mental structures. Hence the mental structure individuated by the relation Similar is indeed distinct in character from the negation of Different. The case of wood crickets is a bit different. The triadic relationship between wood crickets, experienced conspecifics and the predators must be part of a mental


structure; otherwise it would not be possible for wood crickets to respond the way they do. The relation Predator-Danger = {(predator, RQ, leaves)}, where RQ is Thing-with-a-Feature = {(conspecific, experienced)}, instantiates this mental structure. The jumping spiders of the genus Portia exhibit a complex array of predatory behaviors when they attack prey spiders. These spiders have been observed to track the behaviors and movements of other spiders, and also to draw up a precise trial-and-error strategy by flexibly using a series of signals that are meant to disorient the prey spiders (Wilcox and Jackson 2002). Unlike other spiders which have impoverished visual capabilities, these spiders have a visual capacity that is tuned to their predatory tactics. Most importantly, the signals that the jumping spiders use are continually changed depending on the responses evoked in the prey spiders. For example, if a particular signal seems to be more useful at a certain moment, a jumping spider repeats it at that moment. Clearly, these spiders must have mental structures that not only index the prey spiders along with their webs, but also manipulate a range of strategies which vary on the basis of the responses from the prey spiders. If this is so, the complex mental structure that may help accomplish this can be specified by the relation Combination-of-Signals = {(or, pattern1,…, patternn)}, where n is an arbitrary finite number, and pattern1,…, patternn are distinct patterns of sounds that are the signals at the jumping spiders’ disposal. Since the jumping spiders also engage in mimicry in order to control the behavior of the prey spiders, it appears that they have mental structures that have a conditional involved. One may argue that this may be a kind of conditional association of stimuli and responses (or response sequences) that is hardwired into these spiders.
As Wilcox and Jackson (2002) have shown, this is not so, on the grounds that these spiders use their strategies of mimicry and signaling in quite flexible ways. Therefore, we can assume that there must be a mental structure instantiated by the relation Conditional-Signal = {(if, Thing-with-a-Feature, patterni)}, where Thing-with-a-Feature = {(spider, female)} when, for example, a jumping spider encounters a female spider which is the prey. That these spiders have this complexity in the mental structures they possess should not be surprising, given that many spider species are known to be skilled predators that use decoys to prey on insects or other spiders. This has certainly evolved in the wild environments where predatory ventures have been an extremely risky business. Cephalopods (squids, octopuses, cuttlefishes and nautiluses, all of them mollusks) are more complex invertebrates; all of them live in water. Most cephalopods, including nautiluses, cuttlefishes and octopuses, exhibit different kinds of learning capacities in associating stimuli with a range of responses, observing conspecifics


and learning from their behaviors, and also in discriminating between one stimulus and another (Basil and Crook 2014). It is also understood that various spatial concepts are well-developed in cephalopods, especially octopuses, and that the capacities for categorization and generalization that facilitate discriminating not only stimuli but also kinds or classes of stimuli are present in octopuses (see Mather 2008). Hence the mental structures specified for other invertebrates apply to these species of cephalopods. Clearly these kinds of learning are mediated by some forms of short-term and long-term memories. Interestingly, octopuses have been observed to engage in some sort of play-like behavior when they are presented with both food and non-food items (Kuba, Gutnick and Burghardt 2014). Although play-like behavior in non-human animals is hard to detect, it is clear that such behaviors must be something other than their instinctive or regular patterns of behaviors. If octopuses have the ability to engage in some sort of play-like behavior, it is quite plausible that octopuses have developed complex sensory-motor capacities, probably because of their distributed neural structures in the brain and the arms that help them handle objects in a fine-grained fashion, and that this capacity easily lends itself to exploratory and pleasurably repetitive activities with objects. Importantly, this also demonstrates that octopuses can hold an object in their attentional focus by locating it against a visually sensed background which is sort of taken for granted or ‘presupposed’. So octopuses must have a mental structure that encodes this relationship. The relation that can install this mental structure will thus parallel the mental structure that specifies a focus against a presupposition.
For octopuses, let’s call this relation Object-against-Background = {(X, RPC)}, where X is the object focused, and RPC can stand for a first-order relation, say, Something-Exists or Conjunction-of-Things. Suppose an octopus sees a ball before it and plays with it by moving it against the water; Object-against-Background will then look like Object-against-Background = {(ball, RPC)}, where RPC denotes the relation Something-Exists = {(is, water)}. Perhaps a similar type of mental structure applies to earthworms, which plug up their burrows with leaves, assessing the size of the leaves before plugging up the burrows. This has been known since Darwin (1881), who made detailed observations on this behavior of earthworms. The particular leaves can then be the focused entities, and the location, shape and structure of the burrow can constitute parts of the presupposition. Perhaps more remarkable than this are the visual signaling strategies in squids and octopuses. Most squids and octopuses communicate with other conspecifics, prey and other animals by means of a variety of body postures mostly involving the arms. Some squids have chromatophores in the skin surface whose motor fields contract and expand, thereby generating signals


of black and white, yellow, green and red colors (see Hanlon and Messenger 1996). These signals are known to express messages about danger, courtship, nearby objects, the outer environment etc. This is also used for the purpose of camouflage, especially in cuttlefishes. The mechanisms behind the cuttlefishes’ ability to change their appearance so as to camouflage their bodies against the background must at least require some sort of pattern matching process involving the detection of edges and contours of objects and/or fragments of objects (see Zylinski and Osorio 2014). Even if the visual strategies of communication are limited to spatially restricted areas because of the very modality of the visual medium, in contrast to the auditory medium which one can use for long distances even in darkness, these communicative strategies are optimal at least for cephalopods, inasmuch as these strategies are maximized in order to meet the needs of the expression of complex perceptual and cognitive representations. From this perspective, it seems reasonable to suppose that the visual configurations must have a motor-perceptual structure that is coded and decoded by cephalopods. These motor-perceptual structures are read off from the gestures and color displays, and then mapped onto some sensory or cognitive representations when they are sensed and grasped. Hence the mental structures that support or accompany these motor-perceptual structures must be extracted, in part, from certain imagistic properties of these motor-perceptual structures. What is to be noted is that the analytical properties of the mental structures conveyed or grasped ride on the imagistic properties embedded in the motor-perceptual structures. Since this involves a mapping of the motor-perceptual structures onto the sensory or cognitive representations, the resulting mental structures must be a bit complex.
Suppose that there is a certain mental structure corresponding to a motor-perceptual structure for a particular visual display. Let the relation that individuates this mental structure be Configuration-Exists = {(is, patterni)}, where some pattern is sensed and indexed. Now suppose that the visual pattern with the index i signals the presence of some predator to a cephalopod. In abstract terms, the action of seeing the real predator will then act as a function (in its mathematical sense) and map Configuration-Exists onto the relation Predator-Danger = {(predator, RQ, home)} when RQ is Thing-with-a-Feature = {(conspecific, patterni)}. It needs to be made clear that the specific mapping is effected in cephalopods that use visual signaling strategies, regardless of whether or not the specific predator is present before the recipient of the message. But the cephalopod conveying the message must itself see the predator in order for the act of seeing the predator concerned to count as a function, for we cannot be certain that cephalopods can imagine predators which are not physically present before them.
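The mapping just described can be sketched as a literal function from one mental structure to another. The Python rendering below is ours and purely illustrative; for concreteness it assumes that an indexed pattern is encoded as a ('pattern', i) tuple, which is not a commitment of the formalism.

```python
# Illustrative sketch: the act of seeing the real predator acts as a function
# (in the mathematical sense) mapping Configuration-Exists onto Predator-Danger.

def configuration_exists(i):
    """Configuration-Exists = {(is, pattern_i)}: a sensed and indexed pattern."""
    return {("is", ("pattern", i))}

def seeing_predator(config):
    """Map Configuration-Exists onto Predator-Danger = {(predator, RQ, home)},
    where RQ = Thing-with-a-Feature = {(conspecific, pattern_i)}."""
    (_, pattern), = config  # recover the indexed pattern from the structure
    rq = frozenset({("conspecific", pattern)})  # Thing-with-a-Feature
    return {("predator", rq, "home")}

# A visual display with index 3 signals the presence of a predator.
danger = seeing_predator(configuration_exists(3))
```

The frozenset wrapper is needed only because one relation here is nested inside another; it plays no theoretical role.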


We now turn to vertebrate animals to see what we can figure out about the mental structures in these animals. Vertebrates include many species of fish (including sharks), as well as birds, mammals, amphibians and reptiles. Most vertebrate species have more complex brains than many species of invertebrates. Although the question of whether the actions and behaviors of many vertebrate species have the same mentalist import as those of humans is still debated, it needs to be made explicit that the actions and behaviors of many vertebrate species can be cashed out in terms of certain abilities that contribute to the emergence of cognitive skills and interactions with respect to the internal states of other animals as well as to the environment. This does not necessarily require that such animals think about their own internal states that support such interactions, or even about the internal states of other animals including conspecifics. Similar lines of reasoning can also be found in Allen and Bekoff (1997). As also argued at the beginning of this section, the ascription of mental structures to various animal species will not hinge on whether or not the animals in question possess intentionality or consciousness in some form. On the present proposal, having mental structures does not guarantee that the animals that have mental structures will be conscious or possess a certain type or form of intentionality. Clearly, having a range of mental structures serves certain mental functions, but it is not clear whether mental functions, that is, things that minds do, must be interpreted in terms of consciousness or some form of intentionality. But, of course, an animal that is conscious must have mental structures. This applies to human beings. With this in mind, we can look at the range of mental structures that are viable in various vertebrate species. Smaller species of mammals such as rats have been known to display complex spatial learning skills.
For example, rats can navigate through complex spatial layouts such as mazes, and often find alternative ways of reaching a point or goal when other routes that they have already traveled through or that they prefer are blocked (see Gallistel 1980). Such spatial behaviors of rats must be due to a subtle manipulation of mental structures such as Things-in-a-Sequence, Things-Near, Movement-of-Self-from and also Movement-of-Self-to. Perhaps more interesting is the anti-predator activity of ground squirrels. California ground squirrels have been observed to exhibit complex adaptive defense responses to rattlesnakes when they confront rattlesnakes, in order to detect the specific features in rattlesnakes that can prove dangerous for them (especially for the pups, which are more vulnerable due to their underdeveloped capacity for producing anti-venom proteins in the blood). For example, larger and warmer snakes are more dangerous, and the rattling sound of these rattlesnakes has a greater amplitude, which is used by the ground squirrels


to categorize these predators by differentiating larger and warmer rattlesnakes from smaller and colder rattlesnakes (see Owings 2002). In this case, the auditory cues appear to be more important for the categorization of predators. Hence the mental structure that encodes these auditory cues can be individuated by the relation Configuration-Exists = {(is, patterni)}, where the pattern with the index i happens to be an auditory pattern. The activities of ground squirrels then map this onto the relation Thing-with-a-Feature = {(predator, patterni)}. Such mental structures can also apply to rattlesnakes themselves, because these snakes are themselves eaten by other snakes such as kingsnakes, as well as by red-tailed hawks. Additionally, snakes are also known for applying a range of luring strategies in order to attack the prey (see Burghardt 2002). Note that the mental structures that are formed by auditory and/or visual patterns are not modality-based, in the sense that a pattern, whether auditory or visual, is part of a mental structure that is general enough in virtue of being insensitive to modality-based perceptual representations. This is not, of course, to deny that the particular modality-based perceptual representations are to be realized in specific modality-specific neural structures. In a sense, mental structures are structures that transcend specific modality-based perceptual representations. This point can be appreciated in a better way by examining the mental structures of animals such as bats and dolphins, since both species use sonar to locate and characterize objects by transforming the incoming sound waves into perceptual representations of objects (see Roitblat 2002). The crucial difference between the kind of sonar bats use and the kind of sonar dolphins use is that the former is used in the air, while the latter is used underwater, where sound travels faster than in the air.
The wavelength of a given frequency is thus longer in the case of dolphins, which use their sonar to track and identify many objects submerged in water. Another significant difference between their sonar systems can be highlighted in terms of the relative time that bats and dolphins take to detect and then evaluate the signals. Bats fly in the air, and hence the time between the detection of an object and its evaluation is relatively short. That is why bats detect an object from a short distance, whereas dolphins usually detect an object from a long distance. Despite these differences in the sonar characteristics of the two species, what is common to both species is that the incoming sound waves or the echo sensed will have an auditory profile or representation which is then mapped onto perceptual representations of objects. It is important to recognize that the former type of representation is modality-based, and the latter type is perhaps a result of some sort of sensory integration of the features of objects as well as of certain salient aspects of the environment, derived from the operations of the capacities for vision and olfaction besides echolocation.


In particular, there can be mental structures, besides the ones that correspond to the mental representations resulting from the sensory integration, which exclusively encode the incoming sound waves or the echo. While it is possibly not so easy to differentiate the former type of mental structures from the latter type, especially in bats or dolphins, this is not so in lower organisms such as unicellular organisms or worms. In other words, in lower organisms such as unicellular organisms or worms, modality-based sensory representations can have or generate a mental structure that is underspecified in terms of its modality-oriented associations, but in mammals such as bats or dolphins it is possible to have two different types of mental structures—one type that is distilled from the modality-based perceptual representations and the other type extracted from some form of sensory integration. What is striking in bats (especially big brown bats) is that bats can remain silent in order to avoid signal jamming from conspecifics or simply to utilize the echo other conspecifics use. They can also eavesdrop to exploit the echolocation information about food sources or about objects that is actually available to other bats, and this tendency is also present in bottlenose dolphins (Chiu and Moss 2008). This capacity of bats or dolphins to detect object characteristics by using the echolocation of conspecifics but without involving their own sonar must exploit mental structures that ensure that the echolocation is that of conspecifics and not their own. This cannot be guaranteed if, for example, the relevant mental structures only specify that the echolocation is of conspecifics, for this is too underspecified to warrant the inference that the echolocation is not actually a bat’s or a dolphin’s own. This is because X’s using an instance of Y is perfectly compatible with Z’s using another instance of Y.
Additionally, the mental structures must specify the triadic relationship between the object, the conspecifics and the self of the animal concerned. We can now formulate mental structures which can fulfill these roles. First, let’s suppose that we have a mental structure individuated by the relation Thing-Source = {(RO, RS)}, where RO = Thing-with-a-Feature = {(object, prey)} and RS = Source = {(from, conspecifics)}, when the relevant information is about the prey that a bat or a dolphin gathers by using the echolocation of some member(s) of its own species. Now another mental structure that enables the inference that the echolocation is not of a bat’s or a dolphin’s own has to be characterized by the relation Not-Self = {(not, Thing-Source)}, where RS = Source = {(from, myself)}. This relates to a specific ability in dolphins. Dolphins have been observed to copy the sounds uttered by humans, or a behavior/act executed by humans, and also to distinguish between representations in a virtual world (displays on tv, for example) and real-world happenings (see Herman 2002 for details). For example, when familiar instructions by a human are given on tv, the


dolphin executes the instructions without going for any object represented on tv. That is, the dolphin acts in the real world, but does not run after any object (a ball, for instance) which is shown on tv. The understanding of the self-other distinction must underlie this ability at the very least. Aside from that, the perceptual representation from the tv must be distinguished from the perceptual representation in the actual world in such a manner that the dolphin recognizes its own position and perceptual role in the actual world. For this special capacity of dolphins, we propose to construct a relation that can install the relevant mental structure. Let’s call this relation Comparison = {(as, RS, RO)}, where RS designates the relation that specifies the perceptual representation related to the dolphin’s self-based perspective, and RO denotes a relation for the perceptual representation that is related to the other. For example, if there is a ball that the dolphin sees on tv, and then the dolphin finds a ball in the real world, this relation will look like Comparison = {(as, There-ExistsS, There-ExistsO)},14 when There-ExistsS = {(is, ball)} and There-ExistsO = {(is, ball)}. Note that this mental structure can also account for the dolphin’s ability to copy a behavior, say, a human’s assuming a certain body posture. As can be understood from this, this capacity builds on dolphins’ capacities for categorization and generalization. So the mental structures that have been formulated for categorization and generalization extend to dolphins. We also find evidence of pigeons’ capacity to extract abstract concepts of both sameness and difference when they are exposed to stimuli of different kinds of patterns (blocks, points, images etc.). Often they are tested on such stimuli by exposing them to a pattern and then presenting them with another pattern that is either the same as or different from the one shown earlier.
When they pick up the correct choice, they are rewarded with food. One may now argue that this cannot constitute evidence of pigeons’ having abstracted any concept of sameness or difference, since it can be accounted for by the pigeons’ tendency to just press that choice which leads to a reward in the form of food. That is, the pigeons’ response could be a means of getting the food. However, given that pigeons have indeed been observed to respond correctly to disparately heterogeneous types of stimuli, this behavior of pigeons cannot simply be an outcome of pigeons’ tendency to apply a means of getting

14 The reason for the incorporation of ‘as’ in the relation is that it is the element that is generally used in English comparatives, as in ‘He is as good as me’ or ‘The guy runs as fast as Bolt’. The parameter of a particular comparison in each case is encoded in an adjective (‘good’ in ‘as good as me’) or an adverb (‘fast’ in ‘as fast as Bolt’). The parameter in the formulation of the relation Comparison is left underspecified, as it is context-sensitive for dolphins.


the food, because the pigeons cannot be expected to ‘know’ beforehand which choice to pick up, especially when there are many trials each containing an arbitrarily different type of stimuli (Cook 2002; Watanabe 2010; Soto and Wasserman 2010, 2012). Hence pigeons can possess a mental structure that identifies the concept of difference. Let’s then call the relation individuating this mental structure Difference = {(Different, Z)}, where Z is the entity which is checked for its conformity to the relation Different. Thus pigeons may possess the mental structure installed by Difference besides the mental structure instantiated by Sameness. Similar abilities are also found in grey parrots (Pepperberg 2002). Perhaps more marvelous capacities within the range of avian species are exhibited by crows and ravens. Crows are known for their tool making strategies, which indicate a naïve understanding of physics on the part of crows (see Bluff et al. 2007). For example, New Caledonian crows can not only make tools to bring food items out of holes or containers but also make inferences about objects that are hidden behind things having an observable index (Taylor, Miller and Gray 2012). For example, it was shown that these crows reacted and behaved differently in two conditions, namely (i) when a stick moved in the presence of a human entering a hide and then exiting it, and (ii) when it moved in the absence of a human. In the latter situation these crows engaged in a bit of probing to check whether there was someone behind the hide moving the stick. A similar kind of behavior is also observed in ravens. Ravens have been found to follow other ravens that have carried food far out of sight, not to go near other ravens’ food when the ravens guarding the food are present, and also to relocate food when there are chances of the food being taken away by other ravens (Heinrich 2002).
It appears that both crows (at least some species of crows) and ravens can at least calculate the causal connections and relations of their own actions to the actions of conspecifics in a way that helps predict certain consequences beforehand. If this is the case, it is reasonable to ascribe to crows and ravens mental structures that have a conditional structure. Hence we propose the relation Conditional-Action = {(if, EA, EC)}, where EA is a relation that signifies the antecedent action or event pertaining to the self-based perspective of crows and ravens, and EC is a relation that specifies the consequent action or event. For instance, if getting closer to members of the same species (re)locating some food can make it difficult or impossible for crows or ravens to get the food, then the relevant mental structure may encode this causal connection in a conditional. So the relation Conditional-Action will look like Conditional-Action = {(if, EA, EC)}, where EA = {(moving, Rj)} when Rj = {(near, conspecifics)}, and EC = {(not, Rk)} when Rk = {(getting, food)}. Likewise, if New Caledonian crows exhibit an understanding of the movement of sticks associated with agency, then these crows relate (hidden)


agents’ actions to the movement of sticks. We shall then have Conditional-Action = {(if, EA, EC)}, where EA = {(agent, acting)} and EC = {(stick, moving)}. Perhaps similar mental structures can be found in jays, especially when these birds are observed to recover food items cached by other conspecifics (see Balda and Kamil 2002). Interestingly, the mental structure instantiated by Conditional-Action must also be present in sea lions and many species of fishes. Many fighting fishes such as the Siamese fighting fish recognize other individual fishes and control their behavior when displaying aggression towards other conspecifics (Hsu, Earley and Wolf 2006). Apart from that, fishes can also eavesdrop in order to play on the reactions that are evoked in other fishes; for example, females can gather information from interactions between other fishes when the information proves useful for partner selection, and it is often observed that some fishes such as swordtails try to maximize their aggressive behavior before the audience fishes so that their behavior can remain tuned to the observers that eavesdrop (Reebs 2001; Bshary 2006). In this regard, one may note that the mental structure specified by Conditional-Action specifies a conditional structure at the level of actions or events, as the issue of whether these animals entertain propositions or anything akin to that can be sidelined for a better appraisal of the nature and form of mental structures in these animals. For a similar reason, the mental structure identified by Conditional-Signal has been specified at the level of entities which are perceived stimuli. Needless to say, an animal that possesses the mental structure specified by Conditional-Action must also possess the mental structure specified by Conditional-Signal.
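As a purely illustrative aside, the conditional structure just discussed can be rendered as nested sets of tuples; the Python names below are ours, and the raven instantiation follows the example given in the text.

```python
# Illustrative sketch of Conditional-Action = {(if, E_A, E_C)}, using frozensets
# so that relations can be nested inside other relations.

def conditional_action(e_a, e_c):
    """Conditional-Action = {(if, E_A, E_C)}: antecedent and consequent events."""
    return {("if", e_a, e_c)}

# E_A = {(moving, R_j)} with R_j = {(near, conspecifics)}
e_a = frozenset({("moving", frozenset({("near", "conspecifics")}))})
# E_C = {(not, R_k)} with R_k = {(getting, food)}
e_c = frozenset({("not", frozenset({("getting", "food")}))})

# If moving near conspecifics, then not getting the food.
raven_structure = conditional_action(e_a, e_c)
```

Note that the conditional here relates events, not propositions, matching the level at which the relation is specified in the text.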
Further, we also remain neutral on the question of whether a theory of mind ability can be ascribed to these animals when they exhibit behaviors that appear to have characteristics of sophisticated forms of social cognition. Nevertheless, it is important to recognize that the capacity of crows, ravens and fishes goes beyond a mere mental registration of the triadic relation between an object, an agent and a location in the sense of Butterfill and Apperly (2013), for the capacity to draw certain inferences based either on the occurring events or on the events that have just occurred by projecting the outcomes of actions to be taken cannot simply be accounted for by a triadic relation between an object, one single agent and a location. Whatever the actual explanation is, we leave this matter open. There is reason to expect that the mental structures of animals will be more and more complex as we look at animals whose social world is complex (see also Tomasello and Call 1997). This is evidenced by the abilities and capacities of animals like crows, ravens and fishes. From this perspective, close inspection of the abilities and capacities of bigger mammals such as elephants, hyenas, horses, dogs and also non-human primates will be more useful for our
purpose. Most of these species are good at generalization, categorization, spatial learning, tool making and also at making useful inferences about events in their natural environments. Hence the relevant mental structures that can be taken to underlie these cognitive abilities can be attributed to the type of mentality these animals have. Vervet monkeys, for instance, are known for producing kinds of alarm calls that differ in terms of whether the calls are for particular kinds of predators—leopards, eagles, pythons and baboons. Alarm calls are not exclusively found in vervet monkeys; they are also common in scrub jays and chickens. What is often disputed is the supposition that these calls are referential in nature, that is, that they refer to some predator or some class of predators, for it seems parsimonious to assume that these calls are actually calls for actions/behaviors that require urgent attention, or that these calls are just instructions, as Baron-Cohen (1992) assumes. This problem is easily dissolved once we take the appropriate mental structures into consideration. The mental structure that enables vervet monkeys to produce these alarm calls is such that it can be regarded as an internal mental state, insofar as mental structures are mentally instantiated, or as behavioral instructions, insofar as the relevant mental structure propels the animal into action. The mental structure can be specified by the already introduced relation Predator-Danger, which can take the form Predator-Danger = {(snakes, RV, trees)} when RV is Thing-with-a-Feature = {(conspecific, patterni)}, as when, for instance, vervet monkeys hear an alarm call for the presence of snakes and the alarmed vervet monkeys choose to climb up the trees nearby. The index i here can be taken to be one of the four indices for four different patterns of alarm calls.
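The indexed alarm-call structure can likewise be rendered as data. This is our own illustrative sketch, not the author's; the tuple of call patterns and all function names are assumptions introduced purely for illustration, following the book's example in which the snake alarm issues in climbing the nearby trees.

```python
# Sketch (ours) of Predator-Danger = {(snakes, RV, trees)} with
# RV = Thing-with-a-Feature = {(conspecific, pattern_i)}.

ALARM_PATTERNS = ("leopard", "eagle", "python", "baboon")  # the four call types

def thing_with_a_feature(i):
    """RV: a conspecific exhibiting alarm-call pattern i."""
    return frozenset({("conspecific", ALARM_PATTERNS[i])})

def predator_danger(predator, response, i):
    """Mental structure evoked on hearing alarm pattern i for a predator."""
    R_V = thing_with_a_feature(i)
    return frozenset({(predator, R_V, response)})

# The book's example: hearing the snake call, the monkeys climb the trees.
structure = predator_danger("snakes", "trees", 2)
```

The same relation thus does double duty, as the text notes: read off its first and third members, it is a behavioral instruction; read as a whole, it is an internal state.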
If we follow the reasoning of Harms (2004), the mental structure here provides specifications for both the reference (snakes, for instance) and the sense (the action of climbing up the trees). Besides having the mental structures for categorization and spatial concepts, vervet monkeys have the specialized capacity to recognize third-party relationships (Cheney and Seyfarth 1990). That is, vervet monkeys can recognize relationships of other conspecifics to some other conspecifics. Thus, for instance, when the vocalization of a vervet monkey is in the background, other vervet monkeys often look at the mother of the vervet monkey producing the vocalization in the background. The general ability to identify and recognize dominance and kinship relations may underlie this capacity in many nonhuman primates including baboons and chimpanzees. There is no evidence that the ability to recognize third-party relationships is found in other bigger mammals such as elephants, although elephants live in stable societies and have well-developed capacities for categorization and spatial navigation (see Byrne, Bates and Moss 2009). Holekamp and Engh (2002) have also arrived at a similar conclusion for hyenas by examining the behavior of spotted hyenas.
It is not clear whether horses possess this ability, although horses are known to perform complex visual processing tasks (comparison and representational linking of images) and also recognize differences between human facial expressions (see Leblanc 2013; Smith et al. 2016). The mental structure that underlies this capacity of vervet monkeys can be specified by the relation Relation-of-Conspecifics  =  {(conspecifici, myself), (conspecifici, conspecificj)},15 which can help figure out how a conspecific is related to a certain primate and then how that conspecific is related to some other conspecific(s). The particular relation can be dominance or kinship relations or even relations that mark social solidarity. In many cases, this capacity in primates may also be subserved by the mental structure instantiated by Conditional-Action. The reason is that many situations of aggression evoking stress (attacking another animal, for instance) in primates require the animals in question to make quick decisions which can be executed only if some condition is met. However, we cannot rule out the possibility that these mental structures are also used by primates when they engage in cooperative behaviors such as grooming, forming alliances, group hunting etc. Perhaps more interesting is chimpanzees’ capacity to be sensitive to other chimpanzees’ or humans’ mental states in a way that does not involve a theory of mind ability (see Tomasello, Call and Hare 2003). For example, chimpanzees have been observed to be sensitive to the looking direction of other chimpanzees by not venturing to grab a food item when the food is placed in a location that is visible to another dominant chimpanzee. Plus chimpanzees may also hide themselves in order to take some food when a human tries to interfere with the chimpanzee’s efforts (Hare, Call and Tomasello 2006). 
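The Relation-of-Conspecifics structure defined in this paragraph can also be sketched programmatically. The code below is our own hedged rendering, not anything the book provides; the helper names are assumptions.

```python
# Sketch (ours) of Relation-of-Conspecifics = {(conspecific_i, myself),
# (conspecific_i, conspecific_j)}: a conspecific is linked both to the
# observer and to a third party, capturing third-party relationship recognition.

def relation_of_conspecifics(i, j):
    """Link conspecific i to the observer and to another conspecific j."""
    return frozenset({(f"conspecific_{i}", "myself"),
                      (f"conspecific_{i}", f"conspecific_{j}")})

def third_party_links(relation):
    """Pairs relating two conspecifics, excluding the observer itself."""
    return {pair for pair in relation if "myself" not in pair}

example = relation_of_conspecifics(1, 2)
```

Whether the pairs are read as dominance, kinship or solidarity relations is left open, exactly as in the text.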
Although it is known that no primate species has anything that is akin to a theory of mind ability (see Tomasello and Call 1997; Tomasello 1999), several other species of primates, such as baboons and macaques, can actually detect and show sensitivity to other conspecifics’ or humans’ psychological states by preferring an action over some other alternatives. Macaques, for instance, have been found to display a behavioral preference for a human imitating the macaques’ object-oriented actions but not for someone who was just engaged in some co-occurring but different action (Paukner et al. 2004). Since being able to detect others’ psychological states in a way that does not require the mental representation of such states is a plausible explanation of this capacity of non-human primates, this seems to accord well with the prevalent view that non-human primates cannot represent others’ intentions, goals, beliefs and desires. In fact, it has been argued that this correspondence between a positive ability and a negative ability in primates has to be understood by jettisoning the entire representational vocabulary of mentalism and adopting an embodied and enactivist stance towards primate cognition (Barrett and Henzi 2005). The embodied and enactivist stance towards primate cognition consists in the understanding that primates’ and also other animals’ abilities and capacities are not to be grounded in mental representations, but rather in the actions and behaviors that primates engage in to tackle regular and contingent challenges in their natural environments. There is nothing inherently wrong with this view, so long as the mental states of animals are taken to be reflected or manifest in the (collection of) behavioral traits that contribute to a certain cognitive ability. But the internal states of animals cannot be thrown away, wherever they are located—whether in the brain or in the whole body or even in a plexus of brain, body and the environment (see also Saidel 2009). Perhaps there is a different way out too. We can formulate appropriate mental structures for this capacity in primates by avoiding both horns of this apparent dilemma—the problem of the explicit representation of others’ intentions, goals, beliefs and desires and the problem of being too conservative with respect to the internal mental states of other non-human animals.

15 An analogous form of this relation can be extracted from control constructions like ‘He tries to understand art’, where ‘he’ is the agent of both the verb ‘try’ and the verb ‘understand’. Hence a meaning relation has to be defined such that it captures the relation of ‘he’ to both ‘try’ and ‘understand’. We can thus have Rk = {(he, tries), (he, understand)} for the sentence ‘He tries to understand art’.
It is possible that these problems emanate from a fear that the ascription of mental representations of others’ intentions, goals, beliefs and desires to non-human primates as well as other animals may border on the attribution of propositions or proposition-like entities to non-human animals. This can be avoided if we do not leap at the chance to formulate the relevant mental structures for this capacity in primates by incorporating psychological or intensional predicates (feel, be aware, assume, think, believe etc.) and ‘I’ into meaning relations. Being sensitive to others’ intentions, goals, beliefs and desires can be accommodated within a mental structure that only specifies a relation between the animal observed (a conspecific or a human) and the relation that expresses the psychological state of the observed animal. Let’s call this relation Other-State = {(Thing-with-a-Feature, RM)}, where Thing-with-a-Feature is the relation for the specification of the animal observed and RM stands for a relation expressing the perceived state of the observed animal(s). For example, if a subordinate chimpanzee observes a dominant chimpanzee looking at the subordinate chimpanzee in the near presence of a food item which both can see, the relation Thing-with-a-Feature will be Thing-with-a-Feature = {(conspecifici, dominant)} and RM = {(conspecifici, RN)}, where RN = {(seeing, food)}. On the other hand, if it is a human (who seems threatening to a chimpanzee) rather than a conspecific, Thing-with-a-Feature = {(humani, threat)} and RM = {(humani, RN)}. Note that the mental structure installed by Other-State may well be a behavioral trait rather than an instantiated mental representation or even a thought, conscious or otherwise. As a matter of fact, nothing in the formulation of mental structures keeps to a representational theory of mental states or mental contents, and hence nothing prevents this mental structure from being regarded as an implicit nonlocalized structure or simply as a behavioral trait. This mental structure must also be present in domesticated dogs (and perhaps also in domesticated cats), given that domesticated dogs are found to display an amazing ability not only in following humans’ gaze but also in showing a remarkable sensitivity to the master’s psychological states (see Hare and Tomasello 2005). One cannot assume, however, that this capacity is also present in wolves just because wolves are phylogenetically close to dogs (see Miklósi 2007). The last remaining issue revolves around the capacity of non-human primates to recognize the self. It seems as if a lot is at stake as we attempt to understand whether other non-human animals really recognize and understand themselves. One of the reasons has to do with the connection, however drawn, between self-recognition or self-awareness and consciousness, and it is evident that attributing consciousness to non-human animals may raise eyebrows in many corners. Whatever way one defines self-recognition or self-awareness, there is ample evidence that non-human primates can recognize or sense themselves, although it is arguable that non-human primates can match their own sense or the feeling of their own body with the visual representation/image of their self (see Mitchell 2002).
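Stepping back to the Other-State relation specified above, the chimpanzee example admits a compact data rendering. This is our own sketch under the stated assumptions, not the author's code; the function and variable names are ours.

```python
# Sketch (ours) of Other-State = {(Thing-with-a-Feature, RM)},
# where RM = {(observed, RN)} and RN encodes the perceived state,
# e.g. RN = {(seeing, food)} for the dominant-chimpanzee example.

def other_state(observed, feature, perceived_state):
    """Relate an observed animal to the relation expressing its perceived state."""
    thing_with_a_feature = frozenset({(observed, feature)})
    R_N = frozenset({perceived_state})        # e.g. ("seeing", "food")
    R_M = frozenset({(observed, R_N)})
    return frozenset({(thing_with_a_feature, R_M)})

# A dominant conspecific perceived to be seeing the food:
chimp_case = other_state("conspecific_i", "dominant", ("seeing", "food"))
# A threatening human instead of a conspecific:
human_case = other_state("human_i", "threat", ("seeing", "food"))
```

Note that nothing in the data structure commits one to a representational reading: it can be taken as a behavioral trait just as well, which is the point the text presses.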
The well-known test called the mark test (in which a mark is applied to the forehead of the animal in question and then the animal is allowed to look into a mirror) is often used to determine whether or not the animal in question exhibits self-recognition. Chimpanzees and other apes such as gorillas and orangutans have often passed this test, but of course this need not be taken to constitute evidence for any generic ability in primates to be conscious of one’s own self. In a sense, an understanding of one’s own self is also related to one’s ability to imitate others, which seems to be absent in non-human primates (Tomasello 1999; but see Savage-Rumbaugh, Shanker and Taylor 2001), since imitating someone or someone’s action requires an understanding of both the means and the goal of the action to be imitated so that the means and the goal are reliably replicated in the self by way of an observation of the other. Most non-human primates have been observed to just copy the means without any understanding of the goal.
Insofar as many non-human primates at least recognize themselves, this can be regarded as a case of self-recognition or self-detection, and this is implicit in the behavioral actions and habits of non-human primates which reflect a self-other distinction. This becomes strikingly manifest in non-human primates’ defensive or fighting behaviors and in cooperative actions including playing. Apart from mental structures encapsulated by Different-from-Self and Not-Self, non-human primates must then have mental structures that reflect the involvement of the self in certain events that matter to the non-human primates in question. For example, when a chimpanzee defends itself from an attacking animal by throwing rocks at the enemy, the mental structure that enables this behavior/action can specify the self within a meaning relation. It may be noted that throwing rocks at the enemy is to be construed as engaging in something destructive or harmful directed towards the other, away from the self. So for this particular case we can have the relation Self-Event = {(myself, RB)}, where RB = {(throw, rocks)}. Similarly, if a gorilla hides its face in a playing behavior, we shall have Self-Event = {(myself, RB)}, where RB = {(hide, face)}. Because the relation RB is in a relation with ‘myself’, the relation RB specifies an event involving the animal’s own face. If a scenario involves an event with some other animal’s face (for instance, hiding some other gorilla’s face), there has to be a different relation embedded within RB that will specify the relation between that specific animal and its face. These mental structures are some of the crucial components of the form of mentality these non-human primates have, but there is, of course, a lot more that can be said about the cognitive capacities and abilities of non-human primates. Before we conclude this chapter, we need to deal with a few more concerns that warrant immediate attention. This is what we turn to now.
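Before turning to those concerns, the Self-Event relation just introduced can be summed up in the same data idiom used above. The sketch is ours, not the book's; the names are illustrative assumptions.

```python
# Sketch (ours) of Self-Event = {(myself, RB)},
# e.g. RB = {(throw, rocks)} or RB = {(hide, face)}.

def self_event(action, target):
    """A mental structure placing the animal's own self in an event."""
    R_B = frozenset({(action, target)})
    return frozenset({("myself", R_B)})

defense = self_event("throw", "rocks")  # chimpanzee defending itself
play = self_event("hide", "face")       # gorilla hiding its own face

def involves_self(structure):
    """True when the structure relates an event relation to 'myself'."""
    return any(subject == "myself" for subject, _ in structure)
```

An event involving another animal's face would instead embed a further relation inside RB linking that animal to its face, as the text notes.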
At this juncture, quite a number of points need to be weighed up. First, one of the burning issues with regard to the nature of mentality non-human animals may possess concerns the form of the differences between human and non-human animal minds. It was Darwin (1871) who hypothesized that the difference between human and non-human animal minds is one of degree, not of kind. This hypothesis has been debated ever since Darwin propounded it, but one may also note that many thinkers before Darwin, including John Locke and René Descartes, assigned the lowly status of automata to non-human animals. Not surprisingly, much has changed since then, although what counts as the required evidence for and against the view that human and non-human animal minds have unmistakable similarities (or, for that matter, dissimilarities) is still a point of contention. In fact, Penn, Holyoak and Povinelli (2008) have dismissed most of the observations in experimental studies that (aim to) demonstrate that non-human animals can learn rules, analogical reasoning as well
as relations abstracted from classes of stimuli. They argue that what appear to be cases of learning some relations abstracted from classes of stimuli, or of extracting rules (as in Hauser, Weiss and Marcus (2002) for tamarin monkeys), are actually cases of learning to calculate the analog or continuous estimates of variability among the perceptual classes of stimuli, or of simply learning some perceptual regularities. This argument in essence rests on the supposition that learning to calculate the analog estimates of variability among the perceptual classes of stimuli or learning some perceptual regularities is not akin, at least in kind, to the cognitive capacities in humans that support analogical reasoning or reasoning about generalizations, rules and other schemas. This argument is flawed on the grounds that the capacity to entertain thoughts and manipulate representations that underlie reasoning is assumed to be the standard against which the kind of mentality non-human minds may possess is to be measured. After all, non-human animals may have a kind of mentality that has nothing to do with explicit inference-making activities or even reasoning. And if animals do not engage in such reasoning by their very cognitive make-up, it is pointless and at best trivial to compare the human type of mentality to the non-human animal type(s) of mentality from this particular angle, for it may be like saying, when talking about the differences between two English letters, that the letter ‘O’ has an oval shape and the letter ‘H’ has a rectangular shape. The other problem with this argument is that the very argument of these researchers is itself a product of the kind of reasoning activities humans engage in and are adept at.
If this is so, it is senseless to appeal to that which not only is used as a significant parameter for comparing differences in kind between human and non-human animal minds, but also remains responsible in humans for the construction of the very argument that (re)presents what it originates from as a significant parameter for comparing the differences between human and non-human animal minds. It is like having an argument for certain differences between A and B on the basis of the parameter X and then employing something called Z in B which may not exist in A but produces the very argument for certain differences between A and B on the basis of the parameter X. Needless to say, the argument for any differences in such a scenario will be cripplingly biased against A. At the end of the day, one cannot have it both ways! Furthermore, in the present context the problem posed by this type of argument does not simply arise. Thus far we have shown that differences between the human type of mentality and the types of mentality that plants and other organisms may have do indeed exist. These differences are a matter of degree, insofar as the greatest variety of the mental structures can be attributed to humans with plants and other organisms possessing smaller ranges or subsets of these very mental structures. On the other hand, these differences are a
matter of kind to the extent that these differences are interpreted by us to yield significant divergence in kind. For instance, given the present evidence, mental structures for modal and counterfactual expressions in natural language may not be present in plants and the animal world. If these mental structures are thought to be of a different kind, then they certainly are different in kind. Ultimately, it is a matter of our conception because that is the way they can be conceived of in our theoretical view. Finally, the present formulation of mental structures for plants and other organisms or creatures is not couched in terms of cognitive processes. The advantage that accrues from this is that the proposed mental structures for plants and other non-human animals are to be seen as the products of cognitive capacities and abilities in non-human organisms, regardless of whatever way the operations or processes underlying these cognitive capacities and abilities are conceived of. Therefore, the question of whether or not certain cognitive processes obtain in non-human animals and plants is not germane to the present view of the differences and similarities between human and non-human mentalities. Second, as indicated above, within the present context it is easy to demonstrate how complexity differences in mental structures in plants, non-human animals and humans can be thought of. It is the exact variety of mental structures in humans that makes the human mentality more complex than both the type of mentality of plants and the mentality of other non-human animals, and similarly, the exact variety of mental structures in non-human animals makes the non-human animals’ type of mentality more complex than that of plants.
Having a greater variety of mental structures permits an organism or a creature to establish mappings between them, associate one mental structure with another or even with some other mental structures, and also to manipulate the mental structures in order to achieve a greater degree of freedom in coping with internal and external/environmental challenges. Now one may object that many instances of mental structures formulated above have higher-order relations defined on Lex, and that it is misleading to expect non-human animals to have these higher-order relations instantiating certain mental structures. This fear is misguided because on the present proposal the level of embedding of meaning relations does not count towards complexity differences in mental structures among different species. The most clear-cut reason is that what is expressible in one language in a single form may be expressed in another in more than one form, which can undermine the very basis of making out complexity differences in mental structures in terms of the order of relations. This is because in such a case complexity differences in mental structures can be traced to a range of distinctions made in human languages—which as a whole militates against the very idea of having organism-independent mental
structures. For example, what the verb ‘gallop’ in English means cannot be expressed in one word in Hindi or Bengali, and if so, we shall require a relation for expressing the content of ‘gallop’ in languages like Hindi or Bengali. Given that we must have a language (whether natural or artificial) to theorize or hypothesize about mental structures—and this is an inescapable symbolic constraint imposed on us, we must not be ensnared in choosing between one language and another in order to flesh out the formulations made for mental structures. In fact, any human language other than English could have been chosen for the formulation of mental structures by way of meaning relations defined on Lex. If this book were in Russian, for instance, the formulation of mental structures would have been done on the Lex of Russian. So this is a choice we are bound to make. One may still insist that subtle distinctions in the contents or conceptualizations of words across languages may play up differences in mental structures when they are formulated from the lexicons of different languages. For example, the word ‘kaukokaipuu’ in Finnish means ‘longing for a visit to faraway places’, and this cannot be expressed in one word in English. Similarly, the pronouns of many languages conflate more content than is possible in other languages; the word ‘vo’ in Hindi, for instance, can refer to a human or even any inanimate entity, while pronouns in English refer exclusively either to a human (‘him’/’her’) or to an inanimate entity (‘it’/’this’/’that’). It may also be thought that such differences can spoil the neutral character of mental structures by rendering them (human) language-relative. That this argument is groundless can be shown by considering an important yet overlooked point. 
It is not just the difference between expressions of similar contents across languages that contributes to concomitant differences in mental structures—the difference between expressions having similar contents within the same language can also lead to concomitant differences in mental structures. Consider, for instance, the subtle differences in meaning between the words ‘consternation’, ‘dread’, ‘fright’ and ‘fear’ in English. Plainly, the meaning of ‘fear’ is more general, whereas each of the other words isolates subtle shades of significance of the state of being afraid. In this particular case, choosing one word rather than the other for a certain meaning relation that individuates a mental structure certainly has a consequential import for the mentality of a given creature. From this perspective, one must be cautious about attributing mental structures to other organisms so long as one intends to construct meaning relations from the Lex of a natural language. We cannot, for instance, expect other organisms to have mental structures instantiated by meaning relations containing words like ‘plasma’, ‘laptop’, ‘dissertation’, ‘paradox’, ‘mind’, ‘electron’ etc. That is why the most general words have been used in the construction of meaning
relations for mental structures of non-human species until and unless we find strong evidence for sophisticated shades or layers of conceptualization in the mental structures of some non-human species. Third, it may seem to someone that the meaning relations constructed for mental structures have intuitively recognizable meanings that can be characterized only by our own interpretations. And if this is so, it may not be immediately clear how the mental structures the meaning relations individuate can be extended to non-human organisms or creatures. It looks, on the face of it, like a strong argument against the possibility of having mental structures individuated by meaning relations. But careful scrutiny of this argument reveals that it is not tenable. The particular interpretation of any specific meaning relation constructed for a given mental structure has been fixed within the behavioral and biosemiotic context of the abilities and capacities of plants or animals. That is, the interpretation that a specific meaning relation picks up is already the outcome of our interpretive or (bio)semiotic processes, and hence no meaning relation that individuates a given mental structure is uninterpreted. Intuitive recognizability cannot be part of mental structures, precisely because mental structures are not in themselves interpretable (also discussed in Chapter 1). Meaning relations are thus interpreted meanings. The problem of interpretation in the sense recognized in this argument appears only when a dynamic process is sort of reified and the individuation of some symbolic structure (logical forms, for example) oscillates between two different possibilities—the property of semantic interpretation taken as an abstraction and the psychological process of interpretation (see for discussion, Mondal 2014a).
This does not apply to meaning relations, since they are interpreted meanings, and by virtue of this, they are always abstractions which are independent of cognitive operations and processes, as also emphasized above. Plus the mental structures meaning relations instantiate are beyond the realm of interpretation per se. Only in this sense do mental structures integrate and unify the principled distinctions drawn between vegetative sign systems (iconic connections by way of recognition), animal sign systems (indexical connections by way of associations) and linguistic sign systems (symbolic relations), or between phytosemiosis and zoosemiosis. These distinctions as made by Kull (2009) reflect the three-fold distinction between types of signs in the Peircean schema of signs consisting of icons (signs that carry a physical resemblance or similitude to what they stand for), indexes (signs that possess a causal relationship to what they stand for) and symbols (signs that have no physical or causal relationship to what they stand for). Mental structures integrate and unify the distinctions between different types of sign systems because they do not in themselves require interpretation but rather help determine signs and
their relations, and different types of sign relations thus converge upon the same basis provided by mental structures for their constitution and/or determination. Different combinations of the mental structures thus characterized identify distinct forms of being minded.

3.3 Summary

This chapter has provided a formulation of mental structures and then demonstrated how they can be applied to linguistic constructions across and within different natural languages. An examination of linguistic structures encompassing a gamut of linguistic phenomena across and within languages shows the viability of the notion of mental structures in the sense formulated in this chapter. It is also shown that the distinct structures of possible minds of plants and other non-human animals can be aligned with certain differences in mental structures in a systematic manner. But there is another challenge we are yet to face: the applicability of mental structures to digital machines has to be demonstrated. We shall take up this challenge in the next chapter (Chapter 4).

chapter 4

Natural Language, Machines and Minds

This chapter will address one of the most fundamental questions regarding the nature of the type of mentality that machines, especially digital machines, can be supposed to have. The question about the role of computation in revealing aspects of machine mentality is perhaps one of the most vexing ones. A number of confounding issues regarding the relationship between computation, machine mentality and human cognition still prevail. Understanding machine cognition has never been easy, not only because of these confounding issues but also because of the very nature of machines, which are for some of us merely artifacts. Hence raising questions concerning machine mentality or machine cognition appears to run the risk of inviting panpsychism—the belief that any matter has some form of mentality or consciousness. Understanding exactly how these issues make it not just difficult but severely paralyzing for us to make sense of machine cognition vis-à-vis mental structures will constitute one of the running themes of this chapter. More crucially, this chapter attempts to show that understanding machine cognition in its barest possible form involves disentangling a number of confounding knots. We shall not try to draw up a theory of machine cognition fleshed out in the broadest possible detail. However, an attempt will be made to clarify certain issues in order to see how an understanding of machine cognition can be set against the possibility of having a range of mental structures. As we shall see, the formulation of mental structures will turn out to be handy for the appraisal of the possible structure of machine mentality. To sum up one of the take-home messages of this chapter at this stage: machine cognition has to be understood in its own terms, not with respect to human cognition or cognition of any other kind. This requires that a number of clarifications be made.
It may be observed that the criteria for having minds or possessing cognition cannot be decided without looking into the capacities and behaviors of an entity to which these criteria are going to be applied. It seems natural if one points out the inappropriateness of the term ‘cognition’ when applied to machines. As discussed in Chapter 3, we aim to check if machines can be said to possess mental structures by scrutinizing the evidence for various actions and abilities of machines, given the condition that the ascription of a certain mental structure or a range of mental structures must have the descriptive and explanatory power to account for the actions and abilities of machines. This will be the guiding principle in what will follow

© koninklijke brill nv, leiden, 2017 | doi 10.1163/9789004344204_005


in this chapter. Given these considerations, the notion of cognition applied to machines will be merely a way of talking about or assessing the structural form of a type of mentality that we want to ascertain whether machines can at all have. In laying bare what is at stake, we also resist the temptation to make any claim as to whether understanding machine cognition can provide support to strong ai or weak ai (or to human-level ai or even agi (Artificial General Intelligence) in the sense of Goertzel (2007)). Any such presumed link will in fact be challenged in the present chapter, since understanding machine cognition does not intrinsically liaise with any claims in support of or against strong/weak ai or human-level ai/agi. In addition, nor is the goal to list disparities, if any, between what we can reasonably call machine cognition and what machine cognition is presently supposed to be. As also discussed in Chapter 1, there are lots of things that both humans and machines can do (chess playing, calculations etc., for instance), and there are other things that supposedly differentiate machines from humans (consciousness, emotions, understanding meaning in experiential context etc.; see for details Dreyfus (1992), Haugeland (1998), Tallis (2011)). The present purpose is not to show that some cognitive capacity in machines is the result of computation while in humans that cognitive capacity is not the result of computation. Or conversely, the purpose is not even to show that it is the other way round—that some cognitive capacity in humans is the result of computation but that cognitive capacity in machines is not the result of computation. In fact, these ways of investigating which cognitive capacities derive from computation either in machines or in humans beg the question.
Such an approach to the exploration of machine cognition is a re-description of the problem as we go about figuring out which aspect of intelligence (or cognition) is/is not the result of computation in machines, and which aspect of intelligence (or cognition) is/is not the result of computation in humans, primarily because it is this very nature of intelligence (or cognition)—whether of humans or machines—that is least understood. For if we claim to have grounds for believing that some cognitive capacity/process in humans or machines ensues (or does not ensue) from computation, we must concede that the case for that very belief cannot be prejudged. In a nutshell, the underlying idea is that all computation in machines emanates from potential or possible computation which is realized or actualized through human cognition (whatever that may be/turns out to be), and that machine cognition constitutes the state of this potential computation which results in (actualized) computation through the human interpretation. Hence the human interpretation as part of a semiotic process brings about


the actualization of computation from the state that machine cognition constitutes, and this derives from a kind of semiotic causality. Semiotic causality, broadly conceptualized, is part of semiotic causation, which consists in bringing about effects through interpretation (Hoffmeyer 2007). Essentially, this chapter will contend that machine cognition is not the consequence of computation (which is what has been thought all throughout) but rather engenders computation through the semiotic processes of human cognition. If machine cognition itself produces computation, and computation is a concept introduced by a human cognitive agent observing and interpreting a system, then any object that can be interpreted as computing must have machine cognition. This idea has also been elaborated on in Mondal (2014b). The next section will pinpoint two sources of all the mess centering on machine cognition. These are chiefly (i) attempts to explain human cognition in terms of computation and (ii) attempts to model/simulate (aspects of) human cognitive processing in machines. After the pertinent issues are clarified, a view of machine cognition vis-à-vis the embedding of varieties of mental structures will be sketched out with possible ramifications for the relation between the human mentality and the type of mentality machines can be said to possess. One corollary that arises from this is that the role of linguistic structures is limited to constraining but not constituting the human semiotic processes that filter out possible computations that could have materialized by springing from machine cognition. However, this must not be taken to indicate that language has an intrinsic connection to computations that are derived from machine cognition by means of the human semiotic processes.
Far from it, the second section in this chapter (that is, Section 4.2) will demonstrate that natural language, or rather natural language grammar, has no intrinsic connection whatsoever to computations. Even if it is tempting to believe that language is intrinsically computational in nature not merely in its computational properties but also in its mental instantiation, this section strongly argues that natural language has nothing intrinsic to do with computations, however conceived. This is intended to demonstrate that if mental structures extracted from natural language can be forwarded to define potential cognition in machines, it does not thereby follow that natural language must in itself have an intrinsic linkage to computations which can be exploited for smoothing down the transition to cognition in machines, potential or otherwise. That is, no inference about machine cognition can be derived from any postulation that natural language grammar has an intrinsic connection to computations that, as far as the argument in the first section goes, flow from machine cognition. For one thing, the postulation itself is unfounded. But on the other hand, it is just a short step to realizing that the way of gaining entry into machine cognition cannot


be short-circuited by pulling computations out of natural language grammar and then having them yield machine cognition. This is simply because the first section of this chapter argues that computations cannot generate, and hence cannot precede, machine cognition. And if so, it would be a fallacious move to draw computations out of natural language (which is presumed to have an intrinsic linkage to computations) and then to allow them to reach the point of machine cognition. This will be clarified as we move along. Suffice it to say that the first section on machine cognition carries over implications for the second section as well.

4.1 Machines and Minds

The entire gamut of cognitive science lays bare a striking observation: any account or description that has been made thus far falls far short of a proper grasp of human cognition. We still have a very weak grasp of what human cognition looks like or how it emerges despite a formidable understanding of the human physiology. Human cognitive processing in many domains as diverse as vision, language, thinking, memory, emotion etc. is not yet deeply understood (Gazzaniga 2009; Minsky 2006). But what about machine cognition? Due to the digital revolution following a path made possible through Turing's and Von Neumann's models of abstract computation, the nature of computer architecture and computational operations is well-understood. Turing's (or Von Neumann's) model of abstract computation is instantiated in any digital computer in the physical universe, and that is why Turing's abstract machine is descriptively equivalent to any digital computer that we see today. Despite all this, such understanding does not hit the mark when it comes to an unequivocal understanding of machine cognition. An interest in understanding the nature and form of machine cognition is not awkward, given the fact that computers engage in a lot of tasks—whether at the hardware level or the software level—that have analogues in human cognition. They include processing a variety of inputs and outputs through well-defined operations in logical reasoning, making calculations, game playing, memorial processes, etc. Still what remains palpably evident is that machine cognition is far less understood than even human cognition. One of the underlying reasons is that the notion of computation both in computer science and in cognitive science is itself vaguely understood (Piccinini and Scarantino 2011; Shagrir 2006; Fresco 2012).
Even if there can be claims about the transparency of the notion of machine computations, these claims do not hold out in the face of the challenge to unlock machine cognition, let


alone machine consciousness (if there is any such thing) (Fresco 2011). So many questions still remain. What does it mean for a machine to possess cognition? What does such cognition—whatever it ultimately turns out to be—consist in? For all that is thus far understood about machine operations, we still do not really know the answer. Thus, even if such questions may sound naïve or are otherwise reasonably significant, these questions are fraught with formidable difficulties that have ultimately engendered misunderstandings about the very nature and form of machine cognition. Basically, the misunderstandings come from two sources. The first comes from attempts to explain human cognition in terms of computation, and the second from naïve assumptions implicit in attempts to model/simulate (aspects of) human cognitive processing in machines with the belief that machines would display aspects and properties of human cognition. Both these sources of misunderstanding will be dealt with at length as this will turn out to be crucial in seeing where things have gone wrong. The equation human cognition = computation has seemed to be perfectly comprehensible and innocuous. In fact, the entire bedrock of the computational theory of mind (Fodor 1975; Pylyshyn 1984; Chalmers 2012) is grounded in this, as also discussed in Chapter 2. It is not simply a computational metaphor, that is, saying that the human mind is like a computer. Rather, the computational theory of mind goes farther than this and states that mental operations are mappings from designated inputs to outputs according to some well-defined rules by means of symbolic manipulation of digital vehicles in the form of linguistic strings. There has been an avalanche of criticisms of this view (Putnam 1988; Searle 1992; Penrose 1994; Bishop 2009) as this is arguably one of the central dogmas of cognitive science. We shall not devote our attention to these criticisms.
Rather, an attempt will be made to ascertain how the fallacy implicit in the computational theory of mind makes the notion of machine cognition confused. And the nature of this fallacy, it will be maintained, emanates from some deeper ontological and epistemological problems and confusions not directly addressed in the well-known criticisms of the computational theory of mind. The computational theory of mind holds that mental operations are symbolic manipulations of digital vehicles (in the form of linguistic strings) by means of mappings from designated inputs to outputs according to some well-defined rules. But the mapping from some level of description of a machine state to some level of description of any other physical system is not intrinsic to the mapping itself, and hence mappings from designated inputs to outputs according to some well-defined rules have no intrinsically determined interpretation. There can be many possible interpretations that can be assigned to


a set of an innumerable number of mappings to meanings, as Deacon (2012) argues. Deacon has also raised the following question: which specific mapping to meaning (out of the space of many logically possible interpretations) is a physically implemented operation associated with for it to be called a computation? The notion of computation is then potential computation until a specific mapping to a meaning is achieved through an assigned interpretation, as he maintains. In other words, the mapping of some causal structure or organization of a physical system to the formal structure of the state of a computation cannot have any unique predetermined interpretation—an interpretation of this mapping out of a number of possibilities has to be fixed. What is of particular concern in this connection is that this interpretation is assigned by humans, not by machines or algorithms in themselves. That means potential computation becomes actualized through our interpretations. That is why computation cannot be considered to be observer-independent (Bishop 2009). In fact, a similar line of reasoning has also been pursued by Smith (1996), who has shown how computation can have embeddedness in the niche in which computation can have any realization/instantiation. Given that this is the case, saying that mental operations are symbolic manipulations of digital vehicles by means of mappings from designated inputs to outputs makes the notions of human cognition as well as machine cognition inextricably convoluted. If, for its actualization, computation by its very nature involves the interference of the human cognitive system, human cognition cannot be modeled on computational operations because what is modeled is intrinsically and indissociably linked to the model itself. If computation cannot be understood except thanks to the contribution made by the human cognitive system, human cognition cannot certainly be understood or modeled in terms of computation.
Since we can never know what computation is like in mind-independent terms, how can we then use computation as a model for the human cognitive operations? There is indeed a deep circularity involved here. It is as if human cognitive operations are themselves explained in terms of the very human cognitive operations the functioning of which involves, among other things, the actualization of computation. Therefore, on the one hand, computation is intrinsically coupled to the human cognitive system, and by virtue of that, the boundary between machine cognition which may be supposed to be telescoped through computation, and the human cognitive system appears to be porous. But, on the other hand, the human cognitive system is certainly independent of and separate from machines in the sense that there is, in Pattee’s (2008) words, an ‘epistemic cut’ between the human cognitive system and machines, especially when humans impose interpretations of (computable) functions on machines. Pattee appeals, just as Rosen (2000) does, to Von Neumann in demarcating the


boundaries between a system that is measured or observed (e.g., machines in the present context), and the observing or interpreting system (e.g., human beings). This duality—the duality wedged between the coupling of the human cognitive system to machine computations, on the one hand, and an ontological independence of humans from machines on the other—is the foundation on which the new view of machine cognition will be erected. Stated in a different manner, this duality, rather than spelling out a paradoxical crisis, will serve as a boon, as we will see shortly. When the two aspects of the duality—the coupling of the human cognitive system to machine computations and an ontological independence of humans from machines—are viewed through a certain ordering relation, the duality can evince a more fundamental organizational feature or principle of cognition in its substance-independent form. In other words, this duality is split between two frames: in one frame machine cognition is segregated from and exists independently of human cognition, and in the other frame, they interpenetrate each other. The crucial thesis to be advanced in the present context is that the first frame logically and temporally precedes the latter. There is also another way of seeing the problem we started with. The concept of machine cognition becomes confused because of the apparently puzzling duality discussed just above. If the putative concept of machine cognition (whatever it turns out to be) is underpinned by the very apparatus that also underlies human cognition, what is it in machine cognition that is independently recognizable? If both human cognition and machine cognition are underpinned by the same apparatus, that is, by computational operations (qua the computational theory of mind), human cognition and machine cognition should look similar to each other. This is profoundly odd.
If there had not been any differences between human cognition and the putative character of machine cognition, there would not have been so much fuss over what humans can do but computers cannot or vice versa. The fuss over what humans can do but computers cannot or vice versa prevails among scholars and lay people alike, not merely because there are differences between computers and humans in scale, efficiency, operational parallelism, speed and (evolved) heuristics, but also because we tend to ascribe intentionality to machines with all its powers. Furthermore, the fallacy implicit in the computational theory of mind uncovers the other horn of the dilemma. If human cognition makes computation actually possible by remaining an integral part of it, what is left of computation that can help one understand machine cognition in a substance-independent manner? The problem is simply this: how can one understand machine cognition when machine cognition by way of computation is itself made viable by human cognition? How can one get out of oneself and


try to understand machine cognition while remaining indissociably locked in machine cognition?1 If this is exactly what impedes an understanding of machine cognition with respect to computation, the other fallacy springs from the presumed capability of machines to exhibit human-like cognition in virtue of having human cognition modeled/simulated in machines. There has been a long tradition of attempts to build intelligence into machines by studying human cognitive processing and then trying to model it in machines. There is an overarching belief that making computers intelligent requires modeling aspects of human cognitive processing in machines (Taylor 1991; McCarthy 2007; Starzyk and Prasad 2011). This may well emanate from what Proudfoot (2011) calls the forensic problem of anthropomorphism in ai. The Turing test is, of course, another way of grasping the issue. As discussed in Chapter 1, Turing had a different goal in mind when he devised the Turing test, but since then cognitive science has taken a detour by motivating the belief that machines can possess cognition only by having human cognition modeled/simulated. Closer inspection reveals that this is also flawed. Understanding machine cognition does not involve imposing the form of human mentality on machines because in trying to understand the type of mentality machines can have we have to bear in mind that we are not dealing with live organisms. It is worth noting that computational modeling to build machine cognition on the basis of human cognition is flawed not because human cognition is something that cannot be implemented or simulated in machines2 (in fact, this is exactly the familiar kind of argument found in the literature, as discussed above), but because the very nature of machine cognition cannot be telescoped through human interpretations of what machine cognition should look like.
In other words, human interpretations of what machine cognition is like cannot help understand the nature of machine cognition, for human interpretations of machine cognition carried over from features of human cognition

1 In this connection, one may also relate this question to Rosen's (1991) notion of a complex system within which organization is significant insofar as organizational principles within a system as a whole causally determine the relations and processes that obtain in a system. In the current context, this means that the organizational principles pervade and encompass a systemic whole that connects machine cognition by way of computation to the human cognitive system at a more fundamental (ontological) level of organization.

2 Not everybody who believes in computationalism thinks that human cognition must be identified with computation, and thus the claim that human cognition is something that cannot be implemented or simulated in machines is a weak objection in this sense. For instance, it is possible to say, by following Rapaport (2012), that human cognition is computable, regardless of whether human cognition is computation or not. However, this does not affect the force of the arguments marshaled in this chapter.


will project a cluttered and adulterated picture of machine cognition. After all, human cognition may be fundamentally different from machine cognition at a more basic level, and in addition, human cognition interpenetrates machine cognition when humans define computable functions on machines. At this stage, this argument sounds similar to Dennett's (1996), as he believes that the human mind does not necessarily have intrinsic intentionality as opposed to derived intentionality. If computers come to have derived intentionality assigned by us, then humans, he argues, should also have derived intentionality since humans are designed by evolution. But this is not what is claimed here. The point raised here certainly advocates that machines have derived intentionality and humans have intrinsic intentionality (contra Dennett), but the very fact that machines have derived intentionality veils the nature and form of machine cognition. One often talks about reducing complexity by means of optimization, robustness, efficiency and cost-effectiveness in cognitive modeling of human cognitive processes in machines. This originates from human interpretations of what computations should look like. In fact, the concept of abstract computation implicit in the formalization of Turing machines is neutral with respect to optimization, robustness, efficiency and cost-effectiveness, which are attributes that follow on from a presumed fit with the features of human cognition, given that human cognition is essentially order-creating and thus appears to violate the second law of thermodynamics (Torey 2009; Deacon 2012), according to which a thermodynamic system approximates to a state of increasing entropy, which instantiates a measure of disorder. Although we do not really know what machine cognition really is, machine cognition, to be sure, need not be order-generating.
This becomes much clearer when we appreciate that we put computers to use in our humanly affairs, and for that reason, we require algorithms to be designed with a purpose that fits well with the needs of the tasks concerned as well as with our own demands. Machine cognition in itself, when stripped of such human attributes or need-based interpretations, may well be disorder-generating, non-self-organizing and perhaps complexity-increasing. To put it all together, we can say that machine cognition is a confused notion owing to a number of confounding issues. If it is due to the interpenetration and interpretations of the human cognitive system that machine cognition looks confused, why then do we need to look into machine cognition beyond any such interpretations and interpenetration? The reason is that there is a way of conceptualizing machine cognition by relating it to mental structures which have nothing whatever to do with human psychological processes underlying interpretation. This requires us to re-conceive the notion of machine computations in order to see how an understanding of machine cognition


emerges. The proposal in a nutshell is that machine cognition can be characterized as a trajectory in the space of all possible combinations or permutations of mappings between designated inputs and outputs. The trajectory is actually an abstract projection across the entire space of all possible combinations or permutations of mappings between designated inputs and outputs, just like a curve connecting the points of a function on a graph is an abstract line. The projection can be schematized as traversing the entire space of all possible combinations or permutations of mappings between designated inputs and outputs. The role of the human interpretation in this context is crucial. The human interpretation brings down the formidable space of mapping possibilities covered by the trajectory to a vastly smaller set of mappings between designated inputs and outputs. That is, the human interpretation brings down the vast number of mapping possibilities to fewer actual mappings of inputs to outputs. At this juncture, it appears that the human interpretation possesses a kind of causality. But how can the human interpretation come to have a causality that accounts for or instantiates a change from some level of description of an abstract state of possibilities to some level of state of a physical system? After all, this appears to be spooky, for the human interpretation process is not a physical entity, and it looks strange how the human interpretation causes a vaster region of mapping possibilities to get scaled down to a far smaller range of mappings between designated inputs and outputs. The mystery evaporates when we see that the human interpretation works just this way in our day-to-day communication—the speaker produces some signs and this affects the beliefs and understanding of the recipient(s), which is also an activity that effects a change from an abstract state of thoughts and ideas to a certain state of a physical system, and vice versa.
Such processes are semiotic processes, and hence the process that brings forth computation out of machine cognition is a semiotic process per se, and the causation involved is a kind of semiotic causation. Semiotic causation, which consists in bringing about effects through interpretation (Hoffmeyer 2007), is also a part of human cognitive processes. In fact, Tønnessen (2010) thinks that semiotic causation, in addition to the four Aristotelian causes, namely material, formal, efficient and final causes, is necessary for all life processes. The causal powers of signs, interpretation processes and mental/cognitive states are what drive semiotic causation. It needs to be emphasized that the causal power of the human interpretation by way of humans' intrinsic intentionality is primary and perhaps more fundamental than any form of causal efficacy signs become imbued with. Of course, signs and any states that are representations of signs cannot be intrinsically impregnated with causal powers, which in fact derive from the human interpretation. Only in this sense does the idea that the causal


power of the human interpretation is a fundamentally primitive concept become perspicuous.3 Looked at this way, semiotic causation bridges the gap between forms of lifeless types of mentality and (bio)semiotic processes humans engage in. With this in the backdrop, we can now move on to explicate the new conception of machine cognition with respect to computation and human interpretation processes. It has already been indicated above that two aspects of the duality—the coupling of the human cognitive system to machine computations and the ontological independence of humans from machines—can be viewed through a certain ordering relation. In fact, it is this ordering relation on the two aspects of the duality that can help disentangle the knots that have thus far blocked an understanding of machine cognition. Simply stated, the proposal is that machine cognition is actually a precondition for computation, and hence precedes computation. In the light of this understanding, the notion of machine cognition with respect to computation and mental structures can now be formalized in the following fashion. Let's assume that the meaning relations R1, …, Rn are actually all possible sets of ordered pairs that can be deemed to be inputs or outputs when n is an arbitrary number and R1, …, Rn can be defined on the Lex of a language or on a set of numbers (natural or otherwise).4 That is, R1, …, Rn contain all possible

3 One needs to be cautious about relating this to the causal-informational view of semantics, as evident in Dretske (1981), Fodor (1998). The causal-informational view of semantics demands that a causal relation—direct or mediated—obtain between the objects and the concepts or expressions that refer to those objects.
If the human interpretation by virtue of humans' intrinsic intentionality possesses causal powers, these causal powers must have derived from the human intrinsic intentionality which is a primitive concept and cannot be further decomposed (Jacquette 2011). Taken in this sense, neither expressions/signs nor objects can in themselves cause or causally determine anything in the mind (in contrast to the view espoused by the proponents of causal-informational semantics), since all relations—causal or otherwise—are distilled and derived from the human intrinsic intentionality. And if this is so, the causality of the human interpretation process derived from humans' intrinsic intentionality does not also need to be caused by anything else, mainly because humans' intrinsic intentionality is primary and more fundamental than anything else in nature.

4 This is necessitated by the consideration that either human-specific lexical items or numbers forming meaning relations can be under the space of input-output relations of machines. No specific stipulation needs to be made here in order to impose restrictions on what (numbers or lexical items) becomes part of such input-output relations. But these meaning relations are assumed to be pre-interpreted within some biosemiotic context or niche (either within the electrical system or within the human praxis of symbol use) since all mental structures are biosemiotically interpreted structures. This will lead to relevant consequences to be explored later in this section.


ordered pairs which are taken to be inputs or outputs that can be mapped from some level of description of a machine state to some level of description of a certain physical system. Suppose, for instance, there is a meaning relation Ri = {(a, b), (c, d), (e, f)} when i ≤ n; the members of Ri—namely, (a, b), (c, d) and (e, f)—are then inputs or outputs that can be mapped from some level of description of a machine state to some level of description of a physical system. The Cartesian product of any two relations out of the R1, …, Rn can be represented as Ri × Rj when i ≠ j or i = j. Hence, if we want to have all possible Cartesian products by choosing each time two relations out of R1, …, Rn, the number of permutations (with repetition) will be n². Thus, if n = 5, we can have 5² = 25 pairs of sets from R1, …, Rn for the construction of the Cartesian product. So we can have 25 different Cartesian products from R1, …, Rn: (i) R1 × R2, (ii) R2 × R1, (iii) R1 × R3, (iv) R3 × R1, (v) R1 × R4, (vi) R4 × R1, (vii) R1 × R5, (viii) R5 × R1, (ix) R2 × R3, (x) R3 × R2, (xi) R2 × R4, (xii) R4 × R2, (xiii) R2 × R5, (xiv) R5 × R2, (xv) R3 × R4, (xvi) R4 × R3, (xvii) R3 × R5, (xviii) R5 × R3, (xix) R4 × R5, (xx) R5 × R4, (xxi) R1 × R1, (xxii) R2 × R2, (xxiii) R3 × R3, (xxiv) R4 × R4, (xxv) R5 × R5. Let α1, …, αk denote all possible Cartesian products derived by means of a permutation (with repetition) by choosing each time two relations out of R1, …, Rn. The number k here will depend on the exact value of n; in the example given immediately above, k = 25 because n = 5. There are two things that need to be clarified now. First, the Cartesian products α1, …, αk yield a hugely diverse range of mappings/arrows from inputs to outputs taken from R1, …, Rn. That is why a permutation (with repetition) by picking up each time two relations out of R1, …, Rn has been necessary.
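The combinatorics just described can be sketched in a few lines of code. The following is a minimal illustration, not part of the book's formalism: the five toy relations and their placeholder members are invented for the example, and the point being verified is only that choosing two relations at a time with repetition yields n² = 25 Cartesian products.

```python
from itertools import product

# Five toy meaning relations R1, ..., R5: each is a set of ordered pairs.
# The pair members are arbitrary placeholder symbols, not drawn from any Lex.
R = {
    1: {("a", "b"), ("c", "d"), ("e", "f")},
    2: {("g", "h")},
    3: {("i", "j"), ("k", "l")},
    4: {("m", "n")},
    5: {("o", "p"), ("q", "r")},
}

# All ordered choices of two relations, repetition allowed: n**2 permutations.
pairs = list(product(R, repeat=2))  # includes (i, i) as well as (i, j)

# Each Cartesian product Ri x Rj is a set of couples of ordered pairs.
cartesians = {(i, j): {(x, y) for x in R[i] for y in R[j]} for i, j in pairs}

print(len(cartesians))  # 25, i.e. 5**2, matching the enumeration in the text
```

Note that no higher-order relation is defined on any of these products, in keeping with the first clarification above: the sketch only enumerates the space, it does not pick out an interpretation.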
What is significant is that no higher-order relation is to be defined on any αi when i ≤ k, since this definition of a relation (which includes functions as well) will involve human interpretations and interpenetration. Second, there exists a rationale behind having R1, …, Rn as sets of all possible inputs and outputs that can be mapped from some level of description of a machine state to some level of description of a physical system, for we certainly could have had just two sets, say A and B, for all possible inputs and outputs respectively (that is, A for possible inputs and B for possible outputs). The sets R1, …, Rn are in fact partitions of the denumerably finite set of all possible inputs and outputs, and thus we can associate any two relations from R1, …, Rn in any order for any number of possible inputs and outputs (because a certain relation from R1, …, Rn may well contain fewer members than some other relation). In this connection, it is also vital to recognize that the inputs and outputs for mapping from some level of description of a machine state to some level of description of a physical system are ordered pairs because ordered pairs as members of meaning relations make up the ingredients or components of mental structures.


The formalism developed in Mondal (2014b) is specified in terms of entities, but not ordered pairs. However, this does not make much of a difference, since singleton entities such as natural numbers can also be made into ordered pairs by incorporating certain indices into the constructed ordered pairs. For example, the numerical element 3 can be made into an ordered pair by having i as an index for 3, as in (3, i). The present formalism is much more enriched and thus in tune with the expression of various types of mentality telescoped through mental structures. Machine cognition in the current context will be identical to a projection of a trajectory spanning the space of the power set of the union of α1, …, αk. This can be written as

(83) P(⋃i ∈ N αi)

Here N is the set of natural numbers. For notational convenience, let M denote P(⋃i ∈ N αi).

What is suggested here is that an abstract trajectory spanning the space of M is nothing other than the entire extension of M or some partition of M. This abstract trajectory is, so to speak, distorted in its curve only when humans try to define a relation (including a function) on the associations or mappings of relations from R1, …, Rn, that is, on the Cartesian product of any pair of relations from R1, …, Rn. As a matter of fact, this distortion of the curve of the trajectory in M is to be identified with the curtailment of the space of all possible mappings to a set of mappings of a much smaller size, which is caused by any interpretation(s) by human beings. Translated in these terms, machine cognition is a kind of frozen (not actual) space of associations between designated inputs and outputs based on mapping possibilities. On this proposal, this frozen space is identical with the projection of a trajectory through M in the way characterized just above. This frozen space is what leads to computation, and hence computation is the after-the-fact product of machine cognition. So machine cognition engenders computation—it is not the other way round. It is often argued that any information system—namely, a system that processes information in some encoding or format—that involves the manipulation of discrete packages of information involves computation (Dietrich and Markman 2003), but one needs to be cautious in differentiating the frozen space that consists in potential manipulations of discrete packages of information from what is actualized in the context of any interaction of the information system with human beings. It is this distinction which is crucial in the current context, because machine cognition has to do with mapping possibilities.
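The construction of M, and the way a humanly defined relation curtails the space of mapping possibilities, can be given an illustrative rendering. The arrows below are made-up stand-ins for members of the αi, and the single-valuedness test is one hypothetical example of a humanly imposed relation, not the book's own definition.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as frozensets (the set-theoretic power set P(s))."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

# Two toy Cartesian products alpha1, alpha2: sets of arrows between ordered
# pairs. The pair names are illustrative placeholders.
alpha1 = {(("a", "b"), ("c", "d"))}
alpha2 = {(("e", "f"), ("g", "h")), (("a", "b"), ("g", "h"))}

# M = P(union of the alphas): the full, "frozen" space of mapping possibilities.
M = powerset(alpha1 | alpha2)
print(len(M))  # 2^3 = 8 subsets for 3 arrows

# A human interpretation that defines a function on the arrows curtails M to a
# smaller set: here, only the subsets that are single-valued maps survive.
def is_function(subset):
    sources = [src for (src, _) in subset]
    return len(sources) == len(set(sources))

curtailed = {s for s in M if is_function(s)}
assert len(curtailed) == 6  # the curtailment shrinks the space from 8 to 6
```

The point of the sketch is only the shape of the claim: defining a relation on the arrows selects a proper subspace of M, which is what the text calls the distortion of the trajectory's curve.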


Computation in machines, on the other hand, ensues from the reduction of the space of all potential manipulations of discrete packages of information to a cut-down configuration of the manipulations of discrete packages of information, when and only when humans come to interact with the system concerned.5 To put it in an abbreviated manner, machine cognition ≺ machine computation but not machine computation ≺ machine cognition, when ≺ denotes a relation of logical and/or temporal precedence. In a certain context, machine computation may also be thought of as a subset of machine cognition, only insofar as some subset of M is interpreted as implementing a mapping from some state of a physical system onto the formal structure of a computation. For example, if M contains various sets one of which is X when X = {αj} = Y × Z such that Y = {(g, h)} and Z = {(o, p)}, the mapping of (g, h) onto (o, p) may be considered a state of some computation as a function from Y to Z is defined by a human or humans. What is noteworthy is that this way of looking at machine cognition saves us from the pitfalls of the confounding issues associated with machine cognition and human cognition. Additionally, this also helps us understand that the projection of a trajectory spanning the space of M can perhaps be reflected in any physical system including rocks, tables, chairs etc., and in this sense, this appears to be tangential to the issues on computation and its implementation raised by Shagrir (2012). Hence it would be interesting to see whether the present characterization of machine cognition can shed any light on the question of whether physical systems or objects can be said to be computing. 
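The example of X = Y × Z just given can be made concrete in code. The pair names (g, h) and (o, p) follow the text; the dict standing in for the humanly defined function from Y to Z is an illustrative assumption, not part of the book's formalism.

```python
from itertools import product

# The text's example: X = Y x Z with Y = {(g, h)} and Z = {(o, p)}.
Y = {("g", "h")}
Z = {("o", "p")}
X = set(product(Y, Z))  # the single arrow (g, h) -> (o, p)

# Defining a function from Y to Z (here rendered as a dict) is the human act
# that turns the bare mapping possibility into a state of a computation.
interpretation = {("g", "h"): ("o", "p")}
assert (("g", "h"), ("o", "p")) in X
assert interpretation[("g", "h")] == ("o", "p")
```

Without the `interpretation` step, X is just a set of possible arrows; with it, one arrow is read as a computational state, which is the asymmetry the text is after.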
We can say that machine cognition, as conceived of in the present context, is a precondition for computation; and if this is the case, physical systems and

5 If an actualized computation is ‘harvested’ by human cognition through a semiotic interpretation process, one may thus wonder what would happen when a machine can repair itself or perhaps even ‘reproduce’. It needs to be made explicit that the current view does postulate that machine cognition exists in a different domain—more particularly, in a domain of possibilities of mapping from inputs at some level of a machine state to outputs at some level of description of any other physical system—whereas (machine) computation is a consequence of the human interpretation defining some relation (that may well be a function) on the abstract trajectory through M constituting machine cognition. In this sense, when a machine can repair itself or perhaps even ‘reproduce’, and if the repair and reproduction have been possible through the implementation of some program(s) designed by humans, it is computations all the way, repairing (or renovating) and reproducing computations further and further in a direction away from machine cognition. Therefore, machine cognition precedes any such implementation of any program(s) designed by humans in a machine that can repair itself or perhaps even ‘reproduce’. And thus machine cognition may remain where it is, irrespective of whether or not the machine concerned repairs itself or even ‘reproduces’.


objects are to be said to be computing only if they possess machine cognition. In fact, the characterization above is relative to physical systems, which can vary in the properties that are appropriate for the defining of a mapping from some state of a physical system onto the formal structure of a computation. Since it is possible that a trajectory Ti ∈ M or Ti ⊆ M, a trajectory Ti can, of course, be a projection of something encompassing any arbitrary mappings in a kettle or a pressure cooker or even in a press. And machine cognition specifically understood in this particular case is trivial. If, for example, we say that machine cognition is a mapping from some description of a level of a machine state (say, some binary number) to (a description of) itself, this seems to be superfluous and spurious, on the grounds that any object can be said to be computing by virtue of having such machine cognition. This is too trivial to be of any substantial value. This means that there must be some regular and contingent, yet unknown, rules or laws connecting machine cognition to computation. Specified in a clearer way, this means that the mappings on inputs and outputs from some level of description of a machine state to some level of description of any other physical system must have certain identifiable regularities or contingencies. That is why an abacus can be said to possess machine cognition, thereby being engaged in computing; but a rock cannot thus be said to be computing. The missing link is provided by regularities or contingencies which originate in the semiotic processes of human interpretations. Consequently, an abacus can be said to have machine cognition and be computing only if it is, say, used and interpreted in a regular and contingent manner as an analog device (as we see it in scale models, for example).
Hence a rock does not (in a general sense) compute, simply because the mappings on inputs and outputs from some level of description of its physical state to the formal structure of a computation do not have any (bio)semiotic regularities or contingencies. Only in this sense does machine cognition lead to computation; otherwise it does not and cannot. If machine cognition is arbitrarily imposed on physical systems or objects, computation stemming from such machine cognition is bound to be trivial and perfunctory. The advantage of the present characterization of machine cognition is threefold. First, it is independent of any connection to human cognition. It is the mind-independent abstractness of machine cognition that does not presuppose any commitment to order or disorder, complexity or simplicity, constraint or randomness. In this sense, machine cognition is like an abstract schema that is realizable when the projection of such a schema can be implemented/instantiated, in a relative manner, in a physical system. Second, this view of machine cognition saves us from getting bogged down in computational panpsychism. Physical systems and objects, which exhibit machine


cognition only in those trajectories T1 … Tm (when m is an arbitrary number, and m ≠ n, k) that do fall under regular and contingent rules or laws within the semiotic processes of the human mind, can be said to be computing. In this sense, hypercomputation or quantum computation does not even seem to pose any problem, clearly because such computation may follow on from the trajectories T1, …, Tm falling under regular and contingent rules or laws interacting with M. This squares well with the stretching of the bounds of computation beyond the Turing or von Neumann type of computation. Third, the present conception of machine cognition helps integrate (bio)semiotic processes with lifeless forms of mentality rendered in terms of possible associations among mental structures. Therefore, the notion of mental structures turns out to be useful even for the characterization of a wholly distinct type of mentality in machines, which include not only digital computers but also some artifacts. We shall now focus on the relationship between computation and natural language given the new understanding of computation in the present context.

4.2 Computation and Natural Language

There is a sense in which natural language grammar can be subject to symbolic manipulations of linguistic strings by means of mappings from designated inputs to outputs according to some well-defined rules. But this does not fully clarify the nature of the relation between natural language and computation, because there can be two ways in which natural language grammar can be said to meet the conditions manifest in the symbolic manipulations of linguistic strings. Linguistic structures can be represented in machines in such a manner that certain well-defined algorithms can be constructed for machine operations to be executed on linguistic structures. This is exactly what is done in computational linguistics, which is oriented towards the computational processing of natural language. The goal in such an enterprise is to overcome engineering challenges in building natural language processing tools that can have practical utilities. The notion of computation implicit in the relationship between computation and natural language in computational linguistics in particular and artificial intelligence in general is an extrinsic notion of computation. A few observations in this connection will suffice to show that the extrinsic sense of computation is trivial and far commoner. It simply requires that computable functions, or rather algorithms, be defined on the objects which are to be computed. The underlying ontological assumption is that there are various natural language objects like words, constructions, rules


and grammar which can be analyzed, sequenced, manipulated and also represented in machines in a certain manner. Whether these objects reside in the minds of natural language users or in the inter-subjective community memory or even in the Platonic sphere does not matter much so long as the appropriate data about these objects are gathered from whatever domain one can have access to (language corpora, language users, the Internet etc.). By contrast, there is another sense in which the relationship between computation and natural language can be more inextricably intimate. The notion of computation involved pertains to an intrinsic sense of computation. Under this conception, natural language grammar as an abstract system can itself be a computing device. That is, the axiomatic system of natural language grammar can be a version of the Turing machine which writes, reads and deletes linguistic strings, which are the symbols that are manipulated in terms of the rules and constraints of grammar. That grammar can be modeled on the Turing machine is naturally part of the view that grammar as a formal system can generate linguistic strings which can be assigned sound and meaning representations. This forms the bedrock of the Generative paradigm in mainstream linguistics. The subtle point that makes this view more prominent is that the system of grammar, or rather the language faculty, which is taken to be a computational system, is instantiated or grounded in the mind. This implies that grammatical operations are actually operations of mental computation. We have more to say on this below. But before that, there is another concern about the very notion of computation that needs to be addressed. When a question on whether something is computational or not is asked, much hinges on whether the right concept of computation is applied to the phenomenon that is to be scrutinized to see whether it falls under computation.
Similar considerations apply to the present case when we focus on language and wonder what linguistic computation can be as it is supposed to be performed by the system of grammar. For all the vagueness surrounding the notion of computation, it appears that linguistic computation in the sense described here fits well with the classical sense of computation, in which inputs are mapped to outputs according to some well-defined rules by means of symbolic manipulation of digital vehicles in the form of linguistic strings. This notion of computation is the narrowest in the hierarchy of notions of digital computation (Piccinini and Scarantino 2011). Much of formal linguistics has indeed employed this notion of linguistic computation implicitly or explicitly, mainly because the representational vehicles of language are discrete in form. Still a question remains. Can we take linguistic computation as a generic computation that encompasses both digital and analog computation? Even if this is a bit difficult to answer, the answer is more likely to be no. It is somewhat


clearer that the digital notion of computation has been predominant all throughout the field of cognitive science in general and theoretical linguistics in particular; hence the analog sense of computation does not apply to linguistic computation, since in analog computation computational processes are driven and determined by the intrinsic representational content of the vehicles, which are analog in nature, whereas digital computation involves mappings of inputs onto outputs that are executed without any regard to the content-determining properties of the representational digital vehicles (O’Brien and Opie 2011). In fact, Chomsky (1980), when talking about linguistic computation in its intrinsic sense, maintains that ‘… “autonomous” principles of mental computation do not reflect in any simple way the properties of the phonetic or semantic “substance” or contingencies of language use’. This makes it quite perspicuous that linguistic computation does not plausibly cover the analog sense of computation. Therefore, linguistic computation cannot be considered to be a type of generic computation that encompasses both digital and analog computation. What is central for any notion of computation is that the essential elements of computation can be extrapolated to what we make sense of in dealing with the concept of linguistic computation. When we talk about computation, we require (i) a function that is computed, (ii) a system which computes the function and also (iii) an effective procedure (also called an algorithm). This comes out clearly from the Church-Turing Thesis, which states that anything that can be computed with an effective procedure in the physical world can be computed by a Turing machine. The computational thesis regarding the nature of human cognition has always maintained that it is the human mind or the brain that computes, as discussed in Chapter 2 and Section 4.1 of this chapter.
The rationale seems understandable when the argument pronounces that the human brain is ultimately a physical object. As emphasized above, there is an independent sense in which the physical system for linguistic computation can be described. If we follow Chomsky (1995, 2001) on this matter, it is the language faculty in the brain/mind that computes, because the language faculty is considered to be akin to a physical organ within the confines of our brain. The language faculty instantiated in the brain/mind has a computational procedure that engages in all kinds of linguistic computation. The next essential ingredient of computation is a function, or rather a computable function. In linguistic theory the domain of such functions may well correspond to the domain of formal operations that apply to structures to make structural distinctions of linguistic representations, when linguistic structures are inserted, erased and thereby altered. In other words, the functions that can fall under linguistic computation are those which subscribe to linguistic


derivation. This is the process-oriented aspect of computation implicit in the specification of the Turing machine. That the system of grammar can be considered to be executing computable functions, or rather algorithms, accords well with the abstraction-oriented aspect of computation, which consists in the specification of computable functions that can be implemented by algorithms in a system. In this sense, all that matters is the specification of computable functions to be implemented in the putative computational system of the language faculty. That the specification of computable functions that grammar as a system is supposed to execute is possible has been demonstrated for the central operations of grammar in the Minimalist view of language, especially for Merge, which is understood to concatenate syntactic objects (see Mondal 2014a). In this case, Foster’s (1992) notion of an algorithm as a sequence of transitions between the states of a machine has been useful. As discussed in Chapter 2, Merge combines two syntactic objects to form a single syntactic object (which is actually a set). Thus, for a sentence like ‘John loves a car’, we have Merge (a, car) = {a, car} and Merge (loves, {a, car}) = {loves, {a, car}}. An algorithmic representation of the operations of Merge for the phrase ‘a car’ can thus be schematized as (84), by following Foster (1992).

(84) [SO 1: a SO 2: car L: Σ] → [SO: a, car L: Σ] → [SO: {a, car} L: Σ] → [SO: Σ {a, car}]

Here, SO is a syntactic object and L denotes the label of an SO, and Σ is the actual value of L. Thus each item on the left of the colon is the label and the one on the right designates the value of that label. Each item enclosed within square brackets represents a ‘snapshot’ of the state of a computation, and the arrow represents a transition between one such state and another.
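Merge as binary set formation can be sketched directly in code. This is a minimal illustration of the text's example for ‘John loves a car’, not an implementation of any actual Minimalist system; Python frozensets stand in for the unordered syntactic objects, and labels are omitted.

```python
# A minimal sketch of Merge as binary set formation: Merge(x, y) = {x, y}.
def merge(x, y):
    """Combine two syntactic objects into a single (unordered) set."""
    return frozenset({x, y})

# Reproducing the text's derivation for 'John loves a car':
a_car = merge("a", "car")    # {a, car}
vp = merge("loves", a_car)   # {loves, {a, car}}

assert a_car == frozenset({"a", "car"})
assert "loves" in vp and a_car in vp
```

The nesting of frozensets mirrors the claim that the output of Merge is itself a set that can serve as an input to a further application of Merge.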
Equipped with this basic understanding of the intrinsic notion of computation for natural language grammar, we can look at some examples in order to see how linguistic computation helps make sense of linguistic facts.

(85) The boys seem to __ have moved from the camp.
(86) Who do you believe [__ can complete this task]?
(87) There are certain things we can do __ but they cannot (do) __.
(88) Sasy is too weak to __ walk to the garden.
(89) A hypothesis like that one may believe __ may prove to be correct.

Here we have a range of different constructions that involve a displacement of a piece of structure from a place indicated by the dash to another place. For example, in (85) the noun phrase ‘the boys’ has been displaced from the gap


indicated; in (86) it is the Wh-phrase ‘who’ which has been displaced, and similarly, ‘certain things’ in (87) and ‘a hypothesis’ in (89) have been displaced. But the case in (88) seems to be different in that the noun phrase ‘Sasy’ cannot be said to be displaced; rather, it is interpreted in two places: (i) the place where it appears and (ii) the gap indicated by the dash. In each case of displacement, the relevant operation that makes a displacement possible can be taken to be a computation so long as the operation Merge is defined in such a way that it applies to two objects internally within a bigger constructed object. This is another way of saying that the operation called Merge can also instantiate displacements of linguistic structures.6 Now the question is: how do we know that (85–87) and (89) are cases of displacement while (88) is not? One plausible way of answering this question is to state that in (88) the noun phrase ‘Sasy’ can be interpreted where it actually appears, regardless of whether it is interpreted elsewhere or not. Note that this cannot be said about (85–87) and (89). Thus, for example, the noun phrase ‘the boys’ in (85) is not actually the agent of the verb ‘seem’, since we cannot have ‘the boys seem’, and in a similar manner, ‘who’ in (86) is interpreted in the subject position of the embedded clause by virtue of the fact that the question is about the person who is supposed to be able to complete the task. Likewise, the noun phrase ‘a hypothesis like that’ is interpreted in the gap indicated, in that the hypothesis referred to is supposed to prove to be correct and is not what one may believe. In both cases—the case of displacements and the case of multiple instances of interpretation of the same item—some computational operation is supposed to be involved.
But the grave worry is that neither the application of Merge per se nor the split between the version of Merge driving displacements and the other version leading to direct concatenation can be specified without recourse to our interpretations of what has been displaced and what has not. And this seems to be threatening to the intrinsic character of computation recognized as such, on the grounds that computation, especially digital computation, cannot be sensitive to the contents of symbolic expressions. Interpretations of what has been displaced and what has not can be specified either as an abstract property

6 In standard Generative Grammar, this version of Merge is called Internal Merge, which is different from the normal version called External Merge. The operation Merge in the case of Internal Merge operates internally within a constructed syntactic object in the sense that it Merges an object from an already constructed syntactic object with another syntactic object within the same constructed syntactic object. For example, if Z in a phrase like [XP [YP Z]] has to be internally Merged with some syntactic object under the projection of the phrase XP, this instantiates a case of displacement of Z to a position within the projection of XP in order that we derive [XP Z [YP Z]].


of linguistic expressions or as a psychologically grounded process. No mapping from some state of the putative computational system of the language faculty to the formal structure of a computation can be sensitive to interpretations conceptualized in either of the senses. Now the other horn of the dilemma is that a rule that can differentiate (88) from (85–87) or from (89) cannot be formulated without reference to the matching of the relevant expression with the gap where it is interpreted. But once again, this move is ruled out if we are inclined to cleave to the algorithmic character of rules. In the face of this dilemma, one may also argue that it is the contents of the lexical items involved in a certain construction that can (help) gauge the exact interpretation that is derived. Closer scrutiny reveals that this is not possible. Note that the lexical contents of predicates (including verbs and adjectives) and nouns only specify the thematic roles of arguments (agent vs. patient, theme vs. instrument etc.) as well as the event-structural information (whether something is an event or a state). And this constitutes the first-order fact which derives from lexical contents; but how these contents contribute to a certain interpretation of a construction cannot be encoded within lexical items or reduced to what is present in the lexical items concerned. This constitutes the second-order fact which cannot emanate from lexical contents alone. Thus the lexical contents of ‘seem’ and ‘move’ in (85) do not in themselves suffice to constrain the displacement that occurs and the consequent form-meaning mismatch giving rise to second-order interpretive effects. All that the specification encoded in ‘seem’ says is that it requires a proposition, and the lexical structure of ‘move’ encapsulates the specification of an agent that moves. How these two kinds of specification interact to give rise to a given interpretation is a second-order property.
Similarly, there is nothing in the lexical specification of ‘believe’ taken in itself that can tell us why the Wh-item ‘who’ cannot be interpreted as the object of ‘believe’ rather than as the agent of the verb ‘complete’. A similar line of reasoning may apply to the rest of the cases as well. Furthermore, an analogy from mathematics is drawn in order to demonstrate how grammar as a system is intrinsically computational. Take, for instance, a recursive function: ƒ(n) = n + 1 when n is a natural number. Thus ƒ(5) = 5 + 1 = 6 when n = 5, for example. This function is recursively defined in that the function is specified in terms of each calculated value of its own output. Hence this function can also be specified in a manner that involves the invocation of the same function. So we can write ƒ(n) = ƒ(n − 1) + 1. Note that an inductive definition forms an intrinsic part of this definition because the inductive definition licenses the very inference that the function can be specified in terms of each calculated value of its own output by way of an invocation of


itself. It is supposed that the generative mechanism of grammar has a recursive characterization in virtue of the postulation that a grammar recursively specifies an infinite number of linguistic expressions. The putative computational system of the language faculty possesses this mechanism by virtue of having the operation Merge. From this perspective, all that matters is the specification of the function concerned, not how this function is implemented in the language faculty in real time. If grammar is a computing system in this sense (as far as the mapping function so defined is concerned), it is not unreasonable to believe that the relevant properties of recursive functions that hold true for the set of natural numbers should also be found in the set of natural language expressions generated by Merge or any conceivable computational mechanism of grammar. Let’s see how we can test this formal parallelism. Suppose we have the following sentences which are output by Merge.

(90) (Amy + (trusts + (a + man + … + … + …)))
(91) (Amy + … + … + (trusts + (a + man)))

The sentence in (90) can be taken to have an infinitely long expansion which goes on like this: ‘Amy trusts a man who is known to have three mansions which are located in three different countries that form a certain contour around a place that defies any description …’. Likewise, (91) can be infinitely long such that its expansion may run like ‘Amy who is one of the finest scholars at our university which motivates the study of culture in unexplored territories which may not have any access to education… trusts a man’. The problem for Merge is that it cannot get off the ground in (90), since Merge as a constructive operation starts and continues to work in a bottom-up fashion, whereas in (91) it can never terminate even if it does start. Note that recursively defined functions in mathematics are such that they may never terminate, and hence this certainly cannot detract from what Merge, taken as a mapping function defined in intension, achieves. But functions operating on (the set of) natural numbers, as in ƒ(n) = n + 1, at least get off the ground when there are inputs to be mapped onto outputs, regardless of whether they terminate or not. In other words, functions operating on (the set of) natural numbers do not face the problem of not starting in the first place, while Merge contains the germ of the problem of not starting in the first place, as well as inheriting the problem of non-termination. One may try to circumvent this problem for Merge by postulating null items that are supposed to be present in the infinitely long sentence in (90) in order to save Merge from this pitfall. Thus the null items which may be deemed to be the stand-ins for the relative clauses constituting the expansion in (90) may be assumed to be Merged. However,


this strategy is groundless, because items empty of substance are inserted in a linguistic expression which is not even a well-formed expression and perhaps does not even exist due to its infinite length. We end up inserting items empty of substance into something which is already empty of content. The result is anything but a meaningful statement. Moreover, this will drastically alter the operational character of Merge, because Merge does not concatenate null items, as this goes against the ban that disallows items not present in the selected set of lexical items. For null items for chunks as big as relative clauses cannot be selected from the lexicon; nor can they be justified on linguistic grounds, since nothing would prevent one from postulating null sentences, whether simple or complex. There is another way of seeing where the problems lie in the postulation of formal parallels between recursive functions in mathematics and the putative computational mechanism of grammar. The principle of mathematical induction applies to all well-formed functions when it is used as a proof technique to test whether something holds for an infinite set, because we cannot check all items in a potentially infinite set. So, as per the principle of mathematical induction, if some proposition P holds for a base case and, whenever it holds for n, it also holds for n + 1, then it holds for all natural numbers. The second step in this formulation constitutes an inductive generalization which may also be aligned with various other kinds of generalizations drawn inductively by human beings. Let’s now reconsider the example in (90) to determine whether mathematical induction can be applied to it. Suppose (90) can be represented the following way.

(92) ‘Amy trusts a man’ + Rclk (where Rclk denotes a concatenation of k relative clauses)

Since it is necessary to render (90) in a manner that makes it amenable to the application of mathematical induction, the representation in (92) serves to demarcate the domain over which mathematical induction can be supposed to apply.
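The depth-indexed representation in (92) can be mocked up as follows. The clause list is a toy stand-in built from the book's own example sentence; the function indexing expressions by concatenation depth k is an illustrative assumption, not a claim about how such a set would actually be defined.

```python
# An illustrative reconstruction of (92): expressions indexed by the depth k
# of relative-clause concatenation.
base = "Amy trusts a man"
relative_clauses = [
    "who is known to have three mansions",
    "which are located in three different countries",
]

def expression(depth):
    """The expression with relative-clause concatenation depth = depth."""
    return " ".join([base] + relative_clauses[:depth])

# The finitely many expressions generated here form an initial segment of the
# set given in (93) in the text.
set_93 = {expression(k) for k in range(len(relative_clauses) + 1)}
assert expression(0) == "Amy trusts a man"  # depth 0
assert len(set_93) == 3
```

The sketch makes the intended parallel explicit: the depth index k is supposed to play the role that n plays for natural numbers, which is exactly the assumption the text goes on to question.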
One way of accomplishing this is to characterize the relevant set in such a manner that we can state that mathematical induction applies over the set in (93).
(93) {‘Amy trusts a man’, ‘Amy trusts a man who is known to have three mansions’, ‘Amy trusts a man who is known to have three mansions which are located in three different countries’ …}
But what are the appropriate properties of this set or of the members of this set that can help establish that some proposition precisely formulated holds for the (n + 1)th expression only if it holds for the nth expression? In what sense can the expression ‘Amy trusts a man’ be supposed to be the nth expression? Or

Natural Language, Machines and Minds


in what sense can the expression ‘Amy trusts a man who is known to have three mansions’ be the (n + 1)th expression, and so on? What are the exact properties of these expressions such that their succession can mimic that of natural numbers, when the natural numbers that are inputs or outputs of a function are themselves defined in terms of a function? One might say that the relevant proposition that needs to be tested has to be formulated by tracking the depth of concatenation of relative clauses. That is, one may say that ‘Amy trusts a man’ is an expression with the value of the depth of concatenation fixed at 0, and similarly, ‘Amy trusts a man who is known to have three mansions’ has the depth of concatenation set at 1, and so on. This may be supposed to reflect the progression of these expressions on a par with natural numbers. So the proposition to be tested is that the concatenation of a relative clause to a sentence whose verb phrase is transitive returns a well-formed expression of English. This can be couched in terms that may be supposed to ride especially on the inductive generalization that the attachment of a relative clause to a sentence whose verb phrase is transitive always yields a well-formed expression of English. The relevant rule may be formulated in terms of phrase-structure rules familiar in formal linguistics, on the grounds that the rule has to be maximally general so that inductive definitions hold true for (92) or (93). Let (94) be the pertinent phrase-structure rule that can capture this inductive definition.
(94) Sentence (S) → Noun Phrase (NP) Transitive Verb Phrase (TVP) + Rclk
However, the rule in (94) can never ground (92) or (93) in an inductive generalization, simply because rules like this overgenerate. Nothing stops (94) from generating (95).
(95) *Amy trusts a man which is known to have three mansions which are located in oneself that forms a certain contour around hers that has any description …
Trying to embed context-sensitive information and selectional properties of predicates and other expressions in (94) will shatter the very possibility of having a rule that will possess such a general character as to be amenable to an inductive definition. Needless to say, this is doomed to fail for natural language expressions. On the one hand, we require something like a function that can have the desired formal generality across an infinite range of expressions, and on the other hand, the very nature of natural language grammar is such that it defies the formulation of any such function. It may also be stressed that neither the compositional function nor the intuitive sense of concatenation can


serve this purpose. The former is of no substantive value in this particular case because natural language abounds in non-compositionally formed expressions (idioms, for example), whereas the latter cannot ground the generality of functions without rendering the output expressions deviant or ungrammatical, since any operation of concatenation is too trivial to meet the well-formedness conditions that are required for (potentially) infinitely long expressions. Finally, closure properties of natural numbers make it possible for natural numbers to be defined within the bounds delimited by the set of natural numbers. That is, it is closure properties of natural numbers that tell us that both 5 and 4 in 5 + 4 = 9 are natural numbers and so is the number 9. Similarly, both the input numbers and the output number involved in the operation of multiplication in 5 × 7 = 35 are natural numbers. There is nothing in natural language that is even remotely close to this mathematical property when we look at the relevant linguistic expressions. Therefore, the following expression, which results from the Merging of ‘Amy trusts a man’ with ‘Amy trusts a man who is known to have three mansions’, is ungrammatical.
(96) *Amy trusts a man Amy trusts a man who is known to have three mansions.
On the other hand, it may also be supposed that the problem of non-termination is in general true of procedures specified in abstraction, given that all procedures in practical reality must terminate, and if so, the problem of non-termination cannot be characterized as a problem for the computational mechanism of grammar. As we shall soon see, this may not be a problem for mathematical functions or even for the Turing machine since they are intrinsically mathematical or purely abstract objects not anchored in any physical system, although they can be implemented or instantiated in a physical system.
But this does not hold true for the computational mechanism of the language faculty since the language faculty is by its intrinsic character a mental system or a mental organization. In fact, the halting problem (Turing 1936) that is intrinsic to the model of computation inherent in the specification of the Turing machine must also apply to the putative computational system of the language faculty if the mapping function of the putative computational system of the language faculty is translated into the operations implicit in the specification of the Turing machine. Even if there could be intensional differences between the model of computation implicit in the specification of the Turing machine and the mapping function in standard mathematical formalisms of computability despite the fact that they are descriptively or extensionally equivalent (see Soare 1996), such intensional differences—whatever they turn


out to be—cannot be brought forward in order to dodge the halting problem for Turing machines. The reason is that the extensional equivalence between the model of computation implicit in the specification of the Turing machine and the mapping function in formalisms of computability is all that matters to the extrapolation of the halting problem to the putative computational system of the language faculty. Any intensional differences arise from a certain way in which computations are looked at or viewed by humans, and this cannot be built into the language faculty itself. Nor can these differences ground a different mode of computational operations that avoids the halting problem, because the problem of non-termination inherent in the halting problem is a fundamental part of any formulation of computation abstracting away from the real world. As a matter of fact, in Mondal (2014a) it has been stated that the problem of determining whether a procedure underlying the selection of lexical items ever stops could be shown to be a version of the halting problem, according to which there exists no algorithm that, given a machine M and an input x, can decide whether or not M halts on the input x. Extrapolating this to the case of the putative computational system of the language faculty, we can state that there is no algorithm that, given the generative procedure of the language faculty and a selection of lexical items, can decide whether or not the generative procedure of the language faculty halts on that selection of lexical items. What this implies is that nothing prevents indefinite selection from the lexicon, which is a decision problem per se for lexical selection. This non-determinism within the putative computational system of the language faculty makes the satisfaction of interface conditions (conditions that make the language faculty answer to the requirements posed by the C-I and A-P interfaces in the mind) contingent rather than necessary.
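The diagonal argument behind the halting problem just invoked can be sketched in Python. The names `halts` and `paradox` are hypothetical, introduced purely for illustration; the whole point of the argument is that no real `halts` can exist.

```python
# Sketch of the halting problem's diagonal argument. Assume, for
# contradiction, that halts(program, x) could decide whether `program`
# halts on input `x`.
def halts(program, x) -> bool:
    """Hypothetical decider -- provably impossible to implement."""
    raise NotImplementedError("no such decider exists")

def paradox(program):
    # If the decider says `program` halts on itself, loop forever;
    # otherwise halt at once.
    if halts(program, program):
        while True:
            pass
    return None

# paradox(paradox) would halt if and only if it does not halt -- a
# contradiction, so `halts` cannot exist. By the extrapolation in the
# text, no decider exists for whether the generative procedure halts
# on a given selection of lexical items either.
```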
That intensional differences between the model of computation implicit in the specification of the Turing machine and the mapping function in formalisms of computability cannot circumvent the halting problem has been demonstrated from another angle in Mondal (2014a). The reasoning employed is this: if the halting problem for computation per se can be easily translated for the mapping function in formalisms of computability, it can be shown that the halting problem indeed transcends any intensional differences among the formalisms of computation. The busy-beaver problem is a problem of determining the maximum number of 1s that can be left by a Turing machine, or rather by a class of Turing machines, on a tape which is all blank (Rado 1962). Another way of seeing this is in terms of the maximum number of steps that a Turing machine performs while writing 1s on the squares of the tape before it halts. Thus it is easy to state the busy-beaver function on the mechanical procedure of the language faculty in such a


manner that the task of determining the maximum number of 1s that can be left by a Turing machine (or a class of Turing machines) on a tape can be modeled on the procedure of increasing by 1 the index of a lexical item inserted in a given selection of lexical items when the indices are all 0, to begin with. That is, it is a problem of gauging the maximum number of 1s that can be subject to the task of incrementing by the mechanical procedure of the language faculty when it increases the index of a lexical item inserted in a given selection of lexical items. Just like the busy-beaver function, the function specified by the generative procedure of the language faculty that inserts lexical items in a given selection of lexical items will grow so fast that it will exceed any computable function. Significantly, the busy-beaver function grows in proportion to the number of states of the Turing machine. Given that a characterization of the states of the putative computational system of the language faculty is necessary, the states of the putative computational system of the language faculty having the generative procedure of the language faculty can be defined by the principles (sources of universality) and parameters (loci of variation) for particular languages because they are exactly what are supposed to instantiate the states of the putative computational system of the language faculty. Since computation by its very nature involves operations requiring space and time, these operations must have an inherent form that can be measured in terms of how much time or space a certain computation takes. The theory of computational complexity formalizes this notion in terms of certain well-defined laws that apply to computation in general. And if this is so, computational complexity theory can also be applied to the operations of grammar, inasmuch as the operations of the putative computational system of the language faculty require a number of steps for completion.
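For concreteness, the busy-beaver notion can be made tangible by simulating the known two-state, two-symbol champion machine in Python (its transition table is standard; the helper names are illustrative). Run on an all-blank tape, it halts after 6 steps leaving four 1s, the busy-beaver value for two states.

```python
# Two-state busy-beaver champion: (state, symbol) -> (write, move, next).
# 'H' is the halting state; the tape is a dict defaulting to blank (0).
RULES = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
}

def run(rules, state='A'):
    tape, head, steps = {}, 0, 0
    while state != 'H':
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return sum(tape.values()), steps

ones, steps = run(RULES)
print(ones, steps)  # 4 ones in 6 steps
```

What makes the function non-computable is not any single machine like this one but the maximization over all halting machines of a given size, which grows faster than any computable function as the number of states increases.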
But it needs to be borne in mind that these steps must be couched in terms of time which does not obtain in the real world, since the computational mechanism of grammar is supposed to operate beyond real space and time. In fact, the construction of most sentences will require exponential time if the putative computational system of the language faculty constructs sentences by making reference to certain grammatical features (nominal features such as person-number-gender features and verbal features such as tense or aspect feature). Consider, for instance, a simple sentence like ‘John likes a car’. We need at least 2 features for each of the NPs (‘John’, ‘a car’) and a total of 2 more for the two elements, namely V (‘likes’) and the functional category Tense. In such a case the minimum time complexity would be 2² × 2² × 2², where the base is 2 because each feature is binary-valued (presence/absence). The exponential complexity will mar the presumed ‘perfect’ and ‘optimal’ operational character of the putative computational system of the language faculty. If this amount of computational


complexity obtains for simple sentences, the amount of computational complexity for longer and much more complex sentences would be so formidable that it must approximate to the non-computable. Besides, there are certain linguistic constructions that are inherently complex, and these constructions in most cases warrant an immense amount of computational complexity if certain algorithms are so defined as to capture the expressive richness of these constructions. For example, cases of branching quantification or non-linear quantification, which involve a special type of dependency between quantifiers not expressed via a linear order of quantifiers, may require exponential computational time. Consider a familiar example of branching quantification below (Hintikka 1973; but see also Gierasimczuk and Szymanik 2009).
(97) Most girls and most boys love each other.
Such mono-clausal sentences require at least 2² computational time since the number of quantifiers is only 2 with respect to the two different nouns ‘girls’ and ‘boys’, and the base is 2 because we may suppose that a binary-valued scopal feature (the presence/absence of a scope relation in this case) has to be detected. That is, the scope relation between ‘most boys’ and ‘most girls’ needs to be scanned in various combinations that differ in terms of whether a scope relation with respect to each quantificational noun phrase can be established or not. The situation becomes more cumbersome if more than two quantifiers strung out in a construction are involved. Hintikka’s (1973) example in (98) is more complex and thus requires more computational power.
(98) Some relative of each villager and some relative of each townsman hate each other.
This sentence will require at least 2² × 2² computational time because each grouping of a universal quantifier and an existential quantifier involves 2² computational time and two such groupings will add up to 2² × 2² computational time.
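The feature-combination counts above amount to simple powers of two; a few lines of Python (purely illustrative bookkeeping, not a parsing algorithm) reproduce them.

```python
# Each binary feature doubles the number of combinations to scan, so a
# group of 2 binary features contributes 2**2 combinations.
simple_sentence = 2**2 * 2**2 * 2**2  # 'John likes a car': two NPs + V/Tense
one_pairing = 2**2                    # (97): one quantifier pairing
two_pairings = 2**2 * 2**2            # (98): two quantifier pairings
print(simple_sentence, one_pairing, two_pairings)  # 64 4 16
```

The point carried by the arithmetic is only that the combination space multiplies with each feature group, so the counts scale exponentially as sentences grow.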
Needless to say, complex sentences having a number of embedded sentences will turn out to engender an enormous amount of computational complexity. Although algorithm-relative possibilities for reduction in computational complexity are often available, there is no way of escaping the complexity-inducing effects in many areas of natural language that render pointless not only the computational attribute for natural language grammar but also the introduction of the possibility of having complexity-reducing algorithms. Doing away with the computational stamp for natural language is thus more optimal than having it. Ultimately, this clearly indicates that


computation is irrelevant to natural language grammar (see, for greater detail, Mondal 2014a). From the discussion above, it may now appear that the foregoing arguments have some affinity with Searle’s (1992) devastating argument against computationalism. Searle’s attack on computationalism is grounded in the supposition that computation is not an intrinsic feature of the natural world, and that a certain pairing of a mental/neural state with a computational structure is relative to one’s observation and interpretation. Now given the apparent ubiquity of certain ‘laws’ of computability and computational complexity, one may wonder why computation has to be observer-relative at all (Buechner 2008). It needs to be made clear that the present view does not exactly say that computation is observer-relative or relative to one’s interpretation. In other words, the present view does not postulate that what is a computation is determined by one’s interpretation of what a convention for taking something to be a computation consists in. As a matter of fact, Searle believes that any computational description can be assigned to a given physical object or system by a human being. This is indeed an argument that aims to establish the triviality of the notion of computation. The present view of computation proposes that computation results from humans’ imposing certain functions (or relations) on the possible mappings between sets of inputs and outputs which constitute the form or shape of machine cognition. Humans’ imposing certain functions (or relations) on the possible mappings between sets of inputs and outputs is subject to the human interpretation processes. From this it certainly does not follow that any arbitrary computational description can be assigned to a given physical object or system so long as a human interpretation makes it viable.
As has been emphasized in Section 4.1, only certain law-like regularities or contingencies originating in the semiotic processes of human interpretations can cause the implementation of a mapping from some state of a physical system onto the formal structure of a computation. Plus the ‘laws’ of computability and computational complexity can obtain the way they do after computations get off the ground once humans impose certain functions (or relations) on the possible mappings between sets of inputs and outputs. This is another way of saying that nothing prevents the ‘laws’ of computability and computational complexity from obtaining just because computation is harvested from machine cognition through the human semiotic processes. Just as human observations do not prevent laws of quantum mechanics from holding the way they do, the law-like semiotic regularities or contingencies that cause the implementation of a mapping from some state of a physical system onto the formal structure of a computation cannot stop the ‘laws’ of computability and computational complexity from being what they really are.


Lastly, the mediation of human semiotic processes in bringing forth computations makes the very idea of the language faculty being a computational system suspect. For, if computation by its very character arises from the mediation of human semiotic processes, the system of grammar cannot in itself be computational because that would require human semiotic processes to operate within the confinements of the language faculty. On the one hand, this would spoil the intrinsic character of the language faculty because it is supposed to be a computational system without internally having semiotic processes that can be attributed to humans as whole beings. On the other hand, if one insists that such semiotic processes operate internally, a homunculus inside the language faculty has to be present to regiment law-like regularities or contingencies in semiotic processes. Note that this issue is different from a notion in which mini-intentional processes are said to be operating at the internal level, making the coding and decoding of certain messages possible (as in RNA reading off from the DNA or in the operations of immunological elements in organisms). First of all, in such cases we are never sure whether these processes are actually intentional processes, that is, whether they involve interpretation, for it could be the result of the attribution of intentionality to certain internal structures in human bodies that seem to us to have certain properties attributed to sentient beings. Second, even when one finds a certain kind of isomorphism between two structures that can pass for human-level semiotic processes, this cannot make the idea of the language faculty being a computational system feasible since not all kinds of semiotic processes make computation viable. It needs to be made explicit that this is not to deny that low-level semiotic processes exist; this is indeed one of the foundational principles of biosemiotics (see Barbieri 2002).
Rather, the imagined possibility that such kinds of low-level semiotic processes can make computation inside the language faculty get off the ground is what is denied here.

4.3 Summary

This chapter has shown how the formulation of mental structures can be applied to machines to see where they stand and how the structure of mentality of machines can be conceived of. It shows what machine cognition, if anything, can be. Then the relation between natural language grammar and an intrinsic notion of computation is examined. The emerging insights cast doubt on any intrinsic notion of computation applied to natural language. Now we need to look into the ramifications and implications that accrue from the whole exercise.

chapter 5

Possible Minds and the Cognitive

This chapter will flesh out the consequences of the proposal presented for the nature of what can be reasonably believed to be cognitive. Given that the formulation of mental structures has been useful for the characterization of various kinds of mentality across distinct organisms or species and also machines, it is worthwhile to explore the consequences that can be drawn for the very conception of what is cognitive. An understanding of the cognitive can help figure out how we can tap into the forms of minds of many other entities around us. In fact, the marks of the cognitive can be schematized in terms of the new formulation put forth for the range of possible minds. Perhaps it is more important to demonstrate that this formulation can avoid many of the perplexing epistemological and conceptual problems revolving around the problem of other minds in general. Knowing what other minds look like is beset with some formidable difficulties, for the absence of precisely defined and generally viable criteria for the recognition of the cognitive, at least in part, prevents one from stretching oneself out beyond one’s own boundaries that ontologically demarcate the self. Thus it seems that a lot has been left out when one tries to project oneself onto the interior space of the other. Apart from that, a notion of the cognitive cannot be simply left as latent within the domain of folk psychology since the limitations of folk psychology circumscribe the capacity to know or understand the cognitive as much as the limitations of folk biology circumscribe the capacity to know the laws or principles of biological mechanisms in formal terms. There is, of course, an inherent capacity latent in all humans which allows humans to think of other minds by deploying certain psychological mechanisms, but these mechanisms need not be assumed to be built around a conceptualization of the cognitive.
Rather, these psychological mechanisms are grounded in inferential or abductive processes and capacities which permit the engagement of more or less intuitively verifiable criteria for thinking about what other organisms may feel or feel like doing. This is not to deny that these psychological mechanisms are useful as far as our normal and perhaps natural understanding of the mental is concerned. Whatever way the conception of the cognitive is formulated, it appears that it must turn on the vexed question of how one distinguishes the self from the other. Without the self-other distinction, it seems as if there is nothing in place in which the traces of the cognitive can be found to be manifest. However, this does not imply that the self-other distinction in itself can track or anchor

© koninklijke brill nv, leiden, 2017 | doi 10.1163/9789004344204_006


the marks of the cognitive across species and various organisms. There is indeed more to be attributed to the cognitive than what the self-other distinction in itself reveals, especially when the self-other distinction is a matter of projecting the other by deducting the self from a pre-given phenomenological space. Suffice it to say that there is a gap, however palpable it may be, between any projection from the self-other distinction and what can be taken to be the cognitive. The goal of this chapter is to show that the conception of mental structures as formulated fares better in this respect, in that mental structures do not carry any extra metaphysical baggage that the self-other distinction or even the very notion of the self carries. The reason is that mental structures are not justified either by rational capacities or by the presence of intentionality or even by various aspects of consciousness. Beyond that, it is also vital to recognize that there are fewer ways of being in a position to possess rational capacities or intentionality or consciousness, whereas there are perhaps more ways in which mental structures can be manifest in organisms. An organism can evince the presence of a mental structure only if it possesses at least one mental structure. The additional advantage is, as it will be argued, that the nature of the self can also be cashed out in terms of the parameters of the new formulation. Overall, this can add to the viability of the proposal. In all, the connection between different kinds of possible minds and the nature of the cognitive is very intimate, insofar as various kinds of possible minds instantiate various ways the cognitive can be manifest across other organisms and species. If there is anything the preceding chapters of this book have attempted to put on a firmer ground, it is the inextricable coupling between different kinds of possible minds and the very nature of the cognitive.
This relation between different kinds of possible minds and the nature of the cognitive will be elaborated on in this chapter as we proceed. This is what we shall now home in on. The marks of the cognitive can be looked at from various perspectives. One of the ways is simply to discern the transition from the non-cognitive to the marks of the cognitive, and thereby determine exactly where the boundary of the cognitive is crossed, as Adams and Garrison (2013) think. They argue that cognitive processes are marked by the capability of organisms or creatures to do things for reasons that can be aligned with having beliefs, goals, desires or intentions. Having reasons for doing something, for them, does not arise from the evolutionary trajectories of the creatures that possess cognitive processes. Rather, reasons make reference to the representational properties of internal states of organisms and then go beyond them to include things, events and properties in the world that are part of the representations encoded in the internal states. Adams and Garrison postulate this as a necessary,


if not sufficient, condition for the mark of the cognitive when they critique Rowlands’ (2010) proposal that a process is to be considered cognitive only if it involves information processing which has the proper function of making some information available to the agent which was not available earlier and which is made available to the agent by the production of a representational state in the agent to whom the process belongs. What Adams and Garrison dispute is the claim that a cognitive process must have the proper function of making some information available, because many processes that can otherwise be cognitive are either not selected for or simply not fit to be cashed out in terms of biologically shaped proper functions. For example, cognitive processes that can be instantiated in the extension of biological organs joined to prosthetic structures cannot be said to have any proper functions, as they argue. Additionally, Adams and Garrison also stress that when a creature or system does something for reasons, these reasons must be such as to represent, or allow thinking about, something that does not exist. This, they believe, adds meaning to those reasons in ways that beliefs and intentions add meaning to internal states and external entities, events and processes. Elpidorou (2014) has argued that the stance Adams and Garrison maintain is not defensible on the grounds that there could be some cognitive processes or capacities that do not turn on reasons or even on having beliefs, goals, desires or intentions on the part of the agent concerned. He cites the case of a patient with blindness. The patient’s ventral visual stream being disrupted, the patient can only detect the orientations of things and figure out how to move in a certain direction, but the capacity to form beliefs about things seen is not intact.
In such a case the patient’s act of reaching for things is supposed not to be driven by visual beliefs about things, because the ventral visual system that encodes the features of objects (color, texture, edge, shape, position, etc.), thereby causing certain beliefs to be formed about them, is damaged. Many acts such as reaching for things or gauging the orientation of objects with respect to the (parts of the) body in such patients, Elpidorou claims, can be said not to be driven by reasons, yet there is no reason to deem that the underlying processes are not cognitive. Although it can indeed be contended that the acts of such persons are driven by unconsciously formed beliefs or even by beliefs and goals at the sub-personal level, as Adams and Garrison (2014) in fact argue, the question of whether having reasons anchored in beliefs, goals, desires or intentions is at all necessary still remains. As a matter of fact, there certainly can be cognitive activities and processes that do not make reference to any reasons at the level of the individual; these cognitive activities and processes may not also turn on one’s possessing beliefs, goals, desires or intentions, regardless of whether or not these beliefs, goals or


intentions are conscious or even subconscious. Take, for instance, a situation in which one hears sudden and unexpected noises in the background, but they are filtered out because the person in question is concentrating on some music. The background sounds are somehow registered in the mind of the person, but the person is so absorbed in listening to a favorite piece of music that the person cannot in fact say or report anything about what is happening around. In this particular case the processes underlying the hearing of those background sounds are cognitive, insofar as these processes involve the manipulation of inner symbols or symbolic structures via the auditory system. Now it is hardly plausible that these processes are done for reasons at the level of the individual or even make reference to higher-level beliefs, goals, desires or intentions. Plus the processes underlying the hearing of those background sounds cannot also be said to be underwritten by the conscious or unconscious formation of beliefs, desires or intentions since the unexpected or unwanted background sounds have nothing whatsoever to do with any explicit or hidden intentions or beliefs of the individual. Nor does the act of hearing the unexpected or unwanted background sounds have anything to do with any explicit or hidden intentions or beliefs of the individual. It is not just false negatives that raise a serious concern; there can be many false positives as well. Consider some of the non-cognitive processes such as digestion, metabolism, yawning, blinking, etc. If unconsciously formed beliefs or beliefs and goals at the sub-personal level are to drive cognitive processes, it is not hard to imagine scenarios in which a given non-cognitive process can be driven by unconsciously formed beliefs or beliefs and goals at the sub-personal level.
Suppose that a person who is both physically and mentally healthy habitually watches her friend yawning whenever her friend watches TV, and is induced to yawn, gradually coming to yawn whenever she gets to thinking of her friend. Now we see an activity which is evidently non-cognitive and yet seems to be driven by unconsciously formed beliefs or thoughts at the level of the individual. In order to circumvent these problems, one may now argue that these are isolated or accidental or even coincidental cases that do not bear on the main idea that cognitive processes turn on reasons or on having beliefs, goals, desires or intentions. It needs to be observed that these cases—even if they are considered isolated or accidental—disconfirm and disprove the thesis that it is necessary that cognitive processes turn on reasons of the individual or on having beliefs, goals, desires or intentions. The necessity condition simply collapses. And the necessity condition being jettisoned, the thesis becomes lackluster and of no substantive value as far as understanding the marks of the cognitive is concerned. More significantly, there is reason to believe that having beliefs, goals, desires or intentions is not at all necessary for something

178

chapter 5

to be cognitive. It has been demonstrated in Chapter 3 that many activities and behaviors of various species can be described by not making reference to reasons underwritten by beliefs, goals, desires or intentions. The behaviors of plants, bees, spiders, earthworms etc. need not be described or even explained by appealing to reasons underwritten by beliefs, goals, desires or intentions on the part of these animal species. It is not also clear whether one should always appeal to beliefs, goals, desires or intentions while describing the behaviors of bigger animals including primates. And on the other hand, it would be too impetuous to claim that the processes underlying the behaviors and activities of such animals described in Chapter 3 are non-cognitive. That they are non-cognitive like metabolism has to be established first. In the absence of any demonstration, the supposition that the processes underlying the behaviors and activities of these animal species are indeed non-cognitive carries no substance. One may now urge that it has to be shown that the processes underlying the behaviors and activities of these animal species are cognitive. As pointed out in Chapter 3, having cognition or cognitive processes is not something that can be confirmed or disconfirmed exclusively on the basis of arguments or reasoning. One must find out traces in animal behaviors and activities that can emanate from the interior structures of animals or organisms. Mental structures fulfill exactly this role since mental structures are cognitive structures, insofar as they are instantiated in the internal states that are part of the kinds of mentality animals and distinct organisms may be supposed to possess. Mental structures can also be taken to be structures manifest or realized in behavioral actions, insofar as the relevant mental structures propel animals and distinct organisms into action. 
Neither information processing nor reasoning is the right criterion for the recognition of the marks of the cognitive. The former is too trivial or generic to have any significant import for variegated types of mentality across distinct organisms, while the latter is too restrictive and perhaps more anthropomorphic. Note that information processing in itself cannot serve as a reliable marker as it is akin to a behavioral criterion which is also too trivial. As also discussed in Chapter 3, two entities X and Y may evince properties of information processing systems, yet X may possess cognitive processes but Y may not, even though both exhibit information processing at the appropriate level of description. In a nutshell, the fundamental problem with most of these approaches towards an understanding of the marks of the cognitive is that these approaches raise the bar either too high or too low by assuming that other kinds of mentality can be approached by having the domain of mentality recognized as such reduced to symbolic manipulations or by having the domain of mentality restricted to the domain of higher-level mental phenomena. In both cases the phenomena

Possible Minds and the Cognitive

179

to which the domain of mentality is reduced are in themselves in need to be understood via a series of displacements. One the one hand, information processing needs to be understood by means of the notions of information and computation both of which are ill-framed notions not suitable for the exploration of other kinds of mentality, for it is like trying to understand something barely grasped with the help of another thing which is so vaguely demarcated as to be useless. On the other hand, higher-level mental phenomena that are underwritten by beliefs, goals, desires or intentions are to be understood by means of the notion of intentionality the problems of which are so deep that it is perhaps better to keep away from it when trying to understand other kinds of mentality. Mental structures do not lead to such problems because they are not underwritten by beliefs, goals, desires or intentions. Nor are they driven by the agent’s having reasons for doing something. In this sense mental structures are free of many these crippling problems the approaches towards an understanding of the cognitive are beset with. There are also other philosophically exciting conundrums at the heart of the very possibility of other minds which appears to beget a host of other related stickier riddles. This is what we shall turn to now to see how the notion of mental structures fares in the face of challenges posed by certain epistemological and ontological problems concerning the very possibility of other minds. Let’s now consider the question of whether there really are other types of mentality or simply other minds out there, regardless of whether they are of other humans or of animals other than humans. 
The question is whether a concept of the subjectively felt and experienced mind or mentality can be extended to other possible entities to which that concept is to be attributed, of course by assuming that a concept of the subjectively felt or experienced mind or mentality is already available (see also Avramides 2001). This conceptual problem with the very possibility of there being other types of mentality is perhaps grounded in an asymmetry, which is that we as individuals are sure that we have minds since we feel, understand, think and reason about things etc., but we are not sure, at least not in the same way we as individuals subjectively are, that others have minds. This particular asymmetry was discussed in Wittgenstein (1968, 1975) as a cause of concern about the possibility of having knowledge about other minds. For Wittgenstein, one’s report of feeling one’s own pain is meaningful to the extent that it is the subject’s own, but the proposition that is conveyed by expressions such as ‘I can/cannot feel her pain’ is nonsense on the grounds that the felt pain is always the subject’s own pain and hence cannot be grounded in a proposition that presupposes the understanding or knowledge of the other’s pain even if one says that he/she can or cannot feel somebody else’s pain. Thus, for him, any proposition that

180

chapter 5

expresses the subjective experiences is different in kind for the subject from any other that expresses subjective experiences from the other’s perspective. The reason that Wittgenstein points out when raising this concern is that one can know how to move from a given experience in a certain body part to the same experience in another body part, but the question of getting in a position to feel somebody else’s experiences is not like this. This part of the concern seems to be tied up with an epistemological issue—an issue that concerns how we can have knowledge about other possible minds. Overall, it seems that the conceptual problem is anchored in the epistemological problem. But Wittgenstein wanted to bring into light a deeper ontological or conceptual problem with the very nature of mentality that is revealed or exposed by our use of language. Although he thinks that X’s report of her feelings and experiences has a distinctive feature which cannot be equivalent for X to X’s report of Y’s feelings and experiences, this does not imply that Wittgenstein is trying to insist that there is private domain of signs that refer only to inwardly felt feelings and experiences. This is made clear in what is known to be the private language argument. So Wittgenstein (1968) says But could we also imagine a language in which a person could write down or give vocal expression to his inner experiences, his feelings, moods and the rest—for his private use?—Well, can’t we do so in our ordinary language?—But that is not what I mean. The individual words of this language are to refer to what can only be known to the person speaking; to his immediate private sensations. So another person cannot understand the language (p. 243). Here Wittgenstein appears to state that inner signs that refer exclusively for the subject to the privately held experiences are incoherent because neither the subject nor anyone else, for that matter, can be right or wrong about the use of these signs. 
That the subjectively felt experiences have a different form or meaning for us in our normal use of language does not entail that the subjectively felt experiences are made viable by means of some privately held linguistic expressions or signs that refer exclusively for the subject to these experiences. Rather, it could be possible that the subjectively felt experiences are made viable by the way we use our language and also by how we react to and interact with others in our natural way. The word ‘natural’ here has to be construed in terms of the usual daily circumstances of language use as part of our very make-up. Hence it seems that Wittgenstein ultimately demolishes any argument that may advance the idea that different individuals will have different privately held linguistic worlds that exclusively make reference to the

Possible Minds and the Cognitive

181

subjectively felt experiences just because the subjectively felt experiences are different in kind for the subject. In other words, that one cannot talk about other minds in the same terms one uses for one’s own experiences and feelings must not license the claim that the subjects in question hold private languages that have expressions referring to those inward experiences and feelings only for that subject. Now the important question for us is this: can Wittgenstein’s deeper concern about the possibility of being in a position to feel and report the subjective experiences of other minds be applied to the present case? That is, can we say that we cannot formulate the mental structures of other organisms or species just because we are only disposed to report or express what our own mental structures convey in a way that cannot be done for the expression or reporting of the mental structures of other organisms? This question is relevant to the present case, given that mental structures are supposed to capture distinct types of mentality of other organisms or species. It should be noted that mental structures in the present context are defined neither with respect to the experiences or feelings of other organisms nor with respect to the expression or reporting of any such experiences or feelings. This is not to say that mental structures cannot be expressed in some system of signs. Human language is one such system of signs, and for other organisms a system of vocalizations or other signs is the appropriate medium of expression of mental structures. The point to be driven home is a bit different. Since Wittgenstein’s deeper concern is restricted to the reporting or expression of subjectively held experiences, mental structures, in virtue of the fact that they are exactly what are felt or experienced and conceptualized but are not in themselves expressions or reports of inward experiences and feelings, can be extended to other organisms. 
Mental structures being structures that may be instantiated in the internal states of organisms, it cannot be held that we as individual subjects cannot extrapolate some mental structures to members of other species or organisms. In addition, mental structures are not signs that are to be interpreted; nor are they expressions that have to be mapped further onto certain meanings. Mental structures possess or contain no meanings because mental structures are exactly what are experienced or conceptualized. What is of particular significance is that Wittgenstein worried about the translation of linguistic facts into metaphysically plausible facts—or simply ontological facts—about the nature of other minds. In other words, Wittgenstein’s worry becomes more pressing, especially when one attempts to arrive at certain conclusions about what is really out there in other minds by moving from one’s language to other possible minds. This move, for Wittgenstein, is either illicit or has a tenuous link.

182

chapter 5

But one may now argue that since mental structures are also formulated by having it instantiated or substantiated by relations defined on the lexicon of a natural language, mental structures inherit an association with linguistic expressions and thereby invite the problem Wittgenstein points out. That this is not so can be appreciated by considering some of the subtle aspects of m ­ ental structures. First, mental structures are not in themselves linguistic expressions—they are what underlie linguistic expressions. Although mental structures are formulated by having it substantiated by relations defined on the lexicon of a natural language, neither the lexical items not the relevant expressions that contain those lexical items are to be identified with mental structures. Rather, these linguistic elements serve to construct meaning relations which substantiate mental structures. These linguistic elements, if anything, are constituents of meaning relations which instantiate mental structures. Thus linguistic elements can be constituents of mental structures, insofar as they help designate the relevant mental structures. If this is the case, there is a relation of instantiation or substantiation between the linguistic symbols and the relevant mental structures, and this relation is necessary for the formulation of anything that appears abstract and nebulous. This cannot be assumed to underpin the supposition that mental structures are in themselves linguistic structures any more than the fact that demand, supply, competition etc. in economics are expressed in certain mathematical symbols can be assumed to lead to the belief that demand, supply, competition etc. are in themselves symbolic phenomena or categories. Second, the linguistic elements that are constituents of meaning relations instantiating mental structures are not also expressions in some privately held language. 
This problem may be valid for Jackendoff (1983, 2002), in that Jackendoff’s conceptual structures are framed in the vocabulary of some non-public or internal language of its own which is supposed to be mentally represented. As has been stated in Chapter 3, the linguistic elements that are constituents of meaning relations are public structures but what they instantiate as part of mental structures are not public and linguistic per se. This distinction is crucial for one to appraise the place of mental structures in the context of linguistic expressions. Third, mental structures not being linguistic expressions themselves, there is no sense in which one can contend that the private language argument applies to it, for the private language argument at least requires that an internal language should be such as to refer to or represent some inward feelings and experiences. That a system of mental structures does not constitute a private or internal language of its own does not accord or comply with the very criterion that the private language argument consists in. Plus mental structures do not in fact refer to or represent inward feelings and experiences—mental structures do not have any referring

Possible Minds and the Cognitive

183

functions. Rather, they are what are experienced or felt and understood. If one were to say that mental structures have referring functions, mental structures would cease to be what they really are as mental structures have no sense and reference in Fregean terms. There is a related worry that Strawson (1959) brings into focus when he adds to the problems highlighted by Wittgenstein. While Wittgenstein’s worry seems to centre on the move from linguistic expressions of subjectively held experiences to inferences about the internal states in other minds, Strawson appears to be fixated on the conditions under which one can ascribe states of consciousness and experiences to others. Strawson states that if one ascribes states of consciousness and experiences to others on the basis of doing so to oneself, this move must be invalid on the grounds that it is a necessary condition of the very act of ascribing states of consciousness and experiences that it should be done to others so that one can ascribe states of consciousness and experiences to oneself. In other words, Strawson thinks that one cannot speak of one’s own states of consciousness and experiences unless one has already done so, or at least does so, for others. Strawson also adds that any predicate that describes mental states can be applied to a number of distinguishable individuals only if the predicate is ‘significantly, though not necessarily truly,’ applied. It is necessary to recognize that mental predicates that can be applied to others are different from many other kinds of predicates such as ‘is white’, ‘is true’, ‘is a triangle’ etc. The latter kind of predicates is often checkable, while it is not easy to check whether predicates of the former kind will hold true, or even whether they can be applied at all. 
Thus Strawson places his bet on his principle as a constraint on any thesis that aims to motivate the idea that one can move from one’s own experiences to others in order to ground the belief that others also have experiences. The notion of ascription of states of consciousness and experiences to others is crucial here, in that the ascription of X to Y intrinsically consists in the fact that X must be attributed to Z, V, W etc. when Y is distinct from Z, V, W. For all that Strawson has to say about the notion of ascription as far as the ascription of states of consciousness and experiences to others is concerned, it may be possible that one does not have to know that there are distinguishable individuals, or even understand how to distinguish individuals, for one may simply assume that others are sentient beings having feelings and consciousness (see Hyslop 1995). Further, Strawson’s principle presupposes that the criteria for the ascription of states of consciousness and experiences to others are different from those for the ascription of states of consciousness and experiences to the self. If this is so, what justifies this apparently innocuous presupposition? After all, one may simply ascribe mental states to the self and

184

chapter 5

others in the same way. More crucially, the principle carries an inherent hole through which many possibilities can slip. Strawson’s principle clearly states that one cannot ascribe states of consciousness and experiences to oneself unless one has already done so, or at least does so, for others. Strawson’s principle secures the deal only if one moves from the ascription of states of consciousness and experiences to oneself to that to others. Then if one comes to know or recognizes that others have minds and hence feel and experience things not by way of the ascription of states of consciousness and experiences to oneself, this clearly steers clear of the constraint Strawson has proposed. As a matter of fact, it is not necessary that one knows or understands that others have minds only by ascribing states of consciousness and experiences to the self. One certainly can infer that others have minds and hence feel and experience things by interacting and communicating with others in a way that ensures that others have beliefs, desires, and intentions and hence they feel, experience in a shared space of linguistic meanings. This possibility has also been entertained in Davidson (1991) and Givón (2005). Moreover, this does not also keep mental structures from being applied to other organisms or species the way it has been done in the present context. First of all, mental structures have not been ascribed on the basis of an ­ascription to us as humans if it is assumed that others are organisms apart from humans. The formulation of mental structures has been such that mental structures transcend the substance in which they are manifest or realized. Simply ­speaking, mental structures as specified in the present context are substanceindependent. And if they are so, their being ascribed to entities including humans has no prejudged beginning from the self. 
That is, we do not start from the ascription of mental structures to ourselves and, on the basis of this, move over to the ascription of mental structures to various other organisms. Other than that, mental structures are such that we do not really attribute mental structures to others the way we may ascribe beliefs, desires, and intentions or states of consciousness to others. The reason is that beliefs, intentions or states of consciousness have a causal role in behavior that is not easily discernible or recognizable in the case of mental structures. This is, however, not to say that mental structures have no role to play in making organisms act in the real world. That this is indeed the case has been emphasized in Chapter 3, especially when it is pointed out that mental structures are structures manifest or realized in behavioral actions. The point we are trying to make is subtle. What is of particular concern is that the causal role of beliefs, intentions or states of consciousness can be detected much more easily than that of mental structures, precisely because intentional projection (that is, the ability to apply the intentional capacity and attribute intentional states to others) among humans

Possible Minds and the Cognitive

185

works in a way that cannot be guaranteed for mental structures which have not been specified by any appeal to intentionality or intentional projection. This does not, of course, entail that mental structures have no connection whatsoever to intentionality or intentional projection. Instead, mental structures by being instantiated in internal states or realized in behavioral actions may have properties of aboutness, on the grounds that nothing in principle prevents mental structures from specifying a relation in which certain organisms as well as humans (may) stand with respect to objects in the world. There are also other important aspects that differentiate mental structures from beliefs, desires, and intentions or states of consciousness in terms of which some mental predicates can be constructed so that they are applied to a range of individual entities. The procedure of affirming predicates that can be constructed by incorporating beliefs, desires, and intentions or states of consciousness is way different from that which can be constructed for predicates that include a certain mental structure. If we say, for instance, there is a predicate ‘have X’ when X is a mental structure, the denotation of this predicate will then be the set of individuals that have X. The sense of this predicate will ride on the relation of this predicate to other representations of contexts, settings as well as other plausible predicates of mental structures. Likewise, a predicate ‘have beliefs’ can have as its denotation a range of individuals that have beliefs and its sense is to be specified by the relation of this predicate to other representations of beliefs and contexts in which the individuals have beliefs. This underscores the formal similarities between the two kinds of mental predicates. 
Now the significant difference between them is that checking whether predicates like ‘have X’ can be affirmed can be done only by verifying the cognitive activities and capacities of individuals as well as the logical character of the mental structure(s) in question, while predicates like ‘have beliefs’ can be verified by our intentional capacity, that is, by our belief-desire psychology since beliefs, intentions etc. are irreducibly intentional states. Finally, mental structures bypass the problems associated with the ascription of states of consciousness and experiences to individuals as mental structures do not intrinsically make reference to them, although beliefs, desires, and intentions may prove useful for the verification of mental structures. When attributing a certain mental structure to a given non-human organism, one may do better by examining the goals and intentions, if any can be discerned, in the organism in question. Furthermore, if Strawson’s principle gains ground in any condition that one makes sense of by appealing to mental structures, there is nothing in his principle that bans the extrapolation rather than ascription of mental structures to various organisms. Instead, his principle welcomes the extrapolation of mental structures which goes way beyond the ­self-­other

186

chapter 5

d­ ifferences in the ascription of mental states. Plus there is nothing in the specification of mental structures that prevents the generalization of mental structures to non-human organisms, provided that the very act of ascription is banished from the context. Overall, it appears that most of the conceptual and epistemological puzzles arise only when one assumes that there must be a sui generis procedure or psychological process which is supposed to underpin knowing or understanding other possible minds. If the ascription of mental states is one such procedure, there could be a host of others such as making inferences, generalization, analogy making, imagining, simulation, and so on. It is quite possible that there is no single or unique way or procedure that underlies the act of knowing or understanding other possible minds. In the current context, mental structures have been formulated in order to remain neutral on the issue of whether a way or procedure that underlies knowing or understanding other possible minds can be reliably applied by making reference to mental structures. Thus we may consider it appropriate to remain open to various mechanisms or ways that may underlie knowing or understanding other possible minds. The appeal to some mechanism that may underlie knowing or understanding other possible minds can be seen to be prominently marked in N ­ agel (1986). Nagel believes that the domain of mentality is far greater than one may suppose. Thus he seems to be realistic about the possibility of there being many possible types of mentality among various other organisms. Although he maintains a kind of skepticism about the power of our cognitive capacities in discovering other forms of minds, he thinks other minds exist but we may not be able to find more about such types of minds. 
To defend this idea, he also stresses that Martians (or for that matter, aliens different from humans), while looking at humans, may well think that humans are mechanical creatures lacking any mental capacity. If this is the case, one cannot, he believes, cleave to any dogmatism in supposing that other minds do not exist. He also wonders how one can extend out from oneself to reach or grasp the mentalities of other humans or creatures and organisms other than humans. He posits that it is not possible to extend from one’s own self to others’ minds by simply reflecting on inward experiences and the accompanying subjectivity, or by observing others’ behaviors. This obtains because he does not see how one can move out of one’s own self to generalize the subjectivity inwardly felt to others, for there does not seem to exist a base in which this subjectivity may be grounded. Plus there is no way of telling a case in which the subject is just trapped in his/her subjectivity apart from another in which the subject really extends from within himself/ herself to others. Thus he proposes that ‘a general idea of subjective points of view’ must be required such that the very conception of one’s own subjectivity

Possible Minds and the Cognitive

187

as well as of others’ possible kinds of subjectivity is included in such a general idea. That is, there must exist a general concept of subjectivity in which not only the subjectivity of the self but also the subjectivity of the other can be seen to be instantiated together. Once this general concept is in place, he thinks the rest can get off the ground comfortably well. All that he requires now is a procedure or process that can substantiate the extrapolation or generalization of this concept from one to a host of others. This process is individuated by imagining which grounds the extrapolation or generalization of a general concept of mentality from the self to others. One imagines that the general idea of subjective points of view can be applied not just to oneself but also to others around in such a way that it is immediately recognized that the subjectivity of the self is just one instance of the general concept of subjective points of view. But recognizing that other subjectivities are various other instances of the same general concept of subjective points of view is also equally important. This, for Nagel, secures both the generality of the concept of mentality and the continuity between one’s mind and others’ minds. In this particular sense Nagel’s approach seems important for the general problem of probing into other possible types of mentality. But, if imagining grounds the general concept of subjective points of view, is not it also the case that the very act of imagining, at least in the case of understanding other minds, is instantiated by the general concept of subjective points of view? Note that imagining in the case of grasping or reaching out to other minds is vacuous unless imagining in itself is anchored in something that is to be imagined. The general concept of subjective points of view is exactly this base in which imagining for the extrapolation to other minds has to be grounded. 
For imagining to ground the extrapolation or generalization of a general concept of mentality from the self to others, imagining has to be constituted by a psychological process operating on a general concept of mentality. In this sense imagining has to be viewed as a natural or pre-given mental process along with the general concept of mentality which is, as Nagel believes, innately given. In a nutshell, both imagining and the general concept of mentality are to be innate, on the grounds that both interpenetrate one another. Notice that this does not so much solve the problem of other minds as displaces it elsewhere. If both imagining and the general concept of mentality are innately available to us, what is there to be done about our attempting to understand or grasp other minds? All we do is apply our instinct as it is already installed in us. No matter what we do, we are thus bound to know or grasp other possible minds. In this context, we must also remember that knowing or grasping other minds is not like trying to understand a part of nature using some discovery procedure by means of careful observations on the relevant

188

chapter 5

object of study. Hence knowing or grasping other minds is not simply a matter of understanding the causal structure and connections that link the self and others, or humans to other creatures. This is to lay stress on the point that the causal structures and connections manifest or implicit in the biological processes and activities within which humans and other organisms are closed cannot be sufficient for knowing or grasping other minds. To give an example, just because trees give off oxygen which we pick up for our respiration, trees do not become entitled to understand or grasp the mentality we have. Likewise, two persons having a similar genetic structure inherited through a lineage (twins, for example) do not become entitled to understand or grasp the mentality in a similar manner. There is perhaps more to it than meets the eye. But whatever it is, any psychologically given process is not enough. At this juncture, it appears that something over and above psychological processes is required. Mental structures may be the appropriate structures that can complement any psychological processes that actually underlie knowing or grasping other minds. It is thus necessary to articulate the relationship between mental structures and psychological processes or conceptual procedures that may utilize or manipulate such structures as mental structures. Linguistic phenomena provide us with the conceptual or logical connections between linguistic expressions and mental structures, and these conceptual or logical connections constitute the relations of mental structures which thus (come to) encapsulate the formal/logical properties rather than the syntactic or exclusively linguistic properties of the linguistic phenomena. 
This differs substantially from the way psychological or conceptual mechanisms may exploit mental structures in the case of knowing or grasping other minds, in that psychological or conceptual processes are subject to empirical observation and intrinsically make reference to the mental phenomena that give rise to them in the first place. Thus the psychological or conceptual processes involved in knowing or grasping other minds are intrinsically instantiated in the relevant mental phenomena that give rise to them, whereas mental structures are not in themselves instantiated in linguistic expressions, since linguistic expressions merely meet the conditions of satisfaction for the conceptual or logical connections/properties constituting the relations of mental structures to obtain. Hence, when the linguistic expressions change, these conditions of satisfaction also change. But the psychological or conceptual processes that may exploit or operate on mental structures in the case of knowing or grasping other minds can never be identified with, or replace, mental structures. This is partly because psychological or conceptual processes may mediate only the conditions of satisfaction for the conceptual or logical connections/properties implicit in relations of mental structures to obtain or materialize, but not the conceptual or logical connections/properties themselves.

Possible Minds and the Cognitive

189

Ultimately, it must be made clear that neither a psychologically grounded process nor any general concept of mentality suffices for knowing or grasping other minds, even if knowing or grasping other minds is made sense of in terms of our natural biologically inherited capacities (including the theory of mind capacity). Here two different senses of knowing or grasping other minds must be clarified. Knowing or grasping other minds as part of our natural biologically inherited capacities is one thing that may be taken to be universally available in humans, while knowing or grasping other minds as part of an endeavor to determine what the structure or form of other mentalities could be is another. The present work aims at the latter, not the former, although it needs to be mentioned that the preceding paragraph is an attempt to describe how the properties of the form of mentality in terms of mental structures relate to the psychological processes that may make use of these structures. The problem of other minds as described in Nagel does not have any strict relevance to mental structures as formulated in this book, not only because mental structures by their very nature are open to psychological or conceptual mechanisms that are really involved in extending from the self to others but also because mental structures taken in themselves do not presuppose a general or essential concept of mentality or cognition which may or may not be psychologically available in humans. In fact, the present book has argued that starting with a general or essential concept of mentality or cognition does not help specify the forms of various types of mentality since the demarcation of such a general or essential concept will always remain tendentious and contingent on one’s interpretation. This is to affirm that the ontology of other minds may be different from the epistemology of other minds. 
That is to say that specifying the forms of various types of mentality in other organisms is distinct from determining how humans actually know or can understand other minds. Many of the epistemological and conceptual conundrums connected with the by-now-familiar problem of other minds arise primarily because the case from one's own experiences and states of mind (including introspection) is a central part of the scaffolding premises that constitute the contents of such conundrums in any guise. This observation helps motivate mental structures further, inasmuch as mental structures do not start from, and are not in themselves rooted in, one's self. Experience from the self-based perspective is not what gets mental structures off the ground in the first place, and this constrains how mental structures can be part of the mental machinery without being an intrinsic part of experiences. If experiences, states of mind and other mental events are not, strictly speaking, constitutive of mental structures, any problems and riddles that experiences, states of mind and other mental events give rise to or are peppered with do not vitiate mental structures, especially when the aim is to probe into the
forms of other types of mentality. That this is significant can be shown from another angle. Take the case of emotions, for example. Pickard (2003) has argued that emotions solve the problem of other minds, at least for other fellow human beings, on the grounds that emotions provide a link between experience and behavior, which are united, or viewed as united, when one experiences emotions. She contends that experience is considered a paradigmatic instance of mental events or states, while behaviors are such that they can be observed, checked and thus verified. She thinks that if experience and behavior are viewed as two aspects of the same thing, or rather as things of the same type, emotional experience can be brought forward to make this unity between experience and behavior palpable and viable. For her, the bodily changes or states of the body that instantiate a certain emotion and the specific feeling or experience that is felt are two sides or aspects of emotion that enjoy ontologically the same status. Thus the bodily changes instantiating emotions and the feeling of emotions are seen as constitutive of emotions. And insofar as this is so, the specific feeling or experience felt inside one's own self and the bodily changes that are visible or observable in the other are aspects of the same thing, that is, emotion. Hence the experience of emotions and the bodily changes instantiating emotions coalesce to form a unitary phenomenon, making it possible for one to relate one's private experience to the observable behavior of the other. This readily allows one to hold the justified belief that others also have minds.
Now it may be noted that this does not carry over to other types of mentality, in that even if the bodily changes instantiating emotions and the feeling of emotions are seen as constitutive of emotions, we can never be sure that they are so constitutive of the emotions, if any, of other organisms and creatures. To put it differently, the constitutive relation holding between human emotions and the unity of bodily changes and the feeling of emotions may not apply to other creatures. This does not, of course, imply that there cannot be any such constitutive relation obtaining between other kinds of emotions and the joint effect of bodily changes and emotive feelings in other organisms. But the important question is: how do we extend from our own constitutive relation to the many other distinct kinds of constitutive relations of the other, if other organisms constitute the other? Thus the conceptual problem of other minds creeps into the scenario once we go beyond the human case. The extension becomes tenuous and the possibility dimmer as we consider smaller organisms such as bacteria, fungi and insects, since the nature of emotions in such organisms is far from clear. Thus far we have attempted to set out the different ways in which the philosophical puzzles surrounding the problem of other (human) minds can be
found to be relevant to the characterization of the forms of other types of mentality in terms of mental structures. We have observed that the problems become more severe when one makes reference to experiences, mental states and events in dealing with the problem of getting into others' minds. It has been maintained that mental structures can do better, and arguments for this have been marshaled from several angles in order to defend the whole idea of mental structures. Admittedly, even with the postulation of mental structures we cannot completely avoid the problem of getting into others' minds, since mental structures are, after all, structures manifest within other types of mentality, and we are trying to tap into these structures through the lens of our own cognitive apparatus. However, it may be recognized that this is a general argument against the very possibility of knowing or understanding anything that lies beyond the reach of the cognitive apparatus through which we view virtually everything. So even though this may seem to be a problem, it cannot restrict or hamper any inquiry, simply because such a problem constrains every domain of inquiry we can think of. There is a similar line of thinking in Sober (2000) as well, but the way in which, he thinks, an inquiry into the structures of other types of mentality is to be conducted differs greatly from the methodology formulated in the present book. Sober believes that the genealogical relatedness among organisms, cashed out in terms of the heritability of some mental traits and/or other non-genetic mechanisms (learning, environmental effects etc.), can reveal the correlation (in mental traits) between the self (in humans) and the other. Thus he seems to sketch out methodological guidelines as to where to look for traces of a given mental trait or capacity.
He does think that extrapolation from one's own case, given the sharing of ancestors between the self (humans) and the other (other species) in our biologically given lineage, is justified only if the behaviors of the self and the other are in some sense homologous. He does not, however, think that the extrapolation is otherwise unjustified, for, as he argues, one can certainly attribute mental states to Martians as well as to machines with which we share no ancestors. Although it may appear that biological relatedness is underspecified with respect to the specification of mental structures that can be found in many species not directly related to humans by way of a common ancestor, it must be borne in mind that Sober's proposal is primarily aimed at shrugging off philosophical proscriptions of any act of extending from one's own case to others. Sober mounts a general critique of any such proscription in maintaining that there is nothing logically wrong in extending from one's own case to others. Moreover, his proposal does not exactly ban the specification of mental structures that can be found in many species unrelated to us; rather, it seems to state the heuristics of parsimony
when mental traits/capacities—or for that matter, mental structures—are to be looked for in other species. Simply speaking, a proposal that adheres to these heuristics of parsimony is evolutionarily more viable than one that does not. The way mental structures in other organisms have been specified in Chapter 3 does not in any way go against this, as the specification of mental structures from the smallest organisms to bigger creatures testifies. We are now in a position to articulate the emerging conception of the cognitive in terms of mental structures. The viability of mental structures has been demonstrated from various angles that turn on biological, linguistic, anthropological and philosophical issues. With this in place, it is now appropriate to see what mental structures can offer for a view of the cognitive, since the characterization of the cognitive is the most important thing that mental structures can uncover. Undoubtedly, the cognitive must be implicit in the very specification of mental structures for the various species of interest. Without this, it is not even clear how the specification of mental structures for various species can be considered plausible. But it must also be made clear at this stage that the goal of marking out the cognitive is not simply to determine where the boundary between the cognitive and the non-cognitive lies, although this will certainly fall out of the characterization of the cognitive in terms of mental structures. In many ways, that is perhaps the less interesting problem. Rather, the goal is to map out the vast territory of the cognitive, within which different regions can be occupied by various types of mentality across species or organisms. In this context, the proposal in Sloman (1984) seems pertinent.
Sloman sees the problem of demarcating the space of possible minds not as the problem of drawing either a binary distinction between the mental and the non-mental or a continuum with two extremes designated by the mental and the non-mental. He thinks there can indeed be many dimensions along which the cognitive can be mapped out, and many differences and similarities among various types of mentality may turn out to be significant as far as an understanding of the cognitive in terms of a variety of dimensions is concerned. These differences and similarities are to be conceived of, he argues, in terms of parameters such as the execution of a single task vs. the execution of multiple tasks, serial processing vs. parallel processing, inferring unobservable internal states vs. making inferences based on observable public behaviors, being able to monitor, describe and report inner states vs. remaining confined to external events, and so on. Notice that many of these parameters make reference to cognitive capacities and activities, while mental structures substantiate cognitive capacities and activities. So mental structures can be found at a level we arrive at by zooming in on what cognitive capacities and activities can be traced to
when we scale down the size of our view. In fact, as we explore the cognitive by looking at mental structures, it will become clearer how cognitive capacities and activities can be substantiated by mental structures. The nature of the cognitive can be better understood by making reference to the properties of mental structures as formulated in Chapter 3. If considerations concerning species-specific constraints are set aside for now, it can be proposed that possessing at least one mental structure confers a cognitive property on the system or entity that possesses the mental structure in question. In Chapter 3, it is postulated that bacteria, amoebae and other single-celled creatures, and also fungi, can have the mental structure individuated by Something-Exists. This is sufficient for these organisms to have a minimal form of cognitive capacity. The mental structure individuated by Something-Exists constitutes the structure of the mentality these smallest organisms may possess. One may now jump in and argue that mental structures are not all independent of one another, given that some mental structures entail the presence of some other mental structure(s). Mental structures such as those individuated by Conjunction-of-Things or Thing-with-a-Feature entail the availability of Something-Exists, but not vice versa. The reason for this is that the set of all mental structures has a hierarchical organization. This is indeed the case. So the representation of such a hierarchy may look like the following. (99) MS 1