Functional Semantics: A Theory of Meaning, Structure and Tense in English 9783110818758, 9783110149418



English Pages 601 [604] Year 1995




Functional Semantics

W DE G

Trends in Linguistics Studies and Monographs 87

Editor

Werner Winter

Mouton de Gruyter Berlin · New York

Functional Semantics A Theory of Meaning, Structure and Tense in English by

Peter Harder

Mouton de Gruyter Berlin - New York

1996

Mouton de Gruyter (formerly Mouton, The Hague) is a Division of Walter de Gruyter & Co., Berlin.

∞ Printed on acid-free paper which falls within the guidelines of the ANSI to ensure permanence and durability.

Library of Congress Cataloging-in-Publication Data

Harder, Peter, 1954–
Functional semantics : a theory of meaning, structure, and tense in English / by Peter Harder.
p. cm. — (Trends in linguistics. Studies and monographs ; 87)
Includes bibliographical references and index.
ISBN 3-11-014941-9 (cloth)
1. English language — Semantics. 2. English language — Syntax. 3. English language — Tense. I. Title. II. Series.
PE1585.H33 1996
432'.143-dc20 95-45939 CIP

Die Deutsche Bibliothek — Cataloging-in-Publication Data

Harder, Peter:
Functional semantics : a theory of meaning, structure and tense in English / by Peter Harder. — Berlin ; New York : Mouton de Gruyter, 1996
(Trends in linguistics : Studies and monographs ; 87)
ISBN 3-11-014941-9
NE: Trends in linguistics / Studies and monographs

© Copyright 1995 by Walter de Gruyter & Co., D-10785 Berlin All rights reserved, including those of translation into foreign languages. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording or any information storage and retrieval system, without permission in writing from the publisher. Printing: Ratzlow-Druck, Berlin. Binding: Lüderitz & Bauer, Berlin. Printed in Germany.

Preface

The germ of this book was an idea I had more than twenty years ago. I thought it was a wonderful idea; I still remember the shivers it sent down my spine. It was very simple, too.

Like many others, I felt uncomfortable with the geography of linguistics, divided as it was between an ideal structural core and a messy fringe of empirical phenomena. It occurred to me that you could eat your cake and have it too, if you reversed the usual linguistic approach. Linguists who start off with Language with a capital L and then cautiously move closer to the real world have to watch helplessly while the neat structural generalizations crumble gradually away; a not unnatural reaction is to try to keep the real world at a safe distance. But if you begin by facing the full extent of the messy social and psychological variability of linguistic communication, you can put yourself in a position that is much more encouraging as well as empirically responsible, simply by appealing to the undeniable fact that variability cannot be the whole story. What is it that enables me to understand messages between people whose lives have had no connection with mine, and who may have been dead for hundreds of years? The process that complex linguistic expressions trigger in addressees must be structured in ways that rise above the local and accidental, or this sort of everyday miracle could not occur. Linguistic structure should not be seen as a feeble laboratory plant, wilting at the first breath of actual communication, but as a remarkably robust aspect of real social interaction. In that way you can look with a clean empirical conscience for those generalizations that linguists get their kicks from, even if you have to accept that there may be fewer really neat structural generalizations than you might wish.

People who feel they have been granted a revelation are rarely happy about the reception they get.
The way I summed up the response, once the stage of ordinary friendly interest was over, was something like: it sounds good, but I don't see that it makes any difference; anyway, we already knew that — and besides, it's probably wrong. In the following years, a gap was apparent in the communication between linguistics and me. Although language was extremely interesting, linguistics managed to be extremely boring much of the time, I felt, as well as being bored with the questions I was interested in. For reasons that had nothing to do with linguistics, I became a teacher and had a number of happy and challenging years in that capacity; but when the Giese chair gave me the opportunity, I went back to the university and to linguistics, prompted by the desire to have time for sustained thinking again.


The field I returned to was in many ways more interesting than the one I had left. When I began to have a sense of what was going on, I dusted off the project of describing language-as-such as a structured aspect of the whole domain of communicative interaction. The climate was much more congenial now, and I could see my own views as part of a new pattern that was emerging in linguistics. Yet I found that there were still some widely held assumptions that made precise communication about the issue difficult. One aspect of the problem was the double commitment I felt towards the hard core of linguistic structure and the need to anchor it solidly in nonlinguistic reality; I have generally felt more structural than those who took non-linguistic reality seriously and more contextual than structural linguists. The feeling was enhanced when I began to develop the theory of English tense that is found in this book, wrote an article and had it returned from a journal with two very kind reviews which honestly could not see what was the point of doing it this way. The core of the problem as I saw it was the unclear status of linguistic meaning. Although semantic considerations had become vastly more central in linguistics, it was still very unclear to me how it was understood and where it belonged in relation to accepted views of linguistic structure. One occasion when I began to feel confident that the problem was not mine alone was when it dawned on me that the word semantics was widely used in ways that were systematically ambiguous between a sense that belonged in the philosophical tradition (as a subdiscipline of logic) and a sense where it was concerned with meaning as a property of natural languages. 
All this would probably never have turned into a book if my then head of department had not felt obliged to warn me that it might be wise to try to produce a major statement fairly soon, since my job might, for purely external reasons, be upgraded to a full professorial chair within the foreseeable future, with no further warning — in which case I would have to re-apply. The threat evaporated, but by then the project had got under way. Certain parts of the book have become rather more difficult than I had hoped. One reason for this is that I have included much more discussion of other people's arguments and disagreements than required strictly for a presentation of my own views. There are two related reasons for this choice, based on strategy and principles, respectively. The strategic aspect was determined by the nature of the response that I have described above. I felt that I had to get at the foundations of the accepted assumptions in order to make it clear why the project was worth pursuing; that I had to
demonstrate painstakingly where it was different from what sounded vaguely similar — and to specify very clearly why it was important to be fanatic about both function and structure. Pedantic as the discussions may seem, I hope that the attempt to be clear about the foundational issues may be useful to others who have also been baffled about the kind of things that get taken for granted in talking about language. If it could be assumed that we already knew in practice what meaning and structure are, Parts One and Two would simply be superfluous; linguists could then proceed to describe languages on the basis of such an understanding and leave the foundational discussion to philosophers. But although agreement about basic issues is sometimes greater than it seems, the discipline of linguistics as of 1995 does not offer a coherent picture of the foundations of language. Moreover, on certain points where there is something that looks like a consensus, it rests (I think) on mistaken premises; in fact, most of the central vocabulary of linguistics serves to a considerable extent to obstruct communication rather than to promote it. Among the words that cannot be used without grave risk of misunderstanding are (apart from semantics) pragmatics, syntax, grammar and linguistic form. In an attempt to counteract the difficulty of the foundational discussions, I have tried to provide introductions and summaries as I go along to make it possible to skip some of the skirmishes without losing the sense of where the book is heading. I begin to get into my own story when I introduce the functional approach to linguistic meaning, in Part One, section 3, so this may be a good place to start for readers who already know or do not care about established views of meaning. 
But in addition to the desire to place my position clearly in the landscape, there is a more principled consideration: my feeling is that what progress the individual author can hope to bring about is generally in the inconspicuous details of an argument with previous workers in the field. Unless you take the trouble to make these details clear, no-one else is likely to. The spectacular conclusions that in moments of arrogance you would like to claim as your own — such as the one I started out with — are always ninety-nine per cent old chestnuts, seen in a wider perspective. To live with a realistic sense of what you can achieve, you have to regard the remaining one per cent with something analogous to the exaggerated sense of importance that human beings attribute to the one percent of their DNA that differs from that of the chimpanzee.


Throughout my professional life I have had the good fortune to know and work with a number of people who took an interest in what I was doing. Within the field of English language, Niels Davidsen-Nielsen has been a source of encouragement and inspiration ever since my undergraduate days; the views on tense that I present arose as a result of discussions with him. My longest-standing professional circle is the "pragmatics group" in Copenhagen, which has been in existence since 1977. In relation to this book I would like specifically to thank Peter Widell for sharing his enthusiasm for Putnam with the group, and Ib Ulbaek for defending Fodor's views long enough to give me a sense of what Fodor was trying to do. In recent years, the working community of "Danish Functional Grammar" has provided a constant sense of direction and purpose in the linguistic environment that I have worked in, and in which this work belongs; Lisbeth Falster Jakobsen has been a mainstay in the life of this group. A three-year grant to the Danish Functional Grammar ("Grundforskningsbevillingen" 1992-94) has financed a lowered teaching load, increasing my research time. The Danish functionalist group owes its existence to the Dutch group around Simon Dik, whose inspiration will be evident in the book; at the time of writing we are still in the shadow of his untimely death. It was also through the Functional Grammar conferences that I met Colette Craig, who helped me to make contact with the functionalist community in American linguistics, when the Binar Hansen Travel Award made a study tour to the U.S. possible. A crucial phase in the project was a period as visiting scholar at UCSD, San Diego, in the spring of 1994. I am grateful to Ronald Langacker for making the stay possible, and for taking an interest in my work; the inspiring atmosphere combined with the possibility of a sustained period of writing was responsible for the first complete version. 
Among the other courses I followed in San Diego, Jean Mandler's on cognitive development in children, Christine Johnson's on animal cognition, and David Kirsh's on cognitive science and philosophy will be recognizable sources of inspiration in various places below. John Searle was kind enough to discuss the central point of "inherent function" with me and give me a sneak preview of work-in-progress, including his central counterargument, as a result of which the error in my position is at least subtler than it used to be. Over the years, Hans Fink has pointed out some of the foundational aspects of the problems to me from time to time; one specific issue where I am directly indebted to him is with respect to the problematic status of the concept of information. Christian Kock worked closely with me at the time

when the concept of functional semantics was first articulated, and what I say about the linguistics of literary texts is based on what I have learned from him. The English Department in the University of Copenhagen has been a benevolent employer, generously supporting the sabbatical in 1994 even though it came at an inconvenient time, and generally provided enviable working conditions for me. I would like to thank the following people, who have been kind enough to read and discuss parts of the manuscript at various stages: Finn Collin, Niels Davidsen-Nielsen, Per Durst-Andersen, Elisabeth Engberg-Pedersen, Troels Engberg-Pedersen, Hans Fink, Michael Fortescue, Maj-Britt Mosegaard Hansen, Lars Heltoft, Michael Herslund, Theo A.J.M. Janssen, David Kirsh, Ron Langacker, Anne Louise Paldam, Mark St.John, Steen Schousboe, Chris Sinha, Frederik Stjernfelt, Ole Nedergaard Thomsen, Susanne Trapp and Herman Wekker. The book has benefited greatly from those discussions; needless to say, I alone remain responsible for all its errors and shortcomings. Science is crucially a collective endeavour — by people who work long hours alone. Without a supportive working environment neither half of the paradox would be possible. In addition to the professional half of the environment which I have mentioned, I must say a few words about the private half, although its paramount importance is one of those background things that do not lend themselves well to verbalization. There is a somewhat one-sided relation between a happy family life and concentrated intellectual effort; emotional support is vital for the power to concentrate, but single-minded writing gives very little in return for it. Slightly over half of the people I know who have undertaken similar projects have been divorced in the process. 
I am immeasurably indebted to my wife, Birthe Louise Bugge, for her love and care throughout the years, and specifically for the unique mixture of indulgence and insistence through which she made it possible to keep both that and the project alive. In addition, I am grateful to her for undertaking a final reading process to catch bugs arising as a result of other corrections. Last of all, I thank my father and mother, Thøger and Lene Harder, for support, interest, and continued boundless willingness to be there when help was needed.

Contents

Preface v

Part One: Meaning 1

1. Introduction 3

2. Meaning and the pursuit of knowledge: the world behind the word 5
2.1. Introduction 5
2.2. Plato on language and ideas 6
2.3. Aristotle: words and categories 8
2.4. Language and metaphysics in Plato and Aristotle 11
2.5. From classical to modern ontology: logic and the "linguistic turn" 13
2.6. The rise and fall of logical reconstruction 16
2.6.1. Logical form: the quest for clarity and precision 16
2.6.2. Frege and Russell: how to make ends meet 18
2.6.3. The fragmentation of the logical world picture 24
2.7. Logical and linguistic semantics 29

3. Meaning and cognition 33
3.1. Introduction 33
3.2. The "classical" computational approach 36
3.3. Language and meaning in the "classical" view 41
3.4. Intentionality, mental content and rules 46
3.5. Intentionality and information 49
3.6. The second cognitive revolution: cognitive linguistics and connectionism 52
3.7. Pan-cognitivism: turning behaviourism on its head 55
3.8. Continuity and differentiation: a delicate balance 59
3.9. Cognition: the differentiation and interrelation of skills 63
3.10. Problems with the word "conceptual" as used in cognitive linguistics 68
3.11. Conclusion: conceptual meaning — and why it is not enough 75

4. Meaning in a functional perspective 79
4.1. Introduction 79
4.2. The intellectual history of the functional perspective on language 80
4.3. What is "function" (if anything)? 88
4.4. Types of functional contexts 93
4.5. A functional account of language and meaning 96
4.5.1. Millikan and the attempt to found logic on evolution 96
4.5.2. Three functional contexts — and three different discussions 98
4.5.3. A functional definition of linguistic meaning 101
4.5.4. Some central functional characteristics of linguistic meaning 105
4.6. Meaning and representation: procedural semantics 107
4.7. Concepts and conceptual linguistic meaning in the procedural perspective 115
4.8. Searle on representation and interaction 122

5. Semantics and pragmatics in a functional theory of meaning 127
5.1. Introduction 127
5.2. Pragmatics, truth, and Plato 129
5.3. Coded functions and utterance function 132
5.4. The principle of sense 136
5.5. Relevance versus sense: translating interaction into information 139
5.6. Final remarks 146

Part Two: Structure 147

1. Introduction 149

2. The functional basis of linguistic structure 150
2.1. Introduction 150
2.2. The ontology of levels 150
2.3. Component-based and function-based structure 154
2.4. Saussurean structuralism: a functional reconstruction 157
2.5. Structure and substance — arbitrariness and motivation 165
2.6. American structuralism: the Bloomfield-Chomsky tradition 170
2.7. Autonomy in generative thinking: the Pygmalion effect revisited 173
2.8. Generative autonomy: empty or absurd? 176
2.9. Underlying structure I: significant generalizations and the naming fallacy 183
2.10. Underlying structure II: distribution vs semantics 187
2.11. Autonomy: final remarks 190

3. Clause structure in a functional semantics 193
3.1. Introduction 193
3.2. On content and expression in syntax 193
3.3. The nature of content elements 200
3.4. Scope and layered clause structure 209
3.5. Process and product in syntactic description: the clause as recipe for interpretive action 214
3.6. The nature of syntax: cognitive and evolutionary perspectives 218
3.7. The relation between expression and content syntax 224
3.8. Differences in relation to standard Functional Grammar 228
3.9. Semantic clause structure and grammatical universals 243

4. Conceptual meaning in a functional clause structure 255
4.1. Introduction 255
4.2. Language structure in Cognitive Grammar 255
4.3. Cognitive Grammar and the distinction between clause meaning and interpretation 261
4.4. Conceptualization embedded in interaction: the top-down aspect of syntactic structure 265
4.5. A closer look at non-conceptual meaning 271
4.6. Two forms of incompleteness: functional and conceptual dependence 275
4.7. Relations between functional and conceptual aspects of "element" meanings 282
4.8. Dependence and the division of labour between coded and uncoded meaning 285
4.9. Scope, function and semantic relations: the multidimensionality of semantic structure 288

5. Summary: function, structure, and autonomy 297

Part Three: Tense 311

1. Introduction 313

2. The meanings of tenses 315
2.1. Some central positions and concepts 315
2.2. Individual content elements: the deictic tenses 326
2.2.1. Introduction 326
2.2.2. Past and present as pointers 327
2.2.3. The division of labour between tense and states-of-affairs 332
2.2.4. Pointing vs. "focal concern" 335
2.2.5. Reference to unidentified past time 339
2.2.6. The markedness issue 341
2.2.7. The non-temporal past 343
2.2.8. A comparison with Cognitive Grammar 344
2.2.9. Internal structure 347
2.3. The future 349
2.3.1. Semantic description 349
2.3.2. The prospective 351
2.3.3. The pure future in relation to modality 352
2.3.4. Future meaning and content syntax 355
2.3.5. The status of the past future 357
2.3.6. The pure future in cognitive grammar 362
2.3.7. The status of the future in the structure of English 367
2.4. The perfect 376
2.4.1. The role of content syntax 376
2.4.2. The perfect as a product of grammaticization 377
2.4.3. The central meaning of the perfect: the cumulative model of time 379
2.4.4. The compositional perfect and the present perfect 383
2.5. The place of tense meanings in the general theory of semantic clause structure 386

3. Tense & Co: time reference in the simple clause 391
3.1. Introduction 391
3.2. Logical vs. functional operators 391
3.3. Time-referential formulae as emerging from meaning plus structure 396
3.4. Reference time: a family resemblance concept 398
3.5. Tense time, adverbial time and topic time 404
3.6. Adverbials in complex tenses 414

4. Beyond the simple clause 423
4.1. Tense in subclauses: general remarks 423
4.2. Indirect speech 429
4.2.1. Mental spaces and referential ambiguity 429
4.2.2. Point-of-view: coding and pragmatic considerations 433
4.2.3. The past perfect and "back-shift" 440
4.3. Tense in conditionals 443
4.3.1. Introduction 443
4.3.2. The role of if 444
4.3.3. Comparison with representational accounts 448
4.3.4. The role of tense I: reality-based conditionals 451
4.3.5. The role of tense II: P*-based conditionals 454
4.3.6. The past perfect conditional: Meaning, usefulness and implicatures 459
4.3.7. The "present subjunctive" reading of past tense 463
4.4. Functional content syntax and "normal" syntax 465
4.5. Tense and discourse 475
4.5.1. Introduction 475
4.5.2. Discourse Representation Theory 477
4.5.3. The "historical" present 482
4.5.4. Fleischman's theory of tense in narratives 484
4.5.5. Tense in fiction 487
4.5.6. Tense shifting in non-narrative discourse 492

5. Conclusion 497
5.1. Overview 497
5.2. Meaning 497
5.3. Structure 498
5.4. Survey of times 500
5.5. Conceptualization embedded in interaction 501
5.6. Semantics and pragmatics 502

Notes 505
References 533
Index of names 571
Index of subjects 577

Part One: Meaning

1. Introduction

The overall cohesive strategy of the book is as follows: In Part One, I discuss some influential assumptions about meaning and defend a function-based view of linguistic meaning. In Part Two, I discuss some influential assumptions about structure and develop a view of linguistic structure that is based on the functional view of meaning defined in Part One. In Part Three, I hope to show that the two foundational discussions make a significant contribution to the proper understanding of tense in English.

I begin with meaning because that is the key to all questions about the nature of language. The question of meaning is one of the most fundamental issues in the humanities; any investigation of the subject should therefore be approached in a spirit of suitable humility. In view of that, there are some aspects of the purpose of this chapter that I want to make clear.

First, two shortcomings that I would like to be the first to point out. To begin with, in terms of the current division of scholarly labour I have tried to cover too much, and as a result I am meddling in things I do not know enough about. Secondly, because I try to cover too much, the selection becomes patchy, and as a result I end up not covering enough to give a fair description of what each knowledge domain has to offer of relevance to the topic.

The reasons I have for letting myself in for this type of criticism are twofold. As a point of principle, if you think meaning is the key to the understanding of language structure, you cannot approach meaning from the point of view of linguistics only — that would be to understand the explicans only via the explicandum, as it were. Further, historically speaking, all descriptions of language have been determined by the stance on meaning which they have either explicitly adopted or more frequently just taken for granted, in both cases for reasons imported from outside the field of linguistics.
The reasons why certain linguistic descriptions which I think less than fully adequate appear plausible or even compelling lie in assumptions that are not addressed explicitly within linguistics — but rather in contexts that are traditionally seen as belonging within philosophy, more recently also in psychology or sociology and most recently within cognitive science. I need to take these ingrained presuppositions up for discussion in order to be able to describe the role they play in the received picture of the subject as such and in motivating the account I offer; and what I say about
them I say as a linguist in foreign territory. However, it will be obvious why I could not (even if I were capable of it) within the confines of one book do justice to all these fields on their own terms. What I have tried to do is put the spotlight on representative authors and problems, and go just deep enough into them to establish the context of the view of meaning that I am after, in the hope of making it clear why I think the perspective I advocate is a necessary corrective. I have tried to avoid misrepresenting the purpose and central achievements of the different approaches with respect to meaning; but I have not tried to do justice to the full breadth of achievements or nuances. That is also one reason why I have not felt obliged to try to cover the most up-to-date versions of each view.

Since meaning in a broad sense is a basic element in the world of human understanding, it is not obvious that it is possible to speak of any particular perspective as being the proper one for the study of meaning. However, the general approach suggested here takes for granted the validity of a broadly developmental approach to human phenomena, in terms of which certain phenomena are more basic and fundamental and others more derived and sophisticated. The general framework for this pattern of thinking is evolutionary biology. The point here, however, is not to suggest a particular evolutionary hypothesis, even though there will be a few speculations in that direction. Rather, the focus is on trying to understand the synchronic relation between aspects of human language in terms of this perspective. The most central example of this is the relationship between cognitive skills like thinking and interactive skills like functioning as a member in the human group.
I argue that there is a natural relation between these two aspects, both central and constitutive of humanity, such that interaction is more basic and thinking is more sophisticated, and neither can be understood solely in terms that are specifically defined to account for the other. Both play a role for understanding meaning, and need to be understood together.

There are four main sections in Part One on meaning. The first deals with the approach to the study of meaning that seeks to account for meaning by going "behind" the words, looking for that which the words stand for: the "objectivist", "denotational" or "extensional" perspective. The second gives an account of the approach that emphasizes the context of human cognitive processes as fundamental to the issue of meaning. The third introduces the perspective that I see as fundamental, the "functional" perspective on meaning. Finally, I discuss the relationship between semantics and pragmatics in the light of the functional definition of


meaning, and discuss my position in relation to relevance theory as the most radically information-based view of language, in order to demonstrate what is involved in taking interaction inside rather than outside one's view of language. As will be apparent, I am not postulating any necessary conflict between the results of taking an extensional, a cognitive and a functional perspective. However, I think no satisfactory picture of language can be obtained by saying merely that these are three different perspectives, and one is as good as the other. On the contrary, the best way of describing language will have the property of enabling us to see why the results of taking different perspectives are the way they are. In suggesting that the functional-interactive perspective is inherent in language itself, I claim that you cannot properly understand the extensional and cognitive properties of language without building on the results of looking at language in the interactive perspective.

2. Meaning and the pursuit of knowledge: the world behind the word

2.1. Introduction

In science as in daily life, what you see depends on what you are looking for. The approach that investigates meaning in terms of categories of the world outside language is no exception to this rule. However, because it has the full weight of tradition behind it, it has historically been less easy to see how certain features of the traditional answers are direct reflections of the traditional questions — rather than the only possible answers. I shall try to outline how the approach to meaning shared by classical antiquity and modern logic depends on a shared project, i.e. the pursuit of knowledge. The two periods are relevant in different ways. The classical accounts have shaped what remains the common-sense view, in spite of structural linguistics; and the descriptive principles of formal logic have an aura of scientific precision that remains attractive within a deeply divided and chaotic linguistics. The basic weakness of this view is the "aboutness" problem, i.e. the question of the nature of the link that enables words to be


about things in the world. One way of describing the problem is that any attempt to make language isomorphic with the world ignores the place of language in the world. The problem is implicitly present everywhere, but comes out clearly in areas where the relation between language and the world cannot be taken for granted, such as in the case of subjecthood and existence, or in relation to negation and falsehood.

2.2. Plato on language and ideas

When Western philosophy began, one of the first questions on the agenda was the question of the fundamental stuff or principle that the world was based on. In that perspective language is fairly marginal, in the sense that the world is not made of language. On the other hand, language sneaks in because the only way of communicating insights about the world is via the medium of language. The mere activity of trying to state something true about the nature of the world therefore raises the problem of how words relate to that world. The primacy of this context for the study of language means that in the early perspective there are three subjects that tend to go together: the ontological1 inquiry into the basic structure of the world; the epistemological question of how we arrive at true knowledge of the world; and the study of language, as the medium in which knowledge of the world is expressed.2 In spite of increasing division of scholarly labour, the existence of this basic trinity of subjects has remained a powerful factor in thinking about linguistic meaning. There may be many arguments for understanding the meaning of words in terms of the things in the world which they stand for; but when the primary interest is in the world "as such", it is hardly surprising that meaning is viewed from that angle. In getting the flavour of how an ontologically based semantics acquired its status, it is useful to go back to the factors that were important in shaping Plato's thinking. Its fundamental elements can be approached through two pairs of antitheses: the contrast between cosmos and chaos, and the contrast between appearance and reality. Plato's understanding of the world in terms of a cosmic order hidden beneath a chaotic flow of sense impressions to some extent reflects an orientation that is built into the pursuit of knowledge as such.
The attraction of entering the search for true knowledge naturally depends on two preconditions: first, that true knowledge is coherent, i.e. constitutes an orderly body of facts; secondly,


that there is more to it than meets the eye. If we state these two basic assumptions in dogmatic form3, and radicalize the contrast between everyday beliefs and knowledge, we arrive at the doctrine of the underlying ideal world. But this descriptive aspect must be understood together with a normative element. Plato lived in an age which presented certain parallels with the twentieth century scene: an old order which had been taken for granted appeared to be cracking at the seams, and the forces of chaos appeared to be in the ascendant. The traumatic event of Socrates' conviction and death, occurring in the context of the political disorder following the defeat of Athens in the Peloponnesian war, epitomized the wrongness of things in the world. The nature of the doctrine of Ideas or Forms must also be understood in this context: as a reply, in the affirmative, to the question of whether there is any way of preserving a notion of a fundamental cosmic order as against the overwhelming evidence of chaos on the "surface" of things. Since the existence of an absolute order based on truth, goodness and beauty was not plain for all to see, it was necessary to postulate a mode of existence that was not immediately accessible, permitting order to stand as the ultimate reality, against which the ruling disorder was in the nature of a deviation. As will be apparent, there is no distinction between description and prescription in this view of the world. The crux of the matter is to maintain that there is a fundamental order in the world, embodying a standard against which one can judge any individual phenomenon, thereby simultaneously describing its position in the general scheme of things and prescribing what its ideal qualities should be. In this context, the central question in relation to language concerns the backing language must have in order to be used in conformity with the underlying order. 
Plato's metaphysical position implies that words, when used to characterize things in the world, ultimately get their backing by standing as names of the corresponding Forms.4 But this is evidently not the only meaning-bearing property of words, since it is possible to use words with considerable pragmatic success without being particularly concerned about the relation between one's words and ideal truth: language has the ability to evoke "images" which may or may not reflect eternal underlying reality.5 Sophists must be seen as dealers in images that do not correspond to reality; it requires a philosopher, using the deeper powers of his discerning mind, to tell whether an image reflects reality or not. Even


with this complication, however, some basic correspondence between categories in language and in the world is taken for granted. If there were no such correspondence, we could not distinguish proper and improper use of language: the correspondence between linguistic categories and categories in the world is the criterion that enables the philosopher to say whether something expressed by a combination of words is right or wrong. Plato's thinking thus accorded a central place to the intuition that words stand for something that is above variations in designata and variations in individual speakers. In providing such a formula, he laid down a foundation for thinking about meaning which survives to this day. Beliefs have varied with respect to the exact nature of the ontological backing for that basic intuition; we shall come across some successor concepts to the world of Forms as we go along. The view of pragmatics as being the locus of variability and unreliability as opposed to abstract, underlying ideas which guarantee order and dependability, similarly survives in an essentially Platonic form. And the underpinning of this view of meaning is not that this is the way it works in actual empirical reality, but rather that in taking it as basic you come down on the side of cosmos rather than chaos.

2.3. Aristotle: words and categories

Aristotle's views of language reflect the same basically ontological orientation as Plato's. But with the all-encompassing interest in the empirical world which constitutes the basic difference of perspective between them, Aristotle brought the analysis of language as reflecting the world closer to actual facts about language as well as the world. The familiar basic difference in their metaphysics is that Aristotle rejects the transcendental realm of Forms as distinct from and more basic than empirical objects. Instead, Aristotle locates the Forms inside the objects, introducing a complexity in the empirical world instead of a radical conflict between ideal and empirical reality — taking the human observer out of the cave of shadows, as it were. This view enables him to agree partly with the Platonic division into timeless and changeable objects; individual objects decay, but the Forms of which they are manifestations persist. On the epistemological level, the privileged role of the discerning mind is also maintained, preserving the possibility of going behind immediate sensory evidence.


While retaining the timeless aspect, the Aristotelian theory6 of what the world consists of is that the ultimate constituents of the world are individual things. In the theory outlined in the Categories, such things can be grouped into species, based on their essential properties; thus "man" is a species, with the property "thinking" as central among its properties; species can similarly be grouped into genera ("animal" is the genus to which the species "man" belongs); and species and genera therefore count as secondary substances (secondary, because the individual object is what primarily exists (the Categories, 2a34)). In this type of analysis the analysis of the world is difficult to tell apart from the analysis of language. The properties of the world are established together with the semantic properties of the words referring to it: the Categories introduces the notions of substance, quantity, quality, proportion, time, place, etc., while giving examples of expressions conveying these notions as part of their meaning. In certain cases, however, the interface between the two perspectives becomes apparent and specifically linguistic categories come into focus, for example in the analysis of statements in the De Interpretatione (Peri hermeneias). The basic concepts in analyzing a linguistic sequence as a statement (or proposition) are: "onoma" (=noun or logical subject), "rhema" (=verb or logical predicate), affirmation, and negation. To see how these linguistic constituents yield a logical judgment, some analysis of their semantic peculiarities is necessary; of special interest is the analysis of the semantic composition of the logical statement. In order to produce a logical statement one must have an "onoma" (which signifies something tenseless), and a "rhema" which (a) is always said about something else (its subject), and (b) indicates that it is in the subject at the moment of speech.
The latter requirement, preserving the immediate relation between subject-in-the-world and subject-in-the-sentence, restricts attention to logical statements in the present tense; past and future forms thus do not constitute "rhemata" in this narrow sense. We thus get three kinds of significations (meanings): tenseless meanings which can stand alone, tensed meanings which can only be said about something else, and their combination in the logical statement. The meaning of the "onoma" can stand alone, but without being true of anything; when a "rhema" is used alone, it therefore changes into an "onoma", so the same concept can enter into both functions ("health" and "is healthy" is given as an example) — but only an onoma and a rhema together can be true of something. Aristotle points out that not all combinations of onoma and rhema form


statements; truth and falsity do not apply to wishes, for example, although they are sentences. The distinction between essential and accidental properties, together with the law of the excluded middle, was the source of what came to be the standard analysis of concepts in terms of necessary and sufficient conditions. Although the correspondence between language and the world was the basic relation, Aristotle also made a programmatic statement about the role of mental categories in language, in the De interpretatione: The vocal forms of an utterance symbolize mental impressions7 as well as what the mental impressions are likenesses of, actual things. Both mental impressions and actual things are said to be the same for all people, so that variation between languages is assumed to be only in the vocal form — which is therefore a matter of convention.8 Although in an embryonic form, this passage sums up a position which may be called "classical universalism": languages are all alike, except in their "outer form", because they reflect the categories of the world, which are necessarily the same regardless of how you choose to name them. This assumption, which was more often taken for granted than explicitly stated, was epigrammatically summed up in a commentary on the Categories by Porphyry (3rd C.): As things are, so are the expressions which primarily indicate them (cf. Edwards 1967: 156). With these fundamental conceptualizations of language as reflecting the world, Aristotle laid down a detailed blueprint for intellectually coherent empirical knowledge and intellectually coherent language use at the same time. As if that was not enough, he also implemented it in just about all domains of knowledge, singlehandedly discovering fundamental facts about the empirical world, working out a conceptual apparatus to capture them, and developing linguistic terms to talk about them as he went along.
Many of his terms and concepts became part of everyday thought and language, assuming the status of natural God-given reason (cf., e.g., Johnny Christensen 1993). The ideal of making thought reflect reality and words reflect thought, putting everything in the right category and distinguishing essential from accidental properties, remains the bedrock of everyday rationality; even if the unity of his project is no longer credible, his achievement commands a sense of awe in anyone who tries to trace it.

2.4. Language and metaphysics in Plato and Aristotle

I suggested above that a theory of meaning that looks for point-by-point correspondence between language and metaphysics must run into difficulties because it has no place for the role of language inside the world. Since metaphysics was the priority subject, these difficulties surfaced as problematic aspects of metaphysical theories, rather than theories of language; the power of language to reflect the world was taken for granted because it was necessary if the whole project was to make sense. This can be seen in some of the ways Plato argues for the Forms as constituting ultimate reality: if there were not such a thing as "beauty", of which beautiful things had a share, how could it be possible to attribute beauty to anything (cf. Phaedo 100d)? The unexamined premise in the argument is the prior knowledge, by the philosophers who attribute beauty to an object, of a language that includes a word such as beauty. In the Aristotelian world picture, the difficulty took a different shape, which was central in the transition from the classical to the modern versions of ontologically based semantics. Because the Forms were seen as part of the concrete objects, the problem concerned the understanding of the nature of those individual things or substances which constitute the basic furniture of the world. Concrete objects are not unanalyzable primitives; they are instances of universal properties, segments of matter, members of species, etc. In the Metaphysics, Aristotle therefore goes on to discuss what it is about the individual object that qualifies it as a substance. Having rejected species and universal properties generally, he has two possibilities left: the "substratum" (hypokeimenon 'the underlying'), and ousia understood in the sense of 'essence'. "Substratum" is applied to two relations; and this is where the problem of language and the world arises.
First, the relation between the "prime matter" of which a subject consists and the subject itself; secondly, the relation between a subject and what is predicated of it. Since matter as such, in Aristotle's view, is completely unspecific and can never exist actually without a particular form, matter cannot be the primary substance of the world. Substratum in the sense of that to which predicates attach is provisionally accepted as constituting substance, but this again gives rise to the question of what exactly this substratum is. In Edwards (1967: 160), the question is discussed as one of the central issues in Aristotelian thought; the only answer that is deemed viable is to return to the individual object in itself.


In hindsight, the discussion betrays a confusion between properties of things and properties of linguistic expressions, which can be seen as stemming from the lack of awareness of the two different notions of the subject of a sentence. In what we may call simple subject-predicate statements, the subject is what the sentence is about, as in Theaitetos sits, where Theaitetos is the subject. But is it the name or the thing that we are talking about? In Plato and Aristotle, it is typically both (cf. the discussion in the Sophist 262-263, and the first chapters of the Categories). The problem is that the basic isomorphism assumed in classical universalism is difficult to uphold on this point. Where we may sensibly discuss whether the meaning of the word cat corresponds more or less exactly to cats out there, the problem is different in scale when we look at the property associated with subjecthood, i.e. that of being an existing thing of which something is predicated — it is not clear that there is anything out there that corresponds to this property. In modern times, this has been seen in terms of the problem of existence (cf. the discussion of Russell in section 2.6.2). From that point of view, the parallelism breaks down because existence, as the modern slogan goes, is not a property (of things-out-there; but cf. p. 393 below). Hence, we cannot talk of existence as denoted by the word and existence "out there" — when we look for existence as something out there, we cannot point to anything apart from the object itself. The "aboutness" problem shows itself clearly here because the concept of existence, as opposed to "cat", in itself raises the question of aboutness: talking about existence can be rephrased as talking about whether an expression "is about" an object or not. From Aristotle's point of view, the issue was not the question of existence — that issue was not as problematic to him as it became later on — but the question of what was ontologically basic.
In Aristotle's theory of substance, his pointing to individual objects as the basic furniture of the world remains as plausible now as then; but in distinguishing between a substance and what is predicable of a substance he bequeathed an unsolved problem to his successors down to the present century.9 These weaknesses arose because Plato and Aristotle took it for granted that language must be capable of standing accurately for the world. This prevented them from focusing on the problems posed by language as an object in its own right, and therefore also from taking up the ways in which language might have problems in reflecting reality with desirable accuracy. The attempt to resolve those difficulties is the most salient feature of the modern phase of ontologically oriented semantics.


2.5. From classical to modern ontology: logic and the "linguistic turn"

The change from classical into modern thinking involved a gradual dissolution of the harmony of language, mind and the world. It would obviously take us too far to go into the ramifications of this process. What is interesting from the point of view of meaning is the way in which the ontological approach persisted in assigning language basically the same job — while viewing it in a new perspective, where language acquired a more central position. A central aspect of modernity is the assumption that there is no guaranteed harmony between the world as such and the way it appears in a human perspective. Descartes marked a crucial transition from a conception of the world in terms of a meaningful cosmos to a recognition of the pervasive role of objective, mechanical processes. However, rather than give up the spiritual nature of man, Descartes saw that as the special distinction of man in opposition to the rest of creation, leaving a radically divided conception of human nature — and as often pointed out by authors who disagree (cf. Rorty 1980: 4; Searle 1984: 14; Johnson-Laird 1988: 14), Cartesian dualism became part of the accepted, commonsense picture of the world. While science gradually conquered everything else as its territory, the human mind remained sui generis. The split between mind and experience allowed rationalists, who retained the powers of the mind as their point of departure, to take over a great deal of the classical tradition. The position of their empiricist opponents embodied a basic scepticism towards the results of traditional thinking: how can we be sure that they are not just pure invention? Hence, it became necessary to look carefully into the empirical foundations of views that had until then not seemed to require such support; and the corrosive power of empiricism is a crucial ingredient in the development towards a scientific world picture.
It should be emphasized that the ideal of harmony was by no means dead; it continued to assert itself both in its classical and in various new forms. The most influential attempt to restore harmony was that of Kant, in which the a priori categories of the mind (such as causality), as associated with linguistic forms (in this case conditionals: if water is heated, it will evaporate), determined the character of the phenomenal world (a world imbued with causal relations); and the Tractatus picture (cf. below p. 25) is to that extent in the tradition from


Kant.10 Furthermore, while assumptions about the nature of the world and human knowledge came under increasing pressure, language still had to do the same (partially) harmony-preserving job that was introduced in the discussion of Plato above: the job of making ends meet. The whole discussion about knowledge would be futile unless it was possible to maintain that language was able to reflect the way things were. Assumptions about the way in which this could be done, however, changed drastically. The most important element in this change involved a changed conception of the role of logic, occurring over a period round the beginning of the twentieth century, which marked a watershed both in philosophy and later in linguistics. The old kind of logic could be characterized as a sort of intellectual superstructure on ordinary language, characterizing paths that the mind could follow as leading either to valid or invalid conclusions; the various types of syllogisms were examples of such paths. In Strawson's words: ...the older logicians tried to reveal some of the general types of inconsistency, validity and invalidity which occurred in our ordinary use of ordinary language...

As against this partly regulative, partly descriptive function in relation to ordinary language, logic now assumed a much more independent position, where logical statements related directly to other logical statements instead of relating to each other via the use of ordinary language: ...the new formal logicians create instead the elements of a new kind of language which, although its elements have something in common with some more pervasive words of ordinary speech, would not be particularly suitable for the purposes of ordinary speech, but is supremely well adapted for the purpose of building a comprehensive and elegant system of logical rules. (Both quotations from Strawson 1952: 61).

Rationalist11 and empiricist elements are mingled in this development, according to whether the emphasis is on logic as mirroring mental capacities or on logic as mirroring the structure of the world; but because formal logic developed in the context of a period marked by the triumph of the empirical sciences, the empiricist affiliations of logic became historically more central.12


The association between logic and natural sciences crucially depended on the assumption expressed in the headline of this section, that language basically reflects the way the (real, natural) world is. Although the mental level is of course a necessary element in an account of meaning, the basic status in logic of the notions of truth and falsity by definition means that the decisive semantic relation is that between a sentence and something of which that sentence is either true or false, i.e. scientifically describable objects in the world outside the mind of the speaker. A prominent representative of this view was Stuart Mill, whose argument (A System of Logic 1843 [1970]: 14) may be taken as representative of the dominant strand of the tradition: When I say, "the sun is the cause of day", I do not mean that my idea of the sun causes or excites in me the idea of day.13

The nature of the changed role of logic must be understood in relation to the status of mathematics. From antiquity onwards, mathematical statements have been seen as model examples of truths that are not subject to the general flux of time. In Plato and Aristotle, facts about mathematical objects are standard examples of why timeless as well as temporal aspects of the world must be admitted, and this status mathematics continues to enjoy all the way up to the present day. In addition, mathematics came to serve a function that was imbued with a rapidly growing prestige: as the vehicle of description for physics, the paradigm natural science, capturing exact natural laws by exact mathematical formulae. The new, central role of logic came about as a result of reducing mathematics to basic principles, which simultaneously constituted a new kind of logic, "formal" or "mathematical" logic. With this development, mathematics had lost some of its mystery; logic not only inherited the aura but also became a much more powerful intellectual tool with a central position in the new scientific world picture. The crucial element of the change that occurred had to do with the status of logic as a form of language in itself. Its impact on philosophy is summed up in the phrase the linguistic turn (cf., e.g., the title of Rorty 1967). The philosophical discussion changed its perspective; roughly speaking, the focus shifted from the world to the way we talk about it. In relation to the tension that we discussed above in Plato between the actual messiness of language use and the ideal function of language as reflecting the underlying nature of the world, the new role of logic neatly factored out the two sides: logic seemed


poised to take over the ideal role of language, leaving ordinary language to its own messiness.

2.6. The rise and fall of logical reconstruction

2.6.1. Logical form: the quest for clarity and precision

If logic was to replace ordinary language and function as the guaranteed purveyor of ontologically validated statements, it had to satisfy a number of different requirements. Before we get to the point where these requirements begin to create problems for the project, I shall try to show what the basic attractiveness of the idea is and in what way it is tied up with the way we understand ordinary language. The object of the exercise is to replace ordinary language with something more precise. There are two kinds of precision involved, internal and external. Internal precision involves relations, including equivalence relations, between various types of expression: the rules of the system must be precisely specified. However, internal precision is not enough to enable logic to carry precise information about anything; for this we need the external dimension. There are two ways in which logical formulae must hook up with things outside the system: one is the way in which the formulae correspond to ordinary language; the other is the way in which they correspond to the world about which they carry knowledge. The first relation involves the selection of the relevant pervasive features of ordinary language: predicates and arguments, operators and quantifiers, etc. But if such features of the same old ordinary language are really carried all along the way, what is gained by setting up a new "language"? The reason why this objection does not invalidate the project is the internal precision of logical systems. The terms of ordinary language may shift imperceptibly in what they convey from situation to situation; but once a logical term is defined, it retains the same definition in all contexts: when the premises have been chosen, the rules of the system are fixed.
An obvious case is the conjunction or, which may have either an exclusive or an inclusive sense in everyday language. In logic, the corresponding connectives are defined in terms of truth tables; and that definition then regulates the use of the connective in the system. That does not mean that logic is more precise than language in the sense that it says things more precisely than one could ever do in ordinary


language. The formulae can only describe the world if they are understood as corresponding to statements about the world that can be translated into a human language. Hence, claiming that logic could say something more clearly than ordinary language would ever be able to do would be impossible in principle; logic may express just those pervasive features of ordinary language in terms of which it is defined. Thus, one could render inclusive disjunction by saying "either p or q, or both". The point is that one does not usually take the trouble. Therefore a logical statement is more precise than its most obvious equivalent in ordinary language, which is different from saying that logic is in itself more precise. In linking a logical formula to an ordinary statement, the logician is said to ascribe a certain "logical form" to the ordinary language statement. If we say that either Socrates is white, or Callias is white has the logical form p V q, we assign an interpretation to it according to which it would be understood as true if both Socrates and Callias are white — which in this case seems to fit our semantic intuitions, so that this seems a reasonable translation into logic. What is gained by assigning logical forms to statements is that we can then use the precise machinery of logic to spell out the implications of these statements. If a statement is known to exemplify a certain logical form, all the inferential rules that apply to the logical form can now be brought to bear. We cannot at a later point suddenly change the interpretation of or into exclusive disjunction, which would make the compound statement false if both constituent propositions were true. The way in which a logical formula is more precise can also be described by saying that it is more explicit than its corresponding ordinary language statement.
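The truth-table definitions of the two disjunctions discussed above can be made concrete in a short sketch. The rendering in Python is my own illustration, not anything from the logical tradition itself; the function names are invented for the purpose.

```python
from itertools import product

def inclusive_or(p, q):
    # True when at least one disjunct is true: the "V" of the text,
    # paraphrasable as "either p or q, or both".
    return p or q

def exclusive_or(p, q):
    # True when exactly one disjunct is true.
    return p != q

# Print the full truth tables; once fixed, these definitions regulate
# the use of the connectives in every context.
for p, q in product([True, False], repeat=2):
    print(p, q, inclusive_or(p, q), exclusive_or(p, q))

# The Socrates/Callias case: both disjuncts true.  The inclusive
# reading makes the compound statement true, the exclusive reading
# makes it false -- which is why the interpretation cannot be
# switched once the logical form has been assigned.
```

The point of the sketch is the one made in the text: the gain is not expressive power but fixity of interpretation across contexts.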
In specifying the logical form of a statement type, you therefore put yourself under automatic supervision: whenever you might be tempted to let the understanding of a term slide, the system informs you that according to the rules of the game, you are now licensing a whole set of predictions that at once reveals the implications of the interpretation chosen. To take one of the classic cases of the misleading nature of ordinary language: if it is assumed that nobody (because it is a pronoun) must stand for a kind of person (as some of the medieval grammarians came perilously close to saying), it would have the logical form of a term standing for an individual. Hence, the statement nobody is perfect would have the logical form P(a), that of assigning a property to an individual. We might then go on to claim that another individual (Joe, for instance) was also perfect. Symbolizing Joe

18 The world behind the word

with the constant b, the statement Joe is perfect may be assigned the logical form P(b). It now follows that the conjunction Joe and nobody are perfect — exemplifying the logical form P(a) & P(b) — must be true. Since that would clearly be inconsistent, such statements must have a different logical form. The value of developing a logic that would serve as an ideal language lies in avoiding all such nonsensical ways of talking about the world, and in providing more responsible alternatives. The role of logic is to embody ground rules that make it immediately transparent exactly what is entailed by a given scientific claim; and this virtue is not in question in the discussion below, which is concerned with the basic question of how to relate logical statements to the world they are about. Anybody who reads Alice in Wonderland will be able to share the logician's sense of struggling to escape nonsense and striving towards reason and clarity, which is the driving force in the endeavour.
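The contrast between the misleading form P(a) and the quantified reconstruction ("for all x, it is not the case that x is perfect") can be made concrete over a toy model. This is my own illustrative sketch, not the book's; the domain and predicate are invented.

```python
# A toy model: a small domain of individuals and a set of those claimed perfect.
domain = ["joe", "ann", "sue"]
perfect = {"joe"}  # suppose Joe, at least, is claimed to be perfect

def nobody_is_perfect(domain, perfect):
    """The quantified reading ∀x ¬Perfect(x): no individual in the domain is perfect."""
    return all(x not in perfect for x in domain)

# On the misleading reading, "nobody is perfect" is P(a) for some individual a,
# and the conjunction P(a) & P(b) with "Joe is perfect" = P(b) could come out true.
# On the quantified reading the two claims are visibly incompatible:
joe_is_perfect = "joe" in perfect          # P(b)
print(joe_is_perfect)                      # True on this model
print(nobody_is_perfect(domain, perfect))  # False: the quantifier ranges over Joe too

# Both claims hold together only on a model with no perfect individual at all,
# which is exactly the inconsistency the reconstruction makes explicit:
print(nobody_is_perfect(domain, set()))    # True
```
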

2.6.2. Frege and Russell: how to make ends meet

I have tried to show why formal logic has a well-deserved place in the story of human enlightenment. But the undoubted advantages of logical reconstruction only operate on the basis of the same presupposition that the classical theory of meaning rested on: that there is some basic way of making language reflect the world. On that assumption, we can say that the logical form paraphrasable as for all x, it is not the case that x is perfect is a more explicit way of saying nobody is perfect. Greater relative clarity is in everyone's interest. But the ultimate ambition of logical reconstruction was to develop a logical form that would make the link between language and the world fully transparent. The logical language would then have the property that classical universalism attributed to ordinary language, that of matching the world point by point. Because of the aboutness problem, this is a much more problematic ambition. Below I describe some of the most influential theories that were developed as part of that endeavour. The point is not to recapitulate a period in the history of philosophy, but to throw light on views that have become part of the standard way of understanding meaning in linguistics. Cross-disciplinary processes of tradition and influence have carried formalization from philosophical logic into the description of ordinary human language; I try to pave the way for an understanding of their implications for the understanding of meaning in human language. The idea
is to describe stages of the so-called linguistic turn, pointing out en route discrepancies with a perspective that looks for meaning as a property of natural human language. I begin with Frege as the central figure in the development that intertwined the history of logic with the philosophy of language. Frege was mainly concerned to develop a logic that would serve as the foundations of mathematics, but in constructing a language of pure mathematical thinking he had to develop a theory of how symbols related to the world14; and his achievements in this field are such as to give him in addition the status of founder of modern philosophy of language (cf. Searle 1971: 2). The simplest form of the extensional point of view is that language stands directly for things in the real world. In considering this assumption Frege found a number of objections, one of which was the familiar problem from Plato's Sophist: what is the difference between a false and a meaningless statement? (cf. Frege 1919). However, the locus classicus in relation to the issue of meaning is the article "Über Sinn und Bedeutung" (1892), in which the basis of the discussion is the question of identity. If statements like the Morning Star is identical with the Evening Star are interpreted according to a purely extensional view, how can the statement mean anything else than the Morning Star is the Morning Star? Yet intuitively the first is more informative than the second. How can that be? Frege solves the problem by positing two levels of meaning, as indicated in the title of the paper: Sinn ('sense') and Bedeutung ('reference'). As we have seen, there is nothing sensational in positing a mental intermediary between things in the world and expressions; Frege's contribution was to set up a version that provided a systematic account of the way in which the sense and reference of each constituent part contributed to the whole job of a logical statement, to stand for something true or false. 
Because of his success in this endeavour, the principle of compositionality is sometimes called "Frege's principle"; and he is also regarded as the father of model-theoretic semantics because his theory implied that the semantics of a statement could be accounted for by showing how the stepwise composition of a statement could be made to correspond to the stepwise composition of a model world of things that the linguistic expressions stand for. Frege sets up a clear distinction between sense as something common to all speakers and phenomena in the individual mind, reserving a special term (Vorstellung) to indicate that which is exclusively subjective. An illustration which he uses is the moon in itself (Bedeutung 'reference'), the picture of
the moon in a telescope (Sinn 'sense') and the picture on the retina of the individual (Vorstellung 'notion'). Metaphysically speaking, this raises the problem of where to locate sense. The Cartesian distinction between the external world of things and the inner world of mental process could not accommodate sense, as something common to all speakers; so it was necessary to assume a "third realm" of immaterial but shared entities, the inheritor of the role of Plato's world of Forms. One context in which Frege argues for the necessity of this domain is his argument for the "thought" (Gedanke) as constituting the sense of a sentence (1956). Since the thought is what is either true or false in a statement, the whole enterprise of logic breaks down if we have to locate thoughts in an exclusively subjective domain: truth would then cease to apply to something which speakers could be assumed to share. Although the distinction between sense and reference is formulated with regard to logical subjects, it is carried over to all parts of the logical statement. The predicate is said to have a concept (Begriff) as its reference; red has the concept 'red' as its reference (later in the extensional tradition reinterpreted in terms of set theory as corresponding to the set of red objects). The whole statement, as we have seen, is said to have a thought as its sense; and as its reference Frege posits the truth values, true or false. Instead of letting the statement correspond to the entity associated with the subject, together with the quality ascribed by the predicate, Frege put down the truth values themselves as the reference of statements — a solution which has some desirable logical consequences.
First, false statements are not without reference (which, again, would render them indistinguishable from meaningless statements); secondly, just as the truth of simple statements depends on what their constituent parts stand for, so does the truth of complex sentences depend on the Bedeutung of the simple sentences, i.e. their truth values, as expressed in truth tables. The point here is to see the beauty of the construction while remaining aware that from a linguistic point of view Frege's system is transparently counterintuitive. From an ordinary speaker's perspective, a false statement is false because the world does not correspond to what is claimed in the statement: and truth values are not part of the world of which we speak (they are, rather, properties that can be asserted of statements). No ordinary speaker, looking for that in the world to which a statement corresponds, will be satisfied to be handed a T or an F irrespective of what the sentence is about. The mathematical, rather than linguistic, flavour of this solution is evident; to take a parallel example, the exponential function
aˣ lacks a natural value for a⁰ (when a = 0) — but we can decide to let the value be 1. It works fine from the point of view of the system itself; that it does not make sense is beside the point — and Frege's solution embodies essentially the same type of manoeuvre. The problem of providing the missing referent (which is a variant of the aboutness problem) is solved arbitrarily, by letting elements of the purely logical universe stand proxy when the extensional relation that is taken as basic needs to be attached to something, and no satisfactory intuitive solution presents itself. The point in highlighting this element of Frege's system is to show that if you stipulate certain properties that the logical language must have, you may have to give up on ontological plausibility, as judged by everyday standards. In Frege's system, language and the world are perfectly matched, but an ordinary speaker would not recognize the world that Frege's sentences match up with. Russell is interesting in this context because his priorities are different from Frege's. He, too, wants an ideal language that matches up with the world; but he is more oriented towards preserving a link with the world of objects that was familiar from both a scientific and everyday perspective. Russell's main project was to find a language that would serve as a reliable vehicle for knowledge;15 the negative side of this project consisted in dissolving unreliable ways of talking, and the positive aim was to replace them with reliable ones. A secure foundation of knowledge was, in the British empiricist tradition, one built on experience, ultimately on perception; and the understanding of language must similarly be understood as perception-based. The basic role of perception is then supplemented with variously conceived versions of mental operations superimposed upon the basic sensory experience, in order to explain the derived validity of commonsense notions about the world.
From this perspective there is no principled distinction between the goals of logic and those of the empirical sciences: "Logic, I should maintain, must no more admit a unicorn than zoology can; for...logic is concerned with the real world just as truly as zoology, though with its more abstract and general features" (1920: 97). Roughly speaking, where Frege is crucially concerned with logic as language, Russell is crucially concerned with logic as a way of getting at the empirical reality behind language; Russell would never have assigned truth values as the reference of statements, as a way to get the system to work, any more than he would postulate unicorns as denotata of the word unicorn.


The result of Russell's thinking that is chiefly interesting in this context is the analysis of logical subject expressions, definite NPs designating that which the statement is about. One of the aims of his analysis of expressions of this kind was to provide a solution to the problem of the nature of existence that had beset all the followers of Aristotle (cf. p. 12 above). Where Frege was content to leave the existence of a reference for subject expressions as something taken for granted (cf. Frege 1892), Russell could not accept that, because it would allow statements to look as if they were meaningful without actually being meaningful, and hence there would be a mismatch between language and the world that Russell regarded as impermissible.16 Clearly, if logical subject expressions were to be appropriate, they must stand for real, existing objects in the world; otherwise the statements would not be about anything, as they purported to be. To avoid such conflicts, Russell proposes a logical form which sees definite descriptions as telescoped forms of more complex semantic expressions. The canonical example is the King of France is bald; in the logical form of this statement, the descriptive content which gives rise to the trouble is removed from the subject position, so that the only thing left is a variable. Instead, the descriptive content is put in the predicate position of a propositional function, yielding an expression of the form x is King of France. In order for the whole sentence to be true, the statement x is King of France would have to be true for one and only one x. The generally quoted form of Russell's reconstruction is There is one and only one King of France, and that one is bald. In this way the assumption that there is a definite object that the subject expression stands for is made explicit as a commitment in the logical form.
We can therefore approach all statements with definite descriptions in the subject position with no fear of misrepresentation (manipulatory or merely metaphysically naive), because our logical form tells us precisely what is implied in such a statement. The constituents to which Russell attaches the greatest importance in his view of meaning are "propositional functions", forms like x is King of France. In the case of non-existent beings like unicorns, Russell emphasizes (1920 [1952]: 97) how their problematical nature is made harmless when, instead of considering the phrase a unicorn by itself, one considers it as a contribution to a proposition, paraphrasable as x is a unicorn. Thus I met a unicorn is paraphrasable as I met x, and x is a unicorn. In that form it requires only that we know the concept (as Russell calls it in this context), which means that we can determine whether something in the world is a unicorn or not. As it happens, the propositional function x is a unicorn
always comes out false, just as x is King of France currently does; but that is no problem for the theory, any more than other false propositions are. Let us look at this result first from the point of view of ontology. The implications of Russell's analysis go all the way back to Aristotle's attempts to disentangle substances from their attributes. The problem was connected with the analysis of logical subject expressions; if they are reanalysed in Russell's fashion, the problematic subject expressions are replaced with variables, and the descriptive content is put in the predicative position, where it is harmless. Russell returns to this analysis time and again as a crucial victory for his principles: he sees the "subject-predicate logic, with the substance-attribute metaphysic" (Russell 1924: 330) as one of the chief misunderstandings that language has forced upon philosophy, bringing with it the notion of existence as an attribute of the subject. With the concept of propositional functions, Russell can define existence as a property of propositional functions, not of "subjects": "to say that unicorns exist is simply to say that the propositional function 'x is a unicorn' is possible" (Russell 1918: 233). To say of a particular that it exists is quite simply meaningless: "it would be absolutely impossible for it not to apply, and that is the characteristic of a mistake". If we look at the analysis from the point of view of language, the most striking result is the increased distance between ordinary language and reality. Compared with Aristotle, the analysis represents a radical dissolution of the harmony between ordinary language and ontology. But if the burden is shifted to the logical language, an obvious question is, how do we get to talk "safely" about things in the world, now that definite NPs will no longer serve?
The answer, in the beginning, appears to have been "proper names"; in Russell (1905), Scott is not reinterpreted in the same way as the author of Waverley. However, since the referents of proper names do not always rest on secure empirical ground, Russell invented the category of "logical" proper names (Russell 1918: 200) as names standing directly for particulars. But this was problematic, too. Examples of words securely linked to particulars are hard to find; words like this or that, said to denote "the object of attention", are among the few examples given, and these are only stable as long as attention is stable — hardly the prototype of an absolutely dependable relationship. What remained for Russell were general categories in predicate position which he assumed could be reliably defined in terms of perception.
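Russell's reconstruction of "the King of France is bald" — there is exactly one King of France, and that one is bald — can be evaluated mechanically over a toy model. The sketch below is my own illustration (the function name and model are invented, not from the book); it shows how the sentence comes out false rather than meaningless when the description is empty.

```python
# Russell's analysis of "the F is G":
#   there exists an x such that F(x), nothing else is F, and G(x).

def the_F_is_G(domain, F, G):
    """True iff exactly one thing in the domain is F, and that thing is G."""
    fs = [x for x in domain if F(x)]
    return len(fs) == 1 and G(fs[0])

# A toy model with no present King of France: the description is empty,
# so the uniqueness-existence claim simply fails and the sentence is false.
domain = ["a", "b"]
print(the_F_is_G(domain, lambda x: False, lambda x: True))
# False: false, not meaningless

# A toy model in which the description is satisfied:
domain2 = ["louis", "voltaire"]
print(the_F_is_G(domain2, lambda x: x == "louis", lambda x: x == "louis"))
# True: exactly one King of France, and he is bald
```

Note how the uniqueness clause also makes the sentence false when two individuals satisfy the description, which matches the "one and only one" wording of the reconstruction.
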


The problem of matching language and the world took different forms in Frege's and Russell's thinking. Frege gave himself licence to populate his world with elements from the logical universe; Russell reinterpreted talk about objects as being "really" about propositional functions (thus showing the linguistic turn in action). It remains, however, to look at the problem in general terms. Because of its inherent orientation towards the general form of problems, rather than towards concrete instances, the logical tradition itself forced a clarification of the aboutness problem.

2.6.3. The fragmentation of the logical world picture

For Russell, as indicated in the quotation above, logic was like zoology in dealing directly with the world. Gradually, however, the focus of attention shifted to seeing logic purely as language, leaving the world behind as the territory of the natural scientist. With this went an understanding of philosophy as more or less identical with logical analysis of the vocabulary of science. Granted that there is a possible gain in clarity (cf. for example Quine 1960: 270-271) in escaping from moot empirical problems, the central problem remains. The trinity of world, knowledge and language needed to hold together if the classical project were to survive in the modernized form of logical empiricism. From the point of view of meaning, we can sum up the process by which this construction lost its credibility in three main elements. To keep the logical world picture together, three things are required: (1) a solid real world for language to refer to; (2) a solid relationship between language and world in order to provide interpretations; (3) (if scientific knowledge expressed in logical terms is to be possible:) a solid relationship between mental (intensional) categories and real-world (extensional) categories. These three assumptions, which had seemed self-evident up to and including the period of unified science, all lost their foundations in the period after the Second World War. The first thing to go was the real world. The development that took place shows itself in the gradual degeneration of the way in which the concept 'world' was used in relation to logic. In its original conception, positivism as an alliance between natural science and philosophy had a historical mission in the struggle against rampant metaphysical speculation, especially in the Hegelian form. The establishment of a sanitized world picture was therefore a prime object; the continuing importance of this purpose was
made uncomfortably obvious by the rise to power of the Nazi Weltanschauung. Logical atomism was an attempt to formulate a world picture that reflected the principles of reliable scientific knowledge, replacing (roughly speaking) the large-scale airy constructs of idealist metaphysics with small hard facts produced by scientists. But in the development from Russell to Wittgenstein to Carnap, the philosophical emphasis on the world of solid facts gradually evaporated18. Although Wittgenstein's position in the Tractatus reflects the ontology of logical atomism, and although he still sees the theory as making explicit the criteria of meaningfulness for ordinary language as well as the language of science, there are already signs of a beginning retreat into a universe hedged by the premises of the logical system itself. This characteristic shows itself in the emphasis on logical syntax, on the relations between the logical symbols themselves. In Russell's thinking, the world is synonymous with reality as a whole, everything that he spontaneously and pretheoretically considers as the real world. In the Tractatus, Welt is to some extent a purely technical term, as is evident from the fact that there are things which are not in the world, but whose existence in a pretheoretical sense no-one would doubt, for instance the knowing subject (cf. 5.631). Effectively, the change of focus from world to language brought about a change in the direction of inquiry: instead of demanding of logic that it should reflect the world as we know it, the logical system was seen as delimiting the realm of what could be meaningfully talked about. Once the existence of a set of rules for talking controllably about the world seemed to be within reach, the whole issue of what the world was really like receded from view.
Unless you could capture it in the canonical way, it would not make sense even to mention it — wovon man nicht sprechen kann, darüber muss man schweigen ('whereof one cannot speak, thereof one must be silent'). The view of logical systems as something you construct (at will) in order to be able to talk about the world is carried further by Carnap. From the beginning (cf. Carnap 1934 [1967]: 56), he emphasizes the advantages of leaving ontology behind and speaking only of scientific ways of talking; at a later stage (cf. Carnap 1950), the role of guaranteeing ontological responsibility is even more marginalized. The article is addressed to those who are worried about the abstract objects that seem to grow like weeds in the logical language of so-called empiricists. He reassures them that there is no need to worry about ontological irresponsibility. There are two ways
of talking about existence or reality, one meaningful and the other senseless. In order to be able to talk meaningfully about anything, one has to construct a framework within which to talk about it. Once in possession of such a framework, a logical calculus, we can raise questions within it; but the external question of the reality of the framework itself can never be raised — because we cannot use the framework to talk about itself. When we decide to use a framework it is therefore not because we know that it is "true" or "real", but because it is found to be a practical way of dealing with things. The expediency of the framework is not enough to license metaphysical assumptions, of course — and therefore it is quite beside the point to suspect logical empiricists of believing in Platonic entities such as those that inhabit logical universes. Model-theoretic semantics explicitly replaced the ordinary world with a controlled model world specially constructed to serve as correlate to the syntactic formulae; the question of the relation between language and world was thus shifted one level down, becoming the question of how the model world related to the real world. Although the world as understood by logicians lost direct ontological significance, the project of logical reconstruction was kept afloat by assuming that the worlds that logical semanticists posited could ultimately be brought into correspondence with the real world. The realist tradition continued to be taken more or less for granted, for the good reason that as long as logical symbols could be assumed to reflect reality directly, this gave logical systems a continuing licence to develop according to their own inner laws without worrying about human mental processes.
Once the link between logical formulae and the world is seen as depending on what goes on in the head of the interpreter, truth depends on how people understand things; and the whole point of the attempt to develop a truly exact logic was to avoid the messiness of human processes of understanding. This leads us to the breakdown of the second assumption: the solidity of the relationship between the purely logical formulae and the facts that they are supposed to convey. When the belief in relating to the world via direct perception had become untenable, a different solution had to be found. In the logical tradition, it is natural to finesse the issue of precisely how the mapping is brought about and make do with a theory specifying the formal structure of the relationship in the shape of mapping rules to interpret the syntactic formulae. This is the closest we get in the modern logical picture to the issue of meaning as mediating between formal symbols and the world. Logicians rarely talk explicitly about meaning; in terms of a truth-based approach meaning is whatever provides the mapping into truth
values, and its exact ontological status is beside the point. The point is made explicitly by Lewis: "It may be disturbing that in our explication of meanings we have made arbitrary choices — for instance, of the order of coordinates in an index. Meanings are meanings — how can we choose to construct them in one way rather than another? The objection is a general objection to set-theoretic constructions ... but if it troubles you, you may prefer to say that real meanings are sui generis entities and that the constructs I call 'meanings' do duty for real meanings because there is a natural one-to-one correspondence between them and the real meanings" (Lewis 1972: 183).

The task of providing solid mapping rules that do the job of meanings in relating logical expressions to extensional states, however, was shown to be impossible by Hilary Putnam (1980). His (very technical) argument has to do with the relation between meta level and object level, which is basic to a semantics based on a "standing for" relation. It affects all versions of formal semantics whose procedure works according to Carnap's progression from purely formal symbols via interpretive mappings to the world that the formal symbols are about. The proof turns on a theorem which is closely related to Gödel's, and which implies that there can be no mapping which unambiguously fixes the relation between a purely syntactic formula and the world state which constitutes the intended model for it (cf. also p. 44 below). Roughly speaking, in order to work the mapping mechanism would have to presuppose the world that the symbols are to be anchored in — otherwise we cannot know whether our mapping from language to world is the right one. If we insist on taking only formal symbols for granted, there is no way to ever get outside a purely formal universe: no matter how assiduously we define wider worlds within which the formal worlds are embedded, the whole system remains of necessity purely formal. The structure of the argument is the same as in the story about how the earth is kept from falling down; you can temporarily answer the question by suggesting that it is supported by a giant turtle, but unless you presuppose an ultimate foundation, you are left with turtles all the way down. Put differently, the problem in Carnap's view is that the generalization from wholly controlled formal languages with prescribed interpretations to languages that are capable of conveying an open-ended series of facts cannot be made without losing control (cf. Putnam 1988: 61). Putnam's
conclusion is that if language use does not in itself provide the requisite foundation, nothing else will do the job. In other words, the logical tradition of trying to save pure meaning from the confusions of the users by keeping users out of semantic theory is fundamentally flawed. Speaker-embeddedness is not a source of nonsense, but the only possible source of sense. This leaves the third assumption, that of congruence between mental (intensional) and world (extensional) categories; but that also was eliminated by Putnam (cf. Putnam 1975b). In this case, the argument was not technical, but involved an appeal to intuition: when we make a statement about categories like elms, gold and tigers, we do not mean to make the statements dependent on our own mental categories of elm, gold and tiger. We may entirely lack knowledge of criteria by which to find out whether something is gold, elms or tigers, and yet we want our words to be about these things "out there". An extreme example would be a zoo visitor asking for directions to the new exotic attraction; the man who asks how do I get to the kakapo, please? may have no idea what it is, but he certainly wants to see a real kakapo, and that is why he uses the appropriate term for it. The appeal to natural kinds is a radicalization of Stuart Mill's point — that when we describe the sun as the cause of day we do not mean that the idea of the sun causes the idea of day. Speakers want to be understood as talking about reality, regardless of how precise their own ideas are. This point is two-edged: it undermines logical attempts to get at extensions via mentally represented criteria — but it also emphasizes that there is a basic extensional dimension in ordinary language, ruling out purely mental approaches to meaning (cf. p. 76 below). The project of an ideal logical language that would maintain the classical unity by reconstructing ordinary language so as to tie the three levels together is therefore multiply untenable.
Truth-conditional semantics was criticized from various other angles in philosophy. Based on a criticism of Frege, Dummett (cf. for instance Dummett 1976) similarly shows that truth-conditions cannot be made to anchor meaning in actual understanding. A key issue is the question of decidability; if speakers made statements in order to claim that their truth-conditions were satisfied, there would be a number of statements that could never be meaningfully made. As an example of special interest in relation to tense, statements about the future may be mentioned, especially in combination with conditionals (cf. Part Three, sections 2.3 and 4.4); since
the truth of a future statement involves events that have not yet occurred, the speaker cannot without contradiction be understood as claiming that the truth-conditions are already fulfilled.19

2.7. Logical and linguistic semantics

The surprising thing is perhaps that there should be any point in recapitulating this story in a book about ordinary language. But the accepted picture of linguistic semantics remains one that appears to reflect a state of the art in which the logical reconstruction project is not only alive and well but also valid for the analysis of ordinary human language. The most influential definition of the disciplines of syntax, semantics and pragmatics as part of linguistics is probably the one given by Carnap (1942), which in his formulation is meant to be about logical semantics. According to this definition, syntax is the discipline dealing with the symbols alone, irrespective of interpretation — i.e. the internal rules for manipulating the expressions in the calculus itself. Semantics deals with the relationship between symbols and what they stand for. Pragmatics, as the third term, deals with properties involving users and interpreters of the symbols. The directionality of inquiry tallies with the logician's perspective: first, the rules and expressions of the system itself; then the entities to which they are related; last, the context in which they are used. Carnap emphasizes that pure syntax arises as the result of an abstraction procedure (hence presupposing a substance domain from which one can abstract); and the logical point of view does not necessitate a priority of syntax over semantics (cf. Montague's remark about syntax as the "handmaiden of semantics", Montague 1970b). However, these points of principle did not affect the directionality from syntax via semantics to pragmatics, which is almost built into the logical point of view; unless one begins with a set of symbols and a syntax, one is not in logical territory at all.
The main factor that blurred the distinction between logical and linguistic semantics was the remarkable increase in power of the logical systems that were set up to provide logical forms for natural language expressions. This development brought logic to a level of sophistication where a complete logical reconstruction of ordinary language appeared to be within reach. I think it is necessary to understand this both as a great achievement and as
something reflecting quite different aims from those of a linguistic semantics. The breakthrough to a point where Russell's inherent conflict between the messiness of ordinary language and the precision of logic seemed to be surmountable by logical reconstruction was made by Montague (cf. the title "English as a formal language", Montague 1970a). Building on Frege's programme, Montague aimed to provide a step-by-step translation procedure from ordinary English syntax to states of the model world: for each syntactic step, there would be a corresponding addition to the world it is about. The distinction between calculus and language is eliminated, and natural language is subsumed under the theoretically infinite number of possible formal languages. This programme is carried on in a number of different variations. But there is no reason why linguistic semantics should feel compelled by this road of access to meaning. As far as I know, no-one has ever tried to argue explicitly that Carnap's views on semantics are a plausible way of approaching semantics in the linguist's sense of "the discipline dealing with meaning as a property of natural languages". Nevertheless the difference between the two senses of the word semantics is widely overlooked; the issue is generally understood as a matter of what type of semantic theory you believe in, rather than what object you are describing. The strength of the denotational tradition, whether in the informal or the more formal version, is such that linguists still commonly treat semantics as dealing by definition with the referential properties of utterances. This can be seen, for example, in the International Encyclopedia of Linguistics (Bright 1992). In the article on Semantics, Allan defines meaning (cf.
also Allan 1986: 139) in terms of sense and denotation; and Horn's definitions of syntax, semantics and pragmatics (in the article on Pragmatics, implicature and presupposition) are based straightforwardly on the tradition from unified science:

Syntax deals with formal relations of signs to one another; semantics with the relation of signs to what they denote; and pragmatics with the relation of signs to their users and interpreters. (Horn 1992: 260)

This is a very important source of confusion, leading most obviously to a tacit assumption that the meanings of ordinary linguistic expressions must of necessity be denotational (cf. for instance Nørgård-Sørensen (1992: 22),
who presupposes that an expression must be meaningless unless it has a meaning that can be captured in terms of truth-conditions). The elimination of the assumptions on which a complete logical formalization of language rests does not mean that logic becomes pointless; but it puts logic in its proper place as a discipline which has a particular job to do within (rather than above) a human context. The point of logical analysis remains what it was before formal logic, namely to provide explicit and precise accounts of relations between different types of propositions; and the greater power of modern logical systems enables them to handle increasingly complicated types of propositions. This is an interesting field in its own right and does not depend for its value upon its appropriateness as a theory of linguistic meaning. However, to avoid the confusion that is the result of assuming that logical and linguistic semantics is the same thing, we need to deal with the two disciplines separately. Since logic lost its ontological authority, the legitimacy of logical semantics within linguistics can only reside in the ability of logical reconstruction to throw light on language. In that perspective, it is the properties of language that determine what a good analysis is (whether logical or not) — rather than logic that determines what the good properties of language are. This means that we need to ask whether — or to what extent — natural language is inherently designed to do the job of mapping expressions to states of the world. Once that question is asked, the answer would appear to be simple. Quite independently of the non-linguistic problems of logic, there are many — obvious and non-controversial — reasons why the programme of logical semantics does not capture all of natural language semantics.
Yet the presupposition that language was constructed as a means to convey facts was not taken up for discussion when logic retreated from metaphysics; only in the 1980s did the issue begin to be raised (cf. Fillmore 1985, Lakoff 1987). The problem that not all kinds of meaning in natural languages are understandable in terms of truth-conditions has been acknowledged in footnotes and asides, from Aristotle (cf. p. 10 above) all the way through the heyday of logical semantics; but for a long time it appeared as if it did not matter. The attitude was that "speaker relative concepts must be excluded by fiat" (cf. Kempson 1975: 79), preserving only the "objective" properties within the discipline of semantics. The reason for the Teflon status of logical semantics was the supremacy of the project of knowledge — the assumption that the job of serving as a
medium of true knowledge takes automatic priority. If we insert science in the place of the ideal world order, the received attitude to meanings that deviated from strict impersonal truth remained essentially the one handed down from Plato; no-one wanted to come down on the wrong side of the great divide. Yet a primary interest in knowledge does not necessarily require us to assume that meaning is in the nature of direct mappings from symbols to the world. If knowledge can only be transported via speaker-dependent properties, we shall in fact lose our ability to understand how language manages to convey knowledge if we exclude such properties by fiat. A clear distinction between logical and linguistic semantics does not imply that truth is dropped from the agenda of linguistic semantics (which is why the discussion above should not be seen as purveying a linguist's summary of Rorty's programme for the deconstruction of knowledge, cf. Rorty 1980). If we succeed in giving a satisfactory description of meaning as a property of language, we should also be able to account for why a very salient question in relation to a prototype of linguistic utterances is the issue of truth versus falsehood. But in the developmental perspective adopted here, this issue can only be approached after we have looked at language in relation to speakers (cf. below, p. 132).

3. Meaning and cognition

3.1. Introduction

Although the primary role assigned to language was to reflect the real world, mental categories have traditionally been understood as a necessary mediating link, as expressed in the maxim vox significat res mediantibus conceptibus ('sounds denote things with concepts as intermediaries'). But the permissible role of the purely mental level was strictly circumscribed as long as classical universalism was taken for granted. If the mind is assumed ideally to reflect the world, and the world is the same for everybody, there is little scope for interesting phenomena at the specifically mental level. To the extent that mental categories operate on their own without ontological correlates, they must be wrong, bluntly speaking; and the modern scientific world picture echoed this position from a different perspective. The medieval modist grammarians, who took an interest in alternative ways in which the mind grasps reality, left a reputation of empty speculation behind them, a reputation which is not unfair if you look at their ontology; but if we assume that there is a less than direct mirroring relation between mental and ontological categories, some of their theories can be understood on a different basis. For instance, they worked out a theory of noun and verb based on the classical distinction between ens ('being') and esse ('activity'), as ways in which the mind may grasp reality — a theory that would be recognizable in present-day cognitive linguistics (cf. Harder 1990b). In the same tradition we also find clear formulations of how grammatical study of language was distinct from logic, because the primary commitment of logic was towards truth, whereas the primary commitment of grammar was towards faithful rendering of mental processes in relation to language:
...sicut logica defendit animam nostram a falso in speculativis et a malo in practicis, sic grammatica defendit virtutem nostram interpretativem ab expressione conceptus mentis incongrua in omnibus scientiis (Siger de Courtrai, quoted from Bursill-Hall 1971: 38) ['as logic defends our mind from falsehood in speculation and evil in practice, thus grammar defends our capability for understanding from expressions that are at odds with our mental conception in all sciences']


However, the ontological superego, strongly represented by British empiricism from Ockham to Russell, prevented this strand of research from developing. The attempt to replace the mental link between language and world with formal mapping procedures was designed to eliminate the role of the human mind once and for all. Today, the picture has changed completely. The interest in meaning understood in the context of the human mind is one of the major trends of research in the past decades. The reason why the word "cognitive" acquired a centre-stage position around 1960 was that it came to sum up everything that had been kept out of the scientific mainstream by the anti-mental climate, most radically represented by behaviourism. Especially in America, behaviourism had established a position as a touchstone of scientific rigour by making it seem as if there was only one way to be scientific about human capabilities: to assume that human nature and inorganic nature could both be described in terms of simple cause-effect sequences. Leaving aside the problems of this assumption with respect to inorganic processes, its most obvious error is in overlooking the possibility that human beings might be more complex "devices" than inorganic objects. Even if the same laws apply, we should not expect the same types of cause-effect sequences if the objects affected are different, any more than we would expect different types of organism to react identically when they are put under water. The cognitive revolution started when the behaviourist refusal to allow the existence of mechanisms in a human being to mediate between input and output was recognized as simply an arbitrary prejudice. Although this was not inherently a giant intellectual change, it came to seem so because it meant that the black box of the mind was declared open to scientific investigation.
The psychological impact of the downfall of behaviourism was to provide a new sense of direction for the human sciences, which had been trying to exorcise all assumptions about mental processes. The most widely accepted view of meaning today is no doubt that language in general, and meaning in particular, is a mental, cognitive phenomenon. This view is a vast improvement over anti-mental accounts of language; yet there are two basic problems with it. The first type of problem, which is the one that this section deals with, arises because a cognitive theory of language is no better than the theory of cognition that it reflects. In different forms, this type of problem affects both the "classical" and later developments in cognitive science. In relation to the former, I argue (following Searle) that the "classical" view of meaning and mental process
in cognitive science is based on a confusion between two ways of understanding computation: computation imperceptibly changes its status from being a mere descriptive tool to being a hypothesis about the specific nature of mental process. The notions of representation and information, in the sense in which they are used in the computational context, are affected by the same factor. We therefore end up with a new version of the aboutness problem: if mental states are understood via an analogy with formal, computational processes, how do mental processes manage to be meaningful? The problem that a good cognitive view of meaning presupposes a good theory of cognition also affects the alternative version of cognitive science, which is associated with the name of "cognitive linguistics" (whose general approach is much more congenial to the position I defend). There is a cross-theoretically pervasive, very broad definition of cognition which is natural if you want the word to cover all those mechanisms that behaviourists wanted to eliminate. If anything that complicates the relation between stimulus and response is understood as cognitive, there is little in human life that does not involve cognition; an extreme version of this way of thinking is pan-cognitivism, where all reference to non-cognitive entities is eliminated, thus turning behaviourism on its head. With respect to meaning, if everything else about human life is cognitive, we can infer that meaning is also cognitive; but that is not a very useful conclusion. In less radical versions, too, the wide sense of the word creates problems: the vagueness of the term makes it difficult to get at relevant distinctions in reality. Typically, the problem is that the broad sense tends to coexist with the standard informal sense of the word whereby it designates higher mental functions, with intellectual concept-based processes as the prototype.
Since we cannot be clear about what a theory of meaning as a mental phenomenon entails before we have a sufficiently constrained view of what constitutes truly mental phenomena, I suggest a rough developmental scale of skills to use as a yardstick. The scale builds up towards the domain of conceptual, intentional representations, which can be naturally understood as constituting the central domain of a cognitive semantics. Based on this attempt to differentiate, I take up some problems within cognitive linguistics resulting from a failure to keep conceptualization distinct from primary experience on the one hand and more primitive skills on the other.


After thus trying to handle the first type of problem, I bluntly introduce the second, more serious problem: that there is more to linguistic meaning than cognition, whether understood in the wide or the narrow sense. In the hope of luring the readers on to the next instalment, I postpone the final stages of the argument on conceptualization and linguistic meaning until after I have presented the approach to meaning that I advocate, namely the functional approach, in section 4.

3.2. The "classical" computational approach

The mainstream computational approach to cognition is important both because of its mistakes and its successes (which to some extent overlap). First of all, the one factor which more than anything else made it scientifically respectable to refer to mental processes was the development of the computer, which combined absolute formal explicitness and rigour with boundless possibilities for describing complex input-output relations. If computers can be programmed so as to perform intelligent tasks by mechanisms far more complicated than reflexes, there is no scientific point in denying this attribute to human beings. In addition, computers make it possible to propose very complex descriptions of phenomena without any loss of operationality: if a proposal is brought into computable form, the machine can take care of the complexity and the human observer can check its feasibility without having to keep track of every single step in the mechanism on his own. There is also a more practical reason for the conspicuous role of computation. The physical spread of computers in everyday life has made simulation a salient embodiment of scientific progress, doing for the impact of the computational paradigm essentially what the steam engine did for the impact of scientific thinking; and the movement continues with unabated momentum. Researchers from different disciplines with a common interest in computational methods of investigation, joining forces under the common denominator of "cognitive science", constitute one of the noticeable features of the present-day academic landscape. The computer revolution was an offspring of the tradition of mathematical formalization that was discussed above p. 15: the theory of computation established a rock-bottom formal basis for the reconstruction of all computable regularities. The hard-science credentials of computational studies of cognition are based on these irreducible primitives, which were
discovered by Turing (1936), embodied in the thought experiment that goes under the name of the Turing machine, and implemented in the actual machine by von Neumann. The possibility of investigating mental phenomena was thus coupled with the possibility of maintaining a strategy of maximal scientific austerity: how far can we go towards accounting for all human powers without presupposing anything other than these maximally simple mechanical operations (cf. Johnson-Laird 1988: 23-26)? In this, computational studies have an edge over verbal accounts: no matter how clearly argued, verbal accounts always involve an implicit reliance on the precision of the words used. Once you can get a computer to simulate the processes you want to describe, you can be absolutely sure that you have an account where you can specify exactly what is involved in your theory, down to the four Turing machine operations. Although the intellectual foundations are the same, there is an important difference of perspective between the traditions of formal logic and formal simulation when it comes to the relationship with the world outside the formalism: the former is based on truth, the latter on similarity of input-output relations. An element in a simulation does not have a truth value; its identity depends solely on its relationship to other elements of the computational simulation (which is then ultimately brought to bear on the real world by the programmer, or his boss). Put differently, in computational simulation the interesting part of a formal system is its internal organization, its syntax only: those properties which endow formal systems with a life of their own. The problematic relation between formal systems and the real world raises no difficulty when they are used for computational simulation. Formalization thus acquired a new context of use with somewhat different implications for the study of meaning.
To understand the implications of this development, it is necessary to look at some ways of understanding the central notion of simulation. Computational simulation of cognitive skills can have three different levels of ambition. The first, which is the relevant one in most uses of computers, is the practical level. The reason computers are marketable commodities is that they can do things for you: if a computer-based procedure can solve a problem, such as a difficult calculation, most people do not care what relation there is between the way the machine does it and the way a human being would do it. The second level involves a theoretical context, where the role of the computer is to provide an explicit and testable way of formulating
hypotheses about what goes on, for instance in the human mind. In that context it is of course not enough that the machine should be able to get from input (problem) to output (solution). There are always infinitely many ways in which a formal device may get from input to output, and if we have no specific reason to think that the machine does it in the same way as a human being, the success of the computer in itself tells us nothing about human cognition. The second, descriptive level of ambition is therefore to find a process that is structured in exactly the same way as the human process. Since it is very difficult to prove just how things go on inside the head, there tends to be a gap between the practical problem-solving power of the machines and the descriptive implications of the successful problem-solving procedures. It is tempting, therefore, to let conclusions move ahead of the evidence; setting up something that works generates more excitement than the slow and often indecisive work of sifting the evidence. A grey zone between the first and the second level is therefore heuristically inevitable, and even desirable: to have a computational simulation that works is at least to have one way in which the thing could be done. For example, Marr's simulation of visual processes (Marr 1982) is interesting, regardless of the precise extent to which it is isomorphic with actual brain processes; it shows us possible stages of the way from photons impinging on the retina towards the end product of the visual process, and serves to provoke further work which may then get us closer to the actual human processes. But many of the hopes that have been associated with the concept of artificial intelligence cannot be fully captured by the two levels that have been characterized here. 
In trying to understand cognitive processes, many researchers have looked on machine processes not as ways of getting from input to output, or as ways of describing what goes on in the head, but as processes of the same kind that human minds go through. If the computational process in itself is in focus, it is an understandable ambition to try to produce precisely what nature has produced — just as the ultimate aim in trying to produce artificial insulin is to bring about exactly what nature produces. The difference between the levels may be clear in principle, but in concrete cases it often manifests itself as a matter of degree: just how many properties of what goes on in the machine are shared by what goes on in the head? In order to be aware of the implications of computation-based theories of cognition it is therefore necessary to be extremely careful with respect to the nature of computation as a tool and computation as a process
in its own right, and as such different from other processes that occur in the world. The first apparently intelligent task that computers could be made to solve reflected the intellectual affinity between formal computation and formal mathematics: the machine succeeded in proving theorems out of Principia Mathematica. As discussed above, logic has no inherent affiliation with either rationalism or empiricism, and only the intellectual climate associated with the success of the empirical sciences made empiricism a more natural partner. With the cognitive revolution, formal systems acquired a new role that tended more to the rationalist side: as a theory of how the process of thinking worked. The dual nature of logical systems as instruments for describing a body of facts and as systems with a life of their own was both a great heuristic advantage for the use of formal models in studying mental processes and also a source of considerable confusion: formal calculi served at the same time as ways of modelling phenomena in the world (regardless of their nature), and as the most prestigious hypothesis about the nature of mental processes as such. This made it possible to avoid confronting a problem that the rejection of behaviourism had not solved for science. The fact that people could now talk about things going on in the mind without fear of the science police did not in itself eliminate the problem of the mysterious nature of mental phenomena. But to the other wonders of computational investigation came an added attraction: making it possible to talk about mental process without going into the issue of what it was. Computational simulation could serve both as a descriptive tool and as a substitute for a theory about the nature of mind.
On the one hand, simulation proceeds in exactly the same manner regardless of the nature of the simulated object, eliminating the mystery from mental processes as objects of simulation; on the other, by showing how a completely deterministic device can do things hitherto accomplished only by human beings, it gave hard-nosed scientists the hope of accommodating mental processes in a world picture of the familiar mechanical kind. In one sense, therefore, the computational revolution in cognitive research provides no theory at all about the specific nature of cognitive phenomena: the words cognition and cognitive are simply used about whatever it is that goes on in people's heads, mediating between input and output in the mental tasks that are investigated in terms of computational simulation. But in another sense, a very precise and specific picture of cognitive processes
came to be taken for granted: cognition was assumed to be computational, i.e. consisting in Turing-machine-like symbol-manipulation. I shall follow this dual way of thinking about computation in some detail because of the confusion it has generated, not least in relation to computational simulation of language. The assumption that mental processes are by nature computational was made explicit in the form of the so-called "physical symbols hypothesis" (cf. Newell—Simon 1976: 116). The physical symbols hypothesis implies that intelligence (whether artificial or natural) operates by performing operations on symbols, regardless of their symbol value. Symbols can be processed by Turing machines; (linguistic) symbols are used to articulate human thought; if all we are able to do can be shown to be equivalent with Turing machine operations, we have a fascinating parallel between mental and computational processes, with language in its usual mediating role. Underpinning this assumption, Putnam's article Minds and machines (1960) articulated a position known as (computational) functionalism. It was innovative in demonstrating why simple physicalism was obsolete: whatever creates thought, it is not the atoms and molecules in themselves, but something about the way in which they interact. From that point of view, it is natural to propose that thinking consists in going through a series of steps that can be characterized formally, regardless of the ontological status of these states:

The functional organization (problem solving, thinking) of the human being or machine can be described in terms of the sequences of mental or logical states respectively (and the accompanying verbalizations), without reference to the nature of the 'physical realization' of these states. (Putnam 1960 [1975]: 373)

If we read the quotation as claiming only that we can study the structure of mental processes without knowing everything about the way they are made, there can be no objection to the claim. The problem comes in only because of the subtly different and more fascinating reading under which thinking is constituted solely by the manner of organization in itself, so that thinking is entirely substance-free — which means that thinking consists in exactly that which could conceivably be identical in the mental process and the computational simulation. For the brain to go through a thinking process is the same thing as for a computer to go through a program, following one rule after another until the problem is solved.
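The picture of thinking as rule-following can be made concrete with a minimal Turing-machine-style simulator (an illustrative sketch; the rule table and helper names are invented here, not taken from Turing's paper). The point to notice is that the machine is nothing but a transition table over uninterpreted marks: it follows one rule after another until it halts, and the "meaning" of the marks plays no role in the process.

```python
# A minimal Turing machine: a transition table over uninterpreted symbols.
# This one happens to increment a binary number, but the machine itself
# "knows" nothing of numbers: it only rewrites marks and moves, rule by rule.
def run(tape, rules, state='start', pos=0, blank='_', halt='halt'):
    tape = dict(enumerate(tape))
    while state != halt:
        symbol = tape.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    return ''.join(tape[i] for i in sorted(tape)).strip(blank)

# Rules for binary increment, with the head starting at the rightmost digit:
# flip trailing 1s to 0 and carry left until a 0 (or a blank) absorbs the carry.
rules = {
    ('start', '1'): ('0', 'L', 'start'),   # carry past a 1
    ('start', '0'): ('1', 'R', 'halt'),    # absorb the carry
    ('start', '_'): ('1', 'R', 'halt'),    # grow the number by one digit
}

print(run('1011', rules, pos=3))  # 1100
```

Running it on the tape `1011` with the head at the rightmost cell yields `1100`, i.e. binary increment, even though nothing in the rule table refers to numbers; that substance-free character is precisely what the computational picture of thinking trades on.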

As part of this picture, another assumption was necessary in order to circumvent the basic empirical problem of access to mental organization (how do we know if it is actually the same thing that happens?). This difficulty made it necessary to assume that the ultimate test for the presence of intelligence in a "device" must be purely in terms of input-output relations, as described by Turing (1950): if we reach the point where we are unable to tell whether we are talking to a man or a machine (the "Turing test"), we have to conclude that there is intelligence behind the answers. The logic behind the Turing test reveals the element of continuity from behaviourism: ultimately overt behaviour is criterial, and the actual ontology of mental states remains hidden from view.

3.3. Language and meaning in the "classical" view

In relation to language, the most important offspring of the pattern of thinking that has been described here is no doubt generative grammar. It developed independently of computational research in artificial intelligence, but was based on the same intellectual foundations: the development of formal systems to simulate the object of description, in this case human language. The problem of the inaccessibility of the mental box in which linguistic rules were supposed to operate was solved by a linguistic version of the Turing test, embodied in the notion of "grammaticality": the basic aim was to set up a device whose potential output (of grammatical constructions) was the same as for the human speaker. There was also a similarity in the type of formalization involved: both computers and generative grammars worked by following rules specifying possible transitions between states within the system; and both treated the question of the interpretations of the formal symbols as marginal. Generative grammar will be discussed in more detail in Part Two on "Structure"; at this point the issue is the view of cognition, including meaning, which it instantiates. The debate on the nature of meaning has mostly taken place within the wider framework of artificial intelligence, even though some of the prominent authors have been generativists; Chomsky's well-known antipathy towards semantics prevented the issue from occupying a central position within the movement itself. Fodor, who formed a link between generative grammar and the wider tradition of cognitive science, pointed
out (1975) that if generative structures are to be credible as theories of language, they must be translatable into an internal system of representation capable of carrying the content of the utterances. Reviving a classical idea, Fodor suggests that the most natural theory, given existing assumptions about reasoning and information processing, is that the internal representational device takes the form of a "language of thought" consisting in predicates and syntactic structures closely analogous to those found in natural languages, and functioning in terms of its own internal logic, which can be simulated by a formal system. This idea can be seen as the simplest way of giving a coherent answer to the question of how the physical symbols hypothesis and generative grammar could be part of the same theory of mind. One can in principle choose to describe syntax irrespective of meaning, as in generative grammar; but in order to be part of a viable theory of human understanding, a syntactic theory must have a relation to whatever it is in the mind that does carry meaning, and Fodor's mental language is meant to do that job. Offering arguments to show that decomposition into primitives is implausible, Fodor suggests a mental lexicon roughly of the same size as natural languages, and relates predicates through meaning postulates rather than by shared primitives. The mental language, however, cannot be identical to any specific human language, since the ability to form and use mental representations cuts across language differences. Generative grammar came under attack from various other formal and computational approaches to language. From the artificial intelligence side, Schank et al. (1977: 7) dismissed as irrelevant Chomskyan syntactic rules that had nothing to do with the ability to interpret text structures, and therefore had nothing to do with understanding.
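The contrast between decomposition and meaning postulates can be illustrated with a toy example (the predicates and rules below are invented for illustration and are not Fodor's own formalism): the lexicon keeps bachelor as an unanalysed predicate, and separate postulates license one-step inferences from it.

```python
# Meaning postulates as one-step inference rules over unanalysed predicates
# (a toy illustration; the predicates and rules are invented here).
# Decomposition would *replace* 'bachelor' by primitives; a meaning-postulate
# lexicon keeps 'bachelor' whole and merely licenses inferences from it.
POSTULATES = {
    'bachelor': ['man', 'unmarried'],   # bachelor(x) -> man(x) & unmarried(x)
    'man':      ['human', 'male'],      # man(x) -> human(x) & male(x)
}

def entailments(predicate):
    """Close a predicate under the meaning postulates."""
    derived, frontier = set(), [predicate]
    while frontier:
        p = frontier.pop()
        for q in POSTULATES.get(p, []):
            if q not in derived:
                derived.add(q)
                frontier.append(q)
    return derived

print(sorted(entailments('bachelor')))  # ['human', 'male', 'man', 'unmarried']
```

Decomposition would instead replace bachelor by the primitives themselves; here the same inferences are available while the lexical item stays atomic, which is the shape of Fodor's alternative.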
From the logical semantic side, Montague (1970b) and Lewis (1972) claimed that what passed as semantics in the generative camp was really not semantics at all. Regardless of whether one believes in decomposition or not, the translation of a syntactic structure into a structure involving semantic markers is only translation into another language, "markerese", getting us no closer to the meaning than a translation into Latin1. Lewis formulates his objection based on the logical view of semantics as equalling truth conditions (Lewis writes (1972: 169) about "... the first thing about the meaning of an English sentence: namely, the conditions under which it would be true"). For reasons described in section 2.6.3., Lewis's answer has lost its force, but his question remains: how do we get beyond syntax?
This objection touches on the basic limitation of the formal pattern of description, as discussed above p. 27, the problem of how to transcend the formal system itself. A formal device, whether theoretical or implemented in a computer, can translate one representation into another, but from the point of view of the machine neither of them means anything. Operations in formal systems are content-blind: there is no symbol value in addition to the physical item itself — so the term "physical symbols hypothesis" is really a misnomer. How does Fodor see them as getting meaning? Fodor's view of meaning is basically constrained by what meaning must do, or rather by what it must at all costs not do, namely interfere with the purely syntactically driven mechanisms of the language of thought. Whatever meanings are, they must be epiphenomenal in relation to the symbol tokens that enter into the supposedly computational processes of the mind. His answer therefore maintains a knife's edge balance: on the one hand, the symbols must be interpreted; on the other hand, there must be a complete one-to-one causal relationship between the formal symbols themselves and their interpretations, or the computational process would lose semantic information. Fodor therefore argues that his mental symbols must stand to their interpretations in basically the same relation as the perceived object to the perceptual representation. In his undaunted manner he tries (Fodor 1987: 111-127) to show how the tokening (the "mental occurrence") of a mentalese symbol could conceivably be associated causally with what it stands for, even with words like proton: every tokening of the mentalese symbol proton would then be causally related to a tokening of the extension of the symbol. This is the theory that Edelman (1992: 152) paraphrases by saying that in order for the mind to work according to computational principles, the world must work like a piece of Turing machine tape on the human subject. 
Jackendoff defends generative mentalism even more radically. Following the Chomskyan distinction between language as a social, external phenomenon (E-language) and language as purely internal representations (cf. Chomsky 1986: 20-22), he sees the task solely as that of describing the latter — the "I-language". From the point of view of the I-language, semantic or "conceptual" structures simply constitute one level of analysis just as the phonological structures do. There is no way of going beyond them; the buck stops there (Jackendoff 1987a: 122; 1990: 15). Unlike Fodor, who takes up the challenge of making the symbols stand for
something, Jackendoff is happy to have mental "representations" which avowedly do not represent anything outside themselves. Expectations that the cognitive revolution as developed in the formally oriented tradition would get us closer to the nature of meaning must therefore result in disappointment: there is no room for meaning as a special property of language in formal simulation. Putnam's critique2 affects both truth-conditional theories based on mappings between expressions and world states and theories that put the formal language inside the mind. Putnam is one of the two most prominent critics of cognitive science; the other is Searle (cf. Searle 1980, 1983, 1984, 1992). Putnam's criticism takes its point of departure in the formal structures and specifies what you cannot do with them; Searle's is grounded in a philosophy of mind which begins from the opposite point of view, with a point of departure in the specific nature of mental phenomena. From both points of view, the missing item is the relation between the formal structures and what they purport to be about. Searle begins from the assumption that mental phenomena exist and are perfectly legitimate objects of description in their own right, and that they, like everything else, are part of the material world. The natural question is how to cash out such an unproblematic acceptance of mental phenomena in terms of precise scientific description — which was what motivated the strictly formal stance of the "classical" approach. Searle's attitude is that we simply do not know enough to answer it now — but in the absence of that knowledge, denying what we do in fact know (that mental phenomena are real and have certain constitutive features) will get us no closer to a satisfactory scientific description of them.
Searle's opponents within the "hard-nosed" camp have had problems grasping his position, because it is part of the premises of the computational view of cognition that the computer analogy is the only way to be scientific about mental phenomena. Searle (1980) attacks the notion of "strong AI", the pattern of thinking that involves the third, ontological level of ambition for computer simulation, according to which computers can be said to have mental states; specifically it is oriented against the claim that a mental process is ontologically identical with the process of going through a computer program. The key element is not a logical argument, but a suggestive thought experiment, that of the "Chinese room". The central element is the idea of a man sitting inside a box, receiving one set of Chinese characters and handing out another set of Chinese characters; and the idea is that without knowing Chinese, he
gradually becomes more and more adept at doing this in ways that appear to make sense. The setup corresponds to the Turing test, and is designed to prove that the satisfaction of the customers has nothing whatever to do with whether the system inside the box (human being or machine, as the case may be) associates any mental states with the symbols it produces. The ensuing discussion involved several issues which were difficult to keep distinct.3 The essential point was: no matter whether the trick of producing real mental states may consist in certain forms of structural complexity or not (as claimed by the "strong AI" camp), complexity does not in itself constitute mental life. The claim that mental processes are computational is not disproved, but simply unsupported — because the claim does not even refer to the constitutive properties of mental states. The central point involves the relation between simulation and simulated object. One more analogy may be helpful in illustrating this point: A celebrated example of computational simulation is provided by war games in which computers are used to simulate the effects of nuclear attacks, producing those maps showing temperatures, levels of contamination, incidence of radiation deaths, leukaemia, etc., that we post-war children have followed with horrified fascination. The point of the exercise is quite simple, and everybody understands it spontaneously: if the calculations performed by the machine rest on correct assumptions, there is an analogical relation between the output of the simulation and the state of the world after the attack. A perfect war game would be one that got everything right and thus embodied a complete scientific description of the course of events; but there would still be a clear difference between the war game and the war. It gets more difficult when what is simulated is not external events, but mental events.
Because we do not know what the physical nature of a mental event is, we do not have the tangible distinction between simulation and reality that is the whole point in playing off the war game rather than fighting the nuclear war. But since mental events (whatever else they are) are real events occurring in human beings, accompanied by conscious states, including feelings like confidence, confusion and irritation, they are exactly as distinct from the computer simulation as the nuclear war is. The essential argument against strong AI is independent of whether one believes that computers could in principle have mental life, and turns on what is involved in the notion of simulation: since programmers are trying to produce something that is analogous to the real thing, rather than identical
with it, there is every reason to suppose that this is what actually happens. If they produced real mental life it would be like producing a real nuclear war instead of the war game: very surprising as well as dangerous. In order to supply a machine with real mental processes by any other route than the most astounding piece of (bad) luck we would have to know more about the type of mechanisms that produce real mental states than we do now. Even if the answer is purely in terms of structural complexity, as claimed by the adherents of strong AI, we would have to know what sort of structural complexity before we could successfully go for it. That is why the key argument of Dennett and associates — that we would have to abandon the scientific world picture if we regard the mechanisms that bring about real mental life as unreproducible — is true, but irrelevant to the issue of whether computers have mental life. There is at least one form of circumstantial evidence which has been given surprisingly little attention. One thing we do know about mental life is that the only place in which most people agree that mental life exists is in "devices" which possess physical, organic life. A good place to start for someone who wanted to create mental life would therefore be to begin by creating biological life. Instead of starting at the difficult end, the obvious course would be to begin with something simple and create, e.g., an amoeba (cf. also Popper 1972). Computational experiments in "artificial life", while raising the same issue, are easier to keep distinct from the real thing; a real animal would be expected to live outside a computer process. If attempts to create artificial intelligence are seen as part of that endeavour, the proper perspective may be easier to maintain.4

3.4. Intentionality, mental content and rules

A key element in Searle's characterization of what constitutes mental phenomena is the concept of Intentionality (cf. Searle 1983), spelled with a capital I which I shall ignore most of the time. Intentionality is the relation that allows the mind to hook on to the world5. The presence of something inside the mind which relates to but is not identical to what is "out there" is the central enigmatic property of mental phenomena which most of the discussions about artificial intelligence hover around. Searle invokes everyday phenomenological evidence to demonstrate some basic facts about the role and nature of intentionality in mental processes.


The intentionality of perceptual impressions shows itself in the fact that when we get a visual impression of an apple tree, we see the impression not as an independent construct, but as brought about by a real tree out there. Simple causation, as suggested by Fodor, is not enough to give a phenomenologically adequate description of perception: there is a self-referential aspect in the process, which must be present if we are to understand the experience as genuinely perceptual. To spell it out, the apple tree not only causes a visual impression, it causes the perceiver to understand the visual impression as being caused by the tree. Therefore what we see (in the sense in which we as speakers understand seeing) is the tree, not merely the visual impression. Unless we describe perception as involving this extra complication, we cannot understand perception as embodying the possibility of being wrong; perception would merely be a fact about the perceiver (like being in pain, cf. below p. 69). Gopnik (1993) argues that this assumption is not built into perception as such, but is only a story that we invent after the fact — because experiments show that it is only at age four that children are able to account for their own intentional relation to the perceived world. Her argument, however, is vitiated by one of the weaknesses in the computational theory of mind: the conflation between theory and actual interpretive practice. Gopnik assumes that if you cannot produce the theory ("Yes, I do understand my perception as representing the real world"), you cannot have the actual thing (the interpretation whereby perception is understood as representing the real world). But this is patently absurd, as shown by her own account: what children assume until age four is that they see the world directly, a "Gibsonian" view of perception as Gopnik herself calls it (cf. the discussion of Gibson below, p. 50).
This shows clearly that children understand seeing in just the way Searle describes; what happens at age four is that they become able to see themselves as having just that assumption, rather than just having it in a naive and unreflected form. More generally, the whole idea of representation only arises within an Intentional context (for a thoroughgoing criticism of the influential tradition of suppressing this fact, cf. Sinha 1988). This goes for all types of representations, including pictorial. Although one shoe may be identical to another, it would not make sense to say that it represents it, unless it was used for that purpose by a subject with intentional powers (cf. also Searle 1986).


The necessary two-sidedness of intentional states, intrinsically involving both a state-of-the-world and a state-of-the-mind (related by a representational relation), has made it difficult to talk about them, since parsimony appears to suggest that one of the two sides could be eliminated. The classic ambition of scientifically oriented research into mental phenomena, as most bluntly represented by behaviourism, has been to eliminate the mental side, trying to operate with states of the external world only; I return to the opposite movement below. However, if the two-sidedness is an ontological fact, a meta-criterion of simplicity obviously has no business trying to eliminate it. Searle (1992) expands his earlier argument about computation and cognition into a general attack on standard assumptions in cognitive science, showing that just as the belief that going through mental processes is identical to going through computer programs is mistaken, so is the hardware equivalent, i.e. the claim that the brain is a computer6. One central claim is that all the types of "rules" that enter into computational descriptions and are attributed to the mind by computational cognitive scientists are either "as-if" rules which describe what are really neurophysiological processes, or descriptions of potentially conscious intentional processes. There is no third possibility: the notion of genuinely mental processes that are in principle inaccessible to consciousness is incoherent. To give something the predicate "mental", thus factoring it out from neurophysiology, is empty except to the extent it can be made conscious. This means that the structures that Chomsky ascribes as "tacit knowledge" to the human speaker as well as the rules that go into Marr's process of going from photons to visual image must be understood either as metaphorical descriptions of neurophysiological processes or as nonexistent.
When we describe input-output mechanisms that are inaccessible to consciousness, we are thus in precisely the same position no matter whether they happen inside or outside a human subject. We can observe the actual input-output phenomena and we can provide a description (or simulation) to capture the regularities involved — but there is no third entity in the shape of a rule. The law of gravity is a rule in the same sense that Marr's laws for converting visual input are rules — and if the visual system is following a rule, so is the meteor that is increasing its velocity as it approaches the ground.


3.5. Intentionality and information

The rise of the word information to the status of an epochal keyword (the "information society", etc.) has also been reflected in its increasing importance as a basic term in semantics. Since the concept has suffered extensive damage from the impact of the computational revolution, and the problem is cognate with those of mainstream cognitive science, the issue naturally belongs in this context. If we regard intentionality as a defining feature of real mental representations, it may serve as the basis of a similarly constrained concept of information. On this view, information is always information-about and thus involves a two-level element: a representation and something else which it represents. To the extent information exists outside the mind, it must therefore have what Searle calls derived intentionality: it depends on the presence of the intentional powers of an observer's mind in order to call forth its aboutness-potential. An attractive feature of such a narrow definition is that it provides the foundation of a distinction between information and reality, based on the canonical role-relationship: information is about something else, whereas reality is the something else which we may have information about. Of course, I hasten to add, information is part of the world, like everything else — but in seeing something as information we see it as ontologically secondary in relation to the entity which it is about. The historical reason for the need to make this elementary distinction explicitly is the mathematical notion of information which was one of the sources of the computational revolution. Shannon (cf. Shannon 1938; Gardner 1985: 144) showed how on-off switches in a machine could be understood as corresponding to true-false decisions in a logical system (as later implemented in the digital computer); this laid the groundwork for information being measured in terms of "number of distinctions".
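The measure by "number of distinctions" can be made concrete with a small sketch (an illustration of the idea, not Shannon's own formulation): singling out one of N equally likely alternatives costs log2 N binary on-off decisions, and the figure is the same whatever the alternatives happen to be about.

```python
import math

def bits_needed(alternatives: int) -> int:
    """Binary (on-off) distinctions needed to single out one of
    `alternatives` equally likely possibilities."""
    return math.ceil(math.log2(alternatives))

# The measure abstracts entirely from aboutness: 26 letters, 26 chairs
# and 26 war plans all cost the same five yes/no decisions to identify.
print(bits_needed(2))    # 1
print(bits_needed(26))   # 5
print(bits_needed(256))  # 8
```

The figure says how much "information space" a choice occupies, and nothing about what the choice represents.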
Mathematically speaking, the "aboutness" of the information could be abstracted from, leaving only a figure indicating the number of distinctions necessary to represent it. This makes excellent technical sense with telephone lines as well as computers, as we all know when the line is busy or we run out of disk space: what we need is information space, which we can then fill up as we please. However, it leads to the potential confusion that information can exist in disembodied and abstract form, without being conceived as having an aboutness relation to anything, so that
structurally complex objects are seen as containing more "information" than structurally simple objects. In the absence of an aboutness relation, information is cut loose from its ontological moorings and can be found with equal plausibility anywhere, for example in the physical environment. The problem with this way of talking, if it is to be anything more than a metaphor, can be illustrated by the following naive question: if we say that there is a certain amount of information in an object, such as a wooden chair — where is the information in relation to the wood? Inside, outside, or on the surface? There is no way to answer this question without getting an ontological duplication: the chair is not only a chair, but also information about a chair — every time we change something in the chair (by making a dent in it, for instance), we simultaneously change the information that we get by looking. It is the same potential misunderstanding that is involved when we talk about computation as involving information processing: only the human observer converts the output to real information. The duplication is innocent in all cases where we are using an object in the world to tell us something — because then the object has a dual role: it is both itself and a message decoded by us. A tree with moss on one side is, to a boy scout, a message about where north is. The problem arises only if we abstract from the observer and leave the information out there anyway. What corresponds to the mathematical notion of information in the environment is structural complexity: there is lots to say about a complicated object, and little to say about a simple object. But in the absence of an observer, we only have the intrinsic properties of the object, not the information. Putting information into the external world has been used for two diametrically opposite purposes. 
In one version the point is to argue against mental representations: if information is everywhere, there is nothing special about the mind. The other is to argue that mental processes occur not only in the mind, but also outside it. The first case is what we find in Gibson's theory of perception. In his battle against mental representations, Gibson (cf. Gibson 1966) argued that we need not postulate inner mechanisms that create mental constructs, because there is much more information in the environment than usually supposed. The valid part of Gibson's point can be restated by saying that there are more sources of information in the environment than sometimes claimed; Gibson showed that we need not actually invent as much as had been assumed. But patterns of light and shade, until used as input by the observer, are just patterns of light and shade, not information. A clue in a
criminal investigation is only a clue in the context of a detective: in itself, it is just a bloodstain, a piece of cigar ash, or whatever. The presence of a cognitive subject means that something new comes into being, namely the relationship between observer and object — and the information state of the observer vis-a-vis the bloodstain is part of that relationship. And, pace Gibson, no matter how little our mental machinery adds on its own to what is already there, the cognitive subject is able to create something which is obviously not in the environment, namely a stored representation which can (with luck) be called upon when needed, as for instance when we succeed in remembering a telephone number instead of having to look it up. One of the places where one often finds the second version, i.e. mental processes seen as existing in the environment, is in relation to biology. The most celebrated example of this is the genetic "code", which was discussed in terms of messages from the very beginning and is still widely understood in those terms: the double helix is seen as transmitting information about the future specimen. On ordinary assumptions about what is mental, this is a potentially confusing usage: the genes are not in the information business — they are links in a causal chain. The miracle of genetics is about how the genes manage to trigger a causal process that eventuates in a fully grown specimen with the right specifications. It is much more complicated than the process whereby fire causes smoke, but it is a cause-effect process of the same kind: only when there is a cognitive subject present does smoke turn into information about a fire; and only the geneticist can see a DNA chain as a message. Sometimes one finds the view that all biological processes are mental information processes, for instance in Gregory Bateson's synthesis between mind and nature (cf. Bateson 1980). 
Bateson's identification of evolutionary and mental processes, however, depends on a purely structural definition of mind, whereby all phenomena with the right kind of complexity qualify as "mental", echoing from a completely different perspective the mistake of strong AI (cf. the review by Steven Rose 1980). But in ascribing mental status to natural processes, he exemplifies the opposite trend to reductionism, which wants to eliminate the mental level. A radical version is advocated by Hoffmeyer (1993), who explicitly argues that the individual cell has subject-like properties (such as that of making choices). If you accept this view, information is indeed found outside the mind in the traditional sense of the word. But if you do not, information should be restricted to occurring in the presence of an observer. Otherwise you get
either the ontological duplication described above or what is even worse, a world consisting only of information: situation semantics, locating situations equally in sentences and in the world, depends on this way of thinking (cf. Barwise—Perry 1983). In such a universe, there is no distinction between events and information about events — all that happens to a human being throughout his life is that he receives messages. Yet eating when you are hungry is not the same thing as representing to yourself that you are hungry and in the process of eating, although the former tends to bring about the latter. The difference is in the subjective experience rather than the informative content, which makes it invisible from an information-processing point of view; but only a very highly educated person would ever attempt to deny that it exists.

3.6. The second cognitive revolution: cognitive linguistics and connectionism

The basic problem with the classical computational approach was that it smuggled a number of assumptions associated with logical positivism into the supposedly new mental framework, instead of facing the challenge of actual mental phenomena. Within linguistics, the insistence on taking actual mental phenomena seriously is represented by the movement known as cognitive linguistics. Cognitive linguistics must be understood as a return to the tradition of seeing language in its natural context of human understanding, after structuralism, behaviourism and logical positivism had diverted attention elsewhere for a while (cf. also Geeraerts 1992: 266); in this longer perspective they were just short-lived aberrations. But this glacial age served to provide the interest in language as an aspect of the world of human experience with a powerful sense of purpose; what had seemed obvious for two thousand years now needed to be explicitly and systematically defended. With the declining prestige of positivism, and the legitimacy of interest in mental phenomena for their own sake, the stage was therefore set for what is sometimes called the second cognitive revolution. The crucial new element in cognitive linguistics is to see mental organization as constituting a realm of reality in its own right: mental structures are part of human reality, too. The human process of understanding as an important part of life, and as an important part of the
context of language, receives a focus that was impossible until the primacy of the pursuit of objective knowledge was challenged head-on. The word "cognitive" is part of the battleground, since "classical" artificial intelligence also sees itself as a (or rather the) theory of cognitive process. The motivation for the word in the collocation "cognitive linguistics" is to indicate the orientation towards a larger context of inquiry than the inherited "formalist" and "objectivist" paradigm in linguistics (cf. Langacker 1987, Lakoff 1987, Fillmore 1985, Talmy 1985), which manifests itself not only in mainstream generative grammar, but in a whole range of formal grammars, some of which are more directly descended from the logical paradigm. Cognitive linguistics rejects the modular and syntactic understanding of language assumed in generative grammar (cf. Fodor 1983), replacing it with an understanding where language is directly embedded within the totality of human experience. Reflecting this embeddedness, one central concern in cognitive linguistics has been the role of figurative language, exemplifying most clearly the presence in language of phenomena that go against the grain of precise formalization of quasi-physical laws. Pervasive metaphors (spatial dimensions, force dynamics, etc.) and the role of the observer's perspective (mountains "climb", and the same road both "descends" and "rises" at the same time, depending on the perspective) illustrate the central point: it is necessary to see linguistic elements in the context of human experience, grounded in a human body, in order to understand the type of systematicity that is found in human language, whether we are talking about grammaticalization processes, clause structure, or the structure of meanings. The human basis of category structure, with "prototype effects" and "basic-level concepts" (cf.
Rosch—Mervis 1975) bears on the same point, replacing the all-or-nothing category structure of logic-based truth-conditional semantics. This general perspective also involves a rejection of the distinction between linguistic meaning and conceptualization in an encyclopedic sense: linguistic meaning becomes an integral part of the whole world of understanding through which we grasp experience. The notion "frame" is a salient example of how this perspective is incorporated in linguistic semantics (cf. Fillmore 1985). All words carry with them a set of contexts which they are designed to describe: Sunday invokes a calendrical cycle and a religious tradition as part of its meaning potential, while designating a particular part of the everyday routine — and this aspect must be part of
semantic theory because it is necessary to describe what the word contributes to the understanding of linguistic expressions. At the same time as cognitive linguistics began to make itself felt, a new development changed the scene within the computational approach to cognition. Although these two developments are quite independent, an alliance is emerging between them: just as classical artificial intelligence and formal linguistics are natural allies, so the architecture known as parallel distributed processing or "connectionism" has a natural affinity with the experiential approach to cognition (cf. Harder—Togeby 1993 for a comparison between the two architectures in relation to human understanding). The key feature of connectionist networks is that they establish those input-output relations that constitute computational simulation not by means of a predefined algorithm, but by means of a process of "learning". The operative system consists in a network of electronic processors — input, intervening hidden units and output units — which function so that all configurations of input features are ultimately assigned to one of the available output categories. Each input unit "fires" when the feature that it is sensitive to is present in the input (for example one for hooves, one for black stripes, one for white stripes, one for mane, one for tail, etc.), and through a pattern of weights and connections the combined impulses will cause output categories to "fire" (a candidate output category in this case being "zebra"). After each cycle, the programme will modify the weights of the processors that supported the wrong reply; and through this process of modification the system's performance can be gradually improved until, ideally, all the relevant types of input trigger the desired output category. The gradualist nature of this mechanism is reflected in the fact that deviations in the input do not give rise to total failure, as in an algorithm-based procedure.
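The training cycle just described can be sketched in miniature. The sketch below is a deliberately simplified, hypothetical illustration: a single output unit with no hidden layer (a perceptron rather than a full connectionist network), with invented feature names and toy training examples.

```python
# One "zebra" output unit; after every wrong reply the weights are nudged,
# strengthening or weakening exactly the connections that were active.
FEATURES = ["hooves", "black_stripes", "white_stripes", "mane", "tail"]

def fires(weights, inputs, threshold=2):
    """The output unit 'fires' when the weighted input sum exceeds the threshold."""
    return sum(w * x for w, x in zip(weights, inputs)) > threshold

def train(examples, epochs=20):
    weights = [0] * len(FEATURES)
    for _ in range(epochs):
        for inputs, is_zebra in examples:
            error = is_zebra - fires(weights, inputs)  # -1, 0 or +1
            weights = [w + error * x for w, x in zip(weights, inputs)]
    return weights

examples = [
    ([1, 1, 1, 1, 1], 1),  # zebra: all five features present
    ([1, 0, 0, 1, 1], 0),  # horse: hooves, mane and tail, but no stripes
    ([0, 1, 1, 0, 1], 0),  # striped non-zebra: stripes and a tail, no hooves
]
trained = train(examples)
print(fires(trained, [1, 1, 1, 1, 1]))  # True: the zebra pattern now fires
print(fires(trained, [1, 0, 0, 1, 1]))  # False: the horse pattern does not
```

The gradualist property shows up even in this toy version: an input missing one feature lands near the threshold rather than crashing the procedure, as a failed rule match would.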
In such a process, there is no logic-like level, and no inventory of symbols corresponding to the putative language-of-thought — only units receiving and transmitting electronic signals. If we disregard the ontological level — on the level of strong AI both camps would confuse electronics with sense-making — the problem of meaning assumes a quite new form in connectionist simulation of understanding. The "classical" architecture sees cognition as rule-following and symbol-manipulation: mental processes consist in performing operations on symbol tokens according to specified procedures. In connectionist modelling, input elements are not understood as corresponding to mental symbols, but rather to features of an object that is
being cognitively processed; thus the gateway into the cognitive system is not a preexisting item in a mental combinatorial syntax, but a sensitization of the cognitive apparatus to various features in the environment. Accordingly, Fodor's problem of how to link mental symbols with external events does not arise. Instead, word meanings are captured by spread-out networks that naturally reflect the diversity of circumstances words may be associated with. The role of linguistic categories as an integral part of a wider pattern of human experience is thus shared ground between connectionism and cognitive linguistics. With a common emphasis on motivation instead of arbitrariness, basis in content rather than formal structure, and groundedness in experience instead of objectivism, together they constitute a powerful alternative to the mainstream approach to language and cognition, occasionally referred to as the "second cognitive revolution".

3.7. Pan-cognitivism: turning behaviourism on its head

But even if you start out by taking mental phenomena as your point of departure, there may be problems in providing a theory of meaning based on the specific nature of mental content. Behaviourism was the blunt way of failing to capture the specific contribution of mental processes, by denying their existence; "classical" artificial intelligence was a more subtle version, moving the mechanism of computation from the machine into the theory of mind; an even more elusive way, however, is to give mental, cognitive processes such a central role in accounting for everything that nonmental phenomena more or less fade out of the picture. The extreme version of this stance is a position which turns behaviourism on its head: the taboo of the mind is replaced by the taboo of the external world.7 I shall begin by trying to show how pan-cognitivism may come to appear natural or even compelling. The common focus of the cognitive revolution is on what goes on in the mind of the speaker. The element that is characteristic of cognitive linguistics is an emphasis, spurred on by the polemical opposition to formalism and objectivism, on the vital role of mental processes in imposing a specifically human perspective on what is conceptualized and encoded. Instead of mental content being a suspect entity in a scientific world picture, it is only in virtue of contentful mental processes that we
have a world picture at all. Therefore mental categorization is at work wherever we look, and in order to understand the picture we have of anything, we have to look at the mental processes that generate the picture. There is still nothing untrue in this; the problem is what consequences you draw from it. In order to understand the crossroads that opens up here, we may return to a problem that was mentioned above, p. 48: the temptation to eliminate one of the two levels in terms of which intentional states are defined. Objectivists wanted to minimize or eliminate the role of the mental level, and the cognitive linguist may want, in return, to eliminate the world state from the picture. And the cognitive linguist can do it with better arguments than the objectivist: the intentional state as such exists only within the subject itself, so why introduce complications by bringing in the external world? This description does not apply to all cognitive linguists (cf. Sinha 1993a for an explicit discussion of how cognitive linguistics should incorporate Putnam's insights). But it is typical enough to be worth taking up as a general issue; in addition to those who would have difficulties in extricating themselves from such a position, there are some cognitive linguists who explicitly subscribe to something very like it. A case in point is Nuyts (1992) — a good example from my point of view, because he shares the basic function-oriented approach that I am presenting in this book, thus illustrating that pancognitivism is not necessarily associated with views that are very far from mine. Nuyts defines cognition in its relation to language (1992: 5) as the "infrastructure" which makes language possible. 
In contrast to Chomsky's motivation for making a distinction between E-language and I-language, Nuyts does not select the internalist perspective because of a lack of interest in pragmatics; he wants to capture everything about language, including pragmatics, within that perspective. Contrasting his views with those of Verschueren (1987), who wants to distinguish between an individual and a social dimension of language, Nuyts claims that the individual can be assumed to internalize all relevant social aspects of language: "group traits have no existence outside the cognitive systems of the individuals constituting the group" (1992: 12); so cognition is "the all-encompassing notion for language research" (1992: 13). It is true that human interaction as we know it would not exist without mental content, and group interaction necessarily works via mental states in the individual. But since we understand mental states as being Intentional, i.e. about something, they are not just facts about the individual, but must be understood as being about the world (cf. the
discussion of Putnam on natural kinds above p. 28). Therefore one is necessarily misunderstanding their nature by calling them "infrastructure". A constitutive feature of mental states is their role as links with the external world, including other human subjects. But — the standard argument goes — this is something you cannot know: what you really deal with is just the product of your own cognitive system. The difficulty in getting out of this perspective once you have adopted it lies in the argument that turns on the "circle of beliefs"; just as you can never get outside a purely formal system by purely formal means, you cannot get outside the mental world by purely mental means. However, this argument is so strong that it is an argument against a number of things that are necessary if you want to have any theory about language at all, including a cognitive theory. The problem is the following: if all you can talk about is what is inside your own head, all sciences are theories of mental content: astronomy is not about stars, and economics is not about the wealth of nations — they only deal with the way the scientist conceives of these things. If that is the case, the logical step would be to stop trying to say anything about what is out there, whether planets, or poverty, or language as something that people use to talk to each other. It is the same corner that both empiricists and rationalists are liable to paint themselves into: the Cartesian subject who realizes that he can never be certain about what is really out there and knows only that he exists because of his mental process, and the empiricist (cf. note 12, p. 506) who bases his world picture on "direct" sensory images and ultimately is forced to doubt the existence of a world outside them.
In order to get out of the trap, we have to make an admission that has been very unpalatable for the knowledge-based tradition: that there is no transcendent guarantee of the validity of any piece of knowledge — a realization that followed from the breakdown of unified science discussed above p. 28. Various conclusions may be drawn from it, a more radical postmodern position being associated with the name of Richard Rorty (cf. Rorty 1980). For those who are less than enthusiastic about the "anything goes" position that is the polar opposite to strict objectivist realism, the philosophical problem posed by the lack of an ultimate foundation for knowledge is that there is no obvious one-and-only way to salvage those things we want to keep from the realist tradition, such as the assumption that a statement is true when there is something out there which makes it true. Those who feel the "strong realist undertow" described by Dummett
(cf. Crispin Wright 1986: 38) are faced with the necessity to articulate more precisely the way in which the real world impinges on the status of what we say and believe; and this has proved no easy task. However, it should be emphasized that even if it is unclear what is the precise nature of a Putnam-style "internal" or "pragmatic" realism, it is perfectly legitimate to believe that the truth lies in that direction. Such a position is wholly in harmony with the general emphasis within cognitive linguistics on the human embeddedness of cognition (cf. for instance Lakoff's use of Putnam in Lakoff 1987: 261); so even for cognitive semantics, there is no compulsion to assume that our assumptions are only a matter of infrastructure. I shall therefore try to make a few suggestive remarks about where pancognitivism creates an unwarranted narrowing of the perspective. What it amounts to is really the inverse of the "no access" argument that formed the foundation of behaviourism: if we begin by taking the mental world for granted, the black box is outside us. If we choose to reject this argument, because we prefer to assume that there is an external world, the assumption that group interaction has no existence outside the cognitive system of an individual comes across as a fairly bad hypothesis, reducible to the more general form that nothing that affects my life has any existence outside my mental states. The only way to see group interaction as being inside the head is by putting everything else inside the head as well: Taj Mahal, the Second World War, and — perhaps more pertinently — soccer football: only sophisticated cognitive skills make football matches possible, and a game is only brought about by the mental states of the individual player. 
Yet to see the cognitive process of an individual as the all-encompassing context would mean that you studied football best by putting electrodes to the head of an individual player, trying to track down his mental processes; keeping an eye on the ball would be a waste of time. The grounds we have for rejecting the "no access" argument involve the relation between theory and practice, as further discussed below in connection with the Background (p. 70). The way human practice is, we have no choice but to take for granted the existence of an external world to which we have access. If it made sense to see this from an evolutionary perspective, one might describe the status of this "assumption" by saying that it has proved so fruitful that it has become hardwired in our system. If anyone denies it, he will still continue to rely on it (in the sense defined below p. 71-72) every second of his life, making such a position necessarily hypocritical. As Wittgenstein once said, surely those who deny
the existence of the material world do not wish to deny that underneath my trousers I wear underpants. We can only understand the mental representations inside our heads properly if we see them as oriented towards things outside our heads, thus maintaining the commonsense distinction between what is in here (thoughts, memories, feelings) and what is out there (other people, including their utterances as well as their underpants). As scientists, we base our descriptions on the assumption that the real world is not only somewhere, but everywhere, and that our own knowledge of it is extremely partial — but that it is possible to come to know more about selected portions of reality if you "put your mind to it". The conclusion of this is that rather than see cognition as the all-encompassing context for language research, it is preferable for us to choose a perspective in terms of which part of the job of describing cognition and language consists in describing the embeddedness of cognitive phenomena in their noncognitive context.

3.8. Continuity and differentiation: a delicate balance

The pancognitivist stance is an extreme version of the problem. A less radical version is the failure to describe the specific place of higher mental skills within the continuum of human bodily functions. To see the nature of the problem, it is instructive to look at the programmatic article by Mark Johnson (1992) on the distinctive contribution of cognitive semantics to philosophy. I shall allow myself a lengthy quotation, taken from a section on embodiment:

We have a very basic and very complex range of bodily experiences of force, and we thus develop a complex bodily understanding of the varieties of forceful interaction (e.g. compulsion, attraction, speeding up, slowing down, etc.) occurring at this experiential, non-linguistic level. This is the embodiment of meaning, which provides a semantic basis for linguistic forms, meaning and the structure of speech acts (see Sweetser 1990, whose analysis owes a great deal to Talmy, e.g. Talmy 1988). It is for this reason that many cognitive linguists tend to see syntax, semantics and pragmatics as intricately interwoven, since they are all grounded in and highly dependent on the nature of our bodily experience. The form and content of meaning at this bodily level are, for the most part, nonpropositional and beneath the level of conscious awareness. That
there should be such a deep level of significance makes good sense from an evolutionary point of view. Long before we sat around the fire talking, we were solving all sorts of incredibly complex cognitive tasks involving highly developed bodily skills, and we communicated non-linguistically. There is plenty of (non-linguistic) meaning in this kind of embodied activity, and language emerges out of this more primordial form of social interaction, just as it does for young pre-linguistic children (Paul Churchland 1979). Language is but one form of skilful coping activity that is crucial for our survival and understanding. (Johnson 1992: 348)

The thrust of the passage is to argue for the continuity of linguistic and cognitive phenomena with bodily human experience, and as such it makes a valuable point; below I try to say some things about the process whereby language "emerges out of" the prelinguistic phenomena he enumerates. The problem is how to differentiate between the different stages and elements in the continuum. An example is when Johnson talks about "complex bodily understanding"; I assume that he must mean a type of understanding that is bodily in a different sense than the trivial one in which conceptual understanding is also an achievement by a part of the body, namely the brain. But in that case how does our bodily "understanding" differ from the bodily experience itself? Similarly, it is not easy to attach a clear sense to the notions of "form and content of meaning...beneath the level of conscious awareness", if meaning is to be distinct from the activity itself. Reading this passage, it is very difficult to find a difference between experience, cognition and meaning — for instance between the bodily experience of force, the concept of force and the meaning of the word force. The same problem can be found in relation to connectionist simulation. The assumption about concepts (including linguistic meanings) which is more or less implicit in the descriptive practices of connectionism is that concepts are (sets of) configurations of input features that come under a certain (output) label: the concept 'cup' is defined by the (fuzzy) range of properties that makes the speaker categorize something as a cup. If a network places input configurations in the right categories, the system has mastered the "concept". This picture of concepts, however, does not capture the specifically mental nature of concepts. The input-output relations of neural networks can be used in mechanical production processes, just like algorithmic procedures — for example to pick out defective specimens on a conveyor
belt. And as opposed to the physical symbols theory, in the neural networks there is nothing that looks even remotely like conceptual thinking. In the general presentation of connectionism (cf. Rumelhart et al. 1986), the illustration example of "the microstructure of cognition" is the ability to reach out and push a knob hidden at an awkward angle behind some other objects. This complicated motor ability is not obviously different in kind from motor abilities of the kind that enables even invertebrate predators to recognize and capture their prey. Similarly, connectionist simulation of pattern recognition shows that configurations of neuron firings, unmediated by any interpretative activity, are good candidates for accounting for some types of perception-related achievements; instead of a rule we have a neural setup trained to produce input-output relations that mainstream cognitive science might capture in terms of a rule but are really just neurology. As opponents have been quick to notice (cf. Fodor—Pylyshyn 1988), connectionist pattern recognition tasks of this kind look very like a plausible imitation of stimulus-response reactions, thus providing a sophisticated implementation of behaviourist semantics. On that view, cognition would appear to be at work also in the type of processes which are model examples of behaviourist conditioning, such as Pavlov's dogs. However, in Rumelhart et al. there is an indignant protest against the identification of behaviourism and connectionism: after all, connectionism is explicitly concerned with the problem of internal representation and mental processing, whereas the radical behaviorist explicitly denies the scientific utility and even the validity of the consideration of these constructs. (Rumelhart et al. 1986: 120)

The wording neatly places connectionism on the common ground of the cognitive breakthrough: the assumption that there are things going on in the black box. But it does not provide a definition of cognition that makes it distinct from neural activity. We have to be careful here, since there are two potential representational relations involved: one between the computer process and the simulated object, and one between a mental representation and that which it represents. Obviously the first of these must be present, otherwise we have just a machine process and no theory. But on the supposedly mental level there is no representation involved; since connectionist "representations" are explicitly nonsymbolic, there is nothing to suggest that they represent anything at all. It is much more natural to interpret connectionist networks
as simulating neural rather than conceptual processes; the central concept of "neuron" would appear to encourage this way of thinking about it. In light of this, let us return to Mark Johnson's description of meaning as embedded in (mostly unconscious) bodily processes. Both from Mark Johnson's perspective and from the connectionist perspective, there is no clear distinction between the mental and the nonmental realm; I note in passing that Mark Johnson quotes Paul Churchland, the champion of eliminative materialism, in support of his position. To take an example which most people would wish to exclude from the mental realm: If the link between the bell and the gastric processes in Pavlov's dogs is cognitive, why not the link between eating and digestion? Searle (1992: 81) quotes an account of digestion, illustrating the blurring of the distinction: The gastro-intestinal tract is a highly intelligent organ that senses not only the presence of food in the lumen but also its chemical composition, quantity, viscosity and adjusts the rate of propulsion and mixing by producing appropriate patterns of contractions. Due to its highly developed decision making ability, the gut wall comprised of the smooth muscle layers, the neuronal structures and paracrine-endocrine cells is often called the gut brain.

The italics are Searle's and highlight the quasi-mental terminology; his point is that if we take it seriously without the "quasi", all bodily functions are mental. To sum up: connectionism captures only the lower-level aspects of what goes on in higher-level processes. Within the mental realm it captures the level of non-conceptual content, such as auditory orientation in space, cf. Cussins (1990) — skills which do not depend on a conceptual mediation of coping strategies. In relation to concepts, it captures only the regularities that support the concept, not the concept itself. More generally, as a computational architecture connectionism shares a problem with classical artificial intelligence: computational simulation of input-output relations can provide no definition of what mental phenomena are. Only because classical artificial intelligence transplanted its own computational architecture into the mind did it provide something that looked like a theory of higher mental processes. I am not claiming, however, that connectionists or cognitive linguists want to see all human skills as equally mental. I merely suggest that we need a more precise way of talking about mental skills. In the next section I shall attempt to invoke some useful distinctions, and then discuss some types of mistakes which they will help to avoid.


3.9. Cognition: the differentiation and interrelation of skills

A natural way to differentiate while maintaining continuity is to look at the issue in a developmental perspective. If we can show how some skills build on others, we can see how higher-level skills "emerge out of" lower-level skills, simultaneously capturing the embeddedness and the distinctiveness of mental processes. The discussion is mainly a discussion of principle, but certain empirical findings are referred to in order to illustrate and support the general drift of the argument. We are interested in at least the following three basic elements. First, the ability to react differentially to stimuli from the environment, such as when a frog reacts to bugs but not breadcrumbs. Second, the existence of subjective experience in the sense of Nagel (1974): the question of whether there is something which it "feels like" to be a bat as opposed to being for instance a computer. Third, the place of intentionality, which was taken above as criterial for (higher) mental life. Of these three elements, the first has received by far the greatest amount of research interest, for the simple reason that it lends itself to experimental investigation rather more easily than the two other elements. But again, scientific accessibility does not translate directly into ontological importance. As already argued, differential reaction as such is nondistinctive for the existence of mental life, and an investigation with that as the main parameter thus obscures rather than clarifies the nature of mental life. Indeed, even physical life is not required; thermostats are made to react differentially to stimuli, but that does not make them either living or mental entities. Instead I take the existence of subjective experience, untestable as it is, as a basic criterion of mental life. No matter how subtle a device is, unless it has experiences, it does not qualify.
Even if we cannot know when it occurs, we can say something about the presuppositional links between subjective experience and other phenomena. For instance, learning requires physical life, but is theoretically feasible without subjective experience, not to mention intentionality; it presupposes retention of traces of earlier events, but these traces may be purely neural. Habituation and dishabituation are not inherently dependent on any form of awareness: reflex reactions can be dulled by overstimulation without any mental mediation, like an elastic band that is stretched too many times.
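The point that habituation requires no mental mediation can be rendered as a purely mechanical toy model (the decay factor is an invented parameter, not an empirical value):

```python
# Habituation as a mechanical process: each repetition of the same
# stimulus weakens the reflex, with no representation involved anywhere.

def habituate(repetitions, strength=1.0, decay=0.7):
    """Return the reflex response strength left after repeated stimulation."""
    for _ in range(repetitions):
        strength *= decay  # the reflex is dulled a little by each exposure
    return strength

fresh = habituate(0)   # full reflex
dulled = habituate(5)  # markedly weakened
```

Nothing in the loop records or interprets the stimulus; the weakening is as blind as that of the stretched elastic band.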


On the other hand, experience must occur before intentionality becomes possible; unless there is an awareness of what it feels like to be, we have nowhere to put an awareness of what is out there. A distinction which is applicable at the preintentional stage is the distinction between pleasure and pain as experiential qualia; neither of these is intentional, since feeling good or feeling pain is not inherently directed towards any aspect of the external world. At the preintentional stage we may also imagine the existence of implicit memory: subjective experiences are recorded and may be recalled when repeated. Perception, cf. above p. 47, involves intentionality: unless the sensory impression is received as a message about what is out there, we do not understand it as perception.8 With perception, memory in the explicit sense becomes possible. Stored mental representations can have different levels of sophistication; the most basic kind is generally supposed to be "iconic" in relation to the original perceptual input: although the immediate sensory image fades away, it leaves a trace of some kind for future reference. At a higher level of sophistication we get "secondary representations" that arise by generalizing over iconic traces; thus, an ability to operate with the notion 'colour' cannot be understood simply on the basis of sensory images, but must be the result of an abstraction over (traces of) the different colours subsumed by the general concept. Much of the discussion of conceptual ability has turned on the property of abstractness, reflecting the identification between intellectual sophistication and powers of abstraction. However, higher conceptual constructs are not necessarily to be understood simply as abstractions based on perceptual categories, as pointed out by Mandler (cf. 1988: 119; 1992: 587). All too often in the literature, Mandler argues, categorial perception (i.e.
the ability to put percepts into discrete categories) is confused with conceptualization proper. According to Mandler and McDonough (1993: 316), the evidence suggests that there may be a "double dissociation" between perceptual and conceptual categorization in early cognitive development. We therefore need to be explicit about the element of discontinuity between perception and conceptualization. The tendency to go unproblematically from perception to conceptualization, as in the history of basic-level concepts, once again ignores the role of subjective experience; perceptually dissimilar events may "feel" the same way and thus represent the "same thing" in a sense that has nothing to do with perception. An example is when infants treat "kitchen utensils" as being one kind of thing, in spite of the lack of perceptual similarity. But since perceptual elements
may be necessary for some concepts, conceptualization as we know it stands on the shoulders of both experience and perception. As we move towards higher mental life, a central parameter is the degree of stimulus control. We can posit a hypothetical starting point at which all that the animal has is a stream-of-consciousness of input from the ongoing situation, filtered by its receptive apparatus, where impressions as well as reactions are stimulus-controlled: the environment plays upon the organism as on an instrument. With increasing memory and processing ability the situation changes; memory enables the subject to store intentional representations over and above the immediate situation, and a richer array of processing options modifies stimulus-control, because the stimulus can be evaluated — and differentially treated — in the context of different information states. At some point along this development intentional action becomes possible. Intentional action requires Intentionality with a capital I: only an organism that can represent a desired state of the world to itself can act upon the external world with the intention of bringing about that state. It also requires enough freedom from stimulus control to allow that distinction between involuntary and voluntary action which — whatever the God's eye truth may be about the matter — underlies the understanding of human action that our sense of self and personhood is based on (cf. Harder-Togeby 1993: 489). The logic of this developmental scale would be compatible with the central mechanism of "re-entry" suggested by Edelman (1992: 85), according to which cognitive sophistication arises as a result of adding additional loops of neuronal processing to the output levels of previous levels of processing. The greater the capacity of the system to give cognitive output an extra round of processing, the more independence we have of initial input and the finer the tuning of the ultimate output. 
Among the very advanced skills is planning for the future, in which the mind can envisage future needs and act upon them instead of being motivated by the present actual needs (cf. Gulz 1991). Thinking comes in naturally in this context, since the ability to think about a problem requires that the subject should be able to marshal intentional representations at the required level of abstraction and consider alternative orderings of them in a way that will be helpful in putting the real world in order at a later stage — to be decided as part of the thinking process.


A few examples from the animal kingdom may be illustrative. It is intuitively attractive to follow Johnson-Laird (1988: 23) in using the bacterium Escherichia coli to represent stage zero of the scale: the mechanism whereby it propels itself towards food and away from toxic substances is prewired and inflexible, it has no memory — and probably no subjective awareness either. Stage one begins when learning becomes possible (cf. Ulbaek 1989), and thus includes invertebrates sensitive to behaviourist conditioning. Already at this level the word "cognitive" begins to be used, for example in relation to the connectionist interest in the Aplysia sea slug; its fascination lies in the fact that it has so few neurons (around 10,000) that a complete model of its "cognitive" system is quantitatively feasible. Although pigeons have played a prominent role in behaviourist theories, it turns out that the training of pigeons involves more complex mechanisms than the behaviourists believed (cf. Herrnstein 1985: 141), reaching the level of categorial perception: the animal is able not only to learn to react differentially to different kinds of stimuli, but to do it in a way that presupposes mental traces. The next step up can be defined as involving abstract, "secondary" mental representations. Tasks depending on abstraction are at the limits of the capacity of Herrnstein's pigeons, but easily within the capacity of another bird, namely Alex, an African grey parrot, as demonstrated by its ability to use the word colour correctly, cf. above. Belying stereotypical prejudices about the linguistic ability of parrots, Alex answers questions like what's the same about these two things? with (correct) replies like colour or shape, neither of which can be understood as iconic in relation to any perceptual environmental stimulus (cf. Pepperberg 1991: 170).
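The structure of Alex's task can be rendered schematically (the attribute names and percepts are invented, and nothing here is meant as a model of the parrot's actual cognition): the correct reply names a dimension of comparison, not a concrete sensory value present in either stimulus.

```python
# Schematic version of the "what's the same?" task: the correct reply is
# the name of an abstract attribute, not a value found in either percept.

def same_about(percept_a, percept_b):
    """Return the attributes on which two percepts agree."""
    return [attr for attr in percept_a
            if attr in percept_b and percept_a[attr] == percept_b[attr]]

key_a = {"colour": "red", "shape": "square"}
key_b = {"colour": "red", "shape": "round"}
same_about(key_a, key_b)  # the shared dimension is 'colour'
```

The reply "colour" is not iconic with respect to either stimulus — neither object is coloured 'colour' — which is the sense in which the answer presupposes a secondary representation.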
At the level where representations are used as input to completely stimulus-independent action, as in intentional deception, unambiguously documented nonhuman examples begin to be scarce; possibly apes are capable of this (cf. Mitchell 1986, Cheney—Seyfarth 1994). Similarly, planning for the future may be limited to humans (Gärdenfors 1992: 74); I return to the importance for language of stimulus-independent hierarchical planning below p. 269. Before I go on to use this scale of differentiation to distinguish between various phenomena that I claim to be interestingly different, let me emphasize that in relation to any given organism, such as a human being, the distinctions suggested are only to be understood as analytical, i.e. as picking out types of similarities between that organism and others. In relation to an actual organism, there is no suggestion that (first) there are
watertight compartments between different skill types, or (secondly) that the actual specimen has a range of "lower" and a range of "higher" abilities. In particular, I would like to emphasize the closely interwoven nature of different types of skill that are bound up with both a "knowing-how" and a "knowing-that" aspect (cf. Ryle 1949). For language, it is particularly important that a simplistic modularity theory and a simplistic "high-low" scale are not presupposed; I shall therefore try to illustrate why such assumptions are unwarranted. I have claimed that motor routines do not work via explicit mental representations; there is nothing about them that makes it necessary to assume that they depend on higher-level intentional representations — they may, for instance, have the status of reflex reactions. However, whenever they are bound up with intentional action, there is a necessary element of Intentional representation with a capital I, because intentions by definition presuppose mental anticipation of a desired outcome: the bodily routines that are involved in walking are triggered by the explicitly representable goal we have in walking, for instance posting a letter. Even with processes that cannot be intentionally triggered, like digestion, we rely on them in deciding (intentionally) what to eat. With the perceptual input systems, the relationship is the reverse. The visual process is like the processes involved in triggering bodily routines such as digestion in being pure neurology; but unlike digestion, it becomes mental at the output end. In order to function smoothly, the whole system has to be well integrated. One of the most salient facts about human cognition is the fact that the division of labour between representational and purely neural processes is not fixed once and for all.
An example of the flexibility with which the two systems cooperate is the process of learning a bodily skill, as in Searle's example of learning how to ski (Searle 1983: 150). When such a process occurs as a result of explicit instruction, the skill goes through a process of change from conscious activity, depending on internal representations of how to do it, to unconscious, automated routines. A traditional account is that in learning how to ski one depends, in the beginning, on conscious observance of rules like bend the knees, lean forward, etc. As one gets more skilled, these rules are commonly said to become "internalized" and "unconscious". Following Searle, we may prefer to say that the rules become irrelevant, because the body takes over. This process can be understood as one in which the skier builds up a set of motor routines, plausibly describable in terms of the training of a neural network, which
permits him to forget about explicit representations of what to do once the skill is wired into the system. Here, the "higher" form of the skill is naturally understood as the one in which it has become a pure motor routine, like walking.9 We shall return to some implications of this view below in relation to meaning (p. 115).

3.10. Problems with the word "conceptual" as used in cognitive linguistics

Accepting that there is much that the differentiation suggested above does not capture, let us now try to see what it does capture with respect to the main issue — the question of whether we can get a satisfactory account of linguistic meaning by describing it as a "cognitive" phenomenon. The sense of the word that includes the neuronal organization of sea slugs is useful when exploring the continuity of cognitive development; however, to distinguish it from the narrower range that most people associate with the term, the word neuro-cognitive will be used in the following when this sense is intended. Saying that meaning is cognitive in the neuro-cognitive sense would therefore only capture what it shares with motor ability and digestion, and so hardly tell us all we want to know about linguistic meaning. The interesting sense in which meaning is cognitive, not only according to cognitive linguistics but also to the tradition, is the much narrower sense associated with the word conceptualization: higher-order, intentional representations, building on both perceptual input and subjective experiential qualia. Although everybody agrees about the prototypical core of conceptualization, it is often unclear how much of the neuro-cognitive fringe it can justifiably be used to cover. This problem arises, for instance, when the word conceptualization is applied to the expression side of language. To take a case in point, Langacker (1987a: 78) suggests that phonological distinctions should be described as conceptual rather than physical, based on a compelling argument that the interesting properties of speech sounds are not physical but involve the way language users process sounds. The usage is natural in relation to the neuro-cognitive understanding, whereby perceptual recognition and motor execution procedures are regarded as cognitive in the same sense as conceptual processes.
According to that criterion, expression
elements are indeed conceptual in the same sense as the elements on the content side. With laudable consistency, Langacker (1987a: 79) therefore includes the phonological side of the sign as part of the semantic side. The problem with this broad usage is, as neatly demonstrated by Langacker, that we cannot get at the distinctive nature of the content side of language as opposed to the expression side: both [dog] and 'dog' are characterized in terms of concepts and cognitive process. The scale suggested above, however, makes it possible to differentiate on this point. Learning to understand and speak a language requires learning to master the relevant set of phonological distinctions, so that one can distinguish duck, dock, dug and dog (to take an example that is difficult for a Danish learner). This can naturally be regarded as a (neuro)cognitive achievement — but (in terms of reception) it belongs at the level of categorial perception rather than conceptualization (corresponding to motor routines in terms of production). Only the content side, 'dog' rather than [dog], is conceptual in the precise sense of the word. The concept '[d]' is the content side of this symbol (a phonetic concept), not the expression side. But the distinction between conceptualization and perceptual discrimination is not the only place where the borderline between conceptualization and neighbouring phenomena tends to get fuzzy. The weaker version of panconceptualism arises when the distinctions between properties of representations and properties of that which they represent are blurred, and too much is located at the intentional, conceptual level. The risk is especially salient in the relation between the conceptual and the nonconceptual part of human life: the representational and the nonrepresentational part of our inner world.
Chief among the nonrepresentational aspects is primary experience itself, the canonical example being the experience of pain: pain does not refer to anything, it just occurs. I now turn to the risk of conflation between conceptualization and primary experience. Since subjective experience is a problematic concept, it is necessary to begin by saying something more about its role in relation to those phenomena that were described above as higher-order in relation to primary experience. The only relation between cognitive representations and noncognitive states that has traditionally been the subject of discussion is the representational relation itself. The human subject as the statutory owner of such representations has generally been left out of consideration for the same
reason as purely mental processes: only knowledge is really interesting, and the subject herself is just a source of error. At least from Kierkegaard onwards, however, the phenomenon of human existence itself has been on the philosophical agenda: even the most perfect picture conceivable of total objective reality as it evolves through time is flawed if it leaves the human subject out of account. But it has not been easy to forge a generally accepted link between the knowledge-oriented and the existential approach to human life. In direct relation to cognition, this problem is discussed by Searle under the label Background, which designates the nonrepresentational states that form the context of the explicitly representational states that are accessible to the subject. Background is that part of reality which is internal to the human subject; it is not part of what you know, but of who you are. An important part of the background consists of the element usually denoted by the word desire — the circumstance that there is something which makes it worth our while to act at all. We may also talk of values or purposes (Edelman's terms, Edelman 1992: 162) — everything that makes one get up in the morning is part of the background, or had better be if life is to be felt as worth living. Clearly, if the zest for life ceases to be part of the background, the lack of it cannot be remedied by the establishment of any explicit representational states. Values are based on the qualia of primary experience, not on representational states. These qualia are bound up with all sorts of intentional relations; the qualia associated with eating spinach, running, or thinking affect the way we orient ourselves towards spinach, running and thinking — but the experiential qualia themselves (including pain) cannot be reduced to the representational states that they may be associated with.
In relation to the distinction between representational and nonrepresentational states, the temporal dimension of human life has a crucial role. One of Kierkegaard's key observations was that life must be understood backwards but must be lived forwards.10 In this, he sums up a salient discrepancy between the existential and the knowledge-oriented perspective. Although the process view of language and cognition is generally accepted on the level of principle, the element of temporal progression that is inseparable from the quality of lived experience is almost by definition alien to the mode of being of conceptual representations. Secondary representations are constructs distilled out of the flow of events, and their special usefulness is in their relative stability over time.

The most important consequence of this is that everything in life that is more basic than conceptual representations is oriented in relation to the ongoing flow of life. This directionality is reflected in a very important aspect of the way background states relate to explicit representations — the relation of presupposition. Every movement forward in time presupposes a point of departure (which is already there) and aims at a point of arrival (which has yet to be reached). This relation cannot really be captured within a world of stable, simultaneous representations, but belongs in a world of temporally anchored, intentional action. In relation to actions, background states of the subject constitute the point of departure (cf. Poulsen 1980, who talks about "practical ground" as opposed to representational content) — which contrasts with the movement forwards on which our attention is focused. The role of the nonrepresentational states of the subject, as described here, is simply to be there as part of the situation, just as external factors are. This means that our attitude to Background states is not a propositional attitude (cf. Searle 1983, 1992). When we step into a room, our prewired motor routines depend on the circumstances under which they were formed: we learned to walk in a world where the ground generally remains solid when we step on it. Therefore when an intention to cross the floor triggers our walking subroutine, in some sense we expect that the floor will not give way under our feet — but it is not plausible that we have a mental representation of the floor staying in place under our feet, when we set foot on it: our practice is simply designed so as to fit a situation in which the ground generally remains solid under our feet. We can express this relationship by saying that we "presuppose" that the floor will stay in place.
But the term presupposition is also used about cases where something has taken the step from Background to cognitive content, so that it is in fact a propositional attitude — in drawing up a plan of action, we may list those conditions that we take for granted as being fulfilled, for instance. To have a name for the relationship that obtains between our actions and those conditions that they depend on for their successful execution but which are not explicit cognitive content, I shall use the term reliance: in taking a step, we are "relying" on the solidity of the ground. Reliance will be crucial in several places below (e.g. pp. 76-78, 96, 115-17). We can become aware of aspects of the background as we can become aware of features of the external world: we realize that we are in love, or
that walking presupposes solid ground, just as we realize that it is snowing. Both inner and outer reality can therefore be explicitly represented when this is called for; as recognized from Heidegger onwards, cognitive representations of things we had previously just relied on tend to arise when our practice breaks down. When our reliances are mismatched with the external world, we produce representational content as soon as we are able, although it may take more resources than we command at the moment: if the ground does indeed give way under our feet, it may take a while before we "know" what happened as opposed to just experiencing disconnected sensations.11 Based on this discussion, we may now return to Johnson's continuum. The reason why there is a point in emphasizing the distinction between primary experience and secondary representations is that the descriptive practices of cognitive linguists to some extent stress the continuity to the point of forgetting about the distinction. This problem surfaces already in the title Metaphors we live by (Lakoff—Johnson 1980), which trades on an understanding of metaphors as operating in the domain of primary experience: only a hardened intellectual lives by conceptualizations in the narrow sense. This view is also reflected in the way the authors discuss metaphor: it is not a matter of conceived similarity, but of experiential sameness: 'more' is not like 'up', it is 'up', etc. One of the reasons we need to be careful in distinguishing between experiential basis and conceptualizations of it is that a characteristic feature of primary experience is the element of nondiscreteness, recognizable in the state of confusion that some of us regularly experience when trying to articulate things that tax our conceptual apparatus to the limit.
In establishing a secondary representation, one has to impose a kind of order on experience that goes beyond its inherent structure: the totality of experience must be reduced to manageable chunks (cf. Jacobsen 1984). Metaphor depends on this process. Our conceptualization segments what is a continuous domain of experience into separate domains, e.g. space and time; and a metaphor like prosperity is just round the corner may then bring them together again. Unless we have a level of conceptual distinctness, the notion of metaphor becomes empty: experiential sameness is all we have, with no distinction to play up against. More generally, if the two levels are not kept distinct, we are forced to assume a reading according to which there must be complete isomorphism between primary experience and conceptualization. The lack of a hand-in-glove fit between experience and conceptualization is shown by the
difficulty most people experience in forming representations (linguistic or otherwise) of the most significant and powerful experiences they have, such as their feelings for people who are close to them;12 a standard reaction would be to say that we just do not have the words to say such things. The problem goes beyond the domain of cognitive linguistics proper, but is worth taking up because of the exciting perspectives that are opened by broadening the scope of investigation of mental structures based on the methods of cognitive linguistics to include a wider range of phenomena — as discussed by Lakoff in his presentation at the 1993 international conference in Leuven. The example is chosen because it illustrates both what interesting perspectives there are in applying the apparatus of cognitive linguistics to experiential structures, and why it is necessary to keep the levels distinct. Lakoff referred to an investigation (in progress), inspired by cognitive linguistics, of the self-understanding of a group of women employed in the American navy. They did a number of different things which might appear to be strange, including accepting ritualized sexual harassment and refraining from applying for veterans' benefits — and resenting it when someone reported on these practices. As it turned out, all this could be accounted for in terms of an idealized cognitive model of "proving one's manhood". Because the women had conceived of their navy career in those terms, complaining about conditions would amount to unmanly failure, and since proving one's manhood is difficult for women, chances of failure were overwhelming — and since one never did prove one's manhood one did not feel entitled to benefits. This account sounds extremely probable as well as interesting; following the "Tailhook" scandal and the continual reemergence of similar experiences there is every reason to pursue the investigation of the manhood model that is entrenched in the navy experience.
But the structure that accounted for the behavioural pattern of these women is obviously a structure of experience rather than just a conceptual structure — a pattern of life, not of secondary representations. The two things do not necessarily go together: to possess as part of one's conceptual repertoire a conceptual structure such as "proving one's manhood" does not entail that one's life is oriented towards proving one's own manhood; and conversely, to have a pattern of life oriented around this model does not entail that you have a conceptual understanding that this is so. One illustration of why the two things are not automatically connected is the therapeutic mechanism in
psychoanalysis: strong but dysfunctional experiential structures sometimes disappear if they are correctly categorized by doctor and patient: a conscious conceptual representation of the structure lifts the spell. Conversely, when we are asked to explain why we do things, the conceptual categorizations which we provide of our actions are often rationalizations which are very different from the real experiential pattern which we are unable (or unwilling) to see on our own. Once the distinction is made, rather than undermining the project, it opens up a new dimension: investigating the relationship between explicit conceptualizations and experiential structures; and cognitive linguistics can be expected to have something to offer in both domains. The third type of problem involves the distinction between the linguist's (meta)conceptualizations and the ordinary language user's conceptual structures. This problem lies in a different area from the others, in that everyone can see the difference in theory and no-one would want to get at the linguist's own patterns only. But a strong emphasis on the continuity between basic human patterns of experience and conceptual patterns as encoded in language may create an equally strong temptation to accord all traces of this continuity the same central status in descriptions of language. In relation to metaphor, this manifests itself in the question: to what extent are the connections that the linguist can find between domains actually anchored in the speaker's conceptual system? An obvious source of error here is the temptation to see identity of word expressions as indicating identity of conceptual anchoring. I for one find it plausible in many cases to assume some sort of identity; as demonstrated by the power of novel and striking metaphors, cross-domain linkage is a type of real mental phenomenon.
But whenever we come across a metaphor like "moral credit"13, the situation is somewhere within a territory bounded by two extremes: one is that the speaker is in fact construing morality totally in terms of bookkeeping; the other is that he is merely using the best available term for the acclaim that goes to somebody who is perceived to have done the right thing, while this acclaim is conceptually as well as experientially totally distinct from an entry in a ledger. There is a backlog of empirical work to be done before we know how much is in the inventive linguist's head and how much is in the speaker's conceptual system (cf. Sandra and Rice 1995 for an example of this kind of work). Two examples from my own language can illustrate the distinction that one would like to be able to capture experimentally. As in English, the category name perlemor [mother-of-pearl] involves the lexeme
mother; and I still remember the point at which I realized that this was no phonemic coincidence. The status of this sense within even a very radial concept 'mother' must be extremely doubtful, although a perfectly valid metaphorical relationship can be postulated to support the connection. As an example from the opposite part of the scale, which very obviously (according to me as a native speaker) draws upon the experiential reality of the language user, I will mention the term plastikmor ('plastic mother') used by Danish children to designate the new partner of a divorced father (in cases where the "real" mother retains custody). The potential of the word plastic to designate something made of a less valuable, possibly fake material in the outward semblance of the real thing speaks powerfully about an area of contemporary experience when used as an adjectival modifier of the term mother. We need to get busy finding better ways of keeping cases like this distinct from mother-of-pearl cases than the suggestive intuitions I am offering here.

3.11. Conclusion: conceptual meaning — and why it is not enough

The distinctions I have argued for above are designed to define the conceptual, representational domain which I see as the core of higher-level cognitive competence. I have marked out the frontiers of this territory, and its foreign relations, on three sides. On one side it is separate from external world states, thus avoiding pancognitivism; conceptualization and world states are linked by the intentional relation. On the second front it is separate from pre-conceptual skills such as motor routines and perception, thus avoiding a lack of differentiation that would make the content and the expression side of language equally conceptual; conceptualization is linked with these skills as part of the overall processing-and-action competence of the subject. On the third side it is distinct from the pre-representational background states of the subject, avoiding the confusion between the structure of experience and the structure of secondary representations of experience; these are linked in the complicated processes that surround the role of consciousness in human experience. The key achievement of cognitive linguistics as I see it is to demonstrate the many ways in which the process of articulating and encoding secondary representations must be understood by reference to the experiential basis of conceptualization. A conceptual structure such as force dynamics enables
the human conceptualizer to get a handle on complex experiential wholes: using it as a pattern of orientation, he can simultaneously factor out and relate elements in what pre-conceptually constitutes one unanalyzed blob. We can only properly assess those wide-ranging implications which Johnson rightly points to in his article, however, if we also do something that cognitive linguists have underemphasized — namely refer to objects, experiences, words, and concepts separately, so as to be able to describe how they are related. If my description is fair, linguistic meaning as understood in cognitive linguistics essentially equals the conceptual content of linguistic expressions, in the restricted sense of secondary and higher-level representational content — dependent on, but distinct from primary experience. The special contribution of cognitive linguistics is to point out that conceptual content is much richer and much more bound up with the human point of view than had been previously assumed. The continuity with earlier assumptions about meaning is in seeing meaning as purely mental and essentially descriptive or imaginal — compare the notions of "image schemas", "scripts", Langacker's pictorial diagrams, and the emphasis on category structure. I think cognitive semantics, thus conceived, is a good theory of the largest, most sophisticated and interesting area of linguistic semantics. But I do not think it covers everything. The problem with it goes back to the key intuition in the extensional tradition in semantics (cf. section 2): that we mean our words to transcend the purely conceptual domain and have implications in that real world which our mind as a whole also helps us to cope with. In addition to Putnam's point in relation to natural kinds, there is another non-conceptual dimension in meaning, which is most easily seen in more primitive forms of communication.
The communication which we share with animals, such as alarm calls and mating overtures, is not cognitive in the narrow conceptual sense. It is typically understood as stimulus-controlled: elicited by direct situational input rather than mediated by situation-independent secondary representations. More advanced human utterances share at least one crucial feature with such signals, however, namely that they are physical events which "make a difference" in the lives of speakers — just like other physical events such as eating, mating and fighting. In contrast to eating, utterances need to go through mental processing in the sense of requiring categorial perception; but this applies to the primitive signals too: unless the alarm call and the mating overture get to the addressee, they cannot do their job.

The reception skill does not require any secondary representations, only recognition skills analogous to the ability to recognize food or phonemes, which we rely on but do not usually represent. But linguistic meaning is different, it may be argued; the standard assumption would be that the success of any actual speech event is mediated by conceptual representation, so that we can account for language within a purely conceptual picture, leaving the actual consequences to a nonlinguistic form of pragmatics. But not all meanings are obviously conceptual in the sense in which the meaning of the word colour invokes a concept. Some types of linguistic meaning are closely analogous to the direct interactive signals mentioned above. Greetings and terms of address are cases of this kind. Greetings like hello acknowledge the presence of the addressee and indicate lack of immediate hostile intentions, and have correlates in all social animals. Similarly, terms of address like sir, son or honey signal differential stances and roles in ways that are reminiscent of animal signals of submission, superiority and intimacy, invoking background relationships rather than explicit representations: unlike the ordinary noun honey, the term-of-address does not call on a conceptual representation. The only way of arguing that this type of meaning is conceptual is by referring to the existence of a concept that subsumes the situations in which they are used: there are "good morning" situations and "hi" situations, "sir" situations and "honey" situations. But notice that the relationship between concept and meaning is different in this usage from what is the case in prototypical concepts. The 'colour' concept, as argued above, is independent of situations in which you might want to use the concept. As against that, the "hello" concept would be one in which you need to begin with the situation of use and then generalize about it.
In this latter case the conceptualization is secondary to the communicative use of the word: you learn to say hello as a possibly stimulus-dependent skill prior to getting a truly conceptual representation of what a hello-situation is like — whereas you need to have the concept 'colour' before you can use the word competently. Saying that meaning is conceptual in both cases therefore either waters down what you mean by "concept", or confuses the issue, or both. If you think of meaning in conceptual terms, it is natural to say that such expressions are devoid of meaning; but this is not an option open to the linguistic semanticist, because such words are clearly distinct from meaningless combinations like blick in English. When someone says hello
to you on the street, this may not strike you as a very significant event, but the fact that it is not meaningless will be evident the day that person just looks at you in passing without saying anything. The meaning, and its nature as part of interactive experience rather than conceptual abstractions, will then make itself felt. The point is not to retract what I have said earlier about the need to be precise about the special status of concepts in relation to human language. Indeed, precision about concepts is necessary to understand why not all of linguistic meaning is conceptual. Also, the answer to the question of the nature of meaning does not lie in lower-level cognitive phenomena. The point is that over and above the conceptual dimension which linguistic meanings tend to have, there is another dimension of the meaning of a word which in certain cases is the only one: the meaning of the event of using it. Words like hello fit directly into a pattern of life (including experiential qualia) without requiring conceptual mediation — just like alarm calls. We need to look at the whole interactive pattern in which it belongs in order to find out what it means. To the extent this type of meaning is recognized it is usually talked of as pragmatic. In order to talk about the relationship between this type of meaning and the conceptual type of meaning that is everybody's prototype, we need to consider the functional perspective on meaning, which I see as the most fundamental. This forms the subject of section 4.

4. Meaning in a functional perspective

4.1. Introduction

The extensional and the conceptual approaches to meaning have one thing in common as opposed to the functional view on meaning, which will be explored in this chapter. They both see linguistic expressions as having meaning by standing for or representing something else; and this common feature will be subsumed under the label the "representational" view of meaning. As opposed to that, the functional approach sees meaning as a job that linguistic expressions do. The difference in relation to the representational view can be summed up in two salient properties, "dynamic" and "communicative". Meaning is an aspect of the dynamics of human communication, rather than a basically static entity or relation associated with a linguistic expression. One of the aims of this book is to present an integrated approach to function, cognition and structure, with function as the fundamental perspective in terms of which cognition and linguistic structure must be understood. In that respect, the position of the book is hardly sensational. In fact, the functional approach is an idea whose time has come repeatedly. From antiquity onwards, it has been pointed out that language can do other jobs than designate things or concepts. Seeing meaning in terms of function for the speaker is familiar and accepted in the theory of language at least since Bühler (1934); in the philosophical discussion, understanding linguistic meaning in terms of use goes back to the later Wittgenstein; in the Danish context a view of meaning as "officially recognized function" goes back to Christensen (1961). So what's the big deal? As I try to show, the problem is that the sense in which most of us are functionalists when it comes to meaning is both muddled and superficial.
The standard assumptions about meaning involve a division of labour between the functional and the representational view, shared by people of otherwise quite different positions, which is the result of a stalemate of historical forces rather than consistent thinking about the issue. Further, the kind of underpinning that functional views can point to in those areas where they have become accepted is often less than compelling. After discussing the problems in some widely accepted positions, I try to establish a sense of the word "function" which is scientifically respectable as well as compatible with the intuition-based usage in functionalist literature. I then

80 The functional view of meaning

use this sense to show how the representational properties of language can be understood within a picture where interactive function is the basic element in linguistic meaning.

4.2. The intellectual history of the functional perspective on language

The reason for the traditionally marginal position of functional approaches to meaning is the central role of the pursuit of knowledge (cf. section 2.1 above). From Plato onwards, the role of language in interaction became linked with unreliability, with a moral and intellectual stance according to which anything goes: what you can do with language depends more on how unscrupulous you are than on the truth of what you are saying. This left two options for our understanding of language: either it was unreliable and on the side of chaos, or it could be seen as having, as its better nature, the role of mirroring the world. The logical distinction between semantics and pragmatics reflects this pattern of thinking: first the "clean" relation to the world of denotable objects, then the step into the messy world of users and interpreters.

This general understanding of language has made certain concessions to the interactive perspective in the twentieth century. The form of the argument, however, has to some extent reflected the premises laid down in the traditional picture, such that the interactive perspective remains antithetical to thought: interaction is cast in the role of the barbarian invasions bearing down upon the central area of civilization covered by the representational view of meaning. Viewed from that angle, the most uncompromising version of the interactive perspective on language is behaviourism and the stimulus-response view of meaning. In the behaviourist era there was a total conflict between the tradition of language-as-carrier-of-thought and the interactive perspective; also outside the ranks of committed behaviourists there was a general climate of suspicion with respect to mental categories.
The interactive trend understood itself as modern in being anti-mentalist; the anthropological perspective on language as "a mode of action" suggested by Malinowski (1930) and Firth's contextualist theory of meaning also reflected this general climate of opinion. This to some extent applies also to the most important hero of the interactive view of meaning, the later Wittgenstein. Both the early and the late Wittgenstein believed that the philosopher's central duty was to dissolve problems by showing how they arose as a result of misleading ways of talking about things. With respect to the representational powers of

The intellectual history 81

language, the Tractatus position can be seen as reflecting both a positive conviction: language could reliably structure the world of facts as constituted by what was within the point of view of an individual observer — and a negative conviction: whatever was outside that world was beyond language and should not be talked about. Wittgenstein's position in Philosophical Investigations is an extension of the negative conviction, reflecting a realization that language cannot be reliably used even in the circumscribed representational domain that the Tractatus allowed it. Reliability in language is found elsewhere, in the social practices it serves. But the focus of interest remains on philosophical rather than interactive issues. Hence the notion of "language game", which embodies the new foundation of language, serves mainly as a way of avoiding philosophical dilemmas based on representational assumptions, rather than as the foundation of a positive new approach to the description of language. In the case of talk about other people's pain, the model example of why there is no way to relate a word reliably to experience, we can see why it is helpful to take the interactive pattern in which the word enters as basic; interaction provides a way to move the problem of meaning-assignment outside the problematic mental domain. This is also the rationale behind the repeated insistence (cf. Philosophical Investigations §247; §421) that if we want to ask sensible questions about language, we should ask about language use, rather than about meaning — because talking about meaning leads into confusions that are avoided if use is regarded as basic. Similarly, the concept of "family resemblance" was oriented towards undermining unwarranted confidence in nice and clear-cut meanings existing apart from the referential practices of speakers.
In short, Wittgenstein's ideas, epoch-making as they were, preserved the frontiers of the battle in maintaining the conflict between use and meaning, instead of discussing how the two should be related. The issue was not cleared up when attention shifted back into the mental domain again, but was rather buried under the cognitive wave; for a discussion that reaffirms Wittgenstein's approach, compare Arne Thing Mortensen (1992: 209), where intellectual functions are said to be essentially outside the organisms.

A more positive approach was offered by ordinary language philosophy, especially as it developed into a "speech acts" philosophy. The split between ordinary and ideal language, discussed in section 2.5 above, not only increased interest in logical languages, but also gradually turned attention towards those properties of ordinary language that were
problematic in a logical perspective. The phenomenon of explicitly performative utterances, which may seem a bit quirky in hindsight, had the historic mission of putting the "action" status of utterances in a position where it could not easily be marginalized. The power of this discovery must be understood in relation to the entrenched position of logical semantics; the fact that although explicit performatives looked like statements, they could not be understood as describing any antecedently existing aspect of reality undermined the standard representational view of meaning in a way no previous argument had done, leaving it with the label "the descriptive fallacy" (Austin 1962: 234). And once it had become clear that certain types of utterances could only be understood as actions, the road was paved for the realization that this was basically true of all utterances, even the time-honoured standard examples of logic textbooks. Hence, on the level of whole utterances the old descriptive view was universally acknowledged to be too narrow. Following Austin (1962) and Searle (1969) it came to be generally assumed that all sentence tokens must be analyzed as having, at their highest level of analysis, an illocutionary force, endowing the propositional content of the sentence with a "force" specifying the interactional status of the utterance.

The notion of illocution swept through the discipline of linguistics and provided an important part of the impetus for the founding of a real discipline of linguistic pragmatics, which until then had been merely a technical name for an uninvestigated field. Even generative grammarians, whose intellectual foundations were very far from those of ordinary language philosophy, tried to incorporate the notion. It was argued that the performative had underlying status even when it was not explicitly present (Ross 1970); illocutionary force indicating devices were uncovered everywhere.
However, the old frontier survived once again, even if the map was redrawn. First of all, sentence-internal meaning was peacefully conceded to the representational tradition. Austin set up a concept of "locutionary act" whose function would appear to be chiefly that of keeping the purely linguistic aspects outside the domain of what he wanted to talk about, which was only the illocutionary level:

...a locutionary act... is roughly equivalent to uttering a certain sentence with a certain sense and reference, which again is roughly equivalent to 'meaning' in the traditional sense. (Austin 1962: 108)

Austin, who translated Frege, can be assumed to refer to "roughly" the Fregean analysis of sentence meaning — no implications were drawn of calling it a locutionary "act". Searle initially (1969) went further than that, analyzing reference and predication as speech acts, with conditions analogous to those applying at the illocutionary level; he also criticized Frege's generalization of the distinction between sense and reference to the predicate and sentence as a whole (cf. above, p. 20). Although at this point, the general direction of Searle's investigation was to extend the analysis of meaning in terms of action to language as a whole, his actual analysis of sentence-internal meaning stopped at that point, and later his interests moved in the direction of the philosophy of mind rather than action (cf. below, section 4.8). In practice, the proposition-internal aspects of meaning remained unaffected by the interactive reanalysis of whole utterances. There was a rationale for this which had two components. One is that parts of sentences are naturally understood as incomplete from the interactive point of view, so that it is natural to reserve the action status for whole sentences; a theory of houses takes only a secondary interest in bricks. This reasonable attitude, however, merged imperceptibly into a second component: the illocutionary level was designed to capture precisely sentence-sized acts, endowing each sentence with a "force" which supposedly put it in its canonical interactive slot — and the illocutionary level itself was therefore assumed to define the level at which action and language had something to do with each other. Partly for this reason, the pragmatically very successful notion "illocution" turned out to have drawbacks as a vantage point for making further progress with an investigation of language as communicative interaction. The reason why illocutionary force as a level in itself was not the panacea it was widely assumed to be (cf. 
Cohen 1964, Allwood 1977, Harder 1979) is the conflation that it involves between linguistic category and interactive category. The notion of illocutionary act came about purely through an analysis of language. More or less by accident, a door opened that led from timeless conceptual understanding into the realm of social interaction; one central concept was set up to cover this relationship, and then the discussion moved back into language as such. Through the inventory of performative verbs¹, language appeared to offer sufficient means to analyse itself in terms of interaction. But as discussed above in relation to cognition, the lived reality of the subject cannot be captured by explicit verbal
representations; and this goes for the social aspect of lived reality as well. Explicit performatives were the tip of a very important iceberg; but mainstream linguistics dealt with it by cutting off the tip and carrying it home for further analysis. In other words, there is nothing special about the meanings of performative verbs — but that does not mean that the notion of the "force" of an utterance can be reduced to linguistic meaning. On the contrary, the force of an utterance is something quite distinct from meaning, including the meanings of performative verbs (cf. below p. 133). But as a way to get at the force or function of an utterance, performative verbs have basically the same status as the existing inventory of words denoting human relations (boss, friend, colleague, relative, etc.) has for a sociologist: the words offer suggestive hints, but cannot save him the trouble of investigating the actual relations that they are used to talk about.

Searle's analysis (1969) of institutional facts, understood in terms of constitutive rules, brought the analysis a little closer to the larger social context. However, the philosophical methods used relied on the timeless conceptual aspects of the field, not on actual social practice. Constitutive rules are tendentially understood as constituting reality on their own: institutions exist basically by virtue of constitutive rules. However, that is imposing too heavy a burden on something with the flimsy ontological status of rules (cf. Fink 1980; Searle 1986); it only fits reasonably in the case of invented games such as soccer football, and even there it does not provide everything a functional analysis would require. With institutions like promising, it is obvious that we cannot understand the nature of the act without inquiring into the role of promises in the social pattern, including experiential properties.
The point of deriving "ought" from "is" (which at an early stage was the main point of Searle's analysis) lies in demonstrating that norms are built into patterns of human interaction, as mediated by language, in a way that contradicts the premises of logical empiricism. A formulation in terms of rules is a way of representing the lived reality of promising — it is not an arbitrary rule that defines what promising is, the way being "vulnerable" was defined when Vanderbilt invented contract bridge in 1926. Even in games like bridge or football, once they acquire a role in actual human interaction, actions taken within the rule system also acquire a "force" that is not deducible from the book of rules. The actual force of committing a brutal foul is not the same in different countries, even if soccer rules in principle are the same.

The cases which are easiest to analyse fully are acts which belong in an explicitly formalized institution, such as marriage. But that again means that their description is not a description of language, but of the institution of marriage: marrying is possible by virtue of the institution of marriage, not by virtue of the rules of the language. Its relation to language is only via the lexical item "marry". A theory of such speech acts is a theory of the relevant institutions, rather than a theory of the action potential of language in itself (cf. also Leech 1983, Sperber—Wilson 1986, Searle 1991). The illocutionary theory thus did not provide a socially anchored theory of the function of utterances. What it did was to point out that utterances had an interactive status, and that this status was not derivative in relation to the descriptive role of language, but rather fundamental to the understanding of utterances in the first place. If illocutionary forces, as defined by performative verbs, constituted a classification of the interactive options open to the speaker, language itself would provide a full-fledged theory of its action potential (which would have been rather an extraordinary service for a theorist of action, when you think of it). Since that is not the case, the whole network of institutional and spontaneous activity must be uncovered before we can have a theory of how linguistic acts acquire their functional identity: linguistic patterns must be placed within a pattern of life. Other approaches to meaning also followed the schema of conceding some marginal territory to the interactive view while leaving the representational nucleus intact. The mode of being of linguistic meaning, as pointed out above, was another topic approached in this way. Two concepts from the domain of action have been traditionally invoked in accounting for meaning: intention and convention (cf. Grice 1957, 1968; Strawson 1964; Searle 1965). 
Grice derives linguistic meaning basically from speaker's intention, hence in principle from pragmatics — but (again) by means of general philosophical definition rather than by investigating the ontogenesis of meaning in interactive practice. Grice defines nonnatural meaning in terms of particularly complicated types of intentions: those where the speaker wants the utterance to have an effect by virtue of the hearer's recognition of the effect. The "recognition of intention" is what is different from "natural" meaning, as in those spots mean measles. One problem with this definition is the perlocutionary tinge (cf. Searle 1965, 1986, 1991): meaning becomes a matter of what the individual speaker is trying to
achieve, regardless of those factors which make it possible for the hearer to understand what the speaker is saying. The problem with a purely speaker-based notion reveals itself in the fact that the "recognition of intention" aspect does not stop at the first level (cf. Strawson 1964, Searle 1965), but seems to return no matter how far down we go. This circumstance turns out to relate to a presupposed feature of the situation in which the speaker acts, namely the fact that social coordination prestructures hearer expectations; and this leads on to the second notion, that of "convention". The traditional notion of convention as something superimposed upon natural meaning² is explicated by Lewis (1969) as unfolding into a set of expectations of the kind that Grice's definition of meaning depends on. Conventions can be understood as solutions to coordination problems: whenever people depend on each other's decisions, they have to some extent to make assumptions about other people's future actions. If A and B want to meet at a certain place and time, they may make an agreement to do this, which sounds simple enough. But if we want to describe the conditions of success of such an arrangement, it will soon be apparent that in principle there is no limit to the number of factors which may undermine the possibility of making such an agreement: A would not come unless he thought B would come, and vice versa. But both parties know this of each other; so when A comes, it is really on the assumption that B assumes that A will be there — because otherwise there would be no reason to think B would turn up (and vice versa).
However, both parties are, in principle, assuming this underlying condition to be fulfilled; so A actually assumes that B assumes that A assumes that B will be there..., etc.³ When we have a situation in which a given outcome can be described by such a set of mutual expectations, we have a convention — and this is what makes it possible for speakers to use words to mean what they do "nonnaturally" without causing coordination problems.

Lewis's point in making this analysis is determined by his interests as a logical semanticist. His analysis of utterance-meaning involves a dichotomy into a truth-conditional nucleus and a mood (indicative or imperative), and his account is designed to establish the conditions under which logical, truth-conditional semantics can continue to have centre-stage position in a picture that is not open to charges of neglecting the social embeddedness of communication. His strategy for taming the vagaries of communication implies that the place of communicative intention is fixed once and for all if we assume that the speaker is playing the (conventional) game, meaning only what the convention specifies as being sayable with the expressions that he chooses, because if a convention exists at all, it exists as a way of facilitating "mutual recognition of intention", thus covering Grice's definition of meaning. If Putnam's arguments had not undermined the power of the mapping rules themselves, Lewis's picture would have put the interactive aspect of language into the position of a marginal, enabling condition: once the interactive ante has been paid, we can disregard interaction the rest of the way.

This pattern is found again with Grice's other major contribution to the study of meaning, the theory of implicature (cf. Grice 1975). The notion was devised to capture those aspects of what is communicated by an utterance which, although clearly part of what is going on, cannot be analyzed as constituting part of what one would call the meaning of the words. The usefulness of the principle of cooperation, as spelled out in the maxims, lies in the possibility of predicting "implicated" information by interpreting the meaning of utterances under the assumption that these maxims are observed. This process of augmenting the meaning by adding implicatures based on the cooperative principle can be used to bridge some familiar gaps between logical form and everyday understanding of language (cf. Gazdar 1979, Levinson 1983). Grice's principles provide a kind of semantic razor, similar in function to Ockham's in metaphysics: do not multiply linguistic meanings beyond necessity, i.e. if the relevant aspects of the message can be put down to pragmatic principles. However, this theory again leaves meaning and interaction nicely compartmentalized. Where the speech acts philosophy puts interactive force outside propositional meaning, Grice sets up a neat separation according to which logical semantics takes care of the coded parts of utterance meaning and a principle of cooperative communication takes care of the rest.
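The regress of mutual expectations behind Lewis's account has an almost recursive structure, which can be caricatured in a few lines of code. The sketch below is purely illustrative and not drawn from Lewis or from this book (the function name and the two-agent meeting scenario are invented): each level of the regress defers to the level below it, and only a convention supplies a base case that grounds the whole chain at once.

```python
# Toy model of a Lewis-style convention as a regress of mutual expectations.
# A will attend the meeting only if A expects B to attend, and B reasons
# symmetrically one level further down the regress.

def expects_attendance(depth: int, convention_in_force: bool) -> bool:
    """Does the agent attend, given `depth` levels of mutual expectation?

    depth 0: attendance is grounded only if a convention settles the matter.
    depth n: attend only if the other agent, reasoning the same way one
             level deeper, is expected to attend.
    """
    if depth == 0:
        return convention_in_force
    return expects_attendance(depth - 1, convention_in_force)

print(expects_attendance(10, convention_in_force=True))   # True
print(expects_attendance(10, convention_in_force=False))  # False
```

The only point of the toy model is structural: without the conventional base case, no finite depth of mutual reasoning ever settles the outcome; with it, every level of the regress is satisfied simultaneously, which is what lets speakers coordinate without running through the regress explicitly.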
The cooperative principle is a broker that reconciles logical analysis with the demands of cooperative communication. Implicatures thus occupy the same margins of semantic civilization in which we find illocutionary forces and cooperative conventions — for the historically very good reason that propositions constitute the presupposed core, from which the authors see themselves as working outwards. All advances in the theory of the interactive underpinning of language thus permitted Carnap's division of labour to be upheld, even after it had become clear that the relation between signs and denotations was at the mercy of the relation between signs and users (cf. the conception of
pragmatics embodied in Leech 1983, and the argument for this "hybrid" theory of meaning in Gazdar 1980). This division of labour has functioned peacefully and still serves to distinguish between semantics and pragmatics; even those who explicitly want to regard the pragmatic perspective as crucial in relation to language continue to talk about pragmatic and semantic properties of language in these terms. The familiarity of talking about meaning in terms of function is therefore deceptive; underneath it the representational view of meaning dominates, marginalizing the role of communicative interaction to that of a pragmatic envelope inside of which you find real meaning. There is still work to do, therefore, if we want to explore the implications of understanding linguistic meaning with communicative function as the basic concept.

4.3. What is "function" (if anything)?

Usually the traditional search for a good definition is not a good way to begin; what makes a good definition only becomes clear in the course of the discussion. However, the word "function" has such a promiscuous history that a definitional stage is necessary in order to be able to use the word at all; it is far from obvious what its "proper" sense is, if indeed it has one (cf. also the discussion in Nuyts 1992: 26). I shall therefore take time to establish the sense in which the word can fulfil the role I want it to have.

There are three elements in its ordinary, pretheoretical sense which are central. The first involves causal powers: function has to do with effect. However, not any old effect qualifies as a function. As the second element, there must be a norm in terms of which a particular effect is ranked higher than others, such that only the privileged effect is understood as the function. The third element is that it is oriented towards some larger context: when we describe something in terms of function, we change the emphasis from the thing in itself to its contextual role, which determines the norm in terms of which its function is defined.

The ambiguities of the words "function" and "functionalism" are mostly due to the numerous notions of context involved, some of which have been rather narrow. Within a predefined structural context, a functional approach consists in looking "upward" in the hierarchy, while the structural approach looks "downward" (cf. Part Two p. 152) — but both these approaches are obviously "structural" in terms of the larger structure that is presupposed. The mathematical notion of function involves the members of the domain

What is function? 89

and the range (but nothing outside the pairing relations between them); computational functionalism involves the context of the computational process (but nothing outside the relations between elements in the program). Functionalism in linguistics has sometimes been an integrated aspect of structuralism, when the context considered was the linguistic system, as in European structuralism (Prague phonology and Hjelmslev's glossematics). Functionalism can also be diametrically opposed to structuralism, when function is understood in relation to the context of communication, and linguistic structure is seen as totally isolated from function. Depending on how we select the context in terms of which "function" is defined, we thus find functional considerations sometimes as a motivation for abstracting away from almost everything, sometimes as a motivation for taking almost everything into account. But this multiple polysemy is only one side of the problem with the word "function". The other side of the problem has to do with the uncertain scientific grounding of one of the more central aspects of the meaning: what is the status of the norm in terms of which one class of effects is promoted to "function" status — the element of goal-orientation (cf. Nuyts 1992: 30)? Searle (1992) argues that functions (outside the world of conscious, intentional action) are in the eye of the beholder: in nature there is only causation, and it is the observer who dignifies certain effects with the special privilege of being a function. Sometimes natural, physical entities are provided with a function by human beings who use them, as when mountains are used as seamarks; sometimes entities are invented which exist only by virtue of the function that is imposed on them, such as money. This argument is expanded in Searle (work in progress), and the general moral is that functions can only be inherent in a universe that incorporates intentional beings who can confer these functions on entities. 
We obviously need to be careful not to attribute extrinsic functions to the object itself. Yet I shall try to show that it makes sense to use the word "function" from the biological level upwards as an observer-independent property. I argue that function within an intentional universe can be seen as a special and sophisticated case of a broader notion. The argument shares the premises discussed by Searle (1992) in relation to evolutionary biology, but the conclusion is different. In opposition to Searle, I think the term is inherently appropriate to biology (as opposed to other domains)
because we can understand the way living things are built up, if we see them as shaped by the existence of the "survival test". This is a problematic claim, but let me try to show that I can avoid at least some of the traps. The Darwinian revolution rejected the teleological picture according to which there was a purpose inherent in the natural world, thus outlawing Aristotelian functionalism. What appeared to be a reaching towards a goal, as in the case of plants turning towards the light, should instead be accounted for in terms of purely causal phenomena: light triggers a chemical process, which causes the plant to turn. Nevertheless, biologists still talk about functions, because turning towards the light helps the plant survive. But according to Searle, when we choose to say that turning towards the light has a functional explanation, we do so only because we subjectively regard survival as important. Yet even if we stay within a totally causal picture, there may be another way of looking at this. Let us first take an obvious example of function imposed by human beings, such as when we say that it is the function of pillars to support the roof, or of a thermostat to maintain the same temperature: clearly, it is only because there are human beings who want the temperature and the roof to stay where they are that we speak of this as being the function of pillars and thermostats. But this does not apply in exactly the same manner to the phototropic movements of plants. It is not just we as observers who think survival has a role to play; if Darwinian biology is on the right track, differential ability to survive is the pivot of evolution. Knowing that the creatures that we look at today are those which have survived, we can ask how they have managed to do that, while other creatures did not. 
This provides survival with a normative status in a sense that is compatible with a causal model of explanation: the creatures have passed the test of time by managing to survive in their natural environment — it therefore makes sense to ask in what way their properties helped to make this possible. Unless survival had the same status before evolution had produced any evolutionary biologists, the theory is simply wrong. The reason why survival acquires causal relevance in the biological but not in the inorganic world is the role of reproduction: biological species are types that persist only by virtue of tokens producing new tokens. Survival until reproduction therefore has a special role in explaining the features of the biological world: those effects which contribute to survival are inherently privileged in relation to other effects, and we can say that the function of wings is to confer powers of flight on the animal without arbitrarily imposing something from our own perspective on the animal. A
crucial feature of functionality in this sense is an extra loop in the causal chain: the persistence of wings as an attribute of animals is caused by the effects of the wings, rather than by simple causation. We can speak of function generally when the reason organ X is there is that it has causal power Y (cf. Wright 1973). To spell it out:

WINGS cause POWERS OF FLIGHT
POWERS OF FLIGHT cause SURVIVAL AND REPRODUCTION
SURVIVAL AND REPRODUCTION cause PERSISTENCE OF WINGED ANIMALS
PERSISTENCE OF WINGED ANIMALS causes existence of WINGS

Searle (work in progress) considers this type of definition and acknowledges that function in that sense can be seen as observer-independent — but rejects the usage because it does not tally with what we pretheoretically consider as functions. His counterexample is that if this usage were accepted, it would be the function of colds to spread germs — because that is how colds survive. The objection shows that the extra loop is not enough to define function. The third element of meaning posited above needs to be taken into account, i.e. the role in a wider context. When we speak of function in terms of survival value, with wings as the example, we are talking not about the survival of wings only — we are talking about the way wings contribute to the survival of something larger than themselves, namely the winged species of animal. Wings have a function-for-the-animal, namely powers of flight; if we try to put Searle's counterexample in those terms, it does not work, because there is no wider context whose survival depends upon the effects, specifically the germ-spreading, of colds. The necessity of the contextual role in the "function" concept is not only a matter of the folk sense of function; it also plays a crucial role in the use of the notion of function in biology. The motivation for positing a notion of "functional explanation" in biology as well as in linguistics is the assumption that when we look at phenomena in ecosystems and linguistic systems, we can account partly for their individual properties by asking how these properties would help the items as a whole to survive. Thus it

92 The functional view of meaning

makes sense to ask why scavengers tend to have naked patches, and to ask why languages tend to have a particular syntactic construction, looking for ways in which such a special feature might tend to further the reproduction of the tokens of which they form part. Spreading germs is the way colds stay in business, but the element of contextual role is absent, so colds have no function in those terms. Saying that it is the function of colds to spread germs would be like saying that it is the function of human beings to have sex; it is either a tautology (the process of reproduction is necessary for reproduction) or a wilful attribution of priority: this is what it's really about! But when we look for functional properties in a little cog in a larger machinery, we are looking for a non-tautologous relationship between parts and wholes in a larger system.

A functional account is appropriate wherever the general criterion involving the extra loop in the causal chain is satisfied. This criterion may cause occasional dizzy spells if you think of it purely in abstract terms; but it has an intuitively simple point to it. If there is something about the effects of a device that plays a role for the future of the system of which it is part, it means that the system responds to feedback from the environment. That, in turn, means that it cannot be understood as "autonomous" — in order to understand such systems you must understand the way they interact with their ecology, and the way they are shaped by this interaction. The shaping process consists of feedback-response cycles which may have many different lifespans, cf. the next section; depending on which contextual perspective we adopt, we get different types of feedback and correspondingly different types of function. A sanitized concept of function is necessary whenever we want to understand larger wholes — structures — whose parts are related because they collaborate in serving contextual functions.
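The extra loop in the causal chain can be illustrated with a toy simulation (an illustrative sketch only; the function name and parameters are invented, not drawn from the text): a trait persists in a population precisely because of the effects it has on the reproduction of its bearers, whereas a causally inert trait merely drifts.

```python
import random

def simulate(generations=200, pop_size=1000, advantage=0.1, seed=0):
    """Toy feedback loop: a trait persists because of its effects.

    Each individual either has the trait ("wings") or not. Bearers
    reproduce with a small fitness advantage, and the next generation
    is sampled in proportion to fitness, so the trait's persistence
    is caused by its effects -- the extra loop in the causal chain.
    """
    rng = random.Random(seed)
    pop = [rng.random() < 0.5 for _ in range(pop_size)]  # ~50% start with trait
    for _ in range(generations):
        weights = [1.0 + advantage if winged else 1.0 for winged in pop]
        pop = rng.choices(pop, weights=weights, k=pop_size)
    return sum(pop) / pop_size  # final frequency of the trait

# With a fitness effect, the trait is driven toward fixation;
# with advantage=0.0 the loop is absent and the frequency merely drifts.
with_effect = simulate(advantage=0.1)
no_effect = simulate(advantage=0.0)
print(with_effect, no_effect)
```

The contrast between the two runs mirrors Searle's objection: the "effect" clause alone does not single out functions; it is the contribution to the persistence of a larger whole (here, the population of bearers) that does the work.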
Languages are examples of such structures, which is why the concept of function-based structure (as opposed to autonomous structure) is a main theme in Part Two. The defining feature of such structures is their context-sensitivity: the way elements hang together internally must be understood in terms of the way the whole relates to the context. Not all features of such function-based systems have a functional explanation; since biological entities are part of the physical world, they also reflect purely physical laws. In biology, an instructive illustration of this can be found in Gould's discussion of male nipples and female orgasms (Gould 1992: 137). The special thing about biological entities is that in addition to direct causal impact from the environment, they are also capable
of reacting in the more subtle, but still causal manner that is captured by the definition of function above.

4.4. Types of functional contexts

When language is viewed in the functional perspective, there is a difference in relation to both the logical and the cognitive approach: we now look at language in relation to a context where neither the ability to map sentences into states of the world nor the cognitive powers of the human mind as such can be taken for granted. Language is part of the same evolutionary process which also created the human mind; and both language and mind need to be understood within this wider context. Before we begin to describe language in functional terms, however, we need to specify on what premises this way of talking is grounded; otherwise we would risk falling victim to Searle's razor, which is designed to remove functions that are posited only on the basis of the observer's own priorities. This is the reason why we should not set up too straightforward a continuity between present-day linguistic functionalism and Aristotelian teleological functionalism (sometimes one may read Givon as suggesting such a continuity, cf. Givon 1989: 13, 26; 1993: 2).

Evolution is the widest, but not the only, context in which this approach is possible. The evolutionary context defines the larger horizon within which other functional mechanisms can arise; but we are not bound to take the whole evolutionary process as our possible context. One alternative context within which a feedback-response cycle can create functional roles is the life history of an individual. This is an intuitively easy step to take because the features that protect the species against extinction will by definition also help the prototype specimen to stay alive. As a starting point, the specimen inherits the functional properties of the species; in addition, the individual specimen subsequently develops properties that prove functional within its own life cycle.
The theories of Gerald Edelman (Edelman 1992) provide a striking parallel between the mechanism whereby a species evolves and the way an individual develops. Originally an immunologist, Edelman showed how the defense system of antibodies works by a selection mechanism: the body produces a large number of different antibodies, and an invading microorganism triggers a procedure
whereby the relevant antibody begins to multiply, just like an animal that multiplies when it finds a suitable niche. The same explanatory pattern proved to be applicable to neural organization. Neuron growth "evolves" in an individual depending on the "survival value" of different neuron activation patterns. "Survival" here obviously applies to the individual rather than the species; but there is also another difference in that it extends beyond the point-blank alternative of life and death. What promotes neuron growth is not just avoidance of potentially fatal problems, but more generally the promotion of the individual's purposes: what is useful is preserved and developed. The mechanism starts in the womb; identical twins become different before birth. The mechanism, as expressed in the title Neural Darwinism, is assumed to be the same as in natural selection, and hence conforms to the definition of function given above: a given neuron pattern survives because of a special class of effects which it has for the organism. Such processes are of course evolutionarily relevant in that they endow the species which have them with greater fitness — but their operation is within the lifespan of an individual, thus yielding an individual-based class of functions.

This type of functionality constitutes a biological context for cognition: cognition must be seen as part of a larger domain of functionally shaped human attributes. When I argued that cognition should not be seen as the widest possible context for the study of language (cf. above p. 58), I motivated this with an appeal to the undeniability of the external world. The functional context of cognition is an example of the way in which we can only understand cognition scientifically by going beyond it; to the extent that cognitive patterns are "selected", there are mechanisms behind them that are not themselves cognitive.
Within human life, we can choose a perspective that focusses on the group rather than the human individual. Understanding the function of behaviours in the lives of social animals, from bees and ants to primates, requires that we look at functions in terms of highly differentiated patterns of social behaviour. The notion of the interactive function of a behaviour presupposes an extra level of complication in the animal's pattern of life, but still directly reflects the evolutionary level: for social animals, survival involves a crucial element of collective security. Also, like other aspects of an animal's life, interactive elements can be assumed to have a subjective correlative (from a certain point "upwards"); in accordance with the natural link between subjective experience and selective fitness (cf. Popper 1972),
one would expect social animals to be subjectively motivated to seek the experience of leading a social life of the proper kind. Communicative function arises as a special case of interactive function. The signals associated with territorial dominance behaviour have (apart from subjective functions) an important role to play in regulating social interaction among animals, for instance in avoiding fights that would needlessly lower the chances of survival for species as well as individual.

In the case of man, I think it is fairly uncontroversial to claim that interaction is both objectively and subjectively fundamental in human life. By this I am referring first of all to the ontogenesis of primary experience as a pre-representational phenomenon. Ordinary parental experience as well as research into child development overwhelmingly supports the assumption that the life project of young children very largely consists in looking for a place in an interactive community. In order to be able to lead what we understand by a human life, a child both subjectively wants and objectively needs to have access to plenty of interaction with other human beings — human children are not designed to just come out of an egg left to hatch in the sun. If we try to view the issue from within the circle of subjectivity, in the manner of authors such as Melanie Klein (1952), Heinz Kohut (1977) and Daniel Stern (1985), the picture that emerges is that the development of subjective qualia evolves directly out of the pattern of interaction that the child enters into: intensive and positive response from the caretaker and, gradually, people in general, leads to the formation of a positive and pleasurable sense of being; hostile, irregular or absent response leads to lack of pleasure in oneself and lack of ability to react appropriately to personal contact in later life (hence, evolutionarily speaking, also to reduced chances of reproductive success).
A child is born with extensive genetic preparedness, including an instinctual drive, to seek out and react (probably also linguistically, cf. Pinker 1994) to other people; and carrying on a human life means maintaining one's place in a pattern of human interaction. Thus, the bedrock status of interactive feedback in human life is visible in the dynamics of identity formation: interaction does not just influence the formation of the self — it is the stuff of which selfhood is made.

In all the cases discussed above, I speak of function whenever there are phenomena that exist because the context favours their reproduction. I now turn to the only situation in which function can exist from Searle's point of view: within an intentional context, where subjects can assign a function to something by virtue of their own conscious
choices. Intentional functions are unique in being assignable by choice — but they share with all other forms of function the dependence on a Background in terms of which their contribution is normatively motivated. Intentions can only arise within the space of possibilities defined by our lifeworld (Lebenswelt); this is what Searle (1983) expresses in relation to the background by saying that you cannot intend to become a cup of coffee. For that reason, even within the subjective world of a very eccentric individual, it is impossible to assign something the function of keeping the coconuts from harassing the doughnuts, unless there is a horizon of assumptions and purposes in terms of which this is a meaningful thing to do.

Furthermore, the conscious act of assigning a function to an object, such as a linguistic expression, is only a brief and glorious moment in the "functional life" of the object — like a christening ceremony for a proper name. Once a new name is actually functioning, the mechanisms that sustain its functional role need to be understood as an aspect of communal life. If we use the notion of rules, including constitutive rules, to account for linguistic conventions, we need to specify that in the case of fully competent speakers the rules have ceased to be relevant, in the same sense that rules for skiing have ceased to be relevant for the Olympic champion: they are wired-in aspects of interactive competence, founded on a shared set of intentional routines. Conventions, on this understanding, are aspects of life which the members of the community "rely on" just as much as they rely on features of the physical environment, like the solidity of the ground.

To sum up: there are many different types of processes, with many different time perspectives, through which functions may come into being and continue to exist.
In order to be precise about language as seen from a functional point of view, we therefore need to be precise about the type of anchoring that we postulate for functions in language.

4.5. A functional account of language and meaning

4.5.1. Millikan and the attempt to found logic on evolution

The two chief contenders for "the" function of language remain those involved in what Strawson (1969 [1971]: 172) called the "Homeric" struggle between views which see communicative interaction as the basic function and those who want to see language as basically a representational
device.4 The mainstream tradition, as already argued, favours representation in one form or another; and typically the word "function" has been used fairly loosely in the discussion. Within the representational tradition, however, Millikan (1984) invokes a precise, biologically grounded sense of function as a basis for semantics in an impressive attempt to provide a new foundation for the central tenets of the realist tradition in logic. Millikan's strategy is based on the fact that the criterion of survival (and hence function) is selective adaptation to the environment, i.e. the real world outside the organism (and outside the mind of the organism, if it has one). The operative item in this process is therefore things-in-the-real-world rather than things-in-the-mind. If what makes it selectively useful to have a certain word is its ability to pick out a certain real-world type of object or property, then the proper function of that word must be described in terms of the item-out-there, the environmental feature to which the animal is adapted, rather than whatever the organism may have represented in its own mental world. Realism in model-theoretic semantics rested on the assumption that mapping rules could link expressions to their real-world interpretations; but since this is impossible, for reasons pointed out by Putnam as we saw above, Millikan suggests that the job can be done by the history of evolution instead — because the history of evolution has the advantage of standing outside the human mind. A central consequence of this view is that there is very little that the content of the human mind can reliably tell us; although there are intensional correlates to the real-world items, these come a poor second in relation to the "real values". Another consequence is that everything about a human being and the words he uses is defined in evolutionary terms:

Let me put the position starkly — so starkly that the reader may simply close the book!
Suppose that by some cosmic accident a collection of molecules formerly in random motion were to coalesce to form your exact physical double. Though possibly that being would be and would even have to be in a state of consciousness exactly like yours, that being would have no ideas, no beliefs, no intentions, no aspirations, no fears, and no hopes. (His nonintentional states, like being in pain or itching, may of course be another matter.) This is because the evolutionary history of the being would be wrong. For only in virtue of one's evolutionary history do one's intentional mental
states have proper functions, hence does one mean or intend at all, let alone mean anything determinate. (Millikan 1984: 93)

Millikan's commendably clear position entails that a human speaker has no chance of knowing what he is talking about; only Mother Nature knows, slowly weeding out the ill-adapted speakers, who never knew what hit them. Her version stays at the evolutionary level, without considering the individual-based type of function, let alone the subjective dimension.4 The central defect of her theory is closely bound up with the aim she has in constructing it: to find a new way of doing what the logical tradition has always wanted, namely assigning meaning to utterances without relying on the messy processes of the human mind. Millikan's theory presupposes that the survival value of language is attached to the relation of reference; only such an assumption would justify the attempt to use "function" to establish the traditional logical relations of reference directly to ontological categories in the external world. Making misleading reference, we must assume, is the path to extinction. Searle (1992: 50) offers a reductio-ad-absurdum counter-example to this theory in the form of the statement "I think Flaubert was a better novelist than Balzac", challenging Millikan to explain the meaning of that in terms of how the history of evolution would fix proper functions for this clause in terms of "real values" in the external world.

4.5.2. Three functional contexts — and three different discussions

In order to do better than Millikan, we need to show how language is functionally anchored without presupposing the logical universe of thought. In section 4.4. I described how the basic principles of functional description could be viewed in different contexts, and presented my view of the human context that language belongs in; in that sense I have already shown my hand. What I do now is to discuss the implications of these views on functionality and on human life, when we look for the function(s) of language. I begin with the context of the lifespan of an individual.
This context is a necessary element in the picture, because a full-fledged language is not a lengthened toe or a naked patch which is handed down to the individual as a gift from the history of the species; each individual learns and speaks a language in its own way. At the level of the individual speaker, the feedback-response cycle involves the shaping of language use within the
lifetime of the individual. The evolutionary time-frame is not necessary; in fact, functions can arise very quickly. An example of individual-level functions arising as a result of processes that do not depend on Intentionality is behaviourist learning: when a behaviour is learnt by means of rewards, the behaviour acquires the function of bringing about the reward. In the human context, function depends on subjective Intentionality as well as on subjective experience: when the human subject uses language, he has a particular effect in mind. The extra loop is simply brought about by a definitional property of intentional action: intentional action is not brought about by mechanical causation, but carried out because of its likely effect. At this individual level, the speaker's own norm is therefore criterial for what counts as function; and the confidence with which representation has been defended as the function of language, with no necessary interactive link, must be understood on the basis of the individual perspective on language — which is strongly emphasized in the generative tradition (cf. for instance Chomsky 1986: 36). The individual is entitled to use language for whatever purposes he likes, such as writing things that no-one is assumed ever to read (Chomsky's example, 1975: 61); on the individual level there is no way to prove that interaction is the "real" function (cf. also Dummett 1989: 210).6

Let us now consider the evolutionary level, where language is viewed as a property of Homo sapiens. What we are talking about here is therefore the language faculty, not any particular language. The first thing that needs to be said is that there is good reason to be sceptical about anybody who makes very definite claims about what is "the" function of language at that level.
Even in cases that look obvious, the argument for saying that anything is "the" function of a particular behaviour is often very difficult.7 Also, the fact that processes of selection and survival continually apply to whatever the organism does means that any feature of an animal is likely to be used in any way that comes in handy. This means that the burden of proof falls on those who make claims of the form "X and not Υ is the function of language". If we approach the subject without presupposing that there is one preeminent function, however, there is a strong case for saying that the evolutionary functions of language include communicative aspects. As an empirical fact, language serves communicative purposes, and even if we assume, with Chomsky, that language learning is not strictly speaking brought about by interaction, it does presuppose interaction. Not all
interactive functions served by language have to be part of the specific survival value of human language; surely, however, one would assume that the survival value of the representational elements in human language must be understood in relation to the type of social coordination that interactive use of language makes possible. The readiness of many people to maintain on a very general level that representation and not communicative interaction is the function of language (for instance Sperber and Wilson 1986: 172-73; Bickerton 1990: 75) demonstrates the entrenched position of the purely representational approach more than anything else. Representation, in fact, is not a serious candidate for the function of language if it is totally abstracted from action. Dummett expresses this point by saying that "the significance of an utterance lies in the difference that it potentially makes to what subsequently happens" (Dummett 1989: 210-211). To make sense of representation as a rival function of language, we therefore have to link it up with an activity. This reflects a principle familiar in evolutionary explanation: the mechanisms of selection apply only to the reactions which the organism has in actual situations, not to unmanifested potential. The act of using language to represent (true or false) thoughts may occur as part of communicative interaction, in which case representation is one form of communicative activity. The position I take, and which will be argued in greater detail below, is that this is the natural relation between representation and communication as functions of language. According to this view, the overall function of language is communication; representations are the most sophisticated and interesting types of communicative purposes in human language, but they are not the only types. The only way you can avoid this conclusion is by insisting on soliloquy as a fully equivalent option: one can use language just as well in speaking to oneself.
There are two discussions that are needed to make precise the real, but limited validity of such a position, of which the second will be postponed until I have given my own account of the relation between representation and communication (p. 124). The first argument becomes visible in the third functional context I consider — the one that I would like to propose as the proper context for linguistic description. The mode of being of actual linguistic items is neither to be part of the life of a human individual nor to be part of the evolutionary history of the human species as such. The things that constitute the object-of-description for linguists, understood as entities in their own right, including words, suffixes, construction types, discourse patterns, and (as a limiting case) whole languages, belong in a context that
is larger than the life of an individual, but narrower than evolution: the life of a speech community during a particular period. A functional investigation of languages and their structural elements looks at all those things about an actual, specific language that a linguist might want to discover and, if possible, explain, in the human context in which they belong; and this is where the concept of a functional linguistics belongs. Evolutionary functions depend on the persistence of species; individual-based functions are defined in terms of the persistence of preferred (including intended) activities in the individual's life. Similarly, the functions that are peculiar to linguistic elements as such must be defined in terms of the events to which linguistic expressions contribute, and whose persistence causes the linguistic elements themselves to persist. This is where linguistic interaction has a privileged role. If a human subject uses language for entirely egocentric purposes, there is no obvious way in which the functions of the linguistic elements he uses can survive the individual; Samuel Pepys's diary code died with him. The type of event that causes the recurrence of linguistic expressions beyond the lives of individual speakers is clearly the interactive, communicative event. Because languages and their elements are kept alive and shaped by the interactive processes to which they contribute, they can only be properly understood as aspects of communicative interaction. The function of linguistic elements, on this argument, must be definable in terms of their usefulness as a medium of linguistic communication.

4.5.3. A functional definition of linguistic meaning

With this, we are ready for a function-based definition of linguistic meaning. Its crucial element is the assumption that each linguistic expression has as its proper function the job of contributing in a specific way to communicative events.
In doing so, it calls on all the skills that are necessary for the success of the act to which it contributes — including most saliently the cognitive capacity of the speaker and hearer; but it is its contribution to the actual event, the way it "makes a difference", that is decisive:

The (linguistic) meaning of a linguistic expression is its (canonical, proper) communicative function, i.e. its potential contribution to the communicative function of utterances of which it forms part.
In previous attempts to define linguistic meaning in interactive terms (cf. Harder 1975, Harder—Kock 1976, Harder 1990a), I have borrowed the terminology of constitutive rules: meaning is what uttering the expression "counts as" by virtue of linguistic conventions. It still gives something of the right flavour; but as argued above (p. 96), there is an unwarranted duplication in seeing actual interactive practices as functioning by virtue of rules. There is a pattern of interaction, and there is a capability in the individual for taking part in that pattern; but there are no separately identifiable "rules", except in the trivial sense of correct descriptions of what the pattern is like. Similarly, when I speak of "linguistic conventions", it is to be understood in a sense whereby it captures the implications of the interactive pattern for the way a linguistic item is understood, with no suggestion that conventions exist separately from the pattern itself.

What the definition of meaning entails will be demonstrated in some detail in the rest of this book. At this point I shall limit myself to answering some of the immediate problems that spring to mind. First of all: this definition might appear to commit the behaviourist mistake of assuming that each linguistic item must have its own isolated function, thus rendering the theory liable to something analogous to Chomsky's attack on Skinner. But the definition explicitly talks about its "contribution"; and typically it can only make this contribution if it is used together with other linguistic items in accordance with the structure of the language of which it forms part. On this view, the canonical function of a linguistic expression such as -ed (understood as expressing past tense) can only be properly described as part of a description of all the things that -ed needs to be combined with in order to make up a whole utterance: linguistic fragments presuppose the whole of which they are designed to form parts.
In Part Two (section 2.3), I show how structural complexity in language can be understood as function-based. The understanding whereby meanings are (aspects of) interactive routines may also have a behaviourist flavour in that it emphasizes the role of habit. What was wrong about behaviourism, however, was not the claim that a competent speaker has automatized linguistic routines at his disposal — the problem is whether the routines are always mechanically triggered, as assumed in behaviourism, or are at the service of a human subject who uses those routines in a creative manner. Secondly, let me emphasize one similarity and one difference in relation to Millikan. Like Millikan, I see meaning as residing in the relationship between a speaker and his environment: the canonical ontological locus of meaning is human ecology, rather than conceptual representations inside the
speaker. But unlike Millikan, I see mental constructs as playing a central role; and an important part of the discussion below will consist in describing just what role (cf. Part Two, section 4.4.). The similarity with Millikan may lead to the question of how I manage to account for the meaning of a statement about Flaubert in ecological terms. The answer is that getting to know about Flaubert is in itself a social(ization) process — so there is nothing mysterious about the idea that knowing what the function of a word is is a matter of knowing how to fit into the (cultural) environment. A functionalist account of language is based on the assumption that languages are systematically responsive to feedback from the human environment, just as a functionalist biology assumes that biological organisms are responsive to features of their habitat; and mental capabilities are part of the equipment that language depends on to serve the human speaker in just that way.

Thirdly, and perhaps most importantly: how do we distinguish "proper" functions from all the accidental and occasion-specific things that also happen when we speak? This objection touches on a point where a definition of meaning in terms of function jars against the accepted, "pragmatic" way of talking about function. It is a commonplace to say that there is a one-to-many relation between linguistic expressions and the functions they serve. A celebrated case is that of so-called indirect speech acts; "It is cold" can have the function of a request to close the window, a suggestion to take a holiday in Spain, etc.
But according to the definition here, such purposes do not count as the proper or canonical function of the linguistic expression as such — they are functions of the utterance in the sense of "intended function", but do not enter into the cause-to-persist loop that preserves these particular linguistic items: meaning is not just any old intended effect that an expression may have, or any old use to which it is put, but precisely that effect-type which causes the persistent use of utterances of which the expression is part. The canonical function of the word cold (the effect that causes speakers to include it in their utterances) is presumably its power to place something low on a temperature scale — whatever ulterior effects that may have in a concrete case. In this way, the "cause-to-persist" clause in the functional definition of meaning can be used to flake off perlocutionary aspects of what is understood by "function" in the common pragmatic sense. What counts as "function", hence as coded meaning, is everything that becomes part of the word's generally recognized usefulness for speakers.

104 The functional view of meaning

The definition, therefore, does not flake off those "pragmatic" properties which do indeed cause people to reuse an expression, such as the signalling of in-group membership that could once be achieved by words like groovy — and that is a calculated consequence of seeing meaning as function-based rather than ontologically or conceptually based (cf. the discussion on the nature of content elements p. 200); the definition is not meant to provide a rationale for the traditional usage whereby descriptive meaning is a priori given privileged status. On the other hand, when we know what types of function natural language expressions can have, it becomes interesting to study the relationship between different types of meaning; one issue that I try to look at is the relationship between conceptual meaning and (certain types of) functional-interactive meaning in linguistic structure (cf. p. 265). What is occasion-specific as opposed to what is part of the coded, proper function, however, is not fixed once and for all. The way in which individual occasions of use rub off on the code itself is a form of abduction (cf. Peirce 1931; Andersen 1973) whereby properties that are not inherent in the code are attributed to it because something in the situation makes it reasonable. When the word corn is used mainly about maize in a particular community, it gradually comes to mean 'maize' in that community. The constant abductive process of matching achieved purposes with the words used in achieving them keeps language users' sense of what their functional potentials are up to date. While winnowing away most occasion-specific aspects, each such abductive process reassesses, and possibly modifies, that part which is retained for future reference.
The failure to make an explicit distinction between what the speaker takes to be an expression's reusable potential and its purpose on a specific occasion is a central reason why the functional tradition has in effect conceded the central domain of linguistic semantics to nonfunctional approaches. A functional semantics needs to factor out that privileged property of an expression which constitutes its meaning, just as much as a logical or conceptual semantics does. The central unclarity in the functional tradition is that it lets "function" be synonymous with communicative use, without distinguishing the privileged role of meaning in the picture. In Givon (1989: 27) we can see, first, how function is linked with meaning and, secondly, how meaning is effectively submerged in the broader domain of "communicative use":


Bloomfield's extreme position ... proclaimed that the instrument — language — may be described without reference to its function — meaning or communicative use. ... The work of all 20th Century functionalist schools, from Jespersen and Sapir onward, falls squarely within the scope of pragmatics, insofar as they have all insisted that the structure of language can only be understood and explained ... in the context of its communicative use, i.e. its function.

I have claimed that the functional tradition in linguistics has stopped short of a full clarification of meaning in functional terms. The significance of taking this step can also be profiled in relation to the oft-quoted views of the late Wittgenstein. The definition that equates the meaning of a word with its function (rather than with the representational content itself) updates Wittgenstein's famous dictum about meaning as use in two ways. First, it explicitly filters off accidental aspects of the use of an expression; secondly, it avoids the antimentalist stance: the mental element is necessary, but does not in itself constitute meaning — meaning needs to be understood as communicative potential for the speaker.

4.5.4. Some central functional characteristics of linguistic meaning

Changing the locus of meaning from mind to ecology runs against the prevailing cognitive trend in semantics; a number of authors whose views I otherwise share (Langacker 1987a: 5; Searle 1983: 200; Gärdenfors 1992: 107) pointedly emphasize the place of meaning inside the head. To make this difficult issue clear is going to take up some space below. As a starting point, let me reemphasize that if it were not for the mental skills, meaning could not exist, but neither could other intelligent, coordinated activities: put meaning inside the head, if you like, but then football goes with it. To see the point of trying to look at meanings from the functional rather than purely conceptual perspective, let us return again to the directly grounded forms of communication that were referred to above p. 77 in the discussion of the difference between background states and representational states: greetings, etc. Such signals are without conceptual content, but are obviously describable in functional terms: they serve as means to take part in interactive patterns, and the act of participating is the speaker's reason for using them.
The interactive pattern itself, as a collective form of life, is therefore prior to the representation and experience that individual participants have. Unlike a concept, the act of participation is not inside the
head of the speaker; what is inside the head in such cases is the ability to take part in the pattern, a skill analogous to the ability to play football. Some examples of grammatical types of meaning also fit better into a functional than into a conceptual view of meaning. But this is for a different reason than in the case above — not because they do without the conceptual level and fit directly into interactive patterns. As a demonstration case at this point, I shall take the declarative mood in English. This category has a highly sophisticated type of meaning, which presupposes the existence of conceptual meaning in ways which will be more evident in Part Three. Yet its meaning does not constitute a concept; rather, it involves the purpose served by the conceptual content. The declarative, as a paradigmatic alternative to the interrogative, indicates that the content of the sentence is to be understood as describing what is the case. As with the meaning of a greeting, it is possible to set up a concept "declarative"; but again, this concept is secondary to the meaning. In the case of the declarative, this can be shown by the fact that you could not understand a definition of the concept "declarative" unless you already understood its meaning: in understanding the definition, one is already relying on the knowledge that declaratives (including definitions) are to be understood as describing what is the case; understanding the declarative has "transcendental" status (cf. Habermas 1971). Again, a definition in terms of function is a natural way of expressing the meaning of the category: it does a job in relation to the conceptual content of the sentence. A crucial argument for seeing linguistic meaning in functional terms is the fundamental directionality of meanings — which cannot be accounted for if we assume that meanings are purely representational. 
The directionality of action is due to the temporal directionality of life, as discussed in relation to presupposition above p. 71. And just as actions are always oriented towards a future state that is anticipated by intentions, and presuppose an already existing state which intentions are designed to improve on, so are meanings oriented towards the job they do and presuppose a context in which to do it. Greetings are designed to serve a function, and by virtue of that they presuppose that the conditions for carrying it out are there — there must be an encounter before hello can function properly, and in performing its function it "makes a difference" in relation to the encounter. With this property, we are at the point where the functional view of meaning most clearly manifests the second of the two properties that was mentioned at the outset: its essentially dynamic nature. But this sense of
dynamicity must be kept distinct from the sense in which only actually used meanings are dynamic. Even functional meanings have a mode of being in which they are just kept in "static" readiness; in order to actually say hello you must already possess the "option" of saying hello. The sense in which functional meanings are inherently dynamic also in their existence as options is that these can only be understood as something that depends for its existence on the situation of use — just as the skill of playing the violin must be understood as dependent on the actual event of playing the violin. As opposed to that, representations have no inherent beginning and end, and can exist regardless of being used for any purpose. Because of this directionality, meanings (like actions generally) always have two complementary aspects, i.e. the forward-oriented and the backward-oriented: if you want to go somewhere, you have to start from somewhere. To take a non-linguistic example, a railway ticket has the same property: its function is to get the owner somewhere, but the ticket will simultaneously show from what station it is valid. Just as you cannot buy a railway ticket with only a destination on it, you cannot use an expression with only a forward-oriented function and no backward-oriented implications for the situation you are using it in. I shall use the word (functional) "basis" about such presuppositional implications; "(coded) function" will do double duty as describing both the forward-oriented aspect and the whole — since the whole only comes into existence because of the forward-oriented aspect. It would be difficult to sell a railway ticket that only entitled one to travel from somewhere.

4.6. Meaning and representation: procedural semantics

According to the picture presented above, representation as a function of language must be understood as a sophisticated variety of communicative interaction. But does that make any real difference in the way we should understand representational meaning? Unless that is the case, the tradition according to which it could be regarded as marginal would be essentially correct; I shall therefore try to show that it does indeed make a difference. The first, essential step in getting what I see as the proper perspective on the issue is to distinguish rigorously between the representational powers of the mind and the representational function of language — as a prerequisite to finding out just how these two things are related. If we look at
representation in relation to the scale of cognitive sophistication that was presented above section 3.9, it will be apparent that there is no logical need to understand representational powers as being bound up with a communicative outlet for them. The ability to form and process Intentional representations is a useful skill for an individual animal as well, making possible a far greater flexibility of response than the dependence on direct situational stimuli; a being that can envisage what may happen can let a hypothesis die instead of itself, as Popper (1972) pointed out; for that, communicability is not essential. As an example of representational skills in human beings which are not directly linked with language, the most obvious is perhaps the concepts associated with faces. Although faces may have a name attached to them in one's mind, those of us who are teachers are painfully familiar with the fact that this is not always the case; and attempts to link conceptualization of faces with a constellation of ordinary linguistic concepts are implausible because the salient features of faces typically defy attempts to put them into words. Yet the tradition, going all the way back to Plato, is to assume without addressing the issue that the capacity for language and the capacity for abstract mental processing are two sides of the same thing. However, as argued by Ulbaek (1989), there are good reasons to assume that cognitive skills evolved independently of language, so that the ability to process abstract representations was already there at the point where human language arose. Although Ulbaek's theory is based on assumptions corresponding to a Fodor-type language of thought, the general moral — that the evolutionary path of internal representations is autonomous in relation to the communicative use of them — applies equally to a story based on intentional representations.
Bluntly speaking, it would be absurd to suggest that linguistic utterances with representational content arose before the minds of speaker and addressee were already equipped to handle representational content. One factor that has made the identification between linguistic meaning and mental representation plausible is the classical assumption, generally implicit in the period before Fodor explicitly revived it, that mental processes are in themselves language-like; another is the absence of available alternatives. Although concepts in themselves have always been thought of as having imaginal properties, the only way in which they could be combined into complex wholes, specifically into propositions that could be true or false, appeared to be by means of language-like structures such as the subject-predicate relation. To be able to build such combinations
naturally seemed to require mastery of the rules of logic. When artificial intelligence began, the assumption was implicitly present in the logical foundations of the enterprise. Yet the "mental logic" hypothesis was thrown in doubt on what may be considered its home ground, the area of deductive reasoning. Based on studies of what happened when actual human subjects were given reasoning tasks, Johnson-Laird (1983) found that the degree of difficulty involved in reaching the right conclusion from standard pairs of premises in a syllogism varied considerably in a way that did not fit with an assumption that reasoning consisted in following logical rules. Building upon an idea proposed by Kenneth Craik (1943), Johnson-Laird suggested instead that understanding worked by setting up a form of analogical representation of a state of the world, a "mental model" in which the premise is true, and showed that it was possible to predict the difficulty of a particular syllogism on the basis of how many models it would require to check the validity of the conclusion. Instead of applying rules to a formula, we construct a model and derive the properties of the world of which we are speaking from the properties of the mental model. Let us take the simplest case:

All artists are beekeepers
All beekeepers are chemists

All artists are chemists

In understanding the premise all artists are beekeepers, we set up a mental model in which there are some entities with the property of being artists, and these are then endowed with the property of being beekeepers as well. In this simple type of syllogism, when we come to the minor premise we take all the beekeeping artists of the model that we already have and assign them the property of being chemists. In other types of syllogism it is not so simple, the most difficult type being the one which requires three models to check out the inferences; the example is all of the bankers are athletes combined with none of the councillors are bankers, which few people can cope with.
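The procedure of reading a conclusion off a constructed model, rather than deriving it by rule, can be loosely sketched in code. This is a sketch of my own, not Johnson-Laird's notation: representing a model as a small set of token individuals, each carrying a set of properties, is an illustrative assumption.

```python
# Illustrative sketch of mental-model reasoning: premises furnish a model
# of token individuals; the conclusion is "read off" the model by inspection.

def build_model(n=3):
    """Set up a model containing a few token individuals (property sets)."""
    return [set() for _ in range(n)]

def assert_all(model, prop_a, prop_b):
    """Interpret 'All A are B': every token bearing A is endowed with B."""
    for token in model:
        if prop_a in token:
            token.add(prop_b)

def read_off_all(model, prop_a, prop_b):
    """Check 'All A are B' by inspecting the model, not by applying rules."""
    return all(prop_b in token for token in model if prop_a in token)

# The simplest syllogism from the text:
model = build_model()
for token in model:
    token.add("artist")                        # entities that are artists
assert_all(model, "artist", "beekeeper")       # major premise
assert_all(model, "beekeeper", "chemist")      # minor premise
print(read_off_all(model, "artist", "chemist"))  # True
```

The harder syllogism types would need several alternative models to be constructed and checked, which is exactly where the predicted difficulty comes from; this one-model case corresponds to the easy end of the scale.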


An implication of this theory which is central from our point of view is that in understanding utterances we use mental representations that are not essentially a language. Instead, understanding involves a representation in the form of a construct with certain analogical properties, which stands proxy for the real-world situation that an extensional account would be looking for. We may call it language in the sense that a model "means" the state of affairs that it represents, but in that sense a model of a house or an aeroplane also "means" its original. In the context of meaning, the difference between a model representation and a linguistic representation is more interesting than the similarity. When we move, some of us have experimented with 1:100 paper models of chairs, tables and bookcases, moving them around to find ways in which they will fit. The possibility of being used in this way is a feature which the model shares with the reality it represents — only it is somewhat easier to perform the operations with the model, which is why we do it that way. A linguistic form will not serve the same purpose: shifting around labels with "bookcase" or "table" on them will not tell us whether the objects fit in. Similarly with mental models. Let us take a sentence describing a room, such as there was a picture on the wall to the left, and in front of that a sofa with a tea-table next to it. If we form a mental model of a room, we can furnish it according to the description as we go along, and the model with picture, sofa and tea-table in it then constitutes our interpretation of the sentence. With the notion of mental models, much of the traditional underpinning of the notion of "language of thought" disappears. There is no reason to suppose that we are born with a language complete with syntactic structure, if the job will be done by a generalized ability to form models of states of the world.
Among the attractive features of the model-building hypothesis is its continuity with the developmental scale of cognitive abilities that was described in section 3.9. The ability to reprocess intentional representations can easily be extended with a component which allows us to recombine greater or smaller fragments into whatever representational wholes we like (just as we can mentally turn around three-dimensional figures to check if they are identical). Although there is an obvious complexity difference, there is also a reasonable developmental continuity leading from fairly concrete and inflexible imaginal representations to complex abstract constructions, such as those involved in scientific hypothesis-formation: once we have the capacity to reprocess the output of previous mental processes, complexity can grow by easy stages.


Johnson-Laird is not very specific about the structure of mental models; any hypothesis that preserves the essential property of permitting the subject to "read off" properties from the model he constructs (rather than depend on rules) will be compatible with the notion. In terms of the apparatus of cognitive linguistics, it would be natural to assume that a basic form of mental model is structured by image schemas, since image schemas, as argued by Lakoff (1987), can support logical inferences. To take an example involving the "container" schema: if the toy is imaginally represented as being in the box, and the box is imaginally placed in the bag, then the toy is also imaginally in the bag; and this inference can be "read off" an image-schematic representation without formal rules. As demonstrated by Jean Mandler (cf. Mandler 1992: 598), the ability to handle image-schematic mental constructs is a very early achievement, and it would be plausible to see this as the beginnings of logical skills as well. If mental representations are the output of an immensely complicated mental recycling plant rather than language-like constructs, the relation between thoughts and the meanings of sentences expressing these thoughts needs to be approached anew. Linguistic expressions cannot stand directly for the mental structures that are constructed by the individual; the relationship, rather, is one in which linguistic meanings trigger the construction of the mental model. This relationship between language and mental representation is what is captured in the label of "procedural semantics". Procedural semantics arose naturally from a computational perspective. Although the goal in working with a machine may be to enable the machine to produce and process representations, the general format of man-machine communication is not "constative" but "directive"; programming is a matter of telling the machine what to do. It is therefore natural that people working with computer simulation of inferential processes should come to speculate on the properties of the process of building representations rather than on the properties of complete representations per se. One of the early discussions of this point of view is Davies and Isard (1972), an article whose point is succinctly encoded in its title, Utterances as programs. In getting a program to work, two phases are necessary. The encoded instructions must make contact with the point of reference; hence, if an instruction is to replace x with y, it is necessary to locate x before anything can happen. Once that contact has been achieved, the instructions can be carried out, i.e. the program can "run".
It is therefore natural that people working with computer simulation of inferential processes should come to speculate on the properties of the process of building representations rather than on the properties of complete representations per se. One of the early discussions of this point of view is Davies and Isard (1972), an article whose point is succinctly encoded in its title, Utterances as programs. In getting a program to work, two phases are necessary. The encoded instructions must make contact with the point of reference; hence, if an instruction is to replace χ with y, it is necessary to locate χ before anything can happen. Once that contact has been achieved, the instructions can be carried out, i.e. the program can "run".


Similarly, the point of reference of an utterance can be seen as something that is changed in the process of interpreting the utterance; an interpretation is an operation or "procedure" that is carried out on a point of reference. George has left is an instruction to locate George (in your point of reference) and add to the information stored the circumstance that he has left. One of the uses to which they put this point of view is an analysis of reference failure. As the addressee of the King of France is bald, you are aware that you have to locate the King of France and add a piece of information about him, and in that sense you understand the utterance perfectly well; only you cannot locate him and therefore the "program" cannot be executed. The crucial role of the procedural approach in this context is that it makes possible an extension of the functional view of meaning into the propositional core of language. The standard division of labour between the functional and the representational view, as we have seen, is that meaning is centrally representational, but needs a functional-pragmatic envelope to be wrapped in. With the procedural view, we get instead a situation where each semantic element has a job to do in producing the representational content that the utterance is supposed to convey. In relation to language, therefore, representation stands on the shoulders of function. Without meanings doing their job, we do not get the intended representational content. The word "procedural" thus transfers the focus to what goes on before we have reached the completed representation that constitutes our interpretation of an utterance — meaning as process input rather than meaning as the content of the envelope. However, it does not in itself tell us what the interesting procedures are. Within a computational context the procedural point of view does not entail any fundamental change in views on representation and interpretation. 
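The two phases, first locating the point of reference and then carrying out the instruction, can be sketched in code. This is my own minimal rendering of the Davies and Isard idea, not their formalism; the dictionary-of-referents "discourse state" and all names below are assumptions made for illustration.

```python
# Sketch of 'utterances as programs': an utterance is an instruction that
# operates on a point of reference (here, a dict of known referents).

class ReferenceFailure(Exception):
    """The 'program' is understood but cannot be executed: no referent."""

def interpret(state, referent, update):
    # Phase 1: make contact with the point of reference.
    if referent not in state:
        raise ReferenceFailure(f"cannot locate {referent!r}")
    # Phase 2: carry out the instruction, changing the point of reference.
    state[referent].update(update)
    return state

state = {"George": {"present": True}}
interpret(state, "George", {"left": True})      # 'George has left'
print(state["George"])                           # George now marked as left

try:
    # 'The King of France is bald': the instruction is perfectly clear,
    # but phase 1 fails, so the program never runs.
    interpret(state, "the King of France", {"bald": True})
except ReferenceFailure as exc:
    print(exc)
```

The reference-failure branch is the point of the exercise: the addressee understands the utterance (knows what program to run) while being unable to execute it, which is how the procedural view analyses the King of France case.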
In spite of the mildly disquieting nature of the new point of view, the discussion in the artificial intelligence camp between the procedural and the declarative views did not have very much empirical substance (cf. Johnson-Laird 1983: 247). Whether the representational capacity of a machine works by associating symbols through a procedure or by having full propositional representations in its memory may make a difference in terms of what is practical, but need not differ in ultimate implications. Although Johnson-Laird argues against the language-of-thought hypothesis, his own use of procedural semantics remains essentially within the bounds of the "classical" computational picture, in terms of
which procedures and final models have a one-to-one correspondence. However, he differs in going beyond the mental world of representations; many of the properties of logical semantics are retained in his version. Instead of mental representations, he understands meanings in terms of truth-conditions specifying the properties of the model; he also allows the propositional format to retain a role in memory and understanding, especially in cases of indeterminacy where too many possibilities are open for a single representative model to spring to mind (cf. Johnson-Laird 1983: 162, 264). In relation to natural language utterances, however, the implications may make more of a difference than within the computational context. For the computer programmer it does not matter whether the description is given in terms of the representations that you end up with or the instructions that are used to build them up, because both are equally part of the whole process going on in the machine, a process which you can control in exactly the same way whether you look at it one way or the other. However, the processes going on in the heads of individual speakers in connection with an individual communicative event can never be part of language "as such". The linguistic input that prompts the speaker's process of interpretation is much more naturally understood as a property of the language, rather than solely of the speaker. Therefore a description of natural languages in terms of the procedural point of view may be substantially different from a description in terms of the actual representations that people end up with. The representational view of semantics, understood in a cognitive context, involves an assumption that there is a complete linguistic meaning in the form of a mental representation. The procedural view, when applied to natural language, means that what language codes is process input, not representation.
The meaning of a complex linguistic expression thus consists of hierarchically organized "instructions" rather than a complete representation; and since we cannot determine the executive powers of the addressee with the precision that we can impose upon a computer, the meaning of a natural language utterance cannot be put in a one-to-one relation with the product resulting from the "execution", as it can in a machine context. The procedural point of view is involved in many otherwise different approaches to semantics. Discourse Representation Theory (cf. Kamp 1981, Kamp—Reyle 1993) concentrates on reference-tracking, and shares with
Johnson-Laird's approach the close affinity with logical semantics: the representations that are built up as a result of the linguistic input are provided with an evaluation procedure specifying the truth-conditions of the representations.8 The differences between traditional semantics and the perspectives opened by the procedural points of view are emphasized by other authors, for instance Winograd, cf. his criticism of Wilson 1975 in Winograd (1976: 272); in Winograd & Flores (1986), he subsequently radicalized this criticism considerably. In his account, there is no possibility of neatly severing meaning understood as truth-conditional content from the rest of what is going on:

There is no single formal object which is the "meaning" or "primary intention" of an utterance. Overall meaning is an abstraction covering all of the goals. Many of the goals are at a meta-communicative level, dealing with the personal interaction between speaker and hearer rather than the putative content of the utterances. (Winograd 1976: 269).

A central advantage of the procedural point of view is pointed out by Fauconnier in relation to his theory of mental spaces:

Relatively simple grammatical structures give instructions for space construction in context. But this construction process is often underdetermined in the grammatical instructions; thus, simple construction principles and simple linguistic structures may yield multiple space configurations. And this creates an illusion of structural complexity. (Fauconnier 1985: 2)

The point of listing these three approaches together here lies in the fact that they all explore ways of disentangling the product of language understanding from language itself. This is a decisive step in understanding why linguistic meanings are not representations. What you have arrived at when you have properly understood a linguistic utterance is not its linguistic meaning, but something much richer and more valuable to you: the way in which the utterance changes your own actual, situational understanding. Linguistic meanings are just a step on the way. Following some of the authors I have quoted, I use the word "instruction" about linguistic meanings to emphasize the procedural, dynamic nature of meanings as constituting process input rather than static representations; the same point has been made with words like "clues" or "hints". Some people I have discussed this with see "instructions" as being too one-sidedly hearer-oriented; we shall
return to this criticism when we have discussed the social aspect of meaning below.

4.7. Concepts and conceptual meaning in the procedural perspective

According to the procedural view, linguistic meanings generate representations but do not themselves contain finished representations. This may appear either to be quibbling or to go against the most obvious fact about the most obvious type of meaning, namely that horse means 'horse', period. Yet on closer inspection, it will be apparent that there are advantages in a procedural approach also to concepts, and to conceptual meanings of words. These two things are obviously very closely related; the reasons for maintaining a distinction will emerge below. To see why a procedural approach makes sense, let us begin by noting that it is easier to describe what is involved in conceptualization procedurally than it is to say what a concept "as such" is, as a static entity in itself. We may say that animals have conceptual abilities when they can solve certain tasks satisfactorily; but we do not know precisely what mental constructs enable them to do so. When Alex, the most well-educated parrot, is faced with questions like "what's the same about these two things?" and comes up with "colour", we conclude that he is able to abstract in certain ways about objects and align properties with linguistic labels — i.e. that he can carry out certain mental procedures — but we would be unable to say what the nature of his concept was, as a construct per se. And even with human subjects, the nature of conceptual constructs is not available for introspective inspection. The process whereby we use words to generate representations has almost totally Background status; it is something we have to "rely on". Only the finished representations are typically accessible to consciousness. If we were to assume that speakers know the meaning of words in the knowing-that sense, we would have to say that it is the finished representations that constitute the meanings, since we have no access to the procedures themselves.
But everybody agrees that knowing a language the way a native speaker does is a form of knowing-how. Fluent speech presupposes automation; natives speak as expert skiers ski, without following explicit rules. Yet the knowledge-based tradition has led linguists to suppose that the structural parts of linguistic expertise should be
understood as "declarative knowledge", while only the marginal and pragmatic parts constituted "procedural knowledge" (cf. for example Faerch—Kasper 1986). Instead, we should say that all of the mental apparatus which enables us to speak has fundamentally procedural status. The difference, then, between the conceptual and the less obviously conceptual types of meaning, is not in the procedural properties — because they are shared between all types of meaning. The difference is that some procedures are specifically designed to add representational content. To do so, they must have inherently representational properties, and in that sense their meaning is indeed representational. The procedural view, where those properties are seen as having the status of "contribution-to-generating-representation" rather than complete representation, also makes it easier to accommodate the insights into the complexities of conceptual organization that have been brought to light by cognitive linguistics. If concepts are supposed to consist of necessary and sufficient conditions, it is natural to assume that the procedural and the static representational approach really come to one and the same thing. But if conceptual structure includes phenomena like family resemblances and similarly sprawling configurations, all of which are ultimately linked up in one vast network (cf. Langacker 1987a, 1991; Lakoff 1987), the difficulty of defining an item which constitutes "a concept" is much greater. In contrast, it remains natural to say that there is a set of procedures which individuals can use in understanding and producing representations to which these conceptual structures contribute. This raises the question of compositionality, which will be further explored in Part Two. When you say boxing ring, wedding ring, smuggling ring, it is not the same "ring" you are talking about, yet it would lead to an unrealistic number of ambiguities if all shades were factored out into unrelated meanings.
Possessing, or knowing the word "ring" is much more easily understood as being able to use it so as to produce the right individual representation on different occasions — than if we were to say precisely what representation constituted the concept-as-such. Knowing a word, in other words, does not consist in having access to one complete representation, but in having a procedure for generating a range of different representations, dependent on the linguistic and nonlinguistic context. But although the network as a whole is not unproblematically available for introspection, the semantic connections of which it is made up are in principle the sort of thing that you can become aware of. The knowledge that underlies those procedures which constitute conceptual competence comes into existence through events that are consciously accessible. When

Concepts in the procedural perspective 117

we come across a new word, we are aware of both the expression and the context in which it is used; but as we become familiar with it, the individual explicit occasions of use blend "into the background", and become part of the knowing-how that enables us to use and understand it automatically. This understanding of conceptual meaning lends itself well to simulation in terms of neural networks (cf. Plunkett—Sinha 1992; Plunkett et al. 1992). The experiment that is most central from the point of view discussed here involves a double matching procedure, in which a network is trained to go from images to labels (production) and from labels to images ("comprehension")9. As an additional twist, the images shown were organized around prototypes in such a way that the network was never shown a prototype during training; all the images that were fed into the network with an associated label were displaced in relation to the prototype. The result showed several interesting features. Plunkett et al. (1992) point out that a neural network of the kind described can give an account of "symbol emergence": we learn to use a symbol correctly by forming a concept that can be used to subsume different instantiations and use the label to evoke that concept. The plausibility of the experiment as a modelling of vocabulary learning in children rests on some familiar features which the network shares with children: the process creates prototype effects (the network extracts the prototype from the displaced input images); there is a vocabulary spurt at a certain phase in the training, after a time of comparative standstill; comprehension is ahead of production throughout. Compared with a similar experiment, the results also suggest something interesting about the relationship between concept learning in the absence of language and concept learning in the context of linguistic labels: learning was faster when there was a label to attach the concept to.
My own experience of trying to learn the voices of birds shares this feature: even with an expert beside you to tell you the names, it is not always easy to learn to recognize the song of a particular bird; but the difficulty pales into insignificance compared with the situation when there is no-one to provide the label — as in the case of the ornithologist who spent a number of years discovering new bird species merely by listening to voices of invisible birds in the forest canopy in the Amazon rain forest. It is reasonable to assume, therefore, that linguistic concepts are in some respects easier to handle than concepts with no linguistic status.
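The prototype-extraction side of the experiment described above can be sketched in a drastically simplified form. Nothing here reflects Plunkett et al.'s actual architecture or parameters (the sizes, the training rule and the one-direction "production" mapping are my own simplifications): a one-layer softmax network is trained only on displaced variants of three random binary "images", and the never-seen prototypes are then classified correctly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes, n_train = 20, 3, 60

# Three random binary "image" prototypes; the network never sees these.
prototypes = rng.integers(0, 2, size=(n_classes, n_features)).astype(float)

def displace(proto, flips=3):
    """A training image: the prototype with a few features inverted."""
    img = proto.copy()
    idx = rng.choice(n_features, size=flips, replace=False)
    img[idx] = 1.0 - img[idx]
    return img

X = np.array([displace(prototypes[i % n_classes]) for i in range(n_train)])
y = np.array([i % n_classes for i in range(n_train)])

# "Production" direction only: image -> label, a one-layer softmax network
# trained by gradient descent on cross-entropy loss.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

for _ in range(300):
    grad = softmax(X @ W + b)
    grad[np.arange(n_train), y] -= 1.0  # dLoss/dlogits for cross-entropy
    W -= 0.1 * X.T @ grad / n_train
    b -= 0.1 * grad.mean(axis=0)

# Prototype effect: the unseen prototypes are classified correctly,
# because the weights have averaged over the displaced exemplars.
pred = softmax(prototypes @ W + b).argmax(axis=1)
print(pred)
```

The network is never told what the prototype looks like; it "extracts" it as the central tendency of the displaced exemplars, which is the sense in which such models create prototype effects.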

118 The functional view of meaning

The structure of this process can also be used to say something about why the assumption of harmony between natural kinds (out there) and concepts (in the mind) is in some sense inherent in the way we learn and use concepts as part of learning to cope with the environment. If the mechanism functions as it should, the mind learns to produce representations that match the kinds of input that it gets. When we are told, for instance, that there are two distinct kinds of chimpanzees out there, we must modify our chimpanzee concept in order to be able to handle new chimpanzee information that depends on this distinction. This is one reason why the problem of whether conceptual distinctions reflect the way nature is or the way we think continues to resurface — especially in the case of generalizing constructs like animal species and the Gauss curve. If conceptual distinctions are to be adequate, they must do both at the same time; at some level of abstraction, the structure of our concepts must match the structure of the world we are talking about. But we can never get any general answer to the question of whether that level of abstraction is the right one. All we can say is that our ability to cope with the environment depends on our ability to generalize in appropriate ways; in other words, the ultimate embedding of conceptual skills is in the functional relations between the animal (us) and the environment.10 In relation to conceptual skills, linguistic meaning involves yet another level of complication. Whatever was required in order to invoke a concept (linguistically encoded or not) in one's own cognitive processes, the ability to invoke it appropriately in the addressee comes on top of that, and enters into the interactive competence of the subject. As one salient extra element, it involves the status of meanings as social entities rather than elements in one's own private mental processing. The ontology of social entities is very complicated (cf.
Sinha 1988, Searle, work in progress) and involves the interplay of subjects with respect to each other and all the biological and physical entities that they relate to. As an example of a social entity we can take the reputation of a historical figure such as Winston Churchill or John F. Kennedy or a business company such as ITT. The properties of a reputation cannot be captured by reference to what any individual observer thinks about the person or business. You can only get at it by asking people, but in principle no single answer is ever sufficient. Of course one person may provide a description which turns out to match exactly what a thorough survey yields as its result. But that would not mean that a person's reputation was ontologically identical to what George Smith, of 18 High Street, Poughkeepsie, N.Y., said about


him. Rather, we would say that George Smith turned out to be right about what his reputation was; and if it is to make sense for him to be right, there must be an object which he turned out to be right about. His reputation, therefore, is that social entity about which (after careful investigation) George Smith turned out to be right. Similarly with meanings. If we ask someone what a word means, we do not ask what conceptual structure this word corresponds to in his head (even if this is the only resource available for our informant to produce an answer). This has also been taken up in the philosophical discussion following Putnam's position on the division of linguistic labour (cf. Putnam 1975b). Distinguishing between "the word", and "the [individual's] entry for the word", Burge (1989: 181) shows how we can only know the individual's understanding of the word via the study of the interaction he takes part in. What we ask about, when we want to know the meaning of the word, is what its function is in the speech community — what it can do for us, if we hear or use it in communication. An individual does not possess or carry around that meaning as his own head-internal property. We say about him that he knows what it means, just as he may know the way to the library — but neither the meaning nor the way to the library is in itself inside his head. The point of this is that concepts constitute a point at which the representational powers of the mind and language as a social construct overlap, but without merging into the same thing. Word meaning presupposes conceptual skills, but is a more complicated thing. Only in the dictionary does a word "stand for" a concept in the way traditionally assumed, and speakers are not usually walking dictionaries — outside dictionaries it exists as an option for interaction, for use.
Knowing a linguistic concept really well means that you can invoke it for a variety of different purposes, imposing just that kind of conceptual order on the subject matter that you would like — not that you can make a detailed chart of your own network. In the neural network, a series of slightly different actual instantiations corresponding to one label gave rise to a prototype — which was the machine's way of handling the pattern of variability. The interactive dimension, however, adds another dimension of potential variability that speakers must handle: the conceptual patterns may be different between speakers. We do not know to what extent the patterns are alike in speakers; as generally recognized, the most sensible way of being realistic about


variation (while accounting for the fact that it makes a difference whether we speak the same language or not) is to say that concepts are sufficiently similar in different speakers. Gärdenfors (1991) suggests the notion of "linguistic power structure" as a way of accounting for the way individual meanings and social meanings are made to coincide and invokes Wiener's notion of a "virtual governor" (1961). This notion refers to an experimental setup in which a number of devices coded to give off impulses at the same frequency are linked up in a network. It turns out that the performance of the individual device is much more reliable than if the devices are made to work alone, which illustrates how, although no device is made to keep the others in line, social control can be seen to exist as an emergent property of the system of interrelated speakers. However, I think it would be a logical extension of variability between instantiations from the individual point of view to set up a notion of linguistic "macro-concepts" (as a special case within the range of possible types of concepts) in a manner that explicitly incorporated potential social variability. This would be a logical consequence of seeing meaning in terms of interactive function: meanings are not in speakers, but in speech communities. What you do when you use a word is thus not just to invoke your own concept in the hope that the addressee has the same one: you call on something which is essentially in the social space between speakers. In terms of connectionist simulation, a linguistic concept could be seen as being constituted not by the variability pattern in an individual brain, but by a network of such patterns as occurring in the speakers of the speech community: a network of networks. Knowing how to use language entails some degree of awareness of this. 
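Wiener's virtual-governor setup is easy to simulate. In this sketch (the parameters and the mean-field coupling rule are my own choices, not Wiener's), each "device" is an oscillator with a slightly wrong natural frequency, perturbed by noise; with coupling switched on, a common beat emerges that no single device dictates:

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt = 20, 2000, 0.05
# Each "device" has a slightly imperfect natural frequency around 1.0.
omega = 1.0 + 0.1 * rng.standard_normal(n)

def order_parameter(theta):
    """R = 1 means all devices tick in phase; R near 0 means no coherence."""
    return abs(np.exp(1j * theta).mean())

def run(k):
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()
        # Mean-field pull toward the common phase: no single device governs,
        # yet each is corrected by the aggregate -- the "virtual governor".
        pull = k * abs(z) * np.sin(np.angle(z) - theta)
        noise = 0.2 * rng.standard_normal(n) * np.sqrt(dt)
        theta += (omega + pull) * dt + noise
    return order_parameter(theta)

r_alone = run(0.0)    # uncoupled: frequency spread and noise disperse the phases
r_coupled = run(1.0)  # coupled: an emergent common beat keeps them in line
print(round(r_alone, 2), round(r_coupled, 2))
```

With coupling off, the order parameter stays near the random-phase level; with coupling on, it approaches 1. The discipline is an emergent property of the ensemble, which is the analogy the text draws with social control over meaning in a speech community.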
What you will be deemed to have said is not in your own hands, because by opening your mouth instead of merely thinking you commit yourself to invoking other people's conceptualizations instead of sticking with your own. A case in point was when a Danish politician used the word fascist about a proposed collective agreement between government, trade unions and industrial employers on tax, wages and working hours. She meant it in a technical sense, whereby the introduction of one top-down decision-making process to replace ordinary decision-making procedures is one of the marks of fascism. It was (of course) understood as meaning "ruthless oppression in violation of a democratic constitution"; and she had to admit that the term she used was inadequate, even if it was legitimate in terms of her own semantic network. If we follow up the implications of the role of the linguistic macro-concept, this withdrawal was the right response: unlike


Humpty-Dumpty's words, ours are assigned meaning by a process in which we are not the only shareholders. This type of phenomenon is one reason for the consistent hearer-perspective I maintain in this book. I do not mean to imply any dominance of the hearer-perspective over the speaker-perspective, or imply that words have nothing to do with what is in the speaker's mind but are only designed to trigger something in the hearer's mind. Rather, I concentrate on one of the two possible perspectives, hoping to capture what is best viewed from that perspective as well as what can equally well be viewed from both perspectives. If you believe that the social pattern is logically prior to the speaker's capacity to take part in it, it is reasonable to assume that speakers are by and large sensitive to the overall pattern of which they form part. Extreme insensitivity is one of the key criteria of insanity: socially unconstrained creation of new words unlicensed by existing patterns ("neologisms") is a symptom of schizophrenia. But clearly a full account of language in communication must include also those processes which are best viewed from the speaker's perspective; and these aspects my account does not capture. The notion of macro-concept can provide a focus for the investigation of the social life of important concepts. One of the most important concepts that have been systematically investigated in this fashion is the concept "culture" (cf. Raymond Williams 1958, 1981; Fink 1988a,b). From its beginnings as a derivative of the Latin verb colo 'cultivate' (in the agricultural sense), it grew to stand as a key concept in Western thought about human activity and identity.
Because of its central status, there have been many attempts to define and redefine it, including head-on clashes between rival understandings; as discussed by Fink, a number of conflicts between popular and elite, divisive and consensual senses, play a role in understanding the word; and the quarrel is part of what you need to know to understand the word fully. Most speakers will only have part of the network at their disposal; but the history of the interpretation processes associated with this word has a form of reality that cannot be captured by seeing this as merely a case of bad coordination between individuals. Admitting this type of social variability means opening a slot for awareness of the existence of my language and your language as well as our language in the area of semantics. In sociolinguistics, we are familiar with the fact that patterns of prestige determine phonological and syntactic variation. It would be surprising if semantic variation did not follow the


same pattern: the confident prestige speakers feel justified in taking their own conceptual organization for granted, while the status-seeking hypercorrectors use the okay words of the higher group in the hope of invoking an understanding which is only half accessible to themselves.

4.8. Searle on representation and interaction

I have now described what I see as the relation between communicative function and representation. The central idea is that all meanings are interactive, and some are also representational in the sense of being specifically designed to cue the hearer's representational powers. I now return to the last element in the discussion of representation and interaction: the argument against the view that says, on the one hand, that language is primarily a vehicle of communication, but (on the other hand) meaning has a representational nucleus that is logically prior to any notion of communication (cf. Searle 1991). While Searle's views of cognition have been decisive for the picture that has been presented above, I think his recent views on meaning do not do full justice to the functional-interactive dimension, as also pointed out by Nuyts (1993). The early emphasis on linguistic action in Searle's thinking has receded in favour of an analysis in terms of conditions of "satisfaction" (leading up to a further, illocutionary analysis in terms of direction of fit). This is not necessarily identical with a traditional truth-conditional analysis, but it involves the same identity between the content of a belief and the content of a sentence expressing that belief. For the reasons described above, I think this is imprecise as an account of linguistic meaning. The development away from meaning understood in terms of communicative action is made clear when Searle, in essential agreement with the position of Chomsky (1975b: 61-62) and Dummett (1989: 209), argues that meaning can exist in complete isolation from communication. The argument takes the form of a discussion on the possibility of somebody using a meaningful sentence or gesture without any communicative intention at all. According to Searle, for something to have meaning it is sufficient that it should have conditions of satisfaction.
These conditions of satisfaction are imposed by the speaker upon his utterance or gesture as a step that is logically prior to the possibility of communicating them. As analyzed by Searle, the gesture itself is also a condition of satisfaction for the meaning in question: until the gesture has been made, it cannot have the meaning


intended. This means that Searle can elegantly, albeit somewhat obscurely, define meaning in terms of "imposing conditions of satisfaction upon conditions of satisfaction" (1991). The relevant gesture acquires the Gricean reflexivity simply because once the conditions are made the subject of a communicative intention, recognizing these conditions-of-satisfaction is sufficient for the communicative intention to be successful. One can agree that the separation of meaning from a communicative intention is analytically possible without therefore agreeing that this separation brings out the essential nature of meaning. Searle uses the separation to make a point which I see as valid against a Gricean position, defended by Jonathan Bennett (1991). The weak point in the Gricean definition is that it assumes that meaning is ultimately a matter of perlocutionary intention, i.e. reducible to an individual communicative event that takes place between two agents. Searle shows that meaning must in some sense exist before being communicable: it does not arise because of the intentions of the individual speaker. The structure of my argument against Searle's position is analogous to his own argument against Bennett in the sense that I claim a similar presupposed status for the interactive pattern that is the ontological locus of language according to the argument above. The reason why Searle's argument is not definitive is that it only proves that mental content must exist before being communicated — not that linguistic meaning must exist independently of communication. In order to intend to transmit a proposition, the speaker must first be able to entertain it in his head — but mental representations, as argued above, do not in themselves constitute linguistic meaning.
My story differs from Grice's in saying that once we have communication, linguistic meaning is logically prior to any concrete linguistic utterance; it differs from Searle's in claiming that linguistic meaning presupposes a pattern of communicative interaction. This is most easily seen in the case of those types of meaning which are not representational, such as greetings. As we have seen, it would be misleading to analyze hello as drawing upon a hello concept — rather, the expression invokes an interactive routine that relates directly to the situational context. The meaning (= canonical function) of hello is to bring about a certain interactive relation to the other person — and the relation itself is prior to a representation of it. But it applies also to representational meanings, which are the chief concern here. The argument is somewhat more complex, however.


The reason why a pattern of communicative interaction must be presupposed also in the case of an utterance with a propositional content is that "imposing conditions of satisfaction" upon an arbitrary expression or gesture does not make sense in a world where such conditions are not communicable. The action sequence has three aspects:
- making a gesture in complete solitude
- assigning conditions of satisfaction to it
- thereby establishing a derived Intentional relation to a state of affairs which it represents and which constitutes its meaningfulness
In doing this, a subject would be constructing a very strange object, suggesting an element of witchcraft (which would be cheating, since there would then be an assumed audience of dark powers): semi-externalizing his primary intentionality for himself alone, and in such a way that it is accessible via his own mind only — since there is no thought of anybody else becoming privy to the conditions of satisfaction that are willy-nilly imposed on an arbitrary segment of the external world. How can the meaning-bearing gesture be reasonably described as a "condition of satisfaction", if the meaning-bearer does not carry its meaning anywhere? The question is what, if anything, it is a potentially unsatisfied condition of. In an ordinary utterance, the making of sounds is indeed a condition-of-satisfaction (otherwise the addressee could not hear any utterance), and it is also necessary that the sounds should have the intended meaning (otherwise he would not have a clue as to the intended message); the two levels of conditions of satisfaction are clear enough — but only because the meaning-carrier is assumed to actually carry its meaning somewhere, i.e. be a vehicle of communication. In Searle's scenario it does not work.
Take for instance the case of somebody first decreeing that an arbitrary meaning-bearer (pointing a stick towards the sky, for instance) means something like "I am going to stop smoking", then pointing the stick towards the sky. A crucial difference in relation to ordinary language is that the criteria of satisfaction would be without any conceivable causal relevance. It would be entirely up to the speaker to determine whether the whole ceremony had been performed to his own satisfaction, and what was to follow from it.


If we take the arguments against soliloquy above p. 100, the additional point made by this argument is that representational meaning, even for an isolated individual, has no causal relevance except in the context of communicative interaction. There is action involved in Searle's scenario, as required by Dummett; but the action itself fails to "make a difference". If, therefore, we regard representational meaning only as involving conditions of satisfaction, we have no way of explaining how it evolved. We may compare this scenario with a secret vow made in ordinary language. Such a vow borrows its criteria of satisfaction from the criteria that apply to communicative use of language, and may thus acquire a force derivable from the force of public commitments; the speaker functions as his own privileged audience, proving his will power by supplying all the necessary pressure on his own. Evolutionarily speaking, then, the step surely must be directly from the inherent, subjective intentional content, to communication of representational meaning. The postulated intermediate step of noncommunicative representational meaning is at best a barely conceivable eccentricity. What we end up with, therefore, is that representational meaning — as opposed to representational mental content — arises when cognitive resources come to be systematically invoked for purposes of human interaction. The conceptual core of human language requires two forms of sophisticated skill of language users: it requires them to form concepts that can be associated with linguistic expressions (the cognitive prerequisite) and it requires them to invoke such concepts in ways that fit into its interactive pattern (the communicative prerequisite). Cognitive sophistication is useful in itself regardless of interaction; but as an aspect of the language ability, it is the junior partner. 
Although the processing of a linguistic utterance presupposes both skills, there is a unilateral relation between them: representational skills are invoked in order to make communication possible, not vice versa — we do not, in the standard case, communicate in order to generate representations. Hence, from a linguistic point of view, conceptualization is the servant of communication.

5. Semantics and pragmatics in a functional theory of meaning

5.1. Introduction

Understanding meaning as communicative function has significant implications for the relationship between semantics (as the discipline that covers linguistic meaning) and pragmatics (as the discipline that covers linguistic interaction). One of the features of the picture that emerges is that semantics is transparently a subdiscipline of pragmatics — "frozen pragmatics". The role of pragmatics as the natural whole of which semantics is one part has been suggested a number of times (cf. the manifesto of the Journal of Pragmatics, Haberland—Mey 1977; cf. also Allwood 1976: 3), but has not succeeded in displacing the traditional picture. The purpose of this section is to use the communicative view of meaning to throw light on some of the misunderstandings that are tied up with the picture where semantics is the representational core and pragmatics is the communicative periphery. With respect to the status of pragmatics, an obvious implication is that it does not make sense to think of pragmatics as a well-defined area within linguistics, standing beside semantics and syntax in a catalogue of linguistic subdisciplines. Levinson's heroic attempts (1983) to find a coherent definition of pragmatics on that basis illustrate the difficulties it gives rise to. To suggest an alternative image to the core-periphery model, it might be productive to think of pragmatics and semantics in terms of a mountain in the ocean (rather like the Hawaiian islands): most of it is under water, but the part of it which breaks through the surface, analogous to the linguistically coded aspect, is what tends to capture our attention. The reason this is preferable to the proverbial iceberg is that pragmatics is not a hidden danger, but the stuff that all linguistic phenomena are made of.
With respect to semantics, it follows that the field of linguistic semantics arises out of a narrowing of the full pragmatic perspective. Such a narrowing is neither good nor bad, just a practical frame for certain types of descriptive purposes. We need to remind ourselves periodically of what is inside and what is outside the frame we have chosen, in order to avoid two basic mistakes: one is to see the frame as part of the world — a Chinese wall surrounding an autonomous territory (more on that in Part Two); another is to announce as an empirical insight that the frame does not exist out there in the world, and use that as an argument for abolishing


it. In other words, those who announce that semantics should be investigated as an autonomous domain regardless of pragmatic circumstances and those who announce that there is no distinction between (coded) semantics and (the rest of) pragmatics are both wrong. The first mistake is the most pernicious, because it severs the connections between phenomena inside the frame and the rest of the world. The second merely gets us back to square one: with no defined field of inquiry we have to begin again to look for a motivated delimitation of what we want to investigate. But what if there is no motivated frame within which we find language — because phenomena that we understand as linguistic shade off into phenomena that are nonlinguistic, as argued convincingly by Langacker (1987a: 154-61)? As his own descriptive practice shows, however, the fact that there is a cline between what is coded and what is not coded is no argument against investigating coded meaning, as long as some aspects of meaning are more obviously coded than others: clines are just as interesting as rigid boundaries, only more difficult to handle. As emphasized by the peak-foundation model, semantics can only arise as part of a larger conceptual and interactive universe; but the properties of the pattern imposed by coding are not reducible to any pattern of experience that exists already before, or in abstraction from, linguistic coding (see the discussion on iconicity in Part Two, chapter 5). The traditional picture gave rise to something like a war between semantics and pragmatics. Where the tradition stayed within what it saw as the representational core and despised the pragmatic chaos outside, opposing forces argued that pragmatic factors ought to be treated as the centre instead of the idealized, artificially isolated areas that usually got all the attention. But since semantics is part of pragmatics, it does not make sense to see the two fields as competitors.
We need to rephrase the real questions in this area as questions about how different types of phenomena interact. This section attempts to pose three and answer two of these. In the traditional picture, questions about the relation between the two fields assumed that representational properties were constant and interactive properties were situation-dependent. But this identification was merely a simplistic illusion: all meanings are interactive, and both representational and nonrepresentational properties of the received message may be either coded (and thus have an element of constancy) or "in the air" (i.e. situation-dependent). We therefore need to ask anew the following two questions:


What is the relation between the coded meaning and the communicated message?
What is the relation between representational and nonrepresentational aspects of the coded meaning?
The first question is the main issue below. It includes two aspects: the issue of the nonlinguistic parts of the communicative event itself, and the "implicature" problem. The second question will be discussed on a general level in Part Two on structure, and exemplified specifically within the area of tense in Part Three. But first, there will be a brief discussion of a third question that arises when semantics becomes interactive: what happens to the traditional concerns of the knowledge-based tradition in semantics?

5.2. Pragmatics, truth, and Plato

It follows from Putnam's description of his "internal" or "pragmatic" realism (cf. Putnam 1988: 113-20) that the social perspective ("use") and the extensional perspective ("reference") share something which is important for logical purposes, and which is absent in the purely conceptual perspective. To refer to something and to say something true is not naturally described in terms of what happens inside the head of a single individual (cf. the discussion above p. 57): facts and referents would then merely be one among a number of other representational states. In a social context, the ability of words to reach out beyond the speaker's own mental world can be given an interpretation which does not suffer from the shortcomings that Putnam pointed out in the traditional picture of truth and reference. The missing link is the interactive (including interpretive) practices of the speech community, in which individual speakers have a share, and which enable them to get reliably from words to things that are not words. This means that "real values" in Millikan's terms and "natural kinds" in Putnam's terms are a function of the socially shared interactive practices in terms of which meanings were defined above. The unilateral dependence of "natural-kinds-as-denoted-by-linguistic-expressions" upon social coordination can be seen in the case of meanings which are mismatched with the real world. In Gilbert White's Selborne in the eighteenth century the local lore had a name for an animal called cane which was supposed to be halfway between a weasel and a mouse in size. There turned out to be no such animal, but it would not make sense to say that the word was meaningless while it was used; and while it was used it had the status of a natural-kind term, thus demonstrating (in the breach) that reaching-out towards the real world that was expressed above with the quotation from Stuart Mill. The same point can also be made if we start from the functionally relevant environment, working inwards towards language. Things-out-there play a role for the formation of relevant concepts by providing feedback that triggers those responsive mechanisms that are characteristic of functional systems. One word that most people learned in the 80's was aids. With respect to the success human subjects may or may not have in adjusting their interactive practices to the "real value" of this particular word, the term "survival" is not too strong. Hence, reference remains part of the story of meaning; and so does truth. However, the interest in language as a medium for true knowledge that dominates the tradition necessarily belongs in the full pragmatic perspective: language alone cannot tell us whether there are canes or not. The functional definition of meaning implies that it depends on the communicators and the pragmatic circumstances both what the truth conditions of a statement are and what segment of reality it is to be evaluated against. Truth, therefore, is essentially about situationally produced mental models, not about linguistic meaning as such. This may sound postmodernist, but is really the opposite; it would not have surprised Plato to hear that the search for truth was dependent on the understanding and moral quality of the participants rather than being bound up with language itself. Plato was well aware that the written word was defenceless against its interpreters and much preferred the "intelligent word graven in the soul of the learner" (cf.
Phaedrus 275) — and used the dialogue (a form of communicative interaction) rather than a chain of logical statements as a means to get the addressee to generate knowledge. The reason why it has been possible at all to talk about truth as a situation-independent property of statements is a remarkable and persistent feature of the pragmatic context of scholarly communication: a realm of entities that appear to be impervious to changes of time and perspective, inhabiting Plato's realm of ideas, Frege's third realm and Popper's world III (cf. Popper—Eccles 1977). There are two important things to say about this domain. One is, as emphasized by Rorty, that it is generated in the course of human practice. The realm of stable intellectual products is the result of a collective intellectual process. This means that it cannot live up
to the most radically Platonic claims on its behalf, namely that it represents the world irrespective of our feeble grasp of it. The second thing we need to say, however, is that this does not reduce it to the status of a mere figment of imagination. Because of the traditional association between pragmatics and chaos, the extent to which the social dimension is on the side of knowledge, reliability and order tends to be underrated. An important basic mechanism is the "virtual governor" effect that was mentioned above p. 120 as keeping meanings more well-defined than they would be in a single speaker. The same effect benefits the pursuit of knowledge, both informally in everyday life and in the formal institution of science. To the extent assumptions about the way the world is are continually checked and (dis)confirmed socially, the social domain of knowledge remains more stable and reliable than the knowledge of any single subject. Stretching the point, we might hesitate to speak of knowledge unless more than one person is involved. How can a single individual reliably distinguish between "what I am convinced of" and "what I know"? Only the implicit appeal to social validation can give a distinct status to the two categories. Digressing a little, this means that the possibility of reliable knowledge depends on the existence of a social "knowledge contract", according to which putative insights are subjected to selection in terms of a collective fitness criterion of "correspondence with the world as we know it". In all areas that go beyond everyday experience, a radically postmodern denial of any privileged status to the category of knowledge may prove selfconfirming; particularly vulnerable is knowledge of types of fact where human reactions are part of the domain itself, such as the background information that is necessary for democratic decision-making. 
To the extent that the struggle for ratings causes news to be treated as a species of entertainment, with those attendant standards of fitness that are portrayed in Robert Redford's "Quiz Show", there is an end of the chances of getting a reliable picture of what happens in the world from TV. The social and human sciences are similarly at risk if the rules of academic discussion become (de)construed so as to dispense with the need for collective checking; there is no way to force people back into the fold, once they terminate their social knowledge contract because they want to seal off their own truths from outside tampering. In ordinary life, including everyday language use, the cost of eliminating the distinction between real people, cars and earthquakes and such as occur
in the media is too great to permit any blurring. Hence, although the third realm must relinquish its claim to transcendent truth, it occupies a relatively privileged place within the larger context, where only one thing is unconditionally absolute, namely everything (cf. Fink 1988b: 152). We cannot prove realism, but we are bound to rely on it. Truth even retains a foothold within the narrower domain of linguistic semantics. The way we understand a certain class of linguistic expressions, namely declarative clauses, also raises the issue of truth; so in order to have a good semantic theory, we need to be able to account for this interesting property of declarative clauses as opposed to for example greetings. If truth conditions were assumed to be the standard formula for linguistic meaning, this would reduce to the general problem of meaning; but if meanings are communicative functions, we need an account of where truth as a more specific property comes into this picture.

5.3. Coded functions and utterance function

Since linguistic expressions never occur in complete isolation from other aspects of reality, the actual "force" or "utterance function" of a communicative event always has other sources than coded meanings. One important source is the mechanism whereby the speaker imposes, or displays (in the sense of Allwood 1976: 74), himself as an important part of the message. This type of communication is preconceptual, shared with for instance the territorial male in the animal kingdom. The linguistic utterance itself contributes to this effect via the mechanism of presupposition (cf. Harder—Kock 1976): all the things that the speaker takes for granted in saying something are "displayed" when he makes his utterance. There is no guaranteed harmony between what is said and what is displayed, as demonstrated by the phenomenon of "double-bind" (cf. Bateson et al. 1956); and the displayed part tends to be more credible, because it is a form of "showing" as opposed to "telling"1. The same phenomenon is often found in discussions involving prejudices, as shown by Verschueren (1994), with discrepancies between the sanitized explicit verbal messages and the displayed offensive attitudes. If we want a pragmatic theory to tell us what goes on in human communication, it is useful to keep in mind the status of linguistically articulated meanings (including the conceptual parts) as the visible peak of a mountain that stands on the bottom of a preconceptual and prelinguistic ocean.


It goes for all aspects of the communicative event, however, that they have to be understood in terms of the way they "make a difference" in the interactive context. To describe how that happens, we need first to look at the basic mechanism whereby utterance functions are assigned in social interaction, and then see how the use of linguistic expressions feeds into that process. The process whereby coded meaning is contextually enriched is standardly understood in the Gricean manner (cf. Grice 1975). Because of Grice's purely informational understanding of meaning, his account needs to be revised in the light of the functional approach to linguistic meaning. One might think that if meanings are functions, the need for mediation between semantics and (the rest of) pragmatics disappears: utterances have just that function which they are coded as having, and that is all there is to say. But meanings still only constitute part of what brings about the overall utterance function; hence, we still have to ask how to get from the coded functions to utterance functions. Utterance functions are kinds of human actions and therefore share their general characteristics. Like other functions, they must arise within a horizon where they make sense in terms of overall goals, and are compatible with the agent's background sense of what is possible. With Searle's example, you cannot intend to become a cup of coffee. To make sense is the same as being assigned a situational "force"; an utterance that does not make sense is one that cannot be fitted into the situation. In cases of social action there is a special twist to this. With a single individual, intention and purpose may be said to be a private matter between himself and his environment. 
We can watch somebody at a distance going through an apparently senseless series of motions, idly wondering what he is trying to do, without calling into question the intentional nature of his actions — there may be private purposes which are beyond our reach. But social actions such as utterances depend for their meaningfulness on the reactions of others; their intentions have a public dimension. Whatever we do as part of a shared activity must be recognizable as part of an overall pattern in which the other participants can also share. If our social actions fail in this regard, they will fail as a whole, whatever our private intentions are. Therefore we cannot intend to perform them without intending that they should be socially recognizable. Because the work of accommodating to the social fabric of everyday life is by definition something that comes naturally to us (or it would not count as everyday life), it is one of those background phenomena which are
designed to be invisible. We no more worry about whether people we know fit into our familiar social pattern than we worry about whether the floor will give way under our feet. But thanks to the ethnomethodologists these mechanisms have been under the microscope. The method they used to uncover the "routine ground of everyday activities" (cf. Garfinkel 1972) was to see what happened when people took part in everyday activities in ways that deviated from the pattern that such activities normally followed. Sons and daughters at the family dinner started to behave as invited guests. Supermarket buyers started to haggle about prices. Such deviant behaviour, from a coordination point of view, is a spanner in the works. In terms of straightforward, rigidly goal-rational behaviour, what would happen would generally be that the goal did not come off and the activity was abandoned, as described by Lewis (1969). But this is no good when it comes to the business of leading one's everyday life; ordinary everyday life simply has to work (Goddammit!). When subjects started to behave in deviant ways, they therefore provoked an intense activity of "making sense", of finding an interpretation under which the deviant action could be assigned a function in terms of the existing (picture of the) situation. "Making sense" is one of the favourite phrases in ethnomethodology, because it emphasizes that sense is indeed something that is made — by the participants in a social situation. The notion of sense is related to the notion of function in that making sense of an individual social action is the same as assigning it a function in the social situation. It may seem, based on what I have claimed above, as if sense-making activity works so as to make unconventional activity impossible, since the general drift of the sense-making is to find a way of fitting actions into the existing pattern, somehow. However, the act of fitting something into the pattern also changes the pattern itself. 
As an example, students who were sent out to haggle about prices sometimes came home with cut-price goods. Although it was counter-cultural to start haggling, and we can all imagine what would happen if it was attempted by someone who looked like a foreign worker, the abstract purpose of getting something cheap was quite familiar and sensible; and it may also be in the shop's interest to sell at a lower price rather than not sell — so the pattern was adjusted in some cases. In other cases, however, there was no way to adjust the pattern so as to fit in the deviant action. Sons and daughters who started to behave like invited guests could not be fitted into the pattern by a minor revision in cultural expectations. The actions (including utterances) which they
produced could not be assigned any sense compatible with the existing pattern. The ensuing desperate attempts at sense-making subjected innocent participants to great emotional stress, because the familiar world appeared to be cracking at the seams; and the experimenters could not keep up their roles for long. The reason for the stress was not just a difficulty in getting a coherent picture of what was going on in the representational part of the mind — it was also, and more basically, a disturbance in terms of primary experience, in the process of living one's life. If you can no longer "rely on" your nearest relatives to be part of the familiar world, basic goals and values are directly threatened, and you may well feel that the ground is giving way under your feet. The processes that are at work in such social contexts illustrate the public nature of intentions in relation to social action: your intentions and purposes are not your own private property when you act as part of a social pattern. You cannot intend to perform an act that involves other people and at the same time disregard the expectations that make up the pattern which your action is going to associate itself with. The impossibility of saying that it's cold here and meaning (i.e. intending to convey) that it's warm here (Wittgenstein 1953, par. 569) is the linguistic equivalent of intending to become a cup of coffee. Once the social fabric in which you live has become part of the Background, the "stance" you have towards the two actions is the same: one is just as impossible as the other. This is a fundamental feature of the context of understanding an utterance. The use of language as a conventional means of communication is by definition a social act; its only possibility of realizing an intention depends on the reactions of other people. 
Forming an intention to speak while disregarding the existence of a social pattern that speaker and hearer have in common is therefore literally impossible; the idea is incoherent2. We are now in a position to describe how coded meaning (function) and utterance meaning (function) are related. The assignment of utterance function depends on a process of negotiation whereby two sets of constraints must be satisfied in a concrete situation:

(1) The interactive pattern constrains what the speaker can intend to do;
(2) The meanings of linguistic expressions (=canonical functions) constrain what each element can contribute.

Actual interpretations must be placed in the space defined by the two sets of constraints.
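The negotiation between the two sets of constraints can be pictured as an intersection of candidate utterance functions. The following toy sketch is purely illustrative (the candidate labels and constraint sets are invented for the example, not drawn from the text):

```python
# Toy model: utterance interpretation as the intersection of two
# constraint sets. All labels below are illustrative inventions.

# Candidate utterance functions for "Can you pass the salt?"
candidates = {"question-about-ability", "request-for-salt", "greeting"}

# Constraint (2): the coded (canonical) functions of the expressions
# used limit what the utterance can contribute.
coded = {"question-about-ability", "request-for-salt"}

# Constraint (1): the interactive pattern (a dinner table) limits
# what the speaker can sensibly intend to do.
interactive_pattern = {"request-for-salt", "greeting"}

# Actual interpretations must lie in the space defined by both.
interpretations = candidates & coded & interactive_pattern
print(interpretations)  # {'request-for-salt'}
```

The point of the sketch is only that neither constraint set alone determines the utterance function; the interpretation emerges where the two overlap.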

5.4. The principle of sense

The mediation between the constraints provided by coding and interactive pattern forms the basis of a version of Grice's cooperative principle that reflects the redefinition of linguistic meaning in functional terms. Since the principle reflects the constraint that whatever you say must make sense, it is called the principle of sense. The essence of it is simple: it says that any piece of social action must make sense. From the linguistic point of view this is tantamount to saying that it must be possible to follow up the linguistic cues in such a way that the utterance gets a function in terms of the pattern of life to which it contributes. This is near to a tautology: unless something has a discernible function, it is not a piece of social action, as defined above. But the manner of operation of the principle in carrying on the business of social interaction gives it a status which means that the tautology is continually made true instead of being true automatically. To return to the ethnomethodological experiments: we cannot just reject actions that are intermeshed with social situations — we must assign some sort of function to them, and if it should indeed turn out that it is utterly impossible to assign a kind of function to what appears to be a social action, either we ourselves or the agent as perceived by us drops out of the social situation — a scary experience, because it means, in a sense, the end of the world as we know it. Schizophrenic breakdowns, to take a case in point, are extremely anxiety-provoking seen from both points of view. Sometimes it is easy to assign the utterance a function. If A asks, "What time is it?", and B says "half past two", the linguistic content associated with "half past two" fits immediately into the ongoing interaction.
To the extent that communication is entirely smooth, the principle of sense is invisible in conversation; only when there are problems does the interpretive work that is guided by the principle of sense become visible. The implications of the principle of sense can be spelled out in terms of three subprinciples (cf. Harder 1979). The conventional linguistic content cannot make sense, unless it lives up to three demands:

(a) nonredundancy
(b) compatibility
(c) worthwhileness (or purpose-relevance, cf. Allwood 1984)
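The three subprinciples can be pictured as successive filters that a candidate interpretation must pass in order to count as making sense. The sketch below is a hypothetical toy model (the predicates and the sample context are invented stand-ins for real-world judgements, not part of the original account):

```python
# Toy sketch of the principle of sense: an interpretation makes sense
# only if it passes all three subprinciples. All data is illustrative.

def makes_sense(interpretation, context):
    effect = interpretation["effect"]
    # (a) nonredundancy: must not bring about what is already the case
    if effect in context["already_the_case"]:
        return False
    # (b) compatibility: must be achievable in the situation
    if effect in context["impossible"]:
        return False
    # (c) worthwhileness: must make a significant difference
    if effect not in context["worthwhile"]:
        return False
    return True

context = {
    "already_the_case": {"hearer knows the time"},
    "impossible": {"hearer becomes a cup of coffee"},
    "worthwhile": {"hearer learns the time"},
}

print(makes_sense({"effect": "hearer learns the time"}, context))   # True
print(makes_sense({"effect": "hearer knows the time"}, context))    # False
```

In the running text the criteria are gradient rather than absolute; the hard boolean filters here only mark the limiting cases.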


Nonredundancy is a way of capturing the fact that you cannot knowingly intend (or be understood as intending) to bring something about that is already the case. In the case of a declarative sentence, where A tells B that p, the sub-principle can be spelled out as follows: A cannot want to make it the case that B understands himself as being informed of p by A, while knowing that he is already so informed. In the case of imperatives, you cannot want to bring it about that B understands himself as directed to do q by A, if you know that he already takes this to be his position. Compatibility is a way of putting the fact that you cannot intend (or be understood as intending) to bring something about while knowing that it cannot come to pass (the coffee cup case). In having an intention you automatically rely on the possibility of making it come off. Mankind, it may be said, has sometimes found it laudable to strive for the impossible; but to think so, one must also think that to strive for that goal is an adequate substitute for reaching it. The youth who bore 'mid snow and ice a banner with the strange device "excelsior" did not specify a point he wanted to reach, only a direction he wanted to take — and this, then, must be seen as his intention. In linguistic cases, the compatibility requirement implies that you cannot intend to tell somebody to do something if you simultaneously make clear that you take the task to be impossible. Also, and perhaps marginally more interestingly, you cannot intend to give information about clearly nonexistent entities. Purveying information about the present King of France is therefore something you cannot intend to do in ordinary circumstances. In such cases, again, the principle of sense requires some interpretive work to be done; one possibility is that the speaker will be understood as "displaying", either deliberately or inadvertently, his own frame of understanding. 
The third subprinciple will be discussed further in relation to relevance theory, since it is the aspect of the principle of sense that comes closest to what is generally called "relevance". What this implies is that the change in the situation brought about by an act must not only be possible and take us to a point different from where we are; it must also in some sense get us to a preferable situation. At one end, this third criterion shades off into the redundancy criterion, since an action that changes the situation in a way that does not reflect any of the participants' goals may also be said to "make no difference". We do not have to assume that everybody agrees as to what makes a difference; what the subprinciple says is merely that in
understanding something as making sense, we automatically understand it as reflecting an intention to bring about not just a change in the situation, but a significant change. These three criteria may be seen as spelling out ways in which the principle of sense is active in the interpretation of a linguistic utterance. Above they were discussed in terms of absolute requirements: total redundance, total incompatibility, total pointlessness. However, it would be better to say that we choose the most likely way in which the linguistic content can be said to observe the three subprinciples, so that if one interpretation comes too close to being redundant, incompatible or pointless, we try to find another interpretation of what the speaker meant. In case of trouble, as always, the background "reliance" (that in my account takes the place of mutual knowledge in normal cases) becomes problematic, and we start to calculate in terms of what he thinks that I believe, etc., adjusting expectations so as to find an acceptable interpretation. Satisfaction of the principle of sense, as spelled out in the three subprinciples, is a sort of entry condition for an utterance to be admitted into the process of social interaction; unless we can find a way of seeing it as satisfied, either the utterance is ignored or the process is stopped, awaiting reconstruction of the misfired intention. Why is this a better theory of the phenomena captured by Grice's cooperative principle? Grice's theory, as pointed out by Lyons (1977: 593), is basically oriented towards efficient exchange of information; and not all communication is co-operative in this sense, as demonstrated by E.O. Keenan (1976) in a cross-cultural perspective. When there is a cultural pattern whereby speakers do not make their contributions as informative as required by the purposes of the addressee, the Gricean maxims are inoperative — because they presuppose a common purpose. 
In our own culture, the same thing goes for zero-sum games such as poker-face negotiation. You can meaningfully contribute to such patterns of interaction without living up to Gricean standards — but what you do still has to be meaningful in the way captured by the principle of sense. Because it is based on a definition of meaning as "contribution to interaction" rather than "efficient exchange of information", I think the principle of sense captures the universally valid part of the Gricean theory. An added advantage of the principle of sense is that it also covers cases where the purpose of the communicative exchange has nothing to do with information in the full sense of the word. What this implies will be discussed in the following section.


5.5. Relevance versus sense: translating interaction into information

There are a number of different suggestions for revisions of the Gricean account. Perhaps the most radical and influential revision is the one provided by relevance theory, as developed in Sperber—Wilson (1986). Relevance theory shares some of the salient features of my own account: a basic dependence on uncoded processes of understanding; the procedural view of meaning; and the pragmatic codetermination of descriptive meaning. These features, however, go with a total rejection of two basic premises of my own account, namely that meaning is to be understood as interactive function, and that explicit representations must be understood as grounded in non-representational reality. Because of its unflinching consistency, it brings out a number of features that are not always evident in less belligerent versions of the same basic programme. For these reasons, a discussion with relevance theory is a good way of profiling some of the points I wish to make with the principle of sense. Relevance theory is based on a total severance of language and interaction: utterances transport information only. There is only one link between language and communication, made explicit in the definitions of informative versus communicative intentions. The communicative intention adds nothing to the informative intention except the transmission:

A communicator produces a stimulus intending thereby

(52) Informative intention: to make manifest or more manifest to the audience a set of assumptions {I}. (Sperber—Wilson 1986: 58)

(54) Communicative intention: to make it mutually manifest to audience and communicator that the communicator has this informative intention. (p. 61)

Informative and communicative intentions are seen as oriented towards "contextual effects", the technical term for the total modification in the belief state achieved by an utterance. A central manner of getting contextual effects is via deduction: if you are told that Joe is at home (during working hours), then you may deduce that he did not get the job he was talking about; enabling you to draw that conclusion increases the contextual effects of the statement Joe is at home.
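The deductive route to contextual effects can be sketched as simple forward chaining over assumptions. This is a toy model under invented propositions; relevance theory's actual deductive device is far richer:

```python
# Toy forward chaining: contextual effects modelled as the set of new
# conclusions an utterance licenses against background rules.
# Rules pair a set of premises with a conclusion (all invented here).

rules = [
    ({"Joe is at home during working hours"},
     "Joe did not get the job"),
]

def contextual_effects(new_assumption, beliefs, rules):
    """Return the assumptions made newly available by the utterance."""
    beliefs = set(beliefs) | {new_assumption}
    effects = {new_assumption}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= beliefs and conclusion not in beliefs:
                beliefs.add(conclusion)
                effects.add(conclusion)
                changed = True
    return effects

effects = contextual_effects("Joe is at home during working hours",
                             set(), rules)
print(effects)
```

On this picture the statement contributes more than its own content: the deducible conclusion counts among its contextual effects, which is what the cost-benefit notion of relevance is then computed over.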


Relevance, the central concept of the theory, emerges as the result of a subtraction where contextual effects count on the plus side (the more contextual effects, the more relevance) and processing effort counts on the minus side (the more processing effort, the less relevance). Sperber and Wilson try to show that whenever we appeal to others, we do it on the assumption that it is worth their while to attend to our appeal (1986: 155). They argue that the same principle is at work in all modifications of our environment, whether by communication or by our own cognitive effort: we attend to what is relevant to us. And communicative stimuli, it will be recalled, have as their only potential interest the informative intentions that they carry. Hence, the only way in which they could hope to gain our attention would be because of the relevance of the informative intention they manifest. This forms the basis of the principle of relevance, which is the centrepiece of the theory, saying that every utterance communicates the presumption of its own optimal relevance (1986: 158) — maximal informational "profit". The principle of relevance replaces all Grice's maxims by refining the basic notion of communicative efficiency down to a simple cost-benefit calculation. There are several problems with this theory of informative profit (cf. also Clark 1987, Mey—Talbot 1988, Allwood 1984). One is that not obviously informative types of messages are translated into information basically in the manner discussed above p. 52. The place where the translation is effected in relevance theory is seen in the way communicative events are understood, according to Sperber and Wilson (1986: 246). Whenever something has been said it is processed by being integrated in an assumption schema of the form "N.N. 
is saying that P, asking Wh-P, or telling to P", reflecting the declarative, interrogative and imperative sentence types; there is no distinction between on the one hand taking part in a communicative event and, on the other, setting up a representation of it. I have argued that saying hello to a friend on the street is not to inform him of anything (although as a secondary effect of my greeting, the addressee will get a representation of my saying hello to him, compare above p. 52); yet this is essentially what Sperber and Wilson claim. Note that Sperber and Wilson's processing strategy works just as well for physical doings as it does for utterance types: when somebody knocks you on the head you produce the assumption schema "N.N. did X to me"; but here, the bump on the head is a more tangible reminder that the change in information state was not all that happened. The interactive event of being greeted is less tangible, but real enough to be ontologically prior to its
cognitive representation. As discussed above, the only alternative is to see all events as constituting information, in which case there is no bump as such, only our awareness of it — and that means that everything exists for us only as information. The theory that language involves only information then loses its interest, because it becomes compatible with all other theories: whatever they may claim, it all boils down to information anyway. The central problem posed by the insistence on information exchange as the sole purpose of language surfaces occasionally in the discussion. Thus interrogatives force Sperber and Wilson to a revealing reformulation of their general view of relevance, because interrogatives typically reverse the burden of relevance in relation to declaratives. Where declarative sentences have to be relevant in some sense to the hearer, standard questions are made because they are relevant to the speaker. This is, in fact, freely admitted by Sperber and Wilson (1986: 252): relevance, it is said, is a two-place relation, and the type of interrogatives that constitute "regular requests for information" are analyzable as "questions whose answers the speaker regards as relevant to her". But this has potentially disastrous implications for the principle of relevance as argued initially. We saw that every utterance, according to the relevance principle, communicates its own optimal relevance. This was assumed because it fitted into a picture where individuals only attended to what was worth their while. Growling dogs and footprints in the flower-bed commanded attention in basically the same way that speakers did: because people thought it might be worth their while in terms of contextual effects to attend to them.
Sperber and Wilson are hard as nails about the addressee's statutory rights to relevance:

The addressee may be willing to believe that the communicator has tried very hard to be relevant, but if he also believes she has totally failed, he will not pay attention to her. (Sperber—Wilson 1986: 159)

The curious distribution of male and female protagonists, noted by Mey and Talbot, conjures up a picture of a helpless woman making a desperate but fruitless attempt to get a relevance score above zero — and a stony, mercilessly information-oriented man who, having credited her with full marks for trying, refuses to attend to her utterance. This would appear to be incompatible with the picture offered for "regular requests for information" above. The male addressee who insisted on his pound of
informational flesh and therefore could not spare the struggling woman a glance is unlikely to start attending to other people's questions merely on the puny excuse that the answers would be relevant to other people rather than himself. On my story, the moral of this is that what regulates communication is not a cost-benefit-oriented thirst for contextual effects, but accepted patterns of interaction. On the action-based account, addressees pay attention to the extent it is natural according to accepted patterns of interaction, regardless of whether information is given or requested. Sperber and Wilson do not wholly deny that action is involved in communication: the necessity of relying on a notion of action is evident where Sperber and Wilson say that there must be a relation between the informative intention and the communicative stimulus, because otherwise the speaker would not be acting rationally. So the process of communication is actually seen as a form of action, but the action is only involved as the familiar envelope with a message inside. I think relevance theory is valid for that special type of communication where transmission of information is the sole purpose of the interaction; its mistake is in forcing its assumptions on other types. Interestingly enough, there is one place where Sperber and Wilson offer a piece of reasoning which on close inspection entails that linguistic communication is more than an envelope round a piece of information. At a fairly early point in the book, where the Gricean view of meaning is discussed, Sperber and Wilson (1986: 62) raise the question of why one should ever communicate, i.e. make one's informative intention mutually manifest, in the sense of going public with one's intentions. The central part of their answer is worth quoting in full:

Mere informing alters the cognitive environment of the audience. Communication alters the mutual cognitive environment of the audience and communicator.
Mutual manifestness may be of little cognitive importance, but it is of crucial social importance. A change in the mutual cognitive environment of two people is a change in their possibilities of interaction (and, in particular, in their possibilities of further communication).

Sperber and Wilson illustrate this view with the recurrent example of a broken hair-drier, which Mary would like Peter to mend. Peter might get the idea that Mary wanted it mended if Mary just left the pieces lying around, and this might sometimes be a better strategy for Mary: if Mary
had communicated her wish that Peter would mend it instead of merely making it known (by leaving the pieces lying around), she would have put Peter in the position of either "granting" the wish or "refusing" or "rejecting" Mary's wishes (my quotes). In other words, Peter's utterances would acquire a particular interactive status, corresponding to the speech act verbs used.

Sperber and Wilson thus offer an explicit demonstration that the step from covert to manifest transmission of information, which is crucial in defining linguistic communication, not only involves but is essentially motivated by effects describable only in terms of social interaction. They also explicitly state that these features cannot be captured by reference to the cognitive environment ("...may be of little cognitive importance...") in terms of which relevance is ultimately defined. Hence, by their own admission, Sperber and Wilson appear to accept that the notion of cognitive effect, and by implication also their form of relevance, cannot explain why people form (or choose not to form) communicative intentions.

I shall now try to illustrate the difference between the interaction-based principle of sense and the information-based relevance principle by taking up two of Sperber and Wilson's own illustrations. The first introduction of the principle of relevance is made (1986: 120) by way of three sentences:

You are now reading a book
You are fast asleep
5 May 1881 was a sunny day in Kabul

In terms of the principle of sense, it will be evident that the three sentences could be regarded as exemplifications (within the domain of informative communication) of redundancy, incompatibility and lack of worthwhileness. In terms of relevance theory they stand as three ways in which statements can fail to be relevant: by adding nothing that is not already part of the cognitive environment, by contradicting a manifest feature of the environment, hence failing to make its content stick, and by being "utterly unrelated to the context in question". The question now is: if there is any difference between the two accounts, what is it and why is the principle of sense preferable?

I shall concentrate on the third case, which, as pointed out by McCawley (1987), is the only one that intuitively comes under the concept of relevance. The crucial difference between relevance as defined by Sperber and Wilson and "worthwhileness" is that Sperber and Wilson regard relevance as something achieved in virtue of what an utterance does to a body of assumptions, whereas the principle of sense talks about contributing to the interaction of which the utterance forms part.

If we look at it from this basic point of view, it is interesting to note that the Kabul utterance gets quite close to achieving relevance in Sperber and Wilson's terms. If you merely write it down for further reference, it would fail to achieve relevance (cf. 1986: 120), because it yields no further contextual effects: it only "counts" if it combines with existing assumptions to yield inferences. However, unless you happen to know what the weather was like on that day in Kabul, you could make it relevant by quixotically deciding to derive an inference from it, such as "probably the peasants who hoped for rain were disappointed".

On the principle of sense, however, the requirement is basically that the utterance should take a meaningful place in the social interaction. This requirement is in a sense stricter than the principle of relevance; inferences alone are no guarantee of worthwhileness — we have to find a way in which the utterance could serve a purpose in the actual context before we feel that we have made sense of the utterance. If we cannot do that, we have to reject it, with the interactional consequences discussed above.

Sperber and Wilson take up a number of inferential chains for analysis to illustrate the predictive power of relevance theory, among them the story of the callous passer-by (1986: 186):

Flag-seller: Would you like to buy a flag for the Royal National Lifeboat Institution?
Passer-by: No thanks, I always spend my holidays with my sister in Birmingham.
In interpreting this communicative sequence, they offer the following very plausible inferential chain:

(a) Birmingham is inland
(b) The Royal National Lifeboat Institution is a charity
(c) Buying a flag is one way of subscribing to a charity
(d) Someone who spends his holidays inland has no need of the services of the Royal National Lifeboat Institution
(e) Someone who has no need of the services of a charity cannot be expected to subscribe to that charity

And therefore we reach the conclusion:

(f) the passer-by cannot be expected to subscribe to the Royal National Lifeboat Institution.

Summing up, they proudly announce:

What is interesting about the passer-by's reply is the very close connection that exists between seeing its relevance (or, more precisely, the relevance the speaker intended it to have) and being able to derive some contextual implication from it. ... perceiving some contextual effect of an assumption seems to be sufficient for judging it relevant.

This, read as ordinary language, is perfectly true. But in terms of the cost-benefit analysis that is central in Sperber and Wilson's technical definition of relevance it is less convincing. They argue that it is because of the contextual effect (f) that the original answer is saved from irrelevance. But it would seem, by ordinary cost-benefit calculations, that (f) can only make the original answer relevant if (f) has something to offer in itself; we have to ask what makes (f) worth the addressee's effort if the whole account is to make sense. But (f) tells us merely that the speaker cannot be expected to subscribe — and the speaker has already explicitly refused to subscribe. So (f) is a much weaker piece of information than what we already have, and one entirely devoid of interest to the flag-seller. Hence, for the flag-seller it can hardly be the reward of being able to add (f) to his cognitive environment that makes it worth his while to invest processing power in the inferential chain above, as Sperber and Wilson claim.

But what is it, then? According to the picture offered by the principle of sense, what is required of an utterance is that it should constitute a meaningful contribution to ongoing interaction. We ask, therefore, not for contextual effects in terms of possible inferences, but for ways in which an utterance can be seen as contributing to interaction. If this is the demand we make on the passer-by's original answer, we get a more plausible reason for wanting
to get to (f): with (f) we can understand the passer-by's reply as an explanation for his refusal to subscribe — an excuse. In terms of the interactive patterns exemplified by this anecdote, a refusal to grant a favour is a face-threatening act (cf. Brown and Levinson 1987) which opens a slot for an interactive move to soften the blow. If we are culturally attuned, we are able to understand the reply as filling this slot, once we figure out (f); but it is not a piece of information anyone would wish to hoard for its own sake.

5.6. Final remarks

The story of meaning has been a long haul; in the light of where we are now, I would like to return briefly to the basic motivation for the effort. Language has almost always been approached in one of two ways: either as a means to an end that was external to language, the most obvious example being knowledge — or as primarily a matter of linguistic "form", of the expression side. If you want to approach language as primarily a meaning-bearing entity, without taking meaning in a sense that depends on nonlinguistic preconceptions, there is no obvious theory of meaning that can just be taken for granted. I have tried to single out and borrow from the tradition exactly that which I think is required for a description of language, and explain why. Part Two will present a view of linguistic structure built on these foundations.

Part Two: Structure

1. Introduction

Traditionally, language as an object of description was the province of "grammar". In the twentieth century, however, the science of language reconstituted itself under the name of "linguistics". Modern linguistics came into being on the basis of the belief that language "as such", the essence of language as an object in its own right, is identical with linguistic structure. Structure in language, moreover, is associated with an ontologically privileged status: "structure" means "autonomous structure". The path from language to structure, and from structure to autonomy, is not compatible with a functional approach to language; unless one prefers to reject the notion of structure entirely, it is necessary to go back to the basic question: what is structure?

This issue forms the subject of the first chapter, which concludes that the important insights of structural linguistics fit unproblematically into a functional view of language if the concept of language structure is revised on two points: first, structure is not autonomous, but must be understood in relation to the substance that is structured; secondly, the organization of substance that takes place in linguistic structure is function-based.

The second chapter goes on to present a function-based theory of clause structure. A central point in it is that once syntax as an aspect of natural language is properly distinguished from formal syntax, it becomes clear that the basic part of syntax is that half of the discipline which deals with the combination of simple meanings into more complex meanings: the content side of syntax, or the combinatorial part of semantics, whichever way you prefer to think of it. To mark the difference in relation to a non-semantic view of syntax, I talk of this area of language as content syntax. The theory I present is based on the layered clause structure as developed within Functional Grammar by Simon Dik and associates.
This chapter presents the core part of the descriptive principles that are tested out in Part Three on tense.

The third and last chapter is about the role of conceptual meaning in clause structure, building on Cognitive Grammar as developed by Ronald Langacker. I argue that in addition to the bottom-up picture presented by Langacker there is a top-down structure which can only be captured if you distinguish between conceptual and functional-interactive aspects of meaning, and that clause structure is the result of a division of labour between them, reflected in the operator-operand relationships between elements at higher and lower levels. While the second chapter presents the
basic descriptive architecture, the last chapter tries to present a more specific picture of the content side of syntax. The general conclusion is that structure is just as important as structuralists claim, but its importance can only be properly captured in an account that (1) is function-based, and (2) emphasizes both the distinctness of, and the collaboration between, conceptual and interactive aspects of clause meaning.

2. The functional basis of linguistic structure

2.1. Introduction

In this section, I first take up a general discussion of the concept of structure, concluding with a distinction between two varieties: component-based and function-based structures. Both types reflect what I call an "integrative" rather than an autonomous role for structure. This entails two things: first, that structure presupposes the substance that it structures, hence is not ontologically prior to it; secondly, that structure adds properties to the substance, hence is not reducible to it.

Then I discuss two ways of thinking about autonomous structure in relation to language, that of Saussurean structuralism and that of generative grammar. I argue that Saussurean structuralism can be reinterpreted in functional terms — autonomy was too strong a statement of a valid point. In relation to Chomskyan structuralism, however, I argue that the concept of autonomy is due to a deeper flaw in the whole pattern of thinking, making it necessary to reject the notion of linguistic structure in the generative sense.

2.2. The ontology of levels

If structure is to have an ontological role to play at all, structured combinations of items must have properties not shared by the components themselves. The satisfaction of that requirement is granted on a quite general basis if we assume the existence of "ontological levels", familiar from the division of labour in science between physics, chemistry and biology. In terms of this assumption, we see the world as being organized on different levels of complexity, with a directionality from lower to higher
levels, based on how complex the object of investigation is. Beginning with particles, one can proceed via atoms and molecules up to planets, and via another path to biological organisms and societies. The existence of such levels is fairly uncontroversial from a strictly descriptive point of view. The question is how deep the division into levels is: is it just because of our ignorance that we seem to find different sets of properties at different levels, or is it because such is the nature of the world? In a general theory of scientific knowledge, dealing specifically with physics, biology and neuropsychology, Køppe (1990) argues that the division is deep and irreducible.

This position must be understood in relation to two opposed basic views about the nature of the world. One is reductionism, a position associated with "classical" natural science; the opposed point of view is holism, an approach associated with the "new age" opposition to science. Reductionism is a strategy that seeks to explain laws applying at higher levels by reference to lower levels, so that ultimately one would be able to say that everything can be explained by reference to the lowest level: only particles really exist. The key argument against reductionism is its inability to account for the "jumps" in the types of applicable laws which arise as we move, for instance, from subatomic (quantum) to macrophysical phenomena, from dead to living entities, or from premental to mental phenomena. Reductionists may claim that this is just a matter of time, and the successes achieved in reducing chemical to physical processes, and biological to chemical processes, point the way for future advance in the same direction. However, even if we grant this, the important thing is to be aware that there is nothing intrinsically more scientific in reductionism. It is simply an empirical issue to what extent we can successfully make it work, and Køppe points out some of the difficulties it faces.
Holism, conversely, claims that phenomena at all levels hang together irreducibly in one overarching whole, so that we cannot describe a phenomenon at any lower level without simultaneously invoking all the higher domains: only cosmos really exists. The key argument against holism is the impossibility of development: if it is misguided to conceive of particles in abstraction from biology, how can there have been a time before life arrived on the scene? There is no agreement among nonreductionists concerning the number of essentially irreducible levels. Popper and Eccles (1977) operate with three (of which the third accommodates the Platonic objects that are neither
physical nor mental in the subjective sense, language being an important example). Køppe claims that there are four: the physical, biological, mental and social levels. According to Penrose (1989), the distinction between the quantum and the macro-physical level would appear to be just as unbridgeable as the distinction between these four. More generally, the cautious attitude would seem to be that we know of differences that we cannot at the moment reduce away, and if we want to be faithful to the world as we know it, we have to respect those distinctions at least until someone explains how they can be eliminated.

This picture provides a clear role for structure: to create new items at higher levels by combining lower-level items in a way that confers properties on them that did not exist before they were brought into this structural relationship. Particles behave differently when organized into atoms; atoms behave differently when organized into molecules, and so on. Once structure is there, it codetermines the way components behave. In understanding how a notion of autonomous structure can come to appear plausible, the role of such top-down causation is central. Properties that only arise at higher levels invite a top-down approach: in describing a diamond, it is natural to focus on the structure (crystal) and describe the individual carbon atom from the higher point of view.

Structures can be described in terms of the distinction between "role" and "occupant", or "slot" and "filler": the crystal structure creates slots for the individual atom to fill. From the top-down perspective, the slot takes priority over the filler: when we describe the crystal structure, the individual atom is less important than the slot it fills in the crystal.
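The slot-and-filler picture can be restated, very loosely and anachronistically, in programming terms. The following sketch (all names are invented for the illustration and form no part of the original argument) shows the two claims of this section in miniature: the structure constrains what may fill its slots, and the same component behaves differently inside and outside the structure.

```python
class Atom:
    """A lower-level component with intrinsic properties of its own."""
    def __init__(self, element: str):
        self.element = element
        self.lattice = None  # the slot assignment, if any


class DiamondLattice:
    """A higher-level structure defined as a set of slots (roles)."""
    def __init__(self, n_slots: int):
        self.slots = [None] * n_slots

    def fill(self, index: int, atom: Atom) -> None:
        # Bottom-up constraint: only fillers with the right
        # lower-level ("pre-diamond") properties are admitted.
        if atom.element != "C":
            raise ValueError("only carbon can fill a diamond-lattice slot")
        self.slots[index] = atom
        atom.lattice = self


def hardness(atom: Atom) -> str:
    # Top-down codetermination: the component's behaviour depends
    # on whether it occupies a slot in the structure.
    return "diamond-hard" if atom.lattice is not None else "graphite-soft"
```

On this sketch a free carbon atom reports itself as soft, the same atom inside the lattice as hard, and a helium atom is rejected outright: structure and substance mutually constrain each other, exactly as the text argues.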
The higher we stand in the ontological hierarchy, the greater the preponderance of the superimposed properties; and the more natural it becomes to think of structure as the dominant part of the picture, reducing component substance to insignificance: reality consists of slots to be filled. The logical ending-point of this intellectual trajectory is the complete autonomy of structure: it is structure all the way down.

This ending-point, however, is the result of an optical illusion. The apparently smooth descent from one structural level to the next leads one to overlook the basic dependence of higher levels on the lower-level "substance". The relationship between higher-level structural positions and lower-level fillers is not arbitrary; carbon atoms are not in themselves diamonds, but it is their "pre-diamond" properties which make it possible to build diamonds out of them. If we combine two hydrogen atoms and one oxygen atom into a water molecule, we have properties that were
nonexistent before the combination — but these properties are not created out of the blue. Outside thought experiments, there is no other way to create water than by combining hydrogen and oxygen; if we try to use helium and nitrogen instead, we shall not succeed.1 Hence, structure is not autonomous in relation to the components that it structures. In an actually existing, structured object, the relation is one of mutual dependence: structure and substance mutually constrain each other's potential. Just as the structure constrains the "behavioural options" of lower-level items, so does the substance constrain the possible range of structural relations that can arise. But if we ask what could potentially exist alone, we get a unilateral dependence of structure on (lower-level) substance: structure presupposes component-substance, but not the other way round. We can have particles occurring without atoms, atoms without molecules, carbon without diamonds, and molecules without living organisms — but not the other way round. If we reduce away higher levels of structural organization, we also reduce away some properties of the lower-level components. Reality will be different also from the point of view of the lower-level items, as it were — but the lower-level items as such would persist. An example of how higher-level properties can disappear while leaving lower-level properties intact is the death of an animal; its components are still there, but the biological level of organization has disappeared. The converse example of higher-level properties persisting while losing the lower-level substance has yet to be proved scientifically; and that is not for lack of interest in the possibility. The last point could also be translated "substance matters!" 
In relation to the structures that are involved in higher-level items, items at the lower level constitute the substance that supports, or carries, the structural relations — and these relations would not exist, in the sense of being instantiated, unless there was a substance which could support them. We do not know, in the course of development from the big bang onwards, how the more complicated levels arose; by the "mutation" scenario the available materials simply jelled into new, more highly structured types of entities when conditions were right. But whatever the mechanism is in the progression from lower to higher levels, structure has to work with the substance that is available, creating items within the space of possibilities that is thus defined.

2.3. Component-based and function-based structure

The directionality that is inherent in hierarchical structure is the basis for one use of the concepts of "structure" vs. "function": looking at an item from a structural point of view means looking downwards, whereas the functional point of view consists in looking upwards. This is a purely relative distinction (cf. Lycan 1990); as we move up the hierarchy, structure includes more and function includes less. This version can operate within a completely formal structure: function, in this picture, can be defined fully in terms of "structural position". We find that notion also in linguistics, as when the subject function is defined within a presupposed hierarchical clause structure. This is terminologically rather confusing with respect to the relationship between structure and function, but is logical enough once a formal structure is presupposed as the context of description.

However, the sense in which most linguists who understand themselves as functionalists use the term "function" is oriented towards the use of language in communication. In that sense a functional approach to language reflects the intuition behind the definition of function that was suggested above (p. 91). The key idea was that functions arise in systems that depend on being reproduced, and which are responsive to feedback from the environment: the function of an item is that type of effect by which it contributes to the survival of a larger whole of which it forms part. Functionalists believe that linguistic elements can only be understood by looking at the jobs they do in communication, because that is what explains why they recur and pattern the way they do. When we understand "function" as "the job done by an item", we also approach it top-down.
However, in contrast to what is the case in the purely structural top-down approach, the functional identity of an item has just that independence of component-substance which we do not find in purely chemical or physical structures: a definition in terms of "the job done" provides them with an intrinsically component-independent status. Biological examples will illustrate the point: whatever can play the same role in an animal's struggle for survival is functionally the same thing. The caddis worm builds a protective armour of whatever objects are available in the stream where it lives. The panda's thumb (cf. Gould 1992) is not made out of the same skeletal stuff that fingers are made of, but serves the same function for the animal. Arbitrariness in the Saussurean sense is therefore a function-based property, surprising as it may seem: the function provides the criterion for declaring differences in component-substance
irrelevant. Without a functional context to judge by, there is no possible inherent criterion of equivalence between different components: in the pre-functional world, components are necessarily criterial for ontological status. Functional descriptions are thus clearly distinct from descriptions in terms of component-substance and component-structure as defined above.

In the case of functionally defined entities, we can describe an entirely different type of structure which occurs when functionally defined items are constructed out of elements with sub-functions. A knife is an instrument for cutting; and in order to have that function it needs to support two sub-functions: there must be a part you can get a grip on and a part that can do the cutting. This type of structure is function-based rather than component-based.

The difference between the two types of structure is a difference in ontological anchoring. Component-based structure operates on a lower-level "input" of distinct items that it brings together in creating a more complex entity. Function-based structure operates on a higher-level input in the shape of one overall function that is differentiated into sub-roles: the sub-roles added together must serve that overall role. What is "structured" is thus a higher-level "input" which is differentiated into smaller lower-level items.

Complex functionally defined entities evince both types of structuring. A business relationship between two people is characterized by some properties due to the sub-function that each performs in relation to the shared business, and these sub-functions result from a differentiation of some higher-order function that is served by these people in combination. But the nature of the relationship also depends on what happens when you bring just these people together: relations come into being based on the stuff these people have in them, rather than on what they do ("chemistry" is the term, fittingly, used for this phenomenon).
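The knife example lends itself to a loose restatement in the idiom of interface-based programming; this is an anachronistic sketch added for illustration (the names `Cutter`, `Grip`, `Knife` and the rest are all invented), not part of the original argument. The overall function is differentiated top-down into two sub-functions, and anything that does the job counts as a filler, whatever it is made of.

```python
from typing import Protocol


class Cutter(Protocol):
    """The 'blade' sub-function: anything that can cut qualifies."""
    def cut(self, material: str) -> str: ...


class Grip(Protocol):
    """The 'handle' sub-function: anything that can be gripped qualifies."""
    def hold(self) -> str: ...


class SteelBlade:
    def cut(self, material: str) -> str:
        return f"steel edge cuts {material}"


class ObsidianFlake:
    def cut(self, material: str) -> str:
        return f"obsidian flake cuts {material}"


class WoodenHandle:
    def hold(self) -> str:
        return "gripped by wooden handle"


class Knife:
    """Function-based structure: one overall job, differentiated into
    sub-roles; the parts' component-substance is irrelevant as long
    as each part fills its functional slot."""
    def __init__(self, handle: Grip, blade: Cutter):
        self.handle = handle
        self.blade = blade

    def use(self, material: str) -> str:
        return f"{self.handle.hold()}: {self.blade.cut(material)}"


# Functionally, these two are 'the same thing':
steel_knife = Knife(WoodenHandle(), SteelBlade())
stone_knife = Knife(WoodenHandle(), ObsidianFlake())
```

Steel and obsidian differ in component-substance, yet both satisfy the `Cutter` sub-function, so both knives serve the same overall function: the function, not the material, supplies the criterion of equivalence (the "arbitrariness" point above in miniature).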
Furthermore, component-substance has a natural affinity to function, as already noted by Aristotle (cf. the introduction to Givón 1993). For one thing, the function that is to be served typically makes certain demands on the material that will serve to fulfil that function: you cannot make a knife out of just anything. This means that elements which are used to construct a functionally defined entity will have component-based properties that are functionally desirable. In this case, too, an element of motivation and an element of arbitrariness go hand in hand: the blade needs to be made of stuff that is hard and sharp (motivation), but if it has those properties, it
does not matter what it is composed of (arbitrariness); both properties of the choice are function-based.

In constructing a knife, one needs to take account both of a component-substance and a less tangible "functional substance": a class of processes that the knife is devised to serve. There are thus two kinds of substance involved. All tools presuppose the existence of such a "functional substance", and by their existence they automatically structure it. Take the case of a do-it-yourself houseowner's tool box: the type of problems that the owner handles constitutes a domain of substance, and each tool in the box is associated with a sub-function within that domain. Part of the skill of an expert tool-user consists in a subtle and precise mapping procedure that will enable him to carve up all tasks into sub-tasks in a way that neatly corresponds to the canonical functions of his tools. This structure is not inherent in the task itself: shipbuilding is structured differently according to whether you have chainsaws, axes or just fire for hollowing out tree-trunks. An exhaustive description of a tool must cover three things: component structure, the structuring of the functional domain that is reflected in the particular use for which it is designed — and the relations between the two sides.

In talking about component-based structure it was emphasized that substance matters: structure was not autonomous. The same thing applies perhaps even more obviously to function-based structure: it is impossible to describe the structural relation between a blade and a handle without understanding the function that blade plus handle = knife is designed to serve. So neither function-based nor component-based structures are ontologically autonomous. Structures are rooted in substance and cannot be understood apart from it.
Put differently, the verb "to structure" is a transitive verb: there must be something that is structured, whether it is ontologically attached "above" or "below" the structure itself. That structure does not exist apart from substance is actually fairly obvious. What may cause us to forget it is that we can generalize about structural properties: we can abstract structural similarities out of objects with differences in substance. Thus it is possible, for descriptive purposes, to forget about the substantive anchoring of structures — just as it is possible to focus on other properties in abstraction from the objects that have them. Hence, the autonomous view of structure has a natural affinity to a Platonic mode of existence: pure structure, being invisible for lack of parts, exists outside time and space and only intermittently becomes clothed in flesh. Autonomous structuralism thus represents an ontological inversion
of the same type as Platonism: the structure assumes the role of the disembodied idea that manifests itself "arbitrarily" in a given substance. The position argued here implies that, as with other Platonic ideas, the obvious Aristotelian alternative is to say that structures exist only as manifested in concrete instances of structured entities. It follows that it is perfectly possible and legitimate to consider structures in abstraction from what they structure, just as it is to consider other properties apart from objects that carry them; a structure is just a special type of universal.

An example of a project investigating form, or structure, as a generalizable property is Thom's "catastrophe theory". It is interesting to look at structural complexity across different fields, and to investigate to what extent the process of "morphogenesis" can be described independently of the objects that are structured. Among the interesting possibilities that this opens up is the possibility of generalizing about the contribution of structures to the constitution of phenomena across traditional substance-based dividing-lines — nature and culture, subject and object (cf. Stjernfelt 1992). But even if we have a theory of the importance and role of canonical structures that can be abstracted from no matter what substance, it is still true to say that we can understand any given object only in terms of both structure and that which is structured.

2.4. Saussurean structuralism: a functional reconstruction

Saussure's great achievement was the discovery of language "as such"; the autonomy mistake was part of the way Saussure described this insight.2 The point of what follows will be to argue that we can and should correct the mistake while preserving the discovery, first by changing the "autonomous" conception of linguistic structure into an "integrative" conception, and secondly by pointing out the functional basis for the structural properties that he discovered.

The central idea was that language is a system where everything hangs together, and where you cannot understand the nature of an element except by virtue of its relation to all the other elements in language. This view reflected a polemical opposition in relation to earlier approaches to language, which all described language from some external point of view: normative grammar (with the aim of prescribing standards of correctness), philology (which dealt with language for the purpose of describing texts),

158 The functional basis of structure

and comparative grammar (which, in comparing different languages, lost the essential nature of any single language from view). All these approaches pick out subdomains of language and describe them while ignoring their place in the living language. The idea of providing descriptions based on linguistic relations was not new in itself. There was an element of structure in the accepted view of language, based on a component-substance consisting of words. In describing words, however, the classical tradition used a mixture of criteria, intermingling elements of structure and substance. Word classes were standardly described by three types of properties: accidence (word forms, structurally organized into paradigms), syntax (motivated mainly by the need to specify where what kinds of word forms occurred), and content (as exemplified in the distinction between abstract and concrete nouns, often only loosely related to the rest of grammar). Categories like case and tense reflect a structural point of view. But the Saussurean revolution was necessary in order to get a description of language which distinguished clearly between properties motivated by external considerations and properties which linguistic elements have because of the way the language is. In arguing for a description based on linguistic relations, the "autonomy" mistake is very easy to make. From the structural point of view, relations between linguistic items are criterial. This means that a conceptual distinction between types of word meanings (for instance animal vs. vegetable, or abstract vs. concrete) cannot be taken as a description of language unless it is shown in what way relations between linguistic items reflect this distinction. This could be put by saying that only linguistic relations matter, while properties of the semantic substance are irrelevant. It looks (hopefully) as if this is just another way of saying the same thing; but actually the second version would be an overstatement. 
We can only investigate linguistic relations by looking at the items that they relate. Unless we presupposed a description of the semantic substance, we could not even begin to investigate the way in which linguistic relations deal with the substance. Properties of the semantic substance are thus far from being irrelevant — they are just not sufficient to qualify as descriptions of language. The point can be illustrated with the case of a distinction which is obviously linguistically relevant, such as the distinction between "mass" and "count" NPs. We can only describe the linguistic distinction because we understand the conceptual correlate to it. We must understand the
division into individual units that is characteristic of "count" NPs, before we can see how it is bound up with different categories of linguistic items; unless we knew the substance, we would not know how language structured it either. The integrative view of structure thus preserves the criterial role of structural relations, but sees language as "structured substance" rather than "structure minus substance". This means that we need to look at the centrepiece of structural linguistics, the notion of langue, in a different light. Authors working on language from non-structural angles have typically taken one of two different attitudes: either they have accepted the notion of a purely structural core and located themselves in the periphery, or they have wanted to replace the notion of language as a structured potential with a notion of language embedded in its context. According to the integrative view of structure, both positions are radically defective. I shall try to outline why a functionally oriented, context-sensitive approach to language needs the notion of langue, and why the notion of langue must be understood as dependent on the social embedding of language. Gregersen (1991) takes his reader through the history of Saussurean structuralism, showing how the pattern of thinking it embodies has prevented a theory of language that takes account of its social anchoring. In order to mount a credible defence of the notion of langue, we therefore need to confront the central contextual counterargument to it. Let us start by accepting that language is only directly accessible to us in the form of human activity, totally embedded in human practice as a whole. The problem, therefore, is to show how the abstraction whereby the linguist postulates a langue in the shape of a set of relations between classes of items that individual utterances are seen as "manifesting" can avoid obscuring the contextual anchoring of language. 
The question is: granted that language manifests itself only in the form of activities — why do we need a presupposed, structured potential in order to describe it? Since langue is not concretely visible, it can only be postulated by abstraction. The pragmatic turn of the past twenty years has done a very important job in pointing out just how much of an abstraction it is, and the job is far from over yet. To throw light on what kind of abstraction it is we can look at the same concrete phenomenon and contrast the type of descriptive tasks that one may pursue in submitting it to scientific treatment. We can take the case of a doctor-patient interview as an example. In that context, the concept of langue emerges as one of a number
of different possible abstractions; whenever you have a scientific as opposed to a participant interest in an activity, you need to define an object that you want to say something about, and the object depends on the project you have in mind. Somebody investigating medical standards would be interested in whether the doctor asked the medically relevant questions; a sociologist in the role relationship in relation to social structure; a psychologist in the patterns of reactions, including transference and emotional dependency relations, etc. Everyone uses what happens to say something about his own object of interest; and everyone needs, in principle, to understand everything that goes on in order to draw the right conclusions in his own field. From the linguistic point of view, the task is to see the interaction as an instance of speakers drawing on their linguistic potential. If the linguist does his work properly, he will be able to say something about the means-ends relation between linguistic items used and the communicative purposes achieved. The linguist's abstractions are oriented towards finding regularities in the medium, rather than in the messages; and langue is the construct which accommodates such regularities. That is also why the linguist's data are not restricted to what has occurred in an actual conversation: it is the potential behind the actual choices that is at issue, and therefore intuitive judgments from the speakers and results obtained in an experimental setting are valid and for some purposes better data than purely corpus-based observations. We can now pinpoint the sense in which langue is the basic assumption for linguists, including functionalists. If there is not some sense, however abstract, in which speakers using spots, temperature, doctor and vascular instantiate the same thing as other speakers saying spots, temperature, doctor and vascular, there is no sense in which these people speak the same language. 
Hence, there is simply no linguistic work to do. The linguist would be in the position of a Martian scientist who watched a game of soccer on the assumption that it instantiated the same activity type as a funeral: all hypotheses put forward would be wrong because there would be no underlying sameness between the movements and actions. But is this not the case, then? Adherents of a pragmatic view of language often point out that cases in which a given linguistic form is used may have nothing in common that could be isolated by any likely scientific procedure; also, situations in which we use a given linguistic form may be completely identical to situations in which we do not use it. How can we justify positing something invariant to account for this, thus disregarding the tangible differences in favour of an intangible invariance? At this point the functional ontology of meaning (cf. p. 101) becomes crucial. When the meaning of a linguistic expression is constituted by its canonical function in the speech community, meaning is anchored in shared interactive practice rather than in the cognitive processes of the individual. The sameness that is basic to linguistics does not have to be in the minds of the interlocutors; the linguist can plausibly assume that two people who use the word temperature are calling upon the same social routine, even if their own cognitive systems would not reveal any sameness. The plausibility of positing a socially anchored langue in abstraction from specific contexts and conversations can be illustrated by reference to the social fact of teaching and learning of languages, occurring (for better or worse) outside normal social activity. If the change in the speaker's position before and after a learning process were only a change in past messages, language "products", there would be no change in his action potential: he might have had a lot of fun making strange noises, but he wouldn't be any better off. What makes language learning worth paying money for must be changes in terms of familiarity with langue, not in terms of accumulated bundles of parole. The basis of abstraction which is most similar to that of linguists, and which is therefore likely to cause the most confusion, is the orientation of "conversation analysis", as practiced by Schegloff and associates (cf., e.g., Schegloff 1972). They are interested in the structure of conversations, which can reasonably be considered the most basic and natural manifestation of language. Like the field linguist, the conversation analyst is interested in all utterances and in linguistic details, with a view to finding out about the role played by each item in the structure of the activity or event.
But although the phenomena that the two types of specialist are interested in may be exactly the same, the object of description is different enough to make their results necessarily different. This difference is not always observed, and it is worth being pedantic about it, since any obscurity on this point makes it impossible to be precise about the relationship between linguistic structure and discourse function. If we take a random word or structural item, such as blanket, walk, "plural" or "retroflex", we would not expect the conversation analyst to pay much attention to them, because they presumably do not have any very special role in conversations. On the other hand, they would have their natural place in the field linguist's picture, because they have a specific place in the description of the linguistic potential — the difference of
interests is clear. In other cases, the difference would be nearly invisible. Looking at a particle like well as it enters into the structuring of conversation and looking at the item well as instantiating an item in the English language would be almost indistinguishable, because the potential-for-use of the particle well is virtually identical with its role in managing dialogue. This is where it becomes crucial to be aware of the difference in the object of description: whether one is describing conversations, i.e. actual events in space and time, or describing the word well as an aspect of the linguistic potential of the speaker (as a member of the speech community). We can either use well to say something about conversations, or use conversations to say something about well as a linguistic item — but we cannot do both at the same time. The two descriptions may contain exactly the same components — yet they would still be essentially different, because they are about different objects. On the one hand, the description of conversations requires reference to other factors than language: language is only one contribution to what goes on between the interlocutors, and even the most careful transcript necessarily leaves out essential aspects of the conversation considered as an interactive event. On the other hand, the status of the linguistic potential is such that it is always available for yet unrealized uses — therefore a description solely based on actual events will always leave out essential aspects of the potential itself. What follows from this is not that the two disciplines should ignore each other — on the contrary, they are mutually dependent on each other's results. We can only investigate the functional potential of a linguistic item if we understand the nature of the events to which it contributes; and we can only understand what happens in a conversation if we understand the functions coded into the language used by the participants.
But — and this is the difficult part, according to my experience — unless we have investigated the two objects independently, there is no point in looking for interesting relations between them. If we presuppose that well has some discourse function and look at hundreds of occurrences of this specific item, finally concluding that the word has just this specific discourse function, we may get a very good result — but there is a central methodological weakness. This is because we are leaving two essential types of data out of consideration: the discourse events that are similar to the ones cued by well, but which occurred without that word being used — and those aspects of the potential of the word well that are not manifested in the corpus. If the object one is looking at is a linguistic item, one is working as a linguist, not as a conversation analyst: and the result is, in principle,
a description of language (as manifested in a given corpus), not a description of a discourse function. It will only be completely reliable in one type of situation: the case where a given discourse context is one hundred per cent correlated with a given linguistic item, as exemplified by the Danish holophrase skål!, which is only used in the ritual of proposing a toast. What we should do instead is to analyze the two types of regularity separately. On the conversation analysis side we should first figure out in as fine-grained a manner as possible what actually happens, without assuming "that discourse functions come with little buoys attached to them in the form of linguistic labels" (Schegloff p.c. 1994). On the linguistic side, we should try to get at the potential of each word, by the methods described above, which are not limited to looking at any actual corpus. Only when we have these independent data do we have a chance to see what the relation really is between them. With these data, the conversation analyst and the linguist, working together, can reach results of a kind that neither of them could achieve on their own: describe the way in which the invocation of aspects of the linguistic potential contributes to actual conversations. The problem of being aware of which of the two things one is doing can be illustrated with the notion "topic". Conversation topics are quite different objects from topics as a type of linguistic constituent: the fact that it makes sense to speak of the "topic" of a conversation does not imply that the linguistic constituent which denotes the conversational topic has a privileged position in the description of the clauses used to talk about it. If one uses the two senses of the word in the same breath one is presupposing a direct and unproblematic coding relation, which in most cases is unrealistic (cf. the criticism of Dik in Mackenzie and Keizer 1990).
Linguistically, one will err by overlooking those aspects of the functional potential of constructions that do not in fact correlate with the signalling of the discourse topic. In terms of conversational analysis, one will err by overlooking other ways of nominating topics than those cued by the construction in question. The question of whether to recognize a notion of "langue" boils down to the question of which is most empirically respectable:
to assume that the concept of language in abstraction from actual utterances is a total fiction (which means that the medium has no existence outside actual communicative activity) or:

to assume that participants' skill in actual, successful language activity depends on some preexisting "language-as-such" — consisting centrally in an ability to assign meaning to the linguistic items used. The position I have defended is that a version of the second answer is the less fantastic assumption to make. It will be apparent that the object which occupies the place of langue will be different from what either Saussure or Hjelmslev was looking for. In the integrative view, this object is not made of relations abstracted from what they relate; a langue description needs to make reference to all those aspects of the social process which play a role in understanding linguistic relations. Langue, in other words, must be understood as an accumulated, routinized and organized survey of the way linguistic items function in parole, preserving all that is relevant and omitting all that is irrelevant for future utterances. Therefore linguists, just like language learners, must be in a position to understand what goes on in a given context before they can understand how language structures it. In this view, the langue-parole dichotomy, expressing the relationship between actual events and the preexisting pattern that they manifest, is necessary in order to understand all social institutions, from small, informal ones like families and playgroups to large, formalized ones like banks, hospitals and governments. Just as the institution "government" exists even when its members are asleep or have just lost their seats in an election, so does language exist even when everybody is silent. What exists between actual manifestations is relations between members, regularities that apply generically to patterns of activity, structural distinctions (parents vs. children; divisions into executive, legislative and judicial functions, etc.); the institution exists as a pattern that imposes a structure on events within its purview.
During business hours, these institutions manifest themselves as "flow", as a subsection of the social life of the community. Hence, there is no conflict between the two points of view. Both aspects must exist in order for an institution to exist. If there were no constraints, only features of a continuous flow, there would be no institution, just
individual idiosyncratic events; if there were no events that flowed through the channels set up by the institution, the system would be a fiction, or defunct: to get at the reality one would have to go back in time. The langue that a functional linguist is looking for must be seen as the pattern that must exist if we are not to understand linguistic communication as totally random and spontaneous activity. Part of this descriptive project is to ascertain the precise combination of relative invariance and relative variability that is the natural mode of existence of linguistic facts. From a functional point of view, it will be obvious that a social, Saussurean langue is superior to an individual-oriented Chomskyan competence as the basic concept in linguistics: the social construct is the whole story, of which individual competences form parts. The interactive routines that constitute langue (cf. above p. 135) can only exist in virtue of individual, internalized routines, but the social construct cannot be understood by looking at an individual alone — any more than a molecule can be understood by looking at the atoms one by one. A speech community can afford to lose an individual competence, but an individual competence is meaningless without a speech community in which to function. The last remaining speaker of a language — which, alas, is far from being merely a theoretical possibility — must be understood as a remaining fragment, not as an autonomous whole. The concept of langue, far from eliminating the social anchoring of language, emphasizes its similarity with other forms of structured human interaction.

2.5. Structure and substance: arbitrariness and motivation

It may be thought that such a reconstruction of the langue-parole contrast is rather far away from the Saussurean concept; but this is only halfway true. Saussure himself, quoting Whitney in his support (cf. Cours p. 110), also thought of language as one social institution among others. However, it is presented as Saussure's own contribution to point out that language is different from all other institutions in being arbitrary, whereas the others (such as the legal system) have an element of natural foundation. A functionalist reconstruction of structuralism therefore means that we must go back to Saussure's understanding of the relation between form (or structure3) and substance.
Saussure describes the role of linguistic form/structure in terms of the formlessness of the process of thinking and the universe of sounds taken separately — as contrasted with the precise and complex order imposed on sounds and concepts alike when the two domains are linked by the sign relation. The image is that of a demiurge linking sound and concept, expression and content, in one masterstroke which simultaneously creates order out of chaos in the two substance domains. It is in this context that we find the italicized maxim that the combination of the two planes produces a form, not a substance (Cours p. 157). The relation that Saussure is primarily thinking of in talking about arbitrariness is thus the relation between (minimal) sign expression and content, such as the relationship between "horse" and cheval: the basic link whereby idea is linked to sound. Since sounds have no inherent link to ideas, the arbitrariness of this relation has never been seriously disputed; the problem is what conclusions can be drawn from it. The view of linguistic structure described in Cours is that both sounds and concepts lose their prelinguistic nature and get a new nature prescribed by their place in linguistic structure; langue is a system "qui ne connaît que son ordre propre" (p. 43). Once linked to each other by the arbitrary sign relation, the two planes are describable in terms of similarly arbitrary linguistic relations only. The justification for this picture is the need to argue against the polar opposite view, according to which linguistic relations are mere reflections of prelinguistic facts (cf. above p. 158). But this is not a case of tertium non datur. The notion of function-based structure introduced above (and, I claim, only that notion) can accommodate both the relative arbitrariness and the relative motivation of linguistic categories.
In the case of the tool-box, there were two substances that were simultaneously organized: the components that the tools were made of, and the tasks that could be divided up into subtasks corresponding to the function associated with each tool. These two substances neatly correspond to expression and content in language. Expressions are vehicles for conveying content, and an inventory of signs simultaneously structures the expression and content domains — as described in Cours. The arbitrariness of the relation between expression and content is a variant of the arbitrariness that is endemic to all functional systems: as described above, only because the causal role of function provides us with a criterion of equivalence can we say that another component substance would be just as good. Without it, there would be no answer to the question: good for what?
Because of the functional basis of arbitrariness, it always coexists with motivation. Since there is a function involved, there are also criteria for the components that are to serve the function: the hardness of the blade is motivated by the function of the knife. Similarly, phonetic substance must live up to the criteria of pronounceability and audibility, otherwise it is not feasible as material for linguistic expression (with respect to the oral medium). The existence of vowels is motivated because their sonority makes them more audible. The existence of a reasonable number of different sounds is motivated by the need to keep words distinct (cf. the Praguian term "functional load"). Whenever there is a function, there are ways of serving it equally well (to which extent the choice of means is arbitrary); and ways that are better than others (to which extent the choice is motivated). So, pace Saussure, even in the most arbitrary relationship, substance and motivation necessarily remain. Elements of component-based structure (such as assimilation) are intermingled with the function-based properties. As steadfastly argued by Eli Fischer-Jørgensen in a running discussion with Hjelmslev (cf. Fischer-Jørgensen 1966), it is not plausible to regard phonological structure in general as independent of the actual sounds that "implement" the system. The non-autonomy of structure and the essential role of substance are very clear on the side of content, i.e. function. As we saw above, what made arbitrariness possible in relation to component-substance was sameness of function. Expression elements, as mere instruments, can be made of any material that satisfies the basic requirements, but it goes without saying that the meanings which constitute the functions of the expressions cannot be replaced by anything else. The fundamental substance-dependence is also evident if one begins to look more closely into attempts to reduce meanings to relations.
To describe the semantic property of words as reflecting only a word's place in language structure, Saussure borrowed the term valeur from economics, because of its lack of dependence on substance — another function-based property: value in the sense of market value is that property of a commodity which causes businessmen to (re)produce it. An example is the case of French mouton as opposed to English sheep and mutton: Saussure claims that only the "valeur" of the word, the difference in relation to other meanings, is part of the description of language-as-such. But a moment's reflection makes clear that unless we start with the meanings in themselves (signification), there is no way to
describe the differences that constitute the structural relations between meanings: if we abstract away from the animal "sheep" as a common semantic factor, it is impossible to explain to anyone what the two words mean. Similarly, it is impossible to imagine a diagram comparing the way different languages structure the colour spectrum if we eliminate the colours themselves and try to preserve only the relations between them; it is not surprising that the structuralist attempt to make linguistic meaning an entirely relational affair never caught on outside the specialist community. If we understand meaning in terms of function, this falls out automatically: depriving an expression of its function means to deprive it of its ontological identity. On the other hand, it is perfectly possible to describe the meaning of a single, isolated word such as sheep or red by appending a picture of a sheep or a colour sample of red, without relating it to the other animals or colours. There will, of course, be something missing in this description until you know what the other terms in the system are, but this only means that both word meanings and the relations between them (including the division of labour) are part of language. Similarly, the notions of paradigmatic and syntagmatic relations can be understood as linguistic manifestations of more general functional relationships. Any complex task is composed out of sub-tasks. If you intend to build a house, a lot of sub-actions are necessary to take you to your intended goal, and the relationship between these is basically a syntagmatic relationship: a chain leading to a stage of completion. Similarly, most such sub-actions will represent the choice of one among several options: wood or bricks? red or yellow paint? Each such set of options will constitute a "paradigm". 
Some such complex actions get integrated into fixed and formalized chains linking sets of options, as in a semiautomated industrial assembly line: a number of buttons need to be pressed in order to make the choices along the line, but only a limited number of options are open, and the process moves along automatically, once it is set in motion by an initial command. The structural organization of language does not set it off from all other structured institutions — rather, it emphasizes the similarities.4 In short, at pains to dissociate language from all chains linking it to "a priori" ontological structures, Saussure made too much of the independence of linguistic structure. The over-emphasis on arbitrariness in this passage is partly a result of the traditional assumption that meaning is ideational only, rendering the functional underpinning obscure. As an example of how all institutions except language, however arbitrary they may seem, contain an element of motivation, Saussure mentions fashions. In spite of all
vagaries and excesses, fashions remain dependent on the form of the human body. But as we shall see in detail below, linguistic structure is motivated in exactly the same manner. The dependence of fashion on the human body is clearly of a functional nature: fashions differ within the margin permitted by the function of clothes. Similarly with language: the structures of languages differ within the margins permitted by the function of human languages, which is to enable speakers to exchange utterances. Just as there are no fashions which prescribe clothes that are impossible to wear on the human body, there are no languages with words that are impossible to use in human utterances. In other places in Cours we find a greater awareness of the role of motivation in language and the substance basis for structure (cf. below p. 304 in the discussion of iconicity). The truest European believer in arbitrariness and structural autonomy is Hjelmslev. Aware of the differences within Cours, he enlists Saussure's structural foreshadowing of the laryngeal theory for the support of purely relational phonological entities without phonetic properties (cf. Hjelmslev 1948: 31); one passage where it becomes clear that the Saussurean position does not unconditionally support such a move in general, is Cours p. 190, where Saussure explicitly points out that abstract entities exist only by virtue of the concrete entities that instantiate them. Hjelmslev had this argument — and, essentially, lost it — with the Prague school (cf. Gregersen 1991). In regard to Hjelmslev's Danish version of Saussurean structuralism, we must therefore conclude that the glossematic project as a whole foundered for two very good reasons. From the point of view argued here, there was one central flaw in addition to the autonomous view of structure. 
The second flaw is in the type of relations on which Hjelmslev wanted to build his theory, namely dependency relations, of which he basically recognized three: mutual dependency, unilateral dependency, plus the "zero" alternative: mutual independence. From the integrative point of view the first two types of relations are interesting enough: they are proof that there is indeed a structure rather than just a random combination. But the relations have to be understood together with the parts they relate — and one of the ways in which the relations may be understood is as functionally motivated (cf. below p. 275 on the two kinds of dependency relations). This does not mean that we cannot occasionally find cases where dependency relations acquire a life of their own; when Hjelmslev's dependency-based description of syllabic accent turned out to apply to

170 The functional basis of structure

vowel harmony (which does not co-occur with "ordinary" accent distinctions) it seemed to confirm the validity of this view. However, dependency relations are not as a rule the stuff of which linguistic categories are made. Defining vowels on the basis of dependencies led to the necessity of setting up a category of "pseudo-vowels" which were like other vowels except for the dependencies. And they are obviously incapable of exhausting the nature of content elements. Hjelmslev (1938: 159) suggests a definition of aspect and tense on the basis of two types of dependency relations; some conjunctions in Latin require certain tense-aspect choices, and "consecutio temporum" involves dependency relations between matrix verbs and verbs in subordinate clauses. However, even if it could be done (cf. Part Three, p. 423), this fairly obviously means putting the cart before the horse. Tenses and aspects, even more obviously than vowels, have properties which may sometimes give rise to dependency relations — but the dependency relation is not the ontologically primary property. Structural relations, including dependency relations, can only be understood as anchored in the substance that they structure.

The picture of structure that has been drawn above entails that linguistic structure exists and is crucial for understanding language in most of the ways claimed by Saussure. But the autonomy of linguistic structure is relative and function-based; to embed Saussurean structure in its functional context is not only possible, it is in fact necessary in order to understand the Saussurean insights properly. The difference between this account and traditional structuralist notions is thus not in the kind of structural relations it makes possible — it is only in underlining the fact that the whole structure of language exists by virtue of its functional relation with the pattern of human interaction that exists in the speech community.

2.6. American structuralism: the Bloomfield-Chomsky tradition

Structuralism, however, is not a unitary phenomenon. The difficulties in reconciling function and structure in present-day linguistics are not so much due to European structuralism as to the pragmatically much more successful American tradition. The autonomy of structure that is assumed by mainstream American linguistics has quite different intellectual sources. The key notion in the American tradition is distribution, and it is based essentially on the expression side rather than on seeing language as a symbolic entity with a content side as well as an expression side. To
understand this pattern of thinking, it is necessary to begin with pre-generative American linguistics and its dependence on the descriptive ideals of positivism.

Bloomfield, like Hjelmslev, saw his linguistic views as based on positivist foundations. But the two forms of positivism were very different. Hjelmslev's views were based on his understanding of the principles of logical description, in terms of which only relational description was scientifically respectable; when it came to the type of constructs he was prepared to allow in his theory, he was not noticeably inhibited by ontological scepticism. Bloomfield's positivism was mediated by the descriptive principles of behaviourism, which concentrated on the problem of the respectability of different kinds of data, and it included physicalism and mechanism as part of its general orientation. The scientific respectability of linguistics, according to Bloomfield (after his conversion from Wundtian mentalism), therefore rested on its ability to describe language in terms that treated language exactly as one would deal with any other physical phenomenon. What the linguist saw himself faced with, according to this view, was therefore essentially a mass of sound (the non-count formulation is essential!). Since reliance on intuition was ruled out, only actual utterances counted as data, and the job of the linguist was to subject these to scientific analysis, exactly as if they were blobs of matter under the microscope.

However, Bloomfield was perfectly aware that this was impossible in practice. In order to get off the ground at all, linguistics must make what Bloomfield called "the fundamental assumption of linguistics" (1933: 78, 147): "we must assume that in every speech-community some utterances are alike in form and meaning". Although in itself this is not an unreasonable assumption to make, cf. above p. 160, it was actually a monumental violation of the sceptical premises that were supposed to underlie the whole enterprise. It was, of course, seen as merely anticipating proper scientific procedures of identification, but that did not in itself make it scientific.

In terms of the distinction between the two types of structure, the problem with the "fundamental assumption of linguistics" is that the "likeness" is function-based rather than component-based and thus cannot be captured within the bottom-up mechanism on which Bloomfield based his theory. When it came to hopes for a scientific confirmation of the "likeness", expectations with respect to form were naturally somewhat more sanguine
than with respect to meaning. Twaddell (1935: 63) quotes Bloomfield as having announced (in an address to the Language and Literature Club of the University of Wisconsin, 1934) that "the physical (acoustic) definition of each phoneme of any given dialect can be expected to come from the laboratory within the next decades". Until that day comes, however, we have to (Bloomfield 1933: 77) "trust to our everyday knowledge to tell us whether speech-forms are the same or different". Of course, if we replace the words "everyday knowledge" with "intuition" it would hardly make any difference with respect to the empirical underpinning of the assumption.

By giving themselves licence to operate with "sameness of form", the Bloomfieldians could get as far as the phoneme; by licensing "sameness of meaning", they could get at the morpheme (for instance "past tense", with "allomorphic variation"). But these licences were not granted without methodological headaches and doubts. It became apparent even in Bloomfield's day that sameness of form could not expect to receive scientific post-hoc confirmation: phonemes were not realized with any physically constant features. The one option that seemed to be open to Bloomfieldian structuralism was then to rely on abstractions based on distributional patterns: physically different, but potentially "same" segments could be brought together if they occurred in contexts that were mutually exclusive. Distributional analysis was also attractive because it seemed to offer a way of avoiding reliance on meaning: if phonemes could be distributionally defined, it might be possible to avoid the definition based on their distinctive function. However, as evident in Twaddell (1935), a strict adherence to such principles would mean that familiar phoneme systems could not be upheld, and only a much lower degree of "descriptive order" could be imposed on the infinite variability of sounds.
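The abstraction step just described — bringing physically different segments together when their occurrence contexts are mutually exclusive — can be sketched in a few lines of code. This is a toy illustration with invented segment labels and context notation, not a reconstruction of any actual Bloomfieldian analysis:

```python
def group_by_complementary_distribution(contexts):
    """Merge segments into one candidate unit whenever their sets of
    occurrence contexts are mutually exclusive (never overlap)."""
    groups = []
    for segment, ctxs in contexts.items():
        for group in groups:
            # join an existing group only if disjoint with every member
            if all(ctxs.isdisjoint(contexts[member]) for member in group):
                group.append(segment)
                break
        else:
            groups.append([segment])
    return groups

# Hypothetical data: aspirated [ph] only word-initially, plain [p] only after [s]
data = {
    "ph": {"#_V"},         # word-initial, before a vowel
    "p":  {"s_V"},         # after [s], before a vowel
    "b":  {"#_V", "V_V"},  # overlaps with [ph]'s contexts, so it stays separate
}
print(group_by_complementary_distribution(data))
# → [['ph', 'p'], ['b']]
```

As the text goes on to note, the procedure presupposes that the segments and their contexts have already been identified — which is exactly where intuitive judgment sneaks back in.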
In practice, reliance on intuitions of sameness was necessary in order not to cripple possibilities of description, even if "distribution" became the only theoretically safe basic notion. Before you begin a distributional analysis, you have to have an item whose distribution you investigate, and in dealing with language, this is simply impossible without some reliance on intuitive judgment; if you stick rigidly to the principles, you find yourself faced with endless variation only and no units with any distribution to describe.

Consequently, even with the licences, the structure that emerged out of American structuralism was a very modest and down-to-earth type of structure compared to what we find in European structuralism. There was little theoretical apparatus beyond the basic concepts of phoneme and
morpheme plus distributional analyses of linguistic forms on that basis (clustering of phonemes and immediate syntactic constituents), as reflected in the epithet "taxonomic". But in the context of structuralism, the Bloomfieldians deserve praise for one thing: however misguided the restrictions they imposed on linguistics, they did not make the mistake of assuming that structures exist independently of substance. The distributional patterns at sentence, morpheme and phoneme level were officially understood as being legitimate only as a way of "cooking" the raw (component) substance of sound.

2.7. Autonomy in generative thinking: the Pygmalion effect revisited

What happened with the arrival of generative grammar must be understood in this context. The points on which Chomsky stood for a radical break with American structuralism are familiar: mentalism as opposed to physicalism and behaviourism; the principle of a generative description validated on the basis of the grammaticality test as opposed to discovery procedures; the infinite number of sentences as opposed to the finite body of data; and the enriched notion of structure in comparison with a taxonomic description. What has been less obvious than the polemical opposition is the element of continuity with Bloomfieldian linguistics (but cf. Matthews 1993: 128).

The essential element of continuity is the role of the purely distributional approach in the theory. As we saw, it became apparent comparatively early that progress in linguistics was unlikely to be forthcoming based on the attempt to categorize speech sounds on the basis of their measurable properties; and already before Chomsky, more sophisticated ways of accounting for distributional phenomena began to arise. Bloomfield himself began to analyze some alternations between phonemes in terms that involved a risk of mixing levels, and invented the term "morphophonemics" to deal with cases where distribution of phonetic elements depended on morphemic factors; Harris (1951) analyzed the distribution of certain syntactic phenomena in terms of transformations, and (as pointed out by Itkonen 1992) the Chomskyan sense of "creativity" was already emphasized by Harris. What Chomsky did, seen from the point of view of linguistic theory, was essentially to make a clean sweep of all the restrictions that behaviourism
and mechanism had imposed on the sophistication of distributional analysis in linguistics, thus radically enhancing the scope for sophisticated theoretical description. Chomsky's attacks on behaviourism, more famous perhaps than any refutations in the field of psychology, were based on the demonstration that the formal structure that could be distilled out of behaviourist accounts of language was too impoverished to match the linguistic data. What was necessary were more complicated formal structures; the logical structure of the metalanguage of taxonomic linguistics was too impoverished to do the job.

The hallmark of generative grammar has remained those sophisticated types of distributional regularities that could not be captured in simpler models of grammar. Initially this was evident in the shape of the emphasis on the necessity of a transformational level of description, because of the insufficiency of the "finite state" and "phrase structure" approaches. The distributional regularities that revealed themselves in the existence of an active clause corresponding to every passive clause, or a negative and interrogative clause for every affirmative clause, could not be handled in the simpler formal models and therefore required a richer apparatus. Although transformations have been on the wane for a long time, the model is perhaps still most widely known under the name of "transformational grammar". The limitations in the occurrence of wh-words (whether captured by constraints on transformations or otherwise) remain a central example of the sophisticated types of distributional restrictions that are at the core of Chomsky's thinking about language structure.
The change from Bloomfieldian to Chomskyan distributionalism is thus analogous to the step from old-fashioned to formal logic: instead of being tied down to dealing with actual examples of the phenomenon, the formal logician or linguist can set up his own inventory of constructs at will, postponing application to real data until a later stage. But just as in the case of logic, ultimately the dependence on actual examples is exactly the same — it is only the descriptive technology that changes, not its relation to the object of description. It makes no difference whether you set up an axiomatic system that begins with an initial symbol, S, and then expands it into NP + VP (subsequently mapping this formula onto an actual sentence), or you take an actual sentence and break it down into an NP and a VP. In both cases you end up with a datum matched with a description, which has to be validated in terms of current scientific criteria. The change into generative descriptive principles therefore did not involve any change in the understanding of language as characterizable in
distributional terms. As pointed out by Itkonen (1992: 73), Chomsky's descriptive practice reflects the principles of logical positivism, most obviously in the tradition of Carnap. Just as Carnap was concerned with the formal structure of the world, founded on the physical sciences, so Chomsky was concerned with the formal structure of language, based on — in the early phase — "the physical properties of utterances" (cf. Chomsky 1955 [1975a]: 127). The affinity with Carnap is evident in the retreat from the messy details of actual utterances and facts to the crystalline sphere of formal structures. Chomsky's essential argument for his I-language (Chomsky 1986: 15-46) is that the confused realities of E-language are totally unfit for scientific consumption; the only way in which language can hope to become worthy of scientific attention is by the rescue operation which puts it inside the mind of an individual and attributes proper formal structure to it.

The reason why it is necessary to spend so much time on this continuity is not to trace the historical development of linguistic thinking in the 1950s, but because it constitutes the key to understanding the source of autonomy as a central concept in generative thinking about structure. It should be obvious that if we did not have the items themselves, as occurring in various combinations, distributional regularities would not exist either; hence, distributional structure (whether or not it is described by means of a formal model) is patently not autonomous. The argument is essentially the same as in the case of Saussurean structuralism above: the fact that we can describe language revealingly by elements defined solely on the basis of relations to other elements, regardless of their physical realization, does not prove that structure is autonomous, only that it exists. The question is therefore where autonomy comes in.
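The earlier point that it makes no difference whether one expands an initial symbol S top-down or breaks an actual sentence down bottom-up can be made concrete. The following is a minimal sketch over an invented four-rule grammar, making no claim about any real generative analysis; both directions pair the same string with the same structural description:

```python
import random

# Hypothetical toy grammar: each symbol maps to its possible expansions
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["John"], ["the", "N"]],
    "N":  [["linguist"]],
    "VP": [["sleeps"]],
}

def generate(symbol):
    """Top-down: expand the initial symbol until only terminals remain."""
    if symbol not in RULES:
        return [symbol]
    expansion = random.choice(RULES[symbol])
    return [tok for part in expansion for tok in generate(part)]

def parse(tokens, symbol="S"):
    """Bottom-up check: consume the tokens against the same rules;
    returns the leftover tokens, or None if the match fails."""
    if symbol not in RULES:
        return tokens[1:] if tokens and tokens[0] == symbol else None
    for expansion in RULES[symbol]:
        rest = tokens
        for part in expansion:
            rest = parse(rest, part)
            if rest is None:
                break
        else:
            return rest
    return None

sentence = generate("S")
print(sentence, parse(sentence) == [])  # e.g. ['John', 'sleeps'] True
```

Either way round, the result is a datum matched with a description; the direction of travel changes nothing about what is claimed of the object.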
In the next section I shall argue in some detail that what is autonomous in the generative pattern of description is only the formal metalanguage, not the object it is mapped onto (which in the beginning was "the physical properties of utterances"). Any object which can be described in terms of regularities would have a similarly "autonomous" structure; by the same logic a mathematical model of population development would be autonomous in relation to actual births and deaths. The question is then how this autonomy made the transition from metalanguage to object language. I think the mechanism is the one familiar from two other historical instances that we have already come across.


The first was the pioneers of formal logic (cf. above p. 14), who thought of the rules of formal logic as constituting "the laws of thought" (Boole) or the "Sprache des reinen Denkens" (Frege); the second was the development of "strong AI", where the computational mechanism of the machine invented to simulate intelligence was reinterpreted as constituting the mechanism of real human intelligence. The fallacy can be termed the "Pygmalion" effect, resulting as it does from the creator falling in love with his creation, wanting it to be real rather than a mere imitation of reality. In all three cases, the inventors of formal metalanguages attributed the mechanisms they had invented to the human brain.

According to this picture, Chomskyan rationalism is nothing more than logical positivism gratuitously transposed into the black box of the mind. The problem of the inaccessibility of the language part of the black box is solved by the linguistic equivalent of the Turing test, embodied in the notion of "grammaticality": if the formal grammar and the human speaker permit the same structures as possible outputs, we must assume that the black box contains the same mechanism as the formal grammar; and the reasoning is untenable for the same reason as strong AI, as illustrated in the "Chinese room" argument.

2.8. Generative autonomy: empty or absurd?

In this section I discuss some aspects of generative linguistics where I think it can be shown that the metalanguage blends into the generative conception of the object language itself, creating a geological fault in the foundations of generative grammar. The autonomy issue can be regarded as the core of the problem, because it has programmatic status, stands as a central area of disagreement between generative and non-generative linguistics, and concerns a property where the difference between object level and metalevel is vital. I go through the argument in slow stages, because I think it is both very difficult and absolutely essential for the understanding of what linguistic structure is and especially what it is not. I hope that readers who are sceptical of one instance may find others more striking.

First, there is the question of generative terminology: the adoption of the same word, "grammar", for grammatical theory (the formal device) and the "device" in the mind (cf. Chomsky 1965: 25). In this usage, it is made officially and deliberately impossible to distinguish between metalanguage and object language, and generative theory stipulates the nature of language
(as an object of description) on the basis of the chosen metalanguage: the formal system of rules. It is of course not a case of wilful sleight-of-hand; Chomsky believes it to be obvious that it is the same thing, as also demonstrated when he goes on to talk about the learning child's "theory". In complete consistency with this position, the term "grammar" also replaces the term "language" as designating the object of description itself (cf., e.g., Chomsky 1981: 4): the object language, its structural relations, and the theory collapse into one and the same thing. Similarly, concerning the mental structures that constitute the object of description, "language" (in the sense of I-language) and "knowledge of language" come to the same thing: the same putative cognitive structures serve in both capacities. Even if you do not want to draw any principled conclusions from it, this terminological conflation of object level and meta-level has made it very difficult to get a clear picture of what is involved in the autonomy claim.

I have already suggested that the true nature of generative autonomy can only be understood in terms of the mathematical meta-level. As such, it is only an aspect of the formal method of description and thus harmless once its implications are understood. A formal model, mapped onto an object of description, works by fully explicit mathematical principles, and is therefore autonomous in relation to the object. Such a description can be applied to any object, and if the application is successful, the structure of the description will be isomorphic with the structure of the object — as the formalization "2 + 2 = 4" is isomorphic with the act whereby you take two apples, add two more, and count up to four. But this structural isomorphy entails nothing beyond the analogy that is captured in the successful mapping.
In particular, there is no claim that the structure of the object of description is autonomous — or that the object in itself is autonomous. This notion of autonomy is, in other words, descriptively empty. I shall now try to show that generative claims about autonomy which at first glance look as if they make a real claim about object language structure are defended with arguments that only make sense if they are understood in relation to the empty, metalinguistic understanding.

The central concern of Chomskyan structural description is syntax. Let us see what follows if we assume first, that syntax is a property of the object language; and second, that syntax deals with the analysis of linguistic "form" in the sense of "expression", such that submorphemic expression elements are handled by phonology and higher-level expression organization is handled by syntax. Autonomous syntax therefore means that
the structure of (morpho)syntactic "form" is autonomous in relation to semantics. In conjunction with the theory of language acquisition which goes with generative grammar, such a theory would entail that children learn to combine expression elements into well-formed clauses regardless of what the expression elements mean. However, because that is blatantly absurd, a number of generativists have claimed that various forms of linkage between semantic and syntactic organization are fully compatible with the assumption of autonomy. One instance is the notion of "semantic bootstrapping" advocated by Pinker (1989) in opposition to the purely "syntactic bootstrapping" suggested by Landau—Gleitman (1985). Pinker finds it plausible that children base their acquisition on semantic hypotheses, but these are only the initial step into the system; later, meaning is left behind along with the diapers, when the child is ready for higher things. Chomsky provides a more radical formulation of the way autonomy and correspondence with semantics can go hand in hand:

We can distinguish... an absolute thesis, which holds that the theory of linguistic form, including the concept "formal grammar" and all levels apart from semantic representation, can be fully defined in terms of formal primitives... To show this strong thesis to be false, it will not suffice... to show that there are systematic relations between semantic and syntactic notions. This assumption is not and has never been in question; on the contrary, it was formulated explicitly in conjunction with the thesis of absolute autonomy. (Chomsky 1977: 43-44)

For a comprehensive recent restatement of this basic view in relation to the iconicity issue, compare Newmeyer (1992). If there are systematic correspondences between syntax and semantics (but not in all cases), the natural conclusion would be that there is some kind of partial autonomy (cf. below, pp. 306-307 and 473); in those familiar cases where syntactic patterns cut across semantic distinctions, a functionalist would say that children learn them by going from simple to more complex hypotheses about the relation between clause content and expression. But Chomsky's position appears to correspond to that of a man claiming to be totally independent of the mafia because it supplies only seventy-five per cent of his earnings: the argument does not make sense if we see autonomy as a property of syntax in the object language. In terms of the "meta" understanding, however, it makes perfect sense. If we choose to take the
set of possible sentence expressions in English and describe it in terms of a formal system while ignoring meanings, we are free to do so — but this has no implications whatever for the relationship between sentence expressions and sentence meanings.5 In other words, the autonomy thesis is in fact descriptively empty: it means only that we choose to divide the field up and describe syntactic regularities on their own, postponing correspondences till later.

The next problem is that it is not clear that we are to understand syntax as part of the analysis of linguistic "form" in the sense of "expression" — because the conflation of object level and meta-level also affects the understanding of syntax. Since syntax is the central component in grammar, this follows from the ambiguity of the word "grammar" as described above: syntax is both in the grammar book and in the child's head. Further, it is part of the thinking in terms of autonomous syntactic structure that its status is ambiguous in relation to the distinction between content and expression. The centrality of syntax in the generative paradigm means that syntax is located between phonology (clear-cut expression) and semantics (clear-cut content), facing both ways.

This intermediate position, I suggest, is also better understood if we look at it from the metalinguistic point of view. On the level of the formal meta-language, the place of syntax as the central discipline is a natural consequence of the paradigm of formal simulation itself (cf. also Harder 1991): since symbol "values" are by definition outside the formal device itself, it is natural to concentrate attention on the formal operations that manipulate the symbols. But formal symbols, like computational simulations, can be used to describe anything — they are essentially neutral between describing (object-language) syntax, or semantics, or digestion, or logical reasoning, etc.
When the task at hand is to describe an object by means of a formal model, at the end of the day you should have a model and an object with properties that match; but you can approach that end from two different viewpoints. The fact that Chomsky's chosen vantage point is the metalanguage is evident throughout his writings.6 As a logical sequel to claiming that Chomsky is smuggling the properties of his chosen metalanguage into his conception of human language, I am here suggesting that his view of what constitutes syntax is not constrained by any idea about what syntax is in an object language. Syntax is simply whatever the formal structure maps on to.


If this description is true, there is really no reason why Chomsky should worry about the autonomy of syntax as a particular subpart of an object language. The aim is to set up a formal device that can generate "all and only" the grammatical sentences; once such a device is attributed to the cognitive system instead of being based on "physical properties", it does not matter whether the object-language elements that these formal primitives are mapped on to are semantic or not. I think this implication is discernible in the development of Chomsky's thinking. In his early phase, Chomsky made much of the rigid separation between syntax and semantics (cf. the opening discussion in Chomsky 1975a: 47, originally dating from 1955); and it would have been heresy to say that Chomsky was really describing semantic relations. In 1975, however, it appears the burden of proof has shifted to those who think the theory ought to make a difference between semantically and syntactically motivated ungrammaticality; Chomsky even doubts that the question makes sense (cf. Chomsky 1975b: 95).

I would like to quote one analysis that illustrates the point, making it clear that Chomskyan "syntactic" formulae are inherently ambiguous with respect to being understood as representing semantic and non-semantic relations in the clause. Two representations of the clause John seems sad are compared with respect to the issue of "logical form":

(1) John_i [VP seems [S t_i to [VP be sad]]]
(2) seems (sad (John))

In theories of the sort I will be considering here, (1) is the S-structure representation of the sentence, while (2) — understood as indicating that sad is predicated of John and seems of the proposition sad (John) — is reminiscent of notations of familiar logics. The null hypothesis, within the theories considered here, is that (1) is also the LF-representation. While it would be simple enough to design an algorithm to convert (1) into (2) or something like it, empirical evidence would be required to support any such move. I will tentatively assume that representations such as (1) are indeed appropriate for LF. (Chomsky 1981: 35)7
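Chomsky's remark that converting (1) into (2) would be algorithmically simple is easy to verify for this clause shape. The sketch below uses an invented tuple encoding of the bracketing (it handles exactly this raising configuration, nothing more general); the coindexing is resolved by substituting the antecedent for its trace:

```python
# Hypothetical encoding of (1) as nested (label, children) tuples;
# "_i" marks the coindexing of the raised subject and its trace.
s_structure = ("S", ["John_i",
                     ("VP", ["seems",
                             ("S", ["t_i", "to",
                                    ("VP", ["be", "sad"])])])])

def to_predicate_form(node, binders=None):
    """Flatten a (1)-style bracketing into (2)-style predicate notation."""
    binders = binders or {}
    label, children = node
    subject = children[0]
    if "_" in subject:
        name, idx = subject.split("_")
        if name == "t":                  # a trace: substitute its antecedent
            subject = binders[idx]
        else:                            # an antecedent: record the binding
            binders = {**binders, idx: name}
            subject = name
    vp = next(c for c in children if isinstance(c, tuple) and c[0] == "VP")
    head, *comps = vp[1]
    if head == "be":                     # copula: the adjective is the predicate
        return f"{comps[0]}({subject})"
    clause = next(c for c in comps if isinstance(c, tuple))
    return f"{head}({to_predicate_form(clause, binders)})"

print(to_predicate_form(s_structure))    # → seems(sad(John))
```

The point in the text stands either way: that the conversion is mechanically trivial tells us nothing about which of the two representations is the appropriate one for LF.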

For Chomsky, the point is that once we have a perfectly good "syntactic" structure, why bother to invent something special to describe meaning? There may be semantic relations which could plausibly be described as (2), but obviously they are not interesting in themselves, the way he sees it. If we turn the argument around, it follows that there is no way to know whether formula (1) describes clause content or (strongly autonomous)
syntax, until Chomsky tells us; the distinction between facts about content and facts about expression is uninteresting compared to the inner logic of the descriptive calculus itself. On this understanding, the autonomy of linguistic "form" has nothing to do with form as opposed to meaning (which is one reason why I use the terms "content" and "expression"). Rather, linguistic "form" is understood as designating "structural organization". This kind of object, considered in itself, makes good sense at the metalevel where only formal organization matters, as described. If we transfer it to the object level, however, we get an aspect of language that is somehow neither content nor expression, but pure "formal structure".

It should be noted that the unclarity with respect to the status of grammar in relation to the distinction between expression and content was not invented from scratch by generative grammar. Like many other aspects of generative grammar, it is to some extent carried over from the grammatical tradition, where we find a concern with elements of "grammatical form" without any systematic theory about the role of grammar in relation to the distinction between content and expression (cf. Hjelmslev 1947: 127). Jespersen saw grammar in a mediating role, which resembles the generative conception with syntax in the middle, phonology on the one side and semantics on the other; for a full discussion of the way in which Chomsky sees himself as the heir to Jespersen, see Chomsky (1977).

However, according to the position defended above, in which structure is always based on substance, a level of pure structure intermediate between expression and content is in principle impossible. We cannot have a structure which is not the structure of anything. Therefore such a claim is necessarily wrong. But we can try to reconstruct generative syntax on the understanding that it must apply to an identifiable domain, and consider the autonomy claim on that basis.
If we agree with the Stoics and European structuralism that language basically consists of expression and content, we must know whether a putative structural description is meant to apply to content phenomena, expression phenomena, or whole signs incorporating both sides at the same time. Where does "grammatical form" belong, reconstructed in terms of this picture? According to the two-sided view, grammatical elements have content and expression just like lexical elements — only the content-expression relations are sometimes less transparent than in the lexicon. A reinterpretation of
"grammatical structure" as involving both content and expression (rather than being neither one nor the other) is plausible in that it fits with the practice of positing abstract elements based on both semantic and "formal" criteria. It does not salvage the autonomy thesis, of course: that would entail that abstract structures posited on the basis of both form and meaning were autonomous in relation to both form and meaning — which does not make sense. On the metalinguistic understanding, where it does no harm, it means only that grammatical structure exists and can be captured by a formal model, and the same would apply to the structure of a piano or a petrochemical plant.

It may appear as if this problem in generative thinking is specific to the Chomskyan version, in which there is a central syntactic component which is accorded an unwarranted degree of autonomy in relation to semantics. But the problem is deeper than that; it applies whenever the object language is conceptualized in a way that transfers the autonomy of formal structures as a metalinguistic tool to the object of description itself. As an illustration, we may take the position of Jackendoff (1987a, 1990). Jackendoff recognizes a distinct level of semantic structures; and in contrast to Chomskyan models, the semantic level is not marginalized in relation to the central area of syntax. The autonomy thinking, however, is perhaps even more pervasive. Against those (for instance Fodor, cf. above p. 43) who say that meaning is different from other linguistic properties in establishing relations with the outside world, Jackendoff insists that from the cognitive point of view there is no difference: phonology, syntax and semantics are three autonomous levels of organization, each with its own "formation" rules, so that there is no derivational relationship which establishes one as "prior" to the others.
Jackendoff argues that the external relations which Fodor wants to capture would be totally out of place in relation to phonology, syntax and musical cognition — so why should semantics be any different (1990: 14)? It may appear that this is simply ridiculous; to use the properties of sounds in arguing about the properties of meaning is on the face of it an obvious non sequitur. But that would be to misunderstand the logic on which this argument is based. As in Chomsky's theory, autonomy has really nothing to do with the object described; it is a property which is smuggled from the metalanguage into whatever object it is used to characterize. All components of Jackendoff's theory are autonomous of each other, and have no defining relation to anything outside themselves — they are really all "formal syntaxes", and they are all attributed to the cognitive system in the
same manner as Chomskyan syntax. Jackendoff says that it is fine with him if people prefer to speak of the syntax of thought, rather than semantics, as long as it is realized that it is different from "straight" syntax; and we can add that the phonological component could similarly be seen as a syntax of phonological elements. The better-known doctrine of the autonomy of "straight" syntax is just Chomsky's preferred version of the basic flaw in the generative pattern of thinking.

2.9. Underlying structure I: significant generalizations and the naming fallacy

But it is possible to look at generative descriptions without worrying too much about the questions of autonomy and mental reality. The question is then the more down-to-earth one of whether the generative descriptive practice is a good way to say something about language. I shall try to say something both about the pragmatic strengths of the approach and about two ways in which I think the basic philosophical weaknesses vitiate the descriptive practice itself.

At the core of the generative pattern of thinking lies the metaphor of underlying structure as opposed to the "surface". This is the result of assuming that the formal structure of the metalanguage is lurking somewhere inside the object language, rather than being a way of talking unambiguously about it. This pattern of thinking is independent of any particular version of the model, and captures the motivation for assigning generative structures to sentences. What meets the eye is the surface; the fundamental truths lie deeper, and only the discerning mind of the linguist can lead us to them. As will be apparent, this is a straightforwardly Platonic type of argument, with the Chomskyan linguist assuming the garb of Plato's philosopher; this intellectual debt is recognized in using the name "Plato's problem" (Chomsky 1986: xxv) about the question of how the language-acquiring child can go from the supposedly debased linguistic forms he is faced with to the ideal grammar in the mind. The notion of an underlying level, which the linguist sets up by procedures that cannot be exactly specified, was introduced as a logical consequence of the demonstration that the aim of establishing "discovery procedures" was simplistic: there is no royal road leading directly to the truth. The essential element in this descriptive procedure has survived a
number of revisions and spread to many other grammatical frameworks. Even many pragmatically oriented linguists in practice invoke the metaphor of depth; Chafe (1971), whose position with respect to generative grammar is very close to the one adopted here, is still able to praise Chomsky for showing that the well-formedness of structure "can only be determined with respect to something 'deeper' than surface structure" (1971: 7). The points Chomsky made originally against less sophisticated models seemed for two decades to command immediate agreement. Setting up a level of description designed to show what was "really" going on caught on like wildfire, demoting the actual forms of language to "surface" status and linguists who insisted that one should not get too far away from observable linguistic fact to the status of people fooled by mere appearances. The development of generative semantics carried this new-found type of orientation to extremes, as representations became more and more abstract to make room for more and more generalizations; there was no stopping mechanism built into the descriptive system.

The basic argument against the descriptive practice of setting up underlying, uninterpreted structures is this difficulty in restraining the use of underlyingness — in other words, the uncertain scientific validity of such descriptions. The underlying level, at which the descriptive procedure begins, is (as described above) defined by the linguist himself, and cannot be validated directly in terms of canonical procedures; and description then moves by rules towards the "surface", where we find actual linguistic expressions. As often pointed out, such descriptions are in danger of being vitiated by a version of the "naming" fallacy: you notice certain facts about language (at the "surface", because that is the only place where you can observe anything "directly") — and in order to account for them you assign an underlying form to them.
But since the underlying form has no independent empirical status, it really adds nothing to our knowledge of the "surface phenomena". The classic example, taught to generations of Danish students as part of a preliminary exam in philosophy, was the explanation of sight in terms of an underlying "ability to see". Chomsky defends his descriptive practice with reference to the fact that physicists are permitted to postulate invisible entities to explain visible phenomena — why cannot linguists do that? The aim was in a sense to devise a mathematically inspired linguistics that would do for language what mathematics had done for physics. Since the theory of relativity existed as a model before being experimentally validated, the abstractions appropriate for linguistic description could similarly be postulated first and
validated afterwards. And if physicists can note the correlation between thunder and lightning and devise a theory of underlying, invisible electron streams to account for both, what is the difference? The basic objection to this line of reasoning is that it is only scientifically valid to the extent that the postulated entities can be experimentally confirmed. Electrons are in principle visible, only they happen to be too small, so one must register them in other ways. The constructs posited by generative grammar do not share this property. In themselves they are tacit, i.e. in principle inaccessible to direct conscious inspection. There is little experimental evidence to provide indirect support for the generative theory, and what there is necessarily depends on conscious reactions from language users, who can hardly be assumed to isolate their judgements from all sorts of factors that are irrelevant to the inaccessible mental structures as such. Intuitive grammaticality tests tap all criteria that could rule out utterances; apart from semantic factors, there are social, emotional and intellectual reasons why some sentences may get starred — in short, actual grammaticality judgments are just as debased as the actual sentences people produce. As the distinction between competence and performance is defined and used in practice, it puts underlying constructs beyond the reach of experimental control. No-one would seriously claim that generative grammar has experimentally acquitted itself in a manner that enables it to borrow scientific legitimacy from physics; Chomsky's occasional glib remarks about experimental methodology (cf. Chomsky 1988: 190) must be characterized as hand-waving. 
The escape from transparent circularity in the face of the missing experimental evidence lay in the fact that underlying structures were used to relate similar phenomena that were not identical at the "surface", and to bring out differences in superficially identical phenomena — in both cases in order to express what Chomsky called "significant generalizations". To the extent that the theoretical poverty of Bloomfieldian structuralism had made it impossible to capture such generalizations, the doubtful empirical status of underlying form mattered less than the fact that it was now permitted to incorporate such previously forbidden observations in the theory. In other words, it is a way of talking about them, and as such part of a canonical humanistic form of scholarship (cf. Itkonen 1978). The term "naming fallacy" only applies if something is put forward as a full scientific theory. To give a linguistic phenomenon a name and a place in one's description is obviously a step forward from a position in which it had neither a
theoretical habitation nor a name. Names like "raising" and "equi" will long survive the "deep structures" where they arose. The mistake, in other words, consists in believing that putting a phenomenon somewhere in the underlying form and showing that you could invent ways in which to derive the right surface form is anything more than a way of saying "I have now noticed phenomenon X (in terms of which a and b are similar and different from c) and put it inside my description". Most of the real work is still to be done. The first step is to give a down-to-earth description of the phenomenon, and the second to show that the position given to the phenomenon in one's underlying structure is a good way of saying just that. Higher, "explanatory" ambitions are even further away.

Criticism of this kind has been around more or less from the beginning of generative grammar. One reason why it has not been felt to be more damaging was Chomsky's notion of "empirical description", his version of Carnap's argument for formal logic as a metalanguage in philosophy: a formally explicit description based on mathematically precise rules can be used to make precise predictions — in contrast to informal definitions of the kind we are familiar with from traditional grammar, where the meaning of a concept can change imperceptibly so as to fit all cases, ending up making predictions so vague that they cannot be tested. On this point a Chomskyan device had an obvious strength: once a set of rules had been set up to account for a phenomenon, they generated (and excluded, respectively) a whole range of structures which had to be (un)grammatical if the account was to hold water. Chomskyan linguistics therefore "captured" a range of insights embodied in traditional grammar, while giving them "empirical" status by embodying them in a framework guaranteeing formal explicitness and testability.
This type of empiricalness is the one associated with logical empiricism, and hence subject to Putnam's objection; but there was a clear operational value in it, which served to validate the claim on the heuristic level of actual work with the data. When generative descriptions were confronted with earlier descriptions, the fact that generative theories could be simply and controllably used to derive counterexamples meant that they were quite often actually better and more carefully worked out than earlier descriptions. Earlier authors (Hjelmslev is a case in point, cf. Gregersen 1991) had often remained satisfied with intuitions followed up by vague outlines that could be worked out in a number of directions and would require extremely hard work of those who wanted to test them out against
actual examples (cf. Basbøll (1971) and (1973) on Hjelmslev's theory of the Danish expression system). Also, the fact that the generative descriptive procedure is unsound does not imply that all generative descriptions are wrong. Structural properties embodied in concrete generative descriptions can of course be isomorphic with important structural properties of the sentences they describe, in which case generative descriptions do tell us something true about sentences. The fact that traditional theories about sentence structure were incorporated into generative grammar makes them neither more nor less correct than they were before the incorporation. But just as problems in the procedure do not make generative theories wrong, good generative descriptions do not validate the descriptive system in which they are couched.

2.10. Underlying structure II: distribution vs. semantics

Apart from the problem of empirical substance, the chief danger in the underlying-to-surface picture is in the conflation it creates between semantic description and distributional description. As described above, generative linguistics was at the beginning concerned with roughly the same sort of facts about language as Bloomfield, and the model was developed to take account of distributional regularities. Since then, interest in semantics has been increasing steadily. Because of the descriptive procedure, semantic phenomena could only be incorporated in basically the same slot as distributional phenomena: as elements in underlying structure.

The problem with that is that the whole apparatus, as driven by the grammaticality test, is oriented towards deriving the right sentence expressions. The basic empirical control takes place at the output end: does the device generate "all and only the grammatical sentences"? The descriptive procedure is biased towards the expression side; the linguist is free to suggest what he wants at the underlying level, provided he can show that it makes the right predictions at the expression end of the procedure. This means that semantic properties of sentences are among the "invisible" properties that the linguist chooses to assign as the initial representation of the sentence. As an example, in languages with lexical, semantically opaque gender, nouns must figure in the underlying representation with a gender value attached (cf. Chomsky 1965: 171). This gender symbol (M, F or N) adds
nothing semantic to the sentence, but will stand among other features which are clearly semantic and yet others whose status is unclear. The problem of seeing content as "underlying" and expression as "surface" can be illustrated if we compare it with the relationship between a "phonological form" and a "surface-phonetic form" — where the "underlying" phonological representation stands in the same relation to the "surface" phonetic realization. In phonology, underlying structure clearly does not have the status of semantic content in relation to the phonetics; it only contains the distributional regularities that are not directly visible in the "surface" phonetic form. From the point of view of the descriptive procedure, distributional regularities without semantic content thus end up in the same role as semantic regularities — as ammunition for deriving the right morphosyntactic output. In the early days of generative grammar the standard pattern was to reinterpret everything as syntactic generalizations.

I shall briefly go through a model example that has been used by Chafe (1970: 2) and Bolinger (1977: 2), demonstrating how an ambiguity can be understood either as semantic or syntactic, with no empirical difference. The example is one of the classical ambiguities (Chomsky 1957: 88):

(1) The shooting of the hunters

The ambiguity can be captured, as Chomsky did then, by setting up a transformational relationship between (1) and either "the hunters shoot" or "they shoot the hunters". Instead of the syntactic construction that meets the eye we get two other syntactic configurations, from which (1) can be derived; because the new structures are not ambiguous the way the first was, the ambiguity is resolved, and we have simultaneously set up a relationship between linguistic forms that are similar in the relevant respect. It is this procedure which is elevated to a general explanatory principle underpinning the notion of level of representation (Chomsky 1957: 85).
But we could also "capture" the ambiguity by noting that the hunters can be understood as having two different semantic relationships with the meaning of the verb shoot (agent or patient). These semantic relationships also turn up in other connections, in combinations with different types of linguistic items; and in all cases where they turn up, they will of course have to be part of the linguistic description. In most linguistic contexts there is no ambiguity, but the linguist will note that the gerund permits NPs with either relation to be attached by means of an of-prepositional phrase.
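The point that the two routes pick out the same two readings can be laid out schematically in code. This is a toy illustration under my own assumptions, not a formalism from the text: the list and dictionary representations are invented here purely to make the parallel visible.

```python
# A minimal sketch (not the book's formalism) of the two ways of
# "capturing" the ambiguity of "the shooting of the hunters".

# Syntactic route (Chomsky 1957): relate the gerund to two distinct
# source clauses, neither of which is ambiguous.
syntactic_sources = [
    "the hunters shoot",       # the hunters as underlying subject
    "they shoot the hunters",  # the hunters as underlying object
]

# Semantic route: assign the NP one of two roles relative to 'shoot'.
semantic_readings = [
    {"verb": "shoot", "the hunters": "agent"},
    {"verb": "shoot", "the hunters": "patient"},
]

# Either description yields exactly two readings for one surface
# string; nothing empirical distinguishes the two accounts.
assert len(syntactic_sources) == len(semantic_readings) == 2
```

Both representations do the same descriptive work, which is precisely why a bare formal symbol at the underlying level cannot tell us whether it is "syntactic" or "semantic".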
Once the ambiguity has been located as a property of the construction "gerund + of-phrase", the semantic description can do the same job as an underlying syntactic structure. And if we couch the underlying entity in terms of a formal symbol, no-one can tell whether it is to count as a syntactic description (underlying object vs. subject) or a semantic one (agent or patient). The formal model itself does not distinguish.

With time, the postulation of semantic differences became more popular. Indeed, the obvious way of validating the claim that there was something "underlying" the morphosyntactic form was simply to equate it with meaning. The way in which the two notions can be conflated can be illustrated by a quotation from Haiman (1978: 565):

Similarity in the superficial form of grammatical categories usually reflects an underlying similarity of their meanings

If we let "deep" equal "semantic" — a solution that has been proposed in various forms from generative semantics onwards — we get a clear locus for meaning, namely the initial stage of the derivation. But unfortunately this does not solve the problem, as long as the description follows the generative pattern of moving by rules towards the "surface"; the rules remain intermediate between content and expression in the manner criticized above in relation to "autonomous" syntax.

It may be thought that the problem is handled by the distinction between different levels of analysis of the clause, distinguishing (as the main levels) a phonological, a syntactic, and a semantic representation. The "levels-of-analysis" approach in the generative paradigm has an obvious affinity to the general notion of ontological levels, as applied to language. The three representations are standardly compared with the levels of analysis that can be applied to a physical object; and this is not restricted to generative theory (cf. Winograd 1976, Leech 1981, Køppe 1990). This pattern of thinking fits very neatly with the underlying-to-surface pattern: one could set up a "grammar" with an atom level permitting "grammatical" constructions such as H2O at the next higher (molecular) level, etc. All these underlying levels can then be justified by the way they account for the "surface" behaviour (= the directly accessible aspects) of the object analyzed. But according to the understanding of meaning argued above, this analogy does not hold — basically because semantic content is not a component-based property that inheres in the expression as such. As described in the
beginning of this chapter, it is a function-based property assigned to it by virtue of the role it plays in communicative interaction. Let me try to illustrate the falsity by means of an obvious straw man:

The physical levels capture the fact that certain properties only arise as we move from lower to higher levels of analysis, following the basic bottom-up directionality of component-based structures. Thus neither hydrogen nor oxygen has water-properties, although water is made of hydrogen and oxygen. Similarly, the word finger consists of two syllables — neither of which means anything in itself. The semantic properties only arise when the syllables are combined.

As will be apparent, this is an entirely spurious parallelism. Only the concept of function-based structure will supply a proper structural and ontological home for meaning.

2.11. Autonomy: final remarks

We have now discussed two sources of autonomous structure: the langue of Saussurean structuralism, and the formal structures of logical positivism. There is a very important difference between the two modes of thinking. The first is meant to say something about the object of description, namely "language is pure structure". In order for this to be interesting, there must be other possibilities available, i.e. there must be things around which are not pure structures. The second is meant to say something about the metalevel, i.e. about scientific description: only descriptions in terms of logical structures count as scientific. Therefore, the two points of view are rather different in scope and content and are not easily combined. Nevertheless, they sound so similar that it is easy to confuse them. Hjelmslev (cf. Hjelmslev 1948) and Chomsky both did this, although their misunderstandings proceeded in opposite directions. Hjelmslev understood the principles of logical description in such a way that they could underpin his own theory of language, yielding the apparently invincible combination argument: not only is language in fact structure, but this is also the only scientifically permissible way to describe it (cf. Harder 1974). As opposed to that, Chomsky's contribution to linguistics was basically a more powerful metalanguage, which — in contrast to Hjelmslev's — lived up to the requirements of logical positivism. Not content with that,
however, he proceeded to devise a conception of language as an object of description which matched his metalinguistic structures point by point. On a very abstract level, Hjelmslev's and Chomsky's projects were quite similar: both wanted to set up a self-contained calculus that was constructed purely on formal principles, and then apply it to the description of concrete languages. Both made the mistake of confusing the autonomy of the calculus with the autonomy of the structure that was inherent in the object.8 Generative grammar was historically important in that it came to inaugurate an era of description based on formal calculi, and did this at a time when development in artificial intelligence made this part of a general upsurge in cognitive research. But ultimately the two theories paint themselves into the same corner; and generative grammar can survive as the mainstream approach to language only through the strategy that was exemplified in relation to the autonomy discussion — by a covert retreat from the untenable to the trivial version of the theory.

3. Clause structure in a functional semantics

3.1. Introduction

This chapter presents the closest thing to a "model" that will be found in this book. I try to spell out the implications of the functional theory of meaning and structure for the description of structural relations in the clause; and this is the view of clause structure that will be applied to the description of tense in Part Three. The argumentation centres on its three main features: the division of syntax into two sub-disciplines, one on the content side and one on the expression side; the emphasis on that half of syntax which is simultaneously part of semantics; and the implications of the functional (instructional) view of meaning.

3.2. On content and expression in syntax

The picture of syntax presented above differs from the traditional pattern of thinking in two ways. First of all, it splits syntax into two halves, where the tradition regards syntax as one homogeneous area. Secondly, by doing so it removes syntax from the pedestal where it could stand apart from substance both on the content and expression side, and raises the question: what is it that is structured (organized, brought into coherent order) by syntactic structures? Experience tells me that it takes a while to get used to this way of thinking about syntax. I shall therefore try to illustrate it with some care, at the risk of being somewhat repetitious.

The standard picture informs us that language contains an inventory of morphemes (minimal signs), pairs consisting of a content and an expression side. Syntax presupposes such an inventory, and describes the way these elements get organized into larger wholes. If we accept these two premises, it follows that the combination of morphemes into larger units must involve two sets of operations going on at the same time: the minimal content elements or meanings are combined into larger meanings, and the expression elements are combined into larger expressions.1 Speaking of a "syntactic relation" is therefore necessarily an abbreviation of two relations: one on the content side, and one on the expression side. I shall speak of "content syntax" as describing how meanings are combined into sentence meanings, and "expression syntax" as describing how expressions
are combined into sentence expressions. The term "morphosyntax" is often used about the expression side. The head-modifier relation, for example between Wasser and reines in German reines Wasser, corresponding to English water and clean in clean water, can be used as a hopefully uncontroversial example. This relation has

(2) an expression side — which in the case of English most obviously involves the order of the two expression chunks clean and water, and in German also involves concord (the -es ending on rein); prosodic contour also plays a potential role; and

(3) a content side — which involves the way in which the meaning 'clean' combines with the meaning 'water'; here there is no obvious difference between the two languages
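The two simultaneous combinations can be sketched in code. This is a toy illustration under my own assumptions: the `Sign` class, the `modify` function, and the bracketed content notation are invented here to make the two-sidedness concrete, not machinery proposed in the text.

```python
# Toy illustration (invented representation) of a "syntactic relation"
# as two simultaneous relations: one on the expression side, one on
# the content side.
from dataclasses import dataclass

@dataclass
class Sign:
    expression: str  # signifiant
    content: str     # signifié

def modify(head: Sign, modifier: Sign) -> Sign:
    # Expression syntax: in English, the modifier precedes the head.
    combined_expression = f"{modifier.expression} {head.expression}"
    # Content syntax: the modifier meaning restricts the head meaning.
    combined_content = f"{modifier.content}({head.content})"
    return Sign(combined_expression, combined_content)

water = Sign("water", "'water'")
clean = Sign("clean", "'clean'")
phrase = modify(water, clean)
print(phrase.expression)  # clean water
print(phrase.content)     # 'clean'('water')
```

A German version of the sketch would additionally adjust the modifier's expression for concord (rein becoming reines) while leaving the content operation untouched, which is exactly the asymmetry the example is meant to bring out.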

Content syntax is thus describable as the combinatorial aspect of semantics, while expression syntax deals with combinations of expressions that occur above the level of phonology. As soon as the job of sorting out the content and expression parts of syntax gets under way, it becomes apparent that most of the relations traditionally understood as syntactic are clearly semantic in nature. In the example given above, the head-modifier relationship between 'clean' and 'water' cannot be understood except in semantic terms; in fact, the standard syntactic hierarchies reflect the way meanings, rather than expressions, go together in the clause. Concord systems serve to express such hierarchical relations, but the stuff of which the hierarchy itself is made is the organization of the content elements, rather than the expression elements as such. There are hierarchical elements on the expression side as well in the form of patterns of accentuation and chunking mechanisms, but these do not reflect what is standardly assumed to be the "syntactic" structure; for instance, intonation units in Danish group unstressed syllables in the beginning of a word together with the previous stressed syllable (cf. Thorsen 1983), thus creating boundaries in the middle of words.

This is why generative syntactic structures, which ignore the two-sidedness of syntax, are not straightforwardly amenable to functional reconstruction. If a syntactic structure is posited on the basis of both expression and content properties, and presented as autonomous — and that is what happens if we talk about the head-modifier relation "as such" — its
basic functional anchoring becomes opaque: the description does not tell us what content you convey by means of what expression device. If we want to unpack the hidden functional underpinning of the structure, we have to translate it into two different things at the same time: an expression side and a content side. In the case of our chosen example, the description of the head-modifier relation must be given in terms like (2)-(3) above. This dual reconstruction is only possible if expression and content properties always match; it requires one-to-one correspondence between content and expression. Thus, curiously enough, the only way in which a generative syntactic structure, supposedly autonomous of semantic considerations, can be one hundred per cent correct, is if there is that total structural isomorphy between semantic structure and "formal" structure which generativists have spent so much time arguing against. To take a simple example, we can consider phrase structures corresponding to the familiar "immediate constituent" analysis, beginning with the initial symbol S. When interpreted, this element stands for an item, namely a clause, which has both content and expression. So far, so good: we generally take it that to a clause expression there corresponds a clause content. Already at the next stage, however, serious problems begin to arise: where do we make the first cut? No matter where we divide it, as long as we make no explicit distinction between content and expression we implicitly claim that the clause is divisible in the same manner on the content and the expression side. There is the traditional option, followed by Chomsky, of dividing it into subject and predicate. This division goes back to Plato and Aristotle, where it was essentially based on semantic criteria. 
In the standard examples of subject-predicate statements it is plausible on the expression side as well; but both from a semantic and from an expression point of view several other cuts might be just as plausible. One alternative possibility is to divide it in the manner of Fillmore (1968: 23) into a modality constituent and a propositional nucleus; this would appear to be mainly a semantic division, with opaque relations to the clause expression side. Then again, we can ignore the "modal" elements and divide it into verb + actantial and circumstantial elements; this is again an intuitively "isomorphic" division, but it does not cover all the content of the clause. We can also divide it three ways into a subject, an auxiliary element ("infl") and a predicate; this is again opaque in relation to expression, but has some plausibility with respect to content. The possibilities are, if not actually legion, at least numerous.
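The competing "first cuts" can be laid out side by side. This is a hypothetical illustration: the example clause and the exact groupings are my own, chosen only to show that the same material can be partitioned in each of the ways just listed.

```python
# Hypothetical illustration: four equally conceivable "first cuts"
# of one clause, each grouping the same material differently.
clause = "the hunters will shoot the ducks today"

first_cuts = {
    "subject + predicate":
        [["the hunters"], ["will shoot the ducks today"]],
    "modality + proposition (Fillmore)":
        [["will"], ["the hunters shoot the ducks today"]],
    "verb + actants/circumstantials":
        [["shoot"], ["the hunters", "the ducks"], ["today"]],
    "subject + infl + predicate":
        [["the hunters"], ["will"], ["shoot the ducks today"]],
}

# Nothing in the formalism itself selects among these analyses.
for name, cut in first_cuts.items():
    print(name, "->", cut)
```

The point of the sketch is negative: a formal phrase-structure notation accommodates any of these divisions equally well, so the choice must be argued on content and expression grounds, not read off the formalism.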


The general moral of all this is that even if we forget about the problems attached to the autonomy issue as such, we get into problems with all structures which claim to be merely syntactic without addressing the expression-content relation. First of all, if one side is not structured in exactly the same way as the other, we have to ignore either expression or content criteria when we set up the structure. Secondly, if there is more than one set of relations on either the content or expression side, the complete structural description cannot be isomorphic with one quasi-algebraic structure — whatever we do, it will be only a partial truth.

The alternative that is provided by the two-sided and non-directional picture is the task of describing a clause by investigating both what the expression relations are and what the content relations are in the clause. It may be asked whether content is not just another word for "underlying". But in addition to avoiding the conflation of distributional regularity and semantic content, there is another fundamental empirical advantage: semantic relations should not be seen as "abstract" or "hidden below the surface" — they are just as accessible as the expression elements. In fact, they are accessible as part of the very same feat performed by the language user: you can only recognize a slice of sound as "grammatical" (in the sense of being well-formed on the expression side) by simultaneously recognizing it as "semantical", i.e. as conveying a particular meaning.

The unclear status of the two-sidedness of language in generative grammar in combination with the civil wars in the generative camp has irretrievably associated the word "syntax" with something non-semantic, thus generating massive confusion about the nature of relations between sentence elements.
The problem of "the borderline between syntax and semantics" arises naturally because of the role of underlying constructs as ammunition for predicting the right "surface" output (compare the example with lexical gender above): when one postulates an underlying construct, it is difficult to tell whether it is "syntactic" or "semantic", because both items, as "formal primitives", are uninterpreted and do the same job. Once it is realized that syntax deals with the combination of items at and above morpheme level, and thus that half of syntax deals with semantic content elements, the other half with expression elements, this question reveals itself as meaningless — as an artifact of a misbegotten descriptive procedure. It is like talking about the borderline between the content of a poem and its division into stanzas and lines: where does structure end and content begin?

On content and expression in syntax 197

In talking about "content" vs. "expression" syntax, it needs to be stressed that they are two aspects of one phenomenon. Signs have two sides, and you cannot have combinations on one without simultaneously making combinations on the other. What the distinction points up is that there are two things happening at the same time: when two-sided items are combined, it results in two necessarily different operations happening simultaneously; and as discussed above, the things that happen on one side cannot always be inferred from the things that happen on the other. The speaker must therefore handle two simultaneous tasks: expression elements need to be combined into a complex expression, and content elements need to be combined into a complex content. It would be of little use to be able to combine expressions fluently without being able to work out the combined meaning, or to be able to combine meanings without stitching together the appropriate combined expression. The status of syntax in this respect is fully comparable to the other main linguistic compartment, the lexicon. Nedergaard Thomsen (1992: 221) appropriately speaks of a phonological sublexicon (the list of meaningful word expressions) and a semantic sublexicon (the list of coded word meanings) such that a complete lexical description of a language (= a dictionary) consists in a pairing of the two lists. It should come as no surprise, therefore, that a syntactic description of a language must also consist in pairing the "expression sub-syntax" with the "semantic content sub-syntax".2 The diagram illustrates the place of content syntax:

    Item 'clean'                                          Item 'water'

    Expression (signifiant)  <--- syntactic relations --->  Expression (signifiant)
                             (e.g., linear sequence)

    Content (signifié)       <--- syntactic relations --->  Content (signifié)
                             (e.g., the head-modifier relation)
                             = content syntax
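The two simultaneous combination tasks can be given a schematic sketch. The representation below (the class name, and the choice of linear sequencing and a head-modifier relation as the two operations) is an illustrative assumption for exposition, not a proposal from the text:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    """A minimal two-sided sign: every item pairs an expression with a content."""
    expression: str   # signifiant
    content: str      # signifié

def combine(head: Sign, modifier: Sign) -> Sign:
    """Combining two signs performs two operations at once:
    an expression-side operation (here: linear sequencing = expression syntax)
    and a content-side operation (here: a head-modifier relation = content syntax)."""
    new_expression = f"{modifier.expression} {head.expression}"   # expression syntax
    new_content = f"({modifier.content} MODIFIES {head.content})" # content syntax
    return Sign(new_expression, new_content)

clean = Sign("clean", "CLEAN")
water = Sign("water", "WATER")

phrase = combine(head=water, modifier=clean)
print(phrase.expression)  # clean water
print(phrase.content)     # (CLEAN MODIFIES WATER)
```

The point of the sketch is only that neither operation can be read off the other: the linear order of the expressions and the head-modifier relation between the contents are necessarily different operations, even though one act of combination performs both.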

Each minimal sign has a content and an expression side; on each side the elements enter into relations with each other; the syntagmatic relations between content elements constitute what I call content syntax. Most phenomena in that area are usually called "semantic", pure and simple — and they are indeed part of semantics. The reason I insist on the
nonstandard collocation "content syntax" is that they are also part of syntax, and this tends to be overlooked because syntax is by definition nonsemantic in mainstream terminology. It may appear as if the content syntax I am arguing for is doing essentially the same job as Jackendoff's "semantic structures" (cf. the title of Jackendoff 1990): they too have syntactic structure and describe how coded meanings are organized in the clause. If we disregard Jackendoff's commitment to autonomous levels of description, the substance of these levels would appear to be the same. However, Jackendoff's semantic structures are not in principle constrained by the coding, by the distinctions made on the expression side (cf. the discussion of the "commutation principle" in the next section). The term "conceptual", which he also uses, is more appropriate, because the structures he sets up are independent of the way these structures are coded by a specific sentence. As an example, consider Jackendoff's analysis of the "make one's way" construction (1990: 214). The conceptual structure for sentences like

(4) John joked his way into the meeting

contains no constituent corresponding to his way because Jackendoff sees his way as a syntactic oddity that is used to express a conceptual relationship where there is no "way" involved, but only a manner or means relationship: John got into the meeting by or while joking. However, as pointed out in Goldberg (forthcoming), the central meaning of this construction is the "means" interpretation, and in that reading his way is perfectly meaningful. Quoting Jespersen, she analyses the construction as involving a resultative object: it indicates that the subject creates a way for himself by the activity designated by the verb. This does not mean that Jackendoff's conceptual representation is wrong: it just makes it clear that it is not constrained by the way language codes the component meanings and puts them together.
It is simply a different, slightly more abstract way of describing the meaning of the clause. Yet in practice Jackendoff s structures (cf. Siewierska 1993: 1) are actually abstractions based on linguistic structures: like generative syntactic structures as discussed above, the quasi-generative semantic structures of Jackendoff are therefore partially unconstrained products of the linguist's abstraction process, with the attendant risk of empirical vacuity (cf. Siewierska 1993: 16). The notions of content vs. expression syntax force us to confront the issue of precisely how much of clause structure is semantic in nature: we
can no longer simply call it syntax without worrying about how much of it is purely expression regularities, and how much is a matter of content. It may be very difficult to figure out precisely what the semantic elements involved in the combinatorial potential of the language are; it is by no means always clear what the semantics of case affixes and word order options is, and there may be very complex relations between sets of expression parameters, various forms of idiomaticity, and the semantic choices that are involved. But this is precisely the sort of complexity that it is the linguist's job to work at; to dodge the issue and remain satisfied with distributional generalizations is really a form of shirking. This problem persists in all frameworks in which a syntactic derivation procedure is the central component, no matter how attentive they are to semantics.3 The familiar "folk" architecture of linguistics, dividing the area into phonology on the one hand and semantics on the other, with syntax or grammar in the middle, is therefore a continuing source of confusion with respect to the structural organization of language. To begin with phonology: segmental phonology deals with one sub-compartment of the expression side, namely the organization of expressions below the level of meaningful items. But phonology is usually also taken to include suprasegmental phenomena such as intonation — and this is a completely different area from a structural point of view, including also a semantic side. And it is unclear how much of morphology should be included in phonology, although allomorphic variation is clearly part of the expression side. Grammar, including syntax and the non-phonological parts of morphology, also deals with expression phenomena such as word order — but like intonation, it has a content side, as argued above. 
Semantics, of course, deals with the content side, but since intonation, morphology, and syntax also contribute content, the only part of semantics that has not been dealt with elsewhere is purely lexical meaning. So either semantics overlaps with all the other disciplines, or it deals only with lexical meaning, in isolation from the structural context in which it belongs. This is the traditional interpretation; but it means that there is no place in the traditional picture for a coherent theory of linguistic content. In addition to this, the association between semantics and descriptive, lexical meaning has coalesced with the logical definition of semantics in terms of truth-conditions, creating an arbitrary boundary in the minds of many linguists between semantics and any phenomenon that points towards
communicative use (cf. Part One, ch. 5). To cover those, pragmatics is then added to the tripartition as an ambiguous fourth musketeer, and further increases the confusion by dealing partly with coded meaning ("pragmatic particles"), partly with situational factors outside the code. There are good reasons for suggesting that the familiar picture needs to be revised. The linguistic architecture adopted here involves a basic cut between content (= coded function) and expression, and uses the term "semantics" about the content side as a whole. On both sides, it distinguishes between elements and relations; and the "double articulation" of language means that the expression side is organized on two levels, each with a set of elements plus relations between them. One level is phonological (covering phonological elements and relations between them), and one syntactic (covering morpheme expressions and relations between them). On the content side, we have one articulation only, because there is no semantic equivalent to phonology: semantics begins at the morpheme level. Semantics, then, covers a set of elements plus relations between them: the inventory of content elements (= the semantic sublexicon), and the relations that combine content elements into larger wholes (= content syntax). We shall begin by taking a look at the inventory of content elements, and then return to their syntax.

3.3. The nature of content elements

The principles whereby we establish the inventory of structural elements in language must be consistent with the functional basis of linguistic structure. The basic functional relation between content and expression is criterial for the status of elements on both sides: there is a presupposed domain of substance entities on the side of expression as well as on the side of content, but it is only by virtue of the link between them that a content or expression element can be called linguistic: content elements become linguistic by being associated with expression, and expression elements by being associated with content. This assumption is fundamental in establishing an inventory of content elements, and is expressed in what may in Hjelmslevian terms be called the commutation principle. This principle, serving as a criterion for the distinction between substance and structure, functions as a linguistic variant of Occam's razor: in postulating elements either on the content or expression side, we always have to demonstrate that the element is
associated with something on the other side of the coin. In relation to the generative pattern of description (see below), it restrains the postulation of invisible underlying formal distinctions.4 In relation to anti-structural functionalism it restrains the wholesale attribution of functional distinctions to the linguistic code itself, as we shall see later. The principle is familiar on the expression side: if two sounds give different meanings when inserted in the same context, they are linguistically different. As a consequence of the function-based identity of a linguistic expression, if we change an expression item in a way that also changes its functional potential, it is no longer the same object — and if the change does not affect the functional potential, it remains the same object. If we pronounce a series of versions of the word tar, and gradually change the initial segment, beginning with a prototypical "t" and then retract it gradually, it will not affect understanding at first, but then it will go through an uncertain phase until it becomes a "ch" sound, as in char. The linguistic delimitation of phonetic segments can be investigated by such a series of experiments. The other half of that principle is less often invoked, but follows from the same logic (cf. for instance Hjelmslev 1957: 103). According to this half of the principle, the delimitation of a particular content element is determined by the point at which it becomes impossible to use the same expression for it. The negative version is fairly uncontroversial: if two meanings occurring in the same context are coded by different expressions they are linguistically different. The positive version is: if two specimens of semantic substance can be designated by the same expression, they are part of the same structural content element. 
An illustrative case is the noun bird: we can begin with a list of the properties of a prototypical bird, and change them one at a time — we can drop powers of flight, wings, and feathers, as in the case of a kiwi, and still remain inside the content territory that is structurally assigned to the expression "bird", with no need to postulate any ambiguity. Letting expression distinctions be criterial for content distinctions raises problems in cases of homonymy: if we replace a (river) bank with a (commercial) bank and still have the same expression, is that linguistically the same thing? Yet when structure is substance-based rather than autonomous, that undesirable consequence is automatically ruled out: in order for a semantic territory to be structurally one thing it must also be coherent in terms of substance. If we use a procedure of gradual
modification in terms of substance, we must be able to get from one end of the semantic territory to the other by gradual change; and since we cannot modify the properties of a river bank and gradually shade into a commercial bank (intermediate hybrids would be banks neither by one criterion nor by the other), these two could not qualify as one structural meaning by the substance-based criterion. But there are other exceptions that one would like to maintain. Sometimes one would like to say that a meaning is present without being "overtly signalled", i.e. without an expression side; a traditionally recognized case of "zero expression" is the plural of sheep. Also, the term "portmanteau morph" is commonly used to designate cases where expressions cover two syntagmatically combined meanings: here expression is fused, as in the past form went. How can we claim that meanings in such cases are individuated independently of the particular expression in question? I have two answers: one methodological principle, and one intuitive motivation for it. The methodological escape clause depends on the distinction between closed and open paradigms. The past and the present tense in English stand in a paradigm where there are only two members; we can therefore generalize from the cases where it is overtly and separately expressed to other cases without risk of proliferation of structural items. We can then say that it receives "zero expression" (present tense outside third person singular), but count the content element as being there nevertheless: sing, as in they sing, means 'sing' + 'present'. Similarly for "portmanteau morphs": because all other verbs have a past tense form, we say that went expresses both the stem go and the past tense morpheme; and regular alternation patterns may similarly justify the "portmanteau" interpretation. Thus y in j'y pense ('I think of it') means 'à' + 'pronominal reference'.
Genitives in determinative function, as in the government's problems, include the content element 'definite', because the phrase corresponds to the problems of the government, not to (some) problems of the government. There may also be cases of both syntagmatic and paradigmatic nonisomorphism. Some verbs, like cut, do not express the difference between (they) cut (in the present tense) and (they) cut (in the past tense). Even so, we know there are only two options, and in each case assign cut either to the present or to the past tense: cut thus fills two places in the paradigm, each of which contains two content elements in syntagmatic combination.
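The closed-paradigm reasoning above can be pictured as a small lookup table pairing verb forms with their content elements. The toy inventory below is an illustrative assumption made for exposition, not an analysis proposed in the text:

```python
# A closed two-member tense paradigm lets us assign content elements even
# where expression is missing or fused.
TENSES = {"present", "past"}

# Each form is paired with the stem + tense combinations it expresses,
# including zero expression ("they sing" = 'sing' + 'present'),
# portmanteau morphs ("went" = 'go' + 'past'), and forms like "cut"
# that fill two places in the paradigm.
FORMS = {
    "sings": [("sing", "present")],
    "sing":  [("sing", "present")],                   # zero-expressed present
    "sang":  [("sing", "past")],
    "goes":  [("go", "present")],
    "went":  [("go", "past")],                        # portmanteau: stem + tense fused
    "cuts":  [("cut", "present")],
    "cut":   [("cut", "present"), ("cut", "past")],   # fills two paradigm slots
}

def analyses(form):
    """Return the stem + tense combinations a form can express."""
    return FORMS.get(form, [])

print(analyses("went"))  # [('go', 'past')]
print(analyses("cut"))   # [('cut', 'present'), ('cut', 'past')]
```

Because the paradigm is closed, generalizing from the overtly expressed cases to the zero and portmanteau cases carries no risk of proliferating structural items: the inventory can be counted.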


The "closed paradigm" requirement limits the postulation of semantic complexity: we can count the inventory. But the principle itself is just a crutch to lean on, and does not necessarily reflect the way speakers organize semantic content; it would therefore be preferable to have a motivation for it. Decomposition appears to play no essential role in the process of understanding (cf. Johnson-Laird 1983: 206-242); and there is a type of fact about open-class meanings that may suggest why. Whenever we decompose lexical items, the gain in simplicity is counteracted by the fact that there is generally a semantic residue — something that is not covered by the generalizable semantic properties (cf. Haiman 1980 for the iconic motivation for this). It is reasonable to speculate that only the high frequency and regular patterns associated with closed-paradigm items keeps such idiosyncratic features out of portmanteau morphs. Kill means something more direct than cause to become not alive; therefore the decomposition, although it eliminates the content element 'kill', produces a new item in the shape of the residue. Since we need to describe the word kill in any case, because people use and understand it, we remain closer to reliable facts if we keep the content of kill in our inventory, and then regard "cause to become not alive" as substance properties of that content element. The burden of proof is on those who postulate decomposition products as independent elements in the inventory. This is an example of the advantage of having a substance-based view of structure; when language carves out slices of meaning, the slices may have more or less substance in them, and the substance properties are inside, rather than outside the purview of structure. This argument falls if we assume that there is a small number of cognitive elements that make up all meanings: the "alphabet of human thought", an idea originally suggested by Leibniz and vigorously pursued by Wierzbicka (cf. 
Wierzbicka 1991: 8). She and her collaborators have had considerable success in paraphrasing all known meanings in terms of a very limited number of elements - which might be seen as indicating that decomposition is possible after all. But this is only true if we think that speakers' concepts can be taken apart into elements like lego blocks, whereby a father-concept consists of a male block and a parent block, etc. Unless that is the case, paraphrase in terms of semantic primitives only has the status of the linguist's metalanguage: it is not that the meanings come apart into elementary pre-fabricated components, but that the theorist's sophisticated abstract tool can be devised in such a manner that very few
items can be used to distinguish very many meanings in terms of their substance properties.5 We can thus set up three requirements for assigning more than one content element to a single expression element. First, the language must have means of distinguishing the elements systematically in other cases. Secondly, it must be plausible in terms of semantic substance: speakers must understand for instance cut as meaning both 'cut' + tense, and either 'past' or 'present'. Thirdly, the analysis of the meaning of the items in terms of existing component meanings must leave no unanalyzed residue; if went always meant for instance 'stealthily', it would not fit wholly into the closed-item slot but would be an idiosyncratic lexical element that just happened to be usable in the absence of a proper past tense form. As a case in point, in Danish we have the cognates of worse and worst (værre, værst), but no clear-cut base form corresponding to bad; several forms compete for the job, but all have residues that do not match the comparative and superlative forms. The fact that content elements are individuated with reference to the expression side, therefore, does not force us into a position where we have to accept an axiomatic one-to-one correspondence between content and expression. But the choice is not always simple, because there are shades in between clear structural and clear "substance" cases. One dimension of gradualness is the number of cases a given distinction applies to. Most people would want to regard the past participle as distinct from the past tense in English, even if the regular ending is the same (cf. also p. 207 on distributional variants); but would we want to say that 'first person singular' is expressed by zero in all English verbs except be, just because of the existence of am?6 Content elements in the Saussurean tradition have generally been thought of as invariant, and as minimalist in nature (cf. the discussion in Kirsner 1993).
But if structure is based on substance, it has to co-exist with the messiness of semantic substance, too; and a functional semantics has to live with the fact that expressions, like other tools, can be used for a range of purposes. How can we reconcile variability with rigorous structural delimitation of content elements? Part of the answer lies in the familiar distinction between potential and actual meaning, combined with the "process-input" view of coded meaning. An expression may have a rich potential, covering a whole range of possibilities, while only one narrow aspect is reflected in a concrete context of use. The notion of meaning potential in fact allows us so much latitude
that it involves a risk of vacuity; we might choose to say that anything, even the homonymic bank case, can be regarded as one meaning potential, which happens to be discontinuous. However, there is enough sense in looking at it like this to make it reasonable to dwell on the possibility for a moment. As demonstrated by Swinney (1979), there are indications that when we hear an ambiguous word like bug it invokes the whole potential, even in unambiguous contexts. Instead of agonizing over whether an expression has one or two or three different meanings, it would therefore be more sensible at the initial stage of description to view everything as one whole territory and see the linguist's task as that of describing its internal organization. If we discover complete discontinuities within the territory, as in the bank case, we can count them and list them separately as homonyms. This is important as a way of preventing structure from overriding substance, as discussed above — but if distinguishing between homonyms was the only aim, it would be a fairly coarse way of describing the facts. There may be very tenuous connections between certain senses in a polysemic network; and it is not of overwhelming interest whether we decide to call the familiar ear (-of-corn) case a homonym or a polysemous variant. However, it is interesting to look at the internal organizations of semantic territories regardless of how we choose to count meanings. Many of the properties that have given rise to quarrels between linguists about whether to describe meanings as variant or distinct meanings might more fruitfully have been discussed as problems of semantic organization. One important property of content territories is the degree of internal cohesiveness.
At the one end, we find the sharply bounded territories definable in terms of necessary-and-sufficient conditions, which occur in superordinate categories and in institutional contexts such as the law, where fuzzy boundaries are undesirable (cf. also Johnson-Laird 1983: 203-204); there must be fairly precise criteria for what constitutes a felony if the citizens are to know where they stand. Not so far from the cohesive end we find the real prototype concepts, where the expression centrally designates the core of the prototype where all the features are present, and only marginally designates the cases that have only some of them. The word mother can be said to invoke all the features together, even in being applied to a marginal instance. If you give a talk about what it means to be a mother in a group of women, it would not make sense to see the talk as having an ambiguous topic if there were adoptive mothers in the audience — it would be a case of the same concept applying slightly differently to
concrete cases. The last stop before the discontinuous case discussed above is the purely polysemous family resemblance network, in which the variants are only locally related, and no core is discernible. However, I do not think these types are equally "good" content elements. Multifunctionality makes sense in the biological perspective since once an item is there it is an advantage for the individual to use it whenever it comes in handy; but on the other hand, in terms of selective fitness, purely disjunctive sets of meanings are likely to be fairly blunt instruments of communication — given a basic assumption that the whole network is activated to some extent when the expression is used. The usefulness of having multi-purpose tools must therefore be balanced against the desirability of some degree of precision; and the "fitness" of expressions would seem to be enhanced if they have some sort of uniting selling point to them. I think this is why pure polysemous cases are difficult to find. Even in prepositions, which are perhaps the best cases, there is typically something that holds the meaning together; in the case of over, this has been argued by Gärdenfors (forthcoming). Therefore I think in describing content elements it is important to look for the most central areas of the meaning potential, even where a clear-cut prototype structure cannot be established — as also expressed in the traditional distinction between Grundbedeutung 'basic meaning' and Gesamtbedeutung 'total meaning'. One criterion for deciding what is more central or basic is the extent to which one sense presupposes another; for instance, the "adoptive" sense of mother would appear to draw on the "biological" sense, rather than vice versa. Frequently, the centrality of certain senses is panchronic: they came first, and remain the basis on which speakers understand the more marginal senses.
At least for some speakers, the sense in which a priest comes under the semantic territory of father remains dependent on the male-progenitor-cum-family-head sense of the word (whether this understanding survives postmodern family relations is another matter). Some of the arguments for variability also overlook potential sources of cohesiveness, because they have typically looked for shared micro-features like truth-conditions in the familiar bottom-up manner. One source of cohesion arises when conceptual organization is understood as having a top-down aspect, involving a differentiation of a larger whole. Wittgenstein's example of game is a case in point. An investigation in terms of the component properties of games yields no common element, as we know. But in terms of a differentiation of human activities in a top-down manner, playing games can be negatively characterized in terms of "distinctness
from the serious business of life".7 We need such an element in the picture; otherwise there would be no answer to the question is this some kind of game? The sense that internal cohesiveness must play a role in language is what is expressed in the "one-form-one-meaning" principle that is maintained with varying degrees of stringency by different linguists. But in describing language, it should be regarded as an open empirical question what the force of this principle is. I side with those who regard it as a fact of life that content territories tend to have some degree of disjunctive structuring. As a paradigm example, let me mention the word shower. As a noun, it comes in two basic varieties, designating an indoor and an outdoor event. As a verb, it denotes the figurative process element that is shared between the two concrete senses of the noun. This in turn gives rise to a derived noun sense occurring in the collocation baby shower. It would be equally absurd to say that it was just pure homonymy and to say that it was just one invariant sense. The latter can be illustrated by comparing with the case of somebody giving a talk about mothers above; if the topic was showers, it seems unlikely that the audience could be equally divided between people interested in the outdoor, the indoor, and the baby variety without problems of understanding. To the extent there is a clear choice between different subparts of the meaning potential, and not just a choice between different types of instantiations of the same central concept, I shall talk about the content element as having an internal paradigm. This term reflects the fact that just as the commutation principle regards alternative options in the same place in the chain as standing in a paradigmatic relationship, so do disjunctive parts of the meaning potential constitute a choice at least for the addressee of the message.
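The substance-based criterion, that one must be able to get from one end of a semantic territory to the other by gradual change, can be pictured as graph connectivity: senses are nodes, substance links are edges, and a territory that falls into several disconnected components is a case of homonymy. The particular senses and links below are illustrative assumptions, not analyses from the text:

```python
from collections import defaultdict

def components(senses, links):
    """Group senses into connected components: one component = one coherent
    content territory; several components = homonymy."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for s in senses:
        if s in seen:
            continue
        stack, group = [s], set()
        while stack:                      # depth-first traversal of one component
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(graph[node])
        seen |= group
        groups.append(group)
    return groups

# 'shower': senses linked by shared substance -> one polysemous territory
shower = components(
    ["outdoor rain", "indoor wash", "showering (process)", "baby shower"],
    [("outdoor rain", "showering (process)"),
     ("indoor wash", "showering (process)"),
     ("showering (process)", "baby shower")],
)
# 'bank': no gradual path between the senses -> two homonyms
bank = components(["river bank", "commercial bank"], [])

print(len(shower), len(bank))  # 1 2
```

On this picture, *shower* forms a single territory held together by the shared process sense, while *bank* splits into two components, matching the earlier conclusion that the two banks cannot qualify as one structural meaning.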
The dysfunctional aspect of having an internal paradigm can be lessened to the extent the choice between the variants is predictable from the linguistic combinations that they enter into. In this, we can draw on the parallel with classical structural concepts in phonology, in the shape of combinatorial variants ("allophones") of phonemes: the existence of a devoiced variant of /r/ after initial fortis stops does not complicate our description of the expression element /r/, because it follows from what we know about the effect of fortis stops on the following segment. The paradigm case of the sort I am thinking of is the combination red hair. In this combination, the content of 'red' is way off prototype on the most
obvious reading, designating the carroty "ginger" shade that is the closest natural hair gets to prototype red. Because the instruction to assign the property 'red' to hair presupposes our world knowledge of hair colours, the "ginger" variant is not a symptom of lacking cohesion in the content territory of red. On the content side, however, this type of regularity is not as clear-cut as on the expression side, because context rarely dictates a specific variant with one hundred per cent certainty. If we talk about hair that is dyed, the collocation red hair could be ambiguous between "ginger" and "scarlet", so the internal paradigm might give rise to misunderstanding. A kind of coded meaning that is often thought to be quintessentially pragmatic is the "social meaning" that is bound up with constraints on the contextual appropriateness of linguistic choices. Such constraints typically have stylistic, regional, professional and ideological dimensions; and we standardly assume that the constraints are respected because there is a natural link between the situation and the linguistic choice. A case in point is the choice of a specific code (such as French), which tells us that the interlocutors either are French or have some natural affinity with the language. This information is not naturally thought of as functional, because in the standard situation it is invisible: we do not choose French in order to tell other people that this is a code we use, but because in the situation this is a natural tool for our communicative purposes. However, as pointed out by Hjelmslev ([1966]: 101), the code itself may occasionally function as the expression side of a sign, whose content is the context-association of the code. The most familiar purpose of such conscious use of the code is "stylistic effect". 
If we speak of the atmosphere that characterizes a gathering as its ambience, we add an element of je ne sais quoi to the situation that is bound up with the Frenchness of the word. This type of meaning, which following Hjelmslev I shall call connotation, becomes salient by figure-ground reversal: normally we presuppose the code as part of the whole situation, and focus on what the coded utterance says. But the situation itself is always to some extent negotiable, and it may be more important to determine the nature of the situation than to attend to what is happening at the moment. Thus as addressee one may try to read the signals ("he speaks French — is this a sort of highbrow event?"); and as speaker one may try to give off the right signals ("I had better respond in kind"). This contextual mechanism, which is important in understanding the role of presuppositions in communication (cf. Harder—Kock 1976), I shall call connotative reversal.


Although, as standardly assumed, this mechanism is part of the situation-specific use of language, it also occurs as something that must be understood as part of the code, by the same abductive mechanism that causes other features of use to become part of the code (as when corn comes to mean 'maize' in a maize-growing community). In such cases it takes the step from being a part of the "basis" to being part of the "function" of the expression (cf. p. 104). An example of how connotative meaning may rub off on the code itself is the word nigger: from being simply the word for 'black' used in the old American South, it came to be specifically associated with the standard attitude of the Southern community towards the blacks. Instead of a coded meaning describing a black person — with the proviso that this was the term used by a southerner — it came to mean "black person as viewed from the Jim Crow perspective". The subsequent fate of the word negro and the emergence of African-American as the politically correct term means that there is a regular lexical paradigm based on connotative contrasts; once you set foot in the US you will not long remain in doubt that being aware of connotative paradigms is an essential part of your linguistic skills.

3.4. Scope and layered clause structure

Above, I have given a sketch of types of coded content elements that enter into language structure. The question now is how structure affects the way these coded meanings collaborate; and I begin with the syntagmatic dimension of linguistic structure. The zero hypothesis about syntagmatic content structure in the clause would be that all meanings could just be added like elements in a salad bowl. Each expression element would then simply add its load into one big jumble of semantic material, to be combined by the recipient as he pleased. Practitioners of artificial intelligence have toyed with this idea, since in that perspective it does not matter too much whether coherence is coded or not. But it has never been put forward as a serious option, as far as I know. On the other hand, clear positive statements on the issue are few and far between. Because of the traditional unclear status of the semantic aspects of syntax, discussions about what relations content elements have to one another have always been mixed up with other topics. To some extent semantic relations have also been the province of logical and philosophical analysis rather than linguistic. One of the merits of the notion "content syntax" is that it puts the spotlight on this central issue.

I shall take my point of departure in a general structural relation, which can be seen as the basic organizing principle of meaning in the clause. This relation obtains between any two elements, such that one element operates on the other element rather than vice versa. Let us return to the head-modifier example: 'clean' as a content element is understood by making it operate on the content element 'water', rather than the other way round. This relationship has been taken for granted in the tradition, but only the confrontation with logical analysis forced a clarification of this issue (cf. Dahl 1971). According to logical principles, clean water is simply the intersection between the sets of things which are clean and things which are water. However, in relation to natural language this is not adequate. Dahl's example is the phrase pregnant women; on the intersection analysis, it does not matter whether we begin with the content 'women' or the content 'pregnant'. The two interpretive procedures can be rendered in the two paraphrases:

(5) persons who are female and pregnant

(6) persons who are pregnant and female

As will be apparent, (6) is odd because once you have 'pregnant', 'female' is redundant. Dahl's way of accounting for this asymmetry at the time, adopted by Dik (1989: 116), was to say that the information in the head determines a "universe" which has "logical priority", and within which the modifier selects a subset. In the "recipe" view of meaning adopted here (cf. also the next section), the priority is understood in terms of the structure of the interpretive task that is set by the clause. On this view, water instructs the addressee to construct a mental model of something instantiating the concept 'water', and clean instructs the addressee to adjust the water-model so as to instantiate the concept 'clean'.
To complete the interpretation of clean one therefore needs to have access to the interpretation of water, but not vice versa. Similarly with Dahl's example: you first construct a mental model of women, then (re)conceptualize them as pregnant (on psychological reality, cf. p. 227). This type of relationship, whereby some content elements cue processes of interpretation which presuppose that other aspects of the interpretation are already in place, can be captured in terms of scope relations.

The notion of scope has drifted from logic into linguistics over a long period, and has been affected by all the lack of clarity involved in that process, but the relationship I want to reserve it for is clearly within its established domain, as exemplified in relation to negation. Familiar examples like

(7) I did not do it because I like you

illustrate the ambiguity involved in determining what negation "applies to" (= operates on): does it negate because I like you, or do it? In carrying out the interpretation, one needs to choose the right input to the negating operation, or one will end up with the wrong interpretation.

The expression side of syntax encodes constraints on scope relations, specifying what operator-operand relations are possible among content elements. It is quite possible, as pidgins exemplify, to get by with a code that lacks such constraints and organize meanings according to the "salad bowl" principle described above; but as pointed out repeatedly by Givón (e.g., 1989: 246), coded (and therefore automatically triggered) syntactic routines offer a much more efficient and less attention-demanding means of communication. The ordering of scope relations is therefore an important part of the job of syntax.

Scope relations are part of widely different theories of clause structure; the theory I take as my point of departure has been worked out within functional grammar as developed by Simon Dik and followers, in particular Kees Hengeveld (cf. 1988; 1990a,b; 1992a,b,c). Inspired by Foley and Van Valin (1984), it takes the form of what is known as the "layered structure of the clause" (cf. Dik 1989, Nuyts—Bolkestein—Vet 1990, Fortescue—Harder—Kristoffersen 1992).
Layering involves a central idea which can be illustrated by a diagram of the earth cut in half; description proceeds in a movement from the core outwards, such that each successive layer contains the previous layer and adds something to it; and relations between layers can thus be conveniently captured in terms of scope relations.8 In the model developed in Functional Grammar, the main division was originally into four layers, as outlined in the diagram below, adapted from Hengeveld (1990a: 4):

(8)

Syntactic layer:      clause        proposition    predication       term
Entity type:          speech act    possible fact  state-of-affairs  individual
Linguistic rendering: Did John go?  John went      John's going      John

In terms of scope, the structural skeleton of this model clause is

(9) interr (past (go (John)))9

Taking the procedure bottom-up, we begin with the innermost element, i.e. John (which constitutes a term); then we add the next higher element, i.e. the verb go, which gives us the state-of-affairs go (John); then we add deictic tense, in this case "past", which gives us the proposition past (go (John)); and finally we add the illocution "interrogative", yielding the full speech act.

In introducing the model, Hengeveld (1990a) also takes his point of departure in semantic considerations, stressing as a major point the division of the four layers into two "super-layers": an "interpersonal" layer (in Halliday's terms, cf. 1970, 1985), which is superimposed upon a "representational" layer (following the terminology of Bühler (1934), rather than Halliday's "ideational"). The interpersonal layer contains elements inspired by speech act philosophy; it is describable in terms of the concepts "illocution" and "proposition", reflecting the formula F(p), as in Searle (1965 [1971]: 43, 1969: 31). In the linguistic context, the notion of illocution is anchored in the distinction between primarily declarative, interrogative and imperative clause constructions. This "linguisticky" narrowing of the concept is to some extent parallel with the more recent views of Searle (1991), where the interest is focused on the "bare bones" of an illocutionary act, centring on the notions "direction of fit" and "illocutionary point". These notions are easier to relate to linguistic categories than the full panoply of illocutions familiar from Austin, and also less vulnerable to criticisms made of the notion of illocutionary force itself (cf. Allwood 1977, Harder 1978).

Any adequate account of declaratives vis-à-vis interrogatives must include these two elements. The two types share a word-to-world direction of fit (it is the state of the world that determines the proper reaction to both an interrogative and the corresponding declarative utterance); but they differ in illocutionary point: one conveys the existence of the fit, the other opens a slot for the affirmation or negation of the fit (prototypically, but not necessarily, to be filled by the addressee).

The representational super-layer can also be subdivided. The highest level is the predication (corresponding to an "event" or "state-of-affairs"); at the next stage it is then subdivided into the central predicate and the "terms", i.e. shareholders in the event (agent, recipient, location, etc.). The distinctions in the layered model are familiar — except perhaps the one between "possible fact" and "state-of-affairs": in many accounts one does not find clear pairs of distinctions between statement and proposition on the one hand and proposition and state-of-affairs on the other (but cf. for example Leech 1981). The essential difference is best understood in relation to the difference between "occurrence" and "truth". A proposition is what is being asserted in a statement or questioned in an interrogative clause, i.e. what is common to did John leave? and John left. The proposition as such is outside the world we are talking about; it is the content of a claim or question, which may or may not match the world of which we speak (in which case it is true or false, respectively). Truth, like the proposition itself, is not in the object world, but is a property of what we say about the world. As opposed to that, a state-of-affairs may occur in the object world, i.e. become instantiated in place and time. The difference can be illustrated with the complements of verbs of perception (cf. Dik—Hengeveld 1991):

(10) He saw her go (state-of-affairs – he saw it occur)

(11) He saw that she was right (proposition – he saw the truth of her claim)

We shall return to this distinction in detail below (pp. 273 and 327), since it is crucial in relation to tense.
There are many unanswered questions with respect to the precise form of the layered model; but the basic idea according to which a clause may be understood in terms of a series of operations that create layers of increasing semantic complexity will be central in what follows.
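The layered skeleton in (9), and the bottom-up procedure just described, can be pictured as nested operations in which each layer consumes the output of the layer within its scope. The following Python fragment is only an illustrative analogy; the function names and data structures are my own assumptions, not part of the Functional Grammar formalism:

```python
# Each layer is an operation on the output of the layer within its scope,
# so interpretation runs bottom-up from the innermost term outwards.
# All names and data structures here are illustrative assumptions.

def term(name):
    # innermost layer: an individual
    return {"individual": name}

def predicate(pred, term_):
    # predication layer: a state-of-affairs
    return {"state_of_affairs": pred, "participant": term_}

def tense(t, soa):
    # proposition layer: a possible fact, located in time
    return {"possible_fact": soa, "time": t}

def illocution(force, proposition):
    # outermost layer: the full speech act
    return {"speech_act": force, "content": proposition}

# (9) interr (past (go (John))) -- "Did John go?"
did_john_go = illocution("interr", tense("past", predicate("go", term("John"))))
```

The nesting of the calls mirrors the bracketing of (9): an inner operand must be completed before the operator with scope over it can apply.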


3.5. Process and product in syntactic description: the clause as recipe for interpretive action

The function-based theory of clause structure presented here builds on the functional view of meaning that formed the conclusion of Part One. On the standard representational view, the meaning of a clause can be described in terms of a "situation" that the clause characterizes. This assumption is found in different variants, depending on whether one understands situations in terms of set theory, situation semantics or conceptual structures; but from the point of view adopted here they are all wrong. The same fault would be committed by someone assuming that (in the ideal case) clause meaning is identical with the speaker's communicative intention, which is again identical with the addressee's received message. According to the view adopted here, clause meanings are wholly different in nature from descriptions of situations as well as from communicative intentions. Meanings are designed to start off interpretation processes and do not guarantee the finished outcomes — which always depend on what the actual people in the situations make of them.

The status of full clause meanings can perhaps best be illustrated by comparing them with cooking recipes. Compare the structure of the two items below, the semantic clause structure introduced above and a simple recipe for grilled salmon:

(12) interr (past (go (John)))

(13) serve (sprinkle with lemon (grill (add salt & pepper (slice (salmon)))))

To make the parallel explicit, we can paraphrase both formulae in instructional terms, beginning from the innermost operand:

(12a) identify John and construct a mental model of him; make the model instantiate the property 'go'; understand this model as applying to a certain past situation; and consider whether the model is true of that situation.

(13a) take a salmon, slice it, spice the slices with salt and pepper, put them on the grill and sprinkle them with lemon before serving.
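The recipe conception can be made concrete in procedural terms. The following sketch, in Python, treats the content of (12) as a procedure executed against a context; the context structure and all names are invented for the illustration, and the point is only that one coded "recipe" yields different finished interpretations in different contexts:

```python
# A sketch of the claim that clause meaning is a recipe rather than a
# finished interpretation: executing the same instructions in different
# contexts yields different "dishes". The context structure is invented
# purely for the purpose of illustration.

def interpret_did_john_go(context):
    referent = context["referents"]["John"]          # identify John
    model = {"entity": referent, "property": "go"}   # model instantiates 'go'
    model["situation"] = context["past_situation"]   # apply to a past situation
    model["task"] = "assess whether the model is true of that situation"
    return model

ctx_a = {"referents": {"John": "the neighbour"}, "past_situation": "yesterday's party"}
ctx_b = {"referents": {"John": "a colleague"}, "past_situation": "last week's meeting"}

# One recipe, two different finished interpretations:
interpretation_a = interpret_did_john_go(ctx_a)
interpretation_b = interpret_did_john_go(ctx_b)
```

The function corresponds to the type-level clause meaning; each returned value corresponds to a token interpretation, which exists only once raw materials (a context) have been supplied.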


The analogy is illuminating in various ways. First, the functional or instructional view of meaning entails that each content element requires the addressee to carry out a sense-making operation, just as each stage in a recipe requires the cook to carry out a food-making operation. Secondly, the recipe in itself is not enough to make food — you need the skills and the raw materials presupposed by the instructions as well. In the same manner, the clause in itself does not make sense without the context (a conceptual and a situational universe) and the interpretive skills that are presupposed by the use of the linguistic forms. Thirdly, the outcome of the process of interpretation triggered by uttering a clause is just as humanly unpredictable and context-dependent as the outcome of the food-making process that is carried out according to a recipe: there is no bi-uniqueness either between the recipe and the food produced, or between the clause and what the addressee understands by it.

This means that what we describe when we give an account of the meaning of a clause as type (rather than token) is different in category from what we understand by it in any given context. It is not a matter of a canonical meaning that may sometimes get lost for pragmatic reasons: clause meanings are different from clause interpretations for the same reason that a recipe for grilled salmon is different from a grilled salmon. A finished interpretation is an aspect of the addressee's spatiotemporally concrete situation; linguistic meaning is a potential which is available for use in not yet actualized situations.

The presuppositional relations involved in the operator-operand structure can also be made concrete when viewed in terms of recipes. The basic "operand" in a recipe is the substance (meat, vegetable) itself, corresponding to the argument entities. These are presupposed both by the "modifying" elements that go into the food (water for boiling, spices), and by the purely procedural elements (chopping, frying). And the end product is what emerges when all the processing instructions have been followed and brought the elements into the requisite relations with each other: a received message, and a dish of grilled salmon.

This means that criteria of ambiguity are different from what would be the case if clause meanings were finished descriptions of situations. The difference can be neatly illustrated with one of the cases that Chomsky has used to argue for the lack of fit between syntax and semantics:

    Matters are still more complex when we attend to the use of plural noun phrases in predicates. Thus to have living parents and to have a living parent are quite different properties. If John has living parents, both are alive, but not necessarily if he has a living parent. It is true of each unicycle that it has a wheel, but not that it has wheels. But the expression "have living parents", "have wheels", may have the sense of the corresponding singulars or their "inherent sense", depending on the means by which the subject noun phrase expresses quantification. Compare "the boys have living parents", "unicycles have wheels", "each boy has living parents", "each unicycle has wheels". In the first two cases, plurality is, in a sense, a semantic property of the sentence rather than the individual noun phrases in which it is formally expressed. "Unicycles have wheels" means that each unicycle has a wheel, and is thus true, though "each unicycle has wheels" is false. (Chomsky 1977: 30-31)

Chomsky notes correctly that the plurality belongs in different places in the two relevant interpretations; on the assumption that clause meaning is a matter of complete interpretations, we therefore have an ambiguity. Chomsky goes on to point out, again correctly, that there is no direct fit between such semantic ambiguities and syntactic structure.

On the recipe reading, where clause meaning constitutes input to interpretation, these cases would be described rather differently. Roughly speaking, what happens is that the addressee is instructed to assign the property denoted by the predicate, including the plurality of wheels, to the subject. With a "set" subject this leaves two options open to the addressee: he can distribute the property between the members of the set, or he can assign it to the set as an aggregate. Thus there is no ambiguity in linguistic meaning: both the component elements and the scope relations which dictate how to combine the elements are perfectly clear-cut. The ambiguity arises only because there are two ways of complying with the linguistic "recipe" in relation to the subject. With singular subjects, there is only one option — and we do not need a linguistic explanation for this.

This example also illustrates that so-called ambiguities often arise because of the unavoidable conflict between the number of distinctions one might pragmatically want to make and the economy of linguistic resources. The nature of sets is such that plurality can apply in two places, and in order to keep up with this, we would have to have two plural-marking options in such cases, whereas singular subjects need only one. Languages differ with respect to how many distinctions they make in different semantic dimensions, but all languages give up at some point on each dimension. On the recipe reading, we can limit ourselves to describing how specific the instruction is in each individual case, and leave the resolution of ambiguities beyond that point to the speakers and hearers. Depending on the circumstances, the message will change, while the coded meaning remains the same. In cases where there is a discrepancy between relevant message distinctions and available coded distinctions, we can often observe a struggle to stretch the linguistic resources; in this particular case, in the form of the use of the singular for the "distributive" reading: "unicycles have one wheel (each)". In such cases, it is often very difficult for the grammarian to specify exactly when the problematic case is coded by one or the other alternative, as it is with the distributive reading in English and Danish: both languages use both forms, but not in precisely the same way.

The other half of the same issue is the extent to which clause meanings sometimes appear to be less ambiguous than they are from a "recipe" point of view. Every time two content elements are brought together in an actual utterance, this process involves an element of contextual interpretation that is not part of the coded meaning. The pragmatic enrichment that occurs every time a semantic processing instruction is followed up works according to the usual Gricean mechanism. As a name for the sense-making accommodation procedures that are triggered solely by the linguistic components, I use the term syntagmatic implicature. A case in point is the "continuative" meaning of the present perfect, as in:

(14) a. I have lived here for two months
     b. I have lived here for two months (but I don't recall when it was)
     c. I have lived here for two months now

The understanding whereby the state-of-affairs continues into the future is not due to the present perfect (as pointed out in Steen Sørensen 1964; more on this point below, p. 377). If we add but I don't recall when it was, the continuative reading is replaced with an indefinite past reading. Conversely, in (14c), the continuative reading becomes the only possibility. The added now would be redundant by the principle of sense unless the truth of the statement were bound specifically to the moment of utterance: hence, the combination of the two-month period, the perfect and the explicit temporal anchor yields "continuation" by syntagmatic implicature. The coded clause meaning does not, on the recipe view, contain a continuative element — this only arises as the recipe is followed up through interpretation.10
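The way the added material in (14b) and (14c) narrows down the reading can be pictured as an intersection of admissible readings. The following Python sketch is a toy model; the "rules" in it are assumptions made for the illustration, not claims about the grammar of English:

```python
# Toy model of syntagmatic implicature with the perfect in (14): the coded
# elements leave the reading underspecified, and added material narrows the
# set of readings that make sense when the elements are combined.

def readings(extra_elements):
    # perfect + duration adverbial: underspecified between two readings
    possible = {"indefinite past", "continuative"}
    if "but I don't recall when it was" in extra_elements:
        # detaches the two-month period from the moment of utterance
        possible &= {"indefinite past"}
    if "now" in extra_elements:
        # 'now' is non-redundant only if the statement is anchored to the
        # moment of utterance, so only the continuative reading survives
        possible &= {"continuative"}
    return possible

r14a = readings([])                                   # (14a): still open
r14b = readings(["but I don't recall when it was"])   # (14b)
r14c = readings(["now"])                              # (14c)
```

The point of the sketch is that "continuation" appears nowhere as a coded element; it falls out of the intersection once the other elements are in place.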


Clause meaning can never be identical to a received message; but a received message can be (type-)identical to a communicative intention, compare the perfectly natural question Got the message? In contrast to clause meanings, both intended and received messages are spatiotemporally anchored. Also, both are naturally thought of as wholes rather than assemblies of parts; messages, as opposed to clause meanings, do not necessarily have any clear compositional structure.

Let us take a message in the form of a situation description which includes the temporal location of a state-of-affairs that is reported, for instance Joe left yesterday. The temporal location, as part of the situation, does not belong in a little box of its own; "yesterdayhood" is a ubiquitous aspect of the whole event. When the speaker encodes the message, however, he needs to find separate ways of getting different aspects of the message across — hoping that the addressee will be able to recreate the whole. Compositionality is thus a necessary, but temporary inconvenience. It is a fact about the coding — not about the messages.

This is again analogous to recipes. As an intentional action, the process is oriented towards the conceived goal of making for instance sauce hollandaise, a culinary gestalt in the mind of the cook. In making this sauce the cook is instructed to use butter, lemon and egg yolks, corresponding to different, compositionally organized sub-processes; but if in the final result the substance is still neatly factored out in these subcomponents, something has gone seriously wrong. What the cook aims to achieve is the effect of the composite product that constituted the original intention. "Message meaning" is thus naturally understood as "distributed" (cf. Sinha—Kuteva forthcoming); and hence it is not surprising that some aspects are coded several times in the compositional structure, just as some elements in a cooking process may need to be repeated.
Similarly, some elements may be uncoded, because they will emerge by syntagmatic implicature. A strict correspondence between elements-in-the-clause and elements-in-the-intended-message is hardly a coherent notion at all.11

3.6. The nature of syntax: cognitive and evolutionary perspectives

This view of compositional syntax has implications for the understanding of the role of syntax in cognitive organization (cf. Fodor and Pylyshyn 1988) and in the evolution of language (cf. Bickerton 1990). In both areas, the notion of strictly formal, compositional syntax that is associated with the generative paradigm has played an important role; the opposing camp has tended to downplay the role of syntax and emphasize the continuity of linguistic skills with other types of cognitive achievement. The picture I defend is essentially of the second type but proposes a slightly more prominent role for syntax.

Beginning with the evolutionary issue, the recipe theory makes obvious the analogy between hierarchical organization of language and hierarchical organization of action in general: just as a cooking recipe is a structured plan of action, so is a sentence a structured plan for interpretive action. It is therefore significant that there is evidence that neural capacity for hierarchical organization of manual tool use and for hierarchical organization of speech has the same evolutionary origin, and the same ontogenetic developmental path (cf. Greenfield 1991). Among the examples used are the following structures, which evince similar hierarchical complexity, in language and in tool use (1991: 533, 540). The examples of parallel development are:

Language use:
  ball (no structure)
  more cookie (pairing)
  want more cracker (hierarchy: "more" applies to "cracker", and "want" applies to ("more" + "cracker"))

Tool use:
  take a spoon (no structure)
  either spoon-into-food, or spoon-into-mouth (pairing12)
  take the spoon, put it in the food and take it to the mouth (hierarchy: the food gets onto the spoon, and spoon + food is then put into the mouth)


The two sets of skills are even more parallel than Greenfield's analysis reveals. She analyses them as one-level phenomena only, with the functional side being implicit; but the hierarchy is only on the "content", i.e. the "function" side, not on the side of overt action. In themselves, the overt actions might be just two pairs: putting the spoon in the food and putting the spoon in the mouth are both possible at the hypothesized middle stage, and from this it follows logically that the child is capable of both putting the spoon in the food and subsequently putting the spoon in the mouth. But as a functionally motivated action sequence the event series is necessarily hierarchical: the intention to put a full spoon in the mouth presupposes the existence of a full spoon — so in order to get this intention, one must be able to envision the possibility of getting food onto the spoon first.

The recipe view of syntax suggested above makes clear why the same type of skill is involved in language: a sentence is a structured complex of functions.13 In the utterance above, the child who is capable of producing (genuinely structured) utterances like want more cracker has an overall intention of getting more crackers; but instead of coding this holophrastically (more!, cf. p. 267 below), he invokes three different coded meanings in a way that presupposes the ability to put them together in the right hierarchical order — if 'more' went with 'want' rather than 'cracker', his intention would misfire. On the expression level, any child who can string words together would be able to produce the sequence of words as such, without any hierarchical order being required; the special role of hierarchical competence is to be able to structure intentional functions of actions into complex actions, where the ultimate intention depends on the outcome of previous achievements.
Recipes have the same property: in the situation where you have only a whole salmon, or no salmon at all, you cannot intend to grill a sliced salmon (any more than you can intend to become a cup of coffee) — unless you are able to intend, as a prerequisite, to get hold of and slice the salmon first.

This point is crucial to the way we should understand syntax, including its acquisition by the child. The choice of complex hierarchical structures over simple linear structures is one of Chomsky's central candidates for the innate language faculty, because in a generative theory of syntax that cannot properly distinguish between content and expression, it comes out as a mystery why the learner should choose the more complex strategy (cf., e.g., Chomsky 1988: 42-48). With a clear distinction between expression-syntactic relations and (functional) content-syntactic relations, it becomes clear why the hierarchical analysis is preferred: because that is the way the sub-actions make sense in relation to each other.

Also on this point, Chomsky's views lend themselves to different interpretations, tending either towards an untenable or a trivial extreme. On the understanding whereby the "innate syntactic patterns" are permitted to be fully isomorphic with complex patterns of meaningful activity, Chomsky is right about innateness and acquisition: of course the ability to learn a human language depends on a genetic equipment that makes it possible to cope with hierarchical relations — which is a specifically human feat in the sense that human beings obviously possess it and flatworms fairly obviously do not; and of course variation between languages reflects the scope of possibilities that is defined by this genetic basis. When Stephen J. Gould (1994: 20) gives Chomsky credit for having the right view of the division of labour between nature and nurture, this is the view which he is implicitly ascribing to Chomsky. But although it is probably the position to which generative grammar will have to retreat, there is a central reason why such a tactic would be cheating: it takes the oomph out of what has remained the central argument all along for the language acquisition device, namely that the child needs a knowledge that has nothing to do with what makes sense in the situation. If hierarchies reflect what makes sense, the purely formal structure loses its independent explanatory potential.

To sum up: syntactic capability is just as important as generative grammarians claim, but syntax is a quite different animal than claimed in generative grammar: it is a structure that organizes the process of combining linguistic meanings, not an autonomous level of pure structure. Moreover, there is a distinction between syntax as a general cognitive capability and syntax as a property of the linguistic code.
This comes out most clearly when we compare communication in a pidgin situation with communication between speakers who share a fully syntactically conventionalized language, the situation that was taken up (following Givon) on p. 211 in connection with the "salad bowl" hypothesis about semantic organization in the clause. In order to handle hierarchically complex meanings, also when they are talking in a pidgin language, speakers must have a syntactic capability — but in the pidgin case, hierarchical relations are not in the code, only in the mind. Similarly, it goes without saying that in order to develop the coded routines, i.e. pass from the pidgin to the creole stage, speakers must already possess the general hierarchical skill — otherwise the project of learning a "real" language would be doomed to

222 Clause structure in a functional semantics

failure. Syntax as a cognitive, evolutionary leap forward is necessary for syntax as a property of language, not the other way round.14

The difference between the "hierarchy-of-functions" view and the purely formal view of syntax can be made explicit in relation to the discussion in Fodor and Pylyshyn (1988). Fodor and Pylyshyn argue against the claim that language competence can be accounted for through the same type of neural network procedures that can account for skills such as motor routines and perceptual recognition (cf. above p. 61). Their central argument for a level of compositional syntax corresponding to the quasi-generative language of thought is that it is impossible to explain the capacity to understand utterances in terms of a non-compositional approach to the input. In quantitative terms, the ability to differentiate all possible twenty-word English sentences without recourse to compositional structure would require a fantastic number of output categories (corresponding, as they claim, to the number of seconds since the big bang). The qualitative side of the argument is the "systematicity" argument, corresponding to the "generality" argument of Evans (1982: 100): to know a predicate is, by definition, to know that it can apply to new instances; and to know an object is to know that different predicates can be applied to it. If we look at it from the reception point of view, this means that it is implausible to conceive of sentences as unstructured input to a network, consisting only in a bundle of features like coloured dots in a picture — that would not capture the element of systematicity/generality in the way sentences are built up. This argument is fatal to the approach that sees the mind as one big neural net which eats all inputs in one mouthful, and thus to simple-minded connectionism (cf. also the concessions to compositionality offered by Smolensky 1987). However, it does not apply to recurrent networks, i.e.
systems where output categories can be fed back into the system; and from a connectionist point of view this could be seen as simply rendering the criticism irrelevant today. But saying that recurrent networks can handle the problem does not fully clarify the nature of the syntactic skill that goes into sentence processing. Fodor and Pylyshyn assume that syntax is inherent in the message conveyed by the sentence. But according to the suggestion above, the compositionality is in the process rather than in the received message. In understanding a sentence we need to be able to send parts of it separately through the recognition process: the operation of ascribing a predicate to an individual could not work unless predicates and individuals could be

Cognitive and evolutionary perspectives 223

selected for processing in a compositional manner. But this does not imply anything about atomic conceptual units: in ascribing the predicate walk to the individual John we may have a very rich and experientially based understanding of both 'walk' and 'John'. This type of compositionality fits unproblematically with the "recipe" view of (content-)syntactic structure that was illustrated above with the sentence did John go (cf. p. 212). Each separately coded element in the structure must be recognizable as part of the input to the interpretive process: (15)

'interr' ('past' ('go' ('John')))
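The procedural reading of (15) can be made concrete with a toy sketch. This is my own illustration, not the author's formalism: each coded content element is rendered as a function applied in scope order, innermost operand first, and the dictionary representations are purely illustrative assumptions.

```python
# Toy "recipe" reading of (15): 'interr' ('past' ('go' ('John'))).
# Each content element is a processing instruction; interpretation
# applies them inside-out, as dictated by the scope relations.

def john():
    # invoke the individual concept
    return {"referent": "John"}

def go(individual):
    # ascribe the predicate to the individual
    return {"predication": ("go", individual)}

def past(predication):
    # anchor the predication in past time
    return {"anchored": predication, "time": "before-now"}

def interr(anchored):
    # mark the whole as a question
    return {"speech_act": "question", "content": anchored}

# 'interr' ('past' ('go' ('John'))) — applied innermost-first:
utterance = interr(past(go(john())))
print(utterance["speech_act"])  # question
```

The point of the sketch is that the compositional structure lives in the order of application, not in any one of the resulting data structures.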

Understanding requires that all the elements involved in the semantic structure are recognized and applied to each other in the order specified by the scope relations. If recurrent networks functioned exactly as content-syntactic structures would have us expect, then the structure of the processing procedure would match the content-syntactic structure. Although this would clearly be a simplification15, it shows both that neural nets can be used in compositional understanding and that the input-output mechanism that is their distinctive feature is not in itself enough. We must be able to impose compositional structure on the elements in our processing procedure: if all our recognition procedures (one for each word, for instance) were just churning away without coordination, no coherent understanding would be possible. In other words, syntactic relations are not part of the gradual, emergent, bottomless network itself, as cognitive linguists sometimes appear to assume. Syntactic competence involves the ability to combine meanings in canonical (as well as meaningful) ways — and in order to do that it is necessary to suppress a large part of the semantic links that are possible in terms of the network, as shown by the work of Gernsbacher and associates (cf., e.g., Gernsbacher—Faust 1994); and the compositional syntax is in the combination task, not in the multitude of potential links.

In relation to the "classical" architecture, this shows that the compositional structure does not have to be in the mental states themselves. Once the processing is over, the resultant understanding may be describable by means of a feature matrix. Thus there need be no identifiable state of believing that 'John loves Mary', with just those atomic concepts in a canonical relationship with each other — this belief may be a property

shared by a number of different mental states which are all in themselves much more complex. This is also what is implied in the "mental models" view of deductive reasoning, cf. Johnson-Laird (1983); Johnson-Laird—Byrne (1991).

The distinction between compositionality in the procedure and in the construct may be difficult to sort out when there is inherent structure in the input also, as in syntactically structured sentences. To illustrate the difference, it may be useful to take a task without inherent structure, like looking for a needle in a haystack. If we had only the "one-mouthful" processing option there would be only one possible instruction, i.e. "go and find it!", with the proverbially slim chances of success. With hierarchical processing power, we can divide the job into manageable sub-tasks, such as: (1) put the haystack on a tarpaulin, (2) divide it into chunks, (3) assign each chunk to a person with a metal detector, (4) arrange for a recycling procedure in case the needle is missed in the first round, and (5) assign one person to keep an eye on results and make decisions about the further processing. In this case, the hierarchy would clearly be external to both input and output as such, but potentially useful nevertheless. If we have that general ability, it enables us to handle input with inherent structure also, without requiring a special level of compositional cognitive symbol-processing: we can handle an input consisting of separate, but structured linguistic symbols (whose existence nobody in his right mind would deny) without postulating a level of atomic cognitive symbols. Just as with the recipe, the hierarchy may not be in the product, although it is necessary for the process.16
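The needle-in-a-haystack analogy can be sketched in code. This is my own illustration of the point that the hierarchy lives in the processing procedure, not in the input or the result; the chunk size and the matching predicate are arbitrary assumptions.

```python
# Hierarchical decomposition of a search task: the input (a flat list)
# and the output (a single position) are both unstructured; only the
# procedure between them is hierarchical.

def find_needle(haystack, chunk_size=4):
    # (2) divide the job into manageable sub-tasks (chunks)
    chunks = [haystack[i:i + chunk_size]
              for i in range(0, len(haystack), chunk_size)]
    # (3) process each chunk separately; (5) coordinate the results
    for chunk_index, chunk in enumerate(chunks):
        for offset, item in enumerate(chunk):
            if item == "needle":
                # the product is flat: the hierarchy was only in the process
                return chunk_index * chunk_size + offset
    return None  # (4) a real search would arrange a second pass here

haystack = ["hay"] * 9 + ["needle"] + ["hay"] * 3
print(find_needle(haystack))  # 9
```

Nothing in the returned position records how the chunks were divided up; as with the recipe, the hierarchy is not in the product.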

3.7. The relation between expression and content syntax

In the argument against an underlying form based on distributional abstractions, the chief point was that such a form obscures the fundamental relation between content and expression in the clause. Now that we operate with a clearly semantic structure, we have better possibilities of exploring the relations between content and expression when it comes to the combinatorial operations — to see how the combinations of content elements correspond to the combinations of expression elements.

One type of relation illustrates the basic principle in a simple way. This is the relationship between scope and constituent order, familiar from functionally oriented literature (cf. Vennemann 1973, Foley—Van Valin

1984, Givon 1982). Although the principle is not without exceptions, there is a strong cross-linguistic tendency to order constituents so that the linear order on the expression side indicates scope relations on the content side: beginning with the innermost operand, the further away we place expressions, the larger the scope of the content element. The directionality may vary between "prefield" and "postfield" languages, and languages may have elements of both, but the phenomenon in itself is not in dispute. This gives us the following schematic relation between content (C) and expression (X) structure:

(16) C1 (C2 (C3)) -> X1 + X2 + X3

We could also invert the order, both on the expression and the content side:

(17) ((C3) C2) C1 -> X3 + X2 + X1
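The mapping in (16) and (17) can be sketched as a small linearization procedure. This is my own illustration, not part of the text: the tuple encoding of the content structure and the "prefield"/"postfield" flag are assumptions for the sake of the sketch.

```python
# Linearizing a scope hierarchy C1(C2(C3)) into expression order:
# X1+X2+X3 in a "prefield" language, the mirror image in a "postfield"
# language. The content side is the same nested structure in both cases.

def linearize(content, prefield=True):
    """Flatten nested (operator, operand) pairs into expression order."""
    elements = []
    node = content
    while isinstance(node, tuple):   # walk inward through the scopes
        operator, operand = node
        elements.append(operator)
        node = operand
    elements.append(node)            # the innermost operand
    # prefield: larger scope further to the left; postfield: mirror image
    return elements if prefield else list(reversed(elements))

c = ("C1", ("C2", "C3"))
print(linearize(c))                  # ['C1', 'C2', 'C3']
print(linearize(c, prefield=False))  # ['C3', 'C2', 'C1']
```

Note that the same content argument yields two different expression orders: the content systems are identical, only the expression systems differ.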

But where on the content side the two notations describe the same construct, the expression constructs are different; so languages which have inverse ordering relations but the same scope relations between elements have different expression systems, but the same content systems.17

Constituent order, however, is not the only expression parameter available. In languages with a morphological component, there is a distinction between the types of expression regularity that apply to bound and to free elements. Morphological expression is by definition a change in word form rather than the addition of a new independent expression; so morphological expression presupposes a word form to "operate" upon. This applies whether the expression takes the form of an affix or a modification of the stem. 'Past tense' affects the stem sing in one way, the stem walk in another, but both may be seen as operations performed on an input; you have to have a lexeme in order to apply a past tense expression process. The difference in relation to "free forms" is also marked by the fact that the linear ordering is different. In English, auxiliary verbs (whose content has scope over the main verb) are expressed in front of the main verb in scope-defined order; but past tense, with the highest scope, is regularly expressed as a suffix, rather than in front of the verb. In a notation where content elements are marked with single quotes, the two cases can be described as follows:

(18) 'have' ('been' ('cooking')) -> have+been+cooking
(19) 'past' ('cook') -> cook+ed

This makes possible a higher-order generalization which has been called the "centrifugal" principle (cf. Fortescue 1992: 122), or the "conveyor belt" principle (Dik 1994b: 361): expression rules can best be described by starting at the "lowest" or "innermost" elements in the scope hierarchy (cf. also Harder 1990: 149). Under the interpretation where "underlying" structure is seen as unambiguously semantic, this means that expression dances to a tune called by semantic relations. Expression rules work so that we begin with the semantic "input" element and then successively build up a complex two-sided structure by a procedure where content and expression as a general rule go hand in hand. The two sets of processing instructions correspond to the decoding and encoding stage, respectively, but must be seen as inherently linked by the functional relationship that was described above: if you follow the process instruction for expression, you get an output which has the function of triggering the process instruction for content (in the addressee).

The link between content and expression can be illustrated with an example from Harder (1992), the NP the old elephant. The content structure is (roughly)

(20) 'definite' ('old' ('elephant'))

The processing instructions can be spelled out as follows:

Content instruction                          Expression instruction
(instruct interpretation device to...)       (instruct production device to...)

invoke the concept 'elephant'                call noun stem elephant

modify the elephant concept so that          put adjective old in front
the elephant becomes old

identify contextually salient                put definite article in front
instantiation as being referred to
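The paired instructions above can be rendered as a small sketch of the centrifugal build-up of the old elephant. This is my own toy rendering, not the author's formalism: the step tuples and the uniform "put in front" operation are simplifying assumptions.

```python
# Centrifugal build-up of "the old elephant": each step couples a
# content instruction with an expression instruction, starting from the
# innermost element of the scope hierarchy and working outward.

steps = [
    ("invoke the concept 'elephant'", "call noun stem", "elephant"),
    ("modify the concept: the elephant is old", "put adjective in front", "old"),
    ("identify a salient instantiation", "put definite article in front", "the"),
]

def build_np(steps):
    expression = []
    for content_instruction, expression_instruction, form in steps:
        # content and expression go hand in hand: executing the content
        # instruction triggers the paired expression instruction
        expression.insert(0, form)  # each new element lands in front
    return " ".join(expression)

print(build_np(steps))  # the old elephant
```

The expression side is built by successive "put in front" operations precisely because each later step has wider scope than the one before it.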

These instructions must not be understood as direct instructions to the speech organs, only as routines that are presupposed for the articulation and understanding of syntagmatic relations in the clause. In the computer analogy for procedural semantics (cf. above p. 111), these instructions are at the "compilation" stage, which logically must precede effective execution. In actual practice, however, execution need not reflect the logical structure of the compilation component. It is perfectly possible that a speaker begins with a definite article, saying the..., without knowing how to go on — but the rules for creating a well-formed NP dictate a sequence of expression elements and a scope hierarchy of content elements that must be in order when the whole utterance is finished.

In this example, there is one expression instruction for each content instruction. However, the absence of isomorphy in closed paradigms means that this is not always the case. The use of zero morphs (e.g. cut as past tense) means that sometimes nothing happens on the expression side. One could also describe it as a matter of selecting the appropriate form in the paradigm (the "word and paradigm" approach, cf. Hockett 1954). In that case something does happen, namely a "pass" bid in the sequence, because one stays with the base form of the lexeme, even if there is no visible change in output. The only empirical way to choose between these descriptions would be via psycholinguistic experiments. Similarly, with portmanteau morphs, nothing "visible" happens on the expression side at a particular stage — in this case because the expression codes two content elements together, and we cannot select the expression before we have both content elements (a "stand by" order rather than a "pass").
A case in point is the past tense form went: if we start by selecting the stem go, and then apply past tense to it, we need to throw out that stem again at the next stage in order to select went, which codes 'go' + 'past' in one operation.

Allomorphic variation sometimes also depends on higher-level elements. In German or in Danish, the expression rules for the equivalent of the old elephant could not work until all the elements were in place: in order to give the right form to the adjective, one needs to know what the higher-scope determiner is. This is why the actual instructions to the speech organs and to the interpretive mechanism in the brain often cannot work properly before the whole of the compilation is in place; it would be "risky" to begin before all choices were made. So even at the higher stage of abstraction, we do not get complete isomorphy between content and expression. It is intuitively more plausible that advanced processing

capacity works by having only a minimal number of expression commands, and these work only when all intended content elements are in place. Non-fluent speakers are known to make false starts because they cannot handle all the relevant mechanisms in one go, and therefore execute expression procedures that ought to have been "overruled" by other, "higher" procedures.
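The "pass" and "stand by" bids discussed above can be sketched as a toy realization function for English past tense. The lookup tables and function name are my own assumptions; the point is only to show zero morphs, stem modification, and portmanteau selection as three different outcomes of one expression procedure.

```python
# Toy past-tense realization: affixation (walk+ed), stem modification
# (sing -> sang), a zero morph "pass" bid (cut -> cut), and a portmanteau
# "stand by" (go + 'past' -> went, one form coding two content elements).

PORTMANTEAU = {("go", "past"): "went"}  # one form codes two content elements
ZERO_MORPH = {"cut"}                    # base form reused: a "pass" bid
IRREGULAR_STEM = {"sing": "sang"}       # stem modification, no affix

def realize_past(stem):
    if (stem, "past") in PORTMANTEAU:   # stand by until both elements known
        return PORTMANTEAU[(stem, "past")]
    if stem in ZERO_MORPH:              # nothing visible happens
        return stem
    if stem in IRREGULAR_STEM:          # operation on the stem itself
        return IRREGULAR_STEM[stem]
    return stem + "ed"                  # default affixation

for stem in ("walk", "sing", "cut", "go"):
    print(realize_past(stem))  # walked, sang, cut, went
```

Note that the portmanteau case has to be checked first: the stem go selected at the earlier stage is thrown out once both content elements are available.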

3.8. Differences in relation to standard Functional Grammar

I now return to the theoretical apparatus of Functional Grammar, in the hope of showing that the views I have defended constitute a more functional approach to clause description, and hence are more in keeping with the basic principles of the theory; the discrepancies between my views and standard Functional Grammar, I argue, are due to a generative skeleton in the cupboard of Functional Grammar. I begin by looking at these basic principles.

As the name suggests, Functional Grammar in the version developed by Simon Dik and associates understands language as basically functional, i.e. as a means of communication. The theory of clause structure which is the core of the theory is therefore seen as an integral part of a larger theory, involving all those processes that enter into human communication. As expressed in Dik (1989), the model of language is part of a larger model of the "Natural Language User" covering his total communicative competence. Among the non-linguistic capabilities that a theory of language must be able to link up with are logical, perceptual and social skills. Although the grammatical theory is not in fact very fully integrated with research in human cognition and interaction, the grammatical description is thus programmatically subject to criteria of adequacy that go beyond grammar.

In more or less direct polemical opposition to Chomsky's notion of "explanatory adequacy", Dik sets up three criteria of adequacy: "pragmatic adequacy" reflects the embeddedness of linguistic theory in processes of communication and requires that the constructs of the theory are revealing with respect to the way in which language is used in communicative situations. "Psychological adequacy" reflects the integration of linguistic theory in the larger context of the workings of the minds of the language users; this places linguistics within the enterprise of cognitive science.
"Typological adequacy" is part of the traditional domain of linguistics

proper, but insists that theories should respect differences as well as similarities between languages. Functional Grammar thus invites discussion and criticism on all these points, instead of sealing itself off from "external" attention. This orientation is also reflected in the order of priority between syntax, semantics and pragmatics:

...pragmatics is seen as the all-encompassing framework within which semantics and syntax must be studied. Semantics is regarded as instrumental with respect to pragmatics, and syntax as instrumental with respect to semantics. In this view there is no room for something like an "autonomous" syntax. On the contrary, to the extent that a clear division can be made between syntax and semantics at all, syntax is there for people to be able to form complex expressions for conveying complex meanings, and such meanings are there for people to be able to communicate in subtle and differentiated ways. (Dik 1989: 7)

I now turn to the nature of the theoretical apparatus. The central problem is the role of meaning in the theory, and the basic objection-in-principle can be stated by reference to what was said above on generative grammar. What I claim is that the descriptive practice embodies an attempt to build a functional description of language on a notion of underlying structure inherited from generative grammar. The problem can be seen in the following passage:

In order to do justice to the formal and semantic properties of clauses in a typologically adequate way, we assume that each clause must be described in terms of an abstract underlying clause structure which is mapped onto the actual form of the corresponding linguistic expression by a system of expression rules....:

UNDERLYING CLAUSE STRUCTURE
            ↓
EXPRESSION RULES
            ↓
LINGUISTIC EXPRESSIONS

The underlying clause structure is a complex abstract structure in which several levels or "layers" of formal and semantic organization have to be distinguished. (Dik 1989: 46)

The generative spectre appears in the notion of an abstract underlying structure of formal and semantic organization: abstraction is combined with a structure that is simultaneously "formal and semantic", exactly as in the generative notion of underlyingness. To show how the underlying structure is in fact motivated by abstractions based on distributional generalizations (cf. the critique of underlyingness p. 187 above), let us take Dik's own example, the analysis of definiteness (cf. Dik 1989: 15).

Dik begins by considering the possibility of accounting for definiteness in terms of the syntactic structure that we find in the case of the English definite article: we have an NP with a determiner in front of a noun, such that adjectival modifiers may come in between. If the grammar gave such an account, it would be at a loss when confronted with definiteness as a suffix (as in Danish) or with definiteness as embodied in pronouns and proper names. Such an account would therefore undergeneralize both intra- and cross-linguistically. "The notion definite article is too concrete to yield a typologically adequate account of definiteness", Dik concludes. Instead he suggests positing a more abstract operator "d", which can then be mapped into different forms of expression, as in his diagram (3), rendered below as (21):

(21) English: d[house] = the house
              d[John] = Ø John
     Danish:  d[hus] = huset (Danish hus = house)

The operator "d" enables the linguist to formulate rules affecting definite NPs in a simple manner and provides an elegant way of accounting for distributional facts in this area. One may recognize from the shooting of the hunters (p. 188 above) two characteristic features of the argument: first, the fact that structures are used to bring out something that is not evident from the "surface form" (in this case the syntactic configuration associated with the definite article in English); secondly, that although there is no reference to meaning in the argument — the issue is whether we can derive the structures most elegantly from one representation or the other — the whole argument is crucially dependent on a semantic intuition: we know that all these items instantiate definiteness, because we know their meaning. I should like to emphasize that this dependence on semantic intuition is nowhere made explicit in the argument itself. The notion of a syntactic configuration

like that of the English article is rejected for one reason only: it does not match the syntactic distribution of definiteness as expressed in the Danish suffix and in pronouns. Had definiteness always been expressed by a preposed grammatical word, the notion "definite article" would have been all right, so we understand by contraposition. As in the case of the shooting of the hunters, we could elegantly capture the same insight by saying that we have a meaning element which we call "definiteness".18

The difference in relation to generative grammar, however, is that in Functional Grammar so much interest has concentrated on semantic sameness that the general impression is that the underlying structure is actually a semantic structure (cf. e.g. Van Valin 1990: 199; but cf. Siewierska 1991: 10). In the face of this, it may sound like quibbling to insist, despite general appearances, that

(a) Standard Functional Grammar clause structure is in fact not semantic but (cf. Dik above) both formal and semantic, hence "underlying" in the same distribution-based sense as in generative grammar, and

(b) this procedure ought to be replaced by a model based on semantic relations in the clause; in other words, Functional Grammar representations ought to be made semantic, so that expression rules could mediate between content and expression, instead of mediating between an abstract structure and a concrete linguistic expression.

In spite of the nicety of the distinction, I think the point is extremely important; the risks of ignoring it are exactly the same as in generative grammar (cf. p. 188). The central problem is the status of coded meaning — the locus of the functional anchoring of language — in the theoretical apparatus. Meaning is only found in one obvious place inside the derivational apparatus, namely at the beginning of the derivation, in the lexicon (or fund); the "semantic functions", like the rest of the syntactic apparatus, are motivated semantically, but not provided with a semantic interpretation. Operators are just set up in the manner of the "d" operator above, and are then used as input to expression rules; there is no semantics-of-grammar component.19 The generative nature of the descriptive practice also reveals itself in the

problem of the "borderline between syntax and semantics". In the first quotation from Dik above, the problem is revealed in the formulation "...to the extent that a clear division can be made between syntax and semantics at all...". A basic distinction between content and expression in syntax makes it possible to provide a much clearer restatement of the functional basis of grammar: expression syntax is instrumental in relation to content syntax (we combine expression elements in order to construct combined meanings); and content syntax is instrumental in relation to pragmatics (we construct combined meanings in order to convey communicative intentions). To say that syntax is the servant of semantics is a less than totally convincing manifesto, if one's theory cannot tell the difference between them.

In spite of the absence of a "semantic component", semantic descriptions are found everywhere in articles written within the framework; but they are used as ammunition, to motivate structures. The descriptive framework encourages a pattern of thinking whose goal is grammar-as-such (i.e., essentially uninterpreted distributional categories) — and which reduces semantic intuitions to the status of arguments for grammatical "devices" — exactly in the manner of generative underlying forms. And, as pointed out by Janssen (1981), one might perfectly well provide a Montague-semantic interpretation for Functional Grammar; there is nothing about the derivational apparatus itself that indicates why a functional semantics would be more natural.

A final example of the "syntacticism" of the descriptive apparatus itself is Hengeveld's explicit view of the layered structure as embodying a version of the performative hypothesis of Ross (1970), where the illocutionary structure incorporating speaker, hearer and performative verb is seen as part of syntax. Hengeveld, as quoted above (cf.
Hengeveld 1990a: 7), regards the illocution as an "abstract predicate" which carries an "illocutionary frame" consisting of speaker and hearer (who provide the pragmatic anchoring that corresponds to the performative hypothesis). One argument for this is descriptive elegance: lexical predicates have a predicate frame which structures the representational level, and illocutionary predicates have an illocutionary frame which structures the speech event.20

One reason why the performative theory was in general abandoned was the so-called "performadox". The issue started with the problem of whether explicit performatives were true of anything, or were only acts, as claimed by Austin. If no truth value could be assigned to them, logical semanticists would have to concede some of their heartland: declarative sentences

without truth values would be uncomfortable from a logical point of view. But at first it seemed that the division of labour could be upheld. Although performatives do not function as descriptions of pre-existing reality, the propositional nucleus of statements like "I apologize" could, for logical purposes, be seen as true by virtue of the successful performance of the speech act itself (cf. Lewis 1972). Afterwards, one could truly say "he apologized", and it would be illogical to say that the proposition was not true of the event that took place also while it took place.

But there is a problem that this does not solve (cf. Levinson 1983: 251): if the earth is flat is understood in terms of the "underlying" performative version "I state to you that the earth is flat", then it ought always to be true, merely by virtue of somebody uttering it! Hengeveld describes his solution as avoiding that problem because the interpersonal domain of the illocution is explicitly outside the domain of truth and falsehood (which belongs in the lower, representational, layer); but that solution is hardly better than a definitional strategy. And it is doubtful what empirical content there can be in calling something a "predicate", if it cannot be true or false of anything.

In terms of the principles I have argued for above, the basic problem is that the inclusion of performativity in syntactic structure means that structure is made to do a job that is really a matter of semantic content. The relevant semantic item is the content element 'declarative' (which is part of a semantic description because of the commutation with 'interrogative'). By making a clause declarative, you add something to the semantic recipe that constitutes clause meaning, namely the instruction that the clause is to be understood as conveying a fact; and any attempt to translate this semantic content into structure will only generate confusion.
The relation between language and context can never be accounted for via structure, since structure is by definition inside language; for the way in which meanings hook on to context, see below, p. 286.

The basic format of the layered content structure, as already outlined, looks very like the abstract underlying structures of Functional Grammar — because semantic factors are central arguments for such structures. Yet the modifications of principle that I have suggested also have consequences for the actual representations. One dimension of change is that semantic structures will be simpler than distributional abstractions, because there can be no more "underlying" elements than there are coded content elements in the clause. This means that we cannot attribute elements to a clause that

are necessary only because of distributional abstractions that do not affect the structure of the individual clause. This is true both intra-linguistically and cross-linguistically (compare below). In contrast, the layered format as understood in Functional Grammar is seen as a format for capturing generalizations, irrespective of how much of it can be seen as a property of the single clause (cf. the argumentation in relation to the definiteness operator above p. 231).

As a recent example of how the distribution-based approach to underlying form makes for more complicated underlying structures than necessary, I would like to take Hengeveld's argument for enriching the representations of subclauses (cf. Hengeveld 1992c): In Bolkestein (1990), Hengeveld (1990) and Dik and Hengeveld (1991) it is argued that further subdivisions have to be made with respect to choice of operators allowed within complements. Consider the examples in (10), (11) and (12):

(10) a I saw him leave
     b *I saw him have left

(11) a I want him to leave today
     b *I want him to have left yesterday

(12) a It is possible that he leaves today
     b It is possible that he will leave tomorrow

The complements of see, want and possible all designate states of affairs. The difference between them resides in the fact that the event described in the complement of see is necessarily simultaneous with the main clause event, the event described in the complement of want is necessarily posterior to the main clause event, whereas the event described in the complement of possible is temporally independent of the main clause. This difference may be represented as in (13):

(13) See_V (x1)_Exp (Sim e1)_Go
     Want_V (x1)_Exp (Post e1)_Go
     Possible_A (π2 e1)_Go

The underlying representations of the clauses are thus provided with the "sim" and "post" operators: a clear-cut example of a grammaticality- and distribution-driven pattern of thinking, one associated with underlying as opposed to semantic clause structure. We have a set of grammaticality judgments giving rise to a distributional pattern that the grammar must "handle" in order to be able to derive all and only the grammatical sentences. In order to do this, certain devices are assigned to the underlying representation, neatly securing only orthodox combinations of predicate and complement.

In contrast, an approach in terms of the semantic structure of the clause primarily looks for the coded content elements and the semantic (content-syntactic) relations between them. In the case described here, this is fairly simple: we have the meanings of the three predicates, and the semantic relation between the predicates and their complements. Then what about the constraints on temporal relations? The primary strategy would be to ask whether they could be seen as following from the semantic descriptions themselves — and this would appear to be quite promising in this case. A semantic description of see must specify the relationship between perceptual subject and object, and simultaneity between the act of perception and the perceived event would appear to fall naturally out of this description. Similarly, the meaning of want involves orientation towards the future. One may argue that it is perfectly possible to want things to have been otherwise than what they are; but interestingly, if we permit the expression of wants that affect the past, the ungrammaticality of the example evaporates. In the company of a wizard it would make sense to say, for example, I want my rival to have left yesterday. Finally, the meaning of possible tallies with the temporal neutrality of epistemic judgments.
The reason why elements like "sim" and "post" come up in underlying representation is therefore simply that meaning has been submerged in the distributional pattern of thinking that Functional Grammar has taken over from generative grammar. This example simultaneously illustrates that the postulation of elements in underlying representation is not constrained by distinctions made in the coding system, i.e. in the way captured by commutation. The question as to where "sim" is coded does not arise at all. A major difference between the two ways of thinking, therefore, is that layered semantic structure consists only of coded elements, whereas layered underlying structure can contain elements that are not justified in the coding system.

236 Clause structure in a functional semantics

Some of these are distributionally motivated "triggers" that ensure correct predictions, whether they are motivated semantically or not. Others are semantic distinctions which are deemed important irrespective of coding.

One central point on which this difference comes out is the status of the notion "proposition". According to the description given here, a proposition is the result of applying identifying, deictic tense to a predication. According to the description given in Hengeveld, however, the notion is not associated with any particular semantic element in the clause (cf. Hengeveld 1990a: 2):

From an interpersonal point of view, what is governed by an abstract performative predicate is the propositional content of the speech act, not the description of a SoA [state-of-affairs] as such. Unlike SoAs, propositional contents can be asserted, known, denied or questioned, i.e. "they are entities of the kind that may function as the objects of such so-called propositional attitudes as belief, expectation and judgement" (Lyons 1977: 445). These properties of propositional contents show that (i) they have to be separated from the illocutionary forces they may be subjected to and (ii) they have to be separated from the SoAs they describe. The latter conclusion makes it necessary to study a predication from both an interpersonal and an ideational perspective. From an interpersonal perspective, a predication constitutes the propositional content of a speech act. From an ideational perspective, a predication constitutes the description of a SoA.

As Hengeveld has put it (p.c. 1993), "for me a proposition is not simply a predication with an operator added to it, but a unit with its own grammatical status and ontological correlate". In the methodology adopted here, grammatical status can only be established by means of commutation, and the ontological difference (which is undisputed) does not in itself count as qualifying for a distinct place in linguistic structure — according to the principle that language is structured substance, neither pure structure nor pure substance.

But what about the languages which do not have identifying tenses — do they not have propositions? If a language lacking identifying tense has a choice between declarative and interrogative, for example, the answer is yes, they do. The proposition can in that case be seen as necessary in virtue of the principle of sense, by the same logic that brings about syntagmatic implicature (cf. p. 217); but since it is part of the standard repertoire of
interpretive routines, it must be understood as a coded aspect of the meaning of the combination.21 In the assumed case, the operand of the declarative operator is coded only as reflecting a state-of-affairs — which (as described in Hengeveld's quotation above) is incapable of being in itself the subject of a propositional attitude. If it is to make sense to apply a declarative operator to it, we therefore have to enrich the meaning by a piece of reasoning going roughly like this: since a declarative conveys a fact, i.e. presents something as true, and a state-of-affairs in itself cannot be true or false, the predication must be applied to the world at some point (reflecting a "tense" that is neutral with respect to the distinction between past and present) in order for the message to make sense.

If the layered structure is seen as consisting of content elements, rather than a mixture of distributional abstractions and semantically motivated items, the layered clause comes to consist basically of "micro-layers" where each content element adds a layer of its own, rather than the four to six macro-layers (cf. Harder 1990, Nuyts 1992). But it makes sense to keep thinking in terms of the four original macro-layers, since they represent ontologically different types of semantic wholes which are typologically very general, if not a hundred per cent universal.

The unambiguously semantic structure can perhaps also provide some clarification of the status of the predicate and arguments in the layered structure. The lack of clarity in Functional Grammar up to now (cf. Dik 1989: 46 versus 1989: 50; Keizer 1992a: 2; Hengeveld 1992a: 35) is a natural consequence of its conception of structure as "semantic-plus-syntactic", since expression structure diverges from content structure at this point. Semantically, arguments are innermost in the scope hierarchy: they are the conceptually autonomous billiard-balls in the model (compare the discussion of Cognitive Grammar below).
This is in conflict with the "expression" approach to layering, most clearly seen in Foley and Van Valin, where the predicate itself constitutes the nucleus of the clause, and the arguments constitute a layer between the predicate and the satellites. At this point the semantic interpretation clearly diverges from the expression-based structure, where the verb is the central item and the arguments are positionally diverse. Because we eliminate extraneous distributional abstractions from the semantic clause structure, we thus get simpler clause descriptions.

Another source of simplicity in the purely semantic structures is due to the "recipe" view of meaning, and consists in the elimination of indexing from clause
representation. The recipe view is opposed to the referential approach of standard Functional Grammar, where each layer is organized in terms of the format of an indexed variable followed by a "restrictor", another point on which the model carries over elements from the tradition of formal grammar. Indexing was introduced into linguistic descriptions by Chomsky (1965), and its use was extended in an influential article by Bach (1968). One of his main points was that the descriptive content of nouns should be separated from the indexed referent, yielding the analysis where the professors signed a petition was assigned an underlying form corresponding to "the ones who were professors signed something which was a petition". This format was taken over for the analysis of NPs by Dik (1978)22, extended to predications by Vet (1986), and generalized to the four layers introduced in Hengeveld (1989).

The theory underlying this format is that each layer can be said to "designate" or "refer to" an entity of a different order (following Lyons 1977: 442-47). The variable symbolizes the actual entity, seen as belonging to a domain of potential referents, and stripped of descriptive specification; the restrictors then contain the descriptive content of the layer. The terminology reflects a set-theoretical universe of thought: referents are naked entities denoted by formal symbols, and the set of potential referents is "restricted" by the descriptive specifications.

Since the original four layers were set up, the approach has been extended even further. Predicates have been included as referential (cf. Keizer 1992a), and an extra layer has been introduced in the shape of an "illocution" intermediate between clause and proposition. An underlying representation, according to Hengeveld (1992a: 35), thus contains at least six types of variable:

(22) term (xn); predicate (fn); predication (en); proposition (Xn); illocution (Fn); and clause (En).
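The "indexed variable plus restrictor" format just listed can be pictured as a nested data structure. The following sketch is my own illustration, not Hengeveld's or Dik's actual formalism: the class, the rendering convention, and the restrictor labels are invented for the purpose; only the general shape (a variable, descriptive restrictors, and a nested operand) follows the description in the text.

```python
# Illustrative sketch (invented names) of the 'variable: restrictor (operand)'
# format of standard Functional Grammar underlying representations.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Layer:
    var: str                                          # indexed variable: x1, f1, e1, X1, F1, E1
    restrictors: list = field(default_factory=list)   # descriptive content of the layer
    operand: Optional["Layer"] = None                 # the layer inside this one's scope

def render(layer: "Layer") -> str:
    """Spell out a layer as 'var: restrictors (operand)'."""
    parts = [layer.var + ":"] + list(layer.restrictors)
    if layer.operand is not None:
        parts.append("(" + render(layer.operand) + ")")
    return " ".join(parts)

# Bach-style analysis of "the professors signed a petition": descriptive
# content separated from the indexed referents.
term = Layer("x1", ["professor"])
predication = Layer("e1", ["sign"], operand=term)
proposition = Layer("X1", ["assertable-content"], operand=predication)
clause = Layer("E1", ["DECL"], operand=proposition)

print(render(clause))
# E1: DECL (X1: assertable-content (e1: sign (x1: professor)))
```

The point of the sketch is only that the indices (x1, e1, X1, E1) sit in the representation itself; the discussion below argues that this is exactly what a purely semantic clause structure can do without.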
The chief motivation for indexing, from Chomsky (1965) to Hengeveld (1992), is coreference: in order to specify that the same entity is referred to, we need ways of indicating what is referred to in both cases; hence the need for indexing. Hengeveld's argument for positing variables (cf. Hengeveld 1990b: 108) is that "anaphoric reference can be made to any of these variables after the production of an utterance", as illustrated in the following examples:
(23) A: Come here, please! B: Is that an order? (that = speech act)
(24) A: He's a liar. B: That's not true. (that = proposition)
(25) A: John won't come. B: That's a pity. (that = SoA)
(26) A: Yesterday I saw a boy with a scar on his face. B: That must have been my brother. (that = individual)

In 1992, variables for predicates, including Hengeveld's "illocutionary predicate", were introduced for similar reasons:

(27) Ernest is sleeping. So is Jack. (Keizer 1992a: 4)
(28) John is intelligent, and so are you. (Hengeveld 1992a: 31)

where so refers back to the predicate. The example of referring to an illocutionary predicate is less transparent:

(29) A: Shut up! B: Don't talk to me like that!

Here like that is said to refer to the selection of "illocutionary strategy" (Hengeveld 1992: 34). Some of these interpretations could be challenged; but the point here is the principled relationship between the entities referred to by B and the semantics of the clause uttered by A, so let us accept the different references of that which are suggested here. Do they prove that there must be indexed variables in the semantics of the clause?

This argument depends on a model-theoretic, "standing-for" view of meaning. Only if the meaning of a linguistic expression is the entity which it stands for do we need the tagging of entities as part of linguistic semantics. The "recipe" view, where meanings are input to interpretation, does not require reference to actual entities, any more than recipes refer to actual salmons. The point is perhaps most obvious in the case of the whole
speech act, where the argument against the referential view essentially recapitulates Austin's argument against the descriptive fallacy: the performance of a speech act does not "refer to" an act, it is in itself an act (cf. also van der Auwera 1990: 28 and 1992: 336). The possibility of referring to the act is not due to the langue properties (including the coded meaning), but to the actual utterance. I have used the analogy (Harder 1992: 320) of a physical act, which has the same referential possibilities:

(30) A: (knocks down B from behind) B: Do you think that was fair?

B's utterance does not imply that A referred to his assault in performing it. Similarly, in (30), A did not refer to the act, he just performed it, thus creating a situation in which B in turn could refer to it. The same applies to all the other possibilities of anaphoric reference, whether to propositions, states of affairs or individuals: they are referred to as aspects of the world of discourse (enriched by the previous utterance), not as aspects of the coded meaning of the clause in question.

One may now ask: if we take all indexing out of underlying representations, how can we then handle coreference relations — which are obviously part of the intended meaning of linguistic utterances? There are two parts to the answer. One part is that we neither can nor should handle intended referents as part of langue. Finding out what is referred to is almost by definition part of understanding the message, rather than part of knowing the medium. Understanding language is a matter of making sense, whereas reference is a matter first of plugging the linguistic meaning into the context in the right way, and subsequently of keeping the record straight; the phrase "reference-tracking" expresses this clearly.
The second part of the answer involves those cases where coreference relations are part of the linguistic restrictions that the speaker must know in order to handle linguistic expressions correctly — such as in the case of reflexive pronouns. The first statement is still true, but needs to be made explicit on one point. The addressee's langue knowledge, when he hears a reflexive, does not include the actual referent — but it includes an instruction, a piece of process input: namely that he has to look for an antecedent within the same clause and understand the reflexive as referring to the same entity. This may seem like smuggling indexing in through the back door — but there is a crucial difference. The instructional account has no representation of either the referent of the antecedent or the referent of
the reflexive — only a "within-clause-sameness" requirement attached to the reflexive. As stressed by Leech (1981), co-reference is a one-way relationship, linguistically speaking: it goes from the co-referent item to its antecedent, and is essentially a reference identification process like any other, only with certain restrictions imposed. What language does for the speaker here is to provide him with help for information retrieval: words like that and so are cues for this process. Language does not in itself establish relations of reference to referential "addresses" in the discourse world.

This means we can delete the whole apparatus of indexing without losing any information about coded meaning — a considerable simplification. But that does not mean that indexing is simply nonsense. Rather, it belongs at a different stage in the process than langue description: it describes an aspect of the "information updating process" that an addressee must carry out in order to be a competent dialogue partner. If we describe it in terms of the "filing" metaphor that is used in the area handled by discourse representation theory and similar systems (cf. Heim 1983), we can say that the coded meaning, like the business letters of the day, gives rise to changes in some old files and the establishment of some new ones — but the filing addresses are not part of the day's mail. They are part of the addressee's processing strategy only. If he messes up the information updating process, it is not necessarily his understanding of the language that is at fault. An indexed "underlying clause structure" describes what an addressee has made of a clause, not the coded meaning that is attached to it. But since all utterances are made for addressees to make sense of, is it not artificial to abstract from this necessary processing stage?
The reason why I think it is not is that such an approach involves either an element of duplication or, as I think is closer to the case in relation to standard Functional Grammar, a neglect of coded meaning. The indexing system, focusing on the referential stage after the processing, essentially usurps the place of a fully developed theory of coded clause meaning. If the theory contained both the meanings and the indexing of entities to which these meanings were attached in the situation, it would be duplication; but underlying structures are formulated so as to aim at specifying coreference possibilities rather than meanings.

A very important simplifying property of the process view of layered structure is the solution it provides for the division of labour between descriptions of the code and descriptions of texts formulated in the code.
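The division of labour just argued for, with retrieval instructions in the code and "filing addresses" existing only on the addressee's side, can be caricatured in a few lines. This is a toy illustration of my own of the filing metaphor; every name in it is invented, and nothing in it belongs to the theory itself.

```python
# Toy illustration (invented names) of the 'filing' metaphor: the coded
# meaning of an anaphor like 'that' is only a retrieval instruction; the
# indexed file addresses exist solely in the addressee's updating process.

class DiscourseModel:
    """The addressee's side: one 'file' per entity in the discourse world."""
    def __init__(self):
        self.files = []

    def update(self, entity):
        """Information updating: open a new file for an entity just introduced.
        The index returned is part of processing, not part of the code."""
        self.files.append(entity)
        return len(self.files) - 1

    def run_instruction(self, instruction):
        """Execute a coded retrieval instruction against the current files."""
        return instruction(self.files)

# A coded meaning as an instruction (a 'recipe'), carrying no index of its own.
def that_instruction(files):
    return files[-1]          # retrieve the most recently filed entity

hearer = DiscourseModel()
hearer.update("individual: a boy with a scar")
hearer.update("speech act: 'Come here, please!'")

print(hearer.run_instruction(that_instruction))
# speech act: 'Come here, please!'
```

The design point of the caricature is that deleting the indices from the coded side loses nothing: the same coreference facts fall out of running the instructions against the addressee's files.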

On the "product" view of layering, there is no natural stopping point when it comes to expanding the layered analysis "upwards" to include larger constituents than clauses (cf. Bolkestein 1992). A case in point is the role of particles like Latin igitur, as investigated by Kroon (1989, 1994). Often such particles establish relationships between large chunks of text, which would then have to be part of linguistic theory if the theory were to cover the products established by interpretations (Kroon's solution, however, does not imply such an expansion of sentence grammar). But if they are understood as process instructions, they need to have scope over one clause but no more. We can say that such words contain an instruction for the proper way of connecting this specific clause with the current discourse world; the clause content can thus be stored in properly connected form in the filing system, but only the instruction for connection, not the whole connected text, is part of langue.

Similarly, the narrative tenses of Bantu (cf. Dahl 1985: 112) do not necessitate the introduction of whole narratives into langue: they instruct the addressee to regard the predication in their scope as a "narrative instalment", but require no greater structural unit than the clause, as part of grammar. This also fits with the fact that the first clause of the narrative is typically in a non-narrative tense, even though from a "product" point of view the first sentence is part of the story: the narrative tenses instruct the addressee to add one clause at a time to the ongoing story, and this instruction makes no sense in the case of the first clause, since the story has not started yet.

Both within traditional grammar and in functionalist literature, there is a long tradition of squinting at discourse phenomena in grammatical description.
As will be discussed in detail with respect to tense in Part Three, section 4.5, the functional approach to meaning makes it possible to be precise about just how much needs to be part of the code. Anticipating the theory a little, we can give an example of the way in which the lack of a clear theory of the interface may cause a conflation of grammatical structure and discourse structure. Givon (1982: 127) suggests that in the Creole prototype tense-mood-aspect system the perfect ("anterior") has the widest, "discourse" scope — because it has a job that goes beyond the individual clause, namely to mark out-of-sequence events, which do not fit into the main story line; and the reason expression order does not reflect this in English, according to Givon, is due to purely diachronic factors. With the functional view of meaning, the ability of the perfect to indicate "out-of-sequence" is fully compatible with a content-structural
scope that puts it inside both deictic tense and modality. The mechanism is simply that the perfect adds a semantic property, anteriority, to the state-of-affairs inside its scope, which puts the clause outside the ongoing narrative progression. The consequences this has for text organization do not have to be part of the code. This does not mean that langue is irrelevant to text structure — but it means that the relevance is in terms of meanings, not in terms of structures.

The structures that are part of the code itself are no larger than those specified as "operands" of linguistic elements. There are differences between domains as we move up through the layers from terms to whole illocutions — but there are no content elements that require larger operands than whole illocutions. Content elements may also take combinations of illocutions inside their scope, under a very general semantic equivalence principle stating that a complex of illocutions may be regarded as a complex illocution. This means that they may attach themselves to very large utterance combinations, just as referring expressions may refer to very large chunks of text. To take an example, the "direct speech" object of said in President Castro said ".... " may contain thousands of sentences. But as long as it is not part of the very identity of such large-scope elements that they must apply to units with a structural complexity above clause level, this is a matter of selecting the right contextual linkage, not of knowing the structure of the language.

3.9. Semantic clause structure and grammatical universals

One consequence of this view of structure is that semantic (content-syntactic) clause structures, like lexical items, are part of a specific language only. The "underlying" level is thus not universal, but simply that aspect of the description of an individual language which describes how meanings go together in the construction of a complete clause meaning. The search for a level of distributional abstraction that is equally adequate for a single language and for universal generalization is thus given up here as unrealistic. One reason for this is that there is an inherent contradiction built into this search, if we see language structure in terms of ways of organizing substance, rather than in the generative fashion of looking for a universal autonomous structure that is differentiated in different ways. Structures are
anchored in the language-specific substance they organize and do not exist in themselves. To the extent that languages make the same distinctions and the same combinations in the same content domains, languages have the same structure; but this is seldom entirely the case. Hence, if we set up one formula for the description of several languages, either we overlook the differences or we attribute to all languages the distinctions appropriate to only one.

In Functional Grammar, the risk of neglecting differences is an almost inevitable consequence of trying to do without a semantics-of-grammar component, assuming that all operators are universal. This means that there is no room for cross-linguistic variations in the meaning of grammatical items such as "the perfect". In contrast to that, the notion of a semantic clause structure insists that a clause in a given language is based on coded meanings in that specific language only. The risk of overdifferentiation can be illustrated by the "placement rules", the specification of word order options in a language. If we try to work out a common set of such rules for several languages (as in the ProfGlot program, cf. Dik—Kahrel 1992), we end up with more potential places than required for the description of any single language.

Does that mean that the ambitions of cross-linguistic validity of descriptions have to be given up? Yes, in the sense that we cannot have one underlying clause structure which is at the same time a description of exactly what goes on semantically in one language-specific clause, and a description of the universal categories that the clause instantiates. This is the impossible dream of seeing Functional Grammar structures as both descriptions of actual clauses and as universal cognitive structures.
Instead, cross-linguistic generalization must be based on the unsurprising fact that it requires reference to more than one language at a time and thus has a different object of description than language-specific description.

This looks as if we are back with the early structuralist insistence that each language is a structure unto itself. That is true in the sense that the description of the individual language is logically prior to the description of universals. However, it does not entail that languages are at bottom irreducibly different. What it insists upon is that the only way to find the interesting similarities that all linguists would like to find is to go and look first at one language, then at some more languages, and then find out in what ways they are alike; statements of similarities are appropriately understood as referring to more than one language, not as descriptions of one language only. The short-cut via preconceived structures and inherited
squinting terminology works well as long as we stay within that range of closely related languages on which these descriptive concepts were based; it breaks down when we come across a language that clearly does not instantiate these concepts; and it biases the description of those languages that are similar enough for the linguist to proceed using the same terminological apparatus, but which contain interesting facts that are not captured in terms of the familiar theoretical equipment. The only safe way to avoid that is by looking at syntactic phenomena in terms of language-specific relations between content and expression in syntax.

This caveat applies not only to structural frameworks with universalist ambitions but also to less structure-oriented forms of universalism, as found in non-generative American linguistics. Much of the vocabulary of cross-linguistic comparison still in practice reflects an informal tradition of what Jespersen called "squinting grammar", whereby constructions are described by borrowing terms from languages with "similar" phenomena, without spelling out precisely what similarity is involved. To take the star example of a syntactic concept, the term "subject" is generally used without making it clear what sort of expression-content relationship is at stake; and the criterion is in practice translation equivalence with familiar subject constructions. The considerable body of typological studies which depend on the notion of subject, from Greenberg onwards, thus by and large take the notion for granted without any definition in terms of a pairing of expression and content. The existence of a clustering of properties associated with subjecthood as most people understand it (cf. E. L. Keenan 1976 and the recent version in Givon 1995) means that the word makes sense by everyday criteria across a range of very different languages.
In other words, armed with the prototype you can find more or less good candidates for subjecthood almost no matter what language you look at. The problem is that the description is only valid if we can be sure that all languages should be understood as oriented towards the same prototype. That is not likely to be the case; there are languages which cut the pie differently and do not have an expression-content pair that matches Standard-Average-European subjects to any revealing extent, a case in point being the Philippine languages (cf. Mithun 1994; Schachter 1977). The assumption that we can have syntactic notions which ignore the role of the expression-content distinction in syntax is based on an unwarranted faith in the universality of familiar inherited concepts. To the extent that it is possible to work out experimental methods of the type devised by Tomlin (1995) for
subject choice, in which a non-linguistic input reliably triggers a specific linguistic choice, it will mean a very significant increase in the precision and reliability of cross-linguistic description.

Within the area that grammatic(al)ization theory has dealt with, however, it has made a significant contribution towards providing a sanitized cross-linguistic vocabulary. Looking at a large number of genetically and typologically diverse languages, this approach has demonstrated how certain types of grammatical phenomena are cross-linguistically predictable. The central concept of grammaticization theory is the gram, short for "grammatical morpheme", understood as a content-expression pair with such cross-linguistically recurring properties23. Among the findings is that about seventy or eighty percent of tense-aspect grams can be reduced to six types (perfective, imperfective, progressive, future, past, and perfect, Bybee—Dahl 1989: 55), and that these have similar diachronic trajectories. The most striking generalizations from the point of view of grammaticization are in the diachronic area, and this is also where the affinity between cognitive linguistics and grammaticization theory becomes clear24.

Because of the scepticism found among grammaticization theorists about the inherited structural vocabulary, of which I have tried to present a sanitized version, a comparison will be illustrative. Grammaticization theory is opposed to structural linguistics on two points: the relation between synchronic and diachronic description, and the relationship between structure and substance. On both points, the revised picture I have tried to argue for is compatible with the position of grammaticization theory; what remains for me to defend is an element of the structural tradition that is not so much in conflict with grammaticization theory as in danger of being lost from view.

First, there is the issue of synchrony versus diachrony.
From a structural point of view, Saussure's radical separation between the two is too strong a statement of a valid point, analogous to and closely bound up with the overstatement about the autonomy of structure. Saussure's position was useful in getting the analysis of modern European languages out of the clutches of Latin categories. But once it is clear that you cannot say that an element "a" is an X merely because its ancestor "b" used to be an X, it is clearly implausible to say that historical development is completely disconnected from the place of an element in the synchronic whole. It only appears plausible if the structure of an état de langue is seen as autonomous — hence the connection between the two (over)statements; and such a position makes it impossible to understand how change can occur at all (cf.
Gregersen 1991). In a substance-based view of structure, diachrony stands as one of the arenas in which the validity of theories about the nature of linguistic elements and relations can be tested. In the Hegelian process towards a balanced synthesis, grammaticization theory is thus an important step forward.

Secondly, there is the issue of structure versus substance. Bybee (1985b; 1988) points out that structural categories in an individual language — the English modal verbs being a case in point — are often semantically quite diverse, and that semantically related morphemes are often structurally diverse. This intra-linguistic messiness contrasts with the high degree of cross-linguistic regularity that one can find on the basis of semantic substance alone, and therefore structural principles of description fail to focus on the real regularities (cf. Bybee 1988: 247) — at least if we presuppose the autonomy-based view of structure, where structural contrast is vital and semantic substance is out of sight.

The account given here is committed both to the fundamental importance of semantic substance and to the importance of structure as a way of organizing substance. If you take structural contrast to presuppose semantic substance, the issue is quite different from the one presupposed by Bybee: it is no longer a question of contrast as opposed to substance, but one in which we begin with substance and then look at the way language organizes it in terms of paradigmatic contrast and syntagmatic combination. The view of structure that I defend is thus compatible with Bybee's criticism of traditional structuralism. The difference of emphasis is in the type of fact that is deemed interesting. The question that I want to ask, giving voice to the structural tradition, is: how far have we got in describing the individual language when we have found that it has one of these grams?
Among the things that are not covered are not only messy irregularities of limited interest, but also two types of fact that I think ought to be of interest to any linguist: the precise meanings of the language-particular forms, and the way in which these meanings co-operate in the language system (cf. the detailed discussion on tense in Part Three, esp. p. 396). If we want to have a theoretical apparatus that forces us to ask those questions that are necessary in order to describe a given language adequately, our theory must force us to ask about the structural carving-up and organization of semantic substance; and although grammaticization theory does not reject the existence of those facts, it does not force us to ask those questions. The closing statement in Bybee—Dahl is that grams "must be viewed as having

248 Clause structure in a functional semantics

inherent semantic substance reflecting the history of its development as much as the place it occupies in a synchronic system" (1989: 97), and in Bybee—Perkins—Pagliuca (1994), the position is even clearer:

In our view, then, language-internal systems, whether tidy or not, are epiphenomenal, and the clues to understanding the logic of grammar are to be found in the rich particulars of form and meaning and the dynamics of their coevolution. Thus we take as our goal the study of the substance of linguistic elements and the processes of change that mold these elements. This substantive approach is not unique to the present study, but is rather part of the growing literature that takes a similar perspective, by which linguistic categories such as "subject", "topic", "passive", "causative", "verb", "noun", "aspect" and "tense" are universal phenomena with language-specific manifestations (cf. the work of such researchers as Givon, Hopper and Thompson, Comrie, and others). (Bybee—Perkins—Pagliuca 1994: 1)

The counterbalancing emphasis that I would like to offer can be stated in relation to the word "epiphenomenal". I agree that pure structure is not the shaping force that dictates language development — but if the synchronic state of an individual language is deemed epiphenomenal, by the same token the synchronic state of any human condition is epiphenomenal in relation to its history. It would be misguided ever to ask what situation you are currently in, because the situation is "really" just part of a series of diachronic trajectories. Synchrony and diachrony are two ways of looking at the same object: languages as they develop over time. Synchrony matters because at each point in time a language offers speakers a limited set of interconnected options; and we can only understand how language "ticks" if we describe how these options are organized. The point can be made in relation to the notion of grammaticization itself: it does not make sense to speak of grammaticization, unless we know what the difference is between being a purely lexical item and being part of the (synchronic!) grammatical structure of a language. This presupposed property, which may be called "grammaticity", can only be defined in relation to a theory of what different kinds of elements there are in a language system: we need a theory of grammar in order to have a theory of grammaticization; and only a panchronic approach, encompassing both dimensions (cf. Heine—Claudi—Hünnemeyer 1991: 248), can give a complete description.

Semantic clause structure and grammatical universals 249

The dominant position of diachrony as opposed to synchronic expression-content relations is stated in relation to the messy semantics of subordinating moods, with the Spanish subjunctive as a case in point:

While the approach offered by grammaticization theory does not solve this synchronic problem, it can at least cast the problem in a light which allows us to better understand why subjunctives exist at all and why they have the distribution that they do. If we view the uses of subjunctives as links on a grammaticization chain, we can accept the possibility that a gram might be meaningful in one context but not in another, and we can stop searching for the one meaning that inheres in all the uses and start examining the processes that lead speaker/hearers from one use to another (Bybee—Perkins—Pagliuca 1994: 213).

The polemical thrust of the argument is directed against a "one-form-one-meaning" approach to grammatical description. One can accept that point (cf. the discussion on the nature of content elements above, section 3.3), and still insist that the key to understanding the state of an individual language is the relationship between expression and content, as constrained by structural relations on both sides: in its most primitive version, a "one-form-one-meaning" approach would mean that elements retained their properties no matter what context they occurred in, and this would be equally implausible in terms of both synchronic structure and diachronic development. To some extent the focus of interest is determined by the data. The interest in precise description of language-particular meanings and in precise patterns of structural organization presupposes much more detailed knowledge of a given language than the generalizations that can be made on the basis of translation equivalents or grammatical descriptions of large numbers of languages. I do not think there ought to be any conflict between the two approaches. The basic compatibility is clear when the necessity of both universal and language-particular studies is emphasized (Bybee—Perkins—Pagliuca 1994: 149); but also in relation to the structural issue, it is clear that a great deal of the discussion in grammaticization theory actually presupposes the reality of structure; the keynote formulations, as always, are more radical than the actual practice. One passage which reveals this is when broad categories like tense and aspect are said to be "cognitively significant semantic domains, but not structurally significant categories" (Bybee—Perkins—Pagliuca 1994: 3). As
will be apparent in Part Three, I agree completely that tense and aspect are non-structural categories; but this statement would hardly make sense if all structural categories were entirely "epiphenomenal" — in that case no structural categories would ever be "significant". Therefore the structural view, when seen as substance-based, constitutes a necessary corrective to a description based purely on the cross-linguistic generalizations that are central to grammaticization theory. Insisting on the basic status of language-particular structure has implications for the way we must understand theoretical generalizations. Cross-linguistically valid descriptions are possible only as explicitly based on abstractions applied to several (in principle, all) languages at the same time. What almost all linguists find most interesting is to discover phenomena which involve systematic regularities on the expression side in the individual language, as well as similarities in both content and expression between languages; there is actually very little disagreement between linguists of different schools on this point. The point I am making is just that the ontological basis of the cross-linguistic concepts that are used to describe languages in this way is an abstraction based on the act of comparison itself. In saying that a language is, for example, "ergative", what you say is that it shares with a group of other languages a certain systematic link between content and expression, whereby (roughly) agents in two-argument clauses are marked in one way and patients in two-argument clauses are marked in the same way as single arguments. This description is different from the description of a single clause in an individual language, because it involves a much higher level of abstraction. The two levels can and should be linked: the concrete expression that marks the agent in a two-argument predication instantiates the cross-linguistic property "ergative".
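The rough cross-linguistic generalization just given can be sketched as a small function. This is a sketch of the comparative abstraction only, not of any particular language's grammar; the role labels and the function name are my own illustrative inventions, and the sketch assumes the first role listed for a two-argument clause is the agent.

```python
def mark_ergative(roles):
    """Assign case labels to the argument roles of one clause.

    Ergative-absolutive alignment as a cross-linguistic abstraction:
    the agent of a two-argument clause is marked one way (ERG), while
    the patient of a two-argument clause and the single argument of a
    one-argument clause share the other marking (ABS).
    """
    if len(roles) == 2:
        agent, patient = roles          # assumption: agent listed first
        return {agent: "ERG", patient: "ABS"}
    if len(roles) == 1:
        return {roles[0]: "ABS"}
    raise ValueError("sketch covers only one- and two-argument clauses")

# Transitive clause: agent marked ERG, patient marked ABS
print(mark_ergative(["agent", "patient"]))  # {'agent': 'ERG', 'patient': 'ABS'}
# Intransitive clause: the single argument patterns with the patient
print(mark_ergative(["runner"]))            # {'runner': 'ABS'}
```

The point of the sketch is precisely its level of abstraction: it says nothing about the language-specific content of ERG or ABS (topicality status, for instance), which is the limitation discussed in the next paragraph.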
However, the problem then is to stay aware of the two levels we are operating with: the cross-linguistic pattern and the language-specific organization of expression and content. It is tempting to see ERG or ABS in the glossing as covering both at the same time — but the two sets of abstractions do not automatically fit: we cannot deduce from the existence of an absolutive case-marking affix what (for instance) the precise topicality status of an absolutive constituent is; a certain topicality value may be associated with ABS in the language-specific system (cf., e.g., Mithun 1994). Therefore, ABS or ERG in itself does not qualify as a language-specific content element. A language-specific grammar may, of course, include ABS as a
feature; but it will always only partially capture the language-specific function of the elements that have it. This description of the division of labour between semantic clause structures and cross-linguistic categories of comparison implies that underlying structures of the kind postulated in Functional Grammar may be well-motivated and useful when considered as cross-linguistic types (cf. Haspelmath's review of Hengeveld 1992b).25 Eliminating some of the less generalizable features of languages in order to set up some standards of comparison is not only permissible, but necessary in order to make revealing comparisons at all. My point is that there is a difference in ontological status between the two types of structures, reflecting a division of labour: the structures of clauses in one language should not contain specifications that are valid only by comparison; and conversely, cross-linguistic structures should not contain elements that are valid in relation to one particular language only. Another way of saying the same thing is that strictly speaking there are no such things as universal or cross-linguistic structures — for the same reason that there is no universal or cross-linguistic language. Structures are substance-based and cannot exist in disembodied form outside the (individual) languages that they structure. We should speak instead of cross-linguistic structural similarities. To take a biological parallel, there is no such thing as a universal skeleton. But of course we can generalize about skeletal properties just as we can generalize about the softer tissues. Paradigmatic relations provide clear illustrations of the reasons for this caution about structural generalizations. From the generative revolution onwards, the paradigmatic dimension has been overshadowed by the syntagmatic point of view.
This may be partly due to the fact that paradigms, as emphasized by Bybee, are embarrassingly language-specific: they fit into a structuralist world picture, whereby each language is an island unto itself. Functionally, too, there is something arbitrary about structural paradigms: in a purely functional system one would be able to choose one meaning at a time, based only on communicative intention.26 A language-specific syntactic paradigm, such as the paradigm of modal verbs in English, occurs when a range of semantic options compete for the same syntagmatic slot, as determined by linguistic conventions in the speech community. Paradigmatic organization is a step removed from the salad-bowl case, in limiting and prescribing the options; it impinges on syntagmatic organization by specifying what kinds of choices have to be
made in filling out the syntagmatic clause chain. It may be more or less arbitrary; the lowest point of arbitrariness is determined by the subprinciple of "compatibility" (cf. above p. 137) which specifies that you cannot (intend to) achieve contradictory communicative intentions: A and non-A are always in paradigmatic opposition. Therefore if there are content elements that divide semantic fields neatly between them one would expect them to be in paradigmatic contrast. It also suggests a rationale for the syntagmatic organization of paradigms: scope relations should reflect the natural combinability of semantic elements. Thus the emergence of content elements simultaneously provides the blueprint of basic paradigmatic structure. As illustrated by the modal paradigm in English, however, there may be conventional restrictions that go beyond the functionally explicable minimum; dialects of English permit combinations like might could (cf. Foley—Van Valin 1984). In Koyukon, there is a basic paradigm of options that cut across the whole tense-mood-aspect continuum (cf. Fortescue 1992). Thus we would not expect there to be universal paradigms. But in an approach based on seeing syntax as describing ways to combine content elements, this is so by the same logic which also forces us to the conclusion that there are no universal syntagmatic structures: since there are no universal content elements, there can be no universal way of organizing them — either syntagmatically or paradigmatically. In the picture I have tried to give, the language-specificity of paradigms does not make them less amenable to cross-linguistic generalization — they just parade the inherent constraints of this project more openly. As an example of how one can set up cross-linguistically useful paradigmatic standards of comparison, see Bache (1982, 1994); but he sees these clearly as metalinguistic tools rather than putative underlying universals.
Much of this discussion can be summed up in the contrast between seeing cross-linguistic categories as describing a union versus seeing them as describing an intersection of linguistic features. The search for the (basic, cognitive, innate ...) intersection is much more interesting than an accumulation of variants constituting a union. Nevertheless, the establishment of a cross-linguistic level of description must aim at describing a union before the question of intersection can safely be approached. A complete set of cross-linguistic generalizations will contain all types of elements and all similarities found in all languages on earth, and will thus be vastly more complicated than any system found in a single language; like the International Phonetic Alphabet, it will have endless lists of diacritics. The
usefulness of such a construct in terms of the description of a single language will be like the usefulness of a single large container with all sizes of spanners in it. You cannot have it at home, but if you can go and borrow one every time you need to turn a bolt, there is a good chance that you can find one that fits. Fortunately, the container will have a certain internal order to it. The intersection issue raises its head because some generalizations are more general than others, applying to certain preferred areas of coding — tense-mood-aspect categories, first/second argument status, passive — which are the staple issues of grammatical discussions. The status of the layered model of clause structure should be understood in that context: as an attempt to put together some of the most useful cross-linguistic abstractions about the organization of clause meaning. The reason for the close-to-universal status of the basic features of the layered model can, I think, be accounted for by the functional top-down anchoring of clause structure that I will discuss in the next section.

4. Conceptual meaning in a functional clause structure

4.1. Introduction

In Part Two, I have so far tried to show how existing theories of structure are correct to the extent that they lend themselves to reconstruction on a functional basis; in conclusion I outlined a revision of the layered clause structure of Functional Grammar in terms of a syntax of coded, functional content. I now return to the conceptual aspect of clause meaning; and here I draw on Cognitive Grammar as developed by Ronald Langacker, the second main source of inspiration. I see Cognitive Grammar as the most ambitious and detailed attempt so far to account for the whole area of coded meaning within a cognitive theory of meaning and grammatical structure. I share with Cognitive Grammar the sign-oriented approach and the emphasis on the role of meaning in accounting for syntax; on two related points, however, I hope that the descriptive principles I have argued for above will be able to add something to the results achieved within the framework. One is the functional-interactive nature of linguistic meaning; the other, following from the first, is the nature of clause structure. First, I give an outline of the basic notions in Cognitive Grammar. Then I return to the issue of the interactive nature of meaning, and try to demonstrate how the functional dimension of meaning and the associated aspects of clause structure fit into the picture I am arguing for.

4.2. Language structure in Cognitive Grammar

One of the points on which Cognitive Grammar stands out in direct contrast to generative grammar is the explicit rejection of the "process" or "constructive" character of grammar that goes with generative theory (cf. Langacker 1987a: 63-64) and related approaches. In such theories, the grammar in itself is capable of specifying well-formed combinations of items; but in Cognitive Grammar, the task of combining items is left to the speaker rather than the grammar, reflecting the "problem-solving" nature of the combinatory process. The definition of "grammar" is therefore fundamentally based on the "list" approach. Langacker (1987a: 57) sees a grammar as a "structured inventory of conventional linguistic units" (where the premodification "structured" reserves a place for cross-item generalizations). In a similar vein, the importance of language-particular

256 Conceptual meaning in a function-based structure

organization is emphasized (Langacker 1987a: 46-47), without therefore ruling out a search for universals. Like the basic emphasis on the content-expression relationship, these points can be seen as an updated version of Saussurean principles of description; one of the differences between competence and langue is that the latter is in the nature of an inventory of signs, whereas competence is a system of rules for constructing clauses. The building of syntactic combinations, as in Saussure, therefore takes place outside the core of the system itself: clauses are essentially the speaker's responsibility. Yet it should be emphasized that Langacker does not explicitly adopt any langue notion; on the contrary, the "usage-based" nature of the model is repeatedly emphasized. In a number of passages (cf., e.g., Langacker 1987a: 280) it even appears as if the distinction between linguistic convention and "purely pragmatic" cognitive processing is dissolved. Yet I think the position is not very different from the one I have presented in chapter 1 above: langue, the aggregate of linguistic conventions in the speech community, arises as accumulated and digested parole, which consists of specific usage events. In the following paragraphs, I give an outline of Langacker's langue-equivalent as I understand it. Grammar is said to involve "the syntagmatic combination of morphemes and larger expressions to form progressively more elaborate symbolic structures" (Langacker 1987a: 82). For each possible type of syntagmatic combination Langacker posits an abstract unit consisting in the operation itself; thus, plural formation is itself a unit, represented as [[[THING]/[...]] — [[PL]/[z]]]. This schema, where "thing" stands for the common semantic property of nouns, and [...] for the missing specification of expression, states that you can form the plural of a noun by adding [z].
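The idea that the combinatory operation is itself a stored unit can be rendered as a toy data structure. This is only a minimal sketch of the principle, not Langacker's notation: the class and function names are my own inventions, and the schema is reduced to pairing a content pole with an expression pole and elaborating the schematic noun slot.

```python
class Unit:
    """A symbolic unit pairing a content pole with an expression pole,
    roughly the [CONTENT]/[expression] pairing of the schema."""
    def __init__(self, content, expression):
        self.content = content
        self.expression = expression

def plural_schema(noun):
    """The construction unit [[THING]/[...]] — [[PL]/[z]] as an operation:
    elaborate the schematic noun slot with a concrete noun unit and
    add [z] on the expression side."""
    return Unit(("PL", noun.content), noun.expression + "z")

dog = Unit("DOG", "dog")          # an "item" unit stored in the inventory
dogs = plural_schema(dog)         # applying the "construction" unit
print(dogs.content, dogs.expression)  # ('PL', 'DOG') dogz
```

The design point the sketch tries to capture is that the schema sits in the same inventory as the items it combines: grammar as a structured list of units rather than a separate rule component.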
In other words, combinations are made part of the grammar in the form of "construction units" that can be combined with other ("item") units. The construction schemas can be used as ways of combining items in canonical ways, thus ultimately taking over the entire job of a process-oriented rule system; in terms of possible output, the compositional paths that Langacker describes make essentially the same predictions as the layered model described above. Hence, the choice of making the list rather than constructive mechanisms basic does not exclude elements of a dual approach (cf. also the emphasis on the "rule-list fallacy", Langacker 1987a: 29). But although mechanisms of combination are part of grammar, they are viewed essentially from the bottom-up perspective that goes with the "item" view: language provides

Language structure in Cognitive Grammar 257

the items, and the speakers must put them together. The basic elements in the theory are therefore the semantic properties of those individual meanings out of which speakers must construct their messages. Those semantic properties constitute the core ideas in Cognitive Grammar as a semantic theory. One of the central properties is the semantic "profile" of an item: the item hand invokes the semantic domain of the human body and profiles the part from wrist to fingertips. This notion embodies the same insight as the Saussurean net of "form" casting its shadow on substance, carving out meaning from a presupposed substance domain. However, it goes beyond the net in that it relates meanings in the same breath as filtering them out from one another: the relationship between body, arm, hand, finger and nail is explicated in terms of the general notions of domain, cognitive relatedness and distance. The notion of profile goes with the central concept of "designation": an item is said to designate the profiled part of the domain. This central concept had not been noticed much before, because it was assumed to equal meaning pure and simple. With the recognition that you cannot invoke the concept of an arm without invoking the concept of a human body as a whole, the distinction becomes necessary: a term can evoke a semantic domain without profiling (designating) all of it. Another element that has helped to clarify the structure of lexical meaning considerably is the use of the figure-ground distinction (introduced by the Danish psychologist Edgar Rubin, cf. Rubin 1915: 3). This points to a fundamental asymmetry in the organization of meanings, namely that there is a priority dimension built into virtually all relations: one element is placed in the foreground, whereas the other is a background element, with respect to which we see the "figure".
This asymmetry, reflecting a perceptual or cognitive primitive, pervades the organization of meaning in linguistic items and (secondarily according to Langacker, see Langacker 1987a: 231-32) the way items are combined. The most central place for the distinction is in the concepts of "trajector" and "landmark" in relational predicates, with the trajector as the figure and landmarks as part of the ground. The difference is perhaps most easily perceived in relation to prepositions. Pairs like below and above can be elegantly described as differing only when it comes to trajector-landmark specification: below places the "figured" entity in relation to a point-of-reference that is higher, whereas above places the figured entity in relation to a point-of-reference
that is lower than itself. The existence of trajectors and landmarks is central to the principles on which syntagmatic combination is described.1 The overarching concept in syntagmatic combination is "valence" understood in a broad sense. Taking his point of departure in the role of shared electrons in chemical compounds, Langacker suggests that one of the crucial mechanisms in linguistic combinations is the sharing of aspects of meaning, termed "correspondence" (cf. Langacker 1987a: 282: "Correspondences between substructures of the components are claimed to be an invariant feature of valence relations"). Thus, under and the table can be combined because under designates a relation in space and contains positions for a trajector and a landmark, while the table is an object that exists in physical space and is consequently relatable to other phenomena in physical space. Therefore the landmark of under and the profile of the table can be brought into "correspondence". The result of doing this is the formation of a complex conceptualization in which the schematic landmark of under is "elaborated" by the table. The second factor that determines the nature of syntagmatic combinations is the question of what happens to the profile, the central substructure that determines what the expression designates. The typical case is that compounds inherit the profile of one of the components — which is therefore called the "profile determinant": red pencil has the same profile as pencil. The word pickpocket exemplifies cases where the composite has a different profile from all components. In some cases, such as sequential locative phrases (outside in the backyard near the kitchen table) the choice is either to say that all are profile determinants or none is, and "to keep things simple" the latter alternative is chosen (Langacker 1987a: 291), reserving the term "profile determinant" for cases when there is an asymmetry. 
A third basic notion is the distinction between the "dependent" and the "autonomous" item in a combination. The definition of dependence is as follows:

One structure, D, is dependent on the other, A, to the extent that A constitutes an elaboration of a salient substructure within D (Langacker 1987a: 300)

If an item has an unsaturated substructure that is in need of elaboration, then the item is dependent on another item to do this: the elaborating item is then autonomous, whereas the elaborated item is dependent.

The central example of this is provided by the distinction between "things" (designated by nouns) and "relations" (designated by verbs, adjectives and prepositions). Things are conceptually autonomous, whereas relations are conceptually dependent, as illustrated in the so-called "billiard-ball" model (Langacker 1991: 13), where the balls are like nouns in being conceptually autonomous, while the relations and interactions between the balls — as occurring in a shot — are conceptually dependent: we cannot conceive of interactions or relations in isolation from things that interact or relate to each other — but we can conceive of things (corresponding to nouns) in isolation from any relation between them. Langacker states that the notion of dependence he suggests is almost the opposite of the notion employed in dependency grammar. Instead of letting this stand as just one more source of confusion between schools, I think the oppositeness can be revealingly analyzed as a straightforward consequence of opposite perspectives. If one is interested in structure and consequently looks for the paths of determination that create clausal structure, clearly the structural position of argument terms is dependent on the main verb of the sentence. This is so for exactly the same reason as Langacker would say that the verb is the dependent member in the relationship between verb and argument noun: the verb "needs" argument nouns around it. It is therefore the semantic dependence of the verb (upon elaborating arguments) that gives rise to the structural dependence of the argument position on the verb (to provide the elaboration sites where nouns can fit in). Langacker combines the two criteria of dependence and profile determinacy in a definition of two central grammatical relation types (1987a: 309-10).
The element which determines the profile may be either dependent or autonomous in relation to the item with which it combines; if the profile determinant is autonomous we get a "head-modifier" (endocentric) construction. An example is the case of an adjective modifying a head noun (clean water). The combined entity is in the same category as the head (water), since the modifier adds descriptive material, which is dependent on having something that fills out its elaboration site. On the other hand, if the profile determinant is dependent, as the verb hit in the combination between verb and object (hit the ball), the added element (the ball) is a "complement" rather than a modifier. This is so because hit designates a process, and hit the ball also designates a process, but hit is the dependent member of the combination — it needs an entity to "elaborate" it, and the ball fills the bill.
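The interaction of the two criteria can be sketched as a small decision function. The encoding (two booleans per combination) and all names are my own illustrative inventions, not part of Cognitive Grammar; the sketch only covers the asymmetric case where one component determines the profile.

```python
def classify(profile_determinant_is_dependent):
    """Classify the non-profile-determining item in a two-item combination.

    If the profile determinant is autonomous (as the noun in "clean
    water"), the other item is a modifier and the construction is
    endocentric; if the profile determinant is dependent (as the verb
    in "hit the ball"), the other item is a complement that fills the
    verb's elaboration site.
    """
    return "complement" if profile_determinant_is_dependent else "modifier"

# clean water: water (autonomous) determines the profile -> clean is a modifier
print(classify(profile_determinant_is_dependent=False))  # modifier
# hit the ball: hit (dependent) determines the profile -> the ball is a complement
print(classify(profile_determinant_is_dependent=True))   # complement
```

The sketch makes the logic of the definition visible: the head-modifier / head-complement split follows from a single question about the profile determinant, once dependence is fixed.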

The central role of dependence also has consequences for the view of constituency in the theory. In most traditional syntactic analyses, constituency is perhaps the most basic relation between elements in the clause. But since autonomy-and-dependence takes care of at least half of what is traditionally handled as aspects of constituency, the burden put on constituent structure in Cognitive Grammar is limited to taking account of plausible conceptual chunkings and corresponding intonational markings. The notion of constituency is thus free to operate so as to permit alternative analyses, for instance the arrow hit/the target and the arrow/hit the target; and alternative "surface" word orders can be easily accounted for in terms of alternative conceptual groupings of sentence members. To sum up: the most central way in which correspondence is brought about in the clause, creating a syntagmatic relationship between items, is "elaboration" — one element fills out missing specifications in another. The sites in which elaboration can occur are already defined by trajectors or landmarks that are built into the meanings of individual items themselves. The "power relations" between items (which item imposes its profile?) plus the directionality of elaboration determine the relations between elements in combinations. The complete network of relations in a complex can be described in terms of a compositional path that specifies the hierarchical relations. The structural apparatus applies on both the content and the expression side; and it not only respects this fundamental distinction but even emphasizes that structural relations of the same kind occur on both sides — in a manner that is in some respects reminiscent of Hjelmslev.2 Yet, although this would constitute syntax by the standards of this book, Langacker does not use the term "syntax" about any aspect of his own system:

There is no meaningful distinction between grammar and lexicon. Lexicon, morphology, and syntax form a continuum of symbolic structures, which differ along various parameters but can be divided into separate components only arbitrarily (Langacker 1987a: 3)

This position is the polar opposite of the autonomy doctrine: where generative grammar gave syntax a wholly arbitrary independence of the rest of language, Cognitive Grammar denies it even the status of a definable area within the larger whole of language; Hudson (1992), in reviewing Langacker (1991), summarizes Langacker's position as he understands it by saying that a grammar never needs to refer to anything but semantics and
phonology.3 This position to some extent reflects the focus of interest within cognitive linguistics generally; most of the attention is devoted to problems of category structure as opposed to staple syntactic issues; but it is not an accurate reflection of Langacker's own descriptive practice. I think this view of syntax is unnecessarily polemical: it is necessary only as a correction of the autonomous view, in which syntax is distinct from semantics. Syntax has two sides, and is thus integrated in the system — but we still need to recognize the existence of syntactic organization as an interesting fact about human language. Langacker's actual descriptive practice, as far as I can see, corresponds rather better with the position adopted in this book: a human language consists of an inventory of elements, plus ways of combining these into larger wholes. Above the level of minimal signs, combinations have both an expression side and a content side — and the mechanisms of combination form the natural territory of syntax, or "grammar". All the (semantic applications of the) concepts discussed above (valence, elaboration, correspondence, dependence, profile determinacy, head, modifier, complement and compositional path) can be non-arbitrarily grouped together and seen as belonging to the area I have called "content syntax" — simply because applying these concepts implies looking at more than one individual item at a time and showing how they combine into larger units.

4.3. Cognitive Grammar and the distinction between clause meaning and interpretation

As I have tried to show, there is more langue, including more syntax, in Cognitive Grammar than may meet the eye. Nevertheless, I think the "recipe" view of syntax has something to offer on two points that are under-emphasized in the theory: the functional dimension of meaning, and the essential role of syntax. The limitations that I see in the view of meaning and in the view of syntax are connected: because the total meaning of a clause is seen as basically identical to the total intended conceptualization of the addressee, Cognitive Grammar underestimates the functional dimension of meaning. From this follows an under-awareness of the "hard core" compositionality that is present at the level of process input which I see as the locus of langue meaning, but not at the level of the

262 Conceptual meaning in a function-based structure

finished conceptual product. I shall now try to demonstrate this in relation to two concrete examples from Langacker. Syntagmatic combination, as described above, is understood in terms of the chemical "valence" analogy: items establish syntagmatic relations by means of correspondences, whereby substructures are shared between items. What precise substructures are in correspondence may differ, although the relative prominence of profiled elements makes them the natural participants in such valence relations. In a footnote, however, we get an exception in the shape of the putative combination elephant ribbon, used as described in the following:

Suppose a zoo manager gives presents to his keepers, and uses a different kind of ribbon to wrap the present depending on what type of animal each keeper cares for. He might then say: Let's see, where did I put the elephant ribbon?, meaning the ribbon with which he intends to wrap the package he is giving to the elephant keeper. The correspondence linking [ELEPHANT] and [RIBBON] holds between two unprofiled entities, namely the keeper and the recipient; the former functions in the encyclopedic base of [ELEPHANT], and the latter in that of [RIBBON] (via the specification that ribbon is used for the wrapping of gift packages). (Langacker 1987a: 282n)

I think the case and the interpretation are plausible and relevant to the understanding of syntagmatic relations; but I do not think the correspondence relation itself, as established via the encyclopedic base, is the locus of the langue relationship between elephant and ribbon. In terms of the "recipe" picture, the basic "content-syntactic" relation would be understandable in terms of a specification of the interpretive tasks that are to be carried out by the addressee; the semantic structure of the NP consists in the hierarchy of processes that must be "executed" in order for the clause to be understood. As in the other elephant example above p. 226, the starting point (according to the built-in logic of the task itself) is to conceptualize the meaning of the head (here ribbon). The next operation — the locus of the syntagmatic relationship between elephant and ribbon — is to let the meaning of the modifier elephant operate upon the ribbon, changing the conceptualization from an ordinary ribbon into an elephant ribbon (whatever that may be). Finally, cued by the definite article, the task is to find a discourse-world equivalent that is identifiable as being talked about.
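The hierarchy of interpretive tasks just described can be given the form of a toy procedure. This is purely my own illustrative sketch, not anything proposed by Langacker or elsewhere in this book: the function names (`conceptualize`, `modify`, `ground`) and the dictionary-based "discourse world" are hypothetical stand-ins for the processing steps.

```python
# A toy "recipe" semantics for the NP "the elephant ribbon".
# Each step is an interpretive task the addressee must execute; the
# langue meaning is the ordered task specification, not the final
# correspondence the hearer happens to construct.

def conceptualize(head):
    """Step 1: activate the concept coded by the head noun."""
    return {"type": head, "modifiers": []}

def modify(modifier, conception):
    """Step 2: let the modifier operate on the head conception.
    HOW 'elephant' relates to 'ribbon' is deliberately left open:
    we only record that some link must be found (a constraint on
    the interpretation process, i.e. on parole)."""
    conception["modifiers"].append(modifier)
    return conception

def ground(conception, discourse_world):
    """Step 3 (cued by 'the'): find an identifiable discourse-world
    referent matching the conception; only now is the task done."""
    for entity in discourse_world:
        if entity["type"] == conception["type"] and \
           all(m in entity["links"] for m in conception["modifiers"]):
            return entity
    return None  # task specification unfulfilled: no referent found

# The zoo-manager scenario: one ribbon is linked (via the keeper
# and the gift) to elephants, another to snakes.
world = [
    {"type": "ribbon", "links": ["elephant"]},
    {"type": "ribbon", "links": ["snake"]},
]

referent = ground(modify("elephant", conceptualize("ribbon")), world)
```

The point the sketch brings out is that the same task hierarchy runs unchanged however the elephant-to-ribbon riddle happens to be solved; only the contents of `world` (the parole side) would differ.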

Cognitive Grammar: clause meaning vs. interpretation 263

According to this picture, the coded interpretive task specifications constitute the langue meaning. How the addressee is going to perform the tasks is a matter of actual communicative skill; and the quotation from Langacker above describes a clear-cut slice of parole, of actual communicative sense-making activity. Thus, the correspondence only comes into being after the communicative job is done. The riddle posed by the combination "elephant ribbon" might be solved in many other fanciful ways without changing the coded meaning and the content syntax of the phrase; the actual correspondence established is therefore not part of the description of the code, only of the message.4 This does not mean that "correspondence" becomes superfluous as a concept — only that it must be understood as a constraint on the interpretation process, rather than as an actual correspondence: until the correspondence is established, the task is not accomplished. This would also appear to lead to a more consistent view if we compare the analysis with a case of violation of selectional restrictions that is discussed in the same context. The phrase charismatic neutrino is said to be anomalous because the trajector of charismatic clashes with the profile of neutrino. That is true in the sense that the kind of thing profiled by neutrino cannot have the property "charismatic"; but there is a similarity with the elephant ribbon case which is interesting from the recipe point of view. The staple example is from Nunberg (1978): the ham sandwich at table two is getting impatient illustrates a case of metonymic substitution common in restaurant jargon. Similarly if, at a large physics laboratory, a dashing new research assistant in search of neutrinos has swept all female employees off their feet, it is quite conceivable that jealous male colleagues might refer to him as "the charismatic neutrino".
The two cases are treated differently by Langacker, which could be understood to mean that the metonymic way of establishing non-anomalous correspondence is linguistically different from the invocation of keepers and presents in the "elephant ribbon". As I see it, the content syntax is identical in the two cases, and the sense-making processes are essentially of the same kind. In either case, we do not need to solve the riddle in order to establish the linguistically relevant syntagmatic relationship. The anomaly of charismatic neutrino arises if we establish correspondence directly via the profiled entities; but as shown in the case of elephant ribbon, this is not the only option open. Thus, in a syntagmatic combination the conceptual valency relations depend on a basically functional relation between the
items. The difference in viewing is due to the basic identification in Cognitive Grammar between intended "speaker meaning", i.e. the complete conceptualization, and the meaning of the clause — where the recipe format makes a sharp distinction between the recipe and the product that the cook has in mind. This is also the point on which I think compositional syntax is under-emphasized. If you take speaker meaning as basic, there are good reasons why compositionality is of limited interest, cf. the discussion above p. 218: the hollandaise is not basically compositional. In the finished product, each element is influenced by all the others; and the grouping of constituents that is implicit in the compositional path is only one among several possible and relevant groupings (cf. Langacker forthcoming b: 8). Let me make it clear that I agree with the criticism of non-semantically sensitive "immediate constituent" analysis that would either not distinguish or totally separate the following two cases:

(31) The guests that you've been expecting just arrived
(32) The guests just arrived that you've been expecting

I also agree with the positive point, demonstrating the significance of a number of groupings that are not marked by constituency relations; among them are relations between elements in an idiom separated as in umbrage was taken by Jack at Jill's remarks; groupings in terms of mental spaces, which do not correspond to constituency in a case like their landlord believed that the building was in better shape than it actually was ("actually" is not in the landlord's belief world), etc. All such groupings do indeed exist and are important for getting the total significance of the clause right. However, this picture does not bring out the crucial role of the compositional skill that was a shared feature of tool use and syntax in language: the ability to act according to the specifications of the recipe.
No matter how many different links there are, the basic content-syntactic ability to apply operator to operand correctly is the backbone of human linguistic competence (cf. the discussion above p. 223). As far as I can see, the perspective that understands syntactic relations between content elements in terms of a compositional task rather than in terms of relations between elements of an integrated symbolic structure would also be in tune with Langacker's recent theory of reference-point relations as a very abstract
unifying feature of a number of different semantic relations (cf. Langacker 1993, 1995); the mechanism whereby the semantic value of one element needs to be determined by going via another element goes well with an approach that emphasizes the importance of the combinatory skill that is the essence of the processual view of syntax. The objection I am formulating is also connected with the avowed "bottom-up" approach to syntax (cf. Langacker 1988b: 131). Although the system should not be understood in the "building-block" way that the pictorial diagrams might lead one to expect, but as a complicated system of overlapping groupings, it does involve a view according to which the elements are basic, and the constraints are basically determined by the speaker's whole conceptualization rather than any inherently privileged syntactic hierarchy. Conversely, the under-emphasized aspects in this picture go with a top-down path into syntactic structure, to which I now turn.

4.4. Conceptualization embedded in interaction: the top-down aspect of syntactic structure

In the bottom-up picture of syntax, the factor that regulates the process whereby elements are combined is essentially the intention to convey a whole "unified" conceptual picture:

...finding no single morpheme or other fixed expression to convey the desired notion, I construct the novel sentence your football is under the table. I can achieve appropriate linguistic symbolization only by isolating and separately symbolizing various facets of my unified conception... (Langacker 1987a: 279)

The overarching notion is that of "symbolic structure": such structures differ in various respects, including compositional complexity, but are basically of the same kind. Thus, to put it perhaps too bluntly, whether an utterance has one, two, three or more words — or whether it consists of a noun, a verb or a preposition — simply depends on the nature and complexity of the whole conceptualization. It may be a monadic concept or a whole panorama that one has in mind; it may be a process, a thing or an atemporal relation — whatever it may be, it is a matter of the speaker's choice only. This provides no obvious basis for answering the question: what are the functions that individual items must serve in order to be able to make up
an adequate whole utterance? The point on which the function-based top-down perspective supplements Cognitive Grammar is this element of functional organization. I shall now try to show how the conceptual and the purely functional elements cooperate in bringing about a complete clause meaning. The main point is that clause syntax must be understood as reflecting a division of labour which can only be understood top-down: the existence of a conceptual dimension in human language does not mean that human language has cut the situational embedding — instead, it has developed a semantic clause structure in which the topmost (outer) layers establish the role of the utterance in communicative interaction, while the bottom (inner) layers supply the conceptual meanings which provide the utterance with its content. This discussion builds on some previous conclusions. First, there is the notion of function-based structure introduced p. 155 above, where it was suggested that function-based structures (in contrast to component-based structures) had the character of "differentiation" of a larger whole; the example was that of a knife, defined in terms of its overall function, which could then be differentiated into a "handle" sub-function and a "blade" sub-function. These functional parts would not make sense except in the context of a larger whole (the knife), whereas in component-based structures such as chemical compounds, the parts can exist regardless of the existence of higher-level wholes. Secondly, I depend on the story of how syntactic competence must be understood as a capacity for hierarchical structuring of action sequences (cf. above p. 220). I am now going to suggest how this view of syntactic structure may tie in with the evolution and role of purely conceptual meaning. The connection arises because of one defining feature of conceptualization, and one defining feature of linguistic communication.
The defining feature of conceptual competence is the ability to carry out stimulus-independent mental processing. The defining feature of linguistic communication as a type of action is that it always serves in some way to modify the speaker's actual situation ("make a difference"). I am going to illustrate the relation between these two features by means of a comparison between holophrastic and syntactic languages. I see the comparison as involving a step up from the former to the latter. If we assume that the special distinction of human language is that it has syntactic combinations, the history of evolution has progressed from the first to the second type of language: whole utterance meanings are older in evolutionary terms than compositional syntax. A characteristic feature of

Syntactic structure as conceptualization embedded in interaction 267

holophrastic systems of communication is that each utterance has a ready-made situational function: it is directly linked to something in the immediate situation — mating overtures occur in mating situations, etc. Therefore such whole meanings are not conceptual in the sense defined above: they are always situation-bound. The link between the two features is the following: once we have purely conceptual meanings, they do not in themselves fit directly into the situation, and therefore need to be combined with something else in order to "make a difference" in situational terms. Conversely, once we have syntactic structure, we can have elements that do not fit directly into the actual situation — because they can be combined with something else whose job it is to establish the relation. The logic is the same that was invoked in connection with tool use above p. 219: Putting food onto a spoon does not in itself make sense — only if one has the mental ability to envisage the combination with the further step of putting a full spoon into one's mouth does this become a meaningful thing to do. Syntax creates the possibility of making purely conceptual, situation-independent meanings useful — and conversely, the capacity for conceptual, situation-independent mental processing creates the possibility of handling generalized syntactic operations. Let me invoke a widely used example of animal communication, that of the vervet monkey, which has been studied intensively by Cheney and Seyfarth (1980, 1992, 1994), and try to show in the form of a thought experiment what I see as occurring in the process of transition to a syntactic language. The vervet monkey has three alarm calls: one for eagles, one for snakes and one for leopards. There is thus the beginnings of a conceptual element in the language, involving a distinction between three categories of predators; but it is coded together with the functional-interactive part of the meaning, the element of "alarm".
Moreover, as discussed above, this combined meaning relates directly to the situation, so it requires only categorial perception, not "real" concepts. Since each signal has an unanalyzable whole meaning, there is no possibility of creative combination. This also means that the language is describable in behaviouristic terms: each utterance can be understood as directly triggered by a situationally present stimulus (although this is not necessarily always the case). This situational-manipulative "Wholese" language could be differentiated into a language with a minimal compositional syntax, corresponding to the
F(p) formula that underlies the division between the interpersonal and the representational "super-layer" in the clause, if we set up two minimal paradigms, such as:

(33)  Function                           Predator category
      warning (beware!)                  leopard
      expression of disgust (yuck!)      snake
      ? (interrogative)5                 eagle

Here the automatic link between the propositional content and a particular interactive function is dissolved; the occurrence of predators does not automatically trigger a specific "utterance meaning". But the predator, let us assume, would still be coded as situationally present; a snake would still be a "snake-in-the-situation", with no choice as to the point-of-application. The next step downwards in the process of structural differentiation according to the layered model described above would then be the distinction between the descriptive content of a proposition and its application to a situation in the world of which we speak. The linguistic locus of the "application" element is deictic tense, i.e. past or present (cf. below p. 327); clauses in the past tense are understood as applying to the past world of which we are speaking, whereas clauses in the present tense apply to the world as it is at the time of speech. Now the snake would not have to be situationally present in order to be coded in an utterance. Our missing-linkish proto-language would now have three layers:

(34)  Illocution    Tense       Concept
      beware        past        snake
      yuck          present     eagle
      ?                         leopard
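The expressive gain from splitting holophrases into independently choosable paradigms can be made concrete with a small combinatorial sketch. This is my own illustration; only the paradigm members are taken from (33) and (34) above, and the F(tense(p)) notation is an informal rendering of the layered formula.

```python
from itertools import product

# A holophrastic "Wholese" system: each signal is an unanalyzable
# whole, directly triggered by a situational stimulus.
holophrases = ["leopard-alarm!", "snake-alarm!", "eagle-alarm!"]

# The differentiated proto-language of (34): three independent
# paradigms, one per layer of the clause.
illocutions = ["beware", "yuck", "?"]
tenses = ["past", "present"]
concepts = ["snake", "eagle", "leopard"]

# Every combination of choices is a possible utterance meaning,
# informally F(tense(p)).
utterances = [f"{f}({t}({c}))"
              for f, t, c in product(illocutions, tenses, concepts)]

# Three fixed signals have become 3 x 2 x 3 = 18 combinable
# messages, and the gap widens multiplicatively with every item
# added to any paradigm.
print(len(holophrases), "->", len(utterances))  # prints: 3 -> 18
```

The multiplicative growth is the formal side of the point made in the text: once sub-utterance items exist, the repertoire is no longer a fixed list, and a distinct level of langue (the combinatory potential) becomes unavoidable.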

When meaning elements are differentiated by coding in this manner, it involves changes in all three essential properties of "Wholese" at the same time. The step to sub-utterance coding by definition eliminates the holophrastic character of the language. Situational boundedness begins to disappear because the existence of sub-utterance lexical items means that we code something that in itself has no ready-made function in the actual context (otherwise it would not be "sub-utterance"). When the possibility
of referring to something other than the immediate situation arises, we have the possibility of invoking the purely conceptual meaning of the human noun leopard. In order to have such a lexical item, as distinct from a Wholese signal meaning 'leopard!!', the speaker must be able to "entertain" the concept leopard as distinct from its situational presence; and the concept must thus also be distinct from the need to react in any particular way. In the human world, an analogy would be the distinction between an air-raid warning and the concept "air-raid". The fixed repertoire also gradually begins to disappear; sub-utterance items create the possibility of combinations, and although these do not at once become effectively unlimited in number, the stimulus-bound character that goes naturally with holophrases is lost as soon as there is a combining operation involved in making a message. By this same monumental step, we also get an emerging distinction between langue and parole, separating potential and actual aspects of language. Since the meaning of a Wholese utterance can be described exhaustively in terms of direct situational function, there is no difference between "actual" and "potential" except the type-token distinction, hence no point in distinguishing potential from actual meaning. As opposed to that, once we have sub-utterance meanings, we are forced to have a distinct level of langue that is not reducible to simple stimulus generalization from utterance tokens: a differentiated meaning potential goes with a differentiated potential for combining with other items. And once we have differentiated meanings that do not have their own situational function, we need to be able to organize those meanings in such a way that we can put together whole utterances that do have a situational function; and this organization is by definition non-situational.
Structural complexity is the other side of the coin in the emergence of sub-utterance meanings: language acquires structure when the unity of the holophrase is splintered. Therefore, purely conceptual meanings — which are by definition not bound to situational features — only fit into a syntactic language. That is the inherent link between conceptual meaning and syntactic structure. This means that the basic formula for a complex utterance meaning is "conceptualization embedded in interaction"6. The mutual dependence of functional-interactive and conceptual meaning on one another (which will be analyzed in more differentiated terms below) is the basis on which syntactic cooperation on the content side of language must be understood. A corollary to this is that meaning is not born at word or
minimal sign level; and therefore we cannot understand syntax by taking individual, differentiated meanings for granted — as a bottom-up approach is necessarily bound to do. The individual speaker finds language waiting for him in differentiated form; but both phylogenetically and ontogenetically the path into compositional language goes via a more primitive stage of a limited inventory of whole messages. Even if early child utterances look like single adult words, they must at some stage be seen as whole unanalyzable utterances. When children acquire syntax, they obviously have to know words first; but note that this knowledge of words as elements presupposes that they have factored out word meanings from the holistic contexts in which they have come across them. As pointed out in Plunkett et al. (1992), the "vocabulary spurt" in young children only sets in when the potential of single items is liberated from a well-defined situational routine, in which they serve a "holistic", potentially stimulus-bound communicative function. The central property of conceptual meaning is its situation- and stimulus-independence. The assumption that the element of stimulus-independence is crucial for the understanding of the specific nature of human language can receive some support from the experience with apes and human language — which also has some bearing on the nature of syntax. In relation to the long-standing discussion of whether there is continuity or total rupture between the language skills of apes and humans, Pinker (1994: 335) stresses the uniqueness of human language and belittles the results of attempts to teach apes a form of human language, reflecting the generative emphasis on the unique genetic basis for language.
But one can agree with Pinker that humans have a genetic basis for language that apes lack, and still maintain that there is something strained in the attempt to downplay the evolutionary continuity that is most strikingly exemplified in the ability of Kanzi, the star bonobo (cf. Greenfield and Savage-Rumbaugh 1990), to follow conversation in English without being taught. Surely in understanding the language ability of humans we need to pay attention to precisely what is shared in order to appreciate what is different. It is particularly striking that Kanzi seems to master some component abilities of syntactic organization — for instance to ascribe properties to entities in the right hierarchical order: when told to "go get the ball that's outside", Kanzi duly ignores the ball in the room and goes to pick up the ball outside. So what is missing?7 In continuation of the emphasis put on stimulus-independence above, I would like to refer to an experiment that I know only from a TV programme, in which a chimpanzee subject was given two
sets of choices. In the first, there was a large and a small amount of food; in the second, there was a choice between symbols indicating a larger or smaller amount of food. The procedure was that whatever the chimpanzee subject pointed to was given to the other chimp. When dealing with symbols, the subject learned to point to the small portion, thus making sure that the big one remained behind; but in the case where the actual goodies were displayed, this did not work. No matter how often it was tried, it was impossible for the animal to refrain from reaching out towards the bigger reward, thus effectually losing it every time. As also underlined by Premack and Premack (1983), a major difference between ape and human is the greater extent to which humans have brought their skills under intentional control as opposed to stimulus control. If a syntactic language presupposes a reservoir of meanings with no direct stimulus relation, this in itself suggests why apes are less "instinctually" disposed towards a syntactic form of communication.

4.5. A closer look at non-conceptual meaning

The thought experiment above involved two stages of differentiation: a division into an illocutionary component and a propositional component, and a division into an "application" operation coded by tense and the state-of-affairs that it concerns. In both cases, functional-interactive meanings are split off at the top end of the hierarchy. The collaboration between the two types of meaning in the clause is much more subtle than this illustration shows; but before we develop this issue further, we need to go more deeply into the nature of non-conceptual meaning, to illustrate what is missing when it is accounted for within the conceptualization-based view of meaning in Cognitive Grammar. The example of non-conceptual meaning used in Part One was greetings (hi, hello), which exemplify that human language also has elements conveying holophrastic and situational meanings: greetings have no situation-independent conceptual content. But elements smaller than whole utterances can also have the direct situational linkage that is characteristic of functional-interactive meaning; the most obvious example is perhaps deixis. Deictic elements have always been understood as exceptions; as emphasized in Jakobson's term "shifters", they refer to different things in different situations, which is an anomaly if you think of meaning in terms of what
words stand for. The central point in this context is that from the point of view of situational, interactive function they do not change (cf. Harder 1975). Demonstrative pronouns, for example, always have the same function (as opposed to denotation), namely to point to something situationally given, and thus exemplify the direct situational relation that was characteristic also of holophrases. As a result of the coding differentiation, they are not designed to serve as complete messages (except in special circumstances); but their coded properties include that situational link which purely conceptual meanings abstract away from. In Cognitive Grammar, there are two related notions which account for the peculiarities of such elements: "grounding" and "subjectification". Grounding occurs when a symbolic structure is located not in relation to the canonical "objective" perspective, but in relation to the "subjective" scene with the speaker in the centre. Subjectification is the process whereby meaning elements become re-oriented from the objective scene to the subjective scene, as often occurs in processes of grammaticalization, cf. Langacker (1990). Within this picture, the demonstratives can be described as invoking the subjective domain of the speech situation, rather than an objective conceptual domain. Thus, if we replace the "objective" conceptual content with a representation of the "subjective" situation, we also get a constant meaning for the "shifty" deictic elements. This account I see as true and valid for the sophisticated aspect of deixis, i.e. the way in which using words like I and you depends on understanding the roles of speaker and hearer. This involves the ability to change perspective — a rather late accomplishment.
This ability does indeed involve conceptualization skills, since in order to be able to understand and use deictics, a necessary condition is that the speaker is able to conceive of himself as part of the "grounding scene", as described by Langacker. What is missing is the primitive aspect of deixis: its immediate situational linkage. There is no functional, interactive dimension in the picture offered by Langacker: one type of conceptual structure is invoked instead of the other, but it is all a matter of getting one's conceptualization right. To capture the missing aspect we need to refer to the actual process of establishing a link between the ongoing situation and the conceptualization process in the mind — the element that was automatic at the holophrastic stage. The distinction is analogous to the difference between a fully functional electrical device and a specification of how to plug it in. The notion of "near" or "distant" as involved in prototypical deictics can further illustrate what is lacking in a purely conceptual account. The issue

A closer look at non-conceptual meaning 273

can be seen in the light of the etymology of the word "deixis", which comes from a Greek word for "pointing". The meaning of deictic items involves something similar to a gesture of pointing — which brings a feature of the situation to the attention of the addressee without essential reliance on conceptual resources, and thus establishes a direct relation between mind and situation. A child can point at the age of around nine months, some time before it learns to speak; Premack's chimpanzees developed humanoid pointing spontaneously. We can assume that it develops out of picking up "directedness towards an object" in fellow animals; animals involuntarily point with their bodies as a proto-stage before intentional pointing arises. This also affects the way we should understand the contrast between "proximal" and "distal" deictics. The word here invokes the ground and points to where the speaker is; the word there invokes the ground and points away from the speaker (cf. also Kirsner 1993). But the "nearness" and "distance" do not presuppose a decontextualized concept of "nearness" vs. "distance": you can point to something without possessing a conceptual, decontextualized notion of "distance", of which deictic distance comes out as a special case. Once you have both the concept and the ability to point, you can generalize, setting up a superordinate concept of "distance" and a subdivision into "deictic" and "objective" distance. But before one has achieved, by evolution, the cognitive level where one can make this generalization, only pointing is available. A purely conceptual account of pointing is based on the hindsight of evolutionary superiority. Essentially the same element is involved in the account of definiteness. In conceptual terms, cf.
Langacker (1991: 98), the meaning of the definite article can be described as involving the elements of uniqueness in current discourse space, mental contact by the speaker, and mental contact by the hearer (either previous or as a consequence of the use of the definite NP itself). Thus an NP with a definite article, as in the ferry, designates a ferry satisfying the three conditions described above. The element that is missing according to the functional perspective is the act of establishing a link between the conceptual ferry and the situational ferry. The same element is involved in the distinction between a state-of-affairs and a proposition, which (as discussed above p. 213) is often overlooked. The central point is that the descriptive content of a clause in itself cannot be true or false; in this, it is like a picture hanging on the wall, showing for instance a sturdy fisherman smoking a pipe. It makes no sense to ask
whether this picture is true or false, unless we see it as an attempt to portray a particular person. The deictic tenses, compare the detailed argument in Part Three, code this element of application, in essential similarity to definiteness as expressed in a noun phrase: that ferry involves an instruction to invoke the ferry-concept and match it with an object in the situation (cf. below), just as a past tense form, as in John went, involves an instruction to match the description 'John go' with an event in the world. Another type of functional-interactive meaning is found in the words yes and no. Both constitute complete speech acts, and the speaker by using either of the two indicates his own position with respect to something in the situation: does he accept it or oppose it? In comparison with deixis, these are clearer cases of purely interactive, situational meaning, because they do not designate or denote anything — they only function as signals of assent or negation. Viewed in isolation from a concrete instance, no conceptual content can plausibly be assigned to them, even in relation to a subjectively construed grounding situation. This property of negation is illustrative, because there is a cline between holophrastic and syntactic forms of negation. The relation between no! (spelled with an exclamation mark to emphasize its situational function) and negation in general can be used to exemplify some of the ways in which functional and conceptual types of meaning interact. One might hypothesize a developmental path for negation that goes from holophrase to syntactic differentiation as described above: a likely source situation for negation is the wholesale rejection of something one doesn't like in the actual situation. To some extent, no! may be seen as the linguistic equivalent (or forewarning) of a physical act of defence. Langacker (cf.
Langacker 1991: 132) has analyzed negation also in cases in which it interacts structurally with conceptual types of meaning (not, and no as in no music). The analysis suggested by Langacker is again perfectly convincing as an analysis of the conceptual aspects involved in understanding negation. His analysis begins with the un-negated item, which is then contrasted with a configuration where the item is absent. To illustrate this account, a parallel is suggested with the analysis of the preposition towards, which evokes a completed path but only designates the unfinished trajectory. Just as with negation, we need a situation to compare with in order to understand the conceptual import. However, I think the complexity of negation is different from the complexity of towards. This word designates part of a trajectory, essentially

A closer look at non-conceptual meaning 275

as a hand designates part of an arm. Negation, by contrast, does not designate either the item itself, or the missing item, or the pair consisting of both. What happens is better described by a word that Langacker uses repeatedly in the context, namely cancellation: not is used to cancel whatever is negated.8 In other words, also in its syntactic versions negation is non-conceptual: there remains an interactive root in it (as also emphasized by Givón 1989: 156). Data from language acquisition would appear to be compatible with an assumption that the interactive element is still basic in negation: the child first learns the holophrastic no!, using it whenever there is a danger that events in the situation take an undesirable turn; much later comes the application to conceptual items.

4.6. Two forms of incompleteness: functional and conceptual dependence

We have now seen examples of how the two-way differentiation into illocution and proposition was just an initial step in exploring the relation between conceptual and functional meaning. The layered structure, viewed top-down, can be refined to provide a detailed description of that structural differentiation of whole clause meanings into component meanings that is the essence of a syntactic as opposed to a holophrastic language; and the relation between the conceptual and functional-interactive dimensions of meaning continues to play a central role. A functional description by definition looks "upward" for explanatory factors. The basic functional fact about an utterance is the function served by the utterance as a whole; and the function of sub-utterance items must be described by asking for their contribution to the job done by the whole utterance. This approach suggests the existence of a form of dependence that is different from the one described by Langacker. The basic motivation for this type of dependence is that one linguistic element needs another because it cannot do the whole job on its own; when you code a sub-utterance item, there is always something missing before you have a fully functional utterance. This discovery goes back to the Stoics, who distinguished between "complete" and "defective" expressions (cf. Diogenes Laertius p. 173). Complete expressions include "judgements" and questions; chief among the defective expressions are predicates: "... e.g. 'writes', for we inquire
'Who?'". A search for criteria of completeness must be pursued from a functional point of view; it is difficult to see what criteria would make one symbolic structure more complete than any other on purely conceptual grounds, short of conceptualizing the whole universe. There is, again, an analogy with the structure of a knife: if we insisted on an "item" ontology only, it would be entirely up to the individual user whether he chose to combine a blade and a handle or he preferred them separately. Only a presupposed functional role makes it possible to explain the privileged role of the combination over its components. The distinction complete-defective applies to all utterance fragments; but the central asymmetry of the elements in the layered structure suggests a differentiation between two complementary types of "incompleteness". As we saw, the coding differentiation embodied in the layered model distinguishes between "operands" and "operators"; and the defining mark of operators is that they "use" the operands in order to create a new and more complex item. At the top (or "output") end we found the functional types of meaning (which relate content to the situation); at the bottom (or "input") end we find conceptual content, which is "used" in various ways by higher-level operators. On the basis of this dichotomy, we can set up two complementary types of incompleteness, giving rise to two types of dependence relations. The incompleteness of operators consists in the lack of a content to operate upon. Starting from the top, we began by differentiating between the illocution (for instance interrogative) and the propositional content. The illocution operator heralds a functional type, for instance that the utterance is a question, but it does not constitute a question in the absence of a propositional content.
The dependence of an operator upon its operand I see as an instance of "conceptual dependence": the job of assigning functional status to something clearly presupposes that it is already there, just as the application of a predicate to an object presupposes that object. The name "conceptual" reflects the fact that this type of dependence points downward in the structure, towards the conceptual end, and that what is missing therefore includes the conceptual content. The operand is incomplete in the opposite way. What is missing is a specification of what to do with it — in the example we would have a proposition, but we do not know whether it is to be used to make a statement or ask a question. Therefore the dependence of operand upon operator will be called "functional dependence". Seen from the purely conceptual point of view, there is nothing incomplete about a proposition

Functional and conceptual dependence 277

— as part of an inventory of "issues" we are perfectly capable of entertaining a proposition mentally without incorporating it into an utterance and without making a decision as to its truth value. The incompleteness, and the dependence, only arises when it is invoked in connection with an utterance, i.e. called upon to serve a communicative function. To illustrate the two kinds of incompleteness, imagine first an utterance consisting only of the semantic content 'declarative'. According to the definition above, such an utterance would be incomplete because 'declarative' is conceptually dependent on the proposition it takes inside its scope: the function is specified as that of conveying a fact, but since we as addressees do not know wherein that fact consists, the utterance would be incomplete. Beginning from the opposite end, imagine an utterance consisting of the word table, whose meaning is to invoke the concept in question. In terms of the description given here, this word would be functionally dependent on all the higher layers. We would be perfectly capable of invoking the concept in isolation, but since we do not know in what way the concept is supposed to function in communication, the utterance would be incomplete. The notion of functional dependence is absent in Cognitive Grammar, because all meanings are understood as conceptual, and the special role of functional types of meaning is only implicitly present in the framework. Functional dependence is what accounts for the incompleteness of layers up to the speech act. It cannot be explained in purely conceptual terms why a set of noun phrases, analogous to billiard-balls, would be incomplete as the content of an utterance (i.e. why a full utterance in the form of two noun phrases such as John the boat would be incomplete); or why we find no free-floating predications or propositions, although we are perfectly capable of conceiving of states-of-affairs and propositions on their own.
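The two kinds of incompleteness just illustrated can be mimicked, very loosely, as a toy data model in code. This is only an illustrative sketch, not part of the theory; all class and method names are invented for the purpose:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Operand:
    """Conceptual content, e.g. the proposition behind 'the president resigned'."""
    content: str
    operator: Optional["Operator"] = None  # what job is this content to do?

    def functionally_complete(self) -> bool:
        # An operand with no operator leaves open what it is to be used for.
        return self.operator is not None

@dataclass
class Operator:
    """Functional meaning, e.g. the illocution 'declarative'."""
    function: str
    operand: Optional[Operand] = None  # what content is it to operate upon?

    def conceptually_complete(self) -> bool:
        # An operator with no operand has nothing to assign functional status to.
        return self.operand is not None

# A bare proposition: conceptually intact, but functionally dependent.
p = Operand("the president resigned")
assert not p.functionally_complete()

# A bare illocution: heralds a statement, but of what? Conceptually dependent.
declarative = Operator("declarative")
assert not declarative.conceptually_complete()

# Plugging the operand into the operator removes both kinds of incompleteness.
declarative.operand = p
p.operator = declarative
assert declarative.conceptually_complete() and p.functionally_complete()
```

The asymmetry of the two checks mirrors the asymmetry in the text: the operator's gap points downward to missing conceptual content, the operand's gap points upward to a missing specification of function.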
The relation between the two types of dependence can be understood in terms of the slot-filler relationship. Operators create a slot consisting of the "operand" role, to be filled by the conceptual content that is communicatively relevant. Looking at layered clause meanings top-down can thus be described as a succession of slot-defining and partial slot-filling operations, until the lowest operand fills the whole of the slot defined and thus creates no further slots to be filled out. If we approach the same clause meaning bottom-up, we could say that there is a kind of slot-creating element present in the shape of the "functional question": what is the job
of the semantic content in question? This question in a sense asks for an operator to do something with it; yet this question is more naturally understood in terms of a "filler" asking for a slot to fill; slots are basically created from above rather than below. It follows from the definition that operators below the top level are dependent in both ways: thus, deictic tense is functionally dependent on the illocution and conceptually dependent on the state-of-affairs. It follows also that all operands are conceptually independent in relation to the operator. If we look at this distinction in relation to Langacker's notion of dependence, which is sometimes also called "conceptual dependence", it appears that it tallies with what I have called conceptual dependence. The billiard-balls example covers the relationship between the predicate and the arguments in creating a state-of-affairs: the predicate takes the arguments inside its scope, and is therefore conceptually dependent on them, whereas the arguments (at the bottom of the scope hierarchy) are conceptually independent of everything else. The case of negation, too, fits Langacker's analysis: negation is conceptually dependent on what it negates (cf. Langacker 1991: 132). The bipartition into functional anchoring and conceptual content has been described as operating on a single hierarchical dimension, whereby conceptual operands are embedded in functional operators. As already implied, this is a simplistic account. As a key example of an utterance structure that exemplifies a more complex relationship between conceptual and functional meanings we can take the oldest utterance structure in the history of grammar: the classical division into onoma and rhema, subject and predicate in the traditional sense. 
In modern pragmatically oriented terminology the intuition underlying the distinction is generally expressed by terms like "topic" versus "comment", but the main point is the same: the distinction between that which we are talking about, and that which we say about it. This utterance structure is easy to relate to the basic distinction between conceptual meaning and functional anchoring: the anchoring in the situation is associated with the subject/topic, and the application of conceptual meaning is associated with the comment/predicate. The interesting thing is that the topic-comment structure appears to turn the basic division between higher-level functional meaning and lower-level conceptual meaning on its head: the situational anchoring in the topic occurs in an argument position, i.e. at the bottom of the scope hierarchy.


To account for this in terms of the logic of the story above, we need an extra level of complication. The basic element in the complication that is required is the existence of multiple forms of anchoring. The basic logic in the embedding relationship is that conceptual meaning is function-independent and thus needs to be related to the situation in a functional way; and this operation can take place more than once in the composition of a clause meaning. The notion of "topic" illustrates one point below top level at which this can take place; the "billiard balls" at the bottom of the scope hierarchy can be anchored independently of the functional anchoring of the utterance as a whole — as when you call a shot, naming the ball that you choose to hit first. There is an obvious difference between the two types of functional anchoring in that the situational nature of the topic does not in itself create a viable utterance meaning — it remains functionally dependent on something that will explain what we are to use this topic for; the sun is a possible topic, but not in itself a functional utterance, just as hitting the first ball does not constitute a successful shot. This difference highlights the fact that it is not just conceptual meaning that is functionally incomplete; situationally anchored information can be just as functionally incomplete, because it does not add anything to the situation; we remain at square one (cf. the principle of sense, above p. 136). The difference as well as the similarity can be understood in terms of the top-down ontology of function-based structures. 
A totality in the form of a viable utterance meaning is the presupposed overall context of all component choices, and the situational anchoring associated with definiteness can only be understood in that context: in order (ultimately) to perform a function that affects the situation, as a step on the way we can establish a peg, an address in the situation, to which we can attach the function. With a third metaphor: first we locate a socket, then we plug in our functional device. It may appear as if this distinction waters down the definition of "purely functional" meaning to such an extent that it becomes meaningless; if it can occur at both the bottom and the top end, and it can both indicate the (dynamic) function of the utterance and the (static) situational address, the basic logic of the division of labour between function and cognition is less mnemonically apparent. But the two-stage account of functional embedding suggested here can be supported by reference to the similarities in the structure of noun phrases and the structure of clauses, as described within
Functional Grammar by Rijkhoff (cf. 1990, 1992), and in cognitive grammar in the theory of grounding (cf. Langacker 1991: 194). In both cases, the layered structure begins with a conceptual element and moves towards an outer layer where we find the definiteness that is associated with the determiners in the NP and the tenses in the clause. Thus in both cases we find a conceptual core, and a situational periphery, reflecting the structure in which a conceptual operand is plugged into the situation by functional operators. The layering of NPs stops at this point, whereas clause layering continues with the illocutions, because NPs are not semantically fit to serve as full utterances; they only arise as a result of the differentiation between predicable properties and carriers of properties, and are thus inherently designed to be related to a predicate in the context of a complete clause meaning. But both in the case of the NP and the clause, we find the element that was the core of the theory of functional and conceptual dependence suggested above: a conceptual "operand core" being put to situational use. In the NP the "sub-function" of a definite operator is to attach the descriptive content to an entity in the situation in order to be able to talk about it. In the clause the "sub-function" is to attach the clause content to the situation of which it is either true or false. The topic-comment utterance type is thus not understandable in terms of a simple "anchoring-plus-function" dichotomy: there are two points of functional embedding, one at NP level and one at clause level (on the relation between "topic" anchoring and tense, cf. p. 404-14). 
In examples like (35) The president resigned the concept 'president' is invoked, and operated upon by the definite article to produce a situational referent; the concept 'resign' is invoked, and operated upon by the past tense to produce a proposition about that situation; finally, the declarative then operates upon it to produce a factual claim.9 If we compare the two stages at which definiteness occurs, it is interesting that at the NP level there should be a paradigmatic contrast with indefinite items, whereas the tense slot has only definite choices.10 In terms of the role of functional meaning as described above, this can be given a clear functional motivation: NP meaning need not be independently anchored in the situation, as long as the whole clause meaning is anchored. NPs can remain purely conceptual and still end up as being applied to the world of discourse at the appropriate point: there's a leopard! is not
situationally anchored in the NP itself, but becomes anchored in virtue of the present tense. In contrast, the leopard is back! is anchored both in the NP itself (you know what leopard I'm talking about) and in virtue of the present tense. If tense could be indefinite, the whole clause meaning would not be applied to an identifiable situation, and thus the conceptual content of the state-of-affairs would remain without anchoring in the world of discourse (on "gnomic" preterites, cf. below p. 339). The paradigm example of a syntactic hierarchy is the one that Chomsky took over from traditional grammar: S -> NP VP, VP -> V NP, as associated with the sentence the man hit the ball (cf. Chomsky 1957: 27):

(36)
                Sentence
               /        \
             NP          VP
            /  \        /  \
           T    N    Verb   NP
           |    |     |    /  \
          the  man   hit  T    N
                          |    |
                         the  ball

If we look at this supposedly basic formal structure in the light of the discussion above, it will be seen that it can be analyzed as formalizing a combination of several interacting sets of content relations: the "pragmatic" one involved in the topicality hierarchy, whose conceptual correlate is the trajector-landmark asymmetry, and the "semantic" relation between verb and arguments. These again can be factored out into the scope relation (the verb takes both arguments inside its scope) and the specific relations between verb meaning and argument meanings. When all these content relations are coded such that there is a subject with topic status, in relation to which the rest of the sentence is understood as the predicate, we get the classic syntactic structure described in (36); and the existence of such a content relationship must be understood as the motivation for the substitution tests that give the subject its special status. As pointed out in Haberland—Nedergaard Thomsen (1991), syntactic relations arise as a result of specific compromises between the pressures of "pragmatic"
(=functional-interactive) and "semantic" (=conceptual) types of content in the clause; and the understanding of clause structure therefore depends on an understanding of the way those two types of meaning interact. Again, the formal relations expressed in the tree diagram are not wrong — they are just a fairly rough simplification, as any rival structure of the same kind would also necessarily be (cf. the discussion in section 3.2).
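The content-side composition that such a constituent structure only partially reflects — the layered reading of example (35), The president resigned, as described above — can be mimicked loosely as nested function application. This is a toy sketch under the chapter's own analysis; the function names and glosses are invented purely for illustration:

```python
def definite(concept: str) -> str:
    # Definite article: match the concept with a referent in the
    # current discourse space (NP-level situational anchoring).
    return f"the identifiable {concept} in the current discourse"

def past(state_of_affairs: str) -> str:
    # Deictic tense: apply the description to a past situation,
    # yielding a proposition that can be true or false of it.
    return f"it held of a past situation that {state_of_affairs}"

def declarative(proposition: str) -> str:
    # Illocution: use the proposition to make a factual claim.
    return f"I claim: {proposition}"

referent = definite("president")        # NP-level anchoring
soa = f"{referent} resign(s)"           # state-of-affairs
utterance = declarative(past(soa))      # clause-level anchoring
print(utterance)
```

The nesting declarative(past(...)) mirrors the scope hierarchy: tense is functionally dependent on the illocution above it and conceptually dependent on the state-of-affairs below it, while the NP-level anchoring happens independently, at the bottom of the hierarchy.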

4.7. Relations between functional and conceptual aspects of "element" meanings

The relations between individual meanings in Cognitive Grammar are understood on the basis of conceptual properties, which supplement each other in the ways outlined in the introduction. Conceptual dependence, as described, arises because of meanings which provide more or less empty sites, into which other meanings may be fitted, as the hand fits into the space defined by the glove. We can deepen our understanding of this conceptual relationship by inquiring about possible functional motivations for this division of labour. This will simultaneously explain why Langacker's conceptual dependence turns out to be the inverse of the functional dependence described above. Basically, the reason why a certain conceptual content needs elaboration must be understood in relation to the functional role it occupies in the division of labour in the clause. Seen from above, each item has a conceptual content that reflects the kind of job it does. Seen from below, the reasoning goes the other way: each item has the job it has because its conceptual content makes it suitable for that job. Neither of these two perspectives is the "right" one; sometimes one is more revealing than the other, but both are necessary. What I hope to achieve in relation to Cognitive Grammar is to supplement the bottom-up perspective, whereby combinations are basically approached from the point of view of individual items, with the top-down perspective where we understand the items as differentiations from the functional whole. In other words, conceptual and functional differentiation are two sides of the same coin in creating word meanings. The standard example of conceptual dependence is the relation between verb and arguments: the verb is dependent because it designates a process involving a trajector (and possibly landmark) site that requires elaboration.

Functional and conceptual aspects of element meanings 283

This reflects the billiard-balls model, according to which relations cannot be conceived of independently of things that are related. From the functional point of view, this conceptual dependence is a natural result of the functional differentiation which created a distinction between words for billiard-balls and words for relations between them. Langacker's definition of dependence in terms of "elaboration of salient substructures" taps the same intuition that I used to justify the asymmetry of the scope hierarchy: some types of meaning are, as it were, inoperative if there are not certain other types of elements to operate upon. The billiard-ball model can be seen as a differentiation of the more primitive clause type that Strawson (1959: 202) called "feature-placing" constructions, such as it is cold. The feature-placing variant would be something like there is billiard going on, in which the whole process of billiard-playing is designated as a whole with no distinction between participants and interactions. In contrast, the red ball hit the white ball distinguishes between a word for the interaction (hit) and words for the participants (the red ball; the white ball). This differentiation must be understood as one process that simultaneously creates two kinds of meaning. A conceptually dependent "relational" meaning by the same token must create a meaning that fills out the missing related elements — otherwise it would be like inventing a corkscrew without having any corked-up bottles around. This again fits well with the functional role of relational predicates in the state-of-affairs. Functionally speaking, the job of "predicating" relations of things can only be done by words that do not already designate the things. 
Put differently, the development of concepts that evoke trajectors without designating them must be understood in connection with the function of combining concept and trajector in a creative way: unless we could factor out properties from the objects that carried them, this job could simply not be carried out. There is, I would like to argue, also a functional dimension in the semantics of verbs. The characterization of verbs in Langacker is bound up with the notion of "sequential scanning", i.e. the idea that verbs are understood as involving a conceptual dynamicity absent from nouns (which involve "summary scanning"). This characterization has not always been felt to be immediately convincing (cf., e.g., Hudson 1992); why is resemble more dynamic than similar, and why should two and two makes four be understood as unfolding in conceived time? However, I think the description is true if we add a functional element to the conceptual underpinning
suggested by Langacker. This functional element is in the role of the finite verb; only finite verbs are ultimately understood sequentially, non-finite forms being stativized according to Langacker. But as I see it, the job of finite verbs is not exhaustively characterized by saying that they designate processes which are conceived as progressing in time. Let us for ease of exposition assume the classic subject-verb construction. It is generally assumed that the relation between predicate and subject has some properties which are not reducible to the designated phenomenon. Rather, the job of the predicate is associated with the function of predicating something about the subject, creating an exocentric construction that Jespersen called "nexus". In the terms of Searle (1969), this involves raising the question of the truth of the predicate with respect to the subject. This is reflected in the layered model in the role of the central predicate in the construction of a proposition; and since infinitival and participial constructions are ruled out by the finiteness requirement, the dynamic reading belongs essentially in a context consisting of predication-as-part-of-a-proposition. The dynamicity of the finite verb thus co-occurs with the special function of considering an object (in itself conceived in abstraction from the predicate) with respect to the issue of whether the predicate applies or not. This combination has a temporal aspect that is absent from all other contexts, in the form of bringing subject and predicate together to see if they fit the situation at a particular time. This temporal aspect is different from dynamicity as a property of the conceived situation. The natural case is one in which the two go together: viewing the predicate with respect to a particular time is naturally bound up with viewing a situation as unfolding at a particular time. 
But the counterexamples mentioned above can be seen as cases where the conceived situation in itself is static, and only the act of predication introduces a temporal dimension.11 It follows from the whole functional approach that individual meanings are not basic and prior to complex meanings, as atoms are in relation to molecules and molecules in relation to cells; the "valence" metaphor is imprecise on that point. Rather, lexical items in the human sense are very sophisticated "functional components" differentiated top-down in the manner of blade and handle in a knife — rather than "physical components" based purely on a bottom-up path of composition. Once lexical items exist, they can of course be combined according to bottom-up principles; but we could not use lexical items properly unless we understood them as fragments of a type of larger whole that must be recreated, in one way or another, every time we use them. This is another
way of illustrating why syntax is not just the speaker's responsibility, but also part of langue itself. Each individual process of combination is the speaker's responsibility, but constraints and possibilities are pre-wired into the components. If only the items themselves existed as part of langue, and not the mechanisms of combination, a plausible historical scenario would be that the full clause emerged one day after a long phase of only verbs and nouns — in analogy with the emergence of cells after a few billion years of molecules only. The discussion above should illustrate why this is absurd. The interdependence of functional and conceptual aspects of meaning also extends to discourse properties. The "potential-for-use" that constitutes the coded meanings of nouns and verbs is also anchored in the types of purposes that they fulfil: the nouns are the concrete entities that enter into discourses, and the verbs denote the events in which they take part (Hopper—Thompson 1985); as demonstrated by Gillian Brown (1994b), this also goes with a higher degree of abstractness in the meaning of verbs — which tallies with their higher position in the scope hierarchy of the clause. I have described scope relations as the skeleton of content syntax. This implies that in order to get the right interpretive process, we need to follow the priority laid down by the scope relations. The term "skeleton", however, also indicates that it does not cover the whole of content syntax. The missing element is the types of relations that are established by following the route dictated by the scope. As I understand it, the apparatus of Cognitive Grammar, supplemented on some points with a functional dimension, serves to do the job that was assigned to content syntax above: describe how the meanings of content elements go together in creating whole clause meanings.

4.8. Dependence and the division of labour between coded and non-coded meaning

The phenomenon of dependence is crucial in understanding how language is structurally designed to fit into a larger pragmatic context — rather than constitute a self-contained structure on its own. Dependence operates at the interface between linguistic meaning and situationally available information; and this mechanism has given rise to a great deal of confusion because of
the association between "structure" and "autonomy". To see why, we need to expand the domain of dependence relations. What is needed to support a dependent item is not necessarily a linguistic specification of the item required; all that is required is that the function (in the case of functional dependence) or the conceptual content (in the case of conceptual dependence) should be available to the interpreter in the situation. Utterances like coming!, porridge, or downstairs, occur all the time without anomaly, in spite of the dependence relations described above — because the missing specifications are situationally available. This might appear to undermine the notion of complete utterances: if anything goes, what happens to the notion of top-down differentiation of whole utterance meanings as the crucial characteristic of human language? However, the distinction is necessary even if the appeal to what can or cannot occur as a whole utterance will not in itself support it. Above p. 243 it was argued that the domain of syntax stops at the level where it has always been assumed to stop, namely at the sentence boundary, because there is no larger constituent that plays a role for determining scope relations: no linguistic item requires an operand that is larger than, but different in kind from, a full sentence. Therefore, the way in which utterances of sentence form (or larger) are dependent on contextual information cannot be described by saying what type of linguistic constituent is "missing", as we can do in the case of utterances below sentence. Thus, an "incomplete" utterance like "promise?" can be described by specifying precisely and in linguistically constrained terms what must be inferred in order for it to convey a full utterance meaning — and this specification corresponds to what could also be conveyed by a full clause. 
Items that belong to subutterance categories like nouns and verbs, when occurring as full utterances, draw on situational specifications in ways that could also be handled by precise linguistic categories. Thus there is no difficulty in specifying what a complete clause meaning is, even though utterances may consist of clause fragments. Sentence-based linguistics assumes that such "elliptic" utterances can be "normalized" into the corresponding sentences. The rationale is identical to the one I described above, but the procedure goes the wrong way: instead of saying that an incomplete message calls upon situational information in a way that can be linguistically specified, it reinterprets the clause as "really" or "underlyingly" complete in linguistic terms. In that way it eliminates the situation, leaving only language. It is this strategy, which converts everything on which an utterance depends for being

Coded and non-coded aspects of meaning 287

understood into linguistic items, that ultimately results in the performative hypothesis, where the participants in the speech situation are absorbed into the structure of the clause. The speaker as a member of the real world disappears from view, and in his place we get the subject of a performative verb. Interaction between coding and purely pragmatic mechanisms is made invisible: there is language as far as the eye can reach. Since it is so difficult to be sure exactly how much is coded, it is often claimed that it is futile to haggle about the question. Contextual understanding is what we are all interested in, and only an isolationist linguist would wish to artificially isolate coded meaning — if a clause description includes contextual information, so much the better, or so the logic of the argument goes. But this overlooks the fact that it is precisely the task of language to transcend what is contextually given, adding just what is necessary to modify the situation the way the speaker intends. To understand language is to build something on top of situational information, not to reprocess already established situational information. The importance of this distinction is reflected in the relative level of difficulty of different understanding tasks. The more contextual support you have for a given message, the easier it is to understand it — the more context-independent processing is required, the more difficult it is, cf. Brown (1994a). A model which systematically describes contextually given information as if it were on a par with coded meaning has no way of capturing this difference. The functional view of linguistic meaning and dependence relations makes it possible to avoid this confusion, while accounting for the intuition that is behind it. What happens is that the incomplete meaning triggers an interpretation process which requires functional and/or conceptual specifications. 
The questions that are thus raised are part of the linguistic aspects of the utterance event in virtue of the dependence relations — the answers to these questions, however, are not part of the linguistic description of an elliptic utterance. Linguistically speaking, the distinction between what is said and what is not said is decisive, even if the message may be the same. This also enables us to say as much as is linguistically needed about the performative anchoring of an utterance. The coded meanings 'declarative' and 'interrogative' were described above as being conceptually dependent on a propositional content. On the account given here, they are also conceptually dependent on the presence of a speaker and a hearer: the conveyance of a fact or a question is inconceivable without a conveyor as

288 Conceptual meaning in a function-based structure

well as a conveyee and something conveyed. However, the two former elements are typically situationally specified, hence not part of linguistic meaning.12 The "performadox", cf. above p. 233, arises from converting situationally available information, drawn upon by the addressee, into linguistic content stated by the speaker. The fact that dependence is essentially a relationship between coded meanings and their context — requirements that must be fulfilled in order for coded meanings to make sense — means that it is a matter of linguistic convention to what extent explicit linguistic specification is necessary. An interesting case is what is known as "zero anaphora" (cf. Givon 1990, Tomlin 1991). When a subject is missing, but the rules of the language permit the lacuna to be filled contextually, in the terms described above one would say that the basic linguistic fact is the dependence of a verb meaning on an elaboration of its trajector — whether the addressee makes the elaboration on his own or receives assistance from the coding. In languages like Danish and English, the dependence in most situations manifests itself linguistically in the need for an "overt" NP; in Spanish or Mandarin Chinese, the dependence manifests itself in a drawing upon previously introduced referents, in a manner that is very like pronominal reference (cf. Tomlin 1991). The dependence relation in itself functions in a way that is similar to the request for identification that is coded into definite pronouns: it indicates that something is missing, thus prodding the addressee to supply it himself, when the language permits the dependent element to stand alone. According to the account suggested here, what is linguistically present in cases of zero anaphora is thus only the conceptual dependence (of verb on subject), not the subject itself: there is no "zero pronoun", only an anaphoric relation established as part of the interpretation process. 
Unless we interpret it this way, we deprive ourselves of the possibility of accounting for the dynamic relationship between context and coding options: we let context drift into language.
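The dependence-based account of zero anaphora can be sketched in code. The following is a minimal toy model, with all names and representations invented for illustration: what is linguistically present is only the verb's conceptual dependence on a trajector, and the slot is elaborated either by an overt NP (English-style coding) or by drawing on a previously introduced discourse referent (Spanish/Mandarin-style), with no "zero pronoun" object ever constructed.

```python
# Toy model of zero anaphora as a dependence relation, not a zero pronoun.
# The dependence itself is the only coded element; its satisfaction may come
# from coding (an overt subject NP) or from context (discourse referents).
# All names here are hypothetical illustrations, not claims about any theory's
# formal apparatus.

def resolve_trajector(verb, overt_subject=None, discourse_referents=()):
    """Return the elaboration of the verb's trajector slot.

    English-style: the dependence manifests itself in the need for an overt NP.
    Mandarin-style: the lacuna is filled from previously introduced referents,
    here crudely modelled as "take the most recent one".
    """
    if overt_subject is not None:            # dependence satisfied by coding
        return {"verb": verb, "trajector": overt_subject, "source": "coded"}
    if discourse_referents:                  # dependence satisfied contextually
        return {"verb": verb, "trajector": discourse_referents[-1],
                "source": "context"}
    # Neither coding nor situation supplies the elaboration: no sense is made.
    raise ValueError(f"'{verb}' depends on a trajector that nothing supplies")

# Overt subject, as required in most English contexts
english = resolve_trajector("swim", overt_subject="Joe")

# Zero anaphora: the interpreter supplies the referent from prior discourse
mandarin = resolve_trajector("swim", discourse_referents=("Zhangsan",))
```

The point the sketch is meant to capture is that the same dependence relation underlies both cases; languages differ only in whether its satisfaction must be coded.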

4.9. Scope, function and semantic relations: the multidimensionality of semantic structure

Conceptual dependence was the dependence of operator upon operand: the necessity of a certain type of input to operate on in order for a certain semantic operation to be executable. Above I have argued that scope

The multidimensionality of semantic structure 289

relations have a certain natural affinity with a cline from conceptual to more functional meanings, subsequently complicating the story by saying that the affinity operated in (at least) two cycles. There are even more complications, however. The first is the existence of implications between choices on different levels: sometimes one choice may involve two different levels at once. The second is the issue of the specific nature of semantic relations. I have described scope as the semantic skeleton; actual meaning depends on the nature of those semantic relations that are channelled through the scope hierarchy. In addition, however, scope cannot be kept apart from the nature of semantic relations: the linkage types between elements have repercussions for scope relations. The first case can be exemplified with interrogative elements. According to the basic layout of the layered model, interrogation belongs at the level of illocutionary operators, as a paradigmatic alternative to 'declarative'. This corresponds unproblematically to the type of questions known as "yes/no questions", where a whole proposition is presented for affirmation or negation. However, the existence of wh-questions as well as wh-question words means that we also have to account for questions where an incomplete proposition is presented for completion. In terms of the semantic clause structure this can be described as one semantic element that operates in two places at the same time. The semantic element is the interrogative root, requesting specification of a missing item. This element can be combined with almost any specific grammatical position.
In the case of determiners, we have the familiar paradigmatic choice between, e.g., demonstrative, indefinite and interrogative:

(37) {that / the / some / what} book was returned

In determiner position, we can choose between demonstrative deictic identification, pure definite presentation for identification, non-identification and request for (addressee) identification of the referent. This makes very good sense in argument position, since one aspect of scope relations clearly restricts the scope of all the paradigmatic alternatives to the NP content: it applies to the status of the NP referent only. However, even if the speaker


requests addressee intervention in a specific grammatical slot only, this has natural implications for the status of the whole utterance — so the interrogative element recurs in illocutionary position. This double scope on the content side is typically reflected in double marking on the expression side. Within the NP, interrogation is marked in the interrogative forms of the words; on the illocutionary level, interrogative status is marked in ways shared with yes/no questions, such as word order and (in English) do-periphrasis. This is the mechanism behind the "wh-movement" phenomena. It can be regarded as a movement in the sense that the constituent itself has a natural place elsewhere; but it can be regarded as a case of an element appearing where it belongs in the sense that the illocutionary status of a clause is marked at the beginning of a clause in English, Danish, and a number of other languages (Heltoft 1990); and since the illocutionary element is bound up with an argument, the argument has to stand in the proper position for the marking of interrogative status. This raises the question of the status of "echo-questions" such as (38):

(38) You said WHAT?!

where the interrogative element remains "in situ", and do-periphrasis does not occur. The account provided above suggests that they might be understood as cases where the interrogation remains at argument level and does not affect the illocutionary level — in other words, the whole clause remains declarative, while the argument position is interrogative. A natural context for echo-questions is a situation where the speaker repeats those parts of a clause that he has grasped, while inserting interrogatives in the places of the items he needs to have repeated.
As an illustration of the semi-declarative status of the clause one could offer cases where understanding needs to be confirmed, and the natural way of confirming would be to repeat the whole clause in declarative form, rather than ask a question:

(39) A: You find the main switch and turn it off
     B: I find the main switch and turn it off
     A: and after that you fix the cyclotron
     B: and after that I fix the WHAT?
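The two placements of the interrogative element can be made concrete with a small sketch, under invented toy representations: in an ordinary wh-question the interrogative root sits in an argument slot and recurs as the illocutionary operator, while in an echo question it stays at argument level and the clause remains declarative.

```python
# Toy sketch of the double scope of interrogation. The clause representation
# and function names are hypothetical illustrations, not a formalism taken
# from the text: interrogation is placed in one grammatical slot, and unless
# the question is an echo question, it also recurs at illocutionary level.

def clause(illocution, predicate, args):
    return {"illocution": illocution, "predicate": predicate, "args": args}

def wh_question(predicate, args, wh_slot, echo=False):
    """Insert the interrogative root into one argument slot; let it recur
    as illocutionary operator unless this is an echo question."""
    new_args = dict(args)
    new_args[wh_slot] = "WH"                  # interrogation in the argument slot
    illocution = "declarative" if echo else "interrogative"
    return clause(illocution, predicate, new_args)

# "What did you say?" -- interrogation marked at both levels
ordinary = wh_question("say", {"subject": "you", "object": None}, "object")

# "You said WHAT?!" -- interrogation confined to argument position;
# the whole clause remains declarative
echo = wh_question("say", {"subject": "you", "object": None}, "object",
                   echo=True)
```

On this sketch, fronting and do-periphrasis would be expression-side reflexes of the "interrogative" illocution value, which is exactly what the echo question lacks.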


It would be descriptively convenient if the clause could be understood as keeping its "normal" illocution, since echo-questions occur also in imperatives:

(40) A: whatever you do, don't smash the cyclotron!
     B: don't smash the WHAT?

Unless we do that, it appears we have to have two incompatible illocutions.13 This type of relation is also important in understanding the relation between conceptual and "pragmatic" meaning. A particular concept may become specialized for pragmatic purposes, and the relevant invariant relation may turn from the conceptual meaning (inherently multifunctional) to the service of a particular pragmatic function. In Danish, the word fej ('cowardly') is used in children's language to indicate 'unfairness' as a feature of the situation. The conceptual path is presumably via the use of unfair means by people who are afraid of losing if they fight fair: hitting someone from behind is something you only do if you are fej, and throwing stones was regarded as a fej way of fighting when I was a boy. But children use the phrase as modifying, for instance, football teams (as divided by the sports teacher) if one has an unfair share of good players ("cowardly teams") etc. On the one hand, this can be described as a modification of conceptual content, from 'cowardly' to 'unfair'. On the other hand, it can be described as a generalization of the situational value of a particular concept: if someone lacks the strength of character and judgment to do what the situation requires, a speech act of moral censure is called for, and fej (no matter where it is placed in clause structure) invokes the appropriate kind of disapproval. In the normal case, the conceptual and interactive dimensions should be understood as collaborating to bring about an appropriate formulation.
It makes no sense to ask whether the conceptual or the interactive aspect of meaning is more important, since conceptual meanings (on the basic story told above) evolved as sophisticated means of serving our interactive purposes; they are thus inherently interactive. But the particular type of specialization associated with conceptual meaning may be harnessed by the type of interactive specialization that is found with fej, in which case "speech act type" and "inherent descriptive content" may be competing.


The second type of complication involves the nature of semantic relations. One example is the figure-ground asymmetry that pervades Cognitive Grammar, most notably in relation to trajector-landmark relations. With respect to scope and conceptual dependence, trajector and landmark are in the same position (the verb is equally dependent on both and operates on both in ascribing the relation designated by the verb to them); but the priority inherent in the "figure" status of the trajector means that the scope relation works asymmetrically, too. If we assume as the standard case (cf. Givon 1989: 198) that the subject is also the main topic, we can see the "figure" status as the conceptual aspect of a relation whose pragmatic aspect is topicality, involving situational importance as chief carrier of "aboutness". The priority of situational aboutness may impose a functional bias on the understanding tending towards a reading whereby the whole utterance is read as applying to one argument only — the landmark argument is read only as a piece of information about the subject-topic. Consider the same utterance in two contexts:

(41a) How are the kids getting along?
(42) — George beat up Joe this morning

(41b) Is George still as aggressive as ever?
(42) — George beat up Joe this morning

In context (41a), we have a situation where it would be natural to say that we had a beating-up relation involving both kids, though one in a more salient role than the other. In context (41b), on the other hand, Joe could be regarded as just an item on George's record, yielding the traditional subject-predicate bracketing of the clause content. Pragmatically, there will be a scale of topicality dominance between the two participants; with respect to coding, there may also be a scale whereby objects may become part of the predicate at one end of the scale (cf. Nedergaard Thomsen 1992 on "syntactic incorporation").
Even at the other end of the scale, however, the basic asymmetry, whereby verb and landmark share their background position with respect to the figure, means that the scope position of the object is always slightly ambiguous as between "fellow argument" and "predicate appendix" status. This asymmetry becomes clearer when we look at modification as opposed to predication. Compare the following two examples:


(43) The path enters the wood
(44) The path into the wood

On the "argument" line of reasoning, both enter and into take the surrounding NPs inside their scopes, as trajector and landmark respectively. Yet in (44) the asymmetry discussed above is much more obvious: the standard analysis would be that into the wood takes the path inside its scope, and the path is the head. This is a consequence of the lack of predicative power of into as opposed to enter; the more independent status of the predicate in predication (cf. the exocentric status of the construction as discussed above p. 284) entails a less subordinate status of the landmark also. Modification is by nature a subordinating relationship; since the whole construction inherits the profile of the head, the landmark is inherently part of the modification rather than co-equal. The scope of the relation designated by into (in terms of which the two arguments have co-equal status) is thus "overruled" by the scope of the modifier relation; but both semantic relations are there as part of what goes on in the clause. The same downgrading is inherent in subordinators in "adverbial" clauses: clauses introduced by (al)though attach themselves as modifiers to the matrix clause, which functions as head. As an adverb, though also has the main clause proposition inside its scope, but the omission of the landmark means that the location of the clause with respect to the implied landmark now becomes purely inferential (cf. also Nørgård-Sørensen 1992). It must be emphasized that scope ambiguities, in the commutation-based account, do not automatically constitute linguistic ambiguities. An element can be fixed or floating with respect to scope possibilities — that is one parameter of semantic specificity of coding. Functional Grammar has tended to assume that scope ambiguities by definition constituted linguistic ambiguities; but that again would be an a-priori judgment on "proper" coding.
An example of systematic scope ambiguity is the case of focus-related constituents, including negation. Disambiguation can be achieved by means of intonation, but is not guaranteed; misunderstandings of the kind made possible by utterances like (7), repeated as (45), occur also in the spoken language:

(45) I did not do it because I like her


Many other adverbials are ambiguous (for example conditional clauses, cf. Wakker 1992) in a systematic way. In all such cases we need to distinguish between a description of the semantic recipe and the description of possible interpretations, if we do not want to ascribe greater complexity than warranted to the linguistic system itself. The semantic recipe in such cases does not prescribe a precise scope relationship; analogously, it is sometimes left to the cook to decide at what points to add certain spices; but the situational outcome may depend on what scope is selected. On the "locatability" assumption, cf. Bolkestein (1992). The possibility of scope ambiguity can also be used to throw light on a problem discussed by Bybee (1992; cf. also Bybee—Pagliuca—Perkins 1994). There is a general and a specific issue involved. The general issue is the question of how changes in scope can occur gradually in the preferred manner of grammaticization theory, rather than by sudden jumps. The concrete issue is the change in scope from the deontic ("agent-oriented") to the epistemic interpretation of may. Bybee's interpretation of scope relations is different from the one adopted here. Her view appears to be influenced by the "transitive" and "intransitive" syntactic structures attributed to "root" and "epistemic" readings within generative grammar. On that interpretation, epistemic readings have the whole of the proposition inside their scope, whereas the agent-oriented reading has only the verb phrase inside its scope, roughly as indicated below:

(46) 'John' (modal ('leave')) (agent-oriented)
(47) modal ('John' ('leave')) (epistemic)

As I see it, this is a good example of a scope reading which is inconsistent because it tries to reflect expression properties (initial subject position) and semantic properties (domain of application of content) at the same time, without reflecting on the distinction.
On the purely semantic notion of scope, the subject is inside the scope of the modal in all cases, as discussed above: in the unmarked case, the subject is the ultimate operand of all the semantic material in the clause. The change in the status of the modal goes from having status as a lexical verb to being an epistemic operator that takes a whole predication inside its scope (cf. Goossens 1985). On that understanding, the modal verbs can be understood in two ways: as lexical verbs, or as grammatical operators (cf. also Langacker 1991). To take can as the "least developed" English modal (cf. Bybee—Dahl 1989),


Joe can swim can be understood with can as a lexical verb (cf. also Davidsen-Nielsen 1990), where can takes an object in the form of a reduced clause (swim). As against that, in Joe can be swimming the modal can most naturally be understood as an operator indicating epistemic possibility. Scope relations in the two readings reflect this analysis. With can as a main verb, it takes subject and object directly inside the scope, roughly as indicated below:

(48) 'can' ('Joe', 'swim')

With can as an epistemic operator, the scope is as follows:

(49) 'can' ('be swimming' ('Joe'))

Thus the change in scope is from a situation where the subject is directly inside the scope, to a situation where the lexical verb comes in between. However, Bybee's explanation for how a gradual change is possible retains its force in spite of this reinterpretation: it is still the subject that changes in status. Bybee points to the case of dummy subjects, such as ME mon (= impersonal one), where the agent adds no semantic material. Since the subject was too impersonal to have specific semantic properties, scope relations did not matter; thus such contexts provided an avenue of gradual change, cf. one of Bybee's Middle English examples:

(50) ...mon may hyden his harmes...
     'One can/may hide his misfortunes'

which can be read either in the old "agent-oriented" sense, now translatable as can, or in the new epistemic sense corresponding to modern may; but it makes no sense to understand the statement as being about the specific abilities of mon in either case. But even if the example is illustrative, the question remains whether we need such a privileged portal of change. If there are uncoded pressures affecting readings going on all the time, such as those that push object meanings towards being incorporated into the predicated content, such pressures can promote a change towards different scope readings anyway.
Bybee herself mentions the role of inferences in the process, the interesting inference being from the lexical reading to the epistemic reading: if Joe is


capable of doing something, I can infer that it is within the bounds of possibility that he actually does it. If this becomes a standard and dominant inference, there is a period during which such statements are understood on two scope readings — one attributing capability to the subject, the other attributing possibility to the state-of-affairs. The gradual change then consists in increasing dominance of the higher-scope reading, whereby the direct relationship between modal and subject is severed. We do not necessarily need a loophole in the shape of a neutralizing context. This change into dual readings is not accompanied by any expression differences in English, and the modal verb stays within the same paradigm. Therefore, the linguistic changes that happen are (1) an extension in the semantic territory of the modal verb, and (2) a corresponding change in the permitted scope readings.
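The reanalysis can be sketched directly in the nested-term notation of (48)-(49). The code below is a minimal illustration, with invented function names and tuple representations: the semantic change consists in the higher-scope (epistemic) reading becoming a permitted alternative alongside the lexical reading, with no change on the expression side.

```python
# Toy rendering of the two scope readings of an English modal, in the
# nested-term style of (48)-(49). The tuples and function names are
# hypothetical illustrations, not an established formalism.

def lexical_reading(modal, subject, verb):
    # (48): 'can' ('Joe', 'swim') -- subject and object directly in scope
    return (modal, (subject, verb))

def operator_reading(modal, subject, verb):
    # (49)-style: the modal takes a whole predication inside its scope,
    # so the lexical verb comes in between modal and subject
    return (modal, (verb, (subject,)))

def permitted_readings(modal, subject, verb, epistemic_available=False):
    """The set of permitted scope readings; the gradual change described
    above amounts to extending, and then re-weighting, this set."""
    readings = [lexical_reading(modal, subject, verb)]
    if epistemic_available:
        readings.append(operator_reading(modal, subject, verb))
    return readings

# Before the change: only the agent-oriented, lexical-verb reading
old_stage = permitted_readings("can", "Joe", "swim")

# After the change: dual readings, no expression difference required
new_stage = permitted_readings("can", "Joe", "swim", epistemic_available=True)
```

On this sketch, "increasing dominance of the higher-scope reading" would correspond to shifting interpretive preference within the extended set, not to any discrete structural jump.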

5. Summary: function, structure, and autonomy

I have now tried to demonstrate a number of advantages in a function-based account of syntactic structure, mainly in relation to the two approaches to which I owe most. In particular, I have tried to show that an understanding of the functional nature of meaning is crucial if you want to understand why clause structure exists and takes the form it does. I would like, at the end of the argument, to sum up some distinctive features of the position I have defended with respect to the relation between function and structure. I hope to show both that this position is closely related to views that are generally held within functional linguistics, and that there are some points where agreement is less than total; in an attempt to show where my position may have something to offer, I have picked out some functionally oriented linguists to snipe at as I go along. An important point is the defence of the basic ideas in European structuralism. Much of the hostility towards structure that is found in some functionalist circles presupposes the structural notions in generative grammar. There is then a risk that structure may be rejected because of defects that are endemic to generative grammar. This I think to some extent characterizes the stance taken by Paul Hopper in his influential article on "Emergent Grammar" (1987).
The word "emergent" is used in the sense of "coming into being"; Hopper takes it from a quotation by the anthropologist James Clifford ("Culture is temporal, emergent, and disputed"), and goes on:

    I believe the same is true of grammar, which like speech itself must be viewed as a real-time, social phenomenon, and therefore is temporal; its structure is always deferred, always in a process but never arriving, and therefore emergent; and since I can only choose a tiny fraction of data to describe, any decision I make about limiting my field of inquiry (for example in regard to the selection of texts, or the privileging of the usage of a particular ethnic, class, age, or gender group) is very likely to be a political decision, to be against someone else's interests, and therefore disputed. (Hopper 1987: 141)

Hopper concludes from this that in defiance of "abstract rules and native speaker intuitions" the linguist's task is to study "the whole range of repetition in discourse and in doing so to seek out those regularities which promise interest as incipient sub-systems".

298 Summary: Function, structure, and autonomy

There is a great deal that I agree with in Hopper's position, including the central point against the notion of autonomous structure and for the social embedding of language; it is also valuable to be reminded that grammars (in the sense of "systems of interactive routines") are variable in time and across the speech community, so you cannot formulate regularities about language without making a choice that abstracts away from temporal and social dimensions. However, it appears that in his determination to reject autonomy, he rejects too much structure. The central point on which I do not agree is the talk of grammar as a real-time phenomenon. If we take this view at face value, it implies a complete rejection of the langue-parole distinction: there is nothing but temporal flow in the picture. For the reasons given above I think this is a spontanistic simplification of social organization: the flow always reflects the channels that are already there, even in its pattern of overflow. Hopper's position is an overstatement also by the standards of his own general stance; the necessity of preserving structure is more or less built into his argumentation against it. In speaking of incipient sub-systems, one must perforce presuppose the system of which they are sub-systems. A proper study of emergent, spreading regularities depends on the ability to describe the state that the new regularity gradually supplants; a state of transition is not logically incompatible with the existence of definable stages that the transition goes through. One can agree fully that grammatical regularities spread from individual cases without therefore giving up the aim of describing the regularities that go into the production of sentences in a given speech community. Incipient regularities may one day become dominant, and this does not make them less interesting. 
One reason why it is necessary to supplement the emergent aspect of grammar is evident from the individual, cognitive point of view, cf. above p. 161. Even if grammar is always emergent, the speaker's language is always in a given state at a given point in time — and the cognitive resources are not identical with the individual, temporal flow of an utterance. Therefore Hopper, to be consistent, has to reject the existence of mental structures altogether, as indeed he does (1987: 145). But if conventions specifying how to express (simple or complex) meanings were never stored mentally, only "coming into being", we would never have ways of communicating that were actually available to us — we would be in a constant pidgin-like position. The same point can also be made from the point of view of the speech community: there is always a given range of communicative options open which are "in force" among the speakers;


but even if there is a great deal of variability, the domain of variability is bounded by a domain of non-existence — even if English is extremely variable, it is still different from Chinese. When I compare the views I have argued for with those of functionalists who want to retain more structure than Hopper, a crucial point is the interface between structure and function, the mechanism by which linguistic coding imposes structure on semantic substance. The existence of a linguistically structured range of semantic functions tends to be overlooked in functionally oriented approaches: it is unfamiliar to conceive of functions as being structured by language. When a language has several competing options within a functional domain (e.g. two degrees of politeness in pronouns), the language structures a functional domain in a particular way, and neither structure nor (coded) function can be properly understood without both sides being taken seriously. When this crucial ingredient is missing or downplayed, it may result in several types of misunderstanding. Functionalists who do not want to eliminate structure completely often retain elements of a generative-type structure. The functional element then comes out in the attempt to relate this structure to functional categories. This can be done in several ways. We have seen in Functional Grammar how the structures are motivated by semantic and functional criteria. The danger is then the circularity involved in the generative procedure of postulating underlying structural notions: correlations between two sets of facts only count if the two are established independently. If functions are postulated on the basis of structural distinctions only, or (underlying) structures are postulated by purely functional criteria, valid correlations cannot be made; rather, the two are assumed to be somehow the same thing. 
A particularly risky version of this strategy is to call structural categories by functional names, using "topic" for both structural categories in particular languages and for functional notions defined regardless of structure (cf. above p. 163). This pattern, in which a basically structural approach is furnished with functional trimmings, is often criticized by Givon (cf. Givon 1989: 214n). Givon rightly insists that functional domains must be established independently and then correlated with linguistic structure. This basic orientation works very well in the core area of Givon's interests, the "functional-typological" descriptions of cross-linguistic domains, where he describes a range of related functions and a range of coding options associated with them. In this area, where the criterion of identity is


functional and structure comes in as ways in which these functional domains manifest themselves, Givon has provided a wealth of penetrating discussions of the functional-structural interface. However, in relation to the description of language-specific phenomena I think the lack of an explicit level of organization at which the individual language structures the functional substance leads to a lack of precision. I shall give some examples of this. The typical deviance on the part of Givon from the pattern defended here is the opposite to the one he criticized above. Instead of attributing more functionality to structural categories than is really warranted, Givon is prone to regard the functions that he is interested in as criterial for the structural categorization of the specific language he is describing. A case in point is the use of this to introduce referents (cf. Givon 1990: 921), as in:

(51) So next he passes this bum...

The characteristic point about this usage is that the addressee has never heard about the bum before. In consequence, Givon talks about this as being an indefinite article, and posits a "grammatical marker INDEF" with two values, "important" and "unimportant", corresponding to this and a respectively. In principle it is conceivable that one use of this could detach itself completely from the demonstrative and constitute a homonymic island with the status that Givon describes. However, before that could be established, one would have to look at whether there were ways of seeing the introductory use of this as related to the ordinary demonstrative use (I am indebted to Lars Heltoft for discussion of this point). It would appear that the notion of "high deixis" (cf. Kirsner 1979), which is specifically associated with the demonstratives, could be regarded as a more structurally plausible interpretation of the notion of importance discussed by Givon.
It would be natural to say that this treats an unidentified referent as if it were already situationally definite, because it is so salient to the speaker — rather than say that this jumps to the other side of the structural fence and gets the status of a grammatical marker of indefiniteness. The same tendency is reflected when Givon (1993: 149) sets up a future "tense" in English with three manifestations: will, going to and "progressive". Here, the relation between structure and function is transparently one in which function is in itself criterial for positing structural categories. If the structural
organization of (functional) content in a specific language were higher on Givon's agenda, cases of this kind would not be permissible. One may then ask what empirical proof there is that the semantic side is "structured" rather than just "coded": in principle, the content might be the same and only the coding devices different. As evidence of the psychological reality of structure on the content side I suggest the existence of calques, in individual transfer as well as in substratum phenomena: unless the human speaker's semantics were structured, it is difficult to understand why the accustomed way of "cutting the pie" can be transferred to completely different expression systems. The tendency to presuppose a generative-flavoured structure also manifests itself when Givon occasionally goes into "pure" grammar. In fact, Givon has provided a better defence of some of the basic concepts in syntax, in the face of putative typological counter-evidence, than generativists have so far offered (cf. Givon 1995). Under the title "Taking syntax seriously", he lines up the type of evidence on both the content and the expression side that needs to be faced in positing a syntactic structure of the generative kind. Givon (1993: 30) also relies on a notion of syntactic structure that is close to early versions of generative grammar, operating with a level of deep structure which "corresponds most closely to its semantic structure". In simple clauses deep and surface structure more or less coincide and reveal the semantic structure "with minimal distortion". This is then applied to complex clauses of the classical kind, such as flying planes can be dangerous. But according to the argument above p. 188, there is a very good reason why such deep structures correlate with semantic structures: they are in fact essentially semantic structures, albeit intermingled in the characteristically unclear way with certain expression features (such as linearity).
According to Givon's basic linguistic architecture, language codes three "functional realms" (cf. Givon 1993: 21). The first is coded by words and covers conceptual meaning designating entities. Grammatical "structure" codes two distinct types of meaning: propositional semantics and discourse pragmatics. The three realms are said to be concentrically organized so that pragmatics includes propositional semantics which includes word meaning. The main difference in relation to the architecture I have suggested is that discourse coherence is said to constitute a special domain of meaning. Givon is aware of the fact that the double role of grammar in coding both single-proposition information and discourse coherence may tempt some
people to downplay the distinction. To defend it, he points to the interclausal links that are established and maintained by definiteness, interrogative forms, aspectual choices, etc. Discourse coherence in fact constitutes the major functional domain for grammatical coding, while propositional semantics is a relatively narrow area. I think Givon's views are correct, seen as observations about functions that are maintained in actual discourse; but I think the quasi-generative view of structure obscures the way in which these functions are coded in language. The coding which he sees as maintaining coherence corresponds to the predominantly functional types of meaning that were discussed above; but the recipe view of meaning implies that the actual coherence relations are not coded by language but created by the speakers in following the recipe. This means that we get a natural explanation for the fact that grammar serves both purposes: the meanings, including the functional ones, are organized into layers (including the proposition, which Givon takes to be the largest unit) — but these meanings demand to be plugged into the discourse context, thus crying out for discourse linkages. Hence, grammar does not code two competing domains. Rather, the content structure determines how larger semantic wholes are constructed out of the constituent meanings; and it is then the sum of the semantic properties and relations that determines how the speaker plugs the whole construct into preceding discourse. Givon's view of structure and function is basically taken over from biology; the biological aspects of the account I have given earlier are inspired by him. But Givon carries over the view according to which structure is basically the internal anatomy, and function is basically the external performance. The idea that functions are also structured by coding is outside this image.
But language is not an internal organ — it is a kit of communicative tools, forcing the speaker to structure his performance so as to match the functional specifications of the tools in the kit. This is the rationale for the commutation-based descriptive procedure. In fairness, I should stress that Givon is by no means unaware of the importance of structurally differentiated functions as coded by individual languages; the point of the critique above is to suggest that an explicit theory of the kind I am arguing for may be useful in avoiding occasional lapses in this awareness. The commutation procedure is also the basic difference in relation to Hallidayan "systemic-functional" grammar. As argued in detail by Butler (1988: 89-95), the distinctions on which Halliday-type descriptive procedures are built are not clearly constrained by linguistic facts; the
motivation for specific descriptions is in terms of an a priori conception of the type of semantic "substance" choices that the speaker might want to make. A case in point is the set of options associated with the choice between 'declarative', 'interrogative' and 'imperative' (cf. Halliday 1985). Halliday's description provides a two-way choice between "information" and "action", separating 'declarative' and 'interrogative' from 'imperative' — which is in accordance with both semantic and structural facts. It then sets up a distinction between "request" and "offer" in the information domain, distinguishing neatly between declarative and interrogative — but this is then applied also to the action domain, so that there is an option between "requesting action" and "offering action". This yields a very elegant two-by-two set of semantic options — but the category "offer of action" has no relation to a comparable expression option. Why this choice should then be considered part of a description of the code is therefore unclear. The explicit recognition of content syntax also has important implications for clarification of the notion of iconicity. The significance of this concept must be understood in relation to the argument against autonomous generative structure, and arguments for iconicity also tend to presuppose a generative-flavoured notion of structure, yielding a comparison between "structure" or "form" on the one hand and conceptual or functional factors on the other. The aim of studies in iconic motivation (cf. the introduction to Haiman 1985) is typically to explain structural phenomena by reference to factors outside linguistic structure itself, thus chipping away at the autonomy dogma. A car is the way it is because it is used for driving; similarly, language is the way it is because of what it is used for. But if language itself structures its functional domain, there is a missing element in this discussion. 
We then have two structures that need to be taken into consideration: the structure of expression, and the structure of linguistic content. Usually, as already discussed in relation to generative grammar, linguistic structure is understood basically as "expression structure"; and iconicity is then treated as a relation between expression structure and a content level of which it is difficult to say whether it is understood as coded linguistic content or extralinguistic conceptual or functional domains. These two relations, however, are crucially different, and ought to be kept clearly distinct in the discussion. If we investigate the relation between expression structure and content structure (expression and content syntax,
typically), we are talking about the properties of the linguistic medium itself. If we are talking about the relation between properties of linguistic expression compared with non-linguistic phenomena we are talking about the relation between language and context. To illustrate the difference, I shall take Behaghel's first law (1932: 4), also known as the proximity principle (Givon 1991: 89), one of the generally recognized iconic principles (cf. especially Bybee 1985): elements that are conceptually close tend to be expressed together. This may be understood as referring to the relation between expression structure and content structure, in which case it can be translated into a structural sign: semantic closeness (in terms of scope, typically) is signalled by putting the expressions close together. It may also be understood as referring to extralinguistic closeness, in which case it means that things which are associated conceptually or functionally tend to be expressed together. Both these interpretations express plausible generalizations about language. My point here is just that they are not the same thing. The "intra-linguistic" interpretation expresses the familiar observation that the linear order of expression elements signals the scope of content elements. This is intuitively most obvious when two orders are equally possible: fake Japanese paintings is not the same thing as Japanese fake paintings. This is a relationship between the two sides of language: the order of expression elements reflects the scope of content elements. The extra-linguistic understanding can be found for example in Bybee (1985: 13), where she states that the concept 'walk' can be co-lexicalized with 'through water' (cf. the word wade) because of the mutual pragmatic relevance of the two elements: "These two semantic notions may be expressed together in a single lexical item because whether one has one's feet on dry land or in water is quite relevant to the act of walking". 
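The intra-linguistic reading, on which linear order signals scope, can be given a toy formalization (a hypothetical sketch for illustration, not part of the original argument): if each prenominal modifier is assumed to take scope over everything to its right, the two orders of fake and Japanese come out with different content structures.

```python
# Toy sketch: the linear order of expression elements signals the scope
# of content elements. Each prenominal modifier is assumed to scope over
# everything to its right (right-branching bracketing).

def scope_bracketing(words):
    """Return a nested list in which words[0] scopes over the rest."""
    if len(words) == 1:
        return words[0]
    return [words[0], scope_bracketing(words[1:])]

# "fake Japanese paintings": fakes of (genuine) Japanese paintings
print(scope_bracketing(["fake", "Japanese", "paintings"]))
# ['fake', ['Japanese', 'paintings']]

# "Japanese fake paintings": fake paintings that are Japanese
print(scope_bracketing(["Japanese", "fake", "paintings"]))
# ['Japanese', ['fake', 'paintings']]
```

The nesting differs with the order, which is all the intra-linguistic interpretation of the proximity principle claims: the same minimal signs, combined in different linear orders, code different scope relations.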
Both principles typically operate at the same time: elements that "naturally" belong together are typically placed close together also in terms of scope. Since extra-linguistic affinity cannot alone explain cases where both orders are possible, and coded scope relations cannot alone explain cases where only one order is possible, we need them both. An explicit level of content syntax forces us to make clear which case we are addressing. Iconic motivation is sometimes understood as competing for the same territory with arbitrariness, such that arbitrariness is seen as the basic structuralist assumption while functionalists look for iconic motivation. For at least two reasons, this is a simplistic way of looking at it. First, as pointed out above, arbitrariness is a functional property: it can only exist
when function is presupposed. It does not make sense to ask whether the Atlantic Ocean is arbitrary or motivated. Motivation and arbitrariness are twins, both offspring of function-based structure: they can only operate if there is a lack of total determination, so that some things may be better than others, and some things may be equally good. Thus, whenever we have some degree of motivation we also have some degree of arbitrariness. Secondly, in the special case of linguistic coding, it would be more revealing if the issue were considered with explicit reference to linguistic levels. At the minimal sign level, arbitrariness is dominant and iconic motivation weak. There is no particular reason why the expression cheval should designate the concept 'horse'; but as discussed above, the sounds do have some degree of general motivation. Once we take for granted this predominantly arbitrary coding of minimal signs, however, the relationship is reversed at the combinatory, syntactic level: it would be absurd from a functional point of view if the semantic properties thus coded did not to a fairly high degree determine how the minimal expressions were combined. Here motivation is strong and specific, and arbitrariness weak. If there were no relationship between semantic properties and principles of combination, we would in fact have no combinatory level: all signs would be minimal, since nothing about sign meaning could ever be inferred from the meaning of elements. The very existence of grammar thus in itself reduces arbitrariness (instead of being the locus of it), as already noted by Saussure (Cours p. 182; cf. Engberg-Pedersen forthcoming for a thorough discussion of iconicity from the point of view of Danish functional grammar). When we go above the levels at which structural conventions apply, the conventional element disappears altogether, and situational motivation takes over. Since there are no langue mechanisms, the issue of their arbitrariness disappears.
We may still look for motivation in the relations between messages; but it is not the language that forces an author to put chapter one before chapter two. The temporal sequence of narrative clauses, often used as an example of iconicity, is already a case in point; it is not a feature of the code, only of the type of message that the speaker has chosen to convey — to tell a story you have to convey a sequence of events moving forward in time (for a discussion of the cline of arbitrariness, cf. Togeby 1993). The level at which an interesting discussion can occur is therefore grammar, where we are above the arbitrary minimal signs, but within the
domain of linguistic coding. Givon invokes the biological analogy to account for cases where iconicity is absent, with vestigial organs as an example: just as some organs may lose their functions but hang on in more or less reduced form, so languages may have "excess structure" which used to have a function but has lost it somewhere along the diachronic line. In terms of the picture presented here this case belongs within the "intra-linguistic" domain of iconicity, where the relation between expression and content is at issue. What Givon calls "excess structure" is more precisely describable as "excess expression"; "excess" because it has lost the content it used to have. It is thus more a question of isomorphism than iconicity: whether there is an identifiable content component corresponding to an identifiable expression component. The converse case is that of phonological erosion eliminating expression elements, so that content distinctions end up unexpressed. But even if we take away such "late" developments, iconicity cannot account for all the remaining mechanisms at the combinatorial level. Every time we add coded meaning we select particular options and rule out others, and this involves an element of choice, also on the combinatorial level. Such a choice is by definition not reducible to circumstances, although it may be motivated by circumstances. Because of this element of motivation, "conventional" is a better word than "arbitrary". Left-branching and right-branching modification are equally iconic; which to choose is a matter of convention. Most languages have some elements of both branching strategies: since syntax has both content and expression, this situation entails an element of convention. Therefore we cannot expect to go from minimal meanings to clause meanings via iconic motivation alone.
This type of fact is one of the arguments for the autonomy of syntax; but the autonomy is only partial, and must be understood as a variant of the (partial) arbitrariness in the relation between the expression and content sides of language (cf. p. 178). The position of Cognitive Grammar on this point is not straightforward; the existence of "invisible" phenomena on the content side is recognized, and hence there is no commitment to isomorphism between the two sides in that sense. But going in the other direction, beginning with the expression side, there is a fairly strong assumption that each item must have its own independent content side attached to it. In a number of cases, Langacker has demonstrated that seemingly meaningless expression mechanisms could plausibly be related to a corresponding content element, the analysis of raising constructions being a case in point (Langacker
1993b). However, there are some cases where I think this cannot go all the way towards making the two sides match in a one-to-one fashion. Langacker (1987a: 354-55) convincingly illustrates how the occurrence of English periphrastic do can be seen as manifesting the ending point of a process of bleaching, whereby all that is left is the "process" meaning that is also part of the main verb in do-constructions. I think this is a revealing way of describing the semantic trajectory of the diachronic development; but it does not mean that do can be accounted for in an isomorphic manner. In this context it must be remembered that cognitive grammar rejects the process metaphor: the grammar in itself does not create combinations — the speaker does this. Therefore, in attributing content to such near-empty expressions, Cognitive Grammar is casting the description in terms designed for free combinability — where the speaker selects content elements that are suitable for his "unified conceptualization". Therefore, there is something missing in this picture in the case of periphrastic do — because it is not selected in order that it may contribute its semantic mite to the conceptual whole, as "normal" elements are. Even if the obligatoriness of the construction can be captured by giving it "unit" status, the functional relationship between content and expression is obscured if we factor out meanings in elements that are not independently usable by speakers. The commutation procedure here serves to constrain the positing of content elements, so that only contents that can be added or subtracted on their own have independent status in the grammar. In the case of do-periphrasis, this means that we cannot attribute independent content to do, if we accept its obligatoriness in the presence of not + a simple main verb. This does not mean that do is meaningless, either, as standardly assumed. 
Instead, do co-expresses the meaning of the combination not + main verb: a complex content, with an even more complex expression side. This tallies with Langacker's account in giving do a share in the responsibility for the meaning of the whole — but avoids the attribution of independent meaning to it.14

By focussing on content syntax, the position I have tried to establish rejects both the autonomy of syntax and the non-existence of syntax. Against autonomy I side with Cognitive Grammar in insisting that all combinatory mechanisms have to be explicitly related to the content and expression elements that are combined; against the position where syntax is fully derivable from functional principles as applied to individual items, I want to emphasize that mechanisms of combination have a distinct and
extremely important role to play in human language. This role must be understood in relation to the top-down evolution of syntax: we have single lexical items because we became capable of differentiating utterance meanings into smaller components — and syntax is the link between the new flexible lego elements and the complete meanings that are necessary for communication. In suggesting what such a syntax could look like, I have tried to indicate how such an account would answer some of the basic problems in relating function to structure: the relation between langue and parole; the domain of grammatical structure; the division of labour between linguistic meaning and pragmatic inference; the question of iconicity versus arbitrariness. I have also tried to eliminate from the agenda some concepts and questions that have untenable presuppositions, such as the distinction between "underlying" and "surface" structure, and the issue of the borderline between syntax and semantics. Some of the problems in relation to the role of semantics in linguistic structure are due to an unclarity with respect to whether semantics is inside or outside linguistic structure. In the generative tradition, the role of "straight" syntax presupposes that semantic structures are not really linguistic structures (cf. Jackendoff 1987b: 355). The discussion of isomorphy between expression and content is thus mixed up with the discussion of whether language reflects extralinguistic structures point-by-point. When these two elements are mixed, you are wrong no matter whether you support one position or the other. Perhaps the chief merit of "content syntax" is that it forces a clarification of this issue: are we talking about the way language organizes its content side, or are we talking about the relation between extralinguistic facts and linguistic structure? This is also a point on which there is a difference in relation to Nuyts (1992).
Nuyts argues convincingly that a number of linguistic choices depend on access to conceptual information that is "deeper" than the predicational level at which standard Functional Grammar describes clause structure, and that we therefore need to have a place for that type of information in our theory of language. He then (Nuyts 1992: 223) suggests that the appropriate way of capturing such deeper information is to posit a level of basic conceptual representation which is neutral as between different types of perceptual and linguistic input. Such a representation is designed to do the same job as Johnson-Laird's mental models and Fodor's and Jackendoff's mental languages; but it has a more "developed" structure than mental models and is not itself a language (Nuyts 1992: 233, 237).

There is much that is attractive about the design Nuyts suggests for this level. Nuyts is fully aware that there is an interface problem, and argues consistently against any attempt to see linguistic structures as more or less identical with conceptual structures in the manner of Dik (1989). The problem is only its status as providing "basic representations in the grammar" (Nuyts 1992: 236). In terms of the distinction between substance and structure such representations cannot be part of grammar: they describe the substance that constitutes the non-linguistic input to the coding process. One of the uses to which Nuyts puts such representations is to capture semantic similarities between clauses that are different on less deep levels, such as

(52) a. Johnson has 35 employees
     b. Johnson employs 35 people
     c. There are 35 people working for Johnson
     d. Johnson gives work to 35 people (Nuyts 1992: 226)

As I see it, the fact that different utterances may convey the same conceptual content does not mean that they should be described as identical in terms of linguistic structure. It is a universal characteristic of human languages that the semantic potentials of items as well as constructions overlap extensively; therefore they can convey the same thing without being linguistically identical. Since structure presupposes substance, language can only be described if we can make reference to such (substance) similarities; but linguistic items are not "underlyingly" identical because of them. Putting neutral conceptual representations into grammar to capture such similarities means confusing the structure of the message with the structure of the medium.

To sum up: Function-based structure has two design features. First, it exists only as a way of organizing "substance" elements in a functional context: structure presupposes function. Secondly, it is motivated by (but cannot be fully predicted from) functional properties in abstraction from the structure itself, i.e. 
"substance" properties. As I see it, these two features constitute the crux of the whole autonomy issue. What is true about autonomy is expressed in the second feature: the description of language does not come about as a by-product of descriptions of other things. The classical assumption of isomorphy between language and the world is
untenable, precisely because language is inside the world and through its presence imposes a structure on those parts of it that fall under its purview — a structure that does not and could not exist apart from language itself. To that extent language is autonomous, and its autonomy is bound up with its structure. But this autonomy means only that language exists, i.e. that it is not an entity that should be reduced away by Occam's razor; in Putnam's phrase, it is a natural kind with certain intrinsic properties. This type of autonomy it shares with all other natural kinds. Yet we do not speak about the autonomy of tigers, or about tigers being purely structural. And no-one feels any urge to deplore the loose talk of "tigers as such", which overlooks the fact that we only know tigers from concrete tiger events in pragmatic tiger contexts. Perhaps it is time to stop arguing about language in those terms, too.

Part Three: Tense

1. Introduction

The third and last part of this book is devoted to tense. I hope to show that my views on meaning and structure have advantages not just on the abstract level of principle, but also as a framework for describing the linguistic details of a familiar grammatical area. Hence, what I say about tense is simultaneously about the overall theory, embodying a claim that we can understand tense better if we see it in terms of this picture. For this purpose tense is well suited as a testing ground, because most of the vast literature on the subject has dealt with tense as a comparatively isolated domain. Much of the attractiveness of the subject has presumably been that it was a naturally bounded area that could be handled without immediately getting into the universe and other related matters — which, as it were, I have taken as my point of departure. Of course, the description of tense cannot serve as an illustration of the usefulness of the general theory if it does not stand on its own feet as a reasonably attractive account of tense. With respect to my ambitions in the area of tense, I hope some of the characterizations of linguistic data may have a certain degree of freshness — but no-one can go seriously into tense, let alone tense in English, without realizing that this is well-trodden ground. On the most concrete, descriptive-observational level, I doubt that there is any single observation on major areas of tense usage that I or anyone else could make that has not already been made somewhere in the literature — at least if you read the most favourable interpretation into it rather than look for weak points. Therefore, the two main points I try to make with respect to tense are not in the shape of new observations, but concern the way observations fit together in a larger picture. The first point is about the precise nature of the meanings of the tenses, and is based on the functional theory of meaning that I argued for in Part One.
The second point is about the precise nature of the structural collaboration between linguistic elements that play a role in relation to tense, and is based on the function-based clause structure that I argued for in Part Two. Together, the two points illustrate how linguistic facts are embedded in context without being reducible to context. An important aspect of the approach I present is the way it seeks to view each linguistic element as belonging in a context which as a subpart includes a structural context: in order to get a description of any item in language right, you have to relate
it to all relevant factors, structural as well as situational. Unless you put all relevant elements inside as well as outside language within your purview, you may unwittingly either incorporate too little in your linguistic description, because you are overlooking something important — or incorporate too much, because the specific linguistic category you are working with (in this case tense) is the only place you have to put it. The parallel treatment of situational and structural context, as opposed to the customary antithetical relation between them, is part of the theoretical approach: a function-based structure emphasizes both types of context-dependence. In Parts One and Two above some of the most important elements of the pragmatic and structural context have been described. The most immediate non-linguistic context of a linguistic item-in-use includes the ongoing mental construction process that brings about an interpretation of the utterance. In this process the principle of sense stands as the entry condition for linguistic contributions: each coded element must be understood in a way that contributes to whatever is going on. Several types of interactive and cognitive skill are necessary to carry on this interpretive process without being specifically coded into the linguistic medium. The most important interactive skill is the ability to take part in the pattern of life that current language use is part of; one interactive practice that is relevant in relation to tense is storytelling (cf. below p. 483). The most important cognitive skill is the ability to store and structure cognitive input so as to keep track of what is going on. In keeping your record straight, the ability to keep track of mental spaces is an important subcomponent (cf. below p. 429).
In discussing the structural context, I begin with the relations between the tenses themselves; next I discuss the relations between tense and other linguistic items in the simple clause: time adverbials and lexical meanings of verbs; third come elements that involve relations in complex clauses (subordinators and embedding matrix verbs). If the dual strategy I have outlined is successful, the reader should end up with a sense of how the theoretical and the descriptive levels throw light on each other: tense illustrates the practical applicability of the theory of function and structure, while the theory accommodates some central properties of tense in an attractive way. There are three chapters, plus a general conclusion. The first chapter is on the meanings of the tenses; the second is on the interaction between tenses and other elements in the simple clause; the third is about problems that go beyond the simple clause.

2. The meanings of tenses

2.1. Some central positions and concepts

This section is meant to give a survey of the presupposed background against which my own account of tense must be seen, before the academic sniping begins. Apart from summarizing what I take to be the major established views on tense, I also take up the way in which some of these views have been influenced by linguistic theories — usually for the worse. Tense is one of the oldest subjects in the history of language theory. Aristotle focused attention on it when (cf. above, p. 9) he noted temporal boundedness as a crucial property of "rhema", the verb phrase. In the classical word-based view of language structure, with the morphologically oriented descriptive practice that went with it (cf. above p. 158), tense came to occupy the place of the prototypical "accidence" category of the verb in the descriptive grammars of Latin and Greek. All those who have had a taste of classical education will remember the page-long verbal "paradigms", in the original sense of examples to be followed (on the teacher's demand!); and tense was the main organizing concept in verbal paradigms, just as case was in the nominal paradigms. This means that semantically heterogeneous distinctions became organized under the general heading of "tense"; the category thus traditionally included the distinction between imperfect and perfect forms in Latin. This usage (like many other more or less accidental features of classical grammars) lives on in modern descriptive grammars, and still to a large extent determines what the linguistic community as a whole understands by the term. The forms that are now generally treated under the heading of "progressive aspect" in grammars of English used to be called "continuous tenses" simply because of the central status of "tense" in the grammar of the verb. Both semantically and structurally, classical universalism (cf. above p. 
10) has been and remains extremely influential in relation to the way we think of tense. Semantically, tense is traditionally supposed to reflect a mental segmentation of time which in turn reflects the ontological nature of time as divisible into past, present and future: the present as a point in the middle, and past and future stretching indefinitely towards left and right. Because of the assumption of harmony between language, mind and the world, traditional grammar did not feel the need to distinguish very carefully between ontological, semantic, morphological and syntactic categories; the terms past, present and future were used indiscriminately about the times as such, the meanings of the linguistic forms, and the linguistic forms
themselves. Structurally speaking, the main features of the Latin system acquired the status of a kind of "underlying structure" in terms of which other languages were to be described: if there is only one grammar, which ultimately reflects the world, there is no need to worry about language differences. In this area, perhaps even more than in others, Jespersen (cf. Jespersen 1924: 254-77) enjoys the status of founder of the modern discussion. He points out the errors involved in the traditional Latin-based descriptive practices, sets up a clear theoretical and terminological distinction between categories of the verb and subdivisions of time, emphasizes the right of the individual language to be treated on its own terms, and carefully examines the implications of beginning the description at the formal (expression) end as opposed to beginning with "notion or inner meaning". The latter approach, as exemplified in The Philosophy of Grammar, he sees as a way to orient description so as to show "how each of the fundamental ideas common to all mankind is expressed in various languages" (1924: 346-47), thus making clear that it is not the universalist ambitions but only the Latin bias that he is criticizing. Jespersen's reformist zeal, however, did not include doubts about the assumption that tense meaning consisted in reflecting subdivisions of ontological time (as depicted in the time line). In fact he was so convinced of the time-line system which he proposed that he thought it was "logically impregnable": there could be no other semantic frame within which individual tense systems could be placed. Disposing of various rival accounts by means of these principles, he arrives at a theory with seven tenses corresponding to seven points on the time-line. These are the three primary tenses (preterite, present and future), combined with secondary tenses expressing the relations "after" and "before".
Arithmetically this should give nine tenses, a solution suggested by the Danish Latinist Madvig; but Jespersen argues that since the present is a point rather than a period, it is notionally impossible to operate with "before" or "after" relations within the area of the present, as opposed to past and future. He then proceeds to discuss the remaining seven, as he assumes, universally distinguishable tenses, listing reasons for recognizing or rejecting them as elements of particular languages. As an example, he claims that there is no completely pure future in English and Danish, since the verb will (Danish ville) retains an element of volition (p. 260), although there are cases and tendencies that approach the pure future. The present perfect is kept out of the system of tenses, since it includes not only time reference "but also the element of result" (p. 269).


One of the first influential monographs to diverge markedly from traditional assumptions about time and tense was Bull's Time, Tense and the Verb (1960). Bull argues strongly for the need to adopt a basically semantic point of view. That was natural in the grammatical tradition which was Jespersen's background, but it brought an immense amount of toil and frustration on Bull, working as he did in the context of American structuralism (cf. the discussion of Bloomfield above p. 171). His account was received with a paradigm-bound and hidebound scepticism that led to a series of demands for revision in one direction and another, until the author finally put his foot down:

If previous experience can be used to predict the future, it may be assumed that this study will prove to be too theoretical for some, not philosophical enough for others, too practical and detailed for still others, and since the Indian languages of the New World are not included in the survey, hardly exhaustive. However, after seventeen years I find myself peculiarly uninterested in whether or not I have exhausted the subject. The subject has exhausted me. (Bull 1960: vi)

Among the pioneering insights which did not promote its popularity at the time, the book contains a lucid discussion of the insufficiency of purely distributional analyses for describing language structure. There is also a clear distinction between objective facts and the conceptual organization of facts, as well as a discussion of the way in which linguistic and conceptual categories interact. Within the subject matter itself, the pervasive importance of aspectual distinctions for a proper understanding of tense is not only emphasized in principle but worked out in meticulous detail in relation to Spanish, which is the main descriptive target. As a result of his intellectual isolation Bull finds himself forced to work out on his own all the ancillary hypotheses about the world and the mind which are needed to describe the context in which language belongs. The philosophy of time which he develops leads him to one of his central points, namely that tense morphemes do not deal with time in the strict sense of the word. Time as such is conceived as abstract forward-moving duration, the fourth dimension of reality in addition to the three spatial dimensions, and tenses do not refer to segments of time, thus conceived. That job is done by calendrical systems, which in contrast to grammatical tenses serve as ways of placing events in time so as to keep track of them (compare the modern concept of "time manager"). Instead, verb
tenses according to Bull involve the relationship between an experiencing subject and an event, placing it either as "recalled" or "anticipated" in relation to the experiential present.1 The relations denoted by tense morphemes are primarily relations of temporal order. Thus, by saying he came we designate an event, but we do not fit it into a position in time: all we say is that it is anterior from the observer's point of view (p. 18). This means that "past tense" is seen as a "vector", in Bull's terms: an indication of "order" (or "direction"). There are two such vectors, i.e. the directions "backward" and "forward"; in addition there may be "tensors" in the system, i.e. indications of how far to go in the direction indicated; a tensor plus a vector thus combine to define a stretch of time — but the primacy of the directionality ("recalling" vs. "anticipating") means that the reference to time is only secondary. The specific character of each tense form is determined by the "axis of orientation" chosen by the subject. The "axis of orientation" is the point from which the event is viewed.2 Among the possible axes, he accords primacy to the present and the recalled points as being located in reality, while the anticipated points are unrealized. One merit of the system is that it becomes possible to pinpoint the difference between the tense systems of languages without a past-present distinction (such as Hawaiian and Mandarin) and the standard average European system, in that the former do not indicate axes of orientation, but limit themselves to indicating order and simultaneity. The detailed analysis of the Spanish system contains some general points which deserve to be mentioned separately. With the single exception of what is usually called the "preterito" (the perfective past), all tenses are divisible into two sets that contrast only with respect to their orientation towards either the moment of speech or a "recalled point" (p. 55).
The tenses oriented towards the moment of speech are called the prime tenses as opposed to the retrospective tenses; and the possibility of getting along with only the prime tenses, once the axis of orientation has been established, is discussed and exemplified in great detail. Also, he describes the possibility of using the tense system in secondary ways: the "probability" use when future tenses are used about actual times, and the "irreality" use when past forms are used about actual times. All tense forms are then carefully discussed with respect to the whole panoply of relevant factors: the aspectual distinctions, the full or reduced system potential, the secondary uses, substitution possibilities and actual frequency. In summing up, Bull underlines the complete irreconcilability between
describing "structure" as conceived in his post-Bloomfieldian era and describing the functions of the tense forms as used by the actual speaker. In order to make room for a linguistics with some relation to the speaker's situation, he appropriates the term "applied linguistics". However different from each other glossematic and taxonomic structuralism were, and however fanatically empirical each believed itself to be, as opposed to their speculative predecessors, they came up against similar problems in relation to actual empirical language use (cf. Gregersen 1991, 2: 242). When theories meet data, the data often have to give way. Hans Reichenbach (1947), working as a logician, was not constrained by the semantic prohibition period in American linguistics. He gave an account of verbal time reference whose central terms came to dominate a great deal of the subsequent discussion, also in linguistics. His analysis introduced three points in time: the point of the event (E), the point of speech (S) and finally the point of reference (R), which was the central innovation in his theory and the point on which he differed from Jespersen's account. Instead of talking about points, I follow a number of authors (cf., e.g., Declerck 1986: 321) in talking about "times", since the trichotomy applies irrespective of the distinction between points in time and periods of time. Reichenbach's system makes possible thirteen different configurations, which he reduces to a system of nine different "fundamental forms" (three points multiplied by three relations: before, simultaneous and after), getting us once more back to Madvig's basic system. As a complete theory in itself, this account is not particularly appealing; its interest (and its enormous impact) is due to the contribution of the concept of reference time to the description of the perfect tenses.
The past perfect especially, which was Reichenbach's illustrative example, could be appealingly accounted for in terms of an event time E which lay not only before the speech time S, but also before a reference time situated between S and E. Quoting from Macaulay's account of Charles II of England, he illustrates how events described in the past perfect are viewed from a later reference time: the historian stops at the time when the scales fell from the people's eyes and looks back on what had happened up to that point. Thus, in terms of Reichenbachian points, the semantic formula for the past perfect is E — R — S, where the dashes indicate temporal sequence: E denotes the Restoration, R the time when the observer evaluates this
event, and S the time at which Macaulay writes. The simple present is described as E,R,S (where the comma indicates simultaneity): the time of the event, the time of reference and the time of speech are the same. The simple past and the present perfect are kept apart by means of R, in that the past has R simultaneous with E (E,R — S), while the present perfect has R identical with S (E — R,S). This account, where the use of the point R is less transparent than in the case of the past perfect, is supported by reference to the relationship between tense form and time adverbials. Reichenbach suggests that time adverbials must always indicate reference time; the role of time adverbs is assumed to be to fix the time in relation to which events are viewed. Since the past tense goes with adverbs indicating past time, this suggests that E and R coincide in the simple past. Conversely, the occurrence of examples like now I have understood it suggests that the present perfect has R in the present, corresponding to the adverb now.

R.L. Allen's study The Verb System of Present-Day American English (1966), like Bull's, was worked out at a time when the pressure from the anti-semantic tendencies in American linguistics was considerable, in the transition period between taxonomic linguistics and transformational grammar.
Unlike Bull, he expresses acceptance of the formal orientation, subscribing to "Chomsky's stand, endorsed by Weinreich, that the grammatical description of a language should be autonomous vis-a-vis the semantic description" (Allen 1966: 86); he also dismisses merely "psychological" differences, such as that between I have lived in New York for the last ten years and I have been living in New York for the last ten years, as "too subjective for proper linguistic analysis", and parallels the choice with that between describing a figure as a square inside a circle or as a circle around a square: "when two modes of expression seem synonymous under certain conditions, the linguist can only note the possible choices and describe the formal differences between them..." (Allen 1966: 94). His background in the teaching of English, however, has made it clear to him that without a reliance on the analysis of meaning no account of English verbal structure is likely to be revealing, and he, too, hints at the unsuitability of structural descriptions for practical purposes (Allen 1966: 21). So although he can find no structural principles which exactly match his sense of practical needs, and his views of meaning are clearly influenced by the intellectual climate, he manages to combine a taxonomically inspired account of form, including meticulous attention to allomorphic detail, with a very interesting discussion of verbal meaning that
reveals his allegiance to principles of formal description as a form of forced loyalty. The most interesting contribution made by Allen is perhaps his insistence on the notion of "identified time", especially as constituting the crucial difference between the past tense and the present perfect. He spells out the parallel between the definite article, marking a noun phrase as denoting something identified, and the past tense, marking an event as taking place at an identified time; and as a consequence of this insight, he reorients the table of times so as to have them organized by a "time of orientation", which is the identified time of either past or present, with the relations "same", "after" and "before" establishing the necessary distinctions between the simple, future and perfect tenses; we thus avoid the proliferation of future tenses that results from the standard tripartite picture, where "past" and "future" are parallel and symmetrically arranged around the present (Allen 1966: 152-158). This organization, which retains the advantages of both Jespersen and Reichenbach and has the drawbacks of neither, is not generally credited to Allen, possibly because it is fairly close to Bull's (however, it avoids the problem with Bull's "axes of orientation", namely that these are not inherently limited in number; cf. p. 318, n. 2). Bull's and Allen's difficulties testify to the fact that the anti-semanticism of mainstream American linguistics made tense a problematic area to handle. Although interest in tense never disappeared, the next influential theory of tense, that of Comrie, did not make its appearance until the formalist euphoria had begun to lose momentum; Comrie's books on tense (1985, cf. also Comrie 1981) and aspect (1976) had an important role in putting these semantico-morphological subjects back on the agenda in linguistics.
His general theory builds on two foundations: a critique of Reichenbach's descriptive principles and a cross-linguistic investigation of verbal time reference. With respect to Reichenbach, Comrie points out (cf. also S. Vikner 1985) that relations between times are binary rather than ternary. This is important in relation to the future perfect, which in Reichenbach's system came out as ambiguous between three readings: when you say he will have left, the event of his leaving can be before, simultaneous with or after S (yielding the three formulas E — S — R, E,S — R, or S — E — R). If instead we relate only two points at a time, we get the formula "E before R after S", which captures the time reference of the form without involving ambiguity. Another reform is to introduce the possibility of more than one reference
point, motivated in particular by the so-called "conditional perfect" (he would have played, cf. below p. 361), which Reichenbach did not consider because he did not include combinations of "past" and "future". A third major point of criticism against Reichenbach concerns the status of the present perfect, which Comrie wants to eliminate from the tense system altogether (cf. below). Furthermore, Comrie eliminates the redundancy that is involved in Reichenbach's use of reference times in his formulas. R was, as we have seen, generalized to all tenses primarily because it was linked with time adverbs. This theory does not hold true in all cases; in the past perfect, time adverbials can refer to E as well as R, compare the most natural readings of (53) and (54):

(53) Yesterday he had left ("yesterday" = R; E lies before R)
(54) He had left at five ("at five" = E; R is implicit)

R is thus generalized on doubtful grounds, and does not always have any clear function. Accordingly, Comrie allows reference times only when they do not coincide with either E or S. This leaves the present tense (where E and S coincide) as the only instance of simultaneity between points. To put it differently, Comrie only uses R when it is needed in order to keep tenses distinct from each other. Comrie defines tense as "grammaticalized location in time", i.e. in terms that combine notion with structure, and in effect follows the programme outlined by Jespersen: armed with a definition in terms of referential-semantic substance, with the proviso that it must be associated with a grammatical form in order to fall within the definition, he sets out to look at the whole cross-linguistic spectrum of linguistic phenomena that come under his definition. In the introduction, he takes pains to rule out traditional sources of error.
Among these we find the tradition from classical grammar to use the word "tense" about forms that differ in terms of aspect ("internal" temporal constituency rather than temporal location, Comrie 1976: 3). He also discusses the problem of distinguishing between central and secondary meaning and the pervasive difficulty in separating elements of interpretation due to coded meaning from elements due to Gricean implicature. His conclusion, as elaborated in his book, is that it is in fact possible to establish context-independent primary meanings for tenses and set up a coherent grammatical field with strong cross-linguistic
restrictions on the range of possible forms within the field for any given language. Comrie distinguishes between "absolute" and "relative" tense. Absolute tenses are located with respect to the moment of speech, while relative tenses are located with respect to a contextually given reference time. The distinction is illustrated with reference to time adverbials: yesterday is absolute, whereas the day before is relative. In terms of this distinction, he gets three absolute tenses: past, present and future, as in the classical time line. Purely relative tense is found in English almost only with participial, i.e. non-finite, forms (cf. Bull's comparison of languages that mark "axes of orientation" with languages that only mark precedence relations). However, there is a hybrid category of "absolute-relative" tenses, where the location involves both a relation with the moment of speech and a relation with a contextually given reference time. These forms, which include the pluperfect, have an R as well as S and E in their formula, because R does not coincide with either E or S. Apart from the tense distinctions in terms of location, he also discusses distinctions in terms of remoteness, drawing on Dahl (1984). Spelled out in words, the overall formula which summarizes his theory says that tense systems relate Event time to Speech time, possibly via some Reference times; and the temporal relations may or may not be marked for temporal distance. One of the most intensively discussed elements of Comrie's theory is the elimination of the present perfect from the tense system, in effect going back to Jespersen's description of English.
There are several reasons for his position, the primary one being that the present perfect (according to Comrie) cannot be wholly accounted for in terms of location in time: E may be located in exactly the same way whether we say I have done it or we say I did it, so in order to account for the difference we need to invoke a notion of "current relevance" (cf. Jespersen's "element of result") which is not part of the semantic field that is marked off by Comrie's definition. To defend this choice, Comrie enumerates a number of respects in which the present perfect behaves differently from the tenses that are generally said to be part of the same "perfect" system, especially the pluperfect (which cross-linguistically does not always occur together with a present perfect). One case in point is the much-discussed incompatibility between adverbials indicating definite past time and the present perfect in English (cf. below p. 416).
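The contrast between Reichenbach's linear formulas and Comrie's binary-relation reform can be made concrete in a short sketch. The following code is purely illustrative (the dictionary, function and variable names are mine, not Reichenbach's or Comrie's): each tense formula is represented as a linear ordering of E, R and S, and the future perfect is shown in Comrie's binary format.

```python
# Reichenbach's tense formulas as linear orderings of E (event time),
# R (reference time) and S (speech time): a dash marks temporal
# precedence, a comma marks simultaneity.
REICHENBACH = {
    "simple present":  "E,R,S",     # all three times coincide
    "simple past":     "E,R - S",   # R simultaneous with E, both before S
    "present perfect": "E - R,S",   # E before R, with R coinciding with S
    "past perfect":    "E - R - S", # E before R, R before S
}

def orderings(formula):
    """Split a formula into groups of simultaneous times, earliest first."""
    return [group.split(",") for group in formula.replace(" ", "").split("-")]

# Comrie's reform relates only two times at a time; the future perfect,
# ambiguous between three readings as a linear ordering (E - S - R,
# E,S - R, S - E - R), becomes one pair of binary relations:
# "E before R after S".
COMRIE_FUTURE_PERFECT = [("E", "before", "R"), ("R", "after", "S")]

print(orderings(REICHENBACH["past perfect"]))  # [['E'], ['R'], ['S']]
print(orderings(REICHENBACH["simple past"]))   # [['E', 'R'], ['S']]
```

The point of the contrast is visible in the data structures themselves: the linear notation must fix the relative position of every pair of times, whereas the list of binary relations simply leaves the relation between E and S unspecified.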


Tense has also been approached from the point of view of discourse and literary criticism. To some extent the literary dimension is represented in the grammatical tradition, since traditional grammar was part of the humanistic tradition, in which language and literature belonged to the same scholarly field of investigation. Tense usage in fiction has some features that are salient both from the point of view of literary interpretation and from the point of view of the grammar of tense. An obvious instance is the choice of tense in narratives, a subject on which there is a massive literature. There are two central types of problem in this approach to tense: one is the role of tense in determining the function of a clause in the discourse context (degree of emphasis, immediacy, etc.), and the other is the manipulation of point-of-view: the temporal perspective depends on the eyes through which we see (as demonstrated by the role of tense in varieties of "free indirect style"). Also, discourse representation theory, in accounting for mechanisms of text progression, has devoted considerable attention to the role of tense choices. If we try to generalize over the picture that emerges from the standard literature on tense, some broad tendencies can be discerned. Part of the story of the field is that category labels such as tense have in fact survived by a mixture of inertia and usefulness, in exactly the way biological features and ordinary language terms survive. Originally the understanding was based on temporal distinctions as encoded in the Latin verbal paradigm, which were extended to translation equivalents in other languages through a combination of reliance on the universality assumption and the prestige of Latin. Although the general feeling is that no new underpinning would support their retention as universally valid elements of linguistic theory (cf. Lyons 1977: 690), such labels linger on because we need them for purposes of communication.
In grammars of specific languages with less elaborate morphological paradigms, there has been a tendency to restrict the term "tense" gradually to the remaining morphological choices, in English thus to the choice between past and present tense. The usefulness of a broadly defined area of tense after the demise of the Latin bias must be understood in the context of two perspectives that inherently depend on a broad definition. The first is the theory of time reference as a semantic domain. To take up such a semantically defined area, leaving aside the question of linguistic structure, is not in the spirit of structural linguistics; but precisely because semantics had been neglected in the structural perspective, the field was open to investigation from other angles, in particular from the logico-philosophical angle. The early
cross-fertilization that took place in the area of tense, with the widespread linguistic adoption of Reichenbach (who, in turn, based his ideas on Jespersen), has resulted in a general conception of tense involving a somewhat untidy mixture of elements from both traditions. The second reason for taking the broad view is the cross-linguistic perspective. Rigid adherence to structuralist principles leaves each language as a structure unto itself. The retention, in an increasingly structural era, of such quasi-universalist terms as tense has served the important purpose of pointing to areas of cross-linguistic interest, in which comparison and generalization appeared to be possible. Even from a structural point of view, cross-linguistic studies have enjoyed a respectability derived from generative ambitions of establishing a "universal grammar". The two interests, in "grammar across languages" and "semantic fields irrespective of structure", have in practice overlapped considerably; indeed, from a functionalist point of view it is inevitable that they should. If language is organized as a vehicle for communication, the natural "tertium comparationis" for cross-linguistic studies must be the things we want to express, as already pointed out by Jespersen. For functionally oriented linguists, who were sceptical towards all abstract theoretical orientations, the traditional labels have continued to provide a common language in which areas of semantic interest could be talked about, while formally oriented linguists have tended to favour the narrow account, relegating "future" and "perfect" to separate compartments. There is disagreement as to whether there are two or three primary tenses: the tradition from Jespersen to Comrie has maintained three, corresponding to the tripartition of the time line; others, including Allen and Bull, have excluded the future from the primary domain.
Everyone agrees that secondary tenses are relative, involving an "after" or "before" relation to a reference point, and the perfect is described as relative by all authors who include it in the system. In accounting for the perfect in relation to the past, an important issue has been the notion of "definite" as opposed to "indefinite time", where the past has been compared with the definite article, and the present perfect with the indefinite article. The distinction between the past and present has generally involved a recognition that the present is the unmarked (non-past) category, so that the past is marked in "excluding" the present (being "distant" from the deictic centre), whereas the present has no similar "exclusion" or "distance" in relation to the past.


The credentials of the secondary tenses in the system have generally been discussed in relation to a few central issues. When the future has been excluded, it is generally because of its affinities with modality, both formal (in the case of will and shall) and semantic (because of the modal element in the notion of futurity, cf. Lyons 1977: 677). The discussion on the role of the perfect in the system has chiefly turned on the notion of "current relevance", which is not a matter of temporal location and therefore brings in other matters than time reference. In taking up this traditional area, I am not presupposing that the traditional domain of tense has any privileged structural status in English; as pointed out in the introduction, the aim is to see how the area, ad hoc as it may be, can be described in terms of the general approach I have described.

2.2. Individual content elements: the deictic tenses

2.2.1. Introduction

Although this section is mainly about individual meanings, these cannot be discussed in isolation from structural context, so I presuppose some salient features of the framework that was established in the first two parts. Among these are the recipe format of "content syntax" with operator-operand relations between content elements. The content elements are understood as coded functions, with the characteristic interactive feature of "directionality": they are meant to do a job, and presuppose a basis (cf. above p. 107). The place of deictic tenses within the semantic recipe has already been discussed (above p. 273): past and present take a predication, designating a state-of-affairs, in their scope and turn it into a proposition. The future and perfect, to be explored in depth below, belong inside the scope of the deictic tenses; the perfect takes narrowest scope, giving the following full structure: (55)

'past'/'present' (+/- 'future' (+/- 'perfect' (state-of-affairs)))

In this description, we have three paradigms, each with two choices, yielding in combination the traditional eight tenses. This may be contrasted with the view according to which these forms constitute a single paradigm of eight. Most traditional accounts basically reflect an eight-tense way of looking at it, corresponding to the full classical paradigms, while showing an awareness
of the elements of compositionality, corresponding to the stem variations in the Greek and Latin verbs. Both ways of looking at this are valid. Reflecting the general assumption of partial compositionality, as expressed in cognitive grammar, we can say that each form has the status of a construction, with its own idiosyncratic properties, and in that capacity stands in paradigmatic contrast to all other constructions. At the same time a large part of the properties are due to meanings and paradigmatic choices at the compositional level. To refer to the two levels of description, I shall speak of the constructional paradigm and the compositional paradigm.
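The relation between the compositional and the constructional paradigm can be enumerated mechanically. The sketch below is purely illustrative (the function and variable names are mine): it generates the constructional paradigm of eight tenses from the three binary choices in (55).

```python
from itertools import product

def tense_name(deictic, future, perfect):
    """Name the construction resulting from one choice in each of the
    three compositional paradigms of (55): 'past'/'present' outermost,
    then +/- 'future', then +/- 'perfect' with narrowest scope."""
    parts = [deictic]
    if future:
        parts.append("future")
    if perfect:
        parts.append("perfect")
    return " ".join(parts)

# Three paradigms with two choices each combine into 2 x 2 x 2 = 8
# constructions: the traditional eight tenses.
constructional_paradigm = [
    tense_name(d, f, p)
    for d, f, p in product(["present", "past"], [False, True], [False, True])
]

print(len(constructional_paradigm))  # 8
print(constructional_paradigm)
# ['present', 'present perfect', 'present future', 'present future perfect',
#  'past', 'past perfect', 'past future', 'past future perfect']
```

The enumeration makes the two levels of description tangible: the eight names are the constructional paradigm, while the three binary parameters they are generated from are the compositional paradigm.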

2.2.2. Past and present as pointers

The discussion of past and present tense is a case in which the abstract account of the functional nature of meaning in part one bears directly on the concrete issue: what is wrong in the generally accepted picture is essentially just the presupposed representational "standing-for" view of meaning. As I shall try to show in a moment, the almost tautological-sounding claim that the past tense denotes 'pastness' or a past event time (and the present tense 'presentness' or a present event time) is in fact mistaken. The linguistically coded meaning resides in the function that the tenses serve rather than in the times that they come to stand for or refer to. Hence, I suggest the following paraphrases (cf. p. 206) of the Grundbedeutungen, i.e. the centres of the two semantic territories:

(56) The meaning of the present tense is to direct the addressee to identify a point-of-application S (a situation as it is at the time S of speech), as that which the state-of-affairs in its scope applies to.

(57)

The meaning of the past tense is to direct the addressee to identify a point-of-application P, (a situation as it is at time P (such that P lies before S), as that which the state-of-affairs in its scope applies to.
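The paraphrases in (56) and (57) have a natural procedural reading, which can be made concrete in a small sketch. The following is my own toy illustration, not the author's formalism: all names (`world`, `present`, `past`) are invented, and a "situation" is simplified to the set of descriptions that apply at a time.

```python
# Minimal sketch of (56)-(57): a tense directs the addressee to a
# point-of-application and applies the state-of-affairs to it.
# The model is a toy: a situation is just the set of descriptions
# that hold at that time.

world = {
    0: {"albatrosses are large birds", "it is raining"},
    1: {"albatrosses are large birds"},
    2: {"albatrosses are large birds"},   # 2 = time of speech S
}
S = 2  # utterance time: the speaker's basis, given extralinguistically

def present(soa):
    """(56): identify the situation at S and apply the state-of-affairs."""
    return soa in world[S]

def past(soa, p):
    """(57): identify the situation at a contextually recoverable P < S
    and apply the state-of-affairs; nothing is claimed about S itself."""
    assert p < S, "P must lie before the basis time S"
    return soa in world[p]

# Application at P says nothing about the present:
past("it is raining", 0)      # True: rain applies to the world-then
present("it is raining")      # False: no claim about S is carried over
# ...and a past claim is compatible with the state still holding now:
past("albatrosses are large birds", 0)   # True
present("albatrosses are large birds")   # True
```

The point of the sketch is that the temporal content sits in the selection of the point-of-application, not in the description applied; how much of the time line the described state occupies plays no role in the procedure.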

There are thus two aspects of the meaning that need to be kept in mind, and kept separate. One is the element they share: the element of directing the addressee towards a point-of-application, creating a proposition out of

328 The meanings of tenses

a state-of-affairs. The other is the element that keeps them distinct: the temporal specification of the point-of-application. The functions served by the tenses lie in the pointing-towards; thus S and P, as part of the coded meaning, denote directions-of-pointing, not actual times. The "basis" that is implicit in the "function" (cf. above p. 107 on the railway ticket that cannot just be valid for a destination, but must also indicate a point of departure) is the position from which the speaker points. In interpreting tenses, we rely on this extralinguistic information, which in the standard case is automatically guaranteed to be available; the temporal aspect of this information I call "utterance time". Thus both deictic tenses point from the "basis time" (= utterance time) towards a "function time".

The intuition on which this description is based surfaces repeatedly in the literature: many paraphrases of the meanings of tenses involve an element of "concerning" or "focusing on" or "directing the addressee's attention to" a particular point in time. Yet as I hope to demonstrate in the discussion below, the implications of this view have generally not been followed up. One reason for this is that most of the attention has been focused on the element where the tenses differ: the temporal addresses. In factoring out the two elements, the shared and the contrasting one, I hope to be able to describe both what is wrong about the single-minded focus on the times in themselves and what is right about the traditional analysis of the tenses as being centrally concerned with time.

However, the chief merit I claim for the description I suggest is that the specific description of tense meaning falls into place within a larger theory of meaning and structure.
No matter how insightful an isolated observation about tense is, if it is embedded in a theory of meaning and structure which provides no meaningful place for it, it is almost inevitable that the larger framework will override the individual insight and produce an unsatisfying overall picture. Two examples, both of which go quite far in exploring the definiteness aspect of meaning, can serve to illustrate this.

First example: In Allen's discussion of definite past time, he relates the special nature of the definite tenses to the philosophical discussion of what the difference is between the death of Caesar and Caesar died/dies. Allen points out that it is not the presence of a verb, as claimed earlier, but the presence of the tense morpheme "which orients the Predication (Caesar) died to an identified time in the past" (1966: 160); in talking about "orienting", the functional intuition is clearly revealed. However, the understanding of definiteness on which Allen bases his account is attuned
to the nature of the time that is identified — the time which is "referred to"; in other words, the usual referential view of meaning takes over. That becomes clear because it presents a problem for his theory that one can embed preterites under perfects, as in he has done something he wanted to do (Allen 1966: 168; see also McCoard 1978: 100). Because the two clauses refer to the same time, Allen cannot say that in one clause the time is definite and in the other it is indefinite; the two tense forms therefore cannot be kept apart in terms of definiteness as understood by Allen. On a functional interpretation of definiteness, the problem would disappear, because definiteness is bound up with an act of reference, which makes the nature of the referent irrelevant.

Second example: In his Semantics (1981) Leech gives a lucid account of the role of deictic tenses in creating a proposition out of a predication, thus giving a clear place to the element of "matching" or "application". However, according to Leech's general views on the division of labour between pragmatics and semantics, this matching operation is something of an anomaly; Leech views linguistic semantics as solely representational (cf. "Grammar is ideational", Leech 1983: 5). Perhaps this is why, in Meaning and the English Verb (2nd ed. 1987), the discussion is conducted in the familiar grammatical terms, involving (cf. below) the familiar notions of location on the time line, and retains Allen's distinction between definite and indefinite past as a property of the times themselves (rather than of the element of application).

So although Leech and Allen provided descriptions of deictic tense that were congenial with the theory proposed here, in both cases their general theories are unable to accommodate this insight satisfactorily.
To see how the functional account avoids problems that the standard representational account runs into, I shall begin by looking at three modern grammatical accounts which exemplify the standard picture: Leech (1987), Quirk et al. (1985), and Palmer (1987). According to the representational tradition, the tenses can be defined in terms of those stretches of the time line that they "stand for", and that in turn means the time occupied by the "event" or state-of-affairs designated by the clause (= E, in Reichenbach's terms). On closer examination, however, a snag presents itself for this theory. The manner of presentation in the grammar of English which is most widely taken as authoritative, Quirk et al.'s Comprehensive Grammar of the English Language (1985: 175-76), is illustrative of the problem.
In the introduction to the section on tense, we find the time line with the three familiar notions of past, present and future. Three "levels" are established: the referential, the semantic, and the grammatical level, reflecting the semiotic triangle. However, the account deviates from classical universalism in permitting the referential and semantic levels to diverge: on the referential level we find the present point separating the two periods of past and future; but on the semantic level, the present is defined in an "inclusive" way: it is not the familiar point, but a period which includes the referential, point-like present. This permits the present (as indicated by a clause) to stretch indefinitely into the past and future, as long as the present moment is included, thus accommodating the so-called eternal truths. This endows the present with the status of the unmarked category, because the present can cover the whole time line, whereas the past is said to be more "limited". The first example offered is

(58) albatrosses are large birds
(59) albatrosses were large birds

To explain the distinction, the following gloss is provided: "The author of [59] does not commit himself to the continuation of the past state of affairs it describes into the present" (Quirk et al. 1985: 176). In other words, the present can include everything, whereas the past tense is limited to past time. The same view is found in Palmer (1987: 39): "Present time must be understood to mean any time that includes the present moment... Past time excludes the present moment". In a similar vein, in Leech (1987) the past tense is said to have two meanings. One is "the happening takes place before the present moment", entailing the exclusion of the present moment (in contrast with the present perfect); the other is "the speaker has a definite time in mind".
Since the past tense applies to "completed happenings", there is no clear distinction between event and state uses — "everything it refers to is in a sense an 'event', an episode seen as a total entity" (Leech 1987: 13).

The problem in this account, where the present moment is the privileged territory of the present tense, is mentioned in a note in Quirk et al. in relation to the albatross examples. As the note points out, the fact is that were in "albatrosses were large birds" does not exclude the present moment. Quite possibly the state-of-affairs may occupy the entire time line, just as in the case of the present. Thus, in terms of the location of the state-of-affairs, we can actually establish no clear distinction between the
locations "indicated" by the present tense and the locations "indicated" by the past tense. The main point that differentiates the predictions of the functional account from the "standing-for" account can be illustrated in relation to this illdefined notion of "exclusion" of the present. If we assume that the past or present tense has the coded function described above, i.e. to cue an application of the state-of-affairs to either the "world now" or the "world then", we get both the consequences that appear to support the exclusion account and the uncomfortable facts that undermine it. According to this account, the addressee interprets the deictic tense by the act of applying the state-of-affairs to the situation that is pointed to (either present or past). The fact that eliminates the exclusion problem is that the pastness or presentness which is rightly attributed to the two forms is not to be understood as a property of the coded event (state-of-affairs): it is a property of the pointof-application — the world that we are talking about. This difference may be difficult to grasp, but it is crucial in understanding the nature of the coded contrast between the two forms. The fact that the pastness is a property of the point-of-application rather than the state-of-affairs means that it is irrelevant precisely how much of the time line the state-of-affairs takes up. The question is where to tag the descriptive content on to the world — the rest is silence, as far as coded meaning is concerned. When you talk about albatrosses in the past, you are not committing yourself with respect to the present, because you are talking about something else. This problem is relevant to all deictic elements. Deictics have typically been handled by indexing, because indexed variables could be understood as wobbly but nevertheless recognizable members of the class of things that words could stand for (cf. above p. 271). 
Although it is essentially implausible to have a semantic content that changes constantly, shifters were thus handled by introducing "shifty" entities that they could stand for, instead of in terms of their "pointing" function, which permits us to give them a constant meaning. The other aspects of the meanings of deictic elements, including the proximal-distal contrast, must be understood in the context of the pointing function: as helpful hints facilitating the right identification. This is why the pastness of the past tense is not a property of the state-of-affairs, but a property of the point-of-application: it tells the addressee where to look (cf. also Kirsner 1979, 1993). As argued in detail by Janssen (1993), the phenomenon is essentially the same thing that we
find in demonstratives — even if the element of actual pointing is incompatible with the temporal domain (cf. Langacker, work in progress).

It follows from this argument that tense semantics needs to get rid of two time-honoured elements in the traditional way of thinking about tense. The first of these is the notion of event time, Reichenbach's E. The elimination of E follows from the fact that the time involved in the deictic tenses is not the time occupied by the event, but the time of the point-of-application. The reason why this has been difficult to see is perhaps that in the prototype case of a true declarative statement the two things must coincide: if the statement is true, there must be an event located at the point-of-application. However, if we take a question — such as did he know about it? — there is still an application time, but not necessarily a time of the corresponding event. To ask precisely what stretch of the time line was or would be occupied by his knowing is meaningless. Even in declaratives there is an awkwardness about the concept when it is used to describe the use of the present in clauses designating states rather than events (the typical case for the simple present, see below).

To rescue the notion of event time, the formulation of the meaning of the present is usually modified (compare Comrie's theory as presented above) so that the time of the event is said to "overlap" S. However, overlapping is a curiously unsatisfactory notion if we see it in terms of giving a temporal location. If the job of a tense form is to indicate E = the stretch of time occupied by the state-of-affairs, the notion of overlap involves a partial admission of failure: it would imply that E, the putative meaning, is not accessible in its entirety; as in Salman Rushdie's Midnight's Children, one is only permitted to see the relevant object by peeking through one hole at a time.
If it is really the hole we are looking through that is relevant, as implied in the notion of application time, the awkwardness disappears, and with it the need for talking about overlap: application time talks only about whether the description applies (at the appropriate time) or not (cf. also Bertinetto 1985 and the discussion below pp. 345-47).
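The contrast between recovering an event time and checking an application time can be put in quasi-algorithmic terms. The sketch below is my own illustration, under the simplifying assumption that a state-of-affairs is a predicate over integer time points; the names are invented. It shows why E is not a stable object for an open state like know, whereas the application-time question is always well defined:

```python
# An open state as a predicate over time points (toy assumption):
# 'he knows about it' holds from time 3 onwards, with no inherent bound.
def knows_about_it(t):
    return t >= 3

def applies_at(soa, t):
    """Application-time account: 'did he know about it?' asks only
    whether the description fits the world as it is at time t."""
    return soa(t)

def event_time(soa, horizon):
    """Event-time account: tries to recover E, the stretch of the time
    line occupied by the state-of-affairs, within a finite horizon.
    For open states the right-hand bound is an artefact of the horizon."""
    ts = [t for t in range(horizon) if soa(t)]
    return (min(ts), max(ts)) if ts else None

applies_at(knows_about_it, 5)    # True: all the tense needs to deliver
event_time(knows_about_it, 10)   # (3, 9)
event_time(knows_about_it, 100)  # (3, 99): "E" shifts with the horizon
```

The application-time question looks through one hole and gets a determinate answer; the event-time question has to reconstruct the whole stretch, which for his knowing has no principled end point.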

2.2.3. The division of labour between tense and state-of-affairs

My arguments above have concerned states; in order to be precise about what the element of applying to the world at a particular time means for events, some elements of the context in which tenses belong need to be made clear. These elements have both linguistic and non-linguistic aspects, thus demonstrating the natural affinity between an interest in clarifying structural properties and an interest in clarifying properties due to the context of use: we cannot adequately describe the contribution of tenses unless we take into consideration both the meanings with which they collaborate and the extra-linguistic context in which they operate.

One example of the unclarity in the pre-structural tradition is the "aktionsart" variants of the tenses. A byproduct of the status of tense as the most-favoured area of investigation in the traditional atomistic approach to meaning is that semantic properties associated with the state-of-affairs have generally been treated as part of the tense system; compare innumerable grammars on such things as the "universal present", the "state present", the "habitual present", the "instantaneous present", etc. Such concepts are necessary to describe "constructional" combinations of tenses and states-of-affairs, but from a compositional point of view they are either properties of the state-of-affairs or constructional properties, rather than tense properties.

In order to see how tense collaborates with different types of states-of-affairs, with the distinction between events and states as the most important one, we need to be explicit about the semantics of states-of-affairs. The traditional account of events was that events were temporally like points, as opposed to states, which had duration. Bartsch (1986) showed that this was unsatisfactory and replaced it with a distinction between closed (= bounded) and open (= unbounded). The parallel between the distinctions countable/uncountable and event/state has been discussed by Leech (1981), Mourelatos (1981) and Langacker (1987a,b): events cannot be divided into smaller intervals without losing their event character, while states can be divided into intervals of arbitrary size — just as uncountables differ from countables in being inherently divisible (up to a point).

In this connection, I want to invoke a feature of the account given in Durst-Andersen (1992).3 His theory builds on a distinction between two levels of understanding: the "picture" level and the "model" level. The picture level is associated with immediate perception, in terms of which one can distinguish between two types of pictures: stable and unstable, corresponding to states (lie) and activities (work). In order to get to the more complex level at which we can talk about events and actions, we need to process the simple picture-level understanding by means of models which integrate temporally successive stages: we view an activity as leading towards a resultant state (the so-called "process model") — or we view a state as being the result of an
activity (the "event model"); and it is the event model that I would like to draw on here. An example is the event of breaking a window: it involves a final state in which the window is broken, as well as a preparatory activity that brings about the state. By incorporating two levels of cognitive processing, Durst-Andersen's account explicitly transcends the standard account which looks at the stateof-affairs only in relation to the time line (the "objective" perspective), without including the vantage point of the observer; and he also specifies what the speaker must do with the "objective" input. In the event model, the speaker puts himself in a position where he views consecutive stages as part of one conceptual construct; and the familiar accounts in terms of internal vs. distant perspective associated with the perfective aspect can thus be seen as a consequence of applying the event model in Durst-Andersen's terms. This has two consequences for the way we understand events. First, you cannot view something as an event, if you assume a vantage point in the middle of it; it needs a vantage point from which you can see the two components as a whole. Secondly, the event category is incompatible with location at a point in time; when you think of something as an event, you subsume two points in time in your conceptualization. In addition to this, we need to consider a factor which is of importance for understanding what it takes for a state-of-affairs to be applicable. In the case of verbs which on the face of it have event meanings, it is often possible to assert them of arguments that are not at the moment engaged in any event. As pointed out by Langacker (1991: 263), we can say Zelda drinks her whisky straight even if Zelda is at the moment asleep. 
As discussed by Langacker, this depends on a way of thinking about the world that distinguishes between "structural" and "phenomenal" knowledge (in the sense of Goldsmith and Woisetschlaeger 1982: 80): some things are wired into the way the world is, and they can be truly ascribed to the world even if there is no outward manifestation of them now — or even, as an extreme case, at any other time.4 This "idealized cognitive model" in Lakoff's sense is not part of language-as-such, but of the cognitive landscape in which language functions; in our culture it goes back to Plato's and Aristotle's theories of the nature of the world (cf. above p. 8). It permits us to ascribe states-of-affairs to points-of-application in the absence of actual ongoing events that demonstrate their truth. The question does Joe visit his mother? can be answered in the affirmative regardless of where he is at the moment, provided it is true of Joe as he is now, i.e. if he is "between visits". If he
has just sworn never to darken her door again, we cannot say the same thing, although the physical pattern of visiting may be the same. This is an additional illustration of why the notion of event time is awkward in accounting for tense meaning.
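The structural/phenomenal distinction behind the Joe example can be given a minimal formal sketch. This is my own toy formalization, not from the text: a habitual is checked against how the world is "wired" at the point-of-application, not against any ongoing event, so the same physical pattern can verify the claim at one time and fail to verify it at another.

```python
# Structural vs phenomenal knowledge (toy model): at each time, a
# situation records both what is structurally true of Joe (dispositions)
# and what is phenomenally going on (ongoing events).
world = {
    5: {"dispositions": {"visits his mother"}, "ongoing": set()},
    # At 6 Joe has sworn never to return; one last visit is physically
    # in progress, but the habit is no longer part of how things stand.
    6: {"dispositions": set(), "ongoing": {"visits his mother"}},
}

def habitual_applies_at(habit, t):
    """A habitual applies at t iff it belongs to the structural layer
    of the situation at t, regardless of any outward manifestation."""
    return habit in world[t]["dispositions"]

habitual_applies_at("visits his mother", 5)  # True: no event now, but true of Joe-as-he-is
habitual_applies_at("visits his mother", 6)  # False: same physical pattern, habit revoked
```

This is a further way of seeing why event time is awkward here: what the tense applies is a structural description of the world at the point-of-application, for which no stretch of occupied time can sensibly be asked for.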

2.2.4. Pointing vs. "focal concern"

On the basis of this understanding of events, I would now like to consider some theories that are closer to my own. Other theories that do not depend on the notion of event time describe the deictic tenses in terms of a time on which there is a focus of attention (cf. Glismann 1987) or in terms of how the speaker conceives of, envisages or presents the event (Janssen 1991: 166, or Langacker 1991: 249). These accounts I see as pointing to the same intuition as my own, and the only special merit I claim for my story lies in the notion of functional-interactive as opposed to conceptual meaning. I think it is important to understand the distinction as well as the necessary interaction between the process of building a conceptual structure and the act of applying it. Because the alternative accounts, although they are getting at the same thing, do not include this element, I would like to see them as partial versions of my more complete story: an account in terms of focus of attention can be seen as following from my account, because you must in some sense direct your attention to the point-of-application before you can apply the state-of-affairs to it. But it is less obvious that you can go the other way, from the focus of attention to the point-of-application. As an example, consider the case of a rejected suitor coming back the day after, saying "when I woke up this morning, I had a feeling you had changed your mind. Was I wrong?". In that case, we can only define the "focus of attention" associated with the past tense in was I wrong? in terms of the need to find the point-of-application. In any other sense of the phrase, one would expect that the focus of attention would be very much on the present.

Janssen's theory is close to the one proposed here in its emphasis on the parallel with definiteness and demonstrativity, as well as in its criticism of the narrow time-referential tradition.
The central element of his account is a distinction between two "regions of referential concern" in terms of which the speaker views the situation: a "focal" and a "disfocal" region.
These regions are used to describe demonstratives as well as tenses, and for the tenses the theory translates into an account in terms of what is of "actual" referential concern versus what is "disactual", i.e. outside "the focus of his referential concern" (1991: 168). Janssen concentrates on examples where the same situation can be described in either the present or the past, as when there is a man at the door and you report the event to the family member in question in one of the two following ways:

(60) somebody is asking for you
(61) somebody asked for you

These demonstrate that the temporal location of the event itself is not the whole story. As I see it, however, the notion of "immediate concern" cannot alone support the description of tense, unless we assume by fiat that states-of-affairs that apply to a past time cannot be of "immediate referential concern" — in which case it is not really an alternative theory.

The central problem is the lack of full parallelism between demonstratives and tense. This comes out if we compare the status of the place-of-speech in relation to "here" vs. "there" with the status of the time-of-speech (utterance time) in relation to past and present tense. In understanding the place denoted by "here", depending on circumstances, we can define "here" as being "in this room", "in this house", "in this country", etc., encompassing ultimately everything (cf. Schegloff 1972) — whereas the distal deictic must always be delimited in relation to some point of origin. But if the present were arbitrarily extendable in the manner of "here", we should expect that we could gradually expand the time to which sentences in the present apply to encompass events that are further and further away in time; and this is not the case. To some extent this is true of events which are very close to the time of speech, as in the example above; but there are very strict limits on this.
If we had an infinitely extendable present, there are two types of things we should be able to say. First, we should be able to use event descriptions or bounded states in the present tense without any restrictions. In other words, we should be able to say

(62) Damn! I (just) lose a million dollars on the stock exchange
(63) George is World Champion for more than ten years (i.e. if he is still champion and the next competition will not be until the ten-year period has expired)

If it were just a matter of how we conceive of the event in relation to two regions of more or less focal concern, we would just have to expand our psychological present (our domain of "immediate referential concern") until it was big enough to accommodate the event.

The problem turns on an issue which is also relevant for the sense in which the present can be said to be unmarked, namely the nature of the link between the present tense and the time of the utterance. It will be apparent that I think this link is important in understanding the present. As generally recognized, in the past we can identify the world as it was during an arbitrarily large period of time and talk about all of it: Queen Victoria reigned for 64 years applies to the whole period, without the point-like focus that made (63) awkward. Cross-linguistically, the importance of the point of speech in explaining the natural affinity between perfectives (denoting events) and the past tense has often been noticed (cf. Bull 1960, Bybee—Dahl 1989, Leech 1987). Perfectives are viewed from a distance (hence they go well with the past); they cannot be located at a point, and therefore cannot be located at the point S as implied in the present tense. Any account that tries to cut the present moment out of the meaning of the present tense fails to account for this pervasive cross-linguistic tendency.

The conclusion I draw is that the notion of a point-of-application is basic in accounting for deictic tense meaning, and the notions of focus of attention and regions of more or less focal concern are derivative. One reason for choosing a point-of-application that is away-from-here would be that you want to cut off application to the here-and-now that is of "immediate referential concern".
In all referentially ambiguous cases, the speaker is free to do exactly as Janssen says, but this manoeuvre, which is central to Janssen, cannot be understood if we abstract completely from the selection of a temporal point-of-application. An account in terms of what events "are presented as" (1988: 128) or "what he experiences or envisages" (1991: 166) lacks that element of matching between the speaker's conceptual organization and the reality of which he purports to say something true that is the essence of the "application" account; and this element of directing the addressee to a point-of-application that is not purely a matter of the speaker's personal conceptualization is central to the account I suggest.

In the passage quoted above, Janssen explicitly rejects an account which sees the deictic element as embodying a search instruction;5 and indeed the word "instruction" sounds too explicitly hearer-oriented if it stands alone. As pointed out above, the only sense in which I want my account to be hearer-oriented is the sense in which the fitness of linguistic elements depends on their usefulness in triggering the right interpretation processes in the hearer. Two final observations may help to illustrate the sense in which the "application instruction" should be understood.

First of all, the element of pointing in demonstratives, as generally recognized, is much more focal and explicitly hearer-oriented than the corresponding element in the tenses. It may be compared with definiteness as coded in personal pronouns, which invoke identification but depend on the referent being already available (cf. Givón 1990: 916); a typical sequence will therefore be one in which the referent is picked out by means of a demonstrative and maintained by means of a pronoun: that man...he. In such a sequence, the deictic tenses are parallel with the pronoun, not with the demonstrative: a past tense is appropriate for talking about a past situation only after the situation is established in the hearer's mind. The present- or pastness is therefore typically redundant, in the sense that it adds nothing to what the hearer already knows. But on the procedural view of meaning this does not mean that it is semantically empty. Keeping up the parallel with pronouns, it has roughly the same status as the distinction between 'male', 'female' and 'neuter' encoded in he, she and it in English.
Most of the time a gender-neutral pronoun, as found in many languages, would serve just as well; yet the distinction provides an additional way of 'tagging' the referent, helping the hearer to make sure the more interesting parts of the clause meaning go into the right referential "file", and makes it possible to shift more freely between different files.

Secondly, the semantic status of this type of tagging may be illustrated by comparing it with the use of uniforms. Sloppily speaking, the uniform is there to "tell" people what status the wearer of the uniform has, just as tenses tell the hearer about the temporal status of the clause. But a uniform is not essentially a vehicle for referential or conceptual information. A uniformed soldier in an army camp is not like a sandwich man, ceaselessly announcing that he is a soldier; nor is the uniform a sign that this person is conceptualized as a member of the armed forces. Rather, the uniform
"tags" the person who wears it in the appropriate manner, making it recognizable what real-world "file" he goes into. Temporal information is subordinate to the element of "application", I have claimed, thus agreeing with Janssen in the importance of "what you choose to look at" over "time-indexing"; but once it is understood that the time element is not a matter of adding temporal information to the state-ofaffairs as such, I think it is most plausible to say that the temporal element is indeed basic. I shall return to some reasons for this in discussing modal elements below; but in relation to Janssen, however, I would like to invoke a "centrality" argument. The canonical situation of use is the one in which the speaker's temporal basis is known and tense is used to signal what the state-of-affairs applies to. In such cases, temporal presence and immediate referential concern go hand in hand, like temporal pastness and "disactual" status. But if you deliberately manipulate the temporal perspective, (e.g. by speaking in the past tense of a man waiting outside the front door, when you might as well have used the present), the role of referential concern acquires independent significance. However, instead of being the central meaning this can be seen as one type of "connotative reversal", foregrounding a presupposed element of the "basis" (cf. also p. 343): the choice of a particular expression-content combination (the sign "past" as opposed to the sign "present") becomes meaningful in itself and indicates what the speaker chooses to focus on — as when the choice of the word ambience is used to saturate the atmosphere with the speaker's Frenchness. But we cannot as easily go the other way, because the choice of tense is not as free as an account based on "referential concern" would suggest. Therefore I think "application time" as an aspect of the meaning of deictic tenses is central in relation to "referential concern".

2.2.5. Reference to unidentified past time

In the role I assign to time I thus find myself midway between those who want to marginalize time-referential properties and those who want to see periods of time as quintessential. As an example of the latter, we find Declerck (1986, 1991), who sets up a theory based essentially on the time periods within which each tense form is appropriate. His theory is not affected by the criticism above, because it is not formulated directly in terms of event time, but takes a "time of orientation" as basic (cf. the

340 The meanings of tenses

discussion below). He argues against the notion of definiteness by showing that in a number of cases we do not have to identify any particular time. His counterexamples are:

(a) We know that John lived in Boston for some time and was quite a respectable citizen there. But we do not know when that was, nor where he went afterwards.

(b) Is Bill in the house? No, he went away.

(c) What became of your sisters? — Oh, Jane married a sailor, Sue bought a gold mine, and Marjorie joined the air-force. (Heny 1982: 134)

(d) "I see," Beesley said sniffing, "I didn't know that before." (K. Amis, Lucky Jim)

(e) She never was a great artist (Leech 1969: 143)

(f) He is no longer the player that he was.

(g) A London Council which agreed to put £100,000 in a local firm, is to consider having its own "enterprise board" to award loans to local industry (Fenn 1987: 165)

(h) What do you know about Abraham Lincoln? — He was the first president of the USA. [sic]

What these examples show is that the past tense does not necessarily serve to identify a calendar time. However, this does not invalidate the claim embodied in the description above, namely that the deictic tenses direct the addressee towards a point-of-application, i.e. that part of the world-of-discourse about which we want to make a claim. Once calendar time is irrelevant, a precise temporal address is not the only way to make it clear what situation we are talking about. The lifespan of a deceased person is a familiar time anchor for the past tense; even if we misplace the person, as in (h), we know where to go to find what to apply the statement to in order to find out whether it is true. The cases that are most

Individual content elements: the deictic tenses 341

problematic for my account are (e) and (f), where the contrast to the present is in focus. If they are understood as typical, the "direction" element that I have based my account on cannot be upheld; however, I think the uses without an implied time period are best understood as idiomatic references to a generic past (as also suggested by Quirk et al. 1985: 185n). In the same category I would put the "gnomic" preterite in English, as in men were deceivers ever (cf. Jespersen 1924: 259). Although such cases differ in suggesting that the present is no different, both types of reference to an unspecified past are limited to certain expression types, not a productive option open to the speaker whenever he wants to speak about a vague past.

2.2.6. The markedness issue

The account in terms of point-of-application also throws light on the issue of whether the present tense is simply the unmarked "zero" alternative to the past (as, for example, the non-perfect is the "zero" alternative to the perfect). A traditional argument for this view is the present tense in so-called eternal truths, Jespersen's utid 'un-time'; but that loses its force once it is realized that eternal truths are in the present tense because they are applicable at utterance time. If the present were really unmarked in Jakobson's sense, it would imply that application time was unspecified — and this is not the case, as we have seen. Furthermore, if the present tense were unmarked, it would be surprising that it is more semantically cohesive than the past. While the past tense has a temporal and a non-temporal "distance" interpretation, the present always links up with what is the case at utterance time. Regardless of being partially zero on the expression side, on the content side the present tense is a substantially and structurally well-defined paradigmatic alternative to the past tense, and, as generally assumed, deictic tense is therefore obligatory in declarative and interrogative clauses in English. The unmarkedness analysis is to some extent adopted in grammaticization theory (cf. Dahl 1985), reflecting the fact that the present tense is often zero on the expression side. Bybee—Perkins—Pagliuca (1994: 138), however, treat it as a positively definable element; yet their description illustrates the point on which I claimed that a notion of structure is a
necessary corrective to the basically cross-linguistic, diachronic descriptive strategy of grammaticization theory (cf. the discussion above p. 248):

Unlike Comrie (1985b: 36-41), we find it difficult to view the so-called present tense as a "tense", that is, as having to do primarily with deictic temporal reference. What present covers are various types of imperfective situations with the moment of speech as reference point. (Bybee—Perkins—Pagliuca 1994: 126)

This description is perfectly true from a pure substance perspective that abstracts from structure: states-of-affairs in the present tense are indeed typically imperfective and have no privileged relation with the present moment. But the suggested change of emphasis from tense to "imperfective situations" is revealed as inadequate if we look at it from the point of view of how meaning is structurally organized in English. In terms of the description I have argued for, the relationship between past and present is a paradigmatic choice between two points-of-application (corresponding to what in the quotation is called "reference point", see section 3.4 below); the imperfective properties belong not to the tense, but to the state-of-affairs in its scope — and the reason why the present almost always goes with imperfective states-of-affairs is that perfectivity is incompatible with application to a point in time. The lack of a structural perspective revives the traditional conflation between properties of tense and properties of the state-of-affairs and marginalizes just that deictic temporal property which is central in understanding the paradigmatic contrast between present and past. In relation to the past tense, the point-of-application analysis can also be contrasted with the standard, very general description that is useful in the cross-linguistic perspective: "Past indicates a situation which occurred before the moment of speech". This is understood as a piece of descriptive information, which differs only in degree from lexical meaning, and in being sometimes used when it is redundant:

Thus English past tense is used not only where it is supplying the new information that the situation took place in past time, but also where this information has already been supplied, either explicitly or by the context. (Bybee—Perkins—Pagliuca 1994: 55)


Again, this is perfectly true in terms of pure substance, but overlooks the specific semantics of a "true" past tense. As argued, the past tense is not standardly used to "supply the new information that the situation took place in past time"; this only occurs in cases of connotative reversal, such as Cicero's Vixerunt! 'they (have) lived', announcing that the lives of Catiline's fellow conspirators were now a thing of the past.6 The point is again to illustrate how an interest in the structural differentiation between different forms of meaning in the synchronic system may provide a corrective to the more general and substance-oriented perspective of grammaticization theory.

2.2.7. The non-temporal past

A special problem is posed by the past tense variant in which no past time is involved, as in cases like (64)-(66):

(64) I wish I knew
(65) Suppose he really did it?
(66) If he came back, what would you do?

There is a long tradition (in Denmark it has been the standard account since the last century) for regarding the past tense as a "remote" form, where the remoteness can be either in terms of temporal distance or in terms of remoteness from reality. This way of positing a basically invariant meaning for the past tense has had many followers (cf. Joos 1964, Langacker 1978, Janssen 1991). However, I agree with Taylor (1989: 153) that it is not plausible to say as a general rule that the past tense contributes only the element of distance from the present moment; for one thing, that would not explain why it could not equally well be used about the future. An alternative solution would be to see the modal distance as the primary element, as suggested in Herslund (1988). In relation to this possibility, I would like first to underline that there is in fact an element resembling modality in the account given here of the temporal past, as opposed to the traditional accounts. Instead of seeing tense in terms of the temporal relationship between events, the account here is in terms of a "point-of-application", a world to which the clause applies. This means that the temporal element is superimposed upon a "world" element, traditionally
associated with modality (see also Lyons 1977: 820). The difference in relation to modality is that the prototypical use of tense involves only one world, namely reality — a statement in the past or present tense without specifically modal elements leaves no room for alternative worlds. The question of a possible invariant meaning thus does not force us to choose between a "world" or a "time" interpretation; it only concerns the more specific issue of whether modal or temporal distance to the point-of-application is primary. With respect to that choice, as pointed out by Taylor (see also Leech 1987: 119), the environments in which the modal sense can occur tend to be more restricted than those where the temporal sense can occur, so I do not see the modal interpretation as a plausible Grundbedeutung, seen from the synchronic English perspective. There is also a cross-linguistic diachronic argument against the theory (cf. Fleischman 1989): modal meanings generally appear later than temporal ones. However, I would like to emphasize how closely related the two senses are, also in terms of the account given here. Structurally this is also evident from the fact that the present forms a natural, unambiguous paradigmatic contrast to both senses. The overlooked meaning component which past and present share, namely the direction towards a point-of-application, is also evident in the non-temporal sense: even more obviously than the temporal sense, the modal sense can only be used when a non-actual mental space has already been established in the context — otherwise we do not know where to apply the state-of-affairs. And as we shall see in relation to conditionals later, it is crucial that we understand that it is not the state-of-affairs, but the point-of-application that is distant from reality (just as it is the point-of-application that is distant in time). It is not the state-of-affairs that is claimed to be unreal, or excluded from the present moment.
In the paraphrase provided for the meaning of the past tense, the only thing we need to add is an extra way of understanding the point P: instead of being a label for a temporal point of application, where P < S, it can be a non-actual point of application P* — the rest of the paraphrase is the same.
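The pointing analysis of the two deictic tenses, together with the extra reading P* just introduced, can be rendered as a toy procedural sketch. The Python fragment below is purely illustrative and not part of the theory's formal apparatus; the class and function names are my own. It treats a point-of-application as either a time on the single real timeline or a non-actual point P*, and treats 'past' and 'present' as conditions on that point relative to speech time S.

```python
from dataclasses import dataclass

# Toy model (hypothetical): a point-of-application is either a time on the
# real timeline or a non-actual point P* in an established mental space.
@dataclass(frozen=True)
class Point:
    time: float                # position on the timeline
    non_actual: bool = False   # True for P*: a non-actual mental space

def past_applies(p: Point, s: float) -> bool:
    """The past tense directs the hearer to a point-of-application that is
    either temporally before speech time S, or a non-actual point P*."""
    return p.non_actual or p.time < s

def present_applies(p: Point, s: float) -> bool:
    """The present tense directs the hearer to the situation at S itself."""
    return (not p.non_actual) and p.time == s
```

On this toy reading, a clause like If he came back is ambiguous between a point with time < S and a point marked as non-actual, which is the "internal paradigm" effect mentioned in the discussion of internal structure.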

2.2.8. A comparison with Cognitive Grammar

A comparison between Langacker's theory of the English verb (cf. especially the most recent version, 1991: 196) and the account I have suggested illustrates the difference between the purely conceptual and the
functional view of meaning that was discussed above p. 105. Langacker's theory is based on the notion of grounding (cf. above p. 272), designating the process of establishing a conceptual relationship with the communicative scene. The grounding of clause meaning is achieved by tense in combination with the modal verbs, understood as a fused system expressing two dichotomies: reality vs. irreality, and proximal vs. distal. The reality distinction is expressed via the presence or absence of a modal, and the proximal-distal element is coded by tense. The temporal interpretation of the proximal-distal pair is seen as secondary; "at the schematic level, the system is purely one of modality" (Langacker 1991: 244). However, when the "reality" dimension is chosen, the proximal-distal choice is prototypically reflected in temporal distinctions. Postponing the discussion of the relation between tense and modality, I shall now look at the purely temporal aspect of his theory. In their temporal function, the tenses are analyzed in conformity with earlier analyses by Langacker, in the "naive" way according to which:

PRES indicates the occurrence of a full instantiation of the profiled process that precisely coincides with the time of speaking, PAST indicates the occurrence of a full instantiation of the profiled process prior to the time of speaking. (Langacker 1991: 250)

This characterization of the past tense would appear to lack the crucial element of identifying a point-of-application, retaining only the element of distance. The standard argument against the account in terms of pure temporal distance is that you cannot say he was ill unless it is clear what time we are talking about. While there would be nothing to explain the oddity of this if the past only "indicated a full occurrence before now", the instruction to identify a past situation as being talked about pinpoints exactly this oddity. In present-day American English, this is less obvious than in other dialects, because the past appears to be taking over the territory associated with the "hot news" sense of the present perfect:

(67) Daddy said, "You eating?" - "No, I ate" (Jane Smiley: A Thousand Acres, p. 81.)

But the identifiability still applies if we go beyond the very recent past, and therefore still needs to be part of the description.


In the case of the present, the difference between the two accounts is analogous in principle, but this is invisible in practice because identification is built into the notion of "time of utterance". The difference reduces to the difference between "indicating a process occurring at the time of utterance" and "instructing the addressee to apply the state-of-affairs to the situation at the time of utterance"; and since this in all standard cases ends up with the same consequences, it is a difficult issue to settle. Nevertheless, I think there are cases where the difference can be established. Like most other theories, Langacker's account, as we have seen, understands the temporal location as a property of the coded state-of-affairs rather than of the world of which we speak. However, Langacker's theory can handle the "timeless" cases that presented problems for other theories. A central element of his account is the divisibility of imperfective states-of-affairs: when an imperfective process is communicated in the past tense, it is irrelevant whether it continues or not. As long as the process occurred in the past "objective scene" that a past-tense clause "indicates", we can say that a full instantiation has occurred, regardless of continuation into the present. Similarly in the case of the present: for imperfective processes, the infinitesimal "now" is long enough to permit a full instantiation. The perfective processes, as we have seen, are different. They are unproblematic in the past because the past has duration. The difficulty with a combination of present tense and eventhood has been discussed above in terms of whether a temporally composite state-of-affairs can be applied to a temporal point, and the difficulty as analyzed by Langacker involves essentially the same argument, although it is seen in terms of "occurrence" rather than "applicability". The facts are the same: for a state-of-affairs to be applicable, you must have an occurring event.
The cases in which I feel the account in terms of applicability may be more intuitively satisfactory even in the case of the present are those cases which Langacker accounts for in terms of the "structural" as opposed to "phenomenal" knowledge described above. This concept is used by Langacker to set up a special class of imperfective processes: those that are built into the way the world functions, e.g. the sun rises in the east. Once such imperfective processes are recognized, they can then come under the general clause permitting imperfectives to squeeze into the "naive" account of the present as (1991: 263) "consistently meaning 'one instance right now'". I think this is the best one can do in terms of a "present occurrence" theory of the present tense — but I think the account is less plausible than the "applicability" account. The case where I think this is easiest to
demonstrate is the last case taken up by Langacker, namely the "scheduled future event" case, as in (Langacker's example):

(68) The game starts at 8 PM

Langacker argues his case as follows:

As in all these examples, it is not the event per se that is profiled by the clause, but rather the imperfective process defined by its stable role as part of the "script" of how the world is expected to work. More precisely, the clause designates that portion of the imperfective process that coincides with the time of speaking: the verbal inflection represents a true present tense. (Langacker 1991: 266)

No matter how much one agrees with the importance of the notion of "how the world is expected to work", I cannot see how one can plausibly cut out a time-slice involving an imperfective "occurrence" of starting that coincides with the time of speech. As against that, the applicability account would run as follows: we know of the world that it is so constituted that we can apply the state-of-affairs "game-start-at-8-PM" to it already "now" (at four o'clock); the statement is true and valid already. The actual starting, however, can hardly be stretched to "occur" at four. I think the same would apply to the other cases, such as the timeless mathematical propositions; two and two makes four is difficult to talk about in terms of "occurrence" at all, and the difficulty is aggravated when you cut it into slices of which one is said to occur right now. Hence, although the two theories make the same predictions, I think the semantic operations involved are more plausible in the case of the "applicability" theory than in even the subtlest "occurrence" theory.

2.2.9. Internal structure

The internal structures (cf. Part Two, section 3.3) of the content elements 'past' and 'present' in English are fairly simple, according to the theory suggested here. The paraphrases (40) and (41) describe the centre of the territories, the shared element of pointing and the difference in direction (towards S or P). In the case of the present tense, the variants are accounted for in terms of combinations with elements in either the
structural context (for instance different types of states-of-affairs) or the pragmatic context (contexts involving a shifted deictic centre, cf. below p. 431). The past is a little more complicated, partly because P has two main variants, corresponding to the temporal and modal understanding of the point-of-application; the modal variant is partly combinatorial, because it depends on a "space-building" element (cf. below, p. 429) for a non-actual space that can serve as point-of-application. The two variants sometimes create an "internal paradigm": If he came back can be read as referring either to P or P*. There is also a politeness variant, which is occasionally grouped with the unreal variant as constituting a modal class of cases, as in:

(69) There was something I wanted to ask you

However, I think such cases are better understood as peripheral in relation to the central temporal reading of P. The past tense removes the request from the here-and-now, where it would violate norms of politeness by invading the hearer's territory (cf. Brown and Levinson 1987), but that effect is compatible with the natural temporal reading: "at some other time I had that desire, which I leave to you to take up as applying to the present". As discussed above, there is no natural "modal" reading that could be generalized to these cases, and you cannot get the polite reading in all sentences; I disagreed cannot be understood as a polite way of saying I disagree. Therefore they can be seen as partly contextual (depending on the potentially face-threatening interaction), partly idiomatic variants. As discussed above, there are other cases where the element of pointing is absent: apart from the "hot news" sense in American English, there are the cases of reference to generic past discussed p. 341. The interpretation I have given similarly places them as peripheral variants, but rather than being contextual or combinatorial variants, they are idiomatic, i.e.
they depend on an additional convention that applies to special sets of combinations.

The future

349

2.3. The future

2.3.1. Semantic description

In discussing the meaning of "the future" I am not taking for granted that English has a structurally clear-cut "future tense". The point-blank question of whether English has a future tense (cf., e.g., Davidsen-Nielsen 1988) only has an answer if we are quite sure precisely what a future tense is; and since the overarching category of tense does not obviously constitute a well-defined natural kind (cf. Bybee—Dahl 1989), that is not guaranteed to be the case. In approaching the issue I begin by discussing the temporal meaning that I see as defining for a "pure future". As opposed to the deictic tenses discussed above, where I could take my point of departure in fairly uncontroversial structural elements in the grammar of English, the discussion of the future thus starts off as a discussion in terms of content substance only. However, I hope to show that the type of meaning I am after has a central status not only in relation to the cross-linguistic "substance" discussion, but also in relation to English. Futurity is an aspect of the meaning of a great many signs and constructions: adjectives like imminent, nouns like destiny, verbs like become, etc. In homing in on the special status of the future in a verbal context, a central lexical class is that of verbs like plan or want, whose complements are understood as awaiting future realization. As part of what such verbs do, they thus assign futurity to clauses that occur as their direct objects. This shared feature itself, abstracted from all indications of how it is to come about — i.e. abstracted from the rest of the content of these verbs — is what I understand by "pure future". My paraphrase is the following:

(70) The 'pure future' indicates that the state-of-affairs applies to some situation ahead in time (time F for future)

An obvious path whereby a marker of pure future may arise is through a process of bleaching (cf. below).
If we compare a "pure future" with other forms of futurity, the difference is that the pure future only presents the state-of-affairs as part of what lies ahead — not as something that is "desired", the "will of fate", "on its way", etc.


In comparing with the meaning of the past and present, two differences should be emphasized. First of all, there is no element of identification: we do not have to pick out the right future point-of-application in order to understand what is meant. The precise time or situation does not matter: the important thing is whether there is or is not some situation ahead to which the state-of-affairs applies. Secondly, the element of being ahead in time ("aheadness"), and the dependence on a viewpoint which this implies — you can only be ahead in relation to some particular time — is part of the meaning of the future, whereas the deictic tenses only serve to identify the point-of-application. This property has a number of implications, which will be taken up below. First, it rules out a direct extensional analysis: as discussed in relation to Dummett above p. 28, a sentence in the future cannot be objectively true at the time of utterance. The same factor also influences an account in conceptual terms: the conceptualization is necessarily associated with a subjective position located before the anticipated event. Finally, from the functional-interactive point of view, the choice of the pure future is necessarily a choice imposed by the speaker, rather than forced by circumstances: the speaker chooses to "take a peek ahead" by using the future, rather than just stay with established facts. This aheadness aspect, with the element of anticipation that was stressed by Bull and Allen, represents an important difference of emphasis in relation to the Reichenbach tradition, in which temporal relations are reducible to precedence and overlap (cf. the discussion of Comrie above). The importance of point-of-view that follows from aheadness means that the base time of the future ("lookahead time") plays a more complicated role than the base time of the deictic tenses.
But it must be stressed that, as with the deictic tenses (and indeed all other meanings), the basis is not an additional element, but something that must be there in order for the element to have its function: you only get to time F via the presupposed base time. The description above of futurity as a shared element was given without reference to diachrony; but the path from this stipulated meaning to coded pure future in actual languages can be naturally described in terms of grammaticization theory, both because the diachronic trajectories through which pure futures arise are well documented and because futures tend to be diachronically unstable; in the history of the Romance languages we find (at least) two full rise-and-fall cycles of future forms (cf. Fleischman 1982). There are three common pathways from constructions which have futurity as an aspect of their meaning to the pure future (cf. Bybee—Dahl 1989: 90), involving the source meanings "volition/desire", "movement towards a goal", and "obligation" or "possession". The most salient part of the semantic side of the grammaticization process is the loss of those elements of meaning that indicate the path towards realization, since a pure future as defined above does not specify anything about the lookahead time or the path towards the future state.
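Paraphrase (70) can be given a similarly schematic rendering. The sketch below is hypothetical, with invented names; it expresses the two points made earlier: the pure future involves no identification of a particular time, quantifying existentially over times ahead, and it is interpretable only relative to a presupposed base time B.

```python
from typing import Callable, Iterable

def pure_future(applies_at: Callable[[float], bool],
                base_time: float,
                timeline: Iterable[float]) -> bool:
    """Toy rendering of paraphrase (70): true iff the state-of-affairs
    applies at SOME time ahead of the presupposed base time B.
    No particular time F needs to be identified by the hearer."""
    return any(t > base_time and applies_at(t) for t in timeline)
```

Note the contrast with the deictic tenses: there the hearer must identify the particular point-of-application, while here only aheadness relative to the base time matters.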

2.3.2. The prospective

Before going into detail with the pure future, it may be illustrative to begin with the "going to" future, representing the second source in Bybee—Dahl's inventory. As generally recognized, going to and its equivalents in other languages present the future as something that is already in evidence at the basis time (cf. Vet 1984): it is going to rain is apposite when you can see the clouds gathering, etc. This "prospective" sense thus depends on what is already the case ("embryonic futurity"), and a prospective can therefore be true even without having the predicted outcome — it can be aborted, so to speak. You can say he was going to do it even if it did not happen because he died two days later. The place of the prospective on a developmental scale that is different from, but moving towards, a pure future has been described by Langacker (1990: 21; 1991: 219-20) in terms of "subjectification". The development is seen as parallel in English and French; examples are:

(71) She is going to close the door
(72) Elle va fermer la porte

These sentences can have two senses. In the fully lexical sense each means that she is moving to a new place so as to close the door; in the prospective sense each means that in a little while she is going to perform the action of closing the door. The process of subjectification, i.e. the realignment of conceptualization from an "objective scene" to situation- and speaker-orientation, has two general stages. In the first stage the objective movement evaporates, to be replaced by a subjective movement by the conceptualizer from the reference point to an event; at the first stage going to means 'moving physically', but at the second stage it means only that a

352 The meanings of tenses

mental path is established to the event. In the case of the "go" future, this development is accompanied by a "shift in domain, from space to time" (Langacker 1991: 219); one might also say that the spatial movement would necessarily occur in time, so that what happens is not a shift but a reduction: the profiled spatial movement disappears, leaving a temporal movement that was originally only a concomitant aspect. Other elements are also bleached away (Langacker 1990: 23): the lexical reading involves an element of intention ("moving in order to"), which is absent in the temporal reading. What is left at this second stage, according to Langacker, is only a profiling of the temporal relationship between the reference point and the event.8 But regardless of whether one places the emphasis on the temporal "path" relationship or the "embryonic futurity" at the base time, there remains a characterization of the base time, which prevents it from being a pure future.9 Before we look at this subsequent stage, we shall look at the pure future as a member of the modal paradigm in English.

2.3.3. The pure future in relation to modality

Most authors, even those who do not want to speak of the future tense in English, see the future sense as being one of the variants of will. This verb, which is now part of the modal paradigm, originally expressed "volition", corresponding to the first developmental pathway in Bybee and Dahl (1989); compare its residual use as a main verb in such cases as God did not will it. In order to be precise about the way in which the pure future sense can be distinguished as a special item in the polysemic network that constitutes the semantic potential of will, it is necessary to take a closer look at modal meaning. Modality, as described in modal logic (cf. McCawley's version for linguists, 1981), can be described in terms of "accessibility" to alternative possible worlds. In a logical context this notion provides a way to assign truth values to modalized statements that cannot be evaluated by direct application to the actual world: statements of necessity are true if the statement is true in all accessible worlds, etc. But this is a truth-oriented version of an idea which might as well be understood in pragmatic terms. In the pragmatic perspective, which from the general point of view argued in this book is the most general perspective, the different accessibility relations correspond to different types of pragmatic factors that bear upon what alternatives we have available in the situation.
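The truth-oriented version of accessibility can be stated compactly. The sketch below is a minimal possible-worlds fragment in the spirit of McCawley's exposition, not a quotation of his formalism; the accessibility set stands in for the relevant "set of circumstances" (deontic, dynamic, epistemic), and the names are illustrative inventions of my own.

```python
from typing import Callable, Iterable

World = int  # worlds are just labels in this toy model

def necessary(p: Callable[[World], bool], accessible: Iterable[World]) -> bool:
    # a necessity statement is true iff p holds in ALL accessible worlds
    return all(p(w) for w in accessible)

def possible(p: Callable[[World], bool], accessible: Iterable[World]) -> bool:
    # a possibility statement is true iff p holds in SOME accessible world
    return any(p(w) for w in accessible)
```

Degrees of modal strength then correspond to quantifier strength over the accessible alternatives; the point made below about the pure future is that it dispenses with alternative worlds altogether, committing the speaker only with respect to later times of the one actual world.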

The future

353

In spite of their different character, moral, physical and logical factors can all be understood in terms of "forces" (cf. Sweetser 1990: 51) constraining our options in various ways. Perkins (1983) has analyzed modal verbs in terms that are compatible with this picture, suggesting that modality interposes a "set of circumstances" between the propositional content and the actual world which bear upon the truth of the proposition. Modal statements thus do not speak directly about what is the case, but only about those factors which bear upon the issue of truth/realization, as demonstrated in (73):

(73) George {should / can / may / must / ought to} deny it

In making one of these statements, although we are not in a position to commit ourselves directly to the truth of the naked statement "he denies it", we know factors that bear on the issue — otherwise we would simply have nothing to say. The modal verbs are sensitive to whether we are talking about social laws (deontic modality), laws of nature (dynamic modality), logical laws (in Perkins's terms, "epistemic" modality), and to various other factors, but basically they code the degree of force (which is what makes this metaphor linguistically attractive): all modal verbs are multiply polysemous when it comes to what type of modality is involved, but generally monosemantic when it comes to degree of strength: must always codes the "necessity" degree of strength, whether it is deontic (you must resign), epistemic (you must be the new postman), or dynamic (must you behave like an idiot?). With this picture of modal meaning, we can now try to place the pure future sense of will in the picture, as in:

(74) George will deny it

I suggest there is one similarity and one important difference between (73) on the one hand and (74) on the other. The similarity is that both the pure future and the modals describe something as non-actual or unrealized; and this is the central semantic argument against seeing the future in the context

354 The meanings of tenses

of tense (Lyons 1977: 677). The difference is that the pure future leaves no alternatives: it speaks categorically about the real world, although at a time that still lies ahead. This point is made by McCawley (1981: 342) when he points out that even if theoretically we are only in a position to speak with uncertainty about the future, we are in fact as speakers rash enough to commit ourselves unreservedly with respect to future events. This is a feature shared with tense: the time line does not branch out into alternatives, but remains undivided in the case of the pure future. This lack of alternatives means that the whole modal apparatus of "factors" and "accessibility relations" is absent in the pure future sense: only the passing of time remains. This is the special variant of the bleaching process that applies to the relation between futurity and modality. An important implication of this picture emerges if we compare what happens if we are wrong, when we make a modalized statement and a pure future statement, respectively. If we look at the whole modal statement (not just the prepositional core), it may make sense pragmatically to evaluate it in relation to the actual world. If we say this can be done, and it turns out that it is actually impossible, the whole modal statement was wrong in the actual situation — because actually it could not be done. If we say this will not occur again, the statement cannot be falsified in the actual situation (cf. Vet 1983) — because the whole modal appeal to alternatives has melted away in the pure future, and our statement only commits us to the future realization, regardless of what can be said about the present.10 This difference distinguishes the future from the modal sense because it puts the future outside the scale of modal strength. As a categorical judgement (cf. 
Davidsen-Nielsen 1988) it leaves no room for the proposition inside the scope of futurity to remain forever untrue, and therefore it is in a sense stronger than the strongest degree of modality. The evolution of a pure future sense as characterized here thus involves not just bleaching, but also an element of strengthening of meaning due to the increased generality (cf. Sweetser 1988). Seen in the context of the whole meaning potential of will, this puts the future sense in a special position, breaking out of the mid-level degree of "force" (between possibility and necessity) which is characteristic of the other senses. The existence of an internal paradigmatic contrast is exemplified in (75): (75) baby won't eat his mashed potatoes

The future


(75) can be interpreted volitionally or in terms of future time. In the first sense, you can go on to say: " — but I'm going to make him eat, if it's the last thing I do". In the future sense, such a continuation would violate the principle of sense, since it is incompatible with the statement about the future to which the speaker has already committed himself.11

2.3.4. Future meaning and content syntax

The semantic description of the pure future sense in relation to modal meanings on the one hand and the deictic tenses on the other must be understood in close relation to the structural place of the pure future in the content syntax. As outlined above, the choice of the future comes within the scope of the choice of deictic tense; this means that the aheadness associated with the future has its base time either in the present (ahead at S) or in the past (ahead at P).

The realization that the future does not belong in the same paradigmatic slot as the past or present has had a curious history in linguistics. As in the case of the "identification" aspect of the past and present, it would appear that because of the lack of a natural theoretical place for it, it tends to get forgotten again every time it is pointed out. Vet (1981) made a point of the fact that in all the standardly discussed languages, the future is not in the same paradigm as past and present, but can be combined with either (while the choice between perfect and non-perfect is tertiary). After this article appeared, it turned out that essentially the same point had been argued by the Dutch grammarian te Winkel in 1848 (cf. Verkuyl—Leloux-Schuringa 1985), and subsequently forgotten. When you read the argument in Allen (1966) and Bull (1960) from a position of hindsight, you get the feeling that each would consider himself as having priority on that point as well. And it is my general impression that the symmetric time line corresponding to the three paradigmatically distinct forms past, present and future is alive and well again as an account of tense semantics, from Comrie (1985) onwards — because the assumption that tense meanings equal time referents remains so strong.
What we have seen up to now is that there is both a semantic and a structural difference between deictic tenses and the future, but we have not looked at the relationship between them. On closer inspection, it will be apparent that the principle of sense motivates the requirement that the future be


immediately inside the scope of deictic tense. Both "inside" and "immediately" follow from the very nature of a future from which all content is bleached away except the element of taking a peek ahead. If we detach it from a deictic tense, the base time disappears, and what remains would be aheadness in relation to an unspecified time — which is incoherent: we need a base time if the idea of taking a peek ahead is to make sense. Further, if we try to put the present tense inside the scope of the future, we would assign unanchored aheadness to the present moment, which is doubly unintelligible — apart from the fact that we need to ask "ahead of what?", there is no obvious sense in which the actual moment can be ahead. Similarly, with the past we would assign unanchored aheadness to an identified time in the past.

Another way of making this point is to think of the possibility of a non-finite future tense. The infinitive form, indicating an atemporal relation in Cognitive Grammar terms, would have to designate precisely that meaning which has been bleached away — namely the property that characterizes the situation at lookahead time. Since the pure future is defined as having lost all semantic substance except futurity, there is quite literally nothing for the infinitive to mean in itself. In English there is no infinitive of the modal verbs at all, so there is no way of testing this reasoning out. Danish has an infinitive form, however; and here the pure future interpretation is in fact not ordinarily available.12 In examples of the following type, Danish at ville can only be understood volitionally:

(76) Det er skræmmende at ville dø
     'it is terrifying to want to die'
     (*'it is terrifying to be about to die')

The reason is that "volition" is what characterizes the situation at lookahead time; when that is bleached away, in the pure future reading, there would be no specifiable content that could form part of a potentially terrifying state-of-affairs.
That is why the future must be immediately inside the scope of a deictic tense, and hence always occur in a finite form: the elusive element of aheadness can only make sense if it is anchored to an identifiable time from which one looks ahead. This is a case, therefore, where semantic substance plus the principle of sense directly explains the content-syntactic relations. For that reason one would also expect the relations to be the same in all languages that have such content elements.


This content-syntactic account can be understood as a motivation for the distributional patterns used by Davidsen-Nielsen (1988; 1990a,b) to argue for the recognition of a specific future form in English. The distribution should not be seen as a body of independent formal facts, but as a reflection of the semantic meaningfulness criterion: if we have a pure future, we must combine it with other elements in this particular way, prohibiting non-finite forms, etc.

One thing that the principle of sense would not prohibit, however, would be to have a paradigm of past, present and future (where the future was understood to be anchored in the present). There is no natural limit to how much language may force into a single paradigm; the uppermost paradigm in the semantic clause structure of Koyukon straddles what in other languages goes from deictic tense via modality all the way down to aspect (cf. Fortescue 1992). As proposed by Bache (1985, fc), we can set up universal categories based on a defined invariant opposition in meaning and investigate language-specific oppositions based on that — but just as there will be differences between the areas of meaning covered by language-specific signs and "universal" semantic prototypes, there will also be differences between any "universal" prototype oppositions and language-specific paradigmatic oppositions.

2.3.5. The status of the past future

We have seen above that the future is both semantically and structurally distinct from the deictic tenses. Structurally, the main difference was its combinability with both the present and the past. However, many accounts of tense sidestep the issue of the future in the past. There are many reasons to be hesitant about it, some good and some not so good.

One not so good reason, which one may suspect has been influential in suppressing a discussion of the past future in many contexts, is that it is not so easy to pin down on the time line; and since the time line, as we have seen, has had almost axiomatic status in most discussions of tense, this makes it tempting to avoid the forms that are semi-contradictory in linear terms because they are simultaneously past and future. With the point made by Vet about the secondary nature of the future, the past future fits unproblematically into the system; most Reichenbachian accounts have also included it after the precedence relations were seen to be binary rather than ternary, even


though simultaneously the present future is generally given special status as an "absolute" tense.

A better reason to be wary is that the forms would V and would have V-ed are much more frequently used to indicate non-temporal than temporal relations; they are better known by the names of "conditional" and "conditional perfect" than as indicating past future or past future perfect (we shall return to this issue below, pp. 454 and 459). There are also some other facts about usage that seem to restrict the occurrence of the forms in such a way that one would prejudice the whole notion of a coherent tense system by admitting these deviant members to the time-honoured brotherhood called "the tense system". Leech (1987: 53) describes the forms as being restricted to a "largely literary style of historical writing or narrative". Moreover, they behave differently according to whether the event has in fact taken place between then and now or not. Leech views only instances where the event has occurred as instantiating the "proper" past future. Leech's example is (77):

(77) Twenty years later, Dick Whittington would be the richest man in London

Only when it is now more than twenty years later, and he was in fact the richest man in London at that time, would this be a proper past future. If not, Leech describes it as a kind of free indirect speech, "as if a parenthetic 'he said to himself' were added" (Leech 1987: 54).

However, I think there are factors that will account for these oddities in such a way that we have to include the disreputable relatives as far as English is concerned, once we decide to include the "ordinary" future. If the tense system looks neater without them, this is one more reason to include them — assuming that the task is to describe structure based on semantic substance without presupposing an entity called "the tense system".
If we begin with semantic substance we find the same element of pure aheadness without any information about the path in an identifiable subclass of occurrences with would, exemplified by Leech's cases above. Like the ordinary future, the past future as denoted by would involves categorical commitment to the aheadness of a state-of-affairs, only here the aheadness is viewed from the base time indicated by a past tense rather than a present. This is automatically predicted by the content syntax: what is located at the past point-of-application is not the state-of-affairs as such but the combination that includes the future.


The central oddity of the past future stems from this fact, when considered in the light of the principle of sense: how can it make sense for the speaker to commit himself to aheadness as calculated from a past vantage point? There will always be two tempting alternative ways of expressing oneself, one for each of the situations that Leech distinguishes. If the event has already occurred, the simple past tense offers itself as an alternative; this is what speakers "in practice" (Leech 1987: 53) use instead. But if the event has not yet happened, the speaker, if he wants to commit himself to its aheadness, has the more informative option of stating the aheadness based on his own vantage point, S — i.e. to use the present future. As further discussed below (p. 434), this is what happens standardly in reported speech:

(78) "it will happen within the next few days"
     → he said it would/will happen within the next few days

If the present-tense option is not taken, the Gricean mechanism prods the addressee to look for an explanation, and one option is the "tinge of quotation": the "rashness" involved in committing oneself to aheadness is not in the speaker now, but somewhere else, possibly in another person speaking at the past time that is referred to. The quotational tinge can therefore be accounted for in a Gricean manner, leaving the coded meaning exactly as it is in the present-tense form.

The difficulties in seeing the options for sense-making of the combination of past and future may also account for the literary flavour. Granted that the past is always an alternative when the present future is not, there must be a reason for avoiding this simpler solution. There is an extra element of temporal location which has nothing to do with objective facts — it is a purely conceptual detour, which must have a point that has nothing to do with knowledge of the facts in themselves. In strictly factual communication that would bring about a violation of the principle of sense whenever a past future was used: an element which is redundant or pointless in relation to the facts.

When would it be possible for the extra temporal location to have a point? Only when the point of view from which the event is viewed makes a difference; and the point of view only has a chance to become important to the extent that an internal textual perspective assumes a life of its own, rivalling the interest of the bare facts in themselves. Obviously this priority


of interests in itself favours a "largely literary" manner of writing, since fiction is the example par excellence of textual perspective becoming more interesting than the facts. On the other hand, once the telling is taken to be of interest in itself, it need not have the musty flavour that is sometimes associated with linguistic items that are said to be limited to a literary style. I once heard an example in a (British) TV sports commentary reporting on a cross-country rally; having followed the drivers through the first three days, the reporter said (roughly):

(79) On the fourth day, conditions would not be any easier

The choice of the past future means that we look at what was lying in wait for the drivers, rather than what happened to them. The picture of a mud-covered car on the screen was therefore the picture of someone with more trouble ahead, whetting the appetite of the onlooker for the next pictures. If the simple past had been used, the narrative would have moved on, either leaving the car behind or forcing an interpretation according to which we were now on the fourth day. Another non-literary example:

(80) A suit [concerning illegal segregation] was brought against Clarendon County Schools. It was dismissed on a technicality, then resubmitted by Thurgood Marshall and the NAACP as the first direct challenge to school segregation in the South. Over time, the Clarendon suit would be bundled with others — including Brown v. the Board of Education of Topeka (Kansas) — and on May 17, the U.S. Supreme Court would rule unanimously that segregation of the races was unconstitutional. Forty years later, Levi Pearson's widow, Viola, still lives just off the road.... (Newsweek, May 16, 1994: 26-27)

The story compares what happened when it all started with the situation now; and the past future is a way of staying mentally at the beginning, looking ahead — until the point-of-view jumps to the present time in the last two lines.
With autobiographies, this complicated temporal perspective is almost built into the genre. The narrator speaks with the authority and claim to interest of the person he is now; the subject of the narrative, however, is


his past. These two times can therefore be taken for granted throughout the text without requiring to be established anew. The past future can therefore be used with less extra effort than usual when the author wishes to see the events in the light of what was in store for him at the time. It is also in an autobiography that I have found one of the few "past future perfects" that I have come across:

(81) In one sonnet written in that winter of 1924 I looked forward without relish to my solitary future — "Eating a Lyons chop in 1930" the sonnet pessimistically began. How could I have imagined that by 1930 I would have been already happily married for two years? But the reality of a passion should not be questioned because of its brevity. A storm in the shallow Mediterranean may be over in a few hours, but while it lasts it is savage enough to drown men, and this storm was savage. (Graham Greene: A Sort of Life, p. 125)

Here Graham Greene keeps four different times going: the generalization about passion applies to the "now" of the reading event; his narrative has got to the winter of 1924, from which he looks forward to 1930 — which in actuality was two years after his marriage (and thus different from his expectations in 1924). The contribution of the extra point-of-view is evident even in two-sentence examples like Comrie's; compare (a) and (b) below:

(82) John left for the front;
     (a) he would never return
     (b) he never returned

The two versions communicate exactly the same facts, including temporal sequence. But what we hear in the (a) example is John's bleak future at the time he goes to war — the story of a young man facing death. In the (b) example the story moves on, telling us of his departure and his end in two seconds flat, disposing of him at once in order to get on with other things.
To someone who only wants to know the facts it makes no difference; strictly utilitarian communication therefore has no time for such frills. Yet it is more than a stylistic quirk. The difference between the pure past future and a form which says a little about the path, namely the was to "destiny" form, can be illustrated by the following quotation (on the 1950s):


(83) The postwar baby boom was to make the behaviors and beliefs of that decade's offspring disproportionately significant for the rest of their lives. The media, the market, and all social and political institutions would follow their development with heightened interest. (Judith Stacey, Brave New Families, p. 10)

Was to in the first sentence suggests something about the destiny of the baby boomers; would follows up on this by describing what actually lay ahead. To exchange the two forms would not be impossible (would, as the more bleached form, is compatible with a "destiny" interpretation). However, it would weaken the textual progression I have described, by encouraging a "destiny" interpretation of the situation of the media and institutions at the time, instead of sticking with the family history of the "boom" generation.

When this has been pointed out, it should be emphasized that there is of course a sociolinguistic dimension involved, and the choice of the past future is indeed a restricted stylistic choice, even if this has a semantic motivation; it is one of the features that can be used to identify the individual styles of authors, rather than a homogeneous social norm.

2.3.6. The pure future in Cognitive Grammar

I have postponed a comparison with the overall role of the pure future sense of will in Cognitive Grammar until after this account of the past future, because the main difference comes out most saliently in relation to the combination of futurity and pastness. In the semantic accounts per se there is very little difference. As we saw above in Langacker's account of the semi-subjectified "prospective", there is one stage which this type of meaning has yet to go through. This involves the ground (i.e. the deictic centre) becoming obligatory as reference point, and the relationship between the ground as reference point and the event losing its profiling, becoming instead "an unprofiled facet of the base" (Langacker 1990: 25) — compare the description suggested above, according to which a pure future says nothing about the base time. With the pure future, even the "path" to the event disappears from view, so we no longer conceptualize the route, only the end. Such a future is exemplified by both French il ouvrira la porte and English he will open the door (cf. Langacker 1991: 220); if we see this in relation to the modal senses of will,


the description in Langacker (1991: 278n) states that the last stage of bleaching eliminates the element of subjectified modal "force" from the meaning of will, thus reducing it to a pure future, although along a different path than the one associated with the prospective.13

The difference between the account given here and Langacker's lies in the issue of where the future meaning belongs in the semantic structure of English. As described above (p. 345), Langacker sees the choice present/past as belonging together with the choice modal/non-modal, namely in constituting the "grounding predication", analogous with the choice of determiner in noun phrases. In this grounding predication, there are two choices: near vs. distant (associated with present vs. past tense) and reality vs. irreality (associated with non-modal vs. modal). When combined, the two choices give four different possibilities (cf. Langacker 1991: 245): immediate reality (simple present), non-immediate reality (simple past), immediate irreality (modal present), and non-immediate irreality (modal past). This account also implies that the present is understood as unmarked in the same way that a non-modal verb is considered unmarked; the present is not considered a full-scale paradigmatic alternative to the past, but is basically non-past.

The "grounding" role of modal meaning is part of a theory that is also used to explain why English modal verbs have no non-finite forms, unlike their counterparts in other Germanic languages (compare the discussion above on the possibility of a non-finite future): the meanings of English modals have progressed further on the scale of subjectification than in other Germanic languages. Not only has their meaning been reoriented from the objective scene towards the subjectively conceived ground; the modal aspect has also lost profiling, so that a modalized construction in English profiles the state-of-affairs alone, not the modality under which it is viewed.
Since lack of profile on the modal meaning is incompatible with the semantics of non-finite forms, the modal verbs have lost their non-finite forms.

I think Langacker's account expresses some important truths about English structure. First of all, this includes the high degree of subjectification of English modals, visible in the "commutation" pair of modal vs. non-modal need:

(84) he needn't come back
(85) he does not need to come back


In (84), the source of the modality is unambiguously the speaker, who places the state-of-affairs in relation to his own view, whereas in (85) the modality can also be ascribed to the subject. Secondly, it points up the extent to which tense and modality are fused in English; tenses do not typically have their temporal meaning when combined with modal verbs (cf. also Perkins 1983). Langacker's theory reflects the fact that modal verbs constitute one of the linguistic contexts that licence the non-temporal interpretation of the past tense (cf. also on conditionals below, p. 458).

In comparing this account with my own, I think it is useful to distinguish between the ideal situation for a theory and the facts with which it is compatible. Neither theory fits the facts in English directly; both theories, with the necessary qualifications, can be made compatible with all the facts. However, Langacker's account corresponds ideally to a case where all modal verbs are equally subjectified, and where a modal verb can only have a modal past signalling "distant irreality", rather than a temporal sense that places the modality at an identified point in the past — what might be seen as the prototype case in English. My basic content-syntactic structure corresponds ideally to a situation in which tense and modal verbs are totally independent, and tense signals the temporal point-of-application for a modalized state-of-affairs (which is the prototype case in Danish). Nevertheless, even for English I think there is a certain advantage in taking one's descriptive point of departure in the situation where tense is obligatory and takes modality as an optional element inside its scope, rather than seeing grounding in terms of two cross-classifying binary features. The argument depends on how we want to understand the cases that do not conform to the English prototype.

First, there is the issue of subjectification. The verb can is perhaps the least subjectified; in a case like he can speak English, it typically indicates "objective" and "dynamic" modality. It is more naturally described as making a truth-conditional claim about an object than as describing a subjectively construed unreal state in which he does speak English (compare may and must, which I see as fitting Langacker's description). The verb can also be modified independently, as in he can easily swim a mile (cf. Bybee—Dahl 1989: 63), one of the criteria used by Davidsen-Nielsen (1990b) to distinguish "true auxiliaries". There is thus not total isomorphy between the semantic substance property "high degree of subjectification" and the structural slot occupied by the modals; there is nothing semantically impossible about having an infinitive form with a meaning corresponding to the "ability" sense of English can.


Secondly, there is the question of how we describe the cases where modal verbs can have an ordinary temporal past (and present). In those cases, the modalized state-of-affairs is applied either to S or to P in the manner described above. Again, this situation obtains with can:

(86) he could pronounce the word yesterday
(87) he can pronounce the word today

This situation is recognized by Langacker, as evident in the discussion of reported modals (1991: 256); but clearly it fits better with a description where tense and mood are not inherently bound up with each other. There are also cases where the modal past has almost taken over, but rudiments of the temporal interpretations remain; this is the case, for instance, with might. According to my description it is a case like that of Siamese twins, which should be understood on the basis of a situation with two separate individuals who happen to be partially fused. In an example like

(88) Jill said she might help us

the element of distance in might can be understood in two ways. Either it refers to the source situation, which is distant in the sense of being temporally past, or it can be understood as distant in the irrealis sense: something which might happen is not as close to reality as something which may happen. If we take tenses as basically separate elements, this ambiguity is accounted for already in the internal paradigm between the temporal and the non-temporal variant of the past form, even if very little (only the indirect speech reading) remains of the temporal option.

Another reason why I think it is preferable to account for such cases in terms of a basically temporal tense system that takes mood inside its scope is a sense of what is the natural directionality of explanation. Langacker's account entails that tense and mood are equally obligatory; both have a marked and an unmarked option.
Above I argued in general terms that the present tense is more than just non-past; specifically in relation to modality I think it is obvious that you can have tenses without modal verbs, but considerably less obvious that you can discuss modal verbs in complete isolation from tense. Langacker's account, which sees the third person -s as marking agreement rather than present tense, would not be possible for


Danish, where we have no agreement but a generalized present tense ending, with no obvious difference in tense meaning specifically in the modal verbs, and no difference in the sense of the language forcing you always to make a tense choice as opposed to a modal choice. Therefore you cannot very easily go from Langacker's theory of English to an account of the Danish system, whereas it is fairly simple to go the other way.

The theory of tense, in this picture, would be one in which something special happens when tense interacts with the modal verbs, and where this manifests itself to a certain extent in Danish and to a considerably greater extent in English. Langacker's theory would fit into this picture as an account of what is typically the case in English in this (tense-wise) special case. The can-like situations would then be normal from a pure "tense" point of view, but abnormal from a "tense+mood" point of view. The cases in which modality is applied to the past time strike me as degenerate cases of "non-immediate irreality", and as more naturally describable in terms of a situation where modal verbs obey the "normal" pattern instead of having become assimilated to the special modal pattern.

This argument has special relevance for the future sense of will. If the "normal" situation only obtained in cases where the modal verbs were really camouflaged main verbs, it would show that the fusion was bound up with the grammaticized and subjectified status of the modals, and thus it would apply to the future as an even more highly subjectified case. But as discussed above in the case of the past future, the future can in fact be placed within the scope of an ordinary temporal past. Like the reported modals, the past future can be accounted for if we see its "non-immediacy" as manifested in temporal pastness.

This situation, however, cannot be placed in any obvious way in a picture where the modal statements belong in irreality and as such go with a non-temporal reading of the past tense (cf. Langacker 1991: 244) — because it has one foot in reality (the temporal past) and one foot in unreality (the aheadness). The present future as a form of irreality can be placed ahead of the present, because the future is part of irreality — but there is no similar domain of unreality ahead of the past. Also, in the cases where the event has in fact taken place at time S, the state-of-affairs would seem to belong within the overall domain of realis, even if it was unrealized at time P. Although he describes his theory of the grounding predication as basically modal rather than temporal, Langacker says that in the case where we are only interested in temporal relations, "the past-present-future

The future

367

distinction seems intuitively compelling" (Langacker 1991: 243), and therefore lets it emerge as a "special case" within his system. Thus, although Langacker is far from having a tripartition into past-present-future as his basic theory of tense, it stands as that special case in which the system addresses purely temporal contrasts, and thus stands in opposition to the two-by-two choice (present vs. past plus future vs. non-future) that is the cornerstone of the content syntax that I have suggested.14
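
The two-by-two choice can be sketched schematically. The following toy function is my own illustrative construction (the function name and the mapping to regular third-person-singular verb forms are assumptions, not the author's notation); it simply enumerates the four cells that result from crossing the deictic choice (past vs. present) with the future vs. non-future choice:

```python
# Toy model of the two-by-two paradigm: a deictic choice (past vs. present)
# crossed with a choice of future vs. non-future.
# Illustrative assumption, not the author's formalism.

def tense_form(stem: str, past: bool, future: bool) -> str:
    """Compose an English verb form from the two paradigmatic choices
    (regular verb, third-person singular subject assumed)."""
    if future:
        # the future marker sits just within the scope of deictic tense
        return ("would " if past else "will ") + stem
    return stem + ("ed" if past else "s")

# The four cells of the paradigm for 'walk':
paradigm = {(past, future): tense_form("walk", past, future)
            for past in (False, True) for future in (False, True)}
```

On this sketch, `paradigm[(True, True)]` yields "would walk" — the past future discussed above — while the simple forms fill the two non-future cells.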

2.3.7. The status of the future in the structure of English

Assuming that the discussion has covered the central facts about substance as well as structure in connection with the pure future, it is time to discuss the status of the future in English. In the place of the clear answer that I have already ruled out, we can try to be precise about the possible criteria and the available evidence. As we have seen, there is nothing arbitrary about the position of the future in the paradigm just within the scope of identifying tense — that is where all pure futures must be. What is specifically English here is in other words the existence of a tightly closed modal paradigm in that precise content-syntactic position, leaving one slot only for modal verbs that can be combined in other languages, such as epistemic plus deontic modality:

(89) *he must should do it
(90) Er muss es tun sollen (German)
(91) Han må skulle gøre det (Danish)

In describing the role of the future sense of will in this paradigm, we need to look at it in the context of the other senses. I shall refer to the following survey of the polysemic cluster associated with English will:

(92) "volition" (cf. Quirk et al. 1985: 229):
       (a) intention: I'll write as soon as I can
       (b) willingness: I'll do it, if you like
       (c) insistence: If you will go out without your overcoat, what can you expect?
     "pure future", as dealt with above: It will be okay
     "predictability" (cf. Davidsen-Nielsen 1990a: 163):
       (a) specific: they'll be watching telly now
       (b) habitual: she will sit there for hours doing nothing
       (c) general: boys will be boys
     "order": Private Smith will report at 1800 hours

Most people would see the cluster as fairly cohesive, constituting a natural semantic territory for a frequent and grammaticalized expression. One could even argue quite persuasively for a monosemantic interpretation (cf. Klinge 1993). The point of departure for the discussion is therefore the naturalness of seeing futurity as simply a variant of modality. At the time when the frontline in grammatical discussion was between the moribund notional-semantic universalism based on translation from Latin and the modern each-language-on-its-own-terms grammar, it would appear that only a Latin bias would make anyone see English as having a future — a point of view that is still reflected in the treatment of the issue in modern descriptive grammars (cf. Quirk et al. 1985, Leech 1987, Palmer 1987). With grammaticization theory, the approach based on semantic substance received a fresh start, supported by the prestige of the revamped non-squinting universalism. If we operate with grams seen as independent of the structural facts of the individual language, it is much more natural to see English as having a future. In looking at the question, it is useful to see it in relation to two other cases, French and Danish.

The content-syntactic relations with deictic tense, motivated as they are in semantic substance, are the same as in the French future form. Two things make the French future more structurally well-defined. First, its morphological expression side. This argument should be treated with caution, because it has all the weight of the tradition behind it, and it may therefore feel more compelling than it should; but it does set the future apart from auxiliary elements, and indicates a greater degree of grammaticization. Secondly, the paradigmatic position of the future: in French (cf. Vet 1986, Harder 1990a), there is a structurally well-defined paradigmatic opposition between the prospective (il va arriver) and the future (il arrivera); they cannot be combined (*il ira arriver), and in contrast to English going to it is not natural in the infinitive: (Mother to child who says that although he had not done his chores, he was just going to)

(93) to be going to do it is not enough
(94) ?aller le faire ne suffit pas

Apart from the alternative futures, nothing else goes into that precise paradigmatic slot; hence, "futurity" has a structural home of its own in French. On this point, the English future is therefore a sort of squatter in the modal paradigm, a less than structurally clear-cut entity. However, in relation to Danish, other sides of the position of English become apparent. Danish is similar to English in that it has an expression for pure future (vil, cognate with English will); and if it occurs, it must be directly attached to the deictic tenses (for the reasons discussed earlier), and generally has the distributional properties associated with a future sense (cf. Davidsen-Nielsen 1988). But there are two main differences. First, the position of the pure future sense in the meaning potential of Danish vil is rather different; the Danish network would look roughly like this (cf. Davidsen-Nielsen 1990a):

(95) Main verb construction: Er det det, du vil?
Is that what you want?


Auxiliary construction:
  strong volition, "insistence": jeg vil være formand
    I want to be chairman
  weak volition:
    intention: han vil gå en tur om lidt
      he'll take a walk shortly
    willingness: jeg vil (gerne) hjælpe dig
      I will help you
  future: han vil ikke komme til skade
    he will not get hurt
  present predictability (very restricted, cf. Davidsen-Nielsen 1990a: 166): Som den sagkyndige læser vil vide...
    as the specialist reader will know...

In the summing-up below, the left-to-right dimension reflects diachronic drift (roughly, cf. Traugott 1989), conceptual development and permitted scope readings, compare p. 295: the first two columns ascribe volition to the subject and will therefore defines a state-of-affairs, as in (48); in the next two columns, will is outside the state-of-affairs, as in (49).

(96)  strong volition        weak volition            future     predictability
      insistence (main vb)   willingness, intention              (order)

English and Danish both cover basically three of the stages at the same time: English has all but lost the first stage, which is alive and well in Danish, while Danish has not really developed the last stage, which is well-established in English. In this picture, it makes sense to say that vil is not as centrally established in the Danish content system as a marker of future meaning as will in English.

This is more than a graphical trick. As emphasized in Part Two (p. 206), the notion of "centrality" is essential in describing coded meaning potentials, and the relations between the pure future sense and modal senses in the polysemic clusters above are a case in point. The rightmost types of senses are those we also find in French; as emphasized by Vet (1983), compare also Comrie (1989), Davidsen-Nielsen (1990a: 166), they must be understood as dependent on the future sense rather than the other way round. The epistemic "predictability" sense in which the speaker is saying something about the present whose verification remains in the future (ça sera le facteur = that will be the postman) is "based" on the temporal sense in the panchronic manner described in Part Two: the "predictability" sense develops later than the pure future, and trades on the temporal futurity as a metaphorical extension of it (cf. Sweetser 1990: 50, 62). The pragmatic reason for the extension is not difficult to see: the rashness that was posited as a necessarily non-conceptual element in the meaning of the categorical future is the same when we speak of things in the future (which we cannot in the nature of things go and take a look at) and when we speak of things in the present — which as a matter of contingent fact we have not investigated properly yet. It can also be seen as a bleaching process whereby the pure future has both event and verification in the future, and the epistemic sense has only the verification (cf. Vet 1983). Similarly, the use of the future with imperative force (you will eat your nice spinach immediately) presupposes the future sense for its tinge of frigid impersonal coercion. Thus the future is clearly more entrenched in English, although it is not as structurally well-defined as in French.
The structural status of the will form is wobbly in relation to the volitional senses: will is only unambiguously available as expressing pure future when the volitional senses are ruled out. This means that it is to some extent a combinatorial variant (cf. the discussion of the meaning of red in the combination red hair p. 208): when the subject is a non-agent, the pure future sense is the most obvious — when the subject is a volitional agent, the speaker cannot use the form to express pure future unambiguously. Yet the relative entrenchedness of the will-future also shows itself in this case: when there is a risk of misunderstanding, the meaning potential of the progressive has a special variant which makes it possible to avoid the volitional element, compare (97) and (98):


(97) when will you pay me back?
(98) when will you be paying me back?

Where (97) can be read as a question involving willingness, (98) sanitizes it to a question of pure time (cf. Swan 1980), because the "ongoingness" of the progressive makes it an unlikely candidate for a volitional reading: it is unnatural to aim to be in the middle of something, as in I want to be leaving/helping, rather than change to a new state-of-affairs, considered as a whole: I want to leave/help. For comparison, the polysemic cluster associated with the French future has had its modal origins (deontic, rather than volitional) pruned away, leaving the pure future sense as sole prototype for the other senses. With respect to the question of paradigmatic choice, futurity in Danish is again less entrenched than in English: Danish has no well-defined paradigm for expressions of future meaning; it even lacks a "prospective" expression, offering only various idiomatic expressions expressing different types of lexical paths towards the future. The last issue that distinguishes Danish from English and French is interesting because it pinpoints one respect in which French and English are similar and different from Danish. This is a structural fact about English which is not covered by an account which says only what is implied so far. The fact that I am referring to is mostly expressed by saying that there is far greater freedom in Danish (and other Germanic languages) to refer to future events by means of the simple present than there is in English (cf. Davidsen-Nielsen 1990a: 117).
Because the Danish simple present can cover anticipated events as well, the expression of pure future is an optional choice reserved for cases where the temporal location would otherwise be unclear — typically in states (which are naturally understood to refer to the present moment), when there is no temporal adverbial to fix the time, as in

(99) der vil være mulighed for at stille spørgsmål
     there will be possibility for to ask questions
     there will be time for questions

The fact that in English one can speak of future events in the simple present only if they reflect the way the world is at the time of speech (cf. the discussion on the game starts at eight above p. 347) indicates that there is
a semantic territory that is unoccupied, if we look at the division between the two deictic tenses, past and present: they do not together exhaust the domain of time-reference, contrary to what the customary description of the present as a "non-past" suggests. In English, the present tense meaning ("applies now") is to be understood strictly: it cannot be used to speak of what is ahead without the syntagmatic assistance of a form that designates aheadness. Thus, by selecting "simple present" instead of "future present" you are in fact selecting "non-future". It might be thought that one could handle it merely by saying that in Danish the present is indeed the "non-past" form, and English is just eccentric on this point. But on closer inspection, this turns out to be wrong. This is revealed in the fact, which tends to be overlooked, that the past behaves in a parallel fashion. It must therefore be stated at the level of the compositional paradigm, as something about the status of the choice +/− future in English, and not as purely relevant for the constructional choice between present and present future.15 To see how the difference affects the past tense, compare the following examples:

(100) John havde lagt en seddel om at han kom senere
      John had left a note about that he came later
      John had left a note saying that he would come later

(101) Du afleverede opgaven i morgen, ikke?
      you handed in the-assignment tomorrow, not?
      you were going to/would hand in the assignment tomorrow, right?

To account for this as a structural property of deictic tenses in Danish and similar languages, the notion of point-of-application can be described as having a more elastic definition in the Danish than in the English system. The question in (101) must be answered not on the basis of things as they are now: the addressee must look at the identified time in the past in order to answer the question properly. But the question of applicability does not entail that the state-of-affairs actually occurs at that time — because a realization that is ahead is still taken to mean that it "applies", that it describes the way the world is (or was).


This may appear to be semantics in the political sense of the word — but I think a plausible rationale for the usage can be established. If we look at the applicability of a description as analogous to the validity of a theatre ticket, there is a fairly straightforward sense in which a ticket for today and one for tomorrow are equally good cases of something that is applicable, whereas a ticket for yesterday is no good. On this analogy, a statement about tomorrow has a form of relevance that is similar to that of a statement about now — whereas a statement about yesterday does not affect our current situation in the same way. Thus a statement may apply to a situation now, including the stretch of time ahead — or it may apply to a situation then, including the time ahead of "then". The upshot of this is that there is a sense in which the choice future/non-future is obligatory in English, while it is optional in Danish — while the choice between past and present is obligatory in both languages. Obligatoriness, however, is not a simple notion. Even deictic tense is not obligatory in the absolute sense. You can speak in imperatives, noun phrases and interjections without it. As described above, it is better expressed as a structural dependence between the declarative/interrogative paradigm and the past/present paradigm. In the case of the future in English, the dependence is most obvious on the level of substance: in talking about a time ahead, you can always make a lexical choice to express it:

(102) I plan/intend/am going/want to be there tomorrow

Yet the inability of the simple tenses to cover the "time ahead" territory is structurally significant. The difference between English and Danish does not concern the full lexical ways of talking about the future, because there the choice is basically the same: plan, intend, etc., have equivalents in the other languages. The locus of the difference is exactly in those cases where there is no specified path to the future, the importance of which is reflected in the fact that the form with will is "much the most common" way of referring to the future (Quirk et al. 1985: 217). It therefore affects the way we describe the simple tenses, as structural options in English — which is clearly part of the grammar. In the English system, the non-future goes with all the other choices unless you do something about it.

The reason I want to emphasize this aspect is that this is an observation that I think is difficult to capture unless you have an approach that is both substance-based and structural. If you look for structural contrast only,
English looks just like Danish both with respect to the simple tenses and with respect to the contrast between will and zero. If you look only for the semantic substance covered by the future gram in itself, there is no difference either. The difference emerges only when you look at its structural interplay with non-future alternatives. It therefore serves to highlight the central rationale of the approach in terms of substance-based structure, demonstrating why the individual gram and its diachronic orbit are not all we need. In order to get from the cross-linguistic abstractions of grammaticization theory to language-specific concretion, we need to add the paradigmatic dimension and the constructional regularities which the cross-linguistic grams enter into: the patterns established by the individual language, random as they are from a cross-linguistic perspective, are important not only as idiosyncratic facts but also as a whole dimension that is simply absent from the cross-linguistic perspective: apart from individual grams, human languages consist of syntagmatic combinations and paradigmatic choices, and these represent different ways of establishing an overall total pattern of coding options that a native speaker has at his disposal. Any language description totally based on grams would be defective merely by virtue of being atomistic. The role for cross-linguistic generalizations in this picture is to specify a range of interesting abstractions, which one can then apply to the language-specific descriptions. If we set up an abstraction corresponding to the structural skeleton of the eight-tense system and ask if English has a future tense that fits into that pattern, the answer is clearly yes. If we ask whether there is an independent paradigmatic slot for futurity in the system, the answer is clearly no.
If we begin by asking about the structural organization of the verbal system in English, the salient fact is fairly obviously the strongly entrenched position of the modal paradigm just within the scope of deictic tense — and the future stands as a squatter in it. But because the future is strongly entrenched as an indicator of aheadness, we cannot describe the other forms adequately except as negatively specified in relation to the future. The fact that an approach in terms of substance-based structure brings out the role of the non-future dimension of English verb forms illustrates the usefulness of this particular approach also on a practical level: because they can transfer most of the structural distinctions in the Danish system to English, Danish learners tend to assume that the semantic territory of the Danish tenses is the same; and a description based on semantic substance
as organized by linguistic structure is the only one that is guaranteed to pinpoint this fact.

2.4. The perfect

2.4.1. The role of content syntax

In contrast to the future, there are no problems in relation to the structural status of the perfect in English. Although it is part of a polysemic cluster of senses associated with the verb have (cf. Goossens 1993), the verb as expression side of the perfect is paradigmatically clearly distinct from all other occurrences: no other sense can be expressed in the position in the content syntax where it is within the scope of deictic tense and modality, but outside the scope of the progressive. The perfect constitutes one of the most-favoured topics of English grammar. It is also one in which the lack of a clear awareness of the existence of a content syntax specifying how the semantic elements of the sentence work together has made for considerable confusion in the literature. This is true in two respects. First, the combination "present perfect" has had such an overwhelming prototype status in discussions of the category that a surprising number of accounts are to a greater or lesser extent guilty of mixing up the perfect in the sense of the "present perfect" and the perfect in the sense of "element conveyed by auxiliary have + perfect participle". This is probably partly for historical reasons; in relation to Latin grammar, the perfect only refers to the present perfect, a usage which is still widespread. A striking, but not uncommon example of the confusion is found in the semantic characterization of the perfect as indicating a relationship between past and present time — used in accounts which also include the past and non-finite perfects (as in Van Ek—Robat 1984). Secondly, the lack of an awareness of content syntax is visible in the practice of dividing the (present) perfect into a number of different variants, among which the "experiential" (I have seen him), the "resultative" (the plane has taken off), the "hot news" (Malcolm X has just been assassinated, cf.
McCawley 1971) and the "continuative" (I have lived here for ten years) are the most salient. The problem is essentially the same
as in the case of the "usage" variants of simple present; because the description of states-of-affairs is a relatively new thing compared with tense, the tradition places these properties under "tense", where the perfect traditionally belongs. In the case of the perfect, the role of adverbials in specifying the precise nature of the state-of-affairs is particularly important. The "continuative" variant is still often described as indicating that "the state continues at least up to the present moment" (cf. Quirk et al. 1985: 192). As demonstrated by Steen Sørensen (1964), this is fairly obviously not part of the meaning of the perfect — because the perfect applies to both the meaning of the verb and the adverbial. An example (cf. also Klein 1992) is:

(103) Joe has worked (on this project) for ten years

Here the perfect applies to the content of the whole state-of-affairs 'Joe work for ten years on this project', not to the naked predicate meaning 'work', or 'work on this project'. That is why Joe has worked (on this project) is not natural as a "vaguer" variant of (103); the perfect form can in fact not be used unless the state-of-affairs is anterior (an "accomplished fact", in Steen Sørensen's formulation). What continues is not the whole state-of-affairs, but only the state-of-affairs minus the time adverbial, and therefore the perfect does not mean "continuation into the future"; as discussed above p. 217, in certain cases "continuation" is a syntagmatic implicature of the whole combination, in others it is a purely conversational implicature. The entrenchedness of "continuation" as a semantic description of the perfect is a salient example of the under-awareness of the syntactic relations between content elements, and has led to repeated rediscoveries of the bare "anteriority" understanding (at least up to Moens 1987) as a common denominator that has been overlooked in the tradition.
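
The scope relations invoked at the start of this section — the perfect within the scope of deictic tense and modality, but outside the scope of the progressive — can be illustrated with a toy composition function. The form table, the function name and the third-person-singular forms are my own illustrative assumptions, not the author's formalism:

```python
# Toy sketch of the content-syntactic scope ordering described above:
# deictic tense > modal/future > perfect > progressive > verb stem.
# Each operator selects the morphological form of the element in its
# scope (modal -> base form, perfect -> past participle, progressive -> -ing).

FORMS = {
    "will": {"pres": "will",  "past": "would"},
    "have": {"pres": "has",   "past": "had",    "base": "have"},
    "be":   {"pres": "is",    "past": "was",    "base": "be", "pp": "been"},
    "walk": {"pres": "walks", "past": "walked", "base": "walk",
             "pp": "walked",  "ing": "walking"},
}

def verb_group(stem="walk", past=False, future=False,
               perfect=False, progressive=False):
    """Spell out an English verb group from outer to inner operators
    (regular verb, third-person singular subject assumed)."""
    chain = ((["will"] if future else [])
             + (["have"] if perfect else [])
             + (["be"] if progressive else [])
             + [stem])
    form = "past" if past else "pres"  # deictic tense marks the first element
    out = []
    for word in chain:
        out.append(FORMS[word][form])
        # the element just spelled out dictates the form of the next one:
        form = {"will": "base", "have": "pp", "be": "ing"}.get(word, form)
    return " ".join(out)
```

On this sketch, `verb_group(future=True, perfect=True, progressive=True)` yields "will have been walking", with the perfect landing inside tense and modality but outside the progressive, as the text describes.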

2.4.2. The perfect as a product of grammaticization

Although a purely anterior reading is revealing in relation to rampant ambiguity, it is not the most precise semantic description of the perfect. As with the future, a revealing way to approach the semantic substance properties of the perfect is via a diachronic path. Bybee and Dahl (1989: 68) distinguish four sources of perfect constructions: copula + past
participles (constructions like he is gone); possessive constructions (like English have); main verb + particle meaning 'already'; and verbs with a meaning like 'finish' — all containing a variation on the theme of a later situation seen in relation to an earlier one. The first two go via the stage of a "resultative", which implies that the resultant stage still obtains. Thus, in Swedish the resultative construction, as in (104), necessarily implies that he has not returned (cf. Bybee—Dahl 1989):

(104) Han är bortrest
      he is away-gone

The source for the modern English perfect is a construction like I have him pinned to the wall, in which the Old English participle would be a modifier agreeing with the object in case and number. What happens on the way to the perfect includes a syntactic realignment whereby pinned becomes associated with the main verb instead of being a modifier of the object NP. Semantically, there is a process of bleaching involved, for which a detailed path has been suggested within Cognitive Grammar. Beginning before the possessive stage of the verb have, Langacker observes that the lexical source of the perfect is often (Langacker 1990: 29) a verb meaning 'grasp' or 'hold', i.e. involving energy transmission via direct physical control: a strong form of physical possession. The first stage of attenuation of this verb then involves a development of a vaguer sense of possession, which involves only control in the sense of potential contact, as guaranteed by the notion of "ownership". A further stage involves loss even of the notion of potential control-cum-energy-transmission, so that we arrive at a sense which Langacker suggests as a generalized schematic notion of possession whereby a possessor is merely a reference point (cf. on the general theory also Langacker 1993a) which we use to establish mental contact with another entity. An example of this construction is
An example of this construction is (105) We have a lot of skunks around here In a localist framework, this sense could be seen as one in which "having" designated the relationship between a place and the entities in it; the sense in which have can be seen as a sort of passive version of be (cf. Benveniste 1966):

The perfect

379

(106) the table has a book on it
(107) there is a book on the table

This development is an instance of subjectification in that an objective relationship is replaced by the conceiver's mental journey from one thing to the other. In relation to have in general, the meaning involved in the perfect construction differs by having a temporal dimension, and by having a process as target instead of an object. The semantic transition to the true perfect from a have-possessive involves two things: first, the subject no longer needs to be the locus of relevance; he has finished bothering me now may primarily be asserted out of relevance for "me" rather than him, just as it has rained is not primarily relevant for "it". Secondly, the relevance relation between reference point and target becomes less salient, so that the reference-point relation (roughly equal to the anteriority) is alone in being profiled (analogous to the transition between "ownership" and the "localistic" stage above); the stages through which the English perfect evolved have been described in detail by Carey (1993). The description of the perfect in Cognitive Grammar thus crucially involves a purely subjective, mental relationship between the anterior process and the "ground".16 Langacker's account is, as always, fully compositional without implying that everything can be accounted for at the compositional level (cf. also Langacker 1978; 1991: 211-25). The "current relevance" and the "reference point" go with the verb have, and the anteriority goes with the perfect participle form of the subsequent verb, giving the combined account in terms of the current relevance of an anterior event.

2.4.3. The central meaning of the perfect: the cumulative model of time

This picture, with which I basically agree, can be contrasted with some other influential suggestions; in presenting what I see as my own contribution I shall try to demonstrate what is the unifying element in the various pictures offered. McCoard (1978) sums up the literature in terms of four different "theories" about the meaning of the perfect: "current relevance", "indefinite past", "embedded past" and "extended now"; and most theories adopted in present-day grammars can still be seen in terms
of one or more aspects of this picture. Palmer's (1987) account of the perfect begins by setting up a period of time lasting right up to the present, as exemplified by the "extended now" theory; the fact that some states-of-affairs can equally be included in a period including or excluding the present moment is then answered by invoking the term "current relevance". Leech (1987) at first describes the perfect as "past with present relevance", but the central characteristic of the four different uses turns out to be the contrast between definite and indefinite past, essentially following Allen (1966). The "extended now" is also present where it is said that the perfect means that the event has occurred "at least once in a period leading up to the present". In discussing the "indefinite past" theory, whose main exponent is Allen (1966), McCoard concentrates on showing (1) that there is nothing that prevents the use of a present perfect in cases where there is an unambiguously definite past time involved (I have done it today is just as temporally definite as I did it on Monday); (2) that the preterite can refer to periods of time that are by any ordinary standards rather indefinite: Man descended from the apes; also, if we combine the two cases, I overslept (have overslept) this morning hardly permits any distinction. The embedded past is discussed as an afterthought, because it goes with early generative theories in which the present perfect was derived from a past embedded in a present. The semantic implications of this view are not in the centre of generative debate, but it sometimes works out in terms of "current relevance", sometimes as a version of the "extended now" — but without the implications being clearly worked out.
His central argument against the "current relevance theory" — under which label Langacker's account would fall — is that all specific ways of spelling out this notion can be shown to be ungeneralizable; and the most generalizable formulation ("connection with the present", McCoard 1978: 65) does not "afford any intrinsic means of setting the preterite apart". In arguing his case, McCoard does an admirable job in pointing out the context-dependence of a number of generalizations, specifically in relation to the oft-repeated claim that there is an element of "result" lingering on. The main reason for supporting the "extended now" theory is the criterion of adverbial co-occurrence (1978: 129): adverbials referring to a period of time that excludes the present go only with the past, whereas adverbials referring to a period of time that includes the present moment go with the present perfect, compare (108)-(109):

(108) Did you see the exhibition?
(109) Have you seen the exhibition?

The natural reading is that the former occurs if the exhibition has stopped, the latter if the exhibition is still on; and in such cases the theory has an empirical support that is found absent in the other theories. But there are cases where we do not have either the adverbs or an implicit time period of the "exhibition" case, for instance "experiential" examples like I have seen him vs. I saw him. McCoard admits that

    Of course it may not be terribly important to the message to make these distinctions, in which case the choice may be arbitrary, or perhaps one form will be utilized because it is the unmarked alternative, thus de-emphasizing the semantic contrast (McCoard 1978: 130).

But the situation is worse than the borderline case that is suggested here. What we are talking about is a time period that is neither Reference time nor Event time in Reichenbach's sense, but a ghostly period within which we locate the event "somewhere", and which we must interpolate in the clause in order to cover the difference between saw and have seen. In such cases, there is little substance in this distinction. I now turn to my own suggestion for how we should capture the element of current relevance, which involves the way we conceive of time. If human beings were like the lilies of the field, who do not worry about yesterday or tomorrow, it would presumably be correct to say that anterior events were merely absent when they were over. However, evidence about the way our minds function suggests that this is not the way we are made. The nature of memory is such that important earlier experiences leave a kind of indelible mark such that later experience must be understood as superimposed upon what is there already. Hence, as human beings, we are not paper-thin salami-slices of time: we are accumulated products of past events. Similarly, on the social level, our identity as perceived by others as well as by ourselves (the link is in the feedback mechanism discussed above p. 95) is the sum of what we have accomplished or bungled in the past: the notches for successes (shot-down enemies for the fighter pilot), and the little black marks in your diary for things that still cause you to wake up screaming in the night. This is the way we tend to understand not just our own situation, but also the situation of the world in general. I shall
dub this the "cumulative" model of time: we accumulate anteriority as we accumulate other properties. According to this model, accomplished facts or anterior states-of-affairs may be understood as part of the reality that we still inhabit; overt, physical proof of relevance is therefore not required. This provides a very good reason for language to offer a way of talking about the world at any temporal stage as being characterized by what happened before that time. I have described this effect in terms of the "balance sheet metaphor" (Harder 1989b): a business is obliged by law once a year to give a description of where it stands at the "time of reckoning", and it lives up to this requirement by summing up the financial events of the preceding period. My paraphrase of the perfect is therefore: (110)

The perfect signals that the state-of-affairs in its scope is anterior to a time of reckoning, and views the anterior state-of-affairs as a property of the situation at that time

The "time of reckoning" is the base time of the perfect, as "lookahead time" is the base time of the future. This description is not meant to ignore the criticism of "current relevance" as a vague notion (cf. also Declerck 1991: 340): since, by Gricean criteria, any statement must have some form of current relevance, this way of describing the perfect is not satisfactory. Rather, the cumulative model is an attempt to be more specific about the nature of the relevance that is associated with the perfect. One reason for preferring this account is that the extended now account applies only to the present perfect, whereas the cumulative model of time is associated with all combinations involving the perfect. In looking at a non-finite case we can also profile the two theories in relation to the search for objective differences that led McCoard to reject the vaguer forms of current relevance. A famous non-finite example is the Tennyson quotation saying that it is better to have loved and lost than never to have loved at all. A sceptic might ask Tennyson for concrete, testable specifications of how states of having loved and lost are better than states of never having loved at all, and if no invariant elements could be found, dismiss his claim as empirically vacuous. However, according to the cumulative model of time this represents a Martian type of positivism; anybody with human experience can understand this way of thinking, and objective correlates are beside the point. Conversely, I fail to see how one can make sense of the
claim that the loving-and-losing must be placed in a period that includes the (unspecified) base time rather than a period that stops before the base time. This account is intended to be fully compatible with Langacker's account. As opposed to what was the case for the deictic tenses and the future, I do not think there is anything about the perfect that cannot be fully captured in a conceptual semantics. The fact that the meaning of the perfect can be understood purely conceptually can be seen from the free occurrence of the perfect in non-finite forms. No identified base time is necessary to use the perfect; "time of reckoning" for to love as opposed to to have loved can be any appropriate time.
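The core of the cumulative model, that anteriority, once accrued, remains a property of every later time of reckoning, can be given an informal sketch. The representation (times as integers, a state-of-affairs as the set of times at which it occurs, and the function name `perfect`) is purely illustrative and not part of the linguistic analysis:

```python
# Informal sketch of the cumulative model of time (illustrative only):
# times are integers; a state-of-affairs is the set of times at which
# it occurs.

def perfect(event_times):
    """Map a state-of-affairs to a property of the time of reckoning:
    perfect(p) holds at t iff p occurred at some time before t."""
    return lambda t: any(e < t for e in event_times)

loved_and_lost = {3}             # the loving and losing occurred at time 3
have_loved = perfect(loved_and_lost)

assert have_loved(4)                              # part of the later situation
assert all(have_loved(t) for t in range(4, 20))   # it never lapses
assert not have_loved(2)                          # but not yet earlier
```

The monotonicity is the point: once the state-of-affairs is anterior, it stays anterior, so no period including the base time needs to be interpolated.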

2.4.4. The compositional perfect and the present perfect

The tendency to focus on the present perfect when talking about the perfect has not only made it difficult to get the properties of the perfect "as such" in focus; it has also made it difficult to see what is special about the present perfect itself, that particular constructional combination where the perfect is combined with the present. This fuzziness has affected discussions of the diachronic trajectory, of tense vs. aspect, and of what is involved in the "current relevance" that is associated with the present perfect.

As emphasized above, if the standard situation in language is partial compositionality, an accurate description can be expected to assign some semantic properties to the components, and some to the constructions. The present perfect, including the issue of "current relevance", is a good example of that. According to the account given above, all perfect forms are associated with the type of relevance that is associated with the cumulative model of time. The "current relevance" that has been discussed in relation to the present perfect, however, is due not solely to the perfect itself; rather it is a product of two kinds of relevance: the "cumulative relevance" of the perfect in itself, and the element of application to the present that is involved in the contribution of the present tense. Thus, the relevance associated with "application now" comes on top of the relevance associated with the time of reckoning — to the extent the present tense remains a vividly compositional element in the construction. This picture can be used to derive, as a natural consequence, the "now" element of the extended now theory as applying to the present perfect.

384 The meanings of tenses

Since the clause content is supposed to apply to the world at the moment of speech, obviously it would be incoherent to understand it as applying to a period of time excluding the moment of speech. The extension backward in time follows from the anteriority that constitutes the temporal function of the perfect, and which in the present perfect has its base time in the time of utterance.

This way of looking at the construction strikes me as illuminating also when we look at the special properties of the present perfect in a diachronic perspective. As described in Bybee—Dahl (1989: 74) and Langacker (1991: 224-25), the present perfect is particularly liable to lose all relevance meaning and travel in the direction of a generalized anterior tense; as is familiar, in French and German it is encroaching on the territory of the simple past tenses.17 What happens when the perfect becomes applied to the present moment is that the relationship between event time and speech time becomes identical with that of the simple past (as pointed out by Comrie 1985). Since there must always be some Gricean relevance in relation to the situation of speech, the constructional paradigm provides a choice between two ways of talking that can be applied to almost the same objective circumstances, rendering the choice arbitrary in many situations; in particular, the "hot news" context is one where this is likely to happen, because the very recent past is almost by definition viewed as characterizing the present. According to the general account provided above, the use of the past requires that there is some past point-of-application that can be identified, whereas the present perfect requires only that the past event is seen as characterizing the present situation — which is always possible if you are not too picky about your criteria.
Once the criterion of current relevance begins to be relaxed, it is therefore natural, according to Gresham's law, that the perfect is liable to become an increasingly soft option — and that the last resort of the simple past tense should be in types of narrative contexts, where the focus is clearly distinct from the present (cf. the discussion of narrative contexts below p. 483).

In relation to English, where the present element in the present perfect is strong, the lack of full compositionality in the relationship between present and past perfect is often emphasized. We shall deal with two aspects of this below: different combination possibilities with time adverbials (cf. p. 404), and "backshift" (cf. below p. 440). There remains the basic issue of compositional meaning. A widespread analysis has it that the past perfect is "an anterior version either of the present perfective or of the simple past" (cf. Quirk et al. 1985: 195). This is plausible if one
understands the distinction between past and perfect in terms of definite vs. indefinite time (as discussed above, pp. 328-329). But if we describe the past tense in terms of identifying tense, this ambiguity disappears: there is only one identified time involved, namely the time P corresponding to the past tense (cf. also Klinge 1988), as exemplified in (111):

(111) He had left her

We only have to identify P in order to interpret (111) — the time of the event, whether definite or indefinite, is irrelevant to the coded meaning. However, there is indeed a difference between the way the perfect functions when combined with the past and when combined with the present: the criteria of "relevance" are different and vaguer in connection with the past. Therefore it is true in one sense that "the Past Perfect covers an area of meaning (further in the past) equivalent to both the Past and Perfect" (Leech 1987: 47). It is true in the sense that the type of situations and temporal configurations that can be conveyed by the past perfect would be divided between past and perfect, if they were "shifted forward". If you have a referential view of meaning, this means that the past perfect is ambiguous. But if we see meaning in terms of the instructional specifications given above, this does not necessarily imply a linguistic ambiguity. The cumulative perspective implies only that we choose to view anterior events as applying to a later "time of reckoning". This analysis is the same in the case of the present and past perfect, and matches both types of past perfects.

But what about the differences? In comparing, we have to consider not only syntagmatic but also paradigmatic aspects. In relation to the present moment, as long as the competition between two ways of seeing the same temporal configuration remains close, the criteria of selection are influenced by the presence of the competitor. With respect to the past perfect, however, there is no paradigmatic competition: if you want to mark states-of-affairs as being two steps away, there is only one option. When resources are sparser, you have to stretch them. The "density of coding", in Givón's term, is higher the closer we are to the deictic centre, and the combination of past and perfect is as far away as we can get in English.
This is part of the rationale for partial compositionality: in different linguistic contexts, the communicative needs are different, and therefore meanings that are obviously related can be expected to behave slightly
differently, and be conventionalized as having different properties. Like all other constructions, perfects should therefore be seen as having some properties at the compositional level and some at the constructional level. The only way of being dead wrong about the compositionality issue is therefore to suggest an ambiguity at the compositional level to account for differences in constructional properties. As an argument for this, cross-linguistic data suggest that if a language has a perfect, it tends to be used similarly in different constructional combinations, as pointed out by Dahl (1987: 498) in his criticism of Comrie's position with respect to the present perfect (cf. Comrie 1985: 78). The cumulative view of time as the compositional contribution is compatible with the variability that is found in the different structural contexts.
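The compositional claim, that interpreting the past perfect requires identifying only the one time P, can likewise be given an informal sketch by composing the two contributions. The sketch assumes, as a simplification, that the identifying past tense merely supplies a point-of-application P preceding the time of speech:

```python
# Informal sketch of the compositional past perfect (simplified):
# the perfect turns a state-of-affairs into a property of its base time,
# and the identifying past tense applies that property at a past time P.

def perfect(event_times):
    """perfect(p) holds at t iff p occurred at some time before t."""
    return lambda t: any(e < t for e in event_times)

def past(prop, p, speech_time):
    """Identifying past tense: apply the property at the identified
    point-of-application P, which must precede the time of speech."""
    assert p < speech_time
    return prop(p)

leave = {5}    # "he left her" at time 5; the event time itself is not coded

# "He had left her": only P needs to be identified; whether the event
# time is definite or indefinite is irrelevant to the coded meaning.
assert past(perfect(leave), p=8, speech_time=10)
assert not past(perfect(leave), p=3, speech_time=10)
```

There is one identified time in the interpretation, P; the event is merely located "somewhere" before it, which is why no compositional ambiguity between definite and indefinite readings needs to be posited.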

2.5. The place of tense meanings in the general theory of semantic clause structure

The content elements described above are fairly simple. Nevertheless, it has been quite intricate to give a description of them that could hope to avoid traditional pitfalls. In looking back on the three syntagmatic slots in clause structure, I would now like to point out how tense exemplifies one of the very general features of the theory I outlined in part two: clause content as conceptualization embedded in interaction.

The conceptual core of the clause is the state-of-affairs. At the point where tense elements come in, we can see how the functional, interactive element in clause meaning is beginning to accelerate in importance; we can, so to speak, get a close-up view of the way in which the general formula "conceptualization embedded in interaction" works in the nitty-gritty details of linguistic structure. The perfect is clearly distinct from fully lexical meaning, being subjectified to the point where the content depends on a way of understanding time, rather than a concretely pin-downable physical state of grasping or possessing. Yet its meaning is not a case of complete subjectification, hence the free occurrence in non-finite forms. The cumulative model is understood as intersubjectively valid rather than solely speaker-oriented; Tennyson purports to speak on everyone's behalf. Also, the meaning of the perfect has an objective foundation in the shape of the temporal precedence relation: you can say whether a declarative clause in the perfect is true or false based on this relation.


At the next higher scope level, we find the future, which is more subjectified: the temporal relation is completely dependent on the subjective rashness of the human conceptualizer, and there is no property ascribed to the lookahead situation, as opposed to the reckoning time of the perfect. Therefore the relation between lookahead time and the future time is of necessity only imposed by the speaker — you cannot find it in the objective situation. In the case of the past future, you would sometimes be able to find the actual event located at a later point in time — but that is beside the point, because the meaning of the future is in the aheadness, which is independent of whether the event has actually taken place or not. This I see as entirely consonant with the general approach of cognitive grammar (even if there are some differences in the actual description): increasing subjectification the closer we get (diachronically and synchronically) towards the topmost grounding element in the clause.

However, according to the general theory of clausal organization of meaning that I have suggested, conceptualization is not the whole story; the reason why there is a more and more salient "subjective" element as we move to the top end of the clausal hierarchy is that this is the end of the hierarchy where the functional dimension of meaning becomes dominant. The mechanism of subjectification, I suggest, can only be fully understood if we see it as the result of pressure exerted from the functional-interactive — and therefore speaker/hearer-oriented — aspect of clause meaning, specifying what to do with the clause. The interactive function of the clause is inherently bound up with the (necessarily subjective, but not necessarily conceptual) orientation of the speaker. To take a last example, the element "declarative" is inherently speaker-oriented rather than conceptual, indicating utterance function rather than descriptive content.
However, modality as expressed by modal verbs can go either way: adding nuances to the descriptive content, or to utterance function, or both. When modality becomes subjectified it moves, as described by Langacker, from characterizing the objective scene, to becoming oriented towards the ground — but this reorientation is related to interactive purposes such as commitment, mitigation, and hedging. Subjectification is thus not an autonomous process — it is function-driven; and the functional properties are not solely outside the clause, in the interaction, but also inside the coded meaning of the clause. Now that I have described the meanings of the elements that enter into the traditional area of tense, it will be apparent that they can be neatly
arranged on a scale that moves down the scope hierarchy from purely functional (deictic tense), through functional with a very subjective conceptual content (future), to an element with a definable conceptual content (that can be investigated in terms of truth at the time of utterance). In this triad, the future has an interesting hybrid state: close to purely functional, but with some subjective content. I have postponed the discussion of the functional side of the future in order to be able to deal with it specifically in this perspective.

The functional-interactive aspect of the future has been captured, since Boyd and Thorne (1969), in terms of the notion of "prediction" (cf. also Cutrer 1994), which I think is intuitively compelling in the case of the present future: someone indulging in the rashness of assigning categorical aheadness to a state-of-affairs comes out as having made a prediction. But there are some problems with this word. First of all, it only obviously fits present-tense, declarative versions. Interrogative versions and past versions do not fit this word very well. It is sometimes said that will it happen? "asks for" a prediction and therefore can be described as denoting "prediction", but according to that reasoning we could also say that is he here? asks for a statement of fact and therefore could be described as denoting "statement of fact". Also, the word suffers from the problems affecting all performative-verb paraphrases of utterance status: what precisely does this mean? If this account is to be final, there must be a natural kind of utterances with well-defined properties such that only they are predictions, and this class must correspond to the class of utterances with the future marker. I do not think such a class will be easy to establish if the presence of will is recognized as inadmissible (because circular) evidence.
Therefore I think the interactive element in will should be characterized not with respect to the whole utterance, but precisely with respect to what it does to the element that it takes inside its structural scope, namely the state-of-affairs. Here the answer is clear-cut: it places the state-of-affairs as describing a situation that lies ahead, an anticipated situation. The interactive element is in the speaker-imposed choice of "taking a peek ahead", which given the nature of time can never reflect the brute nature of the world but must involve an act of anticipation: speaker commitment takes the place of conceptual substance. So the pure future is grounded with respect to situational speaker, but not necessarily with respect to situational time: there can be anticipation with respect to a past base time. This illustrates both the relationship and the difference between pure future and the most subjective type of modal meaning (cf. Langacker 1990:
29). In that stage, according to the "force" interpretation of the modals, what happens is that the speaker's judgement is subjected to forces tending towards a certain conclusion. There is a close paraphrase between this type of modality and performatives in the domain of making inferences:

(112) you must be tired (cp. I conclude that you are tired)
(113) he may be in town (cp. I leave open the possibility that he is in town)

This interpretation, sometimes called "subjective epistemic", where modality is not an objective fact and does not express judgements of what is deontically good or bad, but simply serves as a hedge expressing the dependence of the description on processes going on in the speaker's mind, has a little more subjective content than the pure future, as also expressed by Langacker, in the shape of the modal force. The subjectivity of the force is intimately bound up with what the speaker is doing with his utterance and has very little to do with the conceptual content. In English, this sense cannot be attached to a modal with temporal past time reference, as it can, perhaps marginally, in Danish and German:

(114) Das müsste er eigentlich wissen
      Det måtte han vel vide
      'I suppose he must have known'

But just as with the past future, the subjective content can only be understood in relation to the speaker's act in making the utterance, not in relation to a force that was in operation at the time of application. I have taken some time over this question of the "unnatural" combination with the past, because I think it is important in understanding the process of abstraction that is involved in generalizing structural options in language. Even if the natural prototypes seen in natural contexts constitute the obvious point of departure for understanding meaning, the reality of linguistic structure means that the natural lumps of substance are not the whole story.
The abstraction involved in the process that creates a pure future focuses on the speaker's imposition of a certain perspective, that of taking a peek ahead. In the past, the element of prediction goes away, while this element remains. In the present, which is a more significant and natural context — one would expect that a past future construction would
never occur except in languages with a present future, whereas the opposite occurs frequently — the element of prediction can be seen as following from the act of anticipation in relation to now. The three properties of the three tense meanings can be summed up in a diagram such as (115); note that this is not a compositional analysis, but a description of how gradual change in semantic substance reflects gradual ascent/descent in the semantic hierarchy of the clause:

(115)
                                        present/past   future   perfect
      non-conceptual content:                +            +         -
      subjective-conceptual content:         -            +         +
      objective-conceptual content:          -            -         +

3. Tense & Co: time reference in the simple clause

3.1. Introduction

After looking at the individual meanings in the light of the compositional structure, I now turn to the compositional structure of time reference in the clause, viewed against the background of the individual tense meanings. The main aim of the following sections is to demonstrate that time reference, the most intensively discussed subject in the tradition of tense semantics, can be understood as emerging in a very simple manner from the coded meanings proposed above, as co-operating with other meanings in the scope hierarchy of the clause; and that an account of time reference becomes significantly simpler in a functional than in a referential semantics.

The account I give is thus not designed to make any new descriptive claims with respect to time reference. On the descriptive level, the main features have remained essentially the same throughout the discussion. Among the classics, Bull (1960), Allen (1966) and Vet (1981; 1986) specify time-referential properties that essentially coincide with the system I propose. On the level of time-referential detail there is much that I do not discuss; see Declerck (1991) for a recent discussion of many of the points on which the earlier systems do not yield precise predictions.

The point that I focus on, as did most of my predecessors, is the question of what account most naturally predicts just those time-referential properties that we all more or less agree on. In relation to that central issue, the objections I have to earlier accounts all derive from what I see as the main advantage of my own, i.e. that instead of assuming that the referential properties are semantically primitive and seeing function as peripheral, I see the basic structurally coded semantics as function-based and look for the referential properties as emerging from the way coded meanings co-operate with each other and with the situational information that is available.

3.2. Logical vs. functional operators

The content syntax that I propose is based on scope relations, adapted from logic, where they were first given a precise form. The structures posited above are reminiscent of the system that we find in tense logic à la Prior (1967), where we find "past" and "future" operators applied to a propositional nucleus, according to the location of the event in relation to
a presupposed present time. It is natural, therefore, to begin a discussion of the way I see complex time reference as emerging from interacting functional meanings by comparing with logical operators. As pointed out repeatedly in the literature, logical operators do not give a satisfactory theory of tense as a linguistic phenomenon (cf. Bybee—Dahl 1989: 62, Hornstein 1990: 166, Kamp—Reyle 1993: 491). I first try to spell out the basic difference, drawing on the description of logical semantics in part one; then I try to show some ways in which the difference manifests itself.

Functional operators are blueprints for actions, like instructions to a washing-machine (rinse, spin, etc.); they specify how to modify a semantic input in different ways. Logical operators can be viewed in the same way, but are much narrower in their applicability. The fundamental reason why tense logic does not capture the semantic properties of tense in natural language is that logical semantics takes truth as its primitive notion and defines meaning in relation to that — whereas meaning as a property of natural language is defined in terms of interactive practice and only secondarily involves truth-conditions. This means that the status of propositions in language is different from their status in logic. Propositions as linguistic entities only arise as a result of interaction between meanings that are not inherently propositional; conversely, propositions in the logical sense of mappings between world states and truth values do not have to be understood as linguistic entities (cf. Stalnaker 1984).

It must be remembered that a tenseless state-of-affairs is not about anything, and therefore cannot be true or false. The special role of deictic tenses is that they bring about a matching between the state-of-affairs and the world to which it is meant to apply, and of which it can therefore be true (or false) — thus creating a proposition as a linguistic entity.
This is incompatible with viewing tense as a factor that changes the truth and falsehood of propositions; therefore a truth-conditional description is incompatible with the way deictic tenses work in language. Unless propositions exist regardless of tenses, you cannot take a proposition and then apply tense operators to it. Needless to say, the question of the relation between truth and time has its own legitimacy, regardless of the linguistic issues involved. Prior's interests in the influence of the time factor on the truth and falsehood of propositions include a metaphysical dimension: how does the fact of temporal mutability influence the status of things that exist at the moment of speech? (cf. also the discussion in Kamp—Reyle (1993: 491) on the granularity of time). But the investigation of how temporal operators affect
logical inferences is not identical with the problem of how to describe linguistic tense. Nevertheless, when the logical point of view invaded linguistics, many linguists came to adopt logical approaches, also to tense; even Lyons, who in many cases pointed out the untenability of adopting logical positions unquestioningly in linguistics (cf. the discussion on existential quantifiers and natural-language determiners, Lyons 1977: 455-460), adopts a "world-based" view of time reference (Lyons 1977: 813-14).1

In spite of the difference of perspective, the interpretation of tense as bringing about a proposition may throw some light on the problems that arise when a metaphysical discussion is carried on in close relation to a discussion of ways of talking, as Prior does. The metaphysical problem associated with substance and definiteness in a purely referential semantics (cf. above p. 11) shows up in connection with tense as well, in relation to the implications of change over time. The problem arises in Aristotle's discussion of tense and existence, since if the passage of time is seen as a matter of "becoming", we have a problem closely related to the nature of "being": nothing can ever come into being, since either it was there already or it came out of non-being, which is impossible. Temporal mutability thus invites a description in which existence is viewed as a property which comes to or leaves objects, just as the concept of substance gave rise to thinking about existence as a property. Prior's discussion follows up on Russell (who, as we have seen above p. 23, claimed that existence could not meaningfully be asserted of a particular). Moore objected that it was meaningful enough, only necessarily true (like the statement x = x, cf. Prior 1967: 150) — and the statement this doesn't exist, similarly, is necessarily false.
One of the reasons Prior prefers to put it this way is that one would like to be able to say this might not have existed, which would be difficult to do if existence could never be seen as assertable of a particular entity. The way to avoid contradictions is then to make the proposition only stateable at the point where "this" is in existence — so that we do not infer that there was a time when it would have been true to say this does not exist. As pointed out (Prior 1967: 8), this reflects the Augustinian understanding according to which the basic form of temporal existence is the present — which may then be anticipated or recalled; the property of Prior's logic that it always ultimately applies at the time of speech is therefore ontologically motivated.


The interesting thing about this caveat in a linguistic perspective is that this condition is automatically satisfied if we see the tenses as giving directions, rather than adding temporal information to the state-of-affairs. If one takes the linguistic point of view that propositions only come into being as a result of matching a state-of-affairs with the world at a particular point in time, then we automatically avoid such complications, both in relation to being and becoming: nominal definiteness attaches NPs to referent entities, and the definite deictic tenses attach states-of-affairs to referent situations — and only as a result of these interactive functions does the issue of truth or falsehood arise. The problems come about if your semantic theory forces you to have things that are propositions but have no situational anchoring ("application").

Based on this difference between logical and functional-semantic operators, we can now look at the empirical inadequacies of tense-logical formulae and demonstrate how the operators proposed here do better. Kamp—Reyle (1993: 497) sum up the deficiencies in four points: "indexicality" (the fact that time reference is not always relative to higher-scope operators), "non-iteration" (natural tenses do not occur more than once), "context-dependence" (tenses refer to the context of speech), and interaction with adverbials (which are usually ignored in tense logic). Indexicality and context-dependence both arise from the interactive direction-for-identification that goes with deictic meaning (for the cases when embedding shifts the deictic centre, cf. below p. 431). In this they differ from Prior's operators, which are solely relational, and therefore have no way of accessing the situation directly. The way in which tenses and adverbials interact is the subject of section 3.5 below. This leaves "non-iteration", the fact that natural-language tenses cannot be iterated in the manner of logical operators.
One of Prior's examples (1967: v) is a formula with two futures applied to it, F(F(p)) → F(p), rendered as the sentence If it will be that it will be that p, then it will be that p. The difference between natural language and logic shows itself in the fact that Prior has to use whole clauses to paraphrase his tense operators; natural languages cannot have two futures. How does the account in terms of functional operators avoid iteration? The answer follows from the meanings proposed above in combination with the principle of sense. Logical operators add truth-conditional properties; functional operators, however, add to the interactive potential of the utterance in ways that must make sense. For deictic tenses, the semantic property that makes iteration collide with the principle of sense is

Logical vs. functional operators 395

the "pointing" element. Deictic tenses direct the addressee towards the point-of-application, and it is barely conceivable to iterate this operation: it would mean that the speaker pointed to himself pointing to (himself pointing to...) a point-of-application. For the future, the semantic property that prevents iteration is the fact that the pure future says nothing about what obtains at lookahead time: in embedding one future within another you would therefore not get anything that is different from what the first future already indicated, namely the aheadness of the state-of-affairs; you would just subdivide the aheadness relation into two aheadness relations. An iteration of the pure future would therefore be devoid of content, thus violating the redundancy aspect of the principle of sense. In the case of "embryonic" futures, which do provide a description of the situation from which one looks ahead, it is a different matter: be about to describes somebody who is in a state of being all set, and therefore you could say I will be about to leave when you return; and somebody accused of being slow to get ready might conceivably even claim that he is about to be about to leave.

This has implications for the perfect. As opposed to deictic tenses and the future, the perfect makes a claim about its base time, the time of reckoning. Therefore it would not be meaningless to iterate the perfect, provided it made communicative sense to interpolate an extra time that one wanted to describe. Although this is not possible in English, iterated perfects occur in a number of languages (as pointed out by Comrie 1985: 77); in German there are constructions like er hat es getrunken gehabt, and a similar "passé surcomposé" occurs in French; with respect to Danish, see Glismann (1989), Harder (1989b). The non-iteration of the perfect can therefore be naturally seen as a reflection of language-specific conventions rather than universal properties.
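Prior's non-iteration point can also be checked mechanically in a toy model. The following sketch is my own illustration (discrete time over a finite horizon, not anything proposed in the text): since the pure future only asserts that its operand lies ahead of the base time, F(F(p)) guarantees nothing that F(p) did not already guarantee.

```python
# Toy discrete-time model of Prior's pure future F (illustrative only).
# F(p) holds at t iff p holds at some time u > t within the horizon.
HORIZON = 100

def future(holds_at):
    return lambda t: any(holds_at(u) for u in range(t + 1, HORIZON))

p = lambda t: t == 50            # p holds only at t = 50
fp = future(p)                   # F(p)
ffp = future(future(p))          # F(F(p))

# Prior's theorem F(F(p)) -> F(p): wherever the iterated future holds,
# the single future already held, so iteration adds no new content.
assert all(fp(t) for t in range(HORIZON) if ffp(t))
```

The implication holds at every evaluation time; the only difference iteration makes is to subdivide the aheadness relation, exactly as described above.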
I have tried to demonstrate why the terminology of "operator" in relation to tenses does not mean that the theory suffers from the same shortcomings as the theory of tenses as logical operators. The reason why I retain the quasi-logical notions of operators and scope is partly that I do not know how else to express the "task structure" of the linguistic recipe, but also that I think the logical constructs are a special case of the same general phenomenon that enters into content-syntactic structures and other forms of hierarchically structured action (compare the evolutionary aspect as discussed above p. 219). The similarity in the functioning of scope relations should be obvious; the difference is essentially that functional operators

396 Time reference in the simple clause

specify operations to be performed instead of modifying truth-conditional properties.

3.3. Time-referential formulae as emerging from meaning plus structure

I shall now show first how the system described above permits us to specify exactly what temporal information is provided by the tense system, and later how tense-cued information collaborates with other information in the linguistic and situational context. If we disregard the semantic subtleties discussed above and abstract from everything other than the time-referential aspect of the meanings (= the "function times"), we can abbreviate the meanings in terms of the letters used in the formulae: S for the time of speech (pointed to by the present), P for the past time of application (pointed to by the past tense), F for the time ahead assigned by the future, and A for the anterior time assigned by the perfect. In terms of this notation, the system automatically predicts the following time-referential properties (SoA = state-of-affairs):

simple present (he plays): S (SoA)
simple past (he played): P (SoA)
present future (he will play): S (F (SoA))
past future (he would play): P (F (SoA))
present perfect (he has played): S (A (SoA))
past perfect (he had played): P (A (SoA))
present future perfect (he will have played): S (F (A (SoA)))
past future perfect (he would have played): P (F (A (SoA)))
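The compositional pattern behind the formulae above can be captured mechanically. The following sketch is my own illustration, not part of the original account: each form consists of at most one perfect (A), at most one future (F), and an obligatory deictic tense (S or P) taking widest scope over the state-of-affairs.

```python
# Minimal sketch (illustrative, not the book's formalism) of the tense
# system as nested operators over a state-of-affairs (SoA).

def formula(deictic, future=False, perfect=False):
    inner = "SoA"
    if perfect:
        inner = f"A ({inner})"     # perfect: SoA anterior to its base time
    if future:
        inner = f"F ({inner})"     # future: operand lies ahead of its base time
    return f"{deictic} ({inner})"  # deictic tense applies the whole thing at S or P

assert formula("S") == "S (SoA)"                                     # he plays
assert formula("P", future=True) == "P (F (SoA))"                    # he would play
assert formula("S", future=True, perfect=True) == "S (F (A (SoA)))"  # he will have played
```

Note that non-iteration falls out of the shape of the system: there is no slot in which the deictic tense, the future or the perfect could apply twice within one form.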

It may not be immediately obvious how this is to be understood; since it is central to the discussion below I shall spell it out in detail. The scope hierarchy can be described bottom-up (from the operand point of view) or top-down (from the operator point of view); for the purpose of seeing the interplay of the time-referential properties it is simpler to begin from the operator end. Let us take a complicated case such as the present future perfect, as in he will have played. We begin with the meaning of the

From coding to time-referential formulae

397

present tense, which directs the addressee to apply the whole thing to the world at time S. The operand of this instruction, however, contains as its highest element a future form, which says that the rest of it belongs at a time ahead (F): in other words, it applies now that something is ahead. What is ahead, however, has a perfect form as its highest element, specifying that the state-of-affairs is anterior (at A). All in all, the present future perfect thus tells us that right now there is ahead of us a situation in which a state-of-affairs is anterior.

Two of the general properties mentioned above are crucial to the mechanism: the "process input" status of meanings, and the directionality which is built into functional meanings, compare the railway ticket which not only takes you somewhere, but also requires that you start from somewhere. The "process input" status is what eliminates event time from the coded time-referential meaning: language instead codes instructions specifying what to do with the coded event. Event time can be specified as resulting from the operator that takes the state-of-affairs directly inside its scope, but even so, the operators do not fix any time as such — they just indicate an operation which applies to the event, and the time can only be fixed by the addressee when he has finished his whole interpretation process. In order to grasp the meaning of the words, we do not have to know any real times at all — but in order to make sense of the message in interpreting it, we have to plug it into the situation at least at one point (S for the present), and probably two (F = the time at which the state-of-affairs will be anterior); E is usually unspecified in the perfect tenses, where it corresponds to A, see below; but whether it is specified or not, it is only A which is relevant to describing the meaning of the perfect.
The directionality from base to function is important in describing how meanings interact in bringing about complex time reference, without requiring individual tense morphemes to be semantically complex. Every time we interpret one of the function times S, F and A, we do it by anchoring them in their respective base times: S is located by reference to the time of utterance, F is located with respect to S, and A with respect to F. The base times are not independent parts of the meaning: all functional meanings must be understood in relation to a baseline. The model example of the railway ticket illustrates why: we could not have a ticket going to somewhere without also going from somewhere — but when we buy it, the situation usually provides a default interpretation of where we start from, which is why we rarely specify it. Similarly, we need not state base times

398 Time reference in the simple clause

in the time-referential formulae, but can take the general mechanism for granted: if we work from the top end (the situational end, as opposed to the conceptual end), each form provides the temporal basis for the next form — except for the topmost (deictic) tense, whose time basis is the situation. This exemplifies both the similarity and the difference with respect to scope relations in logical formulae, as discussed above: just as in logical formulae, higher-scope operators provide the frame within which the operand is understood; but unlike a logical operator, a functional operator can have contextual anchoring as its output.

The advantage of this account consists in making it transparent how time reference fits into the content structure of the language. Structural complexity and time-referential complexity match up: there is one time-referential element in the simple past and present, two in the forms with two elements, and three in the forms that contain both a future and a perfect. All other systems that I know of have a descriptive apparatus which describes each form with at least two different times, because they describe the forms by reference to a presupposed world instead of by looking at the job the meanings do. The extra times are generally due to one of two things: the introduction of the event time in the formula, as already discussed, or the complications associated with the notion of reference time, which forms the subject of the next section.
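The anchoring mechanism just described — each function time serving as base time for the next, with only the topmost (deictic) tense plugged into the situation — can be sketched schematically. The numeric offsets below are purely illustrative stand-ins of my own; the system itself fixes no temporal distances.

```python
# Schematic top-down interpretation of an operator sequence such as
# ["S", "F", "A"] (he will have played). Widest scope comes first; each
# operator fixes its function time relative to the base time handed down,
# and that function time then becomes the base for the next operator.

OFFSETS = {
    "S": 0.0,    # present: apply at the situational base (speech time)
    "P": -1.0,   # past: some identified earlier time (illustrative value)
    "F": +1.0,   # future: a time ahead of its base (illustrative value)
    "A": -0.5,   # perfect: a time anterior to its base (illustrative value)
}

def interpret(operators, utterance_time=0.0):
    base, times = utterance_time, {}
    for op in operators:
        times[op] = base + OFFSETS[op]
        base = times[op]             # function time -> next base time
    return times

t = interpret(["S", "F", "A"])       # he will have played
# Right now (S) there is a time ahead (F) at which playing is anterior (A).
assert t["S"] == 0.0 and t["F"] > t["S"] and t["S"] < t["A"] < t["F"]
```

Only the first operator consults the situation directly; all the other base times emerge from the interaction between the coded function times, which is the point made in the prose above.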

3.4. Reference time: a family resemblance concept

The notion "reference time", together with its successor concepts, has had a central place in discussions of tense since it was introduced by Reichenbach. From a referential point of view, two times have always been taken for granted: the time of speech (S) and the time of the event (E). Since this in itself, coupled with the usual relations (precedence and simultaneity/overlap), yields only three possible time configurations, a secondary set of relations between times and tenses is required to account for the usual six- or eight-tense systems. Jespersen's two relations "before" and "after", as applied to the three basic tenses, implied "reference times" but did not make the notion an explicit part of his system; Bull's "axes of orientation" also served as "points of reference", but did not provide a naturally constrained concept to build upon.2 What gave Reichenbach's theory its popularity was that his system appeared to capture the necessary additional element in terms of one intuitively striking concept, as


exemplified in the analysis of the past perfect (cf. p. 319 above), where E is not located directly in relation to S, but in relation to a reference time before S. Moreover, it appeared in the beginning to be operationally clear-cut: a time adverbial was assumed always to correspond to R as the "time indicated by the sentence". Since definite time adverbials in the past always go with past tenses, the crucial distinction between the present perfect and the past could be seen as falling out of the system automatically.

As pointed out repeatedly in the literature, this attractive construction did not stand up to scrutiny; time adverbials did not locate themselves unambiguously in relation to tenses (cf. p. 414 below). The past perfect, for example, could have a time adverb referring to E, as in

(116) He had done it at ten and then left (where at ten = E)

When this operational criterion was no longer available, the notion of reference time in the sense of "time indicated by the sentence" was not unambiguously applicable in all cases any more. Comrie, as we have seen, avoided using the notion in all cases where it overlapped with other times. But if this operation is to be more than a notational ink-saving device, it raises the question of what the criterion is for something to be a reference point. The criterion in Comrie's case is whether a specification of the temporal relations requires more points than E and S, and R is thus simply a handy device whose precise nature is left unspecified: since the temporal relations denoted by the past perfect can only be specified if we set up an extra time point besides E and S, we give it an R. And since the past future perfect requires two time points to relate E to S, we assign it an R1 and an R2, as in the example from Greene given above:

(117) by then I would have been happily married for two years

E (= being married) before R1 (two years later) after R2 (the time from which we look ahead) before S.
This criterion is at the same time extremely natural (we use the points we need in order to get relations right) and rather mysterious: if we do not just draw them out of a hat, in the manner of underlying forms, by what mechanism do these points arise?


The mystery recurs in rival accounts. Sten Vikner (1985) suggests a revision which goes in the opposite direction of Comrie's. Instead of getting rid of reference points whenever they are not strictly necessary, he describes all tenses using not only the reference point introduced by Reichenbach, but also the second reference point that becomes necessary when you include the past future perfect in the tense system. This provides a systematic account of the eight tenses in terms of relations between these four points in time. The price of the consistent use of the same apparatus for all tenses is a large amount of redundancy, since in all cases except the past future perfect at least two points are identical; the simple present is thus accounted for in terms of four overlapping points. This use of reference points has one advantage over Comrie's in that the reference points do not disappear when they overlap, but are systematically related to the tense choices. But we still need an account of precisely what their nature is.

One consistent definition is implicit in Prior's account. Prior (1967: 13) criticizes Reichenbach's notion of a reference point by saying that instead of distinguishing between S and R, Reichenbach ought to have seen that S is merely the first in a series of reference points. This is evident in Prior's account, where you can add "previous" and "posterior" operators indefinitely, and the system keeps track of them by anchoring the first (highest) to the moment of speech and then subsequent operators by means of the earlier ones in the manner that is built into scope organization. However, Prior's reference times only work for one temporal relation at a time; reference points are all you have — all the way down. This means that you can no longer speak of R "as such".
The source of error that is latent in this definition is illustrated by a description in Smith (1978), where the relational R and R = "time indicated by the sentence" give different results. The example is

(118) Phyllis decorated the cake before midday

"In this sentence RT is Past, midday; ET precedes RT as indicated by before", Smith claims (1978: 49; RT=R, ET=E). As pointed out by Vet (1984), "midday" is only reference time in relation to before — whereas the ordinary Reichenbachian understanding, which Smith otherwise adopts, would identify R with the time denoted by the adverbial (i.e. the interval before midday). Prior's R's can therefore not do the job of Reichenbach's R — there are too many of them.


Declerck (1986, 1991) discusses this ambiguity in the Reichenbachian accounts and eliminates it by distinguishing explicitly between the Priorean reference times and the reference times that stand for "time indicated by the sentence". Prior's notion of reference time he renames "time of orientation", while the Reichenbachian successor concept is called "time established" (cf. Declerck 1991: 250-51). It is established either by an adverbial or by the context and is thus seen as external to the tenses. Yet tenses sometimes do in fact establish it; the problem of how to fix R reliably in relation to linguistic categories therefore resurfaces with "time established". This is due to the fact that in a reference-based approach the critical factor is the nature of the times as such, not the linguistic mechanisms. There is no way of nailing the property of being "established" down to any specific linguistic element — it is ultimately a situation-dependent entity, not a structure-related one.

In a reference-based account, the necessary times and temporal relations stand as primitives in the analysis, since they are the tools for the description of time-referential properties. When Declerck showed that Reichenbach's R was a conflation of times that do not always coincide, he therefore needed a larger basic inventory than previous R-based accounts; and he ends up with seven basic times and four basic relations. Since times of orientation are inherently relative, three times serving "orientation" functions are necessary in the inventory of primitives: the basic time of orientation (Prior's topmost R), the time of orientation that coincides with established time, and a third which is neither of these. Declerck's notion of established time is similar to the time L (for "localizer") suggested by Bertinetto (1985), which specifies either E or R, and is typically indicated by a time adverbial, but may also depend on pragmatic factors.
In arguing for the role of this type of time, Bertinetto considers whether it might make "event time" superfluous, but because of the assumption that reference must be basic, he rejects this idea: "E is necessarily implied by all the tenses, for it would be nonsensical to assume that a given tense refers to no time" (1985: 60). The proliferation of R's does not end with the partition into times of orientation and established time; we shall return to the remaining problems, which concern text progression, in section 4.5 below. If we take stock at this point, we are left with the following times, all of which in some sense indicate "time with reference to which we place the event/state-of-affairs":


(119) a. the time with respect to which any given temporal relation is calculated (Prior's R)
b. the uppermost point of reference for tense operators, usually = S
c. the time at which the clause is temporally anchored, as fixed (c1) by a time adverb or (c2) by the context

In lining up these five different times, I am not suggesting that this shows their absurdity; indeed, I think it is a significant step forward from the simplistic assumptions that underlie Reichenbach's R that we should now be able to point out how many temporal complexities are involved in calculating time reference. However, I hope to show that it will be easier to understand the problems if we look at them from the point of view of how functional meanings interact with each other and with the situation in the process of interpretation.

The first stage is to show why we do not need to include any of these fission products in the account of tense meanings, beginning with (a), Prior's reference time. As will be apparent, the job of Prior's reference point is done by the mechanism whereby functions are understood as having a basis provided by the higher-scope elements that ultimately plug the utterance into the situational context. The advantage over Prior is that we avoid saying that all points are simply reference points, and instead see the base times as emerging automatically from the interaction between the coded function times.

The account where base times follow from function times also applies to (b), the topmost reference point. There is a difference, however, in that it does not operate via the interaction between meanings, but situationally, via the plugging-in mechanism associated with the deictic tenses. In interpreting a deictic tense, addressees depend on the basis provided by the utterance situation, from which the speaker points either at the present or the (to-be-identified) past.
The mechanism is the same as the one that attaches lower forms to higher forms: you relate what you get to what is already there — linguistic context is just a special case of situational context. But since we can only describe what language does in context if we are able to distinguish between language and context, it is essential that we mark clearly what is in the code and what is in the context


— otherwise we will end up, as described above pp. 286-287, seeing the speaker as part of the code. This is why it is important to distinguish S as the coded time that is pointed towards, from the situationally defined time which the speaker points away from — the fact that they are (typically) identical in referential terms does not make them identical from a functional point of view. This is as much as we can cover in relation to the naked tense system; (c) will be covered in the following section.

However, we can already see some of the functional underpinning that was buried under Reichenbach's R. Because deictic tenses are identifying, the task of interpretation requires the addressee to establish a temporal anchor in the world-of-discourse at the point S or P. Therefore the deictic tenses, as opposed to the other tense forms, do some of the work in giving the state-of-affairs an external time anchor. These forms have generally been accorded a privileged status from Bull onwards (compare also Bertinetto's (1985: 67) notion of "time anchor", Vet—Molendijk's notion of "evaluation time" (1985: 152), and Bartsch's (1988) notion of reference time). The reason why previous discussions in terms of "definite vs. indefinite" and "absolute vs. relative" have not succeeded in establishing this privileged status in the "reference point" context, although they point to the same facts as the notion "application time", is the focus on the actual times rather than the act of identification. From the point of view of actual times no ultimate difference can be found between deictic and non-deictic times. One temporal location is much like another; it is the function that establishes it that is different (cf. the discussion of Allen's view of definite and indefinite past time pp. 328-329 above).

Under the functional view, we get an intuitively obvious "Reichenbach R" when we have an application time (S or P) that is simultaneously time basis for a secondary tense choice.
This is what makes the past perfect example ideal: R is both identified time (P) and basis for the perfect. The simple tenses are less obvious, because there we have only the application times, no secondary choice based on them. If we factor these properties out as suggested here, we can make the same predictions without having the problems that follow from positing an explanatory concept without looking at its relation to coded meanings. This account can throw some light on the problems in Comrie's use of the notion reference time. Comrie's R is basically identical to Prior's R. What Comrie does by eliminating overlap is just not to mention R when the


time already has another name (E or S). This is just a notational convention and thus without empirical consequences — but it leaves the relationship between semantic structure and temporal configuration opaque. There is nothing in Comrie's formulae which indicates that the past perfect involves the same coded meaning as the simple past (P for past application time). The advantage of the diagrams suggested here is that they specify the temporal location in a manner that makes the relation to semantic structure transparent even in the complicated cases. To take the most complicated case, the past future perfect:

(120) Comrie: E before R1 after R2 before S

(121) content-syntactic structure: P (F (A (SoA)))

In terms of the account I suggest, instead of popping up out of nowhere, the two reference points follow directly from the coded meaning: R2 is the time P associated with the past, and R1 is the time F associated with the future. In leaving out the reference to S, we indicate that the time of utterance as it relates to the past tense (in contrast to the present tense) is not an independent part of its coded function, but a presupposed element of the situational base.
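The correspondence between (120) and (121) can be stated as a mechanical translation. The mapping below is my own hypothetical rendering of that correspondence for this one form, not a general algorithm from the text: each Comrie-style relation is read off the coded operator sequence rather than stipulated independently.

```python
# Hypothetical mapping (illustrative only) from the coded operators of the
# past future perfect, P (F (A (SoA))), to Comrie-style temporal relations.

OPERATOR_TO_RELATION = {
    "P": "R2 before S",    # past application time, anchored in the situation
    "F": "R1 after R2",    # the time ahead, based at R2
    "A": "E before R1",    # the anterior event time, based at R1
}

def comrie_relations(operators):
    """Operators listed widest scope first, e.g. ["P", "F", "A"]."""
    return "; ".join(OPERATOR_TO_RELATION[op] for op in operators)

assert comrie_relations(["P", "F", "A"]) == "R2 before S; R1 after R2; E before R1"
```

The point of the sketch is only that nothing in the right-hand column needs to be posited over and above the operator sequence itself.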

3.5. Tense time, adverbial time and topic time

The temporal reference of time adverbials has been intricately tied to accounts of tense since Reichenbach linked the identification of R with the time adverbial. A basic part of the intuition that was behind Reichenbach's analysis is expressed in the principle of the "permanence of the reference point", in conjunction with the formulation "time indicated by the sentence": it touches on the issue of discourse continuity and the overall sense of where the meaning of the sentence belongs, temporally speaking. Declerck covers this type of time with his concept of "established time", which can either be expressed by an adverbial or be contextually determined, compare (119c). My account, as usual, emphasizes the need to distinguish clearly between something signalled by linguistic means and something that is an unexpressed part of the situational context. In addition, it distinguishes between two types of job that time adverbials can do. This distinction is


primarily in terms of semantic substance, and is not matched by a clear structural distinction; even if I try to show that some light can be thrown on it in terms of the layered clause structure, the account is somewhat messy here. My defence is that it is part of a well-established linguistic mess, and thus at least does not add significantly to the existing confusion. There are three preparatory stages in the argument. First, I introduce the purely situational time and relate this to tense in simple cases. Secondly, I discuss the relation between tenses and one form of job done by adverbials. Thirdly, I introduce the second type of job done by adverbials. Finally, I try to show how the four types of time-related elements cooperate.

I shall use the term "topic time" for the purely situational notion, the time which is not coded because the interlocutors take for granted that this is the time that the conversation is about (for a different but related notion of topic time, see the discussion of Klein 1992 below). To understand the notion as I want to use it, it is important to realize that it has no inherent relation to coding at all. The central property that motivates the term "topic time" is that it is the temporal location of whatever is the current topic of conversation. It follows from this definition that there is only a well-defined topic time if there is a well-defined temporally anchored topic. The way in which the topic time is bound up with the conversation topic is obvious if you think of the conversation topic "what was your recent vacation like?". This topic is indissolubly connected with a certain time period in the past, and it therefore provides a topic time that will give the relevant context for statements like the hotel was awful, I had diarrhoea all the time, we lost our baggage in the airport, the beaches were overcrowded.
In such simple cases, there is a straightforward collaboration between topic time and the time-of-application that is coded into deictic tense: the discourse-based topic time licenses the use of a deictic tense that corresponds to it. These clauses can be understood as having the topic of "our recent vacation" as point-of-application; and the associated topic time as the time of application for the deictic tense, which in this case must be the past.3 This leaves the question of how time adverbials fit into the picture. The most familiar type of semantic job done by temporal expressions is that of adding temporal specifications to the state-of-affairs. This is more or less the default semantic description of both tenses and time adverbials in the


literature, also in Functional Grammar, compare the following quotation on the parallels between operators and satellites:

For a simple example, consider the following sentence:

(14) I saw him yesterday

In (14) both the past tense form of the verb and the adverb yesterday locate the SoA "my seeing him" on the time axis. The grammatical and the lexical strategy have approximately the same function and both modify the same layer, the predication, which designates the SoA for which a temporal setting is specified. (Hengeveld 1990a: 8)

I shall begin by discussing the relations between this function and the function of tenses. It should be noted that from Bull onwards it has often been pointed out that there is a difference between tense times and adverbial times in that adverbial times typically provide detailed quantitative and calendrical information that the tenses do not provide. To some extent, this difference may be regarded as a matter of degree, of relative abstractness; it reflects the general difference between lexical and grammatical information, and is not in conflict with Hengeveld's position.

The point where this picture is incomplete is in the supposedly shared role of adding temporal information to the state-of-affairs, which is fully accurate neither in relation to tenses nor in relation to adverbials. This is perhaps most easily seen if we begin with those cases where it fits best. Certain time adverbials have precisely the function of predicating temporal specification about the state-of-affairs. This function is most unambiguously illustrated in a non-finite clause, where the issue of deictic tense does not arise:

(122) (It will be nice) to go for a walk next Sunday

Here, next Sunday is clearly additional information about the state-of-affairs to which the predicate nice is ascribed. In terms of the temporal concepts we have been discussing, it is oriented towards the event time E, and in this we have a clear difference with tense semantics. The job of tense operators is not to add extra information about event time in the manner of "next Sunday". For the deictic tenses this has been argued in detail above. For the future, the closest adverbial paraphrase according to the theory


described here would be ahead; and the subtle but decisive difference between predicating aheadness of the state-of-affairs and speaking of it in the future should be evident in (123) vs. (124) below:

(123) The difficult part occurs (or "lies") ahead

(124) The difficult part will occur

The future — as exemplified in (124) — signals the subjective action of taking a peek ahead, not the ascription of a property. The perfect, alone of the tense operators, has objective conceptual content and thus as part of its job does in fact predicate temporal information about the state-of-affairs; but it also invokes the cumulative model of time and sees the event as part of the baggage at the time of reckoning; and because of the primacy of the time of reckoning, the precise position of E is often uninteresting (more on this below); therefore, "specification of event time" is a poor description even of the perfect.

However, the fact that the two functions are different does not mean that the two things are independent of each other. The functions coded by tenses must be compatible with the temporal specifications of the time adverbs. You cannot apply states-of-affairs with a past-time or future-time specification to the situation now:

(125) *He is depressed yesterday/tomorrow

Nor can you apply states-of-affairs with a present-time or future-time specification to a past situation:

(126) *He was depressed tomorrow/now

or take a peek ahead at previous events:

(127) *This will happen yesterday

And if we attach a certain type of temporal specification, such as adverbials with since, it motivates the choice of a perfect tense, because these time periods are both inherently cumulative, and characterize the time of reckoning:

(128) I have known them since I was ten

The longer the period beginning at "since" is, the more solidly does the anterior state-of-affairs characterize the situation at reckoning time.4 In general, the relation between time adverbials and tenses is not basically a question of redundancy or of arbitrary distributional restrictions, but of compatibility between related and collaborating meanings; and there is more to the story of compatibility below.

But predication of temporal information is not the full story of time adverbials, either. In terms of the build-up of predications, the specification of time comes at the same stage as the specification of place in an NP (cf. Rijkhoff 1988, 1992), i.e. at the stage of "location": outside the domains of quality and quantity. In terms of the progression from operand end to operator end, this exemplifies the path from conceptual core to situational embedding. Time adverbs can be understood as standing on the borderline between the two realms. On the one hand a temporal specification may be seen as an additional attribute of the conceptual state-of-affairs; on the other hand one may see it as involving a movement out of the purely conceptual realm. In this, they are characteristically different from the "aspectual" modifiers, for instance adverbs of duration.

To take an example: "read (Joe, "War and Peace")" is a state-of-affairs; if we add a qualifying satellite (carefully), this is still the case; if we add a durational specification (in twenty-four hours), it is still purely conceptual structure. If we add indefinite locations (sometime, somewhere) we may be in doubt: there is almost no information added on the conceptual level, since a location of some kind would necessarily be implied anyway; but the element of external anchoring is also weak, since there is no identification involved. However, if we add definite locations: in Hawaii, on April 24, 1992, we transcend the purely conceptual specification and put it somewhere on the map and the calendar.
Yet while adding an external location, the definite adverbials also add a conceptual specification. Reading "War and Peace" in springtime in Hawaii sounds as if it might be a rather different experience, i.e. involving a state-of-affairs with a different conceptual specification, than reading it in the dead of a Russian winter. These two ways of adding to clause content are both part of the semantic potential of locational satellites. In terms of clause semantics, the two functions can be described as belonging just inside or just outside the state-of-affairs proper: the conceptual specification is part of the descriptive content, whereas the element of putting the state-of-affairs on the map and the calendar is necessarily outside the state-of-affairs itself. Thus the last addition to the conceptual content blends imperceptibly into the first external element that places the whole thing somewhere. This duality has implications for understanding the role of adverbials in relation to topic time. In understanding this issue, we need to understand both how topic time is different from temporal specification of the kind just described and how it is bound up with it. The difference is a matter of function, while the connection is a matter of time reference. Beginning with the functional difference, I would like to build on the discussion in Part Two, p. 279 about the dual situational anchoring of clausal meaning. In addition to the functional anchoring that is built into the layered structure, there is a superimposed dimension in which the progression from what is given to what is added is a central element; the classic case is the subject-predicate sentence type, in which the subject is topic and situationally given, while the predicate adds new information to it. The role of topic-related adverbials must be understood in this context. The difference in function between the two types of adverbial has been discussed in relation to Functional Grammar by de Vries (1992: 9) with reference to Clark and Clark's (1977) concept of "frame": the topic-related adverbial provides a frame into which the rest of the sentence fits. The difference between the two roles that an adverbial can have can be seen in an example discussed by Carlson (1989) in relation to genericity (usually associated with a generic subject), compare (129)-(130):

(129) Hurricanes arise in the Pacific

(130) In the Pacific, hurricanes arise

In (129) the locational specification is predicated about the birth of hurricanes, whereas in (130), where the locational adverbial is in the fronted position, this is not so clearly the case.
Instead, the location seems to become the site of the topicality that goes with the subject hurricanes in the most natural interpretation of (129). Similarly, fronted temporal adverbs can be understood as indicating a time with topical status: instead of predicating something about the state-of-affairs, they indicate what situation the state-of-affairs is about, compare (131)-(132):

(131) During our holiday, everything went wrong

(132) Everything went wrong during our holiday

The difference can be seen in its influence on the most natural readings for everything: In (131) it could plausibly be understood as indicating "everything about the holiday". As an iconic fact about constituent order, adverbials in initial position, like a number of other initial elements, often serve to link clause content to the pre-utterance situation. This is not a clear-cut coded difference in English, however. (131) and (132) can be read as synonymous — basically because the fronted position is not reserved for topical constituents only, but can also be used for sheer emphasis, as in

(133) Never have I seen such a mess

This problem is basically similar to what holds for the coding of topicality in nominal constituents; as argued in detail by Nørgård-Sørensen (1992), the whole tradition of sentence pragmatics is generally guilty of assuming that too much is structurally coded. The close relationship between the coding of temporal and nominal topicality is also evident from the fact that just as a topic time can be bound up with a certain discourse topic, the equivalent coded time can just as naturally be bound up with a clause subject, as in

(134) The Second World War was really worth fighting

Because of the subject itself, there is no need for temporal adverbs; the (stipulatively!) topical subject provides the temporal anchor on its own. Although the coding is difficult to pin down, the difference between the two jobs is fairly clear on the substance level. When no topic time has been previously established, a topic-adverbial may serve to establish a topic time; an example is the introduction to fairy tales:

(135) Once upon a time there was a King and a Queen.

Once the topic-adverbial has done its job the story-teller may continue based on the topic, and topic time, that has now been established:

(136) They had a daughter whom they loved dearly

Note that once upon a time would not be very likely as predicated temporal specification ("incidentally, this happened once upon a time"), but serves nicely to locate the story in the very indefinite past. If the difference between topic-adverbial function and temporal specification can now be taken as established, the next step is to see how they are connected. As adumbrated above, the connection is referential rather than functional: the times referred to by these functions must coincide. It cannot, on the principle of sense, be so that we take a temporally specified state-of-affairs and locate it at a topic time which is different from the predicated time. Internal time and external time must ultimately be the same if interpretation is to be possible. Second-World-War-fighting (internal) can only be located during the Second World War (external). The time we are talking about, and the time which we can ascribe to the state-of-affairs, must be compatible if the situational and the predicational ends are to meet. If this were not the case, you would be able to say things like:

(137) *Last night, I met Joe this morning

This means that when we have one adverbial it provides information relevant for both topic and state-of-affairs location. Therefore it would often be superfluous to have separate adverbials for each function, and a complete structural divorce between the two would be impractical. In the case of adverbials that are not referentially anchored, a clearer difference emerges in analogous circumstances:

(138) Similarly, he answered her

(139) He answered her similarly

In (138), similarly relates the state-of-affairs topically to what has gone before; in (139) it predicates a property of the state-of-affairs. The two occurrences of similarly are standardly placed in different adverbial classes, a solution which cannot capture the parallel differentiation in function within adverbials of time and place.
We have seen above that time adverbials can have two functions that the deictic tenses in themselves do not have: to provide a topic-oriented time, and to give the state-of-affairs a temporal specification. In the linguistically simplest case, we have only an implicit topic time and the deictic tense that
applies to it. In more complex cases we may have one or both adverbial functions represented. (140) is an illustration of how all four times can operate together:

(140) On the third night, the waiter wept throughout the dinner

(a) Topic time is situationally given ("our recent holiday")

(b) Topic-adverbial time provides a narrowed-down temporal site for the state-of-affairs (on the third night)

(c) Specifier-adverbial time indicates the temporal location as a property of the state-of-affairs itself (throughout the dinner gives an extra twist to what happens in a way that on the third night does not)

(d) Deictic tense brings about the matching between state-of-affairs and appropriate point-of-application that establishes the intended proposition (past tense in wept)

This functional differentiation occurs under compatibility restrictions that constrain all four temporal elements: topic-adverbial time must make contact with topic time in order to stay on topic; specifier-adverbial time must be compatible with topic-adverbial time in order for the full state-of-affairs to be locatable at the designated site; and the pointer associated with deictic tense (temporally near vs. distal) must be compatible with the other temporal indicators in order for the state-of-affairs to be applicable. The supposedly simple system of times that I proposed as a semantic theory of tense instead of the complicated earlier systems may now appear to have become just as complicated as its predecessors: in the above example we have no fewer than four overlapping times. But I think the complexity emerges from the complexity of the described object in a reasonably transparent manner. If we have two time adverbials plus a tense, we have three temporal meanings that we need to account for; if we also want to relate the sentence to the time of the discourse world, we have (at least) four temporal elements.
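For readers who find it helpful to see the compatibility restrictions stated mechanically, they can be given a toy formalization. The sketch below is purely illustrative and forms no part of the analysis itself: the representation of times as numeric intervals, the placement of speech time at zero, and all function names and numbers are assumptions made only for the demonstration.

```python
# Toy model (illustrative only): the four temporal elements of (140) as
# intervals on a numeric timeline, with speech time at 0 and the past at
# negative values.  All names and numbers are invented for the demo.

def within(inner, outer):
    """True if interval `inner` lies inside interval `outer`."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def compatible(topic, topic_adv, spec_adv, tense, speech=0.0):
    # Topic-adverbial time must make contact with topic time.
    if not within(topic_adv, topic):
        return False
    # Specifier-adverbial time must fit at the designated site.
    if not within(spec_adv, topic_adv):
        return False
    # The deictic pointer (near vs. distal) must agree with the rest.
    if tense == "past":
        return topic_adv[1] < speech
    if tense == "present":
        return topic_adv[0] <= speech <= topic_adv[1]
    return False

# (140) On the third night, the waiter wept throughout the dinner
holiday = (-14.0, -7.0)       # topic time: "our recent holiday"
third_night = (-12.0, -11.5)  # topic-adverbial time
dinner = (-11.9, -11.6)       # specifier-adverbial time
print(compatible(holiday, third_night, dinner, "past"))  # True

# (137) *Last night, I met Joe this morning: the times cannot coincide.
last_night = (-1.0, -0.6)
this_morning = (-0.3, -0.1)
print(compatible(last_night, last_night, this_morning, "past"))  # False
```

The only point of the sketch is that all the constraints must hold simultaneously for interpretation to succeed; it says nothing about how the matching is coded linguistically.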
The four times are therefore not an artifact of the theory, but something everyone would have to account for, no matter what their theory might be. And overlap is not the same as redundancy, when it describes the relation between different linguistic elements; by the principle of sense, separate pieces of information about the same event
should be compatible with one another. The situation is clearly different from that of a theory which describes any single linguistic form by means of overlapping times. The type of co-operation that I have described is compatible with a parallel drawn by Vet (1986) and Janssen (1991) with the structure of noun phrases. The tenses, and the determiners, are grammatical operators; the state-of-affairs and the noun (in standard Functional Grammar, the referential variable) are the modified elements; and the adjectival modifiers and the temporal satellites stand as "restrictors" (cf. Vet 1986); Janssen (1991) explicitly uses the term "definite description" in the tense context. This parallel also includes the identifiability requirement that goes with the pointing function of definite determiners and tenses; the addressee must be told enough to perform the necessary identification, and the same type of expression may do the job in both types of expression. In the following examples, the definiteness element in determiner and tense might become infelicitous for the same reason, if last Tuesday was omitted:

(141) Joe got drunk at the party (last Tuesday)

(142) Joe got drunk (last Tuesday)

In (142) last Tuesday is most naturally understood as a temporal specifier, but because of the referential identity constraint, it simultaneously does the job that in the maximally complex case is done by the topic adverbial. One may ask why a definite time adverbial could not actually code the "application" instruction that is specifically associated with deictic tense on its own. In principle there is nothing to prevent it; a natural situation would be if an adverbial meaning 'then' and an adverbial meaning 'now' could transform a state-of-affairs into a proposition. The fact is, however, that in languages like English the matching between state-of-affairs and world that I call application seems to be coded only into identifying tense.
'George do that at three o'clock yesterday' is still only a state-of-affairs rather than a proposition; you cannot use it to claim something about George, or to ask a question about him by uttering it on a rising intonation. The account whereby "direction-for-application" is reserved for the tenses and kept distinct from the concrete temporal address is not an explanation for why tense is obligatory, because many languages do well without such an element. However, if we assume that (for whatever reason) it happens to monopolize this function in languages like English,
we can understand why we need to have tenses in all clauses that are to be understood as expressing propositions. The question of why we have to mark time incessantly (cf. Quine 1960) is then partly answered by reference to the fact that what we basically do in all finite clauses is to mark the application element that is constitutive of a proposition — it just happens that in coding something as a proposition we are simultaneously forced to code its temporal point-of-application.5

3.6. Adverbials in complex tenses

Above, we have only considered cases where the deictic tenses stand alone; when we have complex tenses, the plot thickens. Here there are several times that are potentially relevant — and the question arises how adverbials function in that context. The principle of sense suggests that time adverbials should be usable of anything that it makes sense to locate in time. Assuming that it makes sense to have only one topic-adverbial time, we would still have room for as many specifier-adverbial times as there are times in the tense system. In a complex case like the past future perfect, there would be three such slots, as in the following specimen of non-English:

(143) *When Joe talked to me (= topic-adverbial time), he yesterday (= P for past) would, by Tuesday (= F for future), have finished his paper on Monday (= A for anterior)

An account of the structure of linguistic time reference needs to explain why this is not so; and Klein (1992) takes an important step towards such a theory. With specific reference to the present perfect, he suggests a general pragmatic principle called the "p-definiteness constraint" to explain why certain double time specifications are impossible. In describing what I think is correct in this constraint, I simultaneously argue that it follows from general Gricean principles as embodied in the principle of sense; in demonstrating why I do not think it can do the whole job, I try to show that a function-based account is better than a referent-based account. The theory depends on two time concepts, TT and Tsit. Tsit corresponds roughly to "event time"; TT abbreviates "topic time" and denotes "the time for which, on some occasion, a claim is made" (Klein 1992: 535). This time description is associated with the time of the finite verb, i.e. the
application time of the deictic tense; it embodies the same intuition that was involved in the criticism above p. 332 of "event time" as being the basic concept in temporal semantics. However, the tense theory is different, involving the traditional tripartition into past, present and future as its basic element; this makes a difference in relation to the future, as discussed below. According to the p-definiteness constraint, "In an utterance, the expression of TT and the expression of Tsit cannot both be independently p-definite". The notion "p-definite" is short for "position-definite"; in this terminology, the time involved in the present tense is p-definite because it must include the moment of speech, whereas the time involved in the past is not p-definite (until it has been contextually fixed). The p-definiteness constraint embodies an important point, which immediately explains the oddity of sentences like (144)-(145):

(144) *At seven, Chris had left at six

(145) *Chris has left at six (Klein 1992: 546)

The reason for the constraint is that if event time is specified (at six), the state after leaving holds any time after that, and therefore it is uninteresting to pin the topic time (time of application) down to exact clock time. Doing so (at seven) suggests that it might no longer be true at five minutes past seven. This constraint is designed to account for one of the most-discussed features of English grammar: the fact that you cannot specify event time in the present perfect. Since its TT is always pragmatically p-definite (including the point "now"), specifying event time runs against the p-definiteness constraint. In Klein's terms, cases like I have seen the sun at midnight are accounted for by saying that midnight in such cases is not p-definite, referring to a recurring point in time.
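The quoted constraint can be rendered as a one-line check. The boolean encoding below is my own illustrative gloss of the formulation just cited, not a formalism taken from Klein (1992); the example classifications simply restate the judgments discussed in the text.

```python
# Illustrative gloss of the p-definiteness constraint: an utterance is ruled
# out if the expression of TT and the expression of Tsit are both
# independently p-definite (position-definite).

def violates_p_definiteness(tt_p_definite, tsit_p_definite):
    return tt_p_definite and tsit_p_definite

# (144) *At seven, Chris had left at six:
# TT ("at seven") and Tsit ("at six") are both fixed to clock-time positions.
print(violates_p_definiteness(True, True))   # True: ruled out

# Chris had left at six: TT is fixed contextually rather than p-definitely.
print(violates_p_definiteness(False, True))  # False: acceptable

# I have seen the sun at midnight: "at midnight" names a recurring point,
# so Tsit is not p-definite and the p-definite TT of the perfect is fine.
print(violates_p_definiteness(True, False))  # False: acceptable
```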
In his conclusion, Klein suggests that the p-definiteness principle is a special case of a more general constraint, which could be glossed as "reasonable contrast"; this would also explain why *today, four times four makes sixteen is odd. To go further in this direction, one might relate it to the Gricean maxim of quantity: including a time adverb like today makes the statement less informative (because you say nothing about all other times), so if both versions are true, one should prefer the one without the
time adverb. It might also be accounted for in terms of the pointlessness (as part of the principle of sense, above p. 136-137) of including today: it adds nothing worth having to the utterance. I think the point made is valid, and goes a long way towards accounting for the restrictions in the number of possible time adverbials. As pointed out in Haberland and Heltoft (1992), however, there is a problem in using pragmatic constraints to account for language-specific phenomena. Unless they embody reference to culture-specific phenomena, they must hold for all languages with similar content elements. Danish has a present perfect that is very similar to the English one (i.e. not as attenuated as the German or French formal counterpart); just about the only clear-cut difference is that it is possible to append information about event time, as in:

(146) Jeg har sovet godt i nat

(147) *I have slept well last night

(148) Jeg har sendt ham et brev i går

(149) *I have sent him a letter yesterday

How can this be? I think we can preserve the main thrust of the p-definiteness principle while explaining the pragmatic acceptability of the Danish examples if we take a look at the directionality of the argument. Klein's account, paraphrased above, makes it absolutely clear that once you have an event time, it would be pointless to specify the exact reference time (time of reckoning, in my terminology) — but interestingly, it does not try to take the other route, from a specified time of reckoning towards the additional specification of the event time. From that angle, the pointlessness is not so apparent: while one can deduce the applicability of any subsequent time of reckoning from a known event time, one cannot deduce any particular event time from a known time of reckoning.
To put it differently, even if you know the time of reckoning (the "deadline") beforehand, it might still be of interest to know when the event took place:

(150) At seven, he had left — but I don't know when he actually left.

— whereas once you know the event time, it is pointless to ask for a precise later deadline. This difference is interesting if you look at the meaning of the clause from the processual perspective that is built into the notion of functional meaning. The ongoing build-up of meaning, which
takes place in real time, will automatically differentiate asymmetric cases, so that if we know which time is present "first", we shall know whether the p-definiteness constraint is likely to be absolute. If we have a constant "topic time" which functions as the time of reckoning, the typical situation is for the time of reckoning to be known beforehand (cf. Reichenbach on the "permanence of the point of reference"), while event time needs to be specified clause by clause, since each clause encodes a new event. This leaves the possibility of specifying event time pragmatically open, in cases where precision might be called for. In many cases it will be pointless to add information about event time, because we may only be interested in the situation at reckoning time: but in Danish, as we have seen, it is deemed acceptable to specify the time of sleeping as last night, or the time of sending off the letter as yesterday. We are then back at the question of why this should be impossible in English. I can see no other way to describe the situation than by referring to the unsatisfying, but empirically unavoidable fact that out of the combinations that are made possible by the individual content elements and structures in any given language, plus general Gricean principles, there are always some that are conventionally used and some that are not. I suspect that combinations like these occur quite frequently as accidents de la parole in English, but so far they have failed to become "sanctioned" in Langacker's sense. Returning to the general issue of multiple time specification, this issue is one that has revealed some disagreement among native speakers, including linguists. With the proviso that not all speakers accept them, Smith (1978: 52) presents examples like

(151) Last night, Mary had disappeared 3 months ago

The gloss she provides is that they have a two-sentence source such as Bill told me last night: Mary had disappeared 3 months ago.
Similar sentences are also accepted by Hornstein (1990), for which he is criticized by Dahl (1992: 647), whose native informants reject them. I would like to suggest that the explanation lies in the kind of situations in which one might be interested in having both specifications. It is a frequent experience that sentences which have been judged ungrammatical in an off-hand armchair situation turn out to be grammatical if the pragmatic context is weird enough. I think Smith's gloss points the way: if one is in a state of
confusion about what the situation is, information that one would otherwise take for granted might suddenly become useful. Suppose a group of people are in possession of a number of conflicting pieces of information about strange disappearances and slowly try to gather the threads and piece the information together. If in the morning some pieces of information gathered the day before suddenly jell into place, it would (according to me) be okay to say something like

(152) Now we finally know that last night Mary had disappeared 3 months ago

Similarly, the quotational reading would be obvious if investigator A comes up with a different story from the one he had argued the day before, and investigator B says

(153) Today she is at home in bed, but last night she had disappeared three months ago

What is specified by last night is not the (completely redundant) time of the state after disappearance: on the cumulative view of time, the perfect does not solely denote an after-state, but also a situation seen as describable by means of what has gone before; last night therefore indicates what situation the (now discarded) accumulated information described. If we take the future into consideration, an extra, potentially specifiable time comes into the picture. However, lookahead time will very rarely be specified together with F. This is due to the same type of reasons that are expressed in the p-definiteness constraint, seen in combination with the nature of the pure future. Because the pure future says nothing about lookahead time, it would be pointless to specify it in an adverbial: a pure future statement is supposed to be valid regardless of when in the pre-time it is made. Yet for this very reason, exceptions may arise. In the first months of 1994 there was a protracted scandal involving the issue of whether a figure-skater would be allowed to continue after evidence suggested that she was involved in an assault on a rival.
Police statements, press rumours, legal claims and counter-claims reverberated, and after one judicial decision a TV reporter said

(154) At this point Tonya Harding will still go to the Olympics in Lillehammer next month

The suggestion was that predictions might have to be time-indexed under these special circumstances. This analysis implies that you should not rule out an example because the circumstances are weird — instead, ordinary pragmatic restrictions on particular linguistic combinations can be suspended if pragmatic circumstances are not ordinary. This also suggests that the true locus of the restriction is at the instructional stage: do not use time adverbials to make specifications that are pointless, such as to locate a "state after" when you already have a precisely defined event time. Although the following discussion of Klein's tense theory would most naturally belong in the section on the future, I have postponed it until now because it is convenient to be able to discuss it in light of the concept of topic time that I have suggested above. Klein's notion of "topic time" (indicated by quotation marks below) has affinities both with the application times that are coded into the deictic tenses and the purely situational topic time that I defined above. The notion of "application time" is defined in relation to the instruction involved in the definiteness component of the past and present. Klein's times, as evident in relation to the notions of p-definiteness and b-definiteness, are established time spans ("product" or "referent" notions); and since the difference is only visible from the functional point of view, they are difficult to keep distinct from event times, as I shall try to demonstrate. Klein claims that there can be several topic times in the past (Klein 1992: 545), as when the statement Chris was ill is valid for several periods in the past. From a functional point of view, the past tense only asks the addressee to locate the proper world/time address at which to apply the state-of-affairs, and it does not matter whether the event occurrence is continuous or not — the meaning of the past tense is the same.
This also implies a clear difference in relation to my conception of topic time. Let us imagine that Chris's illness is brought up by his dissatisfied boss: Chris wasn't much good to the company last year, because he was ill. Chris's boss could either be talking about the busy spell before Christmas or about attendance rates on Mondays; and whether Chris was ill on seven consecutive Mondays or all of December, coded semantics deals only with the instruction to locate the point-of-application, and both topic time and application time can preserve a generic singular — only the actual event times need to be in the plural. This also fits with the fact that plurality might as well occur in the present (although the notion of p-definiteness predicts a difference):

(155) whenever I want(ed) to talk to him, Chris is/was ill

The functional account, which keeps referred-to time spans out of the meaning definition of tenses, can have a unitary notion of application time and leave the discontinuities to adverbials or the process of interpretation. The difference also comes out when Klein recognizes a future "topic time", symmetrical to the past. Klein illustrates the parallel between past and future topic time with the statements (Klein 1992: 547):

(156) Chris was in Pontefract

(157) Chris will be in Pontefract

Since in these statements neither "topic time" nor event time is specified, according to the p-definiteness account they need pragmatic specification of one (but not both) of these in order for the utterance to be felicitous. Let me begin by accepting the intuitive judgment on which this account is based; the question is how it should be accounted for. According to my account the two sentences would be different in terms of application time: the past tense, as involving definiteness, requests identification of the application time; but the future, according to the account in terms of "aheadness", has an application time in the present. But that does not explain why something seems to be missing in (157). I have two observations to make. First, I would like to suggest that the oddness has something to do with the fact that (157) designates a state. The point can be illustrated if we change the verb in Klein's example to an event verb:

(158) Chris went to Pontefract

(159) Chris will go to Pontefract

Now the future ceases to require a temporal specification, while the past still needs one (if we disregard the "hot news" sense, cf. above p. 345); and Klein's theory does not predict this difference. I think the explanation has to do with the purely situational notion of topic time as "the time we are talking about", which may sometimes be more necessary in relation to states.
If the future state-of-affairs is an event, it can be linked up with the present simply by seeing it as constituting a step away from the way things are. In (157), however, we skip the intervening event and go straight to a future state that is different from the present state — and this will often call
for a temporal specification: there must be a topic, located in the future, that we are talking about. A case in point is

(160) The Hendersons will all be there

where the topic time is the occasion envisaged in the Beatles' Sgt Pepper's Lonely Hearts Club Band. But note that a situational topic time is a different animal from a coded application time. This shows itself in the fickleness of the situational topic time: it is not always required, even for states, and its presence/absence is not accountable for by reference to any obvious linguistic facts. Cases like they'll be sorry or he'll be all right do not necessitate a future "topic time" in Klein's sense. This also goes for the present future perfect, in spite of the fact that it almost always requires some sort of temporal specification of the reckoning time for the perfect. With the pure future reading of will,

(161) George will have answered the question

seems odd if we do not know the deadline. However, time may be completely irrelevant if the context is right, compare the following quotation (which describes the good will of a person whose actions may be irregular from the point of view of official morality):

(162) ...And you can depend on him — in a way, his way. Those pyjamas — he'll bring them, I'm sure. But how will he have got them? (Graham Greene: The Captain and the Enemy, p. 34)

Thus I think Klein's "topic time" in relation to the future tense has the status of a purely situational topic time, while in relation to the present and past it serves as application time. For that reason, I think the concept does not accurately capture the collaboration between coded and situational time. In cases where we have complex tenses but only one time adverbial it may be unclear what tense time is being specified, as we have seen for the past perfect. In terms of content syntax this means that adverbial scope is not determinate.
In a completely scope-ordered structure with rigid observance of Behaghel's First Law, position would relate verb stem and time adverbial, so that time adverbials always came right before (or after)
the stem whose time they marked. Thus we would have both (163) and (164):

(163) *He had at eleven left

(164) He had left at eleven

Attempts to render (163), where at eleven = P, usually put the adverbial first, but as already pointed out, the fronted position serves more than one purpose, so a fronted adverbial can also indicate event time (= A):

(165) At eleven he had left, at twelve he had hired a car, and finally at one he had given himself up to the police

By and large positional options are the same whether time adverbials modify P, F or A, although in pragmatically difficult cases Behaghelian mechanisms sometimes stretch the rules of grammar. A structural correlate to the lack of positional differentiation is that the development from full to auxiliary verbs involves gradual loss of capability for separate modification (cf. Spang-Hanssen 1983, Davidsen-Nielsen 1990b). Scope ambiguities are therefore greater in this area than in connection with other syntactic relations: for some purposes the whole verbal complex operates as one semantic whole. Since Kleinian restrictions suggest that we rarely need more than one specification, this will rarely create problems. In general, relations between scope and position of adverbs are not fully determinate; possibilities of fronting and considerations of weight combine to give a fairly complex system (cf. Jackendoff 1972, Bartsch 1972; compare also what was said above p. 293-294 on scope indeterminacy).

4. Beyond the simple clause

4.1. Tense in subclauses: general remarks

Subordinate clauses behave differently from simple independent clauses in a number of ways. In finite subclauses, the only type I discuss1, the central issue from a tense point of view is the phenomenon traditionally known as "consecutio temporum" or "sequence of tenses", i.e. the fact that tense choice in subordinate clauses is dependent on the tense choice made in the matrix clause. I shall try to show that an account in terms of functional meanings, collaborating in a structured content syntax, can make a significant contribution to our understanding of this phenomenon.

In relation to a structural account that abstracts from functions instead of seeing structure as function-based, the advantages of the function-based account are fairly obvious. If the sequence of tenses is understood as a purely structural relation, a simple one-way dependence between main-clause tense and subclause tense, one has to allow for a wide variety of exceptions. An algorithm of rules that would allow you to predict actual tense choices would be so complicated that a natural conclusion is the negative one Rohrer (1986: 91) quotes from Brunot: "Le chapitre de la concordance des temps se résume en une ligne: Il n'y en a pas." ['The chapter on the sequence of tenses can be summed up in one line: there is none.']

I try to show that some of the problems stem specifically from the referential view of tense meaning, and can therefore be understood as artifacts produced by an incomplete understanding of linguistic semantics. If meaning is understood in functional terms, one of the standard premises of the discussion evaporates: the referential oddities that occur in embedding contexts are no longer synonymous with oddities of linguistic content — i.e. tenses can mean exactly what they always mean, even if there are special complications in determining their referential correlates.
These complications can then be explained in terms of either content syntax (when there are special linguistic meanings in the linguistic context, the interpretation of tenses will be affected by them), or situational factors (when the world of which we speak has special, complicated properties, this brings about complications in the referential potential of the linguistic expressions). Postulating a purely formal rule in order to explain why tense expressions do not have the usual referential correlates only makes sense if a reference-based semantics is prescribed by fiat. Looking at the role of the content syntax, we need to have a precise understanding of what is semantically special about subclauses. What native
speakers understand intuitively is the combined effect of subclause meaning and tense meaning; and we can only be precise about the contribution of tense in subclauses if we are simultaneously precise about the semantic context in which it operates. In functional terms, the fundamental difference between subclauses and independent clauses lies in their functional potential, the type of job they do. The grammatical element that specifies the functional status of the clause as a whole is the illocutionary (i.e. declarative/interrogative) operator; and as argued in Harder (to appear), we can understand some central types of subclauses as having subordinators that occupy the same content-syntactic slot as this operator. Instead of getting an independent illocutionary function, subclauses get a subordinate ("sub-illocutionary") function; only the topmost illocution, the one in the matrix clause, is independent. Subordinators like if, when, since, so, that all assign clauses a functional identity that must be understood by reference to a higher clause; in terms of job description, such a clause is not its own boss, but a mere hireling. As occupants of the illocutionary slot, the subordinators exert an influence on everything in the subclause, because everything is inside their scope.

If tense choice in subclauses was totally free, it would mean that the subordinate job done by subclauses had no relation to the temporal location of the matrix clause. Conversely, sequence of tenses as an across-the-board phenomenon would mean that the subordinate job done by the subclause operated totally within the time frame established by the matrix clause. The actual complications are somewhere in between these two extremes; and a large part of the variations can be understood as reflecting the particular subordinate job the individual subclause types do. I shall go through two fairly simple cases and one more complicated case.
Temporal subclauses are close to the dependent end of the spectrum with respect to tense, because the job they do directly ties in with the role of tense. In temporal subclauses with propositions as ingredients, the proposition is used to define a time — which in turn is used to locate the matrix clause state-of-affairs. If such a subclause proposition applies to a past situation, it is only compatible with pastness in both subclause and matrix clause. Notice that like all compatibility-based constraints this one is mutual rather than inherently one-way. If the speaker wants to speak in general about what he was doing at the time of Kennedy's assassination, he may prefix the proposition with when, thus creating a time out of the proposition:
(166) When Kennedy was assassinated...

Once he has started out as in (166), the (identifying) tense is fixed for the matrix clause as well; and the only reason why it is more natural to speak of subclause tense as being restricted is the subordinate status of the subclause itself, in terms of which all semantic choices in it are made subject to compatibility with the matrix clause that it presupposes.

Relative clauses are close to the independent end of the spectrum. This is because their job is to create a complex property which is assigned to an entity2. Properties are standardly coded by simple adjectives; the reason for having a proposition as an ingredient in a property is typically that we want to specify a particular time in relation to which the property belongs. Thus, the friendly girl and the girl who was friendly differ only in that the latter is more explicit about the temporal location of the friendliness. If the relevant property is defined in relation to a specific event, a relative clause is the most direct way of coding it; the referent of the man who phoned yesterday may have no other identifying property than his role in the event coded in the relative clause. This specific function of relative clauses accounts for the free tense choice: since the exact nature of the property depends on the application time of the proposition, time reference must have access to whatever situations the property-defining propositions apply to.

Not all tense phenomena in subclauses are equally clearly motivated, however. One type of semi-motivated regularity is the behaviour of the future in temporal subclauses. In such subclauses, the future typically does not occur even where the temporal location would make it obligatory in an independent clause:

(167) *I'll do it when Joe will come home, cp.
(168) I'll do it when Joe comes home

From Jespersen onwards, there is widespread agreement that this reflects some form of dependence on the time that has already been indicated by the matrix verb (cf. Allen 1966: 168; Declerck 1991: 34, 38); the present indicates "simultaneity" or "relative present" within the domain established by the main verb. There are certain difficulties with this view (cf. Harder 1989a: 21), as evident in those cases where subclauses occur without matrix clauses, compare (169):
(169) And when Joe returns?

Since there is no matrix clause, the capability of the simple form to refer to the future cannot depend in any simple way on a matrix clause tense. A different motivation for the simple form can be found in the compositional semantics of the when-clause. We have seen that when converts the proposition into an adverbial denoting the time at which the proposition is or becomes true; so if we take the proposition '(that) Joe returns' and put it in the scope of when, we get a time defined by the truth of the proposition 'Joe returns' — "Joe-returns-time", so to speak. Under this assumption, the presence of a future form is unwarranted, because we want to indicate the time at which the proposition is actually true, not the time at which it is still ahead. From a strictly compositional point of view, the deviant when Joe will return would indicate the time at which Joe's return is ahead ("Joe-will-return-time"), rather than the time at which Joe actually returns.3

However, if this was the universal bottom-line argument, we would never find languages where such subclauses had the future tense, as in fact they do in French:

(170) Je le ferai quand Jean viendra
      I it do-fut when John come-fut

In such languages, the compositional motivation for the simple tense is overruled by the "extrinsic", situational perspective on the event, in terms of which both matrix clause event and subclause event lie ahead in time. The future thus marks the implied time span within which the speaker views the time-fixing proposition as becoming true. Both future and simple forms thus have some motivation to point to; it is therefore an arbitrary choice on the part of the language whether this special time-referential slot should be assigned to the simple present (as motivated by the compositional semantics of the subclause) or the present future (as motivated by the situational time perspective).
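The compositional point about "Joe-returns-time" can be rendered as a minimal computational sketch. The fragment below is purely illustrative and no part of the theoretical apparatus argued for here; the predicate names (`joe_returns`, `joe_will_return`) and the integer timeline are invented for the example. A proposition is modelled as a predicate over times, and when picks out the times at which the proposition is actually true, not the times at which it is still ahead:

```python
# Toy model only: "when" converts a proposition into a time, i.e. the set
# of times at which the proposition holds. Times are integers here.

def when(proposition, timeline):
    """Return the times at which the proposition is true."""
    return [t for t in timeline if proposition(t)]

timeline = range(10)

# 'Joe returns' is stipulated to be true at time 5.
joe_returns = lambda t: t == 5

# A future-marked version, 'Joe will return', is true at the times at
# which the return is still ahead -- the wrong set for fixing a time.
joe_will_return = lambda t: t < 5

print(when(joe_returns, timeline))      # "Joe-returns-time": [5]
print(when(joe_will_return, timeline))  # "Joe-will-return-time": [0, 1, 2, 3, 4]
```

On this sketch the deviant when Joe will return would locate the matrix event throughout the span before the return, which is why the simple form is compositionally motivated.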
Since both forms make sense, we might wonder why languages tend to make a choice at all — why not simply have them both and leave it to the speaker? And indeed, as often in borderline cases, pragmatic factors may sometimes cause a suspension of normal practice. I have never found a case of a future in an English when-clause, but I have found one case in Danish (which in this respect is otherwise like English). The context is one in
which the peek-ahead element is very strongly emphasized, so it seeps into the subclauses as well:

(171) Når genniveauet i de næste 15-20 år vil afsløre en komplexitet med utallige indbyrdes relationer, når miljøet vil vise sig at spille ind i selv de mindste genfænomener, og når indlæring, imitation og lignende vil blive indarbejdet i beskrivelsen af dyreadfærds genese, så vil sociobiologerne og tilsvarende reduktionister kunne bruge Dawkins' argumentationsform og hævde nøjagtig det samme som de hævder i dag (Køppe 1990: 175).
['When the gene level during the next fifteen to twenty years will reveal a complexity with countless interrelations, when the environment will turn out to play a role in even the smallest genetic phenomena, and when learning, imitation, etc., will be integrated in the descriptions of the genesis of behaviour, then the sociobiologists and similar reductionists will be able to use Dawkins' form of argument and claim exactly what they are claiming today.']

Similarly, Declerck (1991: 103) quotes some examples where in English the future has slipped into before-clauses, including the following from Kruisinga (1931: 494):

(172) ...it will be some time before we will see men about town walking down Pall Mall with a sugar stick in the mouth instead of a cigarette

I am not going to go into detail with the behaviour of English tense in this area; a detailed account can be found in Declerck (1991). It seems to me that the only generalization that one can suggest in terms of semantic structure is that if the future as a forced coding option for time ahead is totally dominant in a language, all such subclauses will be in the future (a language which is close to this extreme appears to be modern Hebrew, which also enforces it in conditional subclauses, as opposed to French).
With lower degrees of entrenchedness, futurity-coding can be expected to give way to competing motivations in various contexts, possibly so that an implicational scale could be posited. A test case for this theory is the relationship between relative and interrogative when-clauses on the one hand and temporal when-clauses on
the other, which by anybody's standards would appear to be closely related. Can it explain why the first group standardly require future marking, while temporal clauses do not? I think the crucial factor is the structural slot that when goes into. As described above, the temporal subordinator when takes the place of the illocutionary operator, converting the proposition into a time. In relative and interrogative clauses the temporal element does not fill the illocutionary slot but has the position of an adverbial, inside the proposition. Hence, there is a proposition involving a future time inside relatives and interrogatives, whereas the temporal clauses are not inherently future-oriented, but only have this orientation imposed extrinsically. All three types occur in the following extract:

(173) "I dread the day when you'll walk out that door for the last time. When I'll know you won't be coming back the next day, because you'll be in an airplane, on your way back to England." "I don't know yet when I'll be leaving", he said... "When you've all flown away, I'm going to feel terribly lonely." (David Lodge: Paradise News, pp. 321-22)

In the relative clauses (when you'll walk... and when I'll know...) and the interrogative clause (when I'll be...), there is a propositional nucleus involving a reference to future time ("you will walk out", "I'll know", and "I'll be leaving"), and this time-referential element is decisive for the nature of the property coded by the relative clauses and the implicit question coded by the interrogative clause: different times would mean different properties and questions. In the temporal clause (when you've all flown...), the meaning of the clause is "At you've-flown-away-time I'm going to feel terribly lonely", and futurity is not coded inside the subclause itself — although it might conceivably be coded in cases of exceptionally strong motivation, like (171) above.


4.2. Indirect speech

4.2.1. Mental spaces and referential ambiguity

I shall go into detail with subclause tenses on only two points: indirect speech and conditionals. In introducing these two complicated cases, I would like to make some observations on the role of the concept of mental spaces in tense theory. Cutrer (1994) bases her theory of tenses, which is in many ways congenial to the approach suggested here, on distinctions described in terms of the mental spaces that tenses cue or refer to. When I restrict my use of the notion to the cases below, it is because of an Occamist leaning: mental spaces constitute a very powerful descriptive tool, since the notion enables you to multiply entities corresponding to the inventory of spaces, and therefore I prefer to use it only in the narrower range of cases where it is obviously necessary.

In the widest use of the term, a mental space is any contextually motivated grouping of conceptual constructs, space being the mother of all metaphors, as it were (cf., e.g., Fleischman 1989). On that view, temporal and geographical regions can be seen as special cases of mental spaces; and the topic adverbials discussed above would be standard examples of "space builders" (cf. Fauconnier 1985). But this interpretation is not strictly necessary; in cases without special complications, they can equally well be regarded as regions within the space of reality.

There is also a more specific reason why I think the domain of tense is more revealingly handled in terms of a narrow use of mental spaces, namely the basic difference between the semantic substance domains of tense and modality: tense typically involves reference to one world, whereas modality involves alternative worlds, and therefore modality constitutes a paradigm example of the need for recourse to mental spaces in the narrow sense. The past and the present involve different points-of-application, but these are typically situated within the same reality.
It is a shared feature of the deictic tenses in the prototype uses that temporal relations are understood as existing outside the head and as compelling regardless of point-of-view, as I tried to show in the discussion of Janssen's theory of the deictic tenses in section 2.2.4. Whatever is located by means of a standard deictic tense is therefore presented as accessible independently of the mental, conceptual grouping itself. In addition, the theory of the perfect that I have argued for
involves a conceptualization in which temporally anterior facts continue to hang around, which makes a location in a different mental space an awkward metaphor.

Mental spaces exist by virtue of intentionality; only the ability of intentional subjects to construct their own representations of reality makes it possible to distinguish between more than one place where the same things can be at the same time.4 Mental spaces are therefore a necessary tool when conflicting intentional representations are involved — but this is a very sophisticated type of case. As discussed in relation to Gopnik (cf. p. 47), it is only after age three that children stop regarding their own conceptualizations as direct representations of the real world, and begin to be able to reason about alternative pictures of reality; and only this ability enables speakers to keep track of the relation between different representations of the same world. These are also the standard cases explored in Fauconnier (1985), with differences between the girl in reality and the girl in the picture, Hitchcock in the film and Hitchcock the director, etc.

Within the basic tense system, the only case where this mechanism is involved is the future, which depends explicitly on the subjective conceptualization of the speaker. However, in the special contexts of indirect speech and in conditionals, the creation of alternative versions of reality plays a decisive role. As in the previous sections, I shall argue that a focus on coded functional meaning as opposed to time reference, coupled with an increased awareness of the semantic half of syntax, can throw new light on some classical problems. I begin by describing the characteristic features of the referential picture, and afterwards return to the role of coding.

In mental space terms, the main referential feature of indirect speech is that the mental space of the subclause is a "daughter" space of the "parent" space defined in the matrix clause.
Reported speech involves an original speaker (the source) and a speaker who passes on the message (the reporter), and because of the two representations involved we need separate mental spaces in understanding an utterance. In the simplest possible case, that of accurate direct quotation, the original utterance is preserved by the reporter, who merely hands over the utterance verbatim, as in

(174) Source situation, Sunday: (Joe to Jack:) I am still thinking of yesterday
(175) Report situation, Monday: (Jack to Jill:) Joe said, "I am still thinking of yesterday"


In relation to (175), the daughter ("source") space in which Joe speaks comes into existence only as part of the parent ("report") space in which Jack speaks. In terms of what is communicatively available, there is unilateral dependence between report space and source space. The report space is the actual situation, and the source space is only created as part of ongoing communication — Jack could also have taken up the content of the message (Joe's thought) without bothering to place it explicitly in an earlier "speech space", as in (176):5

(176) Joe is still thinking about what happened the other day

The main linguistic problem concerns the interpretation of deictics. Because they point to referents in the speech situation, a problem arises when there are two speech situations, and the deictic centre may therefore be "shifted". In interpreting the report, Jill needs to understand the first person pronoun I and the time adverb yesterday by going back to the source space (I = Joe, yesterday = Saturday), or she will get the wrong message ("Jack is still thinking of Sunday"). In the direct version, it is fairly obvious that this is not because the deictics are ambiguous; the function of the pronoun I is exactly the same (self-reference); only the context of interpretation, the "viewpoint" that determines its reference has changed.

This description, boringly uncontroversial as it is, implies that the two spaces (source and report) are part of the context in terms of which reports need to be understood, and the cognitive ability to keep track of them is necessary for the addressee to understand the message. The most important single fact that needs to be understood about reported speech is the fact that the addressee has to operate with two spaces, corresponding to the two speech situations, at the same time and sort incoming information correctly between them — with or without linguistic cues.
The point of emphasizing this hopefully obvious fact will emerge in relation to alternative, less "clean" ways of reporting; and this is where the functional view of meaning begins to have important implications. The standard indirect version is the following:

(177) Joe said that he was still thinking of the day before

If you have a referential rather than a functional view of meaning, (175) and (177) are synonymous, and the differences are only "formal": two sets
of expressions for the same content. But according to the functional view of meaning, the expressions used are different because the (process-input) meanings are different — they just happen to guide the addressee's interpretation process towards the end-product, i.e. the same message. Among the differences in meaning we find a difference in the illocutionary slot: instead of the declarative illocution, which is preserved in the direct speech, we find the meaning of the complementizer that in the role of specifying the functional status of the proposition. While the declarative illocution signals that the proposition is to be understood as conveying a fact, that signals that what follows is a proposition construed as an entity — a situation-description with no truth-commitment on the part of the speaker (cf. the discussion in Harder to appear). In other words, there is only one declarative speech act in (177), whereas there were two in (175); instead of reproducing the source speech act, (177) includes the content of the speech act construed as an entity. For the deictic elements, this difference between the direct and indirect versions goes with a shift in the space from which they are viewed in the coding (V-point space in Cutrer's terminology): so in direct speech we carry out the interpretation based on the source space (Joe = "I"), in indirect speech based on the report space (Joe = "he").

Before we go into the details, I would like to highlight one aspect of the approach that this argument represents. Instead of presupposing that one form must be derived by formal rule from the other whenever there is referential identity, it asks, "by what means does an addressee get the same referential interpretation in the two cases?". There are three kinds of fact which enter into the answer. The first is the interpretive skill required to handle the specific nature of such sophisticated referential contexts, including mental space differentiation.
Referent identification necessarily involves this situation-management component: even in the simplest cases, mastery of language never guarantees that you will get the referents right. The second factor concerns the contribution of those linguistic expressions with which tense has a content-syntactic relation. It would be nice if we could explain tense in subclauses solely by these two factors, as reflecting the basic theory of matrix-clause tense within the referential and content-syntactic context that is characteristic of each subclause type. However, a third factor is necessary: there are some special facts about the semantic potential of the linguistic expressions in this particular type of context. This is where there is an issue involving specific aspects of linguistic "form" in indirect speech; if such cases did not exist,
not only would there be no rule of sequence of tenses, there would be nothing about tense in indirect speech that needed specific attention from the linguist, whatever his approach.
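The shift from (175) to (177) can be given a toy computational sketch. The fragment below is illustrative only, not a claim about how reports are produced; it hard-codes the Joe/Jack example, and the mapping table is an invented rendering of the idea that deictics are re-interpreted from the report viewpoint while "ordinary" elements pass through unchanged:

```python
# Toy sketch only: re-orienting deictics from the source space to the
# report space, as in the shift from (175) to (177). Non-deictic words
# pass through unchanged; only this one example is covered.

SOURCE_TO_REPORT = {
    "I": "he",                     # person deixis: Joe, seen from Jack's space
    "am": "was",                   # deictic tense, re-anchored to report time
    "yesterday": "the day before", # time deixis, re-anchored likewise
}

def reorient(source_words):
    """Map each word of the source utterance into the report viewpoint."""
    return [SOURCE_TO_REPORT.get(word, word) for word in source_words]

source = "I am still thinking of yesterday".split()
print("Joe said that " + " ".join(reorient(source)))
# Joe said that he was still thinking of the day before
```

The point of the sketch is merely that the same source string yields the indirect version once the viewpoint space is fixed; everything else about the report (the single declarative speech act, the complementizer that) lies outside it.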

4.2.2. Point-of-view: coding and pragmatic considerations

In investigating such special properties, a function-based approach looks for a radically different type of account than an approach in terms of a formal rule. As we saw above pp. 215-216 in the case of set plurals vs. individual plurals, when linguistic resources have to be stretched to cover cases that introduce an extra dimension of ambiguity, we would expect that expressions sometimes go this way, sometimes that way, because they have to do double duty — we would not expect a fully clear-cut pattern as in cases where purely mechanical rules apply. The descriptive task therefore consists in finding out in each type of case first of all whether linguistic coding is vague or specific, and secondly (if specific) whether it forms a pattern consistent with what we find in other types of case.

As we have seen, part of the story about indirect speech is that while the "ordinary" elements are unchanged from the source space, deictic elements have to be (re)oriented to reflect the viewpoint of the report space: I becomes he, etc. This does not hold in all cases, however. Referent identification, which involves a deictic element in the shape of definiteness, is in general systematically vague between source ("de re") and report ("de dicto") readings, with no explicit clues, as illustrated in (178):

(178) He said that the fellow from Poughkeepsie was coming

Here we do not know whether the description "the fellow from Poughkeepsie" stems from the source or the reporter. Also, there are other ways of sharing between source and reporter than the clear-cut deictic/nondeictic division, most famously through "free indirect speech":

(179) He was still thinking of yesterday

Only tense and person deixis now reveal the "mole" narrator's "report-viewpoint"; he is so to speak reporting from a position where he can spy on the mental processes of the source subject. The deictic adverbs are
source-oriented, thus requiring interpretation from a different viewpoint than tense. To account for that split, we therefore need more than a broad rule for "indirectness".

Among the special conventions applying to tenses in English in indirect speech (including "free indirect speech"), there is a rule saying that when the source space is in the past, the deictic tenses take the reporter's space (rather than the source space) as their viewpoint. The fact that this is a linguistic convention can be seen by comparison with Russian, where the rule is different and deictic tenses typically take the source space as their viewpoint. This is a linguistic choice with an element of arbitrariness in it — languages may go either way. As emphasized previously, however, that does not mean that it is not a semantic choice: it is a choice specifying which of two semantic options the tense form is going to code in this special linguistic context.

This theory gives the same prediction for most cases as the formal rule, which sees it as something the matrix verb tense does to the embedded clause tenses (cf. Jespersen 1924: 290; Hjelmslev 1938: 159; Comrie 1985: 111).6 The advantage in seeing it as a matter of fixing a semantic viewpoint can be illustrated with the way this theory deals with an important class of exceptions to the purely formal rule, namely those cases where an original present form is retained instead of being shifted to the past:

(180) "George is coming" → He said that George is coming
(181) "They will return" → He said that they will return
The standard account of such cases is that (as an exception) you can opt out of the sequence-of-tenses rule when the reported statements still apply (cf. Quirk et al. 1985: 1027). The theory suggested above would give a different account: if we maintain that time reference always reflects the reporter's point of view, we get a reading where the reporter ignores the original time reference and substitutes a new one. This account reflects the same referential facts, and is in that sense equivalent. However, I see at least two advantages in it:

(a) such cases are no longer an exception to the basic generalization: the present tense in the indirect speech reflects the reporter's point of view, and
is motivated by the fact that the speaker intends to let the clause apply to the present.

(b) it predicts that the embedded clause conveys something different than the source clause did: the reporter brings the content of the statement up to date, letting it apply to the situation that is relevant in the report space and leaves out that part of the original applicability that is now dead and gone. In presenting the information, the reporter surreptitiously edits the content of the source utterance — by the expedient of repeating it in the same tense form. If he wanted merely to report what was said then, he could not repeat the exact words in the new situation.

According to this account, the only additional feature needed to account for English deictic tense in indirect speech is an extra specification with respect to the point of view from which the tense should be understood: always the reporter's point of view. The updating is an exception not from the point of view of tense, but because it is a special kind of mixing of report and source: the reporter is shedding the obsolete part of the original message in reporting it.

Comrie's reasons for choosing a syntactic rule over a point-of-view explanation (he uses a "deictic centre account", as he calls it, to account for all other shifts from direct to indirect speech) are the problems that occur in connection with sequence of tenses when the future is involved. Some of these problems arise from the assumption that the future should behave like the present and the past, as predicted by the traditional tripartite tense theory. I shall try to show that the distribution of tense forms is better understood once the nature of the future as a non-deictic tense is appreciated. As pointed out by Comrie (1985, 1986), clauses in the future do not follow the pattern found in the (supposedly symmetrical) past.
If that were the case, the correct form of (182) would be (183) rather than (184), see also Declerck (1991: 160):

(182) Diana will say, "I am dancing"
(183) Diana will say that she will be dancing
(184) Diana will say that she is dancing
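The distributional pattern in (180)-(184) can be summarized in a small sketch. The fragment below is illustrative only, under the simplifying assumption that embedded tense choice depends on just two labels (the function name and tense labels are invented): deictic source tenses are re-anchored when the matrix verb is past, and the original tense is retained otherwise.

```python
# Toy summary of the English default pattern only. "present" and "past"
# are the deictic tenses; the non-deictic future is treated separately.

DEICTIC = {"present", "past"}

def reported_tense(source_tense, matrix_tense):
    """Default embedded-clause tense when a source utterance is reported."""
    if matrix_tense == "past" and source_tense in DEICTIC:
        return "past"       # re-anchored to the reporter's viewpoint
    return source_tense     # retained: nonpast matrix, or non-deictic future

print(reported_tense("present", "past"))    # past: "I am dancing" -> she was dancing
print(reported_tense("present", "future"))  # present, cf. (184)
print(reported_tense("past", "future"))     # past, cf. (186)
```

The retained presents in (180)-(181) fall outside this default: on the account proposed above they are not rule exceptions but updates made from the reporter's viewpoint.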


Comrie's account involves an additional formal rule saying that when the matrix verb is nonpast, the original tense is retained. Declerck's account involves a semantic generalization saying that tenses are interpreted relatively to a matrix verb in the present. Hence, the simple present indicates "simultaneity with the head clause situation" in subclauses when the matrix verb is not in the past (the same mechanism that he uses to explain the absence of future in subclauses, as discussed above). These two accounts also cover the use of the past as seen from a future viewpoint, as in

(185) Diana will say: "I was dancing"
(186) Diana will say that she was dancing

In such examples, the past is used even if Diana's dancing is in the future as viewed from the report space. As compared with the case where the source was in the past, we now have a situation where the point of view is no longer the reporter's, but that of the (anticipated) source situation. This difference is made explicit both in Comrie's and Declerck's account; but even if we recognize that this is a correct description, we would still like to know why there should be this difference between reporting past and future "source utterances".

Cutrer (1994) accounts for the discrepancy through her "fact-prediction" principle, whereby a source clause cannot change "futurity" status as a result of embedding. I think this theory points to the right factor; however, the crucial point does not need to be stated in a special principle, but follows from the semantic description of the future provided above pp. 349-350. The fact-prediction distinction is the result of a difference between the nature of the deictic tenses and the secondary tenses: deictic tenses are by nature the only ones that are freely transferrable from one point-of-view to another. The crux is the distinction between conceptual and non-conceptual meaning.
The deictic tenses have no conceptual content, and only point to where to apply the state-of-affairs — therefore, if the reporter wants to pass on the source message, he remains faithful to it by getting the new listener to apply it at the right point, not by preserving the original tense. However, a source future is part of the (subjective) conceptual content of the source utterance, not just a matter of temporal address — it involves, as discussed above, taking a peek ahead. In eliminating (or introducing) this element you would be seriously tampering with the meaning of the source utterance. If, at some point in the future, Joe makes

Indirect speech 437

a statement saying Jack did it, you could not convey Joe's meaning by having him claim that Jack will do it. Conversely, somebody who says it will happen can never be accurately reported as having claimed that it (actually) happened.

This explanation can be tested against a case that creates problems for both Comrie's and Declerck's accounts. The existence of alternations like the following (this one is taken from Dowty 1982) creates problems especially for an account in terms of a formal rule:

(187) One day John will regret that he is treating me like this
(188) One day John will regret that he was treating me like this

Both versions are understood to refer to mistreatment at the time of the report. The possibility of using the present as in (187) falls outside both rules, as pointed out by Declerck; but whereas a purely formal rule cannot appeal to pragmatic principles, Declerck can handle the case by referring to the general phenomenon of "domain shift": a tense in a subclause can, as a marked possibility, break out of the temporal domain in which it is embedded and reorient itself towards the time of the utterance.7 It is the same explanation that Declerck invokes to explain "updated" embedded futures, as in (181) above; for the following reasons, however, I do not think it is the right one. First of all, the domain-shift account would predict that the future can be introduced into reported statements when the content is domain-shifted into the present (as in *Diana will say that she will be dancing, when the dancing is simultaneous with the saying). Why should this be impossible, if the time of the utterance can always, as a general option, override the embedded time of orientation? Secondly, I think something different is going on when a source present is preserved, as in (180)-(181), than when a source past becomes a reported present, as in (187).
In the case of the "updated" future, I have argued that there is nothing exceptional going on tense-wise, because the viewpoint is unambiguously the report time (=now) — what happens is that the reporter edits the statement by cutting out the reference to that part of the "source future time" that is now in the past. In the imagined retrospection case, however, I think that what happens is a case of vacillation with respect to viewpoint: on which viewpoint should you base the choice of tense? Now or the time


when John repents? Hence, I think (187) and (188) convey exactly the same message. There is no editorial or other difference — it is only a question of which way to refer to the same point-of-application. In other words, the reason why reports about future sources tend to take the source space as their viewpoint is that in most cases you would wreck the message by reorienting it. With a past source, somebody who says it happens can accurately be reported as saying that it happened because what is at some point present remains part of the same real world, even when it becomes past. In the case of an anticipated future utterance which looks back on the intervening time as a section of the past, there is no way to refer to its content from the point of view of the reporter while preserving the content of the utterance — because this time exists only as a factual time in the mental space associated with the yet-to-be-spoken utterance. The only way to preserve the message is therefore to stick to the future source as the deictic centre. Therefore there is no alternative to (189):

(189) In 2010, Ebenezer will say that he got tenure in 2005

where got is in the past tense. For obvious reasons, there is no alternative, either, if Ebenezer looks back on a time before the utterance time (for example 1993) — it would then be past from both viewpoints. Only if he chooses to look back on the exact time of the report utterance does the choice illustrated in (187)-(188) arise. According to this story, the two mental spaces that enter into the understanding of indirect speech are basically part of the referential context rather than the semantics; tenses only marginally reflect mental space differentiation. In other cases space differentiation has a greater role in semantics; in many languages, the subjunctive plays a role in signalling the distinction between what is in the source space and in the report space. Thus in German, the present subjunctive can be used to place propositions as applying to the source space, as in

(190) Er kommt morgen nicht weil er sagte mir, dass er krank sei/ist
      he comes tomorrow not because he said me, that he ill be-subjunctive/be-indicative

In the indicative version, the illness is viewed as existing in the report space, in the subjunctive as part of the source space only. Similarly, in French, we find examples like


(191) Il veut qu'elle mette une robe qui soit/est belle
      he wants that she take-subj a dress which be-subjunctive/be-indicative beautiful
      (Fauconnier 1985)

According to this account, the subjunctive version codes a description of the dress that is solely in the source space, i.e. the "want"-space, and the world of the reporter does not contain any beautiful dress. Alternatively, the indicative version codes a description of the dress that is valid in the report space.8 Even where this distinction exists, however, direct commutation cases such as these are marginal and hard to find; the problem of what space to choose is therefore not coded in any straightforward way. Even in languages where the subjunctive is available, the ambiguity of mental space reference, as I see it, is mainly in the discourse world rather than the language — but the ambiguity gives rise to a range of fairly messy linguistic ways of coping with this extra complication. In addition, I am now going to claim that one reason why coding is not clear-cut is that even the referential distinction between mental spaces is not so clear after all. An important merit of Fauconnier's theory is that he describes ways in which mental spaces are linked: the "identification" or "access" principle that allows us to refer to elements in one space by a description that fits in another space; and the "flotation" principle whereby assumptions in daughter spaces spread to the parent space unless there is something to block them. As emphasized by Fauconnier, mental spaces are structured in a very partial fashion, and are thus only partially distinct from each other. A very basic reason for this is that ultimately they are Intentional rather than wholly free-floating worlds: they are essentially excrescences of the "master space" which is constituted by the reality in which the speech act takes place. Mental spaces, therefore, do not entail ontological pluralism.
This is why the source space exists only in the situation as brought to us by the reporter. The reason why the reporter can play around with objects that "really" belong in the source space is that he is the statutory owner of the speech act — it's his utterance, and he can do what he likes with it. The source utterance hovering in the background has no special linguistic status; it is essentially a contextual circumstance like any other. The speaker may


choose, in reporting what happened, to take over as much as he can from the original utterance; but he may also edit and abbreviate as it suits him, subject to the same ethical demand of truthfulness that a description of nonverbal features of the situation would be subject to. The other spaces are there because he decides to put them there, and sometimes not much may survive:

(192) "One of the Georges", said Psmith, "I forget which, once said that a certain number of hours' sleep a day — I cannot recall for the moment how many — made a man something, which for the time being has slipped my memory..." (P.G. Wodehouse, Mike and Psmith, p. 93.)

Because of the pervasive overlap between spaces, it would be overkill to have a language that always kept track precisely of where the elements properly belonged, mental-space-wise. There are cases where it might be important whether the dress is beautiful in the want-space or the report space; and where it might be important to mark the illness as belonging in the source space only — but the default case is that the same descriptions are taken to apply in both. And even if they do not, the reporter can always make it clear that his authority for putting things in the report space is that he found them in the source space: he is ill — that's what he said, at least. In other words, linguistic items should not be understood as fundamentally oriented towards the resolution of space ambiguities, although in the critical cases discussed above they do have special sophisticated properties that speakers need to know about in order to get it right. The place to begin is to look at what distinctions they make, and then see how that helps us in case of space ambiguities — rather than to begin by laying out the ambiguous referential world and then describe the linguistic forms as ambiguous if they do not separate all the referential possibilities.

4.2.3. The past perfect and "back-shift"

The existence of referential ambiguities has played a dominant role in the discussion of the meaning of the past perfect. Quirk et al. (1985: 1026) set up a set of correspondences supposedly related in terms of a rule of backshift (see also Leech 1987: 105):

        DIRECT SPEECH            BACKSHIFTED IN INDIRECT SPEECH
(i)     present                  past
(ii)    past                     past or past perfective
(iii)   present perfective       past perfective
(iv)    past perfective          past perfective
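Read purely as a formal rule (the reading that the discussion below argues against), these correspondences amount to a simple lookup. The sketch below is a toy model of that reading; the table and function names are my own, purely illustrative:

```python
# Toy model of the backshift correspondences in Quirk et al. (1985: 1026),
# read as a purely formal rule; entries follow rows (i)-(iv) above.
BACKSHIFT = {
    "present": "past",                        # (i)
    "past": "past or past perfective",        # (ii)
    "present perfective": "past perfective",  # (iii)
    "past perfective": "past perfective",     # (iv)
}

def backshift(direct_tense: str) -> str:
    """Return the backshifted indirect-speech tense for a direct-speech tense."""
    return BACKSHIFT[direct_tense]

# A past perfect in indirect speech is time-referentially compatible
# with three different source tenses:
sources = [t for t, shifted in BACKSHIFT.items() if "past perfective" in shifted]
print(sources)  # ['past', 'present perfective', 'past perfective']
```

The lookup makes the ambiguity vivid: three source tenses converge on the past perfective, which is precisely the referential ambiguity discussed next.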

This rule is recognized to be leaky, as in the cases where the speaker updates the statement by repeating it ("when the time reference of the original utterance is valid at the time of the reported utterance", Quirk et al. 1985: 1027), as we discussed above. In the case of the past perfect we see the familiar ambiguity emerging: a past perfect may (time-referentially) correspond to three different source tenses. In order to understand how this reflects the same meaning in the past perfect, let us look at the following examples:

(193) "I have seen him", said John
(194) John said that he had seen him
(195) "I was tired", said John
(196) John said that he had been tired (?was)
(197) "I left early", said John
(198) John said that he left early (had left)

The criterion that is decisive for choosing a simple past as opposed to a past perfect in a reported clause is the same as always: whether the speaker wants to refer via a time of reckoning after the state-of-affairs, or refer directly to the time of the state-of-affairs itself. With the present perfect (193), there is only one option, as one would expect: the present that is a compositional element is incompatible with the past time of reckoning, so this present has to be converted into a past, yielding (194) with the past perfect. In (195), the use of the simple past, as in John said that he was tired, would tend to be understood as placing was tired at the same time as said — because it is a state (cf. below, p. 478). Since the past perfect enables the speaker to stick to the same identified time and put John's tiredness before that time, this is the obvious choice if you want to preserve the source message.


In the case of (197), the situation is different. An event, as we have seen, cannot be located at the point of speech, so left cannot be understood as simultaneous with said. In many cases the temporal interpretation could be "next-in-sequence", as in "narrative continuation", but since the marking of aheadness is obligatory in English (cf. above pp. 372-374), this interpretation is not possible (as it would be in Danish, cf. Davidsen-Nielsen 1990: 148). Therefore left is most naturally understood as anterior anyway. Whether one would use the past perfect depends on what the topic time of the whole discourse is. If the topic is a crucial meeting, at which John ("the idiot!") left early, and we merely skip ahead to hear his excuse, the simple past would be natural:

(199) — Why on earth wasn't he at the meeting? Have you asked him?
      — Yes, I phoned. He said he left early because he was disgusted with the whole thing.

If the topic time is the conversation with John, on the other hand, the past perfect is natural:

(200) I had a long talk with John yesterday. He told me about the way he understood his whole position. He said he had left early and would not come to the next meeting.

The theory that describes the past perfect as ambiguous between a "real" past perfect and a "past-in-the-past" owing to a "rule of backshift" is therefore not convincing. The fact that the speaker can choose between past and past perfect is much more obviously compatible with an understanding that explains the use or non-use of the past perfect in such cases in terms of the communicative usefulness of its standard meaning, involving the interpolation of a time of reckoning from which the state-of-affairs is viewed. The process of backshift must therefore be seen not as a linguistic rule, but as a decision made by the speaker in (re-)coding from his own deictic position. This understanding is implicit in much of the discussion in Quirk et al. and Leech, but competes with the "rule" understanding.
Thus, Leech sees would in indirect speech as ambiguous in a manner similar to that of the past perfect (cf. Leech 1987: 54), because it may either be a backshifted "ordinary future" or a "future in the past". But the fact that both


uses involve an identified time in the past (P) and a projected situation ahead of P means that the semantic composition "go back to P and look ahead from there" fills the bill in both cases; and only the tenacious assumption that a difference in denotable situations equals a difference of meaning will justify positing an ambiguity there. The whole concept of "indirect speech", in other words, involves a misunderstanding of the same kind that underlies the notion of an "indirect speech act": an assumption that the indirect version is best understood by beginning with the direct version and then deriving the "indirect" version from the "direct" one. Instead, the fruitful approach is to ask what the coded meanings will do for us, and then look at the interplay between coded meaning and context for ambiguities in conveyed meaning — just as in all other types of utterance. If you believe that clause meanings are referential, mapping clauses onto "world states" which the clauses "stand for", embedding is a source of endless semantic ambiguities of the kind that flourish in Montague grammar. However, if meanings are instructions for interpretation, and only the finished interpretations actually locate messages in relation to worlds, there are no special linguistic ambiguities in embedded "indirect-speech" clauses, only problems of referent identification — owing to the complicated contexts that arise as a result of embedding. The more spaces we have to keep track of, the more choices we have to make when we piece the linguistic clues together. A "transposed" (Rohrer) or "back-shifted" past future, for example, has a content which is split up between two mental spaces: the pastness belongs in the deictic situation, and the forward projection in the reported situation — but the reason that the past future is handy in such situations is precisely that it has both notions as part of its coded meaning.

4.3. Tense in conditionals

4.3.1. Introduction

Perhaps the most complicated and most intensively discussed area within the semantics of complex sentence constructions is conditionals. Among the problems of conditionals are the distinctions into content, epistemic and speech act conditionals (cf. Sweetser 1990); into neutral, unreal and counterfactual conditionals; and the logical properties of conditional statements,


with counterfactuals as the hardest subcategory (cf. Lewis 1973, Stalnaker 1984, Johnson-Laird—Byrne 1991). Below I try to show how the semantic properties of the main types of conditionals can be accounted for in terms of the principles and the tense descriptions used above (for a more detailed discussion with alternative suggestions, cf. Harder 1989a).

As before, the main thrust of the argument will be to show that a referential or conceptual account, without taking the interactive element into consideration, will fail to single out the crucial properties in terms of which the coding must be understood. The referential bias is most obvious when a truth-conditional semantics is explicitly presupposed, but is present whenever an account tries to explain the meanings of different types of conditionals in terms of the real-world situations in which they would be used. The purely conceptual bias is present when conditionality, and different types of conditionals, are described solely in terms of how the speaker conceives of the propositions contained in them. One type of account that is compatible with both the referential and the conceptual view is the account in terms of degrees of probability or epistemic stance (cf. van der Auwera 1986, Fillmore 1990).

In contrast, I suggest that both referential and conceptual aspects of conditionals can only be properly understood if the functional-interactive element is a basic part of the picture. I think it is a merit of this view that it needs very few special provisos to provide a natural account of this knotty area.

4.3.2. The role of "if"

As illustrated in Fauconnier (1985 and forthcoming; cf. also Sweetser forthcoming, Cutrer 1994), the nature of the mental spaces involved in understanding conditionals, and the relationship between them, are at the core of the problems; and the role of the tenses must be understood in that context. A natural starting point from a content-syntactic point of view is the subordinator if. Like other subordinators, it must be understood as indicating the way we use the proposition that it takes inside its scope (cf. above p. 424). Unlike other subordinators, if requires an account depending on its space-building capacities: it introduces a hypothetical mental space into the discourse, and part of the work of understanding consists on the one hand in keeping the hypothetical space distinct from the parent reality space, and on the other in relating the two spaces in the right way. In order to carry out a proper interpretation process it is sometimes necessary to set

Tense in conditionals 445

up a very complicated set of relations between a number of different spaces (cf. Fauconnier forthcoming). The basic mental space aspect of the relationship between the if-clause (IC) and the main clause (MC) is described in Sweetser (forthcoming): the MC-event is embedded in the space associated with the IC. If Joe goes, Jim goes signals that a mental space in which Joe goes also contains the event of Jim going. The way this space relationship comes about can be described very simply in terms of content syntax (cf. also Wakker forthcoming): the IC is an adverbial satellite that takes (most of) the MC inside its scope. By virtue of this scope relationship, which is essentially the one found in all modifier-head relationships, the IC determines the way we are to understand the MC. A close analogue is the way a topic-oriented adverbial of time or place tells us where to locate the event denoted by the clause.

One of the points on which the functional view of meaning makes a great difference in this area is the way it can be used to account for the relationship between IC and MC. A procedural element has been part of the debate on conditionals since Ramsey conjectured (cf. van der Auwera 1986) that the way to understand conditionals is to add the proposition in the IC to your body of assumptions and see what follows. Within a discussion that is specifically procedural, this picture has been elaborated by Isard (1974), with special reference to tense. Conditionals also played a decisive role in the motivation for Discourse Representation Theory (cf. Kamp 1981). The idea is placed in a cognitive science context in terms of mental models by Johnson-Laird (1986: 55); for an approach in terms of text worlds as part of the process of interpretation, cf. Werth (to appear a, b). In emphasizing the functional nature of the meaning of if, I am therefore not pretending to bring something new into the picture.
However, I think the procedural element has so far been seen mainly as a path of approach to the properties of the products — the models, spaces or text worlds it gives rise to. Instead it ought to be understood as the crucial coded property in itself. Like the subordinators discussed above, if goes into the illocutionary slot, indicating the function of the proposition in its scope: it signals what to do with the proposition, rather than adding referential or conceptual properties. The special job of the proposition inside the scope of if is difficult to characterize without circularity, because "conditionality" is not decomposable into simpler elements; but like other primitive elements it can be characterized in terms of relations with other elements.


The job it does involves a discourse strategy. As we saw above p. 269, the ability to entertain a proposition without concrete external stimuli to support it is a crucial step on the evolutionary path to linguistic competence. Such unanchored propositions may serve a variety of purposes, of which perhaps one of the most crucial is planning: instead of always having to be prompted by direct stimuli, the speaker can "think ahead". As a by-product of this ability, he also gets the ability to act in the present not just on the basis of the situation as it presents itself now, but also on the basis of events and assumptions that have not yet manifested themselves; in contingency planning you "act ahead", as when you get hold of a fire-extinguisher although the building is not presently on fire. It is this general ability that underlies the function assigned by if. When we want to make a speech act which becomes operative not in relation to the currently presupposed situation, but in relation to a situation that may arise, we can indicate the hypothetical situation in which it becomes operative by using a proposition prefixed by if. The function assigned to the IC is to serve as the missing link in relation to an MC that is one step ahead of the current situation — a function that is different from both a declarative and an interrogative function. In mental-model terms, if instructs the addressee to construct a mental model including the IC, and says that once this model is true, the MC speech act applies. It follows from this stepping-stone status that an IC is necessarily subordinate to an MC; in the absence of a (possibly implicit) MC, the point of building the model would be missing. The sub-illocutionary operator 'if' is conceptually dependent on both the IC and the MC: unless there is both a stepping-stone and a place to go, the conditional relationship cannot do its job.
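Since the stepping-stone account is itself procedural, it can be caricatured in code. The sketch below is my own illustration, not the book's formalism: interpreting if builds a daughter space in which the IC holds, and the MC speech act is registered as operative only relative to that space, leaving the parent (reality) space uncommitted.

```python
# Minimal sketch of "if" as an instruction: build a mental model that
# includes the IC proposition; the MC speech act applies once that model
# is taken to be true. The speaker incurs no commitment to the IC in the
# parent (reality) space.

def interpret_if(parent_space: frozenset, ic: str, mc_act: str) -> dict:
    """Construct the hypothetical daughter space and attach the MC act to it."""
    daughter = frozenset(parent_space | {ic})
    return {"space": daughter, "operative_act": mc_act}

reality = frozenset({"we are talking"})
cond = interpret_if(reality, "you come", "assert: I'll bake a cake")

print("you come" in cond["space"])  # True: the IC holds in the daughter space
print("you come" in reality)        # False: no commitment in the parent space
```

The asymmetry the text insists on is visible here: the IC is never added to the parent space, while the MC act exists only as attached to the daughter space.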
The asymmetry between the IC and the MC is a special case of the trajector-landmark asymmetry that we also find between arguments of a predicate. Some alternative conditional constructions may serve to illustrate the one-step-ahead account. Cross-linguistically, conditionals are often constructed like questions,9 as in Danish

(201) kommer du, så bager jeg en kage
      come you, then bake I a cake

This similarity is natural if the conditional is seen as a telescoped version of a discourse sequence like the following:


(202) A: kommer du?
         come you
         'Will you come?'
      B: ja
         yes
      C: så bager jeg en kage
         then bake I a cake
         'Then I'll bake a cake'

What the conditional does in this scenario is to skip the interlocutor's turn in the middle and go directly to what follows from an affirmative answer, in effect allowing the speaker to take two steps in one turn — acting ahead, as it were. This telescoping deprives the first clause of its interrogative force, chaining it instead to the chariot wheels of the second clause; but it is similar in leaving the truth value of the proposition open. Similarly, the imperative version, as in do that and I'll kill you, explicitly formulates a two-step procedure, where the second step only becomes operative once the first has been made. This sequentiality must be distinguished from two other types of "procedural" sequentiality: constituent order and psychological processing order (which again have a natural affinity, but no one-to-one relation). Conditional ICs can stand last as well as first, as in

(203) I'll do it if you help me

This does not change the basic "stepping-stone" sequentiality in the conditional: it is still true that "I'll do it" is only realized via "you'll help me". The difference in constituent order suggests that in (203) the MC is first considered in direct relation to the actual situation; what the speaker does at the end is to point out that there is a necessary intermediate step. This will be natural especially if the IC, instead of taking a completely uncertain proposition as a stepping-stone, demotes a standard assumption to the status of something that needs to be ascertained first:

(204) We'll be okay next month — if the world is still standing


That the stepping-stone sequence is still the same can be seen from the fact that in (204) the speaker is felt to take a step backwards in producing the IC: the MC gets us on to a safe haven, but the IC snatches it away again by pointing out a necessary stepping-stone on the way.

4.3.3. Comparison with representational accounts

This description of conditionality differs from received wisdom only in its strictly procedural, functional nature: nothing is added to our representational knowledge by if. The relation between the propositions and the actual world is not at issue, nor is the way the speaker conceives of them apart from the use to which he puts them in his speech act. Their use in the speech act, of course, has consequences for the way the addressee must conceptualize them in understanding the utterance — but these conceptual properties do not match the linguistic meanings.

Let us look at an example of the difference between the "product" and the "process" level, illustrating why linguistic meaning matches up with the process level. On the product level, when we have the mental spaces with the two propositions placed correctly in them, IC and MC are equally hypothetical. This shared property is part of the standard theory which sees the two propositions as co-equal and related by a conditional relationship (whatever its exact nature). However, this does not reflect the crucial difference between the statuses assigned to the two clauses, as coded by the different "illocutions": if in the IC and (in the standard examples) declarative in the MC. This coding expresses a significant semantic difference. With respect to the IC, the speaker does not commit himself at all; but with respect to the MC, he commits himself to a full speech act. The speaker who says

(205) If you come, I'll bake a cake

is committed to baking a cake in the situation where the condition is fulfilled, but he is not in any way committed to the addressee coming. The linguistic coding reflects the difference in functional status, not the similarity in hypothetical status (whether viewed referentially or conceptually).
This strictly functional-interactive understanding suggests why any picture that tries to specify a particular referential or conceptual relation between IC and MC is wrong, even in the most abstract version in which it is only


a "constraint" (cf. Barwise 1986). To illustrate the conceptual emptiness of the "if"-relation, consider the following example:

(206) If Joe catches two wombats, he'll have four

What precise conceptual relationship should we understand as obtaining between the event of catching two wombats and the state of having four wombats? It is fairly obvious that there is no relation that can be regarded as referentially or conceptually true of the two propositions as such. Rather, as suggested by Fauconnier and Johnson-Laird, the interpretation of the IC triggers a space-building (mental model-building) process — and the inverse "embedding" relation between IC and MC obtains between the space that is constructed in interpreting the IC, and the MC, rather than directly between one proposition and the other. Only a two-wombat person would provide the necessary link. This means that there is no "if" relation between the explicit semantic content of the two clauses; hence there is nothing in the referential world (out there) or conceptual world (in there) that corresponds to a conditional relationship linking these two propositions — only a two-step discourse operation whereby the IC is claimed to license the speaker to say the MC. The fact that the two-step strategy places the IC as leading to the MC gives rise to the "sufficiency" relationship that is suggested by Sweetser (1990) as a general formula for the semantics of conditionals, and of which material implication is a special case. Similarly, there is a natural affinity between the two-step account and the two classes of constructions that Haiman has described as related to conditionals: topics (1978) and co-ordinating constructions (1983).
The comment is the second half of a dyad whose necessary first half is the topic which it concerns; and if two elements go together, the second conjunct follows once we have the first.10 But I suggest that it is the two-step discourse strategy that is the basic element in conditionality as a specific linguistic content element. If this is true, the intuitions that constitute the basis for a referential or conceptual account of conditionality need to be accounted for in a different way. Their status is basically that of world states or assumptions that warrant the two-step manoeuvre. In understanding why the relations that motivate the two-step strategy do not in themselves constitute the meaning of if, it is useful to look at the nature of those "warranting relations" in different types of conditionals.
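The wombat example (206) above can be given the same kind of toy rendering. In this sketch (the dictionary representation and Joe's current wombat count are my own illustrative assumptions), the conditional link runs between the MC and the space built from the IC plus background knowledge, not between the two propositions directly:

```python
# Space-building for (206): "If Joe catches two wombats, he'll have four".
# The MC holds of the constructed space, not of the IC proposition itself.

def build_space(background: dict, caught: int) -> dict:
    """Return the hypothetical space in which the IC has been realized."""
    space = dict(background)  # copy the parent space
    space["joe_wombats"] = space.get("joe_wombats", 0) + caught
    return space

# Only a "two-wombat person" supplies the missing link to the MC:
print(build_space({"joe_wombats": 2}, caught=2)["joe_wombats"])  # 4
print(build_space({}, caught=2)["joe_wombats"])                  # 2
```

The point of the sketch is that "he'll have four" is true only in a space seeded with the background assumption; nothing in the two clauses alone carries the relationship.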


In the type of conditionals that is generally considered the prototype, there are clear-cut warranting relations between IC and MC; and if this were the only type, a fairly concrete referential characterization of conditionality would be plausible. The issue can be seen as turning on the nature of the sequentiality that goes from IC to MC. In the prototype that is called "predictive" by Palmer (1974: 142), there are three sequence relations, as in

(207) If it rains, the match will be cancelled

In (207) the movement from IC to MC is causal, temporal and inferential at the same time. The causal link can be regarded as the basic warranting relationship; because it operates in real time, it creates the temporal sequence, and because it is seen as invariable it licenses an inferential sequence. (I shall call such cases "trigger" conditionals rather than "predictive", because all sentences with will have an element of prediction about them.) The verb "follows" can cover all three types of sequence, separately or together: if we have the rain, the cancellation follows. Most authors consider this kind basic (cf. Palmer 1987, Sweetser 1990). But conditionals do not have to involve this threefold relation. Sweetser sees the other kinds as involving a transfer from a physical domain to other, derived domains; without disagreeing with this, one can also see them as extensions from a prototype, based on an abstract, schematic element in the shape of the stepping-stone relation itself. The causal and the temporal links are one-directional: you can only get to the later effect via the previous cause, not the other way round. The inferential link, however, can go both ways; apart from the obvious forward-moving type there is the "backward" inference from effects to causes, making possible what Sweetser (1990: 116) calls epistemic conditionals.
Tense in conditionals 451

One of Sweetser's examples is (208):

(208) If she is divorced, she's been married

Such epistemic conditionals are based on the possibility of inferring backwards in time and/or causality, and are thus based on the inferential sequence only. To signal the link with Sweetser's taxonomy, I shall talk about an "epistemic sequence". Sweetser's third type, the "speech act" conditionals, is interesting from my point of view because none of the three warranting referential relations is left. The only sequence is the one tied up with the stepping-stone function of the IC as a discourse operation — the condition licences the conveyance of the information:

(209) If you're thirsty, there's beer in the fridge

The schematic "stepping-stone" sequence is still there: the speaker uses the IC to get to a point where he feels the MC is appropriate. If it were only in this type of example that the warranting relations were absent, it would perhaps not be a convincing argument for the primacy of the two-step discourse strategy in the semantics of if. But there are other cases, such as when the MC is not a declarative speech act, or is unspecified:

(210) If he asks me, what do I tell him?

(211) And if he dies?

In these cases the stepping-stone effect of the IC is clear-cut, whereas the putative relation between IC and MC that warrants it cannot be specified. In (211), the speaker makes the move to the stepping-stone and leaves the next move unspecified, making the notion of a "constraint" relation, including a "sufficiency" relation, between IC and MC vacuous. I shall therefore take the stepping-stone function as the crucial contribution of if, constituting the special semantic context within which tenses in conditionals must be understood.

4.3.4. The role of tense I: reality-based conditionals

Tense plays a crucial role in one central type of semantic differentiation between conditionals, which is usually understood in terms of three degrees of hypotheticality:

(212) If John does it, he will be caught

(213) If John did it, he would be caught

(214) If John had done it, he would have been caught


In the following, I shall try to show how the general theory of tense, coupled with content-syntactic relations to conditionality as described above, can explain the differences between (212)-(214). The discussion involves some issues which have been given a great deal of attention in the literature. One is the use of the simple present in the IC of the first type of conditional. Another is the precise semantic nature of the "unreality" of (213) and the "counterfactuality" of (214). This again involves the issue of temporal reference in the non-temporal conditionals. As a test case for my account I shall try to provide a precise answer to the question of why there is a form missing in the series:

(215) ?If John has done it, he will have been caught

To begin with the easy part: when tense is understood in its basic temporal sense, the theory of tense in MCs can be stated very briefly — tenses behave exactly as in ordinary independent clauses. This can be checked in a very simple way. As pointed out by Johnson-Laird (1986), you can always in principle leave the condition implicit, as when the mother who catches her offspring on the point of defying house rules says

(216) I'll smack you

meaning "If you do that, I'll smack you". If we take the condition as understood, tense in the MC will be motivated simply by the aheadness in relation to the situation: the smacking is ahead in time. MCs in trigger ("predictive") conditionals are in the future just as ordinary unconditional predictions (past future if the condition applies to real past time); MCs in epistemic conditionals are in whatever tense the inference is about; MCs in speech act conditionals are in the tense appropriate for the speech act.

Tense in ICs is rather more complicated. One of the facts that have received the most attention is that ICs are in the simple present even if they are about a time ahead.
In terms of the tense theory I have suggested, the choice, compositionally speaking, is not between present and future but between future and non-future. As pointed out by Leech (1971: 60) and Haegeman and Wekker (1984: 51), the motivation for the (non-future) present has something to do with the "fact" status of the conditional. It adds something to the stage before the MC applies, and is regarded as the factual basis for the prediction in the MC. Thus "(if) it rains" must be true at the
stage before the match gets cancelled (cf. Nieuwint 1986 for a detailed discussion of this issue). In terms of the framework developed here, the simple present is compositionally motivated in the same manner as the simple present in when-clauses (cf. pp. 425-426 above): the proposition is used to set up a time (when) or a condition (if), defined as being the time or condition in which the proposition "it rains" is true. Thus it would be illogical, from a compositional point of view, if the clause were in the future tense, because that would mean that the rain was still ahead in the hypothetical case that we are considering; an "it-rains" situation is different from an "it-will-rain" situation, and it is an "it-rains" situation that we want to use as a stepping-stone. This account also fits with the rare cases in which we do find futures in ICs:

(217) If he will be left destitute, I'll change my will (Palmer's example, 1974: 149)

(218) "I'd like — tonight — to drop in and see you. At ten? If you will be alone" (Graham Greene, The Quiet American, p. 139)

Here the destitution (or the being-alone) is still ahead, but its aheadness is enough to trigger the MC.11 This takes care of the role of the non-future in the IC, compositionally speaking; but we still have to look at the specific contribution of the deictic present. The present is part of both constructions and has a crucial function in relation to the space-building process that is triggered by a conditional clause. The element that is most important is the one that I claimed was traditionally overlooked, the instruction to identify a point-of-application. It acquires a special significance because within the scope of if, the clause content does not have its usual relation to present time. As usual, the present tense signals "apply to the present situation", but within the scope of if this does not translate into an actual event time in the present — which is the usual interpretation of what "present" means.
A declarative clause invokes the present situation and uses it in making a statement; an interrogative clause invokes the present situation and uses it in asking a question — while an IC invokes the present situation and uses it to build a stepping-stone that brings us to the MC. Once again: it is the point-of-application that is present, not the state-of-affairs; therefore the hypothetical
stepping-stone space ends up containing everything in the present situation plus the state-of-affairs that is applied to it by the IC. This account avoids the muddle of using the contradictory terms "fact" and "hypothesis" simultaneously about conditional ICs in the present, while it captures the intuitions that motivate this dual description. The content-syntactic relation between if and present tense is such that if cues the construction of a space functioning as stepping-stone towards a MC; within the scope of this instruction, the present tense does its usual job: that of selecting the situation to which the clause adds its descriptive content. Hence, in present-tense ICs the point-of-application of the MC is an enriched situational present. This is why in present-tense conditionals, even if we set up a hypothetical space, we cannot introduce assumptions that contradict current beliefs. Thus, barring belief in resurrection, we cannot say

(219) If Queen Victoria comes into this room...

Similarly, if the IC is in the past, an ordinary temporal reading would give us a hypothetical space with all our beliefs about the past point-of-application plus the assumption introduced by the IC (an enriched past), as in

(220) If Fermat had a genuine proof, he was several hundred years ahead of his time

These conditionals, where tense in the IC is understood in its ordinary temporal sense, can therefore be called "reality-based conditionals": the point-of-application, as cued by tense, is precisely what it always is, once we see it in relation to the semantic role of the subordinator if.

4.3.5. The role of tense II: P*-based conditionals

The role of the point-of-application is crucial also in understanding what goes on in an IC containing a past tense with a non-temporal reading. Although an IC as a whole is always hypothetical, its point-of-application is not hypothetical in the ordinary cases discussed above: you get your hypothetical space by beginning with reality and adding something. The key difference, according to the theory proposed here, is that with the non-
temporal past you begin with a point-of-application P* that is independent of reality. This is why you are free to assume anything you like, such as

(221) If Queen Victoria came into this room...

The special properties of "unreal" and "counterfactual" conditionals can all be accounted for if we describe the interaction between the unreal point-of-application P* and relevant factors in the content-syntactic and situational context. This theory offers a natural description of what is "normal" and what is special about conditional tenses, cf. Dahl's remark (1985: 67) about the conditional tenses being both conditional and tenses at the same time. Among the things that are the same is the relation to a presupposed point-of-application: as argued above, the non-temporal past can only be used when a hypothetical space is already cued, for example by if. Among the differences is that in the temporal reading, the point-of-application has properties which in themselves suggest that it must be regarded as distant: a past event has a pastness that exists before you choose what tense you want to use about it. In the non-temporal reading, however, it is purely the tense choice that determines its status as distant in the sense of reality-independent. The element of distance therefore has more of a life of its own in the non-temporal reading, even if it cannot in itself cue the construction of a hypothetical + reality-independent space, but has to depend on a hypothetical space that is already cued.12 A third aspect of the reality issue is again parallel with the ordinary temporal use. It is important to be aware that the distance from reality is a property of the space, not of the state-of-affairs. It is the point-of-application, not the state-of-affairs, which is distant from reality — just as in the temporal sense it is the point-of-application rather than the state-of-affairs that is a thing of the past.
The emphasis on this point is the main difference between this account and most of the literature on "unreal" conditionals with respect to the description of (212)-(214). The central question is this: the relationship between IC and MC in these examples is the same, namely the "trigger" relation — so what difference is it that the tenses code? Most descriptions of (212)-(214) involve one or both of the following elements: that (213) indicates unreality in relation to now, while (214) indicates unreality in relation to the past; or that (213) views the event as
unlikely, while (214) views it as counterfactual (cf. Quirk et al. 1985, Leech 1987, van der Auwera 1986, Fillmore 1990). I share the intuitions that these accounts reflect, but I do not think that they capture the way in which the English language structures the semantic substance that is involved. To capture the function of P*, I speak of "thought experiment" conditionals in contrast to "reality-based" conditionals. For a simple past "thought experiment" like (213), repeated below as (222), I offer the paraphrase (223), based on the tense meanings suggested above:

(222) If John did it, he would be caught

(223) If in a thought experiment (= at P*) we posit 'John do it', it follows (causally, inferentially and temporally) that 'John be caught'.

Note that the only difference from an analogous paraphrase of (212) is that we locate the state-of-affairs at P* instead of S — keeping the thought experiment carefully distant from our assumptions about the actual situation. This paraphrase is compatible both with the "unreality-in-the-present" and with the "unlikely" epistemic stance. The choice between if he does it and if he did it can be seen in terms of "probability" or of "pure thought experiment vs. enriched present" without any great empirical difference, on the face of it. This means there is a problem in deciding which account captures precisely what is coded. The argument I invoke depends on the relationship between the epistemic stance (probability judgment) that is naturally associated with a given form, and the actual attitude of the hearer. The option of conducting a pure thought experiment is useful because it does not require that the speaker commit himself either way with respect to epistemic stance, i.e. probability judgments. You can conduct a thought experiment with respect to a matter that you in fact consider settled, just as it is proper procedure to conduct a scientific experiment even when you regard its outcome as certain.
This suggests a testable difference in relation to "epistemic stance" theories. If the coded meaning involves a probability judgment, then by choosing the non-temporal past version you necessarily reveal your own stance. But consider the case of a shy admirer saying

(224) If I asked you out, would you come?


Has s/he thereby become committed to this being necessarily a remote possibility? I think this would be a bad characterization of the situation. In the happy case where the answer is

(225) Of course I would, silly!

it would be rather surprising if the speaker did not in fact proceed to make the IC true; and that may well have been the intention. But situating the possibility as a pure thought experiment makes it less potentially face-threatening to deal with the answer. In other words, I suggest that just as the temporal past is about P and does not address the issue of what is the case at S, so the non-temporal past is about what occurs at P*, in the thought experiment, and says nothing about what is the case, or probably the case, at S.

The reality-independence of the space determines the nature of the space-building process. Instead of taking the richly specified present situation as the point of departure and adding an extra assumption, the speaker must reverse the direction of the process: the unreal space is empty except for the one assumption contained in the IC, and the filling-out process selects aspects of the real world based on compatibility with that assumption. Beginning with if I had a million dollars, you can go on with all the interesting possibilities that this assumption may cue.13

In the "trigger" case, we do not have to change the semantic specifications for the future and perfect content elements, compared with reality-based conditionals: the P* reading of the past tense accounts for all differences. In other cases, however, there is one difference in the reading of the future, when it is based on P* rather than P. When the non-temporal past occurs in the IC, the use of the future in the MC does not necessarily indicate a temporal sequence — the future may code only the epistemic sequence, i.e. the inferential step taken in drawing the conclusion.
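The two directions of space-building described above can be contrasted in a toy procedure. This is only an illustration of the contrast, not part of the linguistic theory: the set-of-propositions representation, the belief inventory and the crude contradiction test are all my own assumptions.

```python
# Toy contrast between the two space-building directions discussed above.
# The propositional representation and the belief inventory are illustrative
# assumptions, not part of the linguistic theory.

REALITY = {"queen_victoria_is_dead", "fridge_contains_beer", "it_is_monday"}

def contradicts(assumption, beliefs):
    """Crude stand-in for belief revision: 'not_X' contradicts 'X'."""
    if assumption.startswith("not_"):
        return assumption[4:] in beliefs
    return "not_" + assumption in beliefs

def reality_based_space(ic_assumption):
    """Present-tense IC: start from the (rich) present situation and add the
    IC assumption; blocked if it contradicts current beliefs (cf. (219))."""
    if contradicts(ic_assumption, REALITY):
        raise ValueError("cannot enrich the present with: " + ic_assumption)
    return REALITY | {ic_assumption}

def p_star_space(ic_assumption):
    """Non-temporal past IC: start from an empty space containing only the
    assumption, then fill it out with whatever parts of reality are
    compatible with that assumption (cf. (221))."""
    compatible = {p for p in REALITY if not contradicts(ic_assumption, {p})}
    return {ic_assumption} | compatible
```

Here `reality_based_space("not_queen_victoria_is_dead")` fails, while `p_star_space` accepts the same assumption and simply leaves the incompatible belief out — mirroring the contrast between (219) and (221).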
The difference comes out in relation to ICs with state verbs, which unlike event verbs are most naturally understood as applying at the actual present time; compare the discussion of Discourse Representation Theory p. 477. The following examples are illustrative:

(226) If she loves me, she believes me (now)

(227) If she loves me, she will believe me (later)

(228) If she loved me, she *believed/would believe me (now or later)


This is irrelevant to trigger conditionals because they always involve a temporal sequence between IC and MC, requiring an MC in the future. But in epistemic conditionals it makes a difference: the MC cannot be in the simple past that would be natural from a purely temporal point of view, compare (228). That might suggest that would is meaningless, or just signals the construction "unreal conditional", since it has nothing to do with temporal relations any more. However, it retains one crucial feature, that which separated the pure aheadness sense of will from other modal verbs (including other senses of will): its categorical nature. Will in this sense still indicates that the MC follows without fail; of the alternatives listed below, all the others leave open the option that the consequent might not materialize:

(229) If she loved me, she might/should/could/ought to/would believe me

Because of this feature, there is still a clear-cut analogy between the temporal future and will as indicating epistemic sequence. The plausibility of interpreting the "epistemic sequence" sense of would as a variant of the pure future sense is supported by the fact that in Danish it is optional in non-temporal contexts in precisely the same way that the pure future is optional in temporal contexts, compare (230)-(232):

(230) Hvis hun troede på sig selv,
      if she believed in herself
      (a) havde hun det nok bedre
          had she it probably better
      (b) ville hun nok have det bedre
          would she probably have it better
      'If she believed in herself, she *felt/would probably feel better'

(231) Han sagde, at han
      he said that he
      (a) kom senere
          came later
      (b) ville komme senere
          would come later
      'He said that he *came/would come later'

(232) Det sker ikke / vil ikke ske
      it happens not / will not happen
      'It *does/will not happen'

4.3.6. The past perfect conditional: meaning, usefulness and implicatures

The use of would as marker of categorical epistemic sequence is important for understanding the use of the perfect in conditionals. As descriptions of the time of reckoning, clauses in the perfect are most naturally understood as designating states and therefore as not moving the event forward (but compare pp. 477-80):

(233) If he has fixed it, everything is okay (S-based)

And as with other epistemic conditionals, if we change the IC present into a non-temporal past, would is obligatory to mark categorical epistemic sequence, yielding a so-called counterfactual conditional:

(234) If he had fixed it, everything would be okay (P*-based)

(235) Paraphrase: Assuming that at P* the fixing is anterior, it follows that everything is also (already) okay at P*.14

According to this interpretation, we have a semantically normal perfect, anchored in an unreal point P*. The MC is understandable as a non-temporal version of a present perfect, combined with the would that marks epistemic sequence. In understanding the compositional logic of this construction, it is crucial to understand the motivation for the perfect tense in the MC, the "consequent". In a thought experiment, if we can derive consequences that should already be there, it means that they can be compared directly with the actual world. This entails, for one thing, that they licence a "backward", "modus tollens" inference: if the consequent is not true, neither can the antecedent be true — because if it were, the results should already be in evidence.


We are now approaching an answer to the question: why is the so-called "counterfactual" version very common, and naturally understood as the third in the series of "trigger" conditionals, whereas the temporal version in the present perfect, such as (215), is an awkward, armchair combination that can only be understood as an unlikely epistemic conditional? I think this can be understood in relation to what is involved in basing oneself on a point on the real time line as opposed to a reality-independent imagined point.

In the temporal version we are inferring from one hypothetical part of the anterior epistemic baggage to another anterior circumstance, and both must be compatible with what we know. There is a hole in our actual knowledge, something that we could know which in fact we don't know (the IC); and this enables us to infer another fact (the MC) which again we could know, only we don't. In most situations outside detective novels, we would not handle such situations via hypothetical reasoning, but by finding out: because it is reality-based, we could go and find out instead of sitting in an epistemic armchair.

However, if we posit an assumption as part of a thought experiment, it is a different matter. On the one hand we are not bound by what we consider actual reality, and can play freely around with the possibilities, positing any state or event whose implications we would like to follow up. But on the other hand, we cannot go and look: the consequences follow only in the imagined thought experiment. Therefore it opens up possibilities that could not have arisen in any other way. If X has happened, we can go and look at the consequences right now, but in imagining that X had happened (P*-based), only our imagination can help us. What if Hitler had won the war? What if I had been born a millionaire?
The motivation for putting the MC in the perfect, to repeat, is that we want to posit a situation in which the consequences are already there: In a situation where Hitler's victory is a long-established fact, what dreadful changes have (already) followed from that victory, i.e. what is the (imagined) world like right now? In a situation where my millionaire birth is long behind me — what delicious consequences have (already) followed from it? Anteriority is crucial exactly because we want to imagine a situation that has already arrived, i.e. is part of the alternative epistemic baggage right now. The crucial communicative purpose of the so-called "counterfactual", on this theory, is to establish something as a parallel evolution that can be compared directly with now — in horrified fascination, daydreaming, or as a clinical experiment. The modus tollens inference is a special instance of the usefulness of direct comparability:
(236) If Joe had done it, he wouldn't have given the money back

By comparing with a situation in which Joe has in fact given the money back, we license the inference that Joe has not done it. But is this not just a cumbersome way of getting at the standard account in terms of epistemic stance, according to which (236) signals counterfactuality or unreality in the past? One advantage over the standard theory is that it fits with the example advanced by Comrie (1986) to demonstrate that counterfactuals are not necessarily against the assumptions of the speaker:

(237) If the butler had done it, we would have found just the clues that we did in fact find

The reality-independent situation matches reality in this case, enabling us to draw backward (abductive rather than deductive) inferences that are positive rather than negative concerning the truth of the IC proposition. Note that we cannot amend the account in terms of epistemic stance by weakening it to "very unlikely"; the epistemic stance in this case is not tongue-in-cheek negative, but rather non-committal: let's forget about what we assume now and consider what follows from a pure thought experiment. The standard epistemic stance of counterfactuality can be derived, as an implicature, from the account in terms of a coded, P*-based thought experiment — when it is used to license a negative inference. But if we take counterfactuality to be the coded meaning, we cannot explain cases like (237).

The reason why the perfect can be combined with the "trigger" interpretation in the non-temporal reading is that we are viewing the whole process from a position of imagined hindsight: assuming that X has occurred at P*, it follows that it has triggered Y. The temporal reading, as argued above, offers no similar options, because the reading is bound to existing reality.
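The two backward inferences discussed above — the modus tollens reading of (236) and the abductive reading of Comrie's (237) — both rest on comparing the imagined P*-situation directly with observed reality. A minimal sketch, with invented fact labels and a deliberately simplified subset test standing in for real-world comparison:

```python
# Sketch of the backward inferences licensed by direct comparability.
# Fact labels and the subset test are illustrative assumptions only.

def backward_inference(imagined_consequences, observed_facts):
    """If the consequences derived in the thought experiment are missing from
    reality, infer that the antecedent is false (modus tollens); if they are
    all found, the antecedent is abductively supported."""
    if imagined_consequences <= observed_facts:
        return "antecedent supported (abduction)"
    return "antecedent false (modus tollens)"

# (236): had Joe done it, the money would NOT have been returned;
# in fact it was returned, so Joe did not do it.
print(backward_inference({"money_not_returned"}, {"money_returned"}))
# -> antecedent false (modus tollens)

# (237): had the butler done it, we would have found just these clues;
# we did find them, which supports (without proving) the antecedent.
print(backward_inference({"clue_footprint", "clue_glove"},
                         {"clue_footprint", "clue_glove"}))
# -> antecedent supported (abduction)
```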
Consider the present-perfect version of Comrie's example:

(238) If the butler has done it, we (will) have found exactly those clues we did in fact find

What wrecks this version as a meaningful statement is the fact that the model-building is based on reality rather than on the thought experiment.
Instead of getting a parallel situation whose properties in comparison with the real situation are fraught with interesting inferences, we get a conclusion about the actual situation which is completely empty, because it restates what we already know.

Many accounts of non-temporal conditionals take the temporal information as relevant to the coded meaning: the "unreal" reading is one temporal step away from the "real" reading. Past perfect is unreal in the past; past is unreal in the present, etc. What is true in this theory is that the non-temporal interpretation is the only possible one when adverbial temporal specification is incompatible with a temporal reading of the tenses (cf. Bull 1960, Hornstein 1990). However, as a theory of what is coded in the tense system, I think it is wrong. When used non-temporally, the past does not (in addition) code the temporal point-of-comparison in reality. This is clear also because the non-temporal past is compatible with implied real topic times in both past and future:

(239) Joe was desperate. So far Jack had been blissfully ignorant, but what would Jack do if he knew about Mary?

(240) It is going to be a terrible wait: what would Jack do next year (do you think) if he heard about Mary?

Even the past perfect, as often pointed out, can also apply to the present or future; Schibsbye's example (1966) is

(241) If I had been in better health, I should have joined you

uttered just before the addressee leaves. It does not matter to the sense of (241) whether it applies now or earlier; the extrapolation from P* (at which better health is already an established fact) to a situation where the speaker's participation is already an established fact is equally valid whether it is compared with past, present or future reality. For a discussion of future-oriented cases of the German equivalent, see Leirbukt (1991).
The reason why the past future and the past future perfect are so rare in the temporal sense as opposed to the conditional sense is the same reason which made conditionals in the present perfect rare: the fact that all events must be plotted into the same time line. The non-temporal past establishes a parallel domain to reality: there is no access to the consequences that
follow from an event located at P* except via P*; they exist only as thus conceived, so if we want to get at them we have to go via P* and use the appropriate conditional. As opposed to that, the temporally based futures have to end up on the same time line that our lives are located on; and therefore they are accessible from other points of view, including S as the most natural point of view. Only a specific interest in maintaining this complicated point of view can thus motivate a temporal past future perfect, as discussed above, whereas the non-temporal conditional is typically the only road to the relevant inferences.

To support the view that the only two differences between the conditional senses and the ordinary temporal senses are in P vs. P* and in the epistemic vs. the temporal sequence cued by would, let me mention an example where I heard the conditional perfect as ambiguous between the two readings.15 There was a description of a computational processing procedure that a number of people were supposed to learn to handle; and the sentence in question was something like

(242) Following this programme, after about two weeks we would have trained them...

I realized that I had missed the cue with respect to whether the programme had actually been implemented or had for some reason been given up or postponed. Depending on this choice between P and P*, the form in question comes out as a conditional perfect or a true past future perfect: there are three points involved in both cases, with similar relations between them, and here the conditional would must be understood as corresponding also to a temporal sequence, although this is not always the case.

4.3.7. The "present subjunctive" reading of past tense

The close parallel between the temporal and the non-temporal past in conditionals is also underlined in Isard (1974). His account is implemented in a program that answers questions, real or hypothetical, with respect to options in a game of tic-tac-toe. It can provide answers if the programmer asks: If I take seven, will you take six? What would you take if I took five? If I had taken four when I took five, what would you have done? The syntax Isard provides is an example of procedural "task structure" (cf. above pp.
111-112), specifying a series of operations that the machine must perform to make sense of each question. The central idea of the programme is that tense and subordinator interact in identifying a situation that is used as the basis for answering the question about the next move: in other words, the programme reacts by constructing a model, with respect to which the MC is evaluated. There are two differences between Isard's account and mine, which both concern the understanding of language that is involved, rather than the actual mechanisms. First, I see the instructions as constituting the meaning, whereas Isard understands meaning in terms of the referent situations. Second, Isard retains the temporal element also in the "non-actual" cases. Both these differences come out in the following extract:

    Our account of the difference between sequences like

    (16) What will you do if I take five? Will you take four?

    and

    (17) What would you do if I took five? Would you take four?

    is not then in terms of a difference in meaning between them, but just that the hypothetical situation is filed under the label "present" in one case, and under "present subjunctive" in the other. (Isard 1974: 251)
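The procedure Isard describes can be caricatured in a few lines. Everything here — the board representation, the trivial move-chooser, the label names — is my own illustrative assumption; Isard's 1974 program is of course far richer. The point of the sketch is precisely the one at issue in the quotation: the model-building mechanism is the same for both questions, and only the label differs.

```python
# Caricature of an Isard-style question answerer for tic-tac-toe.
# Board representation and strategy are invented for illustration.

ACTUAL_BOARD = {5: "X"}  # squares 1-9; the questioner has taken five

def best_reply(board):
    """Hypothetical stand-in strategy: take the lowest free square."""
    return min(sq for sq in range(1, 10) if sq not in board)

def answer(ic_move, tense):
    """Build a model of the situation cued by tense and subordinator, add the
    if-clause move, and evaluate the main-clause question in that model."""
    label = "present" if tense == "present" else "present subjunctive"
    board = dict(ACTUAL_BOARD)   # same mechanism for both readings ...
    board[ic_move] = "X"         # ... the IC assumption is simply added
    return label, best_reply(board)

# (16) "What will you do if I take seven?"
print(answer(7, "present"))      # -> ('present', 1)
# (17) "What would you do if I took seven?"
print(answer(7, "past"))         # -> ('present subjunctive', 1)
```

On the view defended in this chapter, by contrast, the two readings should differ not merely in label but in the base model selected (enriched present vs. empty P*-based space); the sketch deliberately reproduces Isard's label-only difference.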

In terms of a functional semantics, it does involve a difference of meaning whether the addressee is instructed to understand this as a pure thought experiment, or as concerning an enriched present. And the P*-based case is not a present subjunctive, because the link with the actual present is not linguistically coded. The use of the term "present subjunctive" to indicate the interpretation whereby a thought experiment is used to present an alternative that might have existed at S is an unfortunate conflation (also found in Fillmore 1990: 138) between interpretation and linguistic analysis. There is no present aspect of the coded meaning; and there is no manifestation of a linguistic element that can plausibly be called "subjunctive". Also, if we return to the time when the present subjunctive was more of a living element in English ICs, its function was clearly distinct from that of the non-temporal past, compare the following extracts from Hamlet:


(243) Therefore have I entreated him along
      With us to watch the minutes of this night
      That, if again this apparition come
      He may approve our eyes and speak to him (I, i, 26-30)

(244) If there be any good thing to be done
      That may to thee do ease and grace to me,
      Speak to me.
      If thou art privy to thy country's fate,
      Which happily foreknowing may avoid,
      O, speak! (I, i, 130-35)

In the last extract, the first IC is in the subjunctive, the second in the indicative, but both clearly base themselves on the situation at S, rather than on the P* of a thought experiment. This follows from the fact that an actual exhortation to speak stands as the consequent; in comparison, it would be inconsistent to say

(245) *If you knew anything, speak to me!

because assumptions about a reality-independent situation can have no direct implications for the actual situation. The function of the subjunctive, in fact, is likely to be precisely that of epistemic stance: to indicate greater insecurity with respect to the probability of the IC. This is compatible with a stipulation (possibly coded by the present element in the construction "present subjunctive") that the IC, unlikely as it is, should be added to existing assumptions to create an enriched present. In this case, the two functions (indication of epistemic stance vs. location at a point P* in a reality-independent space) are clearly distinguishable, and the account in terms of P* is more precise as a description of the modern tense-based system.

4.4. Functional content syntax and "normal" syntax

With the previous section, I have said what I have to say about tenses as part of language structure. I would now like to sum up the advantages I see

466 Beyond the simple clause

in the central notion of content syntax, and the associated view of meanings as structurally organized, collaborating functions. In doing so, I compare it with both semantically oriented accounts and generative accounts, in order to set off my own views against a well-defined contrast based on the mainstream picture of syntax.

Let me first repeat that most of the salient features of my own story are also found in most other accounts, for the natural reason that this is where the interesting things go on. Almost all linguists would like to assign the "true" meaning to each tense, to specify just what it contributes to the meaning of the sentence, to show how the whole meaning depends on the parts, and to find out how distributional regularities reflect meaning; and instructional, compositional, semantic elements have typically been part of the solutions. Obviously, I do not pretend to have invented these things; but I do claim to have provided a systematic framework for them. What happens in most other accounts, as I see it, is that these crucial phenomena have to compete with other ways of looking at the issues, leaving the ultimate account opaque with respect to the aspect I focus on: collaboration between functional meanings.

I begin with the basic issue of the relationship between content and expression. With the exception of Langacker's Cognitive Grammar, the relationship between content and expression in syntax is almost always left unclear. The subject I have dealt with above is the semantic half of syntax; but I am now going to say a few things to illustrate how the description of the expression side can also benefit from the separation into content and expression syntax. The "task structure" that constitutes the semantic recipe is associated with two separate tasks: one on the content side, and one on the expression side, as illustrated in the instructional paraphrase of the old elephant (p. 226).
Because meaning equals coded function, it is natural that content structure is basic: expression must serve content, not the other way round. If we attach the expression side to the content tasks that have been described in sections 2.2, 2.3 and 2.4, and disregard other elements in the system, the corresponding expression tasks are, very roughly:

(246) state-of-affairs: access lexical verb stem + NP-expression(s) in its scope

(247) Perfect: prefix the highest verb stem with the auxiliary have and convert it to perfect participle

Content syntax and "normal" syntax 467

(248) Future: prefix the highest verb stem with the stem will and convert it to infinitive

(249) past tense: access the past form in the paradigm of the highest verb stem (i.e. unless the past tense slot is pre-empted by an irregular form, add -ed)16

The only potentially interesting thing about this primitive description is its relationship with the content structure that has been described above, i.e. a version of (250) below:

(250) past/present (future (perfect (state-of-affairs)))

The relationship between content and expression is such that the expression instructions make reference to the content structure (note the reference to the "highest verb"); to get the right complex expression (cf. Harder 1990a), we need to base the execution of the expression instructions on the content structure: expression serves content, as implied in the functional relation between the two sides. An example, describing content and expression of the present future in John will be here:

(251) Content-syntactic structure: decl (present (future ("be here" ("John"))))

(252) content "recipe" paraphrase (cf. above p. 214): take as a fact (apply to S (view as lying ahead ('be here' ('John'))))

Expression, beginning from within (the centrifugal principle, cf. Fortescue 1992):

(253) fut: stem WILL + inf (BE here) = WILL be here (John)

(254) present: present form (WILL be here) = will be here (John)

(255) decl: (subject first): John will be here
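To make the division of labour concrete, the derivation in (251)-(255) can be sketched as a small program. This is purely my illustration, not part of the theory: all function names are invented, and the string manipulation stands in for the real morphological operations.

```python
# Illustrative sketch only: executing expression tasks "from within"
# (the centrifugal principle) over a content structure like (251).
# All names are invented for this example.

def state_of_affairs(verb, subject):
    # cf. (246): access lexical verb stem + NP-expression in its scope
    return {"verb": verb, "subject": subject}

def future(clause):
    # cf. (253): prefix the highest verb stem with WILL + infinitive
    clause["verb"] = "WILL " + clause["verb"]
    return clause

def present(clause):
    # cf. (254): select the present form of the highest verb (WILL -> will)
    clause["verb"] = clause["verb"].replace("WILL", "will", 1)
    return clause

def decl(clause):
    # cf. (255): declarative expression -- subject first
    return clause["subject"] + " " + clause["verb"]

# Innermost operator first, mirroring (251):
# decl (present (future ("be here" ("John"))))
print(decl(present(future(state_of_affairs("be here", "John")))))
# -> John will be here
```

The only point carried over from the text is that each expression rule refers to the "highest verb" of the structure built so far, so that the content structure itself fixes the order in which the expression instructions are executed.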

Instead of being "underlying", the content-syntactic structure describes the way the content elements co-operate, and thereby simultaneously structures the way we express the clause. Even if it is language-specific, the semantic structure can be used for some of the same purposes as abstract underlying constructs, because semantic structures are not as cross-linguistically variable as expressions (cf. above p. 253). Thus, the French future can be described by means of a content-structural skeleton that is basically the same as (253). The central expression elements would be:

(256) fut: select future stem of verb (i.e. if no irregular form has pre-empted the future slot, use the infinitive form)

(257) present: select present form of highest verb

If inserted in the derivation of the equivalent Jean sera ici, the interesting steps would be:

(258) fut (ÊTRE ici) = SER- ici

(259) present (SER- ici) = sera ici

In this case an operation is performed on both the content and the expression side at each stage of the way; matching may not be so perfect in all cases, but if the description is always in lockstep, it is guaranteed to reveal discrepancies automatically. Also, a paradigmatic contrast will be evident on the content side of syntax, even if it is not evident on the expression side. The expression rule for the prospective ("select stem ALLER + infinitive of the highest verb") will then operate — for those speakers who accept the combination — in the same structural slot as the rule for the pure future (256), yielding, e.g., il va avoir fini as an alternative to il aura fini.

Many accounts which are basically doing what I call content syntax use a format of "features" to capture semantic properties, where the features for the description of tense are spelled out in terms of points in time and other semantic properties. This is essentially a matter of notation and has no bearing on the validity of their theories.
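The French steps (256)-(259) above fit the same style of sketch. Again the code is only my illustration: the 3rd-person ending added to the future stem is a toy rule, not a description of French conjugation.

```python
# Illustrative sketch only: language-specific expression rules on the
# shared content skeleton, cf. (256)-(259). The morphology is a toy.

IRREGULAR_FUTURE = {"ETRE": "SER-"}  # etre's future slot is pre-empted

def fut(clause):
    # cf. (256): select the future stem (an irregular form if one
    # pre-empts the slot; otherwise the infinitive would be used)
    stem, _, rest = clause.partition(" ")
    return (IRREGULAR_FUTURE.get(stem, stem) + " " + rest).strip()

def present(clause):
    # cf. (257): select the present form of the highest verb; on the
    # future stem SER- this yields the synthetic future "sera"
    stem, _, rest = clause.partition(" ")
    if stem.endswith("-"):
        stem = stem[:-1].lower() + "a"   # toy 3sg ending rule
    return (stem + " " + rest).strip()

step1 = fut("ETRE ici")      # cf. (258): "SER- ici"
step2 = present(step1)       # cf. (259): "sera ici"
print("Jean " + step2)       # -> Jean sera ici
```

The design point mirrored here is that only the expression rules change between English and French; the content-structural skeleton driving their order of application stays the same.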
But if, as I have argued, we should aim for a descriptive procedure that is guaranteed to ask the right questions, there is something unsatisfying about this format. Features go well with the tripartition into phonology, syntax and semantics that I argued against above pp. 199-200, and thus make it easy to be unclear about the

two-sidedness of language. When "syntactic" features are contrasted with "semantic" ones, the distributional motivation for positing features is uncomfortably confusable with the semantic motivation. Even when the account is clearly semantic, I think it is preferable to have a description which asks directly "what does this mean?" and "how do these meanings collaborate?", since these are questions that no-one can avoid asking anyway, instead of going the circuitous path via features and properties.

To focus on the format itself rather than descriptive claims, I can illustrate this in relation to the account by Davidsen-Nielsen (1990: 59-64), which is essentially compatible with the theory suggested here. The only major difference involves the past futures, which are said to have a "basis" in the past, symbolized as B, yielding the following configurations:

(260) Future of the past: B — E

(261) Future perfect of the past: B — E — R

According to the system I have proposed, the B can be understood as the time P, namely the time-of-application associated with the past tense. The features suggested by Davidsen-Nielsen are (1990a: 62):

1. +/- PREVIOUS: in forms marked +PREVIOUS, event time precedes reference time. It is used to distinguish perfect and non-perfect tenses.

2. +/- POSTERIOR: in forms marked +POSTERIOR, event time follows either speech time or basis time. It is used to distinguish future and non-future tenses.

3. +/- THEN: in forms marked +THEN, reference time or basis time precedes speech time. It is used to distinguish the past tenses from the present tenses.

These features together generate a feature matrix:

                               Then   Posterior   Previous
Present                         -         -          -
Present perfect                 -         -          +
Past                            +         -          -
Past perfect                    +         -          +
Future                          -         +          -
Future perfect                  -         +          +
Future of the past              +         +          -
Future perfect of the past      +         +          +
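The point that these features are derivable from the compositional tense meanings (PREVIOUS from the perfect, THEN from the past, POSTERIOR from the future) can be made concrete: the whole matrix is generated mechanically once each tense is listed by its compositional build-up. The sketch below is my illustration, not Davidsen-Nielsen's.

```python
# Illustrative sketch: generating the feature matrix from each tense's
# compositional build-up, as suggested in the surrounding discussion.

TENSES = {
    "Present":                    [],
    "Present perfect":            ["perfect"],
    "Past":                       ["past"],
    "Past perfect":               ["past", "perfect"],
    "Future":                     ["future"],
    "Future perfect":             ["future", "perfect"],
    "Future of the past":         ["past", "future"],
    "Future perfect of the past": ["past", "future", "perfect"],
}

def features(operators):
    # THEN from the past, POSTERIOR from the future, PREVIOUS from the perfect
    return {"THEN": "past" in operators,
            "POSTERIOR": "future" in operators,
            "PREVIOUS": "perfect" in operators}

for name, ops in TENSES.items():
    f = features(ops)
    row = "  ".join("+" if f[k] else "-"
                    for k in ("THEN", "POSTERIOR", "PREVIOUS"))
    print(f"{name:<30}{row}")
```

That the feature values fall out of three positive content elements is exactly the sense in which the features can be called epiphenomena of the meanings.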

The objection I have to this format is that the relationship between semantic properties and compositional coding, although inferable, is opaque. All forms are described by the same features, thus creating the element of redundancy that was discussed in relation to other Reichenbachian theories; and one has to add the features up to get a description of the meanings. To defend the description against the charge of redundancy, Davidsen-Nielsen sets up a version of the feature matrix, where the "plus" features are replaced by a markedness feature. This generates a scale of complexity — from the totally unmarked simple present to the triply marked past future perfect — which is more similar to the system I have argued for. Nevertheless, I do not think this solves the problem, which concerns the way of thinking that is embodied in the feature notation.

Reflecting the Saussurean tradition of emphasis on distinctions, the features are said to "keep apart" the different elements in the system. In the diagram above, the simple present gets three minus features indicating three points on which it is different from other forms.17 But if these "negative" properties are really all due to one positive entity, namely its meaning, surely it is preferable to have a descriptive format which is explicitly based on a positive ("substantial") content element.

As I see it, the fact that more than any other makes the notion of content syntax preferable is that the ontological status of the object of description is so obvious: meanings working together. As opposed to that, what is the ontological status of abstract features and times, the staple inventory of other accounts, if they are not epiphenomena of the meanings? Davidsen-Nielsen's features can be described as emerging from the tense meanings described above: PREVIOUS can be derived from the perfect, THEN from the past, and POSTERIOR from the future; and times arise, as explained

above, as implications of these meanings. This, I suggest, is the only legitimacy they have. All we need to defend the account given here is an assumption that there are coded meanings: we need no additional apparatus in the form of times and features involving temporal relations — whereas the only way features and properties can be anchored in the real world is as aspects of those meanings that remain uncomfortably off-stage in feature matrices.

The merits of content syntax can be profiled by comparison with solutions in terms of the picture whereby syntax is supposed to be distinct from semantics. The first generative version is the one in Chomsky (1957). At that early stage the aims were clearly oriented towards the expression side only, so the complications of handling semantics within a distribution-based pattern of thinking did not really arise — semantics was simply absent, rather than being a problem. Of course semantics cannot be omitted from syntax, if you believe that syntax is inherently two-sided; one of the things that could not be handled satisfactorily in terms of this purely expression-based picture is, as one would expect, constituent structure; for a criticism of Chomsky's theory from the point of view of two-sided syntax, cf. Langacker (1991: 198). But as semantic considerations became unavoidable, the interface between syntax and semantics also became a point on the agenda in relation to tense. Smith (1978) is illustrative of the problems this raises.
She concludes that temporal facts basically have to be accounted for outside syntax, by a combination of semantic rules of interpretation and contextual information; a salient example of this is her analysis of Reichenbach's R, which she sees as fixed by time adverbials and tense in combination — except in cases where we have a past tense that can only be interpreted relative to context, as in

(262) it rained heavily

All this is perfectly compatible with the account given above; but Smith's conclusion is that this is a confirmation of the autonomy of syntax — because syntax works independently of all these semantic considerations. In other words, it does not capture the way in which syntax serves to make meanings co-operate both internally (between themselves) and externally, with the context, for instance by the request for identification that is bound up with the deictic tenses.

The fact that semantic considerations fairly obviously play such a dominant role in relation to tense is probably the reason why there have not been many generative treatments of it (cf. Dahl 1992). The main recent exception, Hornstein (1990), is interesting as an illustration of how the autonomous view of syntax handles tenses while recognizing the role of semantics in the picture. The point of departure for Hornstein's syntactic solution is the set of notions that Smith placed outside syntax proper, namely Reichenbach's times: Hornstein gives them a place in his theory on a purely syntactic, non-semantic level of analysis. Each tense is assigned what he calls a "basic tense structure" in the form of a configuration of S, R and E, which has no semantic interpretation (1990: 14). In Hornstein's version, as in Reichenbach's, there are two relations, a comma and _ (pronounced "line"), yielding the familiar formulae S,R,E for the simple present, and E_R_S for the pluperfect. When they reach the stage of semantic interpretation, these two relations will be read as "simultaneous with" and "follows" — once again the interpretations we are familiar with. But Hornstein's specific contribution is to posit a level of autonomous syntax where this is not yet relevant. On the autonomous syntactic level, the linear order between symbols separated by a comma makes a difference: S,R,E is not identical to E,S,R.

These syntactic "basic tense structures" function as input to more complex configurations. A central element of the theory is the generation of "derived tense structures", which arise when tense configurations are combined with a time adverbial. What happens in this process is essentially that the time adverbial becomes associated with either R or E. Here the significance of the linear order of points manifests itself in the possibility of getting a derived tense structure corresponding to

(263) John is leaving tomorrow

where E is removed from S and R (when it is associated with the time adverbial tomorrow). This is permitted, because E stands last in line in the configuration S,R,E — if the simple present had been S,E,R, this would have been ruled out. This is also motivated by the fact that (264) is ruled out:

(264) *John leaves yesterday
Here the significance of the linear order of points manifests itself in the possibility of getting a derived tense structure corresponding to (263) John is leaving tomorrow where E is removed from S and R (when it is associated with the time adverbial tomorrow). This is permitted, because E stands last in line in the configuration S,R,E — if the simple present had been S,E,R, this would have been ruled out. This is also motivated by the fact that (264) is ruled out: (264) *John leaves yesterday

If (264) were to be possible, it would require that the present could yield a derived configuration E,R — S. But that is impossible because the basic tense structure is S,R,E and not E,R,S (cf. Hornstein 1990: 16). On the other hand, this gives problems in accounting for why you cannot say

(265) *John has come home tomorrow

since the basic configuration would allow it by the same mechanism that made (263) possible (cf. Hornstein 1990: 23).

I think this analysis is a textbook example of how "pure underlying syntax" postulates empirically unconstrained formal devices where we should either have an analysis in terms of meaning or a frank admission that we have no explanation for what is going on. For Hornstein, the beauty of his theory is exactly that it takes over at a point where we leave semantics behind: "Descriptive semantics often proceeds oblivious to issues of explanatory adequacy", he claims (Hornstein 1990: 5). Following generative mythology, he claims that because there is no semantic motivation for his structures, we must assume that they are part of the innate competence that children need to figure out the tense system (Hornstein 1990: 106, 117). In that way, innate structures bridge the gap between what makes sense and what does not make sense; because "descriptive" semantics does not postulate inherently meaningless structures to account for this, it does not aspire to the explanatory level in Chomsky's sense. One problem with this is the lack of scientific foundations for these entities (cf. the general discussion above, p. 185). Another is that it tempts the generative linguist to orient his description towards unmotivated structures as the crown and glory of his achievement: real explanation begins when the structures no longer make sense.
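As I understand Hornstein's mechanism, the licensing condition amounts to checking that associating an adverbial does not change the linear order of points in the basic tense structure. The sketch below is my own reconstruction for illustration, not Hornstein's formalism.

```python
# Illustrative reconstruction (not Hornstein's own formalism): a derived
# tense structure is licensed only if it keeps the points of the basic
# tense structure in the same linear order.

def licensed(basic, derived):
    # Compare only the order of the points S, R, E; the relations
    # (comma vs. line "_") may change when an adverbial moves a point.
    points = lambda struct: [p for p in struct if p in ("S", "R", "E")]
    return points(basic) == points(derived)

SIMPLE_PRESENT = ["S", ",", "R", ",", "E"]

# (263) "John is leaving tomorrow": tomorrow pulls E away -> S,R _ E
print(licensed(SIMPLE_PRESENT, ["S", ",", "R", "_", "E"]))  # True

# (264) "*John leaves yesterday" would need E,R _ S: order changed
print(licensed(SIMPLE_PRESENT, ["E", ",", "R", "_", "S"]))  # False
```

As the text notes, a purely order-based check of this kind also wrongly licenses (265), which is exactly the kind of empirically unconstrained behaviour at issue.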
As pointed out by Dahl (1992), it would in fact be possible to assign a temporal interpretation to Hornstein's basic structures, so that they would be both semantically motivated and do the job Hornstein wants them to do; Hornstein appears to have rejected such a proposal, but the arguments are not specified in the book. But it would spoil the whole point of the enterprise, according to the logic of autonomous syntax: if the structures were semantically motivated, we would be down on the lowly level of descriptive semantics again.

It seems to me that the basic error in autonomous syntax is clearly illustrated in this account. As I have argued above (p. 307), I think we need to assume that there are cases where expression structures acquire a life of

their own and spread to cases where they are not semantically motivated, a case in point being the necessity to have an expressed subject in English in cases like it is raining. This means that we cannot automatically predict expression properties from content properties in all cases. But if we conclude from such cases that syntax is wholly autonomous from semantics, we are putting the cart before the horse. Surely, the right way to look at it is to see syntactic expression devices basically as ways to signal how content is syntactically organized. Constructions where this is not the case must be understood as special, separately conventionalized exceptions, as when scope relations between 'can' and 'seem' are reversed in

(266) I can't seem to understand this

(or, occasionally, as signs that we have not analyzed the content side of syntax carefully enough). Postulating abstract entities that look just like semantic elements except for having no interpretation is not a revealing way to go. Let me note in passing that Hornstein's theory postulates that tenses are essentially adverbs (Hornstein 1990: 165), thus illustrating the temptation to use underlying structure to ignore the distinctions that language makes in favour of the structural distinctions that the linguist wants to make.

There is a great deal of interest in Hornstein's book, and most of the time he deals with genuine problems of how temporal configurations interplay; thus when he describes the mechanisms whereby the time of adverbials is associated with the time of tenses, he is essentially doing what I call "content syntax", and making explicit the compatibility restrictions.
Also his interest in finding cases where semantic constraints do not automatically explain ungrammaticality has a number of interesting results; why, for example, is (267) not better than (268)?18

(267) *John came before Harry arrives

(268) *John came after Harry arrives

Hornstein's book is therefore a good illustration both of why you can get interesting results in spite of having a bad theory, and of why it would be better to have a good theory.

Tense and discourse 475

4.5. Tense and discourse

4.5.1. Introduction

According to the standard division of labour between functionally and structurally oriented linguistics, it would be absurd for an avowedly functional theory to postpone the treatment of phenomena beyond the sentence to the last section. However, the purpose of this book is not to deal with actual discourse for its own sake; it is to show how the functionality of language manifests itself in its so-called structural "core" — thus demonstrating that it is a mistake to locate functional properties in the periphery of structure. Hence, even in this section, there is no attempt to look at discourse-as-such; the subject remains language-as-such. In most of the discussion, I have been concerned to point out the limitations in structurally oriented views which lack a sufficient understanding of the functional dimension. In this last section, I will turn around and look at problems in functionally oriented accounts that do not have an adequate theory of linguistic structure.

Because of the focus on language-as-such, there is much in actual discourse that this book has nothing to say about. Its contribution lies in offering a way of thinking about language that makes it possible to be equally insistent on taking both function and structure seriously. The chief instrument in achieving this goal is the function-based view of structure. It rules out as misbegotten not just the separation of function and structure that was seen above, with the structural core and the functional periphery, but also a version which is more liable to affect functionalists. This second version views linguistic structure with suspicion: instead of describing language as structure, it aims to describe the functions it has in actual communication.
But if you try to do without a clear-cut notion of language-as-such and deal only in functions — thus handing over linguistic structure to the enemy, as it were — you cannot describe the interplay between language-as-such and social functions: a correlation requires more than one entity. The lack of a clear view of what sort of entity a language is thus threatens the possibility of scientific precision: if your view of language is invisibly determined by the functions you are interested in, it is not surprising (or revealing) that there turns out to be a very close correlation between language and function.

Investigation of discourse can have two purposes, which are both interesting from a functional point of view, and which ought to be pursued in close communication with each other. One is to look for regularities in the code; the other is to look for regularities in the interaction. According to the view of pragmatics argued in Part One above, the first is a part of the second: regularities in the code are a subset of all the regularities you find in linguistic interaction. However, in looking at the same material, the two interests will yield patterns of description that sometimes overlap and sometimes diverge (cf. above pp. 161-162) — because not all regularities in the code are significant for understanding what goes on in the concrete interaction, and not all patterns in the actual interaction are mediated by structures in the code.

At some level, everybody knows this; but because there is no generally accepted picture of what a function-based structure is like, functionalists often have to rely too much on well-established inherited notions, like tense, with unclear notional/structural implications, and therefore have difficulties in establishing a greater degree of precision than these terms will support. One type of methodology that exemplifies the dangers of this situation is to take a particular word, or grammatical form, and correlate it directly with a range of discourse situations, positing the result as a description of either a particular linguistic form, or of the discourse properties found. But as underlined by Schegloff (cf. above p. 163), the relevant discourse properties are not consistently marked by little buoys in the shape of linguistic expressions; and the relevant semantic features are not consistently accompanied by certain discourse events (as in the case of skal).
What this methodology cannot reveal are the cases where the discourse features occur without the form, or where (in other, uninvestigated discourses) the linguistic forms have other situational correlates. Below, I shall try to illustrate the risk of discourse-based categories drifting into the description of language-as-such because of an insufficiently constrained correlational pattern of thinking. This unclarity can show itself in two ways: either too much is seen as structurally coded, because all functional regularities are attributed to language as such — or too little is seen as structurally coded, because language "really" only exists out there, where people are actually talking to each other. In terms of the basic dichotomy between function and structure, these are two different ways of fighting a formalist conception of language-as-such: either you weight down the formal skeleton by dumping functional regularities on it, or you annihilate it by saying that there exist

only functional regularities. In terms of the strategy that I have followed, both are versions of the same mistake: to presuppose the formalist view of language in your argumentation, instead of setting up an alternative view of what language-as-such is.19

4.5.2. Discourse Representation Theory

I shall now look at some claims made about tense in discourse in the light of this basic distinction between describing language and describing discourse. Among these approaches, it is natural to begin with Discourse Representation Theory, because it takes its point of departure in the sentence-internal tradition: an important part of its tense theory is a development of Reichenbach's. However, it looks at tense description in the light of a focal interest in textual progression, of which temporal progression is a salient aspect (cf. Kamp—Rohrer 1983a,b, Rohrer 1985, Vet—Molendijk 1985, Bartsch 1985, 1988, Moens 1987, Kamp—Reyle 1993).

This approach to tense raises two additional problems with Reichenbach's reference point. The first problem becomes apparent in relation to what is called "extended flashbacks" (Kamp—Reyle 1993: 594). Consider (269):

(269) Fred arrived at 10. He had got up at 5; he had taken a long shower, had got dressed and had eaten a leisurely breakfast. He had left the house at 6:30

All the past perfect clauses use the arrival time as reference point in the familiar sense. But in addition, they form a narrative series, where each uses the previous sentence as reference time in the sense of point of departure for narrative progress. Both types of time may be said to denote "where we are", although in different ways. To keep these times distinct, Kamp—Reyle set up two different concepts. The first keeps the name of reference point, abbreviated Rpt, and describes the time that is involved in narrative progress (indicating "previous event time"). The second concept, "temporal perspective point" = TPpt, covers the function of the time from which the events are seen, corresponding to one sense of Declerck's time of orientation; in the extended flashback above, it is "at ten", the time in relation to which the past perfect is calculated (= P in our terminology). These two form the final items on the

list of splinter concepts of Reichenbach's R. The specific interest in textual progression imposes a special twist on the TPpt concept; it serves not only to characterize the perfect forms, but also to contrast events and states. This contrast is of great interest, because events are typically understood as moving the action forward, while states are typically understood as leaving the time unchanged:

(270) She came in. He looked at her (event1 + event2)

(271) She came in. He knew what she was after (event coinciding with state)

In French, the narrative significance of the contrast between imparfait and passé simple lies in the ability of the passé simple to impose the "event" property on the state-of-affairs, thus moving the action forward in time. The classic example is (cf. Kamp—Rohrer 1983a):

(272) Pierre entra (passé simple). Marie téléphona (passé simple)
      Pierre came in. Marie phoned (someone)

(273) Pierre entra (passé simple). Marie téléphonait (imparfait)
      Pierre came in. Marie was phoning (someone)

The temporal configurations correspond to (270)-(271) above; when the second sentence is in the imparfait, the action does not move forward. In English, the same contrast is usually marked by the progressive as opposed to the simple past, as shown in the translation.20

The distinction between events and states involves not only the relationship between the state-of-affairs and the Rpt; it also has implications for the perspective from which the state-of-affairs is viewed. As discussed earlier, an event needs to be viewed from a distance, because it involves integrating two temporal phases into one construct, and this is incompatible with a point-of-view in the middle of things. As opposed to that, states (including progressives) are naturally viewed from the inside; somebody saying I am worrying myself sick is clearly viewing the process from a perspective right in the middle of things.
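The narrative contrast in (270)-(273) can be sketched as an update rule for the Rpt: events advance it, states overlap it. This is a drastic simplification of the DRT construction rules and is my own illustration only.

```python
# Illustrative sketch (a drastic simplification of DRT): events move the
# narrative reference point (Rpt) forward; states leave it unchanged.

def update(rpt, clause, aspect):
    """Return (temporal location of the clause, new Rpt)."""
    if aspect == "event":
        return ("just after " + rpt, clause)   # narration advances
    return ("overlapping " + rpt, rpt)         # state: Rpt unchanged

# (270): event + event -- the action moves forward
loc1, rpt = update("start", "She came in", "event")
loc2, rpt = update(rpt, "He looked at her", "event")
print(loc2)   # -> just after She came in

# (271): event + state -- the second clause overlaps the first
loc3, rpt = update(rpt, "He knew what she was after", "state")
print(loc3)   # -> overlapping He looked at her
```

On this sketch the French passé simple corresponds to forcing the "event" branch on a state-of-affairs, which is exactly its narrative effect as described above.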
When we extend this analysis to the past, it naturally leads to the assumption that statements in the past must get two different interpretations. When we have a past state such as he was worrying himself sick, it would be natural to retain the TPpt in the middle of events; but when we have a past event, as in he died soon after, which

moves the action forward, we need to have a distant TPpt, most naturally understood as coinciding with the present moment from which the narrator views his story. Thus past events get a TPpt in the present, while past states typically have a TPpt that is also in the past.21

Because of the orientation of the account towards describing how tenses interact in bringing about temporal relations in the narrative structure, we therefore get a number of ambiguities. The most striking example is the past perfect, where the TPpt has to do double duty in describing both the "perfect" properties and the "event/state" properties; it therefore becomes three-way ambiguous (one of the logical possibilities is argued not to have any actual instances). Kamp—Reyle are aware of the problems in positing such ambiguities, and would prefer to avoid them (1993: 599), but see them as necessary to account for the complexities of the data.

Let us now see how this account fares in relation to the distinction between the description of language and the description of discourse. In order to give the past perfect a description which enables it to drive the narrative forward as in (269), it must have a semantic variant (one of the three) which has the same non-stative feature as the passé simple. The question is, in this context, whether it is sensible to regard this as linguistically coded. I think it is more plausible to assume that the semantic specification of the past perfect is as outlined above, involving nothing but a combination of perfect and past: roughly, locating a state-of-affairs before point P in the past. One argument for that in relation to French is that in the (literary) French system the plus-que-parfait has a perfective counterpart in the shape of the passé antérieur.
As described by Vet—Molendijk (1986: 153-54), the passé antérieur can move events forward in the same way as a passé simple (which is part of its compositional build-up):

(274) Pierre monta (passé simple) dans sa chambre. En moins de rien il eut terminé (passé antérieur) son travail
      Pierre went up to his room. In less than no time he had finished his work

This type of use clearly differs from the extended flashback use, and one would want in a discourse perspective to keep them apart. However, the descriptive system suggested by Kamp—Reyle, in which this ability is located in the feature +/- STAT, cannot do this: -STAT indicates the

480 Beyond the simple clause

property of moving the Rpt forward, and must be assigned both to the passé antérieur and to the extended flashback variant of the plus-que-parfait.

In light of the above, it would appear to be compatible with the investigative aims of Discourse Representation Theory to say that some of the temporal distinctions reflect interpretive choices made by the speaker in his process of interpretation rather than ambiguities in coding. In that case, it would be simpler to say that the extended flashback use is a case where progression is assigned as part of the interpretation process only. To illustrate the merit of this view, consider the following quotation (caps mine, highlighting the relevant instances of the past perfect):

(275) Nettie sat in her chair, holding her lampshade in her lap. She had been holding it in her lap ever since the crazy Polish woman had driven past her house the first time. Then she HAD COME again, parking and honking her horn. When she left, Nettie thought it might be over, but no — the woman HAD COME back yet a third time. Nettie had been sure the crazy Polish woman would try to come in. She had sat in her chair, hugging the landscape with one arm and Raider with the other, wondering what she would do when and if the crazy Polish woman did try — how she would defend herself. She didn't know. (Stephen King, Needful Things, p. 159)

The two capitalized instances of had come enter into a narrative sequence; they are both anterior, as in the French example above, to the time indicated first (Nettie sat in her chair), but describe two stages in the harrowing experience that is described in the flashback. However, the sequence is marked by the time adverbials then, again, and a third time, helping the reader to organize a passage which is temporally fairly intricate.
A simple past (thought) describes the time in between the two events that are described in the pluperfect — and I suppose no-one would try to provide this "state past" with any semantic description that could explain its temporal position in between two past perfects. In view of that, I do not see any call to attribute the sequential properties to the past perfect. The state/event character of the state-of-affairs clearly plays a role: the narrative is not brought forward by had been sure or had sat — but that does not suggest the necessity of a special feature of the past perfect tense form. The reader just has to organize the narrative based on the clues ("instructions") provided by the text; if narrative sequence is not cued in itself, he has to provide it on his own.

Tense and discourse 481

I think the text-progressional features that are attributed to the tenses are really descriptions of the product, instead of being descriptions of tense meanings. The procedural element in Discourse Representation Theory is thus to some extent overshadowed by the referential element that dominates the tradition. The aim as described by Rohrer (1986: 80) is "to write explicit rules that extract the temporal information contained in a text". If we operate with a basic structural distinction between langue and parole, a description of linguistic meaning must be different in principle from information "contained in a text" — and the coded meanings belong on the langue level, not on the "product" level at which the temporal structure of the text belongs. The ambiguities that arise are natural in relation to texts, because tenses will be ambiguous in relation to actual textual time structures, just as all actual functions are more specific than coded functions.

Some versions of the theory downplay the Reichenbach heritage and concentrate on properties of states-of-affairs, cf. Moens (1987). But even if you concentrate on such properties, with the distinction between states and events in a central role, the tenses in themselves, as pointed out by Sandström (1992 [to appear]), do not reliably predict the relation between the clause and the Rpt: events can be grouped together as constituting part of the same scene (suddenly a lot of things happened at once: the light went out, a girl screamed, the wall collapsed, and a siren started to wail); states can be read inceptively (suddenly he was lying on the floor). There is no way in which a theory based on the difference between stative and non-stative verb forms, coupled with a wider theory of temporal contours of states-of-affairs, can in itself bring about the right temporal structure in a text. Narrative expectations and interpretive strategies play a necessary part in the picture.
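As a toy illustration of the update mechanism at issue (the names Clause and timeline are my own inventions, not Kamp and Reyle's actual construction algorithm), the +/- STAT rule can be rendered as a procedure in which events introduce and advance a reference point while states overlap the current one. The Sandström examples show precisely that real texts are not bound by any such rule.

```python
# Toy rendering of the +/- STAT update rule discussed in the text:
# an event (-STAT) moves the reference point (Rpt) forward; a state
# (+STAT) overlaps the Rpt already established. Names are invented
# for illustration only.

from dataclasses import dataclass

@dataclass
class Clause:
    text: str
    stative: bool  # True = +STAT (state), False = -STAT (event)

def timeline(clauses):
    """Place each clause relative to a moving Rpt (an integer tick)."""
    rpt = 0
    placed = []
    for c in clauses:
        if c.stative:
            placed.append((c.text, rpt))  # state overlaps the current Rpt
        else:
            rpt += 1                      # event introduces a new Rpt
            placed.append((c.text, rpt))
    return placed

story = [
    Clause("Pierre went up to his room.", stative=False),
    Clause("He was tired.", stative=True),
    Clause("He finished his work.", stative=False),
]
for text, t in timeline(story):
    print(t, text)
```

On this mechanical rule, the two events land at successive positions and the state shares the first event's position; the point of the surrounding discussion is that interpretation can override exactly this default.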
A theory of discourse progression therefore needs to be conscious of itself as a discourse theory, using a theory of language-as-such as an ancillary discipline only. In such a theory, the distinction between states and events will remain very important, because the semantic property which it describes plays a role in deciding how to place a new clause in relation to the context: events cannot be located at a point in time. But in order to know precisely where it goes, we draw on non-coded, situational knowledge as well. This type of problem can be described as a complication in relation to the way we understand "topic time". Some topics are temporally simple, some topics are temporally complex. Narratives are so constituted in their way of
dealing with topics that you cannot produce them without at intervals changing the temporal location; and this means that in a narrative, the most salient way you can stay within the established topic time is not by keeping temporal location constant, but by making it move. Rpt is thus a genre-specific variant of topic time.

For deictic tenses, the central element of meaning is not specification of time but assignment of point-of-application (with temporal restrictions). In relation to a narrative discourse, the situation that constitutes the topic (= the point-of-application) can thus stay the same, while the actual time within the point-of-application moves forward with the new instalment. Narrative genre expectations, specifying the nature of the topic, therefore do an important job in establishing temporal structure, in collaboration with the coded meanings. For those who have doubts about this, I suggest they find a novice in crime fiction and test him out on any reasonably sophisticated crime story, whether on film or in book form, and see if he can piece the action together. If my own experience is anything to go by, it will be a long time before he is ready for Raymond Chandler's The Big Sleep.

This is an instance of the general rule that in order to be able to talk insightfully about what is a property of the langue as opposed to a property of contextual factors, you have to be extremely precise about the contextual processes that language enters into. The question of genre comes under the general issue of what repertoire of actions the speech community makes available to speakers; as emphasized in the principle of sense, the basic pragmatic constraint on interpretation is that the utterance must situate itself in relation to the "Lebenswelt" that is recognized to be common to interlocutors in the culture.
The first time you hear a "shaggy dog" story, you do not realize that the bewilderment that you feel is the point of the thing; but once you know that, the story makes perfect sense. Learning to piece together the temporal structure is part of the same type of sociocultural skill; and in order to describe the contribution of tense, we need to be able to refer to the relevant aspects of those skills.

4.5.3. The "historical" present

The use of the present about past events, which in deference to tradition I shall call the "historical present", is perhaps the most familiar example of a tense usage whose relations with the central meaning are problematic. If we have an adequate account of the process of discourse interpretation,
however, we may be able to retain a simple semantic description of the present.

The most basic fact about the process of interpreting narratives can be described in terms of mental spaces: in understanding a story, you must set up a mental space as the locus where you put and organize "story information". The reality space consisting of the actual situation thus gets a daughter space in the shape of the world of the unfolding story. The narratee has to keep track of two mental spaces; and if the narrator talks about somebody having to go to the bathroom, you have to decide whether this belongs to the story space or to the actual situation. Although this ability is exercised in the process of language comprehension, it is not part of linguistic competence as such. Any information that you get, including for instance information from a photograph, has to be processed in terms of where that information belongs. Therefore the ambiguity that is built into the two-space situation is not inherently a linguistic ambiguity; the narratee simply has to deal with two worlds at the same time, and has to find out what belongs where. Walter Mitty is not the only person, fictional or real, who has had trouble keeping the two worlds apart.

The two spaces also have a temporal aspect. In understanding narrative we need to navigate between speaker-now and story-now, as pointed out in Fleischman (1990: 125). This mechanism is crucial in understanding the use of the present about past events. In referential terms, this usage is an exception to the basic differentiation between past and present. The standard accounts of it, i.e. in terms of greater "vividness" or of attributing "psychological" actuality to the event (cf. Leech 1987), are true enough as far as they go, but they do not explain why you do not in general say things like Hitler loses the Second World War, even if you consider this event as having lasting and actual significance.
Also, it leaves unclear whether we should consider "reference to past events in a vivid manner" as part of the meaning potential of the present tense. Rather than attribute this usage variant to the code, however, the general phenomenon of mental spaces allows us to locate the ambiguity in the cognitive context of language. As described in Langacker (1991: 266), what happens is that the deictic centre of the process of communication shifts from the actual space to the narrative space. This is why you cannot usually begin an everyday narrative in the present tense, however vivid you want to make it. The standard pattern is to begin by using tenses based on the actual situation, until the narrative space can be taken as established — and
then speaker and hearer can shift the temporal viewpoint to that of the narrative space. This also corresponds to the rule for narrative tenses, as described by Dahl (1985), compare the discussion above p. 242: the first clause is in a non-narrative form, and only the tenses that follow up on it are in the narrative form.

This account explains why the present tense lends greater vividness: it reflects a manner of narration in which the temporal distance between the narration and the narrated event is removed. Stories told in the past tense, in contrast, reflect a perspective whereby the speaker views events as taking place in a past mental space that is observed from a distance. For this reason, Weinrich (1964) sees the past as the essentially narrative tense: the present would accord narrative events a degree of immediacy that would soon become exhausting: you have to move along with the narrative all the time. This is a reasonable account of the "epic preterite" as used in novel-length narratives — for shorter, oral stories, the immediacy is more bearable as well as consonant with a desire on the part of the narrator to keep his audience alert.

However, even if the temporal distance is eliminated, it does not mean that the narrative space is totally conflated with the actual space. This can be seen in the possibility of using event verbs in the present tense when telling a story:

(276) So he comes up to me and says...

In (276), we are still perfectly aware that this is a story being told, and we do not look around to see where "he" is — because if (276) applied to the real ongoing situation, the non-progressive form could not be used. So we can have a "perspective point" that is distant while coinciding temporally with the moment of speech. While accounting for the intuitive vividness, the "mental space" explanation allows us to keep the present unambiguous in terms of temporal distance: it has this effect precisely because it means what it does.
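The two-space resolution just described can be pictured in a toy model (my own illustration; the class and function names are invented, not part of any mental-space formalism): the present tense contributes only the instruction "apply at now", and what varies is which mental space currently holds the deictic centre.

```python
# Toy model of the two-space account of the "historical present":
# a deictic present supplies one coded instruction ("apply at now"),
# and "now" is resolved against whichever mental space holds the
# deictic centre. All names here are illustrative inventions.

from dataclasses import dataclass

@dataclass
class MentalSpace:
    name: str
    now: str  # what counts as "now" within this space

def apply_present(deictic_centre: MentalSpace) -> str:
    """Resolve a present-tense clause: same coded meaning,
    different outcome depending on where the centre sits."""
    return deictic_centre.now

actual = MentalSpace("speech situation", now="speaker-now")
story = MentalSpace("narrative space", now="story-now")

# An everyday narrative starts with the deictic centre in the
# actual space...
print(apply_present(actual))  # speaker-now
# ...and only once the story space is established can the centre
# shift, yielding the "historical present" without any extra meaning:
print(apply_present(story))   # story-now
```

The point of the sketch is that no second, "past-referring" sense of the present is needed: the ambiguity lives in the context (which space is active), not in the code.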

4.5.4. Fleischman's theory of tense in narratives

Seeing the ambiguity in the understanding of the present tense as located in the cognitive context rather than in the language is also essential in understanding the phenomenon of tense shifting. This problem is worth
devoting some attention to because it gave the impetus for a thoroughgoing functionalist investigation of tense in narrative texts, Tense and Narrativity (Fleischman 1990). I think this book is in all respects an impressive account of the role of tense in the narrative tradition. However, I think the theory I propose of the relation between function and structure can provide a clearer account of the role of linguistic structure in bringing about the textual functions that are the main subject of the book. In order to bring out this point I shall first go through the structure of her tense theory with particular reference to the example of the present about past events and then take up the seminal issue of tense shift.

The view on the relationship between the linguistic system and function that is reflected in Fleischman's account illustrates the problem functionalists often have with linguistic structure, which I described above: because the formal view of structure and semantics is so strong in American linguistics, functionalists tend to presuppose it and therefore either place their own interests in the periphery of a traditionally conceived structural core or throw doubt on the whole idea of linguistic structure. Both tendencies can be found in Fleischman. Fleischman aligns herself (1990: 12) with Hopper's stance of "emergent grammar", thus doubting the basic idea of any constancy behind usage. However, she also bases her theory on an assumption that there is a "strictly grammatical" core, surrounded by a pragmatic periphery. Thus she interprets the freedom of tense usage in early vernacular texts as an instance of "languages putting their available morphology to use for purposes not strictly grammatical but pragmatic" (Fleischman 1990: 67).

The ambiguous status of the term grammar is apparent if we compare with a passage from the conclusion:

At each level of the hierarchy, a slightly different grammar emerges through the interaction of referential meaning and context. (Fleischman 1990: 313)

The result of this is a profound unclarity with respect to criteria for saying that something is a property of the language, as opposed to saying that it is a property of the interpretation of the text. More precisely, Fleischman's account is based on the assumption that the traditional reference-based semantics and the traditional view of structure
must (more or less grudgingly) be understood as constituting the core of the system (Fleischman 1990: 16-18). Because of the poverty of this conceptualization of linguistic structure, the more interesting properties are then located in three superimposed "levels of the linguistic system" (Fleischman 1990: 57 — my italics): the textual, the expressive and the metalinguistic levels. These categories are explicitly based on Halliday, in whose thinking there is a similar unclarity between the criteria for positing linguistic as opposed to contextual distinctions (cf. above p. 302).

Because of her assumption that the grammatical properties of tense reside primarily in their referential potential, Fleischman views the present tense as basically unmarked, i.e. as being able to refer across the whole time line. At the "textual" level, the present tense is then described as having the function of foregrounding. With respect to textual interpretation, I wholly concur in Fleischman's description:

By abandoning the distanced, dispassionate posture of the historian and representing material in the fashion of an eyewitness observer, a narrator communicates to an audience that the information reported in the PR[esent]-tense clauses (event or description) is worthy of attention. (Fleischman 1990: 195)

What I do not agree with is the establishment of separate structural levels to accommodate these facts. If we have the right description of the coded meanings and their way of co-operating with each other and the context, we need not duplicate the explanation in the description of the language system. The narrative use of the present arises because a present tense instructs the addressee to understand the state-of-affairs as applying here-and-now, but in a situation where the story space takes over the role of point-of-view from which time reference is fixed; the point-of-application for a narrative present is therefore "story-now".

One reason why the extra levels might appear attractive is that if the foregrounding effect is located at a special textual level, we can leave the referential properties as they are in the bottom grammatical level: the pedestrian grammatical properties remain, while tenses soar on the narrative wings at the higher textual or expressive levels. This makes it unclear what restrictions, if any, there are between properties at different levels. But a rejection of this account has to face the same challenge that Fleischman took up: how can we reconcile the co-existence of referential properties and text-foregrounding properties at the same level of the system
(coded meaning as syntactically organized in the clause) if tense shifts in an apparently random fashion?

The answer that I have already adumbrated is to be sought in the ambiguous context that narrator and narratee share. Above I described two recurrent patterns which reflect different, but internally consistent points-of-view: one pattern with the epic preterite marking a measure of "objective" distance, and another pattern where you start with a situation-based past and continue with a story-based present tense. In the literature that Fleischman took as her challenge, tense shifts much more frequently and apparently randomly — this is why they call for the careful textual interpretation that she provides. However, neither the consistent patterns nor the unpredictable shifts are guaranteed by the language system: it is the speaker's responsibility to decide in what way he is going to view the narrative space. Is he going to take the story-now as his temporal basis or the situational now? Sticking to the same perspective is the result of the same sort of consistency that enables a speaker to stick to the same opinion with respect to a problem: if the speaker has thought about it, understands the problem well, and has arrived at a settled position, he is likely to be consistent about it; if he is answering on the spur of the moment, does not have the facts well ordered in his mind, or would like to live out the conflicting possibilities in his reaction, communicating on an expressive as well as an intellectual level, he is likely to be less consistent. But one factor which needs to be considered in its own right is the role of fictionality.

4.5.5. Tense in fiction

In everyday narrative we have a real temporal context that constrains the tense choice. In fiction, the situation becomes even more complicated. Fiction is by definition decoupled from the actual discourse world; the point-of-application does not exist independently of the text itself.
The implications of this basic lack of contextual functionality are a foundational subject in the theory of literature; and one aspect of this concerns the directionality of meaning that reveals itself in the phenomenon of presupposition: literature involves systematic presupposition failure (cf., e.g., the discussion in Kock 1978). The fact that coded functions need to be understood against a presupposed basis gives the process of interpreting fiction a special dimension — which must be understood as special precisely because the ordinary words are used with their ordinary meanings, but in very special circumstances. In ordinary communication, the real-world situation is the source of the baseline information, and coded meaning adds to it as we go along; in fiction there is no such bedrock to build on.

Even in ordinary communication, we sometimes use the linguistic material used in an utterance as a source of information about the speaker's point of departure instead of paying attention to what the speaker is trying to say. Such a connotative reversal (cf. pp. 208-209) is non-cooperative if used by the addressee in a normal situation. In fiction, however, this is standard. If we look at the most unconstrained position, the beginning of a work of fiction (before the fictional world has acquired a logic of its own that constrains the author's choices), we can see the way in which coded meanings in fiction basically do double duty. In normal communication, only the functions are active, whereas the baseline information is already there. In fiction, the reader has to deduce the baseline situation while adding the coded function.22 To take a simple example: the first utterance in Mark Twain's The Adventures of Tom Sawyer is

(277) Tom!

Ordinarily, such an utterance would be perceived together with its speaker, who would therefore be situationally given. The function of the utterance, i.e. the act of summoning Tom, would therefore stand alone. But since the reader of the book knows nothing of the speaker beforehand, the interpretation of the utterance produces both a function and a basis element: a situational basis including a speaker, plus the communicative function of the utterance. The situational basis is being reconstructed, and by means of the very utterance that presupposes it — a process which could be called "fictional bootstrapping".

This bootstrapping process uses all the cues provided in the text; tense, however, plays a particularly important role. To some extent the absence of a real-world time line with past and present time means that the dimension which corresponds to the deictic tenses is lost (cf. Bache 1986). But this again means that the narrator can instead use tenses to manipulate the process whereby the narratee reconstructs the situational basis. An illustration would be a novel which began
(278) It was just after the third world war

Instead of (as in normal functional discourse) being able to presuppose the narrator's location as well as a past point-of-application, the reader has neither aspect of the ordinary situational basis available. As part of the interpretive effort he thus has to set up both the narrated space (to which the statement applies) and a narrative point-of-view (from which the message comes). In this case we have to construct a past which includes a third world war and a narrator who looks back on it.

Normally tense has a function (instruction-for-application), and some contextual presuppositions derivable from the principle of sense (if the instruction-for-application is to make sense, there should be a situation of speech on the basis of which we could unambiguously identify a point-of-application). In fictional discourse the reader instead uses tense to help reconstruct all the missing elements. Because of the double duty, tense becomes a much more powerful tool; but crucially, this can only be understood if we look at the ordinary meaning and match it up with the extraordinary context. Setting up a special literary meaning would be missing the whole point.

The situational basis normally constitutes, or determines, a presupposed point of view from which the narrated story is understood. Because tenses in ordinary discourse reveal the temporal point of view from which the events are viewed, the tenses are powerful cues for the (re)construction of a canonical point of view for the narrated story. And this is not simply a matter of finding out what is going on. In everyday communication, the situational basis is very important for the way a story is received. Therefore, the point of view that is built up in telling a story is of paramount importance for the narrative experience as a whole. The most obvious example is the use of the elements of nearness vs. distance.
The standard epic preterite puts the narrated space at a contemplative distance from the narrator; in fiction, the narrative present has special possibilities, which can be illustrated with a (fictive) fiction beginning:

(279) He is coming towards me and I have about five minutes left before the bomb in my suitcase goes off

Here the reader reconstructs a situational basis in which the narrator is telling his story from a point in the midst of the ongoing series of events that he is narrating. In contrast to everyday narrative, fiction permits a
situation where the difference between narrator as a person and the narrator as source of the act of narration disappears: the fictional narrator can go on till the bomb goes off and then be blown to pieces, an experience which an everyday present-tense narrator is spared. As before, we reconstruct both a fictional point-of-application and a fictional narrator in order to be able to perform the application that is coded by the deictic tense. In this case, we identify with the total immediacy that is communicated by the combination of the present and the progressive; but as before, the textual significance of this choice can only be properly understood if we see it as arising from the perfectly ordinary coded function, operating in a very special type of context.

The problem I am addressing in Fleischman's thinking about linguistic structure comes out in relation to tense, when she says (in discussing the "emergent" conception of grammar, Fleischman 1990: 13) that the negotiability of grammatical structure explains "how tense shifting came into being in the first place". In the same context, she talks of tense shifts as indicating "ungrammatical" tense usage (inverted commas by Fleischman). The main reason why this is wrong is that it burdens the linguistic system with the responsibility for the tense choice. In two consecutive sentences, the choice of tense is connected only by a relation of consistency, not grammaticality (cf. the discussion of the upper limits of grammatical structure above p. 242). But there is a further error, subtly different from the first, namely in presuming that vagaries of use must reflect vagaries in structure. Both problems reflect a conflation of language structure and language use. As a functionalist, one should not wish to proliferate grammar beyond necessity: if there is a plausible contextual explanation for a linguistic phenomenon, there is no reason to set up more levels of the system to account for it.
Tense shift comes out as a phenomenon similar to other cases of ambiguous situations: even in a system with very well-defined rules of address, there will be persons with ambiguous status. This situation is clearly distinct from a case (such as with the Danish polite pronoun-of-address) where the norm itself is in a state of flux.

However, before we can handle the issue of tense-shifting definitively, we have to consider the element of coding that also operates in relation to the conventions that arise within literature. To some extent, special literary conventions can be regarded as independent of linguistic conventions; whether taboo words are used in literature or not is of limited interest for the description of any given language. But to some extent words acquire special "genre" affiliations. The discussion of connotative reversal is
relevant here. The fact that the speaker uses for instance French is normally designed to be invisible, because that is an external fact about the code as a means of expression: what you are standardly supposed to be attending to is what the speaker uses his (French) words to communicate to you. But as emphasized above, connotative reversal may become part of the coded meaning, as in cases like nigger. Because of this mechanism, the contextual affiliations of special tense uses may be absorbed into the coded meaning of the tense forms themselves. As literary genres get under way, conventional links arise between the genres and those expressions which have become customary as vehicles for them. Therefore the theory of tenses as associated with certain forms of narration cannot be reduced to ordinary coded meaning plus fictional context — there are indeed special linguistic conventions associated with established literary genres.

The most familiar example of this phenomenon is perhaps the role of the passé simple in French, which has acquired a distinctly literary flavour. Fleischman (1990: 123) suggests that "as part of their linguistic competence adult speakers possess a typology of narrative forms". In Fleischman's terms, the development of the passé simple has resulted in a privileging of a textual function at the expense of its referential function. She quotes Roland Barthes (1953): "obsolete in spoken French, the preterite, which is the cornerstone of Narration, always signifies the presence of Art; it is part of a ritual of letters. Its function is no longer that of a tense"23 (cf. Fleischman 1990: 31).

I think this is an important half-truth. The connotative link between the passé simple and the grand tradition of French literature has been gradually strengthened, as the form went out of use in other contexts. You can no longer use it without in a measure assuming the voice of the French academy.
But the referential properties of the passé simple are intact, as reflected in the massive literature within discourse representation theory on its special properties in narrative sequences (cf., e.g., Vet and Molendijk 1985). What has happened is not a war between the referential and the textual "levels"; the relation between tense forms and their conventionalized uses in special literary forms exists as an aspect of their coded functional potential, i.e. as coded meaning. Seeing the connotative link as part of the meaning saves us from having to set up additional levels of structure: there is only one structural item, namely the perfective past. It just so happens that when you choose it you also choose to inscribe your
text — and your narrator — in a particular tradition, with a particular type of communicative effect as the result. Thus the awareness of literary form is related to the linguistic system as an aspect of the functional potential of those linguistic expressions that are specially associated with literary genres. This is analogous to the way in which knowing the meaning of the word check as a whole utterance is bound up with knowing how to play chess. Lexically, language involves the whole world, in so far as we can talk about it; but we need not duplicate the structure of the social world in describing the structure of the linguistic potential.

4.5.6. Tense shifting in non-narrative discourse

With this in mind, let us return to the issue of tense shift. To illustrate why it would be dangerous to look for complicated narrative conventions in accounting for complicated patterns of tense shifting, and why it might be safer to put a large part of it down to the indeterminacy in the fictional context-of-interpretation, let me adduce the following data from another fictional genre, namely a page of a mathematics workbook:

(280) (a) Inham had 8,567 registered voters a week before the election. That week, 935 people register. How many voters did Inham have for the election?
(b) The fund drive raised $71,250 from 2,850 people. Everyone gave the same amount. How much was each contribution?
(c) A travelling salesman sells vacuum cleaners for $222. One day, he sold 320 vacuums. How much money did he receive?
(d) Each of 45 census takers is given a list of 15 houses to survey. How many houses in all will they survey?
(Mathematics unlimited pp. 168-169, Holt Rinehart & Winston)

We note in (a) that the present tense signals the decisive event, and therefore is a motivated candidate for the "immediacy value" of the present; in (c), the general rule is in the ("universal") present, but the
events are in the ("narrative") past. Thus tense choice is not random — but to go from there to postulate a system that would predict exactly the choices made, and would therefore have to be as "emergent" as the use itself, would be to blur that distinction between langue and parole which makes it meaningful to speak of a linguistic system in the first place. This does not rule out that certain patterns of tense shift might become conventionalized in particular genres; but we would hardly want to have a special department of our linguistic competence to account for the way we use tenses when we set mathematical problems. The way to understand tense shift is then to look at the contextual motivation for choosing one perspective or the other, as Fleischman indeed does throughout — but we should not interpret the factors we find as constituting additional levels of the linguistic system.

An illustration of the conflicts that may arise in tense choice is provided by this advertisement in an airline magazine:

(281) It is no surprise that the most successful negotiation seminar in the United States would be created and designed by Dr. N.N. No other negotiator in the country has a similar background. Dr. N.N. has combined over 30 years on-the-job experience with advanced academic credentials in negotiation techniques.
After earning an Engineering degree from the University of Colorado and a Masters in Business from Columbia University, he became a negotiator for the Hughes organization. There he won the first Howard Hughes Doctoral fellowship award.

In the opening sentence of the paragraph, we go from simple present (is) to past future (would be), thus shifting from a point-of-application in the present to a point-of-application in the past. The past vantage point from which we take a peek ahead at the success story only becomes explicitly accessible in the second paragraph, when we look at how it all began.
The discrepancy can be explained by a clash between two aims: to stick to present-tense urgency and to look upon success as predictable from the past. The first incongruous sentence epitomizes this pincer movement on the reader; the rest of the first paragraph maintains the present perspective (look at the qualifications he has!), and then the following paragraph tears itself away from the present and goes back to the origins (obviously a man who began like that would have to end up as number one).


Newspaper stories have a tense-shifting pattern which is often opposite to the "standard" tense-shifting pattern in oral narrative: instead of first placing the story in relation to the time of speech and then switching to "story time", the headline describing the main event is often in the present, while in the article itself the same event is in the past tense.

(282) BASE CLOSINGS GET GREEN LIGHT BY COURT
The fast-track process used to trim unneeded military bases cannot be second-guessed by courts, the Supreme Court said Monday.
(USA Today, May 24, 1994)

This pattern can be seen as motivated by the function of headlines as opposed to follow-up stories: the headline may be the only thing the reader sees, and if so it constitutes an extremely simplified ongoing narrative from newspaper to reader: today's events pared to the bone — next instalment tomorrow. If the reader goes into the story itself, he is confronted with a more complicated world, where not every sentence (as suggested by Weinrich 1964) can bear the immediacy of the present tense, and the story is therefore told from a perspective of temporal distance. The simplification account can receive support from the fact that not only recent past but also immediate future events are told in the present in the headline, with the future coming in the article itself. The two types of "headline presents" may stand side by side; alongside (282) was found (283):

(283) BIG 3 NETWORKS LOSE 12 STATIONS TO FOX
In the biggest-ever realignment of TV affiliates, Fox — the feisty fourth network — will get 12 stations now allied with CBS, ABC, and NBC...
(USA Today, May 24, 1994)

Again, the descriptive format I see as the most natural is one in which we look for the motivation in the meaning that is shared between all usages of the present tense, and add genre conventionalizations as specific superimposed aspects of its functional potential (on headline language, cf.
Swan 1980); Danish newspaper headlines do not use the present about past events to the same extent as English, so this is a separate fact about English usage which is not predictable from our general knowledge about what tenses mean.


This understanding sees the wider communicative potential associated with tense as being composed of a quite narrow structural account, a broad view of coded meaning which includes genre conventions, and finally a "product" account that relies on the interaction of coded properties with contextual circumstances. The central logic of the assumptions about tense shift as a matter of communicative choice can be illustrated in relation to language learning. One of the most pervasive "errors" in written English in the Danish school system at all levels is the apparently random shift between past and present (as pointed out in the annual reports of external examiners). If we approach the issue from Fleischman's point of view, we would be tempted to look for signs of emergent flux in the system, and look for the different textual and metalinguistic properties of the learner texts in order to find the underlying patterns of use. I am sure that one could in fact find such patterns; but I am also sure that they would reflect the ordinary semantic properties of the tense forms, in a situation where the users are linguistically insecure. Likely reasons of insecurity include the double context inherent in the narrative genre, the strain of writing in a foreign language, and the fact that doing school assignments is not necessarily perceived as a sense-making activity. According to the theory I have outlined, the problem lies not in the system but in the degree of consistency and expository competence the pupils are capable of keeping up in writing. In fictional contexts, we should therefore not see the error as reflecting inherent incompetence, keeping in mind the narrative tradition described by Fleischman and the mathematics problems above. 
Tackling the problem (where it is a problem) is part of the job of teaching textual cohesion, not of teaching English.24 The value of a fairly limited theory of language-as-such in all wider contexts where language plays a part has both an internal and an external aspect. Internally, with respect to linguistics, it lies in the classical virtue of simplicity of description. Externally, in addition to the virtue of simplicity, it has the merit of limiting the reliance on language-as-such in seeking an explanation for what happens in discourse, emphasizing that in spite of its pervasive importance in human life, language is basically the medium rather than the message.

5. Conclusion

5.1. Overview

In the introduction, I claimed that the linguistic significance of the foundational discussions in Parts One and Two would become clear in Part Three. I now want to sum up what I see as the results of this overall strategy.

The point of the discussion of meaning and structure is to avoid certain unfruitful dilemmas which have continued to plague post-structuralist linguistics. There are two positions which have been particularly damaging: with respect to meaning, the entrenched association of semantics with representational as opposed to functional meaning; with respect to structure, its entrenched association with autonomy. They are responsible for the meaningless conflicts between structure and function, between pragmatics and semantics, and between syntax and semantics. Once such conflicts are taken for granted, the confusions they generate survive no matter what side you choose. The alternative picture I have outlined is designed to replace the conflicts between structural, semantic and pragmatic interests with linguistically appropriate patterns of collaboration between them. The description of tense illustrates how linguistic description can be improved by the suggested integration, in terms of which function is the basic concept both in relation to meaning and structure. This picture replaces the accepted picture where structure and representation are basic, and function is superimposed like an overcoat when you go from the cosy den of language into the great outdoors of pragmatics.
An important aspect of this suggestion is that the mental directionality of approach is reversed: rather than begin with traditional abstractions, largely imported from philosophical positions that are no longer compelling, and then cautiously move in the direction of actual usage, we should take the whole field of language use as the basic phenomenon, and then describe the role and contribution of language-as-such as only one aspect — although a very interesting one — of this whole domain.

5.2. Meaning

The classical assumption was that language acquired meaning by reflecting the world. In the tense tradition, this is reflected in the pattern of
description that begins by drawing a time line, supposedly representing the ontology of time, and then accounts for tenses by plotting them in somewhere along the line. In contrast, I tried to show that while tenses certainly have ontological and truth-conditional implications, these can only be understood as the result of semantic properties (meanings) which are not in themselves reflections of ontological or truth-conditional categories. The most salient consequence is that neither event time nor reference time has any place in the description of tense meaning — although they are of course central for time reference. The elimination of reference-based concepts from the semantic vocabulary made it possible to see referential ambiguities as resulting from simple meanings interacting with complex contexts. Further, a truth-conditional approach to tense is inappropriate from a linguistic point of view because the notion of truth only becomes linguistically appropriate with the function coded by deictic tense. Therefore we have to base a description of the linguistic role of truth on a prior understanding of tense, rather than base a description of the linguistic role of tense on a prior understanding in terms of truth. The deictic tenses also resist analysis in terms of a conceptual view of meaning: like other indexicals, they cannot be accommodated within a purely conceptual picture. In this, they exemplify the way in which functional systems respond to external factors, instead of being part of a purely mental universe.

5.3. Structure

The concept of function-based structure is designed to avoid two mistakes with respect to linguistic structure: claiming autonomy for it (as in structuralism), and eliminating it (as in certain varieties of functional linguistics). The description I suggest for English tense reflects this position in several ways. First, the structural properties are motivated by the functions (coded meanings) of the tense elements: scope relations and meaning must be understood as interdependent properties. Secondly, the expression side of syntax dances to the tune called by the content side: the way in which expression mechanisms interact can only be understood by reference to the content relations that they are designed to code. Thirdly, the structure is no more "clean" than the content elements: the pure future in English has a fairly messy status, because it is coded as a deviant element in a pattern where modality is the dominant substance element. Nevertheless, we cannot
describe English properly without capturing precisely that messy position which future meaning has in the structure of English — which I tried to profile by comparison with French on the one hand and Danish on the other. What the learner must know includes the precise boundaries between the territories covered by the individual content elements that constitute the options. Each content element usually constitutes a "bush" of polysemously related sub-options with a pattern of internal paradigmatic relations — and any idealized tense system captures only one dimension of it. This complicated pattern of actual language-specific facts can be naturally seen in a diachronic context. But diachrony does not have the last word; it is necessarily only one half of a dichotomy. First of all, the developmental path can only be revealingly understood in the context of the synchronic relationship between the different meanings involved, especially as they are distributed differently on the scale from the functional to the conceptual end of the hierarchy. Secondly, diachronic paths are not invariable; the retreat of the present perfect from the "hot news" variant in modern American English is not predicted in the "standard" trajectory of perfects (Bybee in discussion 1993). In relation to most of the tradition, I see the combination of (functional) meaning and structure as the chief contribution my account has to offer. Tense as traditionally conceived, and explicitly defined by Comrie, must satisfy two criteria: one based on the domain of meaning (time reference), another based on structural criteria (only "grammaticalized" items count as tense morphemes). Beyond the existence of this structural "entry criterion", surprisingly little attention is devoted to structure in the description of tense. 
Each traditional tense tends to be described essentially on its own; and this must be seen in the context of the traditional isolation of syntactic structure from meaning which was taken over and modernized by generative grammar. Because my account sees the organization of meaning as the central structural property of language, structural and semantic properties can be related to each other; because it sees meaning as functional, it can incorporate the way meanings draw on situational context in a theory of language-as-such. As one result of this approach, the traditionally central properties of time reference come out as a by-product of the three vital elements in language description: individual meanings, structural context (mainly content syntax), and situational context (including cognitive routines).


5.4. Survey of times

It may be helpful to sum up the total number of times that this account involves and their status in relation to semantic structure:

Situational times:
1. Topic time (Time implied by the discourse topic)
Narrative variants: Rpt, Rpt + 1
2. Utterance time (Time of the communicative event)
Narrative variant: time of the point-of-view implied in the act of telling the story. Note that the point-of-view can be distant even if narrative space is viewed as simultaneous with reality space, as in the narrative simple present.

Tense times:
1. Function times:
a. S/P (= application times)
b. F for future
c. A for the anterior location coded by the perfect
Function times are aspects of the meaning of the tense morphemes. The function times of tenses do not add temporal specifications — they insert states-of-affairs in temporal contexts, thus presupposing one or more points of view from which they are seen.
2. Base times:
Like all acts, coded meanings presuppose a baseline against which their function operates. The base times are the temporal points-of-view that must be presupposed in order to view something as ahead, anterior, etc. (ahead in relation to what?); and the role of the base time is therefore dependent on the nature of the function time; "look-ahead time" is different from "time of reckoning", because of the semantic differences between the future and the perfect. Base times are established as part of the process of interpreting the
semantic structure, working from the situational end downwards in the scope hierarchy, beginning with the situational utterance time, then the deictic time, etc.

Adverbial (function) times:
1. Temporal specifications: as part of the layered scope hierarchy, adverbials add temporal information to the state-of-affairs, in the case of definite time adverbials simultaneously anchoring it outside the conceptual universe.
2. Topic-adverbial times: apart from adding information to the state-of-affairs, time adverbs can indicate or specify the temporal location to which it applies.

This system predicts the relevant time-referential properties without building on any a priori reference-based system of times such as Reichenbach's. The only one of Reichenbach's times that is coded into language systems is the time of speech (S), which is an aspect of the proximal deictic category; one of the senses of "reference time" is built into the working of content syntax, and thus has linguistic status, but without needing to be stated separately as an aspect of individual meanings in the tense system. By including the syntagmatic interaction between tense meanings and other meanings in the clause, primarily time adverbials and subordinators, we can get the language itself to organize all the relevant times for us — without having to postulate any external theoretical constructs in the shape of features, time points and periods, etc.
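The top-down procedure just summarized — beginning with the situational utterance time and letting each tense operator establish the base time for the operator below it in the scope hierarchy — can be sketched in miniature. The sketch below is my own illustrative scaffolding, not part of the book's apparatus: the names (Time, interpret) and the integer offsets are invented for the example, and the offsets are only ordinal stand-ins, not a time line of the Reichenbachian kind the text argues against.

```python
# Illustrative sketch only: interpretation starts from the situational
# utterance time, the deictic tense fixes the application time, and each
# lower operator (future, perfect) locates the state-of-affairs relative
# to the base time established by the operator above it.

from dataclasses import dataclass

@dataclass
class Time:
    label: str   # role of the time in the interpretation
    offset: int  # crude ordinal stand-in: 0 = utterance time, negative = earlier

def interpret(operators):
    """Walk down the scope hierarchy, outermost operator first.

    E.g. ["past", "perfect"] for 'had left'. Returns the chain of times
    established on the way down; the last one locates the state-of-affairs.
    """
    base = Time("S", 0)          # situational utterance time
    chain = [base]
    for op in operators:
        if op == "present":      # proximal deixis: apply at the base time
            base = Time("application", base.offset)
        elif op == "past":       # distal deixis: apply before the base time
            base = Time("application", base.offset - 2)
        elif op == "future":     # look-ahead from the current base
            base = Time("ahead", base.offset + 1)
        elif op == "perfect":    # anterior location, reckoned from the base
            base = Time("anterior", base.offset - 1)
        else:
            raise ValueError(f"unknown operator: {op}")
        chain.append(base)
    return chain

# 'had left' = past + perfect: the event is anterior to a past application time
print([t.label for t in interpret(["past", "perfect"])])
# → ['S', 'application', 'anterior']
```

The only point of the sketch is the ordering of operations: the base for the perfect's "time of reckoning" is whatever the deictic tense has already established, which is why 'had left' and 'has left' differ only in the first step of the walk.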

5.5. Conceptualization embedded in interaction

Briefly speaking, the story of linguistic meaning can be summed up in the formula: conceptualization embedded in interaction. But as I have tried to show, it does not work in terms of a dichotomy into a pragmatic envelope and a conceptual content: there is a functional element in the core of the language system itself, including the syntactic organization of meaning,
which is illustrated in the analogy with hierarchical organization of tool use: operators do something to operands, whether the operand consists of food or meaning. In linguistic structure, conceptual content is gradually knocked into shape by the higher-order functional meanings. Function and conceptualization collaborate throughout the clause structure. As we move towards the "operator" end, the purely functional aspects of meaning dominate, and remaining conceptual elements are increasingly "subjectified" — while the conceptual content dominates at the "operand" end, where the functional contribution is modest and concept-imbued. As we move up the hierarchy, more and more of the conceptual content is in place and more and more of the remaining semantic tasks involve what we do with it. The relations between meanings in the clause can therefore not be understood only in terms of a bottom-up process of meanings that are stitched together into a whole quilt — it must be understood in terms of a top-down differentiation of a whole well-formed utterance meaning, which constitutes the (evolutionarily and ontogenetically) basic locus of linguistic meaning.

The cline from a predominantly functional end towards a more conceptual end of the scope hierarchy is clearly reflected in the area of tense. Deictic tense is purely functional, invoking the act of application only. The future has conceptual content, but one which is dependent on the interactive act of adopting a look-ahead perspective. It can therefore not occur in a non-finite, i.e. purely descriptive form but must be attached directly to a finite form. Finally, the perfect is fully conceptual and can occur in non-finite forms.

5.6. Semantics and pragmatics

With the peak-foundation model replacing the core-periphery model of the relation between semantics and pragmatics, the point of departure for a description of tense is the whole process of communication in which tense forms do their job. The basic phenomenon available to us is that of language in action, and the specific task of the linguist is to try and describe the narrower role and nature of language-as-such in the picture. This view serves as a basis for criticism of descriptions where the capacity of tenses to have certain types of parole-functions is transferred in an unprincipled way to the description of the linguistic potential as such. Tenses play a role in narrative structures, both in the build-up of textual
progression and in assigning differential status to different elements in the story. Tense also plays a role in the creation and organization of mental spaces in structuring understanding of discourse generally. It also influences what time the conversation is understood to be about. In none of these cases, however, is the relationship between discourse function and linguistic potential as direct as some pragmatically oriented accounts would have us believe. Thus, from Reichenbach's R to Klein's "topic time", it has been thought that in order to have a pragmatically adequate theory of tense, the "time we are talking about" must have a central part to play. The situation, however, is very similar to what is the case for the notion "topic" in languages without a regular topic marker: we should resist the temptation to say that the functional importance of topicality in itself qualifies it for independent status in the description of the language — instead, we should explore how the existing linguistic categories, interacting with context, manage to convey it. The topic time will often be the time of the identifying tense, since the application time will typically be the time we are talking about; also, it will often be the time indicated by a time adverbial. However, neither is necessarily true; the present future and the past perfect are cases that permit more than one option for topic time. If we make the past perfect ambiguous in order to have structural categories that reflect topic time unambiguously, we force a distinction onto the language which it happens not to make — and which no past perfects that I know of ever make. We therefore misrepresent the delicate interplay between coded and contextual information by forcing context-motivated distinctions into structure.

Langue is an abstraction.
What gives it some hope of serving a useful purpose is the fact that language-as-such arises as the result of a process of abstraction both for the linguist and for the ordinary speaker. Unless speakers habitually distilled something out of concrete usage events and let it play a role for the way they used linguistic items next time, language as a social institution could not exist. With this mode-of-existence, the surprising thing is not that language is variable, but that it is so relatively stable. Some elements in the area of tense are more pervasive than others, and these include the structure-determining semantic properties: the application element of the deictic tenses; the distal element of the past and the here-and-now of the present; the categorical aheadness of the future; and the time of reckoning for the perfect. For all practical purposes — teaching,
language description, investigation of social life by means of language — an awareness of the relative invariance of certain aspects of the functional potential of linguistic elements is a useful thing to have. If this study has helped to show how it is possible to look for structural regularities in a way that simultaneously highlights the functional embeddedness of language, it will have served its purpose.

Notes

Part I, chapter 2

1. The terms ontology and metaphysics are both used about this area, with "ontology" focusing on the inventory of elements, "metaphysics" on the activity and products of the human mind when asking questions about the structure of the world.

2. What the early authors had to say about meaning as a property of language "as such" is generally in the nature of "stray remarks" (cf. Ullmann 1962: 1), insightful but unsystematic.

3. It should be said explicitly that in stating the Platonic positions in the form of doctrines (as I am doing) one is ignoring the nature of Plato's writings. The dialogue form, however one-sided the argument sometimes appears, always leaves it to the protagonists to draw the conclusions, and taken together the dialogues do not constitute a monolithic set of views. What I have tried to sum up is the position which is historically associated with the name of Plato, and which turns up time and again in later mental history. An ironic fact is that whereas Plato used the distinction between the eternal underlying order and the chaos on the surface to criticize the ruling (dis)order, the powers that be have generally found it useful as a way of justifying the way things were: deep down everything is in order.

4. Cf. Phaedo 102b: "... the forms exist individually and ... other things participate in them and derive their names from them."

5. The nature of language is taken up in relation to the sophists, because their notorious dexterity with words documented the untrustworthiness of language, showing that language use belongs, like sensory experience, on the side of appearances rather than reality. If he is to be able to point out what is wrong with the way the sophists used words, Plato cannot claim that it is the Forms alone that endow language with meaning — then there would be no difference between a false and a meaningless statement.
He therefore has to leave a place in his theory of the power of words for "images" of a more shadowy nature. Plato had to counter an objection that the sophists used as a general escape clause from all criticism, derived from Parmenides' maxim that not-being must never be said or assumed to exist. The problem with this apparently innocuous doctrine is that if there is a prohibition against saying that there "is" something which "is not" actually there, we cannot speak of falsehood — since admitting the existence of falsehood means admitting "non-being" as something that exists (namely as something attributable to falsehoods). For a close analysis of the
position Plato works out and the problems it raises, cf. Siggaard Jensen (1980).

6. Aristotle's views vary somewhat across his writings, in ways which have been viewed as indicating a gradual development away from Platonic positions.

7. Passions or affections are also used to render the Greek term pathema.

8. Although it is unclear how much significance should be attached to this formulation (cf. the introduction in Ackrill 1963), the semiotic triangle is often traced back to it; the Stoic grammarians were the first to really work out a theory in which a "lingua mentalis" played a significant role. The reception history of Aristotle's theory as presented in Peri hermeneias is described in Arens (1984).

9. In the tradition, hypokeimenon, which is a perfectly valid term in its linguistic sense of "that of which something is predicated", was, by the mechanism described above, tacitly taken to designate an aspect of the world — which was also translated into substance (cf. for example Jespersen 1924: 75). Linking "being" with "substance", and letting everything else be "predicable of substances" smuggled meaning into metaphysics, raising the problem of the "unknowable substratum" (cf. Strawson 1959: 210n, Pinborg 1972: 61, 100).

10. I am indebted to Finn Collin for pointing this out to me.

11. The rationalist element can be seen from titles such as An Investigation into the Laws of Thought (Boole 1854); Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens (Frege 1879).

12. Empiricism takes many forms; the phenomenalist orientation towards sense data as the source of knowledge may dominate to a greater or lesser extent over the physicalist orientation towards the world about which the senses are supposed to tell us. If the phenomenalist orientation is taken to its extreme, the world as distinct from the sense data fades away altogether.
However, the spirit of empiricism that was part of the general scientific and technological development of the age in which formal logic took shape, generally took it for granted that the value of sense data was to serve as a reliable source of information about the world. The general attitude taken by logicians in the empiricist tradition (as well as by their opponents) reflects this: they are the "no-nonsense" representatives of pure solid facts. In the account given below, I take it for granted that the strength of the movement historically depends on this identification. This means that to the extent that there are problems in maintaining the notion of a world as independent from subjective sense data, this is a weakness in the whole empiricist project (cf. Juul Jensen 1973: 70). In giving an account of the logical tradition as an answer to the problem of meaning, I shall also try to show how the weaknesses of the account of meaning are related to the central weakness in the nature of the empiricist project.

13. It might be thought that this was formulated for logical subject expressions only; but even in the case of adjectives, Mill argued that denotation (as his
term was) was to be considered basic: "We may say, without impropriety, that the quality forms part of the signification; but a name can only be said to stand for, or be the name of, the things of which it can be predicated" ([1970]: 14).

14. Frege understands both logic and mathematics as being about aspects of reality, and argues in a dedicated manner against those who see these disciplines as a matter of arbitrary manipulation of symbols (1903, par. 86-137).

15. Russell's orientation towards the pursuit of knowledge can be illustrated by the following quotation:

I came to philosophy through mathematics, or rather through the wish to find some reason to believe in the truth of mathematics. From early youth, I had an ardent desire that there can be such a thing as knowledge, combined with a great difficulty in accepting much that passes as knowledge. It seemed clear that the best chance of finding indubitable truth would be in pure mathematics... (Russell 1924: 323).

16. Logicians who assume that logical subjects get their meaning from the thing they stand for had problems with logical subjects like "the golden mountain". The Austrian logician Meinong had proposed a solution according to which pseudo-entities like the golden mountain and the round square were given a sort of shadow existence in order to allow such statements to be about something. This earned him the status of Russell's favourite example of logicians who lack that "feeling for reality which must always be preserved even in the most abstract studies" ([1952]: 97); he appears as whipping boy in several passages (cf. 1905, 1918, etc.).

17. The empirical untenability of this view of meaning even in the case of words for sensory impressions was pointed out by Wittgenstein (1953). The intuitive plausibility of a perception-based theory of meaning lies in the shared external source of perception, as with a red thing that speaker and hearer are both looking at.
But how can the language user succeed in hooking words onto sense impressions in cases like pain, where there is no such shared external source?

18. This development ran parallel with a development within the model science of physics whereby the basic entities became less and less reminiscent of objects familiar from the ordinary solid world.

19. Without demonstrating what it would look like in detail, Dummett suggests that a semantic theory based on verification might stand a better chance.


Part I, chapter 3

1. The human observer tends to think markers like MALE are meaningful, but that is only because they are reminiscent of other items in the human language which he understands. Translating man into MALE, ADULT, HUMAN inside the machine is really no more semantic than translating it into "marker 1", "marker 2", "marker 3". One might also give up composition, like Fodor, and simply translate it into MAN, which again is neither more nor less semantic than man. Fodor's reply to the "needless duplication" involved in translating into another language is as follows. Assuming that the human mind is capable of representing the content of a natural language sentence to itself, we need to account for both the natural language sentence and the mental representation. There is no empty translation or duplication, only two similar, but necessarily distinct representations, neither of which consists in meanings as such — both are symbolic representations.

2. Although Putnam originally formulated the position of computational functionalism, he also (cf. Putnam 1977, 1988) formulated the reasons why formal systems could not achieve the aims they were designed to reach, cf. p. 27 above; and his criticism rules out both the feasibility of model-theoretic semantics in the form of Lewis and Montague and the syntactic theory of mind formulated by Fodor. The positions can be seen as points on a route away from classical positivism (cf. Putnam 1988: xii). First, Putnam shows that mental states cannot be captured in terms of physical substance - we need to look at the manner of organization. Subsequently he shows that the manner of organization is not enough - interpretation is necessary also.
Putnam's criticism of Fodor's attempt to establish a relation between symbols and the outside world is that any causal theory which tries to relate formal systems causally to their meanings cannot defend itself against possibilities of misidentification: if cows cause a tokening of the mental symbol 'horse', either this must imply that 'horse' in some cases means cows, or there must be something else besides the causation which keeps 'horse' from meaning cows, and which cannot be explained in causal terms.
3. There are two contrasting mistakes which are relevant in relation to the Chinese Room. One is the error Searle is after: to continue the behaviourist pattern of thinking in forgetting about inner states, letting a methodological stance obscure the ontological problem. The opposite mistake is to say that because machines are different from human beings they could never begin to think. Searle does not rule out "real" machine intelligence, saying that if machines began to think they would have to reproduce the causal powers of the brain; but some of his arguments lend themselves to an understanding in terms of which it is the substance rather than the structure which is important for thinking — and the

strong AI camp have concentrated their counterattack on that issue. Their point is that if you work patiently to make your computational process more and more like the workings of the brain, it can ultimately only be a metaphysical or religious matter for Searle to draw a line saying that there is one final property which you will never be able to mimic. Why this arbitrary No entry sign? Searle's distinction between the causal powers of an entity and the formal programme that it instantiates did not convince his opponents; there is no reason in principle why causal powers cannot be simulated as well (cf. also Gardner 1985: 175-76). Searle appears to be hovering on the brink of making a claim which is too strong, as when he is talking about a computer as not being made of the right stuff. Searle's view is that thinking and understanding are processes that are bound up with the physical whole that constitutes a human being, so that having intentional states is natural to people just as lactation is natural to female mammals (1980: 450). An instructive illustration of the ontological confusion through which strong AI can come to appear scientific can be seen in Hofstadter's reply to Searle (1980). Hofstadter, who is honestly disgusted with Searle's views, makes some observations about the sort of difference that he perceives between himself and Searle, calling it a "religious" difference. His own religion he describes (1980: 434) as that of someone trained in "physics and science in general", saying that it evokes in him a kind of "cosmic awe" to see how substantial phenomena fade away as one looks at them in increasingly fine detail; hence, "reductionism does not 'explain away'; rather, it adds mystery". Searle's religion he associates with the notion of soul, to Searle's understandable annoyance (since Searle's basic argument turns on the causal powers of the brain).
Hofstadter sees his own religion as firmly resting on scientific methodology, whereas Searle's is metaphysical in the bad old sense. However, Hofstadter's own metaphysics comes out in a subordinate clause in the beginning of his article (1980: 433), where he says that "it will always baffle us to think of ourselves as, at bottom, formal systems". The illegitimate step is that of going from a (let's assume) correct description of phenomena at the micro-level to a conclusion that the ultimate truth resides at just that level — that there is identity between what we are at the "bottom" of the levels of analysis, and what we "really" are (cf. also the discussion of emergence Part Two, section 2.2). The only way this could make empirical, scientific sense, would be if he had looked at a number of objects, some of which were formal systems and others which were not, and found that human beings belonged in the former group. But if Hofstadter castigates Searle for postulating properties that cannot be reproduced in a formal computer simulation, his world picture cannot accommodate a distinction between entities that can be characterized in terms of formal operations and entities that cannot. And if everything, by Hofstadter's scientific credo, can only be a formal system, we can say without even looking that human beings are formal systems, too. Similarly, in Dennett's review of Penrose (1989), he speaks of the "Cathedral of Science" (capital letters by Dennett) as embodying the beliefs of the scientific world view, and as supporting the case for strong AI. When the hard-nosed scientists begin to invoke religious metaphors, it is not unfair to assume that argument has moved outside the scientific arena itself.
5. This sense of the concept is generally traced to Brentano; but perhaps the identification is misleading (cf. Putnam 1988: 1).
6. The essence of his argument comes from the same sources: first, the fact that brain processes can be simulated on a computer shows nothing about the intrinsic nature of the brain processes, and secondly, if you take seriously the intrinsic nature of the mental processes generated by the brain (as familiar to everyone from their own mental life), they are clearly distinct from the processes going on in a computer; cf. on the latter point also Edelman (1992: 218-227).
7. There are several ways in which such an understanding might be spelled out. One possibility is a "business-as-usual" interpretation: we can continue to talk about things as before, with an awareness that truth-conditions, logical structures, other people and everything else in the world that we talk about are only accessible to us in the form provided by our cognitive apparatus. This would be fully compatible with (for example) a truth-conditional semantics: truth-conditions are conceptual constructs, like everything else. I take it that not many cognitive linguists would find this an interesting position to take.
8. The phenomenon of blindsight provides an interesting glimpse of a mechanism that is more primitive than perception but affects the organism in similar ways. Until it was described (by Weiskrantz and associates), most people would have assumed that in order to be oriented with respect to an object you would have to know about it — but blindsight, if occurring on its own in organisms without perception in our sense, would permit the triggering of motor routines without recourse to any intentional awareness.
9. This is a process of delegation: the higher cognitive control centre (cf. Johnson-Laird 1988: 356) stops worrying about the task because it has a reliable subordinate mechanism that can do the job. Automation means that a skill becomes "second nature", in that it is part of the way in which you react rather than how you think about it; skiing for the expert skier functions in principle just like flying for a bird or burrowing for a rabbit. But this does not mean that the intentional representation of what you have to do necessarily disappears. An expert skier has a wired-in competence; but if he functions as a skiing instructor he must also have a consciously accessible knowledge of how to do it which he can use to teach beginners. Conversely, an intentional representation may be impossible to dredge up from the neuro-cognitive
underground, as when you start singing a lullaby but the words refuse to report on duty. The decisive feature of intentional representations is not that they are always conscious, but that their typical qualities only reveal themselves as conscious mental content (we must assume this has a specific neurological correlate, but we do not know what it is). As described by Penrose (1989), the most sophisticated intellectual achievements typically arise as a result of mental processes of which the subject is only very partially aware. Unless the scientist sometimes woke up in the middle of the night because his subconscious mind had generated an insight that it could not wait to report until the morning, he would be intellectually crippled. The one thing we may safely say is that the relation between the systems is not simple.
10. The precise quotation is: Det er ganske sandt, hvad Philosophien siger, at Livet maa forstaaes baglænds. Men derover glemmer man den anden Sætning, at det maa leves forlænds. Hvilken Sætning, jo meer den gjennemtænkes, netop ender med, at Livet i Timeligheden aldrig ret bliver forstaaeligt, netop fordi jeg intet Øjeblik kan faa fuldelig Ro til at indtage Stillingen: baglænds. (Kierkegaard 1968: 61) I borrow the translation by Henrik Rosenmeier, produced on another occasion for Niels-Jørgen Skydsgaard: 'Indeed it is true, as philosophy teaches us, that life must be understood backwards. But because of this, the sequel is forgotten that life must be lived forwards. The more one considers this proposition, it leads precisely to the conclusion that the life temporal is never understood truly, precisely because at no moment am I granted sufficient quiet to enable me to adopt a position: backwards'
11. The fact that we cannot talk about background states without creating representations of them makes it difficult to pinpoint their specific nature: by directing attention to them you create a (secondary) representation of them, and what you then have before you is not the background state itself but an imitation which may frustrate the argument by having all the properties of the background state except the property of being a background state. One painful type of case in which the difference may remain tangible even when the background state coexists with an explicit representation is when someone close to you dies. That person's importance to you does not consist in a set of representations (although we also have secondary representations of ways in which he was important) — it is part of your life itself, not just of the cognitive picture of life. This is why we go on acting in ways that presuppose that the dead person is still there. "Oh — I forgot", we may say, but this is misleading in the sense that the death is still inscribed in our cognitive system. What happens is that our way of living as a whole relies on the missing person, and therefore we have to restructure not just our representational knowledge, but our whole life before the death has been fully absorbed.
12. A recurrent theme in Max Frisch's works is the mistake involved in making a rigid explicit picture of people instead of sharing the process of life with them.
13. This is discussed by Johnson (1993) in response to McLure's criticism of Johnson (1992). The debate between McLure and Johnson is about consciousness; but as pointed out by Johnson, that is not the crucial issue here.

Part I, chapter 4
1. Or rather the cognate nouns (cf. Harder 1978): in Searle's classic example, the analysis of promising, he rejects a number of cases in which the performative formula is used, because the noun does not naturally categorize these instances.
2. The discussion goes back to Plato's Cratylus.
3. This bottomlessness has been a bone of contention; Lewis, among others, has claimed that it was harmless, because each assumption need not be actively represented, while others have criticized it (cf. e.g. Sperber—Wilson 1986). In terms of the position argued in this book, the default attitude to conventions comes under the heading of "reliance" (cf. pp. 71-72), which is not actively stored; the bottomless assumptions only arise when you start reflecting (cf. for more detailed discussion Harder 1990c).
4. Before we even ask what function language has, we should ask whether language has a function at all. In theory, it could be a piece of evolutionary flotsam, like female orgasms according to Gould: very interesting in the individual context, but a side effect of other more significant developments, without a contribution of its own to make. Yet in the absence of candidates for more significant developments of which it could conceivably be a side effect, it is reasonable to assume that language has an independent contribution to make towards the differential fitness of man as a species.
5. On Edelman's theory, Millikan's cosmic accident would begin to have functional features the moment its neuron organization began to adapt to the challenges that faced "it" after being assembled in the likeness of a human being.
6. One could try to argue in terms of the "typical" individual: what would the child be missing if it did not learn language? One would then get a repetition of what applies to the species; a plausible general picture, but no knock-down argument. Inability to have the sort of human intercourse which language mediates would be disastrous in terms of fairly uncontroversial norms for what a human life is, but there are other things that the subject would also miss out on.
7. Only if quasi-experimental conditions recur, such as the loss of standard limbs in different types of animals that turn aquatic, can one be reasonably confident about it: saying that walking is the function of legs looks like a fairly safe bet. With human language, we have only one species to look at; and the various abilities that interact with language are too closely associated in human history to permit a serious differential argument about which is evolutionarily more crucial.
8. In the picture argued in this book, the dynamic nature of meanings implies that the linguistic meaning resides in the instructions alone, not in the representation that the addressee is enabled to produce on the basis of these instructions. Therefore the double reliance on truth conditions and instructions in discourse representation theory is rejected here in favour of a picture where truth conditions apply only after the representation has been set up — it is the finished representation which is designed to be matched with reality, and not the linguistic meaning of the utterance.
9. On the way to this task, the network was trained on an auto-associative learning task: to go from an input image to associating it with the same output image, and from an input label to the same output label; this is non-trivial in the sense that input representations are translated into other representational formats before being converted back.
10. In discussing the relation between linguistic meaning and conceptualization, it is important to be aware that the experiment does not cover everything about either of the two human skills. First of all, we have to distinguish between the operation of a computational neural network and mental processing in a human subject.
As discussed above p. 62, the simulation does not distinguish between the production of intentional representations and the production of purely physiological reactions. Conceptual skills as defined here involve the ability to produce the representation in the absence of direct environmental stimuli, as directed by the intention of the subject. We do not know what the neurological correlate of this skill is, but one thing we do know is that the structure is going to be more complicated than computer networks as we know them.

Part I, chapter 5
1. Bateson et al. discuss insincere forms of display as well as insincere verbal communication. The simplest form of conflict is exemplified by the case (Bateson et al. 1956: 257) where the mother displays hostile withdrawal while verbally expressing loving concern, saying you're very tired, I want you to get your sleep. (The theoretical account in Bateson et al. differs from the one I present in seeing the message I call "displayed" as "metacommunicative", but I do not think the two analyses differ in their implications with respect to this example.)
2. But if you cannot even form the intention of doing something that you prerepresentationally know to be impossible, what about the students who were told to go home and act like visitors in their own home? To understand what happened, it is essential to distinguish between the "university" and the "family" aspects of their action. It is quite possible to act in the sense of pretending to be something that one is not; and this is what the students did. Their action was not a way of participating in the family dinner (since that would be incoherent with the acts they performed) — it was part of a project that rejected the situation "family dinner" as the context of action in favour of the situation "sociological experiment" — a situation of which the rest of the family were not informed, and in which they existed only as guinea pigs, not as fellow human subjects. Hence, what the students did was indeed incoherent with the situation of the rest of the family, which is why it was so traumatic. What happened was the unmaking of sense for purposes of observation, rather like an anatomist cutting into live tissue to see how the organism reacts.

Part II, chapter 2
1. It is true that in looking at component-based structures, we can sometimes say that the structure will remain roughly the same even if we insert different elements into it: in many compounds we can replace sodium with potassium and get similar properties. However, it is the partial sameness of the potassium or sodium components that makes it possible to preserve higher-level sameness, not the structure as such. The properties that arise at higher levels always depend on the properties of the components.
2. The Saussure who did this may be something of a Frankenstein figure created by his survivors, cf. Gregersen (1991); but the effects on thinking about language that are associated with his name remain what they are, for good and for bad.
3. Because the word "form" is used also for the expression side of language, as in the pair "form-meaning", I use "structure" instead. Since form in this sense denotes the relational properties of a linguistic item as part of the system, the loss in the reformulation is hopefully minimal. In one context "structure" is less suggestive, however: when used about the carving out of linguistic content elements from prelinguistic substance.
4. From this point of view it is mysterious how the autonomy idea can be preserved at all — how relations could ever be seen as primary over the related items. The reason why Hjelmslev saw substance as presupposing form had partly to do with positivist views (cf. p. 171), but the way he upheld it in relation to language must be understood in relation to an unclarity in his thinking. Hjelmslev operated with a three-term opposition: form-substance-purport: "purport" designated the pre-linguistic stuff, and substance the stuff as formed by language. In that sense, form is presupposed by substance; but "purport" would logically also be presupposed, since substance consists in formed (structured) purport — and this is against Hjelmslev's thinking. In Hjelmslev (1943 [1966]), substans 'substance' and mening 'purport' are sometimes used (sloppily) as interchangeable terms — cf. the self-correction p. 70, l. 14 — reflecting the basically formalist orientation of the whole pattern of thought.
5. Chomsky discusses, as one situation that would threaten his picture of autonomy, the case in which there was shown to be total point-by-point correspondence between meanings of sentences and syntactic structures, and gives examples to show that such a claim (as embodied in Montague grammar) is untenable. However, as far as I can see, his notion of autonomy would be compatible even with total correspondence between sentence meanings and sentence expressions; this seems to me also to be implied in Newmeyer's argument, where any amount of iconicity can be captured by a generative description. The unease one would feel in case of total correspondence would merely be attributable to the fact that the description would be redundant, rather than false: one would still, if one chose, be able to describe all non-semantic levels fully "in terms of formal primitives" — if for no other reason than because there is no canonical discovery procedure constraining the postulation of formal primitives.
6. To mention some examples, it appears in his definition of "subject" in purely configurational terms (Chomsky 1965: 70): what is the subject depends on the derivational procedure, not on properties of object-language clauses. It is also evident in the concern with how to constrain grammatical theory. An object can be given any number of formal descriptions, if one's descriptive apparatus is strong enough, and transformational grammars can essentially generate anything (cf. Peters and Ritchie 1973). Having argued that a transformational grammar was necessary, Chomsky found himself with the risk of getting a metalanguage that was too strong to be truly revealing, and therefore set to work chiselling this metalanguage down to size — rather than work with an object language about which he wanted to say something true. This is of course a perfectly valid approach, as long as the two levels are kept clearly apart; but the types of constructs that went into this process were defined on the basis of properties that were much more clearly relevant in terms of the formal structure of the metalanguage, such as subjacency, etc., than in relation to properties of object languages, where their relevance has proved rather more difficult to establish.

7. I am indebted to Helge Dyvik for pointing out this passage to me.
8. We have seen some reasons why generative grammar was more successful than glossematics, and more may be added. First, most of the distributional relations which Chomsky based his calculus on were generally recognized as basic to sentence description; only the transformational level, which has later more or less disappeared, was new. Secondly, Chomsky's relations were properly formal in the mathematical sense, in contrast to Hjelmslev's dependency relations, which were defined in terms of the nature of the object, hence not really formal in the logical sense. Dependency relations, presupposing an object of description in which something either is or is not present, are not axiomatic, but begin by taking the object for granted, cf. Brands (1974).

Part II, chapter 3
1. Lyons (1968: 180) makes clear that "morpheme" as used in modern structural grammars is actually a distributional unit, not a form-meaning combination, thus making it consistent with the "neither-form-nor-meaning" approach criticized above.
2. Using the same analogy with the lexicon, Langacker (1987a: 12) argues against such a division, suggesting that expression and content are so closely associated that it does not make sense to prise them apart. While I agree that a division that sees the two sides as autonomous is misguided (cf. the discussion of Jackendoff below), I think an analytic separation is a necessary preliminary step towards investigating what the precise relationship is between the two sides.
3. The reason is that the aim of deriving all and only the grammatical sentences leads to describing elements in terms of properties that are inherently ambiguous between semantic and non-semantic properties — even if the description factors them out as neatly as possible. To take an example: In the unification formalism of "construction grammar" as described in the introduction by Fillmore—Kay—O'Connor (1992), the categories "noun" and "proper" are listed as belonging to the syntactic features, whereas "number" and the count/mass distinction are listed under the semantic features. In terms of the framework suggested here, all four features would be seen as having both expression and content aspects. Fillmore and Kay explicitly tackle the problem in relation to the issue of co-occurrence restrictions. The unification mechanism is basically a machine for grammaticality decisions: if elements have compatible specifications, they unify — if they don't, they don't. Thus if two elements are related by co-occurrence rules describable in terms of semantic features, the most obvious thing is to give them the same semantic feature specification. Admitting that in some cases it may sound odd, they argue that it is actually uninteresting to figure out what precise semantic properties each element has. Offering the murky case of agreement markers on verbs and adjectives, they suggest that if we do not know what we would use the answer for, we can save ourselves the trouble of figuring it out. It is tempting to follow their advice in the case of agreement; but let us instead focus on the theoretical implications of this stance. What they say is, in effect: we do not care what the elements mean, as long as we can provide them with a feature specification that gives the right distributional pattern. The account given in this book assumes that the priority should be the opposite: let's figure out how much of the syntax we can account for by assuming that syntax is basically about combining meaning fragments into meaning wholes, and see the distributional pattern as following from that. In the case of combination restrictions between determiner and noun, an example of this would be to say that the coded content of the series one, two, three ... does not involve the feature "countable"; rather, they have the function of counting — and therefore the noun with which they combine must be countable.
4. The criticism of generative structures is essentially the same that Hjelmslev made of the logical sign concept, in which the sign is only one-sided, thus ruling out commutation (cf. Hjelmslev 1948: 33).
5. On the meta-level the project has a great deal to recommend it. Our descriptive metalanguage must enable us to characterize the substance adequately, regardless of structural distinctions, and then specify where the structural distinctions are. On that level, the result might be a semantic metalanguage corresponding to the IPA for phonetics (cf. the remarks in Bybee—Perkins—Pagliuca (1994: 149) on this ambition in relation to grammaticization theory), which would be useful in standardizing terminology.
The traditional metalanguage is riddled with all the sources of error associated with using everyday language uncritically as a scientific tool, plus those errors that come in when elements of stipulative definitions are habitually imposed on the informal usage in a half-implicit fashion; one thing that her approach has already demonstrated is how certain popular quasi-technical terms (such as "direct" versus "indirect" in the analysis of politeness) have sown confusion instead of providing clarity. The main aim of Wierzbicka's project, however, consists in investigating meaning at the object-language level; and even if we look only at the purely conceptual meanings there are problems. If concepts are anchored in experience, then there is no reason to assume that "rich" concepts like a child's concept of 'mother' should be understood as inherently decomposable. Rather, its elements go together in creating a gestalt: it has many different aspects, which can be described by Wierzbickan paraphrases, but it is not reducible to a paraphrase. Therefore meanings are not "composed" of conceptual primitives — although conceptually primitive words are likely to be the least misunderstandable meta-tool available to use in describing them. Leibniz's reduction of entities to the sum of their properties presupposes a conceptual world reducible to bottom-up properties. Also, the project presupposes that all linguistic meaning is conceptual in nature and thus runs counter to the view adopted here. If we understand object language meanings as equivalent to declarative sentences formulated in the metalanguage, we eliminate the interactive dimension of meaning. The problem can be exemplified with purely functional meanings like 'declarative' and 'predication', which are not currently part of the basic inventory of universal elements; but unless they are presupposed, none of the declarative descriptions of meaning make sense (cf. above, p. 106).
6. Hjelmslev somewhere uses this example to argue that one instance is enough: there is a distinction between first person and second person in the English verb, period.
7. The feature "non-serious" as a general feature of games I owe to a paper (arguing against family resemblance in semantics) that I read once as an external examiner. Unfortunately I cannot recall the name of the author.
8. Layered structure is generally mentioned in the same breath as the adjective "hierarchical"; thus Hengeveld (1990a), which provides a general introduction to the layered format of description, is entitled "the hierarchical structure of utterances". However, the notion of layering is not wholly identical to that of a hierarchy. Standard examples of hierarchies include administrative organization in terms of departments with sub-departments, giving rise to a chain of command with one head at the top, a tier of executives who function as heads of their departments, and so on down to the lowest tier of employees. The most obvious example of this in language is traditional constituent structure.
Generative grammar, as also revealed in the pervasive metaphor of regimentation expressed in the terminology of "command", "government", "control" etc., started off with a clearly hierarchical as opposed to layered clause structure. However, there is no conflict between layered and hierarchical structure. When a new layer is added, this operation can be seen as creating a new constituent on a higher hierarchical level. Also, since there is an element of sub-layering associated with noun phrases (cf. Rijkhoff 1990, 1992), there is more than one core item, with associated layers involved in clause structure, adding a clear hierarchical element. The change from phrase structure rules to X-bar syntax reoriented generative structure in the direction of layering: each "bar" level can be seen as a superimposed layer; and as described in Siewierska (1992), there is considerable similarity between the actual structural levels postulated by current versions of generative and Functional Grammar. Foley and Van Valin's version of layering (cf. 1984: 78) focused on the innermost parts, and involved at the outset a distinction between the nucleus (containing the predicate, possibly complex), the core (containing the argument NPs) and the periphery (containing the "circumstantial" or, in Functional

Grammar terminology, the "satellite" NPs). As also pointed out in Van Valin (1990: 199), this is a more "syntactic", i.e. expression-oriented, approach than in Functional Grammar: for example, constituent order plays a criterial role in motivating clause structure (cf. Foley—Van Valin 1984: 284).
9. On the interpretation I shall defend in detail later, we get the four main layers by a semantic construction process, so that the layers are made up of content elements; but since I do not want to take that interpretation for granted, I do not use single quotes to indicate content elements at this point.
10. As argued later, combined elements may be co-lexicalized, so that they carry additional meaning together; in that way syntagmatic implicatures may become part of the coded meaning of the combination.
11. A reservation which should be pointed out in relation to the "recipe" view of meaning is that it may sound as if it assumes a speaker consciously trying to affect the addressee rather than "speaking his mind". As pointed out above p. 121, I ignore those aspects of the problem which are specific to the speaker perspective, focusing on those aspects of coding where the speaker can be assumed to have internalized those routines that he can expect to invoke in the hearer. Differences between egocentric and hearer-oriented types of coding strategies are not covered by this theory; to the extent languages differ in their orientation on this point, the recipe analogy does not want to take sides — and hence, of course, fails to capture that distinction.
12. This stage is not documented in the example.
13. Greenfield refers to evidence from aphasia conditions suggesting that manual hierarchical skill is typically, but not always, affected by the same disorders, and that both types of skill are located in Broca's area.
All claims of precise brain localization tend to be contentious, but the degree of relatedness in terms of brain localization is not necessarily crucial for the point made in the article. When the general capability of the brain permits the development of hierarchical skills, this may be widely distributed or localized; if it is localized, a further differentiation of hierarchical planning skills in which language gets its own place is a natural development given the overriding importance of language in human life. 14. The generative view of syntax can be rescued if you assume that the syntactic ability involved in pidgin communication is different in kind from that involved in real linguistic communication — as argued by Bickerton (1990: 118): a pidgin speaker reactivates the proto-language faculty shared with the apes, whereas a creole speaker flips the switch to "real" syntax, and there is no intermediate possibility. This is not a very attractive hypothesis (cf. also Studdert-Kennedy 1992); among Bickerton's favourite arguments for it is the existence of purely formal as opposed to semantic criteria for the interpretation of null elements — which Bickerton sees as the hallmark of real syntax (cf. Bickerton 1990: 111, 169). But as demonstrated by Köpcke—Panther (1991),

this generative assumption does not stand up to scrutiny: addressees of utterances in German like I promise you to be the first to get back interpret the utterance according to semantic plausibility rather than supposedly formal control relations. 15. Real processes are different for at least two reasons: the short-cuts which routine makes possible; and the hermeneutic circle (cf. Harder—Togeby 1993: 477) which implies that we need the whole in order to understand the parts, and the parts in order to understand the whole, so that the processing needs to be able to go through more than one round in order to get everything in place. But as argued above pp. 227-228, we can see the linguistic structure as being inherent in the processing task itself, while allowing leeway for deviation in the actual execution of the task. 16. David Kirsh has pointed out to me that in the case of language, the mental states need to have structure enough to sustain inferential relations. Since the components in final products such as hollandaise are non-recoverable, this would make the analogy too strong. The point in this context, however, does not depend on the existence of non-recoverability. Even if temporal location as an aspect of a message does not have a special hierarchical position, it is recoverable from the situation. 17. Hawkins (1994) suggests a theory of syntactic constituency and word order which emphasizes the importance of processing, and in that respect resembles the view of syntax presented here. Following the generative tradition, however, he sees syntax as autonomous in relation to the semantic input, while I see (content) syntax as the combinatorial part of semantics. 18. That would also suggest a solution to a problem that I will return to later, namely the problem involved in letting the expression of definiteness in Danish influence the underlying representation of the English definite article.
If we see operators as parts of abstract structures, this suggests a rather strong reliance on the existence of a level of abstraction that is equally good for all languages, and raises the issue of the Functional Grammar equivalent of the "Universal Grammar" hypothesis. However, if we see the "d" as a label for a content element, it embodies a claim that the phrases in English and Danish contain the same content element — a much more manageable claim. 19. Thus Dik would ultimately have the same problems of anchoring meaning that Fodor has. Dik's preferred version of a semantic theory is in terms of meaning definitions, partly specifiable in terms of meaning postulates (cf. Dik 1989: 81), partly by direct reference to natural kinds like "geranium"; but he does not specify how the relation to natural kinds would come about. The "language-of-thought" idea can take many forms; one striking aspect of Dik's way of thinking about it is that we can use the structures postulated in Functional Grammar for the "language-of-thought" as well as for natural languages, thus eliminating (as far as possible) the distinction between mental


processing and language processing. This idea is not generally shared among functional grammarians (cf. Hesp 1990; Nuyts 1992), but the similarity between the type of structures that go into generative grammar and functional grammar is clear. 20. In Hengeveld's version of the theory the standard examples of illocutionary operators are markers of mitigation or reinforcement of the basic illocutionary force. However, there is disagreement in Functional Grammar as to the status of the illocution. Dik (1989: 60) views the basic illocutions as operators in themselves. The semantic structure of clauses may be represented as going directly from states-of-affairs to illocutions: decl (state-of-affairs), or with the illocution as incorporating the element of application: decl + application (state-of-affairs). 21. I am indebted to Lachlan Mackenzie for pointing this connection out to me. 22. Actually, this point, to which we shall return, is terminologically slightly unclear: generally, in clearly cross-linguistic contexts the word gram-types is used; but the fact that "gram" is applicable at both levels is made clear by the adoption of an orthographic principle according to which language-specific grams "are designated by their proper names, which are therefore capitalized" (footnote 1). 23. To describe the status they claim for their categories, they invoke a parallel with the colour categories established in Berlin and Kay (1969). 24. The fact that I try to be precise about the difference should not overshadow the area of agreement, particularly the dual interest in expression and content; the types of fact investigated here are obviously congenial to the general approach argued here. 25. Among the main points made by grammaticization theory are the types of diachronic development on the expression side which involve progressive loss of independent status: loss of positional freedom — affixation — fusion; on the content side it is mirrored by loss of semantic specifications, as well as by types of development that can be captured in terms of the interests of cognitive linguistics, including metaphor (cf. for instance Sweetser 1990). Haspelmath talks of structures "intermediate between syntax and semantics" — a variant of the problematic concept of the "borderline" between syntax and semantics (cf. p. 196). Instead, they are structures which should be understood as having both content and expression. 26. For a discussion of the notion of "paradigm" in a theory of language that is structural and functional at the same time, see Heltoft (forthcoming).

Part II, chapter 4

1. It covers essentially the same notion as the concept of arguments 1 and 2 in Functional Grammar, but involves an explicit semantic analysis of the notions.

2. The similarity is striking when the relation between vowel and consonant in the syllable is classed with the relation between head and modifier. The difference between the manner of operation of the principle in the two systems is more or less due to the reversal of the priority between structure and substance: Hjelmslev's fundamental mistake was to try to define the identity of elements in terms of their co-occurrence restrictions, where the natural relationship is that co-occurrence restrictions are due to the properties of the elements as such. Nevertheless, Hjelmslev's practice of "squinting at substance" means that we recognize the same type of relations in many cases: root as autonomous with respect to morphemes, which are dependent; vowels as autonomous with respect to consonants, which are dependent; heads in endocentric constructions as autonomous with respect to modifiers, which are dependent, etc. 3. For a criticism of "syntax" as neither-form-nor-meaning, see Langacker (1987a: 316). 4. The standard argument for this is the one that asks how much knowledge of language can help the addressee. The person who did not understand what elephant ribbon meant in the context described above would be ill advised to take a course in advanced conversational English to avoid similar mistakes; at the level of linguistic resources, it is only the functional relationship between modifier and head which links the meanings together. 5. This is a rather humanoid element. Even Sarah, the cleverest of Premack's chimpanzees, never started asking proper questions, cf. Premack and Premack (1983). Available evidence suggests that chimpanzees are able to ask questions that have to do with desires — e.g. "where is my baby?", in a situation (I am referring to a TV programme whose credits I do not have) where the baby has been taken away because of illness. Such questions are difficult to tease apart from directives like "can I have my baby back?".
What would be interesting would be questions that clearly asked for information only. 6. I am ignoring the theoretical possibility of a language that was purely conceptual and where situational function was always assigned by contextual inference. 7. One of the questions that has been the subject of a great deal of discussion is whether the crucial missing element is a "theory of mind": whether apes see each other as having minds that it makes sense to address, or whether they just use their own mental powers egocentrically. 8. I agree with the description but prefer the word "reject", which preserves the link with situational rejection. The word "cancel" might suggest that the description is simply withdrawn, but what happens, as argued at length in Millikan (1984), is rather that a description is replaced by a complementary description — which can be captured by saying that not is used to reject a description, essentially as no! is used to reject a potential event.

9. The fact that topic NPs are typically more or less devoid of descriptive content (cf. Givon 1989) does not count against the basic structure of clause meaning; it just reflects the fact that we only code as much of the hierarchy as necessary in a given situation; compare the discussion of "ellipsis" in Part Two, section 4.8. 10. I am indebted to Jan Rijkhoff for making me aware of this issue. 11. This is in accordance with Jespersen's characterization (1924: 87, 114-15): he also regards finite verbs as the only really verbal forms, but associates the dynamicity as much with the "nexus" relation as with the verb in itself. 12. Even the third could in fact be left unspecified, if we take a slightly rising intonation contour mmm? as a possible way of asking a situationally obvious question. 13. Such double scope situations potentially occur in all cases where single elements have "natural" implications elsewhere in the system. As always, however, coding may or may not reflect this. For instance, the choice of a pejorative term anywhere in the system has implications for the interactive status of the whole utterance, but since there is no special clause type for pejorative utterances, such a choice does not have dual grammatical scope. 14. A more attractive idea (suggested to me by Ellen Contini-Morava) would be to see do as expressing 'non-assertion' — which is redundant in the context but meaningful as an expression of a category that is pervasive in English.

Part III, chapter 2

1. Here Bull echoes St. Augustine's views on the nature of time (Confessions XI. 17-20). After considering the possibility of seeing past and future as existing in themselves, St. Augustine reaches the conclusion that the only sense in which past and future exist is as conceived at present by the mind — but this view did not make its way into the grammatical tradition. 2. Bull sets up a purely hypothetical tense system based on his theory, suggesting that although in principle the number of possible axes is infinite, there are good reasons to suppose that the number involved in actual tense systems will not exceed four. The starting point is the moment of speech, at which the subject is actually experiencing reality; from that point of view he can project the axis of orientation either backwards or forwards, yielding a "recalled point" or an "anticipated point"; and as a fourth possibility he may set up a "recalled anticipated point". His hypothetical tense system has twelve options, because at each "axis of orientation" one has the choice between going back, going ahead and staying at the point. 3. It should be emphasized that this is part of a much larger theory, originally developed in relation to an account of Russian perfective aspect, but with wide-ranging implications for typological analysis.

4. Goldsmith and Woisetschlaeger use the distinction to describe the progressive, which Langacker handles basically in terms of unboundedness, suggesting (Langacker 1987b) that general validity is compatible with progressives. 5. Janssen (personal communication) has said that this version of the instructional approach is compatible with his view; more generally, the only significant difference between Janssen's views and mine (cf. the "context-of-situation" account in Janssen 1995) may be the issue of the centrality of time. 6. The episode is reported by Plutarch, using the Greek aorist form; he adds an explanation of this unusual usage, suggesting that it was common practice in order to avoid conjuring up the unpleasantness of the unmentioned event (Plutarch's biography of Cicero, chapter 22). 7. The point of departure for the argument in Herslund (1988) is a recorded misunderstanding in which a Danish past tense was taken as referring to a non-actual world: Det var på tide '(that was) about time, too'. This is a natural possibility because of the optional marking in Danish of both temporal and modal aheadness (cf. pp. 458-459), allowing it to mean something like '(If that were to happen) it would be about time, too'. In English, this option does not exist, and I doubt if "straight" past tenses could be understood in that way outside cases like I thought... which I would place under the "politeness" variant (cf. p. 348). 8. I suspect this interpretation makes it too attenuated; it is not quite clear to me how the account purely in terms of temporal relationship is compatible with the possibility of aborting the coming event. Also, the fact that the prospective is seen as analogous to the French passé composé, which has only a relation of anteriority left (Langacker 1990: note 11) suggests that the account may move the prospective too far in the direction of the pure future, in contrast to the account in terms of an "embryonic futurity". 9.
This account implies that the description in Vet (1984) of the "deictic" nature of the perfect and prospective vs. the "anaphoric" nature of the future and the past must be rejected as a semantic description of the tenses. The evidence for this interpretation is the introduction to sequences dealing with the past or the future, in which the first sentence is typically in the perfect or prospective (establishing a deictic relation) whereas the future or past typically take over, referring anaphorically to the point established by the perfect or prospective:
I have seen her. She was beautiful.
I am going to leave tomorrow. It will be hard.
This account is perfectly valid in terms of textual progression, but not in terms of meaning: The term "anaphoric", in its usual sense, is not generally true of the past tense, since the time referred to can be established either textually or situationally, and it is certainly not true of the future. The point at which a

Notes to pp. 352-356 525 statement with a future in it applies to the real world is determined by the choice between the two deictic tenses, present and past (as also emphasized in Vet 1981); and it applies not at the time of the event, but at the time in relation to which it is ahead (cf. also the notion of "topic time" p. 405). The pattern where the prospective is used to introduce a sequence about future events and followed up by the pure future can be explained in terms of the pragmatics of textual progression: prospective and perfect both predicate something about the base time, so that you get to future and past via the present "epiphenomena": it is natural to take here-and-now as your point of departure. 10. Not all modal statements can be falsified in any obvious way; thus statements with might are not easily proven wrong, since there is very little that we human beings can be absolutely sure of. 11. It is interesting to consider what would be involved if the future were to be a deictic mirror image of the past. According to the description given above it would mean that there was a certain situation in the future that you wanted the state-of-affairs to be applied to, and the right understanding would depend on the addressee's ability to identify the right segment of the future as being talked about. One should always be wary of saying that something is impossible; but it would require that there were sections of the future that were shared ground between the interlocutors, and where people could ask and tell each other about them — and it is not easy to imagine why this should be a standardly coded situation. 12. The future reading is only available in cases like Del forventes/forventedes at ville she snart it is/was expected to "will" happen soon I think the existence of such constructions — which Niels Davidsen-Nielsen brought to my notice — can be regarded as compatible with an overall prohibition against non-finite futures. 
To begin with, the futurity is anchored in the finite tense of the embedding verb rather than unanchored. What it suggests is that the finite anchoring might be higher up than an immediately higher finite tense, thus leaving scope for non-finite futures to shop around for anchoring. But if that were generally the case, we would expect to find cases like
*Han håbede at ville blive forfremmet
he hoped to "will" (be about to) be promoted
The special distinction of the cases that do occur, I think, is that the "nominative-with-infinitive" construction is grammaticized to a higher degree than usual embedding constructions (as also suggested by the fact that, unlike ordinary passives, they have no direct active counterpart). The status of the

embedding verb would be analogous to that of an "evidential/probability" operator modifying the speaker's commitment, rather than a fully separate finite clause. On that interpretation, the semantic structure would be something like
expected (past/present (fut (state-of-affairs)))
13. As discussed in part II above, there is a problem involved in the status of the "unprofiled" conceptualization of the relationship to the ground: is this really part of the meaning, or is it just the awareness of the situation which is presupposed by all communicators, without which communication would be impossible? 14. This difference is not due to any inherent reluctance to see the aheadness of will as identical in past and present contexts. In Langacker (1991: 247) will and would are explicitly related in terms of a notion of predictability: Jeff will finish on time expresses that the event is predictable on the basis of present reality, while Jeff would finish on time "makes a comparable extrapolation with reference to some hypothetical situation" (cf. the discussion of conditionals below p. 443); and the same would appear to hold for the past future. Langacker (personal communication) has explained that he would see this as irrealis located with respect to a point of reference, an account which would be more compatible with the one suggested here. 15. The tradition and modern treatments are almost unanimous (but cf. Davidsen-Nielsen 1990a: 184) in locating this property as belonging with the simple present tense, thus misplacing the property in terms of the structure of the English content system. To illustrate that it applies across the whole constructional paradigm, I offer the following examples.
First, the "timetable" cases that are okay in English:
The game begins at eight (validity at S of later event)
The game began at eight (so he still had two hours) (validity at P of later event)
Then the unscheduled events that are not okay in English but okay in Danish:
Simple present:
*We think he comes tomorrow (so we won't worry)
vi tror han kommer i morgen
we think he comes tomorrow

Simple Past:
*We thought he came the day after (so we did not worry)
Vi troede han kom dagen efter (så vi var ikke bekymrede)
Present Perfect:
*I have returned before you wake up (so don't worry)
jeg er kommet tilbage før du vågner
I have come back before you wake
Past Perfect (more marginal, according to informants; here explicit marking of futurity would be preferred):
*He said he had come back before I woke up (so I had had no cause to worry)
Han sagde at han var (ville være) kommet tilbage før jeg vågnede
he said that he was (would be) come back before I woke
16. An even later stage, the one that has occurred in the case of the passé composé, is also outlined in the theory. In that case, the "relevance" relation has faded away entirely, leaving only the anteriority; and the final stage, which would involve a development into a true past tense, would require two further developments: R must be identical with the ground, and the relationship itself lose its profiling (compare the account of the future pp. 263-367) so that only the grounded state-of-affairs is profiled. This development would only apply to the present perfect; and it would not on the theory provided here be a true past tense. 17. But to become true past tenses according to the theory outlined here, they would have to develop an additional identificational property: bleaching could not in itself bring about this development, as opposed to what is suggested in Bybee—Dahl 1989 and Langacker 1990.


Part III, chapter 3

1. In a manner which is his own but uses the apparatus of modal logic (of which tense logic is a branch), he distinguishes between two time-indexed worlds that are relevant to a proposition: the world it is true of and the world it is true in. He also distinguishes between timeless and omnitemporal propositions, according to whether they are "outside time" (like mathematical propositions) or hold good from the beginning to the end of time. According to the position adopted here, both these distinctions view temporal location from a position that is at cross-purposes with the organization of meaning in the clause. The discrepancy arises precisely because of the logical perspective: logical propositions carry worlds with them, while meanings are unanchored until they are brought into relation with the world by the interlocutors. In the linguistic perspective, "timelessness" vs. "omnitemporality" concerns the world we are talking about, not clause meaning: the clause merely provides a description which it instructs us to apply to the world we are talking about. From a linguistic perspective, the discussion of truth in versus truth of (which by Lyons is linked to the issue of the timelessness of truth) has no other substance than the distinction between the time of utterance and the time we are talking about. If a declarative statement (uttered at S) is true, it means that the descriptive content of the statement fits the world at the time we are talking about (in the past tense = P). To put it differently: it is true in the situation at time S, of the situation obtaining at P. This means that truth is indeed timeless in the sense of "immutable": a true proposition cannot later become untrue. If truth were changeable it would mean that suddenly the world at that time was no longer what it used to be — which means rewriting history, as in the world of newspeak.
On the other hand this does not mean that a proposition can be carried back and forth on the time line while retaining its truth value: it only becomes a proposition by having its application to the time line fixed once and for all. 2. Bull's account, as we have seen, has a well-defined division of labour between tenses and adverbials in orienting the speaker/subject; but there is a weakness in the concept of "axis of orientation" which is introduced as more basic than mere points in time. As mentioned above, the number of such axes is in principle indefinite, but in his system limited to four; yet in practice he accords the distinction between "now" and "recalled point" primacy in his description, for the very good reason that this distinction is responsible for the division of the system into two clearly distinct subsystems; but this dichotomy is not predictable from his basic concept. To support it he points out, quite correctly, that the two basic axes stand for "actual events" as opposed to merely anticipated ones, but this is not part of the theory, just an ad hoc argument. The embeddedness of the two "prime" axes in a system built on four pillars (which

in turn have been selected on an ad hoc basis out of an unlimited number) results in an unclarity with respect to the "anticipated" part of his system; it is difficult to see why they are not just points on the prime axes, those defined by what Bull, with uncharacteristic confusion between fact, concept and linguistic structure, calls the morphemes PP and RP (1960: 23). This gives a redundancy in the system (which Bull admits himself), in that the points on the "anticipated" axes are not used by the tense inventories that the system is designed to describe. 3. Since the topic time is understood as purely situational, it follows that there is no one-to-one correspondence between that and the application times of clauses used by the participants — remarks like I'm sorry, are you comfortable in that chair? can always be added in between. The correspondence depends on the pragmatic status of the clauses used; only on-topic clauses have any interesting relation with topic time. In the holiday example, we would expect time adverbials to refer to times within the holiday period; but it can be discussed how much empirical content there is even in this generalization, since statements like we looked forward to this holiday from the Bicentennial onwards or we'll sue the travel agent first thing in the morning make reference to times outside the topic time, and yet in one sense stay on topic. It all goes to show that any attempt to be very specific about the relationship between discourse notions and coded meanings is risky. 4. But since the temporal facts would be the same if it was in the non-perfect tense, it is a matter of language-specific convention whether one or the other is sanctioned for this slot. 5. A question that I do not discuss but which belongs in this context is the role of the subject in relation to the finite verb in bringing about a proposition (cf. Herslund—Sørensen 1994: 84).

Part III, chapter 4

1. One of the differences is the possibility of non-finite clauses with correspondingly reduced tense forms, which are not dealt with in this book (but cf. Davidsen-Nielsen 1990a: 152). 2. This job description is not coded by a separate subordinator, but may be regarded as an aspect of the "relative" status of the pronoun or adverb. 3. But this would give no substantial semantic contrast between future and non-future, for reasons which have to do with the precise semantic nature of the future and its content-syntactic collaboration with temporal subordinators. Because the future says nothing about its temporal basis, the future cannot be used to define a time; its function is to peek ahead, not to state what holds at lookahead time. The only time-defining potential in the proposition lies in the

state-of-affairs itself; you cannot define a time as the specific time at which Joe's return is ahead. 4. An argument for across-the-board use of mental spaces is that all utterances depend on the speaker's conceptualization rather than objective reality; but that reflects the pan-cognitive fallacy (cf. above p. 55), which overlooks the fact that utterances are understood as functioning in an ecological context which is wider than the speaker's head. 5. Note that this is opposite to the referential perspective, where the source event is intuitively speaking more real than the report event. Communicatively speaking, the reporter is more real and accessible than his source, and the source speaker exists only in the situation because the reporter brings him there. Anybody who has ever been quoted by the press will be aware of the paradoxical situation this conflict of perspectives may give rise to. 6. For a thorough discussion of the variant theories, see Declerck (1991: 158). 7. Comrie does not consider examples of this type; he bases his argument on analogous cases where (he claims) the present is not possible, such as ?John will say (a week from now) that he is absent today. In terms of the account suggested here, the naturalness of the choice of the report-space perspective (yielding the present tense) depends on how natural it is to view the future message from the present perspective. If his absence is the truth, which he is trying to keep hidden, and he will give up pretence a week from now, it strikes me as possible, if marginal (try replacing say with admit!). In general, the point that choices are not consistent (as recognized by Comrie in relation to the past vs. past perfect, 1985: 117n) fits better with an account in terms of vacillating referential perspective than in terms of a purely syntactic rule. 8.
Fauconnier argues that this description is a better way of capturing the traditional distinctions between referential/opaque and specific/non-specific readings. Michael Herslund (p.c.) suggests that there is a problem for Fauconnier's distinction as opposed to the specific/non-specific account in the fact that in both cases, there must be a beautiful dress in the want-space (according to the most natural reading). 9. English has this possibility only with the non-temporal past, as in had he agreed, everything would have been simple. 10. Even in languages that have other standard codings for conditionals, we find conditionality naturally expressed by co-ordination, as in the imperative example do that and I'll kill you, or through simple juxtaposition, as in we don't win — you don't pay, an advertisement for accident attorneys in America. 11. As discussed above in relation to temporal clauses, the fact that the present is compositionally motivated does not prevent language-specific rules from overriding this with a general rule that insists on futurity in relation to speech time being marked, as in modern Hebrew. The future in the subclause is then


simply irrelevant from a compositional point of view, and the language cannot distinguish the aheadness-triggered cases from the ordinary cases. 12. In children's make-believe speech in other languages, including Danish, this is possible. Fleischman (1989: 16) quotes the following example from Le bon usage (Grevisse 1969: 671):
j'étais (imparfait) le malade, et tu appelais (imparfait) le docteur
13. Depending on the situational context and the point you are making, mutually contradictory assumptions may be added to the same IC (cf. Lewis 1973). 14. Because of the peculiarity of would in non-temporal contexts, (218) could also mean
Assuming that at P* the fixing is anterior, it follows that everything will be okay at a stage after P*
This reading corresponds to the temporal relations in
If he has fixed it, everything will be okay
In the following argument I ignore this reading. 15. It was a linguistic talk given by Geoffrey Leech (May 31, 1989, at the Copenhagen Business School). 16. For the present tense, only the -s ending needs to be taken care of, and that depends on subject assignment, so I leave it out here. 17. A descriptive objection here is that, as discussed above, the simple present is not unmarked in the same sense that the non-perfect is unmarked. Unlike the non-perfect, the simple present actually assigns a time to the state-of-affairs. Non-perfect is simply an absent semantic element, whereas the present, like the past, does something to what it takes in its scope. Therefore the account that sees the present as assigning S to the state-of-affairs is preferable to the feature account. As between the two members of the past/present paradigm, we may (in a weaker sense) call "present" the less marked member — but it is very important to keep the two senses apart (cf. also Durst-Andersen 1992: 169). 18. This falls out automatically from Hornstein's theory, but not from mine.
However, I think the possibility of saying for instance Thank God I got this thing out of the way before Mum comes barging in tomorrow


shows that the problem is connected with the fact that the verb arrive in Hornstein's example is understood basically as coding a non-arrival in the past (Harry did not arrive before John came); and if there is a real arrival time in the present time zone, it is in fact possible to have the combination.
19. The advantage of having a function-based theory of language-as-such in studying what happens in actual language use is that it suggests what an invocation of the code can add to the situation at the point where it occurs, thus making its limited but central contribution to the process of social life that actual discourses are part of. Describing what goes on in actual linguistic communication can then proceed without either duplicating the discourse patterns in describing language-as-such or pretending that discourse works entirely without any dependence on a pre-existing linguistic potential.
20. In describing the tenses, Kamp—Reyle use the format of times and features that was criticized above in relation to Davidsen-Nielsen; in the description of verb forms, the differential relation to the Rpt is captured by the feature STAT.
21. This is a very much simplified version; Kamp—Reyle's argument is much more detailed and crucially involves the temporal adverb now; the analogous argument for French is given in Kamp—Rohrer (1983b).
22. Hjelmslev's concept of connotation has shown itself to be very useful, particularly in the analysis of fiction, precisely because of this double process of signification: the position in which the message belongs is created along with the message itself.
23. The French original is: Retiré du français parlé, le passé simple, pierre d'angle du Récit, signale toujours un art; il fait partie d'un rituel des Belles-Lettres. Il n'est plus chargé d'exprimer un temps.
24.
The task of correcting genuine tense randomness in contexts where the choice could be motivated must be understood not as part of the job of teaching grammar but as part of the job of teaching discourse strategies. This is an essential part of writing competence: making your text serve the purpose it is designed for. Once the criterion for tense choice is clear, the next stage is to learn to revise, since texts are rarely perfect after the first attempt. Like occasional errors in third person -s, the fact that the tense choice is sometimes inconsistent is in itself not a sign that anything is wrong; but at a certain level it is serious if it is a symptom of the writer being unable to reflect on his own product and to eliminate both grammatical and communicative inconsistencies.

References

Abraham, Werner—Theo A.J.M. Janssen (eds.)
1989 Tempus—Aspekt—Modus. Die lexikalischen und grammatischen Formen in den germanischen Sprachen. Tübingen: Max Niemeyer.
Ackrill, John L.
1963 Aristotle's 'Categories' and 'De Interpretatione'. Oxford: Clarendon Press.
Allan, Keith
1986 Linguistic Meaning. 2 Vols. London: Routledge & Kegan Paul.
1992 "Semantics. An Overview", in: William Bright (ed.), 394-99.

Allen, Robert Livingston
1966 The verb-system of present-day American English. The Hague—Paris: Mouton & Co.
Allwood, Jens
1976 Linguistic Communication as Action and Cooperation. (Gothenburg Monographs in Linguistics 2.) Gothenburg: Department of Linguistics, University of Gothenburg.
1977 "A Critical Look at Speech Act Theory", in: Östen Dahl (ed.), Logic, Pragmatics and Grammar. Lund: Studentlitteratur, 53-69.
1981 "On the Distinctions between Semantics and Pragmatics", in: Wolfgang Klein—Willem Levelt (eds.), Crossing the Boundaries in Linguistics. Dordrecht: Reidel, 177-189.
1984 "On Relevance in Spoken Interaction", in: Sven Bäckman—Göran Kjellmer (eds.), Papers on Language and Literature. Gothenburg: Acta Universitatis Gothoburgensis, 18-35.
1992 "On Dialogue Cohesion", Gothenburg Papers in Theoretical Linguistics 65: 1-12.
Andersen, Henning
1973 "Abductive and deductive change", Language 49: 765-793.
Anderson, John M.
1971 The Grammar of Case: Towards a Localistic Theory. London: Cambridge University Press.
Anderson, Lloyd B.
1982 "The 'Perfect' as a universal and as a Language-Particular Category", in: Paul J. Hopper (ed.), 91-114.

Arens, Hans
1984 Aristotle's Theory of Language and its Tradition. (Amsterdam Studies in the Theory and History of Linguistic Science III, 29.) Amsterdam: John Benjamins.
Aristotle, see Ackrill 1963, Ross 1908.


Aske, Jon—Natasha Beery—Laura Michaelis—Hana Filip (eds.)
1987 Proceedings of the Thirteenth Annual Meeting of the Berkeley Linguistics Society. Berkeley: Berkeley Linguistics Society.
Axmaker, Shelley—Annie Jaisser—Helen Singmaster (eds.)
1988 Proceedings of the Fourteenth Annual Meeting of the Berkeley Linguistics Society. General session and parasession on grammaticalization. Berkeley: Berkeley Linguistics Society.
Bach, Emmon
1968 "Nouns and Noun Phrases", in: Emmon Bach—Robert T. Harms (eds.), 90-122.
1989 Informal lectures on formal semantics. Albany, N.Y.: SUNY Press.
Bach, Emmon—Robert T. Harms (eds.)
1968 Universals in Linguistic Theory. New York: Holt, Rinehart & Winston.
Bache, Carl
1982 "Aspect and Aktionsart: Towards a semantic distinction", Journal of Linguistics 18: 57-72.
1985 "The semantics of grammatical categories: a dialectical approach", Journal of Linguistics 18: 57-72.
1986 "Tense and aspect in fiction", Journal of Literary Semantics XV/2: 82-97.
1994 "Verbal categories, form-meaning relationships and the English perfect", in: Carl Bache—Hans Basbøll—Carl Erik Lindberg (eds.), 43-60.
(in prep.) The Study of Aspect, Tense and Action: Towards a theory of the semantics of Grammatical Categories.
Bache, Carl—Hans Basbøll—Carl Erik Lindberg (eds.)
1994 Tense, Aspect and Action: Empirical and Theoretical Contributions to Language Typology. (Empirical Approaches to Language Typology 12.) Berlin—New York: Mouton de Gruyter.
Barthes, Roland
1953 Le degré zéro de l'écriture. Paris: Seuil.
Bartsch, Renate
1972 Adverbialsemantik. Frankfurt am Main: Athenäum.
1984 "The Structure of Word Meanings: Polysemy, Metaphor, Metonymy", in: Fred Landman—Frank Veltman (eds.), 25-54.
1986 "On Aspectual Properties of Dutch and German Nominalizations", in: Vincenzo Lo Cascio—Co Vet (eds.), 7-40.
1987 "Frame representations and discourse representations", ITLI Prepublications 87-02. Amsterdam: Institute for Language, Logic and Information, University of Amsterdam.

1988 "Tenses, Aspects and their Scopes in Discourse", ITLI Publications 88-07. Amsterdam: Institute for Language, Logic and Information, University of Amsterdam.

Barwise, Jon
1986 "Conditionals and Conditional Information", in: E. K. Traugott—J. S. Reilly—A. ter Meulen (eds.), 21-54.
Barwise, Jon—John Perry
1983 Situations and Attitudes. Cambridge, Mass.: MIT Press.
Barwise, Jon—Jean Mark Gawron—Gordon Plotkin—Syun Tutiya (eds.)
1991 Situation Theory and Its Applications, vol. 2. Stanford: Stanford University.
Basbøll, Hans
1971 "A Commentary on Hjelmslev's Outline of the Danish Expression System (I)", Acta Linguistica Hafniensia 13, 2: 173-211.
1973 "A Commentary on Hjelmslev's Outline of the Danish Expression System (II)", Acta Linguistica Hafniensia 14, 1: 1-24.
Bateson, Gregory
1980 Mind and Nature. A necessary unity. London: Fontana.
Bateson, Gregory—Don D. Jackson—Jay Haley—John Weakland
1956 "Toward a Theory of Schizophrenia", Behavioral Science 1: 251-64.
Behaghel, Otto
1932 Deutsche Syntax—eine geschichtliche Darstellung. Band IV. Heidelberg: Carl Winters Universitätsbuchhandlung.
Bennett, Jonathan
1976 Linguistic Behaviour. Cambridge: Cambridge University Press.
1991 "How do Gestures Succeed?", in: Ernest Lepore—Robert Van Gulick (eds.), 3-15.
Benveniste, Émile
1966 Problèmes de linguistique générale. Paris: Gallimard.
Berlin, Brent—Paul Kay
1969 Basic Color Terms. Their Universality and Evolution. Berkeley and Los Angeles: University of California Press.
Bernsen, Niels Ole—Ib Ulbæk
1993 Naturlig og kunstig intelligens. København: Nyt Nordisk Forlag Arnold Busck.
Bertinetto, Pier Marco
1985 "Intrinsic and Extrinsic Temporal References. On Restricting the Notion of 'Reference Time'", in: Vincenzo Lo Cascio—Co Vet (eds.), 41-78.
Bickerton, Derek
1990 Language and Species. Chicago: University of Chicago Press.


Blakemore, Diane
1987 Semantic Constraints on Relevance. Oxford: Blackwell.
Bloomfield, Leonard
1933 Language. New York: Holt, Rinehart and Winston.
Bolinger, Dwight L.
1968 Aspects of Language. New York: Harcourt, Brace & World.
1977 Meaning and Form. London: Longman.
Bolkestein, A. Machtelt
1992 "Limits to Layering: Locatability and Other Problems", in: M. Fortescue—P. Harder—L. Kristoffersen (eds.), 387-407.
Boole, George
1854 An Investigation of the Laws of Thought. London.
[1953] [Reprinted New York: Dover.]
Boyd, Julian—James P. Thorne
1969 "The semantics of modal verbs", Journal of Linguistics 5: 57-74.
Brands, Hartmut
1974 "Hjelmslevs Prolegomena", Linguistische Berichte 30: 1-17.
Brown, Gillian
1994a "Modes of understanding", in: Gillian Brown—Kirsten Malmkjær—Alastair Pollitt—John Williams (eds.), Language and Understanding. Oxford: Oxford University Press, 10-20.
1994b "The Role of Nominal and Verbal Expressions in Retelling Narratives: Implications for Discourse Representations", Working Papers in English and Applied Linguistics I: 81-90. University of Cambridge: Research Centre for English and Applied Linguistics.
Brown, Penelope—Stephen C. Levinson
1987 Politeness. Some universals in language usage. (Studies in International Sociolinguistics 4.) Cambridge: Cambridge University Press.
Bühler, Karl
1934 Sprachtheorie. Jena: Fischer.
Bull, William E.
1960 Time, Tense and the Verb. Berkeley: University of California Press.
Burge, Tyler
1989 "Wherein is Language Social?", in: Alexander George (ed.), 175-91.
Bursill-Hall, Geoffrey L.
1971 Speculative Grammars of the Middle Ages. (Approaches to Semiotics 11.) The Hague: Mouton.
Butler, Christopher S.
1988 "Pragmatics and Systemic Linguistics", Journal of Pragmatics 12: 83-102.

Butterworth, Brian—Bernard Comrie—Östen Dahl (eds.)
1984 Explanations for Language Universals. Berlin: Mouton de Gruyter.
Bybee, Joan L.
1985 Morphology. A Study of the Relation between Meaning and Form. Amsterdam: John Benjamins.
1988 "Semantic substance vs. contrast in the development of grammatical meaning", in: S. Axmaker—A. Jaisser—H. Singmaster (eds.), 247-64.
1993 "Mechanisms of semantic change in grammaticization", paper given at the third international cognitive linguistics conference, Leuven, Belgium.
Bybee, Joan—Östen Dahl
1989 "The creation of tense and aspect systems in the languages of the world", Studies in Language 13, 1: 51-103.
Bybee, Joan—Revere Perkins—William Pagliuca
1994 The evolution of grammar. Tense, aspect, and modality in the languages of the world. Chicago: University of Chicago Press.
Carey, Kathleen
1993 Pragmatics, Subjectivity and the Grammaticalization of the English Perfect. [Unpublished dissertation, University of California, San Diego.]
Carlson, Gregory
1989 "On the Semantic Composition of English Generic Sentences", in: Gennaro Chierchia—Barbara H. Partee—Raymond Turner (eds.), Properties, Types and Meaning, vol. II. Dordrecht: Kluwer.
Carnap, Rudolf
1934 The Philosophy of Science. Baltimore: The Williams and Wilkins Company.
[1967] [Pp. 5-19 reprinted in Rorty 1967 (ed.), 54-62.]
1942 Introduction to Semantics. Cambridge, Mass.: MIT Press.
1950 "Empiricism, Semantics, and Ontology", in: Leonard Linsky (ed.), 208-28.
Caton, Charles E. (ed.)
1963 Philosophy and Ordinary Language. Urbana, Chicago and London: University of Illinois Press.
Chafe, Wallace L.
1970 Meaning and the Structure of Language. Chicago: University of Chicago Press.
1971 "Directionality and Paraphrase", Language 47, 1: 1-26.
Cheney, Dorothy L.—Robert M. Seyfarth
1980 "Vocal recognition in free-ranging vervet monkeys", Animal Behaviour 28: 362-367.


1992 "Precis of How monkeys see the world", Behavioral and Brain Sciences 15: 135-182.
1994 "Mirrors and the attribution of mental states", Behavioral and Brain Sciences 17, 3: 574-577.
Chomsky, Noam
1957 Syntactic Structures. The Hague: Mouton.
1965 Aspects of the Theory of Syntax. Cambridge, Mass.: MIT Press.
1975a The Logical Structure of Linguistic Theory. New York: Plenum Press.
1975b Reflections on Language. New York: Pantheon.
1977 Essays on Form and Interpretation. Amsterdam: North-Holland.
1981 Lectures on Government and Binding. Dordrecht: Foris.
1986 Knowledge of Language: Its Nature, Origin and Use. New York: Praeger.
1988 Language and problems of knowledge. The Managua Lectures. Cambridge, MA: MIT Press.
Christensen, Johnny
1993 "Aristoteles — Manden der formede vore begreber", in: Christian Gorm Tortzen (ed.), Aristoteles om mennesket. Biologi, etik, politik. København: Pantheon, 6-12.
Christensen, Niels Egmont
1961 On the Nature of Meanings. Copenhagen: Munksgaard.
Christophersen, Paul
1939 The Articles—a Study of their Theory and Use in English. Copenhagen/London: Munksgaard/Mitford.
Clark, Herbert H.—Eve V. Clark
1977 Psychology and Language: An Introduction to Psycholinguistics. New York: Harcourt Brace Jovanovich, Inc.
Clark, Herbert H.—Deanna Wilkes-Gibbs
1986 "Referring as a Collaborative Process", Cognition 22: 1-39.
Cohen, L. Jonathan
1964 "Do Illocutionary Forces Exist?", Philosophical Quarterly XIV, 55: 118-37.
Comrie, Bernard
1976 Aspect. Cambridge: Cambridge University Press.
1981 "On Reichenbach's Approach to Tense", in: Roberta A. Hendrick—Carrie S. Masek—Mary Frances Miller (eds.), Papers from the 17th regional meeting. Chicago: Chicago Linguistic Society, 24-30.
1985 Tense. Cambridge: Cambridge University Press.
1986a "Conditionals. A Typology", in: E. K. Traugott—J. S. Reilly—A. ter Meulen (eds.), 77-99.
1986b "Tense in indirect speech", Folia Linguistica 20: 265-296.

1989 "On identifying future tenses", in: Werner Abraham—Theo A.J.M. Janssen (eds.), 51-63.
Cooper, Robin—Hans Kamp
1991 "Negation in Situation Semantics and Discourse Representation Theory", in: J. Barwise—J. M. Gawron—G. Plotkin—S. Tutiya (eds.), 311-333.
Craik, Kenneth
1943 The Nature of Explanation. Cambridge: Cambridge University Press.
Cussins, Adrian
1990 "The Connectionist Construction of Concepts", in: Margaret Boden (ed.), Oxford Readings in Philosophy: The Philosophy of Artificial Intelligence. Oxford: Oxford University Press, 368-440.
Cutrer, Michelle
1994 Time and Tense in Narrative and in Everyday Language. [Unpublished PhD dissertation, University of California, San Diego.]
Dahl, Östen
1970 "Some notes on indefinites", Language 46: 33-41.
1971 "Nouns as set constants", Gothenburg Papers in Theoretical Linguistics 3. Gothenburg: University of Gothenburg.
1984 "Temporal Distance: remoteness distinctions in tense-aspect systems", in: Brian Butterworth—Bernard Comrie—Östen Dahl (eds.), 105-122.
1985 Tense and Aspect Systems. Oxford: Blackwell.
1987 Review of Comrie 1985. Folia Linguistica 21: 489-502.
Dancygier, Barbara
1988 "A note on the so-called indicative conditionals", Papers and Studies in Contrastive Linguistics 24: 122-133.
Davidsen-Nielsen, Niels
1984 "Tense in Modern English and Danish", Papers and Studies in Contrastive Linguistics 20: 73-84.
1986 "Modal verbs in English and Danish", in: Dieter Kastovsky—Aleksander Szwedek (eds.), Linguistics across historical and geographical boundaries: in honour of Jacek Fisiak. Berlin: Mouton de Gruyter, 1183-1194.
1988 "Has English a Future?", Acta Linguistica Hafniensia 21: 5-20.
1990a Tense and Mood in English. A Comparison with Danish. (Topics in English Linguistics 1.) Berlin: Mouton de Gruyter.
1990b "Auxiliaries in English and Danish", Papers and Studies in Contrastive Linguistics 25: 5-21.
Davidson, Donald
1967a "Truth and meaning", Synthese 17: 304-323.


1967b "The logical form of action sentences", in: Nicholas Rescher (ed.), The Logic of Decision and Action. Pittsburgh: The University of Pittsburgh Press. Also in Davidson (1982).
1982 Essays on Actions and Events. Oxford: Clarendon Press.
Davidson, Donald—Gilbert Harman (eds.)
1972 Semantics of Natural Language. Dordrecht: Reidel.
Davies, J.M.D.—Stephan D. Isard
1972 "Utterances as programs", in: Donald Michie—Bernard Meltzer (eds.), Machine Intelligence 7: 325-339. Edinburgh: Edinburgh University Press.
Declerck, Renaat
1984 "'Pure future' will in if-clauses", Lingua 63: 279-312.
1986a "From Reichenbach (1947) to Comrie (1985) and beyond", Lingua 70: 305-64.
1986b "Two notes on the theory of definiteness", Journal of Linguistics 22: 25-39.
1991 Tense in English. Its structure and use in discourse. London: Routledge.
Dik, Simon C.
1978 Functional Grammar. Amsterdam: North Holland.
1985 "Copula auxiliarization: how and why?", Working Papers in Functional Grammar 2.
1986 "On the notion 'Functional Explanation'", Working Papers in Functional Grammar 11.
1987 "A Typology of Entities", in: Johan van der Auwera—Louis Goossens (eds.), 1-20.
1989 The Theory of Functional Grammar. Vol 1: The Structure of the Clause. Dordrecht: Foris.
1990 "On the Semantics of Conditionals", in: J. Nuyts—A. M. Bolkestein—C. Vet (eds.), 233-261.
1994a "Verbal semantics in Functional Grammar", in: C. Bache—H. Basbøll—C.-E. Lindberg (eds.), 23-42.
1994b "Computational description of verbal complexes in English and Latin", in: E. Engberg-Pedersen—L.F. Jakobsen—L.S. Rasmussen (eds.), 353-383.
Dik, Simon C.—Kees Hengeveld
1991 "The hierarchical structure of the clause and the typology of perception verb complements", Linguistics 29, 2: 231-259.
Dik, Simon C.—Peter Kahrel
1992 "ProfGlot: a multi-lingual natural language processor", Working Papers in Functional Grammar 45.

Diogenes Laertius
1965 Lives of Eminent Philosophers. With an English Translation by R.D. Hicks. Cambridge, Mass.: Harvard University Press. London: Heinemann.
Dowty, David R.
1982 "Tense, time adverbs, and compositional semantic theory", Linguistics and Philosophy 5: 23-55.
Ducrot, Oswald
1972 Dire et ne pas dire. Principes de sémantique linguistique. Paris: Hermann.
Dummett, Michael
1976 "What is a Theory of Meaning? (II)", in: Michael Dummett 1993, 34-93.
1989 "Language and Communication", in: Alexander George (ed.), 192-212.
1993 The seas of language. Oxford: Clarendon Press.
Durst-Andersen, Per
1992 Mental Grammar. Russian Aspect and Related Issues. Columbus, Ohio: Slavica Publishers.
Edelman, Gerald M.
1992 Bright Air, Brilliant Fire. On the Matter of the Mind. New York: Basic Books.
Edwards, Paul (ed.)
1967 The Encyclopedia of Philosophy. New York: The Macmillan Company & The Free Press.
Eisenberg, Peter
1976 Oberflächenstruktur und logische Struktur. Tübingen: Niemeyer.
Ek, Jan E. van—Nico J. Robat
1984 The Student's Grammar of English. Oxford: Blackwell.

Emonds, Joseph
1976 A Transformational Approach to English Syntax. New York: Academic Press.
Engberg-Pedersen, Elisabeth—Lisbeth Falster Jakobsen—Lone Schack Rasmussen (eds.)
1994 Function and Expression in Functional Grammar. (Functional Grammar Series 16.) Berlin: Mouton de Gruyter.
Engberg-Pedersen, Elisabeth—Michael Fortescue—Peter Harder—Lars Heltoft—Lisbeth Falster Jakobsen (eds.)
to appear Content, Expression and Structure: Studies in Danish Functional Grammar. Amsterdam: John Benjamins.


Engberg-Pedersen, Elisabeth
to appear "Arbitrariness and iconicity", in: E. Engberg-Pedersen—M. Fortescue—P. Harder—L. Heltoft—L.F. Jakobsen (eds.).
Evans, Gareth
1982 Varieties of Reference, ed. by John McDowell. Oxford: Clarendon Press.
Fauconnier, Gilles
1985 Mental Spaces. Aspects of Meaning Construction in Natural Language. Cambridge, Mass.: MIT Press.
to appear "Analogical counterfactuals", in: E. Sweetser—G. Fauconnier (eds.).
Fillmore, Charles J.
1968 "The Case for Case", in: E. Bach—R. T. Harms (eds.), 1-88.
1985 "Frames and the Semantics of Understanding", Quaderni di semantica VI: 222-254.
1990 "Epistemic Stance and Grammatical Form in English Conditional Sentences", in: Michael Ziolkowski—Manuela Noske—Karen Deaton (eds.), Papers from the 26th Regional Meeting of the Chicago Linguistic Society, 137-162.
Fillmore, Charles J.—Paul Kay
1992 Construction Grammar. [Xeroxed MS, Department of Linguistics, Berkeley.]
Fillmore, Charles J.—Paul Kay—Mary Catherine O'Connor
1988 "Regularity and idiomaticity in grammatical constructions: the case of let alone", Language 64, 3: 501-538.
Fillmore, Charles J.—D. Terence Langendoen (eds.)
1971 Studies in Linguistic Semantics. New York: Holt, Rinehart and Winston.
Fink, Hans
1980 "Om institutioner som forudsætning for sprog", in: P. Harder—A. Poulsen (eds.), 79-87.
1988a "Et hyperkomplekst begreb. Kultur, kulturbegreb og kulturrelativisme I", in: Hans Hauge—Henrik Horstbøll (eds.), Kulturbegrebets kulturhistorie. Aarhus: Aarhus Universitetsforlag, 9-23.
1988b "Kulturen og det ubetinget absolutte. Kultur, kulturbegreb og kulturrelativisme II", in: Hans Hauge—Henrik Horstbøll (eds.), Kulturbegrebets kulturhistorie. Aarhus: Aarhus Universitetsforlag, 140-154.
Fischer-Jørgensen, Eli
1966 "Form and Substance in Glossematics", Acta Linguistica Hafniensia 10, 1: 1-33.

1975 Trends in Phonological Theory. A historical introduction. Copenhagen: Akademisk Forlag.
Fleischman, Suzanne
1982 The future in thought and language. Diachronic evidence from Romance. Cambridge: Cambridge University Press.
1989 "Temporal Distance: A Basic Linguistic Metaphor", Studies in Language 13, 1: 1-50.
1990 Tense and Narrativity. Austin: University of Texas Press.
1991 "Toward a theory of tense-aspect in narrative discourse", in: Jadranka Gvozdanović—Theo A.J.M. Janssen—Östen Dahl (eds.), The function of tense in texts. Amsterdam: North-Holland, 75-97.
Fodor, Jerry A.
1975 The Language of Thought. New York: Thomas Y. Crowell.
[1976] [Reprinted Hassocks, Sussex: The Harvester Press.]
1983 The Modularity of Mind. Cambridge, Mass.: MIT Press.
1987 Psychosemantics. Cambridge, Mass.: MIT Press.
Fodor, Jerry A.—Zenon W. Pylyshyn
1988 "Connectionism and Cognitive Architecture: A Critical Analysis", Cognition 28: 2-71.
Foley, William A.—Robert D. Van Valin
1984 Functional Syntax and Universal Grammar. Cambridge: Cambridge University Press.
Fortescue, Michael
1984 West Greenlandic. London: Croom Helm.
1992 "Aspect and Superaspect in Koyukon: An application of the Functional Grammar Model to a Polysynthetic Language", in: M. Fortescue—P. Harder—L. Kristoffersen (eds.), 99-141.
Fortescue, Michael—Peter Harder—Lars Kristoffersen (eds.)
1992 Layered Structure and Reference in a Functional Perspective. Amsterdam: John Benjamins.
Fox, Barbara—Paul Hopper (eds.)
1994 Grammatical Voice: Its Form and Function. Amsterdam: John Benjamins.
Fowler, H.N. (ed.)
1947 Plato. I-XII. Cambridge, Mass.: Harvard University Press.

Frege, Gottlob
1879 Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle: L. Nebert.
1884 Die Grundlagen der Arithmetik. Eine logisch mathematische Untersuchung über den Begriff der Zahl. Breslau: Wilhelm Koebner.

1892 "Über Sinn und Bedeutung", Zeitschrift für Philosophie und philosophische Kritik 100: 25-50.
1893 Grundgesetze der Arithmetik I. Jena: Hermann Pohle.
1903 Grundgesetze der Arithmetik II. Jena: Hermann Pohle.
1919 "Negation", in: P. Geach—M. Black (1952), 117-35.
1956 "The Thought: A Logical Inquiry", Mind 65: 289-311.
[1967] [Reprinted in Strawson 1967, 17-38.]
Færch, Claus—Gabriele Kasper
1986 "Procedural knowledge as a component of foreign language learners' communicative competence", AILA Review 3: 7-23.
Gardner, Howard
1985 The Mind's New Science. New York: Basic Books.
Garfinkel, Harold
1972 "Studies of the Routine Grounds of Everyday Activities", in: David Sudnow (ed.), 1-30.
Gazdar, Gerald
1979 Pragmatics: Implicature, Presupposition and Logical Form. New York: Academic Press.
1980 "Pragmatics and Logical Form", Journal of Pragmatics 4, 1: 1-13.
Geach, Peter T.—Max Black (eds.)
1952 Translations from the Philosophical Works of Gottlob Frege. Oxford: Blackwell.
Geiger, Richard A.—Brygida Rudzka-Ostyn (eds.)
1993 Conceptualizations and Mental Processing in Language. (Cognitive Linguistics Research 3.) Berlin: Mouton de Gruyter.
Geeraerts, Dirk
1992 "The return of hermeneutics to lexical semantics", in: Martin Pütz (ed.), Thirty Years of Linguistic Evolution. Amsterdam: John Benjamins, 257-282.
George, Alexander (ed.)
1989 Reflections on Chomsky. Oxford: Basil Blackwell.
Gernsbacher, Morton Ann—M. Faust
1994 "Skilled Suppression", in: F.N. Dempster—C.N. Brainerd (eds.), Interference and inhibition in cognition. San Diego: Academic Press, 295-327.
Gibson, James J.
1966 The Senses Considered as Perceptual Systems. Boston: Houghton Mifflin Company.
Givón, Talmy
1979a On Understanding Grammar. New York: Academic Press.
1979b (ed.) Discourse and syntax. (Syntax and Semantics 12.) New York: Academic Press.

1982 "Tense-Aspect-Modality: The Creole Prototype and Beyond", in: Paul J. Hopper (ed.), 115-63.
1984 Syntax. A Functional-Typological Introduction. Vol. 1. Amsterdam: John Benjamins.
1989 Mind, Code and Context. Essays in Pragmatics. Hillsdale, New Jersey: Lawrence Erlbaum.
1990 Syntax. A Functional-Typological Introduction. Vol. 2. Amsterdam: John Benjamins.
1991 "Isomorphism in the Grammatical Code: Cognitive and Biological Considerations", Studies in Language 15, 1: 85-114.
1993 English Grammar. A Function-based Introduction. Amsterdam: John Benjamins.
1995 Functionalism and Grammar. Amsterdam: John Benjamins.
Glismann, Otto
1987 "Om tid og tempus", Nydanske Studier 16/17: 237-257. København: Akademisk forlag.
1989 "Jeg har haft gjort det", Mål og Mæle 12, 4: 6-10. København: Gad.
Goldberg, Adele E.
to appear "Making one's way through the data", in: Sandra A. Thompson—Masayoshi Shibatani (eds.), Grammatical Constructions: Their Form and Meaning. Oxford: Oxford University Press.
1995 Constructions: A Construction Grammar Approach to Argument Structure. Chicago: University of Chicago Press.
Goldsmith, John—Erich Woisetschlaeger
1982 "The Logic of the English Progressive", Linguistic Inquiry 13: 79-89.
Goossens, Louis
1985 "The auxiliarization of the English modals", Working Papers in Functional Grammar 1.
1987 "Modal shifts and predication types", in: J. van der Auwera—L. Goossens (eds.), 21-37.
1993 "Have in a Functional Grammar of English", Working Papers in Functional Grammar 54: 1-33.
Gopnik, Alison
1993 "How we know our minds: The illusion of first-person knowledge of intentionality", The Behavioral and Brain Sciences 16, 1: 1-14.
Gould, Stephen Jay
1992 Bully for Brontosaurus. Harmondsworth: Penguin.
1994 "The Monster's Human Nature", Natural History 7: 15-21.

Grandy, Richard E.—Richard Warner (eds.)
1986 Philosophical grounds of rationality: intentions, categories, ends. Oxford and New York: Oxford University Press.
Greenberg, Joseph (ed.)
1966 Language Universals. Cambridge, Mass.: MIT Press.
Greenfield, Patricia M.
1991 "Language, tools and brain: The ontogeny and phylogeny of hierarchically organized sequential behavior", Behavioral and Brain Sciences 14: 531-595.
Greenfield, Patricia M.—E. Sue Savage-Rumbaugh
1990 "Grammatical combination in Pan paniscus: Processes of learning and invention in the evolution and development of language", in: S.T. Parker—Kathleen R. Gibson (eds.), "Language" and Intelligence in Monkeys and Apes, 540-578.
Gregersen, Frans
1991 Sociolingvistikkens (u)mulighed. Vols 1-2. København: Tiderne Skifter.
Grevisse, Maurice
1969 Le bon usage. 9th ed. Gembloux: Duculot.
Grice, H. Paul
1957 "Meaning", Philosophical Review 66: 377-88.
1975 "Logic and Conversation", in: Peter Cole—Jerry L. Morgan (eds.), Speech Acts. (Syntax and Semantics 3.) New York: Academic Press, 41-58.
1989 Studies in the Way of Words. Cambridge, Massachusetts: Harvard University Press.
Gulz, Agneta
1991 The Planning of Action as a Cognitive and Biological Phenomenon. (Lund University Cognitive Studies 2.) Cognitive Science Department: Lund University, Sweden.
Gärdenfors, Peter
1991a The Emergence of Meaning. (Lund University Cognitive Studies 5.) Cognitive Science Department: Lund University, Sweden.
1991b Medvetandets evolution. MS, Lund University.
1992 Blotta tanken. Nora, Sweden: Nya Doxa.
forthcoming What is in an image schema? [Unpublished MS.]
Haberland, Hartmut
1986a "A Note on the Aorist", in: Jacob L. Mey (ed.), Language and Discourse: Test and Protest. A Festschrift for Petr Sgall. Amsterdam: John Benjamins, 173-184.
1986b "Reported speech in Danish", in: Florian Coulmas (ed.), Direct and Indirect Speech. Berlin: Mouton de Gruyter, 219-53.

Haberland, Hartmut—Lars Heltoft
1992 "Universals, explanations and pragmatics", in: Michel Kefer—Johan van der Auwera (eds.), Meaning and Grammar. Berlin: Mouton de Gruyter, 17-26.
Haberland, Hartmut—Jacob L. Mey
1977 "Pragmatics and linguistics", Journal of Pragmatics 1: 1-13.
Haberland, Hartmut—Ole Nedergaard-Thomsen
1991 "The long and winding road towards a theory of grammatical relations", Journal of Pragmatics 16, 2: 179-205.
Habermas, Jürgen
1971 "Vorbereitende Bemerkungen zu einer Theorie der kommunikativen Kompetenz", in: Jürgen Habermas—Niklas Luhmann, Theorie der Gesellschaft oder Sozialtechnologie. Frankfurt am Main: Suhrkamp, 101-41.
Haegeman, Liliane—Herman Wekker
1984 "The syntax and interpretation of futurate conditionals in English", Journal of Linguistics 20: 45-55.
Haiman, John
1978 "Conditionals are Topics", Language 54, 3: 564-589.
1983a "Paratactic if-clauses", Journal of Pragmatics 7, 3: 263-281.
1983b "Iconic and Economic Motivation", Language 59, 4: 781-819.
1985 (ed.) Iconicity in Syntax. Amsterdam: John Benjamins.
Halliday, Michael A.K.
1970 "Functional diversity in language, as seen from a consideration of modality and mood in English", Foundations of Language 6: 322-361.
1985 Introduction to Functional Grammar. London: Edward Arnold.
Harder, Peter
1974 "Videnskabelighed og hensigtsmæssighed—en metodisk uklarhed i det glossematiske begrebsapparat", PAPIR 1, 3: 4-17.
1975 "Prædikatstruktur og kommunikativ funktion", Nydanske Studier 8: 102-12.
1976 En strukturel og funktionel beskrivelse af bestemthed i moderne engelsk. [Unpublished M.A. thesis, University of Copenhagen.]
1978 "Language in action. Some arguments against the concept 'illocutionary'", in: Kirsten Gregersen (ed.), Papers from the Fourth Scandinavian Conference of Linguistics. Odense: Odense University Press, 193-197.
1979 "Tekstpragmatik. En kritisk vurdering af nogle principielle og praktiske tilgange til tekstbeskrivelsen, med ansatser til et alternativ", Nydanske Studier 10/11: 104-30.


1989a "The instructional semantics of conditionals", Working Papers in Functional Grammar 30.
1989b "Om førtid og førførtid i dansk", Mål og Mæle 12, 4: 11-13. København: Gad.
1990a "Tense, semantics and layered syntax", in: J. Nuyts—A. M. Bolkestein—C. Vet (eds.), 139-163.
1990b "The semantics and pragmatics of reference", in: Lita Lundkvist—Lone Schack Rasmussen (eds.), Pragmatics and its Manifestations in Language. (Copenhagen Studies in Language 13.) Copenhagen: Copenhagen Business School, 41-78.
1990c "Kognition og gensidighed", Psyke og Logos 11, 1: 153-170.
1991 "Linguistic Meaning: Cognition, Interaction and the Real World", Nordic Journal of Linguistics 14: 119-40.
1992 "Semantic Content and Linguistic Structure in Functional Grammar. On the Semantics of 'Nounhood'", in: M. Fortescue—P. Harder—L. Kristoffersen (eds.), 303-327.
1994 "Verbal Time Reference in English: Structure and Functions", in: C. Bache—H. Basbøll—C.-E. Lindberg (eds.), 61-79.
forthcoming "Subordinators", in: Betty Devriendt—Louis Goossens—Johan van der Auwera (eds.), Complex structures. A functionalist perspective. Berlin: Mouton de Gruyter.
Harder, Peter—Christian Kock
1976 The Theory of Presupposition Failure. (Travaux du cercle linguistique de Copenhague XVII.) København: Akademisk forlag.
Harder, Peter—Arne Poulsen (eds.)
1980 Hvad går vi ud fra? København: Gyldendal.
Harder, Peter—Ole Togeby
1993 "Pragmatics, cognitive science and connectionism", Journal of Pragmatics 20: 467-492.
Hare, R.M.—D.A. Russell (eds.)
1970 The Dialogues of Plato, translated by Benjamin Jowett. I-IV. London: Sphere Books Ltd.
Harris, Zellig S.
1951 Methods in Structural Linguistics. Chicago: University of Chicago Press.
Haspelmath, Martin
1993 Review of Hengeveld 1992b. Studia Linguistica 47: 1, 95-99.
Hawkins, John A.
1978 Definiteness and Indefiniteness. A Study in Reference and Grammaticality Prediction. London: Croom Helm.
1994 A Performance Theory of Order and Constituency. Cambridge: Cambridge University Press.


Heim, Irene
1983  "File change semantics and the familiarity theory of definiteness", in: Rainer Bäuerle—Christoph Schwarze—Arnim von Stechow (eds.), Meaning, Use, and Interpretation of Language. Berlin: Mouton de Gruyter, 164-189.
Heine, Bernd
1993  Auxiliaries. Cognitive Forces and Grammaticalization. Oxford: Oxford University Press.
Heine, Bernd—Ulrike Claudi—Friederike Hünnemeyer
1991  Grammaticalization: a conceptual framework. Chicago: University of Chicago Press.
Heltoft, Lars
1990  "En plads til sprogvidenskabens hittebørn. Om talesprog og sætningsskema", Selskab for Nordisk Filologis årsberetning 1987-89, 26-45.
forthcoming  "Paradigmatic structure, word order and grammaticalization", in: E. Engberg-Pedersen—M. Fortescue—P. Harder—L. Heltoft—L. Falster Jakobsen (eds.).
Hengeveld, Kees
1987  "Clause structure and modality in Functional Grammar", in: J. v. d. Auwera—L. Goossens (eds.), 53-66.
1989  "Layers and Operators in Functional Grammar", Journal of Linguistics 25: 127-157.
1990a  "The Hierarchical Structure of Utterances", in: J. Nuyts—A. M. Bolkestein—C. Vet (eds.), 1-23.
1990b  "Semantic Relations in Non-Verbal Predication", in: J. Nuyts—A. M. Bolkestein—C. Vet (eds.), 101-122.
1992a  "Parts of Speech", in: M. Fortescue—P. Harder—L. Kristoffersen (eds.), 29-55.
1992b  Non-verbal predication: theory, typology, diachrony. (Functional Grammar Series 15.) Berlin: Mouton de Gruyter.
1992c  The Internal Structure of Adverbial Clauses. [Paper given at the fifth international conference on functional grammar, Antwerp 1992.]
Heny, Frank—Barry Richards (eds.)
1983  Linguistic Categories: Auxiliaries and Related Puzzles. Vols. 1-2. Dordrecht: Reidel.
Herrnstein, Richard J.
1985  "Riddles of natural categorization", Philosophical Transactions of the Royal Society of London B308: 129-144.


Herslund, Michael
1988  "Tense, Time and Modality", in: Victoria Rosén (ed.), Papers from the Tenth Scandinavian Conference of Linguistics. Bergen: Department of Linguistics and Phonetics, University of Bergen, 289-300.
Herslund, Michael—Finn Sørensen
1994  "A valence based theory of grammatical relations", in: Engberg-Pedersen—Jakobsen—Rasmussen (eds.), 81-95.
Herslund, Michael—Ole Mørdrup—Finn Sørensen (eds.)
1983  Analyses grammaticales du français. (Revue Romane 24.) Copenhagen: Akademisk Forlag.
Hesp, Cees
1990  "The Functional Grammar Computational Natural Language User and Psychological Adequacy", in: J. Nuyts—A. M. Bolkestein—C. Vet (eds.), 295-312.
Hjelmslev, Louis
1938  "Essai d'une théorie des morphèmes", in: Hjelmslev 1959, 152-164.
1943  Omkring sprogteoriens grundlæggelse. København: Københavns Universitet.
[1953]  [English translation: Prolegomena to a Theory of Language. Bloomington: Indiana University.]
[1966]  [New Danish edition: København: Akademisk Forlag.]
1947  "The Basic Structure of Language", in: Hjelmslev 1973, 119-173.
1948  "Structural Analysis of Language", in: Hjelmslev 1959, 27-35.
1957  "Pour une sémantique structurale", in: Hjelmslev 1959, 96-112.
1959  Essais linguistiques. (Travaux du cercle linguistique de Copenhague XII.) København: Nordisk Sprog- og Kulturforlag.
1973  Essais linguistiques II. (Travaux du cercle linguistique de Copenhague XIV.) København: Nordisk Sprog- og Kulturforlag.
Hockett, Charles F.
1954  "Two Models of Grammatical Description", Word 10: 210-231.
Hofstadter, Douglas A.
1980  Reply to Searle 1980. The Behavioral and Brain Sciences 3: 433-434.
Hoffmeyer, Jesper
1993  En snegl på vejen. Betydningens naturhistorie. København: Omverden/Rosinante.
Hopper, Paul J.
1979  "Aspect and foregrounding in discourse", in: T. Givón (ed.), 213-241.
1982  (ed.) Tense-Aspect. Between semantics and pragmatics. Amsterdam: John Benjamins.


1987  "Emergent Grammar", in: Aske et al. (eds.), 139-157.
Hopper, Paul—Sandra A. Thompson
1984  "The discourse basis for lexical categories in universal grammar", Language 60, 3: 703-752.
1985  "The iconicity of the universal categories 'noun' and 'verb'", in: J. Haiman (ed.), 151-183.
Horn, Laurence R.
1992  "Pragmatics, implicature, and presupposition", in: William Bright (ed.), International Encyclopedia of Linguistics, Vol. 3. Oxford: Oxford University Press, 260-266.
Hornstein, Norbert
1977  "Towards a Theory of Tense", Linguistic Inquiry 8: 521-577.
1990  As Time Goes By. Cambridge, Mass.: MIT Press.
Hudson, Richard
1992  Review of Langacker 1990a. Journal of Linguistics 28: 506-509.
Isard, Stephen D.
1974  "What would you have done if...?", Theoretical Linguistics 1: 233-255.
Itkonen, Esa
1978  Grammatical Theory and Metascience. Amsterdam: John Benjamins.
1992  "Remarks on the language universals research", in: SKY: The Yearbook of the Linguistic Association of Finland, 53-82.
Jackendoff, Ray S.
1972  Semantic Interpretation in Generative Grammar. Cambridge, MA: MIT Press.
1987a  Consciousness and the Computational Mind. Cambridge, Mass.: MIT Press.
1987b  "X-bar Semantics", in: J. Aske et al. (eds.), 355-364.
1990  Semantic Structures. Cambridge, Mass.: MIT Press.
Jacobsen, Erling
1984  De psykiske grundprocesser. København: Centrum.
Janssen, Theo A. J. M.
1988  "Tense and temporal composition in Dutch: Reichenbach's 'point of reference' reconsidered", in: Veronika Ehrich—Heinz Vater (eds.), Temporalsemantik. Beiträge zur Linguistik der Zeitreferenz. Tübingen: Max Niemeyer, 96-128.
1989  "Die Hilfsverben werden (deutsch) und zullen (niederländisch): modal oder temporal?", in: Werner Abraham—Theo Janssen (eds.), 65-84.
1991  "Preterit as definite description", in: Jadranka Gvozdanović—Theo A. J. M. Janssen—Östen Dahl (eds.), The function of tense in texts. Amsterdam: North-Holland, 157-181.


1993  "Tenses and demonstratives: Conspecific categories", in: R. A. Geiger—B. Rudzka-Ostyn (eds.), 741-783.
1994a  "Tense in Dutch: eight 'tenses' or two 'tenses'?", in: Joachim Ballweg—Rolf Thieroff (eds.), Tempussysteme in verschiedenen Sprachen. Tübingen: Max Niemeyer, 93-118.
1994b  "Preterit and perfect in Dutch", in: Co Vet—Carl Vetters (eds.), Tense and Aspect in Discourse. Berlin: Mouton de Gruyter, 115-146.
1995  "Deictic and anaphoric referencing of tenses", to appear in Cahiers Chronos 1.
Janssen, Theo M. V.
1981  "Montague Grammar and Functional Grammar", in: Teun Hoekstra—Harry v. d. Hulst—Michael Moortgat (eds.), Perspectives on Functional Grammar. Dordrecht: Foris, 73-97.
Jensen, Uffe Juul
1973  Videnskabsteori. Vols. 1-2. København: Berlingske.
Jensen, Hans Siggaard
1980  "Platons og Aristoteles' logik", in: Sten Ebbesen—Troels Engberg-Pedersen—Poul Lübcke (eds.), Studier i antik og middelalderlig filosofi og idéhistorie. Copenhagen: Museum Tusculanum, 133-148.
Jespersen, Otto
1924  The Philosophy of Grammar. London: Allen & Unwin.
Johnson, Mark
1992  "Philosophical implications of cognitive semantics", Cognitive Linguistics 3, 4: 345-366.
1993  "Why cognitive semantics matters to philosophy", Cognitive Linguistics 4, 1: 62-74.
Johnson-Laird, Philip N.
1977  "Procedural Semantics", Cognition 5: 189-214.
1983  Mental Models. Cambridge: Cambridge University Press.
1986  "Conditionals and Mental Models", in: E. Traugott—J. S. Reilly—A. ter Meulen (eds.), 55-75.
1988  The Computer and the Mind. London: Fontana.
Johnson-Laird, Philip N.—Ruth M. J. Byrne
1991  Deduction. Hillsdale, NJ: Lawrence Erlbaum.
Joos, Martin (ed.)
1957  Readings in Linguistics I. Chicago: University of Chicago Press.
Kamp, Hans
1981  "A theory of truth and semantic representation", in: Jeroen Groenendijk—Theo M. V. Janssen—Martin Stokhof (eds.), Formal Methods in the Study of Language. (MC-Tracts 135.) Amsterdam: Mathematical Centre, 277-322. Also in: Jeroen Groenendijk—Theo
M. V. Janssen—Martin Stokhof (eds.), Truth, Interpretation, and Information. Dordrecht: Foris, 1-42.
Kamp, Hans—Uwe Reyle
1993  From Discourse to Logic. Vols. I-II. Dordrecht: Kluwer.
Kamp, Hans—Christian Rohrer
1983  Temporal Reference in French. University of Stuttgart. [Unpublished MS.]
Kant, Immanuel
1781  Kritik der reinen Vernunft.
[1952]  [Reprinted, ed. by Raymund Schmidt. Hamburg: Felix Meiner.]
Katz, Jerrold J.—Jerry A. Fodor
1963  "The Structure of a semantic theory", Language 39: 170-210.
Keenan, Edward L.
1976  "Towards a universal definition of subject", in: Charles N. Li (ed.), Subject and Topic. New York: Academic Press, 303-333.
Keenan, Elinor Ochs
1976  "The universality of conversational implicature", Language in Society 5: 67-80.
Keizer, M. Evelien
1992a  "Predicates as Referring Expressions", in: M. Fortescue—P. Harder—L. Kristoffersen (eds.), 1-27.
1992b  Reference, predication and (in)definiteness in Functional Grammar. A functional approach to English copular sentences. Amsterdam: Free University.
Kempson, Ruth
1975  Presupposition and the Delimitation of Semantics. Cambridge: Cambridge University Press.
1986  "Ambiguity and the Semantics-Pragmatics Distinction", in: Charles Travis (ed.), 77-103.
Kierkegaard, Søren
1968  Papirer, Vol. IV, ed. by Niels Thulstrup. København: Gyldendal.
Kimball, John P. (ed.)
1973  Syntax and Semantics, vol. 2. New York: Seminar Press.
Kirsner, Robert S.
1979  "Deixis in discourse: An exploratory quantitative study of the Modern Dutch demonstrative adjectives", in: T. Givón (ed.), 355-375.
1993  "From meaning to message in two theories: Cognitive and Saussurean views of the Modern Dutch demonstratives", in: R. A. Geiger—B. Rudzka-Ostyn (eds.), 81-113.
Klein, Melanie
1952  Developments in psycho-analysis. London: The Hogarth Press.


Klinge, Alex
1988  Tempus i engelsk. (ARK 42.) Copenhagen: Copenhagen Business School.
1993  "The English modal auxiliaries: from lexical semantics to utterance interpretation", Journal of Linguistics 29: 315-357.
Kock, Christian
1978  "Literature as presupposition failure", in: Graham D. Caie—Michael Chesnutt—Lis Christensen—Claus Færch (eds.), Occasional Papers 1976-1977. (Publications of the Department of English, University of Copenhagen 5.) Copenhagen: Akademisk Forlag, 93-102.
Kohut, Heinz
1977  The Restoration of the Self. New York: International Universities Press.
Köpcke, Klaus-Michael—Klaus-Uwe Panther
1991  "Kontrolle und Kontrollwechsel im Deutschen", Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung 44: 143-166.
Kreeger, Lionel
1975  The Large Group: Dynamics and Therapy. London: Constable.
Kripke, Saul A.
1972  "Naming and Necessity", in: Donald Davidson—Gilbert Harman (eds.), 253-355.
Kroon, Caroline
1989  "Causal connectors in Latin: the discourse function of nam, enim, igitur, ergo", in: Marius Lavency—Dominique Longrée (eds.), Proceedings of the Vth Colloquium on Latin Linguistics, CILL 15.14: 231-243.
1994  "Discourse connectives and discourse type: the case of Latin 'at'", in: József Herman (ed.), Linguistic Studies on Latin. (Studies in Language Companion Series 28.) Amsterdam: John Benjamins, 303-317.
Køppe, Simo
1990  Virkelighedens niveauer. De nye videnskaber og deres historie. Copenhagen: Gyldendal.
Lakoff, George—Mark Johnson
1980  Metaphors We Live By. Chicago: University of Chicago Press.
Lakoff, George
1987  Women, Fire and Dangerous Things. Chicago: University of Chicago Press.
Landman, Fred—Frank Veltman (eds.)
1984  Varieties of Formal Semantics. Dordrecht: Foris.


Landau, Barbara—Lila J. Gleitman
1985  Language and Experience. Evidence from the Blind Child. Cambridge, Mass.: Harvard University Press.
Langacker, Ronald W.
1978  "The form and meaning of the English auxiliary", Language 54: 853-882.
1987a  Foundations of Cognitive Grammar, vol. 1: Theoretical Prerequisites. Stanford: Stanford University Press.
1987b  "Nouns and verbs", Language 63: 53-94.
1988a  "An Overview of Cognitive Grammar", in: Brygida Rudzka-Ostyn (ed.), 3-48.
1988b  "A Usage-Based Model", in: Brygida Rudzka-Ostyn (ed.), 127-161.
1990a  Concept, Image, and Symbol: The Cognitive Basis of Grammar. Berlin: Mouton de Gruyter.
1990b  "Subjectification", Cognitive Linguistics 1, 1: 5-38.
1991  Foundations of Cognitive Grammar, vol. 2: Descriptive Applications. Stanford: Stanford University Press.
1993a  "Reference-point constructions", Cognitive Linguistics 4, 1: 1-38.
1993b  Raising and transparency. [Paper given at the ICLA conference in Leuven, Belgium, July 1993.]
1994  Conceptual Grouping and Pronominal Anaphora. [Paper given at the anaphora symposium in Boulder, Colorado, May 1994.]
1995  "Subject, topic, setting, possessor, and reference-point: an integrated account of almost everything." [Paper given at the conference on functional approaches to grammar, Albuquerque, July 1995.]
forthcoming  The Contextual Basis of Cognitive Semantics. [Unpublished MS., University of California, San Diego.]
Lauerbach, Gerda
1993  "Interaction and cognition: Speech act schemata with but and their interrelation with discourse type", in: R. A. Geiger—B. Rudzka-Ostyn (eds.), 679-708.
Leech, Geoffrey N.
1971  Meaning and the English Verb. London: Longman.
1981  Semantics. Second edition. Harmondsworth: Penguin.
1983  Principles of Pragmatics. London: Longman.
1987  Meaning and the English Verb. Second edition. London: Longman.
Leirbukt, Oddleif
1989  "Zur Kontrastivität Deutsch-Norwegisch im Bereich hypothetischer Bedingungsgefüge: Konjunktivische versus indikativische Formenbildung und semantisch-pragmatische Differenzierungen", in: Ahti Jäntti (ed.), Probleme der Modalität in der Sprachforschung.


(Studia Philologica Jyväskyläensia 23.) Jyväskylä: University of Jyväskylä, 60-109.
1991  "Nächstes Jahr wäre er 200 Jahre alt geworden. Über den Konjunktiv Plusquamperfekt in hypothetischen Bedingungsgefügen mit Zukunftsbezug", Zeitschrift für germanistische Linguistik 19, 2: 158-193.
Le Loux-Schuringa, J. Anke
1992  "Tenses in 19th-century Dutch sentence-grammar", in: Jan Noordegraaf—Kees Versteegh—Konrad Koerner (eds.), The History of Linguistics in the Low Countries. Amsterdam: John Benjamins, 253-71.
Lepore, Ernest—Robert Van Gulick (eds.)
1991  John Searle and his Critics. Oxford: Basil Blackwell.
Levinson, Stephen C.
1983  Pragmatics. Cambridge: Cambridge University Press.
Lewis, David
1969  Convention: A Philosophical Study. Cambridge, MA: Harvard University Press.
1972  "General Semantics", in: Donald Davidson—Gilbert Harman (eds.), 169-218.
1973  Counterfactuals. Oxford: Blackwell.
Linsky, Leonard (ed.)
1952  Semantics and the Philosophy of Language. Urbana, Chicago & London: University of Illinois Press.
[1972]  [Reprinted in paperback edition.]
Lo Cascio, Vincenzo—Co Vet (eds.)
1985  Temporal Structure in Sentence and Discourse. Dordrecht: Foris.
Lycan, William G. (ed.)
1990a  Mind and Cognition. Oxford: Blackwell.
1990b  "The Continuity of Levels of Nature", in: Lycan 1990a, 77-96.
Lyons, John
1966  "Towards a Notional Definition of the Parts of Speech", Journal of Linguistics 2: 209-236.
1967  "A note on possessive, existential and locative sentences", Foundations of Language 3: 390-396.
1968  Introduction to Theoretical Linguistics. Cambridge: Cambridge University Press.
1977  Semantics. Vols. I-II. Cambridge: Cambridge University Press.
McCawley, James D.
1971  "Tense and Time in English", in: Charles J. Fillmore—D. Terence Langendoen (eds.), 96-113.
1981  Everything that Linguists have Always Wanted to Know about Logic* (*but were ashamed to ask). Oxford: Blackwell.


1987  "The multidimensionality of pragmatics", Behavioral and Brain Sciences 10, 4: 723-724.
McCoard, Robert W.
1978  The English perfect: tense-choice and pragmatic inferences. Amsterdam: North-Holland.
McGinn, Colin
1989  Mental Content. Oxford: Basil Blackwell.
McLure, Roger
1993  "On 'Philosophical implications of cognitive semantics'", Cognitive Linguistics 4, 1: 39-47.
Mackenzie, J. Lachlan—M. Evelien Keizer
1990  "On assigning pragmatic functions in English", Working Papers in Functional Grammar 38: 1-33.
Main, Tom
1975  "Some psychodynamics of large groups", in: Lionel Kreeger (ed.), 57-86.
Malinowski, Bronislaw
1930  "The problem of meaning in primitive languages", in: Charles K. Ogden—Ivor A. Richards, The Meaning of Meaning. London: Routledge & Kegan Paul, 296-336.
Mandler, Jean
1988  "How to Build a Baby: On the Development of an Accessible Representational System", Cognitive Development 3: 113-136.
1992  "How to Build a Baby: II. Conceptual Primitives", Psychological Review 99, 4: 587-604.
Mandler, Jean—Patricia J. Bauer—Loraine McDonough
1991  "Separating the Sheep from the Goats: Differentiating Global Categories", Cognitive Psychology 23: 263-298.
Mandler, Jean—Loraine McDonough
1993  "Concept Formation in Infancy", Cognitive Development 8: 291-318.
Marr, David
1982  Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W. H. Freeman.
Marsh, Ronald C. (ed.)
1956  Logic and Knowledge. London and New York: Macmillan.
Matthews, Peter H.
1993  Grammatical theory in the United States from Bloomfield to Chomsky. (Cambridge Studies in Linguistics 67.) Cambridge: Cambridge University Press.


Meulen, Alice ter (ed.)
1983  Studies in Model-Theoretic Semantics. (Groningen-Amsterdam Studies in Semantics 1.) Dordrecht: Foris.
Mey, Jacob—Mary Talbot
1988  "Computation and the Soul: A Propos Dan Sperber & Deirdre Wilson's Relevance", Journal of Pragmatics 12: 743-789.
Mill, John Stuart
1843  A System of Logic.
[1970]  [Reprinted London: Longman.]
Millikan, Ruth
1984  Language, Thought and Other Biological Categories. New Foundations for Realism. Cambridge, Mass.: MIT Press.
Mitchell, Robert W.
1986  "A Framework for Discussing Deception", in: R. W. Mitchell—N. S. Thompson (eds.), Deception: Perspectives on Human and Nonhuman Deceit. Albany, N.Y.: SUNY Press, 3-40.
Mithun, Marianne
1994  "The Implications of Ergativity for a Philippine Voice System", in: Barbara Fox—Paul Hopper (eds.), 247-277.
Moens, Marc
1987  Tense, aspect and temporal reference. [Unpublished PhD thesis, University of Edinburgh.]
Montague, Richard
1970a  "English as a Formal Language", in: Richard Montague 1974, 188-221.
1970b  "Universal Grammar", in: Richard Montague 1974, 222-246.
1974  Formal Philosophy: Selected Papers of Richard Montague, ed. by Richmond H. Thomason. New Haven: Yale University Press.
Mortensen, Arne Thing
1992  Sprogligt håndværk. Essays om beskrivelser og kognition. Roskilde: Roskilde Universitetscenter.
Mourelatos, Alexander P.
1981  "Events, Processes and States", in: Philip Tedeschi—Anne Zaenen (eds.), 191-212.
Nagel, Thomas
1974  "What Is It Like to Be a Bat?", Philosophical Review LXXXIII, 4: 435-450.
Nedergaard Thomsen, Ole
1991  "An Overview of the Fundamentals of a Functional-Pragmatic Theory of Grammatical Structure", Copenhagen Working Papers in Linguistics 1: 1-53.

1992  "Unit Accentuation as an Expression Device for Predicate Formation. The Case of Syntactic Noun Incorporation in Danish", in: M. Fortescue—P. Harder—L. Kristoffersen (eds.), 173-229.
Newell, Allen—Herbert A. Simon
1976  "Computer Science as empirical inquiry: symbols and search", Communications of the Association for Computing Machinery 19: 113-26.
Newmeyer, Frederick
1992  "Iconicity and generative grammar", Language 68, 4: 756-96.
Nieuwint, Pieter
1986  "Present and future in conditional protases", Linguistics 24: 371-392.
Nunberg, Geoffrey
1978  The Pragmatics of Reference. Bloomington, Indiana: Indiana University Linguistics Club.
Nuyts, Jan
1992  Aspects of a Cognitive-Pragmatic Theory of Language. On Cognition, Functionalism, and Grammar. Amsterdam: John Benjamins.
1993  "Representation and communication: Searle's distinction revisited", Journal of Pragmatics 20: 591-97.
1994  Epistemic Modal Qualifications. (Antwerp Papers in Linguistics 81.) Antwerp: University of Antwerp.
Nuyts, Jan—Machtelt Bolkestein—Co Vet (eds.)
1990  Layers and levels of representation in language theory. A functional approach. Amsterdam: John Benjamins.
Nørgård-Sørensen, Jens
1992  Coherence Theory. The Case of Russian. (Trends in Linguistics 63.) Berlin: Mouton de Gruyter.
Palmer, Frank Robert
1974  The English Verb. London: Longman.
1987  The English Verb. Second edition. London: Longman.
Pawley, Andrew K.—Frances H. Syder
1983  "Two puzzles for linguistic theory: nativelike selection and nativelike fluency", in: Jack C. Richards—Richard W. Schmidt (eds.), Language and Communication. London and New York: Longman, 191-226.
Peirce, Charles Sanders
1931  Collected Papers, ed. by Charles Hartshorne—Paul Weiss. Cambridge, MA: Harvard University Press.
Penrose, Roger
1989  The Emperor's New Mind: Concerning computers, minds, and the laws of physics. Oxford: Oxford University Press.


Pepperberg, Irene M.
1991  "A communicative approach to animal cognition: A study of conceptual abilities of an African grey parrot", in: C. A. Ristau (ed.), Cognitive Ethology: The Minds of Other Animals. Hillsdale, NJ: Lawrence Erlbaum, 153-186.
Perkins, Michael
1983  Modal expressions in English. London: Frances Pinter.
Peters, Stanley—R. W. Ritchie
1973  "On the generative power of transformational grammar", Information Sciences 6: 49-83.
Pinborg, Jan
1972  Logik und Semantik im Mittelalter. Ein Überblick. Stuttgart: Frommann-Holzboog.
Pinker, Steven
1989  Learnability and Cognition: the acquisition of argument structure. Cambridge, Mass.: MIT Press.
1994  The Language Instinct. How the Mind Creates Language. New York: William Morrow and Company.
Plato: see Fowler, H. N. and Hare—Russell (eds.).
Plunkett, Kim—Chris Sinha
1992  "Connectionism and developmental theory", British Journal of Developmental Psychology 10: 209-254.
Plunkett, Kim—Chris Sinha—Martin F. Møller—Ole Strandsby
1992  "Symbol Grounding or the Emergence of Symbols? Vocabulary Growth in Children and a Connectionist Net", Connection Science 4: 293-312.
Popper, Karl
1972  Objective Knowledge: An evolutionary approach. Oxford: The Clarendon Press.
Popper, Karl—John C. Eccles
1977  The Self and Its Brain.
[1981]  [Reprinted Berlin: Springer.]
Poulsen, Arne
1980  "Perceptionens erkendelsesinteresse og det praktiske grundlag", in: P. Harder—A. Poulsen (eds.), 47-57.
Prebensen, Henrik
1988  "Morlille kan ikke flyve. Natursproglig semantik og datamater", Psyke og Logos 2: 287-306.
Premack, David—Ann James Premack
1983  The Mind of an Ape. New York: W. W. Norton & Co.


Prior, Arthur N.
1967  Past, Present and Future. Oxford: Clarendon Press.
Putnam, Hilary
1960  "Minds and Machines", in: Sidney Hook (ed.), Dimensions of Mind. New York: SUNY Press, 148-79.
[1975]  [Reprinted in Putnam 1975a, 362-385.]
1975a  Mind, Language and Reality. Philosophical Papers, vol. 2. Cambridge: Cambridge University Press.
1975b  "The Meaning of 'Meaning'", in: Putnam 1975a, 215-271.
1980  "Models and Reality", Journal of Symbolic Logic 45: 464-82.
1981  Reason, Truth and History. Cambridge: Cambridge University Press.
1983  Realism and Reason. Philosophical Papers, vol. 3. Cambridge: Cambridge University Press.
1988  Representation and Reality. Cambridge, MA: MIT Press.
Pylyshyn, Zenon W.
1984  Computation and Cognition: Toward a Foundation for Cognitive Science. Cambridge, Mass.: MIT Press.
Quine, Willard Van Orman
1960  Word and Object. Cambridge, Mass.: MIT Press.
Quirk, Randolph—Sidney Greenbaum—Geoffrey N. Leech—Jan Svartvik
1985  A Comprehensive Grammar of the English Language. London and New York: Longman.
Rawls, John
1955  "Two concepts of rules", Philosophical Review 64: 3-32.
Reichenbach, Hans
1947  Elements of Symbolic Logic. New York: The Free Press.
[1966]  [Reprinted.]
Rijkhoff, Jan
1988  "A typology of operators", Working Papers in Functional Grammar 29.
1990  "Explaining Word Order in the Noun Phrase", Linguistics 28: 5-42.
1992  The Noun Phrase. A Typological Study of its Form and Structure. Amsterdam: University of Amsterdam.
Rijksbaron, Albert
1986  "The pragmatics and semantics of conditional and temporal clauses. Some evidence from Dutch and Classical Greek", Working Papers in Functional Grammar 13.
Rohrer, Christian
1986  "Indirect discourse and 'Consecutio Temporum'", in: Vincenzo Lo Cascio—Co Vet (eds.), 79-98.


Rorty, Richard
1967  (ed.) The Linguistic Turn. Recent essays in philosophical method. Chicago: Chicago University Press.
1979  "Transcendental arguments, self-reference and pragmatism", in: Peter Bieri—Rolf-P. Horstmann—Lorenz Krüger (eds.), Transcendental Arguments and Science. Dordrecht: Reidel, 77-103.
1980  Philosophy and the Mirror of Nature. Oxford: Basil Blackwell.
Rosch, Eleanor—Carolyn B. Mervis
1975  "Family Resemblances: Studies in the Internal Structure of Categories", Cognitive Psychology 7: 573-605.
Rosch, Eleanor—Carolyn B. Mervis—Wayne D. Gray—David M. Johnson—Penny Boyes-Braem
1976  "Basic Objects in Natural Categories", Cognitive Psychology 8: 382-439.
Rose, Steven
1980  "The harmonics of holism", Times Literary Supplement, Nov. 21, 1314.
Ross, John Robert
1970  "On declarative sentences", in: Roderick A. Jacobs—Peter S. Rosenbaum (eds.), Readings in English Transformational Grammar. Waltham: Ginn, 222-272.
Ross, W. D. (= Sir David)
1908  The Works of Aristotle Translated into English, Vol. VIII. Metaphysica. Oxford: Oxford University Press.
Rubin, Edgar
1915  Synsoplevede figurer: Studier i psykologisk Analyse. København og Kristiania: Gyldendalske Boghandel—Nordisk Forlag.
Rudzka-Ostyn, Brygida (ed.)
1988  Topics in Cognitive Linguistics. Amsterdam: John Benjamins.
Rumelhart, David E.—James L. McClelland—The PDP Research Group
1986  Parallel Distributed Processing. Explorations in the Microstructure of Cognition. Cambridge, MA: MIT Press.
Russell, Bertrand
1905  "On Denoting", in: Ronald C. Marsh (ed.), 39-56.
1918  "The Philosophy of Logical Atomism", in: Ronald C. Marsh (ed.), 177-281.
1919  "On propositions: What they are and how they mean", in: Ronald C. Marsh (ed.), 285-320.
1920  Introduction to Mathematical Philosophy. London: Allen & Unwin.
[1952]  [Extract from chapter XVI reprinted in Leonard Linsky (ed.), 95-108.]
1924  "Logical Atomism", in: Ronald C. Marsh (ed.), 323-343.
1950  "Logical Positivism", in: Ronald C. Marsh (ed.), 367-382.


Ryle, Gilbert
1949  The Concept of Mind. London: Hutchinson.
Salone, Sukari
1983  "The pragmatics of reality and unreality conditional clauses in Swahili", Journal of Pragmatics 7, 3: 311-324.
Sandra, Dominiek—Sally Rice
1995  "Network analysis of prepositional meaning", Cognitive Linguistics 6, 1.
Sandström, Görel
to appear  "The temporal interpretation of when-clauses in narratives", in: Lars Heltoft—Hartmut Haberland (eds.), Proceedings of the 13th Scandinavian Conference of Linguistics (January 1992). Roskilde: Roskilde University Centre.
Saussure, Ferdinand de
1916  Cours de linguistique générale.
[1968]  [Reprinted Paris: Payot.]
[1959]  [English translation: Course in general linguistics. New York—Toronto—London: McGraw-Hill.]
Schachter, Paul
1977  "Reference-related and role-related properties of subjects", in: Peter Cole—Jerrold M. Sadock (eds.), Grammatical Relations. (Syntax and Semantics 8.) New York: Academic Press, 191-226.
1983  "Explaining Auxiliary Order", in: Frank Heny—Barry Richards (eds.), 145-204.
Schegloff, Emanuel
1972  "Notes on a conversational practice: formulating place", in: David Sudnow (ed.), 75-119.
Schibsbye, Knud
1966  Engelsk Grammatik. Vols. 1-4. København: Naturmetodens Sproginstitut.
Searle, John R.
1965  "What is a Speech act?", in: Max Black (ed.), Philosophy in America. London: Allen & Unwin, 221-239. Also in: Searle 1971 (ed.), 39-53.
1969  Speech Acts. Cambridge: Cambridge University Press.
1971  (ed.) The Philosophy of Language. Oxford: Oxford University Press.
1979  Expression and Meaning. Cambridge: Cambridge University Press.
1980  "Minds, Brains and Programs", The Behavioral and Brain Sciences 3: 417-57.
1983  Intentionality. Cambridge: Cambridge University Press.
1984  Minds, Brains and Science. Harmondsworth: Penguin.


1986  "Meaning, Communication, and Representation", in: Richard E. Grandy—Richard Warner (eds.), 209-226.
1991  "Response: Meaning, Intentionality, and Speech Acts", in: E. Lepore—R. Van Gulick (eds.), 81-102.
1992  The Rediscovery of the Mind. Cambridge, MA: MIT Press.
Seyfarth, Robert M.—Dorothy L. Cheney
1992  "Meaning and Mind in Monkeys", Scientific American, December 1992, 122-128.
Shannon, Claude E.
1938  "A Symbolic Analysis of Relay and Switching Circuits", Transactions of the American Institute of Electrical Engineers 57: 1-11.
Siewierska, Anna
1991  Functional Grammar. London: Routledge.
1992  "Layers in FG and GB", in: M. Fortescue—P. Harder—L. Kristoffersen (eds.), 409-432.
1993  "Semantic functions and theta-roles: convergences and divergences", Working Papers in Functional Grammar 55: 1-21.
Sinha, Chris
1988  Language and Representation. London—New York: Harvester Wheatsheaf.
1993a  "On representing and referring", in: Richard A. Geiger—Brygida Rudzka-Ostyn (eds.), 227-246.
1993b  "Cognitive Semantics and Philosophy", Cognitive Linguistics 4, 1: 53-62.
Sinha, Chris—Tania Kuteva
to appear  "Distributed Spatial Semantics", Nordic Journal of Linguistics 18, 2.
Smith, Carlota S.
1978  "The Syntax and Interpretation of Temporal Expressions in English", Linguistics and Philosophy 2: 43-99.
Smolensky, Paul
1987  "The constituent structure of connectionist mental states: a reply to Fodor and Pylyshyn", The Southern Journal of Philosophy XXVI, Supplement, 137-161.
Spang-Hanssen, Ebbe
1983  "La notion de verbe auxiliaire", in: Michael Herslund—Ole Mørdrup—Finn Sørensen (eds.), 5-16.
Sperber, Dan—Deirdre Wilson
1986  Relevance. Communication and Cognition. Oxford: Blackwell.

Steedman, Mark J.
1982  "Reference to Past Time", in: Robert Jarvella—Wolfgang Klein (eds.), Speech, Place and Action. New York: John Wiley and Sons, 125-157.
Stern, Daniel N.
1985  The Interpersonal World of the Infant. New York: Basic Books.
Stjernfelt, Frederik
1992  Formens betydning. Katastrofeteori og semiotik. København: Akademisk forlag.
Strawson, Peter F.
1959  Individuals. London: Methuen.
1961  "Singular Terms and Predication", in: Strawson 1971, 53-74.
1967  (ed.) Philosophical Logic. Oxford: Oxford University Press.
1969  "Meaning and Truth", in: Strawson 1971, 170-189.
1971  Logico-Linguistic Papers. London: Methuen.
Studdert-Kennedy, Michael
1992  "Leap of faith: A review of Language and Species", Applied Psycholinguistics 13: 515-527.
Sudnow, David (ed.)
1972  Studies in Social Interaction. New York: The Free Press.
Swan, Michael
1980  Practical English Usage. Oxford: Oxford University Press.
Sweetser, Eve
1982  "Root and epistemic modals: causality in two worlds", in: Monica Macaulay—Orin Gensler—Claudia Brugman—Inese Civkulis—Amy Dahlstrom—Katherine Krile—Rob Sturm (eds.), Proceedings of the eighth annual meeting of the Berkeley Linguistics Society. Berkeley: Berkeley Linguistics Society, 484-507.
1988  "Grammaticalization and semantic bleaching", in: S. Axmaker—A. Jaisser—H. Singmaster (eds.), 389-405.
1990  From etymology to pragmatics. Metaphorical and cultural aspects of semantic structure. Cambridge: Cambridge University Press.
to appear  "Mental spaces and the grammar of conditional constructions", in: E. Sweetser—G. Fauconnier (eds.).
Sweetser, Eve—Gilles Fauconnier (eds.)
to appear  Mental spaces, grammar and discourse.
Swinney, David A.
1979  "Lexical Access during Sentence Comprehension: (Re)consideration of Context Effects", Journal of Verbal Learning and Verbal Behavior 18: 523-534.


Sørensen, Holger Steen
1964  "On the semantic unity of the perfect tense", English Studies 45 S: 74-83.
1967  "Meaning", in: To Honor Roman Jakobson. (Janua Linguarum, series maior XXXIII.) The Hague: Mouton, 1876-1889.
Talmy, Leonard
1985  "Force Dynamics in Language and Thought", in: William H. Eilfort—Paul D. Kroeber—Karen L. Peterson (eds.), Papers from the Parasession on Causatives and Agentivity at the Twenty-First Regional Meeting of the Chicago Linguistic Society (CLS 21, part 2), 293-337.
1988  "Force Dynamics in Language and Cognition", Cognitive Science 12: 49-100.
Taylor, John R.
1989  Linguistic Categorization. Prototypes in Linguistic Theory. Oxford: Oxford University Press.

Tedeschi, Philip—Anne Zaenen (eds.)
1981 Tense and Aspect. (Syntax and Semantics 14.) New York: Academic Press.
te Winkel, Lambert A.
1848 "Iets over de natuur en het gebruik van de wijzen der werkwoorden", Magazijn van Nederlandse Taalkunde 2: 116-130.
Thorsen, Nina
1983 "Standard Danish sentence intonation — phonetic data and their representations", Folia Linguistica XVII: 187-220.
Togeby, Ole
1993 Praxt. Pragmatisk tekstteori. Vols. 1-2. Århus: Aarhus University Press.
Tomlin, Russell S.
1991 "The management of reference in Mandarin Discourse", Cognitive Linguistics 2-1: 65-93.
1995 "Focal attention, voice, and word order. An experimental, crosslinguistic study", in: Michael Noonan—Pamela Downing (eds.), Word Order in Discourse. Amsterdam: John Benjamins, 521-528.
Traugott, Elizabeth
1989 "On the rise of epistemic meanings in English: an example of subjectification in semantic change", Language 65: 31-55.
Traugott, Elizabeth—Alice ter Meulen—Judy S. Reilly—Charles A. Ferguson (eds.)
1986 On Conditionals. Cambridge: Cambridge University Press.
Travis, Charles (ed.)
1986 Meaning and Interpretation. Oxford: Blackwell.


Turing, Alan
1936 "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, Series 2, 42: 230-265.
1950 "Computing Machinery and Intelligence", Mind LIX, No. 236: 433-460.
Twaddell, W. Freeman
1935 On Defining the Phoneme. (Language Monograph 16.) Also in: Martin Joos (ed.) 1956, 55-80.
Ulbæk, Ib
1989 Evolution, sprog og kognition. [Unpublished PhD dissertation, University of Copenhagen.]
Ullmann, Stephen
1962 Semantics. Oxford: Blackwell.
van der Auwera, Johan
1983 "Conditionals and antecedent possibilities", Journal of Pragmatics 7.3: 297-310.
1986 "Conditionals and Speech acts", in: E. C. Traugott—J. S. Reilly—A. ter Meulen (eds.), 197-214.
1990 Coming to terms. [Habilitation thesis, Universitaire Instelling Antwerpen.]
1992 "Free Relatives", in: M. Fortescue—P. Harder—L. Kristoffersen (eds.), 329-354.
van der Auwera, Johan—Louis Goossens (eds.)
1987 Ins and Outs of the Predication. (Functional Grammar Series 6.) Dordrecht: Foris.
Van Valin, Robert D.
1990 "Layered Syntax in Role and Reference Grammar", in: J. Nuyts—A. M. Bolkestein—C. Vet (eds.), 193-231.
Vendler, Zeno
1967 Linguistics in Philosophy. Ithaca, New York: Cornell University Press.
Vennemann, Theo
1973 "Explanation in Syntax", in: J. P. Kimball (ed.), 1-50.
Verkuyl, Henk J.—J. Anke Le Loux-Schuringa
1985 "Once Upon a Tense", Linguistics and Philosophy 8: 237-261.
Verschueren, Jef
1987 Pragmatics as a theory of linguistic adaptation. Wilrijk: IPrA Working Documents 1.
1994 Meaning and pragmatics. [Paper given at the 20th annual meeting of the Berkeley Linguistics Society, February 21, 1994.]


Vet, Co
1981 "Some Arguments against the Division of Time into Past, Present and Future", Antwerp Papers in Linguistics 23: 153-165.
1983 "From Tense to Modality", in: Alice ter Meulen (ed.), 193-206.
1984 "Is there any hope for the 'futur'?", in: Hans Bennis—W. U. S. van Lessen Kloeke (eds.), Linguistics in the Netherlands. Dordrecht: Foris, 189-196.
1986 "A Pragmatic Approach to Tense in Functional Grammar", Working Papers in Functional Grammar 16.
Vet, Co—Arie Molendijk
1985 "The Discourse Functions of the Past Tenses of French", in: V. Lo Cascio—C. Vet (eds.), 133-159.
Vikner, Sten
1985 "Reichenbach Revisited: One, Two or Three Temporal Relations?", Acta Linguistica Hafniensia 19: 81-98.
Vries, Lourens de
1992 "Notional and coded information roles", Working Papers in Functional Grammar 52: 1-20.
Wakker, Gerry
1992 Conditionals at different levels of the clause. [Paper given at the fifth international conference on Functional Grammar, Antwerp, 1992.]
Weinrich, Harald
1964 Tempus. Besprochene und erzählte Welt. Stuttgart: W. Kohlhammer.
Wekker, Herman C.
1976 The Expression of Future Time in Contemporary British English. Amsterdam: North-Holland.
Werth, Paul
to appear (a) "'Remote worlds' — the conceptual representation of linguistic would", in: Eric Pedersen—Jan Nuyts (eds.), Linguistic and Conceptual Representation.
to appear (b) Text Worlds: Representing Conceptual Space in Discourse. London: Longman.
Whorf, Benjamin Lee
1956 "Some verbal categories in Hopi", in: John B. Carroll (ed.), Language, Thought, and Reality. Cambridge, MA: MIT Press, 112-124.
Wiener, Norbert
1961 Cybernetics. Cambridge, MA: MIT Press.
Wierzbicka, Anna
1987 "Boys will be boys: 'radical semantics' vs. 'radical pragmatics'", Language 63: 95-114.
1988 The Semantics of Grammar. Amsterdam: John Benjamins.

1991 Cross-Cultural Pragmatics. The Semantics of Human Interaction. (Trends in Linguistics 53.) Berlin: Mouton de Gruyter.
Winograd, Terry
1976 "Toward a procedural understanding of semantics", Revue internationale de Philosophie 3, 117-118: 260-303.
Winograd, Terry—Fernando Flores
1986 Understanding Computers and Cognition: A New Foundation for Design. Norwood, New Jersey: Ablex.
Wittgenstein, Ludwig
1918 Tractatus logico-philosophicus. [Reprinted 1960, Frankfurt am Main: Suhrkamp.]
1953 Philosophical Investigations. Oxford: Basil Blackwell.
Wright, Crispin
1987 Realism, Meaning & Truth. Oxford: Blackwell.
Wright, Larry
1973 "Functions", The Philosophical Review LXXXII.2: 136-168.

Index of names

Alex 66, 115
Allan, Keith 30
Allen, Robert L. 320, 321, 325, 328, 329, 350, 355, 380, 391, 403, 425
Allwood, Jens 83, 127, 132, 136, 140, 212
Andersen, Henning 104
Aristotle 8-12, 15, 22, 23, 31, 90, 93, 155, 157, 195, 315, 334, 393, 506
Augustine of Hippo 393, 523
Austin, John L. 82-83, 212, 232, 240
Bach, Emmon 238
Bache, Carl 238, 252, 357, 488
Barthes, Roland 491
Bartsch, Renate 333, 477
Barwise, Jon 52, 449
Basbøll, Hans 187
Bateson, Gregory 51, 132, 513, 514
Behaghel, Otto 304, 421
Bennett, Jonathan 123
Bertinetto, Pier Marco 332, 401, 403
Bickerton, Derek 100, 218, 519
Bloomfield, Leonard 105, 170-173, 187, 317
Bolkestein, Machtelt 211, 234, 242, 294
Boyd, Julian 388
Brown, Gillian 285, 287
Brown, Penelope 146, 348
Bull, William E. 317-325, 337, 350, 355, 391, 398, 403, 406, 462, 523, 528, 529
Burge, Tyler 119
Butler, Christopher 302
Bühler, Karl 79, 212

Bybee, Joan 246-253, 294-296, 304, 337, 341, 342, 349-352, 377, 378, 384, 392, 499, 517, 527
Byrne, Ruth 224, 444
Carey, Kathleen 379
Carlson, Gregory 409
Carnap, Rudolf 25-30, 87, 175, 186
Chafe, Wallace 184, 188
Cheney, Dorothy L. 66, 267
Chomsky, Noam 41-43, 48, 56, 99, 102, 122, 170, 173-191, 195, 215-217, 220-224, 228, 238, 281, 320, 471, 473
Christensen, Johnny 10
Christensen, Niels Egmont 79
Churchland, Paul 60, 62
Clark, Eve 409
Clark, Herbert H. 409
Comrie, Bernard 248, 321-325, 332, 342, 350, 355, 361, 371, 384, 386, 395, 399-404, 434-437, 461, 499, 530
Cussins, Adrian 62
Cutrer, Michelle 388, 429, 432, 436, 444
Dahl, Östen 210, 242, 246, 247, 294, 323, 337, 341, 349-352, 364, 377, 378, 384, 386, 392, 417, 455, 472, 473, 484, 527
Davidsen-Nielsen, Niels 295, 349, 354, 357, 364, 369-372, 442, 469, 470, 525, 526, 529, 532
Declerck, Renaat 319, 339, 382, 425, 427, 435-437, 477, 530
Dennett, Daniel 46-47, 510
Descartes, René 13, 57


Dik, Simon 149, 163, 210-213, 226, 228-238, 244, 309, 520, 521
Diogenes Laertius 275
Dowty, David 437
Dummett, Michael 28, 57, 99-100, 122, 125, 350, 507
Durst-Andersen, Per 333, 334, 531
Dyvik, Helge 516
Edelman, Gerald 43, 65, 70, 93, 510, 512
Engberg-Pedersen, Elisabeth 305
Evans, Gareth 222
Fauconnier, Gilles 114, 429, 430, 439, 444, 445, 449, 530
Fillmore, Charles 31, 53, 195, 444, 456, 464, 516
Fink, Hans 84, 121, 132
Fischer-Jørgensen, Eli 167
Fleischmann, Suzanne 344, 350, 429, 483-487, 490, 491, 493, 495, 531
Fodor, Jerry A. 41-55, 61, 108, 182, 218, 222-224, 308, 508, 520
Foley, William A. 211, 224, 237, 252, 518, 519
Fortescue, Michael 211, 226, 252, 357, 467
Frege, Gottlob 18-24, 28, 30, 83, 130, 176, 506, 507
Gärdenfors, Peter 66, 105, 120, 206
Garfinkel, Harold 134
Gazdar, Gerald 87-88
Geeraerts, Dirk 52
Gernsbacher, Morton Ann 223
Gibson, James J. 47, 50-51

Givón, Tom 93, 104, 155, 211, 221, 225, 242, 245, 248, 275, 288, 292, 299-302, 304, 306, 338, 385, 523
Glismann, Otto 335
Goldberg, Adele 198
Goldsmith, John 334, 524
Goossens, Louis 294, 376
Gopnik, Alison 47, 430
Gould, Stephen J. 92, 154, 221, 512
Greenfield, Patricia 219-224, 270, 519
Gregersen, Frans 159, 169, 186, 247, 319, 514
Grice, H. Paul 85-87, 123, 133-146
Gödel, Kurt 27
Haberland, Hartmut 127, 281
Habermas, Jürgen 106
Haiman, John 189, 203, 303, 449
Halliday, Michael 212, 302, 303, 486
Harris, Zellig 173
Haspelmath, Martin 251, 521
Heidegger, Martin 72
Heim, Irene 241
Heine, Bernd 248
Heltoft, Lars 290, 300, 521
Hengeveld, Kees 211-213, 232-243, 251, 406, 518, 521
Heny, Frank 340
Herrnstein, Richard 66
Herslund, Michael 343, 524, 529, 530
Hjelmslev, Louis 89, 164, 167, 169-171, 181, 186-187, 190-191, 201, 208, 260, 434, 514-518, 522, 532
Hoffmeyer, Jesper 51


Hopper, Paul 248, 285, 297-299, 485
Horn, Laurence 30, 480
Hornstein, Norbert 392, 417, 462, 472-474, 531, 532
Hudson, Richard 260, 283
Isard, Stephan 111, 445, 463, 464
Itkonen, Esa 173, 175, 185
Jackendoff, Ray 43-44, 182-183, 198, 308, 422, 516
Jakobson, Roman 271, 341
Janssen, Theo A.J.M. 331, 335-339, 343, 429, 524
Janssen, Theo M.V. 232, 413
Jespersen, Otto 105, 181, 198, 245, 284, 316, 317, 319, 321-323, 325, 341, 398, 425, 434, 503, 526
Johnson, Mark 59-62, 72, 76
Johnson-Laird, Philip 13, 37, 66, 109-114, 203, 205, 224, 308-309, 444, 445, 449, 452, 510
Joos, Martin 343
Kamp, Hans 113, 392, 394, 445, 477-479, 532
Kant, Immanuel 13-14
Kanzi 270
Keenan, Edward L. 245
Keenan, Elinor Ochs 138
Keizer, Evelien 163, 237-239
Kempson, Ruth 31
Kierkegaard, Søren 70, 511
Kirsner, Robert 204, 273, 300, 331
Klein, Melanie 95
Klein, Wolfgang 377, 405, 414-421
Kock, Christian 102, 132, 208, 487
Kohut, Heinz 95
Kroon, Carolyn 242
Kruisinga, E. 427
Køppe, Simo 150-152, 189, 427
Lakoff, George 31, 53, 58, 72-73, 111-116, 334
Langacker, Ronald W. 53, 68, 69, 76, 105, 116, 128, 255-265, 272-275, 278, 280-284, 294, 306, 307, 332-335, 343-347, 351, 352, 362-367, 378-380, 383, 384, 387-389, 417, 466, 471, 483, 516, 522, 524, 526, 527
Leech, Geoffrey 85, 88, 189, 213, 241, 329, 330, 333, 337, 340, 344, 358, 359, 368, 380, 385, 440, 442, 452, 456, 483, 531
Leibniz, Gottfried Wilhelm 203, 517
Leirbukt, Oddleif 462
Leloux-Schuringa, Anke 355
Levinson, Stephen 87, 127, 146, 233, 348
Lewis, David 27, 42, 86, 87, 134, 233, 444, 508, 512, 531
Lyons, John 138, 236, 238, 324, 326, 344, 354, 361, 393, 516, 528

Mackenzie, Lachlan 163, 521
Madvig, Johan N. 316, 319
Mandler, Jean 64, 111
Marr, David 38, 48
Matthews, Peter 173
McCawley, James D. 144, 352, 354, 376
McCoard, Robert W. 329, 379-382
Mey, Jacob 127, 140, 141


Millikan, Ruth 96-98, 102-103, 129, 512, 522
Mithun, Marianne 245, 250
Moens, Marc 377, 477, 481
Molendijk, Arie 403, 477, 479, 491
Montague, Richard 29, 30, 42, 232, 443, 508, 515
Mortensen, Arne Thing 81
Mourelatos, Alexander P. 333
Nagel, Thomas 63
Nedergaard Thomsen, Ole 197, 281, 292
Newmeyer, Frederick 178, 515
Nieuwint, Pieter 453
Nunberg, Geoffrey 263
Nuyts, Jan 56, 88-89, 122, 211, 237, 308-309, 521
Nørgård-Sørensen, Jens 30, 293
Ockham, William of 34, 87
Pagliuca, William 248, 249, 294, 341, 342, 517
Palmer, Frank 329, 330, 368, 380, 450, 453
Peirce, Charles Sanders 104
Penrose, Roger 152, 510, 511
Perkins, Michael 353, 354, 364
Perkins, Revere 248, 249, 294, 341, 342, 517
Pinker, Steven 95, 178, 270
Plato 6-8, 11-15, 19-20, 26, 32, 80, 108, 129-132, 151, 156-157, 183, 195, 334, 505, 506, 512
Plunkett, Kim 117, 270
Popper, Karl 46, 94, 108, 130, 151
Premack, David 151, 271, 273, 522
Prior, Arthur N. 391-394, 400-403

Putnam, Hilary 27-28, 40, 44, 56-58, 76, 87, 97, 119, 129, 186, 310, 508, 510
Quine, Willard V.O. 24, 414
Quirk, Randolph 329, 330, 341, 361, 368, 374, 377, 384, 434, 440-442, 456
Reichenbach, Hans 319-322, 325, 329, 332, 350, 381, 398-404, 417, 470-472, 477, 478, 481, 501, 503
Reyle, Uwe 113, 392, 394, 477, 479, 532
Rice, Sally 74
Rijkhoff, Jan 280, 408, 518, 523
Rohrer, Christian 423, 443, 477, 478, 481, 532
Rorty, Richard 13, 15, 32, 57, 130
Rubin, Edgar 257
Rumelhart, David 61
Russell, Bertrand 12, 21-25, 30, 34, 393, 507
Sandra, Dominique 74
Saussure, Ferdinand de 150, 157-170, 204, 246, 256, 257, 305, 470, 514
Savage-Rumbaugh, Sue 270
Schachter, Paul 245
Schank, Roger 42
Schegloff, Emanuel 161, 163, 336, 476
Searle, John R. 13, 34, 44-49, 62, 67-71, 82-86, 89-98, 105, 118, 122-125, 133, 212, 284, 508, 509, 512
Shannon, Claude 49
Siewierska, Anna 198, 231, 518
Siger de Courtrai 33


Sinha, Christopher 47, 56, 117, 118, 218
Smith, Carlota 368, 400, 417, 471, 472
Smolensky, Paul 222
Sperber, Dan 85, 100, 139-146, 512
Stalnaker, Robert 392, 444
Stern, Daniel 95
Stjernfelt, Frederik 157
Strawson, Peter F. 14, 85, 86, 96, 283, 506
Stuart Mill, John 15, 28, 130
Swan, Michael 372, 494
Sweetser, Eve 59, 353, 354, 371, 443-445, 449, 450, 521
Sørensen, Holger Steen 217, 377
Talmy, Leonard 53, 59
Taylor, John R. 343, 344
te Winkel, Lambert 355
Thom, René 157
Thompson, Sandra 248, 285
Thorne, James P. 388
Togeby, Ole 54, 65, 305, 520
Tomlin, Russell 245, 288
Turing, Alan 37, 40, 41, 43, 45, 176
Twaddell, W. Freeman 172
Ulbæk, Ib 66, 108
Ullmann, Stephen 505
van der Auwera, Johan 240, 444, 445, 456
Van Valin, Robert 211, 224, 231, 237, 252, 518, 519
Vennemann, Theo 224
Verkuyl, Henk 355
Verschueren, Jef 56


Vet, Co 211, 238, 351, 354, 355, 357, 369, 371, 391, 400, 403, 413, 477, 479, 491, 524, 525
Vikner, Sten 321, 400
von Neumann, John 37
Weinrich, Harald 484, 494
Wierzbicka, Anna 203, 517
Williams, Raymond 121, 132
Wilson, Deirdre 85, 100, 114, 139-146, 512
Winograd, Terry 114, 189
Wittgenstein, Ludwig 13-14, 25, 58, 79-81, 105, 135, 206, 507
Woisetschlaeger, Erich 334, 524
Wright, Crispin 58
Wright, Larry 91

Index of subjects

abduction 104, 209, 292, 461
aboutness 5, 12, 18, 21, 24, 27-28, 35, 49, 50, 243; see also Intentionality
absolute vs. relative tense 323, 358, 403
accessibility 352, 429
adverbial of time 313, 320, 323, 377, 380, 399, 402-422, 431, 445, 462, 472, 501, 503
alarm calls 76, 267
algorithm 54, 60, 180, 423
ambiguity 179, 188, 189, 201, 204-207, 211, 215-217, 292-294, 321, 337, 365, 377, 385-386, 422, 429-443, 479-487, 516
anaphora 138-141, 288, 524
animal communication 76, 267-271
anterior, see perfect
anteriority 242-243, 377, 379, 381-385, 396, 397, 460, 524, 527, 531
application (time) 268, 271, 274, 280, 327-348, 350, 373, 383, 384, 389, 394-396, 403-405, 412-415, 419-421, 425, 429-438, 453-455, 469, 482-493, 500-503, 521, 528, 529
arbitrariness 55, 154-156, 165-170, 252, 304-310, 426
artificial intelligence 33-46, 51, 54, 176, 191, 209, 509, 510
artificial life 46
aspect 249, 357
autonomy 92, 127, 128, 149, 152-157, 173-191, 194-198, 221, 243, 246, 266, 297-310, 320, 471-474, 516, 520, 522

Background 58, 70-77, 96, 105, 115, 117, 133-135, 138, 511
backshift 384, 440-443
Bantu 242
base time 350, 352, 355-358, 362, 382-384, 395-398, 404, 500, 525
basic-level concept 53, 64
behaviourism 34-35, 39, 41, 48, 52, 55, 58, 61, 66, 76, 80, 102, 171-174, 267, 508
billiard-ball model 237, 259, 283
biology 4, 51, 89-92, 97, 103, 150, 151, 302, 324
black box 34, 58, 61, 176
bleaching 349, 356, 362, 378
bottom-up 149, 171, 190, 206, 212, 256-271, 277, 282-285, 396, 502, 518
categorical aheadness 354, 358, 371, 388, 458, 459, 503
causality 13, 43, 51, 88-93, 124, 125, 166, 450, 508, 509
centrality 179, 206, 322, 327-328, 339, 371
centrifugal principle 226, 467
Chinese 288, 318
Chinese room 44, 176, 299, 508
"classical" AI 33-52
classical universalism 10, 12, 18, 33, 315, 324, 330
coded vs. contextual information 114, 261-265, 285-288, 402-405, 410, 421, 423-424, 433-443, 476, 480, 482, 486, 488-492, 495, 502
cognition viii, 3-5, 33-96, 105-118, 182, 218-224, 227-228, 244, 270, 271, 314, 334, 483, 484, 499, 510-512, 530


Cognitive Grammar 149, 237, 255-266, 271-282, 285, 292, 296, 306, 307, 327, 344-347, 362-367, 378, 379, 387, 466
cognitive linguistics 33, 35, 52-56, 58, 68-78, 111, 116, 246, 261, 521
cognitive science viii, 3, 34-36, 41, 44, 48, 49, 61, 228, 445
coherence 209, 242, 301, 302
communication 76, 81-89, 100-108, 119-125, 130-146, 165, 200, 206, 208, 263, 266-271, 276-277, 308, 324-325, 359-361, 431, 475-489, 502, 513, 519, 524, 526, 532
communication vs. representation 96-97, 99-100, 104-105, 139-146, 275-285, 329, 448-451; see also functional-interactive vs. conceptual meaning
commutation 200, 207, 302, 307, 363, 439, 517
compatibility 136, 424, 425, 441
complete vs. defective expressions 275-276
compositionality 19, 116, 218, 222-224, 261, 264-267, 271, 327, 333, 373, 379, 383-386, 390, 396-398, 426, 441, 452, 453, 459, 466, 470, 479, 531
  partial compositionality 262, 383
computation 35-46, 48, 50, 54, 55, 60-62, 112, 464, 465, 509, 513
concept 10, 22-24, 35, 38, 46, 49, 60, 62, 64, 68-77, 105, 106, 115-122, 140, 144, 517
conceptualization 35, 64-65, 68-78, 108, 115, 125, 149, 210, 261-296, 317, 334, 338, 350, 351, 362, 386-390, 430, 501, 502, 513, 526, 530

conditionals 13, 28, 294, 427, 430, 443-465
connectionism 52-55, 60-62, 66, 117-120, 222
connotation 208-209, 339, 343, 488-491, 532
connotative reversal 208, 339, 343, 488
consciousness 45-48, 59, 60, 65, 67, 74-75, 89, 95-97, 115, 185, 511
consecutio temporum 170, 423, 424, 433, 435
constituent structure 195, 264
constitutive rules 84, 96, 102
construal 274, 432
constructions 198, 256, 262, 307, 327, 373, 378, 383, 395, 443-453, 474, 516-517, 526
content vs. expression 69, 166-170, 178-182, 187-188, 224-228, 237, 249, 260-261, 301-303, 306, 466-474
content syntax 149-150, 193-200, 210, 220, 224-228, 261-262, 285, 303-306, 326, 355-357, 364, 369, 386-391, 395-398, 404, 423-424, 432, 444, 454-455, 465-474, 529
context 285-288, 296, 306, 313-314, 333, 389, 394, 402, 404, 415, 431, 438
continuative perfect 217, 376-377
convention 85-86, 96, 306, 386
conversation analysis 161-164
coreference 238-241
correspondence 8, 11, 20, 33, 113, 195, 310
cosmos vs. chaos 6-8, 13, 80, 128-131, 151, 505
counterfactuals 443, 454-463
Creole 221, 242, 519


cross-linguistic 225, 244-247, 250-253, 299, 321-325, 337, 342, 344, 349, 375, 386, 521
cumulative model of time 379-383, 386, 407, 408, 418
current relevance 323, 326, 379-384
Danish 69, 75, 194, 204, 217, 227, 230, 231, 288, 356, 364-375, 389, 426, 442, 446, 458, 490, 499, 520, 524, 526, 531
Danish Functional Grammar viii, 305
de re vs. de dicto 433
declarative vs. interrogative 106, 132, 137, 140, 141, 212, 213, 232-237, 290-291, 303, 332, 341, 374, 386-388, 432, 453
declarative vs. procedural knowledge 112, 116
decomposition 42, 203
definite vs. indefinite 23, 230-231, 262, 274, 280, 288, 300, 325, 328-331, 340, 385, 403, 408, 415
deictic centre 325, 348, 362, 385, 394, 397, 429-438
deixis 271-275, 300, 325, 433
demonstrative 272, 289, 300, 338
dependence 153, 169, 258-261, 269, 275-288, 394, 425, 431, 446
dependent vs. autonomous 258-260
diachrony 242, 246-253, 306, 307, 342, 344, 350, 370, 375, 377, 383, 384, 499, 521
discourse 100, 113, 240-243, 262, 273, 280, 297-298, 301, 324, 340, 403-405, 410, 412, 439, 442, 446-448, 457, 475-494, 513, 529, 532
Discourse Representation Theory 113-114, 445, 477-482


distal 273, 331, 336, 343, 345, 412, 503
distribution 170-191, 224, 230-238, 243, 249, 317, 357, 369, 408, 466-471, 516, 517
do-construction 307
dualism 13, 20
E-language 43, 56, 175
echo question 291
elaboration 258-261, 282, 283, 288
embedding 266, 279, 314, 423-443
emergent grammar 297, 485-492
empiricism 13-14, 21, 24, 34, 39, 57, 84, 186, 506
encyclopedic information 53, 262
English 42, 74, 77, 194, 202, 204, 213, 220, 224, 245, 247, 251, 252, 270, 288, 290, 294-296, 299, 300, 306, 313-495, 518, 520, 522-524, 526, 530
environment 90-103, 118
epic preterite 484-491
epistemic 235, 294, 295, 353, 367, 371, 389, 443-465
epistemology 6, 8, 13-14
ergative 250-251
essential vs. accidental 10
eternal truths 341
ethnomethodology 134
event vs. state 333, 419-422, 441, 442, 478, 481
event time 319-323, 330-339, 346-347, 381, 384, 397-422, 453, 469, 477, 498
evolution 4, 90-98, 101, 125, 218-224, 266, 270, 273, 307, 354, 446, 460
existence 12, 22, 23, 393
experience (subjective, primary) 35, 52, 63-74, 94-95, 134-135


extension 19, 28, 43, 76, 81, 110, 112, 120, 296, 350
family resemblance 81, 206-207, 398, 518
feature-placing sentences 283
feedback 92, 95, 98, 103, 154, 381
fiction 324, 360, 482, 487-492, 532
figure vs. ground 257, 292
file (referential) 241, 338-339, 431, 432
finiteness 283-284
focus 293, 328, 335-337
force dynamics 53, 59, 75, 353, 389
formalization 15, 25, 36, 38, 43, 176-186
free indirect speech 358, 433, 434
French 351, 362, 368, 369, 371, 372, 384, 395, 416, 426, 427, 438, 468, 478-480, 491, 499
fronting 410, 422
function, definition of 88-91
function vs. basis 107, 326-328, 397, 402, 487, 488; see also base time
Functional Grammar 149, 211-212, 228-244, 251, 255, 280, 293, 299, 305, 308, 406, 409, 413, 518-521
function-based structure 92, 149-190, 255, 266, 275-282, 297, 305, 309, 314, 476, 498
functional view of meaning 79-125 (definition 101), 161, 214, 220, 255, 313, 327
functional-interactive vs. conceptual and referential meaning 149, 350, 386-390, 408-409, 414, 416, 423, 444, 448-451, 464-465
functionalism (computational) 40, 89, 508

functionalism (linguistic) viii, 79, 88-93, 103, 165, 178, 201, 242, 297-310, 325, 485, 490
future 28-29, 300, 315-318, 321-326, 349-376, 382, 386-390, 406-407, 418-421, 425-428, 435-438, 442, 443, 452-470, 498, 523-527, 529, 530
generative grammar 41-42, 53, 173-191, 201, 219, 228, 231, 255, 297, 301, 303, 471-474
generative semantics 184, 189
genetic basis for language 95, 221, 270
genetic code 51
German 194, 227, 367, 384, 389, 395, 416, 438, 462
Gesamtbedeutung 206
grammaticality 176, 186, 196, 417, 490
grammatic(al)ization theory 246-253, 295, 368, 377
Greek 273, 315, 327, 506, 524
greetings 77, 105, 106, 123, 132, 140, 271
grounding 272, 274, 280, 345, 363-366, 387
Grundbedeutung 206, 327, 344
Hawaiian 318
head-internal vs. head-external 105, 119, 129, 429
head-modifier relation 194, 210, 259, 293, 445
hearer-orientation 114, 121, 338
hierarchical relations 88, 152, 154, 194, 219-227, 260, 262, 265-268, 271, 278, 387-390, 391, 396, 499, 501, 502, 518, 523
holism 151
holophrase 163, 220, 266-271
hot news 345, 348, 420, 499


hypercorrection 122
IC (= if-clause in a conditional) 445-465, 517
iconicity 64, 66, 128, 169, 178, 203, 304-310, 410, 515
identified time 321, 328, 350, 356, 364, 373, 385, 403, 441, 443; see also definite vs. indefinite
I-language 43, 56, 175, 177
illocution 82-87, 122, 212-213, 232-243, 268, 271, 275, 276, 278, 289-291, 424, 425, 428, 432, 446, 448, 521
imperfective 246, 342, 346, 347; see also state
implicature 87, 133, 217, 236, 322
indirect speech 429-443
indirect speech act 103, 443
inferencing 285-296, 450, 459; see also process of interpretation
information 42, 49-52, 56, 139-146
innateness 218-224, 473
institutions 85, 164-168
instructional approach to meaning 113-114, 193, 210, 214-217, 226-227, 240-242, 274, 338, 341, 345, 385, 394, 397, 413, 419, 446, 453, 454, 466-468, 489
intention 65, 67, 71, 85-86, 95, 99, 106, 133-142, 218-220, 265-270, 514
Intentionality 35, 46-52, 56, 64-71, 75, 97, 133, 430, 439, 509-511
interaction v, vi, 56-60, 105-106, 118, 122-125, 127-130, 133, 142-146, 165, 190, 218, 265-270, 291, 314, 335, 348, 386-390, 501
internal paradigm 207-208, 348, 354, 365
International Phonetic Alphabet 252, 523
interpersonal 220, 233, 236, 268
interrogative 106, 140, 174, 212, 213, 233, 236, 268, 276, 287, 289, 290, 302, 303, 341, 374, 388, 424, 428, 446, 447, 453
intonation 194, 199, 293, 413, 523


interpersonal 220, 233, 236, 268 interrogative 106, 140, 174, 212, 213, 233, 236, 268, 276, 287, 289, 290, 302, 303, 341, 374, 388, 424, 428, 446, 447, 453 intonation 194, 199, 293, 413, 523 knowing-how vs. knowing-that 67, 90, 115, 117, Koyukon 252, 357 landmark 257, 258, 281, 282, 292, 293, 446 language-as-such, see langue language-of-thought 42, 54, 108-112, 222,520 language-specific vs. universal 243-253, 300, 357, 375, 395, 416, 468, 521, 529, 530 langue vi, 83, 159-167, 190, 240-243,246,256,261-263,269, 285, 298, 305, 308, 334, 475-477, 481, 482, 493-503, 532 Latin 121, 170, 242, 246, 315, 316, 324, 327, 368, 376 layered clause structure 149, 209-213,232-256,268,275-282, 288-296,386-391,405, 409,501, 518 learning 54, 63, 66, 67, 69, 99, 117, 118, 161, 177,221,495 levels-of-analysis 43, 150-153, 189, 190, 509 lexical item 85, 248, 268, 269, 304 lexicon 181, 197, 231, 260-261, 406, 516 linguistic power structure 120 linguistic turn 13, 15, 19, 21 logical semantics 15-32, 82, 87, 104, 113, 114, 391-396


lookahead time 350, 351, 356, 382, 387, 395, 418, 500, 529
MC (= main clause in a conditional) 445-465
macro-concept 120, 121
marked vs. unmarked 225, 250, 325, 341-343, 363, 365, 437, 469, 470, 486, 531
mathematics 15, 20-21, 36, 39, 49, 50, 88, 175, 177, 184, 347, 507
meaning, directionality of 107, 487
meaning and denotation 3-32
meaning and cognition 33-77
meaning, actual vs. potential 53, 204, 215, 269, 289, 354, 369, 371
memory 64-66, 112, 113, 381
mental content 46, 55-57, 67-73, 123, 125, 511
mental models 109-114, 130, 210, 214, 224, 308, 446-449, 457
mental spaces 114, 264, 344, 348, 429-445, 482-484, 503, 530
metalanguage 174-190, 203, 252, 515, 517, 518
metaphysics 11-32, 87, 392, 393, 505, 506, 509
modal verbs 251, 296, 363, 364
modality 195, 243, 326, 343-345, 352-355, 357, 363-372, 376, 387, 389, 429
model-theoretic semantics 19, 26, 97, 239, 508
modularity 53
morphology 199, 260, 324
motivation 53, 55, 56, 155, 165-170, 183, 275, 280, 281, 304-310, 425-428, 452, 459, 460, 473, 493, 494
motor routine 61, 67-69
mutual expectation 86

naming fallacy 183-185
narrative 242, 243, 305, 324, 358-361, 433, 442, 477-495, 500, 502
natural kind 28, 118, 129-130, 310, 349, 388
necessary-and-sufficient conditions 10, 116, 205
negation 6, 9, 211, 213, 274, 275, 278, 289, 293
networks 54, 55, 60, 61, 62, 116-122, 222-224, 513
neuro-cognitive 68, 510
neuron 61, 62, 94, 512
non-conceptual content 62, 76, 271-275, 371, 390, 436-443, 449
non-finite 284, 323, 356, 357, 363, 376, 382, 383, 386, 502, 525, 529
noun phrase 216, 226, 274, 279-281, 321
obligatoriness 307, 365, 374, 413-414
Occam's razor 200, 310, 429
one-form-one-meaning 207, 249
onoma and rhema 9, 278
ontology 6, 8, 11, 84, 98, 102, 123, 150-161, 171, 189, 190, 236, 276, 279, 315, 316, 336, 350, 351, 439, 470, 498, 505, 508, 509
operand 149, 211, 214, 215, 225, 237, 264, 276-288, 294, 326, 396-398, 408, 502
operator 16, 149, 211, 215, 230-237, 264, 276-288, 294, 326, 391-398, 408, 424, 428, 446, 502, 526
orientation (time or axis) 318, 320, 321, 323, 339, 398, 401-402, 437, 477, 479, 523, 528
overlapping 332, 350, 398-400, 403, 412-413, 470, 476

P* (the unreal point-of-application) 344, 348, 454-465, 531
p-definiteness 414-422
pain 47, 64, 69, 70, 81, 97, 507
panchronic 206, 248, 371
pancognitivism 35, 55-59, 75
paradigmatic relations 106, 168, 202, 207-208, 226, 247, 251-252, 268, 280, 289, 296, 327, 341-344, 352-357, 363, 369, 372, 375, 384, 385, 468
parole 161, 164, 165, 256, 261-265, 269, 288, 298, 308, 417, 481, 493, 502
passé antérieur 479, 480
passé simple 478, 479, 491, 532
past future 357-362, 366, 387, 389, 396, 399, 400, 404, 414, 526
past future perfect 358, 361, 396, 399, 400, 404, 414
past perfect 319, 320, 322, 396, 399, 403, 404, 421, 440-443, 459, 462, 470, 477, 479, 480, 503, 527, 530
past tense 102, 172, 202, 204, 225, 227, 268, 274, 280, 313-348, 358, 359, 363-366, 373, 384, 385, 396, 404, 406, 412, 419, 420, 438, 454, 457, 463, 467, 469, 471, 484, 494, 524-528
peak-foundation model 127-128, 132, 502
perception 6, 8, 21-23, 26, 47, 50, 61, 64-67, 69, 75, 76, 213, 235, 267, 333, 507, 510
perception, categorial 64, 66, 69, 76, 267
perception and conceptualization 64-65
perfect 217, 242-246, 315, 318-326, 345, 355, 358, 376-390, 395-400, 403, 404, 407, 414-416, 429, 440-442, 459-463, 466-470, 477-480, 499-503, 524-528
perfective 246, 318, 334, 346, 384, 479, 491, 524; see also event
performatives 82-85, 232-236, 287, 388, 512
performative hypothesis 232, 287
phonology 69, 89, 183, 188, 199-200, 207, 261, 306, 468
physical symbols hypothesis 40, 42-43, 61
physics, see science
pidgin 211, 221, 298, 519
planning 65-66, 218-222, 446, 519
point-of-application 268, 327-345, 348, 350, 358, 364, 373, 384, 385, 429, 438, 453-455, 482, 486-490, 493
point-of-view 324, 350, 359-361, 429-443, 478, 484-489, 500
pointing 273, 295, 325, 328, 331, 332, 335-339, 380, 395, 413
politeness 146, 299, 348, 517, 524
polysemy 89, 205, 353, 367-376, 499
positivism 24, 52, 171, 175-176, 190, 382, 508, 515
postmodernism 13, 32, 57, 130, 131
pragmatics vii, viii, 4, 8, 29, 30, 56, 59, 77, 80-88, 103, 105, 124-146, 200, 208, 228-232, 256, 281-282, 291, 301, 304, 329, 352, 354, 410, 414-422, 426, 433, 476, 497, 502, 525
predicating 83, 283-284, 292-293, 406-409, 411
prediction 388-390, 419, 436, 450, 452
present tense 9, 202, 268, 281, 322-348, 356, 365, 366, 373, 383, 397, 404, 415, 451-454, 482-487, 489, 492-494, 526, 530, 531
prestige 121-122
presupposition 71, 106, 208, 487
procedural semantics 107-118, 227, 445, 463
process vs. product 111-114, 214-218, 223-224, 227-228, 242, 262-265, 419, 432, 445-446, 448, 481, 495
process of interpretation 110-121, 129, 134, 138, 214-218, 223, 261-265, 285-288, 314, 338, 348, 397, 402, 420, 423-495, 500, 501
profile 257-263, 293, 363, 382
progressive 246, 300, 315, 371, 372, 376, 478, 484, 490, 521, 524
proper name 23, 96, 230
proposition 9, 22, 71, 82-87, 108, 112, 113, 124, 195, 212-213, 233-239, 268-277, 289, 301, 302, 326, 353, 354, 391-396, 412-414, 424-428, 444-453, 511, 528, 529
prototype 53, 117, 119, 205-208, 332, 450
Pygmalion effect 173, 176
qualia 64, 68, 70, 78, 95
rationalism 13-14, 39, 57, 506
real value 97-98, 129
realism 26, 57-58, 97, 129
reality 6, 18-32, 37, 49, 55-59, 71-72, 75, 82-84, 121, 233, 318, 344, 345, 353, 354, 361-366, 429-430, 438-439, 451, 454-463, 483, 500, 523
reality-based conditionals 451-457
recipe 210, 214-219, 223-224, 239, 261-264, 294, 302, 326, 395, 466-467, 519

reference 18-24, 111-113, 129-130, 238-241, 262, 273, 279-280, 288, 313, 322, 391-504
reference failure 112, 129, 137
reference time 319-323, 342, 381, 398-404, 416, 469, 477-482, 498, 501
relative clause 425
Relevance Theory 139-146
reliance 58, 67, 71-72, 77, 96, 106, 115, 135, 138
representation 42-44, 47, 61-78, 107-114, 122-125, 127-129, 135, 139-141, 212-214, 309, 445, 457, 477, 480, 481, 491, 497, 510, 511, 513
rules 14, 16-18, 25-27, 41, 42, 46, 48, 54, 67, 84-87, 96, 102, 111, 131, 170-190, 226-231, 256, 288, 419, 422, 423, 432-437, 440-442, 468, 492, 493, 508, 530
schizophrenia 121, 136
science 3, 5, 13-15, 24-32, 34-36, 39, 41, 44, 46, 48, 49, 55-57, 59, 61, 89, 110, 131, 149-151, 171, 175, 184-185, 228, 445, 473, 475, 507, 509, 510
scope 209-213, 216, 221-227, 237, 242-243, 252, 277-296, 304, 326, 342, 354-356, 386-398, 400, 402, 421, 422, 424, 426, 444, 445, 453, 454, 466, 474, 498, 501, 502, 523, 525, 531
self 65, 95
sense and reference 19, 20, 82, 83
sense-making 54, 133-146, 217, 221, 240, 263, 414-422, 426, 495
sense, the principle of 136-138, 143-146, 217, 236, 279, 314, 355-359, 394, 395, 411, 414-422, 482, 489

Index of subjects

sequence of tenses, see consecutio temporum shifters 271, 331 simulation, see computation situational boundness 265-288 slot vs. filler 277-278, 290, 428 social entities 118-121, 135 sociolinguistics 121, 362 soliloquy 100, 125 Spanish 249, 288, 317, 318 speech community 101, 119, 120, 129, 162, 165, 170, 251, 256, 298, 482 speech time 319, 323, 384, 469, 530; see also utterance time squinting grammar 245, 368, 522 state 420, 442 state-of-affairs 212-218, 236-237, 271-283, 326-335, 339,342-350, 356-366, 372-377, 382-388, 392-420,436,441,442,453-456, 466, 467, 478-480, 486, 501 stimulus control 65, 77, 266, 269-271 Stoics 181, 275 strong AI 44-54, 176, 509, 510 structural complexity 45, 46, 49-50, 102, 114, 157, 242-243, 269,398 structuralism 52, 89, 149-190, 244, 247, 297-310, 317-319, 324-325, 497, 498 structure 49, 88, 92, 149-310, 341-343, 349, 355, 367-376, 386-390, 396-398, 423, 433, 440-442, 456, 465-474, 498-499 component-based 154-157, 171 vs. substance 153, 200, 304-310, 456 integrative view of 150, 157-159, 164, 169
subclause 293, 423-426, 428-430, 432, 465, 509, 530
subject (human) 25, 69, 102, 157, 228, 318, 430, 513, 523
subject (grammatical) 6, 12, 22, 154, 195, 198, 216, 245-248, 278, 284, 288, 292-296, 364, 379, 506, 515
subjectification 272, 351, 363-364, 379, 386, 387
subjectivity 95, 274, 350, 389
subjunctive 249, 438, 439, 463-465
subordinator 314, 428, 444-448, 454, 464, 529
substance 9, 11-12, 40, 152, 165-170, 193, 198-205, 215, 218, 236, 243-251, 257, 299-303, 309, 322, 342, 343, 393, 405, 410, 456, 498, 506, 508, 514, 515, 517, 522, 528
  component substance 152-167, 173, 189
  functional substance 156, 190, 300
  semantic substance 201, 204, 247, 248, 299, 322, 356, 358, 364, 368, 369, 375, 377, 390, 405, 429, 456
surface 7, 183-189, 196, 230, 260, 301, 308, 505
survival 60, 90-101, 130, 154
Swedish 378
syntax vi, 25, 29-30, 37, 42, 55, 59, 110, 127, 149, 150, 177-183, 193-200, 209-232, 245, 252, 260-271, 275-310, 326, 355, 358, 367, 376, 386-391, 421, 463-474, 497-501, 517-522
syntax and semantics 178, 180, 182, 215-217, 229-232, 281-282, 308, 468-474, 497, 521
task structure 210, 223, 224, 262-264, 395, 463, 466-468
tense vi, viii, 3, 9, 28, 102, 129, 149, 158, 170, 172, 242-243, 248-250, 268, 271, 274, 278-281, 300, 313-504, 523-532
  deictic tense 212, 236, 243, 268, 278, 313-348, 355-357, 369, 373-376, 383, 385, 394-395, 402-406, 429-443, 490, 498, 502
tense logic 392-396, 528
tense shifting 484-495
text 42, 241-243, 324, 361, 401, 445, 477-495, 532
thermostat 63, 90
third realm 20, 130, 132
time 70-71, 313-314, 317, 344, 352, 354, 381, 392-393, 404-422, 488
time adverb, see adverbial of time
time of reckoning 382, 383, 385, 395, 407, 416-417, 441, 442, 459, 500, 503
time reference 316-326, 389, 391-422, 425, 430, 434, 441, 452, 486, 498, 499
tool use 219, 264, 267, 275-279, 282-286, 307, 502
top-down 120, 149, 152, 154, 206, 253, 265-271, 282, 328, 396, 398, 502
topic 163, 205, 297, 248, 278-281, 292, 299, 405, 449, 503, 523
topic time 404-422, 442, 445, 481, 482, 500, 501, 525, 529
traditional grammar 186, 242, 281, 315, 317, 324

trajector 257, 258, 263, 281-283, 288, 292-293, 446
truth 7, 10, 15, 16, 20-32, 33, 37, 42, 44, 53, 58, 65, 80, 86, 113, 114, 122, 129-132, 186, 213, 232, 233, 277, 284, 332, 334, 337, 350, 352, 353, 364, 388, 392-396, 432, 507, 509, 510, 513, 528, 530
truth conditions 42, 86, 113-114, 130, 132, 199, 364, 392-396, 444, 498, 513
typology 228, 245-246, 299
underlying 7, 8, 15, 160, 183-190, 224-244, 278, 299, 301, 308, 316, 399, 468, 473-474, 505, 520
universals 243-255, 309, 325, 333, 357, 395, 426, 518, 520; see also cross-linguistic
utterance meaning 87, 135, 268, 269, 279, 286, 502
utterance time 328, 336, 341, 346, 383, 387, 402, 500, 501
valence 258, 261, 262, 284
variability v, 8, 119-122, 165, 204, 206, 299, 386
variable 22, 238-241, 413, 468, 503
variation 8, 172
virtual governor 120, 131
volition 351, 352, 356, 368, 370, 372
words and concepts 60, 76, 117, 119
world of discourse 240-242, 280, 281, 340, 403, 412, 439, 487-495
zero 202, 204, 209, 227, 288, 341, 375