The Elements of Mathematical Semantics (ISBN 9783110871432, 9783110129571)



English Pages 272 [276] Year 1992


The Elements of Mathematical Semantics

Trends in Linguistics Studies and Monographs 66

Editor

Werner Winter

Mouton de Gruyter Berlin · New York

The Elements of Mathematical Semantics

by

Maurice V. Aldridge

Mouton de Gruyter Berlin · New York

1992

Mouton de Gruyter (formerly Mouton, The Hague) is a Division of Walter de Gruyter & Co., Berlin.

® Printed on acid-free paper which falls within the guidelines of the ANSI to ensure permanence and durability.

Library of Congress Cataloging in Publication Data

Aldridge, M. V. (Maurice Vincent) The elements of mathematical semantics / by Maurice V. Aldridge. p. cm. — (Trends in linguistics. Studies and monographs ; 66) Includes bibliographical references and index. ISBN 3-11-012957-4 (acid-free paper) 1. Semantics — Mathematical models. 2. Mathematical linguistics. 3. Language and logic. 4. Pragmatics. 5. Categorial grammar. I. Title. II. Series. P325.5.M36A43 1992 401/.41/0151 — dc20 92-23202 CIP

Die Deutsche Bibliothek — Cataloging in Publication Data

Aldridge, Maurice V.: The elements of mathematical semantics / by Maurice V. Aldridge. — Berlin ; New York : Mouton de Gruyter, 1992 (Trends in linguistics : Studies and monographs ; 66) ISBN 3-11-012957-4 NE: Trends in linguistics / Studies and monographs

© Copyright 1992 by Walter de Gruyter & Co., D-1000 Berlin 30 All rights reserved, including those of translation into foreign languages. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Disk Conversion: D. L. Lewis, Berlin. — Printing: Gerike GmbH, Berlin. Binding: Lüderitz & Bauer, Berlin. — Printed in Germany

Acknowledgements

In the preparation of this book, I was generously supported by a grant from the Human Sciences Research Council as well as by the University of the Witwatersrand. I have received invaluable assistance in the form of academic advice and technical help from many colleagues and friends, among whom I owe a special debt to Professor J. Heidema of the Department of Mathematics at Rand Afrikaans University. Above all, my thanks go to Christine Aldridge, whose tireless aid in the technical preparation of the book has been beyond reckoning, and to Oliver Aldridge, who was responsible for dragging me into the computer age. Johannesburg, June, 1992

Maurice V. Aldridge

I dedicate this book to St. Dunstan's. St. Dunstan's is an organisation devoted to the care and assistance of the military blinded and, without their endless support, I would never have been able to set out upon the road of scholarship, let alone become, in some small degree, a linguist.

Contents

1 Some topics in semantics  1
1.1 Aims of this study  1
1.2 Mathematical linguistics  1
1.3 A functional view of meaning  2
1.4 Truth conditions and truth values  8
1.5 Counterfactuals  11
1.6 Compositionality and syntax  11
1.7 Pragmatics  13
1.8 Propositional relations  14
1.9 Ambiguity  16
1.10 Formal logic and natural languages  17
1.11 Universal semantics  18

2 Background notions from mathematics  20
2.1 Purpose of this chapter  20
2.2 Sets  20
2.3 The cardinality of a set  23
2.4 Product sets  24
2.5 Relations and functions  25
2.6 Equivalence relations  27
2.7 Boolean algebras  29
2.8 Isomorphisms and homomorphisms  32
2.9 Effective processes  33

3 Background notions from formal logic  37
3.1 Scope of this chapter  37
3.2 The calculus of propositions  37
3.3 The nature of propositions  41
3.4 Monotonicity  42
3.5 The predicate calculus  43
3.6 Modal logic  50
3.7 Lambda abstraction  58
3.8 Montague's intensional logic  61

4 Vagueness and ambiguity  78
4.1 Background  78
4.2 Ambiguity  80
4.3 Structural ambiguity  85
4.4 De dicto vs de re  91
4.5 Intensions and temporal quantifiers  98
4.6 Modalities  100
4.7 Regimentation  102

5 Logical form in binding theory  104
5.1 Levels of representation  104
5.2 Logical form  105
5.3 Wellformedness in binding theory  107
5.3.1 Some typical problems of coreference  107
5.3.2 Some conditions on coreference  108
5.3.3 Wh-questions and quantifiers  116
5.4 Case  120
5.5 Logical form in semantic representation  125

6 Pragmatics  126
6.1 Definition of pragmatics  126
6.2 Indices  127
6.3 Contextual properties  128
6.4 Performatives  132
6.5 Fuzziness  137
6.6 Presuppositions  137
6.7 Types of semantic presupposition  139
6.8 Truth-value gaps  147
6.9 Primary and secondary presuppositions  152
6.10 Presuppositions and questions  153
6.11 Pragmatic presuppositions  155

7 Categorial grammar  162
7.1 Categorial grammar  162
7.2 A severely limited grammar  163
7.3 Some category assignments  166
7.3.1 Nominals and intransitive verbs  167
7.3.2 Nominals and transitive verbs  168
7.3.3 Some verb types  170
7.3.4 Wh-words  174
7.3.5 Common nouns, quantifiers and of  176
7.3.6 Mass and abstract nouns  181
7.3.7 Adverbs and adjectives  182
7.3.8 Relatives and appositives  185
7.3.9 Comparison  186
7.3.10 Intensifiers  190
7.3.11 Equatives  191
7.3.12 Degree complements  193
7.4 Abbreviations  194
7.5 Spelling-out rules  196
7.6 The lexicon  199

8 Semantic rules  201
8.1 Semantic rules  201
8.2 Logical and nonlogical connectives  203
8.3 Nominals  214
8.3.1 Unitary nominals  214
8.3.2 Common nouns and intransitive verbs  215
8.3.3 Logical quantifiers  217
8.3.4 Proportional quantifiers  222
8.3.5 Partitives and genitives  226
8.4 Some verb types  227
8.5 Wh-words  234
8.6 Adjectives  235
8.7 Adverbs  238

Bibliography  243

Index  251

Chapter 1

Some topics in semantics

1.1 Aims of this study The central preoccupation of this study is semantic. It is intended as a modest contribution to the development of a general theory of meaning as presented in certain proposals made over the last two decades and principally associated with the work of Richard Montague (Thomason 1974). I choose to give this approach the neutral name "mathematical semantics" but, in keeping with common usage, it could also be called "Montague semantics" or "Montague grammar" in recognition of the central role which Montague's writings play in its foundation. In this chapter, I present a preliminary survey of the kinds of semantic phenomena with which I shall be concerned. However, since the general approach adopted for the solution of these basic questions constitutes a subdiscipline of mathematical linguistics, it is appropriate to begin with a few informal remarks on that more general enterprise.

1.2 Mathematical linguistics In this book, mathematical linguistics is assumed to be a noncalculative discipline primarily concerned with the formal modelling of language. It is noncalculative in the sense that calculations of proportion play no role in its methodology and it is mathematical in that it makes extensive use of certain subdisciplines of mathematics, notably, set theory and formal logic. As the approach is noncalculative, it involves no discussion of statistical linguistics, word counts, probability grammar, etc. The essence of mathematical linguistics is, in my view, most clearly set forth in Gladkij and Mel'cuk (1969). Those authors present an imaginary situation in which a mathematician resolves to construct a model of human linguistic behaviour. His observations lead him to establish, on the one hand,


a plane of content - a set of meanings - and, on the other, a plane of expression - a set of texts, or utterances. Our mathematician, further, observes that the set of meanings corresponds to the set of texts in a nonrandom fashion. For some texts there are certain meanings and for others one meaning only and vice versa. This observation leads him to hypothesise the existence of a set of rules which map the one set into the other and it is the unambiguous and explicit formal statement of these rules which becomes the central goal of his study. It seems plausible to equate the concept, Language, with the system of rules itself. This equation is reminiscent of de Saussure's approach (1915), though it is not entirely compatible since he did not see form and expression as necessarily distinct. Whether or not we are prepared to accept the equation above, the close parallel between the two planes of content and expression and the traditional levels of deep and surface structure is unmistakable. In particular, Gladkij and Mel'cuk's approach is reminiscent of the work of Cresswell (1973) who adopted the term "deep structure" as a name for semantic representations which are converted by a simple set of rules into "shallow structures". It is also, of course, close to the concepts of "logical form" and "Surface representation" in generative semantics. The advantages of such multilevel approaches are very considerable and I shall adopt a parallel treatment in this study. Being thus concerned with the description of rules, it is natural that the formal linguist should approach her/his task within the framework of formal logic (chapters 2 and 3) and that discipline, therefore, plays a central role in the metatheory and methodology. 
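The mathematician's set-up described above - a set of meanings, a set of texts, and rules mapping the one into the other - can be sketched in miniature. The meanings, texts and rules below are invented purely for illustration; only the architecture follows the text:

```python
# Toy version of the two planes: a set of meanings (content) and a set of
# texts (expression), with a rule set mapping meanings to texts.
# All entries are invented for illustration.
meanings = {"PENCIL(BLACK)", "DOG(BARKS)"}

# The mapping is non-random but not one-to-one: one meaning may be
# expressed by several texts.
expresses = {
    "PENCIL(BLACK)": {"My pencil is black.",
                      "My pencil is black in colour."},
    "DOG(BARKS)": {"The dog barks."},
}

# The plane of expression is the union of all the texts the rules yield.
texts = set().union(*expresses.values())

# Every meaning is expressible, and one meaning has two expressions.
assert all(m in expresses for m in meanings)
assert len(expresses["PENCIL(BLACK)"]) == 2
```

The explicit statement of the rule set (`expresses`), rather than of either plane alone, is what the imagined mathematician takes as the object of study.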
Although, in my opinion, Gladkij and Mel'cuk's book is among the best texts on mathematical linguistics available, it was prepared before the influence of Montague's work on the discipline became universally apparent and thus, concentrating, as it does, on formal grammars - devices for the description or generation of syntactically grammatical utterances - , it makes little contribution to the exploration of semantics, which was Montague's primary interest and is the focus of this book.

1.3 A functional view of meaning The philosophical complexity of the nature of meaning is so profound and its specialised discussion so extensive that it seems wisest, in a linguistic study, to treat it on a somewhat naive level. Rather than attempting, for example, to


investigate the nature of existence or to explore the relation between the sign and what is signified, it appears, to me, more profitable simply to present a particular theory of meaning almost as if it enjoyed the status of "God's truth" and allow the pages which follow to constitute its elaboration and justification. In this preliminary exercise, I shall make use of several notions, such as Function, Intension, and Proposition, which are not discussed in detail until later chapters. Consider, first, what is involved in understanding the sentence: (1)

Mon crayon est noir.

As Carnap (1961) stressed, to understand (1) it is necessary to know what the world would have to be like in order for it to be true. It is certainly not necessary to know whether it is true in fact. If we understand (1), then, in an important sense, we can be said to know what it means. Of course, the problem of knowledge is one of immense complexity. For instance, should we distinguish between world-knowledge and linguistic-knowledge, or is there no difference? I shall not explore such questions here. However, it seems obvious that the depth of our understanding is dependent upon the depth of our knowledge and that we do not need complete knowledge to know of a sentence that it can be true. Thus, I do not understand (2) in a profound sense, yet I know that it is true and, hence, know what it means. (2)

The centre of mass of the universe is a point.

Given the above reservations, let us say that to know the meaning of a sentence like (1) is sufficiently to know the set of conditions which would have to hold in order for it to be true. This is most certainly not to say that the meaning of the sentence is its truth value. In the orthodox view, a sentence is false if it is not true. Thus, in this orthodoxy, there are two and only two truth values, truth and falsehood. It would be nonsensical to make the same claim for sentence meanings. That meaning is more than mere truth or falsehood is clearly demonstrated by Lewis (1970) who points out that, if it were not so, we would be obliged to consider all tautologies as having the same meaning and likewise for contradictions. Obviously, just because a certain state of affairs actually is the case, we are not always obliged to accept that things must be that way. Thus, many elements in the set of truth conditions which provide the foundation of our understanding are only contingent conditions; they are not necessary. Sets of such conditions may be thought of, metaphorically, as possible worlds of which the actual world is but one. Thus, for example, while, in the actual


world, London is the capital of England, other worlds are possible in which that is not so. Alongside contingent conditions, we have a subset of necessary conditions. Thus, for example, if a proposition, p, is true, then it necessarily follows that the disjunction "p or q" is true, no matter whether q be true or false. This is so because, by necessity, the disjunction of any true proposition with any true or false one always results in a true proposition. Propositions which are true by necessity are "logically" or "tautologically" true. Similar considerations also hold, mutatis mutandis, for contradictions. Thus, any proposition of the form (p and not-p) is false by necessity. We may think of the set of necessary conditions as the set of all possible worlds. Carnap's example (1) is apt in a book written in English because it facilitates the demonstration of the obvious, but sometimes neglected, point that what a sentence denotes is a proposition, which could often be denoted by other sentences. Thus, the proposition in (1) is equally well denoted by: (3)

My pencil is black.

as it is by several sentences such as: (4)

La couleur de mon crayon est noire.

or: (5)

My pencil is black in colour.

Given that a sentence denotes a proposition, it is common to think of that proposition as a function which, having the set of conditions as input, yields truth or falsehood as its value. Call such a function an "intension". Then, the intensions corresponding to the sentences so far given have the form, F(W), where W is a set of possible worlds. As remarked above, the value of such a function - its "extension" - is usually taken to be truth or falsehood, though my evaluations will not always be so restricted. Sentences - or the propositions they denote - are not primitive. They have internal structure which reflects the structure of the possible worlds which determine their values. Thus, in analysing the meaning of a particular sentence, it is necessary to determine both the meanings of its parts and the manner of its composition. This principle, the principle that meanings are "compositional", reflecting the influence of Frege, is at the heart of current work in semantics and much of this book will constitute an instance of its application. It can be argued that, at the most basic level, the operation of compositionality is observable in the morphology of many languages. Thus, for instance,


in English, the suffix -er may, among other things, combine with a verb, say walk, or kill, to form an agentive noun, walker, killer. The meanings of such nouns are, thus, arrived at on the basis of the meanings of the individual morphemes. However, to argue consistently for a compositional approach to word-meaning is frequently very difficult and apart from some discussion of case in chapter 5 and the morphology of comparison in chapter 7/8, I shall largely ignore morphology in this book. Taking morphology for granted, the constitutive parts of sentences are made up of words. What determines the semantic and syntactic ability of a word to combine with others is the syntactic category to which it belongs. In order to exploit the compositional principle in arriving at the meaning of a particular sentence, therefore, it is necessary to determine the meanings of its words as governed by their syntactic function. Thus, a fundamental aim in the development of a semantic theory is the determination of the possible range of meanings of the categories on which the structure of sentences depends. The simplest sentential structure is that consisting merely of a proper noun and an intransitive verb, as in: (6)

Jack runs.

To provide a semantic account of such a sentence, it is necessary to state the conditions for its evaluation on the basis, first, of the words in isolation and, then, as they combine in a subject-predicate structure. Let us use the term "individual" as a synonym of "thing". Evidently, any possible world will contain one or more individuals, which will be endowed with certain properties and stand in particular relations to each other. We may loosely think of proper nouns, like Jack, as having unique values. More exactly, proper nouns denote constant functions which, given the set of all individuals as input, always yield a particular member as output. If we regard the property of running as characteristic of certain individuals, we may think of the predicate runs as denoting a function which, given the set of individuals, picks out a given subset. Thus, the proposition, runs(Jack), has the value true if and only if the individual denoted by Jack is in the set of running individuals. If the proposition, runs(Jack), is true, then, the propositional function, runs(x), is "satisfied" when Jack is substituted for x in its argument place. That is to say, the proposition that something has the property of running is satisfied by at least one individual, namely, the individual picked out by the constant function assigned to Jack. As a more interesting example, consider the following. (7)

The president of France is French.


Without discussing the internal structure of the president of France, it is apparent that (7), on one interpretation, claims that any individual picked out by the function denoted by the subject noun phrase has the property of being French. Indeed, even if there were no such individual at the time of utterance, (7) would be meaningful and, therefore, capable of being judged true or false, or, perhaps, pointless. Thus, we think of phrases like the president of France in terms not merely of their extension, their denotation, but also in terms of their intension or sense. Put more precisely: the extension of a given noun phrase function is the individual or set of individuals which it picks out at a given possible world and moment. The intension of such a function is all of its possible extensions in all worlds and moments of time. Similarly, the extension of a given intransitive verb function, say that denoted by runs or be French, is the set of individuals it picks out at a given instant. The intension of such a function is all of its possible extensions. From these remarks, it is evident that the notion of an intension is usually more general than that of an extension. Reverting to the looser way of speaking: to know the intension of the president of France is to know what conditions must be met for any individual to be the value of the function it denotes. To know the extension of the same term at a given moment, it is necessary only to know which particular individual its associated function picks out at that moment. Parallel remarks hold, of course, in respect of the intensions and extensions of predicates. Of course, the sentences of a natural language are not all as simple as (6) or (7). The propositions they denote are frequently concerned with relations between individuals. Thus, for instance, (8) asserts that Scott stands in the relation is the author of to Waverley. (8)

Scott wrote Waverley.

Like noun phrases and intransitive verbs, transitive verbs, such as write, have extensions and intensions. The extension of the write-function is the set of ordered pairs of individuals, e.g. <Waverley, Scott>, which is its value at a given world-time pair. Its intension is the range of its possible extensions. If the verb necessarily involves three individuals - I here ignore the instrument involved in writing - for example, give, it denotes a function which has a set of ordered triples as its value, and so forth. In the case depicted in (8), the verb can be treated, from a semantic point of view, as purely extensional since both Scott and Waverley must actually exist in order for the proposition to be true. However, such extensionality


does not always characterise transitive verbs. Thus, for example, while find in (9) is extensional, seek in (10) is not, unless a specific whale was involved. (9)

Jonah found a whale.

(10)

Jonah sought a whale.

At first glance, the nonextensionality of seek in (10) might appear to demonstrate nothing more than the need for intensions as well as extensions in semantic analysis. However, extended to further instances, it soon becomes evident that much more is involved than a simple case of ambiguity. Consider, for example, the following case: (11)

Necessarily, the prime minister of England's residence is the prime minister of England's residence.

That this sentence is true is obvious from the tautological status of: (12)

The prime minister of England's residence is the prime minister of England's residence.

The same cannot be said of: (13)

*Necessarily, the prime minister of England's residence is no. 10.

Although no. 10 and the prime minister of England's residence have exactly the same extension - a particular house in Downing Street, London - they do not mean the same. Thus, while, in the actual world, the sentence: (14)

The prime minister of England's residence is no. 10.

is true, it is not so by necessity. Cases like these figure prominently in the philosophical and linguistic literature and will frequently be the focus of attention in this book. They were first discussed by Frege in the context of Leibniz's law of substitution. Briefly, Leibniz's law says that two terms having the same denotation may be substituted for each other without affecting truth values. The fact that (13) is false shows that the two phrases in question are not completely interchangeable. Thus, it is evident that the notion of meaning cannot be simply equated with extension. While the function denoted by no. 10 Downing Street has an extension which coincides precisely with its intension, the same cannot be said of that denoted by the prime minister of England's residence and, thus, the terms are not completely interchangeable. Yet another instance of the need for intensions which results from the failure of Leibniz's law is provided by:

(15)

Jack believes that 1 + 3 = 4.

Since the expression (1 + 3) has the same extension as (2 + 2), i.e. the successor of 3, we might expect that the two could be substituted in (15) without affecting its meaning. However, reflection shows that this is not so. If (15) is true, then Jack has a belief about (1 + 3) and, unless he is totally irrational, it is necessarily true that he believes that (1 + 3) = (1 + 3). However, it certainly does not follow that his belief about (1 + 3) requires that he hold that (2 + 2) also has the value 4. Rather, therefore, than saying that the object of Jack's belief is the extension of the expression (1 + 3) in (15), we claim, for the present, that his belief has to do with its intension. I return to such cases, and especially Cresswell's sophisticated treatment (1985), later, in chapter 4, section 4. Thus, we see that a verb of "propositional attitude" like believe is like such transitive verbs as seek in failing to be fully extensional. Like such verbs, its possible values will not be ordered pairs of extensions, but ordered pairs of an extension - the subject - and an intension - the object. According to the functional view of meaning espoused here, then, the denotation of a sentence is a proposition and it is this denotation which constitutes the meaning of the sentence. A proposition is a function which takes, as argument, possible worlds and denotes a value, typically true or false. These evaluations are assigned to propositions relative to possible worlds and it is our apprehension of such worlds which enables us to say which value is assigned in a given case. The meanings of propositions are arrived at compositionally through the meanings of their parts and the manner of their composition. The meanings of the parts are, themselves, functions, some of which have straightforward extensions as their values and others of which have, as values, either intensions, or n-tuples consisting of extensions and intensions.
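The functional picture summarised above can be made concrete in a miniature model. The worlds, individuals and facts below are invented for illustration; only the architecture follows the text: proper nouns as constant functions, predicates as world-relative sets, and propositions as functions from worlds to truth values.

```python
# A miniature possible-worlds model of the functional view of meaning.
# Each world records which individuals run; worlds and facts are invented.
worlds = {
    "w_actual": {"runs": {"Jack"}},
    "w_other":  {"runs": set()},
}

def jack(world):
    """Proper noun: a constant function picking out one individual."""
    return "Jack"

def runs(world):
    """Intransitive verb: at each world, its extension is a set of
    individuals; the whole world-to-set mapping is its intension."""
    return worlds[world]["runs"]

def runs_jack(world):
    """Proposition: a function from worlds to truth values."""
    return jack(world) in runs(world)

print(runs_jack("w_actual"))  # True: Jack is in the running set here
print(runs_jack("w_other"))   # False: nobody runs at this world
```

The proposition denoted by "Jack runs." is the whole function `runs_jack`; its value at a particular world is only its extension there.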

1.4 Truth conditions and truth values Of course, the very idea of a semantics based, albeit not exclusively, on truth values must presuppose a theory of truth. The discussion of such theories in its most profound philosophical form lies well beyond the legitimate concerns of a linguistic essay on semantics. As in the previous section, therefore, my remarks in this area may appear rather superficial.


An issue which is clearly central is the difference between the notions of truth condition and truth value. Whatever is doubtful about the nature of meaning as a concept, we are certain that for two sentences to be synonymous, they must mean the same thing - they must assert the same truth. Thus, if we can establish what it is for a sentence to refer to a truth, we have the basis for a theory of synonymy and, hence, of meaning. A common-sense view is that the propositional content of a sentence evaluates to truth if it reflects the way things actually are - including, of course, the fact of some things' being possible and of others being necessary, etc. Tarski's famous example (1956): (16)

Snow is white.

has the following truth condition: (16) a.

"Snow is white. " is true just in case snow is white.

This may be generalised as: (17)

Sentence s is true just in case p.

Although (16a) represents the truth condition for (16), it does not represent its truth value. It is usually assumed that the connective just in case in such formulae as (17) is material equivalence, the relation which yields the value true only when both protasis and apodosis have the same truth value. This is, in fact, a source of difficulty since, construing the relation thus, we seem obliged to accept infinitely many nonsensical conditions such as: (18)

"Snow is white. " is true just in case grass is green.

Such combinations are true under logical implication, but they clearly do not require us to accept that the antecedent depends for its meaning upon the consequent. The difficulty disappears - or at least recedes - if we accept that truth conditions are not in a one-to-one relation with truth values (Fodor 1977): they are, rather, in a many-to-one relation. It is patently erroneous to claim that because there are only two truth values - ignoring the possibility of an in-between value - there are only two truth conditions. It seems, to me, reasonable to hold that a given proposition has a particular truth as its meaning, which is not to say that that particular truth is its truth value. Truth values are associated with particular truths or falsehoods; they are not those particulars themselves.
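The point about the coarseness of truth values can be checked mechanically. Reading just in case as a relation that holds whenever both sides share a truth value, both the intended (16a) and the nonsensical (18) come out true; a minimal sketch, with the facts invented for the check:

```python
# "p just in case q" modelled as agreement in truth value.
def just_in_case(p, q):
    return p == q

snow_is_white = True
grass_is_green = True  # an unrelated fact, invented for the check

# (16a): the intended truth condition holds.
print(just_in_case(snow_is_white, snow_is_white))   # True

# (18): also comes out true, although the right-hand side is irrelevant
# to the meaning of "Snow is white." - truth values alone are too coarse.
print(just_in_case(snow_is_white, grass_is_green))  # True
```

Since many distinct truth conditions collapse onto the same truth value, nothing in the truth values themselves distinguishes (16a) from (18).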


This simple approach has the advantage of avoiding the complex philosophical problems surrounding the role of necessary truth and contingent truth in the analysis of such formulae as (16a). It is often held that we can exclude such nonsense instances of (17) as (18) by narrowing down the kind of truth involved to necessary truth. Obviously, the truth of (16a) is necessary. However, the philosophical complexities of necessity are considerable - I offer some discussion in chapter 3, section 6 - and it seems unnecessary to call upon it here. We do, of course, rely upon the notion of necessity when we invoke Leibniz's law in arguing for the reality of intensions, but the motivation there is altogether more powerful. Moreover, it seems unwise, in the present connection, to call upon the notion of nonlogical necessity or analyticity (Katz 1964). An expression, p, is nonlogically necessary if the meaning of its predicate is fully included in its subject or vice versa. Thus, (19) is analytic: (19)

A spinster is a woman who never married.

While such cases are patently analytic, analyticity, itself, does not provide a foundation for a theory of meaning since it relies upon a prior notion of meaning for its own definition - comparable objections could, doubtless, be levelled at my own appeal to synonymy above. Given that there is only one world, one actual state of affairs, we might be tempted to say that propositions are true if they express facts. The notion of a fact is difficult to disentangle from what I referred to above as a "particular truth". For simplicity, let us say that facts are actual truths and that anything which is only possible is not a fact. Appeal to facts is usually appeal to extensions and, as we have seen already, it is improper to adopt an exclusively extensional view of meaning. Frege showed that, while the evening star and the morning star have the same extension, they certainly do not have the same sense. Moreover, we frequently speak not of actualities but of possibilities and whether these possibilities are viewed as reasonable - perhaps there are ten planets - or unreasonable - there could be fiery dragons - their expression requires intensions, not extensions. Thus, the intensional approach to meaning seems very sensible. It is worth mentioning, here, that Montague himself (1970a) tried, without success, to construct an extensional semantics, only to abandon it in his famous (1973) paper.
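Frege's evening star/morning star observation can be restated in the possible-worlds terms used above: two terms may agree in extension at the actual world while their intensions, the full world-to-denotation functions, differ. The worlds and denotations below are invented for illustration:

```python
# Intensions modelled as mappings from worlds to denotations.
# The worlds and the non-actual denotations are invented for illustration.
the_evening_star = {"w_actual": "Venus", "w_other": "Mars"}
the_morning_star = {"w_actual": "Venus", "w_other": "Mercury"}

# Same extension at the actual world...
assert the_evening_star["w_actual"] == the_morning_star["w_actual"]

# ...but different intensions, so the two terms are not interchangeable
# in intensional contexts, and the shared extension cannot be the meaning.
assert the_evening_star != the_morning_star
```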


1.5 Counterfactuals

To introduce possible worlds, as we have done, into semantics is not entirely uncontentious. One interesting problem which it raises concerns cross-world identity. Lewis (1973) discusses in considerable detail counterfactual sentences, for example:

(20) If Jack were a rich man, he would marry Sally.

This sentence can be used appropriately if and only if, in the actual world, Jack is not a rich man. If it is true, it must be so with respect to a nonactual world and, in such a case, is the individual denoted by Jack the same individual in both worlds? Lewis proposes that we accept that the worlds concerned be as similar as possible, differing only sufficiently to make the sentence counterfactual. We should, therefore, trace the counterfactuality of (20) to variation in nonessential properties of the individual constantly denoted by Jack. It is apparent that the problem of cross-world identity, although fairly simple in cases like (20), can assume enormous dimensions. Even if the distinction between essential and accidental properties were clear-cut, it would still be difficult indeed to account for such extreme cases as:

(21) If London were a rich man, it would marry Sally.

While, therefore, I shall follow Montague's practice and regard proper nouns as rigid designators, I am conscious of at least some of the metaphysical difficulties.

1.6 Compositionality and syntax

I regard the central task of semantics as the study of the rules by which word-meanings are combined into complex units - which is not to say that it takes no heed of word-meanings. In order for the theory to be compositional, it must take account of how lower-order meanings combine to form higher-order meanings and, ultimately, the semantic values of sentences themselves. A compositional semantics has, therefore, to be framed in such a way that it relates the two levels of content and expression in accordance with the syntactic properties of sentences. Hence, part of the task of providing a semantic description is the provision of a syntactic analysis.


It is important to stress that the syntactic analysis has the sole function of enabling the semantic description. Syntax is not an end in itself in the framework of Montague's programme and, in this, that programme differs fundamentally from the work of the transformationalists and other formal linguists. The syntactic model to be used is chosen for its ability to characterise the compositional nature of sentence-meaning and not for its ability to explain formal syntactic phenomena or to predict syntactic relations. Syntactic phenomena are interesting only if they have a bearing on semantic issues. It is in this spirit that I devote a substantial part of this study to the syntax, especially chapter 7. It will, however, be immediately apparent that the question of what contribution to sentence-meaning is made by syntax is not always a straightforward one. Carnap (1961) emphasised that the analyst should strive for a one-to-one relation between form and meaning, to establish an "intensional isomorphism" between the two. A major task in such an enterprise is to determine which syntactic variations are meaningful and which are purely formal - among others, Fodor (1977) discusses alternatives like the father of my father and my father's father. This task also requires, of course, that we settle questions of synonymy between apparently equivalent word and phrase-pairs, e.g. spinster and woman who never married. The establishment of such an isomorphism is exactly what our opening vignette supposes to be the central aim of mathematical linguistics. At the moment, however, it seems unlikely that Carnap's aim is within practical reach. On the one hand, it is clear that rather than thinking in terms of isomorphy, we must explore the link between form and meaning in terms of the weaker relation of homomorphy - chapter 2. There is not a general one-to-one relation between expressions and meanings, but a many-to-one relation in many instances.
The semantic content of a sentence - its "propositional content" - is, as we saw earlier, often expressible in a number of alternative ways. The examples provided earlier mostly involved different language encodings of the same proposition. However, it is obvious that many different syntactic patterns may be equivalent semantically, including Fodor's examples referred to above and, in many cases, such alternative formulations as active and passive.


1.7 Pragmatics

An adequate semantics for any language could not, of course, be restricted to the analysis of sentences whose lexical items have fixed values. In most of the examples so far, it is permissible to view the parts as having restricted values. Such restriction may be achieved by setting up a model whose individuals, their properties, and the relations holding between them, are predetermined. However, natural languages make use of lexical variables whose domains cannot be so easily fixed. The set of personal pronouns, for instance, form a class of indexicals whose references vary from one context to another. Carnap's sentence, quoted at the outset, illustrates the point. Another example is:

(22) You are tired.

(22) may be true for an addressee on a particular occasion and false for the same addressee on another. Evidently, the semantics must incorporate a theory of language in use. This theory, the Pragmatics, appears, at first, to be dangerously vague. However, given Montague's own formal account of pragmatics (1968) and the work of other scholars, especially Cresswell (1973), this aspect of meaning can be fairly rigorously described and need not result in unwanted vagueness. The pragmatics is needed, in any case, if the semantics is to be capable of accounting for temporal and place indexicals. Such systems are excluded from standard logics since they introduce contingency into the evaluation of propositions and, hence, make truly deductive reasoning impossible. Thus, for example, (23) is usually treated as if it made no reference to time:

(23) Jack married Sally.

More radically, (24) could not be treated at all in standard logic because of the indeterminacy of the adverb.

(24) London is here.

A somewhat similar situation is presented by cases like (25) which depend for their evaluation on the subjective opinion of the speaker.

(25) Paris is beautiful.

Gradables like wise ought not to figure in an exclusively truth-functional system, but they clearly cannot be excluded from a natural-language semantics.


Another limitation of standard logics is their failure to treat sentence types other than declaratives. Declaratives are, of course, of especial importance in philosophy since they are viewed as expressing propositions uncluttered by such modalities as interrogation. However, current work in Government and binding theory, building on earlier studies both in Transformational Linguistics and Generative Semantics, provides crucial insights into the semantics of other sentence types, especially questions. I shall attempt a fairly detailed discussion of such work in chapter 5 and return to questions in chapter 6. Central to much contemporary work in pragmatics is Speech-act Theory, Austin (1962) and Grice (1975). One of the most difficult problems in making use of this theory is the formulation of "felicity conditions" and their inclusion in a formal system. Fortunately, however, the work of several scholars, including Lakoff (1972, 1975), has demonstrated that it is proper to broaden the scope of satisfaction to include appropriateness or felicity and I shall make use of this extension. The question of how we might include Gricean implicatures into a formal semantics is far less clear. Evidently, when a maxim like relevance is deliberately broken, the semantic result is all-important. However, as Fodor's discussion (1977) suggests, such implicatures do not fit readily into a formal semantics and I shall, therefore, have nothing to say about them. Even more contentious is the question of whether to include other aspects of language in use such as connotations and metaphors in a formal description of natural language. It has not, to date, seemed possible to build these phenomena into the semantics for lack of a rigorous theory of their function. Thus, while it is obvious that old maid is metaphorical in:

(26) John is an old maid.

and that its connotational force is paramount in:

(27) Mary, who is thirty, is an old maid.

such matters are not reflected in mathematical semantics. Recent work, such as Kittay (1987), shows that the essential formal theory is at least in prospect. Even so, I shall not attempt to pursue such matters here.

1.8 Propositional relations The semantics discussed so far is primarily concerned with the sentence as the basic unit - basic in the sense that only sentences have truth values.


The basic status of sentences does not, however, imply exclusion of interest in inter-propositional relations which, as Katz (1966) and many others have argued, constitute a vital part of the subject matter of semantics. Indeed, within the confines of Montague's own work - as with most other formal linguists, including Keenan and Faltz (1985) - certain relations which hold between propositions figure prominently. Especially important is the relation of entailment. It is obvious that we must be able to say which propositions logically follow from any assertion, as (29) from (28).

(28) John loves Mary and Mary or Jane loves Oliver.

(29) John loves Mary.

However, from (28), we cannot infer:

(30) Mary loves Oliver.

Paraphrasing Keenan and Faltz, we may say that a proposition, p, entails another, q, if and only if p is at least as informative as q. Thus, (28) contains all of the information conveyed by (29). In addition, (28) asserts the disjunction of two other propositions either of which may be false, so prohibiting the inference of (30). The examples just given may appear rather uninteresting, since they reflect purely logical properties of the truth-functional connectives and and or. However, questions of entailment can be highly complex. Consider, for example, the relation between (31) and (32) in which causal factors are involved:

(31) The storm caused the hayrick which was made by John to collapse.

(32) The storm caused John to make a hayrick which collapsed.

compared with:

(33) Panic caused the elephant to trumpet loudly.

(34) Panic caused the elephant to trumpet.

Obviously, (34) follows from (33), but (32) is not entailed by (31). This is so because, in (31), the object of caused is an infinitive complement with to collapse as its main verb, but, in (32) the object of caused is an infinitive with to make as its main verb. The relation of Presupposition has been an important part of philosophical enquiry for many years and, more recently, linguistic discussion has also focussed on this topic. Probably the most famous example is Russell's (1905):

(35) The present king of France is bald.

If there exists a unique individual who is the present king of France, then (35) is obviously true or false. However, if such a person does not exist, it is arguable that (35) is merely pointless. I return to this topic at some length in chapters 4 and 6. As we have already seen, Leibniz's law plays a central role in establishing the degree to which noun phrases are interchangeable. The relation of identity is of great semantic interest. It is, for example, important to our understanding of a set of problems concerning the evaluation of certain types of argument. An instance from an early paper by Montague and Kalish (1959) - see also the discussion in Quine (1960) - is:

(36) The number of planets is 9. Kepler was unaware that the number of planets exceeds 6. Therefore, Kepler was unaware that 9 exceeds 6.

Clearly, this is nonsense. In spite of the attention such puzzles have received, their resolution is still to be achieved to everyone's satisfaction and my own treatment, chapter 4, will, I hope, not be without interest.
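The truth-functional entailments illustrated by (28)-(30) can be checked mechanically, since they depend only on the connectives and and or. The following sketch is my own illustration, not part of the formal apparatus developed in later chapters; p, q and r stand for the propositions John loves Mary, Mary loves Oliver and Jane loves Oliver respectively:

```python
from itertools import product

def entails(premise, conclusion):
    """A premise entails a conclusion iff no assignment of truth
    values makes the premise true and the conclusion false."""
    return all(conclusion(p, q, r)
               for (p, q, r) in product([False, True], repeat=3)
               if premise(p, q, r))

# (28): John loves Mary, and (Mary or Jane) loves Oliver
premise_28 = lambda p, q, r: p and (q or r)

print(entails(premise_28, lambda p, q, r: p))  # (28) entails (29): True
print(entails(premise_28, lambda p, q, r: q))  # (28) does not entail (30): False
```

Brute-force checking of this kind is feasible only for the truth-functional fragment; entailments like that between (33) and (34) depend on syntactic structure and lie beyond it.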

1.9 Ambiguity

A most crucial factor in the semantic analysis of natural languages which has a bearing on all of the kinds of issues referred to above is their pervasive tendency towards ambiguity. It is an essential task of the theory to resolve ambiguities as a sine qua non of the assignment of values to expressions. The formulation of a mechanism for the resolution of ambiguity therefore plays a dominant role in the grammar, being fundamental to the compositional view of meaning. Of course, problems of ambiguity have featured in language studies for centuries. Linguists have been concerned with lexical ambiguity, as in:

(37) The seal was heavy.

and structural ambiguity, as in:

(38) John saw the girl in the street.

In current transformational studies, the emergence of Government and binding theory has been largely dependent on the investigation of a particular source of ambiguity, namely, ambiguity of coreference, illustrated by:

(39) John said that he had won the prize.

The solutions which have been proposed in the theory for such problems have, in my view, enormous importance for semantics generally and I shall discuss them at length in chapter 5. Montague's treatment of ambiguity is not novel, although he puts more emphasis than is customary on some types, notably intensional ambiguity, as in the sentences discussed above involving the verb seek, and ambiguities of quantifier scope, as in:

(40) Every man loves a woman.

It could, however, be argued - I shall not do so - that his treatment is unsatisfactory since it ignores the kind of lexical ambiguity reflected in (37). In fact, Montague's work largely ignores the structure of particular noun intensions, verb intensions, etc. Thus, he showed little interest in differences in meaning between given adjective pairs, such as tall and short, or adjective types such as thin and correct, or verbs such as walk and run. Naturally, given his predominantly formal interests, he was careful to make explicit the differences between the uses of logical words such as necessarily and possibly, or the and a/an. For the rest, the decision that a given lexical item should have this or that meaning is simply taken in light of the particular problem under investigation. In my view, ambiguity is arguably the most important problem in semantics and I shall devote most of chapter 4 to examining some of the types involved. However, while I shall, in chapter 8, draw up some "semantic rules" for individual words, the words themselves will be representative only and will, therefore, be few in number. Further, the rules will relate to fairly formal questions having to do with the satisfaction of propositional functions - what is required for them to become true or appropriate propositions - and will offer very little of interest to the study of ambiguity in the context of lexicography.

1.10 Formal logic and natural languages

A semantics based, in large part, on truth-evaluations obviously requires the support of a formal logic. In particular, such a logic may be expected to provide a rigorously defined, nonambiguous language into which the natural-language expressions can be translated, so that, if we have the semantics


for the logical expressions, and if the translations are precise, then, in an important degree, we have the semantics for the natural-language expressions also. The logic which Montague employs, besides being intensional, is both modal and tensed. However, it rests upon the standard logics of the propositional and predicate calculi and I shall, therefore, offer a fairly broad discussion in chapter 3 of such logics as well as a more detailed account of Montague logic proper. The version of the latter upon which my exposition will be based is that set out in Montague (1973). I shall also draw on the work of other scholars, most notably, Cresswell (1973, 1985), whose use of lambda abstraction in linking syntactic and semantic representations, or "logical form representations", is especially valuable.

1.11 Universal semantics

The brief outline of semantics provided in this chapter may give the impression that the primary concern is with English. While this book will be entirely based on English data, it is certainly not intended to be solely about English as an individual language. My reason for confining myself to English examples is merely a function of my status as a native speaker of the language and a lack of confidence in my non-English intuitions. Although two of Montague's papers contain the word "English" in their titles, the theory is intended to be universal in the broadest possible sense. Indeed, the title of his philosophically most important paper Universal grammar (1970b) is more reflective of his programme than any other. For Montague, it is vital that semantic theory be maximally general and thus, for him, the term "universal grammar" embraces all languages, artificial as well as natural. In this usage, "universal" does not refer, as is common in linguistics, to features of all natural languages. Mathematical semantics, in its purest form, is not at all concerned with establishing the actual universals found in natural language. The theory is, therefore, not concerned with psychological issues of language acquisition, nor yet with statistical probability. The focus is solely upon formal universals and this focus is based upon the assumption that there is no essential difference between the artificial languages of mathematics, including logic, and those we call natural. This view does, in fact, have a long tradition in philosophy; see for example, the classic paper by Church (1951). Thus, Mathematical semantics is, in essence, the semantic theory of language in general and, as such, is as much part of mathematics as it is of


philosophy or linguistics. In this spirit, its scope does not include the kind of psychological dimension referred to above. However, from the standpoint of natural-language studies, it may be that the exclusion of psychological considerations is ultimately impoverishing - it is certainly not in line with the currently popular interest in so-called "Cognitive Linguistics". Even so, I shall not, in this book, make any attempt to bring such considerations into the discussion. The kinds of issues touched on in this chapter will constantly recur as central themes in this book. I do not pretend to provide an exhaustive account of them, let alone an explanation of the philosophical problems which surround them. I now turn from the discussion of the scope of the semantics to an account of some of the background notions from mathematics, logic, and linguistics, which are required to appreciate both Montague's work in particular and the enterprise of mathematical semantics in general. Naturally, to those readers who are well versed in the various disciplines concerned, I shall have little of interest to say in the preliminary chapters.

Chapter 2

Background notions from mathematics

2.1 Purpose of this chapter

In writing this chapter, I have been sharply conscious of the fact that it might be considered by some readers to be superfluous. To some, its content will be profoundly familiar. For such readers, it would probably be best to pass over the next few pages altogether. To others, the discussion may appear unnecessary because the next chapter on logic deals with most of the issues commonly associated with formal semantics. Indeed, it is not strictly necessary to know anything about mathematics in order to follow the remarks, arguments and technical developments in this book and there are parts of this chapter which will scarcely receive mention in the sequel. My motivation for presenting a brief account of background notions from mathematics is that these notions, besides being helpful in appreciating the highly technical discussions of Montague and other scholars working in mathematical linguistics, such as Cresswell, serve to place the more familiar discussion of the next chapter in a different perspective and, in doing so, considerably deepen one's understanding.

2.2 Sets

The development of set theory from the ideas of Cantor (1932) has been among the major achievements of modern mathematics and its application in much formal semantics is central. Intuitively, a set is a collection of elements whose membership is determined by some characteristic which they share with all others in the set. Thus, among numbers, the positive numbers form one set which contrasts with that of the negative numbers. The wellformed expressions of a natural language constitute a set which contrasts with the illformed ones and so forth. Intuition also serves to assure us that set membership is not always clearly defined. Many sets, e.g. the set of beautiful things, are "fuzzy". I shall not


discuss such sets here, though their existence will be taken for granted in the sequel. If S is any set and a, b, c, ... are elements of S, then S = {a, b, c, ...} and, for any a contained in S, a is a member of S, written (a ∈ S). The members of a set may be grouped in terms of the notion of a subset, i.e. some grouping of elements in respect of a given characteristic. If S contains the subset a, a is a "proper subset" of S, written (a ⊂ S), if and only if, "iff", S contains elements not in a. If it turns out that there are no elements in S which are not also in a, a is a mere subset of S, indicated by (a ⊆ S). If everything in S is also in a and vice versa, then, obviously, (a = S). In accord with the above, we say that two sets, S and T, are "identical" iff ((S ⊆ T) & (T ⊆ S)), i.e. if S and T have exactly the same members. If a is a proper subset of S, then the set of elements in S which are not in a constitutes the "complement" of a, written "-a". Thus, if S is the set of numbers and a is the subset of natural numbers, then the subset in S which comprises the non-natural numbers is the complement, -a, of a. Similarly, if S is the set of wellformed sentences of English and a is the subset of positive sentences, then the subset in S of negative sentences is the complement of a. In fact, it is common practice for the notion of a complement to be identified with negation both in formal logic and in linguistics. This is not, of course, to be confused with the metalinguistic convention in which "complement" is the label for a constituent acting as the object of a verb or preposition, though that usage ultimately derives from the same notion of set completion. It is customary to accept the existence of a set containing no elements at all. This set, the "empty", "void" or "null" set, is symbolised "∅". Frequently, the notation (S ∈ ∅) is used to indicate that S has no members, i.e. is a member of the null set. Thus, the first prime number > 2 which is even is an element of ∅. An analogous instance from English is provided by the set of sentences consisting of a single word - ignoring ellipsis. Another example of a member of the null set is the fabulous unicorn which figures so prominently in philosophical and linguistic discussion.
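The notions just defined can be made concrete with finite sets. The following Python sketch is merely illustrative - the particular sets are my own choices - but each line corresponds to one of the definitions above:

```python
S = {0, 1, 2, 3, 4}
a = {2, 4}                  # a subset of S

print(2 in a)               # membership, (2 ∈ a): True
print(a < S)                # proper subset: S contains elements not in a: True
print(S - a)                # complement of a in S: {0, 1, 3}

T = {4, 3, 2, 1, 0}
print(S <= T and T <= S)    # identity by mutual inclusion: True
print(S == T)               # True: exactly the same members
```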
From the above example, it will be apparent that, if U is the universal set, then the value of any statement which is true is a member of U. Since the complement, -U, is the empty set, it contains the values of all false statements. Thus it is that the empty set, ∅, is frequently employed to represent falsehood, while "1" often symbolises truth. Zermelo (1908) required that the empty set be a subset of every set. Hence, any set S, in addition to its characteristic elements, a, b, c, ..., contains the empty set as a subset. Further, since every set is a subset of itself, the null set has exactly one subset, namely, itself.

The "intersection" or "meet" of two sets S and T is the subset of S which is contained in T plus the subset of T contained in S. This subset may be symbolised {S & T} and may be verbalised as the set whose members are both in S and T. Since a given element, a, must belong to both S and T to be in the intersection of S and T, it follows that the intersection may contain fewer elements than either or both of the intersecting sets. Given the set, S, of books and the set, T, of works of art, then, clearly, the set of objects which are both books and works of art is smaller than either S or T - at least this is so in our world. When two sets, S and T, are "joined", the result is the "logical sum" or "union" of S and T, symbolised {S ∨ T}. The join of two sets contains those elements which are in either or in both of those sets. Thus, the union of two sets is never smaller than either one in isolation. The set of things which are either books or works of art or both clearly is more numerous than either set taken alone. It is evident from the above that the relations of intersection and union correspond, in the field of sets, to the relations of conjunction and disjunction in natural languages. It is, further, clear that intersection corresponds to logical conjunction and that the status of union is precisely that of inclusive disjunction, not the exclusive variety - see chapter 3. Intersection and union correspond mathematically to multiplication and addition - they are commonly called the "logical product" and "logical sum" respectively - and since their natural-language equivalents are conjunction and disjunction, it is not unusual, e.g. Reichenbach (1947), to symbolise and as * and or as +. To see the plausibility of these equations, it is only necessary to consider the result of adding 0 to 1 and of multiplying 0 by 1.
Obviously, (1 + 0) = 1, while (1 * 0) = 0. Any disjunction, (p or q), is true, i.e. has the value 1, if either disjunct is 1. Any conjunction, (p & q), is false, i.e. has the value 0, if either conjunct is 0. Thus, using p and q as propositional variables, the following equivalences hold:

(a) (p & q) = (p * q);

(b) (p ∨ q) = (p + q).

These equivalences are particularly important in the context of certain rules of equivalence, especially the distributive laws (see below for further discussion and chapter 3).
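The equivalences in (a) and (b) can be verified exhaustively over the truth values 0 and 1. One caveat, made explicit in the sketch below (my own illustration): the logical sum must be capped at 1, since in this algebra 1 + 1 = 1 rather than 2:

```python
from itertools import product

def conj(p, q):
    return p * q             # logical product: and

def disj(p, q):
    return min(p + q, 1)     # logical sum, capped so that 1 + 1 = 1: or

# conjunction agrees with min, disjunction with max, over {0, 1}
ok = all(conj(p, q) == min(p, q) and disj(p, q) == max(p, q)
         for p, q in product([0, 1], repeat=2))
print(ok)   # True
```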


When two sets, S and T, share no elements in common, they are "disjoint". Thus, the set of natural numbers and the set of English sentences are disjoint; their intersection is, in effect, the null or void set. The term "disjoint" is thus frequently used in the sense 'distinct'. Since it is possible to confuse the names of set elements with the elements themselves, it is common practice to interpret, say, "S = {a, b, c, ...}" as standing for the set of distinct objects denoted by a, b, c, ... rather than the set of names themselves. Following Carnap's (1961) usage, a set is taken "in extension" unless otherwise specified.

2.3 The cardinality of a set

Using Russell's illustration (1919), an exceedingly simple way of viewing the operation of addition is in terms of bringing the members of one set into a one-to-one correspondence with the members of an ordered set. Thus, if we take the set of natural numbers, N = {0, 1, 2, 3, ..., n}, as being an ordered set containing subsets, e.g. {1, 2, 3, 4, 5}, we may bring another set, say the fingers of one hand, into a one-to-one correspondence with it. This process in which the elements of the one set are paired with those in the other establishes a relation of "similarity" between the two. Among the real numbers, the set of rational numbers is, of course, infinite since there is no highest number. However, since, by definition, that set can be counted, it is said to be "denumerable" and any set which can be brought into a one-to-one correspondence with it is a "denumerable set", or a "denumerable infinity". In contrast, the set of irrational numbers, the "endless decimals", as Cantor showed, is nondenumerable. To establish the "cardinality" of a set, we have only to establish the similarity relation just referred to between it and some subset of the set of natural numbers. If a set contains no members, as already mentioned, it is said to be "empty", or "null" - equivalently, "void" - with cardinal number 0. If a set has only one member, it is called a "unit set" and is usually symbolised "1". In ordinary counting, it is universal practice to take the last member of a natural number subset which is brought into a one-to-one relation with the members of some other set as the number of that set. Thus, we say that 5 is the number of fingers on one hand, rather than specifying the set as {1, 2, 3, 4, 5}. When we employ this abbreviatory convention, we arrive at the cardinal number of the set or the cardinality of the set.
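Russell's counting procedure - pairing the members of a set one-to-one with an initial segment of the ordered natural numbers and taking the last number used - can be sketched as follows (the finger labels and shoe labels are my own filler):

```python
fingers = ["thumb", "index", "middle", "ring", "little"]

# pair the elements one-to-one with an initial segment of the naturals
pairing = list(zip(range(1, len(fingers) + 1), fingers))
cardinal = pairing[-1][0]       # the last number used is the cardinal number
print(cardinal)                 # 5

# cardinal equivalence without naming the number (the shoe-shop point):
left = {"L1", "L2", "L3"}
right = {"R1", "R2", "R3"}
print(len(left) == len(right))  # True: a one-to-one pairing exhausts both
```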


Of course, since subsets are members of sets, it is possible to count not just the individual elements, but their groupings as well. The set of all subsets of a set, including the set itself and the void set, is called the "power set". Thus, it follows that the cardinality of a power set is 2ⁿ, so that, for example, a set with three elements will have a power set whose cardinal number is the cube of 2, i.e. 8. It is to be noted here that the ordering within subsets is not significant. Hence, {a, b} = {b, a}, etc. To calculate all possible permutations is a more complex and, for our purposes, irrelevant process. It is important to note that in order to know that two sets, S and T, are cardinally equivalent - have the same cardinal number - it is not always necessary to know what that number is. To illustrate, again using an example from Russell (1919): assuming that a given shoe-shop has no broken pairs in stock, we know that the set of right shoes is cardinally equivalent to the set of left shoes. Another illustration of the same point, this time from Reichenbach (1947), is provided by the seats of a theatre and the size of the audience. If all the seats are taken and no patron is standing, we know that the set of seats and the set of patrons have the same cardinal number, even if we are ignorant of the number involved.
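The claim that an n-element set has a power set of cardinality 2ⁿ is easily verified for small sets. In the sketch below (mine, for illustration), subsets are generated as combinations of every length, so ordering within subsets is ignored, as the text requires:

```python
from itertools import chain, combinations

def power_set(s):
    """Every subset of s, from the void set up to s itself."""
    elems = sorted(s)
    return [set(c) for c in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))]

S = {"a", "b", "c"}
print(len(power_set(S)))                    # 8
print(len(power_set(S)) == 2 ** len(S))     # True: cardinality 2**n
```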

2.4 Product sets By a "product" or "Cartesian" set, {S X T}, is understood the set of all ordered pairs, < s, t > , where s e S and t e T. The notion of an ordered pair will be defined below in terms of the notion, Function. For the moment, it is sufficient to remark that: if a pair is ordered, for example, < s.t > , then that pair is not equivalent to any other ordering of the same elements, e.g. < t,s >. Mathematically, the members of a product set, {S X T}, are such that each member from S occurs as the first member paired with each element in T. Thus, if S has three elements and Τ has five, then each element of S will appear in five separate pairs. Thus, if S = {a,b,c} and Τ = {d,e,f,g,h}, then {S X T } comprises the pairs: (1)

{< a,d >, < a.e >, ...< b, h >; < c,d >, < c.e > ... < c,h

a.h

>; < b,d

>, < b,e

>,

...
}.

Hence, the cardinality of a product set, {S X T}, is the cardinal number of the one set multiplied by that of the other.
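The product set in (1), and the accompanying cardinality claim, can be generated directly (a Python sketch):

```python
from itertools import product

S = ["a", "b", "c"]
T = ["d", "e", "f", "g", "h"]

pairs = list(product(S, T))             # every ordered pair <s, t>
print(pairs[:2])                        # [('a', 'd'), ('a', 'e')]
print(len(pairs))                       # 15
print(len(pairs) == len(S) * len(T))    # True: 3 * 5 = 15
```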


2.5 Relations and functions

The notion of a product or Cartesian set provides a natural background for the consideration of relations and functions. Essentially, a relation holds between two elements in such a way as to bring them into a pair, which may or may not be ordered. Thus, a relation like < brings the elements of the set, N, of numbers into a set of ordered pairs such that one is less than the other. Similarly, the relation, is the author of, brings the elements of the set, S, of writers into an ordered relation with elements of the set, T, of written works, such that, for each pair, the one is the author of the other. It is to be observed that, in this example, the relation, is the author of, is not equivalent to is an author of. In contrast to these examples, the relations, = and is married to, bring two elements into an unordered pair. If "a = b" is true, or, if "a is married to b" is true, then the inverses of these statements are also true. Symbolically, if we use R to indicate a relation and the variables x and y for the elements concerned, we may write "y R x" to mean that y stands in the relation, R, to x. Further, since a set of ordered pairs of elements is a subset of the relevant product set, we may say that, for the relation in question, (R ∈ {S X T}) or, if one set is involved, (R ∈ {S X S}). When a relation holds between the members of one set, say the set, N, of numbers, it is said to hold "in" that set. When the relation holds between the members of two sets, it is said to be "from" the one set "into" the other. Hence, < is a relation in the set of numbers, while is the author of is a relation from the set of writers into the set of written works. A relation of one set into itself is often referred to as a "transformation". It is to be remarked that, in stating that a relation, R, holds, the ordering of the variables in the symbolic expression is opposite to what would seem to be intuitively natural.
Instead of "y R x", we might have expected "x R y". The former ordering is, however, conventional in mathematics. Thus, given the numbers 1 and 2 and the relation < , the ordered pair < 2. 1 > satisfies the relation since "1 < 2" is a true statement. Similarly, if y = Dickens and χ = Oliver Twist, then the pair < Oliver Twist, Dickens > satisfies the relation, is the author of The conventional ordering described above seems more natural when considered in respect of the interpretation of graphs. If the number of elements in a relation is finite, we may exhibit them in graph form with the intersections, or lattice points, indicating each ordered pair. As is customary, the horizontal axis is taken to be the x-axis and the vertical the y-axis. On the x-axis, the

independent variables are written and, on the y-axis, their dependent counterparts. In reading such a graph, it is practice to read the elements on the x-axis first. The elements in a relation which comprise the first components of the associated pairs - the elements on the x-axis - are known as the "domain" of the relation, while those making up the set of second components - elements on the y-axis - are referred to as its "range" or its "value". Thus, if the relation, R, is is the author of, the domain is the set of written works and the range is the set of writers. Finally, the set of elements which together make up the domain and the range is called the "field" of the relation.

Obviously, the relation, <, is potentially multi-valued. Thus, if U is a set of numbers, and x = n, for some n, there may be a number of elements, y, which pair with n to satisfy the relation. Similarly, the relation, is an author of, will be multi-valued since books, etc. may be co-authored - hence the indefinite article, an, in the English name for this relation. A relation which is uniquely valued is conventionally known as a "function". Thus, for instance, the relation between any number and its square, cube, etc. is a function, as is the relation, husband of, in a monogamous society.

If we regard the set of pairs satisfying a relation as constituting the relation - an equation which is proper since an exhaustive list of such pairs constitutes a definition of the relation - we may rephrase the above definition of a function as a uniquely valued relation by saying that a function is a set of ordered pairs sharing no common first member. As an illustration, let U = {1, 2, 3}. Then the cube-function defined over U is: {<1,1>, <2,8>, <3,27>}. By contrast, the relation, is a square root of, is not a function since any positive number has both a positive and a negative square root. Hence, for instance, 4 has the square roots 2 and -2, so that 4 appears as the first member of two distinct pairs.

This is a useful way of looking at functions, in the context of natural language, since it emphasises the functional vs relational status of given expressions. Thus, Oliver Twist is the first member of only one pair defining the function, is the author of, i.e. <Oliver Twist, Dickens>, since the book in question had only one author. By contrast, the expression written by denotes a relation since, for any argument, it may have several values. Thus, written by includes many pairs sharing the same first member, e.g. <Dickens, Oliver Twist>, <Dickens, Great Expectations>, ... Functions may be generalised symbolically as F(x), where F is the function variable and x the domain variable. Thus, if F is the cubing function, F(3) = 27. If F is the function, husband of, then F(Mrs. Thatcher) = Dennis
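The identification of a relation with its set of ordered pairs lends itself to direct computation. The following Python sketch tests whether a set of pairs is uniquely valued - i.e. whether it is a function in the strict sense; the sample data (the cube function and the authorship pairs) are merely illustrative.

```python
# Relations as sets of ordered pairs: a relation IS the set of pairs
# that satisfy it, and a function is a relation in which no two pairs
# share a first member.

def is_function(relation):
    """True iff no first member appears in two distinct pairs."""
    firsts = [x for (x, _) in relation]
    return len(firsts) == len(set(firsts))

# The cube function over U = {1, 2, 3}: a uniquely valued relation.
cube = {(1, 1), (2, 8), (3, 27)}

# "written by" is merely a relation: Dickens pairs with several works.
written_by = {("Dickens", "Oliver Twist"), ("Dickens", "Great Expectations")}

domain = {x for (x, _) in cube}    # first components: {1, 2, 3}
value  = {y for (_, y) in cube}    # second components: {1, 8, 27}

print(is_function(cube))        # True
print(is_function(written_by))  # False
```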

Thatcher. This notation is the one customarily employed by logicians. Further, since the value of F(x) is y, it is common to employ the function expression in place of y itself. Thus, F(x) = y.

A function such as cube is a simple function in the sense that only a single operation is performed on a given argument. It is obviously possible to create complex functions, which involve two or more operations. Such complex functions may be viewed as comprising functions whose domain contains other functions and their domains. As illustrations, consider the following, in which F is the squaring function and G the factorialising function: F(G(2)), i.e. 4; G(F(2)), i.e. 24. Complex functions are not, of course, restricted to mathematics. An excellent example from kinship relations, provided by Thomason (1974), is the following, in which F = mother of and G = father of. Combining these functions with respect to an appropriate argument yields either F(G(x)) or G(F(x)). Given the definitions, the first of these expressions yields the paternal grandmother of x, while the value of the second is the maternal grandfather of x. In theory, there is no limit to the degree of complexity which such functions may assume. In stating the semantic rules for representative English words, in chapter 8, many of the functions - values assigned to words - are complex functions. Thus, for example, seldom denotes a complex function, having, as its domain, propositions standing as argument to the function denoted by often. It is to be remarked at this stage that, although the term "function" is reserved in mathematics and in the writings of Montague for uniquely valued relations, in other disciplines, including formal logic, the expression is sometimes used more loosely for multi-valued relations. In fact, some scholars, including Reichenbach (1947), employ the term "function" as a synonym for "predicate" or "verb". Thus, run is a one-place function, kill is a two-place function.
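Both the arithmetical and the kinship examples of complex functions can be sketched directly as composition. In the Python fragment below, the squaring/factorialising pair follows the text; the family data for Thomason's mother-of/father-of pair are invented purely for illustration.

```python
# Complex functions as composition: the value of one function supplies
# the argument of another.

from math import factorial

F_num = lambda n: n * n          # F = squaring
G_num = factorial                # G = factorialising
print(F_num(G_num(2)), G_num(F_num(2)))   # 4 24

# Kinship: F = mother of, G = father of, over an invented toy family.
mother_of = {"x": "May", "Tom": "Ann"}
father_of = {"x": "Tom", "May": "Sam"}
F = mother_of.get
G = father_of.get

print(F(G("x")))   # F(G(x)): paternal grandmother of x -> Ann
print(G(F("x")))   # G(F(x)): maternal grandfather of x -> Sam
```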

2.6 Equivalence relations

From the viewpoint of individual members of a set, the important relation between them and the set itself is membership. Thus, we say "s ∈ S" just so long as there is some characteristic property by virtue of which s is a member of S, even if that property is only membership itself.

28

Background notions from

mathematics

When we consider relations between sets and subsets, the most fundamental is that of equivalence or equality. Thus, the expression "S = T" claims that:

(D.1)

s ∈ S iff s ∈ T, and t ∈ T iff t ∈ S.

Of course, since every member of S is in S, the definition of equivalence just given implies that S = S. When we say that a set is equal to itself, we assert that the equivalence relation is a "reflexive relation". Equivalence, however, is not the only reflexive relation. Thus, parallel to is also usually held to be reflexive since any line is taken to be parallel to itself. Similarly, the relation as big as, or that denoted by born on the same day as, is also a reflexive relation. In addition to being reflexive, it is intuitively obvious that equality is "transitive", by which is meant:

(D.2)

if set S = T and T = U, then S = U.

If line a is parallel to line b and b is parallel to line c, then a is parallel to c. If x is born on the same day as y and y on the same day as z, then, clearly, x is born on the same day as z. Finally, it is apparent that the definition of equality involves the assumption of symmetry. We say that a relation, such as equivalence, is a "symmetrical relation" if it holds in both directions. Thus:

(D.3)

if S = T, then T = S.

If line a is parallel to line b, then b is parallel to a. If x is born on the same day as y, then y is born on the same day as x. Relations, such as parallel to, which share these properties with equivalence are called "equivalence relations". Such relations will always have the properties of reflexivity, transitivity and symmetry. A relation which is not reflexive is sometimes called an "irreflexive" relation. Thus, < is irreflexive, since, if (x < x) were true, then (x = x) would be false, which is patent nonsense. As well as being irreflexive, < is clearly "non-symmetrical". This is so because, if a < b, then the reverse cannot hold. Although < is irreflexive and non-symmetrical, it does have the property of transitivity since, if a < b and b < c, then a < c. Clearly, a relation which is irreflexive, non-symmetrical and intransitive is not an equivalence relation. Thus, son of is a non-equivalence relation, as is parent of. Hence the need for the prefix grand in these and similar cases to express a transitive-like relation as in:

Boolean algebras

29

(2)

John is the son of Peter and Peter is the son of Jack. Therefore, John is the grandson of Jack.

(3)

Mary is a parent of Jean and Jean is a parent of Sally. Therefore, Mary is a grandparent of Sally.

It is to be observed, here, that the question of whether a given relation has any one or all of the properties mentioned above is not always a simple yes/no matter, especially in respect of relations denoted by natural language expressions. Thus, the relation brother of, while it is irreflexive and transitive, may or may not be symmetrical, as can be seen from the following examples: (4)

*John is his own brother.

(5)

John is a brother of Jack and Jack is a brother of Fred. Therefore, John is a brother of Fred.

(6)

*John is a brother of Mary. Therefore, Mary is a brother of John.

Following Reichenbach's practice (1947), we may use the prefix "meso-" to indicate such in-between values. Hence, brother of is a mesosymmetrical relation. It is customary to further classify relations according to the correspondences between their arguments. Thus, a relation like husband of is a one-to-one relation in a monogamous society and is thus a function in Montague's strict usage. The relation is a finger of is also, stricto sensu, a function since it is many-to-one. The relation native language of, on the other hand, is one-to-many, while pupil of is many-to-many. Finally, it is worth noting that natural languages do not always have specific terms to denote given relations. In English, orphan denotes the child of deceased parents, but there is no corresponding term for the relation of being a bereaved parent.
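For finite relations given as sets of pairs, the three defining properties of an equivalence relation can be checked mechanically. The sketch below uses an invented "born on the same day as" relation and the < relation over {1, 2, 3} for contrast.

```python
# Testing a finite relation (a set of pairs over a domain) for
# reflexivity, symmetry and transitivity.

def reflexive(rel, dom):
    return all((x, x) in rel for x in dom)

def symmetric(rel):
    return all((y, x) in rel for (x, y) in rel)

def transitive(rel):
    return all((x, w) in rel
               for (x, y) in rel for (z, w) in rel if y == z)

people = {"x", "y"}
same_birthday = {("x", "x"), ("y", "y"), ("x", "y"), ("y", "x")}

print(reflexive(same_birthday, people),
      symmetric(same_birthday),
      transitive(same_birthday))          # True True True: an equivalence

less_than = {(1, 2), (2, 3), (1, 3)}      # < over {1, 2, 3}
print(reflexive(less_than, {1, 2, 3}),    # False: irreflexive
      symmetric(less_than),               # False: non-symmetrical
      transitive(less_than))              # True: transitive
```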

2.7 Boolean algebras

In a sense, we may consider our discussion of sets as a discussion of an algebra. In that algebra, the elements are sets and the operations are those on sets. In general, any algebra is a system (A,F) consisting of elements, A, and operations, F, defined on A. Thus, if A = the set of numbers and F = {+, *, -, /} and the relation = is defined, then (A,F) is the algebra of arithmetic.

An algebra which is an algebra of sets is known as a "Boolean algebra" after George Boole (1854). The elements of the algebra may be indifferently interpreted as actual sets, or as propositions, as sentences of a natural language, etc. (Langer, 1953, offers an amusing demonstration of the fact that the slicing of a cake can be thought of as a Boolean algebra!). In fact, it is the case that any system in which the relation, <, is defined and which has the three operations, disjunction, conjunction and negation, i.e. v, & and -, is a Boolean algebra if it satisfies the following postulates.

(7)

Boolean postulates

a. 1 and 0 are distinct.
b. (x & y) = (y & x); (x v y) = (y v x).
c. (x & (y & z)) = ((x & y) & z); similarly for disjunction.
d. (x & (y v z)) = ((x & y) v (x & z)).
e. (x & -x) = 0; (x v -x) = 1.
f. (x v 0) = x; (x & 1) = x.

A selective commentary on this axiomatic system seems appropriate. Postulate (7a) establishes the disjoint status of 1 and 0. It is the fact that these elements are disjoint which, of course, permits postulate (7e). Since 1 and 0 = truth and falsehood respectively, postulate (7a) also requires that these values be distinct, thereby guaranteeing the law of excluded middle. Postulate (7b) is the familiar rule of commutation. Thus, in arithmetic, the operations of addition and multiplication are commutative since the result of adding two numbers or of their multiplication is indifferent to the ordering involved. Similarly, in a natural language, the operations of disjunction and conjunction are commutative, provided they are of the logical variety (see chapters 3/8). Postulate (7c) is the rule of association. According to this rule, groupings are irrelevant in multiplication and addition and, indeed, the associative law permits the equation of bracketed and bracket-free expressions under the appropriate operations. In natural languages, the operations of disjunction and conjunction are also generally associative, as can be seen from the following:

(8)

John came and Mary and Jane left. = John came and Mary left and Jane left. (So too for or.)

The fourth postulate is the law of mathematical distribution. It is to be noted that formal logic and natural languages allow for a second distributive

law in which or and & supplant each other in (7d). Clearly, this second law does not hold in arithmetic since (x + (y * z)) ≠ ((x + y) * (x + z)). The postulates in (7e) proclaim, on the one hand, the contradictory status of a conjunction of a proposition and its negation and, on the other, the tautological status of the disjunction of a proposition and its negation. In terms of sets, it is obvious that any element, a, cannot belong both to a set and to its complement. That is to say: the set of elements belonging to S and -S is the void set, 0, and any statement to the contrary is false, i.e. has the value 0. Conversely, the set of elements belonging either to S or its complement -S is the universal set 1 and any statement to that effect is true, i.e. has the value 1. I return again to the question of contradictions in natural languages. Here, it is to be noted that, in spite of their apparent lack of informativity, tautologies such as (9) and (10) occur fairly frequently in natural discourse as expressions of emotions such as resignation.

(9)

Either the government will fall or it will not fall.

(10)

If Sally is upset, she is upset.

The postulates in (7f) are especially interesting from the viewpoint of natural language since they enable the systematic identification of conjunction with multiplication and disjunction with addition. In chapter 3, I shall provide some discussion of and and or in the context of formal logic. It will be seen that, in that context, two statements joined by and result in a false statement if either is false. If two statements are joined by or, the result is a false statement only if both disjuncts are false. Let x = 1 and y = 0 and let 1 and 0 represent truth and falsehood respectively. Since (1 * 0) = 0, while (1 + 0) = 1, it follows that truth times/and falsehood equals falsehood and truth plus/or falsehood equals truth. Thus, as noted earlier, multiplication equals and and addition equals or. It is, however, important to note that, if the conjuncts are not propositions, the (and = *) equation fails. Thus, in English, and is often used to mean + in informal statements of arithmetic, as in:

(11)

7 and 7 make 14.

Like any formal system, including a logical system, Boolean algebras must be consistent and complete. A system is consistent if and only if, for any formula a which can be derived in it, the negation -a is not also derivable. For the system to be complete, any valid wellformed formula a which is not a postulate must be derivable in it as a theorem. This is possible only, of course, if the postulates are valid - true under any interpretation. It also

requires that all rules of derivation be truth preserving. I return to rules of derivation, or inference, in chapter 3.
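Since the two-element system {0, 1}, with & as conjunction, v as disjunction and - as negation, satisfies (7), the postulates can be checked by exhaustive enumeration. The following sketch does exactly that.

```python
# Checking the Boolean postulates of (7) by brute force over the
# two-element algebra {0, 1}: & = bitwise and, v = bitwise or, -x = 1 - x.

from itertools import product

AND = lambda x, y: x & y
OR  = lambda x, y: x | y
NOT = lambda x: 1 - x

for x, y, z in product((0, 1), repeat=3):
    assert AND(x, y) == AND(y, x) and OR(x, y) == OR(y, x)      # (7b)
    assert AND(x, AND(y, z)) == AND(AND(x, y), z)               # (7c)
    assert AND(x, OR(y, z)) == OR(AND(x, y), AND(x, z))         # (7d)
    assert AND(x, NOT(x)) == 0 and OR(x, NOT(x)) == 1           # (7e)
    assert OR(x, 0) == x and AND(x, 1) == x                     # (7f)

print("all Boolean postulates hold over {0, 1}")
```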

2.8 Isomorphisms and homomorphisms

Any two algebraic systems may be related in a number of ways. The simplest cases to consider are those in which there is one operation only, which may arbitrarily be treated as though it were either addition or multiplication. Since it is convenient, at this point, to consider number systems, the additive or multiplicative operations do, in fact, correspond to their arithmetical uses. However, it is important to realise that this correspondence is not necessary. Thus, the expressions "a * b" or "a + b" may be taken as designating any operation on two arguments, so that "*" = "+". Given two systems, say of numbers, one, S, may correspond precisely to the other, S', just in case we are able to associate each element in S in a one-to-one relation with each element in S' and if, further, the operation is preserved under the correspondence. Such a relation is referred to as an "isomorphism" and may be simply illustrated by the following case. Let S be the set of natural numbers and S' be the set of their common logs - logs to the base 10. Then, for every a and b in S - for arbitrary a and b - there will correspond a unique element a', b' in S'. If, for example, a = 10, then a' = 1. If b = 100, then b' = 2. If, now, the operation in S is multiplication and that in S' is ordinary addition, the relation between S and S' will be isomorphic, as Table 1 testifies.

Table 1. An isomorphism

S:  a = 10;  b = 100;  (a * b) = 1000
S': a' = 1;  b' = 2;   (a' + b') = 3
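The correspondence in Table 1 can be verified directly: the mapping a → log10(a) carries multiplication in S onto addition in S', i.e. h(a * b) = h(a) + h(b).

```python
# Verifying the isomorphism of Table 1: common logs turn
# multiplication into addition.

from math import log10, isclose

h = log10
a, b = 10, 100

assert h(a) == 1 and h(b) == 2
assert isclose(h(a * b), h(a) + h(b))   # log10(1000) = 1 + 2 = 3
print(h(a * b))
```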

(12)

a. aA => AaA.
b. XxY => XYx.
c. a => -.
d. - => a.

Here, command (12d) is the start-command, having the void set to the left of the arrow, and (12c) the stop command, having void to the arrow's right. Command (12b) merely ensures that elements and auxiliary-marked elements interchange to permit the operation of (12c). Command (12a) generates the requisite number of copies and, since it may take its output as input, it may be applied an infinite number of times. In fact, of course, the number of applications of (12a) will be determined by the number of elements in E. Thus, if E = aa, its copy will be "aa" and so forth. The above algorithm makes explicit reference to an alphabet in B consisting only of A. It should, however, be clear that the number of admissible elements is not limited. Thus, if E is the English word cat, then we can derive its copy simply by expanding the algorithm to allow for the fact that three letters are involved as follows:

(13)

a. cC => CcC.
b. aA => AaA.
c. tT => TtT.

This expansion will yield: (14)

CcCAaATtT.

which will input to the interchanging command eventually to yield: (15)

CAcCaATtT.

This string will input to the terminating command, suitably expanded, which erases the auxiliary alphabet to yield: (16)

CATCAT.

The algorithmic technique is, of course, very familiar to the modern linguist in the form of a standard phrase-structure grammar and its application will be assumed in the remainder of this study.
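Rewriting algorithms of this kind can be simulated mechanically: rules are tried in order, the first applicable rule rewrites the leftmost occurrence of its pattern, and the process restarts until a terminal rule fires. The sketch below is a working Markov-style duplication algorithm in that spirit - the marker M and the particular commands are my own reconstruction for the one-letter alphabet {a}, not the text's commands (12) verbatim.

```python
# A minimal Markov-style interpreter plus a duplication ruleset for
# strings over the one-letter alphabet {a}, with M as auxiliary marker.

def run_markov(rules, s, max_steps=10_000):
    """rules: list of (pattern, replacement, is_terminal) tried in order."""
    for _ in range(max_steps):
        for pat, rep, terminal in rules:
            if pat in s:
                i = s.find(pat)
                s = s[:i] + rep + s[i + len(pat):]
                break
        else:
            return s                   # no rule applicable: halt
        if terminal:
            return s
    raise RuntimeError("step limit exceeded")

DUPLICATE = [
    ("Ma", "aaM", False),   # copy: the marker sweeps right, doubling each a
    ("M",  "",    True),    # stop command: erase the auxiliary marker
    ("",   "M",   False),   # start command: introduce the marker
]

print(run_markov(DUPLICATE, "aa"))    # aaaa
print(run_markov(DUPLICATE, "aaa"))   # aaaaaa
```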

Chapter 3

Background notions from formal logic

3.1 Scope of this chapter

The following discussion of background notions from formal logic has, of necessity, to be very brief and, in consequence, sometimes rather superficial. The science of formal logic is of immense complexity and contains many subdisciplines, including, according to some, e.g. Whitehead-Russell (1910-13), mathematics itself. It is also fitting to add that there is a vast number of works, both introductory and advanced, on particular logics and that, since some of these at least have a very wide readership across several disciplines, including linguistics, it is not appropriate to attempt the repetition of their content here. I shall, therefore, offer a broad account only of those systems which are particularly relevant to linguistic semantics. Only in my discussion of Montague's intensional logic (1973) shall I attempt a reasonably detailed presentation.

3.2 The calculus of propositions

The calculus of propositions includes a system whose atomic elements are propositions and whose operations are the functions: conjunction, negation, disjunction, implication and equivalence. The propositional variables are taken from the set {p, q, r, ...} and the operations are symbolised by: &, -, v, —> and <—> respectively, with A for exclusive disjunction.

Moreover, within such a logical system, the relation of accessibility must also be transitive and symmetrical. In contrast to the situation within propositional logic, that in deontic logic is less clear. Thus, for instance, since the worlds in a deontic system are morally ideal worlds, the actual world is not accessible from its ideal self. Thus, when the actual world is included in a deontic system, the accessibility relation is not reflexive (McCawley, 1981). Hence, under such circumstances, the axiom (60) does not hold.

3.7 Lambda abstraction

At this point, it will be useful briefly to present the outlines of a technique which makes possible the conversion of a propositional function into an expression of another type, frequently a predicate. This technique, known as "lambda abstraction", is employed extensively in much current work in natural language formalisation, including Montague (1973), Lewis (1970) and Cresswell (1973, 1985). Consider, first, a rule which states that a given category of expressions, γ, is formed by the combination of a functor category, β, and n arguments of category α. For the purposes of exposition, let γ be the category of Sentence, β that of Intransitive verb and α that of Nominal. Such a rule would combine a nominal like Percy with a verb like runs to yield a sentence:

Percy runs.

A system of rules which builds up complex expressions in this way is known as a "Categorial grammar" and its discussion will figure prominently in much that follows - especially chapter 7. For the present, let us assume that we have such a grammar and that it can be used to provide the sentences of a natural language with descriptions of their syntactic derivations. In cases like (61), the situation is perfectly straightforward. However, (61) is, by no means, representative of the majority of sentential constructions. Recalling the discussion of the interpretation of sentences involving multiple quantification (section 3.5), how would one derive a sentence like (62)? (62)

Everyone is related to someone.

One symbolisation of (62) in the predicate calculus would be:

(62) a.

(∀,x), (∃,y) ((H(x) & H(y)) —> R(x,y)).

As in the earlier examples, however, (62) is ambiguous - some might insist that everyone is related to God - so that an alternative representation is also required, namely:

(62) b.

(∃,y), (∀,x) (H(y) & (H(x) —> R(x,y))).

Obviously, since (62) is ambiguous, it must be provided with two distinct semantic representations corresponding to (62a) and (62b). These alternatives must both include the formula, (R(x,y)), which is common to (62a) and (62b). At the same time, we must provide (62) with syntactic derivations which, as far as possible, mirror its semantic representations. As a first step, we might say that a verb like is related to is of category, β, and that it combines with two words of category α, such as everyone and someone, to form a sentence. This simple approach is not, in itself, sufficient to accommodate the facts of (62a) and (62b), including the formula which they share and the relative orderings, in which they differ, of the quantifiers. Let us say that we have a rule which says that, when a formula contains a variable which is within the scope of the lambda operator, λ, the whole has the status of an intransitive verb. Such a verb would have the following form:

(63)

(λ,x (F(x))).

A structure like (63) might, for example, be exemplified by the following lambda abstract:

(63) a.

(λ,x (Runs(x))).

If we wished to represent the structure of a simple sentence like (61), using lambda abstraction, we would have:

(61) a.

(Percy (λ,x (Runs(x)))).

Presuming that the individual denoted by Percy satisfies the propositional function, (Runs(x)), (61a) can be converted into the standard logical equivalent of (61) simply by the substitution, Percy/x, and the deletion of the operator, λ, along with the superfluous brackets. If, instead of performing the actual substitution, we include the variable among the deletions, we have (61) exactly. The representation of (61) as (61a) is, however, more complex than either the syntax or the semantics requires. We gain nothing whatever from its employment. However, consider again the more complex problem of (62) with its alternative readings. Since lambda abstraction allows us to create verbs from propositional functions and, since it can be used to reflect scope

relations between quantifiers, we can provide (62) with alternative semantic representations as follows:

(62) c.

(Everyone (λ,x ((λ,y (R(x,y))) someone))).

(62) d.

((λ,y (Everyone (λ,x (R(x,y))))) someone).

At first sight, these structures seem rather cumbersome. However, quite apart from being easily assigned a semantic interpretation, their power resides in the fact that, while the scope relations are reflected correctly, the linear positions of the English words correspond to those in the actual sentence (62). Again, as with the representation of (61), (62c,d) may be converted into structures equivalent - assuming the use of the logical signs - to (62a,b), simply by the substitutions, Everyone/x and someone/y, and the indicated deletions. Alternatively, the natural English string, (62), may be obtained by deleting all logical symbols. Probably the best known exponent of the technique just outlined is Cresswell (1973, 1985). Cresswell shows that, by employing lambda abstraction, an enormous array of natural-language structures can be represented so as closely to reflect the link between syntax and semantics. I shall follow his example in this study. There are, of course, several technical aspects to lambda abstraction which I have not mentioned. I shall, in chapter 7, show how Cresswell incorporates lambda abstraction directly into a categorial language in such a way as to allow for the construction of any category of expressions. Here, I shall comment only on the principle of Lambda conversion. Clearly, if an expression, α, is to be converted into another, α', then α must be semantically equivalent to α'. Cresswell (1973) discusses, at length, an equivalence relation, due to Church (1941), called "bound alphabetic variance". The details of this relation are complicated, but the basic principle is not. Two expressions, α and α', are bound alphabetic variants if and only if they differ just in respect of free variables. The equivalences in question will be as follows.

a. If α differs from α' only in having free x where α' has free y, then α = α'.

b. If α differs from α' only in having free variables where α' has none, then α = α'.

c. If α is ((λ,x (γ,x)) β), for some β, and α' is (γ,β), then α = α'.

Any expressions, a and a ' , meeting one of these equivalences are bound alphabetic variants and may be converted into each other. As indicated, from the viewpoint of mathematical semantics, the attraction of lambda abstraction is that, incorporated within categorial grammar, it makes possible the construction of semantic representations in terms of the syntax. Thus, it permits the straightforward association of the two planes of meaning and expression.

3.8 Montague's intensional logic

If we combine the apparatuses of the higher-order predicate calculus and an enlarged version of modal logic which allows for intensional interpretations, along with lambda abstraction, we obtain an Intensional logic. If, further, we enrich such a system with tenses, it becomes a Tensed intensional logic. Montague (1973) employs such a logic to provide the semantic representations of natural-language expressions. This section will focus upon his development. The enlargement of the modal logic takes the form of the addition of two more operators, one for intensionality and the other for extensionality. Using Montague's symbolisation, we indicate the former, for any expression α, by ^α and the latter by ˇα. This enlargement is made possible by virtue of the interpretation of modal systems in terms of sets of possible worlds rather than an arbitrary world as in the case of the predicate calculus. Instead of thinking of the denotation of a given expression, say a definite description, as some entity in a possible world - its extension in that world - we may regard the expression as referring to the set of its extensions in all possible worlds. To repeat the commentary in chapter 1: the range of its possible extensions is what is understood to be the intension of an expression. Thus, the intension of the evening star is the set of all of its possible extensions, including, in our world, the planet Venus. The intension of the morning star is its extension in each possible world, including the planet Venus in our world. We conclude, therefore, that while the evening star and the morning star have the same extension in our world, they might have different extensions in other possible worlds and, thus, they have different intensions. This is why Frege's famous:

(64)

*Necessarily, the evening star is the morning star.

is false, i.e. denotes a false proposition, whereas, as we have already noted:

(65)

Necessarily, the evening star is the evening star.

is true, i.e. has a true proposition as its denotation. The intension/extension distinction is an ancient one in philosophy (see, for example, Dowty et al. 1982). However, in its modern form, it is usually traced to Carnap's development of Frege's idea that an expression has both a sense and a reference. In Carnap's original treatment (1947), the intension of an expression was a function with state descriptions or models as its domain of arguments and extensions in its range of values. Lewis (1970), along with other scholars, including Montague (1973), refined the Carnapian view of this function by making its domain an n-tuple of relevant factors - relevant to the determination of meaning - including possible worlds and moments of time. Such n-tuples are called "indices" and their elements "co-ordinates". In chapter 6, I shall discuss indices in some detail, including Lewis's system. Here I shall simplify by saying that the domain of such a function is a set of indices made up of a possible world and moment of time and its values are extensions. I take it that certain expressions may have extensions which coincide with their intensions. Thus, a sentence denotes a proposition and has a proposition as its intension, i.e. a function from indices to truth values. The intension of many proper nouns, e.g. Scott, is a function from possible worlds and moments of time to a unique individual and is, thus, coincident with its extension. As the remarks in the opening chapter suggested, the notion of an intension is fundamental to an understanding of the notion of Meaning. Again following Lewis (1970), we may say that to say what a meaning is is, in part, to say what it does. Since intensions relate possible worlds to extensions, they are obviously part of what a meaning does. Since this chapter is devoted to background notions in formal logic, it would be pleasing to present intensional logic as a purely formal system.
However, one of Montague's major interests in developing his system lay in its applicability to natural language. It seems strained, therefore, to discuss this logic without referring to its application. What follows will, therefore, constitute an important - albeit derivative - part of what I have, myself, to say about the semantics of natural languages. In the previous section, I introduced, in very general terms, the notion of a categorial grammar. While I shall not elaborate on this notion until chapter 7, it is again convenient to use it here. Essentially, a categorial grammar is a simple context-free grammar which allows for the construction of infinitely many "derived" categories from a small number of "basic" ones. Such grammars were developed by Ajdukiewicz (1935), who was indebted to Lesniewski and, ultimately, to Husserl. Montague (1973) assumes two basic categories, e and t. We may think of these as the categories of entities and sentences respectively. The letter "t" stands for "sentence" because only sentences denote truth-bearing expressions. From these two basic categories, we derive others of the form, (a/a) or (a/b). In terms of functions, such derived categories have their range of values in the category named by the leftmost symbol and their domain of arguments in the category represented by the rightmost symbol. Thus, (a/a) takes, as argument, a member of category, a, and yields a value in a. Similarly, (a/b) takes a member of b as argument and has a value in a. The utility of such a system is that it readily lends itself to the construction of complex expressions in a manner which explicitly reflects their compositional structure. Such a system, therefore, lends itself naturally to the exploitation of Frege's principle of compositionality. As a simple instance of a derived category in English, let (t/t) represent the category of sentential adverbs, including modal operators, such as necessarily. Such expressions concatenate with sentences to form sentences, as in (64). Functionally: a sentential adverb is a function whose argument is a sentence, member of t, and whose value is a sentence, member of t. In like fashion, we may think of a one-place predicate, such as walks, as a function with argument in e and range in t. Thus, walks, being of category (t/e), takes an expression of category e, say Percy, as argument and yields, as value, a member of t, namely the sentence:

(66) Percy walks.
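As a toy illustration of this functional view, a derived category (a/b) can be modelled as an ordinary function; the string-building definitions below are my own sketch, not Montague's machinery.

```python
# A derived category (a/b) is a function from expressions of category b
# to expressions of category a.

def predicate(word):
    """A (t/e) expression: maps an entity expression to a sentence."""
    return lambda entity: f"{entity} {word}"

walks = predicate("walks")                 # category (t/e)
assert walks("Percy") == "Percy walks"     # category t

# A sentential adverb is of category (t/t): sentence in, sentence out.
necessarily = lambda s: f"necessarily {s}"
assert necessarily(walks("Percy")) == "necessarily Percy walks"
```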

Obviously, there is more to a categorial grammar than the above paragraphs suggest. For instance, it proves useful to set up a system of rules of functional application which spell out the applications of functions to arguments and any morphological or other changes those applications may bring about. In addition, it is necessary to provide a lexicon which lists the basic expressions belonging to each category, basic or derived. The lexicon will, of course, be language-specific. Let us assume, in addition to the two syntactic categories, e and t, the existence of a semantical object, s. S may be thought of as the set of senses or intensions. Let Y represent the set of "Types". Then Y is the smallest set such that:

(67) Members of Y

a. e, t ∈ Y;
b. where a, b ∈ Y, < a,b > ∈ Y;
c. if a ∈ Y, then < s,a > ∈ Y.
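The recursive definition of Y can be rendered as a membership test; the tuple encoding of < a,b > is an implementation choice of this sketch, not Montague's notation.

```python
# Y, the smallest set satisfying the three clauses above, as a recursive
# test.  A derived type <a,b> is the tuple (a, b); 's' is only a type-former.

def is_type(x):
    if x in ("e", "t"):
        return True                              # clause (a): basic types
    if isinstance(x, tuple) and len(x) == 2:
        left, right = x
        if left == "s":
            return is_type(right)                # clause (c): <s,a>
        return is_type(left) and is_type(right)  # clause (b): <a,b>
    return False

assert is_type("e") and is_type("t")
assert is_type(("e", "t"))                       # the intransitive-verb type
assert is_type(("s", ("e", "t")))                # its intensional counterpart
assert not is_type("s")                          # s alone is not in Y
```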

Self-evidently, (67) allows for both basic and derived types. The intensional logic employs constants - primed tokens of their natural-language counterparts, e.g. man' for man - and variables of each type. Each type is written in the reverse order to that used for the syntactic categories, '/' is replaced by ',', and derived types are enclosed in angles. Thus, the type notation corresponding to (a/b) is < b,a >. Clause (c) of (67) permits intensional types of any category, basic or derived. Montague uses an assignment function, F, whose domain is the set of all constants, such that:

(68) If a ∈ Y and α is a constant of type a, then F(α) ∈ < s,a >.

Thus, F assigns to each constant an intension. For instance, Atlantis' no longer denotes a member of type e, but an intension of type < s,e >. Verbs like looks for', which create opaque, or "oblique", contexts, take as object argument a member of < s,e >, say F(Atlantis'), and yield, as value, an intransitive verb, e.g. looks for'(F(Atlantis')), which denotes a member of type < < s,e >, t >. Such a verb, in its turn, takes a member of < s,e > as argument, say F(Percy'), and yields, as value, a proposition, member of < s,t >, i.e. the proposition denoted by:

(69) Percy looks for Atlantis.

Since < s,t > is a function with domain in possible worlds and moments of time and range in truth values, the value of the proposition denoted by (69) will be in {1,0}. This account of (69) is greatly simplified and departs from Montague's treatment on several counts, including the fact that he would take for as a separate item - an intensional preposition. The point is that the verb looks for does not presuppose the existence of its object, and the fact that it takes a member of < s,e >, rather than the extensional type e, allows for this. In addition, the subject of the verb must be rendered extensional by a suitable postulate, see below. As another illustration, consider an oblique context created by the sentential adverb necessarily. Let necessarily' be of type < < s,t >, t >. That is to say: let it be an intensional adverb. Accordingly, necessarily' takes a member of < s,t >, a proposition, as argument and yields a sentence as value. This permits the solution of Frege's evening star paradox. Necessarily claims universal truth or falsehood of its complement. Necessarily', therefore, demands that its argument be an intension, not an extension, which is particular to some arbitrary world and moment of time. We tend to think of oblique contexts in terms of the classes of verb, adverb, adjective, etc. which create them. However, as Cresswell (1973) demonstrates, the necessity for "intensional objects" is not restricted to such constructions. He gives the following example of a sentence which depends, for its ambiguity, crucially upon the fact that the subject noun phrase denotes a function whose values change with its temporal arguments.

(70) The prime minister of New Zealand will always be a British subject.

Whether such sentences provide "the most plausible evidence" of the need for intensional objects, or whether they are, at some level of analysis, of a kind with the more obvious opaque cases, is unclear. However, it is very apparent that opacity must be of central concern in semantics, and Montague's intensional logic was constructed with this concern at its heart. I take up the topic, including Cresswell's examples, again in chapter 4. I gave the type for necessarily' above as < < s,t >, t >. This is, however, a simplification. In fact, the function F assigns all constants a member of type < s,a >, so that necessarily' will actually be of type < s, < < s,t >, t > >, in which < < s,t >, t > = a. This manoeuvre proves, ultimately, to lead to simplification in the grammar. Its effect is to make a completely general treatment possible in which all expressions are initially treated intensionally. During the computation of the meaning of any sentence, those intensions which are inappropriate are reduced to extensions through the use of the operator ˇ in a manner to be discussed later in this section. Thus, while, on one reading, the object noun phrase in (69) must remain intensional, that in (71) must be extensionalised. This is so because photograph, like find mentioned in chapter 1, requires an extensional object.

(71) Percy photographed Atlantis.
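The intension-first regime can be sketched directly: an intension is a function from world-time indices to extensions, and reducing it to an extension is just applying it to an index. The toy indices and values below are invented for illustration.

```python
# An intension is a function from <world, time> indices to extensions; the
# extension operator of the text simply applies it to a chosen index.

def extensionalise(intension, index):
    """The 'reduce to an extension' step: evaluate at one index."""
    return intension(index)

# invented intension for "the temperature": its extension shifts by index
temperature = lambda index: {("w1", 1): 20, ("w1", 2): 25}.get(index)

assert extensionalise(temperature, ("w1", 1)) == 20
assert extensionalise(temperature, ("w1", 2)) == 25

# the intension of an intension is a constant function, so iterating
# <s, <s, a>> adds nothing new:
temp_intension = lambda index: temperature
assert extensionalise(temp_intension, ("w1", 1)) is temperature
assert extensionalise(temp_intension, ("w9", 7)) is temperature
```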

One alternative to complicating the types would be to complicate given semantic rules so that they sometimes apply to intensions and, at others, to extensions. This alternative is, however, undesirable, since it diminishes the generality of the analysis without resulting in a significant reduction in complexity. Yet another alternative is to interpret the class of individuals so widely as to include intensional objects, which may then act as arguments to functions just as ordinary extensional objects do. This is the approach which I adopt in later chapters of this study. It is to be noted that (c) of (67) allows for an infinite iteration of intensions. This is so because, if < s,a > is a type, then so is < s, < s,a > >, and so on. This unwanted iteration is of no semantic consequence since, by definition, an intension is a function from all possible worlds and moments of time to extensions. Hence, the intension of an intension is a constant function, and endlessly to apply it to itself leads to nothing more than an infinity of identicals. In the interests of readability, I shall often omit one or all tokens of the intensional symbol, s, when referring to given types unless they are central to the point at issue. It will be evident that the operation of a categorial grammar, as sketched earlier, is completely general. There is nothing to prevent the generation of arbitrary sequences, most of which would be ungrammatical. If we say that, where a and b are categories, (a/b) is a category, then, since adverbs and common nouns are categories, so is (adverb/common noun). This sanctions the creation of such odd strings as necessarily leopard. Indeed, if we treat all words as part of the lexicon, including articles and other grammatical words, we could even generate such monsters as leopard the, or such the and. There is nothing wrong with this liberality in the context of universal grammar. Indeed, its presence is to be welcomed, since we can never be sure that a given concatenation will not be required. One consequence is, of course, that, in concert with most current theory, we are led to view language-specific rules as filters which permit the passage of only a subset of the grammar's output. Parallel considerations hold for the peculiar semantic types which are assigned to the relevant expressions.
However, it may be that the semantic rules which remove nonsense are less language-specific than their syntactic counterparts. I discuss a logical example below. The intensional logic, like any other formal system, must contain a set of formation rules which specify those expressions which are well-formed. Such expressions are called "meaningful expressions" and their specification is such as to allow for the licence referred to above, while retaining the customary restraints of standard logics. These rules provide for the inclusion of lambda expressions and the usual wffs of the predicate calculus as well as the modal, tense and intensional operators. I reproduce Montague's formation rules below, commenting only on those which are of special interest or whose significance is not immediately apparent.

(72) Montague's formation rules

(R.a) Every variable and constant of type a ∈ ME_a.

Comment: The intensional logic employs denumerably many variables of each type, and each is a meaningful expression of that type. Among such variables are those which range over predicates, as in the higher-order predicate calculus. The logic also employs infinitely many constants of each type. It is necessary that these sets be denumerable since each variable is indexed by a natural number. The idea of a language with an infinite vocabulary is not, as Cresswell (1973) shows, implausible, especially if we regard numbers as words.

(R.b) If α ∈ ME_a and u is a variable of type b, then (λu(α)) ∈ ME_< b,a >.

Comment: This rule permits lambda expressions as meaningful expressions and determines their status. For example, suppose that α is a formula, i.e. of category t, and u is a variable of type e; then (λu(α)) is a verb, that is, a member of ME_< e,t >. As we saw in the previous section, this verb, a function, applied to an argument of the appropriate type, namely of e, yields a sentence.

(R.c) If α ∈ ME_< a,b > and β ∈ ME_a, then α(β) ∈ ME_b.

Comment: This rule stipulates the domain of arguments for any function and gives the status of the value for those arguments. To illustrate: let α be a verb, i.e. a function of type < e,t >, and β an appropriate argument, i.e. a member of e; then the value of α for that argument is a member of t. If α = runs' and β = Percy', then runs'(Percy') ∈ t. Although rule (R.c) dictates an inflexible condition on wellformedness - the argument for a given function must be of a specified type - it leaves free the specification of function and argument. Thus, paralleling the earlier discussion, we could, conceivably, have a function < e, < t,t > >, represented, say, by necessarily'', which took as argument a member of e, say Percy', to yield a sentential adverb, i.e. (necessarily'' Percy'). This adverb would then concatenate with a proposition in the usual way to yield a proposition, e.g.:

a. *((Necessarily Percy) it is snowing).

(R.d) If α, β ∈ ME_a, then (α = β) ∈ ME_t.

(R.e) If φ, ψ ∈ ME_t and u is a variable, then: ¬(φ), (φ & ψ), (φ ∨ ψ), (φ → ψ), (φ ↔ ψ), ((∀u)(φ)), ((∃u)(φ)), (L(φ)), (W(φ)), (H(φ)) ∈ ME_t.


Comment: This rule provides for all of the customary wffs of the predicate calculus - of any order. It also allows for the operators of modal logic - represented by L, so that possibly must be derived - and tense operators. Montague's tense provision is rather slight, allowing only for future, symbolised W, and past, symbolised H, with present unmarked. Cresswell (1973) argues that so-called "tense logic", even in Prior's own formulation (1968), is not particularly helpful in the analysis of the tenses of natural language. Dowty et al. (1981) provide a full discussion of tense in the context of Montague grammar, and I shall briefly return to this treatment below.

(R.f) If α ∈ ME_a, then (ˆα) ∈ ME_< s,a >.

(R.g) If α ∈ ME_< s,a >, then (ˇα) ∈ ME_a.

Comment: (R.f) and (R.g) establish the wellformedness of expressions prefixed by the intensional and extensional operators. It is obvious that ˇ is productive only if α denotes an intension - its application to an extension-denoting expression would be vacuous. It will be seen below that the utility of these operators is in providing flexibility in applying functions to arguments, in spite of the rigid constraint of (R.c) above.

(R.h) Nothing is a meaningful expression except as provided for by (R.a) to (R.g).

Having provided the syntax of the logic, through the formation rules, it is now time to turn to its semantics. First, the possible semantic values - possible denotations - for the different types must be provided. This is done by the following definition, where A, I, J represent the set of individuals, the set of possible worlds and the set of moments of time respectively. The notation D_a^{A,I,J} is to signify 'the set of possible denotations of a with respect to A, I, J' - for brevity, I shall often simply write D_a. As usual, the notation X^Y signifies a function with domain in Y and values in X.

(73) Possible denotations of types

a. D_e^{A,I,J} = A.

Comment: This simply establishes the set of possible semantic values for any expression of type e as the set of individuals. As noted earlier, since the set of individuals is taken in the context of all possible worlds and moments of time, A must include individuals which do not occur in the actual world, such as Pluto. The inclusion of such nonactual individuals is, of course, necessary if the logic is to be useful in the analysis of natural language. It will also be recalled that many things are included in the set of individuals which may not, normally, be thought of as individuals, including propositions and formulae. Since we can, and often do, talk about possible worlds and moments of time, these entities also may be members of A. Thus, to say that an expression has such-and-such a semantic value relative to < A, I, J > is to say that it has that denotation with respect to the set A. The subsets I and J are picked out for especial mention because they are denotation-determining in many cases. That is to say, they are used to fix the extensions of expressions. I should, however, mention that the intensional logic does not contain expressions which denote ordered pairs of I and J, indices, directly. This point is made by Dowty et al. (1981), who refer to a thesis by Gallin (1972). It seems that this reflects the fact that "natural languages do not make explicit reference to indices". Thus, for example, the phrase here and now has absolutely no meaning in the absence of a context of use.

(73) b. D_t^{A,I,J} = {0,1}.

Comment: As in most current work in formal semantics, the denotation of a proposition is asserted by Montague to be a truth value. It is to be noted that Montague's intensional logic employs a binary truth system, falsity/truth. His commitment here is a profound one, since it involves, among other things, adherence to the Russellian theory of definite descriptions (see chapters 1, 4 and 6). This approach obviously has significant implications for the semantics. As we saw earlier, it is clearly possible to take another view, namely that adopted by Strawson (1950). This alternative claim amounts to saying that a sentence, like Russell's, which suffers from presuppositional failure - it presupposes the existence of a king of France - is neither true nor false. This approach leads naturally to the adoption of a multi-valued system in which, in addition to 0 and 1, we might include an indeterminate value, or even a range of approximations to truth or falsehood. The standard introduction to such logic is probably still Rescher (1969), at least for non-Polish speakers. I return to Russell's account and the controversy it generated frequently, especially in chapter 6.

(73) c. D_< a,b >^{A,I,J} = D_b^{D_a}.

Comment: (c) says that the denotation of a complex type < a,b > is a function from the possible denotations of a to the possible semantic values of b. Thus, if < a,b > is an intransitive verb, member of < e,t >, then its possible denotation is a function from individuals, or sets of individuals, to truth values. If < a,b > is a transitive verb, member of < e, < e,t > >, its denotation is a function from individuals to a function from individuals to truth values. The denotation of loves' is a function from A - taken as objects - into a function from A - taken as subjects - into {0,1}. To say that the values of certain functions are truth values is, of course, to say that those functions are propositions.
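Since clause (c) makes D_< a,b > a function space, its size is |D_b| raised to the power |D_a|; a two-individual toy model makes this concrete.

```python
# (73c) makes D<a,b> the set of functions from D_a to D_b, so
# |D<a,b>| = |D_b| ** |D_a|.  Take a two-individual model:

from itertools import product

D_e = ["j", "m"]               # two individuals
D_t = [0, 1]                   # the two truth values

# enumerate every function from D_e to D_t, i.e. every <e,t> denotation:
D_et = [dict(zip(D_e, values)) for values in product(D_t, repeat=len(D_e))]
assert len(D_et) == len(D_t) ** len(D_e)     # 2 ** 2 == 4

# each such function is the characteristic function of a set of
# individuals - the set the verb is true of:
walks = {"j": 1, "m": 0}
assert walks in D_et
```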

(73) d. D_< s,a >^{A,I,J} = D_a^{I×J}.

Comment: The possible denotations of type < s,a > are functions from all indices - ordered pairs of possible worlds and moments of time - into the possible denotations of a. As discussed earlier, the need for such functions in the semantics stems from opaque contexts, in which we cannot say that given expressions have ordinary extensions as their semantic values. Montague uses the term "individual concept" for members of < s,e >, "proposition" for members of < s,t >, and "property" for members of < s, < a,t > >. Montague provides further justification for his use of individual concepts as denotations for nouns by his treatment of certain "extraordinary common nouns" such as temperature. Such nouns, he claims, denote functions with shifting values. As he, himself, expressed it (1971b/1973; p. 264): "the individual concepts in their extensions would in most natural cases be functions whose values vary with their temporal arguments". The most interesting cases are those in which the noun in question occurs in subject position, as in:

(74) The temperature is rising.

I return to the analysis of sentences like these later (chapter 4). In the meantime, it is apparent that there is much in common between Montague's "extraordinary common nouns" and Cresswell's extraordinary complex proper nouns, such as the prime minister of New Zealand. Of course, the extraordinary behaviour of some nouns in subject position does not alter the fact that, in the majority of cases, to predicate a property of something does imply that thing's existence. Thus, it is a convenient simplification to say that a verb denotes the set of elements of which it is true. Montague capitalises on this convention by allowing any expression γ of type < a,t > to denote the extension of a. Thus, if α ∈ a, then γ(α) asserts that the things denoted by α are, in fact, members of the set denoted by γ. Thus, if γ is an intransitive verb, then α ∈ e and γ denotes a set of individuals. It is easy to adjust this formulation so that, if γ is a member of < a, < b,t > >, then γ denotes a set of ordered pairs, i.e. is a two-place verb.
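The simplification, including the curried two-place case, can be sketched as follows; the tiny model is invented for illustration.

```python
# A one-place verb denotes the set it is true of; a two-place verb, a set
# of ordered (subject, object) pairs, here curried with the object first.

walks_of = {"percy"}
loves_pairs = {("percy", "atlantis")}

walks = lambda x: x in walks_of
loves = lambda y: lambda x: (x, y) in loves_pairs

assert walks("percy")
assert loves("atlantis")("percy")          # Percy loves Atlantis
assert not loves("percy")("atlantis")      # but not conversely
```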

Convenient simplifications aside, we have seen already that the expression < s,a > is completely general. There is no reason why a should not stand for a derived type, including < < s,e >, t >. In such a case, < s,a > would represent the type < s, < < s,e >, t > >. Such properties are the denotations, stricto sensu, of intransitive verbs and are functions from world-time pairs to functions from individual concepts to truth values, i.e. propositions. As observed in section 6.5, properties may, themselves, have properties. Thus, we might say that, in (75a), the property of walking has the property of being universally instantiated by human beings.

(75) a. Every man walks.

In such cases, every man has the complex type < < s, < < s,e >, t > >, t >. This type looks formidable indeed. However, if its s-tokens are ignored, it will easily be seen to represent a type which takes an intransitive verb, < e,t >, to form a sentence, t. The elegance of this way of treating quantifier phrases can readily be appreciated by looking at an example. Let π be a property variable of the type just described and let the notation {x} mean 'x has such-and-such a property'. Then, the logical equivalent of (75a) will be:

(75) b. (λπ ((∀x) (man'(x) → π{x})))(walks').

Here, the lambda expression is a noun phrase and (75b), after lambda conversion, becomes the proposition:

(75) c. (∀x) (man'(x) → walks'(x)).

In great measure, the beauty of (75b) resides in the clarity with which it shows the status of every man as a higher-order property of the verb walks. This is not, however, to claim that it fully represents the meaning of every. In chapter 7, I shall discuss further the status and structure of quantifier phrases. In fact, Montague extends the term "property" to refer to the denotation of any type < s, < a,t > >. In this generalised use, we may, for example, have properties of propositions, i.e. members of < s, < < s,t >, t > >. As noted earlier, a sentential adverb such as necessarily' would be of this type. As a final case, consider an expression γ which is of type < s, < a, < b,t > > >. If we consider the second member of this expression, of type < a, < b,t > >, it will be seen that the denotation is a two-place relation. Montague calls a relation of this kind a "relation-in-intension". Of course, for many transitive verbs, as noted already, their status in the final computation will have to be extensional.
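The lambda conversion from (75b) to (75c) can be checked on a small invented model in which every entity is a man.

```python
# every man as a higher-order property: lambda P . for all x (man'(x) -> P(x)),
# applied to walks' as in (75b), then compared with the converted form (75c).

entities = ["percy", "tom"]
man = lambda x: True                       # in this model, every entity is a man
walks = lambda x: x in ("percy", "tom")

every_man = lambda P: all((not man(x)) or P(x) for x in entities)   # cf. (75b)
converted = all((not man(x)) or walks(x) for x in entities)         # cf. (75c)

assert every_man(walks) == converted == True
```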

72

Background notions from formal logic

(73) e.

By Sa or the set of senses of type a, is understood D .

Comment: Dowty et al. (1982) explain the difference between Montague's "sense" and "intension" as that between the set of possible intensions of a given type and the actual intension of a given expression of that type. The assignment function F, which has all constants as its domain, yields as value for any constant α of type a a member of S_a. Thus, F(leopard') is a common-noun intension selected from the possible common-noun intensions, i.e. one which picks out the set {x: leopard'(x)} in each world-time pair. Evidently, intensions will always be senses, but there may well be senses which, being never selected, are not intensions in a given language. Thus, by setting up the set of senses, Montague frees the logic from any particular language and, indeed, from the confines of human languages in general, be they natural or artificial. Having established the possible denotations for each type, the system is completed by the construction of an "interpretation" or "intensional model" which, as with the models of modal logic, assigns extensions to actual meaningful expressions. Such an intensional model, symbolised M, is a quintuple < A, I, J, ≤, F >. Here, A, I, J are as before; ≤ is a simple ordering on J, i.e. an ordering on moments of time; F is the assignment function referred to above. Obviously, the crucial element of M which distinguishes it from a mere denotation-function is the ordering ≤. As well as nonlogical constants, such as walks' and leopard', the logic employs denumerably many variables and these, as in the predicate calculus, are assigned values by a function, g. Thus, g(u) ∈ D_a whenever u is a variable of type a. For example, if he_n is the nth variable of type e, then the value of g(he_n) is the individual denoted by he_n under the assignment. In the presentation of the semantic rules, it proves convenient to adopt notational conventions which explicitly indicate an intensional vs extensional interpretation-assignment of semantic values.
Thus, by α^{M,g} is understood the intension of the meaningful expression α with respect to M and g. By contrast, by α^{M,i,j,g} is meant the extension of α with respect to M, g and the pair < i,j >. In this latter notation, < i,j >, a "point of reference", represents a member of the product set I × J. To illustrate: let α = the king of France'; then α^{M,g} is the function which picks out an individual at each possible world and moment of time, or returns 0. On the other hand, α^{M,i,j,g} denotes some individual at a particular world-time pair, or returns 0. Montague's semantic rules, which spell out the precise interpretation for any given meaningful expression, are elaborations of and extensions to those


for the predicate calculus discussed impressionistically in section 5. I present them below with comment where this seems necessary.

(76) The semantic rules

(R.a) If α is a constant, then α^{M,g} is F(α).

Comment: As stated above, the possible denotations of nonlogical constants are intensions, not extensions.

(R.b) If α is a variable, then α^{M,i,j,g} is g(α).

Comment: Naturally, the possible denotations of variables are extensions, not intensions. Hence the specification of the pair < i,j > in the rule.

(R.c) If α ∈ ME_a and u is a variable of type b, then (λu(α))^{M,i,j,g} is that function h, with domain D_b, such that, whenever x is in the domain, h(x) is α^{M,i,j,g'}, where g' is the M-assignment like g except for the possible difference that g'(u) is x.

Comment: This is a formal statement of Tarski's (1941) strategy, by which the truth of a formula is established by satisfaction, outlined in section 5. Of course, (R.c) is concerned with lambda-expressions, and such expressions are functions. Thus the equation of the lambda-expression with the function h whose value, for some element x in the domain of the variable u, is fixed by the g'-assignment.

(R.d) If α ∈ ME_< a,b > and β ∈ ME_a, then (α(β))^{M,i,j,g} is α^{M,i,j,g}(β^{M,i,j,g}), that is, the value of the function α^{M,i,j,g} for the argument β^{M,i,j,g}.

Comment: If α is the constant functor walks' and β is the constant Percy', then the value of α for the argument β is a truth value. In general: the interpretation of any functor is a function which gives a value for an appropriate argument.

(R.e) If α, β ∈ ME_a, then (α = β)^{M,i,j,g} is 1 iff α^{M,i,j,g} is β^{M,i,j,g}.

(R.h.1) (L(φ))^{M,i,j,g} is 1 iff φ^{M,i',j',g} is 1 for every < i',j' > ∈ I × J.

Comment: L quantifies over every point of reference, and there is no reason for freeing L from the same condition even though it is not technically necessary.

(R.h.2) (W(φ))^{M,i,j,g} is 1 iff φ^{M,i,j',g} is 1 for some j' such that j ≤ j' and j ≠ j'.

Comment: Take j as the moment of utterance and j' as some moment later than j; then:

(78) It will snow.

is true at the time of utterance, j, iff at j' the sentence:

(79) It is snowing.

is true.

(R.h.3) (H(φ))^{M,i,j,g} is 1 iff φ^{M,i,j',g} is 1 for some j' such that j' ≤ j and j' ≠ j.

Comment: If j is, again, the moment of utterance, then a past-tensed sentence will be true at j iff its present tense form was true at some moment j' earlier than j. Dowty et al. (1981) show how this simple tense system can be expanded very considerably without the necessity of introducing additional symbols, though we might wish to use them for stylistic reasons. Thus, for instance, it will always be can be obtained by flanking W by negation signs, and similarly for it has always been, using H. Naturally, these extensions lead to increasingly complex interpretive rules as the moment of utterance's relation to the time at which the present tense sentence is true becomes more complicated.
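A minimal sketch of the W and H semantics, with moments as integers and a proposition modelled as the set of moments at which it is true; the bounded domain of moments is an artefact of the sketch.

```python
# W(p) is true at j iff p holds at some later j'; H(p) at some earlier j'.

MOMENTS = set(range(10))

def W(p):
    """'It will be the case that p' at moment j."""
    return lambda j: any(j2 in p for j2 in MOMENTS if j2 > j)

def H(p):
    """'It has been the case that p' at moment j."""
    return lambda j: any(j2 in p for j2 in MOMENTS if j2 < j)

snowing = {1, 5}                       # "it is snowing" holds at 1 and 5

assert W(snowing)(3)                   # "it will snow" is true at 3
assert H(snowing)(3)                   # "it snowed" is true at 3
assert not W(snowing)(5)               # nothing later than 5 qualifies

# "it will always snow" by flanking W with negations: not W(not snowing)
always_snow = lambda j: not W(MOMENTS - snowing)(j)
assert not always_snow(3)              # fails: a later snowless moment exists
assert always_snow(9)                  # vacuously true at the last moment
```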

(R.i) If α ∈ ME_a, then (ˆα)^{M,i,j,g} is α^{M,g}.

Comment: Since ˆ is the intensional operator, when prefixed to α the resulting expression denotes an intension which, by definition, is the set of all possible extensions of α at all possible worlds and moments of time. Hence, at any world-time pair < i,j >, ˆα denotes an intension, and so no particular member of I × J is mentioned in the second part of the rule.

(R.j) If α ∈ ME_< s,a >, then (ˇα)^{M,i,j,g} is α^{M,i,j,g}(< i,j >).

Comment: If α denotes an intension, then prefixing α with the extensional operator converts it into an extensional expression. This is indicated in the rule by specifying the function's argument, < i,j >. We may illustrate the mechanics of the two operators, ˆ and ˇ. It will be recalled that (R.c) in (72) lays down a strict condition by which the type specifications of argument and function must match. Thus, a function of type < < s,e >, t > must have an argument of type < s,e >, while < e,t > must take an e as argument. In the semantic analysis of natural-language expressions, this strictness is at times inappropriate. The operators ˆ and ˇ are prefixed to an expression to reverse its intensional/extensional status - as indicated in (R.i) and (R.j). Thus, ˆe → < s,e > and ˇ< s,e > → e. Hence, if α ∈ < < s,e >, t > and β ∈ e, then α(ˆβ) ∈ t. If γ ∈ < e,t > and δ ∈ < s,e >, then γ(ˇ