
JANUA LINGUARUM
STUDIA MEMORIAE NICOLAI VAN WIJK DEDICATA

edenda curat
C. H. VAN SCHOONEVELD
Indiana University

Series Minor, 132

CONCEPTS AND LANGUAGE
An Essay in Generative Semantics and the Philosophy of Language

by
PHILIP L. PETERSON
Syracuse University

1973
MOUTON
THE HAGUE • PARIS

© Copyright 1973 in The Netherlands. Mouton & Co. B.V., Publishers, The Hague. No part of this book may be translated or reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publishers.

LIBRARY OF CONGRESS CATALOG CARD NUMBER 72-94498

Printed in Belgium by NICI, Ghent.

To Anne

PREFACE

This essay concerns certain fundamental issues in both philosophy and linguistics. My main aims are: (i) to expose the nature of the conceptualism involved in two central trends in semantic theory; (ii) to characterize further the appropriate concept of concept (an extension of Frege's notion); and (iii) to defend with certain new arguments a realistic interpretation of the nature of universals as concepts or meanings. Intimately interrelated with these three philosophical aims are some objectives that directly pertain to current semantic theory within a generative-transformational approach to linguistic description. At the end of Chapter 1 and in the Appendix, I offer some details of a generative grammar for English (i.e., details of a novel approach to one) that relate both to (a) some aims of generative-transformational means of semantic representation (generative semantics) and to (b) the possibly independent aim of combining aspects of representations of logical form with representations of deep structure underlying natural language sentences. The approach I have adopted is not totally unprecedented or unparalleled, even though I have the personal satisfaction of having devised much of it in relative isolation.

The claims I make in the essay can be summarized as follows. (1) Concepts themselves are typically structural (structures of further concepts), a notion that receives empirical justification and considerable development in both of two current approaches to semantic description (the so-called interpretive and generative approaches). (2) A consequence of this is that the Fregean conception of concepts as 'predicative' and/or 'unsaturated' can be expanded: they are also 'transformable'. (3) Adoption of a framework motivated by linguistic evidence and explanation (that of current scientific linguistics) renders irrelevant most of the traditional philosophical objections to concepts that are concerned with the existence of universals and the nature of ideas - though some of Quine's objections are not completely eliminated. (4) I recommend, finally, that the phenomenon of cognition (cognitive or mental processes and capacities as studied in scientific and philosophical psychology) be conceived to be similar to, and as complicated as, the phenomenon of possessing and using the conceptual structures underlying competence in a natural language.

Some warnings to the reader are in order. Readers who are primarily linguists or grammarians may think that what I offer in the expository treatment of Chapter 1 and the descriptive and polemical work of Chapters 2 and 3 is supposed to constitute an empirical argument for either (a) a 'logically oriented' approach within generative semantics or (b) generative semantics in general. That is not intended. The ideas I offer (via sample rules and derivations) for combining logical motives and grammatical ones into a unified method of grammatical description are fairly speculative and are only weakly motivated empirically - which is not to say, of course, that they are totally unmotivated empirically. My main claim in this regard is just that such details will be of some utility - it remains to be seen exactly how or to what extent - in the eventual resolution of the generative-interpretive polemics in current semantic theory. Aside from such qualifications, however, these details should help to clarify the relevance of transformational grammar methodology for an important philosophical issue, thereby suggesting its wider relevance for similar or related philosophical problems as well. I hope, therefore, that the outline of grammatical fundamentals and of some of the recent developments in semantic theory (both in Chapter 1) will be instructive to many philosophical readers even if it proves tiresome for many grammarians and semanticists.

I began the study which culminates in this essay in 1965 while pursuing postdoctoral research in linguistic theory and the philosophy of language at the Massachusetts Institute of Technology.

I am indebted to the Humanities Department and the Research Laboratory of Electronics of M.I.T. for their effective accommodation of my needs during the 1965-1966 academic year. In particular, I thank Jerrold J. Katz for his advice and assistance in many matters. Also, I gratefully acknowledge the financial support of that research by Syracuse University, as well as its support of an additional leave in 1970 to prepare this manuscript. Financial support from 1966 to 1971 for research related to this essay was also supplied by the U.S. Air Force through their sponsorship of the Linguistics Project - a component of the Syracuse University-Rome Air Development Center research contracts on Large Scale Information Processing Systems.

Robert D. Carnes, Associate Professor of Philosophy, State University of New York College at Oswego, was involved in most of the Linguistics Project research. His ideas, criticisms, and discoveries have deeply affected all of my own thinking and are so numerous that I cannot list them. Thus, I can only thank him in general and hope that I have not done his contributions too much disservice.

Most of the text of this essay is based upon materials prepared to accompany a lecture entitled "Language, Conceptual Structure, and Mind", presented to the Conference on Language and Higher Mental Processes, State University of New York at Binghamton, March 21, 1970. I am grateful to Professor Theodore Mischel for the invitation to participate and to the Research Foundation of the State University of New York for its support of the conference. Finally, Chapter 1 has been greatly improved, I believe, by the opportunity of receiving many detailed criticisms and comments on an earlier version from Arthur L. Palacas.

Syracuse, New York
1971

CONTENTS

Preface

1. Conceptual Structures in Semantic Theory
   1.1 Grammar
   1.2 Interpretive Semantic Theories
   1.3 Generative Semantic Theories

2. Structures of Concepts
   2.1 Logical Predicates and Concepts
   2.2 Logical Structure as Conceptual Structure
   2.3 Grammatical Structure as Conceptual Structure
   2.4 The Concept of Concept: Summary

3. The Concept of Concept
   3.1 Philosophical Ramifications
   3.2 Psychological Ramifications

Epilogue

Appendix: One Class of Logically Oriented Grammars for Semantic Representation

Bibliography

Index

1. CONCEPTUAL STRUCTURES IN SEMANTIC THEORY

The purpose of this chapter is to explain how two approaches to semantic theory - interpretive semantics and generative semantics - both embody the philosophical view known as conceptualism. I preface the discussion of interpretive semantics (in section 1.2) with a specification of some of the terminology, formalisms, and basic concepts of grammatical description (section 1.1). My introduction (at the end of 1.2) of generative semantics - viz., by way of the proposal that predicate calculus formulae be adopted as semantic representations - is more semantical (or, perhaps, just more formalistic) than is typical. The more commonplace, grammatical motives for generative semantics are outlined in section 1.3. Also, some suggestions are presented there for a novel generative semantics (LSR grammars) based upon logical notation.

1.1 GRAMMAR

The framework I am concerned to examine, discuss, and extend is that of linguistic description of natural languages through the use of generative transformational grammars. Much of the point of this chapter and the next derives from rather generic and abstract characteristics of grammatical description. For this reason, as well as to promote comprehension of the particular technical terms I utilize, I begin with a brief outline of the formal properties of grammars and transformations.

A grammar, G, of a language, L - or G(L) - is constituted of a vocabulary, V, which is itself divided into a non-terminal vocabulary, Vn, and a terminal vocabulary, Vt, plus a special symbol, S, which is a member of Vn, and a set of grammatical productions, P. The productions are context-sensitive and context-free rewriting rules, often called phrase structure (PS) rules. Successive use of PS rules starting with the initial symbol S constitutes a derivation, each line of which consists of strings of vocabulary items from either or both of Vn and Vt. When no more productions can be applied to a string (the last line of a derivation) and all members of the string are members of Vt, then a sentence of the language L is derived (or 'generated'). Formally, L = {x | S ⇒* x and x ∈ Vt*}. Such a grammar is often called a phrase structure grammar, mainly because the productions are constructed with the aim of formalizing a means of assigning PHRASES to sentences of a language (typically, a hierarchy of phrases of phrases). Transformations are NOT PS rules, and their addition to a PS grammar changes it into a transformational grammar (TG). To introduce the idea of a transformation, or transformational grammatical rule, it is helpful to consider the representation of a grammatical derivation of a sentence in a 'tree' diagram. I will sketch the idea (and, thereby, review some basic concepts of PS grammars) with two simple artificial languages - the language PN, the set of well-formed formulae for a Polish notation propositional calculus, and the language AR, an arbitrary artificial language. The productions (PS rules) for both languages are as follows:

(1) PN:  S → O S S
         S → ~ S
         O → &
         O → V
         O → =
         S → p
         S → q
         S → r
         etc., as needed for propositional constants

    AR:  S → a A
         A → a
         A → B B C
         A → B a C
         A → C a
         B → b
         B → S S c
         C → c B
         C → a b
         C → A c


For these examples, I adopt the convention that all vocabulary items not on the left-hand side of any PS rule are members of Vt (and are mostly lower case) and that all other vocabulary items are members of Vn (and are upper case). In addition to the kinds of PS rules (productions) found in (1) for PN and AR, which are context-FREE (CF) rules, there could be context-SENSITIVE (CS) rules. A CS rule still 'rewrites' only one vocabulary symbol as one or more other vocabulary symbols (including the null string), but there is a restriction to the context surrounding the item to be rewritten. Two alternative notations for a typical CS rule are:

    B B c → B S c        or        B → S / B ___ c

both read, for example, "B goes to (is rewritten as) S in the context B ___ c (following B and preceding c)". Some sample derivations and associated tree structures for each language are the following:

(2) Derivations and Trees

PN (first derivation of & p V p q):
    1. S
    2. O S S
    3. O S O S S
    4. & S O S S
    5. & p O S S
    6. & p V S S
    7. & p V p S
    8. & p V p q

PN (second derivation of & p V p q):
    1. S
    2. O S S
    3. & S S
    4. & p S
    5. & p O S S
    6. & p O S q
    7. & p O p q
    8. & p V p q

[The tree diagrams that accompanied these derivations are lost in reproduction. The P-marker they depict for & p V p q is, in labeled bracketing: [S [O &] [S p] [S [O V] [S p] [S q]]].]


AR:
    1. S
    2. a A
    3. a B a C
    4. a B a c B
    5. a B a c b
    6. a S S c a c b
    7. a a A S c a c b
    8. a a B a C S c a c b
    9. a a b a C S c a c b
    10. a a b a c B S c a c b
    11. a a b a c B a A c a c b
    12. a a b a c B a B a C c a c b
    13. a a b a c B a B a c B c a c b
    14. a a b a c B a B a c b c a c b
    15. a a b a c b a B a c b c a c b
    16. a a b a c b a b a c b c a c b
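The rules of (1) and the derivations of (2) can also be checked mechanically. A minimal Python sketch (the list encoding of rules and strings is merely one convenient choice, not anything in the text) verifies the first PN derivation line by line:

    # The PS rules of G(PN) from (1), with three propositional constants.
    PN_RULES = [("S", ["O", "S", "S"]), ("S", ["~", "S"]),
                ("O", ["&"]), ("O", ["V"]), ("O", ["="]),
                ("S", ["p"]), ("S", ["q"]), ("S", ["r"])]

    def one_step(before, after):
        """True if `after` follows from `before` by rewriting one symbol."""
        for i, symbol in enumerate(before):
            for lhs, rhs in PN_RULES:
                if symbol == lhs and before[:i] + rhs + before[i + 1:] == after:
                    return True
        return False

    # The first PN derivation in (2), line by line:
    lines = [list(s) for s in
             ["S", "OSS", "OSOSS", "&SOSS", "&pOSS", "&pVSS", "&pVpS", "&pVpq"]]
    assert all(one_step(a, b) for a, b in zip(lines, lines[1:]))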

The tree structures correspond to the line-by-line derivations for the respective grammars of PN and AR. Notice that I have given two different derivations for the PN sentence & p V p q. (And only four lines out of eight of the two derivations agree.) This does NOT mean that the sentence is ambiguous, since the two derivations are equivalent with respect to phrase structure. That is, they assign the same structure of phrases to the sentence. This is most easily seen in the tree notation. The corresponding tree - often called a phrase-marker (P-marker) - represents the set of equivalent derivations and does not distinguish among them. A particular line-by-line derivation is merely one of a number of ways of tracing all the paths down a tree.

Ambiguity, thus, becomes an important technical notion. A sentence or string is said to be ambiguous if it has two distinct P-markers (not just two distinct derivations, for any two such might represent one and the same structure of phrases). Also, a grammar whose productions permit generation of ambiguous sentences is said to be an ambiguous grammar, and the language itself to be ambiguous. Because none of its sentences are ambiguous in this sense, PN is an unambiguous language with an unambiguous grammar. An ambiguous grammar G(PN') can be simply obtained from G(PN) by merely replacing S → O S S with S → S O S. Then very many sentences of the revised language PN' that can be generated are ambiguous - e.g. p & q V r, which can be assigned one P-marker grouping p & q first and another grouping q V r first.

Transformations, as typically conceived in recent grammatical methodology,1 can be most easily described as functions on tree structures or P-markers (not functions merely on single symbols or strings). A transformation applies to such a structure as a whole (or to a part of it which is itself a structure) to change it into a new structure. (In addition to structures and sub-structures derivable by PS rules of a PS grammar being called P-markers, outputs of a transformation that are NOT generable by the PS rules alone are also called P-markers - e.g., 'derived' P-markers.) Consider a transformation of an AR structure, one which transforms (3) into (4).

1. I sketch here a concept of transformation associated with grammatical research in the last decade. Earlier notions - such as Chomsky (1957), or distinguishing singular from generalized transformations - would not quite fit this discussion. Also, for more information about the terminology adopted for describing PS grammars, and for some mathematical facts about PS languages, see Ginsburg (1966).

(3), (4) [Tree diagrams lost in reproduction: (3) is an AR P-marker whose terminal string is a b a c b, and (4) is the restructured P-marker that the transformation derives from it.]

There are a number of remarks to be made about such a change. First, to repeat, a transformation changes a structure, not just a string. If P-markers (structures) generated solely by PS rules are regarded as line-by-line derivations, then transformations must be rules that apply to more than one line of the derivation - i.e., either the input conditions for the rule or its output affect more than one line. If they did not, then the rule would only be a PS rule (or an equivalent PS rule, or set of them, could be constructed), not a transformation. Second, the structural change of this example - (3) into (4) - could be accounted for by various different kinds of transformations. The first alternative, a limiting case of greatest specificity, is that the transformation in question requires that the first whole structure in (3) be transformed into the second whole structure in (4), symbolically:

(5) [Tree-to-tree diagram lost in reproduction: the whole structure on the left-hand side of the double arrow is mapped into the whole derived structure on the right.]
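The idea of a transformation as a function on whole structures, rather than on strings, can be sketched in Python. In the following fragment the tuple encoding of P-markers and the particular output structure are my own guesses (the original diagram for (5) is lost); the point is the form of the operation:

    # P-markers as (label, child, ...) tuples; terminal symbols as strings.
    # The structure assumed on the left of (5): [S a [A [B b] a [C c [B b]]]].
    INPUT = ("S", "a", ("A", ("B", "b"), "a", ("C", "c", ("B", "b"))))

    def transform_5(tree):
        """A limiting-case transformation of greatest specificity: it applies
        only to a P-marker of exactly this shape, and maps the WHOLE structure
        into a new derived P-marker."""
        if tree != INPUT:
            return tree                       # structural description not met
        _, a1, (_, (_, b1), a2, (_, c, (_, b2))) = tree
        # the five terminals 1 2 3 4 5 are reordered to 4 5 1 2 3:
        return ("S", ("C", c, ("B", b2)), a1, ("A", ("B", b1), a2))

    print(transform_5(INPUT))  # a derived P-marker, terminal string c b a b a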

An alternate notation, called 'labeled bracketing', can also be used - for example:

(6)  [S a [A [B b] a [C c [B b]]]]
         1      2   3    4    5      ⇒      4 5 1 2 3

(That is, (6) represents the transformation of the string of members of Vt numbered 1 through 5 - a string with the structure indicated by the labeled brackets (equivalent to the left-hand side of (5)) - into a different order, indicated by what follows the double arrow. Many investigators appear to prefer this sort of notation, evidently because of features like (i) ease of recognition, (ii) less detailed output, (iii) more freedom with respect to the use of variables, and (iv) simpler display of reordering.)

The transformation, as represented in the notation of (5), says that one must have an entire structure of the form on the left-hand side of the double arrow, and that it is transformed into the form on the right. It does not say, however, that this transformation cannot apply to the same structure if it is embedded in a larger one - say in

(7) [Tree diagram lost in reproduction: a larger S-structure containing the structure of (5) within dotted lines, together with an additional right-most C node outside them.]


where the substructure within the dotted lines could be transformed by the rule, even though the right-most C node could not be moved by the rule. (Also, (5) does not say that the PS rules of G(AR) are changed into, or extended by adding, the PS rules that might be 'read off' the tree on the right side of (5).) But (5) is not the only transformation rule that could have caused the change originally specified. We might have a less detailed, thereby more general, transformation such as

(8) [Tree-to-tree diagram lost in reproduction: like (5), but with the nodes B and C left unexpanded, so that the rule applies no matter what they dominate.]

wherein no matter what the nodes C and B branch into (or 'dominate', as is often said), the restructuring indicated in (3) and (4) can take place. In an analogous fashion, we might also have had a transformation stated solely in terms of members of Vt, say

(9)  a b a c b
     1 2 3 4 5     ⇒     4 5 1 2 3

which in addition to being able to account for the original change, could also be the transformation which accounted for the change of (10) into (11).

(10), (11) [Tree diagrams lost in reproduction: two larger P-markers, each containing the terminal substring a b a c b under different structure; (11) shows the result of applying (9) to (10).]
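Stated programmatically, a transformation given solely in terms of Vt items, like (9), is just a function on terminal strings. A minimal sketch:

    def transform_9(string):
        """The purely terminal-level rule (9): positions 1 2 3 4 5 => 4 5 1 2 3.
        Nothing here determines the derived STRUCTURE of the output - exactly
        the underspecification remarked on in footnote 2 below."""
        x1, x2, x3, x4, x5 = string
        return [x4, x5, x1, x2, x3]

    print(transform_9(["a", "b", "a", "c", "b"]))  # ['c', 'b', 'a', 'b', 'a']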

Neither of the transformations introduced previously - viz., (5) or (8) - could account for this kind of change.2

2. Stating transformations in terms of Vt items is not particularly effective, for it can leave UNspecified (as versus (5)) what the output structure should be - e.g., that 4 and 5 are NOT adjoined under the same Vn node, C, that 1 and 2 are. Yet, purposes may vary, for though formats like (5) seem to me easier to read and more detailed, the greater freedom and lack of specificity of (6) or (9) may be wanted - such as with respect to the use of variables in rule statement.

Thirdly, for a transformational change from a structure like (3) into one like (4), there can also be many kinds of subsidiary conditions which could be attached to statements like (5), (8), and (9) to produce many more species of transformations. We might require, for example, that (8) not apply unless something else (either 'above' the covering S, or 'below' one of the unexpanded nodes) be DISTINCT from a specified vocabulary item. This could be done by inserting a variable ranging over V, say X, and specifying that X be distinct from what we have in mind - for example:


(12) [Tree-to-tree diagram lost in reproduction: the transformation of (8) augmented with a variable X ranging over V, subject to the condition X ≠ ab.]

Or, we might require that a transformation of this type only apply to the 'highest' S, such as by marking that node by surrounding it with boundary markers #, say:

(13) [Tree-to-tree diagram lost in reproduction: like (8), but with the topmost S flanked by the boundary markers #.]

To sum up, then: a transformational grammar requires a specified PS grammar which provides P-markers (representable in trees or line-by-line derivations) for its transformations to operate upon. Specified transformations restructure such P-markers to produce new, so-called derived P-markers. Further, transformations may continue to apply to derived P-markers until a so-called 'final' derived P-marker is obtained. Transformations can be schematically thought of as operations which reorder and delete vocabulary items from a string of terminal vocabulary items; i.e., it is useful to think of them this way if it is kept in mind that the full conception of transformation is one of transmuting structures into different structures, not just strings into strings (which in many cases would permit reformulation as sets of CS PS rules).3 Reordering and deletion are also useful to remember as symptoms of transformations because of their obvious correlates in the main application of this methodology of TGs, namely, to natural languages. Natural languages characteristically manifest reordering - e.g., the sentence You did shut the door reordered to obtain the question Did you shut the door? - and deletion (ellipsis) - e.g., the deleted subject and verb in Shut the door, or in I will have to.

The language generated by a TG is, therefore, the set of sentences constituted of terminal vocabulary items (and with the structure indicated) in their final, derived P-markers, schematically:

(14) [Schematic diagram lost in reproduction: an initial P-marker I (a tree rooted in S over the terminal string x1...xn) is transformed into a derived P-marker D (over y1...yk), and, after possibly many further transformations, into a final derived P-marker FD (over z1...zk).]

3. In addition to reordering and deletion, there are two other matters, substitution and addition. Addition (via transformations) is certainly possible though not usually utilized, for most new items that could be introduced by a transformation can be included in the initial P-marker and carried along until needed or else deleted. Note that, formally speaking, deletion might come into TGs in at least three ways - in CF rules, in CS rules, and in transformations - though it is usually effected in transformations. Further, CF rules are more or less preferred over CS rules in using TGs for natural language description. CS rules permit expression of some co-occurrence and selection restriction relationships, but most (or all) of these for natural languages ought, for empirical reasons (it is often held), to be handled by transformations. Thus, it is felt that much use (even any, perhaps) of CS rules probably confuses issues in grammar construction. (Note further that a CF grammar can be constructed that is equivalent [more or less, as a TG component] to some PS grammar with CS rules by simply dropping the 'contexts' in the CS rule statements. Then the 'language' generable from the CF rules is simply larger than that of the CS grammar it was based upon; i.e., some additional structures result which might have been blocked by use of CS rules in place of certain CF rules. But if the phrase structures so generated are only used as inputs for a transformational component, then the superfluous ones will never be inputs. Any transformations and restrictions that CS rules could have effected can be covered by appropriate transformations.)

where the initial P-marker, I, is the structure for a string of terminal vocabulary items x1...xn, where the derived P-marker, D, is the first of possibly many transformed structures, where the final derived P-marker, FD, is the derived structure of the string z1...zk, and where z1...zk (as well as y1...yk) is a string constituted of elements of the initial string x1...xn in some reordered, deleted, expanded, or substituted form. Thus, a string x1...xn is not a member of a language with a transformational grammar unless it is also a string with a final, derived P-marker of the grammar (which can also occur vacuously, if it has an initial P-marker and no transformations apply to it and there is no restriction against initial P-markers being identical to final derived ones). In a larger sense, the structure of a sentence generated by a TG is more than just its final, derived P-marker. It is the whole transformational history implicit in that P-marker.

Two final points about TGs. Often it is necessary to require that certain transformations apply in a prespecified ORDER, rather than freely. Related to this is the even more complicated specification of CYCLES of transformations, such that a specific transformation in the cycle can only apply in its proper order in the series that constitutes the cycle. Thus, we might have a TG with ordered transformations - say, T1, T2, ..., Tk - or even one with sets of transformations ordered in cycles, so that in one cycle Ti precedes Tj and in another it follows it.4

4. For applications of cycles in phonology, see Chomsky and Halle (1968), for example. About cycles in syntax, see Ross (1967) or Jacobs and Rosenbaum (1968) 235-249, for example.

I have so far sketched the methodology of phrase structure and transformational grammars in a fairly abstract and abbreviated way, only vaguely mentioning natural languages and primarily mentioning two artificial languages - that of one notation for the propositional calculus, and a purely arbitrary selection of symbols and strings stipulated to be a language. The main point of the methodology is its application to natural language. The methodology is designed with that in mind. It is, however, useful to linger upon such non-natural languages in order to get the formal


dynamics of grammatical description well in mind. Consider PN. It is simply the set of all well-formed formulae for one notation of the propositional calculus. It is an infinite set, for the wffs of the propositional calculus are infinite. Further, the reason they are infinite is not that there could be an infinite number of distinct propositional constants p, q, r, ... (which, if there were, would require an infinite number of PS rules to introduce them - viz., S → p, S → q, S → r, ...). In fact, if an infinite number of vocabulary items were the sole reason for the infiniteness of a language, then as a language it would be grammatically uninteresting. That is, it would be uninteresting in a way that a natural language is interesting. For with natural languages, it is not the infiniteness of vocabulary, but of structures, that is the source of the size of the language. It is this fact which is at the bottom of the explanation of an 'interesting' language being one whose grammar is a finite mechanism (thus no infinite vocabulary) that generates an infinitely large language.

Use of PN will help illustrate this point. Imagine that instead of English we all spoke a less rich and more awkward language that could be adequately described by the grammar of PN. That is, we can only assert logically simple and compound sentences. Then assume that we already know (or must somehow learn) a large but finite list of simple sentences without breaking them down into smaller segments (words, phrases, and such). Also, presume that with the help of frequent use of negation, and (perhaps) intonation differences for indicating questions, imperatives, and such other sentence uses as there are besides assertion, we can succeed in linguistically communicating with each other in some way which approximates the use of English. (A very large presumption!) Now the point about an infinite language with a finite grammar is simply that there would be among PN speakers an infinite number of different assertions to make. The only reason one could not make all of them would be NON-linguistic - viz., physical, physiological, and like reasons why no one will be around long enough to make them. However, even though a PN speaker in principle has this infinite ability - can make an infinite number of different and grammatical assertions (barring non-linguistic complications such as time) - the LINGUISTIC explanation of how he could do it is rather simple. He need be no superman, infinite store, or perfect intellect. All we must recognize is that he need only be constructed in some way analogous to the construction of a machine which could say all those sentences.

How could we construct such a machine? Well, I would take the grammar for the language as my guide. The grammar is finite. Though its list of simple vocabulary items may be large (propositional constants and variables), still its structural part - that part that accounts for the infinite number of different structures and the recursiveness of them - is quite small. Principally, it is the first five rules of PN in (1). I would simply build a device that can output the sentences of PN in accordance with this grammar - for example, by random selection among alternative rules after starting the machine by giving it an S 'free' (and instructing it to give itself another S any time it has finished writing down a sentence). (Or, more complicatedly, constructing it to recognize sentences of PN - i.e., sort Vt strings of G(PN) into sentences and non-sentences of PN.) When I have constructed such a machine, I will say it has got - or possesses - the grammar of PN. It has not got an infinite number of items, parts, or rules within it. (That would be uninteresting, for if we allowed an infinite number of actual rules in our grammar, then we could just simplify our task of devising a grammar that can enumerate all the grammatical sentences by substituting for X in the rule scheme S → X every one of the infinite number of sentences there are.) It has simply got an infinite CAPACITY.
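A minimal sketch of such a device in Python (the uniform random choice among alternative rules, and the stock of just three constants, are expository assumptions):

    import random

    # The structural rules of G(PN), plus rules for three constants, as in (1).
    RULES = {"S": [["O", "S", "S"], ["~", "S"], ["p"], ["q"], ["r"]],
             "O": [["&"], ["V"], ["="]]}

    def emit(symbol="S"):
        """Rewrite `symbol` by a randomly chosen production until only
        terminal vocabulary remains."""
        chosen = random.choice(RULES[symbol])
        return "".join(emit(s) if s in RULES else s for s in chosen)

    print(emit())  # e.g. '&pVpq', '~r', 'p', ...

Nothing in the device is infinite; only the set of strings it can emit is.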


This kind of 'capacity' relates (albeit in a very general way) to the concept of linguistic 'competence' in linguistic theory. It is, hopefully, commonplace that the aim of providing a generative grammar for a natural language is to adequately describe what it is to KNOW the language in the sense of being competent in it - 'possessing' or 'having' the language oneself. (I myself 'have', 'know', or 'am competent in' English - specifically, some sort of dialect of contemporary American English, though I can understand and possibly speak some other English dialects. I don't 'have' or 'know' any others in this competence sense, though I do have some well-informed inklings of what French, Spanish, German, and Russian are like.) The very core of this competence in a natural language appears to be the possession of its grammar. At the very least, this is an 'internalization' of it in the person (animal, or machine) in a way somewhat analogous to my 'internalizing' the grammar of PN in a machine I build that can both output and recognize sentences of PN. (Of course, what the machine does in detail in outputting or recognizing a sentence is likely to be rather distant from what I do in detail with sentences of English. It is only the rough analogy that is the point here.) Even if my machine can do little else - such as answer or ask questions - it still has got some minimal competence or capacity in PN. It's got the grammar - 'internalized' if you will. For natural languages, of course, we are going to insist that more is involved in linguistic competence than just 'internalizing' a grammar the way my machine might. But being able to produce and recognize random grammatical sentences of the natural language will, at the very least, be required as the basic core.

How much more should be required before we say a person (animal, machine) 'knows' or 'has' a language, say, English? This is where the other side of the coin of competence arises; namely, 'performance'. Linguistic competence is primarily understood as one component of the COMPETENCE-PERFORMANCE DISTINCTION. A person 'knows' or 'has' a language (competence). He USES this knowledge or competence to engage in actual linguistic communication - performance in speaking and listening and, derivatively, in reading and writing (and, perhaps, even in thinking itself). Not everything a person actually says is going to be a grammatical utterance, or part of one. One may simply change one's mind or drop dead in mid-sentence, or suffer a linguistic symptom of some long dormant brain disorder. And what I talk about seems irrelevant to my competence in the language I use to talk about it. Clearly some aspects of language manifestations are linguistic only in that they relate to actual linguistic performance, whereas others relate more clearly to that ABILITY (capacity, competence) in the language which I demonstrate I have by instituting it in actual uses of the ability in communicating linguistically.

Borderline problems, of course, exist. How many features of linguistic performance bear upon extracting a description of linguistic competence (such as a grammar of the language in question)? What happens to be true in our world, you might well say, does not bear upon linguistic competence. If snow were not white, nor Syracuse cold in the winter, that would have no effect on English, the language I use every day to describe the snow and this winter. Truth is reformulated as a matter of the referential relation between speakers (or, perhaps, the language) and the way the world is. Change the world and you don't necessarily change the language used to describe it. You only reassign truth-values. Okay. But what of proper names? Are they in the language? George Washington and Washington, D.C. are certainly proper nouns of English. But must one know them, in the internalized way one knows or has the grammar of English? Apparently not. At least not any particular proper names, though one must certainly know the syntactic contexts in which proper nouns - rather than common nouns - can grammatically occur in English (e.g., that a common, not proper, noun can occur in I saw the oldest ... in town yesterday and a proper noun, but not a common noun alone, can occur in I gave ... a large package about noon). So, referential matters of true description and naming with proper names are not matters of competence. But couldn't this reasoning be extended to other words as well - particularly to the substantive words (full morphemes) of the language? Red means what it does (a particular generic color) and loud what it does (a quality of sound) as a contingent matter of fact, don't they? Couldn't we very well have loud mean what red does, and vice versa, without crucially or essentially changing what English is? We would still have an adjective with all the same syntactic and semantic properties for the color in question, and another adjective with the same properties for the sound quality in question.

The reasoning does not really hold up, I think. A word that has


semantic content is (sometimes) deemed a full morpheme linguistically. Empty morphemes do not themselves have such content, but play the role of signaling syntactic (and/or structural) properties. Any meaningful segment of language - sentence, phrase, or (meaningful) word - is taken to be, for purposes of linguistic description, a sound-meaning correspondence (about which more below in the next section). Thus, red and loud are the words of English that they are because of sound-meaning correspondences that are established in English. Linguistically, a word is not a separate entity that may or may not have a certain meaning associated with it or may or may not have a certain pronunciation (rather than some other). A word simply is identical to a sound-meaning correspondence. Change the sound enough or change the meaning enough (viz., not much) and you have a different word, not the same one with a new meaning or new sound. Now a language can very well change - so that, say, loud comes to mean what red does now and vice versa - but then we would have a different language in just the sense that the previous sound-meaning correspondences would not all hold. Perhaps that change alone would be so slight as to be inconsequential. Perhaps. But the point is that it is the kind of change that, in principle, is a real change. It parallels syntactic change, such as when one structure is ungrammatical and leads to anomalous utterances at one time and at a later time is no longer ungrammatical and can contribute to expressing a definite meaning. The exchange of loud and red is not the inconsequential-to-the-language change that the exchange of the names Marc Anthony and Julius Caesar would be (or better, just Marc and Julius). The Latin language could be the same, and even Roman history the same, if it were Marc Anthony who was ruler and Julius Caesar who lamented Anthony's death, rather than vice versa. Those people's NAMES would be different, but the language and history would be the same. Not so, if red meant what loud does, and vice versa, for English. English would be different from what it is in fact.5

5. Apologies to N. L. Wilson (1959) and his 'principle of charity'. It will become clear, hopefully, that the conception of language I am attempting to elucidate makes more use of the competence-performance distinction than of the (logical) syntax-semantics-pragmatics distinctions. The rough, preanalytic idea (which certainly deserves much more clarifying analysis and evaluation than I shall give it herein) is that a natural language (at any one moment of time) is that abstract object divorced from contingencies of its actual use - it is that which is put to use in concrete linguistic communications. Grammatically and semantically describing it does not, therefore, cover all of the phenomena encompassed by the (logical) syntax, semantics, and pragmatics of a language (or, of those three sorts of characteristics of formal languages applied to natural languages). Specifically, essential components of the logical semantics are to be omitted - viz., what predicates are TRUE of (the extensions of general terms) and what names (individual constants) NAME. Yet, not quite all of the logical semantics is omitted. For the intensions remain, which I consider to be concepts; but see Chapter 2 below.

1.2 INTERPRETIVE SEMANTIC THEORIES

I have sketched the formal nature of a pervasive method of contemporary grammatical description - namely, that of transformational grammar - and some aspects of the use of the methodology (concerning infiniteness, internalization, and competence and performance). The principal model of linguistic description for a number of years (e.g., the 1960's) is Chomsky's conception of a two-level syntax which is incorporated in a three-component model of linguistic competence in semantics, syntax, and phonology. The familiar diagram is:

(15)  S
      |
      deep structure → semantic representation
      |
      surface structure → phonological representation

The two levels of syntax are called 'deep' and 'surface' structure


(alternatively, 'base' or 'underlying' and 'superficial' or 'derived'). The left column of the diagram - movement from S to deep structures and then to surface structures - constitutes the SYNTAX of the language being described. The two horizontal rows represent the other two components of linguistic description; i.e., deep structures containing lexical items are inputs to the SEMANTIC component and surface structures containing representations of underlying lexical items are inputs to the PHONOLOGICAL component. Both deep and surface structures are structures of sequences of segments - 'words' or morphemes - each of which has syntactic, semantic, and phonological features (properties).

The model portrayed in (15) can illustrate an early view of the 1960's (Katz and Postal) that transformations-don't-change-meaning, as well as the view consolidated by Chomsky in his Aspects of the Theory of Syntax.6 What I mean by the transformations-don't-change-meaning view is simply the assumption that the grammar which generates deep structures of a language is a PS grammar with no transformations added to it. For the sentence John persuaded the doctor to examine Mary the following deep structure (approximate and rough, for purposes of illustration only) might be proposed:

(16) [The original tree diagram, given here in labeled bracketing:]
     [S [NP [N John]] [VP [V persuaded] [NP [Det the] [N doctor]] [S [NP [Det the] [N doctor]] [VP [V examine] [NP [N Mary]]]]]]

6. This model does not represent very well the earliest view (e.g., in Chomsky's Syntactic Structures (1957) or "On the Notion 'Rule of Grammar'" (1961)) wherein recursiveness of an S-structure (embedding of Ss as parts in a complex S) was handled by transformations (viz., by the 'generalized' transformations, rather than the 'singular' ones which operated on only single Ss). In the views depictable by (15), embedded S-structures are all introduced in the base structures via optional Ss in PS rules of the base grammar - e.g., NP → (Det) N (S). That is, the phrase structure rules are the source of recursiveness, not the transformations themselves. Katz and Postal (1964) begin the shift away from the strict distinction of singulary and generalized transformations (though they don't reject the distinction). In a general way they are compatible with the model of (15).


This structure is, on the model, the one that is pertinent for semantic interpretation. One view was that such a structure is generated by the PS part of the grammar. Transformations then apply to it to transmute it into a syntactic structure that is more superficially transparent in the spoken form of the sentence, possibly:

(17) [The original tree diagram, given here in labeled bracketing:]
     [S [NP [N John]] [VP [V persuaded] [NP [Det the] [N doctor]] [VP' to [V examine] [NP [N Mary]]]]]

That is, transformations which delete one of the structures underlying the second the doctor phrase of the deep structure, and which insert to (or, better, reorder an existing element, added to (16), underlying to), change the base structure from one suitable for eventually obtaining a semantic representation of the sentence into one suitable for obtaining a phonological description of it.
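A minimal sketch of that deletion-plus-insertion in Python, using the rough bracketed structures of (16) and (17) (the identity test and the helper name are mine, and stand in for the actual transformations):

    # Rough encoding of the deep structure (16), following the bracketing above.
    DEEP = ("S",
            ("NP", ("N", "John")),
            ("VP", ("V", "persuaded"),
                   ("NP", ("Det", "the"), ("N", "doctor")),
                   ("S",
                    ("NP", ("Det", "the"), ("N", "doctor")),
                    ("VP", ("V", "examine"), ("NP", ("N", "Mary"))))))

    def to_surface(s):
        """Delete the embedded subject NP when it duplicates the matrix
        object NP, and insert 'to' before the embedded verb phrase - a crude
        stand-in for the transformations discussed in the text."""
        label, subj, vp = s
        v, obj, embedded = vp[1], vp[2], vp[3]
        emb_subj, emb_vp = embedded[1], embedded[2]
        if emb_subj == obj:                    # identity condition on the NPs
            new_vp = ("VP'", "to") + tuple(emb_vp[1:])
            return (label, subj, ("VP", v, obj, new_vp))
        return s

    SURFACE = to_surface(DEEP)  # approximates the bracketing given in (17)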


(17) is (or is proposed to be) the syntactic structure of the spoken utterance, though it does not adequately represent all aspects of the spoken utterance - only its (derived) syntax. (These other phonological aspects concern a specification of what each sound involved in the sound sequence is, how the sounds interact with others in this sentence to modify their pronunciation, what the stress pattern is, etc. Also, it should be emphasized that (16) and (17) are oversimplified examples for illustration only. Many more complications come into the actual description of such a sentence on this model.)

The consolidation in Chomsky's Aspects involves, among other things, a movement away from merely recognizing two types of rules (PS rules and transformations) towards distinguishing two levels of syntactic description. The earlier, seemingly simpler approaches conceived of a PS grammar generating deep structures (with a simple lexicon available on the basis of which lexical items could be inserted in deep structures via CF or CS rules). The consolidated approach is a generalization of this wherein a transformational sub-grammar generates deep structures, and transformations replace any CS rules (in particular, lexical insertion becomes a much more complicated substitution transformation utilizing a more complicated lexicon). The 'base' grammar has, in turn, two components - a categorial component (formulated as a CF grammar) and a lexicalization component (comprised of a lexicon and lexicalizing transformations). There are now clearly transformations in the base component, even though deep structures remain the structures containing lexical items (prior to 'purely' syntactic transformations) which are inputs to a semantic component.7

7. In principle, much about the transformations-don't-change-meaning approach clearly remains. Most of the transformations previously studied by grammarians still apply after lexical insertion; i.e., they apply to deep structures in the process of transmuting them into surface structures. It is only that subcategorization, selection restrictions, use of feature notation, the nature of the lexical insertion rules, and other such fine details (about which see Chomsky, 1965, Chapter 2) make it clear that no simple phrase structure grammar will generate deep structures. Moreover, the same general method of representing the sound-meaning correspondence that a sentence is remains; viz., an initial structure is generable in the grammar which provides crucial material for a full characterization of the MEANING of the sentence, and a final structure is eventually generable (after all syntactic transformations have applied) which provides the material for a characterization of its SOUND. Deep structure is not identical to a representation of the sentence's meaning. It merely provides a representation of the non-semantical form or structure (which is also non-phonological) - i.e., the syntactic structure - that is directly relevant to representing meaning. One could say of (16) that a group of lexical items with this (deep) syntactic form have some meaning because of (i) what each item means itself and (ii) what the syntax - the form and structure - that these items occur in is. The words John, doctor, Mary, examines, persuades, and the don't - in that order, or in just any random format - mean what the sentence John persuades the doctor to examine Mary means. Some form to the words - to the lexical items underlying such words - must be provided. The form or syntax that such a sentence already has, which is also the syntax particularly germane for fully explicating its meaning, is that called deep structure. A similar line of reasoning holds about surface structure with respect to, not meaning, but sound. With a full description of both the sound and the meaning, plus a connection by transformations of the syntax between deep and surface structures of the sentence, we then have the complete description of what the sentence John persuades the doctor to examine Mary is - i.e., what sound-meaning correspondence it is.


This kind of sketch is the briefest background possible for introducing what are called INTERPRETIVE theories of meaning. The interpretive approach to semantic theory is formally described, and developed somewhat, by Jerrold Katz.8 It is the componential or compositional approach to meaning. Larger segments of language - phrases and sentences - obtain the meaning they have from the combining of the meanings of their parts. The manner of combination of meanings required by any meaningful constituent of a sentence is dictated (i.e., reflected or manifested) by the syntax it has - namely, its deep structure. For a phrase like the very old man, the whole of it has the meaning it has in English because of (i) what the words mean separately and (ii) how they 'add up' or combine for a phrase with that syntax. Each minimally meaningful constituent or morpheme either has a meaning - so-called full morphemes - or does not - so-called empty morphemes.

8. Katz' theory was originally stated by Katz and Fodor (1963). It was further developed by Katz and Postal (1964). It has seen still further development by Katz alone (Katz, 1966) and he is continuing his investigations. However, we should not credit or blame (as you will) Katz for the general approach. For the interpretive approach is quite natural and runs throughout many other investigations in linguistics, logic, and philosophy.


The full morphemes for the very old man are old, very, and man; the is (more or less) empty.9 Because old is an adjective (no matter where it occurs) and is a modifier (in this phrase), its meaning must be combined with what it modifies - viz., the meaning of man. Katz' PROJECTION RULES are the formal devices that represent such a compositional feature. A projection rule for modification describes how the meaning for old and the meaning for man combine, in the modifier-to-head sort of composition, to give a meaning for old man as a whole. As many species of projection rules will be needed by the theory as there are different grammatical relations among constituents which bear upon compositions of complex meanings out of their simpler parts - e.g., subject-predicate, modifier-head, adverb-adjective, object-verb, etc.10 For the very old man, however, the meaning of very must FIRST be combined with that of old, since very old as a whole is the modifier of man, not just old by itself. For the very old man, a representation of the meaning of it as a whole could not be obtained if the modifier-head projection rule were applied first in the semantic component. For if it were, then the meaning of very would be left dangling, uncombinable with the derived meaning of old man; i.e., restrictions on the projection rules would have to reflect that. Thus, a derived representation (via use of another appropriate projection rule) of the meaning of very old must combine, via a modifier-head projection rule, with the meaning of man to give a



derived reading of very old man. That derived reading, in turn, would not necessarily combine with a meaning of the to give a final reading for the whole phrase. It would be advisable to regard very old man as constant in different kinds of contexts (such as every very old man and a very old man), with the differences being explained in some other way than by the componential aspects of the semantic theory.

9. Alternatively, one might choose to call only full morphemes genuine morphemes, and the empty ones not morphemes but 'particles' or 'syntactic words', etc. The point is that the has no DICTIONARY meaning in the sense that the other terms do. Rather, it signals structure - semantic and/or referential structure. Thus, very old man has a different structural role in I saw a very old man and in I saw the very old man, generally paralleling the logical distinction between existential generalization and singular reference.

10. About grammatical relations, see Chomsky (1965) 68-74. For a slightly longer, though still very brief, rendition of this componential description, see Peterson (1968a) (e.g., 471-472), wherein I first mentioned the main topic of this essay. Better yet, of course, just read 151-175 of Katz (1966).

One ought to wonder what the underived meanings - the ones subject to combination via such projection rules - are. I distinguish combining the meaning of very old with the meaning of man from combining the phrase very old with the word man. What are these things, the meanings? Well, of course, they are one-half of what the segments in question are (or are taken to be) - viz., sound-meaning correspondences. It is fruitful first to ask how to REPRESENT (display, formalize) these meanings. Then the question of what item is thereby represented is the question of what meanings are, in a slightly more tractable form. The meanings of underived terms of a linguistic segment (i.e., of very, old, and man, though not of very old itself, which is what the compositional operations explain) are represented in DICTIONARY ENTRIES for the morphemes in question. Further, the collection of entries for the morphemes of a language just is the dictionary for the language - a technical, theoretical, even 'ideal' kind of dictionary that has to be posited as a crucial part of any semantic theory (interpretive in this case). (Thus, appeals or reference to ordinary dictionaries - such as Webster's or the OED - may be of little help in understanding what this ideal dictionary is; they may be of considerable help, of course, in trying to construct it.) The dictionary entry for man, for example, is the association between (i) an English sound that I am representing by the ordinary alphabetic representation man, and (ii) a syntactic characterization of the item plus a semantic characterization of it. Thus, appropriate syntactic markers or syntactic features represent the kind of syntactic item man is - e.g., +Noun, +Common, +Concrete, +Human - and a list of LEXICAL READINGS represents the possible distinct meanings of the word - n readings for the n ways the word is ambiguous. A lexical reading, in turn, consists of a set of SEMANTIC MARKERS and SELECTION RESTRICTIONS.


Thus, considering one meaning of the term man, the lexical reading consists of a set of SEMANTIC MARKERS that constitute its semantic 'parts', so to speak, along with selection restrictions which further contribute to the representation of semantic co-occurrence relations (a topic I will not further discuss, except to say that they concern rules violated by such anomalies as honest baby11). What are semantic markers? They seem to be the basic units in the representation of the meaning of a term (on a reading, i.e., ONE of its meanings). In Katz' notation for semantic markers, the ones for a reading of man might be proposed to be: (Male), (Human), (Adult), etc. Semantic markers constitute the core of the semantic representation, then, of individual morphemes, morphemes that combine via projection rules with others in a grammatical structure to compose the meaning of a grammatical constituent. But what ARE they? They seem to be words themselves, even if individually parenthesized. The intention of Katz is clear. A particular semantic marker - say, (Male) - is NOT a word, nor a representation of one. Rather it is, on Katz' version of the interpretive approach, part of the meaning of one - a unit of meaning. To quote Katz:

    Semantic markers represent the conceptual elements into which a reading decomposes a sense.12

    ... the meanings of words are not indivisible entities, but, rather, are composed of concepts in certain relations to one another.13

The meaning of a word, then, is represented by the semantic markers associated with the morpheme underlying the word in a lexical reading; i.e., in a narrow, technical sense that is what the meaning is, since all the details about syntactic features, selection restrictions, projection rules, and the rest of the theory in which semantic markers play a role must come into the complete account. Further, semantic markers are, as indicated by Katz, representations of CONCEPTS. Concepts, and (apparently) sub-concepts that are elements of concepts, constitute the MEANING of a word.

11. From Katz (1966) 161.
12. Katz (1966) 155.
13. Katz (1966) 154.
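A small Python sketch can suggest the intended shape of such dictionary entries and of a modifier-head projection rule (the readings for old and very, and the use of set union in place of Katz' amalgamation of readings, are simplifying assumptions of mine; selection restrictions are omitted):

    # Hypothetical dictionary entries, loosely after Katz' notation.
    DICTIONARY = {
        "man":  {"syntax": {"+Noun", "+Common", "+Concrete", "+Human"},
                 "readings": [{"(Male)", "(Human)", "(Adult)"}]},
        "old":  {"syntax": {"+Adjective"}, "readings": [{"(Aged)"}]},
        "very": {"syntax": {"+Intensifier"}, "readings": [{"(Intense)"}]},
    }

    def modifier_head(modifier, head):
        """A crude projection rule: the derived reading is here simply the
        union of the two sets of semantic markers."""
        return modifier | head

    # 'very' must combine with 'old' FIRST, since 'very old' as a whole
    # modifies 'man':
    very_old = modifier_head(DICTIONARY["very"]["readings"][0],
                             DICTIONARY["old"]["readings"][0])
    very_old_man = modifier_head(very_old, DICTIONARY["man"]["readings"][0])
    print(sorted(very_old_man))
    # ['(Adult)', '(Aged)', '(Human)', '(Intense)', '(Male)']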


Semantic markers represent these entities - concepts - in the theory of meaning developed by Katz. There is an air of circularity. The componential part of the theory (the projection rules) applies to MEANINGS of words. The meanings are said to be concepts, but the only representations of these concepts (meanings) forthcoming are technical notations which contain English words (even if parenthesized). Do we have, thereby, meanings themselves? Don't we have merely a charade in which one group (and style) of words is posing as the meaning of some others? Isn't this just the natural circularity of all ordinary, non-ideal dictionaries? Thereby uninteresting as a theory of meaning?

I will answer for this kind of interpretive semantic theory, though not for Katz, who is quite able to take care of himself. We certainly do not, I contend, have a circular and uninteresting theory.14 The over-emphasis on the question of what semantic markers in fact are supposed to be, or to represent, can entirely obscure what the (Katzian sort of) interpretive semantic theory sets out to do. It is a theory of compositional or componential meaning. It provides a framework and technical terminology for describing how particular strings of morphemes underlying a grammatical segment of language (phrases, sentences) COMBINE their SEPARATE meanings to give a meaning to the WHOLE string. It is not, nor was it ever intended to be, a philosophical or scientific theory that reduces meanings of individual morphemes (or strings or structures of them) to something else - such as behavior, objects or classes of objects that expressions with those meanings might be true of, dispositions to act or talk, etc. So there should be absolutely no surprise that the meanings of longer-than-minimal segments of well-formed English are explained in the theory in terms of meanings of shorter ones. Complex, derived meanings are, then, supposed to be explained in terms of simpler units, ending eventually in semantic markers.15

14. At best, the circularity is a kind of peculiarity not without analogues in mathematics, science, and philosophy. Cf. Peterson (1969a).

15. For an excellent recent illustration of the fruitfulness, and apparent necessity, of positing such semantic units - whether called 'semantic markers', 'semantic features', or whatever - see Chapin (1970).


Technically, for purposes of scientific linguistics and semantics, we may well leave the terminology alone at that point. For philosophical purposes, as well as in attempting to understand some motivations for the interpretive theory, we may go further and say, as Katz does, that semantic markers are simply CONCEPTS themselves. I recommend going this further step in order (i) to discuss the philosophical issues bearing upon taking meaning to consist of concepts and conceptual structures, as well as (ii) to introduce a conception of conceptual structure of, hopefully, some utility with regard to describing cognition and cognitive processes in general.

Semantic markers DO NOT REFER to concepts. They REPRESENT them in descriptively theoretical propositions of semantics (which may or may not be fully expressible in English). Some WORDS or phrases may very well refer to concepts - or PERSONS using them might. But words are IN languages, and are put to use by language users. Semantic markers are in a THEORY - specifically, notations for them occur in theoretical descriptions of semantic features of grammatical segments of a language (e.g., English). The semantic marker (Male) represents the concept male. '(Male)' is a functional element utilized in the complete description of the grammatical and semantical features of some natural language expression that might include the words male or man. Use of the WORD male (or man) might well 'refer' (as some say) to the concept MALE (or, to follow Strawson's terminology, 'introduce' that concept into discourse), though that deserves further discussion. In any case, on the interpretive approach (such as developed by Katz) we find at bottom that concepts are what meanings are.

I am endeavoring to explain how concepts and conceptual structures enter into current semantic theories, as well as to defend and expand on their use. It is clear that they do enter on Katz' account. I conclude (granted, from this one instance alone) that they would enter into any species of interpretive semantics connected to deep structures within the transformational framework of linguistic description.


Next I will discuss some further developments in semantic theory which, though departing from the interpretive approach (like Katz'), require concepts also as basic theoretical entities. Before that, I will mention some suggestions from Charles Caton, suggestions which will permit an easy introduction to a method of semantic representation I happen to prefer.

In his paper "Katzian Semantics Renovated", Professor Caton gives some scrutinizing attention to Katz' projection rules. His aim is to test their adequacy in producing desired semantic representations and, where they are found wanting, to correct or modify them. The first important point he considers is the nature of a collection of semantic markers (either in the lexical reading of a dictionary entry or in the derived collections associated with a phrase after projection rules have applied). Caton proposes that such a collection be itself thought of as a semantic marker, viz., a COMPLEX semantic marker constituted of semantic markers as parts. After characterizing Katz' projection rules as LOCAL in the sense that all of his (Katz') examples concern deriving readings for a larger constituent (or node) from readings of its sub-constituents - e.g., a reading for N1 from readings for N2 and N3 in (18)

(18) [tree omitted: N1 immediately dominating N2 and N3]

Caton proposes that there is a need for stronger than local projection rules (e.g., sub-local and supra-local). He argues this by citing the absence of a capacity to portray semantic distinctions in certain phrases. For example, if the reading for the phrase a man is to be based upon a reading for each of its components (as it appears to be in surface structure), then NO distinction between its occurrence in A man is an animal and A man came here yesterday is represented (which clearly seems to exist).16


Similarly (from McCawley), the projection rules that apply to a rock has diabetes - usually with no reading, since anomalous - must, when the sentence occurs within a sentence like It is nonsense to say that a rock has diabetes, go INTO its immediate component readings in order to provide a satisfactory reading for the whole.17

Caton proposes 'sub-local' projection rules and 'supra-local' ones. Sub-local rules provide a reading for N1 in

(19) [tree omitted: N1 dominating N2 and N3, with N3 dominating N4 and N5]

not just from the readings for N2 (possibly itself a derived reading) and N3 (a derived reading depending on N4 and N5), but from N4 and N5 also. Such would provide the possibility of deriving different sorts of readings for a man in virtue of the reading (and syntax) of the preceding V, and thereby represent the relevant semantic difference in the VPs of (20) and (21):

(20) [tree omitted: S → NP(N John) VP(V is, NP(Det a, N man))]

16 This is, of course, only an illustrative example. The article a, like the, will doubtless be better rendered syntactically than as a semantic item with a reading. See note 9 above.

17 Cf. Caton (1969) 12-13. This point is not lost on Katz; cf. Katz (1966) 161.


(21) [tree omitted: S → NP(N John) VP(V hit, NP(Det a, N man))]

a difference describable in logical jargon as the difference between a singular predication (a man being a predicate) and existential generalization (a man in (21) signifying some object which in turn is a man).18 Analogously, a supra-local rule assigns the reading of N3 in a structure like

(22) [tree omitted: N1 dominating N2 and N3, with N3 dominating N4 and N5]

by not only utilizing readings of its constituents N4 and N5, but by going outside the node itself (N3) to the reading of some other node, say N2. As Caton says,

To illustrate with (5) and (6) [the sentences of (20) and (21) above] supra-local rules (of which two might be involved) would or could assign the reading of the constituent 'a man' in terms of the readings of 'is'/'hit', 'a', and 'man', while sub-local rules (of which again two might be involved) would or could assign the reading of the constituent 'is a man'/'hit a man' in terms of the readings of 'is'/'hit', 'a', and 'man'.19

18 (20) and (21) are Caton's (7) and (8) of Caton (1969) 14.

19 Caton (1969) 16. My insertion in brackets. (Also, it should be realized that some use of selectional restrictions could probably be used to account for the same phenomena Caton notes).


Caton's next suggestion introduces some further motives for complicating Katz' projection rules and also provides a detail of the interpretive model which will permit easy introduction of the alternative generative one. He proposes that atomic (non-complex) and complex semantic markers be formulae of quantification theory (open and closed wffs). Instead, that is, of readings (whether resulting from projection rule application or not) being mere collections of semantic markers, some structure is added - an apparently missing feature in the sketch so far presented (and one that I don't suppose Katz ever meant to explicitly reject). Caton proposes the structure found in the functional calculus. For an example like John hit Tom, the desired semantic representation is, then, Hjt. And the projection rule required to produce such is one that operates on (the partially interpreted tree)

[tree omitted: S whose subject NP, V, and object NP carry the readings j, Hxy, and t]

where j and t are readings, respectively, for John and Tom in John hit Tom and, more importantly, Hxy is the reading of hit.20 This tree displays the substitution of readings for lexical items subject to interpretation (viz., hit, John, and Tom). The task is to provide another projection rule to combine these partially interpreted forms into the final semantic representation Hjt - viz., some sort of substitution rule for placing the constants in the appropriate places of the two-place predicate.

20 I am disregarding my previous point (at the end of section 1.1 above) about names for purposes of smooth discussion only.

Local projection rules could suffice. Similarly, they could suffice for John hit a man, projecting (∃x)(Hjx & Mx)21 from (say)

[tree omitted: S carrying the partially combined readings for John, hit, and a man]

which could be projected from

[tree omitted: S → NP(j) VP(V(Hxy), NP(Det a, N(Mx)))]

However, it is clear that local rules (or sub-local ones, all of which could be reduced to species of local rules if there is further subcategorization)22 will not always work - say, for examples like A man is an animal and A man is rational, etc. That is, a semantic representation '(x)(Mx ⊃ Dx)' for All men die must be (apparently) derived from a form

21 From Caton (1969) 19.
22 Cf. Caton (1969) 12, footnote 20.


(26) [tree omitted: S → NP(Det All, N men) VP(V die)]

by way of a supra-local rule on the subject NP such as

(27) [tree omitted: the S node of (26) assigned the reading (x)(Mx ⊃ Dx), with men carrying the reading (M)]
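By way of illustration, Caton's two formal points - readings as wffs, projection as substitution - can be given a minimal sketch in the programming language Python. The string representation of readings, the helper names, and the 'Ex' spelling of the existential quantifier are conveniences of the sketch, not Caton's own formulation.

    # Readings as formulae of quantification theory (Caton's proposal),
    # here represented as plain strings over a toy lexicon.
    LEXICON = {
        "John": "j",      # individual constants
        "Tom":  "t",
        "hit":  "Hxy",    # a two-place predicate reading
        "man":  "Mx",     # a one-place predicate reading
    }

    def project_constants(verb, subj, obj):
        """A local rule: substitute constants into a two-place
        predicate reading, e.g. John hit Tom => Hjt."""
        return LEXICON[verb].replace("x", LEXICON[subj]).replace("y", LEXICON[obj])

    def project_existential(verb, subj, noun):
        """A local rule for an indefinite object, e.g. John hit a man
        => (Ex)(Hjx & Mx), with 'Ex' spelling the existential quantifier."""
        verb_reading = LEXICON[verb].replace("x", LEXICON[subj]).replace("y", "x")
        return "(Ex)(%s & %s)" % (verb_reading, LEXICON[noun])

    print(project_constants("hit", "John", "Tom"))    # Hjt
    print(project_existential("hit", "John", "man"))  # (Ex)(Hjx & Mx)

The supra-local cases of (26)-(27) are exactly the ones such purely local substitutions fail to reach, which is Caton's point.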

Caton's suggestions are pertinent to defending the interpretive approach, though they are not claimed to be fully adequate or final (and were not so offered by Caton). The point is that the interpretive approach is a natural point of view for semantic theory, and troubles with it (such as what form semantic representations should take in the end, or what complications to projection rules are required) don't seem to be reason enough for dropping it. Defenses might be devisable. Caton's suggestions are easily comprehensible samples.

I want to use Professor Caton's version of interpretive theory as a means to introducing the generative approach - particularly the variety I favor. The interpretive picture is natural enough. There is, from the grammar, a recursively enumerable (at least) set of syntactic structures suitable for inserting lexical items into. Choose one, the theory says. Insert some words (morphemes) in it. Now you have a representation of the deep structure of some actual sentence (presuming the transformations of the rest of the grammar will permit generation of a surface structure that can be phonologically interpreted).


With respect to the diagram in (15), you have just created the top part:

(28) [diagram omitted: S, via the base grammar, yielding a deep structure]

You have a string of lexical items that have some meanings individually and which, in light of the deep structure they are in, will combine to mean something as a whole. That is, they will if the semantic component permits it and can provide a semantic representation for the whole.23 With Caton, we have a proposal for what semantic representations look like - viz., predicate calculus formulae. Thus, we can add the semantic component to our diagram

(29) [diagram omitted: S → deep structure → semantic representation]

and also know what an example would look like - e.g.,

23 Always remember that any group of morphemes inserted in any deep structure may well not scan semantically - i.e., not yield a semantic representation other than that for anomaly. Similarly, any deep structure may not survive the transformational and phonological component - i.e., be ungrammatical for syntactic or phonological reasons. Certainly some semantic anomalies might not survive the transformational component. An interesting question is: are there ANY permissible deep structures that do issue in well-formed semantic representations but can't be said (either surface-syntactically or phonologically)? Also, remember that all this discussion is of a competence model, so that the 'choosing' of syntactic frames for inserting lexical items (and subsequent interpreting of them) is not in any way supposed to reflect features of how a language user PERFORMS the task of saying something, deciding what to say, or deciding what he means by what he says. What we are here concerned with is greatly abstracted from the speech situation. We are only trying to get straight a minimally correct statement of what an abstracted-from-the-use sentence might BE (syntactically and semantically). Thus, the pains with methods of representing it.


(30) deep structure: [tree omitted: S → NP(Det All, N men) VP(V die)]
     semantic representation: (x)(Mx ⊃ Dx)

With this Caton-revised Katzian sort of approach to interpretive semantic theory, I can very easily construct a GENERATIVE approach to semantic theory. The key is to notice that these kinds of semantic representations are themselves well-formed (structured) strings of elements which could very well have their own distinct 'grammar' to generate them (especially in the formalistic sense of grammar introduced in 1.1 above). If we added a suitable notation to the original diagram of (15) to represent this fact, we get

(31) [diagram omitted: S, via a grammar G, yields deep structures, surface structures, and phonological representations, while S', via a distinct grammar G', yields semantic representations]

where S, via a grammar, G, generates deep structures containing lexical items and S', via a distinct grammar, G', generates semantic representations directly.

So what? We have two ways to get semantic representations - one directly through their own grammar (a PS grammar, by the way), another indirectly (though pertinent to full linguistic description) via the base grammar, G, and projection rules of the semantic component. However, an alternative for semantic theory should be obvious in these diagrams. It is based merely upon allowing projection rules to work in reverse, a non-extraordinary idea I submit. If they could, then we have a new, so-called generative model for linguistic description, viz.,

(32) [diagram omitted: the model of (31) with the projection rules operating in reverse, mapping semantic representations onto deep structures]
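To fix ideas about what a distinct grammar G' could be, here is a minimal sketch, in the programming language Python, of a toy PS grammar that generates predicate calculus formulae directly. The rule set, the symbols (with 'A', 'E', and '>' spelling the quantifiers and the conditional), and the depth cut-off that halts derivations are all assumptions of the sketch.

    import random

    # Toy PS rules for G': S rewrites to negations, conjunctions,
    # conditionals, quantifications, or atomic predications.
    RULES = {
        "S": [["~", "S"],
              ["(", "S", " & ", "S", ")"],
              ["(", "S", " > ", "S", ")"],
              ["(", "Q", "x", ")", "S"],
              ["P", "T"]],
        "Q": [["A"], ["E"]],          # universal / existential
        "P": [["M"], ["D"], ["H"]],   # predicate letters
        "T": [["x"], ["j"], ["t"]],   # terms
    }

    def generate(symbol="S", depth=0):
        """Rewrite until only terminal symbols remain; deeper calls
        are forced to the atomic rule, so every derivation halts."""
        if symbol not in RULES:
            return symbol
        options = RULES[symbol] if depth < 3 else RULES[symbol][-1:]
        return "".join(generate(s, depth + 1) for s in random.choice(options))

    print(generate())   # e.g. (Ax)(Mx > Dx), ~Hj, ((Mx & Dt) > Hj), ...

Such a grammar is 'formalistic' in exactly the sense of 1.1: it enumerates the well-formed semantic representations without yet saying anything about their mapping onto syntax.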

To make it more palatable, we can revise our notion of backwards-operating projection rules by noting what such (forward OR backward) projection rules do in general. They map structures onto semantic representations which are also structures (in terms of their own formal grammar, G'). Operating backwards, they map semantic representation structures onto deep structures. Operations that do that can be conceived of and represented as TRANSFORMATIONS. (This is particularly obvious if the merely formalistic concept of transformation introduced in 1.1 above is kept in mind). Thus, our model is (dropping the prime from S', since there is no other species of S to distinguish it from)

(33) [diagram omitted: S generates semantic representations, which double-arrowed transformations map onto deep structures, thence surface structures, and thence phonological representations]

Also, I have inserted two double-arrows in place of the former single ones (that represented, respectively, the backwards projection rules and the process of phonological interpretation) to represent what would be the real state of affairs in such a model of linguistic description. For phonological interpretation of surface structures is largely by way of transformations also,24 and our projection rules ought now to be thought of as transformations anyway, since in working backwards their previous role in representing combining operations on meanings has been destroyed (they now break them up, more or less).

24 Cf. Chomsky and Halle (1968).


1.3 GENERATIVE SEMANTIC THEORIES

I have sketched an introduction to the form of one kind of generative semantic theory. It is closely akin to a specific instance of a grammar for semantic representation that I have been developing.25 This introduction, however, is not typical of how certain grammarians have come to propose an alternative to the interpretive approach. As might be expected, they take a less semantical, more syntactic route. Before sketching some of their grammatical considerations, consider briefly what a generative semantics amounts to, even at this point. The allegedly natural - and I still think it largely is - approach of deriving semantic representations of sentences by

(i) 'looking up' the meanings of lexical items in an ideal (Katzian) dictionary, and then

(ii) placing representations of the meanings into a suitable representation of their deep structure, and finally

(iii) combining them into a representation of the meaning of the whole sentence in which they are occurring

has been replaced by a reverse process of

(a) generating representations of meanings of whole sentences, after which

(b) an 'ideal' dictionary must be used backwards to find a lexical item (or items) that can be inserted into the derived structure which, in turn,

(c) is subject to transformations transmuting it into a surface structure suitable for deriving ultimate phonological representation.
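A minimal sketch of this reverse process, again in Python, may make it vivid: one canned structure stands in for the generated semantic representation, a toy dictionary is used backwards as in (b), and a single pattern-to-string mapping stands in for the transformational work of (c). All of the names and the regular-expression treatment are simplifications for illustration.

    import re

    # A toy 'ideal dictionary' read backwards: from predicate letters
    # to lexical items (pluralization is canned for the example).
    REVERSE_DICTIONARY = {"M": "men", "D": "die"}

    def to_surface(semantic_representation):
        """Map a generated universal conditional '(Ax)(Mx > Dx)' onto
        a surface string - steps (b) and (c) in miniature."""
        m = re.fullmatch(r"\(Ax\)\((\w)x > (\w)x\)", semantic_representation)
        if m is None:
            return None                       # no lexicalization found
        noun, verb = (REVERSE_DICTIONARY[g] for g in m.groups())
        return "All %s %s" % (noun, verb)

    print(to_surface("(Ax)(Mx > Dx)"))        # All men die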

25 The chronology of the development of a so-called LSR grammar is not, however, coincident with this introduction. We came to it with other motivations in mind - largely concerning the concept of grammatical subject and the relationship of grammatical syntax to logical syntax. Cf. Peterson and Carnes (1968) and Peterson (1968a).

In brief, we now GENERATE meanings (i.e., representations of them) and then find words to express them - where both meanings and the words to express them have definite structures ('syntax'). (Also, the locus of recursion is shifted from the deep structure level to the generation of semantic representations). To the polemical charge that this is just an illicit introduction of facts of linguistic performance into the description of competence - since in using language we seem to decide what we want to say (meaning) and then go ahead to find some way to explicitly say it - I could simply reply in kind.


The Katzian or other interpretive approach can also be said to be an illicit introduction of facts of linguistic performance into the description of competence - as follows. We often want to know what some segment of language means. We 'parse' it in some way we think pertinent and then, when the discovery of meaning seems to turn on the meaning of individual words, we look up the words in a non-ideal dictionary (say, Webster's). With the meanings for the questioned words we find there, we are usually able to assign meaning to our original segment as a whole on the basis of the parts we now know. The interpretive approach to semantics models that performance rather closely, doesn't it? A more sanguine, less polemical observer might just say the generative theorist is modeling linguistic speaking (language use from the speaker's point of view) and the interpretive theorist is modeling it from the hearer's point of view (although this assessment would seem to be at odds with the competence-performance distinction). Moreover, a speculative observer might say the interpretive approach reflects an 'analytical' attitude and the generative one a 'synthetical' attitude. I prefer to assert NONE of these things. My aims being descriptive and exploratory herein, all that need be remembered is that the competence-performance distinction is as important as it is problematic. I am NOT endeavoring to unravel its problems and difficulties. Rather, I say to polemicists: examine the proposals by grammarians working on these topics to see if ANY are of use in linguistic description.

I have introduced a generative model, then, but what considerations prompt further inquiry? Mainly grammatical and syntactic ones, concerning the transformations in particular descriptions of particular English sentences. I will, again, just sketch some of these. After that, I will give some samples of the generative semantics I myself propose.

Recent objections to the interpretive semantic approach have centered around the very concept of deep structure, a syntactic level providing the structure for input into the semantic component.


They can be roughly summed up in the question: Is there deep structure? Or: Is there a specifiable level of underlying syntactic structure which plays the role of deep structure in the Chomskyan model? The question does not signal a rejection of the generative-transformational approach. It is raised within it. Thus, the entire methodology of formal grammars applied to the linguistic description of the sound-meaning correspondences that constitute sentences - adhering to the competence-performance distinction, and fully utilizing transformational grammatical rules - is unchanged. The only question is: is there a particular, clearly specifiable syntactic level at which all that is required of deep structures in the interpretive theories does occur? To raise the question is to re-schematize the Chomsky model (as in (15)) into

(34) [diagram omitted: the model of (15) with the deep structure level left in question]

where deep structures are in question. Note that this is, as yet, no rejection even of the Chomskyan model. For in the Aspects view (and later) at least some transformations occur before the alleged deep structure level is reached.

Lakoff and Ross26 claim that the reasons for positing this 'deep' level are that it is (a) the simplest base for all transformations to start to apply, (b) where co-occurrence and selectional restrictions are defined, (c) where grammatical relations are defined, and (d) where lexical items are inserted.

26 Lakoff and Ross (1967); Lakoff (1968).


I summarize their criticisms of these considerations as defining a 'deep' level of syntax as follows. First, the idea of a single level where transformations most simply apply (or begin to - say, where a main cycle of them starts) is no special reason for singling out a special level of 'syntax'. Further, I would ADD to Ross and Lakoff that if there is such a level it is probably just the very earliest phrase structure 'level', since even for Chomsky some transformations or transformation-like rules apply before reaching the deep level (i.e., in order to construct it).27 Secondly, there are arguments that semantic co-occurrences and selectional restrictions take precedence over syntactic ones, so that such semantic restrictions should occur generatively earlier (not vice versa, as on the deep structure, interpretive semantics approach). Thirdly, grammatical relations are of only semantic utility, so that if semantic representations properly distinguish the semantic features implicit in actual sentences (e.g., man bites dog semantically distinct from dog bites man), then that is all that is required.28 Finally, and most importantly I believe, there just does not appear to be a single place or 'level' at which lexical insertion does, ought, or could take place.

27 Namely, subcategorization rules, lexical insertion itself, etc. Moreover, the facts about derivational morphology (e.g., Chapin, 1967) apparently require transformations which, if correlated with deep structures, would seem to apply before lexical insertion in the generation of deep structures.

28 To this point, Carnes and I (1968) have added that grammatical relations are only used in linguistic description to structure input for an interpretive semantic component - i.e., they are required for selection of appropriate projection rules - so that if the interpretive approach is dropped in favor of prior generation of semantic representations, grammatical relations serve no purpose in linguistic description. I believe now that grammatical relations - e.g., subject-of-an-S - are at best relative distinctions of use in discussing aspects of linguistic description (e.g., loosely calling a constituent a 'subject' relative to some covering S-structure at some stage of generative description) but that they do not play a very substantive role in the theory. (Also, remember Chomsky's qualms about distinguishing genuine grammatical relations from "irrelevant pseudorelations"; Chomsky, 1965, 73).


Lakoff and Ross cite examples where it appears that the passive transformation ought to apply (if it can) before lexical insertion so that, say, the cowboy kicked the bucket will not undergo it and *the bucket was kicked by the cowboy won't be generable.

McCawley, in particular, has given this last point considerable attention.29 It seems to be the most important of the four points from Lakoff and Ross.30 For if lexical items are inserted at various points in the stages of transformational generation from initial P-markers up to surface structures, then that alone destroys the notion of a specific level of syntax called 'deep' wherein the semantic component finds its input. Admittedly, an interpretive approach might be salvaged from this situation, conceived, say, along the lines of this model:

(35) [diagram omitted: S leading through stages of transformational derivation, with lexical insertion I through lexical insertion n occurring at various points, to surface structures, with semantic representation derived along the way]

But why bother, especially if semantic representations can be incorporated in generative stages prior to lexical insertion without an extra semantic component?

29 Cf. McCawley (1967), (1968a), (1968b).


McCawley's points in general bring out the way in which lexical items themselves implicitly involve structures which themselves require transformations, and still 'deeper' syntactic-like structure. If it is there, implicit in a single lexical item, shouldn't that structure be made explicit in the full semantic representation it is involved in? For example, consider the item kill, such as in Tom kills Bill. Prior to lexical insertion of kill on the deep structure model, and ignoring the items Bill and Tom for this example, the purported deep structure would be (something like)

[tree omitted: S → NP(Tom) VP(V, NP(Bill)), with the V node empty]

with the V node awaiting insertion of a representation of kill. (I have omitted, for the sake of simplicity in illustration, the apparatus of syntactic subcategorization required by a Chomskyan sort of lexical insertion rule). AFTER insertion, the whole structure would then be submitted to the semantic component. And could the implicit structure of kill be produced from any such component? Namely:

(37) [tree omitted: the structure of kill built from the conceptual components (cause), (become), (not), and (is alive)]

30 Lakoff has given considerable attention to the second point - concerning co-occurrence and selection restrictions. Cf. Lakoff (1965), (1968). For a critical reply to the Lakoff-Ross approach, see Katz (1970a).

where (cause), (become), (not), and (is alive) are used here to represent conceptual components within the (roughly) syntactic-like structure indicated. That is, the fuller semantic representation of Tom kills Bill should contain the species of lexical structure displayed in (37). Moreover, that structure itself would seem to require transformations before lexical insertion to bring it into something like (say)

(38) [tree omitted: S → NP(Tom), VP(V (cause), VP(NP(Bill), V (to become), VP(V (not alive))))]

which, in turn, if lexical insertion of kill is desired, yields


(39) [tree omitted: S → NP(Tom) VP(V kills, NP(Bill))]

And such a lexical insertion rule is itself a transformation (nothing new, of course, since so is Chomsky's31 lexical rule). Or, more briefly, as McCawley32 suggests, the lexical insertion rule for kill is a form (omitting node labels) roughly like:

(40) [tree omitted: a configuration of Cause, Become, Not, and Alive rewritten as kill]
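A minimal sketch, in Python, of such a pre-lexical structure and of rule (40) as an operation upon it: trees are represented as nested tuples, and the pattern-matching below is an illustrative simplification of McCawley's proposal, not his formulation.

    def lexicalize_kill(tree):
        """Collapse ('cause', agent, ('become', ('not', ('alive', patient))))
        into ('kill', agent, patient), in the spirit of rule (40)."""
        if (isinstance(tree, tuple) and tree[0] == "cause"
                and tree[2][0] == "become"
                and tree[2][1][0] == "not"
                and tree[2][1][1][0] == "alive"):
            return ("kill", tree[1], tree[2][1][1][1])
        return tree   # no match: leave the structure for other rules

    tom_kills_bill = ("cause", "Tom", ("become", ("not", ("alive", "Bill"))))
    print(lexicalize_kill(tom_kills_bill))   # ('kill', 'Tom', 'Bill')

Such a rule is, of course, itself a transformation on P-markers, which is just the point of the passage above.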

I believe examples and suggestions like these are startling enough to suggest that there is quite a bit more to semantic representation and lexical insertion in a transformational grammar than is hinted at by the Chomsky (Aspects) deep structure model and the Katz sort of interpretive semantics component. (Moreover, even Caton's sophistications of the interpretive approach would not seem to touch upon the pre-lexical structure and the need for transformational treatment).

Kinship terms provide similar indications of implicit structure. Consider Bill is a cousin of Joe (a rendering for my purposes of Bill is Joe's cousin).

31 Chomsky (1965) 84, 89-90.
32 McCawley (1968a) 73.


Isn't the implicit pre-lexical structure something akin to the following?

(41) [tree omitted]

(parenthesized terms again playing the role of conceptual elements in the structure). A structure like this can then yield (in some transformational way) both

(42) [tree omitted]

or the original

(43) [tree omitted]


(where, of course, X is child of Y = Y is parent of X = Y is X's parent).33

The reaction to generative semantics by those expected to defend the interpretive (deep structure) approach has been neither slow nor meager. Important among the many topics pursued have been:

(i) derived nominals and nominalizing transformations (whether the regularities of derivational morphology - such as between derived and base forms like laughter/laugh, wisdom/wise, belief/believe, etc. - submit to appropriate transformational analysis, analysis that might constitute part of the 'semantic' transformations mapping semantic representations onto pre-lexical structures);34

(ii) the defensibility of the McCawley sort of proposals for transformational development prior to lexical insertion;35

33 The case of kinship terms in this regard was suggested to me by Langacker (1969), though I have done some violence to his method of, and sophistication in, explaining the phenomena.

34 Chomsky (1970a) has proposed a distinction between a 'lexicalist' thesis and a 'transformationalist' one - where the former is roughly correlated with a defense of the basic deep structure model (compatible with interpretive semantics) and the latter correlated with generative semantics. He argues exhaustively against the transformationalist rendering (i.e., generative semantics) of relationships of derived nominals to forms they might be based on (e.g., laugh, construct, believe) - particularly as transformational regularities appropriate for incorporation in the grammar prior to lexical insertion. Rather, he sees most of these relationships as pertaining to items in the lexicon, relationships relatively abstracted from the grammar (ones constituted of a fairly idiosyncratic nature). Thus, the defender of generative semantics based primarily on regularities of derivational morphology (cf. Chapin, 1967) has much troublesome evidence to deal with.

35 Fodor's (1971) criticisms of this kind of evidence are symptomatic of the considerations raised and potential replies. First, he illustrates an anomaly in the ordering of transformations for cases like McCawley's - viz., that the do-so transformation for derivations of structures with kill must occur AFTER lexicalization (lexical insertion of kill from semantic structures approximating cause...to die) in order to block sentences like *John killed Mary and it surprised me that she did so, while it must occur BEFORE lexicalization in the seemingly parallel sort of case of melt derived from cause...to melt in order to permit sentences like Floyd melted the glass though it surprised me that it (OR: he) did so. Then he goes on to argue that not even one of kill or melt should be derived from an underlying 'causal' phrase. His claim is that if any such derivations are permitted, an important, independently motivated generalization would have to be given up - viz., that the necessary and sufficient condition for an NP being shared with an instrumental adverbial is that the NP be the deep subject of the verb that the instrumental modifies.


Fodor eventually mentions the kind of reply to be expected in defense of the generative position on such examples - that there must be constraints on the variously involved transformations (particularly, lexicalization). Fodor regards this possibility as an ad hoc defense, but the generative semanticist might regard such cases more as suggesting discoveries of the kind of constraints required. Lakoff seems to be developing this kind of consideration in explicating the generative semantics approach; cf. Lakoff (1970a). Chomsky similarly attacks arguments of the Lakoff and McCawley sort (cf. Chomsky, 1970a, 1969), ultimately proposing an "extended" standard theory (sometimes "revised" standard theory) in answer to transformationalists as well as to new problems (e.g., the manner in which quantifiers in surface structures determine semantic interpretation). The extended standard model might be summarized as follows:

[diagram omitted: S yields Deep Structures (with lexical insertion), which yield Surface Structures; Semantic Representations are determined by Deep and Surface Structures together, and Phonological Representations by Surface Structures]

This model is a startling enough departure from the basic deep structure (interpretive semantics) model to resist even notational reformulation in a generative model - such as that depicted above in (31)-(33). For devising a separate 'grammar' to generate semantic representations (as even Chomsky admits to be possible, Chomsky, 1970b, 12, though unimportant), and regarding the inverses of semantic interpretation rules (from the pair (Deep Structures, Surface Structures) to Semantic Representations) as transformations, becomes extremely implausible, since the output of these new 'semantic transformations' would be two different levels of syntax themselves related by distinct transformational rules.

(iii) considerations about case structures and their relationships to deep structures;36 and


(iv) the extent to which generative semantics is a genuine alternative rather than a 'mere notational variant' of the standard (deep structure, interpretive semantics) model - or, even, of the 'extended' standard model.37

In any case, I propose a 'logical' approach to generative semantics.38 I have defined a 'logically-oriented semantic representation' grammar - an LSR grammar - as follows:

A grammar, G, is an LSR grammar if and only if (i) each initial P-marker generated by PS rules of G contains a representation of the logical form of any sentence whose grammatical description can be subsequently derived by transformations of G, and (ii) the transformations of G apply to initial and derived P-markers at least until all lexical insertion is completed.39

36 Cf. Fillmore (1968). Chomsky (1970b, 1971a) regards these considerations as the most important source of potential difficulty for the 'extended' standard theory (and, thereby, as potentially most supportive of generative semantics). Nevertheless, he does not feel the evidence incontrovertible - e.g., "The case representations are best understood as a notation for expressing the semantic relations determined by the interplay of deep structure grammatical relations and specific properties of lexical items" (Chomsky, 1970b, 43).

37 Although there is much disagreement about evidence here (as elsewhere), Chomsky holds that, where defensible, claims for generative semantics amount to nothing more than an empirically vacuous recommendation for an alternate notation. He even appears to hold the implausible position that until proved otherwise generative semantics must be presumed to be only a notational variant - see, for example, p. 8 of Chomsky (1969). (This would appear to be, in general, unscientific, since apparently different observations, hypotheses, or explanations are usually presumed to be what they apparently are - different - until proved otherwise). Katz (1970a) takes a similar stand. However, and paradoxically, he goes on (1970b) to claim that vis-a-vis Chomsky's extended standard theory he and the generative semanticists agree. For Chomsky now suggests that semantic representations depend on SURFACE structures as well as deep structures. Katz claims that his interpretive approach and that of generative semantics agree in assigning the task of representing meaning to 'deeper' or more abstract levels of grammatical analysis (more distant from 'surface' or superficial syntactic structures associated with representing sound), whereas Chomsky has abandoned that basic view and now permits structures that were thought to be grammatically relevant only to representing sound (surface structures) to provide crucial inputs to the semantic component as well. Needless to say, as with the other complicating developments, both empirical evidence and theoretical considerations have so proliferated as to make easy comparison of alternative approaches extremely difficult.

38 It is appropriate to insert here my reply to part of the criticism (from Chomsky and Katz) that generative semantics is a notational variant; i.e., that it is a different notation or formalization for the same facts and regularities, so that no empirical (scientifically substantive) issue is involved. There are two parts to the criticism: (i) that alternative notations or formalizations are empirically or scientifically uninteresting (beneath serious interest scientifically, though perhaps of other interest - e.g., philosophical or mathematical); and (ii) that any correct generative semantics model is IN FACT intertranslatable with some interpretive one (e.g., the extended standard theory). I will address only the first point here. The second point is, indeed, very important, but it is also the less tractable at present, given the wide differences of opinion on both phenomena and hypotheses. (Chomsky's point [1971a] that the deep structure hypothesis is the stronger, narrower claim - and, thereby, is more easily disconfirmable in principle - must be recognized. But, of course, the deep structure level, as Chomsky admits, may not exist, even if it is an attractive hypothesis).


There are two considerations I raise against the assumption that there is nothing empirically or scientifically interesting in alternate notations. First is a general reminder of the relative PARTICULARITY of grammars. To explain this I draw an analogy between 'grammars' for 'languages' expressing the propositional calculus and grammars for natural languages. Consider two distinct notations for the propositional calculus - Polish prefix, and ordinary infix with parentheses. Two distinct grammars effectively formalize the well-formedness conditions for formulae in the two different notations; e.g.,

S → ~S    S → OSS    S → p, q, r, ...    O → ∨, &, ⊃

S → ~S    S → (S O S)    S → p, q, r, ...    O → ∨, &, ⊃
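The intertranslatability of the two notations can itself be exhibited mechanically. Here is a minimal sketch in Python, assuming single-character tokens and only the connectives listed; the recursive-descent treatment is a convenience of the sketch.

    CONNECTIVES = {"∨", "&", "⊃"}

    def to_infix(formula):
        """Translate a Polish-prefix formula (first grammar) into the
        parenthesized infix notation (second grammar); returns the
        translation together with whatever remains unread."""
        head, rest = formula[0], formula[1:]
        if head == "~":
            sub, rest = to_infix(rest)
            return "~" + sub, rest
        if head in CONNECTIVES:
            left, rest = to_infix(rest)
            right, rest = to_infix(rest)
            return "(" + left + " " + head + " " + right + ")", rest
        return head, rest     # a propositional variable p, q, r, ...

    print(to_infix("⊃p~q")[0])     # (p ⊃ ~q)
    print(to_infix("&⊃pqr")[0])    # ((p ⊃ q) & r)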

The logical transformation rules (inference rules) for the calculus can be formulated in either notation with only slight adjustment for differences in formulae (e.g., if modus ponens is adopted, then the conditional premise of the rule must be stated with a formula that has the conditional connective infixed for one language and with the connective prefixed in the other). But, although both languages (sets of well-formed formulae) are equally useful for expressing the calculus, still they are distinct languages (intertranslatable, of course, but distinct). Now the relevant feature is just the respect in which each notation for the calculus is MORE PARTICULAR (less abstract) than the propositional calculus itself. The calculus is that which any number of different notations can capture, each in its own distinctive way. (Notice that I am here only relying on the relative differences determined by notational variation, i.e., grammatical variation. The differences between different sets of (logical) transformation rules over and above the formation rules - from typically rich, redundant natural deduction systems to sparser ones - are not what I am drawing attention to. Thus, that system that is 'THE propositional calculus', manifested identically in different particular choices of both formation and transformation rules, is quite abstract and remote from the particularizations of it we customarily use). The analogy is that a grammar of a particular natural language (even one with the ultimate desideratum of 'explanatory' adequacy) is 'particular' in the way that a notation for one version of the propositional calculus is. And, in any case, a natural language grammar is certainly LESS analogous to that very abstract system that is 'THE propositional calculus' than it is to a particular formulation of it with particular formation rules and particular transformation rules.


Since this is the case, so-called mere notational variation (as alleged by Chomsky and Katz) should not be lightly disdained. For substantive GRAMMATICAL alternatives may be involved, due to the relative particularity of any natural language grammar. The second consideration is a little more precise than this general warning. It is that notational variation can be of real consequence with regard to distinct but closely related theoretical matters. Again, an analogy helps - viz., to two notations for integers. I can refer, for example, to the number eight with an appropriate use of either the Arabic numeral "8" or the Roman numeral "VIII". With regard to simply naming numbers (in writing), either notation will do perfectly well. However, when the closely related aim of arithmetical calculation arises (not merely a practical matter, though certainly at least that), then Arabic numerals are greatly to be preferred. For our customary techniques of addition, subtraction, multiplication, and division are closely tied to the nature of the Arabic numerals. In fact, the whole calculation system is so effective that someone might even be prompted to conjecture that the Arabic numerals come closer to revealing the 'real' nature of numbers than any other numerals could - e.g., Roman numerals embedded in some suitable calculational system. I am not at all sure this is true, but it would be a plausible, perhaps interesting, hypothesis. Now the same may be true of generative semantics in comparison with its (alleged) notationally equivalent interpretive (deep structure) alternative. Maybe the approaches are mere notational variants of the same thing. But maybe also, generative semantics is notationally superior! I would claim it is, for philosophical reasons - viz., the generality gained by conceiving a very broad range of linguistic structural phenomena to be formalizable by grammatical transformations leads to the new characterization of concepts and conceptual structures as 'transformable' (as argued in Chapter 2 below). But that may have little impact scientifically. Yet philosophical reasons aren't the only basis for a notational superiority claim. Lefkowitz (1971) argues, for example, that one trouble with Katz' interpretive semantic theory (certainly the most well developed general theory of interpretive semantics based upon deep structures) is that the formulation of so many of the principles and operations of his semantic component remains at the INTUITIVE and INFORMAL stage (and some are completely implicit). Lefkowitz suggests that EXPLICIT formalization is significantly promoted by adopting the generative semantics approach. (That is, even admitting the possibility of equivalence between the two approaches, still the generative semantics approach directly permits easy formulation, in exact, non-informal terms, of the regularities that many of Katz' various semantic component rules are intended to describe). Then we at least have an explicit and relatively well understood mechanism to institute - viz., transformations operating on P-markers (as versus, e.g., non-explicit, intuitive, and informal 'projection rules' operating on 'semantic markers'). Lefkowitz demonstrates this claim by formulating Katz' own analysis of the syncategorematicity of good (as predicated of instrument-nouns) in transformational terms - involving an optional deletion of purpose-complement and an optional lexicalizing rule for instrument-nouns.

39 From Peterson (1969b), reprinted in revised version in the Appendix below.



This delineates a CLASS of grammars, of course, since there are no specifics about the manner of representing logical form; and even if there were, two LSR grammars representing logical form in exactly the same way could still differ radically in their pre-lexical transformations. The idea is that of a grammar which contains in its initial phrase markers representations of logical form in the sense in which Caton's semantic representations were simply logical formulae. Thus, the generative grammar I constructed out of Caton's grammar would be such an LSR grammar. Remember:

(44) [diagram omitted: the Caton-based generative model, as in (33), with a single point of lexical insertion]

In this LSR grammar, however, all lexical items are inserted at one point - defining, perhaps, the 'deep' structure level of Chomsky.40


The considerations advanced by the various grammarians which I have sketched above make it seem possible (if not likely) that lexical insertion does NOT occur at any single, well-defined level. Thus, a Caton-influenced LSR grammar may never see much development.

The sub-class of LSR grammars of which I have begun to construct one example is not exactly similar to the Caton-influenced one anyway. For in that grammar initial P-markers are simple logical representations and nothing else. They 'contain' representations of logical forms, but only as a limiting case, since they ARE such representations. I suggest that grammars be considered which properly 'contain' logical forms in the sense that the simple, logical tree-structures are 'decorated' with additional items which serve to trigger appropriate transformations and lexical insertion. Thus, instead of (say)

(45) [tree omitted: the bare tree-structure of (∀x)(Gx ⊃ Mx)]

40 Interestingly enough, this conclusion is not unchallengeable. For, remembering the early models, it might well be contended that what corresponds to deep structure in this model is the PREtransformational level, the level of semantic representation in predicate calculus formulae. Then the lexical insertion level would just be some sort of middle level, though not necessarily ill-defined. There is a strong inclination to think of initial P-markers as deep structure since they are the 'deepest' transformationally. Further, when they contain the (logical) essence of semantic representation - the logical structure - the inclination is nearly irresistible. (Incidentally, I prefer dropping altogether the 'deep and surface' terminology in favor of one of "early and later stages". Cf. Peterson and Carnes, 1968).


(a tree-structure representation of the universal affirmative proposition which, in parenthesized rather than tree form, is (∀x)(Gx ⊃ Mx), where if Gx = x is Greek and Mx = x is a man, then the whole represents All Greeks are men), we might appropriately 'decorate' it to yield

(46) [decorated tree omitted: the same logical structure augmented with constituents such as IF, THEN, BE, A, N, Adj, VB, and IT]

a P-marker that 'contains' the same logical representation (which could be extracted via a simple algorithm), but that also contains special syntactic markers that trigger transformations. For example, the ITs are often deleted, but may go to surface manifestations of it, such as in the variant If anything is a Greek then it is a man or Everything is such that if it is a Greek then it is a man. N and Adj indicate the syntactic categories noun and adjective which a lexical item must be in order to express the predicates Gx (Greek) and Mx (man). The ⊃ may lead to if...then or be deleted by suitable transformation before lexical insertion. BE, if not deleted, leads to a form of the copula, and A to the article a (signifying that the following noun must be a count noun).
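The 'simple algorithm' just mentioned can be sketched minimally in Python, assuming P-markers represented as nested lists and an inventory of 'decoration' constituents no richer than the example (46); both assumptions are for illustration only.

    DECORATION = {"IF", "THEN", "BE", "A", "N", "Adj", "IT", "VB", "Q"}

    def extract_logical_form(p_marker):
        """Collect the leaves of a decorated P-marker, pruning the
        decoration constituents, to recover the logical representation."""
        if isinstance(p_marker, str):
            return [] if p_marker in DECORATION else [p_marker]
        leaves = []
        for daughter in p_marker:
            leaves.extend(extract_logical_form(daughter))
        return leaves

    # A heavily abbreviated rendering of the decorated P-marker (46):
    p46 = ["∀", "x", ["⊃", ["IF", "G", ["IT", "x"], "BE", "A", "N"],
                            ["THEN", "M", ["IT", "x"], "BE", "A", "N", "Adj"]]]
    print(" ".join(extract_logical_form(p46)))   # ∀ x ⊃ G x M x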


LSR grammars of this sub-class differ among themselves simply in terms of the logical notation chosen for the logical representation. If a Russellian format is chosen, then logical connectives are infixed; if a Polish notation is chosen, they are prefixed. That simple alternative would require different transformations before lexical insertion in order to reflect the difference in alternative initial P-markers (e.g., (∀x)(Gx ⊃ Mx) versus ∀x ⊃ Gx Mx), which would have to be transmuted into the SAME surface structure - such as the one of Everything is such that if it is Greek, then it is a man.41

I will present two sample derivations in this LSR grammar in order to suggest its details. First consider the sentence Everybody wants something, particularly in the sense of every PERSON does. That is, I presume the derivation is largely the same for Everybody wants something, Everyone wants something, and Every person wants something. The initial P-marker contains the logical representation (in Polish notation for this grammar)

(47) ∀x ⊃ Px ∃y Wxy

where by Px is represented x is a person and by Wxy is represented x wants y. This logical representation portrays the meaning (in a narrow sense) of the sentence (of English) Every person wants something (as well as of some synonymous forms, such as For anything, if it is a person, then it wants something).42

For explanation of such details and more introductory discussion, see the Appendix below. For discussion of motives, see Peterson and Carnes (1968) and Peterson (1968a). 42 There is, also, an underlying question of LOGICAL synonymy. A closely related, but still distinct, formula V * 3 y ZJ Px Wxy ought (it would seem) to generate the same sentences - Everyone wants something (plus the other variations). But, then, we would have two initial P-markers underlying the same sentence (the reverse of the typical sort of treatment of sentence synonymy wherein two surface structures have one underlying semantic representation). The answer to this minor dilemma, I believe, is to permit some concept of logical synonymy over and above that kind of synonymy captured by two surface forms being generable from the same initial P-marker (where the same lexical items are inserted). To start with, let logical forms themselves be 'synonymous' - granted this is a slightly deviant use of the term - when they are mutually deducible from one another. But then some definite limits to this logical synonymy, however, are clearly in order. Delimiting the notion should, I believe, be in terms of the actual constituents in the semantic representations (the logical formulae) - viz., in (roughly) the following way. y x 3 y •=> Px Wxy is logically synonymous with y x Px 3 yWxy because they are mutually deducible from one another (logically equivalent) AND because they contain all the SAME logical particles (mainly, quantifiers and connectives but also perhaps any such analogues of these to be eventually added, such as modal operators or predicate modifiers). But, ~ (Fa V Gb) - underlying (say) It is false that Alan is a man or Bill an animal (or Neither is Alan a man nor Bill an animal) - is NOT logically synonymous with ~ Fa & ~Gb (for, say,


In a broader sense, however, the meaning of such sentences is not just represented in this structure alone, but comprises the relationships of this structure to other such (conceptual) structures, plus its role in the subsequent transformational derivations leading to lexical insertion. The derivation is as follows.

(48) Everybody / Everyone / Every person wants something.

[initial P-marker, tree omitted: ∀x ⊃ Px ∃y Wxy, decorated with Q, IT, IF, THEN, BE, A, N, VB, and VERB constituents]

Q-adjunction (cycle on lowest 'relevant' S first)

[tree omitted]

Q-attraction (under S5)

[tree omitted]

v-permutation

[tree omitted]

variable deletions (only on Q-related variable, y; x here is like a constant on this cycle)

[tree omitted]

lexical insertion (twice)

[tree omitted: wants, SOME, and THING inserted]

O-deletion (beginning 2nd cycle; O-deletion mandatory since there is no A under the relevant Q to trigger longer forms)

[tree omitted]

Q-adjunction (mandatory after O-deletion for such structures)

[tree omitted]

v-embedding (replacement of v2 daughters with S2; variables, here x, guide)

[tree omitted]

v-permutation (under S3)

[tree omitted]

Nom-Q (a species of nominalization, since S2 is now a v, or a logical subject to the predicate wants something)

[tree omitted]

lexical insertions (after one variable deletion)

[tree omitted]

In such a manner, two transformational cycles of adjunction, substitution, reordering, and deletion transformations - each cycle terminating in lexical insertion transformations - operate to yield an approximation of the final surface structure. The procedure is compatible with the general idea of lexical insertion occurring at various levels, rather than one. Further, it introduces a substantial dose of logical orientation, so that semantic relations of synonymy, meaning containment, paraphrase, analyticity, and contradiction can eventually find a more explicit and fruitful treatment with the help of the methodology of logic and formal systems.


This approach does not, as yet, institute the implications about pre-lexical transformational structure involved in single lexical items - e.g., the structure of kills or cousin introduced above. However, I believe it is promising in that regard, simply through recognition that any predicate, representable as an open sentence or propositional function

(49) [tree omitted: S representing the open sentence Fx]

(i.e., Fx) may itself be a molecular predicate constituted of predicates and structures of predicates transformable into it. Saying just how the relationships and operations should be represented will be part (or even the whole) of specifying the semantic characteristics and relations in terms of such an LSR grammar. The central feature of the approach is that semantic units (atoms and molecules) are represented by predicates of quantification theory - i.e., by logically oriented notations for CONCEPTS (following Frege).

In transformationally based semantics today, there is fairly wide agreement that meanings of linguistic expressions, in a narrow sense of 'meaning', are simply concepts. Further, it is variously agreed that concepts themselves are structural, constituted of sub-structures of component concepts. Still further, there are repeated suggestions that these concepts be explicitly represented by logical predicates (cf. Caton, Lakoff and Ross, McCawley). I don't know that Katz anywhere suggests the same, but there is no particular reason to think he would have strong arguments against so representing concepts - i.e., using predicate logic notations in place of his own semantic markers.

76

CONCEPTUAL STRUCTURES IN SEMANTIC THEORY

The only really open question this grammar makes a specific hypothesis about is what the structure is which constitutes conceptual structure. I propose that it is at bottom logical structure, but with a definite transformational cast.

Before leaving semantic theory, the final example derivation in the same LSR grammar is one for The angry man sleeps, with the logical form

(50) S℩x & Mx Ax

(for S... as ... sleeps, Mx as x is a man, and Ax as x is angry, and with Russell's inverted iota indicating syntax for singular reference, though NOT incorporating his whole theory of definite descriptions):43, 44

The initial P-marker of (51) is the same as the one for The man who is angry sleeps (where the clause is restrictive) except for the absence in (51) of A under Q. That absence signals a mandatory modification; i.e., the phrase which restricts the subject (viz., who is angry) must become a modifier, not a clause. If A were present, then a restrictive relative clause results. Also, another paraphrase That which is a man and is angry sleeps results from O-shift (in place of the 0-deletion in (51)) followed by part of the series of transformations illustrated in (48) - Q-adjunction, Q-attraction, v-permutation, Nom-Q, etc. Two things are involved here: A and the distinction between restrictive and non-restrictive relative clauses. 'Relativizers' (Rel) are generable from every v constituent: v -> ITK (A), where A' is a logical variable y, or etc. (For major PS rules of this grammar, see (5) of the Appendix). An IT is an underlying logical subject of an underlying logical predicate (except when dominated by Q). Associated with each IT is the optional relativizing element, so that each structure of the form

(i) [tree: IT followed by one of the predicates runs, drinks milk, is angry, is a man, etc.] (the syntactic side of the propositional function Φx, and where v-permutation is already effected for illustration here) is optionally [expanded, via the Rel element, into] (ii) [IT + is that which / is such that it / is what, etc. + runs / drinks milk / is angry / is a man, etc.]. For example, for John is that which is angry, there is an underlying structure (iii) [tree: John + is that which is such that he, etc. + is angry].

(51) The angry man sleeps. [Main-text derivation: the initial P-marker, a tree over the logical form S(℩x)(Mx & Ax); of the tree itself only scattered node labels (S, Verb, VB, BE, A, N, M, IT, x, Adj) survive the scan.]

A comes into the distinguishing of restrictive from non-restrictive relatives as follows. First, John, who is angry, sleeps must involve a non-restrictive relative


[tree diagrams: the conjoined underlying structure reduced via coordination reduction + identity reduction (Postal); of the trees only scattered node labels (O, &, P, VB, BE, A, N, IT, x, Adj) survive the scan]

(since proper nouns don't take restrictive relatives). Non-restrictive relatives are generated from two separate sentences conjoined, e.g.,

[diagram: John is angry + John sleeps ⇒ John, who is angry, sleeps]

(where who is from some Rel in underlying structure). Similarly, for The man, who is angry, sleeps, where who is angry is non-restrictive:


[diagram: The man is angry + The man sleeps ⇒ The man, who is angry, sleeps, via O-deletion (O-shift here would instead generate That which is a man and is angry sleeps); only scattered node labels survive the scan]

But restrictive relatives do not derive from separate sentences conjoined (or generated that way in initial PS). They come from underlying conjunctions in the logical subject (v); e.g., for The man who is angry sleeps:


[diagram: the underlying structure for The man who is angry sleeps transformed via Q-adjunction and Q-attraction; only scattered node labels (VB, BE, A, N, M, IT, x, Adj) survive the scan]

(executing some permutations and deletions here for purposes of illustration only). And, a Rel element (relativizer) must be present via some A in the logical subject to generate the required relative pronoun who (viz., under Q). (For additional remarks on A and relative clauses, see the Appendix).

[derivation (51), continued: v-permutation (lowest S) applied; the tree itself is not recoverable from the scan]

44 The appendix will supplement this introduction to the idea of LSR grammars. In addition to what is presented there, some further remarks and qualifications are: (A) Nearly all the transformations are preliminary speculations based on intuitions and attitudes (hopefully not idiosyncratic) concerning logic and grammar. (Little that constitutes (i) rigorous presentation of grammatical evidence or (ii) empirical argumentation has been attempted). The aim at such an early stage of exploratory inquiry is merely to test the initial intuitions against (i) the possibility of generating surface structures of all (intuitively judged) relevant examples (i.e., cases where there are no transformations matching intuitions for an obviously relevant example would be disconfirming evidence for the approach) and (ii) the possibility of devising an efficient and non-idiosyncratic set of transformations (i.e., at some point a set would be too large and idiosyncratic to have any part in a system of linguistic description with pretensions of explanatory adequacy). Thus, nothing is particularly sacred about any single transformation (or set of them) so far proposed. Many sets may combine to a single complex transformation if future investigation warrants it (e.g., Q-adjunction and Q-attraction) and many could be broken down into more useful simpler transformations (e.g., in Nom-Q, the BE-deletion part may well be extractible for other purposes - say, in generating the non-restrictive relative in John, who is angry, sleeps, the relative pronoun would require an underlying A-constituent from which a BE would be deleted). Similarly, major categorial constituents - P, VB, BE, etc. - should all eventually be revised so as to include only those mentioned in some transformation, the usual criterion. (B) The basic idea includes the hypothesis of cycles of transformations applying to lower S-structures (or S-like ones) moving up the tree until the final lexical insertion provides a structure that is a surface structure (or is the input to post-lexical and post-cyclic transformations). No detailed cycles have been proposed. (The ones presented as working hypotheses in Peterson, 1970, are not even adequate for the examples illustrating them therein). However, some patterns of application have, of course, occurred. Out of these, in light of further analysis and development of this LSR grammar, some interesting hypotheses for definite cycles might be forged. The patterns:

(1) v-permutations; variable deletions; lexical insertion
(2) O-shift; then #1
(3) O-deletion; v-permutation; Nom-M (or Nom-Q + Rel-advance); variable deletions; lexical insertion
(4) Q-adjunction; Q-attraction; v-embedding; v-permutation; Nom-Q; then #1 or #4
(5) Q-adjunction; Q-attraction; P-embedding; then #4 or #5
(6) coordination reduction; identity reduction; then #1 or #2 or #3
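For convenience, the same six patterns can be set out as data. What follows is only an expository sketch: the Python layout is an assumption of presentation, not part of the grammar, and "#n" abbreviates "then continue as in pattern (n)".

# The six observed patterns of transformation application, restated.
# This encoding is an expository convenience only, not part of the
# LSR grammar; "#n" means "then continue as in pattern (n)".
patterns = {
    1: ["v-permutations", "variable deletions", "lexical insertion"],
    2: ["O-shift", "#1"],
    3: ["O-deletion", "v-permutation", "Nom-M (or Nom-Q + Rel-advance)",
        "variable deletions", "lexical insertion"],
    4: ["Q-adjunction", "Q-attraction", "v-embedding", "v-permutation",
        "Nom-Q", "#1 or #4"],
    5: ["Q-adjunction", "Q-attraction", "P-embedding", "#4 or #5"],
    6: ["coordination reduction", "identity reduction", "#1 or #2 or #3"],
}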


P-embed (detaching and nominalizing P1 to set it as head for a modifier or relative clause derived from P2): [tree diagram; only scattered node labels survive the scan]

(C) Concerning quantifiers and logical structure: Just as proposing a different series of transformations to generate surface structures for every distinct quantifier word or phrase would conflict with the aim of devising transformations with some descriptive generality, so proposing a different set of transformations for each kind of logical quantifier (each of which would have many different surface manifestations, such as the contrasts and similarities between every, each, any, etc.) would still be unpromising for obtaining generality. For it appears that many English sentences with quite different quantificational structure only share surface structures - e.g., the singular The whale sleeps (i.e., a particular one such as, say, the one in the harbor) and the general The whale sleeps (i.e., all do, a categorical like The whale is a mammal). However, others share much syntactic structure except for the specific connective and quantifier - e.g., All whales are mammals versus Some whales are males, the former with universal quantifier and conditional connective and the latter with existential quantifier and conjunction. In aiming for generality while trying to avoid inefficiency, some kind of balance must be achieved. For example, many kinds of simpler examples must be treated (such as those derivations in Chapter 1 above, the Appendix, and Peterson, 1970) and some slightly more complex ones must submit to plausible extensions - e.g., negation (totally avoided herein) and other quantifiers (such as many, most, and few, about which there is considerable difficulty due to lack of even preliminary proposals for 'baseline' inquiry like we do have with all and some in Aristotle's square of opposition and quantification theory). For example, Everything that is over anything is above it ((x)(y)(Oxy ⊃ Axy)) is only a slightly more complicated sentence that ought to be accounted for by transformations of this LSR grammar. It does not quite submit, however. Such a case does not disconfirm the whole approach, since it would appear that only small extensions would be required (say, a quantifier-distribution transformation, which is also needed for the even simpler case If anything is a whale, (then) it is a mammal). To demand present ability to treat the very difficult cases, on the other hand, seems too much to ask. For those are not tractable on any approach, and it is unreasonable and unfruitful to reject an alternative in grammatical theorizing simply because it does not show immediate promise for resolving the most difficult descriptive dilemmas. (I would put many of the examples Harman, 1970, treats in this latter class - e.g., A boy who was fooling them kissed many girls who loved him or (from Geach) Almost every man who borrows a book from a friend eventually returns it to him, cases where quantificational scopes seem to cross and become confused. Perhaps such sentences are just too complex (or even borderline) to merit immediate attention. Interestingly enough, though, the answer to the dilemma with these particular sentences is probably provided by Harman himself; see his (73) and (75), p. 293).

[derivation (51), continued: Nom-M45 (analogous to Nom-Q, it de-verbalizes a phrase, here changing a structure for The (that) man is angry to one for The (that) angry man)]

45 If there were a A under the Q here, then Nom-Q would apply to generate structures underlying That man which is angry... On Nom-Q and Nom-M, see Peterson (1970) 195, as well as the Appendix below.


[derivation (51), concluded: v-permutation (2nd cycle) and lexical insertion yield the surface structure - S dominating THE + angry + man + sleeps, i.e., The angry man sleeps]
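The whole derivation just sketched can be compressed into a toy computation. The following is only a minimal sketch: the rule names are those used in (51), but the tuple encoding, the function names, and the miniature lexicon are illustrative assumptions of presentation, not part of the LSR grammar itself.

# A toy encoding of the derivation of 'The angry man sleeps'.
# Logical form (50): S(ix)(Mx & Ax) -- sleeps(the x: man(x) & angry(x)).
logical_form = ("S", ("iota", "x", ("&", ("M", "x"), ("A", "x"))))

def v_permutation(struct):
    # Move the logical subject (the iota phrase) in front of the
    # predicate, mirroring the re-ordering of v and VB in the trees.
    pred, subject = struct
    return (subject, pred)

def nom_m(struct):
    # Nom-M: de-verbalize the restricting predicate, turning 'the x such
    # that x is a man and x is angry' into the nominal 'the angry man'.
    (_, _, (_, (noun, _), (adj, _))), pred = struct
    lexicon = {"M": "man", "A": "angry", "S": "sleeps"}  # assumed lexicon
    return ("THE", lexicon[adj], lexicon[noun]), lexicon[pred]

def lexical_insertion(struct):
    (det, adj, noun), verb = struct
    return " ".join([det.lower(), adj, noun, verb]).capitalize() + "."

print(lexical_insertion(nom_m(v_permutation(logical_form))))
# -> The angry man sleeps.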

2 STRUCTURES OF CONCEPTS

Two approaches to semantic theory involve some kind of conceptualism.1 I begin the philosophical examination of the required concept of concept in the rest of this essay.2 First, I further characterize the notion in this chapter. Then I defend the resultant conceptualism against some important criticisms (in section 3.1 of the next chapter) and extract some morals for scientific and philosophical psychology (in 3.2 below). More specifically, I discuss first in this chapter (2.1 and 2.2) the notion of concept taken as that representable by LOGICAL predicates (concepts as 'predicative'). To this formulation I add a new characteristic of concepts parallel to their 'predicativeness' - namely, their 'transformability', derived from the representation of concepts in (grammatical) transformational structures (2.3). The whole conception is summarized in conclusion (2.4).

2.1 LOGICAL PREDICATES AND CONCEPTS

Well, then, what ARE concepts? The first thing to notice is that concepts are STRUCTURAL. A variety of interrelated considerations force this characterization, I believe - among others: (i) logical analysis of concepts by way of logical predicates representing them

1 Also, one of the most useful (cf. Householder's [1969] review) introductory texts in linguistics today wholeheartedly adopts this conceptualism (Langacker, 1967).

2 I understand this task to be another example of the return via linguistics to traditional, basic epistemological topics - the sort of return illustrated in Chomsky's considerations of rationalism, innate ideas, and mind (cf. Chomsky, 1966, 1968, and 1971b).


in an interpreted formal system (or part of one) presupposes internal structure to most of them; (ii) structures of semantic markers (representing concepts) resulting from application of (Katzian) projection rules appear in many cases to be complex or compound semantic markers (cf. Caton's suggestions in section 1.2 above); and (iii) the definition of many words (words with 'content' or full morphemes, e.g., bachelor, farmer) requires specification of components (e.g., adult, male, unmarried, etc.; person, farms, farm, etc.) in relation to one another. There are at least two related aspects of this structuralness. First, concepts occur IN LARGER structures, some of which are also concepts and some of which are not. (For example, propositions themselves might easily be construed as structures of concepts that are not themselves just concepts). Second, nearly every concept appears to be itself complex, having FURTHER CONSTITUENTS that are themselves concepts. (From highly technical concepts like momentum to very ordinary ones like the color red, it is very hard to propose one that is even a good candidate for being absolutely simple, containing no further conceptual constituents). Though it is difficult to suppose that concepts are totally structural (no non-structural 'content' ever to be found after exhaustive analysis of each), still the structural character looms so large that very much of the task of saying what concepts are is saying what conceptual STRUCTURES are, especially that structure pertinent to the 'internal' nature of any complex concept. A major point to be made now is that the conception of concept in the semantic theories discussed in Chapter 1 is compatible with Frege's concept. Frege's notion of concept (Begriff) is that which is, and only is, the referent of a logical predicate.3 A distinctive

3 There may be the minor complaint that the logical approach to inquiry into meaning of natural language expressions - e.g., concepts (meanings) as referents of logical predicates - is misleading (perhaps seriously) because we don't speak in logic, but in natural language. And logic is, at best, artificial and ideal, standing only in stipulated relations to natural languages (e.g., Strawson's AND Russell's "there is no logic to ordinary language"; Strawson, 1950; Russell, 1957). At this point it ought to be guessed that I don't agree to this radical separating of logic from language, but insist on the inter-


kind of reference - best labeled 'predicative reference' - is intended. The reference to concepts is not possible, according to Frege, by any other means, such as by proper names, definite description, or other devices of singular reference. Concepts are (in his metaphor) 'unsaturated' and incomplete in a way directly opposite to the way that 'objects' (Gegenstand) are complete and 'saturated'. Frege, of course, has at least one class of 'objects' that is unusual - namely, truth values, the True and the False. However, if it is kept in mind that the truth values on such a view play a role analogous to what 'facts' do in so-called 'correspondence' theories of truth (though truth values are not themselves analogous or very similar to 'facts'), then Frege's objects that are truth values don't seem so strange. For there are in his ontology definite, completed objects - the denotations, respectively, of names and of true (or false) sentences - and the iNdefinite or UNcompleted ones, viz., concepts. And concepts are not even 'funny' kinds of objects. They aren't objects at all! But expressions for concepts (predicates) can be completed into expressions for objects by the addition of names (or other modifications which, in effect, make closed sentences out of the open ones which are predicates).4 Saturated concepts aren't concepts at all, but objects. What objects? The True, or else the False. In a NON-Fregean correspondence-theoretic approach, we might say that saturated concepts, ones made into objects, have been changed into the objects that are facts; e.g., for the concept

connectedness of such phenomena (logic to language, thought to language, meaning, concepts, and/or ideas to language and logic). At the very least, I have already shown that it is possible to plausibly hypothesize that logical forms are merely the quite abstract and/or 'underlying' formal structures of natural language expressions.

4 Thus, open sentences represent concepts taken as propositional functions. And there is no discrepancy thereby created on Frege's view, since "a concept is a function whose value is always a truth-value" ("Function and Concept", p. 30 in Geach and Black, 1960). (All references to Frege's work are to translations in Geach and Black, 1960. The main works I am relying on are his "Function and Concept", "On Concept and Object", and "On Sense and Reference").


being-red, referred to by the predicate (say) ...is red or, formalized, R..., it is completed into the object or fact that (say) my book is red, when the expression my book refers to my book (e.g., my Fodor and Katz volume), the expression ...is red refers to the concept being-red and, further, that being-red is 'true-of' my book. The latter proviso is not question-begging, since concepts themselves can just be regarded as (purported or possible) 'true-of-s'.5 Frege, however, prefers not to set up facts as objects which sentences correspond to if true and don't correspond to if false. He adopts the purer view - NOT introducing the new relation of correspondence AND the new category of facts, but only the new objects, the True and the False. Though uncommonsensical, he has not increased the number of problematic notions since he works only in terms of what we must have on any semantical account - viz., truth, falsity, and reference relationships. Still, the reference relationship that is PREDICATIVE reference is odd, isn't it? It is something like naming or denoting, but is suspicious if only because the bearers of the names are strange, viz., concepts. Historically, serious consideration and/or espousal of Frege's concepts is infrequent. Church (1951), for example, adopts them, but dismisses the 'unsaturated' aspect of them. In view of Frege's insistence on it (cf. his "On Concept and Object") such dismissal may amount to total rejection of Frege's concept of concept. I once argued that Frege must modify his claim that ONLY predicates refer to concepts to the claim that PRIMARILY predicates refer to concepts.6 I suggested that concepts themselves remain 'unsaturated' in a still Fregean sense in that use of, and

5 Alternatively, perhaps read "... is true of ..." here as "... characterizes ..." or "... is exemplified by ...". Tarskian influences are heavy. We have become accustomed to saying that only predicates like red (or is red) or man (or is a man) are TRUE of things (i.e., predicates rather than properties or relations expressed by them). I think this is a recent, UNnatural prejudice. See Peterson (1971a) and (1971b) for some arguments against it.

6 I developed that argument through considerations of examples like Socrates is humble; Humility is a virtue; Thus, Socrates is virtuous, wherein predicative reference to the concept (being) humble must also permit non-predicative reference to it by corresponding abstract singulars. Cf. Peterson (1968b).


knowledge of, concept WORDS is at bottom concerned with their use, and knowledge of them, as predicates.7 There are troubles with that sort of defense of Frege. The technical ones - concerning (i) the adequacy of my presumptions about the nature of a word (e.g., a single concept word having different uses or roles amounting to a change of syntactic category), or (ii) the relation of non-predicative reference to concepts to non-predicative reference to objects - might be cleared up one way or another. There are even deeper troubles, however. The distinction between sense (meaning) and reference seems to have been either abandoned or the two parts conflated with regard to expressions that are logical predicates. For applied to general terms (predicates) the sense-reference distinction amounts to the extension-intension distinction, the extension being what the predicate is true of or applies to (the class of such things) and the intension being the criterion property (concept, meaning, etc.) which an object must exemplify to be within the extension. Thus, the intension or sense of a predicate can be construed as a concept or (intensional) meaning also. But on Frege's approach, a predicate also 'predicatively' REFERS to a concept. Thus, predicative reference is not reference of general terms in the sense of referring to their extensions. Rather it appears to be reference to intensions. But then a predicate term both HAS a meaning and (predicatively) REFERS to it. If both, then we at least have to explain the utility and inter-

7 My analysis was an expansion on the idea expressed by Geach: "When a general term occurs non-predicatively in ordinary language, it 'stands for' a concept only because it can be worked around into such a predicative occurrence" (Geach and Anscombe, 1961, 155). The working around, I contended, amounted to this. For all uses of each concept word, if it is the only use of that word or else the use that is semantically prior to all other uses of that word, then it is a predicative use. Plus: For X1 and X2 that are two distinct kinds of uses or functions for a word (or expression) X, X1 is semantically prior to X2 if and only if understanding the word or expression X as X2 (knowing how to use X as X2) requires understanding how to use it as X1, and understanding of X1 does not in any similar sense require the understanding of X2. (I was thinking of two different 'uses' of a word in the sense of using it as a logical subject as versus using it as a logical predicate).
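The priority clause of this footnote admits of a bare formal restatement. A sketch only: the symbolization, including the understanding-operator U, is an assumption of presentation, not Geach's or anyone else's notation.

% Semantic priority, restated. Read U(X : Xi) as 'one understands how to
% use X as Xi', and the double arrow as 'requires' (a necessity-like
% connective, NOT the material conditional).
\[
X_1 \prec X_2 \;\leftrightarrow\;
\bigl[\, U(X{:}X_2) \Rightarrow U(X{:}X_1) \,\bigr]
\;\wedge\;
\neg\,\bigl[\, U(X{:}X_1) \Rightarrow U(X{:}X_2) \,\bigr]
\]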


relationships of both. If one collapses to the other, then which is it? And would the Fregean conception of concept survive?8 I propose now that the Fregean notion of 'predicative' reference be itself thought of as more of a METAPHOR than a technical part of the theory or approach. That is, just as natural language expressions that are predicates can be thought of as 'unsaturated' METAPHORICALLY (or 'incomplete' though 'completable'), so the unique kind of 'reference' by predicates is also a metaphor. The basis of the latter metaphor is that a person making some audible sound in the conventional phonetic way might be said to use that sound as a sign (name, or referring device) for a concept if the sound is a manifestation of a logical predicate. Does that mean that a logical predicate, an actual word or phrase playing that role in the language, REFERS to a concept? I don't think it does mean that - only metaphorically or analogously. For a word or phrase is NOT JUST a sound. It is a sound-meaning correspondence. (That is, I adopt this doctrinal assumption). Thus, the SOUND could 'refer' to the concept in just the sense that an utterance of that sound, other things being equal, is manifesting part of the sound-meaning correspondence that is what the word is. An utterance of a sound might, then, be construed as a kind of reference to what the word in part is - viz., to the other part, its meaning.9 If that is what Frege could have eventually come to, or what might be shown compatible with his view, then we are still Fregean and not at the point (as Church is) of denying altogether his 'unsaturated' metaphor or 'predicative' reference. Still better, I believe, is to use the term 'logical predicate' more materially - i.e., to indicate a class of technical notation types (of which tokens are used in logical analysis and, as I have above,

8 I am grateful to Michael Dunn for bringing this point out in his commentary on my paper (Peterson, 1968b). As I responded at the time, though this is a dilemma objectively, it is not a mistake in understanding Frege. Frege does not resolve it for us. It is genuinely a dilemma within his view.

9 I do not, by the way, believe that this kind of 'reference' is at all analogous to that of proper nouns or similar referring expressions. About proper nouns and most definite descriptions I would be inclined to follow an approach somewhat like that of Donnellan (1966, 1970).


in grammatical and semantical descriptions within propositions of semantic inquiry). We attempt in linguistic description to represent accurately the nature of, and the rules and relationships between, various components of sentences, phrases and words of a language. In the semantical part of the endeavor, we can choose (as I recommend and is often done) to utilize notations called logical predicates to 'stand for' or REPRESENT units (atomic and molecular) of meaning. The notations, then, REPRESENT concepts. Logical predicates are then notation types we use in expressing technical propositions of semantic description. Tokens of them occur in tokens of the propositions as actually manifested somewhere (best written down, I think, since when I speak, I speak English - even if it is a rather obscure dialect of logic class and/or grammatical speculation). All of this supports my recommendation to replace Frege's notion of predicative reference between logical predicate and concept with the relation of representation. Logical predicates REPRESENT (don't refer to) concepts. Concepts are still, however, metaphorically unsaturated or incomplete, since speakers cannot use expressions for them to directly indicate (Fregean) objects - e.g., physical (nameable) objects, or the True and the False.
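The recommendation just made - representation in place of predicative reference - can be pictured with a deliberately crude model. A minimal sketch, assuming a toy inventory of notations and concepts; none of the particular entries is part of the theory.

# A crude picture of the representation relation: predicate NOTATIONS
# are types used in semantic description, and the description PAIRS each
# notation with the concept it REPRESENTS. Nothing here 'refers' in
# Frege's predicative sense; representation is just this pairing.
representation = {
    "R...": "the concept being-red",
    "M...": "the concept being-a-man",
    "S...": "the concept sleeping",
}

# A word, on the doctrinal assumption adopted above, is a sound-meaning
# correspondence; an utterance of the sound manifests the pairing.
word_red = {"sound": "/red/", "meaning": representation["R..."]}
print(word_red["sound"], "manifests", word_red["meaning"])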

2.2 LOGICAL STRUCTURE AS CONCEPTUAL STRUCTURE

I believe it is traditional in philosophical inquiry to assume that the relationships among concepts are LOGICAL relations in some sense or other of the term 'logical'. This immediately implies that the STRUCTURE of concepts which themselves are concepts - i.e., molecular concepts constituted of component concepts - is LOGICAL STRUCTURE. The suggestion that semantic items (meanings) be identified with what logical predicates (and structures of them that are also predicates) represent, leads immediately to the thesis that SEMANTIC structure - the structure of concepts that are the ultimate meanings of linguistic expressions - is logical structure. I do not have any serious objections to this thesis, although preliminary clarification of it is certainly warranted (and shows it to be rather


programmatic). I do have one general addition to it, however. Before making it (in section 2.3), I will display some of the details and difficulties of the identification of conceptual structure with logical structure. I will discuss three matters - logical structure for concept containment, the necessity feature of relationships between concepts in conceptual structures, and borderline cases. The main concern, however, is with the first point, logical structure. Since most concepts contain other concepts as parts or constituents,10 any such complex concept would evidently have to be a structure of those components, a conceptual structure. Then, if the conceptual structure is proposed to be logical structure, the logical connectives (or other such logical-structure-making operations) should serve to characterize the structure. For example, if we explain the analyticity of All bachelors are male by saying that the concept expressed by the predicate is contained in the concept expressed by the subject (Kant, Katz) - which I would say is only a hasty, superficial summary of what the analyticity in part consists in - then it must be that the CONCEPT male is contained in the CONCEPT bachelor in that the former concept is a component concept of the (logical) structure that is simply the concept bachelor itself. We might represent such a description of the concept bachelor, then, as follows:

(1) [tree diagram, not recoverable from the scan: B analyzed conjunctively - through molecular constituents Ci, Cj, Ck - into H, A, M, and the negation of E]

10 I will continue to speak rather materially about concepts; i.e., speak of concepts as if they were objects. In a way, of course, they must be objects - ABSTRACT ones, as numbers, sets, properties, etc. are. The way they cannot be objects, Frege's way, bears even further examination which I won't go into.


where B represents the concept bachelor, H the concept human, Ci, Cj, Ck represent molecular concept constants, A represents the concept adult, M the concept male, and E the concept married. So, we represent the concept male as CONJUNCTIVELY contained in the conceptual structure which is the concept bachelor as a whole. (I don't propose this structure for the concept bachelor very seriously, only speculatively. Among other things, there is little motive for the (semantic) 'parsing' displayed - e.g., that A and M be constituents parallel to E. It is only that there might be such motives - particularly, empirical evidence via (scientific) linguistic theory for SEMANTIC 'constituent structure'). The idea of conceptual structure being logical appears, then, to be a plausible hypothesis based upon cursory inspection of cases like bachelor. But such sketchy analysis and evidence as this is certainly too meager to amount to a hypothesis with content or substance. OTHER logical relations must be relevant too, or else we have less of an hypothesis than a hope for one. Consideration of ONE other - disjunction - does yield a little more content for the hypothesis. However, it also raises some of the basic difficulties which might be easily overlooked and which, eventually, make the hypothesis more problematic than it seems at first sight. Just as some concepts seem, then, to be essentially conjunctive - e.g., bachelor (or bachelorhood) being the conjunction of (say) being adult, maleness, and being unmarried11 - so some concepts should be mainly DISJUNCTIVE, shouldn't they? Examples appear to be notions like congressman (being a senator or a representative) and feline (being a lion or being a tiger or being a panther or etc.). Immediately upon suggesting such examples, however, there is the observation that some (or all) such disjunctive concepts simply are cases of species falling under a common genus - where the genus is the disjunctive concept and the species are the separate disjuncts. If simple disjunction is all that is required to explain (describe or represent) species-genus contrasts, then we DO have another use of a simple logical relation. Then, the rough idea of conceptual

11 Alternatively, for the hempel-minded, All and only those things are bachelors that are adult and male and unmarried is analytic.


structure as logical structure is filled out one step beyond conjunction. However, there are reasons for concern here - for example, the possibility that genus-species contrasts are much more complex than disjunction alone can explain (describe or represent). Perhaps it is analogous to the degree of complexity that subjunctive conditionals explicating dispositions12 have over the simple (material) conditional. The qualm is that we are in the position of citing as evidence for a SIMPLE structural characteristic (e.g., simple disjunction or simple material conditional) phenomena that are doubtlessly much more complex - phenomena that the simple feature could not possibly do justice to. (I would think that this is obviously the case in offering the subjunctive conditional that is somehow involved in explicating dispositions as unequivocal evidence for conditional concepts easily characterizable via the material conditional).13 It is at least misleading, that is, to offer the genus-species relationship and dispositions as evidence for the existence of simple disjunctive concepts and material-conditional ones (respectively). The latter notions, at best, may be involved somehow in the explanation of the former, but how they are is just very unclear today. Therefore, unequivocal and unmisleading evidence for simple disjunctive concepts might well be wanted. I translate the question, then, into the following: Are there any MERELY disjunctive concepts, ones that are NOT the more complicated genera (under which species fall as disjunctive components)? And I would accept as suggestive, MINIMAL confirmation of a positive answer some uncontrived expressions in ordinary natural language for the concepts. Phrases can, of course, be easily constructed to this end - e.g., non-bachelorhood (taken, say, as the DISJUNCTION of non-

12 For example, the conditional involved in "dissolving when placed in water" as an explication of solubility - more particularly, x is soluble explicated as "if x were placed in water, then x would dissolve".

13 And this not even to broach the slightly simpler, but equally disquieting, criticisms of the material conditional as relevant to the non-subjunctive uses of if-then in ordinary language - e.g., Belnap (1970) logically, or Lakoff (1971) grammatically.


male, non-human, non-adult, and non-unmarried)14 which both my wife and I separately exemplify and for which there is no 'real' genus under which we fall as species other than the pseudo-genus non-bachelorhood itself. (That is, I presume that no one will propose non-bachelorhood as a central case of a generic concept like mammal or feline). Yet, such phrases might be objected to as evidence because they are at least slightly contrived. I contrived this one, for example, by applying DeMorgan's law to the conjunctive concept bachelorhood. So, single words might be the best uncontrived and unequivocal examples. Are there any WORDS (in English, say) for MERE disjunctions? Three candidates are sex (male or female), parity (even or odd, of numbers), and truth-value (true or false, and allowing this minimal phrase to be sufficiently word-like).15 I will examine these with the help of the following definitions:

(2) 'A = B & C' abbreviates 'A is the conjunctive concept constituted of B and C'
(3) (A = B & C) = ('(x)(Ax ≡ (Bx & Cx))' is analytic)
(4) 'D = E ∨ F' abbreviates 'D is the disjunctive concept constituted of E and F'
(5) (D = E ∨ F) = ('(x)(Dx ≡ (Ex ∨ Fx))' is analytic)

A, B, C, D, E, and F are here presumed to be constants naming concepts. The definitions can, of course, be easily generalized by letting these symbols (as well as their non-underlined predicate counterparts) be schemata. Also, note that I adopt the stylistic convention of underlining the predicate symbol to form the NAME

14 That is, one is a non-bachelor if and only if one is non-male OR non-human OR non-adult OR non-unmarried; hempel-mindedly, if 'all and only non-bachelors are non-male or non-human or non-adult or non-unmarried' is analytic.

15 My thanks to Mark Brown for these examples. Also, note that the sense of "disjunction" being discussed concerns the form of the meaning of an expression in just ONE sense of expression. I am not considering another way disjunction can come in; namely, the alternatives between different senses of a single, thus ambiguous, term. Parity may well be ambiguous. Most words are. But on one sense of it (one reading), it is apparently internally disjunctive.


of a concept from a predicate introducing it (e.g., the name B derived from the predicate Bx) in order to mirror both the abstraction operation of logical inquiry and the abstract singular formation of ordinary language (e.g., humility from humble). (2) and (4) are obviously mere abbreviations - or simple formalizations of the idea of conjunctive and disjunctive concepts (expressed in English on their right hand sides). (3) and (5) should ALSO be taken as such mere abbreviations or very simple formalizations (even though some of their right hand sides are in English). The reason is that (3) and (5) are not explications supposedly of any help to someone who, for example, does not already understand what is intended by 'analyticity' ascriptions (even if he has qualms about criteria for, or definition of, the concept). If one doesn't understand 'analyticity' ascriptions, then (3) and (5) won't help to acquire the understanding.16 The ideas behind (2) and (4) can also be formulated as:

(6) (X is conjunctive) if & only if (X = Y & Z)
(7) (X is disjunctive) if & only if (X = Y ∨ Z)

16 I presume that the ascription of analyticity in (3) and (5) might be replaced, or better explicated, by appropriate uses of concepts and systems of modal logic. There are a large and heterogeneous number of reasons why I do not consider principles and achievements of modal logic in this essay. I shall not discuss any of these reasons. Suffice it to say that I do not take any of my analyses, arguments, or preferences to be criticisms of the relevance of recent work in modal logic to many aspects of this enterprise. I am sure modal logic is relevant and that there is a large overlap of subject matter between the grammatico-logical approach to meaning I am taking and the topics of pure and applied systems of modal logic. Incidentally, the same terminological procedures instituted in (2) through (5) can be extended to n-place predicates, e.g., (R = Q & P) = ('(x)(y)(Rxy ≡ (Qxy & Pxy))' is analytic) (where, say, R = older-brother-hood, Qxy = x is older than y, and Pxy = x is a brother of y). And mixing degrees seems permissible to some extent; e.g., where Lx = x is liberated and Wxy = x is the wife of y, (T = L & W) = ('(x)(y)(Txy ≡ (Lx & Wxy))' is analytic), so that T = liberated wifehood. But there also seem to be limits (at least of expressibility of such cases in English). For there don't seem to be examples of concepts like G such that (G = H & F) = ('(x)(y)(Gxy ≡ (Hx & Fy))' is analytic), for some concepts H and F.


(where X, Y, and Z are concept schemata and the identities formalized on the right hand sides of (6) and (7) are understood in the sense of (3) and (5)). This format has the advantage of permitting easy introduction of the concept of 'NON-GENERICALLY DISJUNCTIVE' concept - where generic concepts are taken to be disjunctive, but not non-generically disjunctive.

(8) X is non-generically disjunctive if & only if (i) X is disjunctive, and (ii) X ≠ W

for any generic concept W, or (what amounts to the same thing) for W being any conjunctive concept embodying the necessary and sufficient conditions for anything satisfying a generic concept.17 As with (2)-(7), (8) is fairly uninformative. It just (partially) formalizes the idea of a 'mere' disjunctive concept - one that is not, in fact, a generic concept. To be such requires (i) that it be disjunctive and (ii) that it be non-generic. The details of being non-generic (OR being generic) may be somewhat beyond our present ability to explicate. But the little that I can say is enough for present purposes. The idea of a generic concept being identified with necessary AND sufficient conditions is the main point. Whether or not we know or can reasonably hypothesize what the necessary and sufficient conditions might be is NOT the point. Rather, a concept is generic (a concept of a genus under which species can

17 I do not for a moment wish anyone to think that the contents of (8) are not riddled with difficulties or that what I offer is supposed to clear up any of them. Obviously, identity and distinctness claims about concepts themselves will have all of the difficulties of such claims about individuals, as well as additional ones. Perhaps a logical approach to grammatical description will eventually yield a grammar of intensions (concepts) which will be of utility in overcoming some of these difficulties. But with respect to parity of numbers, for example, if it is claimed that the concept of parity is no different from (identical to) the concept of number itself (since extensionally equivalent to it), I can only informally insist that the concepts are distinct. And I can only defend my insistence with an appeal to linguistic data (our knowledge of the meanings of terms of the language we happen to speak), and can offer no criterion for effective testing of the distinctness. So, if (ii) of (8) appears exceedingly stipulative and vacuous, that is because it is.


be specified via appropriate differentia) if it presupposes or implies somehow that there is more than a verbal unity to the species falling under it, a unity that could be explicable via any case of any species manifesting some set of properties which ARE necessary and sufficient for the case to exemplify the GENUS.18 An approximation of a test, then, for the non-generically disjunctive concept would be this: if a concept, say A, is disjunctive and there is NO conjunction of concepts M & N & ... that can be identified with A (in the manner of (3)) and could also serve as necessary and sufficient conditions for a genus, say W (as in (8)), then A is non-generically disjunctive - or MERELY disjunctive. Non-bachelorhood is certainly that way. Although there do exist concepts (properties) that my wife and I both exemplify, no conjunction of them is necessary and sufficient for our BOTH being non-bachelors. Only the satisfaction of the disjunction itself (rather than some additional conjunction) makes us each non-bachelors. Given this very preliminary analysis, an example I have often used fails (or is, at least, borderline) - namely, congressman. Let C represent the concept congressman (or, better perhaps, "being-a-congressman"), and S and R represent the concepts senator and representative. (I speak here, of course, of the sense of those terms used to describe those legislative offices in the United States Federal Government). To be a congressman is just to be a senator OR a representative. But it is not the case that there is nothing more than that to the concept of congressman. For something like "federal legislator" itself might well express the genus (or conjunctive definition of it) to which both senators and representatives belong. Thus, condition (ii) of (8) appears violated for congressman taken as a non-generic disjunctive concept. What about sex, truth-value, and parity? Consider the concept

18 The genus of living-things may be composed of three major species disjunctively combined, such as (i) those undergoing photosynthesis, (ii) those that are self-moving, and (iii) mushrooms - as it has been, not quite non-disingenuously, put to me recently. Yet necessary and sufficient conditions satisfied by any case of a species of living thing will be sought, if the concept is a 'true' genus - say, the property of transmitting or possessing genetic material.


of sex derived from occurrence of the word sex in predicates like x is sexed or x has a sex; i.e., let sex (for the purposes of this discussion) name the concept that is exemplified by either someone who is a male or by someone else who is a female. (If one is either, then one is sexed, or has a sex).19 Thus, 19

(9) S = M ∨ F

read, "the concept of being a sex - or being sexed - is that of being-male OR being-female". And sex (so defined) does NOT appear to be a genus, since nothing else seems to be identifiable with it that is either a generic concept or the conjunction of

19 To be fair and UNmisleading, I believe, the three candidate disjunctive concepts must be FIRST-ORDER concepts (those introduced by first-order predicates). Sex must be short for being sexed or having a sex, parity for having a parity, and truth-value for having a truth-value (perhaps being truth-valued). Otherwise, following the more natural analysis of the English expressions, each of the three would be considered second-order concepts, not amenable to being candidates for the straightforward sort of generic concepts being discussed. For consider the following:
(a) Socrates is humble.
(b) Socrates is virtuous.
(c) Humility is a virtue.
(d) The humble are virtuous.
Only (d) is the appropriate form among (a)-(d) for (possibly) asserting a species-genus relationship. The (alleged) species of humble things (the humble) falls under the genus of the virtuous if it is the case that humble things must (not just accidentally) be virtuous. But the generic concept of virtuousness is not the same as the concept of virtue; i.e., not exactly the same, even if closely and necessarily related. For humility (that concept or property Socrates exemplifies if (a) is true), honesty, wisdom, and such other things are virtues. Socrates is not a virtue, but is virtuous. Virtue - or being-a-virtue - is a SECOND-order concept. Virtuousness, although obviously dependent on virtue, is a FIRST-order concept which Socrates exemplifies, if (b) is true. So, (d) alone - and not (c) - is parallel to the more ordinary species-genus assertions like 'man is a mammal' (alternatively, 'men are mammals', or better, 'for anything, if it is a man, then it is a mammal', which clearly shows the first-order quality of both species and genus concepts). The following propositions have the form of (c) - second-order singular predications - rather than of (d):
(e) Truth is a truth-value.
(f) Female(ness) is a sex.
(g) Oddness is a parity (?).
(I admit the questionableness of (g), but it would seem to be presupposed as at least possibly grammatical in light of the acceptability of such statements as "3 and 5 have the same parity" and "There are many ordered pairs of numbers in which the second differs in parity from the first", statements more typical of the intended mathematical sense of parity). But then truth-value, sex, and parity (as expressed in statements like (e)-(g)) are not candidates for first-order genera (like the virtuous or mammals), but are concepts appropriate for describing first-order concepts themselves. (Some, for example, would claim - on the presumption that common nouns typically introduce classes - that truth-value, sex, and parity are names of two-membered classes of properties (containing, respectively, truth and falsity, femaleness and maleness, and evenness and oddness), exactly parallel to the way virtue is the name of a larger class containing honesty, wisdom, humility, and similar virtues). So, the first-order correlates of sex, parity, and truth-value are required. They are easily generable from (e)-(g) by using (c) and (d) as a guide; namely,
(h) Being true (or false) is having a truth-value (being truth-valued).
(i) Being female (or male) is having a sex (being sexed).
(j) Being odd (or even) is having a parity.
Or, alternatively:
(h') The true (or false) (ones) are truth-valued.
(i') The female (or male) are sexed.
(j') The odd (or even) (ones) have parity (?are paritied?).
All of these have the form that (d), as well as Men are mammals, have, namely:
(k) (x)(Fx ⊃ Gx)
where the potential species in predicative form take the F position (e.g., true, female, odd, humble, man) and the potential genera in predicative form take the G position (e.g., truth-valued, sexed, having parity, virtuous, mammal). So, it should be reasonably clear (though some of the judgments involved in this analysis might well be open to more discussion) that it is the first-order correlates of sex, parity, and truth-value that must be discussed as potential generic concepts - roughly parallel to the generic concepts of mammal and the virtuous.


necessary and sufficient conditions for one. Restricted to humans, for the sake of discussion, one might propose the extensionally equivalent concept of human itself - i.e., one is sexed if and only if one is a human. To be either male or female is all and only to be a human. The only reply I can make to this (if it is taken as reason for identifying being male-or-female with being human) is that the semantic data don't seem that way to me. If we are to admit the EXISTENCE of extensionally equivalent but intensionally distinct general terms in English (which I believe we certainly must - e.g., "equilateral" and "equiangular"), then this seems to be another one of those cases. Maybe all and only humans are male or female (which, remember, is an artificial proviso only for this discussion; a better example of the same dilemma is having-parity and being-a-number, about which see below). And maybe even necessarily so. Still that need not entail intensional identity. That the two aren't intensionally identical (the two concepts distinct) is just an EMPIRICAL claim. If there is no agreement about the empirical data, then I must simply drop the example. A PRIORI demonstration one way or the other is irrelevant. The same appears to hold for truth-value (viz., being truth-valued) and parity (viz., having a parity). Being truth-valued is just being true or being false, a disjunctive concept exemplified by statements or propositions. And having parity is just being odd or being even. Again the question of a genus which the disjunctive concept really amounts to can be raised by proposing that one results from discovery of an extensionally equivalent term (e.g., "being a number" with respect to "having a parity", and "being a proposition" with respect to "being truth-valued"). If the extensional equivalence of "proposition" and "truth-valued", and of "number" and "having a parity", is sufficient grounds for their intensional identity, then we would appear to have good candidates for genera that the two disjunctive concepts are identical to, respectively. But notice that the argument can go both ways. That is, we might claim that the existence of an extensionally equivalent term (e.g., "proposition" with respect to "truth-valued") only proves at best that the equivalent (non-disjunctive) term ("propo-


sition") is really just merely disjunctive - UNLESS there is another conjunctive concept identical to that introduced by the term ("proposition") that constitutes the necessary and sufficient conditions for the application of the term (the definition of the genus, so to speak). That is, if we can never say what it is to be a proposition over and above being either true or false (e.g., fill in the predicate schemata Ni, N2, etc. of "x is a proposition if and only if x is Ni andx is Nz AND ..." with predicates expressing concepts that do constitute the necessary and sufficient conditions for being a proposition), then it must be a non-generic, disjunctive concept. Being a proposition does appear to ME to be merely disjunctive, but being a number does not. Again, someone might propose that the extensional equivalence of "being a number" and "having a parity" is grounds for their intensional equivalence. And if they are intensionally equivalent, then the concepts of having a parity and being a number are the same. But here (as in the more artificial case of "male-or-female" restricted to humans, discussed above) the matter seems to me to be otherwise - which is, still again, an EMPIRICAL claim. The meanings of "being a number" and "having a parity" just do seem distinct (in my and many others' dialects of English) - i.e., the two concepts are distinct, even if the predicates for them are extensionally equivalent. And, in this case, there is more convincing related evidence for the distinctness - principally, in the significant uses of expressions like "be the same in parity as", expressions which are not replaceable by any phrases involving primarily the use of "number". So, being a number is not identical to having a parity (or so it appears) and no other candidates seem to be available for the genus that having a parity might be. It is, I submit, non-generically disjunctive.20 20

20 This discussion may have some bearing on arguments about definite descriptions and truth-value gaps for propositions (or statements). Should, for example, 'The present King of France is bald' be said to express a proposition (or make a statement, or be capable of being used today by a person in an utterance that makes a statement or assertion) even though a presupposition (or implication) of it is false? There are at least three well-known positions: (i) Such items are propositions that are false (Russell, Quine). (In Quine's case, the motive seems to be inter alia to drop the sentence-statement distinc-


A SECOND thing that can be said about conceptual structure as logical structure - in addition to the relevance (to a degree) of the logical connectives for characterizing part of the 'containment' structure - is that the RELATIONSHIPS concepts stand in to one another are NECESSARY relationships, not contingent ones. If a concept contains another as a component of its structure, then it contains it necessarily. If it didn't, it wouldn't be that concept, but another. Thus, analogous to my INformal use of "analytic" above to suggest the full force of logical connectives holding between component concepts in a conceptual structure, a (perhaps) better characterization would be in MODAL terms - e.g., "Bachelors necessarily are males" instead of "'All bachelors are male' is analytic". The relationships between distinctly different concepts - concepts which are not constituents of one another - are also necessary relationships. If concept A is distinct from concept B (e.g., honesty and solubility) and neither is constituent of the other, then the relationships they stand in would be ones like (i) each sharing common constituents, (ii) being 'denials' of one another (e.g., the

104

STRUCTURES OF CONCEPTS

conjunction of both being a contradictory concept),21 (iii) being compatible with one another (the conjunction being suitable in the structure of a covering concept of which they are components),22 or (iv) being in some (necessary) relation to some third (or two others, or n) such as indirect deducibility, independence, etc. FINALLY, there are doubtlessly borderline cases, cases of (purported) concepts where we can't tell easily (or even perhaps after exhaustive inquiry) whether we have, say, two concepts distinct from each other though standing in a close relation (e.g., equilaterality and equiangularity) or really one single concept with two aspects of it which we mistakenly take to be distinct concepts (being-a-proposition and being-truth-valued, perhaps). And (hor21

But also see Katz' antonymous «-tuples, Katz, 1966, 195ff. It is often presumed, I believe, that the criterion for combining of concepts into a covering concept of which they are components is not contradictoriness -i.e., that contradictory concepts cannot be so combined - but rather something akin to grammaticality. For contradictory concepts certainly exist qua concepts - e.g., round-square. It is instances or exemplifications of such concepts which don't exist. Indeed, they CANnot. The idea of necessary falsity (contradiction) of statements rests on that. For a predicate expression (e.g., '... is a round-square') can be neither true nor false of an object if it is nonsense (insignificant). And contradictory statements are certainly not nonsense but FALSE and, thereby, meaningful. (If they were not, their denials could not be true and be prime examples of necessary truths). Thus, the existence of meanings that are contradictory (i.e., lead to contradictory sentences and statements) is required. On the conceptualistic approach, meanings at bottom are concepts and conceptual structures. Thus, some conceptual structures must be 'contradictory'. (I admit some looseness of terminology here. A careful account of these matters is certainly required. I am not giving that account, but only introducing aspects of, and motivations for, an inquiry that could eventually produce it). On the other hand, being-a-virtue-AND-not-being-a-virtue is a second-order conjunctive concept which is 'contradictory' in much the same way that round-square is (at the first-order level). (I take the contradictoriness of roundsquare to come down to two conflicting component concepts - say, having all points on the circumference equidistant from the center AND not having all points on the circumference equidistant from the center). Just as no OBJECT can be a round-square, so no CONCEPT can be a virtue which is not a virtue (unequivocally). But, then, contradictoriness is still at bottom the criterion, rather than grammaticality or categorial significance (the opposite of categorial insignificance or 'category' mistakes). 22

STRUCTURES OF CONCEPTS

105

rendously, but still possibly), what if MOST concepts underlying most linguistic discourse are in the borderline category? I don't think it can be decided at the outset - or even discussed very profitably - whether borderline cases are symptoms of our own investigative insufficiency (whether lack of information or lack of ability) or symptoms of the real structure of the conceptual world (e.g., the real existence of concepts 'in between' two distinct ones, or parts of one complex one, or even worse, a gradual shading off from identity to distinctness). The problem is similar in perplexity to the question of elementary concepts - viz., are there simple, unanalyzable concepts of which all others are structures? (I take this question to be closely related to many questions about simple or atomic facts or propositions - e.g., Russell, Wittgenstein, Carnap). That question too should not be, and maybe cannot be, immediately investigated, only kept in mind. On the one hand, it seems obvious that there must be. On the other, the difficulty of ever proposing what they are suggests that we should defer judgment as to whether they must exist (while keeping in mind that if there are not atomic concepts, we must eventually make a plausible case for the possibility of a theory of conceptual structures WITHOUT simples). What is the moral of this story that identifies conceptual structure with logical structure? There are two morals, I believe. First is that the paucity of detail and the great number of open questions could be used as an argument against the identification of logical structure with conceptual structure. Secondly, on the other hand, it is still the main substantive proposal. The content of the hypothesis is the analogy of unformalized and ill-understood 'logical' structure to well-understood logical structures (mainly, the propositional and functional calculi). Dispositions, cause-and-effect, genus and species, etc., all must have their 'logic'. We don't know it now. (Maybe most of it will, in fact, issue from contemporary inquiries in modal logic). But whatever such 'logics' turn out to be, they will all be analogous (at least roughly) to the simple logical structures of the propositional calculus, and they will play an explanatory role analogous to that played by the propositional

106

STRUCTURES OF CONCEPTS

calculus in explaining certain inferences expressible in ordinary language. Given the understanding of all of these 'logics' - a very large proviso - the claim is simply that just as the logic of sentence conjunction can be transferred to the explanation of the logical nature of conjunction between concepts, so further 'logics' can be transferred to (or, are indeed identical to) explanation of the other connections between concepts. Perhaps, then, the thesis of 'logical' conceptual structure is more of a program than a well-confirmed, or even very well-understood, hypothesis. So be it. The best is none too good. Proponents and opponents alike should be aware of the shortcomings as well as the promise.
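
The transfer claim can be given a small concrete sketch. What follows is my own illustration, not part of any of the theories discussed (the function names and the toy 'component analysis' of round and square are assumptions of the example): concepts are modelled as one-place predicates, and the conjunction of concepts is defined by carrying over the truth-functional conjunction of sentences, so that the contradictoriness of round-square reduces to the ordinary propositional-calculus fact that its conjuncts can never be jointly satisfied.

    # A minimal sketch of the 'transfer' claim: concepts modelled as
    # one-place predicates, with concept conjunction defined by carrying
    # over the truth-functional conjunction of sentences.

    def conjoin(c1, c2):
        """The covering concept whose component concepts are c1 and c2."""
        return lambda x: c1(x) and c2(x)

    # Toy component analysis (an assumption of the example): shapes are
    # described by their number of sides, a circle having none.
    is_round = lambda shape: shape["sides"] == 0
    is_square = lambda shape: shape["sides"] == 4

    round_square = conjoin(is_round, is_square)  # a 'contradictory' concept

    domain = [{"sides": n} for n in range(13)]

    # The concept exists qua predicate; what cannot exist is an instance:
    assert not any(round_square(shape) for shape in domain)

Nothing in the sketch decides the philosophical questions, of course; it only exhibits the sense in which the 'logic' of concept conjunction is the logic of sentence conjunction transferred.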

2.3 GRAMMATICAL STRUCTURE AS CONCEPTUAL STRUCTURE

Both interpretive AND generative semantic theories have concepts as basic units (atomic and molecular) of semantic content. Katz explicitly adopts conceptualism, though not the logical predicate representation. The generative theorists by and large recommend use of such logical predicates, which, on the view of concepts I recommend (Frege's), is simply an adoption of conceptualism. The logical approach to the question of what concepts are is that they are structures of further concepts and stand in necessary relations to distinct concepts and conceptual structures. Beyond this, or some extrapolation or restatement of the view sketched above about the logical nature of concepts, little can be said, I believe - except for Frege's metaphor. Concepts, since they are represented by logical predicates, are incomplete or 'unsaturated'. The syntactic or formal feature of expressions for concepts (logical predicate notations or natural-language 'predicative' expressions) is that they are incomplete and in need of completion to indicate something definite. Names indicate something definite - their bearers. Similarly, so do sentences (or uses of them, statements). They indicate truth values (Frege) or facts (correspondence theories). Thus, concepts, whatever more they are, are (at least) not objects or complete entities. They are 'incomplete' entities. A feature of the (logical) syntax of expressions for concepts has been transferred to the entities themselves. And the transfer is tentative and respectable, I believe, particularly if conceived of as a metaphor.

Now I propose an ADDITIONAL metaphor. I derive it by imitating the derivation of the 'unsaturated' or incomplete metaphor. The basis of it, however, is not the logical syntax, but the GRAMMATICAL syntax. I propose that concepts are not only 'unsaturated', but are 'transformable'. Concepts are fundamental items in a system that contains the transformations of grammatical methodology. Again, this is only a metaphor. For it is the theoretical representations of concepts and syntactical (though non-logical) relations among them which directly and NON-metaphorically have the transformational characteristics. Thus, a property of the representations, and systems of them, is transferred to the entities represented. Concepts, themselves, are 'transformable', just as they are 'predicative' or incomplete (unsaturated). By such slow and metaphorical stages we might eventually begin to formulate a conception of what concepts are in themselves. This is, of course, only a beginning.

The considerations which prompt the new metaphor are as follows. Concepts are representable in systems (descriptive systems applicable to natural languages) wherein they are related to one another by relations or operations best characterized as grammatical transformations in the abstract sense. In ADDITION to logical relations and operations, concepts are related through grammatical relations and operations. And these operations are not the SIMPLE grammatical ones already found in logical systems; namely, the CS and CF phrase structure grammars that enumerate the wffs of a logical system. Rather they are the more complicated and more interesting ones typical of systems for describing natural languages - viz., transformations in transformational grammars. Consider McCawley's example: x kills y.

The concept is a two-place relation between x (so-called subject) and y (so-called object). The representation is designated as a sort of sentence, or an S-structure:

(10) [S x kills y]

or better, using K to represent the concept:

(11) [S K x y]

Does K represent the concept kills or is it K x y that does? ('S' is a node label assigning minimal structure - i.e., the rule S → Kxy must have applied). Isn't Kxy as an abstract sentence structure more complex than the concept that is its part? These might be rhetorical questions by now. K alone is only PART of a symbol, analogous to the horizontal slash that might be the crossing of a t. A predicate must have individual subject places, one or more corresponding to the degree of the predicate. It can't be abstracted from its degree.23 Thus, Kxy - with x and y as so-called 'free variables' - represents the concept kills, by x of y. The predicate is a propositional function, and an open sentence and, thereby, an S-structure. According to McCawley, that concept reduces grammatically to (using words instead of predicate symbols for convenience here):

(12) [S x causes [S [S not [S y is alive]] becomes]]

23 Some of the problems of 'variable polyadicity' - cf. Davidson (1968), Clark (1970), and Parsons (1970) for example - might suggest that abstracting a predicate from its degree, though radical, would be a good idea; particularly in light of the extreme presumptions or restrictions that Davidson, Clark, and Parsons (and, doubtless, many others) must adopt to make their 'solutions' work.

That is, causes is a two-place relation between x and another (sentential) conceptual structure, which is the S-structure under the second highest S (which resolves to y to become not alive). But that structure itself, viz. S becomes, contains another embedded S-structure consisting of the denial of still another S (the lowest, viz., y is alive). The whole conceptual structure from which x kills y is derived is represented in (12). The derivation, however, is transformational. (12) represents the SEMANTIC P-marker from which the structure of x kills y can be derived. But the structure of x kills y is not simply that structure and nothing else. For transformations of the semantic (pre-lexical) part of the grammar must transmute it into one which more closely corresponds to the sequence of those concepts appropriate to x kills y. And that DERIVED structure must be something like:

(13) [S x causes [S y to become [S not alive]]]

which is clearly distinct from (12) and must be transformationally related to it. Thus, concepts (which are open S-structures -
propositional functions or open sentences) are not only 'predicative' (since they play that role in logical systems) but are also 'transformable', since they play that (structure-modifying) role in grammatical derivation - grammatical derivation at a very semantical level (stage), remember.24 Moreover, whether or not we modify an LSR grammar to add even earlier ('deeper') stages prior to the generation of complex concepts (so that the derivation of any sentence always starts from semantic 'simples' and no highest S goes to a predicate which is transformationally derivable from underlying conceptual structures), the lesson of McCawley's example still holds. For if we permit

(14) [S [S x kills y]]

24 Kill is no isolated example. Rather, it is typical of any so-called causative verb. Some such predicates have the same surface forms for the causative as well as for a basic constituent - e.g., sank, as in The Bismarck sank and as in I sank the Bismarck. Clearly, there are two predicates sank here, one more complex than and dependent on the other. The sank of I sank the Bismarck must 'contain' in some way the other one - say, I caused (the Bismarck sinks) => I caused the Bismarck to sink => I sank the Bismarck. I see no way these relationships could be considered just 'logical' ones, disregarding transformational relations. (Or: any way of formulating the 'logical' relations would just be a disguised way of admitting the transformational relations in question). Middle verbs (ones without unequivocal passives) give more examples - e.g., weigh (as in The book weighs 1 lb., which has no passive *1 lb. is weighed by the book) vis-a-vis the predicate of I weighed the book. The 'logic' of the two weigh's must be mostly its transformational 'logic', rather than any natural extension of ordinary logic (such as is attempted with predicate modifiers). Can, for example, the relationship between John gave Mary presents and John gave presents be adequately explained (to show how or why the former entails the latter) in ANY way without transformations? Predicate modifiers (cf. Clark, 1970, or Parsons, 1970) might well be required for explaining entailments like Socrates drank the hemlock from Socrates quietly drank the hemlock, but to use them for the deletion in John gave presents would be empirically perverse. Further, the oft-heard assessment that such a proposition is elliptical (and, thus, the 'entailment' doesn't exist) is just pretending to dismiss the question by citing the proper answer to it.

to (schematically) represent insertion of a lexical item based upon occurrence in the initial P-marker of a representation of the CONCEPT kills (viz., the predicate Kxy), still we must afterwards deal with the SYNONYMY between x kills y and x causes y to become not alive. Thus, the structure

(15) [S x kills y]

must be conceptually equivalent to

(16) [S x causes [S y to become [S not alive]]]

or else to the P-marker the latter is derived from, viz. (12) (or even to both).
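
The pre-lexical step itself can be given a rough computational sketch. Everything below - the tuple encoding of P-markers, the rule, the miniature lexicon - is my own illustrative apparatus under stated assumptions, not McCawley's formalism; the point is only that the passage from (12) to the structure that kills lexicalizes is a structure-changing mapping of the transformational kind, not a logical inference.

    # A rough sketch of the pre-lexical derivation: the semantic P-marker
    # is collapsed by a predicate-raising-like step into one complex
    # predicate, which lexical insertion then replaces with a single verb.

    # The semantic P-marker corresponding to (12), in a normalized
    # predicate-first encoding (an assumption of this sketch):
    P_MARKER = ("S", "CAUSE", "x",
                ("S", "BECOME", ("S", "NOT", ("S", "ALIVE", "y"))))

    def raise_predicates(tree):
        """Gather the chain of embedded predicates into one complex
        predicate, returning (predicate chain, argument list)."""
        _, predicate, *rest = tree
        preds, args = [predicate], []
        for item in rest:
            if isinstance(item, tuple):    # an embedded S: recurse
                inner_preds, inner_args = raise_predicates(item)
                preds += inner_preds
                args += inner_args
            else:                          # a variable such as x or y
                args.append(item)
        return preds, args

    # A hypothetical one-entry lexicon pairing a complex predicate with a verb.
    LEXICON = {("CAUSE", "BECOME", "NOT", "ALIVE"): "kill"}

    def lexicalize(tree):
        preds, args = raise_predicates(tree)
        verb = LEXICON.get(tuple(preds))
        return ("S", verb, *args) if verb else tree

    print(lexicalize(P_MARKER))  # -> ('S', 'kill', 'x', 'y')

Read in reverse, the same pairing is what underwrites the synonymy of x kills y with x causes y to become not alive.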

I will conclude this extension to the concept of concept - calling it 'transformable' in addition to 'predicative' - by way of answering a major objection. That objection is as follows. Logical structure as conceptual structure is necessary to it. We have no notion of concepts in relation to one another except as explicated by logical systematizations and explanations based upon them.25 No more is needed. In short, transformations on conceptual structures (e.g., the example above) are NON-necessary, contingent
relationships required (if that) only for the explanation of the natural language English (and perhaps others). It is logically possible to have a language requiring no such transformations in its description and, then, the concepts expressible in that language would not be 'transformable' in the sense intended.

25 And we may well hope that current research in modal logic will give us an adequate system for representing such structures and investigating them.

I can give no refutation of this objection. However, there are considerations which are good reasons for being open-minded about the question. First, all natural languages to one degree or another appear to require transformations in their descriptions. Perhaps it couldn't be proved that they must, but likewise there is no known proof (and none likely) that some language or other is 'simpler' in not having a transformational structure or that (even more unlikely) all COULD go without such structure in their descriptions. And the same (or similar) reasons that argue for transformational structure of all natural languages will probably also argue for the existence of transformational semantic structure of all of them. Now if they all do, that is no argument that they all must. Martian or some computer language which is effectively equivalent to (say) English - which I might use to communicate with my robot - might not have transformational structure. (No computer languages do now). I grant that. All I would say is that there is considerable inductive evidence for the generalization that all languages - ones as suitably effective for communication as natural languages - must have transformational structure. Such evidence will never prove it, but could make disproof look very unlikely.

Secondly, logical structure may be conceptual structure, but it also may not be. We certainly have some roadblocks - e.g., the logical analysis of the dispositional predicates. It would appear that dispositional predicates - e.g., soluble - unavoidably involve covering laws or law-like generalizations. If the full analysis of all (or some) dispositional predicates involves the use of an analysis of such laws, which in turn must involve large parts of the analysis of causal explanations, then can a 'language' of concepts less complex than a natural language be sufficient to the task? I doubt it. We will probably have to enrich natural language rather than impoverish it. Thus, there is good evidence that logical structure may be as much an ABSTRACTION from underlying conceptual structures (semantics) of natural language as McCawley's pre-lexical transformational structure is. I would claim that both are abstractions from it.

Thirdly, it seems clear from the briefest inspection of the grammars of logical systems that they are extremely simple. Most can be formulated with context-free phrase structure rules and some require context-sensitive rules. Deletion, I would hypothesize, is never required in these rules. (Remember that a grammar for a logical system will be the formalization of the well-formedness conditions for formulae of the system. All well-formed formulae of the system will be generable from a finite number of productions). It appears that the grammars of natural languages are fantastically complex compared to the grammars for logical systems. Thus, it would be at least theoretically interesting to explore complications of the grammatical syntax of logical systems - i.e., complicating the number and variety of well-formed formulae that will be included in the system and that will be used to paraphrase other formulae, deny them, etc. The SIMPLEST complication is the addition of some transformational rules. (These are not inference rules, but additional well-formedness rules that correspond to grammatical transformations). To my knowledge nothing about this has ever been done, or even suggested before.26
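
The contrast can be made vivid with a toy example of my own (the grammar, the notation, and the single added rule are assumptions of the sketch, not proposals from the literature): a few context-free productions suffice to enumerate the wffs of the propositional calculus, and the simplest complication of the kind just suggested is one added well-formedness rule which, like a grammatical transformation, maps whole structures onto whole paraphrasing structures.

    # A toy contrast: context-free productions enumerating propositional-
    # calculus wffs, plus one 'transformational' well-formedness rule
    # (a paraphrase mapping, not an inference rule).

    import itertools

    ATOMS = ["p", "q", "r"]

    def wffs(depth):
        """Enumerate wffs by the CF productions W -> atom | ~W | (W & W) | (W v W)."""
        if depth == 0:
            return list(ATOMS)
        smaller = wffs(depth - 1)
        out = list(smaller)
        out += ["~" + w for w in smaller]
        out += ["(%s %s %s)" % (a, op, b)
                for a, b in itertools.product(smaller, repeat=2)
                for op in ("&", "v")]
        return out

    def demorgan(wff):
        """One transformation-like rule: ~(A & B) => (~A v ~B).
        It rewrites a whole wff into another whole wff of the language
        (and, as a toy, handles only the outermost, non-nested case)."""
        if wff.startswith("~(") and " & " in wff:
            a, b = wff[2:-1].split(" & ", 1)
            return "(~%s v ~%s)" % (a, b)
        return wff

    assert "~(p & q)" in wffs(2)
    print(demorgan("~(p & q)"))  # -> (~p v ~q)

Even this one rule changes the character of the grammar: the CF productions build wffs from their parts, while the added rule relates complete wffs to complete wffs - which is just the respect in which grammatical transformations outrun phrase structure.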

26 The closest thing I know is Carnes, 1970. Certainly much more along these lines should be explored.

Finally, and most speculatively, I would suggest that most of what we know and communicate about the nature of conceptual or cognitive thinking (not just 'ideal' reasoning as in logical inquiry, but factual aspects of human thinking) CAN be expressed in language. Admittedly, much about thinking cannot, but that does not concern what I take to be CONCEPTUAL or COGNITIVE thinking; e.g., the way emotions operate upon the processes of my cognitive thought may not be directly expressible in language (though I may indeed succeed in making them known to you communicatively - I may punch you in the nose, thereby expressing AND communicating my anger to you). And various other elements of my thoughts - e.g., my 'singing' to myself (not singing at all, just thinking) Beethoven's Ninth - may not be expressible in language. For it is expressible and communicable in my humming out loud, playing the themes on the piano, or even perhaps writing them for you in musical notation. Yet a large portion of thought is conceptual or cognitive. Maybe all of what is traditionally referred to as rational thought, reflection, and consideration - that wherein we try to think straight, discover consequences, and solve problems by analyzing them (rather than, say, by observation). I conjecture that all of this thinking is expressible in natural languages. As a consequence, language and conceptual thinking come very close to each other, the one being the public side of the phenomena while the other is private perhaps.27 This is so close as to suggest that language (and/or some form or aspect of its use) CONSTITUTES thinking. I hypothesize this not on the behavioristic reduction scheme that mental phenomena can be reduced to physical or observable phenomena (thinking to language, language to speech, speech to physical occurrences in space and time). Rather, it is the other way around. Language canNOT be explained without considerable posits about mental phenomena (cf. Chomsky, Katz, et al.). How can we further explore these related mental phenomena? In and through language.

The last - that language somehow constitutes thought - is a rather remote consideration to bring to bear upon the question of whether conceptual structure involves MORE than just 'logical' structure. Yet, if this suspicion is what motivates inquiry in part into language structure, then there is a pleasant agreement of motives (that there be NON-logical structural components of conceptual structure). And that agreement is a loose sort of reason, though perhaps only for 'true believers'.

27 Also, there is a psychological question: Are there any well-confirmed cases of conceptual thinking (problem-solving, abstract inquiry, etc.) that are as complicated as that typically achieved by a well-developed language user (e.g., an adult human), yet also occur in the absence of any language development? I doubt it. (Helen Keller is not to the point, but rather confirms it. She certainly used language).

2.4 THE CONCEPT OF CONCEPT: SUMMARY

The conception of concept introduced is of an item that is (i) the meaning of linguistic expressions, (ii) 'predicative' AND 'transformable', and (iii) structural in nature. All these points are interrelated, not independent.

First, 'concept' is a very direct alternative expression for 'meaning' itself, in a suitably narrow sense of both. The exact sense of 'meaning' is that presumed in the current characterization of well-formed linguistic segments of a language (words, phrases, sentences) as sound-meaning correspondences - words, phrases, or sentences not being separate objects related to specifiable sounds and meanings, but rather being IDENTICAL to specific sound-meaning relationships. More specifically, it is the sense of 'meaning' that is the topic of empirical semantic theories of natural language (Katz, McCawley, etc.). This sense of 'meaning' may be describable by way of the tools of recent philosophical analysis - e.g., contrasted with reference in the meaning-reference distinction such as that given by Quine, equivalent to connotation and intension in the connotation-denotation and extension-intension distinctions, and closely related to sense (Sinn) of Frege's sense-reference distinction (though even closer to his notion of 'thought' perhaps). Finally, it should be noticed that in a non-reductionistic way, being meaningful or significant (as versus nonsense or insignificance) is just a matter of being grammatical in the sense of 'grammatical' adopted in generative semantics (and even in Chomsky and, perhaps, Katz). I say 'non-reductionistic' because there is not a prayer for the proposal that problems of meaning can be reduced to 'mere' problems of grammar (without meanings) and NON-semantic linguistic phenomena. That hope has been laid to rest in recent linguistics, which in general has shown the dependence of phonology and syntax on meaning (e.g., our knowledge of meanings of expressions of the language when we possess the language), rather than vice versa. (Answers to CRITICISMS of identifying linguistic meanings with concepts, and of the viability of conceptualism, appear in the next chapter).

Secondly, the role played by concepts (as meanings) in two theories contributes to their initial characterizations. First, concepts are what logical predicates represent in logical systems. Predicates can be taken to be notation types (as in the type-token distinction), in which case any predicate stands for, even refers to, concepts (when the predicate is given an interpretation, or is a constant rather than a variable or schema). This approach is Fregean and leads to the characterization of concepts - what predicates refer to - as 'predicative', 'unsaturated', or 'incomplete'. Alternatively, 'predicates' can be used more materially (as Aristotle usually does with his 'predicables'), wherein predicates simply ARE concepts. And, then, a notation type simply represents a predicate or a concept by playing the appropriate role in a notational system when that system represents (at least) first-order quantification theory.

The second theory in which the role played by concepts contributes to their characterization is semantic theory. On two of the predominant approaches to semantic theory today, concepts come in as basic atomic and molecular (simple and complex) units of meaning. In both, these concepts can be interpreted as equivalent to logical predicates. Caton's version of Katz' interpretive theory adopts logical predicates. Lakoff and Ross, McCawley, and others recommend a similar adoption of logical predicates to represent meanings. I propose that the additional feature of concepts represented by logical predicates that these theories suggest is that concepts are basic 'transformable' items. Thus, in sum, the role in logical and grammatical systems of description of concepts suggests that they are both 'predicative' (Frege) and 'transformable' entities.

Finally, concepts are structural. All concepts - except absolutely simple and atomic ones (if there are any) - are structures of component concepts. Concepts, then, are CONCEPTUAL STRUCTURES. The nature of the structure is both logical and grammatical-transformational. That conceptual structure is 'logical' seems required by traditional philosophical motives and there is, apparently, no strong argument against it (save that which resides in
the paucity of detail and the programmatic nature of the hypothesis). The transformational structure is a conjecture. It may be contingent - rather than essential or necessary (as with logical structure) - that concepts themselves stand in transformational relationships (such as are required for them to be expressible in natural language). However, the conjecture has scarcely been investigated, and there are reasons for thinking it could be true and should be explored.

In conclusion, the notion of conceptual structure and concept sketched here is an explication of the current notion of meaning involved in transformational approaches to linguistic description today. 'Meaning' and 'meanings' as theoretical concepts of linguistics simply reduce to the concept of concept introduced. In light of the wide and deep dependence of nearly all parts of linguistic description upon meaning, it is obvious that one of the fundamental theoretical concepts of linguistic theory in general is the concept of concept itself.28

28 This might be thought to be obviously paradoxical, circular, or both. I don't think it is, particularly not obviously so. That is not to say there are no problems. Great care must be taken in investigating semantical topics and reporting results so that narrowly circular and/or vacuous descriptions or explanations are not uncritically accepted. Katz, of course, has given us considerable help in answering superficial philosophical objections to this state of affairs. About the possible 'peculiarity' of this condition for semantic theory, see Peterson (1969a).

3 THE CONCEPT OF CONCEPT

There are significant ramifications of the notion of concept and conceptual structure introduced. Their empirical significance is what was discussed in Chapter 1. The ramifications I shall now discuss concern (i) the status of concepts with respect to certain traditional issues in philosophy (viz., universals and ideas) and (ii) their import for the study of cognitive mental phenomena. The presentation of philosophical ramifications (3.1) contains some defense of the concept of concept against expected philosophical criticism. About mind and cognitive processes (3.2), however, I only offer some impressions and preliminary proposals.

3.1 PHILOSOPHICAL RAMIFICATIONS

To bring out the philosophical issues consider again the simple question, 'What are concepts?' The first and obvious answer to the question is merely what I said concepts are in Chapter 2 above (with the introductory help of Chapter 1). Concepts are those kinds of things - required in semantic theory as ultimate units of meaning, preliminarily described by considering their nature as logical items and as grammatical ones. But presuming THAT is fully understood, it still may be asked, "But what are those kinds of things?" I can only take this to be asking, "Are concepts fully abstract or suprasensible entities, such as Platonic Forms or Ideas? Or are they simply ideas, the main constituents of thought when thinking? Or are they shared properties, whether Platonic universals ante rem or Aristotelian ones in re?" To these questions I answer, equivocally, "Yes!"

The philosophical question of what concepts are is the question of whether they are any, or all, of three kinds of things: (i) special, non-concrete, suprasensible, fully ABSTRACT ENTITIES (Plato's Forms will always be the best example of what these items are supposed to be, but numbers, classes, and propositions are also good, more mundane examples); (ii) IDEAS that constitute, or play a major role in, THINKING, are the 'objects' of thought while thinking, and/or are 'in' the mind and constitute consciousness in part (though there are possibly other constituents such as images, emotions, intentions, etc.); and (iii) UNIVERSALS, in the sense of 'shared', often observable PROPERTIES of concrete particulars (when taken to be not necessarily dependent on their instances, these items merge with those of (i); when dependent, or otherwise interpreted, they remain a separate category).

I answered with the equivocal "yes" - that concepts are all these things - because I believe it is obvious that they directly pertain to all of them, to abstract entities, to ideas, and to universals. The question, of course, is how. That question I will only begin to answer here, mainly by answering criticisms concerned with theories of each of the three kinds of items. Due to the nature of the case (the limitations of this essay and my approach via semantic theory), I will concentrate on the first two kinds of items - abstract entities and ideas - to the relative neglect of any other, or more broad, conception of universals.

As a preliminary point notice the TERMINOLOGICAL advantage of the word 'concept' in discussing these questions. Of all the terms involved - 'abstract entity', 'form', 'idea', 'property', 'universal' - the term 'concept' is the most versatile. It can substitute for any of the others without much strain. I can speak of all of Plato's examples of Forms - Justice, Beauty, Unity, etc. - as concepts if I append all the explanations Plato does; e.g., that concepts are eternal and unchanging, independent of being exemplified or known, constituting a hierarchy capped by the concept of the good, etc. I can speak of ideas - whether in Descartes, Locke, or Hume
- as concepts; e.g., that there are simple and complex concepts, that impressions are to be distinguished from concepts (the latter being faint copies of impressions), that I now have a particular concept in mind that I can or cannot communicate to you (linguistically or otherwise), that in mental activity what my thought breaks down into are concepts, etc. And I can speak of properties - whether dependent or independent of their exemplifications - as concepts; e.g., that 'to be is to be an instance (exemplification or case) of a concept', that a case of the concept white is now being perceived by me in looking at this page, that concepts are predicable of or present in substances, etc., etc.

This terminological versatility will probably be thought to be based solely upon an equivocation.1 Maybe; but at least 'concept' is wonderfully and usefully equivocal. I would be the last to contend that there is a genus, concept, of which each of the three - abstract entities, ideas, and universals - are species. For that would obscure rather than clarify the intricate epistemological issues involved, those concerning both knower (his ideas) and known (intelligibilia and/or concrete properties), as well as cognition itself (the possession of adequate ideas or perception of relations between them). Yet, is 'concept' MERELY equivocal - as the use of 'horse' to apply to Man o' War and to his picture is? I don't think so. There is some sort of thread between abstract entities, ideas, and properties (universals) that is obvious to the student of philosophy. The point is to discover the thread and make some sense of it. For that purpose I do contend that the term 'concept' is a very fruitful term to use.

First, consider abstract entities and independent universals. There is continual emphasis in recent discussions in linguistic theory on the 'abstractness' of various topics under discussion, such as language being what one possesses or 'knows' when he is competent in the language (as versus performance) and, thereby, being 'abstracted' from actual performances. Or a certain phonological representation as being 'abstract' or 'underlying' an even-

1 Heath (1967) - about whom more below - appears to think so.

tual, or even merely possible, phonetic manifestation. Or a theoretical item, such as semantic marker or transformational cycle, being an 'abstract' feature of linguistic description. This talk is not philosophically coercive. It does not force us to be Platonic or Aristotelian Realists. It would be fully compatible with the nominalistic thesis that there are, in the end, no abstract entities or universals, that there are only concrete particulars. For all these kinds of 'abstract' properties are still just properties - in one way or another, no matter how remote (and many are VERY remote) - of particulars, collections of particulars, and/or concrete events. The point of the terminological emphasis on 'abstractness' is to direct the attention of the linguist (or other scientist) to properties and relationships of a sort - perhaps VERY general and/or very mdirectly observable - that have not received enough consideration, according to the particular investigator. If a nominalist can explain away ANY universals or abstract entities, then he must surely be able to explain away ALL of them. The only question is can he do it for any.2 To the general nominalistic charge that there aren't any abstract entities, I will simply point out that no nominalist has ever made this case out. Of course, it is too difficult to expect that he could (Nelson Goodman notwithstanding). Numbers, classes, functions, relations, and operations on such mathematical objects either are, or are all directly concerned with, such entities. For most endeavors of any scientific interest, application of a large portion of mathematics is required, thereby, bringing into the endeavor all those mathematical objects that are candidates for being such abstract entities. Further, all contemporary science has considerable theoretical structure concerned with properties and relationships of entities that are only indirectly observable, if even that. Continual failure with attempts to reduce theoretical propositions, and the unobservables they concern, to observable phenomena is notorious 2 Interestingly enough, and due I suppose to motivations to reject behaviorism rather than nominalism, the polemics in linguistics have not concerned the nature and/or existence of abstract entities so much as they have concerned the nature of ideas - viz., mentalism in general, as contrasted with behaviorism.


THE CONCEPT OF CONCEPT

in the philosophy of science. Thus, not only the possibly naive arguments from what we experience, and what we say about it, produces evidence for the existence of abstract entities - e.g., the use of red to apply to what it is that is identical in two experiences - but the apparent nature of what scientific investigation involves also motivates acceptance of abstract entities. There is little reason to believe that nominalism is a remote possibility. One particular argument against abstract entities can be laid to rest now. I mean the argument that abstract entities (universals, Platonic forms, etc.) arise from a faulty theory of language. The argument goes like this: (1) General terms apply to many separate individuals in the world; i.e., the common noun man applies to me, you, and many others. (2) The MEANING of man is proposed to exist in order to explain how it can apply to many things yet 'mean' only ONE thing. (3) It is, thereby, felt that there exists a special entity - the Form or universal 'man' which is the 'meaning' the term has - and the separate particulars are also referred to by the term man because they stand in a special irreducible relation (instantiation or participation) to that entity. (4) Then, the argument concludes, this is an obviously incorrect theory of meaning - viz., one in which meaning is taken to be like (or identical to) naming, a term in relation to its meaning being confused with the relation between a name and a bearer of the name. (5) But common nouns and most other words are NOT names. Thus, there is no entity - e.g., form or abstract entity - which they need to name. And to propose one is to propound mysteries and multiply difficulties, rather than to resolve any. 3 The sketch I have given of the theory of language and meaning (in which the abstract entities called 'concepts' came up) provides enough material to see the error in this nominalistic line of reasoning. The charge is that non-names (words that are not proper 3

It is usually common nouns which are jumped on by nominalists talcing this line of argument. I presume they could follow the same line, though not without some increase in implausibility, by considering other words for abstract entities - e.g., adjectives (whiteness named by white, perhaps okay), verbs (swimmingness named by swims (?)), prepositions ('on-ness'(?) named by being-on's (?)), etc.

THE CONCEPT OF CONCEPT

123

names) are thought to be like names in how they get their meanings - their meanings being 'named' (referred to, etc.) by the words in question. Nothing could be further from the truth! It is very clear - as clear as any other single point in linguistics - that the MEANING of words, phrases, and sentences of a language have NOT been proposed to be what the expressions 'name'. PROPER NOUNS can be USED to name things. So using them seems a definite part of the use of a language in actual linguistic performances. BUT THE CONNECTION

OF PARTICULAR

PROPER

NOUNS

WITH

THEIR

BEARERS IS NO PART OF THE DESCRIPTION OF LINGUISTIC COMPETENCE

At most, in the language itself (what competence descriptions aim at representing) proper nouns are distinguished from other nouns (and words) by SYNTACTIC criteria (concerning syntactic contexts in which they occur). But that has nothing to do with BEARERS of the names. Bearer-to-name relations, in fact, are the preeminent REFERENCE relationships - 'reference', that is, in contradistinction to sense or meaning. And linguistic meaning is part of what is being investigated in the investigation into the language itself - as 'abstracted' from performances and uses of it (such as name-bearer relations). Thus, abstract entities that are concepts or meanings arise in explanations of meanings of linguistic expressions because IN THE LANGUAGE THAT THE PROPER NOUNS ARE PART OF.

o f a n EXPLICIT RECOGNITION THAT MEANING IS NOT NAMING a n d i n

answer (in part) to a call for explaining what meaning (significance, grammaticality) in this non-naming sense is. Thus, contemporary origins of the abstract entities that are meanings of expressions are not open to this traditional nominalistic charge of a confusion between meaning and naming.4 Now consider the second category of items to which concepts might be identified - ideas, constituents of thoughts, and mental items. If we take the view that outright adoption of the con4

Something might be salvaged of the nominalistic charge by incorporating part of the defense I mentioned for a strictly Fregean predicative reference (section 2.1 above) - viz., that the sound of a word (rather than the word itself) might be taken to 'predicatively refer' and thus 'name' its meaning or concept. However, the account would be strained and would lose most of the superficial plausibility that attached to the original nominalistic argument.

124

THE CONCEPT OF CONCEPT

ceptualism in semantic theory to be a mentalistic (as versus behavioristic) turn in linguistic theory - since meanings (concepts) must in some way concern or be mental acts, events, or phenomena - then that may well be true. I certainly hope conceptualistic semantic theory can shed some light on thinking, or mental activities and capacities. I will discuss some positive approaches in the next section. Here, while appropriate to philosophical ramifications, I will only mention the criticism of this so-called mentalism that derives from philosophical motives and sources. In general, I think the criticism is summed up in a kind of attack on conceptualism which takes the form of (i) identifying concepts with elements fully present in the consciousness - that ideas are introspectibly discoverable constituents of conscious mental experience - and then (ii) disconfirming the identification of meanings with concepts (on that introspectible rendition) by citing cases where expressions are meaningfully used though the requisite introspectibles are not present in the consciousness of the user. Jerrold J. Katz has ably defended mentalism in general in linguistics 5 and, in particular, has answered this criticism of conceptualism wherein concepts are identified with introspectibles. 6 Katz deals with Alston's statement 7 of the criticism. To start with, the answer is simply to deny that ideas must always and only be introspectibly specifiable. Katz says: When we say that thoughts and ideas need not be present in conscious experience and so need not be available to introspective observation, and further, as we would have to, that they are not publicly observable either, we are simply saying that they are unobservable in much the same sense in which physical scientists say that certain microentities and microprocesses are unobservable. We are being no more, or less, metaphysical than they. What gives their claims about such entities and processes empirical content is that their theories connect the postulated existence of such things with certain observable phenomena through a complex chain of deductive relations. Hence, scientific method offers us a straightforward way of establishing the existence of unobservable 5 6 7

Katz (1964). Katz (1966) 176-185. Alston (1964) 22-25.

THE CONCEPT OF CONCEPT

125

entities and processes, one which there is n o legitimate reason to preclude f r o m linguistics. 8 In brief, concepts t h o u g h certainly mental in s o m e w a y or o t h e r need n o t be identifiable with conscious correlates whenever t h e concepts are meanings o f linguistic expressions actually in use. Occasionally, s o m e concepts m a y be introspectibly ascertainable as meanings o f relevant expressions o f them. But just because s o m u c h o f the apparatus o f linguistic competence must be in s o m e w a y or another mental in nature, that alone is n o reason t o think that each element a n d every aspect o f linguistic c o m p e t e n c e is o p e n t o introspectible discovery. In the typical case, they simply are n o t . 9 There should be n o surprise that in semantics also, the constituent mental elements and capacities are similarly n o t reliably available for introspection. 1 0 8

Katz (1966) 181. For recent statements of this point in linguistics, see Chomsky (1968) 21ff. 10 It is well worth noting that neither Alston or Katz himself is fair to Locke on this issue. They both agree in attributing the introspectible-idea theory of meaning to Locke (Alston, 1964, 23 and Katz, 1966, 178). Locke does, of course, provide a theory ideas that could be interpreted that way, particularly with respect to his explanations of what he means by an idea. However, attentive reading of this Essay - not, mind you, requiring any subtle scholarship or interpretive skill - makes it clear that he did not lay himself open to the simple-minded view that ALL significant words stand for ideas (which, if unmodified, leads to Alston's notion of conceptualism). Locke says, "Besides words which are names of ideas in the mind, there are a great many others that are made use of to signify the CONNEXION that the mind gives to ideas, or to propositions, one with another. The mind, in communicating its thoughts to others, does not only need signs of the ideas it has then before it, but others also, to show or intimate some particular action of its own at that time, relating to those ideas". {Essay, Book III, Chapter VII, Section 1.) Because, perhaps, of the popularity of polemics about innate ideas (in the 17th as well as recently in the 20th centuries), Locke's full theory of ideas - especially with respect to theory of meaning (cf. Book III of the Essay) - has been neglected too much. Though Chomsky has remarked (in conversation, 1966) that Locke was not an empiricist in the sense in which he (Chomsky) was advocating rationalism of the empiricism-rationalism distinction, it is not necessarily true that Locke's theory is not genuine or basic empiricism (or shouldn't be so considered). I would contend the opposite of Chomsky - that Locke is the MAIN empiricist in the history of British philosophy and that a potentially fruitful empiricistic approach in epistemology and metaphysics has been wrongly ignored, because 9

126

THE CONCEPT OF CONCEPT

Before considering what is the most important problem among the philosophical criticisms of concepts, I will end my initial defense by considering a recent rendition of the less important criticisms - viz., P. L. Heath's article "Concept" in the Encyclopedia of Philosophy,n Heath's article is valuable because of its precise statement of the critical tradition in philosophy with respect to concepts, and because of its brevity. Heath considers the question of what concepts are by way of examining what 'HAVING a concept x' amounts to - viz., knowing the meaning of the word ' x \ having a recognitional capacity for cases of x, being able to 'think' of x OR x's, knowing the nature (properties) of x's. Contentions about the primacy, and the manner of it, of one or another explications of 'having a concept' lead to the various theories. Heath mainly considers two of them, the 'entity' (substantival) approach and the 'dispositional' (functional) one. I will consider his remarks on the 'entity' theory, since they are more germane to the concept of concept introduced. 12 Heath summarizes the 'entity' theories of concepts as taking concepts to be items like (i) subsistent word meanings, (ii) abstract ideas in the mind, and (iii) Forms. He asserts in general that no matter what entity a concept is identified with, it is (a) not observable, and (b) posited solely by the needs of the theory (i.e., entirely ad hoc), and (c) ineffective even if it did exist. Though the charge of not being observable (directly observable I must presume) is common enough in contemporary philosophical criticism, I think of inter alia Berkeley's and later empiricists' emphasis on an imagistic concept of idea and mental operation. Locke's theory of ideas was the barest beginnings of a conceptualism compatible with experience as well as linguistic facts. Semantically, his explorations and leads have never been picked up and fully developed. With the advent of serious semantic theory in scientific linguistics I hope they will be soon. Perhaps, N. Kretzman's (1968) valuable article signals the start. (Also, see Aarsleff, 1970). II Heath, 1967. 12 Though round-about, it is a kind of confirmation of the conceptualistic approach to meaning to find that discussions about concepts, such as Heath's, and discussions about meaning, such as Alston's, are about most of the same things - the nature of mental ideas, of abstract entities, and of shared properties.

THE CONCEPT OF CONCEPT

127

I am justified in dismissing it out of hand. In philosophy, as well as science, MOST of the interesting phenomena for inquiry are unobservable, and often only very indirectly disconfirmable, if at all. To even hint that the motives of a nearly totally discredited positivism are reasons against a proposal is complete anachronism. (It does happen, though, I admit). Heath's second and third objections do contain the seeds of serious criticism (which I believe is the same as Quine's, and which I will consider after Heath below). However, Heath does not develop them. His reasons for claiming that any entity identified with a concept (meaning, idea, Form, etc.) is a mere ad hoc calculation, which also plays no effective role in explaining anything even if it did exist, are as follows. The generality of concepts involves diverse phenomena e.g., (i) the capacity to apply words for them to many separate individuals, classes of individuals, and properties of them, (ii) many distinctly different images of cases of the concept can be produced (mental images I presume), (iii) there are many different examples of concepts (and contexts for them, and properties of them). In light of these, Heath says: It seems obvious that possession of these varied and interchangeable skills does not require, and would not be explained by, acquaintance with an abstract general entity, even if such entities were possible which they are not. Locke's infamous multiform triangle (Essay IV, vii, 9) is the classic illustration, alike of the absurdity and the explanatory uselessness, of any such attempt to determine what concepts are.13 In short, Heath takes the diverse evidence for, and phenomena of, concepts to be in itself reason for claiming that concepts are ad hoc and explanatorily vacuous. A curious argument that, which baldly asserts that the evidence for a proposal is evidence for its denial without further argument. Indeed, the only argument Heath has given is the appeal to Locke's 'general triangle' which is "neither oblique nor rectangle, neither equilateral, equicrural, nor scalenon; but all and none of these at once". First, as any reader (not scholar) of Locke knows, that passage is UNcharacteristic of Locke (occur13

Heath (1967) 178.

128

THE CONCEPT OF CONCEPT

ring in Book IV, not in Book II where he details his theory of ideas) and, in particular, it is associated with BERKELEY'S imagistic approach to ideas. (Heath does a disservice to Locke and his readers I believe by not associating his use of this contradictory triangle with Berkeley's views rather than Locke's). Secondly, as any reader of Descartes knows, there is an obvious answer to this challenge to imagine such a contradictory object. It is that images simply are not ideas and vice versa. (It has always escaped me how this point failed to have any effect upon Berkeley and later empiricists). For the contradictory triangle that Locke (in the same passage) is saying "requires some pains and skill to form the general idea of" is not an idea at all but an IMAGE. That is the lesson of Descartes' chiliagon (Meditations, VI). I can't imagine a thousandsided, (say) regular polygon - that is, distinctly visualize what it would precisely look like as distinct, in imagination, from a figure with one more or one less sides. But I know what one would BE and I know what my IDEA of one is. So too the triangle with all and none of those specific properties. The idea is contradictory. Thus, there could not be one, neither could I visualize or imagine what one would look like. Why? Because the nature of the IDEA of such a triangle is contradictory. Now is the contradictory idea of such an impossible triangle identical to the general idea of triangle - the idea or concept of triangle that is exemplified in any particular triangle I draw, imagine, or observe? No. And that alone is Locke's mistake - since he uncharacteristically proposes that it is; thereby, suggesting Berkeley's approach and unknowingly sowing the seeds of future unjust disdain of his theory of ideas. Therefore, what Heath takes to be an impossible idea or concept is not at all impossible qua idea or concept and is misleading as a proposal for what any conceptualist would propose the entity 'concept' to be. Heath's argument reduces to a trick based upon what no one ought to credit as a serious proposal for what a concept in any sense is - viz., a mental image. Of course, much of post-Lockean empiricism is based upon the trick also, but that doesn't diminish the seriousness of the error.


129

Heath adds two lame criticisms in concluding his attack on 'entity' theories of concepts. First he rings in a variation of his own on the Third Man theme - 1 mean, of course, Aristotle's Third Man Argument as found first, remember, in Plato's Parmenides. Taking ideas as particular mental occurrences to be identified with concepts (in some non-general sense, since he thinks he refuted their generality), Heath says we would have a problem of knowing whether a particular occurrence were really a recurrence of the same idea previously experienced (another particular occurrence) unless there were some 'superconcept' to guarantee sameness. And so on ad infinitum. This is merely the 'Third Man' applied to similarity between mental occurrences (e.g., particular apprehensions) of ideas, rather than between particular object and Form which it participates in. And the answer to the Third Man argument I have in mind ought to suit Heath's case as well. It is that either the regress does not exist or it is innocent, non-vicious. The infinite regress in Plato's case can easily be quashed by pointing out the metaphor involved - viz., the comparison 'in the mind's eye' of the Form and an exemplification of it. Can one do that? Doesn't that require comparing an image and the exemplification, rather than a Form or Idea (or idea) and the exemplification? Further, it is comparison among PARTICULARS that resulted in the apprehension of sameness. The sameness is called a Form. To ask for a comparison of the sameness itself with what is evidence for it, just as if the sameness were one more piece of evidence, is at best peculiar. It is not clear at all what is to be done. If we, through polite lipservice allow the comparison of Form and exemplification, thereby generating the second Form (Third Man), still it does not seem to be a particularly vicious regress. For it may well be that all the supposed additional Forms are just identical to the first, collapsing to it in eifect. These answers apply to Heath also. If a series of mental occurrences ARE manifestations in experience of the same idea, then THAT is what they are. N o superconcept is needed. If they recur, they recur. The sameness is the idea simplicitur. The occurrences are at different times, but qualitatively identical. The identity is

130

THE CONCEPT OF CONCEPT

the concept or idea, philosophically interesting, but not 'super'. Finally, Heath ends at what he calls the 'root' of the matter. (Perhaps, all the above wasn't meant to be his argument, though it's hard to be that charitable). And what do we find at the 'root' but the traditional nominalistic argument rehearsed above. ... entity theories o f concepts ... rely implicitly on a fallacious view o f meaning, whereby all words, other than logical connectives, are taken to be names, and general or abstract words to be names o f general or abstract things and characteristics - which must at least subsist, therefore, in order for their names to have designata, and so meaning. 1 4

I have adequately defused this charge above. I have at this point sufficiently detailed and defended the philosophical concept of concept to be secure in asserting Heath to be totally mistaken in his claim that "... concepts ... have none to praise and not many left to love them, ...". I will conclude my discussion of philosophical ramifications with what is the only important line of criticism of the concept of concept that exists in philosophy. It is Quine's. I do not mean Quine's behaviorism, though what I do mean is the basis for it. Quine's behaviorism does not spring from a simple commitment to Skinner's psychology or from a peculiar view of scientific theory and theoretical indeterminacy. Rather, it is based upon a radical scepticism, a scepticism concerning the idea of IDEA (or, in my terminology, the concept

of concept). 15 Heath (1967) 178-179. What is important about Quine's scepticism on ideas is what is important about scepticism in general. Which is that the sceptic may be right. Of course he, or we, won't ever 'know' that. Such paradoxical knowledge would refute scepticism. Yet any serious epistemologist - by definition one who is concerned to explain what knowledge is, if only partially or haltingly - must have in mind the very real possibility that he has no subject matter for his inquiry, that he is deluding himself. If truth and knowledge are relative, as belief usually is, to the possessor of them, then there is no TRUTH or KNOWLEDGE to have a theory of knowledge about. There is only belief, and epistemology reduces to theory of belief. To my serious epistemologist, this should be the utter destruction of his inquiry. It is surprising to realize how much of recent empiricism (positivism and pragmatism) implicitly acquiesces to genuine scepticism. 14

15



Quine doesn't believe that there are any ideas. The details of his reasonings are impressive, running from the first three chapters of From a Logical Point of View (1953) through Word and Object (1960) to "Ontological Relativity" (1969). The argument develops from, though far outstrips, a rather simple point - viz., that we can easily give vacuous causal explanations. To quote him: The evil of the idea idea is that its use, like the appeal in Molière to a virtus dormitiva, engenders an illusion of having explained something. 16

Just as offering the explanation "it's a soporific" in answer to "why does that put me to sleep" is entirely vacuous, so Quine believes appeal to ideas and meanings is similarly illusory. No explanation of anything is to be gained in that way. 17 Ideas are only vacuous dispositions. I believe that we must grant Quine the point that all inquiries must be scrutinized for such vacuous explanations. (Dispositional terms are often the symptoms). The question, of course, is whether appeal to ideas in psychology, or meanings (concepts) in linguistics, is as vacuous as Quine thinks. Quine begins his argument against meanings, concepts, and ideas in the midst of explicating an aversion to abstract entities in general. He would like to be a nominalist. In arguing with 'McX', in "On What There Is", 18 about the existence of such entities, he agrees that there is some persuasiveness to McX's introduction of such entities as meanings : "Let us grant,"- he [McX] says, "this distinction between meaning and naming of which you make so much. Let us even grant that 'is red', 'pegasizes', etc., are not names of attributes. Still, you admit they have meanings. But these MEANINGS, whether they are NAMED or not, are still universals, and I venture to say that some of them might even be the very things that I call attributes, or something to much the same purpose in the end". 16

Quine (1953) 48. And soon after the quotation I reproduce from Quine in my next paragraph, Quine similarly says, "... the explanatory value of special and irreducible intermediary entities called meanings is surely illusory". (Quine, 1953, 12). 18 Quine (1953) Chapter 1. 17

132

THE CONCEPT OF CONCEPT

For McX, this is an unusually penetrating speech; and the only way I know to counter it is by refusing to admit meanings, for I do not thereby deny that words and statements are meaningful. 19

We might conclude that Quine does not want to admit ideas because he doesn't want meanings, and that he doesn't want meanings because he objects to any abstract entities (universals, meanings, concepts, etc.). That conclusion would fit the chronology. However, there is good reason for thinking Quine would avoid ideas independently of countenancing abstract entities. For Quine himself makes the simplest and most compelling argument for the essential abstractness of that abstract entity that would have been thought to be most easily reducible to concrete particulars. It concerns the class or set. One can directly observe classes of concrete particulars. One can regard members of the classes separately or together as a class. Certainly both recognitions depend on the same particulars existing and don't change concrete objects to abstract ones. Yet, Quine explains: The fact that classes ARE universals, or abstract entities, is sometimes obscured by speaking of classes as mere aggregates or collections, thus likening a class of stones, say, to a heap of stones. The heap is indeed a concrete object, as concrete as the stones that make it up; but the class of stones in the heap cannot properly be identified with the heap. For, if it could, then by the same token another class could be identified with the same heap, namely, the class of molecules of stones in the heap. But actually these classes have to be kept distinct; for we want to say that the one has just, say, a hundred members, while the other has trillions. Classes, therefore, are abstract entities; we may call them aggregates or collections if we like, but they are universals.20

And if Quine wants to accept any mathematics (any that might be based upon sets or classes) as significantly true (true OF something), then he is probably forced to accept some kind of abstract entity such as classes.21

19 Quine (1953) 11.
20 Quine (1953) 114-115.
21 Of course, remembering Quine's scepticism, he may well not accept mathematics if it forces acceptance of classes. For his very next sentence after the final one in the quotation immediately above is: "That is, if there ARE classes" (p. 115).

Meaning for Quine is to be explained solely in terms of its two aspects - significance of utterances (or meaningfulness, the 'having' of meaning) and synonymy between utterances (sameness of meaning). He prefers to discuss whatever there is to be considered regarding meanings and ideas with this terminology because it promotes avoidance of tacit presumptions of ENTITIES that are meanings. That a statement possesses meaning too easily suggests that there is an entity standing in some relation of possession to the possessor (like the relation I stand in to my automobile). The hope is for an account of synonymy and significance WITHOUT presupposing or implying that meanings as entities do exist. The account, or hope for one, is what we find in his "Two Dogmas of Empiricism".22 Further development is in his Word and Object, but the nature of it and its satisfactoriness can be clearly extracted from the earlier article. His starting point is to reject any theory of meanings, as he rejects meanings per se, because "... the primary business of the theory of meaning [is] simply the synonymy of linguistic forms and the analyticity of statements".23 Analyticities of the 'second kind' are those which depend on crucial occurrences of cognitively synonymous terms. If we had a criterion by which we could decide whether two terms were cognitively synonymous, then we could use it to determine the analyticity of corresponding statements. However, as Quine shows, we can't obtain a criterion for cognitive synonymy without presupposing in it the analyticities we would use it to determine. (See, especially, pp. 22-34 of "Two Dogmas".) Quine concludes - rightly from HIS clearly stated point of view - that we are not going to get any criterion for analyticity or cognitive synonymy that is not question-begging. From this, he makes the much broader conclusion that the concept of an analytic-synthetic distinction is a mere dogma and that there is no sharp distinction to be drawn between two kinds of statements such that one kind (the
analyticities) may be held true 'come what may'. For there is no non-question-begging criterion to apply to statements to find out which is which.

22 Quine (1953) Chapter 2.
23 Quine (1953) 22.

Quine's conclusions are startling and I do not agree with them. However, the flaw is not, as so many think (too many to list), in his reasoning or line of argument. It is in his assumption that meanings don't exist (abstractly or whatever) or, equivalently, that there is no theory of meaning except that concerned with synonymies and analyticities as he conceives them (a theory containing criteria for deciding whether a particular statement is analytic or a pair of terms synonymous). That is, the only theory of meaning Quine permits consideration of is one that in NO sense posits meanings. Thus, if we can come up with criteria for synonymy and/or analyticity in terms of something else (e.g., observation, behavior, etc.), then that is permissible. But if we demand a whole THEORY (or 'genuine' theory) of meaning not reducible entirely to other phenomena, then that on Quine's argument cannot be used within his argument. It is a PRESUPPOSITION, NOT A CONCLUSION, of his argument that there are no meanings and no genuine theory of meaning. On that presupposition, his argument goes through, I believe. That is, it goes through as far as his demonstration that no criteria are to be had for synonymies or analyticities. The philosophical conclusions about empiricism and the general analytic-synthetic distinction are, of course, much more open to debate. (I won't consider them). In fact, the moral of the story is that Quine has constructed a reductio ad absurdum on the hypothesis that meanings don't exist (or, alternatively, that there can be no theory of meaning). The absurdity he has derived is not a 'flat' contradiction. However, it is the absurdity of a conclusion of a sophisticated line of reasoning contradicting what any speaker of a language knows - viz., that many words and phrases are synonymous with other words and phrases of the same language, that that is the kind of phenomenon one finds involved in possessing (knowing) a language. Similarly, any speaker knows that certain statements such as Bachelors are unmarried are vacuously or redundantly true - true
not exactly in the way that It is raining or it's not raining is true, but equally true, and just as redundant. Now these phenomena are only EVIDENCE for a theory or an explanation, whereas Quine's claim that there are in the end no genuine analyticities or synonymies is a complicated philosophical proposition (a conclusion of a complex explanation). Thus, we can't say the one obviously contradicts the other, merely that there is some sort of incompatibility - an absurdity to think both Quine's conclusion and the evidence could both be true. The absurdity, I believe, proves the negation of his presumption that there aren't meanings. Meanings DO exist. Not an impeccably logical argument, but good enough - as good as the internal workings of Quine's arguments. Of course, there is ample evidence that Quine might well 'live with' this absurdity. He clearly hints he could live without mathematics and, one might conclude, even would give up language itself if that meant relying on knowing the meanings of one's own terms from moment to moment.24 But then Quine leaves us, as we might expect he would IF we remembered to think of him as the sceptic at work. Not sharing all of Quine's scepticism, what do I conclude? As is obvious, I conclude there are meanings, and that Quine's arguments argue FOR them more than against them. But there is much more than that, for to stop here would be to miss what is valuable in Quine - both scientifically and philosophically. What we should extract from Quine is that there are no non-question-begging CRITERIA for synonymies outside of positing meanings. But, we can hear Quine saying, that is nothing. To say "tadpole" is synonymous with "polywog" because there is one meaning (or concept or idea) involved leads only to asking for criteria for determining what meaning (idea, or concept) that is. And that can only be done by either begging the question (referring back to what-those-utterances-mean, even if the circle is indefinitely enlarged) or by positing an ad hoc entity to do the job required - a 'soporific' (idea) to be 'why it makes you sleep' (why they are synonymous).

24 Cf. Quine (1969a) 47-48.


Thus, we posit meanings only on pain of circularity or ad hoc stipulation. I add that we only SUFFER from those pains if we try to make USE of a synonymy criterion. For we can't. Meanings are EXACTLY like Quine says they are. If we use them for criteria in assigning truth to synonymy claims only on pain of circularity or vacuity, can THAT be 'lived with'? I think it can.25 How to put up with it is simple and typical of science in general, I believe. It is that we have no legitimate criteria for anything except within a theoretical framework. Criteria are hypothetical. There are no flat criteria of the form

"P" is true, on the criterion C.

with no other conditions. There are ALWAYS further conditions. And the conditions must concern, in science, the theoretical framework in which the phenomena and data are being considered. Thus, the criterion C for the truth of

"Tadpole" is synonymous with "polywog"

may well be that the concepts (meanings, semantic markers) that constitute in part what those words are be the same for both (though their sounds differ significantly), but that criterion is obviously circular, ad hoc, and totally uninteresting (scientifically or philosophically) without a rich, reasonably well confirmed, and internally coherent explanatory framework for the criterion to issue from.

25 I have so argued in Peterson (1969a). I applied therein a line of reasoning - that I called the reductio method - whose crucial rejection under certain conditions is typical in science, mathematics, and philosophy. Quine's possible 'living with' the reductio I ascribed to him is such a crucial rejection, though it is not among the instances I discussed (and, in fact, is the opposite of my main point). Notice that Quine's possible willingness to 'live with' the absurdities of no mathematical knowledge, and possibly no linguistic knowledge either, may very well be less insane than our typical willingness to put up with all sorts of absurdities - e.g., the Cantorian infinite, wave-particle duality, semantic circularity, etc. We do so, we say, in the name of scientific or intellectual progress. "But is that progress?" we can hear Quine mutter. "You think you learn more, but you don't. At least, I am not subject to that delusion".
Happily, such is the case with the criterion of synonymy in semantic theory. I believe I have given sufficient hints above to show that semantics in transformational grammar is a well-developed inquiry defined in its own terms and related to observable phenomena - languages and their uses - in typically complicated ways. Are there no criteria in semantic theory that will still Quine's qualms? I answer, there are not. To ask for them is to ask for decision criteria in an endeavor where they are very likely not to be found. The quandary is directly similar to the quest for a recognition ('parsing') algorithm for random English sentences. Putting aside the problems of getting agreement on a tractable corpus of sentences - i.e., assume we agree that everything in a certain corpus is a sentence and each is of a type to which a grammar we have specified is germane - even then it may be absolutely impossible to obtain a criterion by which we could either (i) effectively assign the syntactic structure to any sentence of the set or (ii) say whether two sentences have the same structure (e.g., the analogue of deciding synonymy between two terms). This impossibility will come about if the grammar contains, or is generatively equivalent to, a grammar with context-sensitive productions including deletions. For in that case, the class of sentences generable may be recursively enumerable (not recursive). That is, the grammar might generate every grammatical sentence of the language, but it still might be impossible to obtain a recognition procedure for members of the set (the language), since the complement of a recursively enumerable set need not itself be recursively enumerable. It is clearly conceivable that any transformational grammar adequate to the description of a natural language will contain sufficiently complicated transformations such that, even though its phrase structure be context-free, still the class of sentences generated is only recursively enumerable (not constituting a recursive set). If natural languages are merely recursively enumerable sets in this sense (and it is not clear now that they must be), then no effective recognition procedure will be forthcoming, as a mathematical fact. Is the case of having no effective decision procedure for deciding synonymies any worse? I think not. It may be no better, but formally at least it is no worse.26

26 Much of the polemics about defining 'analyticity' has turned on misunderstandings and lack of appreciation for this point about criteria. Quine presumes that no concept is clearly defined unless there issues from it a criterion which in principle (even if impractical) could decide cases to which the concept applied. If we can't tell whether or not some proposition is analytic, then that is reason for thinking we don't have the concept of analyticity straight. Further, if we can come up with no conceivable criterion, then that is good evidence that we cannot get analyticity straight; i.e., it can't be defined. For Quine, definition demands criterion. For Katz (among others), it does not, especially with such an abstract concept. This finally becomes clear in Katz' work when he characterizes his definition of analyticity as 'theoretical' (cf. Katz, 1967, section 3). Then it is obvious that Quine and he are just completely at odds about the nature of the inquiry into DEFINING 'analyticity'. For neither is going to accept the other's ground rules. Quine is being Humean, exemplifying the all-ideas-reduce-to-impressions credo (albeit in modern behavioristic dress). Thus, if we have a concept of analyticity, then it must derive from, depend on, and apply to experience. If it cannot, at least not effectively, then you don't have the concept. It can't be defined. Katz is taking a quite opposite stand, regarding theoretical concepts as theory-dependent entities only loosely and distantly connected with confirming observations (impressions), with the possibility of reduction of a theoretical concept (e.g., analyticity within empirical semantic theory) to observables practically hopeless, if not provably impossible (e.g., observing electrons, or recognizing syntactic structure for context-sensitive-with-deletion languages). Katz' view is certainly the more respectable in light of current philosophy of science. Quine's, however, may be more respectable with respect to pure epistemology - especially, if scepticism is to count as a theory of knowledge.
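The predicament can be made concrete with a small computational sketch. The following Python fragment - my illustration, not anything in the grammatical literature cited here, with toy rewriting rules invented for the purpose - enumerates the derivations of a rewriting grammar breadth-first. Membership in the generated language is thereby semi-decidable: a string that IS generated will eventually turn up, but for a string that is NOT generated the enumeration in general never halts, which is just the absence of a recognition procedure described above.

    from collections import deque

    # A toy rewriting grammar: each rule maps a substring to a replacement.
    # The deletion rule ("BC" -> "") is the kind of rule that can leave the
    # generated language recursively enumerable without being recursive.
    RULES = [("S", "aSb"), ("S", "BC"), ("BC", "")]  # hypothetical toy rules

    def semi_decide(target, start="S", max_steps=10000):
        """Breadth-first enumeration of derivations. Returns True if target
        is derived; if target is NOT generated, the enumeration in general
        never halts (here it merely gives up after max_steps)."""
        queue, seen, steps = deque([start]), {start}, 0
        while queue and steps < max_steps:
            steps += 1
            form = queue.popleft()
            if form == target:
                return True
            for lhs, rhs in RULES:
                i = form.find(lhs)
                while i != -1:
                    successor = form[:i] + rhs + form[i + len(lhs):]
                    if successor not in seen:
                        seen.add(successor)
                        queue.append(successor)
                    i = form.find(lhs, i + 1)
        return None  # no verdict - the characteristic predicament for r.e. sets

    print(semi_decide("ab"))   # True: S => aSb => aBCb => ab
    print(semi_decide("ba"))   # None: enumeration alone yields no "no"

The cap on steps only keeps the illustration finite; it does not, of course, convert the enumeration into a decision procedure.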
Quine's chief contribution, then, is to promote absolute criticism of whatever entities we propose. Criteria are at the bottom of all his challenges, criteria of two sorts - identification criteria and identity criteria. If you propose that there are protons (or other specific sub-atomic particles), then you must be able to provide criteria for deciding when we have got such an entity; e.g., be able to tell a case of a proton from an atom itself, from an electron, and from another (numerically distinct) proton. Science is usually directly concerned with providing those criteria, especially experimental science (with theoretical science suggesting new entities to look for and additional properties and relations for which to devise additional identification criteria, so as to confirm or not theoretical propositions about them). When science fails to derive
identification criteria - perhaps pleading physical impossibility (say, with electrons or genes) - the demand is still for identity criteria. If one is proposing that there BE something (or some property or relation), then at least there must be in principle criteria for deciding when one case of it is identical or not to another. If you propose that there is a genetic structure you call 'structure-78', but you can't tell me what it would be for a structure to be structure-78 (or even, in principle, whether some other structures were identical to it or not), then it would be fairly clear you weren't talking about anything at all.

Quine applies this criticality to ideas, meanings, and concepts. "You say there are ideas", we hear him say. "Well, WHAT are they? How will I KNOW when I have got one idea rather than another? You say examine linguistic expressions for it in light of a linguistic description of your whole language? You say examine the representation of the idea expressed? What it represents is what the idea is. Well, perhaps. But can I rely on it? If I have two expressions and I just happen to propose that they express the same idea (perhaps I think they do, or maybe I'm just being capricious or not even understanding what you are up to), then I examine their semantic representations (presuming I could get them, of course; there's always a mathematical possibility I couldn't, given the recognition problem). There are two possibilities. The theory says they do, or that they don't. Can I believe it, whatever the theory says? You say there is no further court of appeal, ideas are the end of the line. If the ideas are the same - as reflected in the semantic representations - then there was one idea being expressed. But there must ALWAYS be a further court of appeal, mustn't there? How could semantics be final? You say it's not, it's an open science still developing. But we were talking about the ideal case, when the theory is true. In that case, must not the theory say what an idea is - this one, and not that one? It can't do it. For there is nothing for it to do it in TERMS of. That is the point about indeterminism of translation27 - that there are no identity
criteria for ideas themselves. That is why there are no absolute criteria for synonymy within one language (ideas expressible in a single language) or translation criteria between two (ideas expressible in two languages). There is nothing else in terms of which we could decide - even given a finished and complete semantic description of all natural languages - whether one idea was the same as another or not. Ideas are the end of the line".

Quine concludes that ideas don't exist. In a way, his is the most rational stand, for there aren't any identity criteria (let alone identification criteria) for them. It is not clarifiable what it is for one idea to be the same as another or not. I would propose NOT concluding that ideas don't exist, but only because I must have them to make sense of language (i.e., I need some concept of concept - no matter what). Language is certainly there demanding to be made sense of. If there is no future explanation to be devised to answer the demand for identity criteria - say, ascending systems of concepts each of which explains the next lower - then I must admit that my recommendation is LESS reasonable than Quine's, though maybe more 'scientific'. At the very least, then, since the question of ideas is absolutely central to the theory of knowledge, linguistics may be irretrievably entwined in epistemology - a consequence we can hope is not suffered by most other sciences, though perhaps some (e.g., psychology).28

27 This is part (or most) of the point of the indeterminism of radical translation eventually leading to inscrutability of reference "at home"; cf. Quine (1969a) 47.
28 I have in these last paragraphs been trying to make communicable what is philosophically important about Quine, especially with respect to his position on analyticity and indeterminism in language. I have tried before - in answering Rosenberg's 'refutations' of Quine's indeterminism of translation thesis (Rosenberg, 1967; Peterson, 1968c). It isn't easy. Rosenberg's challenge to Quine is essentially Katz' position with respect to any challenges to his conceptualism (see the Katz quotation above in this section). Quine generally feels that he has been misunderstood on the indeterminism thesis, for he was not intending to suggest that indeterminism of translation was typical of indeterminism in science (Rosenberg's position, compatible with Katz). Cf. Quine (1969b) 303. Evidence generally underdetermines theoretical propositions in science. An understandable, and perhaps necessary, open-endedness attaches to theoretical concepts in developing science. None of that is the indeterminism Quine was trying to specify. The case is much worse than that for language. We can't tell in principle whether one term is synonymous with or translates another term - especially with general terms that express concepts or ideas - because there are NO ideas (Quine says). But grant that there are ideas (CONTRA Quine); then can't we tell? No, I would say, for Quine's challenge still remains. We say, think, truly believe, or whatever, that we have the same idea 'in mind', 'expressed in', or whatever, with respect to the terms. But do we? How do we KNOW that? By what criteria? There are none. As a practical stance for further semantical inquiry, I recommend adopting Katz' (and Rosenberg's) position. Thus, concepts are theoretical entities that arise because of the nature of a particular inquiry (viz., linguistic theory). They formalize in a typically abstract way what the pre-theoretical evidence for them is - namely, the meanings of expressions that are the data for linguistic description (e.g., the way in which you and I as English speakers know that "polywog" and "tadpole" mean the same (though are used differently) and that there is something odd - semantically vacuous perhaps - about 'Green ideas sleep furiously'). The concept of concept is posited, then, as a theoretical entity to do a job in the scientific theorizing that constitutes part of linguistics (including intra- and inter-theoretical evaluation as well as experimental confirmation). As a theoretical entity (or theoretical concept), concepts are open-ended and subject to further specification or modification just as the entity (or concept of) electron or gene is. This open-endedness is not the same (at least not obviously) as Frege's 'incompleteness' of concepts. Thus, concepts are in two different ways 'incomplete' - in one way they are 'incomplete' because predicative, in the other way because they are open-ended and not fully specified. It may well be that none of this theorizing has anything to do with knowledge (certainty), so that theoretical entities (or concepts) of any science are always components of elaborate systems of beliefs, never subject to being known. It often seems so for the non-linguistic sciences. Could it be so for linguistics also? Remembering Quine's challenges, we may well wonder. For a particular theoretical entity (theoretical concept) for linguistics (within semantics) is the concept of concept itself. And if anything is intimately involved in what knowing is (if there is any such phenomenon as genuine knowledge), it would certainly seem to be concepts - e.g., the arithmetic truths are such clear candidates for being absolutely knowable precisely because their truth falls out directly from the concepts involved (viz., the number concepts and the arithmetic operation concepts that the truths are understood, or 'thought', in terms of). Thus, linguistics WITH the theoretical concept of concept may be more (not less) epistemological than the other sciences - more involved with the fundamental problems and phenomena of knowing itself. But if it is, then Quine's scepticism again haunts us. There may be NO concepts. There may be NO knowing.

3.2 PSYCHOLOGICAL RAMIFICATIONS

The psychological ramifications of the concept of concept concern what our understanding of cognitive mental phenomena is and what should be the principles and problems guiding continued
research into them. I propose what I hope is commonplace and unexceptional today: (i) that an organism (or machine) is MENTAL or 'intelligent' to the degree to which it manifests COGNITIVE PROCESSES AND CAPACITIES; and (ii) that these cognitive processes and capacities ('rational thought' and 'higher' mental abilities) are typically manifested by the POSSESSION and USE of natural LANGUAGE. This does NOT EQUATE knowledge of a language with possession of cognitive (higher mental) capacities. Almost, but not quite. The contention is that the cognitive capacity that is best understood, and whose study is theoretically the most interesting, is language use. If an animal or machine has got a language (one like a natural language), then it has cognitive capacities. If it does not, then it may or may not have cognitive capacities. Problem-solving behavior, complex use of tools to adjust the environment, social and/or political structure, and many other non-linguistic phenomena are manifestations of SOME sort of mental or intellectual ability. I only recommend reserving the terms 'cognitive', 'intelligent', and 'higher' for mental phenomena sufficiently similar to language possession. Language is the top of the scale of mental abilities.

Much could be said about the bearing of this bias towards language on the concept of mind itself. I will say just a little. 'Having' a mind may well turn out to be equivalent to 'having' linguistic capacity.29 'Being' a mind may be equivalent to being an agent who exercises (acts intentionally in using) the linguistic capacity. Thus, a person 'has' a mind in that he 'has' (possesses, knows) a language. An individual 'is' a mind in that he (qua mind) 'is' that capacity. A person is mind and body. Thus, persons USE language in observable speaking and listening. Mind - the cognitive aspect of persons - is an abstraction from person.30 But so would a dis-ensouled body be an abstraction from a person, a physiological robot in effect.

29 Cf. Chomsky (1966) 3ff. on Descartes and this possibility.
30 There is nothing new in this proposal. Cf. Locke's concept of a man (Essay, II, xxvii, e.g., section 15). Or, more recently, Individuals (Strawson, 1959).

Is there any 'pure' mental activity - e.g., language use without actually speaking and hearing (moving any organs of articulation or stimulating any auditory organs)? What would such 'internal' talking to oneself be? Thinking? Yes, thinking. Two questions, then, arise. First, is this 'thinking' purely mental, not depending on any functioning of any bodily organ? Yes, it seems as 'purely' mental as we humans are going to attain. AND no, it is not obviously independent of the brain. But if it does depend upon the brain, how can it be purely, 'completely' mental? To that, there are two answers, I believe: first, that it is completely mental in the sense that it is completely private (and that is all some conceptions of mind demand); and, second, that it is not completely mental if by that is meant a nature and activity separated from all dependence on, or interaction with, physical phenomena. Thinking is only as 'completely' mental an activity as we persons are ever going to know of. Whether there are completely mental phenomena divorced from all connection to physical phenomena is a question of pure metaphysics. The second question is: Doesn't this account amount to materialism - at least materialism with respect to the 'minds' of persons? For human minds are, thus, granted to be totally dependent on brains and, thereby, eventually explicable solely in terms of brain capacities and activities. To this, I answer: perhaps. I have three reasons for NOT believing such a materialistic reduction will work. First, there is the privateness matter. Even if my mind is science-fictionally shown to be totally dependent on my brain processes, and my purportedly private experiences exhaustively described by fantastic brain analysis devices (my every thought and intention observed by way of brain states and events being monitored moment to moment), still they will remain as MY mental experiences. That is, they are still private to the extent that I 'have' them from MY POINT OF VIEW. No matter how exhaustive the description of my brain and my experiences dependent on its activities, knowing the description is not 'living' it. Only I can do that with MY experiences. To import Russell's distinction between knowledge by acquaintance versus knowledge by description, only
I am directly acquainted with my experiences (and to that, perhaps meager, extent they are private), but anyone else can only know them by description and analogy.31

Secondly, the so-called reduction (of mind to brain) works both ways. I grant that it looks like minds are reduced to brains and thinking to language use because mind is merely the linguistic capacity and thinking is (silent or not) language use. But, perhaps, the physical is being 'mentalized' rather than vice versa.32 Thus, mind, and private mental experience, is manifested in the physical and public world - thinking has its public as well as private side through linguistic communication. The increasingly mentalistic flavor of linguistic descriptions - particularly with respect to non-introspectable mental features - makes the reduction of mind to body less and less plausible. For linguistics is leading us to a much more complicated concept of mind, one which permits an open-ended and developing concept rather than a limited conception that easily fits into a materialistic reduction argument.

Finally, the third answer to the materialistic reduction claim is that there may BE disembodied minds.

31 Someone might propose an even more fantastic science-fictional possibility - viz., that we have a machine which records my every mental thought and intention and maps it onto another person's brain so that his mental experience exactly duplicates mine. Thus, he has my point of view and the last vestige of 'privateness' disappears from the concept of mind. This example is so fantastical and remote, I don't quite know what to say about it. Yet, it would seem to have some superficial difficulties. For if the machine detects an intention I have - say, to quit thinking about how I feel at the moment and try to devise some argument for the existence of disembodied minds - and it maps it onto the person whose brain is controlled by the machine so that he has the same thought in his mind, would the conclusion really be that the person did have the intention I had? He might believe he did, but he wouldn't really. He is at the mercy of the machine directly and me indirectly. Thus, the machine couldn't fulfill its supposed goal of completely giving him my experience. It could, though, DECEIVE him into thinking he had my point of view. (Descartes' problem, then, arises again. Am I deceived in this - knowing that and what I am?)
32 I don't mean this in the sense that idealism or phenomenalism may, thereby, be true. The point is more akin to Fodor's (1968) distinction between mentalism and mind-body dualism - e.g., that mentalism contrasts with behaviorism and may be compatible with EITHER materialism or dualism. In the case of mentalism compatible with materialism, we have the physical 'mentalized'.

I don't know that I will ever know or be one. I can't imagine a demonstration that proves that there are or must be. But it is a remote possibility, not definitely refuted.33

The moral of all this for the science of psychology is as follows. Investigations of cognitive processes should be heavily linguistic. In fact, blatant and even artificial transposition of linguistic conceptual structure - particularly, the general methodology and complicatedness of it - to the models of other, non-linguistic cognitive processes ought to be fully explored. I recommend reformulating empirical studies of problem-solving, of concept acquisition, of 'intellectual' development, etc. in terms of the kinds of conceptual structures typical of natural language and its use. The greatest difficulty in doing this will not be the details of constructing the requisite structural analogues - though that will be very difficult. Rather, it will be the huge ignorance of mechanisms, and even of reasonable hypotheses, concerning linguistic performance (as vs. competence); i.e., how competence in a language is put to use in the intentional actions of communicating. Thus, the first order of business for psychology is the construction of an approach to describing and explaining linguistic performance - again an incredible task to ponder, since so little of what linguistic competence is is understood.

Simulation of cognitive processes - e.g., computer programs and systems that simulate them - may be very useful in these endeavors. I recommend this not in the spirit of some artificial intelligence research that purports to be concerned with simulation of cognitive processes as an approach to mechanization of intelligent processes,34 wherein only superficial characteristics of problem-solving, game-playing, or questioning and answering are all that is attempted.

33 In my contentions here about 'privateness' as well as about disembodied minds, I have ignored a large body of recent philosophical literature concerning the description of private experience, private languages, and our knowledge of minds (and other minds).
34 Cf. pertinent sections of Feigenbaum and Feldman (1963).

Rather I recommend the construction of artificial systems which have sufficient 'built-in' capacities to simulate the complexity that must underlie genuine intelligent behavior.35 Similarly, investigations in computational linguistics - such as the construction of grammar testers and other programs capable of automatic processing of grammatical structures as complicated as those underlying natural language - may eventually yield the appropriate materials and techniques for an artificial system that does use a language in a sense that simulates to a degree human language use. The value of simulating language use, or other cognitive processes, is not that if you can build a machine that can do X, then ipso facto you understand what X is. For you may not! The value is that trying to simulate a process forces (or can force) clearer thinking and description of what the process is. And with language use, it is not clear what the process is. The very first task of simulation - viz., how to REPRESENT in formal terms what it is we want to simulate - forces careful examination of our previous understanding of what we are interested in. More precise definitions can result. And if the simulation attempts are partly successful, we can generate data which is directly pertinent to evaluating the adequacy of our REPRESENTATION of the process - which is as close as you can get, I believe, to a perfectly explicit and objective evaluation procedure for proposals that purport to describe what a particular cognitive process itself is.

A disclaimer, perhaps, is necessary. It might be objected that infinitely great and detailed simulation of any cognitive process will never tell you much about how humans succeed in instituting the process. All it will do is provide more and more information about what the problem of 'how humans do it' is. I agree! That is precisely the point. There may well be more and more future questions of how persons in fact institute such processes.

35 Recent work (cf. Simmons, 1970) in question-answering systems appears to manifest this approach. Also, I am told that more recent game-playing programs attempt this, the symptom being their heavy emphasis on utilization of genuine chess-playing expertise in designing chess programs (for example), rather than the former approaches in which it was supposed that a fairly unstructured 'learning' capacity was all that was required.

What needs studying NOW, however, is not those further questions. Indeed, they canNOT be studied now. First, we need more and deeper understanding of what the problem itself is - namely, what a cognitive process and the capacity for it amount to in abstraction from their dependence on physiological characteristics and abilities of humans or other animals.

EPILOGUE

I close with an endorsement of Popper's recommendations to both the historian and the psychologist. He recommends more concern and appreciation for 'third world' objects. About the 'third world' he says:1

... some philosophers have made a serious beginning towards a philosophical pluralism, by pointing out the existence of a third world. I am thinking of Plato, the Stoics, and some moderns such as Leibniz, Bolzano, and Frege. ... I follow those interpreters of Plato who hold that Plato's Forms or Ideas are ontologically different not only from bodies and minds, but also from 'ideas in the mind', that is to say, from conscious or unconscious experiences: Plato's Forms or Ideas constitute a third world sui generis. Admittedly, they are virtual or possible objects of thought - intelligibilia. ... Thus, Platonism goes beyond the duality of body and mind. It introduces a tripartite world, or, as I prefer to say, a third world. ... In this philosophy the world consists of at least three ontological categories; or, as I shall say, there are three worlds; the first is the physical world or the world of physical states; the second is the mental world or the world of mental states; and the third is the world of intelligibles, or of ideas in the objective sense; it is the world of possible objects of thought.

I have said little above about the third-world nature of concepts and conceptual structures, except by way of discussing a few philosophical objections that turn on that status. However, most of what requires inquiry about concepts and conceptual structures concerns discovering their nature as members of the third world.

1 Popper (1968) 25-26.

(Worrying about the formal representations, definitions, and/or simulations of conceptual structures is concerned with this - what they are in abstraction from concrete minds and physical supporting systems). It may be an agonizing experience for 20th century philosophers, linguists, and psychologists just to wrench themselves away from materialism and behaviorism to the point of giving serious consideration to genuine mental capacities and phenomena. And that is not the final step, though perhaps it should be taken in conjunction with what is final. As Popper says:2

... one day we will have to revolutionize psychology by looking at the human mind as an organ for interacting with the objects of the third world; for understanding them, bringing them to bear on the first world.

2 Popper (1968) 27.

APPENDIX

ONE CLASS OF LOGICALLY ORIENTED GRAMMARS FOR SEMANTIC REPRESENTATION 1

The use of logical concepts (e.g., quantification, logical predicates, variables, and connectives) and an alternative to semantic representation achieved through deep syntactic structure (e.g., Chomsky, 1965) and interpretive semantics (e.g., Katz, 1966) have been mentioned together in the recent literature of grammatical theory (e.g., McCawley, 1967, 1968a; Lakoff and Ross, 1967).2 This development pleases me. My purpose in this paper, however, is NOT to argue for more logic in grammar or against deep structure. I will merely INTRODUCE one class of grammars that does incorporate both a 'generative' method of semantic representation and a wide use of logical conceptions.3 The class of grammars I have in mind I shall call LSR grammars (from 'logically-oriented semantic representation grammars'). A grammar, G, is an LSR grammar if and only if

(i) each initial (completed) P-marker generated by PS rules of G contains a representation of the logical form of any sentence whose grammatical description can be subsequently derived by transformations of G, and (ii) the transformations of G apply to initial and derived P-markers at least until all lexical insertion is completed.4

1 This is a slightly revised version of a paper originally presented at the meetings of the Linguistic Society of America, University of Illinois, July 26, 1969. The unrevised original appears in Part IV of Section 1, Volume I of RADC Technical Report RADC-TR-70-80 (Peterson, Carnes, and Reid, 1970). A similar introductory presentation constitutes Peterson (1970).
2 McCawley (1968b, fn. 1) mentions Postal is in agreement in this regard. Postal (1968b) is somewhat relevant with respect to variables. Further, Langacker (1967) is compatible.
3 The logical and semantical troubles I find with certain concepts of traditional grammar (particularly concerning the grammatical concepts of subject and predicate) have been stated in Peterson and Carnes (1968) 11-29. Concerning some philosophical motives, see Peterson (1968a) and Vendler (1967) Chapter 1.

To briefly explain the idea of an LSR grammar, let the logical formula

(1) (x)(Gx ⊃ Fx)

(x) (Gx => Fx)

(a representation for a universal afBrmative categorical proposition symbolized in a notation of quantification theory) be a typical representation of a logical form that an initial P-marker might be said to 'contain'. The transformational rules of an LSR grammar would then apply to a structure containing such a representation until lexical insertion is completed - schematically:

4

An LSR grammar may, or may not, include transformations that transmute lexically complete structures into inputs (the surface structures) for phonological representation. That is, I leave the question of the existence and nature of post-lexical, syntactic transformations open. Considerations like those of McCawley (1968a) 72 or Lakoff and Ross (1967) 1-2 seem to demonstrate that post-lexical grammatical transformations (for English at least) are necessary. However, some alternative seems at least remotely possible with respect to the treatment of verbs, prepositions, and particles involved in the LSR grammar sketched below. Note that the class of LSR grammars discussed is one of the simplest (not the most obscure) approaches to what Lakoff and Ross (1967) p. 6 suggested: "So the theory of language provides (somehow) the universal set of rules and well-formedness restrictions which generate the correct set of concepts (i.e., well-formed predicate calculus formulas) and every grammar consists of a set of transformations which map each concept (somehow) into the large set of surface structures which can be used to express each concept".

152

APPENDIX

(2)

s (*)

(Cr

D

Mx)

u

" • U

s

All

Greeks

are

mortal

where Gx is a logical representation for x is a Greek and Mx is one for JC is mortal. A general feature of such a grammar is that so-called full morphemes are at the earliest 'semantic' stages of generation represented by general terms of quantification theory - i.e., by LOGICAL PREDICATES.5 This feature makes LSR grammars particularly compatible with the presumption that the meaning of a sentence is a STRUCTURE of CONCEPTS. Logical predicates represent concepts. Other logical particles participate in defining forms or structures for the predicates. An appropriate explanation of the conception of 'concept' involved here can be extracted from Frege's proposal that a concept is the 'referent' of a logical 5

This agrees with Bach's conclusion (as reported in McCawley, 1968b, 4) that nouns also originate as (logical) predicates. There are arguments that adjectives and verbs belong to the same category (cf. Lakoff, 1965, and Ross' summary, p. 1 of Ross, 1966). Further, Ross (1966) appears to argue that adjectives and nouns are both noun phrases (though the context is somewhat distance from that motivating the 'P' category of (5) below). Stringing these various arguments together presents a fair case for the equivalence at some suitable 'early' generative stage of nouns, verbs, and adjectives. Thus, the category 'P' (logical predicate) below is motivated by relatively independent syntactic considerations in addition to its logical motivations. (Chomsky has somewhere remarked, I believe, that this kind of proposal is trivial because it just means that every lexical item is a logical predicate. Thus, the category P is just equivalent to lexical item in general. This MAY be true. I don't see, however, that it is trivial).

153

APPENDIX

predicate (is represented by it, I would say).6 Thus, the logical orientation of LSR grammars should not be minimized, for a transformational derivation starts with a logically motivated representation of conceptual structure and proceeds through appropriate transformations to end in a representation of lexically completed structure of a natural language sentence. The main feature which distinguishes different LSR grammars among themselves is choice of logical notation. Again, consider an example. Let the initial conceptual structure of (2) generate the different (though conceptually equivalent, i.e., synonymous) sentence Everything

is such that if it is a Greek, then it is mortal -

schematically : (3) S

(x)

( Gx

D

Mx)

It S

Everything

is such that if it is a Greek,

then it is

mortal

Now consider a DIFFERENT notation for the same logical form; viz., the same proposition formalized in Polish prefix (parenthesisfree) notation. 6

Cf. Frege's "On Concept and Object" (in Geach and Black, 1960). I have outlined this species of conceptualism in Peterson (1968a) (written prior to the adoption of a 'generative' approach to semantics). In general, it is a philosophical view broad enough to include both Katz (1966) and Langacker (1967). Further considerations of the logical background in Frege and of some pertinent points from Quine are contained in Peterson (1968b). (The general approach of the latter paper (1968b) will ultimately require a more complicated account of sense and reference utilizing grammatical methodology (e.g., an LSR grammar) and, perhaps, some aspects of a theory of language performance).

154

APPENDIX

(4) S

V

x

D Gx

Mx

£ It S

Everything is such that if it is a Greek, then it is mortal

In some fashion or other, productions and transformations of an LSR grammar accounting for the derivation in (3) would apply to 3 to split it into two elements - viz., an //element and a then one - and then the if part would be placed before the Gx constituent to obtain the proper English order. In (4) the opposite is the case. After the if and then elements are derived, the then must be moved between the Gx and the Mx elements (or their subsequent manifestations).7 This simple alternative is typical of the alternate transformations required for different LSR grammars that result from different choices of logical notation (reflected in the PS rules). There are further less important differences between LSR grammars which might bring about various subclassifications of these kinds of 7

Some may judge is such that sentences like those of (3) and (4) ungrammatical. Nothing hangs on this judgment. All I require here is a LONG form for the relevant logical proposition to contrast with the SHORT forms like All Greeks are mortal and Every Greek is mortal. Other, perhaps more acceptable, long forms might be substituted - e.g., 'For anything, if it is a Greek, then it is mortal' or even 'If anything is a Greek, then it is mortal' (although the latter permutes the first part of the logical connective, IF, with the quantifier expression, anything, in a way that may be misleading at this point). The is such that forms, however, do manifest an intuition that the LSR grammar presented below attempts to explicate - namely, that relative pronouns and analogous expressions are generated from underlying 'relativizers' which appear in a sentence-like constituent (A) that has a logical subject (IT) and a verbalizing element (BE).
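The relation between the two notations is purely mechanical, and a short sketch can make the point vivid. The Python fragment below is only an illustration of the notational difference (the encoding of formulas as nested tuples is my invention, not part of the grammar): it spells the standard form (x)(Gx ⊃ Mx) in Polish prefix, where each operator simply precedes its operands and no parentheses are needed.

    # A formula as a nested tuple: ("forall", var, body), (op, left, right),
    # or an atomic predication like ("G", "x"). Hypothetical encoding.
    formula = ("forall", "x", ("⊃", ("G", "x"), ("M", "x")))

    def polish(f):
        """Spell a formula in Polish prefix (parenthesis-free) notation."""
        head = f[0]
        if head == "forall":
            return "∀" + f[1] + " " + polish(f[2])
        if head in ("⊃", "&", "∨"):
            return head + " " + polish(f[1]) + " " + polish(f[2])
        return f[0] + f[1]      # an atomic predication, e.g. Gx

    print(polish(formula))      # -> ∀x ⊃ Gx Mx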

APPENDIX

155

grammars. The interesting thing, however, is not any difference that distinguishes LSR grammars from each other, but the differences between LSR grammars and other grammars proposed to generate linguistic descriptions (including semantic representations) of the same sentences (e.g., McCawley's or Katz'). Seeing these differences - or obtaining some information on which to base guesses - can be achieved by sketching some example generations. The following PS rules (grammatical productions) together with transformations to be mentioned - define ONE particular LSR grammar. (These PS rules are a doubtlessly small, though important, subclass of all the grammatical productions that would be required for this particular LSR grammar).8

8

This LSR grammar is based on the grammar being developed for SOUL, an artificial English-like language being used for (i) semantical and logical study of natural languages and (ii) research in computational linguistics. (The idea of SOUL was first introduced in Peterson, 1969c, and is more fully explained in Peterson, 1970). The prevalent convention that initial phrase structures of a TG be enumerated by a CF grammar, saving context-sensitivity for representation by transformational operations, could be applied to the PS grammar of (5). The only reason for inserting contexts in the rules of (5) is to prohibit generation of initial trees containing ill-formed logical formulae - a feature that could be abandoned since none would survive the transformations anyway. A S: Q: P: v: O: V: 3: i: &: V: o : ~ : VB: A:

glossary of technical terms of the PS rules of (5) is as follows: sentence (initial symbol of G) quantifier predicate (logical) logical subject place (or argument) of P (from 'variable') logical operator (or connective) universal quantifier existential quantifier definite description 'quantifier' (but see Note 16 below) (logical) conjunction (logical) disjunction (logical) conditional (logical) negation (not discussed herein) verbal (VB rules are obviously incomplete) a 'dummy' (or vacuous) S (where what it dominates, BE + Rel + (A), is roughly thought of as itself subject-predicate in form - viz., P(BE + Ret) + v(/T + (A)).) Rel: relativizer

156

(5)

APPENDIX

S ^

QS

Q -

VV

S ^ S-+

OSS Pv

Q-+

3 v

v -* NAME

X (A)

v -> IT X (A) V -* V V / P — P -* VB X

O ->&,

V, =>

& -* (BOTH) AND V -> (EITHER) OR => - IF THEN (where X is a member of the of individual constants {a, b, (where X is a member of the of individual variables {x, y,

class c,...}) class z,...})

(where X is a member of the class of predicate constants {A, B, C , . . . } )

A -> BE Rel IT {A) / Verb \ VB-+\ BE Adj 11 — X v Y \BEN [BEA N] (VERB \ VB -> BEADjl I — X (v)» ( BE Prep)

j

(where X is a predicate constant, and Y v)

(where X is a predicate constant and n > 2)

BE: the copula (which is not the two-place identity predicate manifested in English by forms of 'to be') N: noun (N of A + N is the category of count nouns, otherwise it is noncount (mass)) A : the article 'a' NAME: proper noun Verb: intransitive verb Adj: adjective Prep: preposition VERB: transitive verb ADJ: 'transitive' adjective (so, ADJ -> Adj Prep\ under study) {a, b, c, ...}: the class of individual constants {x, y, z, ...}: the class of individual variables {A, B,C, ...}: the class of predicate constants (including many-place predicates which are signaled by the number of following v-constituents) (English words fully upper case above are technical terms for traditional or new grammatical categories, some of whose nature is obvious and others not. Additional upper case English words are used in the same way, e.g., EVERY, IF, etc. This practice, however, is not altogether advisable or prudent in grammatical theorizing).

APPENDIX

157

v

QS Q -* i v

The kind of initial P-marker noted in (4) in slightly more detail would be (6) S

The rules of (5) now elaborate (6) with additional markers that (i) trigger transformations relevant to deriving structures representative of the (English) sentence in question and (ii) permit lexical insertion. (Trees exactly like (6) are not, of course, generable from rules of (5)). Thus, the full initial P-marker for a sentence like (4) - viz., for the analogous sentence Everything is such that if it is a whale, then it swims - is

(7) S

158

APPENDIX

(7) is said to 'contain' the required representations of logical form in the sense that the logical form (paralleling (6) exactly) can be read off the P-marker left-to-right (i.e., an appropriate extractive algorithm can be devised) producing: (8)»

V* = WjcSX

In (7) each logical predicate - viz., W and S - is associated with a so-called verbal element, VB, which introduces symbols for syntactic categories of (English) morphemes that will eventually express these concepts (here, the concepts of being-a-whale, W, and swimming, S. A series of transformations apply to (7) to transmute it into later stage structures suitable for lexical insertion. For this kind of structure, the first (optional) transformation that applies is O-shift (viz., one species of it called THEN- shift)10 - schematically: S

IF

S
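As a sketch of what such an extractive algorithm might look like, the following Python fragment - my own illustration, under an invented tuple encoding that simplifies (7) by flattening Δ and the internal VB detail - reads the leaves of the P-marker left-to-right and keeps only the logical vocabulary, yielding (8).

    # A P-marker as a nested tuple: (category, child, ...); leaves are
    # strings. A simplified version of (7), for "∀x ⊃ Wx Sx".
    p_marker = (
        "S",
        ("Q", "∀", ("v", "IT", "x")),
        ("S",
         ("O", "⊃", "IF", "THEN"),
         ("S", ("P", ("VB", "BE", "A", "N"), "W"), ("v", "IT", "x")),
         ("S", ("P", ("VB", "Verb"), "S"), ("v", "IT", "x"))),
    )

    # Logical vocabulary: quantifiers, connectives, variables, and the
    # predicate constants used in the examples (an assumption of the sketch).
    LOGICAL = {"∀", "∃", "⊃", "&", "∨", "x", "y", "z", "W", "S", "G", "M"}

    def logical_form(tree):
        """Read the leaves left-to-right, keeping only logical symbols."""
        if isinstance(tree, str):
            return [tree] if tree in LOGICAL else []
        return [s for child in tree[1:] for s in logical_form(child)]

    print("".join(logical_form(p_marker)))   # -> ∀x⊃WxSx, i.e., (8)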

THEN

Since A is present in (7), THEN-shift is optional (permitting generation of structures for either Everything is such that if it is a whale, then it swims or, executing 0-deletion instead of THENshift, Everything that is a whale swims). However, if THEN-shift is not instituted first, then 0-deletion is required. A is what introduces 'relativizers' (Rel) which are eventually manifested by the English such that, that, which, who, etc. The idea is that relativizing 9

This is a string of symbols NOT requiring its associated P-marker (nor parentheses) to indicate structure explicitly since a feature of Polish prefix notation is that it wears its structure 'on its sleeve', so to speak, without the need for parentheses (or other markings) to disambiguate forms. Whether this enriches the initial phrase structure too much is worth studying. 10 Other O-shifts are O/J-shift and y4iV£>-shift. O-shift should eventually be stated in a general enough way to encompass these three alternatives as well as its application after coordination reduction.

159

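Over the same illustrative encoding, THEN-shift can be sketched as a structure-dependent operation (again my illustration, not the author's formal statement of the rule): it applies only to an S whose first constituent is a ⊃ operator, and it moves THEN from that operator to a position in front of the consequent S.

    def then_shift(tree):
        """THEN-shift (one species of O-shift): for S[⊃(IF,THEN) S1 S2],
        detach THEN and prepose it to the consequent S2 (sketch only)."""
        if isinstance(tree, str):
            return tree
        label, kids = tree[0], [then_shift(k) for k in tree[1:]]
        if (label == "S" and len(kids) == 3
                and isinstance(kids[0], tuple) and kids[0][:2] == ("O", "⊃")):
            op, antecedent, consequent = kids
            return ("S", ("O", "⊃", "IF"), antecedent,
                    ("S", "THEN", consequent))
        return (label,) + tuple(kids)

Executing O-deletion instead would simply drop the ("O", ...) constituent - the alternative discussed next.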
APPENDIX

elements are always accompanied by an explicit or tacit copula (represented here, as well as outside A, by BE). In addition, the predicate-like structure BE + Rel also seems to require a logical subject, viz., IT. BE + Rel can be thought of as a predicate which just does not happen to ever terminate in a full morpheme (a kind of empty concept, as it were). (If A were not present under Q in (7), then 0-deletion would be mandatory. See (13) below). The next transformations are the nearly always required v-permutations :

s

(10)u

£

o

/\v

A

i>

After two v-permutations and two variable-deletions (always late transformations in a cycle, immediately preceding lexical insertion), we get a structure closely approximating the English sentence. Schematically, the derivation is

BE

Rel
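Under the same illustrative encoding, v-permutation is another small structure-dependent rule. The sketch below assumes one-place predicates only (the restriction the footnote on (10) makes explicit below) and is, like the others, an invented approximation rather than the grammar's official formulation.

    def v_permutation(tree):
        """Move the logical subject v in S[P v] to the front: S[v P].
        (One-place predicates only; sketch under the tuple encoding above.)"""
        if isinstance(tree, str):
            return tree
        kids = [v_permutation(k) for k in tree[1:]]
        if (tree[0] == "S" and len(kids) == 2
                and isinstance(kids[0], tuple) and kids[0][0] == "P"
                and isinstance(kids[1], tuple) and kids[1][0] == "v"):
            return ("S", kids[1], kids[0])
        return (tree[0],) + tuple(kids)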

IT

IF THEN

A W /A\ VBA

VB

IT x BE A 11

N

S^

\

N | \ IT

x

Verb

This transformation is required by the fact that in Polish prefix, as in most logical notations, the logical subject v usually follows the logical predicate P. The main reason for this is in order to cover n-place predicates for 2, e.g., Pvvv for a three-place predicate like gives-to - i.e., Gabc for a gives b to c. Thus, (10) is only an accurate account for v-permutations of one-place predicates. For two or more place predicates only the first of the ordered «-tuple of v's is preposed. Other v's assume positions for so-called direct and indirect objects of verbs. A crucial problem here will be the introduction of the prepositions that play the role of verb particles (e.g., see what Bohnert and Backer, 1967, called 'placers') and the discovery of criteria which separate these particles from other prepositions that are genuine relational predicates.

160

APPENDIX

O-shift (on THEN)

Verb v-permutation (twice)

v

Ax

IT

P

/\ W /K BE A N VB

.

IT

x

VB

,

S

Verb

variable deletions

S v I IT

THEN

S

/ \P

P A VB

/ K " BE A N

v W

/ IT

VB

A v S

I Verb

"

161

APPENDIX

Lexical insertion is conceived of as a deletion-and-replacement transformation 1 2 with the general f o r m

(12) P

P

VB

u

A

Y



\>

VB/

y

X

u

/\

Z

where X — N, Verb, Adj., etc. (a syntactic category label for full morphemes, or 'concept words' as I have elsewhere called them), where y is a member of the class of logical predicate constants 12 This schematic transformation - representing a set of particular transformations, one for each lexical item in the dictionary - could be regarded as just a separate convention like Chomsky's "lexical rule" (1965, 84). However, the possibility of further such transformational schema (in addition to separate particular transformations belonging to no general classes) has been suggested by Postal (1968a) and seems likely to be needed for other purposes also. Note also that, computationally, lexical insertion could be regarded as some kind of replacement rule analogous to deletion-and-replacement analyses or reordering transformations - e.g., the reordering

E

F

analyzed as a deletion and replacement transformation, reflected in the notation [D

El

B

B 1 2

S

0

4+ 2

IF

G]

C

C

3

4

3+1

0

The analogy is not exact, of course, since in lexical insertion what is doing the replacing is extracted from the dictionary not from the structure itself. (But, perhaps, the dictionary is just an explication of what a logical predicate is in grammatical structure; i.e., the dictionary is tacitly contained already in every conceptual structure).

162

APPENDIX

{A, B, C,...} (representatives of concepts per se, analogues of Katz' semantic markers), and Z is a lexical item from the dictionary. (Further, u may be BE, BE + A, or 0). A typical dictionary entry, then, has the form Z = X,Y\ e.g., whale = N, W. The use of the dictionary is to find in it an appropriate word (i.e., phonological representation of a morpheme, here symbolized by ordinary lower-case typescript in English) for insertion in a prelexical structure like that terminating (ll). 1 3 One SHORT form of the same example described in (7), (8), and (11) is Every whale swims. The initial P-marker is the same as (7) EXCEPT for the absence (under v of Q) of A; i.e.,

¹³ Lexical insertion of words for quantifiers and for Rel-constructions (and associated particles) will probably be more complicated than the simple lexical insertion operation just presented. Additional local transformations may be required. For example, there is a choice, in terms of the syntactic category of the following VB (e.g., count or non-count N), for ∀ + IT being (or not) manifested as EVERY (EACH, ANY, etc.) + THING. And not only does A possibly recur under every v, but it can continue to embed within itself. Thus, it is possible to derive Everything is such that it is such that if S, then S, and Everything is such that if it (is such that it) is a whale, then S, or Everything is such that if it (is what) (a whale is), then S. In short, Rel-constructions are intended to account for the evidence that sub-structures of the form runs, is wise, is a man, is water, etc., also admit of the longer forms is that which is such that it runs (is wise, is a man, is water, etc.) or is what runs (etc.), and that the relative pronouns required for restrictive and non-restrictive relative clauses do derive from such underlying constructions through some appropriate de-verbalizing transformations that delete BE (among other things).

One SHORT form of the same example described in (7), (8), and (11) is Every whale swims. The initial P-marker is the same as (7) EXCEPT for the absence (under v of Q) of A; i.e.,

(13) [Tree diagram: the initial P-marker, with Q over IT and x; not fully recoverable from the scan.]

The absence signals the necessary deletion of O, which in turn requires a series of adjunction and 'attraction' transformations for Q, and embedding of the first S (the antecedent of the ⊃-conditional) in the 'subject' (the v constituent) of the second (consequent) S, thereby GENERATING - not presupposing in the initial P-marker - the so-called 'grammatical' subject-predicate form (Every whale) + (swims). The transformational generation of a (grammatical) subject-predicate form, rather than its presence in initial phrase structure for such a GENERAL proposition, is one of the main motives for incorporating logical concepts in a transformational grammar. The GENERATION of the subject-predicate form is to be contrasted with that of a SINGULAR proposition - e.g., as in John swims with initial P-marker

(14) [Tree diagram: a simple P-marker terminating in IT + x and Verb; not fully recoverable from the scan.]

That is, Every whale swims and John swims simply do not have similar subject-predicate structures at a suitably early, semantic stage of their generative descriptions.¹⁴
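The contrast can be made vivid with a toy encoding of the two initial structures. The nested-tuple representation below is invented purely for illustration and makes no claim about the grammar's actual data structures:

    # Illustrative sketch: the general sentence begins as a quantified
    # conditional, the singular one as a bare predication; only the
    # latter has subject-predicate form from the start.

    every_whale_swims = ("Q", ("ALL", "x"),
                         ("IF-THEN", ("W", "x"), ("S", "x")))   # roughly: all x (Wx > Sx)
    john_swims = ("S", "j")                                     # Sj

    def initial_form(p_marker):
        return "quantified conditional" if p_marker[0] == "Q" else "bare predication"

    print(initial_form(every_whale_swims))   # quantified conditional
    print(initial_form(john_swims))          # bare predication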

The transformational derivation beginning with (13) is

[Tree diagrams: the first stages of the derivation; not recoverable from the scan.]

¹⁴ The evidence for this claim is in the semantic data, data mainly discovered in sentence groupings that constitute arguments. (Thus, it is in LOGICAL semantics where the evidence is found.) At bottom, the difference between general predications (categoricals) - such as Every whale swims - and singular predications - such as John swims - rests on the difference between singular and general TERMS. The importance of the latter distinction is sometimes missed by not looking beyond the simple ways in which singular terms can apparently be reduced to awkward general ones in syllogisms. Thus, the X in X drinks; All drinkers swing; Thus, X swings can be filled by X = All alcoholics or by X = John. A slightly more sophisticated analysis fills the Y slot in All Ys drink; All drinkers swing; Thus, all Ys swing with either an ordinary general term like alcoholics or one that purportedly is EQUIVALENT to the singular term John, namely, person identical to John. This second maneuver also helps by permitting the quasi-singular, general term to play the role of middle term in the syllogism - e.g., All drinkers W; All W-(er)s swing; Thus, all drinkers swing. That is, W can be filled by a straightforward general term like swallow (or male) AND by the (quasi-singular) general term person identical to John. (For a contemporary way of overlooking the distinction, see Quine's elimination of names - e.g., Quine, 1970, 25.) The trouble with such reductive analyses, however, comes to light only upon somewhat deeper study - e.g., with respect to the notion of class. If, for discussion, we let P stand for the phrase person identical to John and P* for the phrase the class of persons identical to John, then we can say that the term P is an expression for a property which can be the criterion for membership in P*. That is, nothing is a member of P* unless it has the property P (or being identical to John is truly ascribable to it). The thrust of the reduction of a singular term (like John) to a general term (like person identical to John) is to propose that John (the individual) is the same as either (i) the class P* (which in a certain way does seem true; i.e., regarding classes 'extensionally') or (ii) the property P (which, though less plausible, seems possible sometimes; e.g., KNOWING who has got the property P is the same as knowing who John is). Is (could) John be identical to P or P*? By the usual criterion of individual identity, two seemingly different individuals are identical (are only one) if they share all the same properties. Thus, any predicate true of John must be true of whatever is identical to John (e.g., P or P* on the reductive proposal). Let such a predicate be has one member, abbreviated U. U is certainly true of P*, since U(P*) is the proposition that the class of persons identical to John has one member. Is U true of John himself? It would have to be if John is identical to P*.


O-deletion

[Tree diagram: the structure after O-deletion, with S, Q, VB, W, and IT nodes; not recoverable from the scan.]
(for Z a member of {A, B, C, ...}, where W is ∀, ∃, or ι, and Y = Verb, BE + Adj, or BE + (A) + N). Then, there is a 'resolution' of IT of three types: Verb + IT ⇒ Verb + ER; BE + Adj + IT ⇒ Adj + (THING, ONE, or STUFF); and BE + (A) + N + IT ⇒ N (the latter demonstrating the close relation between count and non-count nouns, since both end such nominalizations identically, thereby maintaining a contrast between Adj and non-count nouns - a distinction which is at the bottom of one of my criticisms of Quine in Peterson, 1968b).
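The three resolutions can be stated as a small procedure. A sketch in Python - the tuple encoding, and the choice of ONE for the Adj case, are illustrative assumptions only:

    # Illustrative sketch of the three-way resolution of IT:
    #   Verb + IT          => Verb + ER        (swims + IT => swimmer)
    #   BE + Adj + IT      => Adj + ONE etc.   (is wise + IT => wise one)
    #   BE + (A) + N + IT  => N                (is a whale + IT => whale)

    def resolve_it(structure):
        if structure[0] == "Verb":
            return structure[1] + " + ER"          # morphophonemics aside: swimmer
        if structure[:2] == ("BE", "Adj"):
            return structure[2] + " one"
        if structure[0] == "BE":                   # BE (+ A) + N + IT => N
            return structure[-1]
        raise ValueError("no resolution rule applies")

    print(resolve_it(("Verb", "swim")))            # swim + ER
    print(resolve_it(("BE", "Adj", "wise")))       # wise one
    print(resolve_it(("BE", "A", "N", "whale")))   # whale
    print(resolve_it(("BE", "N", "water")))        # water (non-count)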

[Tree diagrams: a P-marker containing IF-THEN, Q, ∃, VB, BE + A + N, and VERB nodes, for the example with wants discussed below; not recoverable from the scan.]

The crucial transformation transmuting this P-marker into a suitable prelexical structure is the Q-attraction on ∃ (a particular quantifier under S₄) - wherein the use of logical variables (here, the y) is decisive for the quantifier's being attached to the appropriate logical subject (viz., the so-called object of the verb wants). On the above analysis, it is clear that John swims would have an entirely different CONCEPTUAL structure (reflected in the initial P-marker) than Whales swim (i.e., all of them do). It may not be clear, however, how The whale swims might be handled when The is not another manifestation of ∀, but indicates singular reference (paralleling That whale swims). Such a sentence clearly parallels the singularity (singular reference of the surface subject term) of John swims, but it contains a general term, viz., whale, in the 'subject' and a quantifier-like particle in the article The. The answer is to adopt the inverted iota (ι) of Russell's notation for definite descriptions (though not, however, his definitions of a sentence with an ι in it, since that is a proposal still of some controversy in logical theory and not advisedly transported into grammatical theory yet). Thus, the initial P-marker for The whale is angry (in the singular sense) is:

(17) [Tree diagram: the initial P-marker for The whale is angry, with ι under Q and BE + A + N under VB; not fully recoverable from the scan.]

It would seem prudent to label the syntactic category of 'ιv' (i.e., of referring phrases such as The whale in The whale is angry or The city of light in The city of light is quiet) as different from phrases for universal and existential quantifiers. Using the SAME label is recommended, however, for grammatical reasons - i.e., reasons internal to this LSR grammar. For many transformations of ιv constituents are the same as for ∀v or ∃v segments - e.g., Nom-Q and Q-attraction - even though ι generation is restricted to generation from covering v's.¹⁶

¹⁶ Actually, the PS rules of (5) must be complicated to avoid Q being manifested as ∀v or ∃v when it should be ιv, and vice versa. (That is, this is desirable if initial P-markers that do not contain logically ill-formed formulae are to be avoided - but, also, see note 8 above.) One way (which is none too good) is to replace v → QS and Q → ιv with v → QS′, Q → ιv/—S′, and S′ → S, leaving S → QS alone while changing Q → ∀v, ∃v to Q → ∀v/—S and Q → ∃v/—S. This solution is rather artificial. However, it is not fruitless, since it brings to light two further questions. The first is that the marking of the S that occurs under v (within definite descriptions) as S′ (and considering the adoption of S′ → S) draws attention to an unforeseen situation, viz., that the S associated with ιv might have been a logically closed sentence. With Q other than ιv (viz., ∀v and ∃v), a following closed sentence is permitted (in the (5) sort of LSRG) in order to capture the logical possibility of sentences with SUPERFLUOUS quantifiers (like ∀xFa or ∃x∀y(Ay ⊃ By)) which do seem to have their (admittedly odd, though grammatical, I believe) natural language counterparts (like Everything is such that Alan flies and Something is such that all Anglicans are boring). However, the S following an ιv MUST be an open sentence and, moreover, one with a free variable identical to that under v; i.e., Fιx∀y(Hy ⊃ Gy) and FιxGa are logically ill-formed (and their natural language counterparts are plainly ungrammatical - e.g., The Alan grows flies). The second question concerns the overall value of an LSR grammar with Polish notation. A Russellian notation (without parentheses as symbols, the adding of parentheses being equivalent to adding tree-structure to a string) would not have so much of a problem of blocking mis-Q's (so to speak). Thus, one could have the following rules to cover the same difficulties: S → QS, S → S O S, S → Pvⁿ, Q → ∀v/X— and Q → ∃v/X— (where X ≠ P(v)ⁿ, for n ≥ 0), v → QS, and Q → ιv/P(v)ⁿ— (for n ≥ 0). Examining such an LSR grammar would seem to be indicated for this reason alone, since it is not clear whether the transformations for it would be greatly, or scarcely, different from those for the Polish-notation PS rules of (5).
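The open-sentence requirement of note 16 is mechanically checkable. A sketch in Python - the formula encoding is invented for illustration - tests whether a description ιx(S) is well-formed, i.e., whether x occurs free in S:

    # Illustrative sketch: an iota-description is well-formed only if the
    # variable it binds occurs free in its matrix S (so F(ix)Ga, read
    # 'The Alan-grows flies', is correctly rejected).

    def free_variables(formula):
        if isinstance(formula, str):
            return {formula} if formula in "xyzuvw" else set()
        if formula[0] in ("ALL", "SOME", "IOTA"):       # binders remove their variable
            return free_variables(formula[2]) - {formula[1]}
        return set().union(*(free_variables(part) for part in formula))

    def iota_ok(variable, matrix):
        return variable in free_variables(matrix)

    print(iota_ok("x", ("&", ("M", "x"), ("N", "x", "y"))))   # True: open in x
    print(iota_ok("x", ("G", "a")))                           # False: closed, ill-formed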


Taking the initial P-marker for The whale is angry as (17) requires Q-adjunction and Q-attraction identical to that above in (14), so we obtain

[Tree diagram: the structure obtained after Q-adjunction and Q-attraction; not recoverable from the scan.]

After applying v-permutation to the lowest Pv of (17), Nom-Q identical to the above in (14) is applied, with the subsequent difference that ι goes to The, That, This, etc. (via lexical insertion), parallel to ∀ and ∃ going to various manifestations of quantifiers. After a final v-permutation, the pre-lexical structure obtained and the result of lexical insertion are:


[Tree diagrams: the pre-lexical structure and, after lexical insertions, the structure terminating in THE + whale + BE + angry; not otherwise recoverable from the scan.]

As a final example of the simpler cases, consider The man who is near the window sleeps (where the relative clause is restrictive). The initial P-marker is

(19) [Tree diagram: the initial P-marker for The man who is near the window sleeps, with conjoined predicate structures under O (&) and embedded ι-structures (v₂, v₃) for the man and the window; not fully recoverable from the scan.]

The extracted logical form - viz., Sιx&MxNxιyWy - is, though barbaric, perfectly well-formed (where the logical predicate S... is ...sleeps, Mx is x is a man, Nx... is x is near..., and Wy is y is a window, and where the whole is awkwardly read (paraphrased) in the dialect of the logic class as The x, such that x is a man and x is near the y such that y is a window, sleeps).
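That 'logic-class' reading of nested descriptions can be generated recursively. A Python sketch - the formula encoding and the predicate-template table are invented for illustration:

    # Illustrative sketch: read each iota-description as 'the v such
    # that ...', recursively, to produce the logic-class paraphrase.

    PREDICATES = {"S": "%s sleeps", "M": "%s is a man",
                  "N": "%s is near %s", "W": "%s is a window"}

    def read(term):
        if isinstance(term, tuple) and term[0] == "IOTA":
            _, var, matrix = term
            return "the %s such that %s" % (var, read(matrix))
        if isinstance(term, tuple) and term[0] == "&":
            return " and ".join(read(t) for t in term[1:])
        if isinstance(term, tuple):                        # predication (P, v1, ...)
            pred, args = term[0], term[1:]
            return PREDICATES[pred] % tuple(read(a) for a in args)
        return term                                        # a bare variable

    formula = ("S", ("IOTA", "x", ("&", ("M", "x"),
                     ("N", "x", ("IOTA", "y", ("W", "y"))))))
    print(read(formula))
    # the x such that x is a man and x is near the y such that y is a window sleeps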

Transformations of the type mentioned above - viz., O-deletion, Q-adjunction, Q-attraction, v-permutations, etc. (plus analogous new ones) - apply cyclically to the lowest Q + S structures of (19) in two cycles (once for each embedded Q-structure) until a structure approximating an English order is reached:

(21) [Tree diagram: the intermediate structure; recoverable terminals include the window under v₃; not otherwise recoverable from the scan.]

where the first-cycle lexical insertions are already completed under v₃.¹⁷ Lexical insertion (completing the second cycle) on (21) consists of Rel + IT manifested as who (or, alternatively, such that it (or he), which, or that), ι as THE, N as man, and Prep as near. The third cycle is exceedingly brief, paralleling that for a simple singular like John sleeps - viz., v-permutations, deletions, and lexical insertion - to yield:

[Tree diagram: the derived structure for The man who is near the window sleeps; recoverable terminals include the window; not otherwise recoverable from the scan.]

¹⁷ The structure under v₃, that resulting in the string the window, is attached under P₁ (under P₂) and S₂ (via a transformation called P-collapse) in order to portray the change of the predicate from a two-place predicate (amounting to) x is near y to a one-place predicate (amounting to) x is near-the-window. The full generation of this example is contained in Peterson (1970). The interesting transformations are those which result in the S subscripted S₂ and the P subscripted P₂. (The subscripts are, of course, not part of the transformations and are only here to increase comprehensibility.) They result from a coordination reduction rule (similar to Postal's; see (15) and (16) of Postal, 1968a, 5). The pertinent step is after P-collapse and before Q-adjunction on the second cycle, as follows: [Tree diagrams: the structures before and after coordination reduction (Postal), plus identity reduction under S₁ and S₂; not recoverable from the scan.] (where P₁ is a NEW node and v₂ of S₂ is the result of identity reduction). After Q-adjunction and Q-attraction (placing Q under the v of S₂), O-deletion and a v-permutation occur to bring the v under S₁ (the so-called 'subject') into initial position.

If the O-constituent had not been deleted but shifted between P₃ and P₄, then a structure appropriate to That (or: The thing, The one) such that it (or: which, who) is a man and is near the window would be obtained. However, to continue deriving the structure for The man who is near the window sleeps, the following transformations apply next: P-embed (like Nom-Q, effecting a de-verbalizing, here of P₂)

[Tree diagram: the structure after P-embed; not recoverable from the scan.]

Nom-Q (deleting BE of A) plus Rel-advancement

[Tree diagram: the resulting structure; not recoverable from the scan.]

(22) [Tree diagram: the final derived structure; recoverable terminals include THE, near, the, window, and sleeps.]

The crucial step above is O-deletion, which signals the then mandatory P-embed, Nom-Q, and Rel-advancement. This series of transformations changes the whole v-structure (covering S₂) from one underlying That who is a man and is near the window (and similar alternatives) to one underlying The man who is near the window; i.e., moving Rel + IT (manifested as who) along the string to a position BETWEEN the terms for man and for is near the window involves deleting the term for and and (apparently) replacing it with that for who (though the better-motivated transformation achieves this by first deleting and and then adjoining the structure for man to the Q). Though perhaps bizarre at first glance, these transformations are standard for the transformations leading up to ordinary cases of modification; i.e., v-phrases like The white man are derived from a structure for The man who is white (similar to The man who is near the window), which in turn is derived from structures for That (or: the thing, or the one) who is a man and is white. (The modification transformation is signaled by the absence of A; i.e., when there is a conjunctive predicate structure adjoined to a Q (which is dominated by a v) but NO A is present to generate relative pronouns.) (All of these remarks pertain to 'conjunctive' modifiers, those deriving from underlying logical conjunctions. Non-conjunctive modifiers (what Parsons, 1970, for one, calls non-predicative modifiers) deserve a different treatment, one which is not clear now. For attributive (non-conjunctive) adjective modifiers, like adverbs that are genuine predicate modifiers (e.g., slowly), doubtless derive from logical forms whose nature is not well understood today. Cf. Clark, 1970, Parsons, 1970, Thomason, 1970, Stalnaker, 1970, Lakoff, 1970b, 1970c, and Wallace, 1971.)

At least three kinds of considerations must be encompassed within the further developments of this LSR grammar: FIRST, its extension to the more obvious grammatical structures that it must be able to generate (viz., the structures underlying English expressions for higher-order and n-place logical predicates, mixed-order predicates (e.g., Sincerity frightens John), and full sentential nominalizations, as well as sentential complements of verbs in various positions in the logical subjects of many-place predicates - not to mention the simpler (?) structures for negation, tense, auxiliaries, modals, etc.); SECOND, the addition of procedures to actually represent semantical concepts and relations (synonymy, paraphrase, anomaly, presupposition, analyticity, etc.); and THIRD, its extension to the generation (or account) of syntactic-like aspects of semantical (lexical) structure.¹⁸ A judgment of the likely success of these three developments (for this particular LSR grammar) should be a useful guide for judging the overall prospects for LSR grammars in general.

¹⁸ For example, McCawley's examples (say, "x kills y" with the structure of "x causes (y to become (not (alive)))") or structures of the type discussed by Bierwisch (1967). That it is not implausible to expect some success at this third task is revealed in the similarity of McCawley's quests to some of the aims of conceptual analysis in philosophical inquiry. The various concepts and problems of definition, explication, and explanation (from Locke's nominal vs. real essence to Carnap's notion of explication) immediately spring to mind as part of what the inquiry into 'lexical' structure is concerned with. So much so that the so-called paradox of analysis (a fundamental epistemological dilemma with antecedents as early as Plato's Meno) might well turn out to be the ultimate methodological problem for any grammatical theory which (as I believe it must) includes methods for semantic representation.
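By way of illustration of this third task, here is a Python sketch of matching McCawley's decomposition and replacing it with the single item kill. The pattern encoding and matcher are invented; nothing here is claimed to be McCawley's own formalism:

    # Illustrative sketch: match 'x causes (y to become (not (alive)))'
    # and lexicalize it as KILL(x, y).

    KILL_PATTERN = ("CAUSE", "x", ("BECOME", ("NOT", ("ALIVE", "y"))))

    def matches(structure, pattern):
        if isinstance(pattern, str):
            if pattern in ("x", "y"):            # pattern variables match any atom
                return isinstance(structure, str)
            return structure == pattern          # operator constants must match exactly
        return (isinstance(structure, tuple)
                and len(structure) == len(pattern)
                and all(matches(s, p) for s, p in zip(structure, pattern)))

    def lexicalize(structure):
        if matches(structure, KILL_PATTERN):
            victim = structure[2][1][1][1]       # the y inside (ALIVE, y)
            return ("KILL", structure[1], victim)
        return structure

    s = ("CAUSE", "john", ("BECOME", ("NOT", ("ALIVE", "harry"))))
    print(lexicalize(s))    # ('KILL', 'john', 'harry')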

BIBLIOGRAPHY

Aarsleff, H.
1970 "The History of Linguistics and Professor Chomsky", Language, 46(3), 570-585.
Alston, W. P.
1964 Philosophy of Language (Prentice-Hall).
Bach, E. and R. Harms
1968 Universals in Linguistic Theory (Holt, Rinehart, and Winston).
Belnap, N.
1970 "Conditional Assertion and Restricted Quantification", Nous, IV, 1-12.
Bierwisch, M.
1967 "Some Semantic Universals of German Adjectivals", Foundations of Language, 3(1), 1-36.
Bohnert, H. and P. Backer
1967 "Automatic English-to-Logic Translation in a Simplified Model" (IBM Research Paper, RC-1744, IBM-Watson Research Center, Yorktown Heights, N.Y.).
Carnes, R. D.
1970 "Grammatical Transformation and Logical Deduction", in Proceedings of the Conference on Linguistics, R. Oehmke and R. Wachal, eds. (University of Iowa), 235-250.
Caton, Charles E.
1969 "Katzian Semantics Renovated" (Philosophy Department (xerox), University of Illinois). Forthcoming in Philosophical Linguistics 4.
Chapin, Paul G.
1967 On the Syntax of Word-Derivation in English (Information System Language Studies Number Sixteen (MTP-68), The MITRE Corporation, Bedford, Mass.).
1970 "What's in a Word? Some Considerations in Lexicological Theory" (read to the Linguistic Society of America, Washington, D.C.).
Chomsky, N.
1957 Syntactic Structures (Mouton).
1961 "On the Notion 'Rule of Grammar'", in Structure of Language and its Mathematical Aspects, Proceedings of Symposia in Applied Mathematics, Vol. XII, American Mathematical Society. (Also in Fodor and Katz, 1964.)
1965 Aspects of the Theory of Syntax (MIT Press).
1966 Cartesian Linguistics (Harper & Row).


1968 Language and Mind (Harcourt, Brace and World).
1969 "Deep Structure, Surface Structure, and Semantic Interpretation" (Indiana University Linguistics Club, mimeographed).
1970a "Remarks on Nominalization", Chapter 12 of Jacobs and Rosenbaum.
1970b "Some Empirical Issues in the Theory of Transformational Grammar" (Indiana University Linguistics Club, mimeographed). [1969, 1970a, and b are collected in Studies on Semantics in Generative Grammar (Mouton, 1972).]
1971a "An Extended Standard Theory of Syntax" (Forum Lecture, Linguistics Institute, Buffalo).
1971b Knowledge and Freedom: The Russell Lectures (Harper & Row).
Chomsky, N. and M. Halle
1968 The Sound Pattern of English (Harper and Row).
Church, A.
1951 "The Need for Abstract Entities in Semantic Analysis", Proceedings of the American Academy of Arts and Sciences, Vol. 80, 1. (Also in Fodor and Katz, 1964.)
Clark, R.
1970 "Concerning the Logic of Predicate Modifiers", Nous, IV(4), 311-335.
Davidson, D.
1968 "The Logical Form of Action Sentences", in N. Rescher, ed., The Logic of Decision and Action (University of Pittsburgh Press), 81-103.
Donnellan, K.
1966 "Reference and Definite Descriptions", Philosophical Review, 75, 281-304.
1970 "Proper Names and Identifying Descriptions", Synthese, 21, 335-358.
Feigenbaum, E. A. and J. Feldman (eds.)
1963 Computers and Thought (McGraw-Hill).
Fillmore, C.
1968 "The Case for Case", in Bach and Harms, 1-88.
Fodor, J. A.
1968 Psychological Explanation (Random House).
1971 "Three Reasons for Not Deriving 'Kill' from 'Cause to Die'", Linguistic Inquiry, 429-438.
Fodor, J. A. and J. J. Katz (eds.)
1964 The Structure of Language (Prentice-Hall).
Frege, G. (see Geach and Black, 1960).
Geach, P. and G. Anscombe
1961 Three Philosophers: Aristotle, Aquinas, Frege (Basil Blackwell, Oxford).
Geach, P. and M. Black (eds.)
1960 Translations from the Philosophical Writings of Gottlob Frege (Basil Blackwell, Oxford).
Ginsburg, S.
1966 The Mathematical Theory of Context-Free Languages (McGraw-Hill).
Harman, G.
1970 "Deep Structure as Logical Form", Synthese, 21, 275-297.


Heath, P. L.
1967 "Concept", in P. Edwards, ed., The Encyclopedia of Philosophy, Vol. 2 (Macmillan), 177-180.
Householder, Fred W.
1969 "Review of Language and Its Structure by R. Langacker", Language, 45(4), 886-897.
Jacobs, R. A. and P. S. Rosenbaum
1968 English Transformational Grammar (Blaisdell).
1970 (eds.) Readings in English Transformational Grammar (Ginn and Co.).
Katz, J. J.
1964 "Mentalism in Linguistics", Language, 40(2), 124-137.
1966 The Philosophy of Language (Harper & Row).
1967 "Some Remarks on Quine on Analyticity", Journal of Philosophy, LXIV, 36-52.
1970a "Interpretive Semantics vs. Generative Semantics", Foundations of Language, 6(2), 220-259.
1970b "Interpretive Semantics", presented in the Symposium on Semantics and Transformational Grammar (Linguistic Society of America, Washington, D.C.).
Katz, J. J. and J. A. Fodor
1963 "The Structure of a Semantic Theory", Language, 39, 170-210. (Also in Fodor and Katz, 1964.)
Katz, J. J. and P. Postal
1964 An Integrated Theory of Linguistic Description (MIT Press).
Kretzmann, N.
1968 "The Main Thesis of Locke's Semantic Theory", Philosophical Review, LXXVII, 175-196.
Lakoff, G.
1965 "On the Nature of Syntactic Irregularity", Report No. NSF-16 (Harvard Computational Laboratory).
1968 "Instrumental Adverbs and the Concept of Deep Structure", Foundations of Language, 4, 4-29.
1970a "Global Rules", Language, 46, 627-639.
1970b "Adverbs and Opacity: A Reply to Stalnaker" (University of Michigan, mimeographed).
1970c "Adverbs and Modal Operators" (Phonetics Laboratory, University of Michigan, mimeographed).
1971 "Notes on Relevant Entailment" (Linguistics Department, University of Michigan, mimeographed).
Lakoff, G. and J. Ross
1967 "Is Deep Structure Necessary?" (mimeographed memorandum, MIT).
Langacker, R. W.
1967 Language and Its Structure (Harcourt, Brace, and World).
1969 "Mirror Image Rules II: Lexicon and Phonology", Language, 45(4), 844-862.
Lefkowitz, R.
1971 "The Syncategorematicity of 'Good'" (Linguistics Department, Syracuse University, mimeographed).


Locke, John
1959 An Essay Concerning Human Understanding (A. C. Fraser, ed., Dover, two volumes).
McCawley, James
1967 "Meaning and the Description of Languages", Kotoba no uchu, Vol. 2, Nos. 9 (10-18), 10 (38-48), and 11 (51-57).
1968a "Lexical Insertion in a Transformational Grammar without Deep Structure", in Papers of the Chicago Linguistic Society 1968 Regional Conference, B. Darden, C. Bailey, and A. Davison, eds. (Department of Linguistics, University of Chicago), 71-80.
1968b "Where Do Noun Phrases Come From?", a revised and extended version (xeroxed) of the paper of the same title in Jacobs and Rosenbaum (1970).
Parsons, T.
1970 "Some Problems Concerning the Logic of Grammatical Modifiers", Synthese, 21, 320-334.
Peterson, P. L.
1968a "Concepts and Meanings", Proceedings of the XIVth International Philosophy Congress, Vol. III (Wien, Herder), 469-476.
1968b "Concepts and Words", read at the Eastern Division Meetings of the American Philosophical Association, Washington, D.C. (Abstract in Journal of Philosophy, November 7, 1968, LXV(21), 714-715.)
1968c "Translation and Synonymy: Rosenberg on Quine", Inquiry, 11(4), 410-414.
1969a "Theory and Meaning", read to the Canadian Philosophical Association, York University, Toronto.
1969b "One Class of Logically Oriented Grammars for Semantic Representation", read to the Linguistic Society of America (University of Illinois).
1969c "The Uses of Language Research", Progress Report, Syracuse University; Part VI of Peterson, Carnes, and Reid (1970).
1970 "G(SOUL): A Logically Oriented Grammar for Representing Linguistic Meaning", in Proceedings of the Conference on Linguistics, R. Oehmke and R. Wachal, eds. (University of Iowa), 163-196.
1971a "Relation Names", read to the Western Division Meetings, American Philosophical Association, Chicago.
1971b "Grelling's Paradox: Some Remarks on the Grammar of its Logic", read to the 4th International Congress for Logic, Methodology, and the Philosophy of Science, Bucharest.
Peterson, P. L. and R. D. Carnes
1968 "Semantic Representation", an extract from Peterson, Carnes, and Reid (1969), circulated separately (Philosophy Department, Syracuse University).
Peterson, P. L., R. D. Carnes, and I. R. Reid
1969 "Semantics and Grammatical Methods", Section I, Volume I of Large Scale Information Processing System, Technical Report RADC-TR-68-401, Rome Air Development Center.
1970 "Semantics and Computational Linguistics", Section 1, Volume I of Large Scale Information Processing Systems, Technical Report RADC-TR-70-80, Rome Air Development Center.


Popper, K.
1968 "On the Theory of the Objective Mind", Proceedings of the XIVth International Philosophy Congress, Vol. 1 (Wien, Herder), 25-53.
Postal, P.
1968a "Coordination Reduction", IBM Research Paper, RC 2061 (Yorktown Heights, N.Y.).
1968b "On Coreferential Complement Subject Deletion", IBM Research Paper, RC 2252 (Yorktown Heights, N.Y.).
Quine, W. V.
1953 From a Logical Point of View (Harvard University Press).
1960 Word and Object (MIT Press).
1969a "Ontological Relativity", in his Ontological Relativity and Other Essays (Columbia University Press).
1969b "Replies", in Words and Objections: Essays on the Work of W. V. Quine, D. Davidson and J. Hintikka, eds. (Humanities Press).
1970 Philosophy of Logic (Prentice-Hall).
Rosenberg, Jay F.
1967 "Synonymy and the Epistemology of Linguistics", Inquiry, 10(4), 405-420.
Ross, J. R.
1966 "Adjectives as Noun Phrases" (MIT, mimeographed).
1967 "On the Cyclic Nature of English Pronominalization", in To Honor Roman Jakobson (Mouton).
Russell, Bertrand
1957 "Mr. Strawson on Referring", Mind, LXVI. (Reprinted in his My Philosophical Development, 1959.)
Simmons, R. F.
1970 "Natural Language Question-Answering Systems: 1969", Communications of the ACM, 13, 15-30.
Stalnaker, R.
1970 "Notes on Adverbs in Response to Thomason and Lakoff" (University of Illinois, xerox).
Strawson, P. F.
1950 "On Referring", Mind, LIX, 320-344.
1959 Individuals (Methuen).
Thomason, R.
1970 "A Semantic Theory of Adverbs" (Yale University, xerox).
van Fraassen, B.
1966 "Singular Terms, Truth-Value Gaps, and Free Logic", Journal of Philosophy, 63, 481-495.
Vendler, Z.
1967 Linguistics in Philosophy (Cornell University).
1968 Adjectives and Nominalizations (Mouton).
Wallace, J.
1971 "Some Logical Roles of Adverbs", Journal of Philosophy, LXVIII, 690-714.


Wilson, N. L.
1959 "Substances Without Substrata", Review of Metaphysics, 12, 521-539.

INDEX

Aarsleff, 126
Alston, 124-126
Anscombe, 89
Aristotle, 116, 129
Bach, 152
Backer, 159
Belnap, 94
Berkeley, 126, 128
Bierwisch, 178
Black, 87, 153
Bohnert, 159
Bolzano, 148
Brown, 95
Carnap, 105, 178
Carnes, 50, 53, 65, 67, 113, 150
Caton, 40-47, 57, 64, 75, 86, 116
Chapin, 39, 53, 59
Chomsky, 17, 24, 30-33, 35, 49, 53, 57, 59-63, 65, 85, 114, 115, 125, 142, 150, 152, 161
Church, 88, 90
Clark, 108, 110, 178
Davidson, 108
Descartes, 119, 128, 142
Donnellan, 90
Dunn, 90
Feigenbaum, 145
Feldman, 145
Fillmore, 60
Fodor, 34, 59, 60, 144
Frege, 75, 86-92, 106, 115, 116, 141, 148, 152, 153
Geach, 83, 87, 89, 153
Ginsburg, 17
Goodman, 121
Halle, 24, 49
Harman, 83
Heath, 120, 126-130
Householder, 85
Hume, 119
Jacobs, 24
Kant, 92
Katz, 31, 32, 34, 35, 37-41, 43, 55, 57, 61, 63, 75, 92, 104, 106, 114, 115, 117, 124, 125, 138, 140, 141, 150, 153, 155, 162
Kretzmann, 126
Lakoff, 52, 53, 55, 60, 75, 94, 116, 150, 151, 178
Langacker, 59, 85, 150, 153
Lefkowitz, 63
Leibniz, 148
Locke, 119, 125-127, 142, 178
McCawley, 41, 54, 55, 57, 59, 75, 107, 108, 110, 113, 115, 116, 150, 151, 155, 178
Parsons, 108, 110, 178
Peterson, 35, 38, 50, 64, 65, 67, 81-83, 88, 90, 117, 136, 150, 153, 155, 169, 174
Plato, 119, 129, 148, 178
Popper, 148, 149
Postal, 31, 32, 34, 150, 161, 175
Quine, 102, 115, 127, 130-141, 153, 164, 169
Reid, 150
Rosenbaum, 24
Rosenberg, 140, 141
Ross, 24, 52, 53, 55, 75, 116, 150-152
Russell, 76, 86, 102, 105, 143, 170
Simmons, 146
Skinner, 130
Stalnaker, 178
Strawson, 39, 86, 103, 142
Thomason, 178
van Fraassen, 103
Vendler, 150, 169
Wallace, 178
Wilson, 29
Wittgenstein, 105
