JANUA LINGUARUM
STUDIA MEMORIAE NICOLAI VAN WIJK DEDICATA
edenda curat
C. H. VAN SCHOONEVELD
Indiana University

Series Minor, 144

PRINCIPLES AND METHODS OF CONTEMPORARY STRUCTURAL LINGUISTICS

by
JU. D. APRESJAN

translated by
DINA B. CROCKETT
Stanford University

1973
MOUTON
THE HAGUE - PARIS
© Copyright 1973 in The Netherlands. Mouton & Co. N.V., Publishers, The Hague. No part of this book may be translated or reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publishers.
LIBRARY OF CONGRESS CATALOG CARD NUMBER: 72-94441
Printed in Belgium by NICI, Ghent.
FOREWORD
There is an interesting tale about the caliph Omar, the seventh-century conqueror. In one of the cities he took, there were many books. The military commander who found them asked the caliph whether to distribute them to the believers, together with the rest of the loot. Omar's answer was: "If the books say what the Koran says, then they are useless. If they say other things, then they are harmful. So burn them!"

What calls this tale to mind is the fact that the attitude toward structural linguistics was, for a good many years, very much like that of the caliph Omar toward the books he decided to destroy. New notions of structural linguistics were declared harmful, and all the other premises were dismissed as merely new formulations of long known truths. A lot of criticism was directed against de Saussure's view of language as a system of relations and against his conception of linguistics as a formal theory studying objects the existence of which is not directly deducible from observable linguistic facts. These positions of de Saussure's will be explained in detail later on and it will be shown that he was only trying to discover simple and general rules which underlie all linguistic phenomena. For the time being, what should be pointed out is that if de Saussure had not gone beyond the notions and methods which prevailed in linguistics at his time, and if he had not postulated the existence of linguistic forms for which there was no direct evidence in the languages known at that time, he would not have made one of the greatest discoveries in the history of linguistics - the discovery of the laryngeals (for details see pp. 104-107). The existence of laryngeals in a class of Indo-European roots, postulated
by de Saussure on the basis of strictly theoretical considerations of root structure, was confirmed empirically only after his death, with the discovery and decipherment of the Hittite language. As for the second argument against structural linguistics, we shall again refer to a historical fact, though of a somewhat different nature. Vostokov's syntactic theory, as is well known, also dealt with the arrangement of words in sentences. He gave rules for the order of the primary sentence components (the subject and predicate groups) as well as for the arrangement of attributive and complementary words within each of the two groups

NP + VP; VP is sent to the temporary memory, and NP is expanded by the rule NP → A + N. Now N is sent to the temporary memory, and A is expanded by the rule A → D + A. Then A is sent to the temporary memory, so that it now contains three symbols (VP, N, and A), and D is replaced by the symbol xorošo 'well'. The maximum number of symbols a computer has to store in its temporary memory while generating a given sentence is the DEPTH
of this sentence. The DEPTH OF A LANGUAGE is the depth of the deepest sentence in this language. The depth of a sentence can be computed by numbering the branches of each node from right to left at their points of origin, as in Figure 10, and by adding up the occurrences of 1 on every path leading down to a terminal point (cf. the numbers in parentheses). The depth of sentences, and consequently also of languages, is obviously a function of regressive rather than progressive structures, for when progressive structures are expanded the computer does not have to store many symbols in its temporary memory. Hence, Yngve's model can generate sentences with PROGRESSIVE structures of any length. However, it cannot generate sentences with infinitely expanding REGRESSIVE structures, for this would require a temporary memory with unlimited (infinite) capacity, and in every physically realizable computer the capacity of the memory must be finite. The depth of sentences generated by the computer is therefore LIMITED by the capacity of the computer's temporary memory.

After these preliminary remarks we can focus on the more interesting aspects of the IC generative model. Every grammatical model inevitably embodies a certain hypothesis about the actual structure of the mechanism the operation of which it simulates. If one approaches linguistic models from this point of view, then every mechanism in the models must correspond to some psychological or physiological mechanism. At the same time it should be remembered that the mechanisms in models are always simpler than the corresponding mechanisms of the brain and are at best only approximations of the latter (see p. 91). The generative mechanism of Yngve's model consists of four parts. The permanent memory naturally corresponds to the human memory, and the output unit to the human speech organs.
To be consistent, we must assume that the human brain also contains two additional mechanisms which correspond to the temporary memory and the computing register. As it turns out, there are very interesting facts and observations in support of this proposition.
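The two equivalent characterizations of depth given above - the right-to-left numbering of branches, and the maximum load on the temporary memory during top-down, left-to-right generation - can be sketched in a few lines. The tree encoding and the example trees below are illustrative, not Yngve's own examples.

```python
# Trees are nested lists: a string is a terminal symbol, a list is a node
# whose elements are its branches in left-to-right order.

def depth_by_numbering(tree):
    """Number each node's branches from right to left (0, 1, 2, ...) and
    sum those numbers along every root-to-terminal path; the depth of
    the sentence is the maximum such sum."""
    if isinstance(tree, str):
        return 0
    n = len(tree)
    return max((n - 1 - i) + depth_by_numbering(child)
               for i, child in enumerate(tree))

def depth_by_memory(tree):
    """Equivalent definition: expand the tree top-down, left-to-right,
    keeping not-yet-expanded right siblings in a temporary memory (a
    stack); the depth is the maximum number of symbols ever stored."""
    stack, max_load = [tree], 0
    while stack:
        node = stack.pop()                    # current symbol enters the register
        max_load = max(max_load, len(stack))  # the rest wait in temporary memory
        if not isinstance(node, str):
            stack.extend(reversed(node))      # leftmost branch is expanded first
    return max_load

progressive = ["a", ["b", ["c", "d"]]]        # right-branching: depth 1
regressive = [[["a", "b"], "c"], "d"]         # left-branching: depth 3
```

Running both functions on the two example trees shows the asymmetry discussed above: the right-branching structure keeps the temporary memory nearly empty, while each added layer of left-branching raises the depth by one.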
GENERATIVE MODELS
It has been established by psychological experiments that a man processing a given amount of information can hold in his memory only a limited number of units at any given time, more precisely 7 ± 2 random numbers, discrete words, or the like

... → find, complete, ...
... → eat, smoke, ...
12. Nhuman
As noted earlier, the set of PS rules is partially ordered. The relation of partial order is a consequence of the first condition restricting the application of PS rules to strings (p. 233). The end result of the application of PS rules to a string constitutes the BASE OF A KERNEL SENTENCE. (A rigorous definition of this term would be: the base of a kernel sentence is the end result of the application of a sequence of PS rules to a string S.) Thus the application of rules 1, 2, 3, 4, 5 (twice), 6, 7, 8, 9 (twice), 10, 11, 12 (twice), and 15, for instance, can yield the string of symbols The man Ø Past have en find the boy s, which is the base of the sentence The man had found the boys (on sentence bases and paradigms see p. 197). Formally the bases of kernel sentences can be said to be the AXIOMS of a syntactic CALCULUS, and in substance they are the linguistic ANALOGS of elementary situations (see pp. 195-196).

At a certain point in the generative process we get so-called KERNEL TYPES, the further expansion of which leads to the bases of kernel sentences. The following is the system of kernel types for English, as given in several grammars (other systems are also possible, of course):

1. NP + aux + Vi (adv), e.g., John ran (home), The ice fell (on the floor);
2. NP + aux + Vt + NP (adv), e.g., John killed Mary, The heat melted the ice easily;
3. NP + aux + be + NP (adv), e.g., John is a bore;
4. NP + aux + be + adj (adv), e.g., The problem was difficult;
5. NP + aux + Vt + NP + NP (adv), e.g., They built John an office;
6. NP + aux + Vt + prt + NP (adv),7 e.g., They took over the country;
7. NP + aux + Vs + adj (adv), e.g., He felt good.

As this list demonstrates, it is possible to generate by the PS rules a finite set of sentences of the 7-10 simplest types. The derivation of other sentences, from moderately elaborate to highly elaborate ones, is accomplished by the TRANSFORMATIONAL rules, which constitute the second component of transformational grammar. This brings us to another important principle of the transformational calculus. This principle, which we have already referred to earlier, is that it is possible to generate an infinite set of sentences of any productive type, no matter how elaborate, by applying a few simple rules to a few simple syntactic types recursively.

Let us outline some of the more important properties of transformational rules. First, as mentioned above, transformational rules are recursive. Secondly, they allow for the substitution of more than one symbol at a time. Thirdly, they allow transpositions. Fourthly, they are applied to trees of sentences rather than to strings of particular forms. In order to apply a transformation, then, one has to know not only the structure of the given string but also the history of its derivation from the original string, that is, its DERIVATIONAL HISTORY.

Let us consider the sentences On vidit otca semejstva 'he sees the father of the family' and On lišaet doč' nasledstva 'he deprives (his) daughter of (her) inheritance'. Outwardly these sentences appear to have the same structure (both can be represented by the string of symbols Nn V Na Ng). The trees for these sentences, however, are different (Figure 14).
7 Prt stands for particle (prepositional adverb).
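The derivation of a kernel-sentence base by ordered rewriting, as described above, can be sketched as follows. The rule set here is a simplified, illustrative stand-in for the book's numbered rules 1-15 (and the null morpheme Ø is omitted), chosen only so that the derivation terminates in the base discussed above.

```python
# Each nonterminal rewrites to one fixed expansion; repeatedly replacing
# the leftmost nonterminal yields the base of a kernel sentence.
RULES = {
    "S":   ["NP1", "Aux", "V", "NP2"],
    "NP1": ["the", "man"],
    "NP2": ["the", "boy", "s"],
    "Aux": ["Past", "have", "en"],
    "V":   ["find"],
}

def derive(start="S"):
    """Rewrite the leftmost nonterminal until only terminal morphemes
    remain, and return the resulting morpheme string."""
    string = [start]
    while any(sym in RULES for sym in string):
        i = next(j for j, sym in enumerate(string) if sym in RULES)
        string[i:i + 1] = RULES[string[i]]
    return " ".join(string)
```

Calling `derive()` yields the morpheme string `the man Past have en find the boy s`, which later morphophonemic rules would spell out as *The man had found the boys*.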
Fig. 14. [Trees I and II, for On vidit otca semejstva and On lišaet doč' nasledstva respectively, showing their different structures.]
The passive transformation, which yields transforms such as Doč' lišena im nasledstva 'the daughter has been deprived of (her) inheritance by him', can be applied to sentences with a tree such as the one in Figure 14, II, but it cannot be applied to sentences with a tree such as the one in Figure 14, I.

There are ELEMENTARY and COMPOUND transformations. The elementary transformations must be specified in a list (see below). Compound transformations are simply sequences of elementary transformations the application of which yields well-formed sentences (or bases of well-formed sentences) from kernel type sentences. The output of every transformation, whether elementary or compound, is either a TREE, or its equivalent - a BRACKETED representation.

There are OBLIGATORY and OPTIONAL transformations. The obligatory transformations are applied to the bases of kernel sentences and yield kernel sentences. These transformations and the PS rules together comprise the body of rules for forming the objects of the modelled set. Optional transformations can be applied to kernel sentences as well as to their bases, and they yield transforms of varying complexity. They are rules for the conversion of the objects of the modelled set into their equivalents, and they comprise the foundation of the actual transformational calculus. In other respects the typology of the transformations used in generative grammar is essentially like the one presented above on pp. 203-205; the only minor difference is that transformations yielding pro morphemes and zero variants of lexical morphemes are viewed as deletions, and the inverse transformations are viewed as additions.
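The point that a transformation applies only to strings with the right analysis can be sketched in code. The passive rule below follows the familiar schema NP1 - Aux - V - NP2 → NP2 - Aux + be + en - V - by + NP1; the category labels and word groupings are illustrative assumptions, not the book's formulation.

```python
# An analyzed string is a list of (category, words) pairs; a transformation
# first checks the structural analysis and only then rearranges segments.

def passive(analysis):
    """Return the passive transform of an analyzed string, or None when
    the structural analysis does not fit - which is how information about
    the derivation restricts a transformation's domain."""
    cats = [cat for cat, _ in analysis]
    if cats != ["NP", "Aux", "V", "NP"]:
        return None                           # structural analysis fails
    (_, np1), (_, aux), (_, v), (_, np2) = analysis
    return [("NP", np2),
            ("Aux", aux + ["be", "en"]),
            ("V", v),
            ("by+NP", ["by"] + np1)]

base = [("NP", ["the", "man"]), ("Aux", ["Past", "have", "en"]),
        ("V", ["find"]), ("NP", ["the", "boy", "s"])]
transform = passive(base)
```

The transform is the morpheme string `the boy s Past have en be en find by the man`, which morphophonemic rules would surface as *The boys had been found by the man*; an analysis that does not match the schema is simply rejected.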
The following are some transformations which operate in English. The obligatory ones are marked by an asterisk (our source is

... X1 - S - X3
(b) Structural analysis: NPpl, Present, X
    Structural change: X1 - X2 - X3 → X1 - Ø - X3

3. Interrogative:
   Structural analysis:
   (a) NP, C, VP1
   (b) NP, C + M, X
   (c) NP, C + have, X
   (d) NP, C + be, X
   Structural change: X1 - X2 - X3 → X2 - X1 - X3

4. Negation:
   Structural analysis: same as 3.
   Structural change: X1 - X2 - X3 → X1 - X2 + n't - X3

5. Nominalization:8
   (a) Structural analysis: ...
       Structural change: X4 + S, or Ø, replaces X1; ing + X6 replaces X2
   (b) Structural analysis: same as (a).
       Structural change: to + X6 replaces X2
   (c) Structural analysis: same as (a), with V1 instead of VP1.
       Structural change: ing + X6 + of + X1 replaces X2

To illustrate, we shall trace the derivation of the sentence The boys had been found by the man.

8 The productivity and relatively high frequency of this class of transformations in English distinguish it from most Slavic languages.
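In this notation, the structural changes of rules 3 and 4 amount to a permutation and an insertion respectively; a minimal sketch, in which the contents of the segments X1, X2, X3 are illustrative:

```python
# A transformation analyzes a terminal string into numbered segments
# X1, X2, X3 and then permutes or augments them.

def interrogative(x1, x2, x3):
    """Rule 3: X1 - X2 - X3 -> X2 - X1 - X3 (the tensed auxiliary is
    moved in front of the subject)."""
    return [x2, x1, x3]

def negation(x1, x2, x3):
    """Rule 4: X1 - X2 - X3 -> X1 - X2 + n't - X3 (n't is attached to
    the tensed auxiliary)."""
    return [x1, x2 + " + n't", x3]
```

For example, with X1 = `the man`, X2 = `C + have`, X3 = `en find the boy s`, the interrogative rule returns the segments in the order X2, X1, X3, and the negation rule leaves the order intact while expanding X2 to `C + have + n't`.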