
Trends in Linguistics: State-of-the-Art Reports
edited by W. Winter, University of Kiel, Germany

Analogy
Raimo Anttila, University of California, Los Angeles

MOUTON PUBLISHERS THE HAGUE-PARIS-NEW YORK

ISBN 90 279 7525 6
© Copyright 1977 Mouton Publishers, The Hague

No part of this book may be translated or reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publishers. Printed in The Netherlands.

Ahti Pesola in memoriam (1916-1974)

CONTENTS

Preface. Apologia pro analogia sua  1

1 Analogy and linguistics  7
  1.1 The theory of signs  7
  1.2 The modes of reasoning  13
  1.3 Analogical argument  16
  1.4 Language learning and linguistic change  19
  1.5 Analogy and analysis  21
  1.6 Summary and introduction  23

2 Analogy in synchrony  25
  2.1 The Greek spirit  25
  2.2 Sapir's language typology  32
  2.3 Anomaly as regularity  35
  2.4 Analogy, competence, and performance  38
  2.5 Analogy and rules  40
  2.6 Pictures, family resemblances, and similarity  45
  2.7 Analogy, association, and contiguity  52
  2.8 One meaning - one form  55
  2.9 Formative blocks and indeterminacy of segmentation  58

3 Analogical change  65
  3.1 The internal frame of change  65
  3.2 Vantage points and ingredients of classification  66
  3.3 Analogical change and features  69
  3.4 Phonetic analogy versus conceptual analogy  72
  3.5 Kuryłowicz versus Mańczak  76
  3.6 Andersen's typology of language change  80
  3.7 Toward a dynamic theory of language  85

4 Analogy in generative grammar and its wake  87
  4.1 The original revolution  87
  4.2 Appraisal of the revolution  91
  4.3 Ongoing revolution  98
  4.4 Rule inversion  105
  4.5 The state of the art 1976  110

5 Drift  111
  5.1 The rationality of change  111
  5.2 Increase in iconicity  112
  5.3 Esper's experiments and phonaesthetic extraction  116
  5.4 Greenberg's intragenetic mode of dynamic comparison  121
  5.5 Drift and deep language  122
  5.6 Final comments  126

References  128
Appendix to references  143
Index  150

PREFACE

Apologia pro analogia sua

The past fifteen years have witnessed a most curious linguistic bubble. This is/was the "Chomskyan Revolution", or "generative adventure" (Danielsen 1972). The whole episode has no scientific merit (Maher 1975, Linell 1974, Itkonen 1974, Hammarström 1971, 1973, Danielsen 1972, 1973, Makkai 1975, Anttila 1975b; and many others, note particularly the bulk of articles in Makkai and Makkai (eds.) 1975, and Koerner (ed.) 1975). It was another camouflaged token of extreme American structuralism and behaviorism in particular, and as such it represents a big step backwards. The movement also outdid the negative aspects of the Neogrammarians, who instituted the following false steps (from Wilbur 1972):

1. Deliberate misinterpretation.
2. Emotional tone (nuova fede).
3. Formal position eliminates the need for explanation.
4. Rhetorical bows to the past.
5. Observation and description called explanation.

Since the goal of Vennemann and Wilbur (1972) is to make Schuchardt and the transformationalists the good guys, Wilbur does not point out that these false steps apply equally well to the transformationalists (but see Anttila 1975d). The only difference would be that for the Neogrammarians point 3 applies to sound change, for the transformationalists to analogy. General linguistics is not the only field where a step backwards has been decreed progress. Maher (1975 [and elsewhere]) points out that Gellner's analysis of Oxford language philosophy makes the generative revolution déjà vu (to quote Gellner 1968: 212; quoted in Maher 1975): . . . a general recipe for a successful revolution. The recipe runs: take a viable system (social, of beliefs, or philosophical, etc. consisting of elements, a, b, c, etc.). Abolish or invert a, b, c, eliminating traces of them and of their past presence. Of course the New System, consisting of not-a, not-b, etc., will be highly revolutionary, novel, etc., and give you the réclame of a great innovator, revolutionary, etc.

2 However, the trouble so far is that not-a, not-b, etc., may form a system which is not merely revolutionary, but also absurd. No matter. If you have followed the recipe correctly and eliminated traces and memories of a, b, c, etc. (at least from your own mind), you can now gradually reintroduce them, and slowly finish again with the original system a, b, c, etc. You now have both a viable system and the aura of a just innovator and revolutionary. In its earlier stages, it was claimed to be revolutionary; more recently, the stress has been on continuity with the past. Novel ideas had to be claimed to justify the introduction of such novel practices: continuity is now claimed mainly in order to avoid the drawing up of balance sheets of how much the revolution has achieved. If it is merely a continuation, it is at least no worse than philosophies which preceded it, it is claimed, and there is no call for abrogating it if it has failed - as it has - to deliver the promised goods.

Gellner delineates, mutatis mutandis, the history of generative historical linguistics (about 1962-72). It is no wonder, then, that among this kind of revolutionary chaos analogy did not fare too well, since it would have supported truly mental linguistics against mere formalisms on paper (see Anttila 1975c, 1974b). Whatever the exact details are, it is clear that around 1960 analogy became a rather illegitimate topic in linguistics, and in fact, it was expressly banned by generative grammar which was believed to be able to replace it with more adequate notions. Analogy lived on, however, under new assumed names, and survived the pogrom. Then about 1970 it started to retrieve its old rights, even if the traditional names were not always permitted. After such neglect analogy was of course bound to come back as a legitimate area of study. The first longer independent treatment is Vallini (1972), for the period up to and through Saussure. Then both Best and Esper published a history of the notion, reaching the 1960s (both 1973). Neither of them discusses the 1960s, however, except for a few stray references to generative grammar (Best [27-8] suggests Chomsky's competence [1965: 4, 10] as equal to Brugmann's analogy [1885: 79-81], and Esper rejects 'generative' analogy). Best ends his treatment with Manczak's work of the late 1950s (which does continue through the 1960s also), and Esper signs off with Hockett's ideas from the same period, which, again, is strengthened by Hockett's state of the art survey of 1968. In both cases the 1950s run off into the 1960s quite naturally without the beating of drums that became so characteristic of the 1960s. The fact that Best refers a few times to Leed (1970) and acknowledges Winter (1969) is of course not enough to give any proper account of the 1960s. Otherwise Best and Esper are on the whole supplementary and provide valuable references to the past, and such background will not be repeated in this Forschungsbericht. Best's and Esper's books necessitated a new strategy for the present

survey, quite different from the one originally planned, since any overlapping should now be avoided as much as possible. Traces of this last-minute shift abound in the text, I am sure, but I am convinced that the report should go out now. Due to this change in the scene, the cut forced on me was thus naturally the past decade, 1963-1973. Some overlapping will of course result, since this decade must be tied to tradition. Taking the years 1963-1973 under scrutiny is in fact what is most needed today, because this is the era of the "generative adventure". The decade 1963-1973 contains in fact two turning points that are also relevant for the recent history of analogy. The first one was the year 1965 with Jakobson's "Quest", Kiparsky's dissertation, and Chomsky's Aspects. Aspects was the first to go, and Kiparsky (1965) lasted till about 1970, but the line opened by Jakobson has been productive (e.g. Andersen's and Shapiro's output). The year 1970 is the turning point of transformational-generative historical linguistics back to tradition from the difficulties of their own making. This is also the time when Wandruszka revived a kind of modern Sapirian spirit, as it were (e.g. Wandruszka 1971). Also around 1970 morphology made its comeback (e.g. Wurzel 1970), followed by stricter requirements of phonological concreteness, both factors that automatically imply a strengthening of the role of analogy. The position that the morphophoneme is a fiction (Hockett 1961) is also getting support (Karlsson 1974), and independence of allomorphs necessitates analogy (2). Was the generative revolution perhaps beneficial for linguistics in that it stirred the waters and showed what natural language is not? This is indeed a very popular argument, and one hears it constantly from earlier followers of transformational-generative grammar. But also nonpartisans think the same. Thus e.g. both John Ross and Francis Dinneen supported such a view at the Georgetown Round Table Meeting, March 1974. Maher (1975) points out that this is the felix culpa argument of St. Augustine's: Adam's Original Sin was a 'happy one' since it permitted us to be saved by a Savior. In linguistics, however, the situation is worse: the sinning and the saving outfit is one and the same, representing thus the typical revolutionary aspect quoted above. In social revolutions all current ideas must be stamped for approval by the headquarters. But of course there always were scores of linguists, taught by the tradition, who knew within the limits of their era "what language was about", to put it optimistically. Witness the greatest linguist ever, Edward Sapir. The present survey is not a study of the generative claims in historical linguistics, which still awaits a monographic summing up, or critique. This is a state of the art report on analogy in general over those years, and represents thus a much more modest account. The main service of such an

account is that it draws together a great number of works that have sprung up independently from the inadequacies of the 1960s. Apparently one of the reasons for the forceful impact of the generative school is its well regulated inbreeding: they refer to each other even in cases that have always been known. This has led to the misunderstanding, among the tyros in the field but also others, that everything worth while was discovered by generative grammar. In contrast, most of the more recent studies that defend analogy (and that are presented in the subsequent chapters) do it alone without the support of others. Thus a survey like the present one, which combines such stray efforts, even if rather superficially, should be of value, just at a time when more ambitious critiques are perhaps in the making. I myself have lived in the middle of the scene, and I learned to know the difficulties of one who does not accept the fashionable dogma of the day. Beyond my original training (philology, American structuralism, and Indo-European linguistics) I was forced to enter the tradition. The evidence was clear: I had to become an analogist to keep my scholarly independence and to do what is expected by every scientist. This was the position I have defended since about 1965, although it took some time before it came out in print. Anttila (1969a) is a state of the art survey concentrating on 1965-1969 with phonological emphasis. It did take issue with analogy also, and turned out to be predictive in two ways: 1) Analogy has gone the way foreshadowed therein, and 2) Henning Andersen's work has turned out to be most significant for historical linguistics. Analogy was also the topic of Anttila (1969b; Yale University dissertation 1966), and in 1970 I defended the notion in various lectures. By this time the in-group in generative grammar (4.3) was moving toward the same traditional position I had adopted from the tradition itself. In this process I was told by two 'semi-official' members that I should have infiltrated them so that I would have been listened to. This shows once again the strong group cohesion among generativists, and indeed, the new line of development was reserved largely for members only (cf. Anttila 1975b). Analogy is also the central principle in Anttila (1972), and this has been noted (Dressler and Grosu 1972: 58, fn. 103; Hint 1973: 279). That particular book was in production at the same time as Householder (1971) and Wandruszka (1971), and thus it should be very significant that all reach basically the same position independently. And below many more names will be added to the list. It can of course be expected that those who have grown accustomed to the recent hegemony of transformational-generative grammar will object to the 'slight' attention I give to generative historical linguistics in this discussion of analogy. I try to give a balanced view, and in any case, generative literature is now better known than other varieties, and thus one feature

5 of this balance would indeed be devoting more space to the neglected, more fruitful aspects. The requirement of treating a non-existent needle in the haystack would be curious, but in perfect agreement with a social revolution. Note what Gellner (1968: 89) said about the Wittgensteinians, and what Maher (1975) shows to apply to the "generative" control of the establishment as well: . . . The needle has not turned up. But the burrowing in this haystack has become habitual and established, and a cessation of it would leave many men in a bewildered state. Some have no other skills. So, some alternative positions have emerged and are to be found: there may be needles in the haystack. Haystacks are interesting. We like hay.

Against my optimistic expectations it is still (1975) true that the very persons who banned analogy in the 1960s are now generally expected to teach analogy to others, or to reinvent it along with traditional historical linguistics. It is further true that research grants tend to go to the same persons for doing exactly the things they earlier denied and derided. Thus there is in fact more reason to present this survey now than I thought two years ago. During the academic year 1974-75 I have attended many lectures (and heard about others) that return to morphological solutions against generative phonology. This means new support for the word-and-paradigm model and analogy, whatever terms are used. This extra 'observation time' extends now the coverage of the survey to the spring of 1975. But it still must be remembered that the bibliography is a selection only, especially with reference to pre-1963 works. An area which has been excluded is the study of metaphor and semantic change in general, as well as the Western literary tradition, see e.g. Lange (1968). Particularly interesting would have been cross-cultural comparisons, e.g. Jenner (1968) and Gerow (1971), because these would have given evidence for Thumb's "other times, other analogies" (1911), or Wundt's "other times, other associations" (1901) (Esper 75, 79, 202, fn. 62). Analogical argument in historical explanation is likewise omitted. Further excluded is the rich Thomist tradition on analogy, see e.g. Robins (1951), Lyttkens (1952), Burrell (1973), and Robillard (1963). Only a few philosophers of science will be referred to. Analogical mechanisms in dialect geography will not be directly treated, but see Leed (1970) and particularly Markey (1973). As Walburga von Raffler Engel points out (private communication), all non-linguistic behavior is analogical (for the reasons, see 2.6-7), and so are all levels of linguistic analysis (1). Children's secret languages are all analogical, and so is glossolalia, as it is phonotactically and intonationally based on the native language (v. Raffler Engel, private

communication). These aspects will not be treated. Space limitations necessitate the exclusion of a great number of 'real', first-class linguists, whose output contains proper treatment of analogy in the service of other problems, not specifically problems of analogy. Names like Dwight Bolinger and J. Peter Maher come to mind, but scores of other names would have a legitimate claim for recognition. I owe thanks to many colleagues who have encouraged me in this undertaking at some time or other. Particular mention should be made of Johannes Bechert, Wolfgang Dressler, Esa Itkonen, E. F. K. Koerner, J. Peter Maher, and Martti Nyman. But here belong also many of the persons whose works are mentioned in this study. Without the authors providing me with their texts I could not have given such an up-to-date survey. My particular gratitude goes to Henning Andersen who has let me see even beyond works in production or in press (3.5). The first MS. for this survey was finished early in 1974 and distributed to many of the authors discussed and to some other experts as University of Helsinki Department of General Linguistics Dress Rehearsals No. 1. I express hereby my sincere thanks to the Department. This enabled me to check the accuracy of the text, and to expand the references throughout, although the bulk remains the same. All additions also verified the usefulness of the divisions adopted, since they fitted rather well into the format. The two most notable additions in this final version are the works of Kenneth L. Pike and Uhlan V. Slagle, the former omitted by oversight, the latter not available at the time of the preliminary version. Note also how Pike's work (relevant for us here) started to be published in 1963, making thus the original cut 1963-1973 even more relevant. The reaction of both authors to my work has greatly encouraged me in this undertaking. Without friendly prodding and advice by Werner Winter and Swantje Koch this book would still be in my head only, in much vaguer form. Assistance by Warren Brewer has also been helpful.

1 ANALOGY AND LINGUISTICS

1.1 The theory of signs

American structuralism was able to crystallize the following definition of language, which went even beyond their own practice (modified from Sturtevant 1947: 3):

A language is a system of arbitrary vocal symbols by which the members of a speech community (social group) cooperate and interact (communicate).

Whatever inadequacies this definition has, it at least recognizes language as a system of communication, a fact forgotten often by the structuralists and expressly denied by the transformationalists in their heyday (till about 1970 at least). Language is systematic (rule-governed, non-random) and systemic (the total is divided into subsystems, a basic feature in all functional systems). The attribute vocal excludes writing, and arbitrary symbol emphasizes the make-up of the linguistic sign, a colligation of sound and meaning. The linguistic-sign aspect has been neglected in modern linguistics, although it is essential to the very structure and functioning of language. We will briefly delineate some aspects of the theory of signs, following basically Peirce (1955: 98-119), although there is already a development of linguistic thought along that line (Jakobson 1965, Andersen 1966 and later, Shapiro 1969, Anttila 1972). It is Jakobson who revived Peirce's ideas by bringing them into the pale of linguistics, but the main credit for the linguistic momentum goes to Andersen, who has now brought these ideas to fuller fruition in many sectors of language, as we will see below. Peirce saw logic as semiotic, part of which was the theory of signs. Logic alias semiotic is the quasinecessary, or formal, doctrine of signs. A sign is something which stands for something in some respect or capacity, i.e. a sign of a given referent (thing meant) elicits at least some of the responses elicited by the referent. Peirce's classification of signs on various axes lists ten types, but the most important core for us is the one according to which the relation of the sign to its object consists in the sign's having some character in itself, or in some existential relation to that object, or in its relation to an interprétant. This classification contains three

8 types of signs: An icon expresses mainly formal, factual similarity between meaning and the meaning carrier. There is physical resemblance between the shape of the sign and the referent. Thus a photo can be an icon of what it represents. Better known linguistic icons are the onomatopoeia in which simple qualities of the meaning are contained in the form, e.g. English peep, thump, gulp, cuckoo, etc. An index expresses mainly factual, existential contiguity between meaning and form. It is based on psychological association and/or physical juxtaposition of different events and things. Best known is perhaps the indexical cause-and-effect relation: smoke is a sign of fire, footsteps in snow indicate a walker, etc. The index features of language include relational concepts of time and place, i.e. deictic elements or shifters, e.g. now, here, I, this. Thus also "indefinite" pronouns are very "definite" indexes, of which two varieties are particularly important in logic, viz. "universal selectives" such as any, every, all, none, whoever, etc., and "particular selectives" like some, a, certain, etc. Prepositions and prepositional phrases are indexes, as the concept of time and place already implied. A symbol is based on a learned conventional relation, ascribed contiguity or colligation, between form and meaning. The relation is said to be completely arbitrary, and this is the characteristic of the linguistic sign as especially stressed by Saussure. All symbols are arbitrary in this way, and it comes out easily in comparing different languages, e.g. horse, Pferd, cheval, hevonen, etc. Actually, owing to the structure of natural language as a mechanism connecting sound to meaning, the connection between the concept and the sound image is necessary (Benveniste 1939, Bolinger 1949, Anttila 1972: 13-14). What is arbitrary is that a particular sign be connected with a particular referent. The connection itself is necessary, since we do not and cannot deal with things in themselves; we must deal with signs pointing toward things. The shape of the linguistic sign is arbitrary, and this arbitrariness is outside the nucleus of the sign itself — it is in the outer shape and the semantic range, as indicated by a comparison through different languages (thus also Wandruszka 1971: 30). But when we look at linguistic signs in terms of one language, we can see that there is a tendency for the speakers to assume complete sameness between linguistic form and reality (thus also Wandruszka 16); the sign captures and controls reality; in fact, it is reality in the extreme case (nomen est omen, etc.). This is one source of conflict between a naive speaker and the linguist, and this conflict has greatly obscured the treatment of analogy, up to our day, since most generative grammars are linguists' grammars, not speakers' (they do not treat natural language as natural).

9 At this point it is profitable to add Peirce's own synopsis of the sign types (1955: 104): An icon is a sign which would possess the character which renders it significant, even though its object had no existence; such as a lead-pencil streak as representing a geometrical line. An index is a sign which would, at once, lose the character which makes it a sign if its object were removed, but would not lose that character if there were no interprétant. Such, for instance, is a piece of mould with a bullethole in it as sign of a shot; for without the shot there would have been no hole; but there is a hole there, whether anybody has the sense to attribute it to a shot or not. A symbol is a sign which would lose the character which renders it a sign if there were no interprétant. Such is any utterance of speech which signifies what it does only by virtue of its being understood to have that signification.

and consider further (111): Icons and indices assert nothing. If an icon could be interpreted by a sentence, that sentence must be in a "potential mood", that is, it would merely say, "Suppose a figure has three sides", etc. Were an index so interpreted, the mood must be imperative, or exclamatory, as "See there! " or "Look out! " But . . . [symbols] are, by nature, in the "indicative", or, as it should be called, the declarative mood.

Both quotes imply much of the difficulty we have in linguistic analysis. Let us here remember only the semiotic divisions of the relation of signs to their referents (semantic), to other signs in the code (syntactic), and to their users (pragmatic). Sociolinguistics and related disciplines are now studying the pragmatic side, whereas linguistics concentrates on the syntactic aspect (in the semiotic sense, not just transformational-syntactic). In the study of analogy all are necessary and an explicit use of the semiotic classification adds much clarity, even if not necessarily novelty. These aspects have been neglected in the study of linguistic analogy much to the detriment of the field. The three sign types so far delineated are just cardinal footholds in the hierarchy of signs. Photos and animal cries are basically indexes of the objects and the source of the cry (cf. pronunciation indexes in dialect geography). The best signs are mixtures of all the ingredients, as can be verified in poetry. Onomatopoeic words are often thought to be completely iconic, but again a comparison between different languages brings out their symbolic value also. Thus, e.g. English pigs go oink, the Finnish ones röh or nöf. The cock goes cock-a-doodle-doo in English, kikeriki in German and coquerico or cocorico in French, whereas in Finnish it utters a well-formed sentence kukko kiekuu 'the cock crows'. Thus every linguistic sign is a symbol, onomatopoeia being symbolic icons. And note that symbols are often felt as icons. The same kind of mixture is true of art also, which gives a concrete example of the range

from iconic to symbolic representation, the exact admixture being determined by the age and culture. Medieval heraldry shows clearly such conventions. A form of heraldry has survived in modern trademarks which blend all the three sign elements. A frequent form of design is one in which the initial letters of the name of the company are shaped so that they form a picture of the product of the company, its main tools, or its raw materials (all these standing in indexical relation to the company). Thus the abbreviation F of Finnair builds the company logo as shown in Fig. 1.

[Figure omitted: the Finnair logo.]
Fig. 1 A company logo juxtaposed with the company name. The logo shows an iconic picture of the symbolic letter standing in indexical relation to the total name.

Here the letter represents iconically an airplane. The logo is preferably white and blue showing thus a further connection to Finland. The fact that F is /f/ is totally symbolic, but the fact that it is part of Finnair is indexical, of course (further examples in Anttila 1972: 15-16). Trademarks bring out such a gradience much better than linguistic signs, which can, however, be exactly the same in nature. Since analogy hinges on similarity we have to have a closer look at icons. Icon means 'picture' and, as everybody knows, there are many kinds of pictures; Peirce in fact classified them further (105): Those which partake of simple qualities ... are images; those which represent the relations ... of the parts of one thing by analogous relations in their own parts, are diagrams; those which represent the representative character of a representamen [sign] by representing a parallelism in something else, are metaphors.

The icons we saw above were thus images. A diagram is a symbolic icon in that one needs a symbolic key to read it off. An icon of relation (diagram) like two rectangles representing the coffee production of two countries tells the relative amounts, but one must otherwise know it is coffee we are comparing. But the relations among the referents are exactly mirrored among the forms. Language and linguistics rely heavily on diagrams and metaphors. Language is frequently referred to as a system of relations, and linguistic units tend to be taken rather as endpoints in various relations than as entities in themselves.

In fact, linguistic texts and grammars abound in diagrams of all sorts which show relationships of various kinds, e.g. distinctive feature matrices (the relation of d to ð is the same as that of g to ɣ) or tables of semantic components (gander : gosling = ram : lamb). This may sound too obvious or trivial, but it is exactly such relational fields that linguists generally accept as the playground of analogy also. The relational character of language design has been recognized above all in morphology and syntax. Jakobson (1965) noted that the distributional elements in them are largely iconic (diagrammatic). Vocabulary is predominantly symbolic, the rules of language iconic or diagrammatic; and consider also the indexical element mentioned above. The lexical tool and the grammatical instrument thus polarize at opposite ends in the hierarchy of signs. But Jakobson noted also the diagrammatic relation between parts and wholes, expressed in the dichotomy of lexical and grammatical morphemes. These occupy fixed positions relative to each other, and further affixes, especially inflectional suffixes, do not generally have access to the total phoneme inventory or all the phonotactic combinations of the language. Thus, in English, the productive inflectional suffixes are represented by dental stops, dental spirants, and their combination in -st. Of the twenty-four obstruents of the Russian consonantal pattern, only four function in the inflectional suffixes. The most striking case of phoneme selection is the Semitic pattern (Anttila 1972: 196), in which consonants represent lexical meanings and vowels grammatical, e.g. Arabic /k-t-b/ 'write', /kataba/ 'he wrote', /kātib/ 'writing (person)', /kitāb/ 'book', and so on, although there are of course also affixes, e.g. /ma-ktab/ 'place for writing', /katab-at/ 'she wrote'. In morphology it is perhaps easier to find such relational correlates between form and meaning. In Indo-European, the positive, comparative, and superlative degrees of adjectives show a gradual increase in length (bulk) corresponding to the increase on the semantic side, e.g. Latin altus-altior-altissimus, English high-higher-highest. This is not a perfect universal, but it is parallel to the coffee diagram mentioned above. The only way of communicating directly an idea is by means of an icon, as Peirce stresses. And the same is true of indirect methods of communication also. "Hence, every assertion must contain an icon or set of icons, or else must contain signs whose meaning is only explicable by icons" (105). Such an idea contained in a set of icons is the predicate of the assertion. Symbols need not be iconic, but symbol complexes are arranged in iconic relations with object complexes. Every algebraic formula is an icon. "For a great distinguishing property of the icon is that by the direct observation of it other truths concerning its object can be discovered than those which suffice to determine its construction" (105-6):

This capacity of revealing unexpected truth is precisely that wherein the utility of algebraical formulae consists . . . That icons of the algebraic kind, though usually very simple ones, exist in all ordinary grammatical propositions is one of the philosophic truths that the Boolean logic brings to light. The reasoning of mathematicians will be found to turn chiefly upon the use of likenesses, which are the very hinges of the gates of their science. The utility of likenesses to mathematicians consists in their suggesting in a very precise way new aspects of supposed states of things. . . . (106-7)

These quotes tell many things. First, they already indicate the essence of analogy in science. Second, the mathematician can be replaced by the linguist, and in our context, must be. (cf. 2.6). Thus, thirdly, we get a clear indication that even those modern linguists who have devoted their efforts exclusively to syntax have ignored basic aspects of semiotics. Algebraic equations are used, but it has not been realized that they exhibit, by means of the algebraic signs which themselves are symbols, the relations of the quantities concerned (Peirce 107). Only now has Esa Itkonen, in his critique of transformational-generative grammar, shown that syntax has indeed such an exhibitive iconic function (1974: 267-8, 275). The whole can be summed up also in that "everything" in language is analogical (the message in Anttila 1972). (This does not help much now, but the notion will be broken down in subsequent chapters.) Summing up the discussion so far we can write the following icon (Anttila 1972: 18):

[Diagram of the classes of signs (Anttila 1972: 18): signs divide into icons (images, diagrams, metaphors), indexes, and symbols.]

It shows the relations among the classes of signs, and is thus a typical diagram. A little more must be said of symbols here, in order that we may return to the subject more easily below. It was said of algebraic icons that they reach beyond the fact of their own construction. From this end we have a gate into symbols again. "A symbol is a sign whose representative character consists precisely in its being a rule that will determine its interprétant. All words, sentences, books, and other conventional signs are symbols." . . . "A symbol is a law, or regularity of the indefinite future." . . . "But a law necessarily governs, or "is embodied" in individuals, and prescribes some of their qualities. Consequently, a constituent of a symbol may be an index, and a constituent may be an icon." (Peirce 112).

All this also became apparent in the trademarks. Below we will have to discuss linguistic change, and thus the following becomes relevant (115): Symbols grow. They come into being by development out of other signs, particularly from icons, or from mixed signs partaking of the nature of icons and symbols. We think only in signs. These mental signs are of mixed nature; the symbol-parts of them are called concepts. If a man makes a new symbol, it is by thoughts involving concepts. So it is only out of symbols that a new symbol can grow.

We will return to concepts and symbols in connection with Sapir's typology. For the growth of symbols it can be mentioned here that Anttila (1972: 37-40) presents the development of writing in exactly this vein as a microcosm of language. The being of icons belongs to past experience, that of the index to present experience. "The value of a symbol is that it serves to make thought and conduct rational and enables us to predict the future. Whatever is truly general refers to the indefinite future. . . . The past is actual fact. But a general law cannot be fully realized. It is a potentiality" (Jakobson 1965: 37 quoting Peirce). The application of these sign types in linguistics is increasing, although so far it is difficult to refer to explicit treatments, and in those cases where one can, there may be great differences, e.g. Wescott (1971) vs. Andersen, or Shapiro. A general plea for iconicity is also Valesio (1969). As already mentioned, Andersen covers phonology and linguistic change in general. Particularly interesting is his analysis of phonological implementation rules as iconically showing the distinctive feature hierarchies that underlie the phoneme set. Shapiro (1969, 1974) has concentrated on morphophonemics, and Anttila (1972) was able to delineate historical linguistics from comparative better than before: Linguistic change (historical linguistics) depends on icons and indexes, reconstruction (comparative linguistics) on symbols. (For an exposition of Peirce's concept of sign see Greenlee 1973, and we have now an admirably succinct treatment of Peircean semiotics with its extensions in Walther (1974).)

1.2 The modes of reasoning

The mathematician and his craft implied that the scientific method might display some necessary support for the study of analogy. This is indeed the case. We must here look at Peirce's theory of the scientific method (Peirce 1955: 129-134, 150-156, Knight 1965, Reilly 1970, and especially for linguistics: Andersen, 1973, 1974; this exposition follows closely Anttila 1972: 196-203, 1975cg).

All cognition involves inference for Peirce, and all knowledge comes from observation, the starting point being the percept and not the sense-impression (Reilly 28). There are three modes of reasoning or argument. The order of the logic of the syllogism involves inference from the rule (major premise) and the case (minor premise) into the result. This is deduction. Induction is inference with the order of the procedure reversed: we infer the rule from the case (minor premise) and the result (major premise). But the most common type of reasoning is hypothetical inference or abduction, in which the rule and the result are given and we infer the case. Abduction is everyday logic par excellence:

The surprising fact, C, is observed;
But if A were true, C would be a matter of course;
Hence, there is reason to suspect that A is true (Peirce 151, Knight 117).

Thus A cannot be abduced or conjectured until its content is already present in the premise: If A were true, C would be a matter of course (152). Abduction is extremely fallible, but people go on using it, and it is indeed man's most important asset. Every item of science came originally from such conjecture, pruned down by experience. Abduction is an act of insight that comes to us in a flash. It is indeed the first explanatory phase of scientific inquiry; it suggests that something may be. Unlike other modes of argument it introduces a new idea. One must emphasize this notion of explanation, since it will totally invalidate what transformational grammarians have been saying about explanation in general and analogy in particular: The scientific explanation suggested by abduction has two characteristics that must be pointed out . . . : 1.) an explanatory hypothesis renders the observed facts necessary or highly probable; 2.) an explanatory hypothesis deals with facts which are different from the facts to be explained, and are frequently not capable of being directly observable (Reilly 35).

Any learning or understanding must be by abduction, which stands as the basis for predictions. The purpose of deduction is to infer those predictions, and the purpose of induction is to test them. Abduction is always a gamble, whereas deduction, with little risk and low return, never introduces anything new (cf. bisociation in 2.7). It is clear that linguists have glorified the role of deduction. In short, abduction suggests that something is the case, that something may be; deduction proves that something must be; and induction tests to show that something actually is (Knight 123).
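To make the trichotomy concrete for a present-day reader, here is a minimal illustrative sketch (not from Peirce or from this survey) that casts the three modes as operations over a rule, a case, and a result, using Peirce's stock bean-bag illustration; the function names and toy data are invented for the example.

# Peirce's three modes of reasoning as a toy schema over rule, case, and result.
# Rule:   all beans from this bag are white.
# Case:   these beans are from this bag.
# Result: these beans are white.
# The bean-bag illustration is Peirce's; this code framing is only a sketch.

def deduction(rule_holds, case_holds):
    """Rule + case -> result: if both premises hold, the result must hold."""
    return rule_holds and case_holds

def induction(cases, results):
    """Cases + results -> rule: provisionally accept the rule if every observed
    case was accompanied by the result (testing, not proof)."""
    return all(result for case, result in zip(cases, results) if case)

def abduction(rule_holds, result_observed):
    """Rule + result -> case: conjecture the case that would make the result
    'a matter of course'; fallible, but it is what introduces the new idea."""
    if rule_holds and result_observed:
        return "suspect: these beans are from this bag"
    return "no hypothesis suggested"

print(deduction(True, True))                               # True  (must be)
print(induction([True, True, True], [True, True, True]))   # True  (actually is, so far)
print(abduction(True, True))                               # a guess (may be)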

15 An important warning is now in order. The three modes of reasoning are cardinal types only, and they do not exclude the existence of less usually used modes of reasoning, as Peirce himself admitted. Ultimately thus also the trichotomy may be inadequate, but it is far superior to the usual deduction/induction dichotomy. Ralph Gardner White points out further (private communication) that the Peircean stand that any learning requires abduction might be too strong a statement, because there may be other ways of learning. However, for the majority of cases in learning various cultural and conceptual schemes abduction is central, and we will here pursue some of its positive aspects that bring new clarity into the study of linguistic change. Linguists have not generally realized or acknowledged that deduction is just an experiment. Nor have they acknowledged abduction, mainly because it cannot be formalized the way they would like to. The reason is its unpredictability, which itself ties in with perceptual judgement, another factor that has derailed discussions on analogy, and this matter must also be touched upon here. Abductive inference shades into perceptual judgement without any sharp line of demarcation, in fact, the latter constitutes extreme cases of the former. Perceptual judgement is not subject to criticism (Reilly 46-7), and science is built up from them. Perceptual judgement and abductive inference share four important similarities: (1) Both contain elements of generality, (2) both are in some respect beyond the control of reason, in that neither judgement is the necessary conclusion of an inference. (3) There is newness or originality in both, and (4) both are interpretative (Reilly 47-50): . . . perceptual judgement is an interpretation of a perceived object, and though there are several interpretations possible, the one actually adopted seems forced upon us (Reilly 47). The perceptual judgement is interpretative because it is abstract. That is, it represents one or more features of the known object without exhausting the meaning of the object. The knowable aspect which is grasped in the perceptual judgement is only one of several (7.198 [volume and paragraph in Peirce's Collected Works]). Nonetheless the perceptual judgement is true in the sense that "it is impossible to correct it, and in the fact that it only professes to consider one aspect of the percept" (5.568). It is for this reason that the interpretation made in the perceptual judgement is not the only one possible, and the aspect represented in it is really as given as it appears so forcefully to be (Reilly 50-1).

Perceptual judgement now explains the difficulties linguists have had with the "actuation problem", the doubt of whether an event will in fact take place although all the conditions known to be necessary obtain. This is the

16 "tragic problem of induction". It should be noted that those who do not distinguish between induction and abduction, treat abductive problems as drawbacks of inductive reasoning. Also, because judgement has a general element, reasoning does the same, and further, all reasoning, according to Peirce, is mathematical, and therefore diagrammatic (Reilly 186; the Greeks got to the syllogism from analogy, see Platzeck 1954). He even developed existential graphs to show this, through Euler circles, and Venn diagrams, as the reference to Boole above would suggest (Knight 120-1; but see particularly Roberts 1973). We are back to the sign types. Reasoning and cognition is largely based on perceiving similarities and contiguities (2.6-7). Aristotle called abduction apagoge, Descartes absorbed it into intuition which was the basis for deduction (and note how "clear and distinct" fits perceptual judgement). Hume accepted the natural relations of the mind, the empirical laws of association, based on resemblance and contiguity, in addition to the philosophical relations. There are only particulars for him, but a particular idea can be used to represent many individuals (Aristotle's natural universals). In other words, although there are no innate ideas we use something as a sign. The rule of resemblance makes cease the particular (cf. 2.6). This sketchy philosophical background has already shown the basic ingredients of analogy. It was necessary, however, since linguistics has ignored these crucial aspects. In fact, we still have to go briefly into analogical argument to correct certain linguistic misconceptions.

1.3 Analogical argument

In the mid-1960s transformational grammarians launched an attack against analogy (Kiparsky 1965, Postal 1968, King 1969), and this was also the synchronic stance: Chomsky (1966: 12) termed analogy a vague metaphor, and metaphor itself a semi-grammatical phenomenon (1964). This was a reaction against the Neogrammarians and American structuralists who had attributed the creative aspect of language to analogy, one of the reasons for the generativists being that analogy is supposedly such a surface phenomenon. This attack missed the whole point, and since Dinneen's answer to analogy (1968) and Reddy's (1969) to metaphor have been ignored, the matter must be mentioned once more. Ever since antiquity an analogy has been a relation of similarity, whose types can be classified according to 1) the number of terms involved, 2) the bases for the analogy affirmed, or 3) the types of terms entering into the analogy (Dinneen 98). According to the number of terms we have A : B

with two terms, A : B = B : C (the mean) with three, and A : B = C : D (proportion) with four. The first type is the rather arbitrary one based on perceptual judgement and similarity which can be unpredictable (e.g. sister : brother as kinship terms on a certain axis), the latter two are diagrams which derive from mathematics. In fact, 'proportion' is the Latin translation of Greek 'analogy'. We have already seen that icons can reach truths beyond the truths already observed. Anchoring such a diagram into the known we can use the other end to (dis)cover new territory. In this sense analogies are utterly essential parts of all theories, crucial for explanation and understanding and all formal definition (Hesse 1966, Kaplan 1964, Dinneen 1968). Analogy is the only way to extend a dynamic theory. Models are in fact scientific metaphors, they are diagrams of the scientist's experience. Analogy is particularly valuable when the object of investigation is not directly observable. Such an object is language as a system (not speech), among others, and linguists have always built models for it. The structuralists did, in fact, concentrate on the positive intersentential analogies giving what does appear in grammatical constructions, whereas the transformational approach relies on the intrasentential relations between deep and surface structures, with the extra dimension on what might appear in grammatical constructions. We have just different terms and axes, but otherwise the same analogical machinery is at work, except that transformational grammar denies it (typical of their numerous contradictions). This is how Aristotle worked out his universals through induction, i.e. the multiplication of examples enables us to grasp the analogy (Hesse 148):

awake : asleep = seeing : eyes shut = finished article : raw material = ACTUALITY : POTENTIALITY

Every universal was based on analogy between the instances, and argument by analogy is indeed generalization (Kaplan 106-7). Analogy is used to introduce theoretical terms (which is also the task of grammatical analysis). These are the unobservables in

elastic balls bouncing        air sound
[gas molecules] pressure      [ether] light

where the theoretical terms are bracketed (Hesse 93). These proportional types have characteristically two axes; consider the following analogy in a classification system:

Genera:   BIRD        FISH
          wing        fin
          lungs       gills
          feathers    scales     (Hesse 61)

where on the horizontal axis we may have one or more similarities of structure or function. The vertical axis shows a whole-parts relation. Any missing slot can be argued from the known parts. Analogy is weaker than induction, because the description of similarities and differences is notoriously inaccurate, incomplete, and inconclusive (Hesse 76). In other words, it feeds on abduction, and there is its glory. And we do not need incorrigibility, but only methods of selecting hypotheses (Hesse 76), and there is no point in defining degrees of similarity (Koestler 1967: 537). Likeness and repetition (consistency) are the bases of analogy and in fact all structuralism. We notice that the horizontal axis above is based on iconicity and the vertical axis on indexicality (Hesse calls it causality, but it is a flexible category of any contiguity). Here we find the conditions for material analogy: 1) the horizontal dyadic relations between the terms are those of similarity or difference, and 2) the vertical relations are "causal" in some acceptable sense (Hesse 86-7). The problem here is the unpredictability and logical freedom from criticism in perceptual judgement. In formal analogy we have one-to-one correspondence between different interpretations of the same formal theory, and it is useless for prediction (Hesse 68-9). Formal analogy is a pure diagram without similarity between the corresponding terms. Formal theories are weakly predictive, conceptual models strongly so, but not justified by choice criteria, whereas material analogue-models are both strongly predictive and justified by choice-criteria (Hesse 129). There is no reason to go on citing all the philosophers of science who have shown the essential role of analogy in human cognition and any science (for the Greeks, see Lloyd 1966). It is curious that studies like Hesse's have never been contended with in the transformational derision of analogy. It should be noted that Hesse ends her book with discussing the explanatory function of metaphor. She shows that scientific arguments and ordinary language employ analogy as the normal and not the exceptional case, and the same is true of metaphor, which is also iconic. Metaphors modify and correct first-language statements. The use of metaphorical explanation is rational and the metaphor is the chief means to expand our use of language. Thus we are led to a dynamic view of language as intricate interaction rather

than all kinds of invariance, as has been the ultimate goal of recent linguistics. (Cf. Stewart 2.6 end.) There is still a place for invariance, but not for the static conception adopted by transformational grammar and American linguistics in general (before Labov). We will see that similarity (all the way to identity) does in fact give the raison d'être for invariance, whether this is natural or learned.
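As a throwaway illustration of "any missing slot can be argued from the known parts", the following sketch (ours, not Hesse's or the author's) merely sets up the proportion that lets a gap in a two-axis classification be conjectured; the feature labels are invented, while the BIRD/FISH terms come from the table above.

# Arguing a missing slot in a two-axis classification (cf. the BIRD/FISH table).
# Horizontal axis: similarity of function; vertical axis: whole-parts ("causal") relation.
# The feature labels ("locomotion" etc.) are invented for the sketch.

genera = {
    "BIRD": {"locomotion": "wing", "respiration": "lungs", "covering": "feathers"},
    "FISH": {"locomotion": "fin",  "respiration": "gills", "covering": None},  # unknown slot
}

def argue_missing_slot(table, genus, feature):
    """Set up the proportion from which the missing term can be conjectured:
    an analogous genus supplies the known term of the same function."""
    for other, feats in table.items():
        if other != genus and feats.get(feature):
            return f"{genus} : ? = {other} : {feats[feature]}"
    return "no analogue available"

print(argue_missing_slot(genera, "FISH", "covering"))
# FISH : ? = BIRD : feathers  -- the abduced filler ('scales') is a conjecture to be tested.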

1.4 Language learning and linguistic change

The theory of signs has already been provided with sufficient linguistic examples, and more will follow below. We will here look briefly at the linguistic correlates of the modes of reasoning and analogical argument before taking up linguistics in fuller form. In recent years linguists have been looking for a theory that would satisfy both the students of language acquisition and historical linguists. Many resorted to a particular language-centered "acquisition device", forgetting that language learning is part of a larger context of socialization and not verbalization only. Language is just one facet of the human capacity for analogizing. Loss of faith in the black box of a language acquisition device (LAD) has led the newest research to the necessity of the triad abduction-deduction-induction. The first to make it explicit was Henning Andersen (1969, 1972, 1973), who also showed that many historical linguists had implicitly followed the trichotomy (cf. the discussion of Bréal and Sturtevant below). Every speaker has to abduce his grammar from the outputs of other grammars he hears in his communicative context. This link is diagrammed by Andersen as in Fig. 2 (1973: 778). Grammar 2 is inferred from Universals (the major premise) and Output 1 (the minor premise) by abduction. If Grammar 2 is different from Grammar 1, we speak of abductive change. Output 2 is inferred or derived from Universals and Grammar 2. If Output 2 is different from Output 1, we speak of deductive change. All learning is always change. Also, we see that deduction is indeed an experiment. Now, if those who produce Output 1 accept Output 2, the conclusion has been inductively verified. The two grammars are equivalent and the same for all practical needs no matter what the structural differences between them may be. To take a couple of examples: When Sturtevant's little son underwent treatment of the ear by irrigation, he made the 'logical' guess that ear and irr-igate go together. This is an abduction that is totally covert, and hence perfectly 'right'. Such covert abductions surface through overt deductions. This analysis came to light when the boy's nose was treated the same way

[Fig. 2 The crucial link in the language learning process, after Andersen (1973: 778, with slight modification). The diagram shows the learner abducing Grammar 2 from Universals and Output 1 (the output of Grammar 1), and deducing Output 2 from Universals and Grammar 2.]

and he produced the verb nosigate. Similarly when a child was told that four airplanes was a formation he abduced a connection with four, since the analysis surfaced with two planes being referred to as twomation (1947: 97-8). When alcoholic is reanalyzed as containing a morpheme -holic it would not show without the further derivations gumholic and foodholic

(Anttila 1972: 93, courtesy of Peter Maher). Why children are so active in producing analogical forms is that their experience is still limited and their creativity has not been stifled by the inductive pruning. The prestructuralists mapped Output 1 into Output 2, and Jakobson (1931), followed by the transformational grammarians, did the same from Grammar 1 into Grammar 2 (Andersen 1972, 1973). In both cases we have a mere convenience of description for before-after relations which can be termed diachronic correspondences (Andersen 1969 and later). This is probably also one of the reasons why the original transformational conception of analogy is structurally very similar to the standard Neogrammarian treatment (4.1). Since all change and learning must go through abduction and the process described, we get support for the position that all change is analogical (Schuchardt [1928], Hermann [1931], Hockett, Winter, Householder, Anttila), if we indulge in one label, whereas generative grammarians have tried to describe all change on the phonetic-change model (see Rochet 1973, to be added to the above list also). Andersen has shown succinctly that one must understand the bases and terms of analogy to be able to understand and explain change. Obviously the results are different if the

21 terms are distinctive features or inflectional morphemes. The generativists used to wonder why certain changes occurred across the board whereas others were limited to certain sporadic cases. Since analogy was a surface phenomenon and irregular in operation, it could be brushed aside as a necessary evil. Regular uniform outcomes, however, justified the establishment of "rules", and a correspondence was decreed actual history as well. Across-the-board changes become quite understandable, however, when we realize the exact juncture where analogy occurs. If it happens in a distinctive feature hierarchy, the result is reflected all over the language, since it is a one occurrence type. In other words, wondering at the regularity of certain across-the-board changes actually means the reliance on surface forms only. Analogy, however, is always an abstract phenomenon. Also Björn Lindblom has reached a perceptual model of phonological acquisition which agrees perfectly with the Peircean/Andersenian trichotomy (1972).
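A minimal sketch (not Andersen's formalism) of the abductive-deductive-inductive loop of Fig. 2, using the -holic reanalysis from the text as toy data; the representation of a "grammar" as a set of analyses and the acceptance verdicts are invented for the illustration.

# The learning loop of Fig. 2 in toy form: abduce a grammar from heard output,
# deduce new output from it, and let the speech community's acceptance stand in
# for inductive verification.  Data: the alcoholic > -holic reanalysis above.

output_1 = ["alcoholic"]                      # output of Grammar 1, as heard by the learner

def abduce_grammar(output):
    """Abduction: a covert conjecture about the analysis of what was heard."""
    grammar = set()
    for word in output:
        if word.endswith("holic"):
            grammar.add("-holic")             # conjectured morpheme
    return grammar

def deduce_output(grammar, bases):
    """Deduction: overt forms derived from the abduced grammar (the 'experiment')."""
    return [base + "holic" for base in bases] if "-holic" in grammar else []

grammar_2 = abduce_grammar(output_1)
output_2 = deduce_output(grammar_2, ["gum", "food"])      # gumholic, foodholic

# Induction: verification is whether the producers of Output 1 accept Output 2.
accepted = {form: False for form in output_2}             # invented verdicts
print(grammar_2, output_2, accepted)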

1.5 Analogy and analysis

Although there has been much discussion of the linguist as a scientist (mainly as a neopositivist, although the hermeneutician is entering again), it seems to be difficult to find any discussion of him as a user of analogy. Only two books can be noted in this vein (plus Hockett 1968), and both let analogy be the leading factor in language design. These are Householder (1971) and Anttila (1972), and in both it is chapter five where the analogical argument concentrates. Householder's chapter title is "Sameness, similarity, analogy, rules and features" and Anttila sums up his in almost the same words (106). Householder shows that classical phonemic analysis is the analogical extraction of features, e.g. in sets like bet : pet = bad : pad = big : pig = etc. This correlates with the analogy in language use, in which phonological rules rest also upon such proportions, although few people notice this as analogy. In fact, the features resulting from such linguistic analysis represent these analogies. Latin affixes superficially different in form can still fill exactly the same function - the so-called declensions. These are pure analogies, often of many terms: singular : plural = anima : animae = animae : animarum = animae : animis . . . homo : homines = hominis : hominum = homini : hominibus . . . etc. In short, a conventional paradigm is merely a handy representation of one of these long proportions. Paradigms of many such dimensions are called matrices by Pike (2.9). A vast network of analogies is sparking our brain every time we speak. Then these millions of examples of sentences can be summarized as Subject : Sentence and

22 Predicate : Sentence. The two chains are equivalent to a rule S NP + VP, or Subject + Predicate. Householder's whole chapter should be reproduced in full, but note particularly this (76 n.): Furthermore it cannot be doubted that the analogical chains in our brain include inequalities as well as equalities, i.e. information that certain things are not analogical to certain others, the result, one supposes, of tentative a f tempts at extending a chain which are in some way rebuffed. And, though we can economically summarize the value of many, many chain proportions by minimal ordered rules, there is no reason to suppose that analogies have to be minimal. In fact, those which we can observe in operation as speakers try to extend the frontiers of their language are often complex, to be represented not by a single rule, but by half a dozen rules, or even more.

He goes on to show the role of inductive testing and that analogy really is a more adequate learning aid than George Lakoffs "preknowledge" (1969). He does not need an innate idea of a chair to learn the concept. The same is true of NP's. Anttila treats the analogical configuration in analysis from the point of view of comparative linguistics (1972: 348, 352-7, 1973: § 5). In internal reconstruction (alias morphophonemic analysis/systematic phonemics) the linguist works backward in time and tries to lift the most recent changes that have obscured the earlier state of affairs. He tries to pull aberrant forms back into the regular pattern, e.g. for English

SINGULAR PLURAL

INVARIANT SETS m p s f m p s f

VARIANT SETS

f

e

s

v

8

z

the variant sets are straightened out as */2> * #2> and *s2> o r the like, plus a proper relative chronology. For morphology Anttila uses Sapir's analysis from Time perspective in connection with alternation and productivity, for relative chronology (1972: 352-5). Similarly internal reconstruction in syntax irons out irregularity in the same way as in phonology. Finnish word order distinguishes between pojat juoksevat 'the boys run' vs. juoksevat pojat 'the running boys'. In the singular we have poika juoksee vs. juokseva poika. In other words XY: YX in the plural vs. X'Z : Y'X' in the singular. Z is the term that falls out of line. Poetic or archaic Finnish does in fact have juoksevi whereby Z becomes X". With a grammatically conditioned sound change *-a ) -i we get X', and total symmetry. Thus, whether he acknowledges it or not, a linguist is indeed a scientist that uses analogical argument in his craft. It is even more natural than in

23 other sciences, since the very structure of language is analogical; it is learned and used analogically, and the same analogical relations guide its change. It is then no wonder that the same diagrammatic method of analysis does justice on its object, whether in synchrony or diachrony.

1.6 Summary and introduction This chapter can best be summed up by a further look into Householder's ideas, because at the same time they represent the Greek conception of analogy, to be continued in the next chapter. The chapter heading quoted is totally Greek if 'feature' is taken in the sense of 'property', which is the crucial character in the similarity necessary in material analogy. The only candidate for conscious or unconscious systematization is analogy, a sameness of similarities and differences, expressed in proportions (63). Most of our everyday cause-and-effect reasoning is proportional. Intelligence tests make great use of proportions, obscuring them as much as possible (cf. Esper 1973), and such proportions may involve form, form and function, meaning, syntactic function, etc. (64). Most rules, e.g. pronunciation rules, are clearly proportional (68). The brain does not do the operations equivalent to these proportions in a generative grammar, but recognizes the proportions more directly (75). Proportion-chains can be shown to lie behind various syntactic categories (76). The classical transformational rules are analogical (77), although analogical change in syntax is not as sweeping as elsewhere (79). In other words, understanding the bases of analogy explains when the results sweep across the board and when not. Householder understands also very well the abductive machinery used by the human mind. He posits an automatic generalizer or similarity-seeking device counterbalanced by a discriminator or difference-seeking drive. These factors build up a device that is not merely linguistic. It is acquired or taught and is not instinctive like in Peirce's conception (4). Derwing (1973) has continued this line with succes (2.5). Householder is quite to the point about the role of inductive testing in any learning. It is simply false that all speakers build the same grammar, because two different rules may have the same effect ninety-eight percent of the time (5). Thus any LAD depends on correcting (8). Beliefs that cannot be properly tested last throughout life (76 n.). The same is true of word meaning; it may take years to perceive differences (7). Posner (1973) also joins this convincing position and argues that it is theoretically better to define meanings through the negative results of such rebuffs. In other words, we know clearly what meanings do not belong to

24 the w o r d w h e r e a s t h e p o s i t i v e area is vague. Our chapter can b e s u m m e d up with the following (78): We have come a long way from the undefined notion of similarity with which we started. But we seem to have added nothing more; all the most complicated chain analogies are built up using only this. The most important (if vague) principle involved here is what may be called the principle of hasty generalization (by which we live and assuredly talk): Things which are similar in one way are probably similar also in another (different, dissimilar) way. It c o u l d n o t b e put more s u c c i n c t l y . A final q u o t e will refer t o t h e particular linguistic situation w e n o w live in and w h i c h has called f o r t h o u r survey ( 7 9 - 8 0 ) : Enough has been said to show the great role of analogy in forming the structure in a man's brain which is his language. We have also noted the convenience and economy, in talking about such proportions, of using conventionalized summarizing devices like rules, features, paradigms, and matrices [and categories; 76 - RA]. From now on we shall use these devices most of the time; but we should not forget that each of them rests on one or more proportions or sets of proportions. And if, in one sense, rules and features are merely arbitrary fictions (while only the utterances and proportions are real), real is also another, paradoxical, manner of speaking in which only they are real while the actual utterances are merely conventional abbreviations for the rules and features. Many linguists prefer this paradoxical sense of 'real'. The q u i x o t i c reality is t h e transformational-generative o n e w h e r e the k i n d o f forgetting H o u s e h o l d e r warns against has b e e n a primary goal in t h e s t u d y o f analogy in particular. B u t there is m o r e t o a n a l o g y t h a n m e e t s the e y e , as w e shall see. T h e m o d e r n scene displays m a n y currents and H o u s e h o l d e r is n o longer a l o n e ( n o t e also H o c k e t t

1968).

2 ANALOGY IN SYNCHRONY

2.1 The Greek spirit In linguistic introductions it is customary to refer to the Ancient Greek controversy of whether language was governed by nature or convention, analogy (Alexandrian grammarians) or anomaly (the Stoics of Pergamum) (cf. Lloyd 1966: 124-5, 211, 225-6). A clear exposition in a nutshell is in Lyons (1968), but now there are two fuller treatments, Best (1973) and Esper (1973), both of which give rich references to further literature on the topic. According to Lersch (1838) this argument further led to a question about the nature of linguistics: the technicists held in the analogist tradition that grammar is a science, the empiricists stuck to the tradition of an inventory of observed usage (Esper 7). 'Analogy' was the Greek word for the mathematical proportion that in linguistics came to mean 'regularity', in contrast to anomaly ('irregularity'), and it was translated into Latin as proportio, ratio, regula sermonis or kept as analogia. Analogy, or pattern, or structure, was thus the dominating principle guiding language learning, use, and saying new things not said before. Another key concept was paradigm, a model or example, also from mathematics. Together they came to refer to declensions and conjugations, or inflectional regularity. Paradigms or inflectional patterns which conformed to given rules were called canons of regular flection (Best 20, Esper 3). Varro started the concept of synchronic analogical productivity in that certain paradigms serve as unfailing models for the inflection of other nouns. In short, analogy was a statement of grammatical rules, and also, the grammarian makes rules with analogy (Best 18-9). In practice, however, the analogists had to accept usage, and the anomalists likewise acknowledged regularity or analogy. Best thinks that anomaly seems to be neglected in recent linguistics, only Heinz (1967) has supposedly taken up the issue, according to Best (23). The purpose of this chapter is to show how inaccurate this statement is. The Greek notions are vigorously used by many linguists, who, however, work alone without joining forces into a 'school'. The preceding chapter in fact already pointed out some of these currents, maximally clear in Householder. Mayerthaler (1974) proposes as modern studies of anomaly the transformational treatments of irregularity (see any

26 introduction). This is also a very partial or even questionable truth, since transformational treatment is based on a very arbitrary concentration on form without function (usage). Damaging also is the lack of understanding of the sign types which true analogists on the whole use adequately, even when implicitly. Of course, the transformational school has so shuffled up and camouflaged this issue that any criticism can be always answered by "But that is what we do or have done! ". As we saw above (1.3) analogy and metaphor represent rationality and rationalism. This the transformationalists deny by saying at the same time that they know what true rationalism looks like. Esper documents the strength of the analogical argument through the ages, and we need briefly refer only to Paul, who is one of the chief authorities rejected by the transformational grammarians. If his overly confident reliance on the proportion represents certain defects of the Greek conception, he certainly advocated many of the positive aspects also. He thought that the rule mechanism of language was analogical (Esper 40), and nobody has gone beyond this today, as also Householder shows. Paul classified morphological associations into material and formal groups, which Stern redubbed, with justification, as basic and relational (Paul 1920: 10620, Stern 1931: 200-7, Esper 39, Koerner 1972). Material (basic) groups are characterized by similarity of meaning and form, formal ones are relational in that they show the same grammatical meaning (the sum of all comparatives, nominatives, etc.). Formal types can have the same form but need not, and the material groups can rely on semantic similarity only (man : woman, boy : girl, etc.). The groupings intersect to form various chain-analogies (to use Householder's term; see 1.5), always under networks of similarities and differences, and these pertain to syntax also. Saussure's rapports associatifs (1916: 172-5), better known to linguists, are the exact counterpart of Paul's ideas (Koerner 1972: 299). Basically Paul's usage is same word = material analogy, other words = formal analogy, but this works better with morphemes as in Fig. 3. "Of lesser significance are equations in which the phonemic correspondence is limited to the formal element" as (Esper 39) in Fig. 4. Thus formal and material overlap in various configurations (cf. 2.8 end). The interesting thing is that whatever difficulties remain in this kind of diagrammatic conception of morphology or grammar, it is still strong today and it displays more clarity than other models for the topic. As one possible synopsis from this modern line, let us look at the indexical element in morphology, which will lead us also to other modern aspects (e.g. suppletion and usage). An understanding of the sign types teaches that linguistic signs are not simple but complex, as are morphological diagrams.

27 SAM E

DIFFERENT

hort hort hort hort hort

us um i 0 0

DAT. SG.

NOM. ACC. GEN. DAT. ABL.

libr 0 ann 0 mens ae ros ae pac i lue i

MATERIAL

PRET. g n br r sag Heb

^-FORMAL

a ah ie ie te te —r—'

b m t t

Fig. 3 A simple diagram of Paul's material and formal groups. The different cases of one noun build a material group. The sum of all representatives of one grammatical category builds a formal group. The groups divide into subpatterns.

SAME nice

=

nie

er

good ox am

bett cow was

er

*

FORMAL

MATERIAL Fig. 4 Suppletion, which is normal in the formal area, occurs also in the material domain.

un

SAME

DIFFERENT

mother mother mother mother

0 hood iy iy

DIFFERENT

SAME

mother boy liveli sentence

hood hood hood hood

Fig. 5 Basic paradigmatic structure through the Same/Different axis in the spirit of the Greeks, Paul, and Saussure.

28 Paradigmatic structure always indicates diagrammatically the same part vs. the differences, e.g. Fig. 5 (Anttila 1975a: 11); for this Semitic mixes the consonantal and vocalic diagrams further into a setup as in Fig. 6 (Anttila 1975a: 11): SAME k

a

t

a

b

a

he wrote

k

a"

t

i

b

0

scribe

k

i

t

a'

b

0

book

k

0

t

a

b

0

study (place)

DIFFERENT

The configuration

Fig. 6 of Arabic paradigmatic

structure.

So far the SAME boxes contain one morpheme written many times, and prefixes have been ignored, to emphasize simplicity for the reader. This is also linguistic analysis, which, as has been noted, is diagrammatic (analogical) in the same sense as the structure of language is. But consider now Kurylowicz' notion of a bipartite morpheme, which has such survival value in his theory of analogical change. It means a morpheme plus the effects of a morphophonemic rule applying in an environment defined with reference to that morpheme, e.g. German -e 'pi.' plus umlaut in Baum-e (1947; see 3.5). Now he argues that morphophonemics treats semantically void morphs. But morphophonemics deals with alternations between and not with morphs as such. Kurylowicz excludes also "structures dominated by phonological factors". The alternation a/an in English is morphophonemic, although phonologically conditioned. Andersen (1969) answers Kurylowicz by maintaining that morphophonemics exists apart from morphology. Only when morphophonemic alternations are correlated with specific categories would he argue that the rule that produced them has become morphologized. But allomorphic alternations have indexical meaning, whose diagrammatic manifestation is hardly different from the Semitic pattern, cf. Greek in Fig. 7 (Anttila 1975a: 12). This is indeed the general situation, whether one operates with a separated morphophoneme (e 'v o ^ zero) or allomorph distributions (leip-/loip-/lip-). The typical state of affairs can be verified with Bloomfield's (1933: 167-8,

29

SAME e

i

P

o

I leave

1 e

o

i

P

a

I have left

e

0

i

P

on

I left

DIFFERENT

Greek paradigmatic

Fig. 7 structure

exhibiting

ablaut.

238, 264) and the transformational typical examples in Fig. 8 (Anttila 1975a: 12). SAME

SAME d

yuw

TT

0

ey

n

0

d

A

ts

as

ae

n

itiy

DIFFERENT The allomorphic

DIFFERENT Fig. 8 configurations in duke/duchess and sane/sanity.

In all these cases the DIFFERENT part (that penetrates the morpheme) represents grammatical meaning. This outside intruder connects the morpheme with another one. Historically we know that this intrusive part is often articulatory agreement between two adjacent morphs or morphs within the same word. Evolution, even linguistic, shows a tendency toward both law and variety, so that the general result may be described as organized heterogeneity or rationalized variety a la Peirce (Anttila 1974 b). Once a linguistic sign (SAME) is learned, it takes on strong iconic character (1.1). Allomorphs (DIFFERENT) represent rationalized variety by adding a considerable indexical overlay to morphemes. In other words, they refer beyond their own boundaries toward larger stretches, generally words. In that sense they exemplify contiguity within the structure of grammar, or at least lexicon or between other lower-level units. They indicate their dependence from the whole, or one variant boasts independence (/dyuwk, seyn/). Allomorphic alternation is a diagram of the indexical aspects of linguistic signs (morphemes), which themselves are symbols with iconic overlay.

30 As is well known, cases where variants become functionally independent abound, e.g. pre-English sg. *mus, pi. *mys-i, jives, with the loss of the final vowel OE mus/mys 'mouse'. But even when we do not get such minimal contrast, the alternation is an index for what follows. Umlaut represents a typical case of increase in redundancy which often leads to the loss of the originally conditioning part. In the case of loss, the diagrammatic index is resymbolized into an inflectional or derivative marker (mouse/mice, man/men, speak/speech, bake/batch). Instead of referring indirectly through a following suffix to the semantics, the allomorph does it directly. Komarek (1964) shows how the existence of allomorphs in a root morpheme diminishes the uncertainty of the following morpheme, and in extreme cases the root allomorph determines it totally, e.g. Czech nom. pi. vlc-i, acc. pi. vlk-i 'wolves'. There is no difference between conditioned and unconditioned variants in this respect. Like Komarek, Korhonen (1969) also uses information theory and comes to the same conclusion through Lapp material. Already the pure allophonic alternation of longer and shorter /'s determined by open and closed syllables in ProtoLapp *kole 'fish' and *kole-n 'of the fish' carried a functional distinction in the linear sequence, which then ultimately becomes North Lapp nom. guolle, gen. guole. Korhonen gives the name quasiphoneme for such units which are allophones by distribution but phonemes by function (1969: 335). This is a name for the word-cohesion through an iconic index. Shapiro (1974) treats the Russian vowel/zero alternation in connection with a semiotic theory of markedness. The alternation itself and the allomorphs it regulates are directly relevant, not speculation about underlying forms. Requirements of usage (anomaly) sometimes create symbolizations in which allomorphs are used like independent signs combined with one and the same suffix, as is clear from the glosses for the following (cf. 1.1): German sach

lich

objective

(Sache)

säch

lich

neuter

('thing')

yksi

näinen

lonely

(yksi)

yhte

näinen

uniform

('one')

Finnish

31 Greek é

leip

é

Up

e

he was leaving

(/[e]/p-)

he left

('leave')

All cases exemplify regular alternants (see Anttila 1975a). There is no mystery in these developments, since one and the same expression form can split up, e.g. English cold 'kalt' and cold 'Schnupfen, Erkáltung' or German Schloss 'lock' and Schloss 'castle' (Wandruszka 68). The outer shapes still iconically refer to each other, but the symbolic innovation wins. An effective communication system like natural language should not always create everything from scratch in every situation by every speaker. Any truly creative strategy of language use needs a considerable item-andarrangement apparatus. This is ready to be used in clear contexts, and thus such aspects are also first learned (creating e.g. the cold and Schloss configurations). For Paul and Saussure, and the linguists preceding them (down to Bopp) we have now Vallini (1972), who correctly emphasizes the fact that the era did not just deal with analogical change, but that since Humboldt, analogy was taken as the essence in the organization of linguistic structure (8). Linguistic progress during this time was directly dependent on the interpretations and explications of analogy. Particularly noteworthy is Kruszewski's and Paul's use of analogy and memory as the crucial factors behind language, and their attempts to explicate a theory of association of ideas (cf. Slagle in 2.6). Paul's formal associations are in fact based on comparable syntactic groups, and the whole taxonomy refers easily to syntax. Figs. 3-5 are equally true of syntax also, as Householder in fact points out (cf. 1.5). After English psychologists Kruszewski classified associations by similarity and contiguity (Vallini 64), which reminds one of Saussure's associative and syntagmatic relations (Godel 1973: 66). Contiguity explains the form-meaning colligation of the symbol (cf. Slagle in 2.7), which favors the reproduction of transmitted forms (Vallini 65 n.; see further in 2.3 below). Association by similarity favors productive or creative activity, which contributes toward the harmony of the system. This is an important insight which has now come to fuller fruition in Slagle's work (the spontaneous rule-governed functional unification of memory and perception to produce a new whole; 2.6). A solid resurrection of Paul's and Saussure's ideas find an extensive application in Paunonen (1974) for mapping the paradigmatic configurations of the Finnish genitive plural. As for the ancient and medieval conception of analogy one can conveniently refer to Robins (1951, 1967), and Matthews

32 (1972, 1974), among which the last gives the newest and shortest introduction (1974: 59-76). For a more general philosophical treatment of analogy for these periods see Lloyd (1966), Lyttkens (1952), and Burrell (1973).

2.2 Sapir's language typology Not only does the preceding discussion exemplify the Greek spirit in modern linguistics, it also shows that an analogist need not mention analogy at all and still be a perfect representative of it. Such a linguist is Sapir, who, against this background is a good Greek and an utterly modern linguist. Although Sapir falls outside the main scope of this survey, his importance warrants a closer look into his typology, since it shows that the modern 'Greek' development gives a fresh interpretation of his ideas which often remain vague to the casual reader. Noting that some concepts are expressed radically (kill), others derivatively (farmer, duckling) in the sentence The farmer kills the duckling, he went on to say that Corresponding to these two modes of expression we have two types of concepts and of linguistic elements, radical (farm, kill, duck) and derivational (-er, -ling). When a word (or unified group of words) contains a derivational element (or word) the concrete significance of the radical element (farm-, duck-) tends to fade from consciousness and to yield to a new concreteness (farmer, duckling) that is synthetic in expression rather than in thought. In our sentence the concepts of farm and duck are not really involved at all; they are merely latent, for normal reasons, in the linguistic expression (1921: 84).

Sapir understood clearly how expression forms are just a vehicle' for the semantic units which are primary in communication. In general, the communicative context should never be divorced from linguistic analysis ("normal reasons" in the quote). But after Sapir has given the universal semantization as a background he returns to the indexical function of allomorphs. He calls internal changes in the radical element 'symbolic', a term "not inapt" (126): There is probably a real psychological connection between symbolism and such significant alternations as drink, drank, drunk or Chinese mai (with rising tone) 'to buy' and mai (with falling tone) 'to sell'. The unconscious tendency toward symbolism is justly emphasized by recent psychological literature. Personally I feel that the passage from sing to sang has very much the same feeling as the alternation of symbolic colors - e.g., green for safe, red for danger. But we probably differ greatly as to the intensity with which we feel symbolism in linguistic changes of this type.

33 Here the idea of a symbolic icon comes out properly. And in fact Sapir speaks of symbolism when there is indeed a new symbolization of the old semantic configuration without any mediating affixes. The welding together of radical and affix (dep-th) is called fusion, and it is indeed related to symbolism (= symbolic fusion), and fusion may be taken in a broader psychological sense (130-1). This discussion of fusion culminates on the next page (132): Where there is uncertainty about the juncture, where the affixed element cannot rightly claim to posses its full share of significance, the unity of the complete word is more strongly emphasized. The mind must rest on something. If it cannot linger on the constituent elements, it hastens all the more eagerly to the acceptance of the word as a whole.

One final quote on fusion and form: "An inflective language like Latin or Greek uses the method of fusion, and this fusion has an inner psychological as well as an outer phonetic meaning" (135). It seems that 'outer phonetic meaning' is Sapir's term for an indexical icon delineated above. It is the coat of paint that unifies the word. This inner psychological unity is shown by the articulatory fusion, in other words 'outer phonetic meaning'. So far we have seen how the iconic function increases cohesion between the components of a word. The result is a ready-made sign, a unit of and for communication. Often one need not ask how it was built up. A hearer may perceive such a sign as one unanalyzable lump and use it accordingly (icons go beyond their construction, 1.1). Two things remain to be done, to look closer into language use where the independence of words and larger phrases shows itself and to look into the indexical machinery that iconically asserts itself between words. The discussion toward these goals can continue with Sapir's classification of concepts through their grammatical form. Remember his four-way division (1921: 101): I. II. III. IV.

Basic (Concrete) Concepts. Derivational Concepts. Concrete Relational Concepts. Pure Relational Concepts, (cf. the term 'relational' in Paul and Stern above! )

The greatest difficulty resides in III and the borderline between III and IV. But note Sapir's own words that III "differ fundamentally from type II in indicating or implying relations that transcend the particular word (my emphases — RA) to which they are immediately attached" and that IV "serve to relate the concrete elements of the proposition (my emphasis - RA) to

34 each other, thus giving it definite syntactic form". This indication supports thus III and IV as serving sentence semantics, I and II word semantics. Sapir seems to be rather clear on the indexical pointing function of such inflectional morphemes. IV point down toward the parts, III up toward the whole. The situation seems to be as in Fig. 9 (Anttila 1975a: 25).

Fig. 9 The relation of Sapir's concepts

to each

other.

Concord belongs to III, and like allophonic variation, its function is thus to increase cohesion between units that go together. In both cases we have rationalized use of redundancy which is beneficial especially to the decoder. Things that belong together are stamped the same way (vidi ilium bonum dominum, quarum dearum saevarum) (Sapir 114). He adds: Not that sound-echo, whether in the form of rhyme or of alliteration is necessary to concord, though in its most typical and original forms concord is nearly always accompanied by sound repetition. The essence of the principle is simply this, that words (elements) that belong together, particularly if they are syntactic equivalents or are related in like fashion to another word or element, are outwardly marked by the same or functionally equivalent affixes (114-115).

Note that Sapir is quite correctly explicating the principle 'one meaning — one form' as one of the basic aspects of simplicity in language structure (see 2.8). This ensures that the form-meaning link is established once only, and the same form is thereafter repeated when necessary. Thus it is no wonder that this type is so frequently used in the languages of the world, but Sapir is of course fully aware that grammars leak, and thus concord affixes may make the symbolic link many times, for the same function, e.g. 'feminine singular' in Latin popul-ws alt-a virid-K + que 'a high and green poplar'. This aspect was also clear to e.g. Varro (viii. 39ff., x. 3ff.) and Paul (1920: 106-20) (Esper 10, 39), and Householder (70). Examples

35 of pure iconic indexes would be Finnish ta-lie piene-lle kaunii-//e Xyio-lle 'to this little beautiful girl', Italian i grand-/ giornal-/ italian-/ or German die gross-e« italienisch-e« Zeitung-en (Wandruszka 76). Particularly insightful is also Samuels' (1972: 50) discussion of the constancy in the level of redundancy so that a sufficient number of categories is available for selection. He shows an interesting redundancy shift from Old to Middle English. The Old English full formal system of grammatical gender may have its drawbacks for part of communication, but at least (50) the inherited system yielded a vast store of expected collocations, e.g. every time a speaker . . . said pcere rather than pees he was automatically restricting the number of nouns that the hearer might expect, and that was its value to communication. In Middle English, this system . . . broke down, and was replaced by a far more developed system of specifiers (determiners). This was a function very different from the previous: it was anaphoric and cataphoric only, i.e. it referred not to accompanying words, but to words elsewhere in the context. But, in communicative value, it served as an equivalent to its predecessor, i.e. a(n) and the served to limit the range of nouns that might be expected by the hearer.

Then Samuels proceeds to investigate this further (showing also properly the abductions that take place when overt expressions become indeterminate), but we leave it at this admirable statement of the indexical function, which adds a new dimension to the discussion.

2.3

Anomaly as regularity

The task of anomaly is to provide redundancy which will make language work in actual language using situations. In other words, the symbolic aspects have to be incorporated in the web of iconicity. An icon can reach beyond itself, and often it is asked to become a symbol. The logic of usage requires this. A map of experience written with an isolating language looks, according to Sapir's typology, something like in Fig. 10 (Anttila 1975a: 27). The scenery is dotted with basic radical concepts. Intermediate points in the terrain are characteristically covered with compounds which can be strongly metaphorical (for us). The conceptual independence means that such compounds are colorless units in which the parts dissolve for the whole. The baby-talk effect is the outsider's reaction. Wandruszka puts the same situation in proper perpective in answer to those foreigners who wonder about the German words Fahrstuhl and Handschuh: "Wir hatten in einem Fahrstuhl nie einen Stuhl vermisst, bei einem Handschuh nie an einen Schuh gedacht! Fahrstuhl und Handschuh sind Einzelbezeich-

36

o (5) Fig. 10 An isolating map of experience.

nungen, die keine analogische Verbindung mehr zu Stuhl und Schuh haben" (36). Combination of basic elements easily fills up empty parts of the map, e.g. Nahuatl kwa 'head' plus kwauh 'tree' gives kwakauh 'horn' and ten 'lips' plus tson 'hair' produces tentson 'beard'. A compound of the-two with -e 'with' gives kwakwauhtentsone 'goat' (to retain Misteli's 1893 orthography). In Chinese the writing system repeats this combination principle, even when we do not have iconic compounds like 'firehorse' or the like. In fact, the psychological support for logography is often contrasted with alphabets. Both capture part of the truth. The conceptual independence provides also the reason for why parts of compounds can fade and become 'regular' morphology (e.g. English -hood, -ly, French -ment, etc.). But in an agglutinative map we have the same situation "each added element determining the form of the whole anew" (Sapir 127, fn. 10), as is shown in Fig. 11 (Anttila 1975a: 28). The end point is where the formation tends to anchor in the terrain of experience. Let us take an example from IIIj and III 2 covering the area of 'together' as direction and state 2 . English uses together for both, and the form is hardly relatable to the original gather. Finnish uses inflected case forms of yksi 'one' for these: yhte-en (illative) and yhde-ssa (inessive). Swedish, on the other hand, would use independent symbolization for these notions: ihopj and tillsamrmns2 (with the radicals hop 'heap' and *sem'one-together'). Syntactic contexts as needed for English together will not be discussed here. Syntax and morphology intertwine in filling the expression gaps in the language maps over experience territory. Thus also syntactic constructions in certain contexts are liable to fossilize and be resymbolized to a one-to-one relation. Also this is well known in that today's morphology

37

Fig. 11 The anchoring of derivatives into the experience

terrain.

tends to be yesterday's syntax. Typical is the origin of many particles and adverbs (willy-nilly, maybe, kanske, quamvis, peut-être). The concreteness of thought (as over expression) Sapir talks about means the importance of the lexicon in which many variants have to be listed separately even when many speakers realize the connection (although hardly every inflected form as Halle now suggests; 1972, 1973). If morphology tends to draw syntax into itself, lexicon attracts from everywhere, in the right pragmatic context (see diagram H. in Anttila 1975a). All these facts present strong evidence (from linguistic change) that there is a surface level which is important in language use. Generative grammar took the emphatic position that no such evidence exists. But since relexicalization phenomena were still obvious, the generativists had to resort to all kinds of rule deletions, etc., as explanations. This attitude was — literally — preposterous, since the primary momentum is language use for communication (which Chomsky has denied, but was quite clear for Paul (1877: 328-9); Esper 36). Usage can also turn polysemy into homophony (cold, Schloss), and the boundaries between these are different for different speakers (see Anttila 1972: 144, 349). Also endings or parts of words can be drawn into the lexicon (ism, ology). Sapir has often been mentioned as a pre-transformational generative grammarian, because he gave processes their due. But actually he was more than that, he was a 'real' linguist, to coin a term, who understood the balance between the different components of grammar and the communicative situation. Again, there are modern statements that agree perfectly with Sapir's spirit. Note particularly the following in connection with wholes which have absorbed original motivated parts, wholes that are used as lumps in communi-

38 cation: "So wie in den Analogien drückt sich auch in den Anomalien eine geistige Kraft aus, das Vermögen, Einzelnes als Einzelnes zu benennen" (Wandruszka 35). Also Samuels echoes this in what he says about lexical intake (63): "Each new form is individual in origin . . . ". Wandruszka's book represents Sapir's spirit. He shows how the two sides, conceptual structure (Mentalstruktur) and expression structure (Instrumentalstruktur), combine in maps like above (he does not draw them). His central message is the mutual tension between analogy and anomaly (35): Unsere Sprachen sind keine unfehlbaren analogischen Systeme von strenger, zwingender Konsequenz. In ihnen ist unsere menschliche Erlebniswelt nicht in einem Netz perfekter Analogien eingefangen. Das ist eine ebenso einfache wie entscheidende Erkenntnis, die uns gerade in der letzten Zeit durch den Streit um so viele strukturaüstische, generativ-transformationelle, formallogische Systemmodelle immer deutlicher bewusst wird. Die Lehre, die wir daraus ziehen müssen, lautet: Wenn wir begreifen wollen, wie unsere Sprachen als Kommunikationssysteme funktionieren, müssen wir zuallererst versuchen, das ihnen eigentümliche Wechselspiel, das Gegeneinander, Miteinander, Ineinander von Analogien und Anomalien, von "Majoritäten" und "Minoritäten" richtig zu verstehen.

Then his whole second chapter is an explicit text on the topic. Anomaly is not peripheral but intrudes into the most frequent central vocabulary (40), anomaly is always somewhere consistent (41). "Seit langem weiss man, dass wir als Kinder in unserer Muttersprache von Anfang an beides gleichzeitig lernen, Analogien und Anomalien" (42) (cf. also Householder 1971: 70-4).

2.4

Analogy, competence, and performance

When Best discusses Neogrammarian analogy as language competence, he suggests that Brugmann and Chomsky are parallel analogists (29-30). Mayerthaler quite correctly notes that mentioning Chomsky in this context is mere name dropping (1974: 127). However, Best is right in that analogy occupied the place of competence and performance, and in general the rejection of analogy is a kind of self-imposed misunderstanding (Dinneen 1968: 102): In rejecting 'analogy' plus 'specialization' or 'generalization' as a basis for explaining normal linguistic behavior, manifesting poetic and creative aspects, Chomsky seems to equate analogy and the processes of generalization and specialization with 'stimulus control, habit structures and dispositions to respond*. He appears to assume that all these bases are mechanical and externally

39 triggering, whereas he would prefer to assign the sources of innovation to some internal principle.

We have seen that reasoning is diagrammatic, and this is the basic structure of the working of the human mind. Vincent, in his defense of analogy against generative malpractice, lists treaters of analogy in two camps (1974: 436):

Competence

Performance

Kurylowicz Kiparsky Vennemann

The Neogrammarians Sturtevant Bloomfield Manczak Andersen Anttila

Now his point is that the two approaches are possible because the truth lies half-way between: "analogy is neither a competence nor a performance phenomenon, but rather a sort of line linking these two aspects of language". Analogy works also between the perceptual and "predictive" sides of language, and thus data from child language acquisition are of paramount importance. We have indeed seen that both parts are correct. For the first, witness Dinneen's title, and the work of Hockett (Best 27), Householder, and Anttila. And analogy works exactly between abduction and deduction. Vincent's work is, as it were, independent evidence for these notions. In fact, Vincent's position seems to be parallel to Saussure's, who used analogy exactly between langue and parole. Analogy was the act of creation that used the old parts of the system to produce cohesion in the forms (Coseriu 1974: 209). Wandruszka ends his book (136) with a quote from W. Wieser (1968): Die Methode der Informationsverarbeitung des Gehirns ist nicht sehr genau; es irrt und korrigiert sich o f t ; es geht nicht logisch vor, sondern tastet sich nach Ähnlichkeitsformeln durch den Wust gespeicherter Information; es ist unerhört redundant, arbeitet langsam und bedarf dementsprechend vieler paralleler Kanäle, um Nachrichten von einer Station zur andern zu übermitteln.

The mechanistic notion of opposition, of the redundancy-stripping kind, as practiced by the American structuralists and the operational rule mechanisms as the transformational counterparts of that practice, will not do. Koestler comes to a similar view (1967: 537), although he, like anybody else, has "the hoary problem of the nature of similarity". It is of course not possible to get around oppositions, but as invariance must be under-

40 stood through a dynamic principle of similarity, so opposition must be subsumed under a dynamic notion of difference. Opposition relation is of course negated similarity. It is on these relations that analogy operates. This is why the Greek notion should be seriously studied, although there has never been general agreement in the definition of analogy (Lloyd 1966: 172). The hoary problem is the basic axiom. Wheeler noted long ago (1887: 43) that it was the name 'analogy' rather than the thing that seemed to awaken odium, but pointed out that the "Kinderkrankheit der Analogisterei" had assumed a chronic form. Havers (1931: 30-2, 52) and Hermann (1931: 123), among others, thought that 'analogy' simplifies the real thing too much, which is a direct parallel to the transformational program. The alternative was a psychological combination of Sprachgefühl and inner language plus memory as explanation (Anttila 1972: 106-7). Surely this was an attei ipt at explicating competence, although the notation had to remain proportional (cf. Best 57). Finally, to return to most recent times, Esa Itkonen has convincingly shown the logically faulty character of the transformational competence/ performance distinction, because the latter is action, whereas the former is idealized, unconscious knowledge about forms, not action (1974: 303). He proposes the practical-syllogism model as a better way of explicating the notion 'correct speech act in L', since this notion obviously contains the explication of the notion "correct sentence in L", [and] provides for the logically faultless notion of a potentially conscious competence for rule-governed linguistic actions and, moreover, formulates the crucial link between knowledge and (attempted) action. In other words, our model is able to account for the following facts: i) The relation of competence to performance must be that of potentiality to actuality, ii) As corollary the relation between competence and performance is necessary or conceptual, not contingent . . .

Thus we see that, indeed, one must ultimately come to the essence of analogical argument in this area (1.3). The competence/performance tabulation of linguists is inexact and rather arbitrary.

2.5 Analogy and rules The question of analogy in connection with competence/performance is one side of the question of analogy and rules, which we already know from the Greeks. The most important single articles on this topic are Malone (1969), unfortunately, although typically, totally neglected in the literature on the subject, and Leed (1970), which will be treated in 3.3. Malone starts out

41 by asking whether there might not be synchronic rules of analogy corresponding to analogical change? There is a tendency to take the synchronic impact of analogical change as essentially negative, but Malone wants to show that many synchronic anomalies seem to demand the formulation of synchronic analogy (A-rules), i.e. rules whose content is not merely deletion of, or restrictive conditions on, morphophonemic processes" (536). Malone correctly feels that this has been little studied to date, but does not seem to be aware of the 'Greek tradition'. Thus also his plea is independent evidence for the content of this chapter. A certain common sense must come out when one thinks about the subject with sensitivity and insight. Malone presents evidence from Tiberian Hebrew, Classical Mandaic, and Geez, adding thus a new axis to the Semitic pattern (2.1). In principle, wherever synchronic processes refer to total paradigms we have analogy, and this is more than just grammatical conditioning. The Hebrew anomalies are grammatically definable, the Mandaic ones lexically (cf. the grammatical instrument vs. the lexical tool, 1.1). In short, one can summarize Malone's detailed treatment by noting that he treats the area where traditionally we have the question of analogical restoration of the result of sound change or paradigmatic prevention of sound change (Sturtevant's paradox at large; 1947: 109). A rule in the form of systematic phonemics would go like (542): Geminate word-final consonantism is simplified just in case the relevant C X C X # does not contain wholely or partially the symbolization of a 2nd pers. pronominal morpheme.

and with an A-rule (546): Any allomorph of a 2nd pers. pronominal morpheme is replaced by that allomorph of the relevant morpheme which is unmarked within the same subparadigm. In verbs, sameness of subparadigm means identity of tense, binyan [morphological conjugation classes], and gizra [morphophonemic classes].

Like Komarek and Shapiro, Malone also thus assigns the task for morphological iconicity directly to the allomorphs, and he agrees with Shapiro in that the guiding force must be a theory of markedness, no matter what the differences are between them. This exposition agrees thus with the kind of semiotics of allomorphs recurring in this chapter. In the Hebrew and Mandaic cases a morphophonemic rule is descriptively adequate, but not explanatorily like the A-rules, but in Geez it is not even the former and we need an A-rule. In fact, morphophonemic rules can prevent paradigm regularity and its description (551). Malone admits of course that the actuation problem is not solved (546), and that A-rules need not always

42 be formulated wherever possible. "Perhaps such phenomena tend to comprise structures manifesting the first stages of systematic change" (558). Equally significant for synchronic analogy is Leed (1970), but his ideas will be presented in 3.3. Again, Malone's and Leed's treatments are independent of each other. Ohala studies (1974; among other works) experimentally the psychological side of 'sound change' by looking at extensions of systematic patterns (found by the linguist at least). He finds that the 'velar softening' {public/publicity) is only marginally productive (domestic + ism - mainly with /k/). In fact, the whole question of productivity has been overvalued in generative grammar, since it is "an empirical generalization, not an explanation or theory. The same can be said about the emphasis on rules" (Esper 167; cf. 193). Ohala comes to the conclusion that "there is also analogy, which [he] take[s] to be a shorter label for 'analogical phonological rules' (my emphasis - RA)". These require information about the phonological items, but also information from the lexicon as a whole. He led subjects to produce, on the model of detain/detention, -ion nouns from obtain and pertain, as well as -atory adjectives {explain/explanatory). For obtain, most subjects left the base unchanged (/ey/). The evidence (not reproduced here) points out that the use of standard generative unique underlying forms is not supported, because the same word can show different alternations in the speech of one subject. Such derivations cannot be based on single underlying forms, perhaps on no abstract underlying forms at all. In fact Also it shows that the particular form of the derivations, contrary to the assumptions of generative phonology, does depend on other words or pairs of words in the lexicon of the speaker. Having found, or, in the present case, having been provided with suitable existing models, the speaker can pattern new derivations after them, that is, he can analogize.

(Cf. this with Hsieh below; 2.7). Ohala realizes that he demonstrates the obvious, because philology always knew this, but he thinks it necessary, because of the 'generative' neglect of the topic. The essence of the proportional set-up of analogical phonological rules comes out in Fig. 12, which is a small extract of Ohala's total algorithmic summing-up (373). Differences to generative phonology are, among others: 1) These analogical rules do not operate on highly abstract forms, but derivations derive directly from existing (surface) models. 2) Separate linearly ordered rules (cf. 4.1) are unnecessary with analogical rules, because, say, the velar softening can be produced directly from such models (e.g. -c/-cize is the model for sputnik/sputnicize). 3) A speaker need not store in his memory

43

input proportion

output 'proportion'

DIFF.

SAME

input stem

Abth

eyn

model stem

tkspl

eyn

Abth - 1

1

ekspl -aenatoriy

Fig. 12 Ohala's analogical matrix for phonological

target derivation

model derivation

rules.

long lists of 'generative' phonological rules, because analogical rules can be made at the spur of the moment and forgotten (and they are primarily proportional as Householder 1971 stresses). Variability would result from the fact that different references to the lexicon are made by different speakers. This analogical process can be used for any sound pattern, whether phonetically plausible or implausible, and in fact, analogy shows that 'natural sound patterns' are not propagated more easily than 'unnatural ones'. Ohala concludes (1974), correctly, that generative grammar overdid its mentalism: it went too far in that direction. This, then, is hardly mentalism any more. (Ohala is one of the very few modern linguists who refers to Esper's experiments [5.3], and in fact his goals are largely the same, although he also lets nature do its own experiments.) This leads us to Derwing (1973: 308-13), whose discussion pertains also to the previous section (2.4). Derwing continues Householder's position (1971), and starts from the Greek list/rule opposition (lexical tool/grammatical instrument), and correctly comes to the necessity of both: we need a system of elements and rules (like Anttila 1972 passim). Paul (1920) understood the situation, Chomsky just re-emphasized the rules. At one level Chomsky's notion of a rule of grammar and the structuralists' notion of a regular language function or grammatical pattern may be regarded as much the same (309). Paul, Bloomfield, and Co., had a legitimate claim on analogy as the machinery of true creativity. A rule is abstracted from utterances (310; thus also Halle, Schlesinger, Stemmer). Chomsky is esoteric in rejecting analogy on the grounds that it is mysterious. Derwing (310) shows that Chomsky's rule itself is a mystery, because 1) it is without direct behavioral interpretation and 2) it is unlearnable. It is improper to argue that

44 the transformationalist's rules are simply analogies made precise, and once precise, indistinguishable from a transformation (Lakoff). Only Harris' notion is acceptable (310) (same in Householder). We need a performance model (cf. 2.4) where rules are put to use like Bloomfield's, which express behavioral regularities directly. We must demonstrate that the speaker behaves according to the rule in a situation which is novel to his linguistic experience (not having heard it before) (cf. Esper! ). Then the rule is a general surface-structure constraint. The difficulty in finding evidence for this leads King (1969) to throw out analogy altogether. This is totally unjustified. We seldom know when a mature native speaker is creative (Brugmann) (311). When the child is good enough to escape the inductive net we do not know (313). We only know in case of unsuccessful deductions (312). Certain implications for generative formalism have now been repeatedly written on the wall, but since not everybody is willing to read various handwritings, we must summarize the salient features here (Anttila 1975a). When linguistics is rediscovering morphology and function, it is clear that analogy also becomes more dominant. But more particularly, note that this chapter has shown that item-and-process and item-and-arrangement models are not mutually exclusive, as usually thought. It is in fact also surprising that, over the past decade, the item-and-arrangement handling of allomorphs has been gaining territory (Komarek, Malone, Shapiro). The distribution of, say, the allomorphs /sir) 'v saeq ^ SAQ / sing/sang/sung is of primary importance in the process of communication. There is little difference if we lift out the alternation /i 'v ae 'v A / and describe it with rules like i ->« / - [pret.] and i -*• A / - [ppp]- It might be just a trick of the linguist. A rule like X -»• Y/ - Z is a mere description of a morphological index, and is indeed an iconic (diagrammatic) picture of this index, but it does not mean that the allomorphs themselves would not represent the same diagram. A fundamental mistake has been elevated into an explanation, since the semiotic function of such indexes has not been understood by the generative grammarians. They have again described a tool without its function, and Shapiro says now (1974) that where the notation is adequate, it should finally be provided with content also. It should have been clear right from the beginning that one cannot do morphology as phonology. Generative grammarians still tend to stick to their original position: "There is a growing body of evidence that the derivation of surface alternants from underlying forms by means of ordered phonological rules is not merely an elegant descriptive device, but in a real sense a characterization of linguistic structure" (Kiparsky 1972 b: 277). As has been pointed out, this is by no means progress. The problem of homology and the ideal

45 type may give some historical clues, but for synchrony it might be useless for explanation (cf. Hesse 82-3). Otherwise Kiparsky (1972b: 279) and Halle (1972, 1973) lament the fact that our present understanding of morphology (and its change) is very limited. If we take e.g. Wurzel (1970) as a typical generative treatment of morphology, we see that the difficulty is the following: morphology is appended to the environmental part of "phonological" rules, which take the form of complicated and unreadable taxonomies (which should be unlearnable too, cf. Derwing's point 2 above). According to Halle the character of morphological rules has been studied to a limited extent. Now he wants to facilitate discussion and draw others to this topic, suggesting that exceptions be listed, a grammar include a dictionary of occurring words and morphemes with each inflected form in a separate entry. As far as he can tell, derivation and inflection must be handled in parallel fashion, and because paradigms influence evolution, they must appear as entities in their own right somewhere. Other interesting characteristics must await further study. This manifesto shows deep ignorance of the field. All the points mentioned are rather well-established traditional ones, except that some are rather surprising (all inflectional forms! ). Literature is further full of the evidence for a difference between derivation and inflection, as Sapir shows us above. The emptiness of this program is taken up by Lipka (1975), who lists some of the relevant literature. The same message comes out also in Boas (1974).

2.6 Pictures, family resemblances, and similarity The discussion of the previous section continues best with the nature of the dialect relation. I refer only to St. Clair (1973), who reviews succinctly the two approaches to it. The dependency principle maintains that dialects are mutually intelligible because they share the same underlying forms. In other words, we would have the same "progress" discussed above (2.5) in morphology, but here it would occur with bigger playing chips. Another extension from this is the pan-dialectal grammar, one grammar for "all" the dialects. St. Clair lists six major difficulties with this conception, and opts after all for the independency principle as the only viable alternative. This is the counterpart to the allomorph approach in morphology: the speakers of a language do not share the same underlying forms. St. Clair notes that the independency principle contradicts the accepted transformational position, but more important is the general dilemma created in this context. If the linguists "chose the independency principle, they would be

46 forced to relinquish their traditional assumptions about how the dialects of a language are related. Yet if they admitted to defining language as a class consisting of dialects which share the same property, then they would have to rescind the independency principle" (25b). St. Clair continues (25b-26): These conflicts can be resolved only if we reject the traditional notion that the dialects of a language share a unique property such as underlying forms, or phonological rules, etc. In its place we propose that the relationship among the dialects of a language is a family relationship (Wittgenstein 1 9 5 3 ) . Hence we view dialects as a complex network of similarities and differences which overlap in such a way that no two idiolects are necessarily identical, but in which there is enough of a similarity among the dialects in a speech chain so that speech can always be processed by means of phonological strategies. The function of these strategies is that of bridging the systematic gap that exists between related linguistic systems.

Wittgenstein argued very economically, with a simile of games and faces, in other words, analogy, for his family resemblances, complicated networks of similarities overlapping and criss-crossing. With these he 'solved' the problem of the universals, being able to avoid the pitfalls of both the nominalists and the realists. Both properties and resemblances are ultimate, there is no single answer, and the similarities and differences cover both possible and actual cases (Bambrough 1966; cf. Hesse). In other words, we run again into the networks of chain-analogies under a different name, since Wittgenstein does not (ever) refer to his predecessors (cf. Esper 11). Linguists have tended to fall under the fallacy of single answers, which always leads to extremes. When some degree of emes is obvious, it does not mean that we should push for a gapless maximally abstract system in every case. Family resemblances as a theoretical answer mediates between extremes. Systematic gaps should not be ironed out. In fact, Hintikka (1973) proposes the same even for logic in order to make it better fit natural language, rather than the earlier attempts the other way. Of course, there is no reason to posit only phonological strategies of processing communication (as St. Clair frequently mentions), although this is the easiest area of investigation. Such a phonological system was proposed, after Dyen (1969), in Anttila (1969 a: 82-86) and (1972: 283-4, 292), called 'dialect cohesion', whereas a wider higher-level notion would be a 'diasystem' (Anttila 1972: 292, 298-9, 317). These have been presented in connection with a pan-dialectal grammar, and are thus vulnerable to criticism (which has indeed come from L. Campbell, private communication). However, since the essential function of the notion is a partial cohesion, it can be naturally interpreted as the similarity part of family resemblances.

47 After all, a dialect cohesion is based on correspondences which are indexes easily expressed with the notation X → Y / _ Z. And note, the result is cohesion, exactly as in morphology. Family resemblances form an essential core of Wittgenstein's philosophy, although it seems to be nowhere stressed, which is true of many other philosophers also, since similarity belongs to the necessary axioms. Wittgenstein developed also a picture theory of sentence meaning, which is in part similar to Peirce's. In linguistics it has had no impact, even when pointed out (Anttila 1969: 59). We have in fact been talking about pictures of grammar rather than giving pictorial examples. There is a rise now in the appreciation of both Peirce and Wittgenstein, but the most comprehensive treatment of graphs in linguistics does not mention either. This is Stewart's (1972) dissertation on a theory of graphic representation for linguistics. In essence this is very much showing the value of analogy and models in science (1.3). Also others are now paying attention to this area (E. Zierer 1970, H. Karlgren; Gavare, Ore; Hoenigswald; and Anttila 1972 has a graphic quality throughout). Scientific theory and language are maps of their respective objects, which we have seen by reconstructing Sapir's language maps of experience (2.3), although these are extremely simplified. St. Clair's position is also the one adopted by Itkonen (1974: 316), whose point of departure is indeed later Wittgenstein on a much larger scale. One should note here also von Kutschera's support of the notion (1972: 267-79). The openness of language can only be understood through the fact that usage is never fixed for good; we can always name new things with old words. Thus the human mind needs an open ability to see similarities, and this openness is reflected in the first part of 'family resemblances'. We can use language correctly in new situations, and thus we do have some kind of a standard, even if a fallible one, but we bump again into an irreducible mental (or abductive) ability. It should be further noted that von Kutschera relies on the pragmatic priority. With family resemblances he explains the openness of word meanings, the variants that are contextually bound (e.g. the metaphor), and relative synonymy (cf. Posner in 1.6). Anttila (1974a) extended the notion of family resemblances to the definition of allomorphs. But then the best philosophical support for analogy as a relation of similarity came in the form of Uhlan Slagle's recent work (1974, 1975ab). His 1974 book must be read with the two articles due to certain unfortunate circumstances attending the publication of that book; to alleviate this his most recent work must be considered in toto. Slagle resurrects and combines the valuable insights of gestalt psychology and Kantian philosophy for a valid or adequate theory of mind. And only

48 this can give an adequate theory of meaning. Gestalt psychology provides the frame for a necessary scientific phenomenology; in particular, gestalt psychologists proved that similarity and contiguity are factors underlying spontaneous unification in perceptual fields, as shown by the Wertheimer dots (Köhler 1940). Experiments dealing with stroboscopic movement, in which there is no movement at all in the stimulus pattern, provide even more striking evidence in regard to the role of similarity and contiguity in effecting spontaneous unification. Thus, perception is a dynamic process. More importantly, immediate conceptual experience, the structure of meaning, and thought are based directly on the structure of sensory experience. And similarity as a mode of sensory organization plays a uniquely important role in this context. Slagle points out that Cornelius was particularly clear on this point, witness (Cornelius 1897: 42): Jene Betrachtungen zeigen ohne Weiteres, dass die Aehnlichkeitserkenntnis ebenso wie die Erinnerung eine für den Zusammenhang unseres Bewusstseinsverlaufs unentbehrliche, fundamentale Tatsache ist. Ohne dieselbe gäbe es kein Wiedererkennen und somit auch kein Begreifen unserer Erlebnisse: ein buntes Chaos zusammenhangloser, ewig neuer und unbekannter Empfindungen würde das einzige sein, was wir als Gegenstand unseres Erkennens besässen. Erst mit der Erkenntnis der Aehnlichkeit kommt in dieses Chaos Ordnung und Zusammenhang, gelangen wir dazu, die Bewusstseinsinhalte unter Begriffe zu fassen, so dass wir gewissermassen im Strome der Erlebnisse einen festen Standpunkt gewinnen, von welchem wir den rastlosen Wechsel zu überblicken und zu verstehen vermögen.

Here, and elsewhere, Cornelius was most explicit in regard to the experiential status of resemblance: Es lässt sich deshalb auch das, was mit Aehnlichkeit gemeint ist, niemals definiren, sondern stets nur aufzeigen.

As Bertrand Russell stressed (1945: 154): "the mind [apart from the perceptual faculty] is no more involved in the perception of likeness than in the perception of colour." In the same vein, Cornelius effectively answered the criticism of the gestalt (and Humean) account of abstraction whereby one abstracts on the basis of perceived similarity (1911: 244): Man hat gegen diese Theorie eingewendet, dass sie, um diese Begriffe der abstracten Merkmale zu erklären, die Abstraction bereits voraussetze: um an jene Ähnlichkeiten zu denken, müsse eben der abstracte Begriff der Ähnlichkeit bereits gegeben sein. Dieser Einwand beruht auf einem Missverständnis: nicht um den abstracten Begriff der Ähnlichkeit oder um die Beurteilung der zwischen zwei Inhalten bestehenden Beziehung als Ähnlichkeit handelt es sich, sondern um das

49 concrete Vorfinden derjenigen Tatsache, welche wir erst dann als Ähnlichkeit zu bezeichnen lernen, nachdem die Bildung dieses abstracten Begriffes vor sich gegangen ist.

One would like to say that the perceptual structure is the determiner of deep structure. Slagle gives rich references to scores of linguists and (empiricist) philosophers who have understood the issue; e.g. James, J. S. Mill, Jevons, and Ehrenstein argued that similarity underlies logic, with respect to both inference and categorization. There is an underlying unity of memory, thought, and perception, and the functioning of similarity as a factor underlying the dynamics of sensory organization explains both recall and categorization (already Aristotle realized the crucial role of similarity, see Lloyd 1966). The process of recognition is effected by a functional union of trace process (i.e., memory) and perceptual process. Gestalt completion figures show this quite clearly, since in these memory traces supply missing aspects of incomplete figures (Slagle 1975b): When this functional unification of trace and perceptual process takes place, what at first appears to be a group of unconnected, unrelated elements in the visual field spontaneously reorganizes itself, with the apparently unrelated parts sometimes almost literally snapping into place and the previously disparate elements being perceived as parts of a coherent whole representing something one has seen in the past, such as, for example, a telephone or a large merchant vessel.

As Slagle stresses, one needs this machinery in problem solving, and in that context, 'analogy' is indeed a word frequently used. Above all, this kind of functional unification of sensory processes is very familiar from dreams where various aspects of previous experiences are combined into new sensory experiences (cf. 1975b). (This aspect of Slagle's work makes it now possible to improve on the definition of allomorphs as a kind of Wertheimer dot configuration, see Anttila 1975f.) As Wolff (1963: 177) observed, "so far is Kant from demonstrating the separability of logic and psychology, as has long been fashionable to assert, that he actually demonstrates their complete inseparability." In fact, Slagle (1975ac) shows that the experimental findings of gestalt psychology provide the basis for following through on Kant's breakthrough in regard to the foundations of logic. The key lies in being able to explain in psychological terms what Kant labeled the synthetic unity of apperception. The crucial significance of being able to do this is brought into clear focus by Kant's own words (B134): Und so ist die synthetische Einheit der Apperzeption der höchste Punkt, an dem man allen Verstandesgebrauch, selbst die ganze Logik, und, nach ihr, die Transzen-

50 dental-Philosophie heften muss, ja dieses Vermögen ist der Verstand selbst. (Emphasis mine - RA)

Now as Lewis (1946: 361; cf. 331-2) suggested, the functional equivalent of Kant's synthetic unity of apperception is the rule-governed unification of memory and perception in the epistemological (i.e., the perceived) present. Consequently, Slagle (1975c) is able to demonstrate that the rule-governed functional unification of memory process and perceptual process effected by similarity (functioning as a principle of spontaneous unification in sensory fields) is the equivalent, in psychological terms, of Kant's synthetic unity of apperception. And explaining the synthetic unity of apperception in terms of the immanent dynamics of sensory organization removes the only reason for positing a separation of the categories and the transcendental schemata which are derived from the modes of the temporal (and spatial) organization of sensory experience. That this is the case is clear from the following passage on the function of the Verstand in Kant's theory (B313-4): Wenn wir denn also sagen: die Sinne stellen uns die Gegenstände vor, wie sie erscheinen, der Verstand aber, wie sie sind, so ist das letztere nicht in transzendentaler, sondern bloss empirischer Bedeutung zu nehmen, nämlich wie sie als Gegenstände der Erfahrung, im durchgängigen Zusammenhange der Erscheinungen, müssen vorgestellt werden. . . .

Kant made it abundantly clear here and elsewhere (cf. B195, 245-6) that the function of the Verstand is to organize sensory experience into a unified whole in terms of (spatio-) temporal ordering, a unity which is possible only through the rule-governed unification of memory and perception (i.e., the synthetic unity of apperception). Hence, as has already been indicated, since the rule-governed functional unification of memory and perception is effected by similarity functioning as a principle of spontaneous unification of sensory processes, the reason for separating schema and category completely disappears, along with the extremely problematic vestiges of traditional rationalism reflected in Kant's original formulations. Within this context the propositions of logic are ultimately based on the 'objective' order of sensory experience considered as a system of constant and systematic interconnections of possible sense experiences valid for any observer, with the rule-governed functional unification of memory and perception (effected by similarity functioning as a principle of spontaneous unification in sensory fields) underlying our ability to recognize the systematic order of experienced reality (1975abc). Analytic meaning is then, as Lewis (1946: 156) argued, based on the relations of the sensory criteria

51 which constitute the basis of applying concepts to experience. This was, mutatis mutandis, also the position of Kant in regard to analytic meaning (B133): . . . also nur vermöge einer vorausgedachten möglichen synthetischen Einheit kann ich mir die analytische vorstellen.

Indeed, such a position necessarily follows from Kant's contention in B104-5 that the same operations of the Verstand underlie analytic and synthetic meaning. (As Slagle [1975abc] stresses, criticism based on recent split-brain research is invalid because patients, using only the language dominant hemisphere, can distinguish between random arrangement of distinctive features and a patterned arrangement: the whole is more than the sum of its parts.) The first version of this book on analogy was written from the perspective of language as constant systematization, or diagrammatization (Coseriu, Jakobson), effected through family resemblances and analogy, and this aspect of course remains. But now we see that this conception in fact requires the process-oriented theoretical framework provided by Slagle's synthesis of Kantian insights and gestalt psychology. When you have a network of family resemblances (see also Pike in 2.9), you can derive new networks by activating the ones that are already there. Similarly Vipond (1975) stresses the fact that the same memory must be used by humans both for linguistic and nonlinguistic information. Independent dictionary models of knowledge (as in transformational grammar) will not do; we need network models that represent the above continuity and are (redundant) multifactor theories of language processing (323): ALL types of information - linguistic, cognitive, sensory - are contained in a single information network, and they are all represented in the same way; namely, in associative, or relational, form. Briefly, this associative structure may be illustrated as "aRb molecules," where a and b are atomic units and R the relation between them. . . . (my omission - RA) the meaning of a particular concept is given by its location in a network of interconnected nodes; that is, by its relations with other units.
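A minimal sketch may make this 'aRb' format concrete (all node and relation labels below are invented; Vipond's own network is of course far richer): a unit's 'meaning' is simply its location in the web of relations it enters into, and linguistic and nonlinguistic information are stored in exactly the same way.

    # A toy associative network in the aRb format: (unit, Relation, unit) triples.
    # Labels are invented for illustration only.
    triples = [
        ("dog",  "is-a",       "animal"),
        ("dog",  "sound",      "bark"),    # sensory/nonlinguistic information
        ("dog",  "word-class", "noun"),    # linguistic information, same format
        ("bark", "quality",    "loud"),
    ]

    def concept(unit):
        """A concept's location in the network: every relation it enters into."""
        return [(r, b) for a, r, b in triples if a == unit] + \
               [(a, r) for a, r, b in triples if b == unit]

    print(concept("dog"))
    # [('is-a', 'animal'), ('sound', 'bark'), ('word-class', 'noun')]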

Hence, the full concept is defined by a number of such relations. And note that one does not experience relation without the entities involved (Slagle 1974: 30), which is correctly seen by Pike also (2.9). (In addition to Wieser and Wandruszka in 2.4 one can note that Van Lancker [1975] is the newest treatment demanding the double-wiring model and its relevance for linguistics.) Slagle's work deals with concepts and their applicability to experience,

52 while the diagrammatization approach concentrates on signs. Note the similarity of the terms 'schematization' and 'diagrammatization'. Slagle's position that the syntax of language is ultimately based on the syntax of perception gives a justification for the diagrammatic nature of syntax. Perception as a factor in syntax comes out in Maher's situational motivation in syntax (1972, 1973). Slagle's work shows also the underlying unity of the abductive, inductive, and deductive processes.

2.7 Analogy, association, and contiguity

The linguistic experience maps just mentioned go under analogy in the form of association (they are symbolic icons). In general, analogy presupposes association (Scherer 1890: 25-6; see Esper 24), but not necessarily the other way around (Oertel 1901: 150-1; Esper 61). Only the Chomskyan tradition, also in psychology, finds association, like analogy, "an insult to man's intellectual capacities" (Esper 171, cf. 136), and wants to replace both with a rigid, recursive, closed rule mechanism (cf. Derwing in 2.5). Esper has indeed taken the issue from the right angle, as his title "Analogy and association in linguistics and psychology" shows. He is also qualified in both fields, whatever school feuds might detract from such a background. In general he shows that linguists are armchair psychologists, even when their willingness to apply the necessary notions to linguistics is the right one. The Greeks knew the chain of similarity, contiguity, and association (Esper 85); this can be taken as icon, index, and symbol. Association psychology was the foundation of the Neogrammarian analogy, and it was also the source of the proportion (Best 32-5). In fact, association is everywhere (Esper 33), and analogy is a process of form-association, and inasmuch it is unconscious, it is a natural-science phenomenon (Esper 26-33). (In psychology association is spatial or temporal contiguity [Esper 47].) According to Peirce the great law of association is indeed a principle strikingly analogous to gravitation, since it is an attraction between ideas, and such a principle belongs to phenomenologically influenced nomological psychology (1.270, 1.189). For Saussure also, the paradigmatic axis was intuitive and associative. Only associationism is dead, "but association remains one of the fundamental facts of mental life" (Koestler 1967: 642). Associationism is a doctrine connected with British empiricism (Esper 86). As Kant maintained, the craving for a cause is innate, and experience fills it with the law of association. Hume is not much different on this, and these facts must be mentioned to show that they are so basic that no invented fights between empiricists and rationalists can be resorted to.

53 A sign system like natural language funnels association in through the indexical channel, of course; whence it branches into the icons and symbols. Association is the very essence of symbol formation, and, as we have seen (e.g. 2.3), all linguistic signs have a symbolic element, and consequently they need association. It is generally agreed now that facts of language acquisition screen out the correct notions in these matters. The best treatment in this connection is Stemmer (1971). He shows how language learning is a verbal pairing situation where the child learns to associate an associatum with a certain stimulus. Associatum thus represents meaning and has partial similarity to classical conditioning. But we need also a theory of similarity in that the stimuli need not be identical, not a stimulus, but a kind of stimuli (cf. family resemblances). It is the notion of abstract subjective similarity (cf. abduction) that comes from the notion of degree of generalization that is central for the emic operation of the human mind. Exposure to different kinds of pairing situations enables the child to understand sentences he has never heard before. In this way he acquires a creative language. The ability to generalize between stimuli that are devoid of physical similarity is based on the role played by the acquired similarity sensations. New sentences must have parts that are familiar in order to be understood; innateness will not help here. For the generative grammarians, innovation always means total novelty. Thus acquired similarity becomes crucial, necessary, and natural in an explanation of language acquisition also. We have seen such similarity in the grammatic diagrams briefly sketched above. Note that even if all philosophical understanding of the operation of the human mind must accept similarity and association as basic axioms, such a theory must still be empirical, as Stemmer shows. It is the only rationalistic approach. Some linguists are thus willing to explicitly combine association and analogy, many more do it implicitly. Hsieh states it very succinctly (1972): It would seem that the strongest candidate for this hypothetical linguistic faculty is the 'power of association', or 'analogical power'. It is quite plausible that by the help of this power, a subject responds to a new or unfamiliar word by associating it with one or more already-known words that are similar in some respect to the new or unfamiliar item. He then supplies responses that resemble in some relevant aspects the responses that he would give to the already-known items being assimilated.
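Hsieh's 'power of association' can be mimicked in a few lines. The sketch below is only a rough illustration: the tiny lexicon is invented, and the abstract, acquired similarity discussed above is crudely reduced to string similarity; the point is merely that the response to an unfamiliar item is modelled on the already-known items it is assimilated to, for better or worse.

    # A crude sketch of analogical response: an unfamiliar word is assimilated
    # to the most similar known word and inherits that word's plural pattern.
    import difflib

    lexicon = {"cat": "cats", "dish": "dishes", "ox": "oxen", "fox": "foxes"}

    def respond(new_word):
        # pick the known word the new item most resembles (string similarity
        # here stands in, very crudely, for acquired similarity)
        model = max(lexicon,
                    key=lambda w: difflib.SequenceMatcher(None, w, new_word).ratio())
        suffix = lexicon[model][len(model):]      # extend the model's pattern
        return model, new_word + suffix

    print(respond("fush"))   # most like 'dish': ('dish', 'fushes')
    print(respond("box"))    # most like 'ox':   ('ox', 'boxen')

The second response shows why the choice of the assimilating model matters: association with ox rather than fox yields boxen, just the kind of extension that a similarity-driven account allows for.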

Otherwise Hsieh provides arguments also for the issue in 2.3. The child's lexicon is of the 'surface-forms-too' kind, and hence the adult's lexicon must be likewise. Phonological underlying forms are by far not enough. And we will see that even those who most firmly believed it (4.1) are

54 changing their minds (4.3). (Note how all this ties in with Ohala 1974; 2.5.) All novelties start in the first flash of chance, and if repeated they start to take habits, through association. Evolution is the product of habit, which is the only cosmological category without exception. And habit leads to laws (5.2 and see Anttila 1974 b). This is not the psychological habit of behaviorism. "Associative thinking is the exercise of a habit" (Koestler 1967: 647). Learning is the acquisition of a new skill, but then Koestler introduces his notion of "bisociation, which is the combination, re-shuffling and re-structuring of skills" (647). These categories overlap. Bisociation is the bringing together of independent matrices, it is the agent of originality. In other words, this is the insight of abduction, the idea of putting together what we never thought of connecting before (Peirce 5.181; Knight 117, Anttila 1972: 197; although Koestler has a rich bibliography it does not include Peirce). Association is the conservative force in structures, bisociation revolutionary or destructive (-constructive), cf. Koestler's summary (659-60):

Habit                                                    Originality
Association within the confines of a given matrix       Bisociation of independent matrices
Guidance by preconscious or extraconscious processes    Guidance by sub-conscious processes normally under restraint
Dynamic equilibrium                                      Activation of regenerative potentials
Rigid to flexible variations on a theme                  Super-flexibility (reculer pour mieux sauter)
  (cf. family resemblances - RA)
Repetitiveness                                           Novelty
Conservative                                             Destructive-constructive

The columns are quite relevant for linguistics also, e.g. generative grammar falls under the habit group. Note that the chain-analogies of language provide matrices for analogical reinterpretation quite naturally, through the components in the second column, as we shall see in later chapters. One important component in Slagle's work is Leisi's (1967) requirement for ostensive definition of language units in relation to their use (cf. Stemmer). Leisi refers only to concrete denotata, but everything in language must have public criteria (Slagle 1974: 26; cf. Esa Itkonen 1974), since we have to deal with what is actually experienced and intersubjectively

55 demonstrable on an empirical basis (28). There is no dichotomy between sensory relationships and grammatical relationships, because logical meaning ultimately refers to the co-occurrence patterns of sense experience. Traditional explanations of case markings were without success because of the implicit separation of allegedly pure conceptual relationships and all sensory relationships (Slagle 1975a: 185-8). In 1974: 45-9 Slagle adduces evidence from semantic change which shows how public perceptual data give names for private phenomena, and how concrete relationships produce 'abstract' ones. One should of course add that Slagle includes by necessity contiguity for the essence of linguistic symbols: Symbol and referent (i.e., the sensory processes underlying our experience of symbol and referent) are functionally united through contiguity functioning as a principle of spontaneous unification. This new functional whole can then interact with other similar functional wholes on the basis of similarity functioning as a principle of spontaneous unification. We control all of this by controling our attention processes which play a vital role in determining what interacts with what. This is the general truth in semiotics: symbols always need indexes and icons for actual instantiation (see e.g. Walther 1974).

2.8 One meaning - one form One of the most important features of language design is that there be as much one-to-one symbolization between meaning and form as possible. This was clear already to the ancients, since "the principle of analogy required that similar meanings be represented by similar forms", whatever the difficulties of defining similarity were (Esper 9). Varro could handle this more easily with stems and inflections, whereas referential meaning was more recalcitrant (viii. 39ff, x. 3ff, ix. 41) (Esper 10). In other words, pure diagrams are easiest in this respect. Since we have an emic principle working among family resemblances, this principle has also been called iconic (Anttila 1972 passim), as an eme should have one and the same picture as its representamen, not the etic variety. This principle is important in orthography; note here just Bell's requirement that the "fundamental principle of Visible Speech [is] that all Relations of Sound are symbolized by relations of form" (Stewart 128; cf. Anttila 1972: 31-43), the exact configuration thus depending necessarily on the emic level used. The principle is one of the cornerstones of Western linguistics, and it has gone under many names, among them: the requirement of optimality (Humboldt), univocability (Vendryes), the canon of singularity (Ogden and Richards), the laws of specialization and differentiation (Breal), the psycho-

56 logical factor that aims to eliminate purposeless variety (Curtius and Wheeler) (Anttila 1972: 107). Because of this variety Anttila (1972) names it more descriptively as the principle of one meaning - one form ('one' can be replaced by 'similar', 'same' or 'like'), and Wandruszka adopts the 'same' name quite independently: "eine Form für eine Funktion" (57) (cf. Lloyd 1966: 109, 110, 129-31, 433-5). Then, on the other hand, many do not devise particular names but speak of one-to-one symbolization or the like (e.g. Chafe 1967, Malone 1969). The principle of 'one meaning - one form' is in vigorous use in current literature, since it was in due course discovered by generative grammarians, who, except for Vennemann, think it a new breakthrough of their paradigm. Bever and Langendoen suggest such a 'predictive natural strategy' (1972) and Schane adopts it from there (1971). Vennemann calls this innate principle ("of linguistic change") Humboldt's Universal and he shows its use in Schuchardt (1972 a). His work in general reverses the post-Neogrammarian rule that laws and principles should not be named after people, and here a similar inaccuracy comes about as in Grimm's law, although Rask deserved perhaps more credit for what he did than the ancients on 'one meaning - one form'. Some generativists look at the principle from the point of view of paradigm transparency (Kiparsky 1971, 1972a), without reference to the long tradition. The principle says that suppletion is undesirable, but we have seen that it still occurs, particularly in items with high frequency, which immediately shows the requirements of usage, as we have seen (2.3), not to speak of allophonic variation (2.1). The fact is, natural language is impossible with one-to-one symbolization only. Such is the design of formal languages, which are closed systems (Wandruszka 105, Esper 202, fn. 62). The very essence of natural language is an interdigitation of polymorphy and polysemy, either more than one form for one meaning or vice versa (Wandruszka 71-3). Polymorphy thus looks like redundancy, polysemy like a flaw, since it leads to misunderstanding and deficient information: Polymorphie bedeutet synonymische Konkurrenz, bedeutet, dass die Welt unserer Empfindungen, Vorstellungen, Gedanken nicht von vornherein festgelegt ist in einem starren System umkehrbar eindeutiger Formen und Funktionen. Polymorphie bedeutet Wahl, bedeutet Freiheit . . . Und zusammen mit der Polymorphie ist die Polysemie systematisch die auffallendste Schwäche, asystematisch die grösste Stärke unserer Sprachen (Wandruszka 72).

The connection of the two is the first crucial prerequisite for the living mobility of language, enhanced further by the alternation between expli-

57 cation and implication (Wandruszka 103-4, cf. Householder 70-4). The two configurations can be diagrammed as in Fig. 13 (Anttila 1972: 100).

Fig. 13 One to many meaning-form pairings. A. polymorphy: one meaning paired with form 1 and form 2; B. polysemy: meaning 1 and meaning 2 paired with one form.

These configurations are central to the synchronic functioning of language. Configuration A represents allomorphic alternation, compounds, phrases, and synonymy; configuration B mirrors metaphor, metonymy, homophony, and loan translation (Anttila 1972: 144). Since language use shows the necessity of bending one-to-one relations, |, into one to many, Λ or ∨, we can understand how language change would be a direct result of this (2.3), in its tendency toward 'one meaning - one form', when the context is right. Anttila (1972) develops and uses this graphic notation throughout. We can write leveling as Λ > |, and split as Λ > |, |, in which each variant becomes an independent sign with a one-to-one relation between meaning and form. When extension of alternation in a certain subset of vocabulary takes place, the net result is still unity for diversity: Λ, | > Λ (130). The fading of metaphors agrees with this principle: ∨ > |; or polysemy can split off: ∨ > |, | (of vs. off, through vs. thorough). And note Λ = bake/batch, which is now |, | for most speakers. Thus we see that analogy and metaphor represent fundamental generalization, not just mere simplification: analogy = Λ > | or Λ, | > Λ, metaphor = | > ∨. Anttila (1975c) uses this notation also for larger questions of optimization and function in evolution and sociolinguistics: generalization = Λ > |, specialization = | > Λ. In other words, the creativity of language depends on this breaking up of one-to-one relations, and usage pulls the variety towards the one-to-one configuration. The ultimate forces here are similarity/analogy and association. As Sapir said, all grammars have to leak in this sense (cf. also Wandruszka 127), and language has to change to stay the same (Anttila 1972: 393, Coseriu 1974: 245), a medium to serve the functions of communication. Ignoring this is also in Hockett's (1968) analysis the very denial of natural language in Chomsky's conception (1968; cf. Wandruszka 32). We have now almost reached the end of the synchronic argument on analogy in linguistics, but this section has also introduced linguistic change

58 through the permutations in the meaning-sound link-ups, as is brought out in sharp outline through the | - Λ - ∨ notation. Note that both kinds of shifts, | > Λ, | > ∨ and Λ > | or |, |, ∨ > | or |, |, give the machinery for facing new situations (true creativity). The first cluster shows that in some situations the linguistic sign has to send out a tendril to grope into the unknown. This is possible in the network of analogies that exist in language and because of the machinery of human cognition. Analogy is a way of extending knowledge (and theories) and a necessity in language use also. In the second cluster we have resymbolization in a situation which has become stabilized in usage. This is indeed a cycle that goes on constantly in language. The movement of a living cell has recently been shown to involve a similar mechanism, a strange phenomenon called "ruffling" (Time, Dec. 31, 1973: 43). The cell sprouts thin, veil-like folds along its edge, which grow upward and then drop on the surface. Once anchored, the rest of the cell flows over the ruffles; and so on. Note the parallelism to metaphor: once it is well anchored in usage it fades and pulls the whole sign with it, even if it leaves a replica behind. The reason why transformational grammar went astray in its conception of creativity was perhaps largely due to its adherence to the principles of formal languages (Itkonen 1974), which need rigid one-to-one symbolization. As Hintikka now points out, the logic of a natural language should also leak, naturally (2.6). Stewart's (2.6) major emphasis is to develop a graphic metasystem that would show maximally one-to-one representation between meaning and form, and thus she has to operate with | - Λ - ∨ lines between two columns, one for the graph, the other for its meaning. Her final recommendation shows |-relations only, except for one Λ-relation: the "meaning" of componential analysis is represented by both a matrix and a cube (178).
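For readers who prefer an operational statement of the notation, the following rough sketch (the meaning-form pairs are invented illustrations) sorts a small sign inventory into | (one-to-one), Λ (one meaning, several forms) and ∨ (several meanings, one form) configurations.

    # A toy classifier for meaning-form pairings in the | / Λ / ∨ notation.
    pairs = [
        ("dog",       "dog"),    # a plain | relation
        ("plural",    "-s"),     # one meaning ...
        ("plural",    "-en"),    # ... two forms (allomorphy) = Λ
        ("bird-beak", "bill"),   # one form ...
        ("invoice",   "bill"),   # ... two meanings (homophony) = ∨
    ]

    forms_of, meanings_of = {}, {}
    for m, f in pairs:
        forms_of.setdefault(m, set()).add(f)
        meanings_of.setdefault(f, set()).add(m)

    for m, fs in forms_of.items():
        if len(fs) > 1:
            print("Λ polymorphy:", m, "->", sorted(fs))
    for f, ms in meanings_of.items():
        if len(ms) > 1:
            print("∨ polysemy:  ", f, "<-", sorted(ms))
    # pairs not reported above stand in a plain | relation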

2.9 Formative blocks and indeterminacy of segmentation

The question of relation (2.8) is of course an old problem of ontology that has always led to the analogical and metaphoric relations. These relational set-ups are essential configurations in many systems. In other words, the relations between two strata are of three types, as we have seen, viz. one to one, one to many, and many to one; note Russell's classical example of this: marriage as monogamy, polygamy, and polyandry. The same is true of physics also, on the level of atomic particles. Zwicky brings this out in treating linguistics as chemistry (1973: 480):

59 First, there is the matter of isotopes. The discovery of different atoms of the "same" element with different masses (and even atoms of "different" elements with the same mass) is an obvious embarrassment for a theory which takes an invariant mass to be criterial for a given element. The existence of isotopes, especially those which (like light and heavy hydrogen) have quite distinct properties, makes the study of subatomic structure inevitable. What are semantic analogs of isotopes? They are occurrences of the "same" semantic prime with different meanings. Just this sort of situation is exhibited by lexical items which are "denotatively" distinct but do not differ in any independently motivated semantic feature; these are terms for correlative species, for example rose, chrysanthemum, and pansy, or snap, crackle, thud, and rumble.

This fits Fig. 13 very well. Zwicky's argumentation, which ends with a plea for analogy and metaphor in the vein of Hesse (1.3), could also be combined with family resemblances (2.6). The same relations lie of course behind Paul's material and formal groups, since syncretism is a general phenomenon also. But stratificational grammar has most fully utilized this structure. In it different levels are converted into others along such lines (be it just referred to Makkai and Lockwood 1972, and Lockwood 1972 here). Typically, Lockwood does not mention 'analogy' by name, but Christie (1973, 1974) has to, since he treats change. The stratificational model is totally analogical, however. No other model can in that detail specify the bases of analogy. Further, the lines in the model can be easily interpreted through family resemblances. Change must be described as rewiring, or traveling alternative paths, and this is much more satisfactory than, say, reordering of rules. All in all, the model falls into the basic analogical pattern stressed by Householder (1.6), who refers rather to Pike and the Pikeans, and their matrices (see below). Matrices handle also family resemblances, but more importantly for analogy as such, the slot/filler distinction relies on the dichotomy potentiality/actuality (2.6). When we combine the content of 2.1-2.5 and 2.8-2.9 we get the very essence of language structure. The terms 'item-and-arrangement' and 'item-and-process' were used, since they are familiar to most readers. But actually they are inadequate for an adequate conception of language structure, since what is at issue are not items. In this approach, which derives from Andersen and Shapiro, we have structures with processes of diagrammatization (see Anttila 1972 for elementary graphic presentation throughout). This contrasts drastically with the bland rewrite rules of the post-post-Bloomfieldians. The processes involved convert hierarchic structures into linear diagrams. This is possible because of variation rules (i.e. Λ) and neutralization rules (i.e. ∨), as Andersen shows throughout his output. We see that it is only because of the triplicity of |, Λ, and ∨ that diagrams can be produced (woven) so that hierarchy can be captured in decoding.

60 Whether or not this model is named structure-and-diagrammatization in analogy with the others (a construct perhaps too cumbersome to be adopted), the credit goes to Andersen for a breakthrough. It is of course true that he built on Jakobson's work of the past decade (1965: 35; "system of diagrammatization") (cf. 3.6). So far only Shapiro and Anttila have joined in exploring this line of thought. So far this chapter has presented the synchronic picture as it presented itself to me (except for Slagle's work) over the past fifteen years. This vantage point seems rather typical for the linguistic scene in that most linguists know part of this scene, whereas here the coverage is much wider and gives the proper total situation (even if not exhaustively). But the treatment still contains a serious distortion: omission of Kenneth L. Pike's work, which seems to have been ignored as if on purpose. This is one of the typical results of the "generative revolution" which drew the attention of most linguists on narrower sectors. In 1960 Pike wanted to check to see if in grammar there might be an analogue of a phonetic chart (Pike 1963, 1965, 1975, Pike and Erickson 1964). His basic requirement has been that grammar is both a particle (unit), a wave, and a field phenomenon, and that the linguist needs the freedom of shifting from one aspect to another (cf. the match with item-and-arrangement, item-and-process, and word-and-paradigm). In short, relations are nothing without units (cf. Slagle). Pike stresses also (correctly) the value of redundancy, since this will enable the linguist to use a direct heuristic approach even in the middle of unknown data. This way one can carry out analysis without waiting for the emergence of the total network. Field components overlap in particles, and redundancy of signal does not allow parts to be suppressed. Thus although, e.g., "/b/ versus /z/ signals contrasts between 1 sg. and 1 pl. [in German]; so also does zero versus /t/; etc." (1965: 199). And Pike goes on: We are interested in the reporting of field redundancy here — not in eliminating it. Representation in terms of morphemes, furthermore, becomes inadequate to describe the devices signalling meaning, whenever allomorphs of a morpheme (supposedly noncontrastive) do in fact carry part of the signal of a contrast. I do not desire, here, to replace geometric spatial display with an algebraic formalization of all the intersecting, redundant, signal components. Geometric insight must sometimes be given priority over rules and formulas. Preservation of the irregular but redundant features of a field structure, as basic to that linguistic subspace, leaves them structurally available for consideration in problems of structure synchronically, or of change over time.

Pike developed (geometric) field diagrams which show the overlapping

61 similarities, e.g. as in Fore pronouns (Fig. 14:A, from 1963: 6; also given in 1975).

A. FORE
         1st     2nd     3rd
sg.      na      ka      a
pl.      ta      ti      i
du.      tasi    tisi    isi

B. ARABIC
         sg.     du.     pl.
3m       y       y       y

Schuchardt's position has been revived recently by Vennemann in connection with generative grammar (1972 a), and his treatment is generally referred to in current literature. There is an interesting difference between Best and Esper. Best emphasizes the positive line from Schuchardt to Hermann to Winter (1969), whereas Esper has not much sympathy for this approach; in fact, he does not see any contribution in Winter (196). In Best, Winter is the other modern author (in addition to Leed). Winter asks whether analogy is so different from sound change, after all. What is analogy? "Die Ausformung einer bestimmten Verhaltensweise nach dem Vorbild einer anderen" (cf. Householder 1.6). A model is enough, and it together with an imitation constitutes a system. Thus Paul's proportion A : A' :: B : X → A : A' :: B : B' must be rejected and Hermann's Ac :: Bc → Ac :: B'c accepted. This accommodates contaminations easily, since it means that under a condition c a formally different B becomes more similar to A. In contamination the influence can go both ways, e.g. tiny, wee → teeny, weeny (Palmer 1972: 242) or Old French citeain, denzein, both 'native inhabitant', giving Anglo-Norman citizein and denizein (Anttila 1972: 76). Winter notes how the semantic components in kinship terminology impose formal similarity, e.g. how the shapes for 'mother' and 'father' get more similar in many branches of Indo-European (Tocharian B macer, A macar influenced by B pacer, A pacar). The same semantic tie exists between Armenian ustr 'son' built after dustr 'daughter', or Russian brat'ja 'brothers' brings forth synov'ja 'sons' (instead of *synove), and so on (similar discussion occurs also in Jeffers 1974). There are levelings both over sex and generation. A semantic connection between two words, not formally related, can be shown also if both are replaced by related items, as happened e.g. in Latin filius/filia 'son/daughter' and Greek adelphos/adelphe 'brother/sister'. This agrees with Householder's explication of suppletion as part of analogy a la grecque (1.6, 2.1). Common replacement is now an extreme form of leveling which Hermann's formula also covers. But how is semantic similarity to be defined? Winter delineates three cases: 1) (near-)synonymy: periturus → moriturus, 2) antonymy: periturus, moriturus → oriturus, paritura, and finally 3) paronymy: leaina 'she-lion' → lukaina 'she-wolf'; he further gives tables of distinctive features for such groups. Winter correctly concludes that even if the result sounds

74 trivial, it is the only one possible. To this Hermannian line we can add a more Schuchardtian one also. Malkiel has treated "The inflectional paradigm as an occasional determinant of sound change" (1968), in which he shows how the paradigm of Spanish digo/dize(s)/diga(s) 'say' has exerted powerful analogical influence on other verbs, supported by verbs in -ngo (frango/frañes). Thus we get fago/faz(es)/faga 'make', and others, but the exact rich details of the configurations must be omitted (cf. also Coseriu 1974: 116). Rochet (1973) treats the merger of Old French [ẽ] and [ã] and shows it to be morphologically conditioned, arguing thus at the same time against the alleged universal that nasal vowels tend to lower (e.g. Ohala 1974). Rochet shows that the merger started in the present participles -ent and -ant where it can be viewed as a normal merger eliminating purposeless variety. Rochet expressly refers to Schuchardt's conceptual analogy. From here the substitution spread to the rest of the lexicon (purely phonetic analogy). The chronology is right, and in the lexical diffusion words of Greek origin like talant mediated in the separation from the participles (also covant [-ent], dolans, noient [néant], etc.). Ultimately the variation -ent ~ -ant was generalized into all eN ~ aN (also -ance over -ence). Thus Rochet concludes that we have to consider all levels in looking at sound change. Phonology is not sufficient for it. Language is an extremely complex phenomenon which is easily simplified beyond reason in our research tactics. Rochet's criticism of generative grammar will be taken up later (4.2). Schuchardt's notion of rule generalization, which lies behind the above presentation, is also resurrected by Vennemann (1972 a) as a correction to generative grammar (4.1 and even 4.3). 'Rule generalization' was mentioned once by Kiparsky (1965: 2.32), but the reference was omitted in Kiparsky (1968), whereafter rule generalization has been taken as a recent conceptual achievement. Vennemann considers the motivations for change and the functions for change as symbolization devices with a phonetic and a conceptual end, against Kiparsky's (1968) mere formal relations within phonology only (198). Vennemann tabulates the classifications of phonological changes as in Fig. 15 (199, 200, 202). But "both a typology of phonetically motivated simplifications and a typology of conceptual-analogic changes as grammar changes are still desiderata of linguistic theory" (202). (For an answer to this, see 3.5, 3.6.) Fairbanks (1973) looks at the types of 'sound changes' that any theory has to explain, and there are four of them: 1) A sound changes everywhere that it occurs, under all phonological conditions, or 2) only under some particular set of phonological conditions. Or else 3) a change occurs under conditions that must be stated morphologically or in terms of a morpho-

75

OLD (TG)
I. Innovation: 1. Rule addition
II. Simplification: 1. Rule generalization, 2. Rule loss, 3. Rule reordering

NEW
I. Phonetically motivated simplification: 1. Addition of natural rules, 2. Rule generalization, 3. Rule unordering (into "intrinsic" or "feeding" order)
II. Conceptually motivated simplification: 1. Rule loss, 2. Rule generalization (?), 3. Rule reordering (from one "extrinsic order" into another, or from "bleeding order"), 4. Exemption from rules, 5. Counter-phonological application of rules

FUTURE
Part I the same as in New, but Part II retains only No. 1 and adds 2. Relexicalization, with a. a resulting lexical redundancy rule, b. a morphologized rule, c. an inverse rule

Fig. 15 Vennemann's tabulation of phonological change.

76 phonemic alternation (cf. 2.5, 3.3, 4.1). 4) A change may also occur in a particular grammatical category, but does not affect all items in that category (199-201). Fairbanks thinks that the traditional grouping of 1 and 2 as 'sound change' and 3 and 4 as 'analogy' is valid (202). Generative historical linguistics has confused the issue in very much the same way as structuralists did in the beginning. Otherwise the similarities between tradition and generative grammar are greater than one would think, since the differences are natural outcomes of self-imposed restrictions (208). The best part of the article is reasoned criticism of generative inadequacies (202-8). The phonetic/conceptual distinction is coming into the foreground also because phonology is getting more concrete and allomorphs regain their earlier independence (2.1, 2.3). Thus also Karlsson, who speaks against unique underlying forms (1974: 11; Kiparsky in 2.5 above), has a channel that allows for 'phonologization' of a morphophonemic process (48), which would agree with Rochet above.
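Since the next section turns on Kurylowicz's defence of the proportion, it may be useful to have the classical four-part format A : A' :: B : X in operational form before going on. The sketch below is only an illustration under crude assumptions (it treats forms as plain strings and transfers the A : A' difference mechanically to B); the word pairs are stock textbook examples, not taken from the authors discussed here.

    # A minimal proportional-analogy solver over strings: A : A' :: B : X.
    def solve(a, a2, b):
        i = 0
        while i < min(len(a), len(a2)) and a[i] == a2[i]:
            i += 1                            # shared initial material of A and A'
        res_a, res_a2 = a[i:], a2[i:]         # the alternating residues
        if not b.endswith(res_a):
            return None                       # the proportion does not apply to B
        return b[: len(b) - len(res_a)] + res_a2

    print(solve("care", "cares", "bake"))     # bakes
    print(solve("sing", "sang", "ring"))      # rang
    print(solve("sing", "sang", "bring"))     # brang: the sort of overextension
                                              # the bare proportion cannot rule out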

3.5 Kurylowicz vs. Manczak

The Kurylowicz-Manczak controversy about the 'laws' of analogy has been the main launching pad for subsequent discussions (1949 vs. 1958; and later, esp. Manczak). Best devotes half of his book to it (61-110), and I refer to his bibliography for the details of the dialogue and the laws themselves. In fact, these laws are regrouped and combined in some introductory books (Lehmann 1962: 188-90, Arlotto 1972: 135-42 [Kurylowicz]), and they are reproduced also by Vincent (1974). By now the whole issue provokes ennui rather than excitement, and one can understand why Esper gives only one and a half pages to it (186-7). On the other hand, he devotes the same amount of space to Sarah Thomason's (1969) ideas about analogy (196-7); in fact, Esper concludes his main text: "Thomason has demonstrated, I think, that careful study of historical and comparative records can yield useful generalizations about analogy". This is typical of the current scene: even when one ignores the standard classifications and descriptions, one comes back to them anyway. Thomason's groupings seem to be perfectly Kurylowiczian after all (see 4.3 end). The controversy is mapped in detail in Best. Be it mentioned here that Best characterizes Kurylowicz's approach as qualitative and Manczak's as quantitative. Manczak emphasizes frequency and statistics, Kurylowicz the sphere of employment. Since Kurylowicz is an ardent proportionalist and Manczak attacks it, we could also say that the former uses a formal approach the latter a material

77 and probabilistic one. Vincent extracts three main factors common to both; the issues revolve around 1) morphological markedness, 2) length/strength of exponents, or 3) elimination of redundancy and reduction of allomorphic variation (cf. 2.4). This constant reshuffling of the laws and parts of them, both by interpreters and the authors themselves, has led to an impasse. There must be some truth in both, but there must also be a lack of a proper frame for it all. Curiously, all the new prospects for studying analogy that Best can suggest are better statistical methods! That is a completely wrong avenue, it would seem. The impasse of the Kurylowicz-Manczak controversy can be broken only by adopting a frame of reference that does not take changes as unanalyzed events (as required by Rochet and Coseriu 1974 also). This frame is indeed provided by Andersen's abduction/deduction model (1.2, 1.4), and he has commented on this particular controversy in his lectures (and here thanks go also to extensive private communication). What follows is an attempt to look at the controversy from Andersen's point of view, which, by now, has also turned out to be the best theory of change. Kurylowicz's and Manczak's approaches are rather incommensurate, because the latter deals with diachronic correspondences on the surface, the former with grammars. Both use rather inexact terminology so that the indeterminacies multiply. Kurylowicz says, among other things, that basic forms will determine the direction of analogy to derived forms. Consider the example pek-/pec- 'cook' that Trnka used for Czech (pres. ind.; imperative 2 sg; but these paradigms are provided by Henning Andersen; k, = K): ORu. NRu.dial. St. Ukr.

ORu.        peku     pečeši    pečetĭ    pečemŭ    pečete      pekutĭ    ; pĭci
NRu. dial.  peku     pek,oš    pek,ot    pek,om    pek,ot,o    pekut     ; pek,i
St. Ukr.    peču     pečeš     peče      pečemo    pečete      pečut     ; pečy
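The three paradigms just cited lend themselves to a schematic illustration of the abduction/deduction split (a deliberately simplified sketch: the endings are regularized, the imperative with its c is left out, and the generated strings only approximate the forms above). Abduction is the learner's guess at a basic stem and at whether a morphophonemic rule is needed at all; deduction then spells out the paradigm from that guess.

    # Abduction vs. deduction with the pek-/peč- material above (schematic).
    endings = ["u", "eš", "e", "emo", "ete", "utĭ"]   # 1 sg. ... 3 pl., simplified

    def deduce(stem, palatalize):
        forms = []
        for e in endings:
            s = stem
            if palatalize and e.startswith("e"):
                s = stem[:-1] + "č"                   # k -> č before a front vowel
            forms.append(s + e)
        return forms

    # Three abductive guesses, three deductively generated paradigms:
    print(deduce("pek", palatalize=True))    # the inherited state: peku, pečeš, ... pekutĭ
    print(deduce("pek", palatalize=False))   # alternation levelled out to k (NRu. dial. type)
    print(deduce("peč", palatalize=False))   # č generalized through the paradigm (Ukr. type)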

These different generalizations of the original k ~ č ~ c do not disprove the law, but they show the problem of determining which alternant is basic, viz. a question about the abductive process. Kurylowicz has phrased it as deduction: if rules are not applied, basic forms will surface. Thus much of the confusion in the controversy arises in that the two processes are not distinguished. Kurylowicz maintains that compound allomorphs replace simple ones, and Manczak retorts that stem alternations are more often eliminated than generalized (cf. 2.1). What Kurylowicz means, in Andersen's scrutiny, is that if a desinence plus stem alternation has been set up as a morpheme, then it could expand at the expense of simple allomorphs;

78 deduction again, a subtype of the above case. Manczak's statement is about diachronic correspondences which must be restated with reference to abduction: endings accompanied by stem alternations are more often analyzed as simple morphemes plus morphophonemic rules rather than compound morphemes. Diachronic correspondences must be broken down into sequences of abductive and deductive innovations and thus the laws of analogy must be reexamined as to what kind of questions they provide answers to. Manczak's purely empirical generalizations often make sense when analyzed in Andersen's truly sophisticated theoretical framework rather than in Manczak's own (Kurylowicz fares less well). Note the following of Manczak's laws: 1) The number of synonymous stems and desinences is more often diminished than increased = speaks of the tendency of learners to account for surface alternations by basic forms plus morphophonemic rules. 2) Longer stems and desinences more often replace shorter (plus 0) stems and desinences than vice versa = (abduction again) learners tend to choose longer allomorphs as basic. 3) Longer desinences and stems are more often remade on the model of shorter desinences and stems than vice versa. This includes nonproportional analogy, blends = learners assume that similarities in meaning are diagrammed by similarities in form (cf. 2.8, 2.9, 3.4). 4) The more frequent categories of grammar more often furnish basic forms than the less frequent categories = a perfectly understandable abductive process, which Kurylowicz neglects from his deductive bastion. Manczak's score is better because he was less ambitious than Kurylowicz, who wanted to say something about grammar without a clear conception about it. Andersen's insight in seeing the basic dichotomy as abduction (Manczak) vs. deduction (Kurylowicz) is typical of his solid work. This explains also admirably why no solution has been reached, because the authors did not see their inconsistency either. As already indicated, Kurylowicz's problem is mixing his deduction with as much abduction. One of his famous laws is that the innovating form inherits the old function and the old form acquires a new function. Thus, e.g. when the English genitive was voiced (in voiced environments) as a case form (one's), adverbs having secondary function were lexicalized with the old shape (once), or when the original dative plural endings were lost, they could survive as adverbs (e.g. whilom of while 'time'). The problem is that it is so easy to find counterexamples to this law, even if it were true in slightly over half of the cases. Kiparsky (1974) also took up the reshuffling of Kurylowicz's laws and presented cases in which derived meanings took on regular patterns (silly gooses, mouses 'black eyes', wolfs 'aggressive males', baddest man in town, Mickey Mouses, silverleafs (white poplars), barrelcactuses, etc.).

79 Kiparsky (1974) concludes (against Kurylowicz) that derived forms are regularized first. This should indeed be obvious in most cases since deduction fills new needs and this means the use of the regular patterns of the grammar. 'New usage' of course bypasses the old irregularities in the established usage (cf. 2.3). This area of derivation, lexicalization, and usage is indeed well-investigated, even if classification is in flux. The fact that Kurylowicz is right much of the time should mean that deduction is not the mechanism here. At least 'lexicalization', 'secondary function', etc., do not explain the situation. And, indeed, Andersen (1973) explains the difficulty by adopting the distinction of innovation vs. change (the adoption and spread of innovation) from Coseriu (1974). Once a deductive innovation gives rise to doublets (e.g. worse/badder), the process of differentiating these, in other words, of assigning different value to them, is a matter of abductive guesses, made by learners who have no use for history, and consequently will not always assign more marked value to the older forms. It all depends on the total historical situation (context, frequency, etc.), which both of the Polish scholars, after all, think an important indeterminacy factor in analogical change. Kurylowicz is right only for the deductive phase in the production of doublets, but the crucial screening, assigning different status to variants (basic vs. derived, colloquial vs. technical, etc.), happens in the abductive process of grammar formation which takes place after the deductive phase (under the imperative of 'one meaning - one form', elimination of 'purposeless variety'). Similarly, Kurylowicz's statement that languages will give up less important distinctions but retain the more important ones, is fraught with difficulties, for how do we measure the importance of a category (ordre central)? There are undoubtedly differences between languages, and the danger lurks in that a particular change tells us it post facto. Anyway, the question must be translated into the terms of an abductive process in which language learners have to correlate linguistic categories with communicative needs and reality. Kurylowicz's last law (1945-49: 36) deals with the spread of analogical change, and he says that the extension of morphological changes is at the same time internal (to a grammatical system) and external (it takes place within a speech community). Part of a proportion belongs to the model, part to the imitation. This is true, and is in fact a truism; it specifies what happens when the innovations take place (a l'origine). But such contact innovations give rise to doublets which in the further development may be subject to the same process of selection and differentiation as doublets that arise through an internal development. In other words, we have an adaptive contact innovation a la Andersen (see 3.6 below). The net result, under Kurylowicz's deductively posed answers is the

following: two generalizations about the deductive process (innovations arise in synchronically derived formations, and consist in basic forms surfacing), and two about the abductive process (retention of important distinctions, and adoption of new forms of usage by adding adaptive devices to the grammar; see below in 3.6). But note the following. To remain oblivious to analogy and its force, or to deny its existence, was precisely what prevented generative grammarians from progressing in any meaningful way with morphology (cf. Vallini, and see 4). Proto-Indo-European comparative morphology advanced noticeably by implementation, however fallacious, of Kurylowicz's contested laws, as Thomas Markey points out (private communication). These laws still permitted very significant progress and effected clarification of otherwise complex problems. (For the analogical mechanisms in dialect geography, see Markey 1973.)

3.6 Andersen's typology of language change

When we combine the preceding section (3.5) with two earlier ones (1.2, 1.4), we see that the main theoretician of language change today is Henning Andersen. Although it is difficult to condense his work because of its extraordinary solidity and richness in every detail, a brief outline should still be a useful conclusion to this chapter, inasmuch as it solves the dichotomies that have so troubled many linguists (sound change vs. analogy, phonetic vs. conceptual analogy, etc.). Very roughly one can say that Andersen has developed Jakobson's work (of the last decade) to a better determined, more 'tangible' level. On the other hand, he has taken up many of the problems raised, and also partially solved, by Coseriu. In particular, Andersen uses Coseriu's distinction of innovation vs. adoption (Coseriu 1974: 67-71, 122, 128), which was already equally clear in Sturtevant (1947: 77-84); and Coseriu also understands the futility of diachronic correspondences (Coseriu's term 'equivalences', 76, 116, 189, 225-6, 240), which interfere with Jakobson's work. Andersen's output proves Coseriu's statement that we have looked in the wrong place for true change (204), and he actually splits up the change process into its ingredients much more accurately than anybody before, in fact so well that he explains change. Both Jakobson and Coseriu understood grammatical conditioning of sound change, both were expressly teleological in outlook, and these ideas come to full fruition in Andersen's work. Coseriu has also rejected physiological explanations as ridiculous (121), and emphasized the choice of the speaker (186, 198). These are indeed integral aspects of

Andersen's abduction/deduction model. Note further (Coseriu 236; and read the following pages also):

. . . 'that by which language is language' is not only its structure (which is merely the condition of its functioning), but the activity of speaking, which creates and maintains language as a tradition. If change is understood as the systematic becoming of language, there can obviously be no contradiction whatever between "system" and "change"; more than that, one may not even speak of "system" and "movement" - as of mutually opposed things - but only of a "system in movement": the development of language is not a constant, arbitrary, and uncertain "self-transformation", but a constant systematization. And every "language state" represents a systematic structure precisely because it is a moment of systematization. With the concept of 'systematization' the antinomy between diachrony and synchrony is overcome in a fundamental way . . .

When we combine the conception behind the Coseriu quotation with the important influence of Jakobson (Coseriu: structure and systematization; Jakobson: system of diagrammatization), we get the basic background for Andersen's work, which transcends the bland rewrite rules of the generativists (2.9). From such a 'structure-and-diagrammatization' model we can step into typology. Since the basic model has already been given (1.2, 1.4) we can go directly to typology (Andersen 1973, 1974). The overall typology for all change can be tabulated in a straightforward way like this (Andersen 1974: 41):

I.  Adaptive innovations
    A. 1. Accommodative innovations
       2. Remedial innovations
    B. Contact innovations

II. Evolutive innovations
    A. Deductive innovations
    B. Abductive innovations

To the abductive(B) - deductive(A) axis, already discussed, Andersen has added that of adaptive (I) vs. evolutive (II) change. Adaptive innovations arise from the communicative system, language use, whereas evolutive change is based on the linguistic system alone, of which of course the former is the more complex one (1973: 780). Adaptive innovations alter the relation(s) between a grammar and the communicative system so that information about the latter is necessary (1974: § 1.0). Accommodative innovations react to the semantic inadequacies of the code by procuring the missing semantic con-

82 figuration(s). Remedial innovations patch up inadequacies by creating new forms for existing meanings and thus alleviate or prevent semantic ambiguity (therapeutic replacements, etc.)- Contact innovations arise between two differing codes in that either interlocutor imitates the usage of the other. The motivation here, as against I A 2, is the desire to speak like someone else. This necessitates an abductive inference as to the manner of producing a usage to be imitated on the basis of one's own grammar. Imitation is achieved by positing adaptive (patch-up) rules (A-rules; cf. Malone's A-rules in 2.5). Such rules can apply in individual lexemes and are thus additive and subordinate to the phonological structure, but they enable the speaker to match the received norms of the community regardless of the exact structure of his own grammar (1973: 781-2). Note that these rules provide a means to match a model and we get an explanation for the situation assumed by Winter (3.4). As for the characteristics of abductive (II B) and deductive (II A) innovations, we have seen them above (1.4). To be noted is the covert character of abductions which surface through deductions. Note that such recuttings have been a constant trouble spot in classifications of analogy: to include them or not to include? Nonproportional analogy? Yes, but note that now analogy is a term like the relative adjectives (tall, good); one has to know the context for exact interpretation. Andersen has exposed the basic mechanisms of diagrammatization, and traditional analogy represents combinations that are more visible to the uninitiated. It is like showing that water, steam, snow, and ice are all H2O, a demonstration that can indeed be called scientific. Andersen shows that A-rules produce symbols, and phonological implementation rules diagrams of distinctive features (1973: 785), whereas the question of teleology is solved as follows. Abductive innovations II B have no teleology, but I B shows teleology of purpose, and deduction II A teleology of function in two ways: it validates the learner's analysis and makes the relations that constitute the phonological structure more explicit (1973: 789-90). In general, then, the surface target is given in abduction, and the means to get to it must be guessed at. In deduction, however, the means are given and experiments produce surface forms. Or very roughly, abduction attacks from the outside, deduction from the inside. In his typology article (1974) Andersen presents a wealth of material for all this and a thorough analysis from phonology by contrasting changes with their logical alternatives (bifurcations). Change is looked at from the point of view of the decisions the learner has to make, and every decision has a logical alternative. Such decisions are about feature valuation, showing the reality of binary (+ or —) perceptual features. They can be about

83 segmentation, and this proves that segments are not fictitious, or about ranking, which shows that phonemes have subordination in their features and thus immediate constituent structure. Ranking is a purely mental operation without physical correlates. Note that these decisions are abductive, and that accommodative innovation I A 1 cannot exist in phonology, which has no referential meaning, only diacritic. Further, these decisions, in this work, relate to phoneme inventories, not phonological implementation rules. Finally, Andersen (1974: 47-8) shows that the model explains with equal facility also innovations in morphology and lexis, and the parallelism should be natural, since the same apparatus of inference is used. ME wiche 'person skilled in sorcery' did not specify sex. When man-witch or he-witch surfaced we get evidence for a preceding feature valuation, and then the relative importance of 'skilled in sorcery' and 'female' was reversed, in other words, an innovation in ranking took place. In fact, Andersen shows that Stern's adequation, the most common type of semantic change, is ranking of features, and permutation is change in the features, innovation in valuation. Stern's nomination and substitution (1931: 294) correspond to accommodation, and deductive innovations are further those transfers that are based on the similarity or contiguity of the new referent with others, different types of analogy (paradigmatic relations within the grammar), and ellipsis. Indeed, reinterpretation and other types of analogy do get their share of the same clarity. Segmentation belongs typically to lexis, since no immediate semantic consequences need follow. We have seen recutting as in alcohol-ic to alcoholic (1.4), in which two segments remain. But single units, especially if they are long, are often folk-etymologized, and thus one unit becomes two, e.g. Meerkatze, and sparrow grass. Here the constituent morphemes are now used purely diacritically. Loss of boundary occurs in cases like boatswain to /bowsan/, but also when the surface does not change as behind the English spelling here, and cf. Handschuh, Nahuatl tentson (2.3). The morphemes in these words are now again mere diacritic markers. The phenomenon is traditionally called fading and in this case we have a shift from grammar/morphology into lexis. It is difficult to spot such changes because, with typical abductive effect, there is no overt change in shape. The same phenomenon is stressed by Skousen (1973) for shifts from phonology into morphology (without using the abduction/deduction model). Even when morphophonemic/morphological alternations can be described by 'phonological' rules of various sorts, there is no guarantee that the speakers do it that way. It often happens that they learn morphological paradigms instead. This would be an abductive innovation without visible effects (like Nahuatl ten + tson to tentson). But when this has happened one can expect

that 'irregularities' will be leveled out to form better morphological diagrams. These steps conform perfectly to Andersen's model. Classical Greek had -n as the accusative singular marker, except for most consonant stems, which showed -a (these derive from the same source: -V-n, *-C-n > -C-a). Jeffers (1974: 246-7) tabulates the situation as follows:

             C-stem (1)        C-stem (2)         s-stem
nom. sg.     pater 'father'    Hellas 'Greece'    tamias 'treasurer'
acc. sg.     patera            Hellada            tamian

Now -n was extended to all nouns (acc. sg. pateran, Helladan, tamian), and this set the stage for further remodeling:

tamias       X                 X
tamian       pateran           Helladan

and we get the Modern Greek configurations

pateras      Helladas          tamias
pateran      Helladan          tamian
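The remodeling just tabulated is a four-part proportion (tamias : tamian = X : pateran), and its purely formal mechanics can be shown in a minimal sketch. The sketch is an editorial illustration only, not part of Jeffers' or Andersen's apparatus; the function name and the string-based split into 'stem' and 'ending' are assumptions of the example.

def solve_proportion(a, b, d):
    # Given a : b = X : d, return X (string mechanics only, no linguistic analysis).
    # The shared prefix of the model pair a : b is treated as the stem.
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    ending_a, ending_b = a[i:], b[i:]          # tamias/tamian share the stem tamia-
    if not d.endswith(ending_b):
        raise ValueError("the model pair does not apply to this form")
    return d[:len(d) - len(ending_b)] + ending_a

print(solve_proportion("tamias", "tamian", "pateran"))   # pateras
print(solve_proportion("tamias", "tamian", "Helladan"))  # Helladas

Actual analogical innovation is of course constrained by the grammatical and semantic paradigm, not by letter strings; the sketch only shows how solving the proportion yields the new nominatives pateras and Helladas.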

The support of the vocalic pattern has been significant, of course (hippos/hippon). Jeffers concludes that "it is important to note that analogical extension is often preceded by a morphological reanalysis which itself must be explained in terms of some grammatical context. The explanation of the prerequisite morphological reanalyses is very often neglected". This is also a call for the abductive/deductive chain, and Andersen's detailed answer (1974) came at the same conference (of course, e.g. Sturtevant had not neglected the issue at hand; 1.4). In conclusion one can note a few more points. Andersen (1974) has convincingly shown that his criteria do indeed relate intimately to the structural features of language and that the study of change contributes to the understanding of structure (cf. Peirce, Coseriu, who used the same approach in their work). What decides the actual surface change in the classical conception (sound change, analogy, etc.) is the actual area of structure that is fed into the mechanism. This point has not been understood by generative grammarians (but it was already by Coseriu, e.g. 80, 86, 89). Andersen has also deepened our understanding of classics in the field, viz. Kurylowicz, Manczak, and Stern, adding precision, new depth, and inspiration. Anttila has done the same for Bréal (1964), Eve Clark (1973)

85 (semantics), Sturtevant (1947) (morphology), and Lindblom (1972) (phonology) (and cf. Skousen above). It is also significant that there is no independent syntactic change, for which Elizabeth Traugott (1972) and Robin Lakoff (1972) pleaded so ardently at the turn of the decade. Anttila (1972: 102-4, 128-9, 149-52, 355-6, etc.) treated it as a result of analogy and fading, in other words, the same abductive/deductive patterns as the other changes. Note the traditional Finnish example of an infinitival form like juoksevan (cf. 1.5), a fossilized accusative singular (103-4). Using 'analogical' terminology we can indeed say that all change is analogical rather than lautgesetzlich, because of the diagrammaticity of linguistic change. But the underlying change must be broken up along the lines Andersen has delineated.

3.7 Toward a dynamic theory of language Certain questions of theory must be drawn into the open. As a pro analogia book this was written in the framework of structural linguistics, and thus, although its extreme forms are criticized (4), structuralist concepts are taken for granted. Man is an analogical animal. His metaphors, similes, allegories, proportions, mathematics, and poetry are due to analogy (Buchanan 1932, 1962 App., Leatherdale 1974: 127 App.); analogy is indispensable to science (see also Hesse 1974 App.). Hoffding (1967: 2) noted that analogy is not quite a category, but very important in all real thinking; without it, a correct conception of nature and the validity of our cognition would be impossible. In fact, it deserves to be a category, since all metaphysics rests upon it (37); it provides the only way to designate relations between different sciences (72). Furthermore, quantity is a quality and thus all empirical sciences are based on analogy (48, 70, 79). (Note that the most adequate scientific theory, a logic of science, requires a similarity network model; Hesse 1974 App.) Analogy thus covers perception and cognition in a central way and provides an essential continuity between sciences and between parts of a whole within a holistic conception. Hoffding was quite clear on this. Elements are not just there to be combined (5); rather, totality is the starting point (Ganzheitsrealismus) (16). The law of synthesis displays constant continuity between different grades, and thus analogy between the same (49). Analysis (criticism) only comes afterwards (100). Analogy is one factor toward the equifinality of a dynamic conception of science, and this is no surprise, since analogy resides in synthesis, relation, continuity, contiguity, and similarity. Its acceptance in linguistics

86 has been hindered by the prevailing antiquated conception of science. It is ironic that the human sciences strive toward Newtonian rigidity while the physical sciences try to get away from objectivity (objects), predictability, and tyranny of method (Merrell 1975: 93-4 App.). Merrell (1975) finds the epistemological and scientific presuppositions of structuralism seriously deficient. The same is true of current overwhelmingly reductionistic linguistics. Merrell asks for a dynamization of our macroatomistic structuralism, and this entails that our universals of being must be replaced by universals of becoming (process must be primary; cf. 5.1). Thus we return to Aristotle's analogical universals (and note further how Heisenberg's matter must be understood as potentia) (cf. 1.3). The upshot is that "we must learn to live with ambiguity and vagueness, aware that Cartesian "clear and distinct ideas" are impossible . . ." (103). This dynamic conception of science is of long standing. Quantum physics started it in the 1880s and by the 1920s it was solidly established (in part independently) in biology and gestalt psychology. Linguists know the slogan 'The whole is more than the sum of its parts', but other related terms are electromagnetism, field phenomena, organismic conception, systems conception, dynamic morphology, morphic tendency, holism, synthesis, structuring process, emergent forms, and gestalt notions. When this framework (cf. 2.6) is adopted in linguistics, as we must, we get a gestalt linguistics (Anttila 1977 App.). The indications toward this (Slagle, Anttila 1975 f) can be supplemented with other works that plead for the theoretical primacy of field effects (see Anttila 1977). The dynamic force of similarity argument from Anttila (1975 f) is applied by Paunonen (1976 App.) to Finnish noun declensions. Bolinger (1976 App.) admits that language is a jerry-built structure (but tightly organized) and argues that the holistic approach overcomes the prevailing reductionism. The distribution of practically every item runs into ad hoc aspects (cf. 2.3). The total context shows that ambiguities are semantic illusions (11-2). Bolinger's position can be taken as a natural outgrowth of his long espousal of similarity (see App.). The most explicit statement on analogy and gestalt patterning in syntax and its change is Terho Itkonen's (1976 App.; see 5.3 p. 119), which rests on similarity vectors, the so-called Wirkungszusammenhange of the gestaltists, between surface structures. Itkonen presents ample evidence from Finnish patterns that have over the centuries influenced each other and shows how apparent chaos turns into coherence once we explicate the interaction of analogical factors in syntax.

4 ANALOGY IN GENERATIVE GRAMMAR AND ITS WAKE

4.1 The original revolution The 'generative revolution' in historical linguistics, as it is often called, hit exactly the Neogrammarian seam between sound change and analogy. The main momentum in this was Kiparsky (1965, 1968) and the issue was taken up by Postal (1968) and King (1969). Here we have the core of the matter and it represents a rather unitary phase in the development of generative grammar. The choice of the target for this attack on tradition is typical in that it picked a fight against outmoded positions (cf. v. Wright 1971: 33). Only a small portion of the Neogrammarian and American structuralist practice was singled out and the rich pre- and post-Neogrammarian literature on analogy was ignored. Kiparsky's "convincing" thesis was that the inadequacy of proportional analogy can be solved by taking analogy as a perfectly regular type of sound change (Postal 1968: 259), under the auspices of combining sound change with imperfect learning (abweichende Neuerzeugung). The surprising element in this is of course the prominent position of sound change, which represents a reversal of earlier, traditional, positions. This is all the more astonishing, since the basic frame that an explicit synchronic grammar must be the playground of change is the generative manifesto in this area. As we have seen, this is a truism, although grammar has often been neglected in practice. Thus the generative statement ends in the rather monotonous position that sound change is grammar change, analogy is grammar change, and borrowing is grammar change. This is defended through a further dichotomy: generative grammar maps changes in competence (langue) which depend on abstract rules even in phonology, and not the Neogrammarian fluctuations in performance (parole). The notation that combines sound change and analogy into rule change is of the type X → Y / ___ Z (in other words, the notation of sound change), and as we have seen (2.5), grammatical criteria can be appended to it where needed (Y = structural change, X and Z = structural analysis). In orthodox generative grammar this notation was strictly structuralistic, in that no linguistic variables were included; these were added by Labov, e.g. Y is a function of style, class, age, etc. (Weinreich-Labov-Herzog 1968: 170; Sankoff-Cedergren 1974). A basic typology of change includes now the following types: rule addition, rule loss, rule reordering, and simplification (King 1969: 39-63;

see Vennemann in 3.4 and Fig. 15). These are considered primary changes; they apply to rules. In contrast, restructuring (lexicalization, change in underlying forms) is taken as secondary. Simple rule addition corresponds to traditional sound change in that a rule is added to the grammar. Although the piling up of such rules ultimately will have deeper effects in grammar, the notion is indeed that of sound change, which does the same. All the other types correspond to traditional analogy, although this was originally denied. We can leave out grammatical conditioning of change since we have seen that there were always scholars who accepted and used it. But one type of the additions was a rule inserted high up in the hierarchy of ordered rules. A standard example is Kiparsky's (1965) treatment of Lachmann's Law in Latin (cf. King 43-4). The situation is the following. Latin passive participles show lengthening of the root vowel before those /k/'s which are underlying /g/'s, thus facio/factus (with /k/) vs. ago/actus (with /g/). In other words, with ordered rules:

1. a → ā / ___ g   [ppp]   (*agtos > *āgtos)
2. g → k / ___ t             (*āgtos > āktus)
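How much hangs on the ordering of these two rules can be made concrete with a minimal sketch; it is an added illustration in Python, not Kiparsky's or King's formalism, and the transcriptions and function names are the example's own assumptions.

import re

def lengthen_before_g(form):
    # 1. a -> ā / ___ g (restricted to the participle, which is all we derive here)
    return re.sub(r"a(?=g)", "ā", form)

def devoice_g_before_t(form):
    # 2. g -> k / ___ t
    return re.sub(r"g(?=t)", "k", form)

def derive(underlying, rules):
    form = underlying
    for rule in rules:
        form = rule(form)
    return form

# Lengthening ordered above the (older) devoicing rule: *agtos > *āgtos > āktos (cf. āctus)
print(derive("agtos", [lengthen_before_g, devoice_g_before_t]))   # āktos

# The chronological order of the two changes, devoicing first, would bleed the lengthening
print(derive("agtos", [devoice_g_before_t, lengthen_before_g]))   # aktos

The second derivation shows why the new rule has to be inserted above the inherited devoicing rule, which is exactly the embarrassment taken up next.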

The embarrassing fact is that 2 occurred in Proto-Indo-European, hence Latin has added a later rule above it in the hierarchy (we will return below to this example). The standard example of rule loss is the disappearance of final devoicing in some German dialects. The devoicing 'was added' to German grammar around 900-1200 and produced paradigms like wec/wege 'way', tac/tage 'day', etc. When the rule is now dropped, original voice surfaces again, e.g. Yiddish lid/lider 'song', tog/teg 'day', and noz/nezer 'nose' (King 47). Another treatment in this vein is Wagner (1969), who treats the loss of the Old English rule w → ∅ in cases like sierwe/sierest 'I/you serve'. The rule, having very low functional yield, dropped, and thus w surfaced again: sierwe/sierwest. In such cases 'analogical' forms come about automatically (leveling in these cases). One of Kiparsky's famous examples of rule reordering contains two Finnish sound changes (rules), loss of gamma and diphthongization of long mid-vowels. Thus Standard Finnish

1. Diphthongization: vee > vie 'take'
2. Loss of gamma: teɣe > tee 'do'

Now, in the Eastern dialects there was actually a second diphthongization, but Kiparsky says that the explanation of the new tie 'do' is a reordering

of the above rules (unfortunately these facts are fictitious to boot):

1. Loss of gamma: teɣe > tee
2. Diphthongization: tee > tie, vee > vie
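Taken as a mechanical recipe, the reordering claim is simply that the same two rewrites applied in opposite orders give the Standard and the Eastern outputs for 'do'. The following sketch is again only an added illustration that takes the (admittedly fictitious) data at face value; the transcriptions and function names are its own assumptions.

def diphthongize(form):
    # long mid vowel: ee -> ie
    return form.replace("ee", "ie")

def drop_gamma(form):
    return form.replace("ɣ", "")

def derive(underlying, rules):
    for rule in rules:
        underlying = rule(underlying)
    return underlying

standard = [diphthongize, drop_gamma]   # order as in Standard Finnish above
eastern = [drop_gamma, diphthongize]    # the reordered grammar

for word in ("vee", "teɣe"):
    print(word, derive(word, standard), derive(word, eastern))
# vee  -> vie in both orders
# teɣe -> tee in the standard order, but tie in the eastern (reordered) one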

Or take the 'way' paradigm from Standard German (King 87). Originally the vowel was lengthened in open syllables and we had thus

1. Final devoicing: veg > vek
2. Vowel lengthening: vega > vēga

but today we have

1. Vowel lengthening: veg- > vēg-
2. Final devoicing: vēg > vēk.

These two examples must suffice, although the literature is full of others (since Halle 1962, Saporta 1965, Keyser 1963). Rule simplification deals with loss of specification in any part of the rule. Thus a rule like

[stop, voiceless] → [spirant, voiceless]

affects p t k > f θ x. But if it is simplified by eliminating the feature [voiceless] ([-voice]) the result is p t k b d g > f θ x β ð ɣ. Thus a loss of a feature specification entails a spread of a process, and indeed, even reordering is taken as loss of order marking and hence simplification. The most important generative concept in this area is simplification in general (King 64-104), under the auspices of Halle's slogan that "children simplify grammars and adults complicate them" (1962: 64). Underlying it all is of course a black-box language acquisition device. Loss and simplification are supposed to be much more specific than the old proportional analogy, because they allow prediction of the direction of leveling (King 132). Thus how do we explain that the Pre-Gothic grammatical change (in the principal parts of the verb) s s z z (kiusan 'choose') was leveled to s s s s and not to *z z z z, since we have both voiceless, p p p p (greipan 'seize'), and voiced, g g g g (steigan 'climb'), unalternating series. King forgets that he has exactly the same problem with rule loss, i.e. we have Yiddish hant 'hand' and gelt 'money' which have t and not d as King expects (47). Generative grammar also believes to have found the regularity of analogy,

90 and in general analogy becomes superfluous because simplification is enough (King 133). We of course know already that such claims are totally empty (esp. Derwing 2.5). But one should note that the considerable abstractness of the approach has also led to rules that apply to other rules by taking them from one stage to the other. Such rules are called metarules by Foley (1972) and epirules by Grace (1969), and they glorify diachronic correspondences by bypassing history. The approach can be summed up with the following quotation from King (39), since it is very characteristic of this phase: . . . the study of linguistic change is the study of how the grammars of language change in the course of time. We have nothing to gain from comparing phoneme inventories at two different stages of a given language and seeing what sound has changed into what other sound. Such a comparison gives as little insight into linguistic change as a comparison of before-and-after pictures of an earthquake site gives into the nature of earthquakes.

Despite this explicit disavowal, this is exactly what generative grammar did by tallying diachronic correspondences between before-and-after structures. A phenomenon typical of revolutions is doing the exact opposite of what one professes. Finally, a short note on irregularity is called for. Mayerthaler (1974) notes that e.g. Lakoff (1970 [< 1965]) represents a modern treatment of anomaly. This is indeed true, in many ways, but the treatment is geared to the abstract transformational search for underlying invariance. Since only regularity is highly valued, natural language creates a disappointment here, with its usage-bound irregularity. The whole issue is made arbitrary by the fact that lexicon is somehow a necessary evil. The approach is extreme analogist in the Greek sense with the addition that usage is denied. Understanding the role of lexicon would have helped (Karlsson 1974: 31-2; cf. Hsieh 1972). The arbitrary line is continued by Halle (1972, 1973) who both wonders at exceptions and requires that all inflected forms be listed in the lexicon, a requirement that is extremely baffling (Lipka 1974, Anttila 1974b, Karlsson 1974), to say the least. Similar reaction is understandably increasing. Thus also Hans Boas points out that Halle simply denies the creative aspect of word formation (1974). Denial of creativity is perhaps the foremost characteristic of generative grammar, as this criticism crops up in such various quarters.'

91 4.2 Appraisal of the revolution The preceding section was rather short, since it selected out the main points only. Much more material would be available, and one might require that more details should be given because of the enormous impact of the school. Why it is left at that can be justified by the fact that it is the best known modern approach to 'analogy' in any case, and by the fact that it was misguided on all points. Part of this reaction has already been presented in the preceding chapters, but much remains to be brought up. The treatment takes off from the points in the manifesto (4.1). The generative approach to historical linguistics is a step backwards, since it aligns itself spiritually with the negative aspects of the Saussurean and Hjelmslevian persuasions of structuralism. It distorts history in at least two important ways. First of all, it ignores almost totally the achievements in the field; and, secondly, it thinks that a formal account, on paper, can be the cause of historical change. In other words, the school goes expressly against all the warnings to this end. The earthquake metaphor is appropriate for the structuralists King mentions, but equally appropriate also for generative grammar. After all, change is expressly defined as diachronic correspondences (cf. Andersen 1972). The inadequacy of this conception was also known to Hermann, who spoke against the interpretation of structural conditions as causes (1931). The danger of falling into the determinism of the system is a "form of mysticism more dangerous than the idealist mysticism of a creative people" (Coseriu 1974: 182, 224, fn. 71). Also Stern warned against proportions on paper (Esper 153), and the same is seen by Stewart (115), who notes that transformational grammar begs the question by relying on graphs on paper. In fact, Coseriu's document from 1958 (1974) is a serious one, and should have prevented many dead ends (one should hope that the translation will now be noticed). Coseriu shows that description is not history (12-3), and that the positivistic conception of language is totally wrong (23). Historical linguistics belongs to the domain of intentionality. One must not seek natural causes outside the freedom of choice (170; cf. Andersen 1974). In short, neither historical documentation nor systemic integration explains change (186); we need the teleology of purpose. What Coseriu says about the structuralists is equally true of generative grammarians, who are extreme structuralists. They cannot really answer even the how of change, because of their (mere) diachronic correspondences, and thus they cannot practice history at all (189-90). They just classify and describe (180). In 1968 the question was still open as far as generative grammar was concerned (Weinreich-Labov-Herzog 1968: 144):

92 There was a time when sound changes were being reclassified under the headings of "additions . . . deletion . . . substitution . . . transposition of phonemes". We presume that a repetition of this simplistic exercise in relation to rules . . . is not to be taken as the chief contribution of generative theory to historical linguistics. . . . it is far more important to see whether it offers any new perspectives in the explanation of changes (all omissions mine - RA).

This presumption was wrong. No new explanations have come forth; on the contrary, many have shown that the generative approach obscures the clarity and correctness of earlier analogical explanations. To those criticisms presented above (3) we can add here the following. Newton (1971: 53) notes that it is risky ground to take rule change as primary. Rules are not shuffled around in people's brains, they belong to the trends that wax and wane in the community. Further, 'competence' in the technical sense is of course also an unjustified extrapolation when one studies change. Kiparsky believed that he can explain the regularity of analogy if the change occurs in the morphophonemic system (1968: 7). "But even if we choose to use the term 'regular' for both analogy and sound change, this can certainly be no argument for not distinguishing between the two mechanisms. The mere fact that the results of the two types of change may be similar says nothing whatsoever about the identity of the mechanisms" (Leed 1970: 16-7, fn. 15). Leed points out further Kiparsky's confusion between mechanisms and their effect: "a change in the morphophonemic system is not a mechanism." This confusion runs through generative historical grammar (King 128). Of course all change is grammar change in that the result of a particular change type may be grammar change. In short (Leed, same footnote),

Leed sums up his criticism with eight points which show that the generative pooling up of sound change and analogy is wrong (22-3). The above quotation brings out a further confusion: generative grammar as surface linguistics. There is nothing wrong about the surface level, but it is curious how the school maintains that they practice deep linguistics, and explanation, rather than surface and lists. In fact, Koefoed observes the following about Kiparsky's original theory of simplification: " . . . [it] can be viewed as an attempt to demonstrate that all analogic change . . . is always 'accidental'. Levelling was not considered as caused by a tendency toward paradigmatic

93 uniformity, but as a consequence of a tendency to formal simplicity of the rule system" (1974). This is a correct observation; it means that the rules had no task in the system except to exist for the linguist. In reality, however, rules build diagrams which are both handy to learn and to communicate with (2.9, 3.6). But all along, it was claimed that the Neogrammarian notion of analogy was accidental (cf. Derwing 2.5), not the rule manipulation in mere description on paper. Rochet notes that the generative grammarians provide no evidence for sound changes receiving their impetus from changes in the morphological system, nor do they attempt to explain the mechanisms of grammatical conditioning. All in all (fn. 15): Although they claim to depart from what they call neogrammarian practice, their final formulations are geared to general phonetic processes - even though these are viewed as applying to underlying forms. Thus, the interpretation of a change as the addition of a rule at a point other than the end of the grammar (or, similarly, as rule loss or rule reordering), looks suspiciously like a deus e x machina whose function is precisely to salvage the purely phonological formulation of the rule; compare in particular Postal's interpretation of Lachmann's law ( 1 9 6 8 : 2 6 2 ) and Kiparsky's ( 1 9 6 5 : 1.29-33) with Saussure's ( and Watkins' ( 1 9 7 0 ) .

167-8), Kurylowicz's ( 1 9 6 8 )

The issue of accidentally of analogy comes out also in Jeffers (1974), who is very adamant on the issue of explanation (rule manipulations are not it). Kiparsky criticizes (1972b) Watkins' explanation of the Sanskrit middle first singular athematic ending -ai as analogical, because such an analysis cannot adequately represent overall aspects of morphological organization (it can; 2.5). Such an analysis is based on surface segmentation, which is a poor foundation for historical morphology (289). Instead, he uses a "functioning rule" in Indo-Iranian. Now Jeffers points out that this rule works clearly only in this verbal form, and even if it were synchronically adequate, it fails as historical explanation, "and Kiparsky's assumption that the presence of this form can be understood without reference to any external grammatical context is simply misinformed". Jeffers pleads for contexts (like Leed), which is an answer to the indeterminacy that makes Kiparsky reject proportional analogy. The regularity (better, consistency; Leed) of analogic change that is so surprising for generative grammarians tells us that they do not observe the bases of analogy. This is a contradiction again in that they want to do deep linguistics but justify it with surface terminology. If an innovation effects a unit, a theoretical rule, then of course the surface effects are necessarily regular, when we deal with evolutive change (Coseriu 80). As Andersen has shown there is no difference in principle and note the fol-

94 lowing about sound change (Coseriu 86). Coseriu shows why the inner essence of a sound law merges with the grammar-internal generality of its adoption, its uniqueness (Einzigkeit) (86): Es bezieht sich auf die Sprache als "Wissen" und auf den individuellen Anfangsakt der Erwerbung (Schöpfung) eines neuen lautlichen Modus als Realisierungsmög/iWikeit. In die Realisierung selbst und in die historische Fixierung eines neuen Lautmodus (wenn er überhapt fixiert wird) greift ein umfassender Prozess individueller und interindividueller Selektion ein. Der Lautwandel endet nicht, sondern beginnt mit dem "Lautgesetz".

Andersen has shown how this process takes place and how it is the same for all learning and change (1972, 1973), but this is clear also in Coseriu (89): Und "Lautgesetze" feststellen heisst einfach feststellen, dass die Sprecher die Sprache systematisch schaffen. Andererseits gilt eben diese Interpretation für alles Systematische der Sprache; daher auch für den grammatischen Aspekt. Nur fragt sich niemand, warum ein neues Verbaltempus zum Beispiel . . . für alle Verben gilt oder warum der Artikel, einmal geschaffen, auf alle Nomina anwendbar ist. . . .

The regularity issue is tied with the issue of predictive power of analogical change (King 132). The proportional model is allegedly worthless because there is none in it. Jeffers points out that this is parallel to saying that phonetic conditioning of sound change is worthless in explanations because we cannot predict whether the palatalized reflex of a velar stop would be a palatal affricate, a sibilant, or so on. More importantly, we are dealing with history where the momentum is human intentional activity. In this domain prediction as required by generative grammarians is completely beside the point (Coseriu 201, 204). In addition to Koefoed's evaluation of simplification, one must also look at it from a more general point of view, some hints toward which have been given in previous chapters. 'Imperfect learning' as the momentum behind formal simplification, which thus gets justification in language acquisition, evades the main issue by excluding the communicative context altogether (as well as the wider grammatical context; Jeffers). Perceptual judgement is always perfect, i.e. correct; it cannot be criticized (1.2). The abduction/deduction model shows further that simplification is not at issue here, but whether the grammar works or not. Of course there are also language universals that work toward simplification (one meaning - one form). Generative grammar went astray because of its wrong philosophy of science, exactly as did Saussure who could not handle change. Competence and the

95 rules mirroring it represent conventional simplification, and they are of course necessary in description. But such simplification is not to be allowed in theory, which must account for the reality of its objects."At least the theory must not forget the operational simplifications it carries out and not confuse the conventions with reality" (Coseriu 224 fn. 71). Peirce put it also quite clearly in his article "Abduction and induction" (1955: 156): Modem science has been builded after the model of Galileo, who founded it, on il lume naturale. That truly inspired prophet had said that, of two hypotheses, the simpler is to be preferred; but I was formerly one of those who, in our dull self-conceit fancying ourselves more sly than he, twisted the maxim to mean the logically simpler, the one that adds the least to what has been observed, in spite of three obvious objections: first, that so there was no support for any hypothesis; secondly, that by the same token we ought to content ourselves with simply formulating the special observations actually made; and thirdly, that every advance of science that further opens the truth to our view discloses a world of unexpected complications. It was not until long experience forced me to realize that subsequent discoveries were every time showing I had been wrong, while those who understood the maxim as Galileo had done, early unlocked the secret, that the scales fell from my eyes and my mind awoke to the broad and flaming daylight that it is the simpler Hypothesis in the sense of the more facile and natural, the one that instinct suggests, that must be preferred; for the reason that, unless man have a natural bent in accordance with nature's, he has no chance of understanding nature at all.

Thus abduction shows further that all changes are natural, if they have taken place (Coseriu 160, fn. 15), and this is why history has been used to such an extent in justifying synchronic analyses. The program is an old one, adopted in generative grammar, not invented there as widely thought. Generative grammar has not tackled the 'rational problem' of change at all (5.1). Change is justified as the basic phenomenon because language use is human rule-governed intentional activity (Coseriu, Itkonen 1974). The internalized grammars of individual speakers are discontinuous, and thus the causal relations that are created through Kiparsky's typology of diachronic correspondences do not explain anything, since they do not suggest what questions the investigator should ask of his data in order to explain them (Andersen 1973: 790). In general one can say that one of the basic premises of generative phonology, the so-called 'linguistically significant generalization' has remained an empty rallying cry without any empirical content (Karlsson 1974: 11; cf. Anttila 1975c). Finally, one should look briefly at some of the specific aspects of 4.1. It is of course a truism that a variation rule will disappear, if the variation is eliminated. It should have been another truism that the loss of a rule cannot be the cause of the change, because rules are put up to take care of outputs (see Anttila 1972: 127-31). Malone and Leed (in particular) have

96 shown how paradigmatic information provides the channels and machinery for analogical change (and even synchronic rules). The quotation from Rochet hinted already at the fact that Lachmann's Law ("rule insertion high up") was after all normal analogy as Saussure thought (Kurylowicz 1968, Vennemann 1968, Watkins 1970, Anttila 1969, 1972). To account for the lengthening in actus we need access to the whole paradigmatic system of the Latin verb. To quote a short paraphrase (Anttila 1972: 83): The starting point remains the contrast between the participle stem in /k/ versus the remaining forms in /g/, but the lengthening takes place only if there is also a long-vowel perfect tense (e.g., egi, legi, rexi, and pegi). Thus facio/feci/factus does not lengthen the participle vowel, nor do fingo/finxi/fictum 'stroke, mold', findó/fixi/fìssus 'split', stringo/strinxi/strictus 'draw together', and so on. The actual spread of length to the participle is, therefore, an ordinary instance of analogy in the traditional sense.

Morphophonemic relations of course always mirror paradigms (2.1; cf. also Karlsson 1974), and when they are lifted out of the contexts, it does not eliminate the analogical set-up, but now analogy can be written like sound change. Thus the loss of final devoicing and lengthening of vowels in German gives g \ E >g

and

V V >V

This notation is the simplest one for paradigmatic information although it leaves out grammatical categories (Anttila 1969: 17-8, 1972: 81-2). Morphophonemic conditioning like this is analogy, not sound change (cf. Fairbanks 3.4). One of Kiparsky's most interesting ideas was the tabulation of (and tendencies in) rule orders. When the output of a rule is the input of another, this rule feeds into the second one. In the reverse case, a rule takes away, bleeds, material from the input of another rule. Now he proposes that feeding order tends to be maximized and bleeding order minimized, i.e. rules tend to shift into the order which allows their fullest utilization in the grammar (1968: 196-200). Feeding order simplifies structural analysis, and its result is extension of an alternation, whereas bleeding order affects structural change and produces leveling (202). There are serious difficulties with this typology if it is supposed to match actual historical change, because it is so heavily intranotational only. There is of course no evidence that such rule reordering is learnable (cf. 2.5). But what all this

97 is meant for is much better explained by the more traditional notions. The simplification at stake belongs to the principle 'one meaning - one form'. Thus bleeding reordering exemplifies A > I , and restructuring and loss of rule A > 11 or even v A > u (in bake/batch 2.1) (Anttila 1972: 130). The reaction to the generative revolution in historical linguistics, particularly its treatment of analogy, has thus been, on the whole, negative. This appraisal is justified, even if the aficionados themselves would not accept it, as is often the case in revolutions. Most of the reviewers of King's (1969) book have expressed these reservations, either explicitly or implicitly (Bhat 1970, Campbell 1971, Jasanoff 1971, Robinson and van Coetsem 1973; see also Winter 1971). In general one can say that the reviewers defended strongly the notion of analogy, which was thrown out by a mere notational fiat (cf. 2.5). This reaction is still continuing. Hogg (1975) takes up the issues of simplification and leveling from Postal (1968) and Kiparsky (1972a) and shows them to be too simplified. Simplicity is the difficulty (111), and all reordering solutions are highly suspect, and so are deep and shallow structures as frames for explanation. Analogy comes when it comes and affects the existing surface structure, as Newton (1971) also argued. Later stages are of course irrelevant at that time; and Hogg concludes (112): This explanation is in large measure that of the traditional historical linguist although not of the caricature of that figure presented by Postal (1968: 231-44, 269-83) - and thus we may be tempted to wonder how far the apparatus of generative phonology helps us to a better understanding of the character of sound change.

True, but it is not a matter of temptation, but a scientific necessity. One cannot practice morphology as arbitrary phonology. Langages 36 (Dec. 1974) was devoted to the issue of 'la néologie lexicale', and thus various aspects of productivity and creativity are discussed in it. The question of analogy is treated by Mortureux (1974), and she has to conclude that both for methodological and theoretical reasons (32), la linguistique generative n'offre pas actuellement un cadre suffisant pour rendre compte de la néologie lexicale, tout spécialement pour aborder la fonction créatrice de l'analogie.

Otherwise Mortureux presents a clear exposition of Saussure's ideas on analogy and the general analogical machinery of semantic change.

98 4.3

Ongoing revolution

The revolution (4.1) could not ignore the reaction to it (4.2), although outside influence has not been acknowledged. But generative grammarians did realize that their strict formal approach could not explain history, and this led to a revision of the platform. In general one can say that the old formalism was kept but supplied with traditional content. This new progress is presented as an integral outgrowth of the original generative position. The leader in this phase also has been Kiparsky himself (other important names would be Stephen R. Anderson, Kisseberth, Kenstowicz, Bever Langendoen, J. Harris, and Vennemann). The conception is still such that the rules tend to be phonological, although they now peek more into morphology, and this is where analogy comes in. Formal simplification in the rule component did not work very well (it was formal 'economy'), and even when it was turned around (Stampe 1969), it was still only simplification. Instead of starting with a Lockean tabula rasa, and adding to this clean slate what experience required, Stampe started with a tabula perscripta, in which everything is innately written down. Experience now erases what is not needed. But then the morphological aspect of analogy came in as transderivational constraints (G. Lakoff) or global rules (Kisseberth). In order for certain rules to operate properly (in the hierarchy of ordered rules) they have to 'look back or forward'. They have to know the history of their input or the shape of their output to block unacceptable products. Here we have a system of (mainly phonological) rules that hang on other parts of grammar. Kiparsky argues against transderivational or global rules (evaluated in the context of the total grammar) and abandons the old formalism in a different way (1971, 1972a, 1973). He feels that formal study has now progressed far enough to return to traditional functionalist and substantial questions (1972a: 189), and only in this way can we get a satisfactory theory of analogy (196). (This is an illegitimate way to enter history as has been repeatedly pointed out in this survey.) The notion of the inflectional paradigm must be included in generative theory (208), the first proposal in this school being Vennemann's (1968). Actually the principle was used already by Kettunen (1962) well before, and by Erkki Itkonen (1966) also, but of course without generative theory. The new theory proposes the old (e.g. Kurylowiczian) idea that semantically relevant information tends to be retained in surface structure. This is distinctness condition (195). Leveling conditions state that allomorphy in paradigms tends to be eliminated, another old functional 'condition', and this gives the 'factor' of paradigm coherence (208), a principle that can now complicate the system of rules (209). In other words, we do have teleology

99 of function also here, and this is represented in the form of 'conspiracies' (1973: 3), various formal mechanisms conspire to produce uniformity in the paradigms. Now, global rules and conspiracies reduce to a theory of rule opacity (1971, 1973: 23). Rule opacity is the traditional surface ambiguity (insufficient diagram for communication, to put it in the spirit of Andersen's approach). Rule opacity means that the surface is ambiguous to the learner either in terms of the context of a process or the underlying forms subject to a process. This is an invitation to abductive innovation, although this is not the suggested explanation in this school. Now, languages tend to have conspiracies because they tend to have transparent rules. The development of conspiracies is one manifestation of the general tendency to eliminate opacity. This opacity theory of conspiracies eliminates the formal problems by supplying the proper functional factor. Paradigm uniformity or regularity becomes an "independently justified principle" (1973: 26). This means that there will be tensions between rule simplicity and paradigm regularity, and paradigm regularity and rule transparency, and the current formalism cannot quite do justice to all these parameters. The concept of rule opacity should also prove useful in the theory of exceptions (1971: 631), and this certainly enters the old area of analogy, and the terminology is not so obviously superior over the analogical one. Rule opacity is still connected with rule reordering as a primary kind of change (1971: 612). It is still a tug-of-war between a relatively independent rule component and an equally independent paradigm regularity. Koefoed (1974) has taken up this issue and argued that the rule system and paradigm regularity are not separate, thus we need a formalism in which paradigm regularity not only adds to the value of a grammar, but also leads to formal simplification (rather than being the result of it, as in Kiparsky 1965 and 1968). Paradigm regularity in fact has nothing to do with rule reordering; rather, the tendency to eliminate allomorphy causes rule loss (as we saw also above); and thus alternations tend to be eliminated. Rules that are not eliminated tend to become more transparent. A formalism that cannot express transparency is not very interesting, according to Koefoed. Also Schindler (1974), in discussing paradigmatic leveling by analyzing the standard generative practice with clarity, comes to similar conclusions. Elimination of allomorphs is not the motivation of change, but its result. Kiparsky's "Allomorphy within a paradigm tends to be "minimized" (1972a: 208) is just "eigenständiges (aber keineswegs neues) Prinzip". Kiparsky's approach is too phonological and merely formal and this typology does not fit the facts of change. Schindler concludes: Als Ergebnis unserer Betrachtungen kann festgehalten werden, dass alle hier unter-

100 suchten P[aradigmatischer] A[usgleich]-Typen mit grosser Wahrscheinlichkeit morphologisch und nicht phonologisch motiviert sind, wobei die Regclkonstellation, die die Allomorphien erzeugt, nur als PA-hindernd (bei transparenten Regeln) oder PA-begünstigend (bei opaken Regeln) von Wichtigkeit ist. Eine exakte Abgrenzung zwischen paradigmatischem Ausgleich und paradigmatischer Angleichung ist oft nicht möglich, vor allem weil es keine Typologie der Angleichung gibt. (Schindler

[1974]).

James Harris (1973) also is very much in the formal pen, although he likewise uses paradigmatic uniformity as a necessary factor in grammar, and he achieves this by ordering the necessary rules accordingly, a procedure properly criticized by Skousen (1973; cf. Vennemann 1972a). Thus indeed, the movement still sticks as close as possible to the classical type of phonological rule, if at all possible, e.g. Harris posits an underlying lkok-1 for Spanish cocer 'cook' because of cocción. Mayerthaler (1973) presents a hierarchy of theses how to accept analogy if one cannot get by with rule mechanisms. The net result is about the same kind of ultimate resort to analogy as that which Mayerthaler wanted to criticize. Bever and Langendoen (1972) have been hailed as having provided a new breakthrough in generative historical linguistics. Their main message is that language learning and linguistic evolution are not merely the learning and evolution of grammatical structure, but that of the human perceptual and productive systems of speech behavior. Thus attempts to explain language universals as formal functions of just one of these factors are rather futile. The authors return here to traditional grounds to a far greater degree than one is led to understand. In general, the terminology tends to be generative, the content rather traditional (not necessarily 'structuralistic'). Recent historical conferences, e.g. UCLA 1969 (Stockwell and Macaulay 1972), Chapel Hill 1969 (see under Schane), Edinburgh 1973 (Anderson and Jones 1974), and Romance historical ones in the Mid West 1972 and 1973, have time and again run into completely traditional notions. Of these Chapel Hill and Edinburgh treated heavily analogy, the former closer to the classical generative (4.1) line, and the results of the latter have been presented often in this survey, and they point almost totally to traditional content. The ongoing revolution has successfully adopted most of the tradition. It is very questionable whether this can be called progress (cf. the military command "About face, charge! ", not *"Retreat! "). A change in a scientific paradigm brings about various regroupings. Thus Weinreich, Labov, and Herzog tabulate the following potential effects each time there is a refinement in the theory of language structure (1968: 126):

101 (a) (b) (c)

a reclassification of observed changes according to new principles; proposal of fresh constraints on change; and proposal of new causes of change.

Generative studies have done mainly (a), but only (b), and especially (c), really advance historical linguistics. The translation of the old problems into rule mechanisms (a) was not particularly useful, because it was connected with the position that the constraints of the old school were not good enough. When the innovators were forced to propose new constraints, there was a return to the rejected values. It was precisely the discarded analogy that showed the overreliance on simplification wrong. Thus we came to paradigm conditions, distinctness conditions, etc., which reflect the old factors of analogy. Each step of 'progress', retreat from the original theory, seems to be moving closer to traditional views (for these points see Anttila 1974b, 1975d). Note also how tendencies (c) were emphatically rejected in the beginning, but now accepted, and these represent much in the traditional explanations for 'why'. This about-face back to tradition is often not noticed, since so many linguists have been trained within the paradigm where it is seen as new progress. Sometimes it is noticed but denied, since so much has been invested into it and extensive promotion campaigns have been carried out. And a small minority admits to the drastic U-turn. (Not only Anttila 1974b, 1975d, but also Schindler 1974, Skousen 1972, 1973, and 1975, make it apparent that no progress has come within transformational historical linguistics.) New documentation on this turn is increasing, e.g. Karlsson (1974: 18, 27, 37, 55) points out, for many reasons, how Kiparsky's work is a return to tradition. Note here also King's refutation of rule insertion high up in the hierarchy (1973), and his acknowledgement of old achievements (1975), although he retains the 'new' terminology and 'precision'. Very illustrative is Miller's rich treatment (1973). He starts out by stating that the line of thought presented in 4.1 is undeniably true, but not particularly insightful because it is obvious. He points out further that Kiparsky's theory says virtually nothing about the function of rules in grammar, but this brilliant output makes us nevertheless understand why rules are reordered in grammars. Miller goes on to suggest deeper considerations than formal simplification, and indeed, he shows with good evidence that there is true motivation in (analogic) change. "To regard simplification (cf. Kiparsky 1968: 8 f.) as a form of motivation is to confuse the change with the motivation for it" (686; cf. Schindler above). Miller concludes that there is no function in Kiparsky's work; function started to enter with Kisseberth's efforts (705). Miller has also accepted the basics of Andersen's abduction (Andersen 1969), and thus correctly comes to the realization that elaboration and simplification as evalu-

But then comes Miller's postscript (716), where he retracts half of what he said, especially that which concerns our progress in understanding why rules are reordered, or for that matter, ordered. Another feature typical of the age is that one waits until the leaders of the in-group have given permission for the next step. Miller refers particularly to Kenstowicz, Kisseberth, and Kiparsky, not to 'outsiders' who lack 'decorum' in having criticized what they felt was untenable in the original revolution.

The original revolution (4.1) and further progress (4.3) can overlap, and the original breakthrough tends to have considerable support among later followers even when the "innovators" have already progressed away from their earlier positions. Thus Reklaitis (1973) relies on Kiparsky's "more precise standard for defining analogy" as simplification (209). She discusses the pre-Lithuanian replacement of nom. pl. *-os by the pronominal *-oi and decrees it a morphological innovation which perhaps by default has been classified as analogy, since it did not constitute a sound law. She concludes (209):

It is far more reasonable to consider this as a straightforward morphological innovation - the direct introduction of the pronominal morpheme in the plural of the first class nouns - which occurred in response to internal structural pressures according to Kurylowicz's principle. By rejecting the classic label of analogy for this case, the intuitively sound position that the normal result of subsequent linguistic change or analogy is simplification of the grammar while the result of morphological innovation is a form of rule addition can be retained.

Reklaitis' reclassification remains a totally arbitrary taxonomy enhanced by terms like "straightforward, direct, intuitively sound, normal" in addition to the generative-taxonomic terms of 4.1. There is no precise standard in this, since the whole is not based on a theory but on a fiction (see Maher 1975).

A particularly clear example of the ongoing generative revolution is Clements (1975). His bibliography mysteriously includes Esper (1973), which has no bearing on the text itself; the text refers only to standard transformational-generative works. Clements' purpose is to present evidence that languages may permit analogical reanalysis of syntactic structures (a fact denied by transformational grammarians to begin with, hardly by any other linguists — thus the "new" revolutionary color). He points out that analogy has not had a favorable reception in recent years as an explanatory concept (again, we have seen that many have indeed accepted it), the reason being its traditional association with empiricist theories of language acquisition.

And such theories have been called into question. But note again: by whom? And once more, only empiricist theories can explain the language-and-mind issue as well as language acquisition (see Derwing in 2.5 and Slagle in 2.6). In fact, the rationalism Clements seems to support has been out of business since 1781. Clements also thinks that analogy assumes that deeper levels of language structure are inaccessible to the learner (witness again Coseriu and Koefoed for the untenability of this statement). Nevertheless, he says that the concept of analogy has not been entirely absent from more recent work in generative linguistics (and he means the phase of 4.3). Clements concentrates on syntax, where attention has been directed to cases in which superficially similar forms affect each other; in particular, Hankamer (1972), Aissen (1973), and Cole (1973, 1974) have suggested that the principle of analogy "may be" necessary to explain such cases. It is conveniently left out that such is indeed the traditional explanation (as stressed correctly in recent talks by e.g. Terho Itkonen and Josh Ard, not to refer to classics like Havers).

Clements' study of the Ewe evidence is very solid indeed, and he finds "that an analogical syntactic process, not characterizable in terms of present theory, superimposes further structure on the structures generated by the normal rules of grammar." In Ewe some verbs are recategorized analogically as nouns, and Clements formalizes this as tree-grafting, adding an NP node on top of a VP. He realizes that such tree-grafting is not an explanation, but "the effect of tree-grafting is to generalize the range of grammatical processes applying" to some structures also to another structure. All these structures form a paradigm of similar syntax and semantics, the characteristic breeding ground for analogy in any traditional account. Tree-grafting creates a form of analogical extension. Thus we have to reassess functional criteria (which, e.g., Praguean functionalism never abandoned). Clements "would like to suggest that a theory of syntax which provides for a well-defined principle of analogy has greater explanatory power than one which does not ..." What a detour it all was! But somehow Clements' functional principles predict the possibility of rules which reanalyze syntactic structures. Such principles must be part of the general theory of grammar evaluation, supplementing formal principles (such as the simplicity metric). Clements criticizes the lack of a well-defined theory of analogy, which has always made it possible to resort to analogy whenever other explanations have not worked. But note that he uses analogy in the same way, since analogical criteria supplement formal ones. This is the same reverse order as in Kiparsky (1972a). One should not enter function via formalism. On the whole, Clements' suggestion for a well-defined theory of analogy is the traditional one, without consideration of the richness of the tradition.
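
Clements' grafting operation itself is easy to picture. The following is a minimal schematic sketch of 'adding an NP node on top of a VP', not Clements' own formalization of the Ewe facts; the (category, children) tree format and the example clause are merely illustrative assumptions.

    # Schematic model of tree-grafting: add an NP node on top of each VP, so that
    # grammatical processes defined for NPs also reach the grafted structure.
    # The (category, children) format is an illustrative assumption, not
    # Clements' notation; the example clause is invented, not real Ewe.

    def graft_np(tree):
        """Wrap every VP node in a dominating NP node, recursively."""
        label, content = tree
        if isinstance(content, str):            # lexical node, e.g. ('V', 'sing')
            return tree
        node = (label, [graft_np(child) for child in content])
        return ("NP", [node]) if label == "VP" else node

    clause = ("S", [("NP", [("N", "child")]),
                    ("VP", [("V", "sing"), ("NP", [("N", "song")])])])

    print(graft_np(clause))
    # ('S', [('NP', [('N', 'child')]),
    #        ('NP', [('VP', [('V', 'sing'), ('NP', [('N', 'song')])])])])

The point of the sketch is only that the grafted NP node makes nominal processes applicable to the verbal structure; it does not say why the reanalysis happens, which is precisely the traditional question of analogical motivation.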

Clements' credibility is lessened by the fact that he takes Halle (1973) seriously. Note also that Clements does not understand the necessity for abductive inference, and thus he remains within a strict formalistic position where "rules reanalyze syntactic structures."

Thomason (1973) thinks that the first major departure from traditional analogy was Kiparsky (1965, 1968). In her mind this was a significant advance in that it enabled linguists to discuss analogic changes independently of their motivational bases. She realizes, however, that analogy does not only simplify but also complicates structures. Still, she wants to develop formal mechanisms that erase defunct rules. In particular, she borrows the notion of functional operators from techniques that have proved useful in diachronic studies other than linguistics; in physics it is commonplace to describe changes in physical states in this way. Functional operators take functions into other functions (cf. epirules and metarules that are supposed to do the same for sound changes/phonological rules; Grace 1969 and Foley 1972 respectively). Thomason expressly refers to the physicist's strategy: she hopes to discover laws and regularities, at least tendencies. In any case she believes she gains an advantage: a formal way of description without reference to the change's motivation, since description should not incorporate the causes of change (which are to be sought in the sociolinguistic motivation). To describe how the Serbo-Croatian o/C-stem masculine hard stem accusative plural was replaced by the soft stem suffix she writes

α f (o, MASC, Y, HARD) (ACC, PL) = f (o, MASC, Y, SOFT) (ACC, PL)

where α is the operator on the function; it leaves constant everything in the declensional system except for the features and grammatical units specified to the left of the identity sign. A theory of language change must be based on the notion of possible language change, as Hjelmslev already argued, and this has been a general desideratum ever since. All operators that do not add new suffixes are traditional-analogic. Thomason joins the current criticism that the Neogrammarian notion of analogy was too vague, although rather adequate as a heuristic conception: "Where it went farthest astray, I believe, was taking analogy itself to be an appropriate theoretical concept." Now it seems, after all, that analogy is indeed a very appropriate theoretical concept, and this is true even if one takes it as an axiom (cf. 2.6); witness Aristotle's success with it (Lloyd 1966).
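
The intended effect of such an operator can be made concrete in a small sketch: the declension is modelled as a function from feature bundles to endings, and the operator returns a new function that differs from the old one only in the specified cell. The feature labels and suffix shapes below are illustrative assumptions, not Thomason's actual notation or the historical Serbo-Croatian endings.

    # Sketch of a functional operator: it maps the paradigm function into a new
    # one, changing only the masculine hard-stem ACC PL cell and leaving the
    # rest of the declensional system constant.

    def declension(stem_type, case_number):
        """Pre-change paradigm function: (stem features, case/number) -> ending."""
        endings = {
            (("o", "MASC", "HARD"), ("ACC", "PL")): "-y",   # illustrative hard-stem ending
            (("o", "MASC", "SOFT"), ("ACC", "PL")): "-e",   # illustrative soft-stem ending
            (("o", "MASC", "HARD"), ("NOM", "PL")): "-i",
        }
        return endings[(stem_type, case_number)]

    def alpha(f):
        """Operator: the hard-stem ACC PL now takes the soft-stem ending;
        every other cell is passed through unchanged."""
        def g(stem_type, case_number):
            if stem_type == ("o", "MASC", "HARD") and case_number == ("ACC", "PL"):
                return f(("o", "MASC", "SOFT"), case_number)
            return f(stem_type, case_number)
        return g

    new_declension = alpha(declension)
    print(new_declension(("o", "MASC", "HARD"), ("ACC", "PL")))   # -e (replaced)
    print(new_declension(("o", "MASC", "HARD"), ("NOM", "PL")))   # -i (unchanged)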

Thomason takes maximal generality as a fundamentally new aspect in her theory (cf. 2.6 end), so that it will have to be constrained down to fit the real world. In fact, "the characterization of the notion of lawlike generalization is a prime necessity to a viable theory of analogy." Intrasystemic analogic tendencies will often support the generalization, and they often have flimsy bases. Changes in opposite directions can support the same generalization (see 5.1-3). But the main difficulty in identifying lawlike generalizations is that one can prove only post facto that any given generalization was not lawlike. It seems that Thomason's approach has two incompatible factors. On the one hand, she tries to define a formal mechanistic system on the model of positivism. As has become increasingly clear, this does not fit history at all (Wright, Coseriu, Esa Itkonen). On the other, her discussion of lawlike generalization seems to wrestle with the typical problems of abduction (cf. also Anttila 1975c). The human mind is indeed the crucial filter of linguistic change, but it seems impossible to tackle it with a formalistic approach.

Thomason (1974) is a study of Serbo-Croatian paradigm realignments in which she properly argues for morphological (analogical) solutions against the standard generative way of doing morphology as phonology. She shows succinctly the general tendency toward maximally clear diagrams. Her final plea is for an explicit notion of nonphonological process, and that requires an independent inflectional morphology. (Matthews had indeed begun to meet such a requirement earlier.)

Thus the formal approach is by no means defunct. One of the latest treatments in this vein is de Chene (1975). He tries to write rules for the similarity vectors between and within paradigms, but actually only his notation is the old formalistic one. Otherwise he opts for redundant information, as many have done before (Wieser, Pike, Householder, Wandruszka, Vipond, Slagle, Bimson; and the whole tone of this book).

4.4 Rule inversion

The preceding survey of recent developments in generative historical linguistics has been extremely sketchy. To counterbalance it we look here into rule inversion in more detail. Vennemann, whose 1968 dissertation on German phonology followed Chomsky and Halle (1968), started around that time to recognize the value of traditional analogy and especially Schuchardt's work (see Schuchardt 1928). To make a clearer break from the inadequate transformational-generative grammar, he founded, in due time, his 'natural generative grammar', which uses the old generative formalism. In this connection his article on rule inversion (1972b) has received much support in the literature.

Rule inversion means that the synchronic derivation is the reverse of the historical change that produced the alternation in question, i.e. history X > Y / - Z, synchronic derivation Y -> X / - non-Z (Anttila 1972: 201). The 'idea-r of it' phenomenon (3.4) is an example of this: the original change r > 0 / - C has reversed into a synchronic epenthesis rule 0 -> r / - V. Vennemann presents ample evidence for this mechanism (or, more rarely, the result) of grammar change (235). The phenomenon is quite understandable if the historically derived form turns out to occur in semantically primitive categories or if its text or paradigm frequency becomes dominant. In such cases learners are of course likely to take the new forms as basic, since history plays no role here. Vennemann presents this inversion against primitive and secondary categories with Schleicherian-type formulae (240-1) and sums up their complexity in the following terms (241):

Rule inversion is, thus, recognized as one further enactment of Humboldt's Universal [one meaning - one form - RA], viz. as a special, covert, and probably the most complex form of analogic change.
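
The inversion schema can be illustrated with the idea-r case just mentioned. The following is a minimal sketch under crude assumptions: the spellings stand in for phonetic forms, '@' marks a final schwa, and the rules are reduced to the bare X > Y / - Z versus Y -> X / - non-Z pattern.

    # Minimal sketch of rule inversion with intrusive r: the historical change
    # deletes r before a consonant (and pause), while the learner's synchronic
    # grammar takes the vowel-final form as basic and inserts r before a vowel.
    # Transcriptions are crude illustrative spellings, not a serious analysis.

    VOWELS = set("aeiou@")          # '@' stands in for schwa

    def historical(form, next_word):
        """Old grammar: underlying r-final form; r drops unless a vowel follows."""
        if form.endswith("r") and (not next_word or next_word[0] not in VOWELS):
            return form[:-1]
        return form

    def inverted(form, next_word):
        """New grammar: underlying vowel-final form; r is inserted before a vowel."""
        if form[-1] in VOWELS and next_word and next_word[0] in VOWELS:
            return form + "r"
        return form

    # The two grammars agree on the inherited alternation ...
    print(historical("fa@r", "away"), historical("fa@r", "too"))   # fa@r fa@
    print(inverted("fa@", "away"), inverted("fa@", "too"))         # fa@r fa@

    # ... but the inverted rule also extends r to forms that never had it:
    print(inverted("idea", "of"))                                  # idear ('the idea-r of it')

Since both grammars produce the same inherited surface forms, nothing forces the learner to recover the historical direction; the extension to idea is what betrays the inversion.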

Both the claim that this is indeed a complicated change type (Kiparsky at Edinburgh 1973, but omitted in the 1974 publication) and the claim that it is a recent discovery have often been made in current linguistic literature. Andersen (1974) points out the most important characterization, however: Vennemann's work on rule inversion deals with diachronic correspondences exactly as in the original model of generative grammar. Further, Vennemann's work can again be taken as an application of Kurylowicz' laws of analogy. Note the following example (J. Peter Maher, private communication), which is of the type that many linguists have used, and which agrees with Kurylowicz' spirit. Latin frater 'brother (uncle, sibling)' is the base for the diminutive fratellus, which also signals redundantly (without a surface morpheme) the endearment inherent in the semantic matrix of frater. Now an innovation disturbed the balance in that frater was retained only as a metaphor for 'monastic brother', and fratellus took on the primary function; thus the morphemic relation was reversed to fratellus -> frater, and the semantics followed suit: Italian fratello 'male sibling', fra(te) 'monastic brother'. The same holds for suora/sorella, in which Old Italian suoro has received (analogically) the productive feminine ending (cf. mano). Examples like these show true complexity, not the ones discussed by Vennemann and approved as complex by Kiparsky. In fact, linguists have always thought that rule inversion is one of the simplest things (as a diachronic correspondence it is not a mechanism); e.g. Plank (1974) points out how Paul understood the phenomenon perfectly.

Abduction explains how easy it is to 'bifurcate' when we have an alternation with two ends. The fading of metaphors (etc.) shows how basic and derived features switch places so easily because they can be learned independently (Anttila 1972: 199). Andersen has justified such inversions not only by providing the abductive model for them, but also by treating them in terms of markedness reversals in a semiotic frame (1972: 45, fn. 23):

The reversal of markedness values in oppositions dominated by a marked context is far from limited to phonology. In the use of grammatical categories, it is particularly common in the selection of the normally marked term of an opposition in contexts of neutralization defined by the marked term of another opposition. In English, for instance, in sentences in a marked status, the assertive vs. nonassertive opposition (they do know vs. they know) is neutralized, and the normally marked assertive is used to the exclusion of the non-assertive (Do they know? They do not know.). Similarly, in the marked subjunctive mood, the past vs. present opposition (they knew vs. they know) is neutralized, and the normally marked past tense is used to the exclusion of the present (I wish they knew.). Here, too, the number opposition (they were vs. he was) is neutralized, the normally marked number being used to the exclusion of the unmarked number (I wish he were.). The importance of markedness reversal for the study of syntax is shown by Timberlake 1970. Markedness reversal in marked contexts is not limited to language, but is an essential characteristic of all human semiotic systems. For a simple example, consider the distinction between formal and casual dress. In the unmarked context of an everyday occasion, formal wear is marked and casual clothes unmarked; but in the marked context of a festive occasion, the markedness values of formal and casual clothes are reversed. Numerous striking examples of the reversal of roles and of symbolic values in general, in contexts relating to either marked situations or marked social status in different cultures, are discussed in Fox 1967, Needham 1967 and 1971, Rigby 1968, Turner 1969. ... The phenomenon whereby a marked context reverses the normal markedness relation between the terms of an opposition I propose to call markedness dominance. Markedness dominance has one other manifestation, which I will merely mention here, the reversal of the relation between the elements of a syntagm in a marked context, which I would call rank reversal.

In fact, Andersen has shown how adaptive rules handle such cases, e.g. in the change of [p'] to [t] in Czech (1973: 779):

A learner whose models pronounced certain lexemes with both dentals and labials would have to decide which to take as the underlying consonants. It would not be difficult for him to see, however, that the doublets with labials were always acceptable to his models; so he would naturally formulate his phonology accordingly, i.e. with underlying labials and an optional A-rule to derive dentals. Note that the process by which such a learner would reinterpret the lexical distribution of labials and dentals is completely analogous to the abductive innovation examined in 3.2. The abductive innovation is possible because the A-rule which the learner formulates as part of his reinterpretation enables him to produce the same doublet forms that he has heard from his models. As a consequence, his abduction will not be invalidated by deductive testing.

and (788):

Speakers with underlying sharped labials, as well as speakers with none, would presumably both be able to produce innovating forms containing dentals - the former by applying an adaptive rule replacing underlying sharped labials with dentals in deference to the new norms, the latter by not applying the A-rule that enabled them to adhere to the old norms. Speakers of these two categories could not but differ in their evaluation of new and old forms.
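
The logic of the learner's reinterpretation can be sketched as follows. This is only an illustrative model of 'underlying labials plus an optional A-rule', not Andersen's own formalization; the lexical shape and the p' -> t correspondence are simplified assumptions.

    # Sketch of the abductive reanalysis: the learner posits underlying (sharped)
    # labials and an optional adaptive rule deriving dentals, so that the grammar
    # reproduces both members of the doublets heard from the models.

    ADAPTIVE = {"p'": "t", "b'": "d", "m'": "n"}      # optional A-rule

    def surface(underlying, apply_a_rule):
        """Derive a surface form, with or without the adaptive rule."""
        return " ".join(ADAPTIVE.get(seg, seg) if apply_a_rule else seg
                        for seg in underlying)

    word = ["p'", "e", "t"]                           # underlying form with a sharped labial

    doublet = {surface(word, apply_a_rule=False),     # conservative variant with the labial
               surface(word, apply_a_rule=True)}      # innovating variant with the dental

    print(doublet)   # both variants are generated: {"p' e t", "t e t"} (set order may vary)

Whether the A-rule is applied or suppressed, the output stays within what the learner has actually heard, which is why the abduction, as the first quotation puts it, "will not be invalidated by deductive testing."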

And remember how his abduction model easily explains reversals in feature hierarchies, and linguists are in general quite aware of switches between basic and redundant features. The reversal of social value due to historical change has been commonplace knowledge particularly in Romance linguistics (Coseriu 1974: 159); to sum up a case (Anttila 1972: 191): In French, many grammarians have recorded the alternation w