Juncture, a collection of original papers





edited by
Mark Aronoff and Mary-Louise Kean

Stanford University

VOL. 7

1980




© 1980 by ANMA LIBRI & Co.
P.O. Box 876, Saratoga, Calif. 95070
All rights reserved
ISBN 0-915838-46-X
Printed in the United States of America

In a discussion of juncture and its synonyms, Webster's III notes that the term juncture itself "emphasizes the usu. significant concurrence or convergence of events ...". The connotation of significant convergence carries over to the use of this word in linguistics. Junctures are not simply breaks in the speech signal, but rather those places (often not breaks at all in any physical sense) where we find not only an instance of joining but also a significant convergence of linguistic events or units. The fascination which juncture holds for the linguist is not so much in the breaks themselves, but rather in the principles of organization which they imply, and those who are devoted to juncture often begin to regard it as a special key to the discovery of these principles.

This book was conceived of by two such devotees, as an advertisement for juncture. We felt that juncture could provide a solid anchor for many of the analytic techniques which had been developed in the last twenty years, techniques which sometimes seem to be too powerful for the ideas which accompany them. We also felt that the relative neglect which the topic had suffered recently was largely due to purely historical reasons, and that now, when much work is being done on the interaction of linguistic levels, was a time when an advertising campaign for juncture might have a good chance of success. We therefore hatched a plan to put together a selection of new and original papers on juncture, in the hope that the linguistic public, being presented with these papers together in one volume and seeing what sorts of things could be learned from juncture, would begin to view the topic as one worthy of more attention. In this hope, we present what is neither the first nor, if we are to have any success at all, the last word on juncture, but rather a new word.




We have selected nine papers on a variety of subjects within the larger area. Two, those of Siegel and Stevens, deal with junctures themselves: Siegel shows how the = boundary, long criticized as inelegant, can be eliminated, while Stevens shows that the + boundary, whose necessity in the statement of phonological rules has been questioned, is indeed necessary. Three papers deal with the effect of boundaries on the domains which they determine: Devine and Stephens discuss these domains from a general phonological point of view, while Selkirk investigates the notion that the domains themselves categorize phonological rules. Kahn shows how one purely phonological domain, that of the syllable, governs certain well-known alternations in English phonology. The remaining papers branch out from the phonological roots of juncture. Those of Bradley and Garrett and Kean are psycholinguistic. Bradley is concerned with the speaker's internal lexicon, and discusses the role of boundaries in accessing this lexicon, while Kean and Garrett analyze certain speech errors in terms of a level of representation where boundaries are central elements. Allen's paper deals with English compounds, showing how an analysis of compounds in terms of boundary strength provides insight into their phonological and semantic structure. Finally, Aronoff's paper deals with the history of the treatment of juncture in American linguistics.

We are indebted to many people who helped in the preparation of this volume, but most especially we are indebted to our fellow contributors for their great patience in the interval since the original project was conceived.




Contents

Semantic and Phonological Consequences of Boundaries: A Morphological Analysis of Compounds
MARGARET R. ALLEN

The Treatment of Juncture in American Linguistics
MARK ARONOFF

Lexical Representation of Derivational Relation
DIANNE BRADLEY

On the Phonological Definition of Boundaries
A. M. DEVINE and LAURENCE D. STEPHENS

Levels of Representation and the Analysis of Speech Errors
M. F. GARRETT and M.-L. KEAN

Syllable-Structure Specifications in Phonological Rules
DANIEL KAHN

Prosodic Domains in Phonology: Sanskrit Revisited
ELISABETH SELKIRK

Why There Is No = Boundary
DOROTHY SIEGEL

Formative Boundary in Phonological Rules
ALAN STEVENS


Semantic and Phonological Consequences of Boundaries: A Morphological Analysis of Compounds* MARGARET R. ALLEN University of Connecticut

1. Introduction

The remarks in this paper concern primary compounds; that is, nominal compounds which do not contain a deverbally derived element. Water-mill, shoebox, fly-wheel, book-man, alligator-shoes are examples of primary compounds. Compounds which contain a deverbally derived element are verbal nexus, or synthetic, compounds. Truck-driver, mountain-climbing, food spoilage, tax-evasion are some examples of synthetic compounds. A discussion of synthetic compounds is outside the scope of this paper, but see Allen (1978) and Roeper and Siegel (1978) for two different analyses of verbal nexus compounds.

2. The semantics of primary compounds

In Allen (1978) two mechanisms to account for the semantics of primary compounds are proposed and discussed in some detail. I will now briefly indicate the relevant aspects of these mechanisms, as an understanding of the semantic characteristics of compounds is vital to the morphological analysis which I will present. A crucial, and often overlooked,1 fact about primary compounds is that although a compound often tends to take on a specific meaning (e.g., as a name for

* The analysis of compounds presented in this paper is discussed in further detail in Allen (1978). I would like to thank Howard Lasnik and David Michaels for their helpful criticism.
1 This omission is central to the failure of Lees' (1963) transformational analysis, and indeed, to any transformational account of primary compounds.




a particular item), the compound as a linguistic unit has a range of meanings. For example, the compounds fire-man and water-mill have a range of possible meanings, although each also has a 'conventional' meaning.

fire-man:   man who worships fire
            man who walks on fire
            man who sets fires
            man who puts out fires
            man who guards the fire
            etc.

water-mill: mill for producing water
            mill powered by water
            mill located near the water
            mill for analyzing water
            mill where everyone drinks water
            etc.

I refer to the variability in primary compound meaning as Variable R. The Variable R Condition establishes a range of possible, and consequently impossible, meanings for a given primary compound. This range of meanings is specified in terms of the semantic feature sets of the constituent elements of the compound. Variable R predicts that the complete semantic content of the first constituent element may fill any one of the available feature slots in the feature hierarchy of the second constituent element, as long as the feature slot to be filled corresponds to one of the features of the filler.

Variable R Condition
In the primary compound [[ ... ]A [ ... ]B]X, if α1, ..., αn is the semantic content of A in terms of hierarchical semantic features, and β1, ..., βm is the semantic content of B in terms of hierarchical semantic features, and there exists an αn−x such that αn−x = βm−y, then the meaning of X ranges from β1(α1 ... αn) to βm(α1 ... αn).

Feature slots which are hierarchically dominant are more likely to be filled. Thus within the range of possible meanings for a compound, some are more likely than others. Impossible meanings result from incompatibilities between two sets of hierarchical features (e.g., fire-man cannot mean 'man who trains fire', although lion-man can mean 'man who trains lions') and from filling non-existent feature slots (e.g., fire-man cannot mean 'man who contains fire' because the semantic specification for man does not include the feature [+container]. Compare with fire-box).

A second important principle of meaning formation for primary compounds is the IS A Condition.

The IS A Condition
In the compound [[ ... ]X [ ... ]Y]Z, Z "IS A" Y

The statement of the IS A Condition is purposefully ambiguous between syntactic and semantic interpretations. Syntactically, X, Y and Z stand for labels of major lexical categories, in which case the IS A Condition correctly predicts the derived category of a compound. The IS A Condition thus ensures that the syntactic category of the output of a rule of compound formation does not have to be stated in the rule. That is, a rule of the form

[ ... ]X + [ ... ]Y → [[ ... ]X [ ... ]Y]Y

contains redundant information, given the IS A Condition. A semantic interpretation of the IS A Condition results when X, Y and Z are interpreted as shorthand for the semantic content of their associated bracketings [ ... ]. In this case, the IS A Condition predicts that a semantic subset relationship2 holds between the compound Z and the compound constituent Y. For example,

a steam-boat IS A boat
a rose-bush IS A bush
a silk-worm IS A worm
a beer-can IS A can
a cabbage-box IS A box
a sound-man IS A man
a night-light IS A light

I will refer to compounds which meet both the Variable R and the IS A Conditions as semantically transparent or predictable. Compounds which fail to meet either the IS A Condition (e.g., buttercup, cottontail, white-throat, chairman) or the Variable R Condition (e.g., cranberry, huckleberry, whinchat) are to be distinguished from the class of semantically predictable primary compounds. Compounds which are not semantically predictable are 'lexicalized' in that their meanings are not derivable by general principles of meaning formation and must consequently be simply listed in the lexicon.

No verbal element is involved in the mechanisms I have proposed for assigning meanings to noun-noun primary compounds. There is no reason to suppose that a verbal element is present at any stage of the derivation of primary compounds. Indeed, the inclusion of a verbal element obscures the real semantic characteristics of primary compounds. A syntactic derivation for primary compounds must consequently be ruled out, as syntactic derivation from an underlying sentential source necessarily entails the presence of a verbal element in the underlying syntactic structure. This is the obstacle on which Lees' (1963) analysis founders. However, the characteristic way in which the variable meanings of productive primary compounds are built up from the meanings of the constituent elements can be described rather naturally when a direct morphological derivation is assumed.

2 There are apparent exceptions to the IS A Condition. One well-known set of exceptional examples are the so-called exocentric or bahuvrihi compounds: e.g., loud-mouth, red-coat, white-cap, cottontail, turtle-neck, etc. It seems to me that the problem here does not concern the derivation of compounds, but rather their use as names for things. Simple adjectives can become names for things or people that they characterize, e.g., 'red', 'smiley', 'old faithful'. Similarly, a compound which characterizes a thing by naming one of its outstanding qualities can itself come to act as the name for that thing, as in cotton-tail, stickleback, redhead, and so on.
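The interplay of the Variable R and IS A Conditions can be glossed computationally. The following is a minimal sketch, not Allen's formalism: the feature inventories, slot names, and function names below are invented purely for illustration.

```python
# A minimal computational sketch of the Variable R and IS A Conditions.
# Feature inventories and slot names are invented for illustration.

fire = {"category": "N", "features": {"+object", "+flammable"}}

man = {"category": "N",
       "slots": {"worships": "+object",
                 "sets": "+flammable",
                 "puts out": "+flammable"}}   # note: no 'contains' slot

box = {"category": "N",
       "slots": {"contains": "+object"}}

def variable_r(first, first_name, second, second_name):
    """Variable R: the first element's semantic content may fill any
    feature slot of the second element, provided the slot's selectional
    feature is among the first element's own features."""
    return [f"{second_name} who/that {relation} {first_name}"
            for relation, needed in second["slots"].items()
            if needed in first["features"]]

def is_a(first, second):
    """IS A Condition: the compound Z 'IS A' Y, so Z simply inherits
    the category of its second (head) element."""
    return second["category"]

readings = variable_r(fire, "fire", man, "man")
# No 'contains' reading for fire-man, since man has no container slot;
# compare fire-box, where box does.
```

On this toy model, fire-man receives the 'worships'/'sets'/'puts out' range of readings but no 'contains' reading, mirroring the fire-man/fire-box contrast in the text.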

I will now examine the details of a morphological derivation for compounds. I will propose ways of distinguishing morphologically between semantically predictable and non-predictable compounds. I shall examine in particular the morphological basis of semantic transparency in compounds, and will show how distinctions between semantically predictable and non-predictable compounds coincide with a number of phonological distinctions.

3. Compounds as morphological entities

The claim that compounds are morphologically derived is not a new one. In fact, it is the traditional position (cf. Bloomfield, 1933; Nida, 1946; Marchand, 1960; Matthews, 1974). The processes of 'derivation' (prefixation, suffixation) and 'composition' (compound formation) have generally been subsumed under a single rubric (sometimes, 'word-formation'), and both have been consistently distinguished from inflectional morphology and from the syntax. More recently, Gleitman and Gleitman (1970) conclude that a morphological solution to the derivation of compounds is probably superior to a transformational approach, and investigators examining morphological phenomena within the framework of the Lexicalist Hypothesis (Chomsky, 1970), for example Aronoff (1976), Jackendoff (1975), Siegel (1977), Roeper and Siegel (1978), have either assumed or claimed that compounds are morphological entities.

Some of the well-known evidence for the morphological status of compounds concerns the interaction of compounds with affixes of various kinds. Inflectional affixes do not appear inside compounds, just as they do not appear inside suffix- or prefix-derived words. This is true even in cases where semantic considerations show that inflectional endings would be present in a noncompounded form, for example:

*mice-trap
*feet-warmer
*flies-paper
*hands-towel

Of course, inflectional endings such as the plural marker appear on the outside of compounds, just as in simple words: traps, mouse-traps; towels, hand-towels. There are also some derivational processes which can apply to compounds.3 This state of affairs can only be possible if compounding is a morphological rule. If compounds were derived morphologically, morphological material could be expected to attach outside the compound. Some examples of derivational affixes attaching to compounds are:

heart-rendingly
left-handedness
slave-driverish
head-achey

The constituent elements of compounds also generally cannot function independently with respect to syntactic processes.

mouse-trap    trap for catching mice (*a mouse)
foot-warmer   something for warming the feet (*a foot)
fly-paper     paper for catching flies (*a fly)
hand-towel    towel for drying hands (*a hand)

*I don't want a breadbasket, I want an egg one.
*The lecture hall is alright, but the concert one isn't.
*There were plenty of car and other thieves.
*Give me a goldfish and a silver one.
*They picked black and blueberries.
*Jack built a very greenhouse.

As Marchand (1969) points out, these facts do not constitute an argument for the morphological status of compounds, since similar facts obtain for frozen syntactic phrases such as the black market, a red herring, the Black Sea:

*the very black market
*a surprisingly red herring
*The Black Sea was rough but the Caspian one wasn't.

However, the inaccessibility of the elements of compounds to syntactic processes is certainly consistent with a morphological analysis of compounds, even if it cannot be used as an argument.

3 Not all derivational affixes can attach to compounds, however. Roeper and Siegel (1978) claim that forms like heart-rendingly and slave-driverish show not only that compounding is a morphological rule but also that compounding is unordered with respect to other morphological rules. This is incorrect. Only a few derivational affixes appear outside compounds. In particular, affix-boundary affixes never appear outside of productive compounds: *slave-driverous, *left-handedity, *bed-bugal. The very weakest possible position must therefore be that compounding is ordered at some point following affix-boundary affixation. A stronger ordering hypothesis will be proposed in §4.5.






4. New evidence for compounds as morphological entities

I will now present some evidence of a different kind which strongly supports a morphological analysis of primary compounds. As a first step, I propose the following as an approximation of a morphological Primary Compound Formation Rule (PCFR).

PCFR
[#X#]N,A,(V) + [#Y#]N → [[#X#] [#Y#]]
Condition: Y contains no V

The condition on PCFR excludes verbal nexus compounds (e.g., truck-driver, mountain-climbing, food spoilage, flame retardant) from the domain of the present discussion.4 The label (V) is parenthesized because of the marginality of the type pick-pocket, kill-joy.5 The effect of PCFR is to concatenate two fully specified lexical items, creating an internal double word boundary by the juxtaposition of the right word-boundary of X and the left word-boundary of Y. The assignment of external word boundaries to the compound formed by PCFR is achieved by a general convention of External Word Boundary Assignment.6 The compound external boundaries therefore do not have to be indicated in the output of PCFR. The category labelling of the compound formed by PCFR is similarly not rule specific, but results from a general principle of morphology which predicts the derived category of a morphologically complex item. This principle has been discussed here in the context of the double interpretation of the IS A Condition.7

PCFR forms productive primary compounds whose semantic structure is transparent. Semantic transparency in compounds may be defined as complete predictability of the range of possible meanings for a given primary compound, assuming the existence of general principles of meaning formation in compounds such as Variable R and the IS A Condition. I now propose that the morphological shape assigned to primary compounds by PCFR (essentially #X##Y#) is responsible for the semantic transparency of these compounds. Only general principles of meaning formation (for example, Variable R and the IS A Condition in the case of compounds) can operate across a double word boundary. The operation of less general principles, or the operation of a totally non-general process of meaning formation, is blocked by the presence of a double word boundary.8 The Strong Boundary Condition formalizes this claim.

Strong Boundary Condition (Semantic Version)9
In the morphological structure X Bs Y, no semantic amalgamation process can involve X and Y, where Bs is a strong boundary, ##, and where 'semantic amalgamation process' refers to any process of meaning formation which is not completely generalizable.

The strength of the internal double word boundary in primary compounds derived by PCFR prevents semantic distortion or loss of information from occurring when words become constituents of productive primary compounds. It follows that such semantic distortion or loss of information is possible only when the internal boundary is weaker than ##. I have made similar claims about boundary strength and semantic distortion in the context of a discussion of negative prefixation. I should emphasize here that the phonological shape of the constituent elements of PCFR derived compounds does not change when the free-standing words become compound members. In all the illustrative examples of productive primary compounds which I have discussed, the compound constituents have retained their phonological integrity.

4 In Allen (1978) I argue that a single compound formation rule accounts for both primary and verbal nexus compounds. In fact, if Siegel's (1977) Adjacency Condition is accepted and extended, in what seems to be a logical fashion, to be relevant to Compound Formation, then we would be forced to choose a single analysis for primary and verbal nexus compounds, since the condition "where Y contains no V" is not statable when the Adjacency Condition is taken as a condition on the form of morphological rules.
5 This type is in fact so marginal that compounds like pick-pocket, kill-joy, cut-throat are probably not rule derived.
6 External Word Boundary Assignment is discussed in more detail in §4.4.
7 It is possible that this principle of category prediction in compound derived forms can be extended to other morphological compositions. Aronoff (1976) proposes that the category of the output of a morphological rule be listed in the rule itself, along with other information necessary for correct morphological derivation. However, this proposal does not take into account the following facts: 1) Prefixation, with one or two exceptions, does not involve category changing. The exceptions to this statement are not productive prefixes; en- forms verbs from nouns, for example, enslave, empower, and a- forms adjectives from nouns, for example, aflame, aglow, ablaze, asleep. 2) Suffixation does involve category changing. The IS A Condition can be extended to account for these facts so that in a structure [[prefix]X [noun]Y]Z it would be correctly predicted that the derived category Z IS A Y. Similarly, in a structure [[word]X [suffix]Y]Z it would again be predicted that Z IS A Y, where the suffix Y itself carries the derived category information.
8 A less general principle of meaning formation might be the arbitrary selection of a single reading out of the range of readings given by Variable R. A totally non-general process of meaning formation occurs when the semantic content of the free-standing constituent words is altered or lost in the compound meaning.
9 The Strong Boundary Condition is revised in §4.5.

4.1 Vowel Reduction in compounds

There are compound forms, however, in which the constituent words do show deviations from the phonological shape of their free-standing counterparts. Vowel Reduction occurs in the second constituent of some forms, although not in others, as shown by the following examples.




A: Vowel Reduction          B: No Vowel Reduction

mainland    [-lənd]         bear-land    [-lænd]
highland    [-lənd]         waste-land   [-lænd]
Iceland     [-lənd]         Toy-land     [-lænd]
woodland    [-lənd]         farm-land    [-lænd]
fireman     [-mən]          tax-man      [-mæn]
policeman   [-mən]          produce-man  [-mæn]
chairman    [-mən]          bird-man     [-mæn]
strawberry  [-b(ə)ri]       field-berry  [-bɛri]
raspberry   [-b(ə)ri]       bush-berry   [-bɛri]
Dartmouth   [-məθ]          river-mouth  [-mawθ]
Newtown     [-tən]          hill-town    [-tawn]

Vowel Reduction is more prevalent in some dialects than in others. Consequently, Vowel Reduction may be optional for some speakers in Column A. The crucial point is that Vowel Reduction is impossible for all speakers in Column B. The contrast between the A and B types has often been noticed. Marchand (1960) observes that the pronunciation of compounds in [-man] is [mən] "in all older words", while "the pronunciation [mæn] is found in all recent words of a more or less occasional nature". To phrase these observations in more formal terms, it can be stated that those compounds "of a more or less occasional nature", in which Vowel Reduction cannot occur, are derived in a straightforward manner by PCFR. The resulting internal double word boundary predicts that there will be no phonological distortion. But what about the compounds in which Vowel Reduction does occur, Marchand's "older words"? One possible solution would be to claim that the underlying vowel in these cases is the same as the surface vowel. A reduced vowel would thus be present underlyingly, and no rule of Vowel Reduction would be relevant. This type of approach can be ruled out by the existence of alternations in which a vowel that is reduced in one case receives stress in another:

Iceland   [-lənd]    Icelandic     [-lændɪk]
Finland   [-lənd]    Finlandia     [-lændiya]
woodland  [-lənd]    'woodlandian' [-lændiyan]

These alternations show that a fully specified vowel, not a reduced vowel, is present in the underlying structure.10

10 The informed and alert reader will notice that words like Icelandic and Finlandia are possibly problematic, since a stress-determining affix-boundary suffix appears outside a compound. In §4.5 I propose that Compound Formation is ordered after External Word Boundary Assignment, which is ordered after all affixation rules. Consequently no derivational affixes should be found outside compounds. The problem is not an insurmountable one, however. First of all, stress-determining


The problem which remains is that of ensuring that Vowel Reduction can occur in some compound forms, while prohibiting it in others. The rule of Vowel Reduction reduces non-tense vowels which are also [-stress] to schwa:

Vowel Reduction
[−tense, −stress] V → ə

All vowels receive the specification [-stress] at an early stage of the phonology. This specification is removed if the vowel is assigned primary stress at any point in the derivation. Once the specification [-stress] is removed, vowel reduction cannot occur. Now, the second elements of the compounds in A and in B are freely occurring words; thus they receive primary stress on their own cycle, prior to the application of the Compound Stress Rule.

                      [main]  [lænd]      [bear]  [lænd]
                       -str    -str        -str    -str
Cycle 1                 1       1           1       1
Cycle 2
(Compound Stress)       1       2           1       2

Vowel Reduction cannot apply in either case, since there are no stressless vowels. The rule of Vowel Reduction cannot distinguish between the two cases to give superficial mainland in one case and bearland in the other. Some difference in underlying structure must be proposed in order to allow distinctive operation of the rule of Vowel Reduction.

At this point it should be observed that the phonological variation in the occurrence or non-occurrence of vowel reduction correlates with a variation in the degree of semantic transparency of the compound forms in A and B. All the B compounds, in which vowel reduction is not permissible, are semantically transparent, in the sense defined earlier in this paper. The compounds in A are semantically non-transparent, to a greater or lesser extent; many are names for specific items, persons or places, rather than general category names. A revealing example is the pair chairman [-mən] / chair-man [-mæn]. The form with a reduced vowel can only mean a person (and not necessarily a man) who is in charge of a meeting or assembly. The sequence -man in chairman [-mən] has lost some of the important semantic features of the free word man, as evidenced by the phrase Madame Chairman. In contrast, the compound chair-man, with an unreduced vowel, may mean 'a man who mends chairs', 'a man who sells chairs', 'a man who balances chairs on his head', and so on. This typical range of compound meanings is available only from the phonological sequence [-mæn], not from the sequence [-mən].

A possible way of structurally differentiating the forms in A and in B can now be examined. The B compounds are derivable by PCFR, being concatenations of two fully specified lexical items. But the A compounds appear to be concatenations of a lexical item plus an element which is clearly related to a lexical item, but which has lost some of the semantic characteristics of the free lexical item (e.g., man [-mæn] / -man [-mən]). I propose that the word-like second elements of compounds like chairman have been reanalyzed as suffixal elements of some type.11 This is a process which can be historically observed in the development of English. The suffixes -dom and -hood as in kingdom, motherhood were still free words in Old English.12 Siegel (1974) observes that word-boundary suffixes (e.g., -ness, -ful, -less) which do not determine stress-placement in the words to which they attach, are themselves stressless. The second elements of words like chairman, mainland, fireman can be characterized in the same way.

(Footnote 10, continued) affix-boundary suffixes appear only rarely outside compounds, and then only outside compounds which give evidence of weakened internal boundaries (e.g., Finland/Finlandia; Iceland/Icelandic). Productive, PCFR derived compounds do not appear with affix-boundary affixes: *bear-landic, *tax-manive, *bed-bugous. Secondly, ordering principles are violated only if Icelandic, etc. is formed by an actual affix-boundary suffixation rule. It is quite possible that these examples are not formed by rule but are merely analyzed as having an internal boundary of some sort. Finally, a more disturbing problem is the fact that stress-neutral word-boundary suffixes appear much more readily outside of compounds than do affix-boundary affixes. However, my concern over this issue is tempered by the fact that only some word-boundary suffixes appear outside of compounds (no word-boundary prefixes have this property) and the fact that these suffixes also may appear attached to phrases and other non-morphological constituents: e.g., go-getter, three-fingered, over-niceness, everydayness, matter-of-factness, fedupness, fourth-grader, two-seater, etc. I think the solution must lie in designating a certain freedom of attachment to a few specified word-boundary suffixes.
It is reasonable to conclude that the sequences -man, -land, etc., are morphologically identical to word-boundary suffixes; i.e., [#mæn]SUF, [#lænd]SUF. Since the stress-assigning rules do not operate on word-boundary suffixes, the designation [-stress] is never lost, and the vowel in the suffix will reduce. The following derivations illustrate the proposed structural differentiation between the suffix-like compounds in A and the true compounds in B.


11 The following objection to my analysis might be raised at this point. If words like fireman, chairman contain a suffixal element, -man, then why are the plural forms of these words not *firemans, *chairmans? The fact that these are not attested plural forms suggests that if indeed the sequence -man in these words is suffixal, as I have argued, then it must still be 'linked' with the lexical word man in such a way as to retain all the morphological irregularities associated with the lexical word man, in particular, its irregular plural. Further questions about the relationship of suffixes and semi-suffixes to lexical words are discussed in Allen (1978), as is the interaction of inflectional morphology with suffix and compound derived words.
12 See Marchand (1960) for a discussion of -dom and -hood.

Suffix-like compounds

[#[#chair#]N [#mæn]SUF#]N
Cycle 1: chair → 1; no cycle on [#mæn]SUF
Compound Stress: not applicable
Vowel Reduction: applies
chairman [-mən]

True compounds

[#[#chair#]N [#mæn#]N#]N
Cycle 1: chair → 1; man → 1
Compound Stress: 1 2
Vowel Reduction: not applicable
chair-man [-mæn]

These derivations provide the correct output; a reduced vowel in the suffix-like compound and an unreduced vowel in the true compound. However, the internal boundary structure is the same in both cases: #] [#. Given the Strong Boundary Condition proposed earlier, it would be expected that semantic amalgamation or distortion would be blocked in both the A and the B compounds by the presence of the double word-boundary. Semantic amalgamation is indeed blocked in the B compounds; they exhibit the typical range of compound meanings. But semantic amalgamations and distortions, rather than being blocked, typify the compounds in A. Semantic considerations consequently suggest that the proposed structural differentiation between the A and B compounds is not adequate. Furthermore, there is a variety of phonological evidence which demonstrates that the morphological structure of suffix-like compounds must be modified further. I now turn my attention to these data.
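The cyclic logic behind the two derivations can be glossed procedurally. This is a sketch under the paper's assumptions; the boolean flag and the vowel symbols used as return values are my own shorthand, not part of the analysis.

```python
# Sketch of the cyclic-stress reasoning: a free word receives primary
# stress on its own cycle, which removes the [-stress] specification
# and so blocks Vowel Reduction; a word-boundary suffix receives no
# cycle, so its vowel stays [-stress] and reduces to schwa.

def second_element_vowel(is_word_boundary_suffix):
    stressed_at_some_point = not is_word_boundary_suffix
    if stressed_at_some_point:
        return "æ"   # Vowel Reduction blocked: chair-man, bear-land
    return "ə"       # Vowel Reduction applies: chairman, mainland

suffix_like = second_element_vowel(True)    # chairman type
true_compound = second_element_vowel(False) # chair-man type
```

The whole contrast reduces to whether the second element ever carries stress, which is exactly what the suffixal reanalysis is meant to encode.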

4.2 Compound vs. word-boundary suffix derived forms: Phonological distinctions at the internal juncture

There are a number of phonological distinctions which must be made between [WORD][WORD] forms (i.e., compounds) and [WORD][#SUF] forms at the internal boundary.

4.2.1 The tense /i/ ~ lax /ɪ/ alternation

There is an alternation between tense final /i/ in free-standing words and lax /ɪ/13 in corresponding word-boundary suffix derived forms. For example:

Words: tense Iii

Suffix derived forms: lax Ir/

beauty mercy

beautiful merciful

,. For some reason unknown to me, the word-boundary suffix -ness does not participate in this pattern. Happiness has a tense /i/ , not a lax III.


Juncture fancy

merry penny happy pretty jolly

Allen: Sema11tic and Phonological Consequences of Boundaries

fanciful merriment penniless happily prettily jollity

No such alternation exists when compounds are compared to free-standing words. The vowel in corresponding compound forms behaves exactly like the vowel in the free-standing forms, not like the vowel in the suffix-derived forms.

Words: tense /i/      Compounds: tense /i/
beauty                beauty-treatment
mercy                 mercy-killing
fancy                 fancy-man
merry                 merry-mint
penny                 penny-box
happy                 happy-hour

The rule responsible for the tensing or laxing of the vowel in question, regardless of its exact formal statement, is clearly sensitive to a structural difference in the immediate righthand environment of the target vowel in word-boundary suffix derived forms and in compound forms. The morphological structures which I have proposed so far, i.e.,


Word-boundary suffix derived forms: [#[#WORD#][#SUFFIX]#]
Compounds:                          [#[#WORD#][#WORD#]#]

are not distinctive in this respect. The immediate righthand environment of the target vowel is #][# in both cases. Some modification is clearly necessary.

4.2.2 /l/ and /r/ syllabification

Syllabification of /l/ and /r/ occurs word finally, as in:

angle     launder
nibble    wonder
travel    clamber
wiggle    anger
tickle    hunger

In corresponding forms derived by word-boundary (vowel initial) suffixes, /l/ and /r/ do not syllabify, or they syllabify optionally, e.g.:

angling, angler            [æŋglɪŋ]
nibbling, nibbler          [nɪblər]
traveling, traveler        [trævlɪŋ]
wiggly, wiggling, wiggler  [wɪgli]
tickly, tickling           [tɪkli/tɪkəli]
laundry, laundring         [landrɪŋ]
wondering                  [wʌndrɪŋ/wʌndərɪŋ]
clambering                 [klæmbrɪŋ]
angry                      [æŋgri]
hungry                     [hʌŋgri]

Exactly as in free-standing words, /l/ and /r/ always syllabify in compounds, even before a following vowel. This is in direct contrast with the behavior of /l/ and /r/ in word-boundary suffix derived forms.

angle-inch     [æŋgəlɪnč]     *[æŋglɪnč]
nibble-urge    [nɪbələrǰ]     *[nɪblərǰ]
travel-itch    [trævəlɪč]     *[trævlɪč]
wiggle-eel     [wɪgəlil]      *[wɪglil]
tickle-attack  [tɪkələtæk]    *[tɪklətæk]
launder-ease   [landəriz]     *[landriz]
wonder-ape     [wʌndəreyp]    *[wʌndreyp]
clamber-age    [klæmbəreyǰ]   *[klæmbreyǰ]
anger-abuse    [æŋgərəbyus]   *[æŋgrəbyus]
hunger-act     [hʌŋgərækt]    *[hʌŋgrækt]

These data demonstrate again that the internal boundary structure of compounds must be distinguished from the internal boundary structure of word-boundary suffix derived forms.

4.2.3 Fricative voicing

There is an alternation between voiceless and voiced fricatives in word final position which occurs in some simple words and corresponding word-boundary (vowel initial) suffix derived forms.

Underived word:              #Suffix derived form:
voiceless fricative          voiced fricative

louse   [laws]               lousy     [lawziy]
worth   [wərθ]               worthy    [wərðiy]
calf    [kaf]                calving   [kavɪŋ]
house   [haws]               housing   [hawzɪŋ]
elf     [ɛlf]                elven     [ɛlvən]
thief   [θif]                thievish  [θivɪš]
north   [nɔrθ]               northern  [nɔrðərn]


In corresponding compound forms, the fricative is always voiceless, as in the underived word.

louse-eaten       [lawsitən]        *[lawzitən]
worth-adjustment  [wərθəǰʌstmənt]   *[wərðəǰʌstmənt]
calf-eye          [kafay]           *[kavay]
house-ant         [hawsænt]         *[hawzænt]
elf-anvil         [ɛlfænvəl]        *[ɛlvænvəl]
thief-orgy        [θifɔrǰiy]        *[θivɔrǰiy]
north-east        [nɔrθist]         *[nɔrðist]

In a manner exactly parallel to the /l/ and /r/ syllabification data, these examples show again that the operation of certain phonological rules distinguishes between the internal boundary structure of compounds and the internal boundary structure of word-boundary suffix derived forms.

4.3 The boundary structure of compounds and #suffix derived forms

Whatever the exact nature of the phonological processes outlined in §§4.2.1-4.2.3, it is clear that they crucially depend on the fact that the internal boundary in compounds is different from the internal boundary in word-boundary suffix derived forms. If there is a double internal word-boundary in compounds (as predicted by PCFR), then it is reasonable to suppose that there is a weaker or single word-boundary in word-boundary suffix derived forms.

Up to this point I have been operating on the assumption that stress-neutral or word-boundary suffixes have an associated morphological word-boundary, #, which is an integral part of the suffix. That is, the boundary is as much a part of the suffix as the following phonological segments, and is not, for example, inserted by some external rule. This position on the presence of a boundary as part of a prefix or suffix is motivated by the fact that a boundary encodes a variety of information about each prefix or suffix, information which would otherwise have to be itemized separately. The type of information encoded by boundaries includes: whether the suffix or prefix is stress-neutral or stress-determining; whether the suffix or prefix attaches to native and/or non-native items; whether semantic distortion is permissible in the derived semantic composition.

Given this position on the nature of boundaries, the only possible source for a single internal word-boundary in forms derived by word-boundary suffixation is the suffix itself. Thus I propose that the morphological structure of words formed by the addition of word-boundary suffixes (including compound-like suffixes) is in fact

[#[WORD][#SUFFIX]#]

and not

[#[#WORD#][#SUFFIX]#]

as suggested earlier.







This revised structure for suffix derived forms--both true and compound-like suffixes--provides the necessary contrasts at the internal juncture with productive PCFR derived compounds and with free-standing words.

(i) [#WORD#]


(ii) [#[WORD][#SUFFIX]#]


(iii) [#[#WORD#][#WORD#]#]


Phonological rules such as /l/ and /r/ syllabification and /i/ tensing, which occur in cases (i) and (iii) but not in (ii), can now be said to operate in the environment of a single right-bracketed word-boundary: #]. A rule such as Fricative Voicing, which occurs in (ii) but not in (i) or (iii), can be said to operate in the environment of a single left-bracketed word-boundary: [#.
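The distribution just stated is mechanical enough to sketch in code. The following toy encoding is my own (the bracket strings and function names are not Allen's); it merely checks which of the two environments, #] or [#, is present at an internal juncture:

```python
# Toy sketch of the rule environments in (i)-(iii); representation is mine.
def rules_at_juncture(left_edge: str, right_edge: str) -> dict:
    """left_edge: how the first member ends, e.g. 'beauty#]' or 'beauty]';
    right_edge: how the second member begins, e.g. '[#ful'."""
    right_bracketed = left_edge.endswith('#]')     # ... #]
    left_bracketed = right_edge.startswith('[#')   # [# ...
    return {
        # /l/-/r/ syllabification and /i/ tensing need a single '#]'
        'syllabification_tensing': right_bracketed,
        # Fricative Voicing needs a lone '[#', not flanked by '#]'
        'fricative_voicing': left_bracketed and not right_bracketed,
    }

# (ii) suffix derived beautiful: voicing-type environment, no tensing
print(rules_at_juncture('beauty]', '[#ful'))
# (iii) compound beauty-treatment: tensing environment, no voicing
print(rules_at_juncture('beauty#]', '[#treatment'))
```

The point of the sketch is only that a single two-way bracket distinction at the juncture suffices to sort the rule classes.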

4.4 The place of Compound Formation and External Word Boundary Assignment in an ordered morphology

The examples discussed throughout this section now have the following morphological structures.

Suffix (true and compound-like) forms
[#[chair][#mæn]SUF#]
[#[main][#lænd]SUF#]
[#[merry][#ment]SUF#]
[#[beauty][#ful]SUF#]

Compound forms
[#[#chair#][#mæn#]#]
[#[#bear#][#lænd#]#]
[#[#merry#][#mint#]#]
[#[#beauty#][#treatment#]#]

In a theory of morphology which incorporates a principle of external ordering of distinctive sets of morphological rules (e.g., Siegel, 1974), this array of structures can be achieved by separating the sources of two types of word-boundaries, and by requiring that the assignment of one of these types of word-boundaries be ordered with respect to certain morphological rules. The two types of word-boundaries are: 1) word-boundaries which are an integral part of a prefix or suffix, e.g., [#ful]SUF, [#ness]SUF, [un#]PRE, and 2) word-boundaries which designate the domain of phonological words and which are assigned by convention to the external bracketings of sequences which qualify as words. I will refer to the former type of word-boundaries as affix-associated word-boundaries and to the latter as external word-boundaries. In Siegel (1974) underived words enter the morphology having already been assigned external word-boundaries. Thus when word-boundary affixes are attached, an internal double word-boundary results, i.e., [#WORD#][#SUF]. I have shown that such structures must be modified, since they do not allow the necessary distinctions to be made with compound forms. If it is proposed that the assignment of external word-boundaries (ExWBA) does not take place until after suffixation and prefixation




rules, but before compound formation rules, then the necessary boundary distinctions are immediately available. The following is a schematic illustration of the relative ordering of Prefixation and Suffixation rules, External Word-Boundary Assignment (ExWBA), and Compound Formation.

1. Underived words enter the morphology.

2. Prefixation and Suffixation rules apply optionally.

3. ExWBA applies to the output of 2:
   [#[PRE+][WORD]#], [#[PRE#][WORD]#], [#[WORD][#SUF]#], [#[WORD][+SUF]#]
   and to underived words:
   [#WORD#]

4. Compound Formation applies to the output of 3:
   [#WORD#][#WORD#]
   [#[PRE#][WORD]#][#WORD#]
   [#WORD#][#[WORD][+SUF]#]
   [#[PRE+][WORD]#][#[WORD][#SUF]#]
   etc.

5. ExWBA applies to the output of 4:
   [#[#WORD#][#WORD#]#]
   [#[#[PRE#][WORD]#][#WORD#]#]
   [#[#WORD#][#[WORD][+SUF]#]#]
   etc.

The ordering of ExWBA between affixation rules and compounding rules supports the general principle that external ordering be imposed on distinct sets of morphological rules.
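The ordered schema can be sketched as a small pipeline. The representation below is my own illustration (the function names and string encoding are not from the paper); it shows how applying ExWBA both before and after Compound Formation, but only after affixation, yields the two boundary profiles:

```python
# Minimal sketch of the ordering in section 4.4; encoding is mine, not Allen's.

def exwba(item: str) -> str:
    """External Word-Boundary Assignment: wrap a word in [# ... #]."""
    return f'[#{item}#]'

def suffix(word: str, suf: str) -> str:
    """Word-boundary suffixation: the # is part of the suffix itself."""
    return f'[{word}][#{suf}]'

def compound(w1: str, w2: str) -> str:
    """Compound Formation concatenates two ExWBA'd words."""
    return w1 + w2

# Steps 2-3: suffixation, then ExWBA -> single internal boundary
beautiful = exwba(suffix('beauty', 'ful'))
print(beautiful)           # [#[beauty][#ful]#]

# Steps 3-5: ExWBA, compounding, ExWBA again -> double internal boundary
beauty_treatment = exwba(compound(exwba('beauty'), exwba('treatment')))
print(beauty_treatment)    # [#[#beauty#][#treatment#]#]
```

The design point is that the single vs. double internal boundary falls out of the rule ordering alone, with no boundary-weakening readjustment.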

4.5 Further evidence for weakened internal boundaries in compounds: Assimilations

So far I have shown that there are 'true' primary compounds derived by PCFR, with internal double word-boundaries, semantic transparency and phonological stability, and compound-like formations, which I have analyzed as word-boundary suffix formations with a single internal word-boundary, a greater degree of semantic instability, and phonological variability. A second type of phonological variability in compound-like forms involves the operation of various rules of assimilation. Some differences in the occurrence of Voicing Assimilation, Nasal Assimilation and Consonant Drop are shown below.

Voicing Assimilation           No Voicing Assimilation
gooseberry     [guzbri]        goose-barn    [gusbarn]
newspaper      [nyuspeypər]    news-piece    [nyuzpis]

Nasal Assimilation             No Nasal Assimilation
pancake        [pæŋkeyk]       pan-cleaner   [pænklinər]
handkerchief   [hæŋkərčɪf]     hand-gun      [hæn(d)gʌn]
handcuff       [hæŋkʌf]        hand-cart     [hæn(d)kart]
grandma        [græmma]        land-mass     [læn(d)mæs]

Consonant Drop                 No Consonant Drop
lamp-post      [læmpost]       lamp-paint    [læmppeynt]
grand-daughter [grændɔtər]     land-deal     [lænddil]

These consonantal assimilatory processes are marginal in their general productivity, but they do illustrate the tendency of phonological distortions to appear as internal boundaries weaken and even disappear. Extreme cases of boundary weakening and consequent phonological distortion are shown in words such as christmas, holiday, gunwale [gʌnəl], boatswain [bosən]. It is not easy to establish what the internal boundaries (if any) are in many of the forms with assimilated consonants. It is quite likely that the language learner makes no internal analysis in at least some of the cases. In others it might be possible to argue for an internal affix boundary (+), or a single internal word-boundary (#). However, all the cases with assimilated consonants contrast with productive compound forms, which have double internal word-boundaries and no consonantal assimilations. The productive compound pan-cake (no nasal assimilation) may mean 'a cake made in a pan', 'a cake shaped like a pan' or even 'a cake made for Pan' ..., but the form pancake [pæŋkeyk] (with nasal assimilation) means only that thing which you put syrup on and eat for breakfast. Again, phonological and semantic distortion go hand in hand as a result of weakened internal boundaries. The fact that semantic distortion is always found when phonological distortion occurs means that the variation in the phonological form of compounds cannot be explained away by using re-adjustment rules to weaken boundaries so that the appropriate phonological rules may apply. Chomsky and Halle (1968) and Selkirk (1972) both suggest this type of approach for compounds showing vowel reduction and assimilated consonants, but this approach makes it appear coincidental that all cases in which phonological adjustments are necessary also exhibit semantic deviation from the normal range of productive compound meanings. I have proposed that the underlying structures of 'compound' forms vary in internal boundaries and bracketing.
I now make the further claim that these underlying differences account for the simultaneous deviations from the norm in both phonological shape and semantic content. The Strong Boundary Condition, presented in the context of a semantic analysis of compounds, can now be revised so that it formalizes the claim that boundary differences account for the co-occurrence of semantic transparency with phonological stability, and of semantic distortion with phonological instability.


Strong Boundary Condition14 (Revised Version)

In the morphological structure

    X Bs Y

no rule may involve X and Y, where Bs, the strong boundary, is ##, and where rule refers to both 'semantic amalgamation process' (as defined) and 'phonological rule'.

5. Conclusion: the explanatory power of a morphological analysis of compounds

Why do these arguments about the internal structure of compounds support a morphological, as opposed to a transformational, analysis of compounds? The different degrees of semantic and phonological transparency, which I have attributed to differences in boundary strength, illustrate an extremely productive process in the formation and use of compound nouns. As we have seen, there are few limits on the formation of productive compounds, and a range of meanings is available. But there is great pressure for the compound to become 'lexicalized'--that is, to take on a specific, more or less idiosyncratic, meaning. Then the internal structure begins to disintegrate and phonological disturbances occur. If productive primary compounds were transformationally derived, while lexicalized compounds were treated as frozen morphological forms, then the move from productive to lexicalized compounds would appear to be a radical one, involving the loss of a whole transformational rule (or a set of them) and the establishment of a new lexical item. The type of morphological analysis which I have proposed predicts that the move from productive to lexical compounds is a simple one, and this prediction appears to be borne out by the language itself.

14. Notice that the type of semantic process which is ruled out by the Strong Boundary Condition (semantic amalgamation, loss of information) involves change of semantic content. Similarly, the phonological rules which are blocked involve, by definition, change of phonological content.

References

Allen, M.R. 1977. "The morphology of negative prefixes in English." Proceedings of the 8th Northeastern Linguistic Society Conference, Amherst, Mass.
---. 1978. Morphological Investigations in English. Ph.D. dissertation, University of Connecticut, Storrs, Conn.
Aronoff, M. 1976. Word Formation in Generative Grammar. Cambridge, Mass.: M.I.T. Press.
Bloomfield, L. 1933. Language. New York: Holt, Rinehart and Winston.
Chomsky, N. 1970. "Remarks on nominalization." Readings in English Transformational Grammar, ed. R.A. Jacobs and P.S. Rosenbaum. Waltham, Mass.: Ginn & Co.


---, and M. Halle. 1968. The Sound Pattern of English. New York: Harper and Row.
Gleitman, Lila R., and H. Gleitman. 1970. Phrase and Paraphrase. New York: W.W. Norton and Co.
Jackendoff, R. 1975. "Morphological and Semantic Regularities in the Lexicon." Language 51:639-671.
Lees, R. 1963. The Grammar of English Nominalizations. The Hague: Mouton and Co.
Matthews, P.H. 1974. Morphology. Cambridge, England: Cambridge University Press.
Marchand, H. 1969. The Categories and Types of Present-Day English Word-Formation. München: Beck.
Nida, E.A. 1949. Morphology: The Descriptive Analysis of Words. Ann Arbor, Mich.: University of Michigan Press.
Roeper, T., and M. Siegel. 1978. "A lexical transformation for verbal compounds." Linguistic Inquiry 9:199-260.
Selkirk, E. 1972. The Phrase Phonology of English and French. Ph.D. dissertation, M.I.T.
Siegel, D. 1974. Topics in English Morphology. Ph.D. dissertation, M.I.T.
---. 1977. "The Adjacency Condition and the theory of morphology." Proceedings of the 8th Northeastern Linguistic Society Conference, Amherst, Mass.

The Treatment of Juncture in American Linguistics*

MARK ARONOFF
SUNY, Stony Brook

No one would deny that the syntax and morphology of a natural language are to be described in terms of a sequence of discrete elements arranged within a hierarchical phrase structure. Within such a general type of system, it might be expected that the phonetic representations of a language could provide a means of telling where one element ends and the next one begins; such marking, it might be thought, would facilitate our understanding of an utterance in processing. In fact, in many languages we do find phonetic phenomena which are peculiar to boundaries between words and morphemes, and which may function as indicators of these boundaries. Trubetzkoy (1969) called such phonetic phenomena "boundary signals" (Grenzsignale), noting, among other examples, that in Tamil obstruents are realized as voiceless aspirates in word-initial position, but not elsewhere.

In this paper I will discuss some of the difficulties which American linguists have had in dealing with boundary phenomena. I will try to show that many of these difficulties arose because of the manner in which boundary phenomena were treated, in particular because the boundaries themselves were viewed as segmental phonemes, part of the arbitrary signifiant of language. I will mostly be

* Though there is a large literature on juncture, most of the important work is contained in three books: Martin Joos's Readings in Linguistics, Volume I (1957), Zellig Harris's Methods in Structural Linguistics (1951), and Chomsky and Halle's The Sound Pattern of English (1968). In citing material from the Joos volume, I have given dates of first publication, but the pagination of the articles as they appear in Joos. The pagination for Harris is that of the 1960 paperback edition (curiously retitled Structural Linguistics).



concerned with the Descriptivist school of the 1940's and 1950's, who devoted a lot of energy to boundaries, which they called junctures, but I will also try to show that some of the problems which Generativists have had with boundaries have their roots in those of the earlier school.

The American descriptivists believed that junctures were phonemes because they had to. Orthodox American descriptivists of the 1940's and 1950's were most concerned with developing a set of discovery procedures by which any human language could be analyzed. This in itself is a goal which most linguists share. What was peculiar to the school in question was the condition which they set on the discovery procedures they were looking for. Briefly, "The whole schedule of procedures ... is designed to begin with the raw data of speech and end with a statement of grammatical structure" (Harris, 1951:6). The analysis of a higher, closer to grammatical, level could never be essential to the analysis of a lower level, nor could the vocabulary of one level intrude onto another. There is no record of anyone ever having followed such a schedule in practice, but it was generally believed that this was the only truly valid method for doing linguistic analysis (what the linguist actually did usually involved shortcuts). To a Descriptivist, then, if a boundary has a phonemic effect, it must be represented as an element on the phonemic level: a juncture phoneme.

I will give a few examples to show how this treatment of juncture as a phoneme works. Such examples are not difficult to find in the literature, as the descriptivists were past masters at constructing minimal contrasts involving juncture. The most famous pair is night rate : nitrate. The /t/ of the first is unreleased, that of the second aspirated and palatalized. My favorite is school today : 's cool today (Trager and Smith, 1951), and even nonlinguists know I scream : ice cream.
There is no problem in describing the effects of the juncture at the phonetic level, for the phonetic differences between members of a juncturally defined minimal pair are always transcribable. The problem comes at the phonemic level. One of the essential characteristics of the American descriptivists' phonemic level, a consequence of their theory of discovery procedures, was its autonomy from syntax, semantics, and morphology. One was supposed to be able to write a phonemic transcription which did not refer to higher levels of analysis. Indeed, according to the theoreticians, one was supposed to be able to do a phonemic analysis without having any clue as to the higher structure of an utterance. In the case of juncture, the phonemic transcription could not refer to the fact that night rate is a compound consisting of two words, while nitrate is one single morphological unit. One could not account for phonetic distribution in terms of morphology and syntax. The simplest way out of this theoretically imposed bind is to simply ignore the morphological difference and accept the two as a minimal pair. This has unfortunate consequences. It is counterintuitive and leads to a superfluity of phonemes, for wherever we can find a junctural minimal pair we must posit a phonemic difference. In this case it would force us


to posit two t phonemes in English, one unreleased, one aspirated. Alternatively, one can posit a juncture in night rate. The difference between the two is now represented on the phonemic level by the juncture, and we no longer need two t phonemes. This juncture, it must be remembered, is a phoneme. The fact that it coincides with a syntactic boundary is purely coincidental, since there can be no necessary connection between first and second articulation, 'grammar' and 'phonology'.

The best example of the descriptivists' use of juncture is Moulton's "Juncture in Modern Standard German" (1947). In this paper, Moulton argues for a segmental juncture phoneme /+/, which allows him to account for aspiration and glottal stop at the beginning of words and also permits him to collapse [x] and [ç] into one phoneme, despite such minimal pairs as Tauchen 'small rope' and tauchen 'diving'.

The greatest problem with the descriptivists' theory of juncture is the notion that juncture is independent of morphology and syntax. The fact that juncture is not independent was obvious even to the most hardened descriptivists, and they made frequent apologies and excuses for it. Moulton, for example, notes:

The fact that /+/ occurs almost exclusively at syntactic and morphological boundaries raises the question: Should we accept syntactic and morphological boundaries as part of our phonemic analysis if, by so doing, we can limit the scope of--or even avoid assuming--open juncture? ... For a number of reasons I believe that this should not be done. First--and this is a purely methodological reason--I believe that the phonemes of a language should be analyzed without reference to syntax or morphology (as I have tried to do in this paper). Secondly, we could not do so successfully even if we tried, because of the cases (noted above) in which open juncture does not coincide with a syntactic or morphological boundary.
Finally, it would seem that the phonetic marking of morphological and syntactic boundaries is more clearly described precisely by the assumption of open juncture.

Indeed, the few foreign borrowings which Moulton cites as calling for morpheme-internal juncture are about the only examples found in which such an analysis is even plausible, and, as Pike (1947) points out in his critique of Moulton, "... one should hesitate to allow a small residue of words of foreign origin to prevent a general formulation ...". Harris (1951:80) notes that "one of the chief occasions for setting up junctures ... is when one set of phonemes occurs at speech boundaries while its parallel set does not." He even says (ibid.:87) "The great importance of junctures lies in the fact that they can be so placed as to indicate various morphological boundaries." Harris goes on to point out, however, that this is not necessarily the case. So, for example:

In German, we find [t] but not [d] before # ([bunt] 'group', [vort] 'word'), while [t] and [d] occur in identical environments within utterances ([bunde] 'in

group', [bunte] 'colored', [vorte] 'in word']). If we insert # after every [t], and group [t] and [d] into one phoneme, we would find that we are writing # in the middle of morphemes (e.g. /d#ayl/ Teil 'part'). We could still phonemicize [t] as /d#/, i.e. use the /#/ to indicate that a preceding /d/ represents the segment [t], but many of the occurrences of this /#/ would not correlate with morphological boundaries.
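To make the trade-off concrete, here is a toy sketch (my own illustrative rules, not from any of the cited analyses) of how a transcription with a juncture phoneme /+/ lets a single /t/ phoneme cover the night rate : nitrate contrast:

```python
# Toy realization rules, for illustration only: with a segmental juncture
# phoneme /+/ in the phonemic string, one /t/ phoneme suffices.

def realize_t(phonemic: str) -> str:
    """Unrelease /t/ before the juncture /+/; aspirate it before /r/."""
    out = phonemic.replace('t+r', 't̚ r')   # night+rate -> unreleased [t̚]
    return out.replace('tr', 'tʰr')         # nitrate    -> aspirated [tʰ]

print(realize_t('nayt+reyt'))  # night rate
print(realize_t('naytreyt'))   # nitrate
```

The juncture does the work that a second /t/ phoneme would otherwise have to do, which is exactly the economy the descriptivists were after.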

Harris does not seem to feel that this analysis indicates a weakness in his theory of juncture. Indeed, as long as there is no necessity for junctures to coincide with morphological or syntactic boundaries, any pair of phonemes X and Y can be reduced to X and X#, and the phonemic inventory can be cut in half. We will return to the general question of such analyses below.

It is not clear at first why there was such a fuss over the coincidence of juncture phonemes with grammatical boundaries, and such delight in junctures which did not seem to coincide with boundaries. Admittedly, to allow boundaries (grammatical elements) into phonemic representations would run counter to a basic axiom: the autonomy of phonemics. To say, however, that junctures always correspond to morphological and syntactic boundaries is not different from saying that segmental phonemes correspond to lexical items, or intonation contours to speech acts and modalities, which no one disputed. The real reason for the fuss appears to have been the belief that junctures were segments, or at least units in the segmental string.

If we claim that junctures are phonemes, then we must ask ourselves exactly what sort of phonemes they are. One's first intuition is that they occupy the same rank as segments. After all, they come between segments and they affect phonetic dimensions (aspiration, voicing, spirantization) which are segmental. In fact, Moulton (1947) treats juncture as a segment. Stockwell, Bowen, and Silva-Fuenzalida (1956), in one of the classic treatments of junctural phenomena, call internal open juncture a segmental phoneme in Spanish, on the grounds that "plus juncture functions like a segment rather than a suprasegment in the way it clusters with consonantal or vocalic segments." Still, Stockwell et al. don't quite want to call /+/ a segment, since "in its other functions it is more like the junctures that we are familiar with in English and German." They compromise on the term SEMIJUNCTURE.
Other people felt that juncture should be grouped with intonation and accent as suprasegments "for the grammatical reason that in its distribution and in its meaning it resembles stresses and pitches more than vowels and consonants" (Wells, 1947). Others were more pragmatic than Wells in reaching the same conclusion. Joos (1957:216) says "finally one must assign juncture to a phonemic status: otherwise it is nothing. By hypothesis it can't be segmental: no room there. Hence we are forced to the Hockett solution: it is suprasegmental." The problem with this view is that though junctures act like suprasegments, they sure don't look like them. Harris, who devoted a whole chapter of Methods




(1951) to juncture, never said what sort of phonemes junctures were, but he was not too concerned with phonetic reality.

Junctures do affect the segmental string, and the major tendency was to treat them as segmental phonemes. Segments, however, are that part of the sound of language whose relation to meaning is most arbitrary. Once we treat junctures as segments, it is only one step further to regard them as arbitrary in the same fashion. But junctures do not have meaning in the same way segments do: they do not form signs; nor do they have any phonetic value; they only have phonetic effects (the possibility of pause is not a phonetic value, but rather what Pike (1947) calls a "potential"). Untrammeled by substance of sound or meaning or structure, junctures become purely formal entities, whose only possible purpose is mathematical elegance. We should therefore not be surprised that linguists find themselves tempted to institute 'juncture' simply as "notational devices for reducing the number of phonemes" (Wells 1947:201), for junctures defined as empty units of sound are always free.

We have already had several examples of what can be done with such free junctures in the name of analytic 'simplicity'. We have seen Moulton use a juncture to account for the peculiarities engendered by non-initial stress in foreign borrowings in German. We have similarly seen Harris analyze German [t] as [d#]. The following are even better examples of what a brilliant mind can do with juncture.

Harris (1951:83) contrasts two types of words in Moroccan Arabic:

Thus Moroccan Arabic sfənž 'doughnut', bərd 'wind', ktəbt 'I wrote', xədma 'work' all have the pronunciation of the string of consonants phonetically interrupted (by consonant release plus [ə]) at every second consonant counting from the last.
In contrast žbəl 'hill', brəd 'cold', səwwəl 'he asked', ktəb 'he wrote', kətbət 'she wrote' all have the pronunciation of the consonant sequence phonetically interrupted before every second consonant counting from one after the last (i.e., counting from the juncture after the last consonant). These two types of short Moroccan utterances could be distinguished by the use of two junctures, say - at the end of the former and = at the end of the latter: /sfnž-, brd-, ktbt-/ for sfənž, bərd, ktəbt, and /žbl=, brd=, swwl=, ktbt=/ for žbəl, brəd, səwwəl, kətbət ... It is not necessary to write /ə/, since the occurrence of [ə] is now automatic in respect to the two junctures: The [ə] is no longer phonemic, but is included in the definition of the junctures, which also serve to indicate points of intermittently present pause.
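Harris's two schwa-placement patterns are algorithmic enough to sketch. The implementation below is my own reading of the quoted counting rules; the '-'/'=' encoding follows the quote, but the function name and details are assumptions:

```python
# Sketch of Harris's Moroccan Arabic schwa placement (my implementation):
# insert [ə] before every second consonant, counting from the last consonant
# for the '-' juncture class and from one after the last for the '=' class.

def insert_schwa(consonants: str, juncture: str) -> str:
    """consonants: junctureless spelling, e.g. 'ktbt'; juncture: '-' or '='."""
    out = list(consonants)
    # '-' starts counting at the last consonant; '=' one position after it
    start = len(out) - (2 if juncture == '-' else 1)
    for i in range(start, 0, -2):   # descending, so earlier inserts are safe
        out.insert(i, 'ə')
    return ''.join(out)

print(insert_schwa('ktbt', '-'))  # ktəbt 'I wrote'
print(insert_schwa('brd', '-'))   # bərd
print(insert_schwa('brd', '='))   # brəd 'cold'
print(insert_schwa('ktbt', '='))  # kətbət 'she wrote'
```

With /ə/ removed from the phonemic inventory, the juncture symbol alone predicts where it surfaces, which is exactly the economy Harris is advertising.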

Harris even allows a class of morphophonemic junctures, distinct from phonemic junctures (ibid.:241):

Thus in Nootka, morphemes ending in labialized gutturals and velars have forms without labialization before certain morphemes (words, and incremental suffixes), e.g. /qaħak/ 'dead', /qaħakʼaƛ/ 'dead now', but with labialization before other morphemes (formative suffixes), e.g. /qaħakʷʼas/ 'dead on the






ground'. None of these features would be represented by phonemic juncture, because they occur even when no morpheme boundary is present lk'i •sk'i •k'o ·I 'robin', lk'wisk'wa · stin/ boy's name. Since the alternation occurs in all morphemes ending in labialized gutturals and velars, and only before certain suffixes it is useful to mark the particular suffixes. Morphemes ending in other phoneme~ have members showing other alternations before these same suffixes: lpisatotw-I 'play place', lpisatow'asl 'playing place on the ground'. It is therefore not desira_bleto add to these suffixes a morphophoneme consisting of a particular letter, smce not one but several phonemic alternations are to be indicated by that morphophoneme. The simplest mark is a special morphophonemic juncture I-I which would be the initial part of the morphophonemic spelling of each of these suffixes, and which represents various phonemic values when it is next to various phonemes. After some phonemes, there is no alternation before these suffixes, so that there the juncture represents zero. As the quote from Wells above reveals, some linguists were concerned at the great power which segmental junctures permitted. Two sorts of steps could be taken in attempting to trammel juncture, and both were tried. The more conservative one is to assign juncture some real phonetic properties. The standard view on this was that all junctures had two allophones: pause and zero. The former was more likely to occur at the beginning or end of an utterance, the latter in the middle of an utterance. Moulton, for example, says "only the zero allophone occurs at morphological boundaries within words ... "(1947:214). There are two ways to deal with the idea of a phoneme having these and only these phonetic properties. The first is to regard the postulation of such an entity as a reductio ad absurdum of the whole notion of juncture phonemes. This is what Pike (1947) does. 
The second is to take these purported phonetic properties seriously and to investigate them instrumentally. The only published reference to such an investigation which I know of is the following in Stockwell et al.:

That the terminal junctures in English are such tempo phenomena is clearly audible, /#/ being the greatest slowing down, approximately two average phoneme lengths (a still inexact figure, stated informally by Martin Joos on the basis of incomplete spectrographic measurements), /||/ being about one-half phoneme length less slowing down (accompanied by pitch rise), ... Plus juncture also represents a slowing down, but with a critical difference; the slowing down before terminal juncture is at least one average phoneme length and occurs throughout the segments that follow the last strong stress (either primary or secondary); the slowing down of plus juncture is often less than five centiseconds, or within the smear-span of the human ear. Being within the smear-span, it will not be heard as what it is, but rather in terms of its effects on the immediately preceding segments. (1956:407)

A more radical and more dangerous move is to admit that the coincidence of juncture and morphological or syntactic boundaries is not accidental. Harris, for example, despite the remarks quoted above, says in a later footnote (1951:241)



"... phonemic junctures are used for segments which occur only at morpheme (or other) boundary ...". Wells declared (1947:202) that "the validity of junctural phonemes is open to grave doubts on phonetic grounds". In fact, Wells went even further, and denied that junctures were phonemic entities: "Juncture, wherever it occurs, is a morpheme-though often with no detectable meaning" (ibid.:201). To take such a step was, however, to incur the wrath of Hockett, who noted (1947:fn. 30) "the risky complications which result from calling a word-juncture a morpheme, as Wells does in his 'Immediate Constituents'. The semantic contrast which Wells cites as evidence ... means that word-juncture is morphemic, but in such cases it might just as well be concluded-I think, a little better so-that absence of word-juncture is the morpheme." So much for Mr. Wells!

Generative phonology, as represented in SPE, explicitly rejects many of the methodological assumptions of the Descriptivists. Thus, rather than being troubled by the fact that junctures typically co-occur with morphemic and syntactic boundaries, this is something which they exploit. The word boundary (#) is assigned within their system to the left and right of every major lexical category (N, A, V) and every category dominating a major lexical category by a universal convention. The formative boundary (+) is associated with the left and right extremity of every formative; "it indicates the point at which a given formative begins and ends" (1968:365). These two boundaries, # and +, are universal in the sense that they are found in all languages and the general conventions for their distribution hold for all languages.
Thus, the question of whether or not these boundaries are phonemes within particular languages simply does not arise in any interesting sense; unlike segments, where there is a great deal of apparent variation among the phonemic inventories of various languages, the boundary inventory is fixed and essentially uniform. Nor do Chomsky and Halle insist on the phonetic reality of juncture. Indeed, they declare that "unlike the latter segmental features, boundary features do not have universal phonetic correlates, except perhaps for the fact that word boundaries may optionally be actualized as pauses." In one respect, however, the Generative treatment of juncture is similar to that of the Descriptivists. In SPE, junctures, which are there called boundaries, are viewed as "units in a string, on a par in this sense with segments" rather than "on a par with the labeled brackets, as elements delimiting the domain in which a given phonological rule applies" (1968:371). Chomsky and Halle do not claim that boundaries are segments; in fact, they give them the distinctive feature specification [-segment]. Nonetheless, by placing boundaries on a par with segments, they open up the possibility of treating them as such, with all the concomitant opportunities for abuse. In fact, most criticisms of the use of boundaries in SPE have been leveled at just those analyses where boundaries are treated as segments.

36 Juncture

The first criticism involves the features used to analyse boundaries. Chomsky and Halle note that boundary features do not have universal phonetic correlates; however, they treat them in the same manner as they treat phonetic features. As a result, they are led to propose a number of features whose motivation is weak at best. These features are [segment], which differentiates boundaries from segments, [formative boundary], which separates + from other boundaries, and [word boundary], which separates = from #. The only one of these which receives much attention is [segment], whose main motivation is the treatment of boundaries as units on a par with segments. The second criticism is that the use of boundaries in SPE is sometimes ad hoc. The = boundary, for example, has the sole function in SPE of preventing stress from retracting past the root of certain Latinate verbs in English. Siegel (this volume) has shown that the failure of retraction can be accounted for more adequately without resorting to the use of this boundary, effectively removing the motivation for the boundary and for the feature which defines it. I do not pretend to have answered the question of whether boundaries are units in a string, distinct in this regard from labeled bracketings. This is an empirical question, and one which is addressed in other papers in this volume (especially by Selkirk and by Devine and Stephens). My purpose has been only to show that, whatever their phonological effect, junctures are part of the organization of language, not lexical segmental phonemes.

Chomsky, N., and M. Halle. 1968. The Sound Pattern of English. New York: Harper and Row.
Harris, Z. 1951. Methods in Structural Linguistics. Chicago: University of Chicago Press (paperback edition entitled Structural Linguistics, 1960).
Hockett, C. 1947. "Problems of morphemic analysis." Language 23:321-43; reprinted in Joos (1957).
Joos, M. (ed.). 1957. Readings in Linguistics I. Chicago: University of Chicago Press.
Moulton, W. 1947. "Juncture in Modern Standard German." Language 23:212-26; reprinted in Joos (1957).
Pike, K. 1947. "Grammatical prerequisites to phonemic analysis." Word 3:115-72.
Stockwell, R., D. Bowen, and I. Silva-Fuenzalida. 1956. "Spanish juncture and intonation." Language 32:641-65; reprinted in Joos (1957).
Trager, G.L., and H.L. Smith. 1951. An Outline of English Structure. Studies in Linguistics: Occasional Papers 3. Norman, Oklahoma: Battenberg Press.
Trubetzkoy, N. 1939. Grundzüge der Phonologie. TCLP 7, Prague.
Wells, R. 1947. "Immediate constituents." Language 23:81-117; reprinted in Joos (1957).



Lexical Representation of Derivational Relation*

DIANNE BRADLEY
Massachusetts Institute of Technology



1. Introduction

Relatedness among lexical items, in reflection of derivational processes in word formation, will be expressed in some way by a speaker's representational system. So, too, will relatedness which depends on, for example, association (whatever its basis might be), or semantic similarity. The rule-governed nature of derivational relation, in contrast with the latter cases, has prompted the suggestion that its expression significantly involves some aspect of the structure of the mental inventory of forms. This paper examines a proposal made in this spirit: that the consequence of derivational relation for the format of lexical representation is a simplification of the mental inventory, so that lexical entries serve as representations of base forms and their derived variants. This is a thesis which has received some attention in the psychological literature (e.g., MacKay, 1976, 1978; Manelis and Tharp, 1977), and it is clear why this should be so: brute placement of a derived form in that lexical entry which also serves its base gives the strongest possible account of such phonological, syntactic and semantic regularity as we notice in these cases, and suggests as well the computational relevance of that regularity. We are focusing here on the cognitive apparatus which supports the behaviors of speaking and listening, rather than on the grammars which are intended to represent the structural domains of these systems. If at this stage there is anything which is clear about the connection between grammars and processors, it is that

* I am grateful to M.F. Garrett for his interest in all phases of the research. All errors remain mine.





grammars do not (and, moreover, are not intended to) dictate the ways in which the computations of speaking and listening proceed, though they do suggest the kinds of information that ought to be displayed, one way or another, by sentence processing systems. The questions pursued here are not in any direct fashion questions about what has been called the 'psychological reality' of grammatical descriptions. That is, we are not in the business of quality-checking, of asking whether the morphophonological rules which capture the relation between derived forms and their bases are best thought of as psychologically real-that is, whether they properly correspond to the manipulations of the speaker-hearer-or whether, in contrast, they are to be seen as a linguist's 'convenient fiction'. To the extent that a linguistic rule system gives appropriate expression of the distributional facts of a language, it is by any usual standard real, and it is unclear that psychological data (however painfully amassed) have any privileged status in determining theoretical adequacy. What's more important, perhaps, is that such a fact-or-fiction approach doesn't allow us enough options when we are in pursuit of a description of the computational mechanisms which underlie language use. The simple affirmation of the psychological reality of grammatical descriptions leaves open the issue of how the mental operations involved in language use reflect the structural regularities inferred from a distributional analysis of language. Distributional evidence is neutral with respect to the ways in which processors exploit regularities, and in this largely unexplored territory the more profitable research strategy is likely to be one which regards grammatical descriptions chiefly as a lever to the problem in that they suggest one of the endpoints of the processing operations.
The hypothesis we examine here-namely, that the inventory of lexical forms is simplified to reflect derivational relation-should be understood as a hypothesis about the mechanisms which allow the lexicon to be exploited as an information base in support of sentence processing operations. Specifically, it is to be cashed in, for purposes of experiment, as a proposal about the nature of access mechanisms which, e.g., account for the recognition of surface word forms. Given this, one rather straightforward evaluation of the question would seem to fall out of a chain of reasoning like the following: if the lexical representation of an affixed form (for example, DANCER) is subsumed under the representation of its associated base, then it must be the case that, in recognition, some preanalysis is made of the input which 'unpacks' it to recover the base-plus-affix structure. And since processing operations should not be free, there ought to be some regular indication of that recovery of structure so that, for example, affixed words take longer or are more difficult to recognize than otherwise equivalent non-affixed forms. Further, since the possibility of unpacking presumably rests on the availability of an inventory of affixes, there should be some indication of the false recovery of structure, when inputs which are not in fact affixed (for



example, DANGER) inappropriately fall under the analysis. A moderately inventive mind can readily generate a proliferation of readily testable predictions, all resting on the assumption that a simplifying lexicon is necessarily supported by a special apparatus which recovers internal structure, and, moreover, does so in a way which involves computational cost. It is unfortunate, given the charm and directness of this form of analysis, that experimental evaluations produce mixed outcomes. For example, Manelis and Tharp (1977) look for evidence of a cost of structural analysis in a task in which subjects affirm that two simultaneously visually presented forms constitute words of their language, where pairs are constructed using items which either truly or falsely fall to an analysis as base-plus-affix. Though pairs with items of the same type (e.g. PRINTER/DRIFTER or SLANDER/BLISTER) are responded to more rapidly than mixed pairs (e.g. PRINTER/SLANDER or BLISTER/DRIFTER), truly affixed pairs do not differ reliably from falsely affixed pairs. The first piece of evidence can be taken to indicate that an analysis of internal structure is available, and is controlling performance in the task in some way; the second piece, however, denies a computational cost to that analysis. Clearly something is amiss, and whether that lies in the experimental paradigm or in the general approach to the problem is difficult to determine. We can make some progress by considering what it is that a word recognizer does. Like any recognition system, it assigns the type of which an input can stand as token. Immediately we distinguish word recognition from sentence recognition: the vocabulary of a language is finite, and the word-recognition problem is essentially one of contacting mental representations.
This is not to deny the complexity of the problem, but rather to point to the fact that the pre-analyses of input which the recognition system makes to engage stored representations need not be structure recovering. Details of internal structure could be arrayed against entries in the mental inventory of forms, as are facts of syntactic patterning (grammatical category, phrasal environments) or of interpretation. That is, the analysis of internal structure which the straightforward account places as an operation in the process of word recognition could be delivered as a description listed against the mental representation which is the target of the recognition process. These remarks are intended to suggest that a justification is needed for any claim that a word-recognition system computes at cost what it might read. They do not, of course, deny that a justification will become available when we understand in more detail the nature of the system. If the processes which allow a contacting of stored representations are not obliged to recover internal structure, can we still entertain the notion of a simplifying lexicon? Some considerations suggest that we can. To say that the vocabulary of a language is finite is not to say that it is of negligible size, and the task of the word-recognizer, the matching of an input with a mental representation, is a correspondingly large one. Some pre-analysis of input is required, to




delimit the set of candidate lexical targets, and the principles under which those analyses are made can plausibly be understood to be governed by considerations of computational efficiency. That is, what the access routines might require is the minimal analysis of input which allows successful contact with lexical elements, and whether that minimal analysis engages facts about morphophonological regularities will, in this view, turn simply on a question of efficiency. Taft and Forster (1975, 1976) offer a suggestion about the form that the pre-analysis takes. Their claim, supported by a labyrinthian series of experiments, is that a description of the first syllable of an input is sufficient to engage stored representations. So, for example, the mental representation of the form ATHLETE is located by a search over lexical candidates which is armed only with the description ATH, CHIMNEY is located through CHIM, and so on. There are difficulties, of course, with what is to count as a syllable (though see Taft, 1978), and with what is to count as first, given the problem of prefixing (see Taft, 1975; Taft and Forster, 1975). What's of interest here is that such a proposal foreshadows a view of a simplifying lexicon which need not make a costly analysis of internal structure. In the same way that ATHLETE is treated initially as ATH, so can SOFTNESS be treated as SOFT, RUNNER as RUN, and so on. The suggestion, if correct, makes a gloomy forecast for any program of research in derivational morphology which relies on the existence of a special apparatus for the treatment of derived forms by access routines.
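The Taft and Forster proposal can be caricatured as a lookup keyed on an initial description. The sketch below is an illustration only: the access codes and the grouping of forms into entries are assumptions made for the example, not claims about the actual contents of the mental lexicon.

```python
# Toy access table keyed on a first-syllable description, in the
# spirit of Taft and Forster; access codes and entry contents are
# invented for illustration. A 'simplifying' entry lists a derived
# form together with its base.
access_table = {
    "ATH": ["ATHLETE"],
    "CHIM": ["CHIMNEY"],
    "SOFT": ["SOFTNESS", "SOFT"],  # base and derivative share an entry
    "RUN": ["RUNNER", "RUN"],
}

def candidates(input_form):
    """Return the lexical candidates reachable from the input's access code."""
    for code, entries in access_table.items():
        if input_form.startswith(code):
            return entries
    return []

print(candidates("ATHLETE"))   # ['ATHLETE']
print(candidates("SOFTNESS"))  # ['SOFTNESS', 'SOFT']
```

The point of the sketch is that nothing in the lookup recovers base-plus-affix structure: SOFTNESS reaches its entry through the same kind of initial description as the monomorphemic ATHLETE.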


2. Diagnostics for the content of lexical entries

We need an experimental tool which in some indirect way allows us to examine the content of lexical entries, and which does not depend at all closely on the evidently vexed question of the operations which support recognition. The research reported here is an attempt to make that examination, and employs a lexical decision task in conjunction with an established experimental result, a frequency effect.

2.1 In the lexical decision task, a subject is presented with some form (usually visually) and must decide whether it is a word of his language. He decides, for example, that sentiment is a word of English while serdiment is not. The subject cannot simply 'know' that one item is a word, and another is not-features of form do not distinguish word and nonword items, since nonword foils are chosen to be possible words of the language. It is clear that the lexical inventory must be reviewed: the subject is in a position to decide that sentiment is a word when he has searched for and located its mental representation, and to decide that serdiment is not a word, when a search of the mental inventory necessarily fails to turn up an appropriate representation. The lexical decision task seems, on the face of it, to be a useful tool for the investigation of lexical access and lexical structure,




and indeed there is a considerable literature (for a general review, see Forster, 1976). One of the more robust findings in this and related paradigms is that of a frequency effect, that is, a regular relation between the frequency of occurrence of a word and the reaction time for its classification: the more frequently an item is used in the language, the more rapidly it is classified in lexical decision.1 It's important to notice that the contrast here is not one drawn between forms which are guaranteed to be in the vocabularies of the speakers who act as our subjects, and forms whose lexical status is marginal. Though it is inevitably true that among the low frequency forms one is most likely to encounter items that are in fact not known by all speakers, it is not true that all infrequent items are potentially unfamiliar. The frequency variations of interest are those which reflect the differential availability of occasions for the use of forms, rather than differential control by speakers of those forms. A graded effect of frequency on the distribution of response time is in fact maintained even when lexical status (from the point of view of our subjects) is assured. For example, there are clear effects in the treatment of animal names, where items range from highly frequent horse through to less frequent skunk (Bradley, 1978). Though relative frequency is not the only variable affecting reaction time, it accounts for a healthy proportion of the variance, and current estimates (e.g. Whaley, 1978) give it a more than usual prominence. Given these observations, it is clear that models of recognition processes must incorporate an account of the influence of frequency of occurrence in a nontrivial way. We might say that, when the mental lexicon is consulted, a 'search' ranges over candidate elements which are encountered in order of their relative frequency.
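The log-linear approximation mentioned in footnote 1 can be sketched numerically. The 40 msec slope is the footnote's rough figure; the 600 msec intercept is an invented illustrative value, not an estimate from any experiment.

```python
import math

# Rough log-linear model of the frequency effect: reaction time drops
# by about 40 msec for every log10 step in occurrences per million
# (footnote 1's figure). The baseline intercept is invented.
BASELINE_MSEC = 600.0
SLOPE_MSEC_PER_LOG10 = 40.0

def predicted_rt(freq_per_million):
    """Illustrative predicted lexical-decision latency (msec)."""
    return BASELINE_MSEC - SLOPE_MSEC_PER_LOG10 * math.log10(freq_per_million)

# A tenfold frequency advantage buys about 40 msec under this model.
print(predicted_rt(1.0))    # 600.0
print(predicted_rt(10.0))   # 560.0
print(predicted_rt(100.0))  # 520.0
```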
Building in a frequency variable this way has some a priori plausibility, in that it is a bias in the service of efficiency: average search time will be reduced when the lexical candidates examined earliest are those whose tokens are most likely to occur. And, in making frequency of occurrence a variable which has its force in the ordering of elements in a search path, we make it a variable whose domain is lexical entries, rather than particular representations internal to entries. For purposes of experimentation, we rely on estimates of frequency of occurrence which come out of counts over surface forms in text: in the work described here, we made use of the word-count put together by Kucera and Francis (1967), which lists the number of occurrences (per million) for particular orthographic sequences. So, for example, we read off information as to the relative frequency of the form kick, the form kicks, the form kicked, and so on. This count quite properly leaves open a question of more than methodological interest: namely,

1 To a good approximation, reaction time is a logarithmic function of relative frequency of occurrence. Typically, we find that reaction time is decreased by some 40 msec with every log10 step in frequency per million.





the question of the computational relevance, in terms of the target lexical representations which are contacted in word recognition, of relatedness amongst surface forms. Consider the assumption one makes in explaining the effects of frequency of occurrence in word recognition: relative frequency determines the position in a search space of a target lexical representation. But one can ask, which relative frequency? Is it the frequency of the precise surface form? If so, this amounts to a claim that the mental lexicon consists simply of an inventory of the surface forms of a language. An alternative and more palatable claim is that entries in the mental lexicon are representations which abstract in one way or another from the detail of surface variation. This claim has consequences in terms of the appropriateness of one way or another of counting frequency for purposes of experimental prediction. An example will make this clear. If a lexical entry ||KICK||2 represents all the surface forms kick, kicks, kicking and kicked, then the count relevant for predicting the contribution of the frequency effect to the recognition latency of any one of these, say kick, should be that which sums over all of them. In contrast, if the entry ||KICK|| represents only kick (so, there are independent entries ||KICKS||, ||KICKED||, and so on), then the most successful predictor of the frequency effect should be a count of that particular form alone. Ideally, a procedure that determines which of the ways of counting, (kick, kicks, kicking, kicked) versus (kick), is most predictive of the frequency effect for presented kick, will tell us of the contents of the lexical entry ||KICK||. That is, designating the field of surface forms relevant in predicting frequency effects is equivalent to designating the field of surface forms served by particular lexical listings.
We have taken our example here in terms of verb tensing, for purposes of exposition, but it is clear that the same point applies for cases of derivational variation as for inflectional variation. Moreover, if we show lexical simplification in reflection of derivational relation, then the more straightforward inflectional case follows automatically: it is difficult to conceive of a system which collapses over derivational variation without making the same move for the much more regular inflectional cases.
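The two ways of counting can be made concrete with a toy surface-form table. The per-million figures below are invented for illustration; they are not Kucera-Francis values.

```python
# Invented per-million counts for surface forms, in the style of a
# Kucera and Francis listing; the numbers are illustrative only.
surface_counts = {
    "kick": 16, "kicks": 5, "kicking": 8, "kicked": 10,
}

def particular_count(form):
    """Count for the presented surface form alone."""
    return surface_counts.get(form, 0)

def cluster_count(forms):
    """Count summed over a whole field of related surface forms,
    as a single shared entry ||KICK|| would predict."""
    return sum(surface_counts.get(f, 0) for f in forms)

print(particular_count("kick"))                               # 16
print(cluster_count(["kick", "kicks", "kicking", "kicked"]))  # 39
```

On the shared-entry view, the 39 is what should drive the frequency effect for presented kick; on the independent-entry view, the 16.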

2.2 For the cases we are interested in, the question is set as follows: does the lexical entry containing a representation of a derived noun also contain a representation which serves the associated base, or does the derived form have representation which is independent of that of related forms? We establish a terminology before cashing this in terms of our diagnostic variable, frequency. We will call a count over occurrences of a derived form (and its plural) a particular-form frequency count (Fp), and a count over occurrences of the derived form (and its plural), the base form (and its inflections) and related derived forms a cluster frequency count (Fc).3 Now, our question stands as: is the frequency count relevant in predicting the contribution of frequency to reaction time the Fc, or the Fp?

Though this contrast of alternate frequency counts straightforwardly allows an experimental evaluation of the lexical status of derived forms, the exploitation of the contrast is complicated, for essentially uninteresting reasons. Counts over surface fields of differing extent are in general highly correlated, so the most obvious ploy fails to decide the case.4 That is, if one simply assembles a large collection of forms, determines recognition latencies, and by correlational analysis tries to pull apart the relative effectiveness of Fp and Fc, the exercise must fail: the two measures are correlated, and an effective test requires that they be decorrelated. We have opted here for a pairwise decorrelation, constructing sets of item pairs, where pairs are matched for value on one frequency count, but contrasted on the other. Distinguishing the effectiveness of Fp and Fc in controlling reaction time rests now, not on a correlational analysis, but on patterns of reaction time difference between members of item pairs.

Let's take an example. The forms sharpness and briskness are identical in Fp value: briskness is as commonly used as sharpness. But they contrast markedly in Fc value: adjective sharp, together with its comparative forms and the adverbial, occurs much more frequently than the corresponding forms with brisk. Should the representation of sharpness and briskness be in entries which are independent of the base adjectives, then reaction times for the forms will not differ, because they are matched on Fp. But if the entries contain not only the representation of the derived form, but also the bases, then we expect sharpness, with a higher value on Fc, to be responded to more rapidly than briskness. By the presence or absence of a difference in reaction time for these and similarly constructed pairs, we infer the presence or absence of base forms in the lexical entries containing their representation.

In the same fashion, we can construct pairs using items which differ in value on Fp (so, for example, happiness occurs more frequently than does heaviness), but which are identical in value on Fc (the relative usages of base forms neutralizes the differences in derived forms). We have reversed the contrast, and correspondingly the significance of obtained differences in reaction time is reversed: sensitivity to the Fp counts supports a claim that the derived forms are represented independently of their bases.

In sum, we have argued for an experimental evaluation of the question at hand via a manipulation of alternative ways of counting relative frequency of occurrence. For each of the nominalization types we shall consider, we will look for patterns of difference in reaction time over sets of item pairs whose construction is summarized in Table I. Notice that there is just one pattern of differences which is interpretable in the framework we have developed: namely, a pattern in which reaction time differences are induced by a contrast of one frequency count, but not by a contrast of the other.

2 The double slash, ||...||, is used here as an indication that it is not a phonological representation which is intended, but rather a 'name' of a lexical entry.
3 It becomes clear, when the Fc count is undertaken, that there are a number of decisions to be made simultaneously. It is typical that a base form supports more than one derived variant, and the question of which of those several variants should be entered into the count is a delicate one. The approach we have adopted is justified primarily by the patterns of effects obtained over the series of cases. We have summed the frequencies of occurrence of those derived forms which have the same status (word- or formative-boundary) or are 'downstream' of a given item in the experiments.
4 That failure is clearly suggested by the fact that the basic frequency effect has quite reliably been established in the absence of any consideration of the appropriateness of one way or another of counting tokens.

counts without at the same time varying the 'naturalness' of the use of the nominalized forms. It should not be the case that a nominal which occurs often relative to the usage of its associated base is inherently more noun-ish, or has firmer lexical status, than cases with more balanced distributions of usage. These disquieting problems are to some extent intractable, though some comfort is to be derived from the success of the experimental paradigm when cases are selected with care.

3. The Experiments

We have used the rationale set out above to make an evaluation of the representation of derived forms for nominals of four types: forms with #ness, with #er, with #ment, and with +ion. There is one over-riding consideration dictating the choice of types: namely, the sheer number of available cases. The experimental design rests on a 'manipulation' of frequency counts, where manipulation consists in the selection of item pairs which contrast on one way of counting frequency while matching on another, according to published counts. This criterion for item selection rapidly diminishes the number of usable cases, and a second requirement, that items within pairs be equated as far as possible in the variables in which we have no interest, further reduces the materials. The same general considerations of design and procedure hold for the evaluations with each of the nominalization types. We shall work through two in some detail: an experiment in #ness, which we view as a validation of the paradigm; and an investigation in #er, which extends the conclusion suggested by the #ness case, and begins to explore the impact of idiosyncrasy in word-level semantics on the format of lexical representation. For the remaining cases, we shall simply point to patterns of effects: here, the number of items employed in preliminary experiments is rather small, and we intend these chiefly as a basis for speculation.


Table I
Summary of Item-set Construction

                                     Frequency Count
                                     Fp            Fc
Fc pairs (sharpness/briskness)       matched       contrasted
Fp pairs (happiness/heaviness)       contrasted    matched
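The pairwise decorrelation amounts to a simple selection rule over candidate pairs: keep those whose values match (within tolerance) on one count and differ widely on the other. The sketch below uses invented log-frequency values and arbitrary thresholds; none of the numbers come from the published counts.

```python
# Invented (Fp, Fc) log10 frequency values for some #ness nominals;
# purely illustrative, not Kucera and Francis figures.
items = {
    "sharpness": (0.9, 2.1), "briskness": (0.9, 1.2),
    "happiness": (1.5, 2.0), "heaviness": (0.6, 2.0),
}

def fc_pairs(items, match_tol=0.1, contrast_min=0.8):
    """Pairs matched on Fp (within match_tol) but contrasted on Fc
    (by at least contrast_min); thresholds are arbitrary choices."""
    names = sorted(items)
    out = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            (fp_a, fc_a), (fp_b, fc_b) = items[a], items[b]
            if abs(fp_a - fp_b) <= match_tol and abs(fc_a - fc_b) >= contrast_min:
                out.append((a, b))
    return out

print(fc_pairs(items))  # [('briskness', 'sharpness')]
```

Swapping the roles of the two counts in the test yields the complementary Fp pairs (contrasted on Fp, matched on Fc).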
There is one further methodological point which should be emphasized. Frequency of occurrence is not the only variable affecting lexical decision latencies, though it is one which is surprisingly powerful. The determinants of word recognition clearly involve more than the order in which candidate analyses are scanned. This has the following consequence. If we claim that Fc (relative frequency over a surface field of derivationally related forms) is the count at work controlling the frequency effect, this does not amount to a claim that surface forms served by the same lexical entry will be treated identically in word recognition; in particular, we do not expect that surface variants (for example, softness and soft) will be responded to with equal latency. Rather, we expect only that the contribution of the frequency effect to response time will be the same, where the total response time will reflect the operation of the several processes involved in word recognition. Thus, it is important in setting up pairwise contrasts to adjudicate between Fp and Fc, that we make every effort to avoid a contamination of the reaction time contrasts by variables other than the one we are focusing on. Hence pairs are constructed which are matched, as far as possible, for such variations in form as affixal structure, length, number of syllables, and so on. Much more vexing is the need to choose cases which unhook the two frequency


3.1 A lexical decision experiment with #ness nominals


Among the several kinds of items which the subject classifies as words and nonwords in the experiment, there are two critical subsets: the Fc item pairs (e.g., sharpness, briskness), which are matched on Fp and contrasted on Fc; here, pairwise differences in reaction time would support a view that nominalizations in #ness share lexical representation with their bases; and the Fp item pairs (e.g., happiness, heaviness), contrasted on Fp and matched on Fc; in this complementary case, reaction time differences are expected to be zero if lexical representation is shared. The available cases of #ness nominals permitted the construction of 18 Fc item pairs, with pairwise differences averaging 0.87 units, and 12 Fp item pairs, with pairwise differences averaging 0.98 units.

46 Juncture

Bradley: Lexical Representation of Derivational Relation

These 60 critical #ness words were appropriately distributed over a presentation order which also included 60 non-derived words (e.g., magazine, wizard), and 60 nonwords, either #ness nonwords (e.g., tabidness, lealness, where -ness is added to a nonword string) or 'non-derived' nonwords (e.g., pellock, bandicoy). Thirty subjects, native speakers of American English, made timed word/nonword decisions for each of the items of the set, under instructions urging fast and accurate performance.

It is the distribution of reaction times for the critical #ness cases which is of concern here. Table II summarizes the analysis of pairwise differences in item reaction time, for Fc and Fp sets. (Item pairs are listed with the form with higher frequency on the contrasted count appearing first.) It is evident that it is the frequency count that sums over the cluster of forms associated with a #ness nominal that is effective in determining response latency: Fc contrasts induce reaction time differences of 65 msec, reliably exceeding zero (t = 3.28, p < .01), whereas Fp contrasts induce a mean difference of 24 msec, which is not reliable (t = 1.38, p > .05). The above analysis is rather conservative; it includes pairs (starred in Table II) in which one member is apparently considered 'odd' by subjects, as indicated by an unusually high rate (greater than 10%) of misclassifications as nonwords.

This pattern of effects is, of course, exactly that which we associated with a simplifying lexicon: representation of a #ness nominalization is subsumed under that of its base. It is perhaps not surprising that this should turn out to be the case, given the remarkable productivity and regularity of nominalizing #ness, and indeed it is among forms like these that we ought to find evidence of simplification, if it is to hold for any of the derivations.
But it is important to keep in mind the point that the mental inventory and its attendant access systems have no obligation to exploit every available regularity, and it is unclear that notions of 'economy of storage' have substance when applied to human memory; as we noted earlier, it is more plausible that the governing principle is one of efficiency of access to an information array. A successful outcome with the #ness cases can be taken as validating the experimental program that we have launched, and that primarily is the significance we wish to attach to it at this point.
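As a check on the Table II summary statistics, the mean, standard deviation, and one-sample t can be recomputed from the listed pairwise D(RT) values for the Fc pairs. The sketch below is our own, not the authors' analysis procedure; the t obtained this way (3.37 on 17 df) is close to, though not identical with, the reported 3.28, the small difference presumably reflecting details of the original analysis.

```python
# Recompute the Fc-pair summary statistics of Table II from the
# item-level reaction time differences D(RT), in msec.
from math import sqrt

d_rt = [85, 221, 218, 106, 70, 51, 62, 63, -88, -36,
        79, 69, 176, 10, -29, 50, 2, 54]   # 18 Fc pairs, Table II

n = len(d_rt)
mean = sum(d_rt) / n
sd = sqrt(sum((x - mean) ** 2 for x in d_rt) / (n - 1))  # sample SD
t = mean / (sd / sqrt(n))                                # one-sample t, df = n - 1

print(round(mean, 1), round(sd, 1), round(t, 2))  # → 64.6 81.3 3.37
```

The recomputed mean (64.6) and standard deviation (81.3) match the table's summary rows exactly.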



TABLE II

Differences in reaction time, D(RT) in milliseconds, induced by differences in frequency of occurrence, D(F)

Fc Pairs
  Items                        D(Fc)   D(RT)
  prettiness/haughtiness        1.47     85
  *steadiness/giddiness         1.36    221
  abruptness/astuteness         1.27    218
  boldness/deftness             1.06    106
  emptiness/dreariness          1.05     70
  sharpness/briskness           1.04     51
  harshness/brashness           1.00     62
  sweetness/numbness            0.97     63
  wetness/slyness               0.88    -88
  paleness/rudeness             0.84    -36
  alertness/aloofness           0.82     79
  narrowness/shallowness        0.77     69
  *foolishness/pompousness      0.62    176
  correctness/politeness        0.60     10
  slowness/sadness              0.59    -29
  roughness/bluntness           0.56     50
  pleasantness/stubbornness     0.46      2
  gentleness/idleness           0.37     54
  MEAN                          0.87   64.6
  STANDARD DEVIATION            0.31   81.3

Fp Pairs
  Items                        D(Fp)   D(RT)
  weakness/roundness            1.72     17
  *brightness/greenness         1.26    139
  happiness/heaviness           1.16     33
  thickness/softness            1.07    -29
  illness/madness               1.04    -89
  awareness/quietness           1.03     33
  darkness/blackness            0.93     53
  seriousness/obviousness       0.90     89
  readiness/aloneness           0.88     52
  loneliness/weariness          0.65    -10
  greatness/smallness           0.56    -17
  ugliness/liveliness           0.55     22
  MEAN                          0.98   24.4
  STANDARD DEVIATION            0.32   58.5

3.2 Investigations with #er nominals

While words in #ness are by and large quite regular in meaning (with, roughly, X-ness = 'state or quality of being X'), there is somewhat less predictability in forms with agentive #er. We can pick out cases of two kinds: on the one hand, there are the regular, pure agentives (with, roughly, X-er = 'one who X's', e.g., suggester, embracer); and on the other hand, there are forms whose primary sense is more restricted (with, roughly, X-er = 'one who habitually or professionally X's', e.g., consumer, teacher). These latter cases present an interesting contrast: their regularity of form suggests that they should fall under the treatment afforded the #ness cases, so that they share lexical representation with their



bases; the departure from pure agentiveness of the meaning, however, raises the possibility that they have independent lexical status. We have no a priori basis for determining which of these criteria controls the format of lexical representation and, since it is evident that idiosyncracy in the sense relation of derived forms and their bases is entirely characteristic of derivation (e.g., Chomsky, 1970; Jackendoff, 1975), these are important cases.

3.2.1 'Lexicalized' agentives

Though the number of cases examined is less than was mustered for the #ness forms, it is clear that the agentives present the same pattern of effects. Table III summarizes the distribution of pairwise reaction time differences induced by Fc and Fp manipulations.

TABLE III

Differences in reaction time, D(RT) in milliseconds, induced by differences in frequency of occurrence, D(F)

Fc Pairs
  Items                        D(Fc)   D(RT)
  shopper/brewer                1.14     26
  exporter/embezzler            1.12     57
  climber/prowler               1.05     40
  smuggler/toddler              0.85     30
  jailer/skater                 0.81     22
  advertiser/prosecutor         0.57     43
  exhibitor/inheritor           0.40    146
  singer/baker                  0.37    -36
  MEAN                          0.79   41.0
  STANDARD DEVIATION            0.31   50.6

Fp Pairs
  Items                        D(Fp)   D(RT)
  teacher/keeper                1.70      7
  composer/reviewer             1.06      6
  reporter/beginner             0.95    -27
  manufacturer/experimenter     0.94      3
  explorer/deserter             0.85      2
  performer/announcer           0.70     43
  killer/drinker                0.64     -6
  builder/planner               0.60    -19
  consumer/survivor             0.49     21
  farmer/fighter                0.40     26
  MEAN                          0.83    5.6
  STANDARD DEVIATION            0.37   20.8

Response time for the classification of these non-pure agentives reflects the influence of a frequency variable which sums over the forms themselves and their associated bases: Fc contrasts between item pairs induce reaction time differences of 41 msec, reliably exceeding zero (t = 2.10, p < .01). Frequency of occurrence of the agentive forms themselves does not influence response time: Fp contrasts induce a non-reliable difference of 6 msec (t = 0.81, p > .05). In sum, the experimental evaluation supports a claim that such agentives, regular in form though somewhat irregular in sense, share lexical representation with their bases.

3.2.2 Pure agentives

What's true of the treatment of those agentives showing restriction of meaning must surely be true of pure agentives: the highly productive forms, regular in form and meaning, are the strongest candidates for a format in representation which we have characterized as a simplification of the inventory. But logical necessity does not guarantee experimental success, and it is in just these cases that our experimental evaluation runs aground. Manipulations of neither Fp nor Fc lead to reliable differences in reaction time, and that this should be so presents a cautionary tale. We have noted earlier that it is an important assumption of the experimental program, that Fc and Fp should be manipulable independent of other variation in the nature of items.5 But in these pure agentive cases, there is an almost inevitable confounding of frequency variation with familiarity, in the sense of attested wordhood. Pure agentives which occur in text counts rather infrequently (e.g., briber, stumbler) tend to be nonce-forms, and are treated as such by subjects when they are presented, for purposes of experiment, in isolation: error rates (misclassifications as nonwords) tend to be high, as is the variability of response time. Notice, though, that such problems cannot arise with the 'lexicalized' agentives. The very fact that a form has acquired a restricted sense guarantees a separation of frequency and familiarity.

3.3 Nominals in #ment

Though #ment nominals exhibit a degree of idiosyncracy in the relation of derived and base forms that is greater still than the agentive forms, for these forms also we can support a claim of lexical simplification. Table IV summarizes the relevant contrasts, and associated latency differences. Manipulations of the frequency count summing over the cluster of related forms reliably induce a reaction time difference of 44 msec (t = 2.33, p < .05); and conversely, manipulations of a frequency count restricted to the nominals themselves do not lead to any reliable difference in reaction time (t = -0.45, p > .05).

5 Notice that this is a problem which is peculiarly acute when we look for variation in the frequency distributions of inflected forms. A verb which has its predominant usage as past, for example, will as a consequence show a healthy (and unusual) separation of Fp and Fc when considered in its present form. But the fact that the primary usage is past will mar its isolated presentation as present. It is fortunate, then, that we can indirectly determine the status of inflectional variations via our interpretations of effects with the less delicate derivational cases.

3.4 Nominals in +ion

The rather stable patterns of effects that we established for nominalizations in #ness, #er, and #ment can plausibly be understood to reflect a system of lexical structure and lexical access which is relatively insensitive to the sense relation of base and derived words, and which is influenced, rather, by a regularity of form. Words derived under affixation with +ion present a case which is interestingly removed from the ones we have been considering: the sound relation of base and derived forms is rule-governed, as before, but complex. There is a clear contrast to be drawn between the 'neutral affixes' (-ness, -er, and -ment) and the class of affixes of which -ion is an instance, whose presence may perturb, for example, the stress pattern characteristic of the isolated base. A distinction between word and formative boundary affixation encapsulates a host of phonological facts (Chomsky and Halle, 1968; Siegel, 1974), which we might summarize in terms of the obviousness of the surface correspondence of derived words and their associated bases. What is of interest in the cases of formative boundary derivation is that a regularity of sound patterning is not invariably accompanied by any transparent similarity of surface forms.

We have been developing a view of a recognition system where representations in the mental inventory are contacted via access routines committed to a minimal and rather superficial analysis of inputs. This can be taken to suggest that just those phonological facts which motivate a distinction between word and formative boundary derivation in grammatical descriptions will underlie a parallel distinction in the treatment afforded derived and base forms in recognition. In cases where strictly superficial analyses of derivationally related inputs fail to provide converging descriptions, we might expect the simplifying format of lexical representation to be abandoned. Formative-boundary derivations, as a class, present just such a case. With the words in +ion that we have examined experimentally, there is some initial indication of a treatment in recognition which contrasts with that for the neutral affix forms. That is, manipulations of Fc do not induce reaction time differences of the sort we had observed previously. The trademark we have associated with a simplifying lexicon is absent, and we have no support for a claim that forms derived in +ion share lexical representation with their bases.

The complementary claim, that these forms are represented independent of their bases, is unfortunately not supported either: response latency is not sensitive to variations in the relative frequency of the derived forms themselves. Table V summarizes the patterns of contrast. The mixed outcome of our investigations of nominals in +ion provides no assurance for the strongest interpretation, but does open up an interesting avenue of speculation.

4. Discussion

As a basis for more general remarks, we assume that the experimental evidence we have laid out is properly summarized as suggesting a recognition contrast between forms in -ness, -er, and -ment, on the one hand, and forms in -ion, on the other. The sensitivity of response latency in the former cases to Fc, a frequency count over a field of derivationally related forms, and the insensitivity to Fc in the latter, is taken to reflect a divergence in the format of lexical representation: words in -ness, -er and -ment share representation in the mental inventory with


TABLE IV

Differences in reaction time, D(RT) in milliseconds, induced by differences in frequency of occurrence, D(F)

Fc Pairs
  Items                          D(Fc)   D(RT)
  inducement/atonement            1.03    -27
  postponement/bombardment        0.98     32
  involvement/resentment          0.86     26
  replacement/displacement        0.56     66
  employment/investment           0.52     53
  advancement/attainment          0.51    144
  enjoyment/enforcement           0.39    -31
  attachment/enrollment           0.34     55
  announcement/assessment         0.29     82
  MEAN                            0.61   44.4
  STANDARD DEVIATION              0.28   53.9

Fp Pairs
  Items                          D(Fp)   D(RT)
  agreement/inducement            0.90      0
  entertainment/advertisement     0.81    -30
  resentment/inducement           0.78     42
  allotment/attachment            0.74    -96
  refinement/concealment          0.70     27
  arrangement/enjoyment           0.54     38
  amendment/amazement             0.49    -47
  MEAN                            0.71   -9.4
  STANDARD DEVIATION              0.15   51.1




TABLE V

Differences in reaction time, D(RT) in milliseconds, induced by differences in frequency of occurrence, D(F)

Fc Pairs
  Items                          D(Fc)   D(RT)
  dictation/obstruction           0.64    -16
  cultivation/excavation          0.60    148
  prevention/invention            0.56    -44
  hesitation/humiliation          0.51    -52
  construction/compression        0.48    -31
  indication/illustration         0.44    -21
  suggestion/instruction          0.43     42
  adoption/infection              0.37     24
  isolation/circulation           0.36    -19
  calculation/dedication          0.19    -15
  MEAN                            0.46    1.6
  STANDARD DEVIATION              0.13   58.8

Fp Pairs
  Items                          D(Fp)   D(RT)
  correlation/elevation           0.78    -97
  innovation/devastation          0.74    131
  exploration/derivation          0.71    115
  discrimination/exaggeration     0.58     65
  construction/prevention         0.57     21
  population/legislation          0.49     86
  restriction/rejection           0.46    -14
  division/extension              0.45    -47
  instruction/attraction          0.41    -78
  appreciation/elimination        0.36     -2
  MEAN                            0.56   18.0
  STANDARD DEVIATION              0.15   79.7

their base forms, while words in -ion do not. In the terms in which we have spoken previously, the lexical inventory is simplified to express derivational relation for some cases, but not for all. We want to claim, further, that the separation of cases in -ness, -er and -ment from cases in -ion is to be seen, more generally, in terms of the distinction between word and formative boundary affixation. That is, it is primarily on the basis of properties of form that the cut is placed in the 'derivational continuum'; and it is unclear that regularity in the sense relation of derived and base forms acts as a criterion for the determination of the format of lexical representation. To say that the distinction is to be understood in terms of the contrast of word and formative boundary affixation is, of course, a summary device. It is the morphophonological regularities which are encapsulated in the word-boundary/formative-boundary contrast which drive the processing distinction. In particular, we have suggested, it is the degree to which there is surface similarity of derived words and their bases that dictates the form of lexical representation: in cases with, e.g., #ness, the access procedures may, with no special apparatus, recover a version of the base-plus-affix structure, while in cases with, e.g., +ion, this will not in general be possible.

We have proposed a treatment of # and + cases which falls out of a consideration of the task with which a word recognition system is confronted, together with a view that here, efficiency is all. A general problem arises, though, and it is one for which there are no compelling answers available. It will not be the case for every instance of formative boundary derivation (for, e.g., every deverbal form in -ion) that a perturbation of the base form is induced. We might ask, then, whether the computational solution is a principled one, or whether it operates case by case. That is, will there be isolated cases of derived forms in +ion which, idiosyncratically, are treated like word-boundary cases? While a neatness of mind leads us to prefer the principled solution, there is no evidence at hand to suggest that it will be so. We could speculate, though, that insofar as there are any regular consequences, syntactic or interpretative, of a given affixal type, the cases will be treated uniformly. The most striking (though by no means the only) phonological correlate of the distinction of junctural types, in English, is in terms of susceptibility to stress alternations, blocked by #, and permitted by +. There is some evidence to be brought to bear, to justify a proposal that this single factor is paramount in determining the form of lexical representation.

Fay and Cutler (1977), in their study of 'malapropisms' (word-substitutions in fluent speech, not semantically motivated) observe a powerful constraint on the relation of targets, the intended utterances, and intrusions: overwhelmingly the forms agree in stress. This can be interpreted (e.g., Fay, 1978) as evidence that word-level stress is lexically represented. In this view, the arrangement of listings in the lexical inventory will reflect surface stress patterns, for it is the arrangement of the inventory which is taken as the determinant of the occurrence of an intrusion for a specified target. And, where surface stress acts as a guideline for organization, this amounts to a representation of stress in the lexicon. Should the 'production' lexicon be significantly related to the 'comprehension' lexicon (and this is Fay and Cutler's point), a lexical representation of stress will dictate just the separation of word and formative boundary cases that we have observed.

It has often been noted in studies of the problem of the segmentation and recognition of words in running speech that the stressed syllable has a special role: syllables bearing main stress are loudest, longest, and have the greatest clarity. In that high-stressed syllables might be taken to represent islands of


relative certainty in the speech stream, it is reasonable to hypothesize a recognition system which makes use of the best information it has, as follows: a preliminary analysis picks out the stressed syllables, and the lexical inventory is addressed with that information solely; candidate representations are selected on the basis of the single syllable, their appropriateness being evaluated on the basis of the fit of the adjacent (left and right) strings in the speech stream. This proposal has, of course, very much the flavor of the one we laid out earlier for the (visual) recognition of orthographic forms. But it makes a demand that is not strictly necessary in the visual case: if the stressed syllable is the primary guide to word identification, then forms which are derivationally related but differ in the placement of main stress must inevitably have independent lexical representation. Again the separation of the word-boundary and formative-boundary cases seems necessary.

The convergence of treatments that we are suggesting here for acoustic and visual word recognition is a well-motivated one. The chief vehicle of language behavior is speaking and listening, not writing and reading. It is entirely appropriate that organization in the secondary mode should fall to the dictates of the primary one. The result is of the right sort, then, and this should count as support of the position we have put forward.

Finally, it must be emphasized that the denial of explicit operations of structure-recovery that we have issued in the case of a word recognition system reflects a conviction that this is indeed a special case. The lexicon is a rich source of information which can determine the options to be taken by sentence processing devices. The demands on such devices are great, and yet we produce and understand utterances with an evident ease, even under adverse circumstances.
It is thus plausible to propose a system which allows ready access to the lexical inventory, under principles of operation governed primarily by questions of efficiency. An exploitation of that kind will, however, only succeed where the types to be recognized are finite in number. Finiteness is the special property of the vocabulary.
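The access scheme sketched above (stressed-syllable lookup, followed by evaluation of the flanking strings) can be illustrated in a toy Python sketch. This is our illustration, not a model from the paper; the lexicon, syllabifications, stress markings, and scoring function are all invented for the example.

```python
# Toy illustration of stressed-syllable-first lexical access: index the
# inventory by the main-stressed syllable, retrieve candidates on that
# syllable alone, then rank candidates by fit of the adjacent material.

LEXICON = {
    # word: (syllables, index of main-stressed syllable) -- invented entries
    "magazine": (["ma", "ga", "zine"], 2),
    "wizard":   (["wi", "zard"], 0),
    "bazaar":   (["ba", "zaar"], 1),
}

def index_by_stress(lexicon):
    """Build a stressed-syllable -> candidate-words index."""
    index = {}
    for word, (sylls, stress) in lexicon.items():
        index.setdefault(sylls[stress], []).append(word)
    return index

def recognize(stressed, left, right, index):
    """Address the inventory with the stressed syllable only, then score
    each candidate by how well it fits the left and right context."""
    def fit(word):
        return (left in word) + (right in word)  # crude flank match
    candidates = index.get(stressed, [])
    return max(candidates, key=fit) if candidates else None

idx = index_by_stress(LEXICON)
print(recognize("zine", left="maga", right="", index=idx))  # → magazine
```

The point of the sketch is only the division of labor: retrieval consults nothing but the stressed syllable, and the flanking strings serve solely to adjudicate among the retrieved candidates.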

Bradley, D. 1978. Computational distinctions of vocabulary type. Ph.D. dissertation, M.I.T.
Chomsky, N. 1970. "Remarks on nominalizations." Readings in English Transformational Grammar, ed. R. Jacobs and R. Rosenbaum. Waltham, Mass.: Blaisdell Press.
Chomsky, N., and M. Halle. 1968. The Sound Pattern of English. New York: Harper and Row.
Fay, D. 1978. "On the organization of the mental lexicon." Paper presented to the Workshop on the Mental Representation of Phonology, Amherst, Mass., November 1978.
Fay, D., and A. Cutler. 1977. "Malapropisms and the structure of the mental lexicon." Linguistic Inquiry 8:505-520.
Forster, K.I. 1976. "Accessing the mental lexicon." New Approaches to Language Mechanisms, ed. R.J. Wales and E. Walker. Amsterdam: North Holland.



Jackendoff, R. 1975. "Morphological and semantic regularities in the lexicon." Language 51:639-71.
Kucera, H., and W.N. Francis. 1967. Computational Analysis of Present-Day American English. Providence, R.I.: Brown University Press.
MacKay, D.G. 1976. "On the retrieval and lexical structure of verbs." Journal of Verbal Learning and Verbal Behavior 15:169-82.
Manelis, L., and D.A. Tharp. 1977. "The processing of affixed words." Memory and Cognition 5:690-95.
Murrell, G.A., and J. Morton. 1974. "Word recognition and morphemic structure." Journal of Experimental Psychology 102:963-68.
Siegel, D.C. 1974. Topics in English Morphology. Ph.D. dissertation, M.I.T.
Taft, M. 1976. Morphological and Syllabic Analysis in Word Recognition. Ph.D. dissertation, Monash University.
Taft, M. "Recognition of words via an orthographic access code: The Basic Orthographic Syllabic Structure (BOSS)." Forthcoming.
Taft, M., and K.I. Forster. 1975. "Lexical storage and retrieval of prefixed words." Journal of Verbal Learning and Verbal Behavior 14:638-47.
Taft, M., and K.I. Forster. 1976. "Lexical storage and retrieval of polymorphic and polysyllabic words." Journal of Verbal Learning and Verbal Behavior 15:607-20.
Whaley, C.P. 1978. "Word-nonword classification time." Journal of Verbal Learning and Verbal Behavior 17:143-54.


On the Phonological Definition of Boundaries











1. Current problems in the theory of phonological boundaries

Throughout the history of American linguistics, junctures have never been elements of what, following Greenberg (1970), we can call "the language of observation" of phonology, "resting in some reasonably direct way on a body of observation"; they have, rather, always been more abstract entities, to be sure ultimately connected to phenomena observable in the acoustic record, but via theoretical postulates which, being among the most complex, indirect and abstract in phonology, have been much disputed and little understood. As theoretical entities, junctures are currently under attack from many sides. For example, it is often argued that phonological substance is wrongly ascribed to junctures, that they should be replaced with m(orpho)s(yntactic) boundaries. The school of so-called N(atural) G(enerative) P(honology), sharing Hockett's distrust of phonological entities not "always involving identifiable phonetic material" (Hockett, 1955:172) (but certainly less tolerant of phonetic heterogeneity than Hockett), requires phonological boundaries to "have a necessary and consistent phonetic manifestation;" they must be "determined by phonetic means" (Hooper, 1976:14). Thus NGP claims that the only 'true' phonological boundaries are syllable boundary ($) and P(ause). Word boundaries (##), clitic boundaries (#) and compound boundaries are classified along with diacritics, syntactic and morphological category labels, semantic classes and so on. These latter


are "determined by syntactic and semantic means" (Hooper, 1976:14), and are NOT permitted in the phonology.1

Crucial to the theoretical framework of NGP and allied approaches is the classification of rules into mutually exclusive types on the basis of the information to which they have access, so as to demarcate phonology from the rest of the grammar. "The claim is that speakers construct phonetic generalizations only where [alternations] are regular and transparent" (Hooper, 1976:16). Whereas morphophonemic type rules "take part in the sound-meaning correspondence of a language and ... are apt to be phonologically quite arbitrary," "a causal relation between the phonetic environment and the structural change of the rule is postulated" for true phonological rules (Hooper, 1976:16-17). The limitation of phonology to processes which are causally determined by phonetic environments necessarily requires that only boundaries that can be analyzed as having phonological substance can appear in phonological rules. We do not dispute this, but rather we question the restrictive NGP interpretation of phonological substance.

There are a large number of alternations which are "regular and transparent" in all respects except that they do not satisfy the NGP strictures concerning phonological boundaries (see below §3 and §4). These alternations cannot be treated phonologically in a theory such as NGP, yet they are prima facie phonological. The issue is, therefore, not merely what class to assign a rule to, but the adequacy of a theory of phonology to account for its explananda. We shall argue that these alternations can be given a genuinely scientific explanation in terms of causally formulated phonological processes within a theoretical framework that permits a more realistic and integrated analysis of phonological boundaries.

1.1 Limitations of pause boundary and the inadequacy of lexicalizing adpausal variants


It is interesting to note that with the restriction of phonological boundaries to $ and pause, the old structuralist problem of the zero allophone of pause reemerges in a modern terminological guise. Consider the following situation. There is an alternation between x and y at a set of syntactic brackets (A); furthermore many instances of y are contiguous to pause. NGP can formulate the phonological rule

(1) x → y / __ P

This is what the P boundary is for. However, a sub-set of (A), say (B), never shows pause (in a given phonostyle), and further a sub-set (C) never shows pause in any normal phonostyle. In NGP it is not possible to account for the x - y alternation in (B) and (C) by a phonological rule, due to the absence of a true phonological boundary. Thus either the same alternation must be split into two or more parts of the grammar or the erstwhile phonological instances must be dephonologized. The latter alternative considerably reduces the value of the pause boundary. Moreover, if the x - y alternation is a typical low-level phonetic one, either alternative seems unattractive. In §4.3 below we discuss several cases related to this situation.

Devine & Stephens: On the Phonological Definition of Boundaries

One way of treating adpausal variants that also occur in some non-pausal environments is suggested in an allied approach by Vennemann (1974), whereby the adpausal allophone is entered in the lexical representation. Such lexicalization has the effect of eliminating the alternation, and thus the problem of accounting for it in a synchronic grammar. The erstwhile allophones x and y simply occur where they do.2 In this treatment x and y actually cease to be allophones, and if the fact that x and y are related at all is to be captured, it would apparently be captured not as part of the phonology. But in effect this whole procedure is a radical departure from the minimal goals of phonological analysis. In many cases the variant y may be only minimally different phonetically from x, i.e. differing only in scalar values of features, e.g. a few milliseconds greater in closure time if a stop. To accord lexical status to the sort of phonetic difference that is nowhere contrastive in the phonological system of a language amounts to abandoning phonology for transcription. In fact, in the extreme, it reduces to a proposal made by Wickelgren (1969) to get around problems of coarticulation whereby all contextual variants are simply listed in their environments; this would entail brain storage and retrieval of a list of an order of magnitude of 10⁵ or even 10⁶ unrelated items (see Kent and Minifie, 1977). Needless to say, this proposal has not met with much acceptance.

Clearly, one way of eliminating theoretical problems in phonology is to eliminate theory, but linguists would be justified in judging this as throwing the baby out with the bath water.

1 .2 Covert word boundaries NGP adopts the position that the 'new' rules giving rise to soun~ change, however constrained and distributed through the lexicon at later penods, are purely phonological rules, i.e. they have absolutely no ~~nditioning :actors such as word boundaries. Supposing that some stable defimtJon can be given t~ th~ term 'new', we have apparently the opportunity for an unusually direct e~ammation of the consequences of the NGP classification of rules as phonological vs. mor-


Like rule ordering, violations of the so-called 'true generalization condition', and much else that has come under its interdict, these non-phonological boundaries are not completely banned from the phonology in NGP, since they are needed to account for phonosty listic variation and sound change in progress. Indeed non-phonological boundaries are welcome in true phonological rules so long as they 'block' but do not 'condition' (a distinction not explicitly defined) those rules (Hooper, 1976:15).


y may then be used to account for other boundary sensitive, segmentally conditioned alternations:

) 51Ly~


l -z

1sif l #Si I

becomes , not x-> z / __

#S; #S;, but y-> z 1--51,

Devine & Stephens : On the Phonological Definition of Boundaries

60 Juncture

phologized (we shall use the latter term broadly to refer to syntactically as well as morphologically constrained rules). Consider the example of Spanish apocope, which is analyzed as a 'true' phonological process (Hooper, 1976: 105ff.). Philological data are cited from Menendez-Pidal to demonstrate that originally e-deletion ''optionally affected every word final e in the language (syllable structure permitting)" (Hooper, 1976:106). The rule formulating the apocope process, however, must not make reference to word boundary, for then "it would have a # in its SD, and it would be a [non-phonological] sandhi rule. " 3 The following formulation is adopted by NGP: (2)









The common phenomenon of phrase final lengthening (as e.g. in English, Russian, Spanish, German, French too) is there explained in the same way. Previous research on position in utterance duration for English (Oller, 1973; Umeda, 1975; Klatt, 1976) has not explicitly tested the applicability of a Lindblom type model. Rather attention has been directed to the measurement of durational increments at word and syntactic boundaries. This research resulted in the proposal of various types of minimum duration-discrete percentage change models (Klatt, 1976), which have the basic form

(8) D = (Π ki)(D̄ − Dmin) + Dmin

where D̄ is the inherent and Dmin the minimum duration, and the ki are the percentage effects of such factors as nature of postvocalic stop, stress, and boundary type. Such a model corresponds to the conventions of generative phonology in that each ki represents the effect of a discrete, context sensitive rule. The concept of a unit of temporal organization is not explicitly utilized. It is possible to show, however, based on the data reported in graphical form in Oller (1973) for the stressed nonsense syllable ['ba], that, at least for the final word in declarative sentences, English syllable duration is apparently fitted quite well by a simplified form of the Lindblom formula, namely

(9) D = Db / (a + 1)^α
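The two model families can be contrasted in a few lines of code. The sketch below is illustrative only: the function names and all duration values are our own assumptions, except that the default α = 0.4 and β = 0.1 are the Swedish values reported in the text (Lyberg, 1977).

```python
# Sketch of the two duration-model families discussed above.
# All numeric values are illustrative assumptions, not fitted data.

def klatt_duration(d_inherent, d_min, factors):
    """Minimum duration-discrete percentage change model, form (8):
    each k in `factors` is the percentage effect of one discrete,
    context-sensitive rule (stress, boundary type, and so on)."""
    k = 1.0
    for ki in factors:
        k *= ki
    return k * (d_inherent - d_min) + d_min

def lindblom_duration(d_base, a, b, alpha=0.4, beta=0.1):
    """Lindblom-type model, form (6): D = Db * (a+1)**-alpha * (b+1)**-beta,
    with a syllables following and b syllables preceding in the word-level
    unit. Defaults are the Swedish values reported by Lyberg (1977)."""
    return d_base * (a + 1) ** -alpha * (b + 1) ** -beta

# Klatt-style: 200 ms inherent, 80 ms minimum, two shortening rules.
print(round(klatt_duration(200.0, 80.0, [0.85, 0.9]), 1))

# Lindblom-style: durations across a 4-syllable word (positions 1..4).
n = 4
durs = [round(lindblom_duration(200.0, a=n - i, b=i - 1), 1)
        for i in range(1, n + 1)]
print(durs)  # final syllable longest when alpha > beta (final lengthening)
```

The qualitative difference is the one the text draws: the Klatt-style model stacks discrete rule effects, while the Lindblom-style model derives edge lengthening from continuous adjustment within a unit of temporal organization.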

In addition to processes of temporal organization and integration, there are other regular, transparent and productive processes depending on boundaries. In these rules, certain boundaries claimed to be non-phonological by NGP (such as word and compound boundaries) seem to form a natural, hierarchically ordered class with boundaries such as syllable and pause, the phonological status of which is not doubted. It is extremely common in languages to find that the processes classified as strengthening or weakening in NGP are not identical at all syllable boundaries but vary in both degree and extent of application according as the syllable boundary is or is not pre-/postpausal or word initial/final. As an example of the effect of word boundary, when stress, syllable boundary and segmental environment are held constant, consider English /t/: both the closure time and the aspiration period of postvocalic, syllable initial /t/ before stressed vowel are sensitive to the presence of a word boundary; the closure time of /t/ in V#'tV is around 20 msec. greater than in V'tV; the aspiration time is also somewhat greater in the former (Umeda, 1977; 1978). Note also Korean: duration of phonemic aspiration in medial position is about half of what it is in initial position (Han and Weizman, 1970:112); similarly in Bengali (Haig, 1958). Conversely, in Chontal Mayan and Arapaho (Devine, 1974:134), it is reported that aspiration (or aspiration-like release) of voiceless stops is heavy before a pause but light syllable finally before consonants. Three degrees of variation of aspiration can frequently be found in the hierarchy postpausal - word initial - syllable initial (see e.g. Umeda and Coker, 1974). There are other graduated allophonic phenomena as well. For


Specifically, at the level of the word, the duration of a given syllable type (segmental and stress factors held constant) is dependent both on word length in syllables and the position in the word. Syllable duration is adjusted with respect to the number of preceding and the number of following syllables within the word level unit. This pattern of variation has been accounted for by a formula of the basic form proposed by Lindblom (1975): (6)

Dw = Db (a + 1)^-α (b + 1)^-β,  a + b = n − 1
where Dw is the duration of the syllable type in a word of n syllables, Db the theoretical base duration of the syllable type, a ('after') the number of syllables following, and b ('before') the number of syllables preceding in the word. α and β are parameters which reflect the strength of the adjustments and may be interpreted as theoretical assimilation parameters. For Swedish, values of α = 0.4 and of β = 0.1 were obtained (Lyberg, 1977:53). With such values the above formula will predict the phenomenon of word final lengthening, and such final lengthening will be explained in the same terms of continuous durational adjustment within a unit of temporal organization as are the durations of the initial and medial syllables of that unit. In fact, the formula predicts final lengthening whenever α > β. Using the method of undetermined Lagrange multipliers, it can also be shown that the ordinal position in the unit of temporal adjustment of length n at which a given syllable type receives its minimum duration will be

(7) 1 + [ (αn − β) / (α + β) ]

(i.e. the integer values closest to the expression in brackets.) The duration of syllables within higher level units such as the phrase are obtained by recursive application of the formula to the adjusted word level durations. The common phenomenon of phrase final lengthening (as e.g. in English, Russian, Spanish, German, French too) is therefore explained in the same way. Previous research on position in utterance duration for English (Oller, 1973; Umeda, 1975; Klatt, 1976) has not explicitly tested the applicability of a Lindblom type model. Rather attention has been directed to the measurement of durational increments at word and syntactic boundaries. This research resulted in the proposal of various types of minimum duration-discrete percentage change models (Klatt, 1976).

n). I assume an assimilation rule taking [ŋt] to [nt] is operative here (cf. often-heard [lenθ] for length); the remainder of the derivation parallels that of [wiDr] from /winter/, for example; cf. footnote 5.


substantiate this claim are fully productive,17 there can be no doubt that they are psychologically real and therefore relevant to any study of what sorts of rules phonological theory should allow. I have further argued that when one has set down rules to account for the perceived syllable structure of English words and phrases, it becomes possible to rewrite the aspiration, glottalization, and voicing rules in an extremely simple and natural form, and a form in which the presence or absence of /#/ at any point in the input string is irrelevant to the operation of the rules. I conclude from these observations that the theory should be broadened to allow phonological rules which make reference to previously-assigned syllable structure18 and ignore all syntactic information. Hopefully, these observations will in addition lead to the discovery of ways to simultaneously constrain phonological theory, so that the set of permitted rule types corresponds more closely to the kinds of rules actually occurring in the phonologies of natural languages.
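The flavor of such syllable-conditioned rules can be conveyed with a toy implementation. This sketch is not Kahn's analysis: syllabification is supplied by hand ('.'-delimited, with ' marking a stressed syllable), only aspiration is modeled, and the notation is our own simplifying assumption.

```python
# Toy illustration: aspiration stated over syllable structure rather
# than word boundaries. Syllabification is given directly in the input
# instead of being derived by rule, which is a deliberate simplification.

VOICELESS_STOPS = set('ptk')

def aspirate(syllabified):
    """Insert 'h' after a voiceless stop that is syllable-initial in a
    stressed syllable (stress marked by a leading ')."""
    out = []
    for syl in syllabified.split('.'):
        stressed = syl.startswith("'")
        body = syl.lstrip("'")
        if stressed and body and body[0] in VOICELESS_STOPS:
            body = body[0] + 'h' + body[1:]
        out.append(("'" if stressed else '') + body)
    return '.'.join(out)

print(aspirate("'tis"))    # hypothetical 'tiss': stop is syllable-initial
print(aspirate("'stis"))   # hypothetical 'stiss': s-initial, no aspiration
```

Note that no reference to /#/ is needed: the rule sees only the previously assigned syllable structure, which is the point of the argument above.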

Bailey, C.-J. 1968. "Dialectal differences in the syllabication of non-nasal sonorants in American English." General Linguistics 8,2:79-91.
---. 1975. "Gradience in English syllabization and a concept of unmarked syllabization." Unpublished ms.
Chomsky, N., and M. Halle. 1968. The Sound Pattern of English. New York: Harper and Row.
Goldsmith, J. 1976. Autosegmental Phonology. M.I.T. dissertation, available from Indiana University Linguistics Club.
Halle, M., and K. Stevens. 1971. "A note on laryngeal features." QPR 101, RLE, M.I.T.
Hoard, J.E. 1971. "Aspiration, tenseness, and syllabication in English." Language 47:133-40.
Hooper, J.B. 1972. "The syllable in phonological theory." Language 48:525-40.
---. 1975. "The archisegment in Natural Generative Phonology." Language 51:536-60.
Jakobson, R. 1941. Kindersprache, Aphasie, und allgemeine Lautgesetze.
Kahn, D. 1976. Syllable-Based Generalizations in English Phonology. M.I.T. dissertation, available from Indiana University Linguistics Club.
Malecot, A. 1960. "Vowel nasality as a distinctive feature in American English." Language 36:222-9.
SPE = Chomsky and Halle (1968).

17 Consider, for example, hypothetical words tiss, stiss, and gret. The first must be pronounced [tʰɪs] and the second [stɪs]; the last would be [grɛtʔ] in isolation, and, assuming it to be a verb, the progressive gretting and the phrase let's gret Ann would display [D].
18 Other phonologists, of course, have suggested that certain phonological rules have syllabic conditioning (cf., e.g., Vennemann, 1974, 1972; Hooper, 1975, 1972; Stampe, 1972; Hoard, 1971; Bailey, 1968, 1975). The approach to syllabic phonology taken here differs on a number of points from that of these earlier workers, for example in the postulation of phonologically significant ambisyllabicity.

Kahn: Syllable-Structure Specifications


Stampe, D. How I Spent my Summer Vacation. Ohio State University.
Trager, G.L., and B. Bloch. 1941. "The syllabic phonemes of English." Language 17:223-246.
Vennemann, T. 1972. "On the theory of syllabic phonology." Linguistische Berichte 18:1-8.
---. 1974. "Words and syllables in Natural Generative Grammar." Papers from the Parasession on Natural Phonology. Chicago Linguistic Society.

Prosodic Domains in Phonology: Sanskrit Revisited

ELISABETH O. SELKIRK
University of Massachusetts, Amherst


As has been shown time and again, significant phonological properties of sentences follow from their syntactic properties, and not vice versa. A generative grammar mirrors this fact by casting the set of rules comprising the phonological component as interpretive of the syntax. According to the 'extended standard theory' (Chomsky, 1972), which we assume here, a grammar associates with every sentence a set of syntactic representations {S1, ..., Si}, where S1 is the deep structure and Si is the surface structure. (This set may be called the syntactic derivation of a sentence.) According to the standard theory of generative phonology, as outlined in Chomsky and Halle (1968; hereinafter SPE), surface structure, modified in accordance with a set of readjustment rules, provides the input to the phonological component. The output is the phonetic representation of the sentence. Given the standard view, the rules of the phonology may be thought of as simply picking up the derivation of the sentence where the transformations (and readjustment rules) leave off. Just like a transformation, a rule of the phonology is characterized as one which defines a mapping between two representations, and the nature of the representations to which phonological rules apply is characterized as being in principle no different from that of the syntactic representations generated by the syntactic components. The phonological component is thus seen as defining a derivation {Sj, ..., Sn}. Each representation Sj through Sn is a syntactic representation, i.e. a well-formed labelled bracketing of formatives. In standard phonology, Sj is referred to as the phonological or systematic

Selkirk: Prosodic Domains


phonemic representation. The name phonetic representation is applied to Sn.1 The phonological representation Sj is not identical to, but is thought of as being very close to, the syntactic surface structure Si. The readjustment component provides the 'link' between Si and Sj (SPE:61). (One function of the SPE readjustment component is to introduce boundaries into the surface structure tree; boundaries are conceived of as grammatical formatives which are elements of the terminal string.) Together, then, in the standard view, the syntactic, readjustment, and phonological components of a grammar define a derivation {S1, ..., Si, ..., Sj, ..., Sn} for every sentence of the language. The conception of the mapping from surface structure to phonetic representation that we seek to develop here and elsewhere (cf. Selkirk, forthcoming) is different from that of standard phonology in a number of respects. According to the revised version of the theory that we propose, a distinction is made between two types of representation. The first is syntactic representation, of the sort described. We assume here, in the spirit of standard phonology, that there may be a set of rules in a grammar which are phonological in character and which operate in terms of syntactic representation, i.e. in terms of the labelled bracketing of the sentence. These rules, which we will call LB-domain rules, define a derivation {Si, ..., Sk} (called the phonosyntactic derivation). The second type of representation in our theory we will call phonological representation. Its defining properties are distinct from those of syntactic representation in that the relations between words of a sentence are not expressed in terms of labelled bracketing, but rather in terms of suprasegmental entities we will call prosodic domains.
A mapping s/p is defined in a grammar which takes the syntactic representation Sk into a phonological representation, P1, with prosodic domains, and an additional set of phonological rules, which we will call prosodic domain rules, defines a derivation {P1, ..., Pn}, where Pn is the phonetic representation of the sentence. (This derivation will be referred to as the phonological derivation.) Thus, according to the proposed theory, the phonological component consists of two subcomponents: two distinct blocks of rules whose domains are defined in terms of two distinct types of representations: (1)

The Proposed Theory

{Si, ..., Sk}   =s/p=>   {P1, ..., Pn}
LB-domain rules          Prosodic domain rules

1 In SPE, a phonetic representation is not considered to contain labelled bracketing, but only because that bracketing is progressively erased as part of the SPE algorithm for the application of cyclic rules. It can be shown (cf. Selkirk, forthcoming) that labelled bracketing cannot be erased in the course of the cycle, so that if phonetic representation is to be without it, it would have to be erased in toto just before, i.e. from the representation Sn-1.


The focus in this paper will be on prosodic domains, in particular those which are of a size the same as or larger than the word. Our investigations have led us to posit the existence of at least four progressively larger sorts of prosodic domain: the word, or W-domain; the phonological phrase, or F-domain; the intonational phrase, or I-domain; and the utterance, or U-domain. We do not commit ourselves to the claim that these four exhaust the set of prosodic domains of word level or above. They are simply those for which we have found motivation. Nor do we claim that all languages exhibit all types of domain. What we are developing is a notion of a universal repertory of domain types from which languages may draw.2 These domains are arranged in hierarchical fashion. In this paper, we adopt the practice of representing them as in (2) or the equivalent (3).





(2) [tree diagram of the hierarchically arranged prosodic domains, equivalent to the labelled bracketing in (3)]

(3) U( I( F( W( ... )W W( ... )W W( ... )W )F )I I( F( W( ... )W W( ... )W )F F( W( ... )W )F F( W( ... )W W( ... )W )F )I )U

(where '...' stands for the terminal string, a distinctive feature matrix.) Our major intent here is to develop a theory of the form and functioning of phonological rules with respect to prosodic domains such as these. These domains we are calling prosodic correspond more or less to the stretches of an utterance which, in standard phonology, are thought of as being delimited by boundary symbols of various sorts, where boundaries are entities occupying a place in amongst the segments of the terminal string. So, for example, the boundary representation of the nonsyntactic domains of the French sentence vous le verrez dans le premier article de la revue would be (5), while the prosodic domain structure would be as indicated in (4).

2 See Selkirk (forthcoming) for some elaboration of these points.










(4) [prosodic domain structure of the sentence, with W-domains grouped into phonological phrase and utterance domains]

(5) //## vous # le # verrez ## dans # le # premier # article ## de # la # revue ##//

(where '#' is a word boundary, '##' is a phonological phrase boundary, and '//' is the utterance, or pause, boundary.) One purpose of this paper is to demonstrate that the theory of prosodic domains is preferable to a theory of boundary-defined domains in that it allows one to make better sense of the properties of the phonological rules operating on nonsyntactic domains. Elsewhere (Selkirk, forthcoming) it is demonstrated that prosodic domains correspond to units of the hierarchically arranged prosodic structure of phonological representation which have independent motivation in the grammar (cf. Liberman and Prince, 1977). It is thus clearly the case, though it cannot be shown here, that our theory of suprasegmental prosodic domains is not merely a notational variant of a theory of boundary domains.3 We also claim that prosodic domains are not isomorphic to the labelled bracketing of the syntax. A number of arguments can be made in favor of this position, the most telling one being that prosodic words, phonological phrases, intonational phrases and even the utterance do not necessarily correspond to constituents of syntactic representation. It is true, as will be seen, that syntactic information contributes to the definition of prosodic domains, but this does not amount to saying that prosodic domains are syntactic. This point can unfortunately not be carried further in the limited space of the present paper. The reader is referred to Selkirk (forthcoming) where a full defense of this position is developed. The phenomena of sandhi in Sanskrit show admirably well that phonological rules are sensitive to the division of an utterance into prosodic domains of various sorts, and so we have chosen to draw on these well-known facts of Sanskrit phonology to illustrate our theory of the form and functioning of phonological

3 The reader is referred to McCawley (1968) where a theory of boundaries as suprasegmental domain markers is sketched out.
In many ways, we owe a large intellectual debt to McCawley's non-standard approach to boundaries in phonology.
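The relation between the two notations can be made concrete with a small parser that converts a boundary string like (5) into a nested domain grouping like (4). The list-of-lists encoding and the function itself are our own illustrative assumptions, not part of either theory under discussion.

```python
# Illustrative only: converting a boundary-symbol representation into
# a nested domain structure (word domains grouped into phrase domains
# grouped into the utterance domain).

def boundaries_to_domains(s):
    """Split an utterance delimited as //## w # w ## w ##// into
    phrase-level and word-level groupings."""
    core = s.strip('/').strip('#')            # drop utterance-edge symbols
    phrases = [p for p in core.split('##') if p]
    tree = []
    for p in phrases:
        words = [w for w in p.split('#') if w]
        tree.append(words)                    # one phrase domain per group
    return tree                               # whole list = utterance domain

rep5 = '//##vous#le#verrez##dans#le#premier#article##de#la#revue##//'
print(boundaries_to_domains(rep5))
# -> [['vous', 'le', 'verrez'], ['dans', 'le', 'premier', 'article'],
#     ['de', 'la', 'revue']]
```

The conversion is information-preserving in this toy case, which is exactly why the argument against boundary notation must rest, as the text says, on the behavior of rules, not on expressive power alone.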


rules with respect to prosodic domains.4 (In the Indian grammatical tradition, the word sandhi 'putting together' designates the set of phonetic changes brought about through the juxtaposition of morphemes and words in an utterance.) The works of Whitney,5 Renou,6 and Allen,7 all of which are based to a great extent on the works of the Indian grammarians, have provided us with the data, and with the generalizations (rules) concerning the data.8 Our task then is simply one of seeing how the facts reveal an organization of Sanskrit sentences into prosodic domains of various types, and how the rules apply in terms of these domains. The claim will be made that there is evidence for the presence in the phonological representation of Sanskrit utterances of three of the four types of prosodic domain mentioned above: the U-domain, the F-domain and the W-domain. It will moreover be claimed that the three types of domain-sensitive rules we have reason to isolate in our theory (cf. Selkirk, forthcoming) are represented in the grammar of Sanskrit. These are: domain span rules, domain juncture rules, and domain limit rules. A domain span rule functions in the following way: it applies throughout, i.e. across, a particular prosodic domain Di, without regard to the ways in which Di may be subdivided into smaller prosodic domains, and is restricted to applying within that particular domain Di. It has the form of (6). (6)


A → B / Di( ... φ __ ψ ... )Di


(where A and B are segments (A or B may equal ∅), φ and ψ are strings (possibly null) of specified segments, and the ellipses are variables over segments. The bracketing Di( )Di corresponds to a specific prosodic domain of the phonological representation.) The form of the rule is related to its functioning in an obvious way: the rule scans the terminal string included within a Di domain of a phonological representation like (2), and converts any A to B that is contained in the context φ __ ψ within that Di domain. The rule ignores the subdivision of Di into smaller domains because (a) the markers of those domains are not part of the terminal string in the phonological representation, and (b) those smaller domains are not mentioned in the structural description of the rule. A domain juncture rule functions in quite different fashion: applying on a particular domain Di, it must know whether the segments it involves belong or not to distinct domains of type Dj included within the domain Di. Domain juncture rules have the form (7a) or (7b).

4 The term Sanskrit refers to both the classical and Vedic varieties, unless otherwise specified.
5 Whitney, W.D. 1889. Sanskrit Grammar.
6 Renou, L. 1961. Grammaire Sanskrite, 2eme edition.
7 Allen, W.S. 1962. Sandhi: The Theoretical, Phonetic, and Historical Basis of Word Junction in Sanskrit.
8 We have also drawn on A.A. Macdonell. 1927. A Sanskrit Grammar for Students; and J. Gonda. 1966. A Concise Elementary Grammar of the Sanskrit Language.

Selkirk : Prosodic Domains

I 12 Juncture


(7) a. A → B / Di( ... Dj( ... __ ψ )Dj Dj( ω ... )Dj ... )Di
    b. A → B / Di( ... Dj( ... )Dj Dj( ψ __ ω ... )Dj ... )Di

A rule of this type scans the terminal string included within the Di domain and performs the structural change only if the segments of the terminal string of the phonological representation can be factored into smaller prosodic domains in the way specified in the structural description. The other type of rule, the domain limit rule, applies in terms of one or the other limit, or end, of a domain Di. It will have the form of (8a) or (8b).

(8) a. A → B / Di( ... φ __ ψ )Di
    b. A → B / Di( φ __ ψ ... )Di

The rule will perform the designated change only if the string φAψ is located at the right or left limit, respectively, of the Di domain.

Throughout the utterance, a or ā plus a following i or u is converted into the long vowel e or o,10 respectively (Whitney §127; Renou §25, 40; Allen pp. 36ff.):

(9) deva + i > deve
sa uvaca > sovaca
tva iśvara > tveśvara

However, if in the utterance the ai or au combination is followed by another vowel, it is not converted to e or o. Rather, the high vowel is converted to a glide (Whitney §131, Renou §125; Allen pp. 37ff.):

(10) nai + a > naya
bho + a > bhava
vane aste > vana(y) aste
prabho ehi > prabha(v) ehi

We parenthesize the glide in word-final position: it is thought by some (Whitney §132, 133) to be systematically deleted there, but not all concur with this position (Renou §41; Allen pp. 37ff.). The rules accounting for the above alternations may be written informally as follows:

(11) Glide Formation
[+syll] → [−syll] / U( ... a __ [+syll] ... )U

(12) Vowel Contraction
a i > e, a u > o / U( ... __ ... )U
Applied in this order, they produce the correct results. Other examples of U-span rules are provided by the processes affecting consonants in combination. Consider the treatment of the phoneme m. Throughout the utterance, an m assimilates in place of articulation to a following stop (Renou §11, 35; Whitney §212, 213; Allen pp. 80ff.). (13)

sram + ta > sranta
kim karoṣi > kiṅ karoṣi
satrum jahi > satruñ jahi
gurum namati > gurun namati

Before semi-vowels and fricatives, the m becomes anusvāra (indicated by ṃ), which is to say that it disappears, and causes nasalization of the (now lengthened) preceding vowel.11 (14)

taṃ veda
karuṇaṃ roditi
mokṣaṃ seveta
madhuraṃ hasati

We might formulate these rules as follows:

(15) Assimilation of m
m → [αant, βcor, etc.] / U( ... __ [−cont, αant, βcor, etc.] ... )U


9 Renou §30; Allen pp. 99-100.
10 Since there are no short counterparts to e and o in Sanskrit, every e or o appearing in transcription is long, and it is therefore conventional not to employ the macron to indicate length.


11 As noted by Allen p. 82 and Renou §35, in the classical language, a word-final m may (optionally?) become anusvāra before stops as well.



(16) Anusvāra of m
Vm → Ṽ / U( ... __ [+cont] ... )U
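The functioning of a U-span rule such as (15) can be sketched computationally. The nested representation and the place-class table below are our own illustrative assumptions, not Selkirk's formalism; the point is only that the rule scans the utterance's whole terminal string and ignores W-domain junctures.

```python
# Toy model of a U-domain span rule: place assimilation of m.
# '#' marks word (W-domain) junctures, which this rule ignores;
# the place-class table and 'n~' (palatal nasal) notation are our own.

PLACE = {'t': 'n', 'd': 'n',          # dentals  -> dental nasal
         'j': 'n~', 'c': 'n~'}        # palatals -> palatal nasal

def u_span_assimilate_m(utterance_words):
    """Assimilate m in place to a following stop across the whole
    utterance: a word-final m before a word-initial stop is affected
    just as a word-internal m is."""
    s = list('#'.join(utterance_words))
    for i, seg in enumerate(s):
        if seg != 'm':
            continue
        j = i + 1
        while j < len(s) and s[j] == '#':   # juncture is invisible
            j += 1
        if j < len(s) and s[j] in PLACE:
            s[i] = PLACE[s[j]]
    return ''.join(s).split('#')

# satrum jahi -> satrun~ jahi: the rule applies across the W-juncture.
print(u_span_assimilate_m(['satrum', 'jahi']))  # -> ['satrun~', 'jahi']
```

A juncture rule, by contrast, would have to test explicitly that the m and the stop lie in distinct W-domains, as in the schemata (7a)/(7b).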

A last example of a U-span rule is the rule of voicing assimilation, according to which the voicing of an obstruent is determined by the voicing of an obstruent that follows (Whitney §159; Renou §9, 33; Allen pp. 91ff.). (17)

ad + si > atsi
ad + thas > atthas
ap-jāḥ > ab-jāḥ
dik-gadaḥ > dig-gadaḥ
jyok jīva > jyog jīva
parivrat gacchati > parivrad gacchati

avis mama > avir mama
dhenus iva > dhenur iva
guṇais yuktaḥ > guṇair yuktaḥ
manus gacchati > manur gacchati

After a:
nalas nama > nalo nama
tapas-nidhi > tapo-nidhi
devas gacchati > devo gacchati20
asvas iva > asva iva
devas uvaca > deva uvaca

After ā:
asvas vahanti > asva vahanti
hatas gajaḥ > hata gajaḥ
devas ūcuḥ > deva ūcuḥ

We will not attempt to formulate the rules for these alternations here. The examples themselves provide sufficient evidence. A mere comparison of the forms in (42) with those in (44) and (45) shows the impossibility of positing the r or the ḥ as the underlying final consonant in external combination. Our investigation thus far has shown the existence of two sorts of prosodic domain in the phonological representations of the sentences of Sanskrit: W, the prosodic word, and U, the utterance. Consideration of two additional well-known phenomena of the language leads to the positing of yet another type of domain, intermediate in size between W and U. We will call it F, the phonological phrase. It is the privileged domain of the rules of ruki and nati (Whitney §180-88, 189-95; Renou §15-19).

The rule of ruki brings about a retroflexion of s, converting it to ṣ, if one of the sounds r, k, u, i precedes, and if some other segment, but not r, follows:

(46) ti + stha > tiṣṭha-
cakṣus-mat > cakṣuṣ-mat
dhenu + su > dhenuṣu
sarpis + a > sarpiṣā
but tisras

The rule of nati is described by Whitney as follows: "The dental nasal n, when immediately followed by a vowel or by n or m or y or v, is turned to the lingual ṇ if preceded in the same word [sic] by the lingual sibilant or semivowel or vowels - that is to say, by ṣ, r, or ṛ or ṝ - and this, not only if the altering letter stands immediately before the nasal, but at whatever distance from the latter it may be found: unless, indeed, there intervene [a consonant moving the front of the tongue: namely] a palatal (except y), a lingual or a dental."

mus + na + ti > muṣṇāti
karman + a > karmaṇā
dūṣ + anam > dūṣaṇam
bṛṃh + anam > bṛṃhaṇam
brahman-yaḥ > brahmaṇyaḥ
kṣip-nuḥ > kṣipṇuḥ
but rathena, darś-ana-, arc-anam, ardhena, kurvanti

In classical Sanskrit, it appears that ruki and nati are W-domain rules, for only in this domain do they apply with regularity. Where they apply beyond the W-domain in classical Sanskrit, as in certain compounds, they are to be seen as fossilized, i.e. lexicalized, reflexes of a more archaic sandhi process (Renou §37, 76; Allen p. 15). Indeed, in the earlier language of the Vedas, the domains of ruki and nati both appear to be larger than W, but smaller than U. According to Whitney, the structural descriptions of the two rules can be satisfied by segments belonging to two distinct stems of compounds. Renou, in his Grammaire de la langue Védique (§148, 150), remarks further that prepositional pre-verbs and a following verb constitute a domain within which the two rules apply:

(49) vi-syati > viṣyati
pra-nak > praṇak
nir-hanyat > nir-haṇyat

Moreover, Renou points out (GLV: 149, 151) that words plus a following enclitic pronoun or particle form domains for ruki and nati:



"Il se produit meme qu'un i - u final de mot, surtout appartenant un terme etroitement lie au suivant, cerebralise un s initial, si cet s fait lui-meme partie integrante d'une forme verbale plus OUmoins breve OU banale, d'une particule, d'un pronom monosyllabique. De la Jes groupes hi $ma, abhi $dl:i, abhi $antu, abhi $ifica, divi $