Identity Relations in Grammar 9781614518112, 9781614518181


English, 381 pages, 2014

Kuniya Nasukawa and Henk van Riemsdijk (Eds.)
Identity Relations in Grammar

Studies in Generative Grammar 119

Editors

Henk van Riemsdijk
Harry van der Hulst
Norbert Corver
Jan Koster

De Gruyter Mouton

Identity Relations in Grammar

Edited by

Kuniya Nasukawa Henk van Riemsdijk

De Gruyter Mouton

ISBN 978-1-61451-818-1
e-ISBN (ePub) 978-1-61451-898-3
e-ISBN (PDF) 978-1-61451-811-2
ISSN 0167-4331

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2014 Walter de Gruyter, Inc., Boston/Berlin
Printing and binding: CPI books GmbH, Leck
Printed on acid-free paper
Printed in Germany
www.degruyter.com

Contents

Contributors
Introduction (Kuniya Nasukawa and Henk van Riemsdijk)

Part I: Phonology
Contrastiveness: The basis of identity avoidance (Kuniya Nasukawa and Phillip Backley)
Rhyme as phonological multidominance (Marc van Oostendorp)
Babbling, intrinsic input and the statistics of identical transvocalic consonants in English monosyllables: Echoes of the Big Bang? (Patrik Bye)
Identity avoidance in the onset (Toyomi Takahashi)

Part II: Morpho-Syntax
Unifying minimality and the OCP: Local anti-identity as economy (M. Rita Manzini)
Semantic versus syntactic agreement in anaphora: The role of identity avoidance (Peter Ackema)

Part III: Syntax
Exploring the limitations of identity effects in syntax (Artemis Alexiadou)
Constraining Doubling (Ken Hiraiwa)
Recoverability of deletion (Kyle Johnson)
On the loss of identity and emergence of order: Symmetry breaking in linguistic theory (Wei-wen Roger Liao)

Part IV: General
Linguistic and non-linguistic identity effects: Same or different? (Moira Yip)
On the biological origins of linguistic identity (Bridget Samuels)

Language index
Subject index

Contributors

Peter Ackema is Reader in Linguistics at the University of Edinburgh. His research interests are in the areas of theoretical syntax and morphology, particularly concerning issues surrounding the interaction between these two modules of grammar. He is the author of Issues in Morphosyntax (John Benjamins 1999) and co-author with Ad Neeleman of Beyond Morphology (OUP 2004), and has published articles on a range of topics such as agreement, pro drop, compounding and incorporation, verb movement, and lexical integrity effects.

Artemis Alexiadou is Professor of Theoretical and English Linguistics at the Universität Stuttgart. She obtained her Ph.D. at the University of Potsdam. Her research interests lie in theoretical and comparative syntax, with special focus on the interfaces between syntax and morphology and syntax and the lexicon.

Phillip Backley is Professor of English Linguistics at Tohoku Gakuin University, Japan. His research interests cover various aspects of segmental and prosodic phonology, with a focus on how the two interact to constrain the phonologies of individual languages. He is author of An Introduction to Element Theory (EUP 2011) and co-editor (with Kuniya Nasukawa) of Strength Relations in Phonology (Mouton 2009).

Patrik Bye is a researcher affiliated with the University of Nordland, Bodø, Norway. He has published scholarly articles on a number of topics including the syllable structure, quantity and stress systems of the Finno-Ugric languages, notably Saami, North Germanic accentology and historical phonology, derivations, dissimilation, phonologically conditioned allomorphy and, with Peter Svenonius, morphological exponence. He is the co-editor with Martin Krämer and Sylvia Blaho of Freedom of Analysis? (Mouton 2007).

Ken Hiraiwa has worked on the syntax of various languages and published a number of descriptive and theoretical articles. He received his Ph.D. from MIT in 2005 and is currently an associate professor of linguistics at Meiji Gakuin University.

Kyle Johnson earned a Bachelor of Arts in Psychology from the University of California at Irvine in 1981 and a PhD from MIT in 1986. He studies the relationship between syntax and semantics, with an emphasis on movement, ellipsis, anaphora and argument structure. He teaches at the University of Massachusetts at Amherst, where he has been since 1992.

Wei-wen Roger Liao holds a PhD in linguistics from the University of Southern California, and is currently an Assistant Research Fellow at the Institute of Linguistics in Academia Sinica. His publications and research cover various aspects of Chinese linguistics, comparative syntax, the syntax-semantics interface, and biolinguistics.

M. Rita Manzini has been Professor at the University of Florence since 1992, after taking her Ph.D. at MIT in 1983, and holding positions at UC Irvine (1983-84) and at University College London (1984-1992). She is the (co-)author of several volumes including Locality (MIT Press 1992) and, with Leonardo Savoia, I dialetti italiani (ed. dell'Orso 2005, 3 vols.), Unifying Morphology and Syntax (Routledge 2007), and Grammatical Categories (CUP 2011). She has also published about one hundred articles in journals and books on themes related to the formal modelling of morphosyntax, language universals and variation, including studies on locality, voice, graphs, agreement and Case, specifically in Italo-Romance and in Albanian.

Kuniya Nasukawa is Professor of English Linguistics at Tohoku Gakuin University, Japan. He has a Ph.D. in Linguistics from University College London (UCL), and his research interests include prosody-melody interaction and precedence-free phonology. He has written many articles covering a wide range of topics in phonological theory. He is author of A Unified Approach to Nasality and Voicing (Mouton 2005), co-editor (with Phillip Backley) of Strength Relations in Phonology (Mouton 2009), and co-editor (with Nancy C. Kula and Bert Botma) of The Bloomsbury Companion to Phonology (Bloomsbury 2013).


Marc van Oostendorp is Senior Researcher at the Department of Variationist Linguistics at the Meertens Institute of the Royal Netherlands Academy of Arts and Sciences, and Professor of Phonological Microvariation at the University of Leiden. He holds an MA in Computational Linguistics and a PhD from Tilburg University. He is co-editor (with Colin J. Ewen, Elizabeth V. Hume and Keren Rice) of The Blackwell Companion to Phonology (Wiley-Blackwell 2011).

Henk van Riemsdijk was, until recently, Professor of Linguistics and head of the Models of Grammar Group at Tilburg University, The Netherlands. He is now emeritus and a freelance linguist operating from his home in Arezzo, Italy. He is the co-founder of GLOW, the major professional organization of generative linguists in Europe. He was co-editor of the Journal of Comparative Germanic Linguistics (Springer) from 2001 through 2013, and of the book series Studies in Generative Grammar (Mouton de Gruyter) from 1978 through 2013. He also co-edits the Blackwell Companions to Linguistics series (Wiley-Blackwell) and the Comprehensive Grammar Resources series (Amsterdam University Press). He has written and edited around 25 books, contributed around 100 articles and directed around 30 Ph.D. dissertations.

Bridget Samuels is Senior Editor for the Center of Craniofacial Molecular Biology at the University of Southern California. She is the author of the 2011 Oxford University Press monograph Phonological Architecture: A Biolinguistic Perspective. Previously, she held positions at the California Institute of Technology and the University of Maryland, College Park. She received her Ph.D. in Linguistics from Harvard University in 2009.

Toyomi Takahashi is Professor of English at Toyo University, Tokyo, Japan. His research interests include theories of representation with a focus on syllabic structure and elements, phonological patterning involving harmony, stress and intonation, and the phonetics of English and Japanese in an EFL context.


Moira Yip did her BA at Cambridge University, then earned her PhD at MIT in 1980. She taught at Brandeis University, and the University of California, Irvine. She returned to the UK in 1998, and taught at University College London (UCL) until her retirement in 2008. She is now Emeritus Professor of Linguistics at UCL. She has published two books on tone, and many articles on a wide range of topics in phonological theory, including many on identity and non-identity phenomena. She has a particular interest in Chinese, and more recently has published on comparisons between birdsong and human language.

Introduction
Kuniya Nasukawa and Henk van Riemsdijk

1. Introduction

Few concepts are as ubiquitous in the physical world of humans as that of identity. Laws of nature crucially involve relations of identity and nonidentity, the act of identifying is central to most cognitive processes, and the structure of human language is determined in many different ways by considerations of identity and its opposite. The purpose of this book is to bring together research from a broad scale of domains of grammar that have a bearing on the role that identity plays in the structure of grammatical representations and principles.

Needless to say, the notion of identity as used here is an intuitive notion, a pre-theoretical one. We do not really know that we are talking about the same thing when we talk about referential identity and haplology, even though both are discussed in terms of some notion of identity. Bringing together a variety of studies involving some notion of identity will undoubtedly bring us closer to an understanding of the similarities and differences among the various uses of the notion of identity in grammar. Ultimately, many of the phenomena and analyses discussed in this book should probably be evaluated against the background of Type Identity Theory to see if a more precise notion of identity can emerge.

Some ways in which identity-sensitivity manifests itself are fairly straightforward. For example, reduplication (cf. Raimy 2000 and many others) in morpho-phonology creates sequences of identical syllables or morphemes. Similarly, copying constructions in syntax create an identical copy of a word or phrase in some distant position. This is typically true, for example, of verb topicalizations such as those frequently found in African languages such as Vata (cf. Koopman 1984). In such constructions (often referred to as 'predicate clefts') the verb is fronted, but is again pronounced in its source position (cf. Kandybowicz 2006 and references cited there).
Such constructions, as well as the observation that wh-copy constructions are frequently found in child language (see for example McDaniel, Chiu and Maxfield 1995), have also contributed to the so-called copy theory of movement, according to which a chain of identical copies is created whose (non-)pronunciation is determined by principles of spell-out. Alternate theories of movement such as remerge resulting in multiple dominance largely avoid the identity problem; see Gärtner (2002), who observes that the copies under the copy theory are not formally identical at all.

In many cases, however, what is at stake is not the coexistence of identical elements in grammatical structure but rather its opposite, the avoidance of identity, a term due to Yip (1998). Haplology, the deletion of one of two identical syllables or morphemes, is a case in point. In addition to deletion, there are other ways to avoid sequences of two identical elements ("XX"): insertion of an epenthetic element (XX→XeX), dissimilation (XX→XY), creating distance (XX→X…X) or fusion (AA→Ā). In phonology and morphology, there is an abundance of identity avoidance phenomena, and some major principles such as the Obligatory Contour Principle (OCP, cf. McCarthy 1986) are instrumental in accounting for them. But OCP-like principles have also been argued to be operative in syntax (cf. Van Riemsdijk 2008 and references cited there).

In semantics, an identity avoidance effect that immediately comes to mind is Principle C of the Binding Theory (Chomsky 1981): a referential expression can never be bound, that is, c-commanded, by an element bearing an identical index. Principle C may thus be interpreted as a principle that avoids identity in some way. Still, while referential identity is clearly a necessary condition in order for Principle C to kick in, why does it apply in some cases but not in others? For example, why does contrastive focus override Principle C? And why does Principle C treat epithets more like pronouns than like full copies of the other noun phrase? Given elements must be either deaccented or deleted/silent (cf. Williams 1997), which suggests an identity avoidance effect.
But then, how does the notion of 'givenness', to the extent that we understand it, relate to the notion of identity? Does the fact that we may be talking about pragmatics here rather than semantics play a role in our assessment of apparent identity relations of this kind?

In the examples alluded to above, questions immediately arise as to what exactly we mean by identity. And when we think about these issues a bit more, things are indeed far from obvious. It suffices to look at distinctive features in phonology. /i/ and /u/ are identical in that both are vowels, but they are different in that one is a front vowel and the other a back vowel. What counts for the calculus of identity, full feature matrices or subsets of features, and if the latter, which subsets?

Take a difficult problem from syntax. The so-called "Doubly Filled Comp Filter" (DFC, cf. Chomsky and Lasnik 1977 and much subsequent research) ostensibly excludes two positions that are close to one another (the complementizer head and its specifier position) if both are phonetically realized. Typically, the complementizer is an element such as that, while the specifier contains some wh-phrase, i.e. a DP, a PP, an AP or a CP, excluding such cases as *I wonder who that you saw? Note however that many languages have a process whereby a finite verb is moved into the complementizer position, such as Subject Auxiliary Inversion in English. But whenever this happens, the DFC does not apply: who did you see? Could the relative identity between a wh-phrase and a "nominal" complementizer such as that as opposed to the relative nonidentity between the wh-phrase and a finite verb be responsible? Clearly, identity is a very abstract and perhaps not even a coherent concept, and invoking it is never a trivial matter.

Similar issues arise in the domain of intervention constraints. Minimality, and in particular, Relativized Minimality (Rizzi 1990), involves the relative identity of the intervening element with the element that crosses it. But again, what are the relevant properties? In Rizzi's book, it is proposed that the crucial property is A vs. Ā. But there are many indications that what counts as an intervener is tied to "lower" level features. In Dutch, for example, the [+R] feature creates an intervention effect (cf. Van Riemsdijk 1978) but the [+wh] feature does not.

Beyond a great many analytical puzzles, the creation and avoidance of identity in grammar raise lots of fundamental and taxing questions. These include:

• Why is identity sometimes tolerated or even necessary, while in other contexts it must be avoided?
• What are the properties of complex elements that contribute to configurations of identity (XX)?
• What structural notions of closeness or distance determine whether an offending XX-relation exists or, inversely, whether two more or less distant elements satisfy some requirement of identity?
• Is it possible to generalize over the specific principles that govern (non-)identity in the various components of grammar, or are such comparisons merely metaphorical?
• Indeed, can we define the notion of 'identity' in a formal way that will allow us to decide which of the manifold phenomena that we can think of are genuine instances of some identity (avoidance) effect?
• If identity avoidance is a manifestation in grammar of some much more encompassing principle, some law of nature, then how is it possible that what does and what does not count as identical in the grammars of different languages seems to be subject to considerable variation?

The present collection of articles addresses only some aspects of such questions, but we hope it will pave the way for more extensive attention to the role of (non-)identity in linguistics and neighboring as well as superordinate disciplines.

The idea for this book finds its origin in the workshop entitled "Identity in Grammar" held in conjunction with the 2011 GLOW Conference in Vienna on May 1, 2011.¹ The workshop was co-organized by Martin Prinzhorn, Henk van Riemsdijk and Viola Schmitt. The contribution of Martin Prinzhorn and Viola Schmitt, which extends to some of the passages of the topic description that are incorporated in some form or other in the present introduction, is gratefully acknowledged.

The articles in this collection are arranged under four categories: phonology (Part I), morphosyntax (Part II), syntax (Part III) and general (Part IV). Four of the articles, those by Artemis Alexiadou, Maria Rita Manzini, Kuniya Nasukawa and Phillip Backley, and Moira Yip, were presented at the Vienna workshop. Because these papers succeed in illustrating the overall theme of the volume, they appear first in their respective category. The remaining articles were submitted in response to an invitation by the editors. Abstracts of all the articles are given below.

Phonology

Kuniya Nasukawa and Phillip Backley observe that identity avoidance constraints such as OCP do not usually refer to phonological domains smaller than the segment. This is based on their claim that allowing two identical features to be adjacent leads to redundancy. They also argue that in other domains of phonology and morphology identity avoidance is driven by a general principle of contrastiveness which subsumes constraints such as OCP and *REPEAT.
The existence of identity avoidance at various prosodic levels is attributed to the way some properties are bound by prosodic domains: those tied to the edges of domains (e.g. aspiration, glottalisation, prenasality, true voicing) adhere to identity avoidance whereas place properties tend to display harmonic behavior instead. These two patterns reflect the division between non-resonance features (prosodic markers) and resonance features (segmental markers). This approach is altogether simpler than Feature Geometry proposals involving three or more feature divisions.

¹ We gratefully acknowledge the financial support from the Truus und Gerrit van Riemsdijk Stiftung, Vaduz, which made the workshop possible.

Marc van Oostendorp presents an analysis of rhyme in terms of multidominance, arguing that rhyming words share some part of their phonological representation. It is shown how this analysis differs from two other formal phonological approaches to rhyme, one developed within Correspondence Theory and the other within Loop Theory. Van Oostendorp also demonstrates how his analysis can account for imperfect rhymes and for the fact that the onsets of rhyming syllables (or feet) have to be different — in other words, that the world's languages display a strong tendency to avoid complete identity when it comes to rhyming systems. He concludes with a short case study of a rhyming style that ignores voiceless coronal obstruents.

Patrik Bye examines a database of 1556 English CV(V)C monosyllables and shows that identical transvocalic consonants at non-apical places of articulation are overrepresented relative to their homorganic class and strongly overrepresented once gradient similarity avoidance is factored in. His proposed explanation connects this pattern to repetitive babbling in infancy, which lays down connections in memory between non-apical places of articulation and motor repetition. Apical consonants are not mastered until long after the babbling phase, and are therefore subject to similarity avoidance.

Toyomi Takahashi focuses on identity avoidance within the syllable onset. In general, complex onsets (two or more timing slots or root nodes) disallow partial or full geminates, unlike other phonotactic domains such as complex nuclei or coda-onset sequences.
Revisiting Kahn's (1976) ideas concerning the constrained nature of non-linear representation, Takahashi claims that well-formedness in representations should be ensured in such a way that the expressive capacity of representations naturally excludes unattested (and thus redundant) structures without recourse to extrinsic well-formedness constraints. From this 'redundancy-free' perspective, he argues that the onset is unary at all levels of representation. Apparent 'clusters' or 'contours' within the onset are claimed to result from the phonetic interpretation of phonologically unordered melodic properties, in much the same way that plosives show three distinct phases that are not phonologically encoded.
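As a purely illustrative aside (nothing in this volume is implemented computationally), two notions that recur throughout the introduction can be sketched in a few lines of Python: the repair of an offending XX configuration by deletion, epenthesis or dissimilation, and identity computed relative to a subset of distinctive features, as in the /i/ versus /u/ example. The feature values and the placeholder symbols 'e' and 'Y' are stand-ins, not claims about any particular language.

```python
# Toy sketch (illustrative only): identity-avoidance repairs and
# feature-relative identity. 'e' and 'Y' are placeholder symbols.

def repair(s, strategy):
    """Resolve the first adjacent identical pair XX in s by the named strategy."""
    i = next((j for j in range(len(s) - 1) if s[j] == s[j + 1]), None)
    if i is None:
        return s                        # no XX configuration present
    if strategy == "deletion":          # XX -> X   (haplology)
        return s[:i] + s[i + 1:]
    if strategy == "epenthesis":        # XX -> XeX (epenthetic element breaks up XX)
        return s[:i + 1] + "e" + s[i + 1:]
    if strategy == "dissimilation":     # XX -> XY  ('Y' stands for any distinct element)
        return s[:i + 1] + "Y" + s[i + 2:]
    raise ValueError(f"unknown strategy: {strategy}")

# Partial, illustrative feature matrices for /i/ and /u/.
I = {"vocalic": "+", "high": "+", "back": "-"}
U = {"vocalic": "+", "high": "+", "back": "+"}

def identical(a, b, features=None):
    """Identity relative to a chosen feature subset (default: all features)."""
    keys = features if features is not None else set(a) | set(b)
    return all(a.get(f) == b.get(f) for f in keys)

print(repair("aab", "deletion"))             # 'ab'  : one of the identical elements deleted
print(repair("aab", "epenthesis"))           # 'aeab': epenthetic element splits the pair
print(identical(I, U))                       # False : full matrices differ in backness
print(identical(I, U, {"vocalic", "high"}))  # True  : identical qua high vowels
```

The sketch makes concrete why the "calculus of identity" question matters: whether /i/ and /u/ count as identical depends entirely on which feature subset the check is run over.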


Morpho-syntax

Maria Rita Manzini investigates three constructions which feature in a variety of Romance languages and which involve identity avoidance in one form or another. Specifically, she offers a detailed discussion of (i) double l, as found in clitic clusters, (ii) negative imperatives, and (iii) negative concord (or double -n). Manzini demonstrates that, while these constructions apparently belong to three different domains of grammar (morphology, syntax and semantics, respectively), they all produce a mutual exclusion effect that manifests itself in very local domains. In other words, all three appear to involve a kind of identity avoidance.

Peter Ackema investigates a number of agreement phenomena in Dutch, some of which are partly morpho-phonological and partly morpho-syntactic in nature. He shows that there are instances of agreement weakening which apply to syntactic agreement but not to semantic agreement, and argues that syntactic agreement weakening should be viewed as an instance of identity avoidance. Furthermore, Ackema traces the difference in behavior between syntactic and semantic agreement to a difference in the internal structure of strong and weak pronouns: strong pronouns have a richer internal structure than weak pronouns, which explains why the latter are more likely to be identical with their antecedents and thus susceptible to agreement weakening.

Syntax

Artemis Alexiadou distinguishes two types of proposals that aim to account for "bans on multiple objects," viz. the Subject in situ Generalization and Distinctness. She argues that, while both may be viewed as specific instantiations of identity avoidance, each is independently motivated. Furthermore, Alexiadou suggests that both principles are also different from other identity avoidance effects that have been observed in the literature.
Alexiadou therefore offers a caution to the linguistic community against any hasty attempts to unify what may appear to be similar instances of identity avoidance but which, under closer scrutiny, reveal crucial differences.

Ken Hiraiwa addresses three cases of morpho-syntactic identity avoidance in Japanese: a double genitive constraint (*-no -no), a double conjunctive coordinator constraint (*-to -to), and a double disjunctive coordinator constraint (*-ka -ka). He goes on to argue that the structural conditions under which these three constraints may apply, or are blocked from applying, are sufficiently similar to justify an attempt to unify all three. To ensure the success of such a move, however, two types of adjacency must be distinguished: head adjacency and phrasal adjacency.²

Kyle Johnson presents a detailed investigation of so-called Andrews amalgams such as Sally will eat I don't know what today. As in other constructions containing grafts or amalgams, an important observation is that two sentence-like structures are somehow fused together into a single complex sentence, and the place where the two structures are connected is a shared element, what in the above example. Johnson argues that there are two types of identity involved in such structures. First, he proposes that Andrews amalgams are instances of multiple dominance, in that the shared element is dominated by two separate nodes, one in each substructure. Second, he shows that the construction involves sluicing, an ellipsis construction that can only function under a precise notion of antecedence, which is governed by recoverability, essentially identity.

Roger Wei-wen Liao suggests that the notions of symmetry breaking and identity avoidance should be assimilated to one another. Basing his argument partly on unpublished work by Jean-Roger Vergnaud, he develops a three-dimensional theory of phrase structure designed to accommodate complex phrases in which, in addition to the lexical head and the functional heads in its shell(s), there are also classifiers or semi-lexical heads. A typical example is many bottles of wine, arguably a single extended projection. The structures Liao proposes are fully symmetrical, but in order to be expressible the symmetry must be broken up. In other words, the idea is that narrow syntax is highly symmetrical, but that linguistic computation is driven by the need to break up the symmetry.

General

Moira Yip explores the boundaries between grammar proper and cognition in general.
She shows that identity sensitivity is found not only in many different modalities of human behavior but also in many different species. For example, studies have shown that in birdsong both the identity and the non-identity of the song in question can be an important carrier of information. It is not surprising, therefore, that many identity and nonidentity effects are found in the grammars of natural languages.

² It would appear that the distinction has a wider use, as a similar distinction is shown to play a role in the licensing of silent motion verbs in Swiss German, cf. Van Riemsdijk (2002).

Bridget Samuels is also concerned with the question of whether identity avoidance (*XX) and symmetry breaking (as in dynamic antisymmetry) can be understood as two sides of the same coin. She approaches this question from a broad biolinguistic perspective. Anti-identity can be created in various ways in grammar — for example, by category formation, by internal (copy-)merge — but the resulting structures are disfavored due to a variety of factors including perceptual difficulty and articulatory fatigue. However, Samuels also shows that the evolutionary origins of these effects are not unitary, concluding that we are only at the very beginning of the serious study of "third factor" principles of biological design such as identity creation and avoidance.

Clearly, a volume of this size cannot do justice to a topic as broad as that of identity in the structure of grammatical representations and principles. Nevertheless, we hope that these articles will convey something of the scope and influence that the notion of identity appears to have on a range of apparently unrelated phenomena observed in a variety of different languages.

This work was partially funded by the Ministry of Education, Culture, Sports, Science and Technology of the Japanese government under grant number 22320090 (awarded to Kuniya Nasukawa).

Kuniya Nasukawa and Henk van Riemsdijk
Sendai and Arezzo, March 2014

References

Chomsky, Noam. 1981. Lectures on Government and Binding. Dordrecht: Foris Publications.
Chomsky, Noam, and Howard Lasnik. 1977. Filters and control. Linguistic Inquiry 8: 425–504.
Gärtner, Hans-Martin. 2002. Generalized Transformations and Beyond. Berlin: Akademie Verlag.
Kahn, Daniel. 1976. Syllable-based generalizations in English phonology. Ph.D. dissertation, Massachusetts Institute of Technology.
Kandybowicz, Jason. 2006. Conditions on multiple spell-out and the syntax-phonology interface. Ph.D. dissertation, University of California, Los Angeles.
Koopman, Hilda. 1984. The Syntax of Verbs. Dordrecht: Foris Publications.
McCarthy, John. 1986. OCP effects: gemination and anti-gemination. Linguistic Inquiry 17: 207–263.
McDaniel, Dana, Bonnie Chiu, and Thomas Maxfield. 1995. Parameters for wh-movement types: evidence from child language. Natural Language and Linguistic Theory 13: 709–754.
Raimy, Eric. 2000. The Phonology and Morphology of Reduplication. Berlin/New York: Mouton de Gruyter.
Riemsdijk, Henk C. van. 1978. A Case Study in Syntactic Markedness: The Binding Nature of Prepositional Phrases. Lisse: The Peter de Ridder Press; later published by Foris Publications, Dordrecht, and currently by Mouton de Gruyter, Berlin/New York.
Riemsdijk, Henk C. van. 2002. The unbearable lightness of GOing. The Journal of Comparative Germanic Linguistics 5: 143–196.
Riemsdijk, Henk C. van. 2008. Identity avoidance: OCP-effects in Swiss relatives. In Foundational Issues in Linguistic Theory: Essays in Honor of Jean-Roger Vergnaud, Robert Freidin, Carlos P. Otero and Maria Luisa Zubizarreta (eds.), 227–250. Cambridge, MA: MIT Press.
Rizzi, Luigi. 1990. Relativized Minimality. Cambridge, MA: MIT Press.
Williams, Edwin. 1997. Blocking and anaphora. Linguistic Inquiry 28: 577–628.
Yip, Moira. 1998. Identity avoidance in phonology and morphology. In Morphology and Its Relation to Phonology and Syntax, Stephen G. Lapointe, Diane K. Brentari and Patrick M. Farrell (eds.), 216–246. Stanford, CA: CSLI.


Part I Phonology

Contrastiveness: The basis of identity avoidance

Kuniya Nasukawa and Phillip Backley

1. Introduction

Language succeeds as a system of communication by exploiting the fundamental notion of contrastiveness. Broadly speaking, the information associated with a structural object such as a segment, morpheme, or phrase can only perform a linguistic function if it is distinguishable from other structural objects around it. When applied to phonology, this premise can have the effect of preventing identical units (e.g. features, segments, organising nodes) from appearing next to each other. The idea is therefore that languages strive towards identity avoidance (Yip 1998), which is usually formalized as the Obligatory Contour Principle or OCP (Leben 1973, Goldsmith 1976, McCarthy 1986, Yip 1988). The OCP has become established as a key structural principle in both phonology and morphology, and is typically expressed as in (1).

(1)

The Obligatory Contour Principle
Adjacent identical objects are prohibited.

The OCP operates as a meta-principle or meta-constraint (Yip 1998, Van Riemsdijk 2008), taking different arguments as required; these include stem, affix, foot, syllable, (C/V) position and node, as well as individual phonological features. Its role is to eliminate illicit sequences of identical objects, which it does by triggering various OCP effects including deletion and dissimilation. In tone languages, for example, the OCP is thought to be responsible for the absence of adjacent identical tones in lexical forms. It also repairs such sequences when they are produced as a result of morphological concatenation; for example, the ill-formed tone pattern *H-HL may be repaired as HL in order to avoid an illicit *H-H sequence.

The OCP may also block a segmental property from appearing more than once in a domain, where the property in question is represented by a particular feature. In Japanese, for instance, the feature [voice], which represents obstruent voicing, can occur only once in a native word/morpheme, making geta [ɡeta] ‘clogs’ and kaze [kaze] ‘wind’ well-formed but *[ɡeda], *[ɡaze], etc. impossible. OCP restrictions may refer to domains other than the word too. For example, English and Thai allow the feature [spread glottis] (or [tense]), which is responsible for aspiration, to appear just once in a foot, hence English depart [dɪˈpʰɑːt] (*[dɪˈpʰɑːtʰ]). Other cases of segmental OCP restrictions abound. In Arabic, for instance, two [labial] consonants cannot belong to the same root, hence fqh ‘understand’, klm ‘speak’ are well-formed whereas *btf, *flm etc. are not. What these examples illustrate is that, although the OCP controls the distribution of segmental properties, it does so by referring to prosodic or morphological domains.

Without doubt, the OCP provides a convenient way of capturing certain distributional patterns. Moreover, it is used consistently by scholars of differing theoretical persuasions who in other respects may not share much common ground. But on the other hand, the OCP’s usefulness is limited by the fact that its function is descriptive — when expressed as in (1), it cannot explain why two identical tokens of a particular object may not stand next to each other.

Several suggestions have been made to account for the existence of dissimilation effects triggered by the OCP. According to Coarticulation-Hypercorrection Theory (Ohala 1981, 1993, 2003), these effects come about when listeners reverse a perceived coarticulation. From this it follows that dissimilation should only occur with features that are associated with elongated phonetic cues that extend continuously beyond the scope of a single segment. Another view holds that they arise from the difficulties that listeners face when they have to process language which contains similar segments in close proximity (Frisch, Pierrehumbert and Broe 2004). In this case, the OCP may be said to be functionally motivated because, in some languages at least, it is driven by statistical factors emerging from the structure of the lexicon.
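The tone-repair pattern mentioned above (e.g. *H-HL surfacing as HL) can be pictured as a simple check-and-repair procedure over a tone string. The sketch below is our own illustration only, not a formal proposal from the literature, and all names are invented:

```python
# Illustrative sketch: the OCP over tones as a check-and-repair procedure.
# Names are invented for exposition only.

def violates_ocp(tones):
    """True if any two adjacent tones in the string are identical."""
    return any(a == b for a, b in zip(tones, tones[1:]))

def merge_adjacent_identical(tones):
    """Repair an OCP violation by fusing runs of identical tones,
    e.g. the concatenation H + HL -> H-H-L is repaired as H-L."""
    repaired = []
    for t in tones:
        if not repaired or repaired[-1] != t:
            repaired.append(t)
    return repaired

assert violates_ocp(["H", "H", "L"])                       # illicit *H-H
assert merge_adjacent_identical(["H", "H", "L"]) == ["H", "L"]
assert not violates_ocp(["H", "L", "H"])                   # only adjacency counts here
```

The repair deliberately collapses whole runs, so H-H-H-L would likewise surface as H-L under this toy formulation.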
Van de Weijer (2012) also notes the relevance of statistics to our understanding of the nature of the OCP. Citing Boll-Avetisyan and Kager (2004), he suggests that in the grammar of the infant language learner the OCP may emerge as a learned constraint — rather than being present from the outset as innate knowledge — on the grounds that OCP effects are prevalent in adult language and therefore have a significant influence on the shape of the child’s early lexical forms.

On the face of it, these suggestions appear to offer valid ways of motivating the OCP. But on the other hand, they are based primarily on aspects of language that are external to the grammar, such as language learning and processing. To gain a fuller understanding of the OCP as a principle of grammaticality and its effect on phonological representations, we should ideally like to identify something within the grammar that can account for the pervasiveness of OCP-related effects cross-linguistically. In this paper it will be argued that the explanation lies in segmental structure — more specifically, in segmental structure as represented by elements rather than traditional features. The discussion will show how the element structure of segments offers a useful insight into why certain OCP effects take place.

2. The OCP and prosodic domains

It is interesting to note that structural units smaller than the segment do not make reference to the OCP. In other words, identity avoidance is apparently not an issue when it comes to describing segment-internal structure. This makes the OCP irrelevant to models of segmental representation such as dependency phonology and feature geometry, where it is taken for granted that two identical units cannot appear in the same position.¹ In one sense this seems a reasonable approach to take, as there are no reported cases of OCP effects at this level of structure. But in another sense it has the appearance of a stipulation — our instinct is to seek an explanation for why multiple tokens of a given melodic unit such as a feature or an organising node are generally not possible in a single segment. Below we show that such an explanation can be found if we are willing to admit that segmental structure is represented using elements rather than features. That is, by adopting an element-based approach we can begin to understand why OCP effects are never observed at the sub-segmental level. The claim that element-based representations rule out OCP effects will be expanded in §4 and §5. This is preceded in §3 by a brief introduction to the Element Theory approach.

We begin, however, by considering the contexts where OCP effects do take place. In short, the OCP can apply wherever we get phonological contrasts. This will often be between adjacent segments, but it may also be between non-adjacent segments belonging to the same prosodic domain (e.g. syllable, foot) or the same morphological domain (e.g. root, word). In the case of OCP effects operating in these wider domains, it is possible for the effects themselves to be motivated not by the notion of contrast per se, but rather, by the more general notion of information.
¹ A reviewer has pointed out that some dependency-based models (e.g. Van de Weijer 1996) and particle-based models (Schane 1984) do permit self-conjunction in representations, making structures such as |I I| well-formed.

The role of segmental properties — as represented by units such as features or elements — is to carry linguistic information which contributes to the identity of individual
segments; and typically, this information relates to lexical contrasts. But importantly, this is not the only kind of information that segmental structure can express: it may also encode linguistic information relating to prosodic or morphological domains, and specifically, to the places where domains begin and end.² In this paper we focus on this latter kind of information, and illustrate how certain segmental properties are favoured cross-linguistically because they convey information about domains. When these properties appear at domain edges they are usually pronounced in full in order to perform their function of marking out domain boundaries. But on the other hand, when they occur in the middle of a domain the grammar tends to suppress them, either through lenition processes or through OCP effects. By studying examples of OCP effects (or identity avoidance) in different languages, it becomes possible to establish generalisations concerning (i) which segmental properties are regularly used to identify the edges of domains, and (ii) which domains are relevant to the OCP. And given that identity avoidance phenomena are observed at different structural levels, we show how this reduces to the idea that certain segmental properties are bound by certain prosodic or morphological domains.

Returning to the example of [voice] in native Japanese words, having two segments marked for [voice] in a single word is, for contrastive purposes, no different from having just one segment marked for [voice], since [voice] behaves as a morpheme-level property rather than a segment-level one. Beyond Japanese, the same applies to features that are harmonically active in vowel harmony systems. To provide the necessary background for a discussion of how OCP effects relate to segmental structure, the following section introduces the set of units or ‘elements’ employed in Element Theory. It will emerge that representing segments in terms of elements rather than traditional features allows the grammar to capture information about structural domains in a natural and intuitive way.
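The morpheme-level behaviour of [voice] just described can be pictured as a count over a whole domain rather than a check on adjacent segments. The following sketch is our illustration only, with invented names:

```python
# Illustrative sketch (invented names): a marked feature licensed at most
# once per morphological domain, as with [voice] in native Japanese words.

def domain_ok(segments, feature):
    """`segments` is one feature set per segment in the domain.
    True if at most one segment bears `feature`."""
    return sum(feature in seg for seg in segments) <= 1

# geta 'clogs': a single voiced obstruent in the morpheme -- well-formed.
geta = [{"voice"}, set(), set(), set()]
# *geda: two voiced obstruents in one native morpheme -- ruled out.
geda = [{"voice"}, set(), {"voice"}, set()]

assert domain_ok(geta, "voice")
assert not domain_ok(geda, "voice")
```

Because the count ranges over the domain, the two offending segments need not be adjacent, matching the non-local character of these restrictions.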

² Information about the location of prosodic boundaries is now understood to play a central part in language processing as well as in acquisition — see, for example, Jusczyk, Cutler and Redanz (1993).

3. Segmental structure with elements

3.1. The elements

Like feature theory, Element Theory exists in various forms — see, for example, Harris and Lindsey (1995), Nasukawa and Backley (2008, 2011), Backley and Nasukawa (2009a, 2010). The version of Element Theory used here employs the six elements listed in (2). Each one is shown with the informal name for its acoustic pattern (in brackets) together with a description of the acoustic properties usually associated with it.

(2) a. Vowel elements
       element       typical acoustic correlates
       |I| (dip)     high spectral peak (high F2 converges with F3)
       |U| (rump)    low spectral peak (low F2 converges with F1)
       |A| (mass)    central spectral energy mass (high F1 converges with F2)

    b. Consonant elements
       element       typical acoustic correlates
       |Ɂ| (stop)    abrupt and sustained drop in overall amplitude
       |H| (noise)   aperiodicity
       |N| (murmur)  periodicity

Informally, the elements divide into two sets, a vowel set comprising the resonance elements |I|, |U| and |A| and a consonant set comprising the nonresonance elements |Ɂ|, |H| and |N|. Note that this is not an absolute split, however: although |I|, |U| and |A| naturally occur in vowels, they regularly appear in consonants too; similarly, while the consonant elements |Ɂ|, |H| and |N| naturally belong in consonant structures, they may also appear in vowels. The distribution of elements is described in more detail below. Element Theory differs from traditional SPE-based feature theories in several ways. One of the basic differences is apparent from (2) — namely, that elements are described in terms of acoustic properties rather than articulation, which is the case with features developed in the SPE (Chomsky and Halle 1968) tradition. More precisely, elements are associated with specific acoustic patterns in the speech signal, where these patterns encode the linguistic information that language users instinctively pay attention to during communication. The patterns in question go by the informal names given in brackets in (2).


Another basic difference between elements and features has to do with phonetic interpretability: unlike features, elements can be pronounced on their own. For example, when the element |I| appears by itself in a nucleus it is realised as the vowel [i], while in an onset it is pronounced as the glide [j]. In this sense, elements are ‘big’ enough to function as segment-sized units (although it will be shown that they are also ‘small’ enough to combine with one another within a single segment). And what makes it possible for a single element to represent a whole segment in this way is the fact that elements refer to acoustic patterns rather than to properties of articulation. As (2) shows, the |I| element represents an acoustic pattern with a concentration of high-frequency energy, which is created by raising F2 to a point where it merges with F3. And the usual way for speakers to reproduce this pattern is to adopt a high front tongue position of the kind required for [i] and [j]. So segments containing |I| are usually palatals (e.g. [j ʃ ç ɲ]) or front vowels (e.g. [i y e æ]). Typically — though this ultimately depends on the characteristics of the vowel or consonant system in question — all other phonetic properties associated with [i]/[j] are phonologically inert, and for this reason are not explicitly encoded in segmental structure. In this respect, Element Theory departs from standard feature theory, where individual features cannot be phonetically realised. Most features refer to some aspect of speech production such as tongue position ([high], [back]…), airflow ([continuant], [lateral]…) or laryngeal state ([tense], [voice]…), but none of these properties is pronounceable on its own. So in a fully specified representation, a feature must be supported by a range of other features — that is, it must belong to a full feature matrix — before it can be interpreted phonetically by speakers. Elements and features also differ in their distribution. 
Features tend to be tied to particular syllabic positions, and are therefore associated with particular kinds of segments. For example, [anterior] is only relevant to consonants, [high] usually refers to vowels, [spread glottis] describes obstruents, and so on. By contrast, in Element Theory it is possible, at least in principle, for any element to appear in any syllabic position. So although the elements in (2) are arranged into two groups, a vowel group and a consonant group, this is neither a formal nor a rigid distinction: the labels ‘vowel element’ and ‘consonant element’ are generalisations — they refer to the acoustic and phonological characteristics of elements only in their broadest sense. In reality, the so-called vowel elements |I|, |U| and |A| regularly appear in consonants; for instance, when they combine with consonant elements in an onset or coda they represent consonant place, as shown in (3a). |A| is the place element in gutturals (e.g. pharyngeals, uvulars) and some types of coronals, while |I| represents palatals and other types of coronals (Backley 2011). Meanwhile, |U| specifies both labial place and velar place; in this sense, |U| overlaps with features such as [grave] (Jakobson and Halle 1956), [peripheral] (Rice and Avery 1991) and [dark] (Backley and Nasukawa 2009b).

(3) a. Vowel elements
       element  nucleus          onset
       |I|      front vowels     palatal, apical coronal
       |U|      rounded vowels   labial, velar
       |A|      non-high vowels  pharyngeal, uvular, laminal coronal

    b. Consonant elements
       element  onset                        nucleus
       |Ɂ|      oral or glottal occlusion    creaky voice (laryngealised Vs)
       |H|      aspiration, voicelessness    high tone
       |N|      nasality, obstruent voicing  nasality, low tone

The so-called consonant elements |Ɂ|, |H| and |N| also have a certain degree of distributional freedom. They are primarily associated with non-nuclear positions, where they represent the consonant properties of occlusion, frication and nasality, respectively. But they also appear in nuclei, where they are responsible for secondary vowel properties such as laryngealisation, tone and nasalisation, as shown in (3b). Of course, Element Theory is not unique in assuming that consonants and vowels can be represented by the same units. For example, the model of feature geometry developed in Clements and Hume (1995) proposes the shared features [labial], [coronal] and [dorsal] for encoding vowel place and also consonant place. Other features such as [continuant] are not shared, however, as there is no obvious way of linking their associated phonetic properties to both vowel and consonant articulations. By contrast, Element Theory is able to fully exploit the use of shared units because the units it employs, namely elements, are based on acoustic patterns rather than on articulation — and importantly, the same acoustic patterns are observed in consonant and vowel segments. For instance, the pattern associated with the |I| element can be seen in the spectral profiles of front vowels such as [i y e æ] and palatal consonants such as [j ʃ ç ɲ], even though front vowels and palatal consonants are not articulated in the same way.

So far, the elements have been defined in terms of their phonetic (acoustic) properties. But in fact elements are to be understood primarily as cognitive (i.e. grammatical) units, since they represent the linguistic information that is needed to distinguish one morpheme from another. (The acoustic patterns associated with the elements do no more than facilitate a mapping between these cognitive objects and the physical world.) Accordingly, it is mainly through phonological evidence that Element Theory motivates the elements themselves and determines the element structure of a given segment. The claim that the same elements are shared by consonants and vowels, as noted above, is also supported by phonological evidence — typically, by identifying patterns of consonant-vowel interaction. These patterns suggest that the relevant consonants and vowels belong to the same natural class, and thus, have some elements in common. For instance, phonological patterning in Mapila Malayalam shows that labial consonants and rounded vowels both contain |U|. In this language a word-final empty nucleus has the default pronunciation [ɨ], as shown in (4a). But if there is a rounded vowel (4b) or a labial consonant (4c) earlier in the word, then [ɨ] is itself rounded to [u].

(4) Mapila Malayalam: |U| as a shared element
    a. [kaḍalɨ]  (*[kaḍal])           ‘sea’
       [ḍressɨ]  (*[ḍress])           ‘dress’
    b. [nuːru]   (*[nuːr], *[nuːrɨ])  ‘hundred’
       [onnu]    (*[onn], *[onnɨ])    ‘one’
    c. [caːvu]   (*[caːv], *[caːvɨ])  ‘death’
       [jappu]   (*[japp], *[jappɨ])  ‘pound’

The rounding process [ɨ]→[u] is essentially an assimilation effect triggered by rounding or labiality elsewhere in the word. And because rounded vowels and labial consonants both act as triggers, we can assume they have the same triggering property, which is represented by the same element in their respective structures. Given that the same element can appear in consonants and vowels, it follows that an element can have more than one phonetic realisation (see (3) above), since consonants and vowels have quite different phonetic (and especially, articulatory) properties. With |I|, |U| and |A| it is not difficult to see how their consonantal and vocalic realisations are related. But in the case of |Ɂ|, |H| and |N| it is less apparent that their different phonetic realisations are formally linked. In consonants, the ‘stop’ element |Ɂ| provides the stopness or oral occlusion that characterises oral and nasal stops, while on its own it is interpreted as a glottal stop [Ɂ] (i.e. occlusion with no additional marked properties). And in some languages such as Capanahua, |Ɂ| also appears in nuclei, producing a laryngealised vowel to give a creaky voice effect. The phonological relation between oral occlusion in stops and creaky voice in vowels is described in Backley (2011: 122). The remaining consonant elements |H| and |N| also have dual interpretations: in consonants they represent the laryngeal properties of aspiration and obstruent voicing, respectively, while in vowels they represent high and low tone. Phonological evidence for the link between laryngeal properties and tone is discussed in Backley and Nasukawa (2010).

3.2. Segmental structure

There are two ways in which Element Theory can express lexical contrasts, as shown in (5).

(5) a. the presence versus the absence of elements
    b. dependency relations between elements

From (5a) it can be inferred that elements are monovalent or single-valued units — this is another difference between the element-based approach and standard feature theories, where features can have either a plus or a minus value. Meanwhile, (5b) describes how elements form head-dependency relations when they co-occur in the same segmental expression. The nature of those relations is described below.

Because elements have their own acoustic patterns, each element can be pronounced individually. In reality, however, most segments are represented by combinations of elements such as those in (6b), rather than by single elements. Compound expressions (i.e. structures containing two or more elements) have both phonetic and phonological complexity. They are phonologically complex in the sense that the segment in question belongs to multiple natural classes; for example, the mid vowel [e], represented as |I A|, may pattern with other I-vowels such as [ɪ ɛ æ] and/or with other A-vowels such as [a o ɒ]. They are phonetically complex too, in that they map on to multiple acoustic patterns in the speech signal; for example, the acoustic profile of [e] has the signal pattern for |A| (central mass) combined with the signal pattern for |I| (central dip).


(6)  structure    phonetic realisation(s)
     a. simplex
        |A|       [a]~[]~[]
        |I|       [i]~[ɪ]
        |U|       [u]~[ʊ]
     b. compound
        |I A|     [e]~[ɛ]~[æ]
        |U A|     [o]~[ɔ]~[ɒ]
        |I U|     [y]~[ʏ]
        |I U A|   [ø]~[œ]

As (6) shows, a compound expression can have more than one phonetic realisation. This is because the elements in a compound may combine in different proportions. The convention is to express these differences via head-dependency relations. For example, [e] and [æ] are both compounds of |I| and |A|. If |I| is the head then the compound |I A| has the phonetic value [e], in which the acoustic pattern for |I| predominates. By contrast, if |A| is the head then |I A| is realised as the low vowel [æ] (or in some languages, as the open mid vowel [ɛ]), in which case the acoustic pattern for |A| predominates. Hereafter, the head of each compound expression is marked explicitly.

(7)  structure          phonetic realisation(s)
     |I A| (|I| head)   [e]
     |I A| (|A| head)   [æ]~[ɛ]
     |U A| (|U| head)   [o]
     |U A| (|A| head)   [ɒ]~[ɔ]
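Under the assumption that a compound is just a set of elements plus a choice of head, the pairs in (7) amount to a two-key lookup. The sketch below merely re-encodes (7) for illustration; the names are invented:

```python
# Re-encoding of (7): element set + choice of head -> phonetic value.
# Sets make element order irrelevant; the head supplies the second dimension.

REALISATION = {
    (frozenset({"I", "A"}), "I"): "e",
    (frozenset({"I", "A"}), "A"): "æ",   # or [ɛ] in some languages
    (frozenset({"U", "A"}), "U"): "o",
    (frozenset({"U", "A"}), "A"): "ɒ",   # or [ɔ]
}

def realise(elements, head):
    """Look up the phonetic value of a compound given its head."""
    assert head in elements, "the head must be one of the compound's elements"
    return REALISATION[(frozenset(elements), head)]

assert realise({"I", "A"}, "I") == "e"
assert realise({"A", "I"}, "A") == "æ"   # same compound, different head
```

The point of the encoding is that the same element set yields different values depending solely on which member is head, mirroring (5b).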

Consonant elements may also appear in head or dependent form. Using laminal coronal stops as an illustration, (8) shows how the head/dependent status of |Ɂ|, |H| and |N| can significantly affect their phonetic realisation.

(8)  structure              phonetic realisation(s)
     |Ɂ A H| (|Ɂ| head)     [t’]   ejective stop
     |Ɂ A H| (|H| head)     [tʰ]   aspirated stop
     |Ɂ A H| (non-headed)   [t]    plain (released) stop
     |Ɂ A N| (|Ɂ| head)     [ɗ]    implosive
     |Ɂ A N| (|N| head)     [d]    fully voiced stop
     |Ɂ A N| (non-headed)   [n]    nasal stop

Non-headed |Ɂ|, |H| and |N| are realized as occlusion, voicelessness and nasality, respectively. But when these elements serve as the head of a compound, they introduce additional phonetic values: |Ɂ| represents ejective release, |H| represents aspiration and |N| represents obstruent voicing. These additional values reflect the fact that headed |Ɂ|, |H| and |N| are acoustically more prominent — and perceptually more salient — than their non-headed counterparts. Phonological evidence for the headedness distinctions in (8) is discussed in Backley and Nasukawa (2009a).

To summarise, contrasts are expressed by the simple presence versus absence of elements, and also by headedness (i.e. the headed versus non-headed status of elements in a compound). On this basis, nothing is gained by allowing a segment to contain more than one token of an element; for example, |I N| carries the same linguistic information as *|I N N|, making the duplication of |N| unnecessary for the purposes of contrast. From the point of view of communication too, the structure *|I N N| does not make grammatical sense: the element |N| represents a specific acoustic pattern in the speech signal, and language users cannot produce or perceive two tokens of this pattern at the same time. So if two tokens of an element provide the same amount of linguistic information as a single token of that element, then there is no reason for the grammar to allow duplicate elements as a structural possibility. In essence, to claim that grammars disallow structures such as *|I N N| is to claim that OCP violations never occur in segmental structure, because two tokens of the same element never co-exist in the same expression.³ Thus, segmental contrasts are maximally satisfied within a segment.
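The claim that *|I N N| carries no more information than |I N| can be made concrete: if a segment's melodic content is modelled as a set of elements, duplicate tokens are unrepresentable by construction. This modelling choice is ours, offered purely as an illustration of the point:

```python
# If segmental content is a set, *|I N N| cannot even be stated:
# a set collapses duplicate tokens automatically.

segment_a = frozenset({"I", "N"})
segment_b = frozenset({"I", "N", "N"})   # attempted duplicate |N|

assert segment_a == segment_b            # same object: no extra information
assert len(segment_b) == 2               # the second |N| token simply vanishes
```

On this encoding an OCP statement over segment-internal structure would be vacuous, which mirrors the text's observation that no OCP effects are reported at this level.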

³ At first sight, Particle Phonology (Schane 2005) seems to contradict this assumption as it allows the A particle to appear more than once in an expression, e.g. IAA. But in fact, multiple tokens of a particle simply translate into added prominence — something which we claim is achieved using headedness instead.

3.3. Prosodic structure

The previous section described elements as the basic units of segmental representation. Segmental contrasts are therefore based on element structure. But phonological information is concerned not only with segments — it also refers to the relations that hold between those segments. That is, phonological information includes information about prosodic structure. Traditionally, segmental structure and prosodic structure are seen as being different in kind, the first being concerned with properties that determine the phonetic realisation of segments and the second with properties that organise segments into grammatical strings. Yet on closer inspection it appears that both are constructed along similar lines. Specifically, segmental structure relates to the criteria in (9), repeated from (5), while prosodic structure relates to the criteria in (10).

(9)

Dimensions of segmental structure
a. the presence versus the absence of an element
b. dependency relations between elements

(10) Dimensions of prosodic structure
     a. the presence versus the absence of a position (C/V)
     b. dependency relations between positions

Clearly, prosodic structure involves units (positions) that are different from those employed in segmental structure (elements). But interestingly, there is a close parallel in the way these units function in the grammar. In both cases (i) structural differences are encoded by the presence or absence of the relevant units, and (ii) structure is formed by allowing those units to combine by forming head-dependency relations. To illustrate the role of head-dependency relations in prosodic structure, consider the representation in (11).

(11) English puppy [ˈpʰʌpi]

     foot
     ├─ C1–V1 (head syllable):       C1 = |U Ɂ H| → [pʰ]   V1 = |A| → [ʌ]
     └─ C2–V2 (dependent syllable):  C2 = |U Ɂ H| → [p]    V2 = |I| → [i]

     (upper tier: foot and syllable positions; lower tier: segmental structure)

The English word puppy [ˈpʰʌpi] consists of four positions grouped into two syllable-sized units, C1-V1 and C2-V2. It is generally agreed that an asymmetric relation exists within each CV unit, the C position being dependent on the following V position. Head-dependency applies at higher levels of structure too, with the two syllables of puppy combining asymmetrically to form a foot. In (11) the left-hand syllable is the head of the (trochaic) foot domain, while in other English words (e.g. machíne, appéar) a right-headed (iambic) foot is also possible. It is clear that head-dependency at the foot level underlies English word stress. More generally, this example illustrates how (10b) parallels (9b), in that dependency relations are central to the representation of prosodic structure, just as they are to the representation of element-based segmental structure. And although it is not apparent from (11), it is also the case that (10a) parallels (9a), in that prosodic differences and segmental differences can each be expressed as the presence versus the absence of structural units. This is obviously true for segmental differences, given that |I A| (= [e]) and |I| (= [i]), for example, are distinctive. But it also applies to prosodic differences, since there are certain prosodic units — namely, dependents — that are optional. It may be argued, for instance, that English words such as apple, open lack a C1 position. And because they have no initial onset position, these words cannot begin phonetically with a consonant. Clearly, vowel-initial words are lexically distinct from consonant-initial words. These parallels make it possible to unify prosodic structure and segmental structure as a general category of phonological structure, defined as in (12).

(12) Dimensions of phonological structure
     a. the presence versus the absence of a structural unit
     b. dependency relations between structural units

Below it will be explained how integrating prosodic and segmental structure in this way is relevant to the OCP — and specifically, to the claim that OCP effects, when they do occur, are never observed within segmentinternal structure. The following section shows how language users employ dependency relations between elements (i.e. the difference between headed and non-headed elements) to encode information relating to the location of prosodic domains. In particular, it will be claimed that headed elements function as boundary markers for prosodic domains. In many cases, therefore, the absence of a headed element is not the result of an OCP effect; rather, it is the result of there being no domain boundary to demarcate.

4. Prosodic demarcation: the distribution of headed elements

In the version of Element Theory used here, it is assumed that linguistic information of any kind — or in the spirit of (12), phonological structure of any kind — can only be communicated if it is phonetically realised. And furthermore, this information can only be phonetically realised if it is expressed in terms of segmental structure — that is, using elements. From this it follows that elements must be capable of encoding information about prosodic domains in addition to information about segmental contrasts. The remainder of this paper describes how speakers communicate information about prosodic (and in addition, morphological) structure.

Elsewhere we have argued that the acoustic cues associated with element headedness are used in many languages to mark out prosodic and morphological domains (Backley and Nasukawa 2009a). Generally speaking, positions located at the boundaries of a domain — typically those at the left boundary (and less commonly, at the right boundary) — tend to favour headed elements; so when listeners perceive a headed element, they take this as a reliable cue for locating the edge of a domain. As the psycholinguistics literature confirms (e.g. Cutler and Norris 1988), knowing where domains begin and end can help listeners to process language more efficiently. Note that it is a language-specific matter as to which domain will be identified by headedness; in some languages it is a prosodic domain such as a syllable, foot or prosodic word, while in others it is a morphological domain such as a root, stem or word.


In principle, any element may function as a domain marker. For instance, in vowel harmony systems the vowel elements |I| and |U| often assume this role. More generally, however, it is the consonant elements |Ɂ|, |H| and |N| which typically function in this way. So, when listeners perceive a headed |Ɂ|, |H| or |N|, they interpret this as indicating the (left) edge of a prosodic or morphological domain. Recall from (8) that headed |Ɂ|, |H| and |N| are associated with quite distinctive acoustic cues, since these elements signal ejective release, aspiration and full obstruent voicing, respectively.

Stop aspiration in English is frequently cited as an example of how a headed |H| element can function as a prosodic marker, since it is tied to the left edge of a foot, as in [kh]ookie, re[ph]eat. In Element Theory this property is represented by the presence of headed |H| (cf. non-headed |H| in unaspirated stops such as [k] in [ˈstɪki] sticky and [p] in [ˈhæpi] happy, where the relevant stop occupies a foot-internal position). Aspiration is prosodically conditioned in Swedish too, but in this case the relevant domain is the word; hence, stops are aspirated word-initially and unaspirated elsewhere, as shown in (13).

(13)

Swedish: word-initial |H|

[ph]acka   ‘pack’     [ph]  =  |Ɂ U H|  (headed |H|)
kö[p]a     ‘buy’      [p]   =  |Ɂ U H|  (non-headed |H|)
kö[p]-te   ‘bought’   [p]   =  |Ɂ U H|  (non-headed |H|)

According to one view, aspiration in English and Swedish functions as an additional property which is superimposed on to a plain stop in the relevant prosodic contexts. This would involve promoting an existing |H| element (representing unaspirated stop release) to a headed |H| element (representing aspiration). There is an alternative line of explanation, however, which assumes that all fortis stops are inherently aspirated. On this basis, aspiration is then said to be preserved in favoured contexts (i.e. foot-initial or word-initial, depending on the language) but suppressed in all other contexts. Suppressing aspiration would amount to the suppression of headedness — a structural change that has been used elsewhere in the Element Theory literature to capture vowel reduction and other weakening processes.

Like headed |H|, the stop element |Ɂ| also serves as a domain marker when it is headed. Recall that non-headed |Ɂ| identifies oral and nasal stops, which generally occur quite freely in non-nuclear positions, whereas headed |Ɂ| denotes ejective stops (or, in some languages, tense stops), which are (i) typologically much less common than plain stops and (ii) usually more restricted in their distribution. A well-documented case is that of Korean, which has a three-way distinction between plain, aspirated and tense/ejective stops. These are represented as in (14a), which uses the labial series to illustrate the relevant contrasts.

(14)

Korean: syllable-initial |H| and |Ɂ|

a.  plain       [p]    |U H Ɂ|                  [paŋ]    ‘room’
    aspirated   [ph]   |U H Ɂ|  (headed |H|)    [phaŋ]   ‘bang’
    tense       [p’]   |U H Ɂ|  (headed |Ɂ|)    [p’aŋ]   ‘bread’

b.  unreleased  [p˺]   |U Ɂ|                    [ip˺]    ‘leaf’

In Korean it is the syllable domain which is marked out by headedness: tense stops with headed |Ɂ| and aspirated stops with headed |H| are contrastive in the syllable onset (i.e. the left edge of the syllable domain), while in coda position the three-way stop distinction neutralises to a plain unreleased stop, as in (14b). Because [p˺] in (14b) has no prosodic marking function to perform, its structure is partially suppressed. 4 It cannot contain a headed element, so it must be interpreted as a plain stop; it also loses |H| (representing audible stop release), resulting in non-release. And Korean is not an isolated case; ejectives have a similar distribution in some native American languages too, including Klamath, Cuzco Quechua, Maidu, Navajo and Dakota (Rimrott 2003). 5

The remaining consonant element |N| can also function as a domain marker when it is headed, in which case it is phonetically realised as obstruent voicing. (Recall from (8) that headed |N| is interpreted as full voicing in obstruents while non-headed |N| is interpreted as nasality in sonorants.) This is observed in some dialects of Japanese, including the variety spoken in the Northern Tohoku region (Nasukawa 2005), where the relevant domain is the prosodic word. In this system, therefore, headed |N| marks word-initial position, as illustrated in (15).

4 A reviewer has suggested that the lack of headed |Ɂ|/|H| in neutralised stops may help listeners identify the syllable domain ‘recessively’ by contributing to the syntagmatic difference between onset and coda consonants.
5 Although strong properties such as headed |Ɂ| and |H| typically occur at the left edge of a domain, there are also languages in which the right boundary of a domain is demarcated instead. We thank a reviewer for pointing out that some English dialects have ejective release (i.e. they contain headed |Ɂ|) exclusively in word-final position. See also the case of Kaqchikel, which is described in (16) below.

(15)

Northern Tohoku Japanese

a.  Word-initially: headed |N| vs. non-headed |N|
    fully voiced stop   [b]    |Ɂ U H N|  (headed |N|)      [biɴ]    ‘bottle’
    nasal stop          [m]    |Ɂ U N|    (non-headed |N|)  [moɴ]    ‘gate’
    plain stop          [p]    |Ɂ U H|    (no |N|)          [petto]  ‘pet’

b.  Word-internally: non-headed |N| only (*headed |N|)
    pre-nasalised stop  [mb]   |Ɂ U H N|  (non-headed |N|)  [sambi]  ‘rust’
    nasal stop          [m]    |Ɂ U N|    (non-headed |N|)  [kami]   ‘paper’

As (15a) shows, word-initial position supports a contrast between voiced stops (headed |N|), nasal stops (non-headed |N|) and plain stops (no |N|). Word-internally, however, voiced stops display a weakening effect whereby headed |N| loses its headedness to leave non-headed |N|. The resulting expression with non-headed |N| is phonetically realised as a pre-nasalised stop 6 in this particular dialect. 7 Again, headedness is suppressed because the consonant in question does not occupy a domain-initial position, and therefore has no prosodic marking function to perform.

Since headed |N| in Northern Tohoku Japanese acts as a domain boundary marker and is restricted to word-initial position, it is forced to undergo some kind of structural change when it appears in other contexts. In this respect the Japanese pattern illustrated here is typical, in that it introduces a minimal structural change whereby the element itself is retained but is reduced to its non-headed form.
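As an informal illustration of this demarcation logic, a listener who treats headed |Ɂ|, |H| and |N| as left-edge cues can be modelled as a simple parser. The Python sketch below is only a toy: the encoding of expressions as (element set, head) pairs and the sample string are assumptions of this illustration, not part of the analysis.

```python
# Toy model of a listener who posits a new (word) domain whenever a
# headed |Ɂ|, |H| or |N| is perceived. Each expression is modelled as
# (set_of_elements, head), where head is the headed element or None.
# This encoding is an assumption made for the sake of illustration.

HEADED_MARKERS = {"Ɂ", "H", "N"}

def segment(expressions):
    """Split a sequence of expressions into domains, opening a new
    domain at every segment whose head is |Ɂ|, |H| or |N|."""
    domains = []
    current = []
    for elements, head in expressions:
        if head in HEADED_MARKERS and current:
            domains.append(current)   # headed marker => new left edge
            current = []
        current.append((elements, head))
    if current:
        domains.append(current)
    return domains

# Two fully voiced stops (headed |N|) imply two separate words,
# mirroring the Northern Tohoku pattern in (15): [b]V [b]V
utterance = [({"Ɂ", "U", "H", "N"}, "N"), ({"I"}, None),
             ({"Ɂ", "U", "H", "N"}, "N"), ({"A"}, None)]
print(len(segment(utterance)))  # → 2
```

On this toy view, the restriction of headed |N| to one token per word falls out of its boundary-marking function rather than from any co-occurrence principle.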

6 The voiced stops in present-day Japanese are thought to derive historically from intervocalic prenasalised stops (Vance 1987), which suggests that voicing is a neutralisation process and that prenasalised stops are structurally stronger than their voiced counterparts. Alternatively, however, it could be argued that the voicing effect in question is one of spontaneous voicing (Nasukawa 2005), which is regularly observed in intervocalic position. On this basis, the voicing effect would constitute a weakening process in which a marked laryngeal property is lost.
7 In consonants, non-headed |N| has two phonetic interpretations: when combined with |H| (which broadly defines the class of obstruents) it produces prenasalisation (e.g. [mb] |Ɂ U H N|), and without |H| it produces nasality in sonorants (e.g. [m] |Ɂ U N|). For a full description, see Nasukawa (2005).


In all of the languages discussed so far, structural markers (i.e. headed elements) have identified the left boundary of the relevant domain. And indeed, this appears to be the default case. However, if some grammars highlight domain-initial position as being linguistically significant, then we can expect to find other grammars which assign prominence to domain-final position too, at least as the marked option. Kaqchikel (a Mesoamerican language spoken in Guatemala) is a case in point, where aspiration represented by headed |H| serves as the domain marker and where the relevant domain is the prosodic word (Nasukawa, Yasugi and Koizumi 2013). In Kaqchikel, however, headed |H| is anchored to the right edge of the word domain, as shown in (16).

(16)

Kaqchikel: word-final |H|

a.  aspirated (word-final)       [ph]  |Ɂ U H|  (headed |H|)      [toph]     ‘crab’
b.  unaspirated (word-initial)   [p]   |Ɂ U H|  (non-headed |H|)  [paȿ]      ‘skunk’
c.  unaspirated (word-internal)  [p]   |Ɂ U H|  (non-headed |H|)  [pispiɁç]  ‘gizzard’

In this language, aspirated stops containing headed |H| are restricted to word-final position. Whenever (what is lexically) an aspirate occurs in any other context, it undergoes the same suppression of headedness that we observed in (13) for Swedish.

Up to this point we have only considered languages in which prosodic domains are marked by the consonant elements |Ɂ|, |H| and |N|. But it is also possible for vowel elements to act as prosodic markers — recall from §3.1 that the vowel elements |I|, |U| and |A| appear in consonants by serving as place elements. In Arabic, for example, there is a ban on two labial consonants co-occurring in the same root, as well as a ban on two velars (Alderete, Tupper and Frisch 2012). Initially, these two co-occurrence restrictions appear to operate independently, but in fact it is possible to treat the two in parallel since Element Theory uses the same element to represent both labials and velars: headed |U| encodes labial place while non-headed |U| encodes velar place (Backley and Nasukawa 2009b). We may generalise, therefore, by saying that the |U| element demarcates a root domain in Arabic, where |U| subsumes the natural class of labials and velars. Meanwhile, in other languages the headed or non-headed status of an element can affect that element’s ability to serve as a domain marker. For instance, there are languages in which domains are marked out by labials (headed |U|) but not by velars (non-headed |U|); examples include Zulu (Doke 1926) and Ponapean (Rehg and Sohl 1981, Goodman 1995). In contrast to the situation just described, it is unusual to find headed |I| or headed |A| operating as a prosodic marker. This is consistent with the view held by some Element Theory scholars (e.g. Van der Torre 2003) that |U| is more consonantal in character than either |I| or |A|. 8 On this basis, we can expect |U| to pattern with |Ɂ|, |H| and |N| in a way that |I| and |A| do not. It is not surprising, therefore, that headed |U| is able to function as a prosodic marker just like headed |Ɂ|, |H| and |N|.

5. Discussion

The examples in §4 demonstrate how particular segment types such as aspirates, ejectives and voiced obstruents can sometimes have a restricted distribution, being permitted to occur only in domain-initial (or more rarely, domain-final) position. In other words, only one occurrence of these segments may be allowed in a given domain. 9 And to account for this distributional pattern, scholars have typically made appeal to the OCP. As discussed in §1, the OCP serves as a meta-principle which rules against the co-occurrence of identical objects. But in reality, treating these patterns as having resulted from the OCP does no more than put a label on them — it does not explain how or why more than one token of a particular segmental property is banned within certain domains. Moreover, an approach based on the OCP would require the OCP itself to be redefined, since the established form of the OCP as given in (1) refers only to identical units that are adjacent, and in the examples in (13)-(16) the relevant segments are not strictly adjacent — they merely belong to the same domain.

8 Owing to the vocalic bias of |I|, almost all languages have front vowels of some kind whereas not all have palatal consonants. And owing to the consonantal bias of |U|, almost all languages have labial consonants whereas some lack rounded vowels. Acquisition also highlights the consonantal nature of |U|, in that infants typically acquire the labials [p] and [m] before acquiring the rounding contrast in vowels. We acknowledge that the Element Theory formalism would benefit from a way of capturing this asymmetry explicitly.
9 Further examples are discussed in MacEachern (1999) and Blust (2012).

The alternative proposed here does not rely on the OCP in any form. Rather, it argues that there are some languages in which consonantal features such as aspiration, ejectiveness and obstruent voicing function not as segmental properties but as domain markers. In a sense, this approach echoes Firthian prosodic analysis to the extent that a given property may be associated with a domain larger than the segment. According to this view, a ban on the appearance of two headed expressions in a domain is entirely expected — using multiple markers to identify a single prosodic domain would introduce unnecessary ambiguity and, in effect, blur rather than highlight the location of domain boundaries. But if a domain has just one prosodic marker in the way we claim, then no ambiguity arises: when listeners perceive two tokens of |H| (or |Ɂ| or |N|) in a segmental string, they immediately associate each token with a separate domain. At this stage it would be ambitious to claim that the OCP can be dispensed with altogether, as it still appears to fulfil the role of maintaining non-identity between units that are structurally adjacent. However, when the units in question are not adjacent — and crucially, when they co-occur within the same structural domain, as exemplified by the languages we have introduced — then there is a strong case for viewing their distribution as something which emerges from their status as domain markers, rather than from a well-formedness principle such as the OCP.

In the examples considered so far, these domains refer to prosodic constituents. But morphological domains can also be demarcated in the same way. This happens in native Japanese words, where the prosodic marker is headed |N|. The native lexicon is subject to a process of sequential voicing (or Rendaku), in which a plain obstruent becomes voiced (i.e. it acquires headed |N|) when it occupies the initial onset of the second part of a compound. There is, however, a restriction on the operation of Rendaku known as Lyman’s Law, which states that only one headed |N| element is allowed in a single morpheme. This restriction is illustrated in (17).

(17)

Lyman’s Law in native Japanese words (Nasukawa 2005: 5)

a.  sabi      ‘rust’          *zabi
    sabaki    ‘judgement’     *zabaki, *sabaɡi, *zabaɡi
    tsubasa   ‘wing’          *dzubasa, *tsubaza, *dzubaza
    saži      ‘spoon’         *zaži
    kazari    ‘decoration’    *ɡazari
    toɡe      ‘thorn’         *doɡe
    tokaɡe    ‘lizard’        *dokaɡe, *toɡaɡe, *doɡaɡe

b.  beni      ‘rouge’
    niži      ‘rainbow’
    tsubame   ‘swallow’
    naɡisa    ‘beach, shore’
    mikado    ‘emperor, emperor’s palace’
    nezumi    ‘rat, mouse’
    nokoɡiri  ‘saw’

Headed |N| is phonetically interpreted as obstruent voicing and reliably identifies individual morphemes in the native Japanese lexicon. So, when listeners encounter a phonological string containing two voiced obstruents (i.e. two tokens of headed |N|) they must arrive at one of three conclusions: (i) the string is ill-formed (e.g. *zabi, *gazari, etc. are impossible as native morphemes), or (ii) it is a borrowed word, in which case it is outside the scope of Lyman’s Law, or (iii) it is a well-formed string comprising two separate morphemes (note: compounds count as single morphemes). Interestingly, unlike the prosodic markers discussed earlier, the morphological marker |N| in native Japanese words is not anchored to a domain boundary — it may occur morpheme-initially (e.g. beni ‘rouge’) or morpheme-internally (e.g. toge ‘thorn’). Nevertheless, we assume that it is still able to facilitate language processing to the extent that it helps listeners segment the continuous speech stream into individual morphemes.

To summarise, in many languages we find the consonant elements |Ɂ|, |H| and |N| functioning as domain markers when in their headed form. In addition, there are languages in which certain vowel elements show a similar behaviour (e.g. |U| in Zulu). We have argued that the consistent use of headed elements to demarcate the boundaries of prosodic and morphological domains provides an explanation for why headed elements often have a restricted distribution. To avoid ambiguity, only one boundary of a domain (usually the left) is marked out by a headed element, so as a result, the appearance of headed elements is limited to one per domain (e.g. syllable, foot, word, stem). This is indeed the pattern observed in a range of languages, where headed consonant elements are phonetically interpreted as strong segmental properties such as aspiration, ejective release and full voicing.
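The listener’s inference under Lyman’s Law can be stated as a toy procedure. In the Python sketch below, voiced-obstruent letters in a romanised form simply stand in for tokens of headed |N|; this orthographic shortcut is an assumption of the illustration, not part of the analysis.

```python
# Toy check of Lyman's Law: a single native Japanese morpheme may
# contain at most one voiced obstruent (i.e. one token of headed |N|).
# Detecting voiced obstruents by letter is a simplification based on
# the romanised transcriptions in (17).

VOICED_OBSTRUENTS = set("bdgz")  # stand-ins for headed |N| tokens

def count_headed_N(form):
    """Count the tokens of headed |N| (voiced obstruents) in a form."""
    return sum(1 for ch in form if ch in VOICED_OBSTRUENTS)

def possible_native_morpheme(form):
    """True if the form could be a single native morpheme: it must
    contain at most one headed |N|."""
    return count_headed_N(form) <= 1

print(possible_native_morpheme("sabi"))  # → True  (one headed |N|)
print(possible_native_morpheme("toge"))  # → True
print(possible_native_morpheme("zabi"))  # → False (two headed |N|)
```

A form that fails this check forces the listener towards conclusions (ii) or (iii) above: it must be a loan or a string of more than one morpheme.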
Traditionally, this distributional pattern has been accounted for by appealing to the OCP, but in a strict sense the OCP should not even apply in such cases because they do not involve adjacent segments. It may be noted that, from the small number of examples considered here, there seems to be no way of predicting which elements (if any) will function as domain markers in a given language, or indeed, which domain will be marked out. Evidently, both come down to parametric choice. Inevitably, this raises the question of overgeneration, given that both parameters can be set independently and both have multiple settings. In practice, however, the number of possible patterns involving a domain marker is constrained by the fact that Element Theory uses only six elements, and also by the fact that the range of prosodic constituents (syllable, foot, super-foot, phonological word) and morphological domains (root, stem, affix) is relatively small. As a result, the question of generating an excess of typological possibilities is not an overriding issue. 10

Acknowledgements

This paper was first presented in April 2011 at the 34th annual GLOW colloquium, hosted by the University of Vienna. We thank the colloquium participants and two reviewers for their insightful comments. The authors are responsible for any remaining inaccuracies.

10 Bye (2011) discusses the possibility of using standard features, rather than elements, to identify prosodic domains. We are confident that his observations can be recast in terms of the element representations used here.

References

Alderete, John, Paul Tupper, and Stefan Frisch. 2012. Learning phonotactics without rules: a connectionist model of OCP-Place in Arabic. Paper presented at the 48th annual meeting of the Chicago Linguistics Society.
Backley, Phillip. 2011. An Introduction to Element Theory. Edinburgh: Edinburgh University Press.
Backley, Phillip, and Kuniya Nasukawa. 2009a. Headship as melodic strength. In Strength Relations in Phonology, Kuniya Nasukawa and Phillip Backley (eds.), 47–77. Berlin/New York: Mouton de Gruyter.
Backley, Phillip, and Kuniya Nasukawa. 2009b. Representing labials and velars: a single ‘dark’ element. Phonological Studies 12: 3–10.
Backley, Phillip, and Kuniya Nasukawa. 2010. Consonant-vowel unity in Element Theory. Phonological Studies 13: 21–28.
Blust, Robert. 2012. One mark per word? Some patterns of dissimilation in Austronesian and Australian languages. Phonology 29: 355–381.
Boll-Avetisyan, Natalie, and René Kager. 2004. Identity avoidance between nonadjacent consonants in artificial language segmentation. Natural Language and Linguistic Theory 22: 179–228.
Bye, Patrik. 2011. Dissimilation. In The Blackwell Companion to Phonology, Marc van Oostendorp, Colin Ewen, Elizabeth Hume and Keren Rice (eds.), 1408–1433. Oxford: Wiley-Blackwell.
Chomsky, Noam, and Morris Halle. 1968. The Sound Pattern of English. Cambridge, MA: MIT Press.
Clements, George N., and Elizabeth Hume. 1995. The internal organization of speech sounds. In The Handbook of Phonological Theory, John A. Goldsmith (ed.), 245–306. Oxford: Blackwell.
Cutler, Anne, and Dennis Norris. 1988. The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance 14: 113–121.
Doke, Clement M. 1926. The Phonetics of the Zulu Language. Johannesburg: University of Witwatersrand Press.
Frisch, Stefan A., Janet B. Pierrehumbert, and Michael B. Broe. 2004. Similarity avoidance and the OCP. Natural Language and Linguistic Theory 22: 179–228.
Goldsmith, John A. 1976. Autosegmental phonology. Ph.D. dissertation, Massachusetts Institute of Technology. Published 1979, New York: Garland.
Goodman, Beverley. 1995. Features in Ponapean phonology. Ph.D. dissertation, Cornell University.
Harris, John, and Geoff Lindsey. 1995. The elements of phonological representation. In Frontiers of Phonology: Atoms, Structures, Derivations, Jacques Durand and Francis Katamba (eds.), 34–79. Harlow, Essex: Longman.
Jakobson, Roman, and Morris Halle. 1956. Fundamentals of Language. The Hague: Mouton.
Jusczyk, Peter W., Anne Cutler, and Nancy J. Redanz. 1993. Infants’ preference for the predominant stress patterns of English words. Child Development 64: 675–687.
Leben, William R. 1973. Suprasegmental phonology. Ph.D. dissertation, Massachusetts Institute of Technology.
MacEachern, Margaret. 1999. Laryngeal Co-occurrence Restrictions. New York: Garland.
McCarthy, John. 1986. OCP effects: gemination and antigemination. Linguistic Inquiry 17: 207–263.
Nasukawa, Kuniya. 2005. A Unified Approach to Nasality and Voicing. Berlin/New York: Mouton de Gruyter.
Nasukawa, Kuniya, and Phillip Backley. 2008. Affrication as a performance device. Phonological Studies 11: 35–46.
Nasukawa, Kuniya, and Phillip Backley. 2011. The internal structure of ‘r’ in Japanese. Phonological Studies 14: 27–34.
Nasukawa, Kuniya, Yoshiho Yasugi, and Masatoshi Koizumi. 2013. Syllable structure and the head parameter in Kaqchikel. In Studies in Kaqchikel Grammar, MIT Working Papers on Endangered and Less Familiar Languages 8, Michael Kenstowicz (ed.), 81–95.
Ohala, John J. 1981. The listener as a source of sound change. In Papers from the Parasession on Language and Behavior, 17th Annual Regional Meeting of the Chicago Linguistic Society, Roberta A. Hendrick, Carrie S. Masek and Mary Frances Miller (eds.), 178–203. Chicago, IL: Chicago Linguistic Society.
Ohala, John J. 1993. The phonetics of sound change. In Historical Linguistics: Problems and Perspectives, Charles Jones (ed.), 237–278. London: Longman.
Ohala, John J. 2003. Phonetics and historical phonology. In The Handbook of Historical Linguistics, Brian Joseph and Richard Janda (eds.), 669–686. Oxford: Blackwell.
Rehg, Kenneth L., and Damian G. Sohl. 1981. Pohnpei Reference Grammar. Honolulu, HI: University of Hawaii Press.
Rice, Keren, and Peter Avery. 1991. On the relationship between coronality and laterality. In The Special Status of Coronals: Internal and External Evidence. Phonetics and Phonology, vol. 2, Carole Paradis and Jean-François Prunet (eds.), 101–124. San Diego, CA: Academic Press.
Riemsdijk, Henk C. van. 2008. Identity avoidance: OCP-effects in Swiss relatives. In Foundational Issues in Linguistic Theory: Essays in Honor of Jean-Roger Vergnaud, Robert Freidin, Carlos Peregrín Otero and Maria Luisa Zubizarreta (eds.), 227–250. Cambridge, MA: MIT Press.
Rimrott, Anne. 2003. Typology report II: ejective stops. Ms., Simon Fraser University, British Columbia.
Schane, Stanford A. 1984. The fundamentals of particle phonology. Phonology Yearbook 1: 129–155.
Schane, Stanford A. 2005. The aperture particle |a|: its role and functions. In Headhood, Elements, Specification and Contrastivity, Phil Carr, Jacques Durand and Colin J. Ewen (eds.), 119–132. Amsterdam: John Benjamins.
Torre, Erik Jan van der. 2003. Dutch sonorants: the role of place of articulation in phonotactics. Ph.D. dissertation, Leiden University.
Vance, Timothy. 1987. An Introduction to Japanese Phonology. New York: SUNY Press.
Weijer, Jeroen van de. 1996. Segmental Structure and Complex Segments. Tübingen: Niemeyer.
Weijer, Jeroen van de. 2012. Grammar as Selection: Combining Optimality Theory and Exemplar Theory. Nagoya: Kougaku Shuppan.
Yip, Moira. 1988. The Obligatory Contour Principle and phonological rules: a loss of identity. Linguistic Inquiry 19: 65–100.
Yip, Moira. 1998. Identity avoidance in phonology and morphology. In Morphology and its Relation to Phonology and Syntax, Steven G. Lapointe, Diane K. Brentari and Patrick M. Farrell (eds.), 216–246. Stanford, CA: CSLI.


Rhyme as phonological multidominance

Marc van Oostendorp

1. Introduction

Poetry is a form of language that makes prominent use of phonological identities and near-identities. A typical (classicist) poem consists of stanzas that have the same shape — e.g. three lines of four trochees each, followed by a line of two trochees (Kiparsky 2006). The repetition of feet — the fact that each line consists of a sequence of trochees — is another example. And a third well-known example is rhyme: the fact that sequences of vowels and/or consonants reoccur throughout the poem in the same order:

(1)

Love looks not with the eyes, but with the mind;
And therefore is winged Cupid painted blind.
(William Shakespeare, Midsummer Night’s Dream)

People clearly have intuitions about poetic rhyme (Katz 2008, Kawahara and Shinohara 2009, Kawahara 2007): some rhymes may be more perfect than others. Furthermore, it is commonly accepted that rhyme (and alliteration) are typically based on phonological constituency. The prototypical alliteration pattern affects an onset; the prototypical rhyme affects a nucleus and a coda (i.e. a phonological rime; I will use the two spellings to distinguish the two theoretical objects). There are no known poetic traditions that use sound patterns which are not known in one way or the other in ‘ordinary’ phonologies of human language (Fabb 2010). Yet another important property is that in many traditions the two rhyming parts are not supposed to be completely identical.

How is rhyme calculated and how is it represented? The literature about this topic is far from extensive, but several linguists have pointed out the similarity between reduplication and rhyme (Kiparsky 1970, 1973; Holtman 1996; Yip 1999), and it seems indeed quite plausible that rhyme somehow uses the same machinery as reduplication. In this article, I will explore this view a little more. What representational power do we need to get a satisfactory view of rhyme? If it is based on reduplication, what is a plausible view on the latter that can also accommodate rhyme? And what about the fact that rhyme does not work with perfect copies?
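As a very rough illustration of the constituency point (end-rhyme compares final rimes rather than whole words), consider the following toy sketch. It operates on spelling rather than on phonological representations, so it is only a convenient approximation; the vowel set is an assumption of the illustration.

```python
# Toy end-rhyme test: two words rhyme if their final rimes match,
# where the final rime is approximated as the substring from the last
# vowel letter onward. Real rhyme judgements are phonological, not
# orthographic; this is only an illustrative stand-in.

VOWELS = set("aeiou")

def final_rime(word):
    """Return the final rime of a word, approximated orthographically
    as the last vowel letter plus everything after it."""
    word = word.lower()
    for idx in range(len(word) - 1, -1, -1):
        if word[idx] in VOWELS:
            return word[idx:]
    return word

def rhymes(w1, w2):
    return final_rime(w1) == final_rime(w2)

print(rhymes("mind", "blind"))  # → True
print(rhymes("mind", "eyes"))   # → False
```

Note that this test demands perfect identity of the two rimes, which is exactly what many rhyming traditions do not require; that discrepancy is part of what the representational discussion below must address.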

2. The representation of reduplication

Adopting the view that rhyme uses the means of reduplication does not immediately answer the question how to represent rhyme, as there are several very different theories of reduplication. One important split concerns the question which module is responsible for the actual copying. A relatively large body of literature opts for phonology. For example, within Prosodic Morphology and Optimality Theory (see McCarthy and Prince 1993, 1995a; McCarthy, Kimper and Mullin 2012 for a few representative publications), it is often assumed that a reduplicative form has a phonologically abstract morpheme RED in the input to the phonology (see also Marantz 1984):

(2)

tjilpa-tjilparka ‘bird species’ (Diyari)
a.  input to phonology:   RED-tjilparka
b.  output of phonology:  tjilpa-tjilparka

Thus in some languages the morphosyntax can insert the abstract morpheme RED; it is sometimes assumed that this morpheme is specified as a prosodic constituent (a syllable, a foot, a word) without segmental content, and at other times as something even more empty and abstract (in which case the prosodic shape is determined by independent constraints). The phonology then fills this empty space with copies from the ‘base’ morpheme.

An alternative view of reduplication, espoused by Inkelas and Zoll (2005), Aboh (2007) and Alexiadou (2010), on the other hand, claims that morphosyntax is responsible for the copying. Phonology merely prunes the full input representation:

(3)

tjilpa-tjilparka ‘bird species’ (Diyari)
a.  input to phonology:   tjilparka-tjilparka
b.  output of phonology:  tjilpa-tjilparka
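The mapping in (3) can be sketched as a toy procedure: copy the base in full, then truncate the first copy to a bisyllabic template. In the Python sketch below, the vowel inventory and the orthographic syllable counting are crude stand-ins for real prosodic parsing, introduced purely for illustration.

```python
# Toy sketch of the 'morphosyntax copies, phonology prunes' view:
# the input contains two full copies of the base, and the first copy
# is trimmed to a bisyllabic template. Counting vowel letters is an
# orthographic approximation of counting syllable nuclei.

VOWELS = set("aiu")  # assumed vowel letters for the Diyari example

def truncate_to_two_syllables(base):
    """Keep the base up to and including its second vowel letter,
    approximating a bisyllabic prefix template."""
    nuclei = 0
    for idx, ch in enumerate(base):
        if ch in VOWELS:
            nuclei += 1
            if nuclei == 2:
                return base[: idx + 1]
    return base

def reduplicate(base):
    """Full copy supplied by 'morphosyntax', pruned by 'phonology'."""
    return truncate_to_two_syllables(base) + "-" + base

print(reduplicate("tjilparka"))  # → tjilpa-tjilparka
```

The point of the sketch is only that pruning a full copy to a template is a trivial operation once the copy exists; the substantive question is which module supplies the copy.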

In this case, morphosyntax does not hand any abstract node over to the phonology, but puts the same segments twice in adjacent positions. The phonological module then may sometimes decide to not realize all the material; according to Inkelas and Zoll (2005) the reason for this will often be templatic. In the Diyari case, for example, the first copy will be interpreted as a prefix, and prefixes have a templatic length requirement so that they cannot be longer than one bisyllabic foot. Everything outside of that template will get cut off by the phonology.

There are several reasons to prefer the second model, in which the syntax does the copying. The most important is one of generative power. In the first model, we have to award to phonology the power to copy material; in the second model, this power is given to syntax (and furthermore phonology needs the power to leave material unpronounced). Which of these options is most likely, given what we already know about these modules?

According to most current theories of (generative, minimalist) syntax, the syntactic component already has the power of copying: that is the interpretation given to movement, as (‘internal’) Merge (Chomsky 2001, Hauser, Chomsky and Fitch 2002): one takes a structure {A B} (with A and B each syntactic constituents or heads), and forms a new structure {A {A B}}. This, for instance, would be a simplified representation of a wh-question:

(4)

{ what_i { do { you { eat what_i } } } }

The two instances of what are literally the same item, occurring in two positions for the purposes of syntax. The phonology then decides to delete (“not spell out”) the lower copy of the two, although it has also been proposed that in some cases both copies can be spelled out. For instance, Barbiers, Koeneman and Lekakou (2008) point out that Dutch dialects can have questions of the following type:

(5)

Wie_i denk je wie_i ik gezien heb?
who think you who I seen have
‘Who do you think I have seen?’

However this may be, it seems clear that current syntactic (Minimalist) theory is working under the assumption that syntax has the power to copy, whereas phonology has the power to sometimes delete material. On the other hand, there is not a lot of evidence that phonology has the power to copy strings of segments outside of reduplication. It does have the power of autosegmental association, obviously, which can give us, for instance, vowel copying, such as in Scots Gaelic (Borgstrøm 1940, Oftedal 1956, Hall 2011):

(6)

a.  ʃalak    ‘hunting’
b.  khɛnjɛp  ‘hemp’

In this example, I underlined epenthetic vowels, which are inserted in Scots Gaelic to break up impermissible consonant clusters. Such vowels are always complete copies of the preceding vowel. The typical solution is to represent this as feature spreading:

(7)

ʃ       l       k
|       |       |
x   x   x   x   x
     \_____/
        a

However, the conditions under which such spreading (‘copying’) can occur are heavily restricted. In the case at hand, we can only spread ‘across’ the intervening consonant [l] because we can assume that vowels and consonants exist on different dimensions and are basically invisible to each other (Mester 1986, Clements and Hume 1995). Otherwise, spreading of features across other association lines is never allowed, by the so-called NO CROSSING CONDITION (NCC) (Goldsmith 1976, Sagey 1988). The result of this is that autosegmental spreading can never serve to copy strings of segments. Even if we tried to copy just the vowels of the Diyari example, association lines would necessarily cross:

(8)

(diagram: the skeletal positions of tjilpa-tjilparka, with the melodies of [i] and [a] on the vocalic tier; linking [i] to its position in the first copy forces its association line to cross the line linking [a] to its own position)
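Because association lines are interpreted temporally, crossing reduces to a simple condition on endpoint order: two lines cross just in case their endpoints are oppositely ordered on the two tiers. The following toy sketch makes this explicit; the modelling of a line as a pair of integer tier positions is an assumption of the illustration.

```python
# Toy statement of the No Crossing Condition: an association line
# links a melody position i (on one tier) to a skeletal position j
# (on another). Two lines (i1, j1) and (i2, j2) cross iff their
# endpoints are oppositely ordered on the two tiers.

def lines_cross(line_a, line_b):
    """True if the two association lines cross."""
    (i1, j1), (i2, j2) = line_a, line_b
    return (i1 - i2) * (j1 - j2) < 0

# Copying [i] (melody position 1) into a later skeletal slot (6) while
# [a] (melody position 2) stays linked to an earlier slot (4): the
# lines cross, so the configuration is ruled out by the NCC.
print(lines_cross((1, 6), (2, 4)))  # → True
print(lines_cross((1, 1), (2, 4)))  # → False
```

Any attempt to copy a string of segments by multiple association produces at least one such oppositely ordered pair, which is why spreading cannot substitute for genuine copying.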

The line between the features of [i] and the second position in which these should occur necessarily cross the association line(s) between the position for [a] and whatever feature(s) it has. But the NCC is an absolute, inviolable condition on phonological representations, as a matter of logic. The reason is that the precedence relation on association lines are interpreted as temporal precedence: if α occurs before β on a line, it means that α is pronounced before β. Association lines on the other hand, mean that the associated elements overlap in time: if α is associated with x, then the pronunciation of α overlaps with that of x. From this, and the general logic of time,


it follows that if α precedes β, α is associated with x, and β with y, then y cannot precede x. And that is the No Crossing Constraint. For this reason, various other devices have been introduced into the theory (Marantz 1982; Raimy 1999, 2006), but none of them seems sufficiently motivated independently.

Thus copying is independently needed in syntax, but not in phonology. That might lead us to adopt a ‘syntactic’ view on reduplication, but the question is what the implications of this choice would be for the analysis of rhyme. There are at least two serious problems. First, in rhyme the elements which are copied are typically smaller than even the smallest thing which is visible to syntax (the X0, whether a word, a morpheme, or a morphosyntactic feature bundle), viz. some phonological constituent, such as the rime or the onset. Secondly, if the relation between rhyming elements were syntactic, we would expect a syntactic relation to hold between them which is also found between moved elements and their traces (e.g. syntactic locality), and this is obviously not found in rhyme: words can rhyme even if they are in two completely different sentences, which is not typically an option for movement.

This may seem to lead to a conundrum: how can we say that rhyme shares phonological but not syntactic properties with reduplication, while still using the same syntactic mechanism? In order to find a way out, I believe we have to go a little deeper into representations such as that in (4). What does it mean to say that there are two instances of the same word in the syntactic representation of that sentence?
The Chomskyan view (Chomsky 1995) seems to be that the structure is very much as given here: there are indeed two instances of an object, which nevertheless still counts as one object, basically because of the derivational history: at the point where we had arrived at the structure { do { you { eat what } } }, we chose not to apply external merge, adding another word, but internal merge, adding an object which is inside the structure itself. Another, more representational, view of internal merge is to assume that the same node is really attached to two mother nodes. We give up the graph-theoretic notion of a tree (in which every node except the root has exactly one mother node), and assume that nodes can simply be linked to more than one mother in a tree. There are different versions of this idea; de Vries (2009) and Citko (2012) give nice overviews. The syntactic representation will then be as follows:

(9) [Multidominance structure: unlabelled nodes dominate do, you, eat, and what; the node what is attached to two mother nodes, its base position as the complement of eat and a higher position in the tree]

At the moment of linearization, we will have to choose which of the mother nodes is relevant for spell-out of the content of the doubly linked node; de Vries (2009) offers an algorithm, assuming that only one of the two will be realized. But apparently, under certain circumstances, such as reduplication or the doubling in (5), both nodes of attachment are relevant. (For the former case, Alexiadou (2010) suggests that certain positions might be more likely to spell out the extra copy; for the latter, Barbiers, Koeneman and Lekakou (2008) suggest that the reason may be that the two nodes are not completely identical after all.)

3. Rhyme as multidominance

If we accept that all poetic forms are based on ‘normal’ linguistic forms, that rhyme is based on reduplication, and that reduplication is represented as multidominance, we come to the conclusion that rhyme, too, should be represented as a multidominance relationship. The two crucial differences between rhyme and reduplication (rhyme involves smaller, phonological constituents than are usually handled by syntax, and rhyme does not seem subject to the locality restrictions typical of movement and other syntactic dependencies) suggest that the tree in which the multidominance relation holds is different from the syntactic tree. The prosodic tree suggests itself, as it can host everything from the level of subsyllabic constituents, such as the rime, to the level of the Intonational Phrase and even the Utterance. We may assume that a work of poetry can be analysed as such a larger constituent as well. Plays written in verse may contain more than one Utterance, if we consider the lines attributed to an individual speaker as such an Utterance. Some authors have the last line of the text of one speaker rhyme with the first line


of the next speaker. In such cases, the text as a whole will still count as the relevant unit. Abstracting away from a lot of irrelevant analytical detail, we can then represent the Shakespearean couplet in (1) as follows:

(10)

[Prosodic tree: a couplet node dominates two line nodes; each line dominates syllables parsed into Onsets and Rhymes (L-ove l-ooks … m-ind; and th-ere-f-ore … bl-ind), and the Rhyme constituent shared by mind and blind is multiply dominated, attached to syllables in both lines]

Several questions arise at this point. One of them is whether the new representation does not violate the No Crossing Constraint. If the lines drawn here were autosegmental associations, there would obviously be a violation. But metrical structures are trees, not necessarily autosegmental representations. The interpretation of a dominance line between, say, a foot and a syllable is not one of temporal overlap (the foot is pronounced at more or less the same time as the syllable), but of dominance: the syllable is part of the foot. Under such an interpretation, ‘crossing’ lines do not establish a contradiction: a prosodic constituent can in principle be part of more than one larger constituent. This is also not entirely uncommon. A well-known example is provided by ambisyllabic consonants, which are attached to the coda of one syllable and the onset of the next (Kahn 1976). Now it is true that such examples are not very widespread, and furthermore, they are also not uncontroversial; but there is nothing to block them. This holds a fortiori for long-distance sharing of constituency, although the next question now obviously is whether there are any long-distance shared prosodic constituencies in natural language. They do not seem to be too easy to find; possibly some of the cases which have been analysed as long-distance agreement of consonants can be analysed in this way (Rose and Walker 2004, see §4).
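The NCC reasoning can be made concrete. The following is a toy sketch (mine, not from the chapter), assuming association lines are modelled as pairs of integer tier positions; two lines cross exactly when their endpoints are oppositely ordered on the two tiers, as in the attempted Diyari vowel copying in (8).

```python
from itertools import combinations

def lines_cross(line1, line2):
    """Each association line is a (melody_position, slot_position) pair
    of integers encoding temporal order on its tier. Two lines cross
    iff their endpoints are oppositely ordered on the two tiers."""
    (a, x), (b, y) = line1, line2
    return (a - b) * (x - y) < 0

# Attempted vowel copying as in (8): melody tier [i, a] (positions 0, 1),
# vocalic slot positions 0-3 across the two copies of tjilpa.
lines = [(0, 0), (1, 1), (0, 2), (1, 3)]  # i->slot0, a->slot1, i->slot2, a->slot3

violations = [pair for pair in combinations(lines, 2) if lines_cross(*pair)]
# The copied line for [i], (0, 2), crosses the line of the first [a], (1, 1):
# the representation violates the NCC.
```

By contrast, dominance lines in a prosodic tree are not subject to this check, since dominance is not interpreted as temporal overlap.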


Note that in the end, obviously, any theory of rhyme that builds on the resources of natural language will reach the conclusion that the ‘natural’ mechanism on which it builds is not widespread in natural language, as rhyme itself is obviously not a device used by non-poetic language. In this particular case, what may prevent rhyme from being used in ordinary language is the fact that multidominance by definition poses a problem for linearization, and in phonology, which is relatively close to the linearization module, tangled representations such as these become difficult to handle. This might at the same time be the special attraction of such representations for poetic purposes. Since constituents get shared among very different parts of the representation (in a play, even parts of different utterances may rhyme), rhyme literally adds an extra dimension to the representation, thereby making sure that the poem is interpreted as a whole, in which various parts resonate with each other. Rhyme, seen this way, is a device which openly contests the linearization of sound structure, more or less in the way in which prolongational structure does in music (Lerdahl and Jackendoff 1983, Katz 2008). My proposal, then, is that rhyme is special because it introduces multidominance into phonological representations, thereby stretching those representations in exactly the same way as internal Merge (movement) stretches syntactic representations. Since we are in the phonological component, these representations are not restricted by the usual syntactic locality restrictions, although there might be others. This might be subject to different restrictions in different styles and for different authors. Even though it is common for lines in a verse to rhyme, there can also be rhyme internal to a line.

4. Correspondence theory and loops

In order to understand the predictions of this model of rhyme, it is useful to compare it to alternative (formal) views, i.e. the one presented in Holtman (1996), as well as the Loop Model of Idsardi and Raimy (2005). The former theory is embedded within Optimality Theory, and it is assumed that rhyme is a form of a correspondence relation. Taking rhyme as a form of correspondence makes sense if one accepts Correspondence Theory (CT: McCarthy and Prince 1995b) to begin with, since this is a theory explicitly designed to model reduplication. Phonological representations get enriched with a correspondence relation, which holds between


segments. To see the motivation for this, consider the following data from Tagalog:

(11) a. putul        ‘cut (n.)’
     b. pang-putul   ‘that used for cutting’ > [pamutul]
     c. pa-mu-mutul  ‘a cutting in quantity’ (reduplication: *pa-mu-putul)

The prefix pang- nasalises the immediately following obstruent, which can be analysed as the result of a normal assimilation, obeying normal requirements of adjacency. But in reduplicated forms, both the reduplicative prefix and the stem start with a nasalised segment. Under Correspondence Theory, this can be understood in the following way: the first segment of the prefix assimilates because it is adjacent to the underlying nasal; the first segment of the stem turns into a nasal because it is in a correspondence relation with the first segment of the reduplicant. Holtman (1996) uses this technology in order to describe rhyme as well: the segments at the end of lines are in a correspondence relation to each other, and constraints evaluate these correspondence relations. Technically, it is quite obviously possible to make this work. As I have argued elsewhere (Van Oostendorp 2007), Correspondence Theory is very powerful and there is very little which it cannot do; it is therefore also not very restrictive, and this may count as a disadvantage. There is another difference between the account presented here and a Correspondence Theoretic one. This concerns the fact that rhyme is based on phonological constituency. The stretches of segments that rhyme are not random, but correspond to phonological constituents. This follows from a multidominance approach, since the stretches that rhyme have to be nodes in the phonological representation by definition. Holtman (1996), by contrast, needs separate constraints to get this effect: RHTEMPLATE(foot), for instance, which forces the rhyming sequence to be a metrical foot. The problem this creates is at least one of rhyme-system typology: one can easily construct a ranking in which all constraints on template form are lower ranked, so that random stretches of segments may rhyme. The question then is why no poetic tradition employs such a system.
A similarity between the two accounts is obviously that they connect rhyme to reduplication: both are represented in the same way. Typical arguments in favour of the Correspondence account have been overapplication and underapplication of phonological processes (Wilbur 1974).


The Tagalog example above is an example of overapplication. Nasalisation has applied to the stem segment, even though it does not seem adjacent to the prefix that causes nasalisation. Underapplication, by contrast, is the phenomenon by which a process does not seem to apply to one of the two copies, even though it occurs in the right environment. A well-known case of this is found in Chumash. This language has a process deleting l before a t. However, this process does not apply across the boundary of a base and a reduplicant: (12)

/s-tal’ik-tal’ik/ → [s-tal’-tal’ik], *[s-ta-tal’ik] ‘his wives (i.e. of a chief)’

In Correspondence Theory, this follows from the specific ranking of the constraint demanding maximal correspondence between the segments of the base and the reduplicant and certain other constraints. In the Tagalog case, this constraint forces unfaithfulness of the base stem which is not otherwise warranted; in the Chumash case, correspondence, in tandem with a templatic constraint forcing the reduplicant to be (at least) a heavy syllable, disallows deletion. Under a multidominance account, we would have to observe that some constituents occur in more than one position. In Tagalog, the syllable /pu/ occurs both immediately after a nasal and immediately after a non-nasal. Apparently, it is the nasal context which here decides, so in some sense the position of the first copy. In Chumash, in contrast, tal occurs both before a t and before another segment. In this case, one could say that it is the second copy which decides. In the latter sense, this work is reminiscent of the loops in the work on the representation of reduplication in Raimy (1999, 2006) and Idsardi and Raimy (2005). In this work, the precedence relations are relativized: segments are not necessarily ordered in one-dimensional strings, but can contain loops. (I will refer to this theory as Loop Theory.)

(13)

# → p → a → ŋ → p → u → t → u → l → %
(with an additional backward arrow from the first u to the p)

In this representation, there are two arrows leaving the first u: one leading to t, and one leading to p. Similarly, there are two arrows pointing to the p: one from the ŋ, and one from the u. This means that the p is effectively adjacent on its left to two different segments. If it changes because it is adjacent to the nasal, the arrow coming from the u will now point to it. Overapplication is therefore even the norm in this kind of theory.

(14) # → p → a → ŋ → m → u → t → u → l → %
     (the backward arrow from the first u now points to the m)
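Linearizing such a looped precedence graph can be sketched as follows. This is a toy illustration under my own simplifications (the function name and the restriction to a single backward arrow are mine, not Raimy's formalism): traverse the forward arrows, taking the backward arrow exactly once.

```python
def linearize(segments, loop):
    """Traverse a string augmented with one backward precedence arrow.
    segments: list of segments in base order;
    loop: (source_index, target_index), an arrow taken exactly once."""
    out, i, taken = [], 0, False
    while i < len(segments):
        out.append(segments[i])
        if not taken and i == loop[0]:
            taken, i = True, loop[1]  # follow the backward arrow
        else:
            i += 1
    return "".join(out)

# (13): backward arrow from the first u (index 4) to the p (index 3)
print(linearize(list("paŋputul"), (4, 3)))  # paŋpuputul
# (14): the single p, visited twice, has been nasalised to m
print(linearize(list("paŋmutul"), (4, 3)))  # paŋmumutul
```

Because the doubly-visited consonant is a single object, nasalising it once automatically yields both copies as nasals, which is the overapplication effect; the fusion of ŋ with the following nasal, giving surface pamumutul, is a separate step.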

What this theory has in common with the multidominance account is, obviously, that the copies are considered to be instances of the same object. This means that the phonological representation has to be rather abstract: contrary to what is sometimes assumed, it cannot (yet) be fully linearized, even if notions like ‘precedence’ arguably play a role. Linearization presumably then takes place in the interface with phonetics. Both theories offer purely representational accounts of reduplication and, possibly, rhyme. The main difference is that Raimy’s theory is strictly segment-based, whereas the account presented here is based on the notion of the phonological constituent. This, then, seems the main difference between a multidominance account of rhyme and its alternatives: the latter always allow for the possibility that random stretches of segments rhyme. To the extent that only phonological constituents can be involved in rhyme processes, this is an argument for multidominance. Idsardi and Raimy (2005) argue that non-constituents do indeed rhyme. They cite two pieces of evidence. The first is that in English poetry (among others) rhyme is defined as follows: ‘either of two or more words which have identical nuclei in their stressed syllables and identical sequences of segments after these nuclei’ (Trask 1996). Idsardi and Raimy (2005) point out that this is not a constituent under any theory of metrical phonology. I will come back to this in the following section. The other piece of evidence is Tuareg. Idsardi and Raimy (2005) cite Fabb (1997), who observes: “the rule for rhyme is complex in its reference to syllable structure: the syllable nucleus must be identical and the final consonant in the coda must be equivalent.
(Other consonants are apparently ignored: thus in one poem -at rhymes with -art, and also with -ayt and -ant and -alt.)” It seems to me that the conclusion that “rhyming in English and other poetic traditions operates on strings of segments that do not necessarily match to a phonological constituent” is too strong. In both cases one does actually need to refer to prosodic constituents, but the definition is a little more complex. Actually, in both cases the right definition seems to be:

(15)

α and β rhyme iff α and β are prosodic constituents, α and β are the same, except that there is one (designated) daughter constituent that may differ.

50

Marc van Oostendorp

Although rhyming in these cases is therefore not the simple identity of two (prosodic) constituents, it is also not as random as both Correspondence Theory and the theory of Idsardi and Raimy (2005) would predict.
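Definition (15) lends itself to a direct sketch. In this toy rendering (mine, not from the chapter), a constituent is modelled simply as a sequence of daughter constituents, and two constituents rhyme if all daughters match except possibly one designated daughter:

```python
def rhymes(c1, c2, designated=0):
    """c1, c2: sequences of daughter constituents (here plain strings).
    They rhyme iff all daughters are identical, except that the
    designated daughter may differ (cf. definition (15))."""
    if len(c1) != len(c2):
        return False
    return all(d1 == d2
               for i, (d1, d2) in enumerate(zip(c1, c2))
               if i != designated)

# English-style rhyme: the designated daughter is the onset of the
# stressed syllable, so mind/blind rhyme, but mind/mane do not.
print(rhymes(("m", "aɪnd"), ("bl", "aɪnd")))  # True
print(rhymes(("m", "aɪnd"), ("m", "eɪn")))    # False
```

For the Tuareg pattern, one would instead designate a non-final consonant position as the daughter that is free to differ, while the nucleus and the final coda consonant must match.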

5. Imperfect rhyme

An important problem for any account of rhyme is that it can be imperfect: the words or phrases are almost, but not exactly, the same. Poetic rhyme thus often shows an identity-avoidance effect. We can distinguish two versions of this. First, sometimes we find ‘imperfections’ in the rhyme as something which the poet allows himself, for instance in the following examples from nursery rhymes, collected by Holtman (1996):

(16) a. Hush-a-bye, baby, on the tree top
        When the wind blows, the cradle will rock
     b. Peter stands at the gate
        Waiting for a butter cake
     c. One for the master, and one for the dame,
        And one for the little boy who lives down the lane

In many cases, like here, the rhyming sequences are really very similar. In the examples just provided they differ only in one consonant, and those consonants differ only in one (place) feature. The second type involves obligatory differences. The most important among these is that in several poetic traditions (e.g. the Anglo-Saxon one) two words are not supposed to rhyme if the onsets of the rhyming syllables are the same: rhyming two with too is supposed to be bad practice. There are other styles in which ‘rich rhyme’ is explicitly allowed. Chaucer, for instance, employs it quite regularly in the Canterbury Tales:

(17)

Therfore I pass as lightly as I may
I fel that in the seventhe yer, of May

What makes this different, though, is that even in Chaucer, these rich rhyme pairs are exceptional, even if they are rather frequent. It thus seems that having a different onset is somehow the rule, whereas the kinds of small alternations in folk poetry are the exception.


Idsardi and Raimy (2005), in whose theory the consonants before the stressed vowel should be completely irrelevant, claim that the avoidance of rich rhyme falls outside the scope of linguistic theory: “Repeating a word to make a rhyme is trite. It is too easy. There is no game or art involved in rhyming in this way. Consequently, ‘good’ rhymes avoid this triteness and strive to create rhymes that are creative and playful. Once the artist knows how to rhyme, the art form is the pursuit of combining nontrivial rhymes with the message of the poem.” This explanation does not seem completely satisfactory. One can easily imagine possible restrictions on rich rhyme which would make it more difficult, for instance by requiring that it should not be the same word, but a homophone. Actually, Wikipedia (July 2013) gives as its definition “a form of rhyme with identical sounds, if different spellings” and gives pair-pear as an example. It also mentions that French is much more tolerant of rime riche, even though French seems to have more homophones to begin with, which goes completely against Idsardi and Raimy’s (2005) explanation. It does seem, however, that poetic traditions require some form of non-identity among the rhyming pairs: this can be a different onset, a different non-final consonant (as in Tuareg), or membership of a different lexical item, signalled by a different spelling, in rime riche. At first sight, this might seem to argue in favour of a Correspondence approach, since that approach holds that the segments of a rhyme are not the same, but merely resemble each other. Differences can then be allowed. Notice, however, first, that to some extent this is just a function of the fact that Correspondence Theory is relatively unrestricted, and secondly, that in order to account for this, we still need to introduce non-identity constraints whose status as a phenomenon in natural language is unclear.
It is not identical to the OCP, or at least I am not aware of OCPs targeting complete onsets. (In rhyme, blair-bair is fine, as is blair-lair; I am not aware of OCP systems that work that way.) A representational solution may be found if we incorporate into our multidominance analysis the idea that the identity of items is preserved in phonology, as developed in Coloured Containment (Revithiadou 2007, Van Oostendorp 2007). The basic idea is as follows. We know that for certain processes the phonology has to see the difference between stems and affixes, or between epenthetic material and lexically sponsored material. One way to represent this is to assume that every element of a phonological representation is marked for its lexical affiliation. Every lexical item has its ‘colour’, and every feature, root node, prosodic node, etc., belonging to


the phonological exponent of that item shares that colour. I will mark those colours as indices. A simple derivation such as /mɛlk-ə/ → [mɛləkə] ‘to milk (INFINITIVE)’ (Dutch) will lead to the following representation:

(18) mᵢ ɛᵢ lᵢ ə kᵢ əⱼ

The segments of the stem /mɛlk/ each have one colour, whereas the segment of the infinitival suffix /ə/ has another colour. The epenthetic vowel has no colour, since it is not sponsored by any lexical item. What is drawn here for segments also holds for the features of segments: features like [labial] and [nasal], which constitute the /m/, also have the index i in this case. This technology allows us to differentiate between the different ‘copies’ in a rhyming pair, such as the pair dame-lane (the x stands for the root node dominating the material of m and n):

(19)

      Rhymeᵢ,ⱼ
       /    \
    e:ᵢ,ⱼ   xᵢ,ⱼ
              |
  [nasal]ᵢ,ⱼ [labial]ᵢ [coronal]ⱼ

Most elements in this representation have two lexical identities, except the place features on the nasal. When this representation gets linearized, only the features with the right lexical identity are spelled out. Obviously, this can be extended to the onset nodes, which are even obligatorily differently coloured in most rhyming styles:

(20)
              σᵢ,ⱼ
       /       |       \
   Onsetᵢ   Onsetⱼ   Rhymeᵢ,ⱼ
      |        |      /    \
     dᵢ       lⱼ   e:ᵢ,ⱼ   xᵢ,ⱼ
                             |
                 [nasal]ᵢ,ⱼ [labial]ᵢ [coronal]ⱼ


Obviously, this coloured representation has to be embedded in a multidominance tree, to make sure that the two rhyme words are pronounced in the appropriate places in the text. We may speculate that the reason why the (initial) onsets have to be different is that in this way one immediately indicates the ‘right’ lexical colour.
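The colour-sensitive spell-out just described can be sketched as follows. This is a toy illustration under my own simplifying assumptions: features are modelled as (name, colour-set) pairs, and the mapping from feature sets to segments is a hypothetical stand-in for phonetic interpretation.

```python
def realize(features, colour):
    """Spell out a shared root node for one lexical colour:
    only features carrying that colour are realized."""
    visible = frozenset(name for name, colours in features if colour in colours)
    # Hypothetical feature-to-segment mapping for the dame/lane example
    segment_map = {
        frozenset({"nasal", "labial"}): "m",
        frozenset({"nasal", "coronal"}): "n",
    }
    return segment_map[visible]

# Shared final root node of the rhyming pair dame/lane, as in (19):
shared_x = [("nasal", {"i", "j"}), ("labial", {"i"}), ("coronal", {"j"})]
print(realize(shared_x, "i"))  # m  (the dame copy)
print(realize(shared_x, "j"))  # n  (the lane copy)
```

The same doubly-coloured node is thus pronounced differently in each linearization site, depending on which lexical colour is being spelled out there.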

6. The invisibility of coronals

I submit that the following can count as a piece of evidence for this account. It is known that in some rhyming styles voiceless coronals do not seem to count: they can be added or deleted rather freely, for instance in rap style (Zwicky 1976). Here is an example from the Dutch literary author Gerrit Achterberg (1905-1962):

(21)

Den Haag, stad, boordevol Bordewijk
en van Couperus overal een vleug
op Scheveningen aan, de villawijk
die kwijnt en zich Eline Vere heugt

Maar in de binnenstad staan ze te kijk,
deurwaardershuizen met de harde deugd
van Katadreuffe die zijn doel bereikt.
Ik drink twee werelden, in ene teug.

In this example (the first two quatrains of a sonnet describing the literary past of The Hague), vleug ‘a little bit’ [vløx] and teug ‘sip’ [tøx] rhyme with heugt ‘remembers’ [høxt] and deugd ‘virtue’ [døxt]; and kijk ‘view’ [kɛik] with bereikt ‘reaches’ [rɛikt]. It is well known that voiceless coronals behave as if they are somehow outside of the phonotactic template in many languages (Paradis and Prunet 1991). This is definitely also the case in Dutch: if a word ends in more than two consonants, the last one is guaranteed to be [s] or [t] (herfst ‘autumn’); if a word starts with more than two consonants, the first one is an [s] (straat ‘street’); and similarly, in word-internal clusters of four consonants, at least one of them will be a voiceless coronal obstruent (extra (id.)). A well-known way to describe this is to state that these obstruents are not parsed into regular prosodic templates (but are, for instance, adjoined to a higher level); see for instance Van Oostendorp (2003).


If we adopt this assumption, it is easy to describe the Achterberg rhyme pattern:

(22)
            τᵢ
           /  \
        σᵢ,ⱼ   tᵢ
      /    |    \
  Onsetᵢ Onsetⱼ Rhymeᵢ,ⱼ
     |     |     /   \
    dᵢ    hⱼ   øᵢ,ⱼ  xᵢ,ⱼ

Only the syllable is multidominated; the t is simply outside of the shared domain. In poetic styles in which voiceless coronals do count, it will obviously be the higher prosodic node (here marked as τ) which is shared. As far as I can see, it will be difficult to account for the exceptional behaviour of voiceless coronals in Correspondence Theory or in Loop Theory, since existing constituency does not play a role in those frameworks. To the extent that this is true, this is an argument in favour of a multidominance approach.
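The Achterberg pattern can be sketched as a comparison that ignores right-edge voiceless coronals. This is a toy check under my own assumptions (a rough vowel inventory and a crude 'first vowel onward' rime finder), not a full model of the shared structure in (22):

```python
VOWELS = set("aeiouyøɛɔəɪʏ")  # rough broad-transcription vowel inventory for the toy

def rime(word):
    """Rhyming domain of a (broadly transcribed) word: everything from
    the first vowel on, ignoring extrametrical right-edge [t]/[s]."""
    w = word.rstrip("ts")          # strip adjoined voiceless coronals
    for i, seg in enumerate(w):
        if seg in VOWELS:
            return w[i:]
    return w

def achterberg_rhyme(a, b):
    return rime(a) == rime(b)

print(achterberg_rhyme("vløx", "høxt"))   # True: vleug ~ heugt
print(achterberg_rhyme("kɛik", "rɛikt"))  # True: kijk ~ (be)reikt
print(achterberg_rhyme("vløx", "kɛik"))   # False
```

A style in which the coronals do count would instead compare the full words (the τ level), i.e. skip the stripping step.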

7. Conclusion

In this paper, I have proposed an interpretation of poetic rhyme that is, as far as I know, new, but which at the same time fits into an existing interpretation of copying in linguistics: the copies are actually the same thing, showing up in different places. Not every issue could be covered in this paper, such as the question why rhyme tends to occur at the end of the line (presumably a metrical unit), or why ‘imperfect rhyme’ has a preference for non-matching place features over non-matching manner features, as is apparently the case (Holtman 1996). Furthermore, more empirical research would clearly be necessary to establish what individual variation we find in what counts or does not count as acceptable rhyme. The main goal of this paper has therefore mostly been to show that a multidominance analysis of rhyme is feasible. To the extent that multidominance is a plausible analysis of other types of copying (de Vries


2009), this is already a positive result; but I believe I have shown that the analysis also has some interesting characteristics of its own which may make it worth studying.

References

Aboh, Enoch
2007  A ‘mini’ relative clause analysis for reduplicated attributive adjectives. Linguistics in the Netherlands 24: 1–13.

Alexiadou, Artemis
2010  Reduplication and doubling contrasted: implications for the structure of the DP and the AP. Linguística. Revista de Estudos Linguísticos da Universidade do Porto 5: 9–25.

Barbiers, Sjef, Olaf Koeneman, and Marika Lekakou
2008  Syntactic doubling and the structure of chains. In Proceedings of the 26th West Coast Conference on Formal Linguistics, Charles B. Chang and Hannah J. Haynie (eds.), 77–86. Somerville, MA: Cascadilla.

Borgstrøm, C. Hjalmar
1940  A Linguistic Survey of the Gaelic Dialects of Scotland, vol. 1: The Dialects of the Outer Hebrides. Norsk Tidsskrift for Sprogvidenskap, Suppl. Bind 1.

Chomsky, Noam
1995  The Minimalist Program. Cambridge, MA: MIT Press.
2001  Beyond explanatory adequacy. MIT Occasional Papers in Linguistics.

Citko, Barbara
2012  Multidominance. In The Oxford Handbook of Linguistic Minimalism, Cedric Boeckx (ed.), 119–142. Oxford: Oxford University Press.

Clements, George N., and Elizabeth Hume
1995  The internal organization of speech sounds. In The Handbook of Phonological Theory, John A. Goldsmith (ed.), 245–306. Oxford: Blackwell.

Fabb, Nigel
1997  Linguistics and Literature. Oxford: Blackwell.
2010  Is literary language a development of ordinary language? Lingua 120: 1219–1232.

Goldsmith, John A.
1976  Autosegmental phonology. Ph.D. dissertation, Massachusetts Institute of Technology. Published 1979, New York: Garland.

Hall, Nancy
2011  Vowel epenthesis. In The Blackwell Companion to Phonology, Marc van Oostendorp, Colin J. Ewen, Keren Rice and Beth Hume (eds.), 1576–1596. Oxford: Wiley-Blackwell.

Hauser, Marc D., Noam Chomsky, and W. Tecumseh Fitch
2002  The faculty of language: what is it, who has it, and how did it evolve? Science 298: 1569–1579.

Holtman, Astrid
1996  A generative theory of rhyme: an optimality approach. Ph.D. dissertation, University of Utrecht.

Idsardi, William J., and Eric Raimy
2005  Remarks on language play. Ms., University of Maryland and University of Wisconsin-Madison.

Inkelas, Sharon, and Cheryl Zoll
2005  Reduplication: Doubling in Morphology. Cambridge: Cambridge University Press.

Kahn, Daniel
1976  Syllable-based generalizations in English phonology. Ph.D. dissertation, Massachusetts Institute of Technology. Published 1980, New York: Garland.

Katz, Jonah
2008  Towards a generative theory of hip-hop. Handout from a talk presented at Tufts University. URL http://web.mit.edu/jikatz/www/KatzGTHH.pdf.

Kawahara, Shigeto
2007  Half-rhymes in Japanese rap lyrics and knowledge of similarity. Journal of East Asian Linguistics 16: 113–144.

Kawahara, Shigeto, and Kazuko Shinohara
2009  The role of psychoacoustic similarity in Japanese puns: a corpus study. Journal of Linguistics 45: 111–138.

Kiparsky, Paul
1970  Metrics and morphophonemics in the Kalevala. In Linguistics and Literary Style, Donald C. Freeman (ed.), 165–181. New York: Holt, Rinehart and Winston.
1973  The role of linguistics in a theory of poetry. Daedalus 102: 231–245.
2006  A modular metrics for folk verse. In Formal Approaches to Poetry, B. Elan Dresher and Nila Friedberg (eds.), 7–49. Berlin/New York: Mouton de Gruyter.

Lerdahl, Fred, and Ray Jackendoff
1983  A Generative Theory of Tonal Music. Cambridge, MA: MIT Press.

Marantz, Alec
1982  Re Reduplication. Linguistic Inquiry 13: 483–545.
1984  On the Nature of Grammatical Relations. Cambridge, MA: MIT Press.

McCarthy, John, and Alan S. Prince
1993  Prosodic morphology I: constraint interaction and satisfaction. ROA 485-1201.
1995a Faithfulness and reduplicative identity. In Papers in Optimality Theory, Jill Beckman, Laura Walsh Dickey and Suzanne Urbanczyk (eds.). Amherst, MA: Graduate Linguistic Student Association. ROA 60.
1995b Faithfulness and reduplicative identity. In University of Massachusetts Occasional Papers in Linguistics 18: Papers in Optimality Theory, Jill Beckman, Suzanne Urbanczyk and Laura Walsh Dickey (eds.), 249–384. Amherst, MA: Graduate Linguistics Students Association.

McCarthy, John J., Wendell Kimper, and Kevin Mullin
2012  Reduplication in Harmonic Serialism. Morphology 22: 173–232.

Mester, Armin
1986  Studies in tier structure. Ph.D. dissertation, University of Massachusetts, Amherst. Published 1988, New York: Garland.

Oftedal, Magne
1956  A Linguistic Survey of the Gaelic Dialects of Scotland III. Oslo: Aschehoug.

Oostendorp, Marc van
2003  The phonological and morphological status of the prosodic word adjunct. Linguistische Berichte, Sonderheft 11.
2007  Derived environment effects and consistency of exponence. In Freedom of Analysis?, Sylvia Blaho, Patrik Bye and Martin Krämer (eds.), 123–148. Berlin/New York: Mouton de Gruyter.

Paradis, Carole, and Jean-François Prunet
1991  The Special Status of Coronals: Internal and External Evidence. San Diego: Academic Press.

Raimy, Eric
1999  Representing reduplication. Ph.D. dissertation, University of Delaware.
2006  Review of Reduplication: Doubling in Morphology (Sharon Inkelas and Cheryl Zoll, 2005, Cambridge: Cambridge University Press). Journal of Linguistics 42: 478–486.

Revithiadou, Anthi
2007  Colored turbid accents and Containment: a case study from lexical stress. In Freedom of Analysis?, Sylvia Blaho, Patrik Bye and Martin Krämer (eds.), 149–174. Berlin/New York: Mouton de Gruyter.

Rose, Sharon, and Rachel Walker
2004  A typology of consonant agreement as correspondence. Language 80: 475–532.

Sagey, Elizabeth
1988  On the ill-formedness of crossing association lines. Linguistic Inquiry 19: 109–118.

Trask, Robert Lawrence
1996  A Dictionary of Phonetics and Phonology. New York: Routledge.

Vries, Mark de
2009  On multidominance and linearization. Biolinguistics 3: 344–403.

Wilbur, Ronnie
1974  The phonology of reduplication. Ph.D. dissertation, University of Illinois.

Yip, Moira
1999  Reduplication as alliteration and rhyme. GLOT International 4: 1–7.

Zwicky, Arnold M.
1976  Well, this rock and roll has got to stop. Junior’s head is hard as a rock. In Papers from the Twelfth Regional Meeting, Chicago Linguistic Society, April 23-25, 1976, Salikoko S. Mufwene, Carol A. Walker and Sanford B. Steever (eds.), 676–697. Chicago, IL: Chicago Linguistic Society.

Babbling, intrinsic input and the statistics of identical transvocalic consonants in English monosyllables: Echoes of the Big Bang?

Patrik Bye

1. Introduction

This paper argues that motor memory formation resulting from intrinsic input during the babbling phase has an effect on the frequency of identical pairs of root consonant in the adult English lexicon. The prevalence of repeated transvocalic consonants at certain places of articulation in babbling makes identical transvocalic consonants in words of the adult lexicon more common than we would expect. During the babbling phase, children show a strong preference for repetition. Examples of babbling from Loekie Elbers' son Thomas (Elbers 1982), acquiring Dutch, are shown in (1).

(1)

6-7 months    ....bvvv, bvvv.........bvvvvvv, (intranscribable), bvv, .........bvvvvvwə ....bvvv.......əbvvv.......bvvv.....bvvvvvvv...... əbvvvv....(intranscr.)
7-8 months    ...mὰbαbαbəwəbαbαbὰ......bαb̀ ə ....ba:bα ..... bὰbə ....... ba:ba:ba:bà: ba:ba:ba:bəba:ba:mba: .... αwà: ...ba:ba: ba: ba: ba: ba: ba: ba: ba: ba:b ....
8-9 months    ... pfpfpfff ...... pffpff ... a:a:a:, a:a:a:a:, a:a:a:a:a:, a:a:a:a:bə̀ ....... bə̀bəbəbəbə̀bəbə ... bə ... p, p .... əbə̀, pffpffpffff .......
10-11 months  ... pαpαpαpvvv, pαpf ..... hα, hα, hα, hα ..... həpəpəpəpfff, pəpv, bəprrr (bilabial trill), bəpəprrr (bilabial trill), bαpfuff, bαpfuf, pαpαpαpvvv, bαpαpαpαpαpα, a:a:a:, ha:a:a:, ha:a:a: ........


10-11 months  ... bə̀gəbə̀gəbə̀gəbək ...... bə̀kəmbə̀kəmbə̀kəmbrrr, brrbə̀ kəməbə̀kəbə̀kəbrrrk, brrbə̀kəmbə̀kəmbrrr, brr̀ brr̀ rwə, brrgəa:gəbrr, wə, www ..... (coughing) ...... ajà: ..... (intranscr.) ..... bə̀kəmbə̀kəmbəkbək, bə̀kəmbə̀kəmbrr, bə̀kəməbə̀kəmbə̀, bə̀kəmbə̀kəmbrrkbrrk, α, α, α, .....
11-12 months  .. gαŋgwαŋgəb, ŋə̀məkə, ŋŋ, (intranscr.), ə̀wbαbùməkə, (intranscr.) (yodeling), (groaning), bαpff, bəbαkὰ ........... (laughter) ..... əpə̀ m̀ mα, əwà:wəmαŋὰx, εmὰx, məgαməgà:x, (intranscr.), (intranscr.), əpəgəx ...... gαgαŋbuk, əbəkὰx .......

There is significant continuity between the repertoires of babbling and the phonological structure of early words, e.g. consonant harmony (e.g. Fikkert 1994). As MacNeilage et al. (2000: 160) reason: "If the first hominid speech was strongly reduplicative, one might expect that there would be some residue of this preference in modern languages". Contrary to this expectation, however, MacNeilage et al. (2000) found, in a study of the first and second stops and nasals in CVC, CVCV, and CVCV… words in a sample of 10 languages, that the tendency to repeat the same place of articulation was on average 67 % of chance values. The effect (ratio of observed to expected count) was strongest in Hebrew (0.44) and weakest in Swahili (0.89), but all were significantly below chance. The MacNeilage et al. (2000) study did not examine the frequency of identical transvocalic consonants, but the implication is clearly that they should occur significantly below chance levels. MacNeilage (2008) connects the apparent dispreference for repetition to the well-established finding that similar sounds may result in confusion in working memory, both in planning and perception. The principle of avoiding similar sounds is generally known as the Obligatory Contour Principle (OCP), which McCarthy (1988: 88) formulates as a categorical constraint militating against identical elements as in (2).

(2) Obligatory Contour Principle (OCP; McCarthy 1988: 88)
    Adjacent identical elements are prohibited.

In some languages, the OCP shows up as a grammatical constraint, while in others it is best understood as a statistical tendency in the lexicon. Furthermore, Pierrehumbert (1993) shows that the dispreference increases the greater the phonetic similarity of the consonants. Here we argue that the negative finding of MacNeilage et al. with respect to the occurrence of repetition in the adult lexicon should be nuanced in the light of this research on gradient phonotactics, and of more sophisticated statistical thinking that takes account of class structure in samples. Identical transvocalic consonants are only negligibly underrepresented relative to global expected frequencies but, at least for certain places of articulation, significantly overrepresented relative to expected frequencies for their homorganic class. This is shown here on the basis of a corpus of monosyllabic roots from the MRC Psycholinguistic Database,1 a machine-usable dictionary. An investigation carried out by Berkley (1994) into English monomorphemic monosyllables using the same database found that roots with homorganic consonants separated by one segment are significantly below expected frequency. First, Berkley considered 1258 monosyllabic roots whose onset and coda were separated by exactly one segment, i.e. roots of the form CVC. These are summarized in Table 1. Observed and expected counts for homorganic pairs of consonant appear in the diagonal cells.

Table 1: Distribution in monosyllables of consonant pairs separated by exactly one segment (Berkley 1994: 2)

                        C2
             LAB            COR            DORS
C1  LAB       26  (64.9)    256 (204.5)     42  (54.6)
    COR      204 (158.7)    428 (499.9)    160 (133.5)
    DORS      22  (28.4)    110  (89.6)     10  (23.9)

Observed counts, with expected counts in parentheses; homorganic pairs on the diagonal.

Berkley observes that homorganic roots with coronals are less underrepresented than homorganic roots with labials and dorsals. When coronal obstruents and sonorants are distinguished, however, a stronger OCP effect emerges within each class (Table 2). For the coronal sonorants, Berkley observes a marked effect. For the coronal obstruents, however, the effect is more modest.

1 Available on-line at: http://websites.psychology.uwa.edu.au/school/MRCDatabase/uwa_mrc.htm


Table 2: Distribution in monosyllables of coronal consonant pairs separated by exactly one segment (Berkley 1994: 3)

                                 C2
                      coronal obstruent    coronal sonorant
C1  coronal obstruent      67  (91.8)          124 (115.2)
    coronal sonorant      143 (129.8)           94 (163.0)

Observed counts, with expected counts in parentheses.

The remainder of this paper is structured as follows. §2 presents the data, develops the statistical analysis, and shows how the observed data on repeated transvocalic consonants diverge from both naive (chance) and theoretically informed (similarity-based) expectations. §3 proposes an explanation that relates the observed micropreference for repeated transvocalic consonants in non-apical consonants to the relative timing of repetitive babbling, maturation of the tongue, and adult-like mastery of apical articulation. §4 concludes.

2. Data and analysis

The data for the present study consist of 1556 monosyllabic roots collected from the MRC Psycholinguistic Database. Tables for the complete data set may be found in the appendix. The data source is the same as in Berkley's study, but the input to the analysis here is a superset of Berkley's data, which only includes monosyllables with a frequency high enough to appear in Kučera and Francis (1967). In selecting the data for this study, however, this filter was not applied. The main reason for this is that the present study entails analyzing smaller groups of words: filtering out words may have resulted in a slimmer basis for drawing informative conclusions. Some initial search passes with the filter switched on suggested this was correct. For example, for the class of roots with transvocalic /m…m/, using the filter to stipulate a minimum of one occurrence in the Kučera-Francis index returned only the set {MA'AM, MUM}. With the filter lifted, we obtain {MA'AM, MAIM, MIME, MUM}. Neither MAIM nor MIME is intuitively especially low in frequency. In the process of assembling the data it was also found that certain common words are missing from the database, or for some reason not returned by the search. These include words like GOAL and THOUGHT, as well as rarer or dialectal words such as BAP or neologisms like MEME. In light of this, the best option was to run the study on all monosyllables in the database without attempting to add to or subtract from it in any way.

2.1. Data relationships

The segment inventory of English consists of 24 consonants: six labials /p b f v m w/, nine apicals /t d θ ð s z n l r/, five palatals /ʧ ʤ ʃ ʒ j/, three dorsals /k ɡ ŋ/, and the placeless laryngeal resonant /h/. All consonants with the exception of /ŋ/ may occur in onset position (C1). In coda position (C2), none of the resonant consonants /w r j h/ may occur. Instead, the second mora of a syllable nucleus may be a back round, central/retroflex/bunched, or front vocoid (e.g. [təʊ] 'tow', [tɪə]~[tɪɚ] 'tear', [taɪ] 'tie'). Since these attributes are regarded here as properly belonging to the nucleus, they fall outside the scope of the paper. The observed and expected frequencies of each pair of consonants are shown in Tables 3 and 4. Shaded regions of the tables correspond to homorganic roots: darker grey is used for strongly homorganic pairs (/Lab…Lab/, /Ap…Ap/, /Pal…Pal/, /Dors…Dors/), and lighter grey for weakly homorganic pairs (/Ap…Pal/ and /Pal…Ap/). As is customary, C1 (onset) is the first column, and C2 (coda) the first row. The last row (SEG.TOT) shows the totals for each consonant in coda position, and the last column the totals for each consonant in onset position.

Table 3: Observed frequencies in monosyllables of consonant pairs


Table 4: Expected frequencies in monosyllables of consonant pairs

Table 5 below gives the distribution of transvocalic consonant pairs according to the categories used in Berkley's cross-tabulation (Table 1). As can be seen, the two give similar results. Recall that Table 1 only shows consonant pairs in Berkley's data separated by a single segment (i.e. a short vowel). Table 5, however, considers all transvocalic consonant pairs, including those separated by a long vowel or diphthong.

Table 5: Cross-tabulation of consonant pairs by class in monosyllabic /C…C/ roots

                        C2
             LAB            COR            DORS
C1  LAB       71 (106.3)    365 (323.2)     70  (76.3)
    COR      190 (164.6)    457 (499.6)    135 (118.0)
    DORS      43  (33.4)    103 (101.6)     13  (23.9)

Observed counts, with expected counts in parentheses.

Table 5 also confirms differences in the strength of the gradient OCP effect related to place of articulation discovered by Berkley: for labials and dorsals there is a strong effect; coronals also exhibit the effect, but more weakly.
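This asymmetry can be read directly off the diagonal of Table 5 as O/E ratios. A minimal sketch (not part of the original study; the counts are copied from Table 5):

```python
# O/E ratios for the homorganic (diagonal) cells of Table 5.
# An O/E ratio well below 1 signals a gradient OCP effect.
cells = {
    "LAB...LAB":   (71, 106.3),   # (observed, expected)
    "COR...COR":  (457, 499.6),
    "DORS...DORS": (13, 23.9),
}
ratios = {pair: round(obs / exp, 2) for pair, (obs, exp) in cells.items()}
print(ratios)
```

The labial and dorsal ratios fall well below 1, while the coronal ratio stays close to it, matching the asymmetry just described.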


Table 6 below splits up the coronals into three subgroups. The palatals are separated out, and the remaining apicals are split into an obstruent and a sonorant class, in the same way that Berkley distinguished coronal obstruents and sonorants. In addition, the observed and expected values for monosyllables with onset /h/ have been provided. Darker shading indicates cells that seem to show a gradient OCP effect. Lighter shading indicates combinations of coronal consonant that occur with frequencies close to expected values.

Table 6: Cross-tabulation of consonant pairs by class in monosyllabic /C…C/ roots

                           C2
            LAB           OBS AP        SON AP        PAL           DORS
C1  LAB      71 (107.9)   190 (170.7)   120 (107.7)    55 (43.6)     70 (76.2)
    OBS AP   71  (65.7)    91 (103.8)    71  (65.5)    22 (26.3)     53 (46.3)
    SON AP   70  (63.2)   102  (99.6)    42  (63.0)    29 (25.6)     53 (44.5)
    PAL      49  (37.8)    54  (59.9)    42  (37.8)     4 (15.1)     29 (26.5)
    DORS     43  (34.0)    48  (53.6)    40  (33.8)    15 (13.7)      9 (23.9)
    /h/      28  (23.3)    40  (36.9)    16  (23.2)     9  (9.4)     16 (16.4)

Observed counts, with expected counts in parentheses.

Since /h/ is placeless, we expect no interaction with the coda. Indeed, the observed counts for roots with /h/ in onset position model the expected counts very closely. An additional pattern that stands out is that homorganic roots with palatals are strongly disfavoured. The expected frequency of transvocalic palatals is 15.1, but they only occur 4 times in the data. Homorganic roots combining apical and palatal obstruents, on the other hand, do not seem to be significantly dispreferred. ObsAp…Pal roots occur 22 times out of an expected 26.3, while Pal…ObsAp roots occur 54 times out of an expected 59.9. We take this to justify the division of the coronals into separate apical and palatal classes. Perhaps more surprisingly, the gradient OCP effect is similarly weak, perhaps non-existent, in roots of the form ObsAp…ObsAp, with 91 occurrences compared with an expected 103.8. As Berkley already noted, ObsAp and SonAp do not pattern as homorganic with respect to the gradient OCP.


Thus SonAp…ObsAp occurs 102 times compared with expected 99.6, and ObsAp…SonAp occurs 71 times compared with expected 65.5. Combinations of coronal obstruent with SonAp (ObsAp…SonAp, SonAp…ObsAp, Pal…SonAp, SonAp…Pal) seem, if anything, to be slightly overrepresented. Only the frequency of the combination SonAp…SonAp seems to be appreciably diminished through the effect of the gradient OCP (observed count: 42, expected count: 63.0).

2.2. Identity: observed values relative to expectations based on chance

We wish to establish whether identical transvocalic pairs of consonant are more common than expected in English roots. Identical transvocalic pairs of non-apical consonant occur 32 times in the sample. The expected number of occurrences in the sample as a whole is 34.2, giving an O/E ratio of 0.94. That is, identical transvocalic pairs occur as often as we would expect if no biases were operating. Had the starting point been a naive one, this result would have been the end of it. However, the result is surprising precisely because, in this case, we should have expected the unexpected. As we have just seen, the observed count should be significantly lower than the expected count due to the operation of the gradient OCP. The result thus seems to indicate the existence of some factor that counteracts the effect of the gradient OCP in pairs of homorganic consonant just in case they are identical. What we need to do instead, as a first step, is to assess the observed counts of identical transvocalic pairs against the expected counts within each homorganic class. The null hypothesis may be stated such that, within each class of homorganic root, the consonants should occur in the same ratio of frequencies as in the sample as a whole. For example, in the sample as a whole, the non-resonant labial consonants /p b f v m/ occur as onsets in the ratio 99:113:73:32:98 (sum = 415), and as codas in the ratio 85:51:56:41:99 (sum = 332).
For each place, we only consider consonants that may occur in both onset and coda position. We therefore do not consider the resonants /w j r h/, since resonants do not occur in coda position, as set out in our assumptions above. There are 55 roots with the structure [Lab…Lab] in the sample. Other things being equal, we should therefore expect [Labi…Labj] to occur with a frequency of n × (ti/T1) × (tj/T2), where n is the number of roots in the class (subsample), ti is the total (column) frequency of i as an onset in the sample, tj is the total (row) frequency of j as a coda, and T1 and T2 are the corresponding onset and coda totals for the place class. If Labi = /b/ and Labj = /v/, then we would expect roots of the form /b…v/ to occur 55 × (113/415) × (41/332) ≈ 1.8 times. Table 7 gives the observed and expected frequencies for pairs of labial consonant. Tables 8, 9, and 10 do the same for pairs of apical, palatal and dorsal consonant respectively.

Table 7: Cross-tabulation of labial consonant pairs in monosyllabic C…C roots

                      C2
           p          b          f          v          m
C1  p      8 (3.4)    1 (2.0)    4 (2.2)    1 (1.6)    4 (3.9)
    b      0 (3.8)    5 (2.3)    3 (2.5)    0 (1.8)    5 (4.5)
    f      1 (2.5)    2 (1.5)    3 (1.6)    1 (1.2)    4 (2.9)
    v      0 (1.1)    1 (0.7)    0 (0.7)    2 (0.5)    1 (1.3)
    m      2 (3.3)    1 (2.0)    1 (2.2)    1 (1.6)    4 (3.9)

Identical pairs (ObsLabi): observed 22, expected 11.7.
Observed counts, with expected counts in parentheses.
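The expected counts in Table 7 can be reproduced from the onset and coda totals quoted in §2.2. A sketch under those figures (the function name is ours, not the author's):

```python
# Expected count of /c1...c2/ roots within the labial class, following
# n * (t_i / T1) * (t_j / T2): n = class size, t_i/t_j = sample-wide
# onset/coda frequencies of each labial, T1/T2 = labial onset/coda totals.
onset = {"p": 99, "b": 113, "f": 73, "v": 32, "m": 98}   # sum = 415
coda  = {"p": 85, "b": 51,  "f": 56, "v": 41, "m": 99}   # sum = 332
n = 55  # number of [Lab...Lab] roots in the sample

T1, T2 = sum(onset.values()), sum(coda.values())

def expected(c1, c2):
    return n * (onset[c1] / T1) * (coda[c2] / T2)

print(round(expected("b", "v"), 1))                  # the /b...v/ cell
print(round(sum(expected(c, c) for c in onset), 1))  # identical pairs
```

This reproduces the 1.8 of the /b…v/ example and the 11.7 expected identical labial pairs of Table 7.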

Table 8: Cross-tabulation of obstruent apical consonant pairs in monosyllabic C…C roots

                         C2
           t           d          θ          ð          s          z
C1  t       9  (8.8)   4 (6.3)    2 (1.8)    2 (0.8)    4 (4.9)    2 (3.4)
    d       8  (8.5)   5 (6.1)    3 (1.8)    0 (0.8)    4 (4.7)    4 (3.3)
    θ       0  (1.6)   2 (1.2)    0 (0.3)    0 (0.1)    0 (0.9)    0 (0.6)
    ð       1  (0.9)   0 (0.6)    0 (0.2)    0 (0.1)    2 (0.5)    2 (0.3)
    s      12 (10.5)   8 (7.6)    3 (2.2)    4 (0.9)    5 (5.8)    3 (4.0)
    z       1  (0.5)   1 (0.4)    0 (0.1)    0 (0.0)    0 (0.3)    0 (0.2)

Identical pairs (ObsApi): observed 19, expected 21.3.
Observed counts, with expected counts in parentheses.


Table 9: Cross-tabulation of obstruent palatal consonant pairs in monosyllabic C…C roots

                  C2
           ʧ          ʤ          ʃ          ʒ
C1  ʧ      1 (0.5)    1 (0.4)    0 (0.3)    0 (0.0)
    ʤ      0 (0.5)    1 (0.4)    0 (0.4)    0 (0.0)
    ʃ      0 (0.5)    0 (0.4)    1 (0.4)    0 (0.0)
    ʒ      0 (0.0)    0 (0.0)    0 (0.0)    0 (0.0)

Identical pairs (ObsPali): observed 3, expected 1.3.
Observed counts, with expected counts in parentheses.

Table 10: Cross-tabulation of obstruent dorsal consonant pairs in monosyllabic CVC roots

             C2
           k          ɡ
C1  k      5 (4.6)    2 (1.7)
    ɡ      1 (2.7)    2 (1.0)

Identical pairs (ObsDorsi): observed 7, expected 5.6.
Observed counts, with expected counts in parentheses.
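The chance probabilities of identity used in the discussion that follows can be derived from Tables 7-10; the class sizes are the observed totals of each table. A minimal sketch (ours, not the author's code):

```python
# (expected identical pairs, number of homorganic roots) per class,
# read off Tables 7 (labial), 9 (palatal), 10 (dorsal) and 8 (apical obstruent).
classes = {
    "labial":           (11.7, 55),
    "palatal":          (1.3, 4),
    "dorsal":           (5.6, 10),
    "apical obstruent": (21.3, 91),
}
# chance P(C1 = C2) for apical obstruents
p_apical = classes["apical obstruent"][0] / classes["apical obstruent"][1]
# chance P(C1 = C2) pooled over the non-apical classes (18.6 / 69)
nonapical = [v for k, v in classes.items() if k != "apical obstruent"]
p_nonapical = sum(e for e, n in nonapical) / sum(n for e, n in nonapical)
print(round(p_apical, 2), round(p_nonapical, 4))  # ~0.23 and ~0.2696
```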

For homorganic roots with an apical obstruent consonant, the probability that C1 and C2 are identical is initially 21.3/91 ≈ 0.23. Identical transvocalic pairs of apicals occur 19 times out of 91. For the apical obstruents, the observed and expected frequencies of identical transvocalic pairs thus match each other very closely. For homorganic roots with a non-apical obstruent consonant, the probability that C1 and C2 are identical is initially 18.6/69 ≈ 0.269. In actual fact, identical transvocalic pairs of non-apicals occur 32 times out of 69, approaching twice the chance frequency. A chi-squared test for the given probabilities returns a χ2 value of 7.4377 and a p-value of 0.006387, a highly significant result. That is, on the assumption that the null hypothesis is true, there is less than a 1 % chance of observing data values at least as extreme as the observed ones. We therefore have sufficient grounds to accept the alternative hypothesis that there is an effect of transvocalic identity on frequency within the class of non-apical homorganic roots.

2.3. Identity: observed values relative to expectations based on similarity avoidance

Identical transvocalic consonants are significantly more common than chance, but work in phonological theory provides additional reasons for supposing that the expected frequencies should depart from chance, though in the opposite direction to what we actually find. Pierrehumbert's (1993) study of the lexicon of Arabic proposed that OCP effects vary gradiently with the perceived similarity of homorganic consonants. We return to the issue of how to compute similarity between segments in further detail below. Similarity ranges between 0 and 1, where 1 is identity. For the time being we can note that in Arabic, roots with identical transvocalic stops are vanishingly rare, while roots with a homorganic stop and fricative are more frequent, and roots with a homorganic obstruent and sonorant are quite common. The relation between similarity and ratio of observed to expected count (O/E ratio) in Arabic can be described by the ogival curves in Figure 1, from Frisch (1997: 83). Figure 1a shows the relationship between similarity and O/E ratio for pairs of consonants that are adjacent in the verb root, while Figure 1b shows the relationship for pairs of consonants separated by a single consonant. Figure 1a shows that for adjacent pairs of consonant, even a small degree of similarity results in lower frequency. Adjacent consonant pairs with similarity values above 0.4 are vanishingly rare, and above 0.6 the number drops to zero. Between non-adjacent consonant pairs, the interaction is weaker, as reflected by the curve.

a. Adjacent consonant pairs    b. Non-adjacent consonant pairs

Figure 1: Gradient OCP in Arabic verb roots (Frisch 1997: 83)


Frisch proposes that the gradient OCP, as a stochastic constraint, may be modeled as a logistic function of similarity, of the form y = 1/(1 + e^(S(x − K))), where y is the acceptability of the consonant pair (as approximated by the O/E ratio) and x is similarity. The value of K controls the midpoint of the curve, and S determines the curve's sharpness. For the Arabic data, there is an almost noiselessly close relationship between the scatter plots in Figure 1 and some logistic function. The reason for this lack of noise would seem to have to do with the status of the verbal root as a psychologically real entity for speakers of Arabic (Frisch, Pierrehumbert and Broe 2004: 216f.). That is, although the verbal root is intercalated with vowels in actual speech, there is a psychologically real level at which the verbal root lacks vowels entirely. There is also evidence that this morphological organization makes speakers of Arabic and other Semitic languages particularly prone to certain types of speech error, in particular the metathesis of consonants that on the surface appear in onset and coda position, e.g. /takbiir/ → /takriib/. Such misorderings are expected given that the verb root in Arabic is psychologically real and distinct in some sense from the vowel melody. Speakers of languages like English, in which the vowel is fully integrated into the lexical representation of the root, apparently do not make errors of this type. On plotting the O/E ratio against similarity for English roots, we would therefore not expect the same clear relationship, but it would equally be surprising to find no evidence of a relationship. In this section we determine the impact of similarity on acceptability for English on the basis of the monosyllable data. Frisch argues for a metric of similarity based on structured specification (Broe 1993), which uses monovalent features.
Monovalent features render the task of computing similarity comparatively easy, since adding a common feature increases similarity, while removing a common feature decreases it. Table 11, which shows the features of the English consonants, is slightly modified from Frisch (1997). Blanks are only used if segments are literally unspecified for a feature.
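Frisch's stochastic constraint can be sketched as a decreasing logistic curve. The parameter values below are illustrative only, and the exact parameterization (in particular the sign convention for S) is our assumption:

```python
import math

def acceptability(x, K=0.5, S=10.0):
    """O/E-style acceptability as a decreasing logistic function of
    similarity x. K sets the curve's midpoint; S its sharpness."""
    return 1.0 / (1.0 + math.exp(S * (x - K)))

print(acceptability(0.5))                       # midpoint: exactly 0.5
print(acceptability(0.0) > acceptability(1.0))  # decreasing in similarity
```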


Table 11: Structured specification of English consonant inventory

Given the specifications, similarity is computed from a lattice, a partial ordering of the natural classes of segments. Each node in the lattice represents a natural class. The top node represents the entire inventory, or the subinventory under consideration. Each natural class of one segment contains the empty set, which is the bottom node. Figure 2 shows the lattice for the dorsal segments of English. Each node in the lattice is labeled with the set of segments contained in the natural class it represents and the features (from Table 11) that define the class. Note that any given node may be defined by more than one feature, in which case there is a redundancy relationship between those features. For example, the class of dorsals {k,ɡ,ŋ} is also redundantly [stop] and [velar]. The containment relation between sets and subsets is shown by a grey line.


Figure 2: Dorsal lattice

Given the structured specification in Figure 2, the similarity may be computed by dividing the number of shared natural classes by the sum of shared and non-shared natural classes, as in (3).

(3) similarity = shared natural classes / (shared natural classes + non-shared natural classes)

Frisch (1997) is not explicit about the way similarity is computed in practice, and I have not been able to back-engineer the method from the values he provides. To compute the similarity of two segments x and y I proceed from the union of the sets in which x and y are contained. For example, if we want to compute the similarity of {k} and {ɡ}, we determine that {k} is contained in {k,ɡ,ŋ} and {ɡ,k}, while {ɡ} is contained within {k,ɡ,ŋ}, {ɡ,k} and {ɡ,ŋ}. They thus share two classes, {k,ɡ,ŋ} and {ɡ,k}, and do not share {ɡ,ŋ}. This gives a similarity of 2/(2 + 1) = 0.67. Within the dorsal lattice {k} and {ŋ} only share the natural class {k,ɡ,ŋ} and do not share {ɡ,k} or {ɡ,ŋ}. This gives a similarity value of 1/(1 + 2) = 0.33. Figures 3, 4, and 5 provide the lattices for the labial, palatal and apical sets.
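The worked example can be spelled out as a short script (our sketch, not the author's; "ng" stands in for /ŋ/, and the class list is the dorsal lattice of Figure 2 minus the singleton and empty classes, which the worked example does not count):

```python
# Non-singleton natural classes of the dorsal lattice (Figure 2).
dorsal_classes = [
    {"k", "g", "ng"},   # [dorsal]
    {"k", "g"},
    {"g", "ng"},
]

def similarity(x, y, classes):
    """Shared natural classes / (shared + non-shared), as in (3)."""
    cx = [c for c in classes if x in c]
    cy = [c for c in classes if y in c]
    shared = [c for c in cx if c in cy]
    union = cx + [c for c in cy if c not in cx]
    return len(shared) / len(union)

print(round(similarity("k", "g", dorsal_classes), 2))   # 0.67
print(round(similarity("k", "ng", dorsal_classes), 2))  # 0.33
```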


Figure 3: Bilabial lattice

Figure 4: Palatal lattice

Figure 5: Alveolar lattice


Computing similarity for each place returns the values shown in Table 12.

Table 12: Similarity matrix for English consonants by place

As we saw in the previous section, there is no obvious bias in favour of identity in the case of roots containing a pair of apicals. Each cell in the homorganic regions of Tables 3 and 4 was matched with the corresponding similarity value in Table 12, 131 data points in total, 72 apical and 59 non-apical. These are plotted in Figure 6 along with a loess curve, a series of local regression lines with a given span or bandwidth, smoothed to derive a curve for the plot as a whole. The first thing to observe is that the relation between similarity and O/E ratio is not nearly as clear for the English monosyllable data as for the Arabic verbal roots. As we noted previously, this is probably related to the fact that, in contrast to Arabic verbal roots, there is no level at which the consonants are literally adjacent: in English, the vowel occupies the same tier as the consonants. Nonetheless, it is still possible to discern a general decrease in O/E ratio as similarity increases, as in Arabic. For pairs of apical consonant (Figure 6a), the downward trend towards the minimum predicted value is a gentle one. Pairs of apical consonant low in similarity occur on average a little more frequently than expected. For apicals, identity has no mitigating effect on the gradient OCP. In line with the predictions of similarity avoidance, pairs of identical apical consonant on average have the smallest O/E ratios. The pattern for pairs of non-apical consonant is different. For one thing, the effect of the gradient OCP is stronger for non-apicals, making transvocalic similarity less acceptable in general. Transvocalic pairs of non-apical consonant that are low in similarity occur on average less frequently than expected, and the curve falls more steeply towards the minimum value compared with the curve for apicals. The most striking difference, however, is the reversal in the direction of the curve represented by pairs of identical transvocalic non-apical consonant. Rather than continuing the downward trend, the curve rises and peaks at around twice the acceptability at the lowest similarity value. Pairs of identical non-apicals occur several times as often as expected given the O/E ratio predicted by extrapolating on the basis of the data between 0 and 0.7 similarity.

a. Apical consonant pairs    b. Non-apical consonant pairs

Figure 6: Relation between similarity and O/E ratio in English homorganic monosyllabic roots

3. Towards an explanation: intermodal memory traces formed during protophonation

The previous section showed that identity counteracts the effects of the gradient OCP for non-apical transvocalic consonants, but not for apical consonants. Why does identity have this effect at all, and can the answer tell us anything about the apical/non-apical split? The answer proposed here is that both aspects can be understood as the effect of intermodal memory traces laid down in childhood and the timing of their formation relative to repetitive babbling. In brief, the argument is this: during the babbling phase, the child forms motor memories of transvocalic repeated labials and linguals, produced through the articulation of some region of the tongue body against the corresponding palate region. The survival of memory traces encoding transvocalic repetition of consonants at certain places of articulation creates conditions for a relative overrepresentation of identical transvocalic consonants relative to the homorganic class. The proposed explanation of the difference between apical and non-apical consonants relies on the idea that truly apical articulations, with adult-like oro-haptic qualities, are not available to the child until maturation of the intrinsic muscles of the tongue, which occurs several years after the babbling phase has ended. Although coronal consonants are produced during the canonical babbling phase, the available evidence suggests that the tongue is not an active articulator at this stage. The late development of adult-like apical articulations entails that the child never forms motor memories encoding repeated adult-like apicals. In the adult lexicon, transvocalic identical apicals bear the full brunt of the gradient OCP and similarity avoidance. In the remainder of this section we shall review the development of babbling and the maturation of the vocal tract in more detail.

There is considerable consensus regarding the stages involved in the development of protophonation during the first year of life. Standard references are Oller (1980, 1995) and Stark (1980), both of whom deal with children acquiring English. The phonation stage during the first two months of life exhibits quasivowels produced with the vocal tract in neutral posture, and glottals. At the second, primitive articulation or 'gooing' stage at 2-3 months, the child imposes articulations on vocalization. The following expansion stage is marked by full vowels, 'raspberries' (bilabial trills) and some marginal babbling, which is characterized by the lack of any rapid transition from consonant to vowel. Babbling begins at around 7 months and is characterized by the rhythmic production of well-formed syllables (Oller 1980, Holmgren et al. 1986). The onset of syllabic babbling is quite sudden and is easily identified by parents as an indicator that the emergence of speech is imminent (Koopmans-van Beinum and van der Stelt 1986).
A canonical (reduplicative) and a variegated stage are usually distinguished, with variegated babbling generally held to begin later. Davis and MacNeilage (1995) show nonetheless that variegated babbling is common from the onset of babbling and does not increase in frequency. They found that a second syllable differed from its immediate predecessor only 50 % of the time, and repetition of consonants occurred 67 % of the time. However, most of this variation was due to variation in the amplitude of mandibular oscillation between cycles, which manifested itself in inorganic variation in constriction for consonants and height for vowels.

In contrast to the observational consensus, the essential nature of babbling is still keenly debated. Several early commentators held that babbling was unrelated to speech, most notably Jakobson (1969 [1941]). Today's debate turns on whether babbling represents linguistic behaviour or not. Thelen (1981) argues that babbling is not a skill but one of a number of rhythmic behaviours that infants engage in, including kicking, waving, and swaying (p. 238). MacNeilage (2008: 109) similarly holds babbling to be a "rhythmic alternation between closed and open states of the mouth, powered by the mandible". As MacNeilage (2008) notes, there must nevertheless be a non-endogenous, mimetic component to babbling, since hearing impairment may significantly delay the onset of babbling or forestall its emergence completely (Locke 1983, Oller and Eilers 1988). On the other hand, Petitto et al. (2001) hold babbling to be "a linguistic activity that reflects babies' sensitivity to specific patterns at the heart of human language and their capacity to use them" (see also Petitto 2005 for an overview). As evidence for the essentially linguistic nature of babbling, she shows that babbling occurs in both spoken and signed modalities. Specifically, human infants are sensitive to rhythmic temporal patterning of 1.5 Hertz in maximally alternating contrast which, in speech, corresponds to syllable-sized units. Importantly for the view of babbling as a skill with an innately programmed component, as opposed to merely an inorganic precursor to skills developed later, human babies attempt to produce these units irrespective of modality, i.e. whether they encounter them as acoustic or visual stimuli. This is consistent with the findings of Locke (1983), and Oller and Eilers (1988). Similarly, Oller et al. (1999) show that delays in the onset of canonical babbling (beyond 10 months) correlate with delays in the acquisition of first words, indicating that babbling is a precursor to speech. Given these findings, it is reasonable to assume that the output of repetitive babbling (intrinsic input) shapes the formation of linguistic memories of transvocalic identity.
These memories offset the effect of the gradient OCP, making identical transvocalic consonants more likely than expected within each homorganic class.² There is some work bearing on the survival of pre-linguistic procedural memory. Myers, Perris and Speaker (1994), for example, followed a group of children to determine the survival of practical memory from a single experience in operating a toy. There was no evidence of explicit verbal recall, but there was evidence of procedural memory: experienced children remained in the play situation longer and relearned the operation of the toy faster than children in a control group that lacked previous exposure.

The idea that patterns of infant production may be observable after the emergence of the word is not new. Most obviously, the syllable inventories of all languages contain the core syllable CV, and it is statistically the favoured syllable type. Early speech patterns are also characterized by certain patterns of CV co-occurrence, for reasons we shall return to in a moment. Coronal consonants tend to be followed by front vowels (e.g. [dɪdɪ]), dorsal consonants by back vowels (e.g. [ɡoɡo]), and labial consonants by central vowels (e.g. [baba]). Several studies have observed the same CV co-occurrence patterns in the first words of children across a variety of language environments, including the Swedish and Japanese children in the Stanford Phonology Projects (Davis and MacNeilage 2002) and Brazilian Portuguese (Teixeira and Davis 2002), among others. As early as 1986, Janson reported coronal-front and dorsal-back biases for C-V dyads in five languages. A follow-up study by MacNeilage et al. (2000) demonstrated all three co-occurrence patterns in a sample of 12,360 words culled from the dictionaries of ten languages (English, Estonian, French, German, Hebrew, Japanese, Maori, Quichua, Spanish, and Swahili). Coronal-front occurred with an O/E ratio of 1.16 (in 7 of these languages), dorsal-back with an O/E ratio of 1.27 (8 languages), and labial-central with 1.10 (7 languages). Similar results are reported by Rousset (2003) for a sample of fifteen languages.

² An anonymous reviewer observes that this appears to be inconsistent with the finding of Davis and MacNeilage (2000), mentioned above, that a second syllable in a babbled sequence differed from its predecessor half the time. These were mainly inorganic differences in the amplitude of a ballistic gesture, leading to variation between stop and spirant articulations. The question may arise why this does not lead to an increase in the observed frequencies of certain homorganic non-identical transvocalic sequences, such as Stop…Fricative. The answer probably lies in the fact that target fricative articulations are not simply ballistic stop gestures that have suffered a decrease in amplitude, resulting in undershoot, as in the Davis and MacNeilage study, but entail more fine-grained articulatory control of stricture. Kirchner (1997) argues that fricative articulations, as opposed to spirants, recruit more muscle groups in order to arrest the upward movement of the active articulator. Sibilants, for example, require stiffening of the sides of the tongue and bracing against the molar gumline (p. 52), resulting in the characteristic central groove required to produce a jet of air and direct it against the teeth to give the effect of stridency. Apart from /f/, which is mastered by 90% of children by the age of three, most fricatives of English are mastered by 90% of children only by the time they are six years old. In addition to the greater biomechanical force required, fricatives generally differ from the nearest corresponding stops in precise place of articulation; /f/, for example, is labio-dental rather than a labio-labial spirant. There is good reason to think that fricatives are distinct in neuromuscular terms from spirants at the same gross place of articulation, but that stops and their corresponding spirants are equivalent.
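The observed/expected (O/E) ratio reported in these studies can be illustrated with a short, self-contained computation. The syllable list below is an invented toy example (not data from any of the studies cited); the ratio itself is simply the observed count of a C-V dyad divided by the count expected if consonant and vowel combined independently:

```python
from collections import Counter

# Toy lexicon of CV syllables (invented for illustration only).
syllables = ["di", "de", "go", "gu", "ba", "da", "bo", "gi", "du", "ga"]

cv_pairs = [(s[0], s[1]) for s in syllables]
c_counts = Counter(c for c, _ in cv_pairs)  # how often each C occurs
v_counts = Counter(v for _, v in cv_pairs)  # how often each V occurs
n = len(cv_pairs)

def oe_ratio(c, v):
    """O/E ratio for a C-V dyad: observed count over the count expected
    under independence, E = count(C) * count(V) / N."""
    observed = sum(1 for pair in cv_pairs if pair == (c, v))
    return observed * n / (c_counts[c] * v_counts[v])

# In this toy set the coronal-front dyad [di] is over-represented:
print(oe_ratio("d", "i"))  # → 1.25 (O/E > 1 means over-represented)
```

On this reading, the O/E ratio of 1.16 for coronal-front dyads reported by MacNeilage et al. (2000) means that such dyads occur 16% more often than independent combination of C and V would predict.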

Let us now turn to why the apicals are immune to the identity effect. As Davis and MacNeilage (1990) note, rhythmic oscillation of the jaw in babbling occurs with little or no independent movement of the tongue. Tongue position and other articulatory settings, such as velopharyngeal opening, are relatively set for the duration of the utterance, with the result that the posture of the tongue determines the quality of both the consonant and the vowel. A study of the correlations between consonant and vowel place ([dɪdɪ], [ɡoɡo], [baba]) confirmed this. Since labiality is independent of height and backness, Davis and MacNeilage reasoned that the preference for central vowels following labial consonants simply reflects the rest position of the tongue. The sequence of labial consonant and central vowel is described as a "pure frame", the result of mandibular oscillation alone. Similar findings were obtained for the velum, whose position was likewise set for the duration of the utterance. As the child matures, control is acquired over the content of the syllable, allowing a wider range of consonant-vowel combinations.

The emergence of behavioural repertoires and natural classes of sound is contingent on maturation of the relevant anatomical structures and physiological processes. The differences between the adult and infant vocal tracts go beyond mere size. In addition to being smaller, the infant vocal tract is also differently shaped, affording different articulatory action possibilities. As Lieberman, Crelin and Klatt (1972) show, the infant vocal tract resembles that of the non-human primate more closely than that of an adult human. The oral cavity is broader; the pharynx is shorter; the oropharyngeal channel slopes downwards gradually, compared with the 90-degree bend at the oropharyngeal juncture in the adult; the mass of the tongue lies towards the front rather than the back of the oral cavity; the velum and epiglottis closely approximate each other; and the larynx is situated relatively high in the vocal tract. The tongue occupies the entire oral cavity, and this too reduces the range of lingual articulations available. Postnatal anatomic remodelling of the vocal tract occurs between 2 and 4 months, when it assumes a more adult shape (Sasaki et al. 1977, Buhr 1980). Growth of the facial skeleton increases the volume of the oral cavity. There are also physiological changes during this period: the tongue tip and lips become more sensitive (Bosma 1975), neural control centres for muscular activity develop (Netsell 1981), and experience gained from operating the articulators in vocal play creates a neural map of the vocal tract (Zlatin 1975). By four or five months the infant is able to modulate the velopharyngeal valve, vary the place and degree of supralaryngeal constrictions, and coordinate phonation and supraglottal articulation to some degree.

The intrinsic muscles of the tongue in infancy are not sufficiently well-developed for complex movements (Fletcher 1973). The tongue is a muscular hydrostat, much like an octopus's tentacle or an elephant's trunk (Kent 1992): changes in the shape of the tongue in one dimension can only be accomplished through compensatory changes in another, e.g. protrusion entails narrowing of the body of the tongue. Motor control over the hydrostatic tongue depends on an interdigitated three-dimensional network of intrinsic longitudinal, vertical and transverse fibres. The many degrees of freedom afforded by the tongue entail a much lengthier course of motor skill development. Some insight into the development of the tongue is afforded by studies of the development of vowel production. Lieberman (1980) finds that the vowel space in the first year of life is best understood in terms of a passive tongue on an active jaw. Hodge (1989) similarly adduces acoustic evidence that apical articulations in protophonation involve a passive tongue and raising of the lower jaw. In adults, by contrast, the production of apical articulations entails less involvement of the jaw and crucial involvement of the tongue tip. As muscles and sensory receptors mature, the child is able to control articulation more finely through the tongue. An early study by Wellman et al. (1931) found that by one year children master low vowels in front, central and back positions, showing early mastery of movement of the tongue in the anterior-posterior dimension. It is not until two years that children master [i], [u] and [ɑ], indicating the ability to move the tongue in the vertical dimension. For children acquiring American English, the rhotic vowel [ɚ] is the last to be acquired, due to the difficulty of the gestures required to accomplish retroflexion and bunching (Shriberg and Kent 1982). The development of consonants is consistent with this picture.
Sander (1972) determined ages of 90% mastery for the consonants of American English. At three years of age, 90% of children demonstrate mastery of /p m h n w/. By the age of four, 90% of children master /b f d j k ɡ/, and by the age of six, /t l r ŋ/. The lateral and the rhotic depend on sophisticated tongue postures. The finely tuned articulation of frication required to produce /v θ ð s z ʧ ʤ ʃ ʒ/ is mastered by 90% of children only beyond six years. Segments for which there is an identity effect are generally mastered early (Sets 1 and 2), while segments for which there is no identity effect are generally mastered late. Dinnsen et al. (1990) discovered similar implicational relationships in the pretreatment inventories of functionally misarticulating children. Hodge (1989) and others have shown that apical consonants are produced differently by infants. Rather than using the intrinsic muscles of the tongue to flex the apex to articulate against the

alveolar ridge, as in adults, the tongue rides passively on the jaw, which is the active articulator. Green, Moore and Reilly (2002) discovered that maturation of the intrinsic muscles of the lips is responsible for changes in the division of labour between the jaw and the lips in speech development. This might lead us to expect the identity effect for labials to be lost as labial articulations are recalibrated. The difference between labials and apicals seems to be that, for apical articulations, auditory-oro-haptic connections must be significantly remodelled. Adult apicals both sound and feel different from early speech apicals, because different parts of the tongue blade/tip and alveolar ridge articulate. In early speech apicals there is greater contact between the blade and the area behind the alveolar ridge, making infant coronals sound more palatal. Since apical consonants are significantly recalibrated after the babbling phase, the child does not form motor memories of repeated transvocalic apicals with adult-like qualities. For this reason, the effect of the gradient OCP and similarity avoidance on apical consonants is not offset by any effect of identity.

4. Conclusions

As a source of perceptuomotor learning, babbling plays a role in mapping correspondences between auditory, proprioceptive and oro-haptic sensations. One of the contributions of babbling to development is the formation of memories encoding repeated transvocalic consonants. Recent findings in phonological theory, specifically on the gradient OCP and similarity avoidance, lead us to expect identical transvocalic consonants to be dispreferred. In a corpus of 1556 English monosyllables, however, this expectation is not borne out, suggesting that these earliest memories may have an effect on statistical distributions in the adult lexicon. For non-apical consonants, identical transvocalic consonants are significantly more frequent than expected within their homorganic class. For apical consonants, though, there is no effect. This was related here to the time it takes for control of the intrinsic muscles of the tongue to mature to the stage where the child can produce apicals with adult-like oro-haptic qualities. Recalibration of the apical consonants does not take place until after the age of six, well after the repetitive babbling phase. Because babbling does not give rise to repeated transvocalic apicals with adult-like oro-haptic quality, there are no relevant motor memories that might shape the corresponding distributions. The apicals emerge as fully subject to the gradient OCP and similarity avoidance.
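The identity-effect tally summarized here can be sketched as a small computation. The pair list below is an invented toy stand-in for the 1556-monosyllable corpus, with each CVC word reduced to its (C1, C2) consonant pair and restricted to the labial class; the comparison is observed identical pairs against the number expected if C2 were chosen independently of C1 within the homorganic class:

```python
from collections import Counter

# Invented toy stand-in for a CVC corpus: transvocalic (C1, C2) pairs
# restricted to one homorganic class (labials).
pairs = [("p", "p"), ("p", "p"), ("p", "b"), ("p", "m"),
         ("b", "b"), ("b", "p"), ("b", "m"),
         ("m", "m"), ("m", "b"), ("m", "p")]

n = len(pairs)
c1_counts = Counter(c1 for c1, _ in pairs)
c2_counts = Counter(c2 for _, c2 in pairs)

# Observed count of identical transvocalic pairs.
observed = sum(1 for c1, c2 in pairs if c1 == c2)
# Expected identical pairs if C2 were independent of C1 within the class:
# sum over each consonant c of count_C1(c) * count_C2(c) / N.
expected = sum(c1_counts[c] * c2_counts[c] for c in c1_counts) / n

print(observed, expected)  # → 4 3.4, so O/E ≈ 1.18 for this toy set
```

An O/E above 1, as in this toy set, is the identity effect: identical pairs exceed the chance expectation. On the chapter's account, the apicals, unlike the non-apicals, show no such surplus.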

In future work we hope to carry out similar investigations on other languages and on broader samples of English words, as well as to identify other phenomena that "echo" the earliest speech preferences.

Acknowledgements

For valuable feedback on the first version of this paper, I would like to thank two anonymous reviewers.

References

Berg, Thomas. 1998. Linguistic Structure and Change: An Explanation from Language Processing. Oxford: The Clarendon Press.
Berkley, Deborah M. 1994. The OCP and gradient data. Proceedings of FLSM V. Studies in the Linguistic Sciences 24: 59–72.
Berkley, Deborah M. 1994. Variability in Obligatory Contour Principle effects. In Katharine Beals, Jeannette Denton, Robert Knippen, Lynette Melnar, Hisami Suzuki and Erica Zeinfeld (eds.), Papers from the 30th Regional Meeting of the Chicago Linguistic Society, Volume 2: The Parasession on Variation in Linguistic Theory, 1–12. Chicago, IL: Chicago Linguistic Society.
Berkley, Deborah M. 2000. Gradient Obligatory Contour Principle Effects. Ph.D. dissertation, Northwestern University.
Boersma, Paul. 1998. Functional Phonology. The Hague: Academic Graphics.
Bosma, James F. 1975. Anatomic and physiologic development of the speech apparatus. In Donald B. Tower (ed.), Human Communication and its Disorders 3. New York: Raven Press.
Broe, Michael. 1993. Specification Theory: The Treatment of Redundancy in Generative Phonology. Ph.D. dissertation, University of Edinburgh.
Bruner, Jerome S. 1973. Organisation of early skilled action. Child Development 44: 1–11.
Buhr, Robert D. 1980. The emergence of vowels in an infant. Journal of Speech and Hearing Research 23: 73–94.

Bye, Patrik. 2011. Dissimilation. In Marc van Oostendorp, Colin Ewen, Elizabeth V. Hume and Keren Rice (eds.), The Blackwell Companion to Phonology, 1408–1433. Oxford: Wiley-Blackwell.
Conrad, Robert, and Audrey J. Hull. 1964. Information, acoustic confusion and memory span. British Journal of Psychology 55: 429–432.
Davis, Barbara L., and Peter F. MacNeilage. 1990. Acquisition of correct vowel production: a quantitative case study. Journal of Speech and Hearing Research 33: 16–27.
Davis, Barbara L., and Peter F. MacNeilage. 1995. The articulatory basis of babbling. Journal of Speech and Hearing Research 38: 1199–1211.
Davis, Barbara L., and Peter F. MacNeilage. 2000. An embodiment perspective on the acquisition of speech perception. Phonetica 57: 229–241.
Davis, Barbara L., and Peter F. MacNeilage. 2002. The internal structure of the syllable. In Talmy Givón and Bertram F. Malle (eds.), The Evolution of Language out of Prelanguage, 135–154. Amsterdam: John Benjamins.
Davis, Stuart. 1991. Coronals and the phonotactics of nonadjacent consonants in English. Phonetics and Phonology 2: 49–6.
Dinnsen, Daniel A., Steven B. Chin, Mary Elbert, and Thomas W. Powell. 1990. Some constraints on functionally disordered phonologies: phonetic inventories and phonotactics. Journal of Speech and Hearing Research 33: 28–37.
Elbers, Loekie. 1982. Operating principles in repetitive babbling: a cognitive continuity approach. Cognition 12: 45–63.
Ferguson, Charles A., and Marlys A. Macken. 1983. The role of play in phonological development. In Keith E. Nelson (ed.), Children's Language, Vol. 4, 231–245. Hillsdale, NJ: Lawrence Erlbaum Associates.
Fikkert, Paula. 1994. On the Acquisition of Prosodic Structure. [HIL Dissertations 6.] The Hague: Holland Academic Graphics.
Fletcher, Samuel G. 1973. Maturation of the speech mechanism. Folia Phoniatrica 25: 161–172.
Frisch, Stefan A. 1997. Similarity and Frequency in Phonology. Ph.D. dissertation, Northwestern University.
Frisch, Stefan A., Janet B. Pierrehumbert, and Michael B. Broe. 2004. Similarity avoidance and the OCP. Natural Language and Linguistic Theory 22: 179–228.

Fudge, Eric C. 1969. Syllables. Journal of Linguistics 5: 253–286.
Green, Jordan R., Christopher A. Moore, and Kevin Reilly. 2002. The sequential development of jaw and lip control for speech. Journal of Speech, Language and Hearing Research 45: 66–79.
Holmgren, Karin, Björn Lindblom, Göran Aurelius, Birgitta Jalling, and Rolf Zetterström. 1986. On the phonetics of infant vocalization. In Björn Lindblom and Rolf Zetterström (eds.), Precursors of Early Speech, 51–63. Basingstoke, Hampshire: Macmillan.
Ingram, David. 1986. Phonological development: production. In Paul Fletcher and Michael Garman (eds.), Language Acquisition: Studies in First Language Development, 223–239. Cambridge: Cambridge University Press.
Jakobson, Roman. 1969 [1941]. Kindersprache, Aphasie und allgemeine Lautgesetze [Child language, aphasia and general sound laws]. Frankfurt am Main: Suhrkamp Verlag. Originally published by Almqvist and Wiksell, Uppsala.
Janson, Tore. 1986. Cross-linguistic trends in the frequency of CV sequences. Phonology Yearbook 3: 179–195.
Kent, Raymond D. 1992. The biology of phonological development. In Charles A. Ferguson, Lise Menn and Carol Stoel-Gammon (eds.), Phonological Development: Models, Research, Implications, 65–90. Timonium, MD: York Press.
Kent, Raymond D., and Megan Hodge. 1990. The biogenesis of speech: continuity and process in early speech and language development. In Jon F. Miller (ed.), Research on Child Language Disorders: A Decade of Progress, 25–53. Austin, TX: Pro-Ed.
Kent, Raymond D., and Ann D. Murray. 1982. Acoustic features of infant vocalic utterances at 3, 6, and 9 months. Journal of the Acoustical Society of America 72: 353–363.
Kirchner, Robert. 1997. An Effort-Based Approach to Consonant Lenition. Ph.D. dissertation, University of California, Los Angeles. Rutgers Optimality Archive 276.
Kučera, Henry, and W. Nelson Francis. 1967. Computational Analysis of Present-Day American English. Providence, RI: Brown University Press.

Leopold, Werner F. 1947. Speech Development of a Bilingual Child: A Linguist's Record. Vol. 2: Sound Learning in the First Two Years. Evanston, IL: Northwestern University Press.
Lieberman, Philip. 1980. On the development of vowel production in young children. In Grace H. Yeni-Komshian, James F. Kavanagh and Charles A. Ferguson (eds.), Child Phonology, Volume 1: Production, 113–142. New York: Academic Press.
Lieberman, Philip, Edmund S. Crelin, and Dennis H. Klatt. 1972. Phonetic ability and related anatomy of the newborn and adult human, Neanderthal man, and the chimpanzee. American Anthropologist 84: 287–307.
Locke, John L. 1983. Phonological Acquisition and Change. New York: Academic Press.
McCarthy, John J. 1986. OCP effects: gemination and antigemination. Linguistic Inquiry 17: 207–263.
McCarthy, John J. 1988. Feature geometry and dependency: a review. Phonetica 43: 84–108.
McCune, Lorraine, and Marilyn M. Vihman. 2001. Early phonetic and lexical development. Journal of Speech, Language and Hearing Research 44: 670–684.
MacKay, Donald G. 1987. The Organization of Perception and Action: A Theory for Language and Other Cognitive Sciences. New York: Springer.
MacNeilage, Peter F., Barbara L. Davis, Ashlynn Kinney, and Christine L. Matyear. 2000. The motor core of speech: a comparison of serial organization patterns in infants and languages. Child Development 71: 153–163.
Mitchell, Pamela R., and Raymond D. Kent. 1990. Phonetic variation in multisyllabic babbling. Journal of Child Language 17: 247–265.
Myers, Nancy A., Eve E. Perris, and Cindy J. Speaker. 1994. Fifty months of memory: a longitudinal study in early childhood. Memory 2: 383–415.

Netsell, Ronald. 1981. The acquisition of speech motor control: a perspective with directions for research. In Rachel E. Stark (ed.), Language Behavior in Infancy and Early Childhood, 127–156. New York: Elsevier.
Oller, D. Kimbrough. 1980. The emergence of the sounds of speech in infancy. In Grace H. Yeni-Komshian, James F. Kavanagh and Charles A. Ferguson (eds.), Child Phonology, Volume 1: Production, 93–112. New York: Academic Press.
Oller, D. Kimbrough. 1995. Development of vocalizations in infancy. In Harris Winitz (ed.), Human Communication and Its Disorders: A Review, Vol. IV, 1–30. Timonium, MD: York Press.
Oller, D. Kimbrough, and Rebecca E. Eilers. 1988. The role of audition in infant babbling. Child Development 59: 441–449.
Oller, D. Kimbrough, Rebecca E. Eilers, A. Rebecca Neal, and Heidi K. Schwartz. 1999. Precursors to speech in infancy: the prediction of speech and language disorders. Journal of Communication Disorders 32: 223–245.
Petitto, Laura-Ann. 2005. How the brain begets language. In James McGilvray (ed.), The Cambridge Companion to Chomsky, 84–101. Cambridge: Cambridge University Press.
Petitto, Laura-Ann, Marina Katerelos, Bronna G. Levy, Kristine Gauna, Karine Tétrault, and Vittoria Ferraro. 2001. Bilingual signed and spoken language acquisition from birth: implications for mechanisms underlying early bilingual language acquisition. Journal of Child Language 28: 453–496.
Pierrehumbert, Janet. 1993. Dissimilarity in the Arabic verbal roots. In Proceedings of the North East Linguistic Society (NELS) 23, 367–381. Amherst, MA: Graduate Linguistic Student Association.
Rousset, Isabelle. 2003. From lexical to syllabic organization: favored and disfavored co-occurrences. In Proceedings of the 15th International Congress of Phonetic Sciences, 2705–2708. Barcelona: Autonomous University of Barcelona.
Sander, Eric K. 1972. When are speech sounds learned? Journal of Speech and Hearing Disorders 37: 55–63.

Sasaki, Clarence T., Paul A. Levine, Jeffrey T. Laitman, and Edmund S. Crelin. 1977. Postnatal descent of the epiglottis in man. Archives of Otolaryngology 103: 169–171.
Schwartz, Richard G., Laurence B. Leonard, M. Jeanne Wilcox, and M. Karen Folger. 1980. Again and again: reduplication in child phonology. Journal of Child Language 7: 75–87.
Shriberg, Lawrence D., and Raymond D. Kent. 1982. Clinical Phonetics. New York: Macmillan.
Stark, Rachel E. 1980. Stages of speech development in the first year of life. In Grace H. Yeni-Komshian, James F. Kavanagh and Charles A. Ferguson (eds.), Child Phonology, Volume 1: Production, 73–92. New York: Academic Press.
Koopmans-van Beinum, Florina J., and Jeannette M. van der Stelt. 1986. Early stages in the development of speech movements. In Björn Lindblom and Rolf Zetterström (eds.), Precursors of Early Speech, 37–50. Basingstoke, Hampshire: Macmillan.
Stoel-Gammon, Carol. 1985. Phonetic inventories, 15–24 months: a longitudinal study. Journal of Speech and Hearing Research 28: 55–512.
Teixeira, Elizabeth Reis, and Barbara L. Davis. 2002. Early sound patterns in the speech of two Brazilian Portuguese speakers. Language and Speech 45: 179–204.
Thelen, Esther. 1981. Rhythmical behavior in infants: an ethological perspective. Developmental Psychology 17: 237–257.
Thelen, Esther, and Linda B. Smith. 1994. A Dynamical Systems Approach to the Development of Cognition and Action. Cambridge, MA: MIT Press.
Vihman, Marilyn M. 1996. Phonological Development: The Origins of Language in the Child. Oxford: Blackwell.
Vihman, Marilyn M., Rory A. DePaolis, and Tamar Keren-Portnoy. 2009. Babbling and words: a dynamic systems perspective on phonological development. In Edith L. Bavin (ed.), The Cambridge Handbook of Child Language, 166–182. Cambridge: Cambridge University Press.
Wellman, Beth L., Ida Mae Case, Ida Gaarder Mengert, and Dorothy E. Bradbury. 1931. Speech sounds of young children. University of Iowa Studies in Child Welfare 5: 7–80.

Westermann, Gert, and Eduardo Reck Miranda. 2004. A new model of sensorimotor coupling in the development of speech. Brain and Language 89: 393–400.
Zlatin, Marsha A. 1975. Explorative mapping of the vocal tract and primitive syllabification in infancy: the first six months. Purdue University Contributed Papers, Fall: 58–73.


Appendix

In each table below, a row lists the monosyllables for one initial consonant (C1); within a row, cells are labelled by the final consonant (C2), and "-" marks an unattested combination.

Labial - labial
p: p PAP PEEP PEP PIP PIPE POOP POP PUP | b PUB | f POOF POUF POUFFE PUFF | v PAVE | m PALM PAM PERM POM
b: p - | b BABE BARB BIB BOB BOOB | f BEEF BIFF BOUFFE BUFF | v - | m BALM BARM BEAM BOMB BOOM BUM
f: p FOP | b FIB FOB | f FEOFF FIEF FIFE | v FIVE | m FAME FARM FEME FIRM
v: p - | b VERB | f - | v VERVE VIVE | m VIM
m: p MAP MOP | b MOB | f MUFF | v MOVE | m MA'AM MAIM MIME MUM
w: p WEEP WHIP WHOP WIPE | b WEB | f WAIF WHIFF WIFE WOOF | v WAIVE WAVE WEAVE WIVE | m WHIM WOMB WORM


Labial - coronal obstruent
p: t PART PAT PATE PEAT PERT PET PIT POT POUT PUT PUTT | d PAD PAID PARD PIED POD POOD PUD | θ PATH PITH | ð - | s PACE PASS PEACE PICE PIECE PIERCE PISS PURSE PUS PUSS | z PARSE PEASE POISE | ʧ PARCH PATCH PEACH PERCH PITCH POUCH | ʤ PAGE PURGE | ʃ PISH POSH PUSH | ʒ -
b: t BAIT BART BAT BATE BEAT BEET BET BIGHT BIT BITE BOOT BOUT BUT BUTT | d BAD BADE BARD BEAD BEARD BED BID BIDE BIRD BUD | θ BATH BERTH BIRTH | ð BATHE BOOTH | s BAAS BASE BASS BICE BIS BOSS BUS BUSS | z BAAS BAIZE BIZ BOOZE BOUSE BUZZ | ʧ BATCH BEACH BEECH BIRCH BITCH BOTCH BUTCH | ʤ BADGE BARGE BUDGE | ʃ BASH BOCHE BOSH BUSH | ʒ BEIGE
f: t FART FAT FATE FEAT FEET FIGHT FIT FOOT PHUT | d FAD FADE FED FEED FID FOOD | θ FAITH FIRTH | ð - | s FACE FARCE FESSE FIERCE FOSSE FUSS | z FEZ FIZZ FURZE FUZZ PHASE PHIZ | ʧ FETCH FITCH | ʤ FUDGE | ʃ FASH FISH | ʒ -
v: t VAT VERT VET | d VOID | θ - | ð - | s VERSE VICE VIS VOICE | z VASE VIZ | ʧ VETCH VOUCH | ʤ VERGE | ʃ - | ʒ -
m: t MART MAT MATE MATT MEAT MEET MET METE MIGHT MITE MITT MOOT MUTT | d MAD MADE MAID MEAD MEED MID MOD MOOD MUD | θ MEATH METH MIRTH MOTH MOUTH MYTH | ð MOUTH | s MACE MASS MESS MICE MISS MOOSE MOSS MOUSE MOUSSE MUSS | z MAIZE MAZE MOUSE | ʧ MARCH MATCH MOOCH MOUCH MUCH | ʤ MADGE MAGE MARGE MARJ MERGE MIDGE | ʃ MARSH MASH MESH MUSH | ʒ -
w: t WAIT WATT WEIGHT WERT WET WHAT WHEAT WHET WHIT WHITE WIGHT WIT WORT WOT | d WAD WADE WED WEED WEIRD WIDE WOOD WORD WOULD | θ WITHE WORTH | ð WITH | s WORSE | z WAS WHEEZE WHIZ WHIZZ WISE | ʧ WATCH WHICH WITCH | ʤ WAGE WEDGE | ʃ WASH WISH | ʒ -

Labial - coronal sonorant
p: n PAIN PAN PANE PEAN PEN PIN PINE PUN | l PAIL PAL PALE PEAL PEARL PEEL PILE PILL POLL POOL PULL PURL
b: n BAIRN BAN BANE BARN BEAN BEEN BEN BIN BINE BON BOON BUN BURN | l BAIL BALE BEL BELL BELLE BILE BILL BOIL BUHL BULL
f: n FAIN FAN FANE FEIGN FEN FERN FIN FINE FINN FUN PHONE | l FAIL FEEL FELL FILE FILL FOIL FOOL FOUL FOWL FULL FURL
v: n VAIN VAN VANE VEIN VINE | l VAIL VALE VEAL VEIL VILE VOILE VOL
m: n MAIN MAN MANE MEAN MEN MESNE MIEN MINE MOON | l MAIL MALE MARL MEAL MILE MILL MOIL MOLL MULL
w: n ONE WAIN WAN WANE WEAN WEEN WEN WHEN WHIN WHINE WIN WINE WON | l WAIL WALE WEAL WELL WHALE WHEEL WHILE WHIRL WHORL WILE WILL WOOL


Labial - dorsal
p: k PACK PARK PEAK PECK PEEK PEKE PERK PICK PIKE PIQUE POCK PUCK | ɡ PEG PIG PUG | ŋ PANG PING
b: k BACK BAKE BARK BARQUE BEAK BECK BIKE BOOK BUCK BURKE | ɡ BAG BEG BERG BIG BOG BUG BURG | ŋ BANG BUNG
f: k FAKE | ɡ FAG FIG FOG FUG | ŋ FANG
v: k VAC VIC | ɡ VAGUE | ŋ -
m: k MAC MACH MAKE MARK MARQUE MEEK MIKE MOCK MUCK MURK | ɡ MAG MEG MIG MUG | ŋ -
w: k WAKE WEAK WEEK WHACK WICK WORK | ɡ WAG WHIG WIG | ŋ WHANG WING

Coronal obstruent - labial
t: p TAP TAPE TIP TOP TUP TYPE | b TAB TUB | f TIFF TOFF TOUGH TURF | v - | m TAME TEAM TEEM TERM THYME TIME TOM TOMB
d: p DEEP DIP | b DAB DEB DIB DUB | f DEAF DOFF | v DIVE DOVE | m DAM DAME DAMN DEEM DERM DIM DIME DOOM DUMB
θ: p - | b - | f THIEF | v THIEVE | m THEME THERM THUMB
ð: p - | b - | f - | v - | m THEM
s: p SAP SEEP SIP SOP SOUP SUP | b SOB SUB | f SAFE SERF SOPH SURF | v SALVE SAVE SERVE SIEVE | m CYME PSALM SAM SAME SEAM SEEM SOME SUM
z: p ZIP | b - | f - | v - | m ZOOM
r: p RAP RAPE REAP REP RIP RIPE WRAP | b RIB ROB RUB | f REEF RIFE ROOF ROUGH RUFF | v RAVE REEVE REV RIVE | m RAM REAM RHEUM RHOMB RHUMB RHYME RIM RIME ROOM RUM
ʧ: p CHAP CHAPE CHEAP CHEEP CHIP CHIRP CHOP | b CHUB | f CHAFE CHAFF CHIEF CHOUGH CHUFF | v CHIVE | m CHARM CHIME CHUM
ʤ: p GIP GYP JAPE JEEP | b GIBE GYBE JAB JIB JIBE JOB | f JEFF | v GYVE | m GEM GERM GYM JAM JAMB
ʃ: p SHAPE SHARP SHEEP SHIP SHOP | b - | f CHEF SHEAF SHOUGH | v SHAVE SHOVE | m SHAM SHAME
ʒ: p JUPE | b - | f - | v - | m -
j: p YAP | b - | f - | v - | m YAM


Coronal obstruent - coronal obstruent
t: t TART TAT TEAT TIGHT TIT TOOT TOT TOUT TUT | d TED TIDE TOD TURD | θ TEETH TOOTH | ð TEETHE TITHE | s TERSE TICE TIERCE TOSS | z TEASE TIS | ʧ TEACH TOUCH | ʤ - | ʃ TACHE TOSH TUSH | ʒ TIGE
d: t DART DATE DEBT DIGHT DIRT DOIT DOT DOUBT | d DAD DEAD DEED DID DUD | θ DEARTH DEATH DOTH | ð - | s DACE DICE DOSS DOUSE | z DAZE DIES DOES DOWSE | ʧ DITCH DUTCH | ʤ DIRGE DODGE | ʃ DASH DISH DOUCHE | ʒ -
θ: t - | d THIRD THUD | θ - | ð - | s - | z - | ʧ THATCH | ʤ - | ʃ - | ʒ -
ð: t THAT | d - | θ - | ð - | s THIS THUS | z THEIRS THESE | ʧ - | ʤ - | ʃ - | ʒ -
s: t CERT CITE SAT SATE SEAT SET SETT SIGHT SIT SITE SOOT SOT | d CEDE SAD SAID SEED SIDE SOD SUDD SURD | θ SAITH SOOTH SOUTH | ð SCYTHE SEETHE SOOTHE SOUTH | s CEASE CESS SICE SOUSE SYCE | z SAYS SEIZE SIZE | ʧ SEARCH SUCH | ʤ SAGE SEDGE SERGE SIEGE SURGE | ʃ SASH | ʒ -
z: t ZOOT | d ZED | θ - | ð - | s - | z - | ʧ - | ʤ - | ʃ - | ʒ -
r: t RAT RATE RET RIGHT RITE ROOT ROT ROUT ROUTE RUT WRIGHT WRIT WRITE WROUGHT | d RAD RAID READ RED REED RID RIDE ROD ROOD RUDD RUDE | θ RUTH WRAITH WRATH WREATH | ð WREATHE WRITHE | s RACE REIS RICE WRASSE | z RAISE RASE RAZE RES RISE ROUSE RUSE | ʧ RATCH REACH RETCH RICH WRETCH | ʤ RAGE RAJ RIDGE | ʃ RASH RUCHE RUSH | ʒ ROUGE
ʧ: t CHART CHAT CHEAT CHIT | d CHAD CHARD CHIDE | θ - | ð - | s CHASE CHESS CHOICE CHOUSE | z CHEESE CHOOSE | ʧ CHURCH | ʤ CHARGE | ʃ - | ʒ -
ʤ: t JET JOT JUT JUTE | d JADE | θ - | ð - | s JESS JOSS JUICE JUS | z JAZZ | ʧ - | ʤ JUDGE | ʃ - | ʒ -
ʃ: t CHUTE SHEET SHIRT SHIT SHOOT SHOT SHOUT SHUT | d SHAD SHADE SHARD SHED SHERD SHOD SHOULD | θ SHEATH | ð SHEATHE | s - | z CHAISE SHEARS | ʧ - | ʤ - | ʃ SHUSH | ʒ -
ʒ: t - | d - | θ - | ð - | s - | z - | ʧ - | ʤ - | ʃ - | ʒ -
j: t YACHT YATE YET | d JOD YARD YOD | θ YOUTH | ð - | s USE YES | z USE YAWS YOURS | ʧ - | ʤ - | ʃ - | ʒ -

Coronal obstruent - coronal sonorant
t: n TA'EN TAN TARN TEEN TEN TERN TIN TINE TON TOWN TUN TURN TYNE | l TAIL TALE TEAL TELL TILE TILL TOIL TOOL
d: n DAN DANE DARN DEAN DEIGN DEN DENE DIN DINE DON DONE DOWN DUN DYNE | l DALE DEAL DELL DILL DOLL DULL
θ: n TANH THANE THIN | l THILL
ð: n THAN THEN THINE | l -
s: n SANE SCENE SEEN SEINE SEN SIGN SIN SINE SON SOON SUN SYNE | l CEIL CELL CILL SAIL SALE SEAL SELL SILL SOIL SOL
z: n - | l ZEAL
r: n RAIN RAN REIGN REIN RHINE RUN RUNE WREN | l RAIL RALE REAL REEL RILE RILL ROIL RULE
ʧ: n CHAIN CHIN CHINE CHURN | l CHILL CHURL
ʤ: n GEN GENE GIN JANE JEAN JINN JOHN JOIN JUNE | l GAOL GEL GILL JAIL JELL JILL JOULE JOWL
ʃ: n SHEEN SHIN SHINE SHONE SHUN SINH | l SHALE SHALL SHELL
ʒ: n - | l -
j: n YARN YAWN YEAN YEARN YEN YON | l YAWL YELL YOWL YULE

Coronal obstruent - dorsal
t: k TACK TAKE TEAK TEC TIC TICK TIKE TOOK TUCK TURK TYKE | ɡ TAG TEG TIG TOG TUG | ŋ TANG TONGUE
d: k DAK DARK DECK DICK DIKE DIRK DOCK DUCK DYKE | ɡ DAG DIG DOG DUG | ŋ DING DUNG
θ: k THICK | ɡ THUG | ŋ THING THONG
ð: k - | ɡ - | ŋ -
s: k CIRQUE SAC SACK SAKE SEC SEEK SIC SICK SIKH SOCK SUCK | ɡ SAG | ŋ SANG SING SONG SUNG
z: k - | ɡ - | ŋ -
r: k RACK RAKE RECK REEK REICH RICK ROC ROCK ROOK RUCK WRACK WREAK WRECK | ɡ RAG RIG RUG | ŋ RANG RING RUNG WRING WRONG WRUNG
ʧ: k CHECK CHEEK CHEQUE CHICK CHOCK CHUCK CZECH TCHICK | ɡ - | ŋ -
ʤ: k JACK JERK | ɡ JAG JIG JOG JUG | ŋ -
ʃ: k CHIC SHACK SHAKE SHARK SHEIK SHEIKH SHIRK SHOCK SHOOK SHUCK | ɡ SHAG | ŋ -
ʒ: k - | ɡ GIGUE | ŋ -


Coronal sonorant - labial
n: p KNAP KNOP NAP NAPE NEAP NIP | b KNOB NAB NIB NOB NUB | f KNIFE | v KNAVE NAVE NERVE | m NAME NUMB
l: p LAP LEAP LIP LOOP LOP | b LAB LIB LOB | f LAUGH LEAF LIEF LIFE LUFF | v LAVE LEAVE LIVE LOVE | m LAM LAMB LAME LIMB LIME LIMN LOOM

Coronal sonorant - coronal obstruent
n: t GNAT KNIGHT KNIT KNOT KNOUT NEAT NET NIGHT NIT NOT NUT | d KNEAD NARD NEED NOD | θ NEATH | ð - | s GNEISS NESS NICE NIECE NOOSE NOUS NURSE | z NAZE NOISE | ʧ NICHE NOTCH | ʤ NUDGE | ʃ GNASH | ʒ -
l: t LATE LEET LET LIGHT LIT LOOT LOT LOUT LUTE | d LAD LADE LAID LAIRD LARD LEAD LED LEWD LID LOUD | θ LATH | ð LATHE LITHE | s LACE LASS LEASE LESS LICE LOOSE LOSS LOUSE | z LAZE LEES LES LOSE LOUSE | ʧ LARCH LATCH LEACH LEECH LURCH | ʤ LARGE LEDGE LIEGE LODGE | ʃ LASH LEASH LUSH | ʒ LUGE


Patrik Bye

Coronal sonorant - coronal sonorant

C1 = n:
  n: NINE NON NONE NOON NOUN NUN
  l: GNARL KNEEL KNELL NAIL NIL NULL

C1 = l:
  n: LAIN LANE LEAN LEARN LIEN LINE LOIN LOON LUNE LYNN
  l: LEAL LISLE LOLL LULL

Coronal sonorant - dorsal

C1 = n:
  k: KNACK KNOCK NECK NICK NOOK
  ɡ: KNAG NAG
  ŋ: –

C1 = l:
  k: LAC LACK LAKE LAKH LARK LEAK LEEK LICK LIKE LOCH LOCK LOOK LOUGH LUCK LUKE LURK
  ɡ: LAG LEAGUE LEG LOG LUG
  ŋ: LING LONG LUNG

Dorsal - labial

C1 = k:
  p: CAP CAPE CARP COOP COP CUP KEEP KIP
  b: CAB COB CUB CURB KERB KIBE
  f: CALF COIF COUGH CUFF
  v: CALVE CARVE CAVE CURVE
  m: CALM CAM CAME CHYME COMBE COME COOM CUM

C1 = ɡ:
  p: GAP GAPE GIP
  b: GAB GARB GOB
  f: GAFF GAFFE GOOF
  v: GAVE GIVE
  m: GAME GUM



Dorsal - coronal obstruent

C1 = k:
  t: CART CARTE CAT COOT COT CURT CUT KIT KITE QUART QUOIT
  d: CAD CADE CARD COD COULD CUD CURD KID
  θ: KITH

C1 = ɡ:
  t: GAIT GATE GET GIRT GOT GOUT GUT
  d: GAD GIRD GOD GOOD GUARD GUIDE
  θ: GARTH GIRTH GOTH

Dorsal - coronal sonorant

C1 = k:
  n: CAIRN CAN CANE COIGN COIN CON COON KEEN KEN KERN KHAN KIN KINE QUOIN
  l: CARL CHYLE COIL COL COOL COWL CULL CURL KAIL KALE KEEL KILL KYLE

C1 = ɡ:
  n: GAIN GIN GONE GOON GOWN GUN
  l: GAEL GALE GHOUL GILL GIRL GUILE GULL

Dorsal - dorsal

C1 = k:
  k: CAKE COCK COOK KICK KIRK
  ɡ: COG KEG
  ŋ: KING

C1 = ɡ:
  k: GOWK
  ɡ: GAG GIG
  ŋ: GANG GONG

Dorsal - coronal obstruent (continued)

C1 = k:
  ð: –
  s: CASE COS CURSE CUSS KISS
  z: COS COZ
  ʧ: CATCH COUCH KETCH
  ʤ: CADGE CAGE KEDGE
  ʃ: CACHE CASH COSH
  ʒ: –

C1 = ɡ:
  ð: –
  s: GAS GEESE GOOSE GUESS
  z: GAZE GUISE
  ʧ: –
  ʤ: GAGE GAUGE GOUGE
  ʃ: GASH GOSH GUSH
  ʒ: –



/h/ (C1 = h)

  p: HAP HARP HEAP HEP HIP HOOP HOP WHOOP
  b: HERB HIB HOB HUB
  f: HALF HOOF HUFF
  v: HALVE HAVE HEAVE HIVE
  m: HALM HAM HARM HAULM HEM HIM HUM HYMN WHOM
  t: HART HAT HATE HEART HEAT HEIGHT HIT HOOT HOT HURT HUT
  d: HAD HADE HARD HEAD HEARD HEED HERD HID HIDE HOARD HOD HOOD HORDE
  θ: HATH HEARTH HEATH
  ð: HYTHE
  s: HEARSE HISS HOARSE HORSE HOUSE
  z: HAS HAZE HE'S HERS HIS HOUSE WHOSE
  ʧ: HATCH HITCH HOOCH HUTCH
  ʤ: HEDGE HODGE
  ʃ: HARSH HASH HUSH
  ʒ: –
  n: HEN HERN HORN HUN
  l: HAIL HALE HALL HAUL HE'LL HEAL HEEL HELL HILL HOWL HULL HURL
  k: HACK HAKE HARK HAWK HECK HICK HIKE HOCK HOIK HOOK HOUGH
  ɡ: HAG HOG HUG
  ŋ: HANG HUNG
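Cells like those above lend themselves to mechanical tallying, which is what statistics over identical transvocalic consonants require. The following sketch is illustrative only: the mini-lexicon, the place classes and the function names are assumptions made for exposition, not Bye's actual data or procedure.

```python
# Sketch: tally C1-C2 place pairings across CVC monosyllables.
# The lexicon and the place classification below are hypothetical.
from collections import Counter

PLACE = {
    "p": "labial", "b": "labial", "f": "labial", "v": "labial", "m": "labial",
    "t": "cor-obs", "d": "cor-obs", "s": "cor-obs", "z": "cor-obs",
    "n": "cor-son", "l": "cor-son", "r": "cor-son",
    "k": "dorsal", "g": "dorsal",
}

# Hypothetical CVC transcriptions: (C1, V, C2)
LEXICON = {
    "tin":  ("t", "i",  "n"),
    "deal": ("d", "i:", "l"),
    "pop":  ("p", "o",  "p"),
    "keg":  ("k", "e",  "g"),
    "name": ("n", "ei", "m"),
}

def place_pair_counts(lexicon):
    """Count C1-C2 place pairings across all CVC entries."""
    counts = Counter()
    for c1, _vowel, c2 in lexicon.values():
        counts[(PLACE[c1], PLACE[c2])] += 1
    return counts

counts = place_pair_counts(LEXICON)
# Identical-place pairs (e.g. labial-labial 'pop') can then be compared
# against heterorganic pairs to test for identity avoidance.
identical = sum(n for (a, b), n in counts.items() if a == b)
```

Comparing the identical-place total with expected frequencies under independence of C1 and C2 is then a routine chi-square-style exercise.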

Identity avoidance in the onset

Toyomi Takahashi

1. Introduction

Syllabic structure is recognised as part and parcel of representation in current phonological theories, and there is little controversy over its dual function. On the one hand, syllabic structure expresses the relations holding between the terminal units on which prosodic structure is built; on the other hand, it identifies the units that are referred to in describing regularities of melodic distribution. Although the precise configuration of the structure may differ from one framework to another, it is generally agreed that syllabic structure comprises a core component, a nucleus, which is typically associated with a vocalic expression, and two optional ones, an onset and a coda, which accommodate consonantal expressions on either side of the nucleus.

This paper concerns the optional component that precedes the nucleus: the onset. The status of the onset as a formal representational unit is considered untenable in some restrictive frameworks, on the grounds that it does not count in any phonological weight system and thus fails to fulfil the dual function described above (Clements and Keyser 1983, Levin 1985, McCarthy and Prince 1986).¹ And yet, there are at least two reasons pointing to the necessity of recognising the onset as a syllabic component. First, the presence of an onset has been observed to contribute to the well-formedness of prosodic structure. Typological observations show that syllables with an onset are structurally unmarked and thus optimal, compared to those without an onset (Jakobson 1962, Kaye and Lowenstamm 1981, Prince and Smolensky 2002); the existence of an onset sometimes determines the well-formedness of higher prosodic domains such as feet and prosodic words (Takahashi 1994, 2004). Second, it has been assumed that the onset defines a phonotactic domain. Individual grammars are therefore expected to declare (i) how many melodic units can be incorporated into an

¹ Davis (1988) argues that a weight-sensitive system may refer to the onset. See Takahashi (1994), Goedemans (1996), Downing (1998), and Takahashi (2004) for alternative analyses.



onset, and (ii) what restrictions control the distribution of melodies within this domain. This paper focuses on the latter by considering the nature of the phonotactic constraints that hold within the onset constituent.

String-adjacent distributional regularities are also observed in two other domains: one is the nucleus, and the other is the sequence of a coda and a following onset.² Among the three phonotactic domains, the nucleus and the coda-onset sequence — to the exclusion of the onset — have the following characteristic in common: if two melodic units can stand adjacent in the same domain, it is usually possible for them to share some or all of their melodic properties without incurring ill-formedness. Long monophthongs are allowed to occupy the nucleus in many languages; coda-onset clusters are regularly restricted to geminates and/or sequences of a sonorant plus a homorganic obstruent. This preference for close identity between neighbouring melodic units does not apply to the onset, however; instead, adjacent units in the onset domain are antagonistic — they display a strong tendency to avoid identity: geminates are excluded and homorganic clusters are uncommon in the onset.

The most widely accepted explanation for this state of affairs resorts to co-occurrence restrictions, which are formulated in such a way that adjacent positions are banned from sharing some melodic properties and/or are required to keep a prescribed degree of melodic disparity. This approach may be successful to the extent that the stipulated restrictions help identify recurrent sound shapes in the onset, but it leaves untouched the question of what makes the onset different from the other two phonotactic domains. This paper discusses an alternative account of the difference between the onset and other phonotactic domains by challenging the assumption that the onset can be structurally complex.
Here the term ‘structurally complex’ refers to an onset that is thought to contain a melodic ‘cluster’ or ‘contour’ involving a sequence of units (terminal positions, root nodes, etc.) at some level of phonological representation. The notion to be explored in this paper is that any apparent sequence of melodies in an onset does not result from a sequence of phonological units arranged in linear order; the onset is a non-branching constituent that may contain phonological primes, but the latter may only hold dependency relations, not precedence relations. When an onset is phonetically interpreted, the resulting speech signal may display some ordering of acoustic patterns. But it is not within the remit of phonology to explain these sequences: the ordering of such patterns arises in the mapping of phonological representations onto the acoustic signal, for reasons that are external to the grammar.

The discussion is organised as follows. §2 describes theoretical approaches to distributional asymmetries and introduces the framework on which the present arguments are based. It also clarifies the issue at hand regarding the onset as a phonotactic domain. Then §3 examines the nature of non-linear representations and argues for redundancy-free representations without well-formedness constraints. §4 proposes that the onset is a simplex entity and that any apparent phonotactic restrictions are derived from a general theory of melodic structure. A summary is given in §5.

² Following Kaye (1990), this paper assumes that the ‘coda’ is merely an informal label for a tautosyllabic post-nuclear position licensed by the following onset. The coda alone does not circumscribe a phonotactic domain.

2. Distributional asymmetries

The description of a sound system refers not only to the inventory of melodic units but also to their distributional regularities. An investigation into the former involves taking a microscopic look inside melodic units to reveal their internal properties (Jakobson, Fant and Halle 1963). In this paper these properties will be represented by units called elements (Anderson and Jones 1974, Schane 1984, Kaye, Lowenstamm and Vergnaud 1985, Harris and Lindsey 1995, Nasukawa 2005, Backley 2011). Meanwhile, an examination of distributional regularities also requires us to take a bird’s-eye view of sequences of melodic units, from which it emerges that a melodic string is organised into syllabic structure (Fudge 1969, Kahn 1976, Clements and Keyser 1983). The presence of syllabic structure in representations allows us to identify three domains in which the distributional regularities in question are formulated. Let us assume that melodic units are exhaustively parsed into three syllabic components, onset, nucleus and coda; I leave aside the question of their formal status for the time being.
Two of the three phonotactic domains — the onset and the nucleus — coincide with syllabic constituents, while the remaining domain comprises a sequence of a coda and the following onset. A common characteristic of the phonotactic constraints holding in these domains is that one designated position has a dominant status over the other(s) within the same domain.

One attempt to formalise this asymmetry is to postulate that adjacent melodic units are required to keep a degree of sonority distance (Steriade 1982, Selkirk 1984, Zec 1988, Clements 1990). According to this sonority-based account, melodic units within an onset are distributed in such a way that the initial one is less sonorous than the subsequent one by a prescribed degree. By contrast, within a nucleus as well as in a coda-onset sequence, the initial unit must either be more sonorous or contain the same content, the latter resulting in a long monophthong in a nucleus and a geminate in the case of a coda-onset sequence. Although this sonority-based approach has been widely employed, the notion of sonority is crucially reliant on a stipulated universal hierarchy of melodic prominence. As this hierarchy does not seem to contribute to any other aspects of phonological behaviour, its legitimacy may well be called into question. And on this basis, the concept of sonority distance is difficult to defend as a determining factor in distributional regularities (see Harris 2006 for detailed discussions against the notion of sonority as a formal phonological property).

Another, arguably more convincing approach is to attribute the asymmetry described above to a relational property that invokes headship: a particular position in each phonotactic domain is designated the head of that domain, giving it dominance over its dependants. The notion of an asymmetric relational property has been applied extensively, either explicitly or tacitly, in phonological theories, and distributional regularities may well be assumed to fall under this pattern. Although this line of argument can still make use of sonority to determine the identity of a domain head (Anderson and Jones 1974, Kiparsky 1981), it is nevertheless difficult to maintain the relevance of sonority as an intrinsic property of the head, considering that prominence is rather inconsistent with other melodic phenomena attributable to the head: the prominent position in a nucleus can support more melodic contrasts than are possible in a dependant position; yet it is the reverse that holds in an onset and in a coda-onset domain.
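The sonority-distance account reviewed above can be stated as a pair of checks. In the sketch below the scale values and the ‘prescribed degree’ are illustrative assumptions, not figures proposed in this chapter:

```python
# Sketch of a sonority-distance account: onsets must rise in sonority by a
# set margin; coda-onset sequences must fall, or be totally identical
# (a geminate). Scale values and the margin are assumed for illustration.
SONORITY = {"p": 1, "t": 1, "k": 1, "b": 2, "d": 2, "g": 2,
            "f": 3, "s": 3, "m": 4, "n": 4, "l": 5, "r": 6}

MIN_ONSET_DISTANCE = 3  # the assumed "prescribed degree"

def good_onset(c1, c2):
    """Onset cluster: the second member must outrank the first by the margin."""
    return SONORITY[c2] - SONORITY[c1] >= MIN_ONSET_DISTANCE

def good_coda_onset(c1, c2):
    """Coda-onset: falling sonority, or total identity (a geminate)."""
    return SONORITY[c1] > SONORITY[c2] or c1 == c2

assert good_onset("p", "l")       # pl- rises by 4
assert not good_onset("l", "p")   # *lp- onset falls
assert not good_onset("p", "p")   # geminate onsets excluded
assert good_coda_onset("m", "p")  # -mp- falls
assert good_coda_onset("p", "p")  # -pp- geminate allowed
```

Note how the stipulated scale does all the work here — precisely the point the text goes on to criticise.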
A more plausible view is to associate heads with coherent attributes of the kind advocated in the Government Phonology approach (henceforth GP: Kaye 1990, Kaye, Lowenstamm and Vergnaud 1990, Charette 1991, Harris 1994), in which the head of a phonotactic domain is required to be always present, tends to be more stable in its lexical identity, and enjoys greater distributional freedom; by contrast, a non-head may be absent and is likely to have a severely diminished capacity to support melodic oppositions. The following discussion is couched in the GP framework, as this approach offers the clearest explanation of the point at issue.

Like other dependency frameworks, GP requires all components in a well-formed representation to enter into head-dependant relations. Unlike other frameworks, however, GP also enforces two further constraints, Locality and Directionality, which ensure that head-dependency relations are strictly binary at any given level of representation; this is designed to prevent the proliferation of dependency structure that would otherwise cause overgeneralisation. Given this notion of dependency, GP configures syllabic structure by stipulating three constituents: nucleus, onset and rhyme, which are depicted in (1).

(1)

a. [nucleus: a head x governing an optional dependant x]
b. [onset: a head x governing an optional dependant x, the head itself licensed by the following nuclear head]
c. [rhyme: the nuclear head licensing a post-nuclear x, which is governed by a following onset head]
(tree diagrams not reproduced)

The ‘x’s represent phonological positions, which serve as anchors for melodic expressions and work as referential units to regulate their distribution. Note that the ‘syllable’ constituent is excluded from the above inventory on the grounds that no phonological phenomena crucially refer to syllables (Aoun 1979: 146). Melodic analyses need not make reference to the ‘syllable’ as melodies are exhaustively syllabified into the above three constituents. Prosodic phenomena concern the branching/non-branching state of the nucleus and/or rhyme but do not invoke the status of the syllable as a constituent. Consequently, the syllabic level of representation is conceived of as a sequence of alternating onsets and rhymes in GP.

Strictly speaking, the three ‘constituents’ are not independent representational entities but rather informal labels for a certain type of relation termed government. More specifically, they are characterised as having their dependants governed at the string-adjacent level, shown by the arrows below the positions in (1). The nucleus (1a) contains the ultimate head of the syllable domain (the position that always acts as a head) and an optional dependant governed by this head. The head of the onset (1b) also governs its optional dependant, but the head itself is licensed by the following nuclear head at the level of head projection. The head of the rhyme (1c), which is the maximal projection of the nuclear head — recall that the ‘syllable’ is denied any formal status in the framework — licenses the following position, but the latter must also be externally governed by a following onset head.

Although the above sketch of syllabic structure in GP is rough, it should suffice to illustrate the point at issue for the present discussion: it is the string-adjacent governing domains that give rise to distributional restrictions. The governed dependants in these domains, which do not act as heads and are thus characterised as persistent non-heads (Harris 1994: 168), are less capable of sustaining melodic oppositions.



The tradition in GP is to employ element-based representations in the analysis of melodic patterns, which include the disparity between governing heads and governed dependants in terms of their ability to express phonological oppositions. Melodic expressions are assumed to comprise monovalent phonological primes, which are individually identifiable in acoustic terms. It is claimed that the universal inventory contains fewer than ten elements, each of which may be either headed or non-headed, and that the way an element is phonetically interpreted differs according to its headed/non-headed status.³

A significant advantage of employing Element Theory in the analysis of distributional regularities is that the ability of a position to support oppositions is measurable quantitatively, and thus unambiguously, by referring to the number of elements licensed by that position. For any two positions entering into a dependency relation, therefore, the head merits a dominant status by virtue of being endowed with the ability to license a more complex (or, at least, no less complex) melodic expression than its dependant. Harris (1990, 1994, 1997) shows that this line of thinking is applicable not only to static distributional asymmetries but also to dynamic events such as lenition, proposing an integrated analysis of these phenomena under the notion of Licensing Inheritance.

Given that a phonotactic domain involves a governing relation at the string-adjacent level, and that melodic complexity serves as an indicator of a position’s ability to support phonological oppositions, it follows that an onset head should license a melodic expression which is no less complex than that of (i) its onset dependant or (ii) the rhymal dependant. The same relation should also hold between the head and the dependant positions in a nucleus.
And to the extent that this assumption has been borne out in a number of languages, the notion of government seems able to provide a coherent account of the environments in which the distributional regularities in question are observed.

However, a closer comparison of the three phonotactic domains reveals that the onset is rather different from the other domains — and for that matter, different from the other cases of governing domains. Many languages allow the dependant of the nucleus to interpret the same melodic content as that of the preceding head position, giving the structure of a long monophthong. The same is true for the coda-onset domain: the rhymal dependant is often sanctioned only when it shares its melodic content with the

³ The latest version of Element Theory put forward in Backley (2011) posits six elements, excluding elements such as |h| and |N| that appear in examples later. As this difference does not affect the discussion, the issue of the precise inventory is left untouched.



following onset head, giving the structure of a geminate. In these cases, the two positions in question are assumed not to license the same set of elements independently, but to share the same melodic expressions: for example, a coda-onset sequence -pp- is represented in (2a) below. (Both figures in (2) are from Harris 1997: 350.)

(2)

a. [-pp-: elements licensed by the onset head and shared by the rhymal dependant, which has no melodic material of its own]
b. [-mp-: the homorganic cluster, with partial element sharing]
(diagrams not reproduced)

This illustration is intended to make it clear that the onus is on the governing position to license the elements, the rhymal dependant possessing no melodic material of its own. In much the same way, a homorganic sequence such as -mp- is represented as in (2b). The example in (2a) may well be understood as an instance of maximally enhancing the effect of a governing relation by wholly suppressing the dependant’s ability to encode melodic contrasts. Proper government, which results in the target dependant receiving no phonetic interpretation, can be regarded as another such case. The type of governing effect described above, considering its frequent and widespread occurrence, should be rather unmarked.

By contrast, this state of affairs does not apply to the onset. Note that the onset is not entirely different in kind when compared with other phonotactic domains. It is similar to the nucleus in that it has a head which precedes its dependant. And it also parallels coda-onset sequences to the extent that it contains consonantal expressions and is headed by the same (onset head) position. In addition, the onset dependant is typical of dependant positions in its ability to support only a limited range of melodic oppositions, in much the same way as the dependants of the other domains are.

The above illustration of phonotactic domains has indicated that the uniqueness of the onset stems only from its avoidance of melodic identity via element sharing (the sharing relations indicated in (2)). In order to provide a basis for further discussion of this issue in §4, the following section examines the notion of well-formedness and develops an argument for redundancy-free representations.
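The quantitative notion of complexity that underwrites the comparisons in this section can be sketched directly; the element compositions assumed for /p/ and /l/ below are for illustration only, not an analysis endorsed by the chapter:

```python
# Sketch of the complexity comparison behind Licensing Inheritance:
# a head must license a melodic expression at least as complex (counted
# in elements) as that of its dependant. Element sets are assumed.
def complexity(expression):
    """Complexity = number of elements in the melodic expression."""
    return len(expression)

def licensing_ok(head, dependant):
    """The head's expression must be no less complex than the dependant's."""
    return complexity(head) >= complexity(dependant)

p = {"U", "h", "?"}   # assumed element composition of /p/
l = {"A", "?"}        # assumed element composition of /l/

assert licensing_ok(p, l)      # /pl/ onset: head outranks dependant
assert not licensing_ok(l, p)  # */lp/ onset fails the comparison
```

Counting elements makes the head/dependant asymmetry checkable without any appeal to a sonority hierarchy, which is exactly the advantage claimed for Element Theory here.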



3. Redundancy-free representations

The advent of the non-linear mode of representation was probably one of the most significant theoretical leaps in phonology. Prior to the non-linear age, theoretical thinking had been tied to the presupposition that a ‘string’ of speech sounds reflected a linear array of linguistic entities — phonemes, to use the terminology of the time. A breakthrough was made when such entities were found to be decomposable into smaller primes (Jakobson, Fant and Halle 1963). This development not only provided a more analytical descriptive device for taxonomic studies but also demonstrated that the choice of representation may have a significant impact on our insights into phonological phenomena. These primes, which at the time were still thought of as segment-sized entities, paved the way for the development of non-linear phonological structure.

The launch of non-linear representations was intended to circumvent the problems of the linear formalism, which had relied on rewrite rules and their extrinsic ordering (Chomsky and Halle 1968). The linear approach had ultimately proved itself incapable of effectively accounting for prosodic events; additionally, it displayed excessive descriptive power which jeopardised the adequacy of the analyses it had been designed to facilitate. On the other hand, a widely acknowledged contribution of the non-linear approach to our understanding of sound patterns comes from the way it shifted the explanatory burden on to representational well-formedness. This raised an awareness of the importance of theoretical restrictiveness, which led to the pursuit of more restrictive frameworks such as the principles-and-parameters approach of GP and the output-oriented approach of Optimality Theory (Prince and Smolensky 2002). However, the non-linear mode of representation seems to have sometimes been used without its restrictive nature being fully acknowledged and exploited.
Kahn (1976) presents an explicit argument for the restrictive nature of non-linear representation. He proposes that syllabicity should be given a status independent of other melodic properties. This concept is depicted in (3).

(3) [diagram not reproduced]

The syllable nodes (σ) are detached from the linear sequence of melodic units, and the melodic units are linked to the syllables by lines to indicate syllabification: the example word pony contains two syllables /pəʊn/ and /ni/, with ambisyllabic /n/ in the middle. The proposed mode of representation provides reference labels such as ‘syllable-initial’ and ‘syllable-final’, which help capture a number of generalisations relating to both static and dynamic melodic phenomena. Of interest to the present discussion is Kahn’s argument for the restrictiveness of non-linear representations.

Anderson and Jones (1974) presents a dependency version of non-linear structure, illustrated in (4a) using the same example word for comparison.

(4)

a. [dependency graph of pony, with consonants dependent on vowels]
b. [equivalent bracketing notation]
(diagrams not reproduced)

The relative height of the melodies indicates their status regarding dependency: consonants are licensed by vowels, and vowels assume a dependency relation according to their relative prominence. As this framework is based on a linguistic application of Graph Theory (Marcus 1967), in which the notion of well-formedness is defined as a set of conditions for a proper dependency tree, and claims that the dependency-based view offers better generalisations of phonological structure, Anderson and Jones seems to share more or less the same theoretical aim as Kahn. Nevertheless, Anderson and Jones also makes use of a bracketing notation such as the one in (4b), which essentially amounts to (4a): the brackets labelled σ1 and σ2 embrace /pəʊn/ and /ni/, respectively; the overlapping of the bracketed parts indicates the ambisyllabicity of /n/, and the numbers over the vowels show the dependency degrees, the highest being degree 0.

Despite the fact that (4a) and (4b) express the same information, Kahn criticises the use of brackets on the grounds that it may allow a ‘nonsensical’ configuration such as (5a), in which the discontiguous melodies /p/ and /i/ are shown to belong to the same syllable.

(5)

a. [bracketing that groups the discontiguous /p/ and /i/ into one syllable]
b. [corresponding graph, with crossing association lines]
(diagrams not reproduced)

Of particular importance at this point is the reasoning behind his claim. One may expect that (5a) is rejected because it would be translated into a non-linear representation such as (5b), which contains crossing lines. This is precisely the case in the dependency framework: Anderson and Jones (1974: 12) asserts that, in the context of the graph-theoretic interpretation of dependency structure, well-formed representations must comply with projectivity, which prescribes that lines are connected only at nodes. Goldsmith (1976: 48), another well-known work that establishes the basis of non-linear phonology, also argues that in a well-formed representation association lines cannot cross. In stark contrast to these reactions to (5b), Kahn claims that ‘even if [p] were associated with the final syllable, it would be interpreted as the INITIAL element of that syllable, due to the more constrained nature of the graphical representation’ (1976: 36–37). Accordingly, (5b) is not an illicit configuration but just a notational variant of /əʊnpi/. Put differently, (5b) and (6) are considered to be the same representation.

(6) [diagram not reproduced]

Provided with this understanding of the constrained nature of non-linear structures, the ill-formedness of (5a), for Kahn, stems from the fact that it cannot be expressed in the non-linear mode of representation. Although Kahn’s notion of the constrained nature of representations seems to have drawn little attention, this paper argues that it should in fact occupy a place at the heart of representational theory.

One important issue to be borne in mind is how precedence relations observed in the acoustic signal are encoded in cognitive representation. The tacit assumption behind Kahn’s claim is that precedence should be determined only with reference to syllables. For example, (5b) and (6) should share the following information: σ1 precedes σ2, σ1-initial is /əʊ/, σ1-final is /n/, σ2-initial is /p/, and σ2-final is /i/. Evidently, this information is sufficient for calculating the order of the melodic units as they appear when the word is phonetically interpreted. Therefore, the fact that /p/ appears to be located to the left of /əʊ/ and /n/ in (5b) has no bearing on the resulting realisation. Further, compare (6) with the following configuration.
(7) [diagram: the same syllables, but with /i/ as σ2-initial and /p/ as σ2-final — not reproduced]

The order of the melodies in the second syllable in (7) does make a difference because /i/ is σ2-initial and /p/ is σ2-final. As shown by these examples, melodies have no time dimension by themselves in a restrictive interpretation of non-linear representation.
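Kahn's point that linear order is computable from syllable-relative information alone can be sketched as follows; the dictionary-based representation of syllables is an assumption made for illustration (ambisyllabicity is left aside):

```python
# Sketch: precedence is not stored with the melodies themselves; it is
# recovered from (i) the order of syllables and (ii) each melody's
# syllable-relative slot. The slot labels used here are assumptions.
def linearize(syllables):
    """Derive the surface order of melodies from syllable order plus
    onset/nucleus/coda slot labels alone."""
    order = []
    for syl in syllables:
        for slot in ("onset", "nucleus", "coda"):
            order.extend(syl.get(slot, []))
    return order

# Kahn's (5b)/(6): σ1 = /əʊn/, σ2-initial /p/, σ2-final /i/.
# Any left-right placement of /p/ in the drawing is immaterial;
# only the syllable-relative labels determine the output /əʊnpi/.
word = [{"nucleus": ["əʊ"], "coda": ["n"]},
        {"onset": ["p"], "nucleus": ["i"]}]
assert linearize(word) == ["əʊ", "n", "p", "i"]
```

Swapping the σ2 slot labels, as in (7), changes the computed order — which is exactly the asymmetry the text describes.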



This claim should not be too surprising, considering that non-linear phonology has been developed by separating prosodic properties from melodic structure, as well as by affording autonomy to primes within melodic structure. Precedence is not an intrinsic property of melodic expressions but is determined in relational terms, so such information must naturally be carried by the prosodic level of representation. In fact, it would hardly be possible to show precisely how the order of melodies could be lexically specified independently of prosodic information without giving rise to redundancy.

The discussion so far has reiterated the metatheoretical assumption that a theory of representation should be formulated to exclude redundancy in order to be able to describe only attested phenomena. Although this much may seem to have been taken for granted, this section has recalled a point that is fundamental to the development of the non-linear formalism — namely, that the approach should ideally proceed without recourse to stipulative universal conditions. If a particular configuration is obtained by using legitimate representational components yet needs to be ruled out by extrinsic well-formedness constraints, this is very likely to be a sign of redundancy. Attempts to increase theoretical restrictiveness should be made by eliminating such redundancy, not by introducing yet another constraint.

Following the above line of argument, there are essentially two approaches to redundancy in a representational theory. One is to examine the appropriateness of representational components postulated in the theory.
A component may be reduced to smaller entities, one such case being the breakdown of the phoneme into elements mentioned above; a representational component may be integrated with others, examples of which can be found in the development of Element Theory (Nasukawa 2005, Backley 2011); or a component may be subsumed under some other notion or simply eliminated as redundant (see Takahashi 1993b, 2004, for example). The other approach to redundancy is to advance the understanding, or to revise the definition, of components and/or theoretical assumptions as regards representations.

As mentioned in the previous section, GP assumes that dependency relations in syllabic structure are maximally binary due to the constraints of Locality and Directionality; any ternary dependency inevitably violates either of these constraints, as illustrated below.

(8)

a. b. [licit binary dependency configurations]
c. * d. * e. * [illicit ternary dependency configurations]
(diagrams not reproduced)



However, Takahashi (2004) claims that these constraints should be dispensed with by advancing the definitions of dependency relations as follows:

a. b.

a. In endocentric dependency, the phonetic interpretation of the head strictly and immediately precedes that of the dependant.

In these terms, the relations holding within a constituent in (1) are endocentric, while those extended over to a different constituent are exocentric. The binarity restriction is now the corollary of the above definitions. If the ternary relations in (8c,d,e) are all endocentric, the two dependants are expected to follow their head immediately in interpretation, which of course is not possible as they contradict each other. Note that this is not simply a rhetorical reformulation of essentially the same theoretical argument. One may have to describe the objects s/he sees in a photograph in a certain order, but this does not mean that the human recognition works in that way. Likewise, phonetic interpretation is subject to linearity restrictions but phonological representation is not necessarily — or should not be — so. Bearing in mind the assumption entertained in this section, the following section returns to the discussion of the identity avoidance in the onset. 4. Onset as a categorically unary component The three phonotactic domains, the nucleus, the onset, and the coda-onset sequence, are defined as governing domains, as illustrated in (1). These string-adjacent governing relations generally conform to Licensing Inheritance, which gives rise to distributional asymmetries. Of concern among these domains is the onset in which, unlike in the other domains, positions are prohibited from sharing melodic properties. Recall that sharing is assumed to be the case in which only the head position has a melodic content the interpretation of which is carried over to the otherwise (partially or totally) empty dependant position, as illustrated below (e represents an element).

Identity avoidance in the onset

(10)

a. * [onset head sharing its element with the onset dependant — unattested]

b. [nuclear head sharing its element with the nuclear dependant — a long vowel]


c. [onset head sharing its element with the preceding rhymal dependant — a geminate]
(diagrams not reproduced)

The absence of (10a) has been noted (Harris 1994: 171), but no theoretical argument for this state of affairs seems to have been put forward in GP. With respect to the ill-formedness of (10a), it is instructive that a governing relation involving the same configuration has come under scrutiny elsewhere. The rhymal dependant is governed by the immediately following onset head as shown in (10c), but the onset dependant and the nuclear dependant are never governed by an external head. Charette (1989) proposes that a dependant licensed by the immediate projection of its head is protected from an external governor, drawing on an analogous case in syntax (Chomsky 1986: 42). Takahashi (1993b) assumes that the nuclear dependant can in fact be governed by the following onset head, which only results in a configuration equivalent to (10c), dispensing with the rhyme-nucleus distinction; as for the onset dependant, on the other hand, he argues that it is prevented from being governed by the nuclear head, since this would violate a general graph-theoretic constraint that a dependency structure cannot contain a circuit (Anderson 1986: 75).

Note that the focus of the above argument is quite reminiscent of the present discussion. That is, we may well regard the ill-formedness of (10a) as stemming from the defectiveness of the onset dependant. Now that the focus of the present discussion is so narrowed down, one may be tempted to coin a single condition to formalise the unique status of the onset dependant. However, this paper argues that the onset dependant is a source of representational redundancy and should be dispensed with from syllabic structure by making the most of the constrained nature of representation, along the lines of thinking reviewed in the previous section. The onset is thus refined as a categorically unary component, and the apparent avoidance of identity in the onset is due to the non-existence of a target object to identify with.
As mentioned at the beginning of this paper, syllabic structure is expected to bear an explanatory burden both in prosody and in melody, and this dual function of syllabic structure must be retained intact without the onset dependant. The prosodic distinction between the unary onset and the binary onset has long been criticised (Clements and Keyser 1983: 14), as there does not seem to be compelling evidence for such a distinction. The mora-based


Toyomi Takahashi

syllable theory rejects the presence of the onset constituent altogether (Hyman 1985, McCarthy and Prince 1986, Hayes 1989), with premoraic melodies directly attached to the syllable node. Similarly, Takahashi (2004) points out that the onset is rather weakly motivated as a prosodic constituent in GP and proposes a mode of representation in which syllabic structure is defined in terms of dependency relations without constituent nodes. However, there is one case in which the presence or absence of the onset dependant matters. The Empty Category Principle prescribes that empty nuclei (nuclei without melodic content) receive no phonetic interpretation if they are properly governed at the level of nuclear projection or parametrically licensed at the domain-final position (Kaye 1990: 314). A nucleus is properly governed by the following nucleus if the latter itself is not properly governed and no governing domain (a complex onset or a coda-onset sequence) intervenes between them. Charette (1991) presents an illustrative example involving a complex onset. The French word ennemi /ɛ.n@.mi/ ‘enemy’ contains an empty nucleus (@ and dots represent empty nuclei and syllable boundaries, respectively); this empty nucleus is properly governed and is not interpreted phonetically, so the word is pronounced [ɛnmi]. Another word, secret /s@.krɛ/ ‘secret’, also contains an empty nucleus, but it cannot be properly governed because it is separated from the following nucleus by a complex onset /kr/; it is thus phonetically interpreted as [sə.krɛ]. Although the analysis based on the notions of empty category and proper government provides an elegant account of vowel syncope, there does not seem to be any other event that necessarily refers to the onset dependant; furthermore, an alternative analysis has been put forward in another GP-based framework (Scheer 2004).
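The proper-government logic just described lends itself to a small procedural sketch. The following toy function is my illustration, not part of the chapter; the function name and the pair encoding of syllables are invented for exposition, and only complex onsets (not coda-onset sequences) are checked as intervening governing domains. It decides whether an empty nucleus stays silent or is interpreted (here, as schwa):

```python
# Toy sketch of proper government and the Empty Category Principle:
# an empty nucleus ('@') is silent if properly governed by the following
# nucleus, where the governor must itself not be properly governed and
# no governing domain (here, a complex onset) may intervene.

def interpret_nuclei(syllables):
    """syllables: list of (onset, nucleus) pairs; '@' marks an empty nucleus.
    Returns the realized nucleus for each syllable."""
    n = len(syllables)
    governed = [False] * n
    # Scan right to left: a nucleus may properly govern the nucleus to its left.
    for i in range(n - 1, 0, -1):
        onset, nucleus = syllables[i]
        blocking_cluster = len(onset) > 1  # complex onset blocks government
        governor_is_silenced = (nucleus == "@" and governed[i])
        if not governor_is_silenced and not blocking_cluster:
            governed[i - 1] = True
    return ["" if nuc == "@" and governed[i] else ("ə" if nuc == "@" else nuc)
            for i, (ons, nuc) in enumerate(syllables)]

# French ennemi: the empty nucleus is properly governed, hence silent -> [ɛnmi]
print(interpret_nuclei([("", "ɛ"), ("n", "@"), ("m", "i")]))  # ['ɛ', '', 'i']
# French secret: the complex onset /kr/ blocks government -> schwa surfaces
print(interpret_nuclei([("s", "@"), ("kr", "ɛ")]))            # ['ə', 'ɛ']
```

The two printed cases reproduce the [ɛnmi] versus [sə.krɛ] contrast that the text attributes to Charette (1991).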
Consequently, the lack of the binary onset does not affect the prosodic analyses of the theory. The elimination of the binary onset may seem more likely to be problematic for the melodic analyses than for the prosodic ones, as a number of consonant clusters have been analysed as complex onsets. To take a simple example, a question may well arise as to how a distinction is to be made between /p-/ and /pr-/, which have been analysed as a unary onset and a binary onset, respectively. In this respect, the present paper claims that the notion of ‘cluster’ — and also the notion of ‘contour’ for that matter — needs to be clarified under the assumption made in the previous section that melodies have no time dimension. The notion of melodic ‘cluster’ generally refers to a configuration containing a sequence of two or more melodic units, each of which is identifiable as an independent single object at a level of representation. For example, given a pair of words that begin with /pa-/ and /ra-/, respectively, that language is assumed to have melodic units /p/ and /r/ in its inventory, and /pra-/ is analysed as containing a cluster of such melodic units, /pr-/. This way of thinking is a legacy of taxonomic phonemics implausibly carried over to non-linear phonology. The problem with this conception lies in the tacit assumption that melodic units are always accompanied by some phoneme-sized quantitative attribute, which gives rise to a sense that, if melodic units /α/ and /β/ are one thing each, it follows that /αβ/ is a sequence of two things. As mentioned above, this sense matches the reality in the nucleus and in the coda-onset sequence, but not in the onset. The notion of ‘contour’ seems to improve the understanding of the nature of apparent ‘clusters’ in the onset. It refers to the state of two melodic units (11a,b) amalgamating into a single entity (11c).

(11) a. b. c. d. [melodic-structure diagrams not reproduced in this version]
Affricates have long been regarded as ‘contour’ segments even in taxonomic phonemic studies, and Selkirk (1984) applies the notion to syllable-initial sC- sequences. The notion of ‘contour’, nevertheless, indicates that the two members are in a precedence relation. For example, (11c) is distinguished from (11d), but the latter seems to be ruled out in any language.4 The problem with ‘contour’ as well as ‘cluster’ seems to boil down to the involvement of the time dimension in supposedly cognitive representation. In the context of a version of Element Theory, this paper assumes that melodic structure may contain dependency relations among elements but does not specify their precedence relations. Earlier versions of Element Theory sometimes show the kind of ‘contour’-like representation illustrated in (12a), which here is meant to represent an affricate /tr-/; roughly speaking, |ʔ|, |H| and |A| indicate intensity suppression, turbulent noise and coronal resonance, respectively (see Backley 2011 for details). Takahashi (1993a) argues that the lack of systematic oppositions such as (12b,c) indicates that they all amount to (12d), claiming that precedence should only be encoded in prosodic terms.

4. Duanmu (2009) proposes a syllable theory that posits CVX as a syllabic template, extending the ‘contour’ notion so as to be applicable to any syllable-initial ‘clusters’. This proposal is couched in articulatory-oriented Distinctive Feature theory, and the members of the initial ‘clusters’, or complex sounds, are assumed to be merged, with their articulatory gestures overlapping, without holding any precedence relation.

(12) a. b. c. d. e. f. [element-structure diagrams not reproduced in this version]

In much the same way, (12e), which represents /pr-/ (|U| indicates labial resonance) with a quasi-phonetic ordering of the elements within a unary onset, should be regarded as carrying no more phonologically relevant information than (12f). Regarding the proposed mode of representation, a question may arise as to how phonology can map the phonetic interpretations of elements. In order to answer this question, let us consider /p-/ appearing as a unary onset. This consonant contains the same elements as those attached to the left branch in (12e): |U H ʔ|. This expression itself provides no precedence information: this much is the same regardless of whether the representational theory admits the onset dependant or not. Still, the interpretations of the three elements do not coincide but are distributed over the whole interpretation of the sound in question: the interpretation of a plosive is centred at the zero intensity of |ʔ|; if there is a preceding vowel, the interpretation of |ʔ| is preceded by a resonant glide showing formant transitions characteristic of a labial sound; and, if there is a following sound, the interpretation of |ʔ| is typically followed by the bursting noise of |H| and another resonant glide. This orchestration is obviously not detailed in phonological representation, but is performed automatically whenever the particular configuration is identified in the representation. By the same token, (12f) ought to be expected to receive phonetic interpretation. It is thus beyond the responsibility of phonology to compute how elements in a single position are to be ordered or overlapped in their mapping onto the acoustic signal.

5. Conclusion

This paper has proposed the refinement of the onset as a categorically unary component. It first demonstrated, introducing the theoretical framework of GP, that the onset excludes partial or total geminates, differing from the other phonotactic domains.
In order to provide a basis for a subsequent discussion, the paper examined the Kahnian notion of the constrained nature of graphical representation and argued that a theory of representation


should pursue a higher degree of restrictiveness not by introducing well-formedness constraints but by eliminating representational redundancy. The above proposal was put forward in this spirit, and some of its consequences for prosodic and melodic analyses were discussed. The proposal of this paper is as yet far from complete. It goes without saying that further detailed and more empirical discussion is needed to consolidate its plausibility. Of particular importance should be the examination of its validity in the light of the latest version of Element Theory, focusing on the way that the elements assembled into a unary onset position manifest themselves on the acoustic signal. The elimination of the binary onset entails that all the possible onset ‘clusters’ need to be configured under the single, unary onset. Backley (2011) postulates six elements, each of which may be either headed or non-headed. Their logical combination provides a sufficient number of patterns, but it must be investigated whether their orchestration works as this paper proposed.

Acknowledgements

I am more than deeply indebted to Kuniya Nasukawa for his encouragement, advice, patience and friendship. My thanks also go to Phillip Backley for his advice on an earlier draft and to anonymous reviewers for their valuable comments and suggestions. This work was supported by JSPS KAKENHI Grant number 25370442.

References

Ahn, Sang-Cheol, and Gregory K. Iverson 2004 Dimensions in Korean laryngeal phonology. Journal of East Asian Linguistics 13: 345–379.
Anderson, John M. 1986 Suprasegmental dependencies. In Dependency and Non-linear Phonology, Jacques Durand (ed.), 55–134. London: Croom Helm.
Anderson, John M., and Charles Jones 1974 Three theses concerning phonological representations. Journal of Linguistics 10: 1–26.
Aoun, Youssef 1979 Is the syllable or the supersyllable a constituent? MIT Working Papers in Linguistics 1: 140–148.


Backley, Phillip 2011 An Introduction to Element Theory. Edinburgh: Edinburgh University Press.
Charette, Monik 1989 The minimality condition in phonology. Journal of Linguistics 25: 159–187.
Charette, Monik 1991 Conditions on Phonological Government. Cambridge: Cambridge University Press.
Chomsky, Noam 1986 Barriers. Cambridge, MA: MIT Press.
Chomsky, Noam, and Morris Halle 1968 The Sound Pattern of English. Cambridge, MA: MIT Press.
Clements, George N. 1990 The role of the sonority cycle in core syllabification. In Papers in Laboratory Phonology I: Between the Grammar and Physics of Speech, John Kingston and Mary E. Beckman (eds.), 283–333. Cambridge: Cambridge University Press.
Clements, George N., and Samuel J. Keyser 1983 CV Phonology: A Generative Theory of the Syllable. Cambridge, MA: MIT Press.
Davis, Stuart 1988 Syllable onsets as a factor in stress rules. Phonology 5: 1–20.
Downing, Laura J. 1998 On the prosodic misalignment of onsetless syllables. Natural Language and Linguistic Theory 16: 1–52.
Duanmu, San 2009 Syllable Structure: The Limits of Variation. Oxford: Oxford University Press.
Fudge, Erik 1969 Syllables. Journal of Linguistics 5: 253–286.
Goedemans, Rob 1996 An optimality account of onset-sensitive stress in quantity-insensitive languages. The Linguistic Review 13: 33–48.
Goldsmith, John A. 1976 Autosegmental phonology. Ph.D. dissertation, Massachusetts Institute of Technology. Published 1979, New York: Garland.
Harris, John 1990 Segmental complexity and phonological government. Phonology 7: 255–300.
Harris, John 1994 English Sound Structure. Oxford: Blackwell.
Harris, John 1997 Licensing inheritance. Phonology 14: 315–370.
Harris, John 2006 The phonology of being understood: further arguments against sonority. Lingua 116: 1483–1494.


Harris, John, and Geoff Lindsey 1995 The elements of phonological representation. In Frontiers of Phonology: Atoms, Structures, Derivations, Jacques Durand and Francis Katamba (eds.), 34–79. London: Longman.
Hayes, Bruce 1989 Compensatory lengthening in moraic phonology. Linguistic Inquiry 20: 253–306.
Hyman, Larry M. 1985 A Theory of Phonological Weight. Dordrecht: Foris Publications.
Jakobson, Roman 1962 Selected Writings I. The Hague: Mouton.
Jakobson, Roman, Gunnar Fant, and Morris Halle 1963 Preliminaries to Speech Analysis: The Distinctive Features and Their Correlates. Cambridge, MA: MIT Press.
Kahn, Daniel 1976 Syllable-based generalizations in English phonology. Ph.D. dissertation, Massachusetts Institute of Technology.
Kaye, Jonathan D. 1990 ‘Coda’ licensing. Phonology 7: 301–330.
Kaye, Jonathan D., and Jean Lowenstamm 1981 Syllable structure and markedness theory. In Theory of Markedness in Generative Grammar: Proceedings of the 1979 GLOW Conference, Adriana Belletti, Luciana Brandi and Luigi Rizzi (eds.), 287–315. Pisa: Scuola Normale Superiore di Pisa.
Kaye, Jonathan D., Jean Lowenstamm, and Jean-Roger Vergnaud 1985 The internal structure of phonological elements: a theory of charm and government. Phonology Yearbook 2: 305–328.
Kaye, Jonathan D., Jean Lowenstamm, and Jean-Roger Vergnaud 1990 Constituent structure and government in phonology. Phonology 7: 193–232.
Kiparsky, Paul 1981 Remarks on the metrical structure of the syllable. In Phonologica 1980: Akten der Vierten Internationalen Phonologie-Tagung, Wien, 29. Juni–2. Juli 1980, Wolfgang U. Dressler, Oskar E. Pfeiffer and John R. Rennison (eds.), 245–246.
Levin, Juliette 1985 A metrical theory of syllabicity. Ph.D. dissertation, Massachusetts Institute of Technology.
Marcus, Solomon 1967 Algebraic Linguistics: Analytical Models. New York/London: Academic Press.
McCarthy, John J., and Alan S. Prince 1986 Prosodic morphology. Ms., University of Massachusetts, Amherst and Brandeis University.


Nasukawa, Kuniya 2005 A Unified Approach to Nasality and Voicing. Berlin/New York: Mouton de Gruyter.
Prince, Alan S., and Paul Smolensky 2002 Optimality Theory: Constraint Interaction in Generative Grammar (ROA version). Ms., Rutgers University and The Johns Hopkins University.
Schane, Sanford S. 1984 The fundamentals of particle phonology. Phonology Yearbook 1: 129–155.
Scheer, Tobias 2004 A Lateral Theory of Phonology: What Is CVCV and Why Should It Be? Berlin/New York: Mouton de Gruyter.
Selkirk, Elisabeth O. 1984 On the major class features and syllable theory. In Language Sound Structure, Mark Aronoff and Richard T. Oehrle (eds.), 107–126. Cambridge, MA: MIT Press.
Steriade, Donca 1982 Greek prosodies and the nature of syllabification. Ph.D. dissertation, Massachusetts Institute of Technology.
Takahashi, Toyomi 1993a ‘Contour’ in melodic structure. London.
Takahashi, Toyomi 1993b Farewell to constituency. UCL Working Papers in Linguistics 5: 375–410.
Takahashi, Toyomi 1994 Constraint interaction in Aranda stress. UCL Working Papers in Linguistics 6: 479–507.
Takahashi, Toyomi 2004 Syllable theory without syllables. Ph.D. dissertation, University College London, University of London.
Zec, Draga 1988 Sonority constraints on prosodic structure. Ph.D. dissertation, Stanford University.

Part II Morpho-Syntax

Unifying minimality and the OCP: Local anti-identity as economy

M. Rita Manzini

1. Minimality, the OCP and their repairs

The notion of identity represents an obvious link between (Relativized) Minimality in the syntax (Rizzi 1990, Chomsky 1995) and the OCP (Obligatory Contour Principle, Leben 1973) at PF. In Rizzi’s (1990) formulation, Minimality prevents category X from moving to position Y in (1) across another category, say X’, which is identical to X in relevant respects. In Chomsky’s (1995) formulation, the target position Y acts as a probe which attracts the closest goal with the relevant properties, hence X’ rather than X in (1).

(1)

(Relativized) Minimality *[Y ... [X’ ... [X

In the original formulation by Leben (1973), the OCP aims at tone phenomena and blocks two identical tones, say (2), from being adjacent on the relevant autosegmental tier. In later phonological work, the OCP is generalized from the tone tier to autosegmental tiers in general, as in the following formulation by Archangeli and Pulleyblank (1994): “A sequence of identical elements within a tier is prohibited”. (2)

OCP *X X

One difference between (1) and (2) is that (1) involves the syntactic notion of movement, while (2) doesn’t. Since movement is a notion defined in syntax and not in phonology, one may wonder whether there is a single underlying local anti-identity condition in grammar interacting differently with the different internal structures of syntax and phonology. One may even wonder whether this is an example of a general cognitive constraint recruited by the Language Faculty (Hauser, Chomsky and Fitch 2002, cf. Yip this volume).


Another difference between (1) and (2) is that the result of an OCP violation is generally a repair process; for instance, in Leben (1973), the result of two H(igh) tones being adjacent is a downstep, i.e. the second tone being repaired to L(ow). By contrast, the result of violating Minimality is generally deemed to be ungrammaticality, as seen in the blocking of wh-movement across wh-phrases or other operators. This difference could once more be factored away as depending on properties of the syntactic and phonological components respectively. In fact, repair is not an admissible notion in minimalist syntax, which is based on a deterministic, no-backtracking conception of the derivation. In this work we will concentrate on a sample of phenomena which blur the tentative distinction drawn so far. The mutual exclusion between two l-clitics in Spanish (3) has been modelled by a morphological version of the OCP (Harris 1994, Grimshaw 1997, Nevins 2007 among others). On the basis of the discussion that precedes, we expect that the double-l clitic constraint of Romance admits of repairs, as it does; see for instance the Spurious se of Spanish in (3) (§2.1 below).

(3)

Marìa *le/se lo mandó
Maria to.him/SE it sent
‘Maria sent it to him’
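The constraint-plus-repair pattern at issue here can be stated informally as a procedure. The sketch below is my illustration, not the authors’ formalism; the function names and the flat-list encoding of the tier are invented for exposition. It implements the OCP of (2) and a Leben-style downstep, in which the second of two adjacent H tones is realized as L:

```python
# Toy statement of the OCP ("a sequence of identical elements within a tier
# is prohibited") together with a downstep repair of adjacent H tones.

def ocp_violations(tier):
    """Return the indices at which two adjacent tier elements are identical."""
    return [i for i in range(len(tier) - 1) if tier[i] == tier[i + 1]]

def downstep_repair(tone_tier):
    """Lower the second of two adjacent H(igh) tones to L(ow)."""
    repaired = list(tone_tier)
    for i in range(1, len(repaired)):
        if repaired[i - 1] == "H" and repaired[i] == "H":
            repaired[i] = "L"
    return repaired

print(ocp_violations(["H", "H"]))    # [0]: an OCP violation
print(downstep_repair(["H", "H"]))   # ['H', 'L']: the violation is repaired
```

The contrast with Minimality discussed above is visible in this format: the OCP pairs an offending configuration with a repair, whereas a Minimality violation is standardly taken to yield plain ungrammaticality.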

The Person Case Constraint, i.e. the mutual exclusion between a 1st/2nd person clitic and a dative clitic, as in Italian (4), is however modelled in current literature by Move/Agree and Minimality (Bianchi 2005; Anagnostopoulou 2005, 2008; Rezac 2008 among others). The notion of repair must then be imported into Minimality-based, syntactic accounts, since the PCC admits of repairs, for instance the ‘locative’ repair in (4) from Rezac (2006) (see §2.1 below). (4)

*Mi gli prendono (come segretaria)
me to.him they.take (as secretary)
‘They take me as his secretary’

Similarly, the mutual exclusion between negation and imperatives, as in Italian (5), has been modelled in terms of Minimality applying to movement of the imperative verb to C across the negation (Zanuttini 1997 on Italian). Yet the result is not ineffability, but a repair through suppletion by the infinitive (see §2.2 below).

(5) Non *da-/darglie-lo!
    not give/to.give to.him it
    ‘Don’t give it to him!’

Since both cliticization and imperatives involve head-movement, one may consider adopting the proposal by Chomsky (2001) that head-movement should be banished from core syntax and head (re)ordering should be treated as a PF phenomenon. However, no matter how problematic head-movement is in the syntax, its treatment as a PF-rule (specifically as Morphological Merger) is even more problematic (Manzini and Savoia 2011b, Kayne 2010 vs. Halle and Marantz 1994, Harris and Halle 2005; cf. also Roberts 2010). Another complication is that some mutual exclusion phenomena, with the same surface appearance as those considered so far, are dealt with by the literature neither at the PF interface nor in the computational component, but at the LF interface. A case in point is Negative Concord (see §2.3 below). In many approaches, lack of Negative Concord, i.e. mutual exclusion between two negations, as in English (6a), is imputed directly to the semantic content of the lexical items involved (Zeijlstra 2004; cf. Déprez 1999 on Romance). Because the LF interface is involved, rather than the PF interface, repair does not even get a mention. Rather, avoidance of ineffability in, say, English by insertion of negative polarity items of the any series, as in (6b), is treated as a straight alternative lexicalization — as is the omission of not in (6c).

(6)

a. ≠I don’t like nothing
b. I don’t like anything
c. I like nothing

In §3–§5 we propose a unified treatment of double-l, Negative Concord and negative imperatives (for which we provide more data in §2). We argue that all of them are syntactic in nature. In order to avoid repairs and the global mechanisms they imply (backtracking), we further propose that these phenomena do not involve the violation of any constraint. Rather, in some languages a single lexicalization of property P per domain D suffices, and P cannot therefore be iterated in D under Economy. Descriptive repairs do not represent the undoing of a violation — rather they are simply alternative lexicalizations, licensed by the same property P and domain D that do not admit of doubling.


If the notion of repair can be avoided in all instances of local anti-identity, an important obstacle in the way of the unification of Minimality and the (morphosyntactic) OCP is removed, given that backtracking (hence repair) is banned by the minimalist model (Chomsky 1995) and has no part in the application of Minimality. The other relevant conclusion that emerges from the range of data that we consider is that there is no obvious division of labor between Minimality in (1) and the OCP in (2) along the lines of the classical divide between PF (qua morphology) and syntax/LF. Local anti-identity applies to phenomena traditionally categorized as morphological (3), syntactic (5) and semantic (6) in exactly the same way, suggesting to us that these are all to be dealt with in the syntax/LF. Some aspects of our discussion find a parallel in recent works with similar empirical concerns, despite differences in theoretical outlook.1 Van Riemsdijk (2008) argues that Swiss German relative wo is subject to haplology — yielding an instance of the descriptive Doubly Filled Comp (DFC) filter. He goes on to suggest that

the DFC is essentially a syntactic reflex of the Obligatory Contour Principle (OCP)… Identity Avoidance, or *XX … covers both Haplology and the DFC effect on Swiss German relative clauses. But clearly, Haplology applies under strict phonological identity, while the DFC appears to be primarily sensitive to certain syntactic-semantic features such as operatorhood. This, I believe, is what one would expect if Identity Avoidance is a general principle of biological organization: its effect can be detected at both interfaces, PF and LF … Another area of syntax that might be re-examined in the light of *XX is relativized minimality (see Rizzi 199[0]). What the term relativized refers to in fact is the relative identity of both the element engaged in a dependency relation and the intervening element...
And, in a graphic interpretation of how such a movement takes place, there is a virtual intermediate stage at which the two elements in question are also adjacent… (Van Riemsdijk 2008: 241–243)

Van Riemsdijk goes on to consider Optimality Theory (for instance Grimshaw 1997 on Romance clitics) as a possible framework for the treatment of the parametrized nature of *XX constraints and, we may add, of their repairs. If the present contribution is on the right track, the relevant range of data is compatible with a minimalist organization of the interfaces — and it does not require powerful models of variation such as OT, or even Distributed Morphology, with its costly recourse to Late Insertion.

1 Thanks to an anonymous reviewer for raising this point.


As Van Riemsdijk comments, “the idea that the OCP is active in syntax is not new”. In the review of syntactic haplology by Neeleman and Van de Koot (2005), the generative bibliography starts with Perlmutter (1971), who discusses relevant phenomena in the Romance clitic string, including the Spurious se in (3). According to Neeleman and Van de Koot the environments in which [syntactic haplology processes] are triggered are characterized in both morpho-syntactic and phonological terms... This state of affairs raises the question what exactly triggers haplology in this case, repetition of phonological forms or of syntactic features… one would expect to find cases in which deletion or suppletion is triggered by syntactic features even though the morphemes affected are not phonologically identical in isolation (Neeleman and Van de Koot 2006: §3.1)

Possible theoretical treatments envisaged by Neeleman and Van de Koot for this complex of phenomena include once again Distributed Morphology (“a conspiracy … of syntax and morphology”, e.g. Bonet 1995) and OT (Grimshaw 1997). The present thesis is that neither is necessary — largely because the notion of repair is not. This leads us to Richards (2010), who also aims to provide a minimalist account of what he terms Distinctness phenomena. Richards’ basic idea is that a linearization statement (α, β) is only interpretable if α and β are distinct from each other… any phase in which two DPs, for example, must be linearized with respect to each other yields a linearization statement (DP, DP), which causes the derivation to crash (Richards 2010: §1)

Since Richards adopts Kayne’s (1994) LCA, a linearization statement (α, β) requires c-command between α and β — as well as a position internal to the same phase.2 Vice versa, “linear adjacency is not only not sufficient, but also not necessary to trigger Distinctness effects” (ibid: §2.2.2). As for repairs,

methods of avoiding Distinctness violations come in four main groups. First… Distinctness violations are avoided by adding extra structure... Second… Distinctness violations are avoided by removing offending structure. Third, we will review some cases in which operations that would create Distinctness violations are blocked. And finally, we will see examples in which movement

2 Grohmann (2011) has a different idea, namely that what he calls syntactic OCP effects fall under his anti-locality theory.


breaks up potential Distinctness violations, moving offending nodes further apart (Richards 2010: §2.4)

With respect to this last case, Richards formalizes a principle of Derivational Distinctness, as follows: Given a choice between operations, prefer the operation (if any) that causes a Distinctness violation to appear as briefly as possible in the derivation … It seems reasonable to hope that [Derivational Distinctness] and Shortest Attract could be made to follow from a single overarching constraint (Richards 2010: §2.4.4.2.3)

Apart from yet another clear statement as to the overlapping between Distinctness and Shortest Attract (in present terms Minimality), the discussion of Derivational Distinctness implies a look-ahead, since at some point in a derivation one may need to look ahead to future steps in order to make the optimal choice. This seems to be compounded by backtracking at least in the case of added structure (cf. fn. 3). If the view of the relevant phenomena that we take here is correct, look-ahead (like backtracking) is unnecessary. For the rest, it will be noted that, like Richards, we also refer to scope (or c-command) as a crucial prerequisite for the relevant effects to hold. As for the key concept of locality, the discussion that follows, though not focussed on it, implies that no rigidly defined domain is involved (i.e. phase, but see fn. 2). Rather, relevant domains are relativized much in the sense of Rizzi (1990), coinciding with the scope of the various operators involved (namely D for the clitic string, Neg, Jussive, cf. also fn. 9).

2. The data

2.1. Double-l (and the PCC)

In many Romance varieties l-clitic pronouns normally combine, as seen with a dative and an accusative l-clitic in Italian (7).

(7)

Glielo dà
to.him-it he.gives
‘He gives it to him’

However, there are Romance languages where two l-forms cannot co-occur. The best-known case of mutual exclusion between two l-clitics is


between datives and accusatives in Spanish, as in (3), repeated here in (8b). While the dative clitic le does not surface, a different form of the clitic paradigm, namely se, apparently takes its place, as in (8c) — yielding a descriptive repair via suppletion (the Spurious se).

(8) a. Marìa le mandó un libro
       Maria to-him/her sent a book
       ‘Maria sent a book to him/her’
    b. *Marìa le lo mandó
       Maria to-him/her it sent
       ‘Maria sent it to him/her’
    c. Marìa se lo mandó
       Maria SE it sent
       ‘Maria sent it to him/her’

The double-l constraint need not be repaired by suppletion, but can also result in simple mutual exclusion. In (9) and (10), we provide examples from two Italian varieties (from Abruzzi and Lucania respectively) showing that the 3rd person dative-accusative clusters can be simplified either to dative, as in (9), or to accusative, as in (10).

(9) a. lu/la/li/le camo
       him/her/them-m./them-f. I.call
       ‘I call him/her/them’
    b. li a kkweʃto
       to him he.gives this
       ‘I give this to him’
    c. issu li a
       he to.him gives
       ‘He gives it to him’
    (Mascioni)

(10) a. lu/la/lə/li vidənə
        him/her/them-m./them-f. they.see
        ‘They see him/her/them’
     b. li Ca:nə (a) kwistə
        to.him they.give (to) this
        ‘They give this to him’
     c. lu/la/lə Ca:nə
        it.m/it.f/them they.give
        ‘They give it to him’
     (Aliano)


Nevins (2007) proposes that the mutual exclusion between two l-clitics is due to a morphological dissimilation rule, namely “Delete/alter the features corresponding to 3rd person on a dative when it precedes another 3rd person”. In other words, “the presence of two identical adjacent person feature specifications is illicit”, as in (11) — essentially a morphological version of the OCP.

(11)

[Cl[-participant] [Cl[-participant]

The Spurious se repair can be derived by the same Distributed Morphology machinery of Impoverishment and Late Insertion implied by Nevins’s dissimilation. Specifically, Halle and Marantz (1994: 283) suggest that a rule of Impoverishment deletes the feature [Dative] on a clitic, when it is in the same cluster as an accusative clitic. The only clitic of the Spanish lexicon which can be inserted under the impoverished node after Impoverishment is se, as it lacks Case features altogether. On the contrary, le could no longer be inserted because of its specification for dative, nor could the accusative clitic. The sequence that results will be se lo in (8c). However, since Vocabulary Insertion must follow Impoverishment, minimalist Inclusiveness, i.e. projection from the lexicon, is violated at least in its strongest form, since there are lexical properties that are fixed only at the end of the derivation. A different reason to suspect that not all is in order comes from comparison with another mutual exclusion phenomenon in the clitic domain, i.e. the Person Case Constraint (PCC: Bonet 1994). In its strong version, the PCC involves mutual exclusion between a dative and a 1st/2nd person accusative, including (4) above, repeated here for ease of reference. (4)

*Mi gli prendono (come segretaria)
me to.him they.take (as secretary)
‘They take me as his secretary’
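Before turning to the Minimality-based account of the PCC, the Impoverishment-plus-Late-Insertion derivation of the Spurious se described above can be made concrete in a toy sketch. This is my illustration, not Halle and Marantz’s formalism; the feature inventory is drastically simplified, and the vocabulary list and function names are invented for exposition:

```python
# Toy Distributed Morphology derivation of Spanish "se lo": Impoverishment
# deletes [dative] on a clitic clustered with an accusative, and Vocabulary
# Insertion then picks the most specified item whose features are a subset
# of the node's features. se, lacking Case features, is the only candidate
# left for the impoverished node; le is blocked by its dative specification.

SPANISH_VOCABULARY = [
    ("le", {"person": 3, "case": "dat"}),
    ("lo", {"person": 3, "case": "acc"}),
    ("se", {}),  # unspecified for Case (simplified: no features at all)
]

def impoverish(cluster):
    """Delete 'case' on a dative clitic that co-occurs with an accusative."""
    cases = [m.get("case") for m in cluster]
    out = []
    for m in cluster:
        m = dict(m)
        if m.get("case") == "dat" and "acc" in cases:
            del m["case"]
        out.append(m)
    return out

def insert_vocabulary(node):
    """Insert the most specified item whose features fit the node."""
    candidates = [(form, feats) for form, feats in SPANISH_VOCABULARY
                  if feats.items() <= node.items()]
    return max(candidates, key=lambda fv: len(fv[1]))[0]

cluster = [{"person": 3, "case": "dat"}, {"person": 3, "case": "acc"}]
print([insert_vocabulary(m) for m in impoverish(cluster)])  # ['se', 'lo']
```

Note that without Impoverishment the dative node would surface as le, as in (8a); it is precisely because insertion follows Impoverishment that Inclusiveness is violated, as the text observes.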

According to Rezac (2008: 68–69), the PCC is “a consequence of relativized minimality, whereby … dative X [in (12)] prevents H-Y person Agree”. Hence, in the context of X-DAT, Y is disallowed if it is positively specified for [person], i.e. if it is 1st/2nd person.

(12)

[H ... [X-DAT ... [Y


In Rezac’s words, “much work seeks to solve the riddle of this quirky partial intervention of the dative”. For instance Anagnostopoulou (2008: 18), referring back to Adger and Harbour (2007), assumes that “1st, 2nd and reflexive pronouns are [+person] pronouns... while the person specification of 3rd person pronouns depends on the type of Case they have … direct object 3rd person pronouns lack person features altogether... On the other hand, 3rd person dative/indirect object arguments are understood as animate/affected, they encode point of view, properties encoded through person features”. Because of this, the dative checks the [person] feature of v in (13); this prevents checking by the 1st/2nd person and ultimately yields ill-formedness. (13)

[ v[P] [ Dat [V 1st/2ndP

In short, the PCC in (12)-(13) is just a double-Person/Participant constraint. If so, one wonders why the double-l (i.e. -Person/-Participant) constraint in (11) should be treated in such a radically different way from it. An empirical parallelism between the two phenomena holds in practically all respects. In particular, a PCC violation can be repaired in the same ways as a double-l violation. In some varieties of French (Rezac 2006), inserting the locative in place of the 3rd person dative, as in (14), yields a well-formed result — effectively an instance of suppletion. Rezac (2006) advocates a model under which the violation of the PCC is overcome in the derivation, by the merger of an “additional probe” checking the locative.

(14)

Philippe vous y présentera
Philip you there will.introduce
‘Philip will introduce you to him’

However, adopting the view that a violation is first introduced and then repaired is even more expensive in the syntax than in the morphology. At worst, it involves backtracking, since the derivation survives the point of crashing (the Minimality violation) to achieve well-formedness. At best, it involves Late Insertion (hence a violation of Inclusiveness), since it is the derivation that decides what lexicalization the argument will have, and therefore lexicalization is forced to take place post-syntactically. In fact,

132

M. Rita Manzini

Bejar and Rezac (2009: 67) are explicit on the “last resort, global economy flavor” of their added probes.3 Another repair of the PCC is possible, paralleling again double-l environments, and it involves the suppression of one of the members of the offending pair. Thus a conflict like the one in (4) can be resolved by introducing the 3rd person possessor as a genitive in (15), thereby eliminating the dative clitic. A clearer case of repair by obliteration may come from Kiowa, where “when the verb... ‘bring’ takes an indirect object and a second-person direct object, the verbal agreement prefix cannot encode all three arguments” (Adger and Harbour 2007).

(15)

Mi prendono come sua segretaria
me they.take as his secretary
‘They take me as his secretary’

A possible difference between double-l and the PCC is that the double-l constraint is highly parametrized — since it holds in Spanish but not in sister languages like Italian (7); on the contrary the PCC is often presented as universal. Yet, Haspelmath (2004) quotes several languages where the PCC does not hold, strengthening the parallelism with double l-. Furthermore, the PCC is in reality a family of constraints, which share the domain of application (clitics or agreement) and the basic form of the constraint (i.e. matching a case array to a person hierarchy), but differ in other respects. For instance, the weak PCC only prevents co-occurrence of a 1st/2nd person accusative with a 3rd person dative (not a 1st and 2nd person one). In the same way, what we have called the double-l constraint does not apply in a uniform way. For instance, in Northern Italian varieties mutual exclusion is very often attested between a subject and an object l- clitic (Manzini and Savoia 2005, 2007).

In short, there is no obvious empirical asymmetry justifying treatments of the PCC and of double-l at two completely different levels of analysis. Yet the literature does not appear to perceive any problem. Thus Nevins (2007), after presenting the theory of double-l as dissimilation mentioned above, goes on to propose a syntactic approach to the PCC, whereby certain combinations of feature values in the domain of a probing v head determine ill-formedness under Multiple Agree. To our knowledge, Manzini and Savoia (2002, 2004, 2005, 2007, 2010) are alone in concluding not only that double-l is in fact a syntactic-level phenomenon, but also that Spurious se requires neither backtracking nor Late Insertion. We will illustrate these conclusions in §3. As for the PCC, despite the heuristic role that it plays in the present discussion, we doubt that its construal as a double-Person phenomenon is correct and we therefore abandon it in what follows (see Manzini 2012 for an argument that it may be double-dative, at least in some languages).

3 The addition of a head in the derivation is also one of the options suggested by Richards (2010) to resolve (potential) Distinctness violations — in his case, the head is a phase head.

2.2. Negative imperatives

In order to understand negative imperatives in Romance, it is necessary to recall that according to Kayne (1991), enclisis depends on the verb being higher than the clitic string (hence to its left). Following Rivero (1994), we can account for the Romance imperatives that display enclisis by assuming that the imperative is in a high position in the sentence, say in C, where it precedes the clitic string in the inflectional domain.

In some Romance languages, negating an imperative does not involve any modification either of the imperative verb or of the enclitic order. As noticed by Zanuttini (1997), Northern Italian dialects with a postverbal negation adverb are of this type. Yet varieties of the Romagna that negate only with a clitic can also display this behaviour, as in (16). Note that here and in what follows we exemplify the 2nd person singular, which is a ‘true’ imperative according to Rivero (1994), Zanuttini (1997).4

(16)

a. ʧem-æl
   call him
   ‘Call him!’
b. nu ʧem-al
   not call him
   ‘Don’t call him!’

S. Mauro Pascoli

Two alternative ways of forming negative imperatives in Romance are better known, from languages like French and Italian. In both of these languages, combining the negation with the positive imperative yields an ungrammatical result, as illustrated in (17b) and (18b). In neither language does this lead to ineffability. Rather, in French it is sufficient to switch from enclisis to proclisis to get a grammatical result, as in (17c); in Italian suppletion is necessary. Therefore the negative imperative in the 2nd person singular is formed with the morphological infinitive, as in (18c).

4 This does not mean that we recognize the category of true imperatives (cf. the discussion at the beginning of §5).

(17)

a. Donne-le-lui!
   Give it to.him
   ‘Give it to him!’
b. *Ne donne-le-lui pas!
   not give it to.him
   ‘Don’t give it to him!’
c. Ne le lui donne pas!
   not it to.him give not
   ‘Don’t give it to him!’

(18)

a. Da-glie-lo!
   Give it to.him
   ‘Give it to him!’
b. *Non da-glie-lo!
   not give it to.him
   ‘Don’t give it to him!’
c. Non dar-glie-lo!
   not to.give it to.him
   ‘Don’t give it to him!’

Consider the relatively simpler case of French. By hypothesis, in the positive imperative the verb moves from I to C, as in (19a). When a negation is present in the same structure, movement is blocked, as in (19b). The original idea of Roberts (1994) was that the negation acts as an A’-intervener on the path of A’-head movement, yielding a Minimality violation. The characterization of the identity content, coinciding simply with the notion of A’-position for Roberts, can of course be refined. Suppose the C position to which the imperative moves is associated with modal properties. We may want to say that the same properties (for instance nonveridicality, in the sense of Giannakidou (1998)) are found on Neg, triggering Minimality and preventing the verb from moving to C.

(19)

a. [C donne [Cl le [Cl lui [I donne
b. [C [Neg ne [Cl le [Cl lui [I donne


As already noted, negative imperatives are not ineffable in French. Rather, while verb movement to C is obligatory in positive contexts (i.e. proclisis is impossible), in negative imperatives verb movement is impossible (i.e. proclisis is obligatory). Zanuttini (1997: 145–146) explains this pattern in terms of Chomsky’s (1995) Minimality. She assumes that “the negative marker... can raise to fill the head of CP. This happens when Neg is the head closest to C: the features of the negative marker constitute the closest features to the head C; hence they are attracted … Since the negative marker satisfies the feature of C, the verb itself need not and thus cannot move to C”. Going back to the structure in (19b), we can say that some modal feature of C acts as a probe — since the closest goal is Neg, it is Neg that raises to C. The resulting structure is well-formed. Under this analysis, there is in fact no backtracking (descriptive repair) at all.

Let us consider however how the theory fares with respect to languages with suppletion, like Italian in (18c). Zanuttini (1997) suggests that the infinitive depends on the insertion of an abstract auxiliary. Yet she must explain why this auxiliary is not inserted in positive imperatives. To this end she postulates an abstract Mood head which needs to be checked in negative contexts. The assumption is that imperatives cannot check it, while the empty auxiliary can. In other words, Zanuttini’s account of suppletion requires no less than two abstract categories (the empty Mood head and the empty auxiliary), of which at least the Mood head is not independently motivated.5 Furthermore, languages of the type of S. Mauro in (16) are not discussed, though Zanuttini (1997: 150) is aware of their existence. In short, it seems fair to conclude that at least suppletion is a problem for the analysis of negative imperatives, no less than for that of clitic phenomena in §2.1.

5 The empty auxiliary of Italian would parallel an overt auxiliary, visible in dialectal varieties. The problem remains that this auxiliary is never visible in Italian.

2.3. Negative Concord

Another facet of the problem is represented by phenomena which have the general distribution studied in §2.1-§2.2 — yet neither Minimality nor the OCP are normally invoked in analyzing them. One of them is so-called Negative Concord. Some languages allow for the multiple occurrence of the n- morphology within the same sentence with the interpretation of a single negation, for instance Italian in (20); other languages do not, for instance English, as in (6a). Example set (6) is repeated below for ease of reference.

(6)

a. ≠I don’t like nothing
b. I don’t like anything
c. I like nothing

(20)

Non voglio niente
not I.want nothing
‘I don’t want anything’

Haegeman and Zanuttini (1991) model Negative Concord via head-Spec agreement in NegP, hence ultimately via abstract movement of the n-argument. However absence of Negative Concord in English is not taken to depend on a violation of Minimality in any literature that we are aware of. Within a minimalist framework, Zeijlstra (2004) treats Negative Concord as an instance of (Multiple) Agree between a negative operator and some element(s) with an uninterpretable negative feature. A language like English without Negative Concord can be accounted for by assuming that all n-constituents have an interpretable [iNEG] feature, while there are no uninterpretable [uNEG] constituents. Hence lack of Negative Concord and the mutual exclusion between n-words that derives from it result simply from the different distribution of features with respect to Negative Concord languages.

Despite the different treatment, the syntactic and even morphological contours of the Negative Concord/double-n phenomenon are the same as in the double-l phenomenon previously reviewed. Some languages allow for doubling, as in (20), while some languages have mutual exclusion, as in English (6a). The latter does not lead to ineffability; rather the content of Italian (20) can be externalized in English in one of two possible ways. One consists in eliminating one of the n- words, as in (6b), the second in substituting a negative polarity item of the any series for one of the n-words, as in (6c). Looking at the entire matter without knowledge of previous literature, one may be entitled to conclude that this pattern is the same as in other repairs examined so far, yielding either simple mutual exclusion (6b), or suppletion (6c).

Another important consideration is that the double-n constraint may hold in different languages for different sets of configurations. For instance, French generally allows for the co-occurrence of n-words under the Negative Concord reading, as in (21a). Yet it disallows the co-occurrence of the sentential negation adverb pas ‘not’ with another n-word, as in (21b). The conflict is generally resolved by not lexicalizing pas, as in (22a). Alternative lexicalizations are in principle also possible, for instance by qui-que ce soit, literally ‘whoever it be’, as in (22b).

(21)

a. Personne (ne) voit rien
   nobody not sees nothing
   ‘Nobody sees anything’
b. *Il (ne) voit pas rien
   he not sees not nothing
   ‘He doesn’t see anything’

(22)

a. Il (ne) voit rien
   he not sees nothing
   ‘He doesn’t see anything’
b. Je n’ai pas vu qui-que ce soit aujourd’hui
   I not have not seen anybody today
   ‘I haven’t seen anybody today’

One may want to argue that the double-n constraint of English, or of French (to the extent to which it applies), is really quite different from the double-l constraint of Spanish, in that examples like I don’t want nothing in (6a) are not ungrammatical — but only require a particular interpretation, i.e. a double negation one. Similarly French (23) is acceptable with a double negation reading (Martineau and Déprez 2004). (23)

Ce n’est pas RIEN que d’être Français
it not is not nothing that to be French
‘Being French isn’t nothing’

This is an important objection. But consider again the logic of local anti-identity. The configuration le … lo in a language like Spanish is well-formed if two different clitic domains are involved and leads to ungrammaticality only in case no such two distinct domains can be construed. What happens in the English double-n phenomenon can be described in comparable terms. Two n-words are mutually exclusive in the domain of the same logical negation; if there are two n-words, this implies that two logical negations (two different negation domains) must be introduced.


Let us then provisionally accept that nothing stands in the way of an assimilation of double-n (Negative Concord) and double-l phenomena. What is especially interesting for present purposes is that, if lack of Negative Concord is simply modelled as a different distribution of features with respect to Negative Concord, then no constraint violation and no repair is involved in, say, French (22). Rather what ensures grammaticality in (22) is simply an alternative lexicalization choice (i.e. an alternative numeration) with respect to the ungrammatical (21b). In other words, (21b) and (22) are not derivationally connected; specifically, (21b) is not a step in the derivation of (22a). This is the line of explanation that we will pursue in the rest of our work.

3. Double-l

In order to analyse double-l mutual exclusions, we first need to understand what property l- lexicalizes. As mentioned in §2.1, Nevins (2007) argues that it is the [–participant] feature, taken to provide a formal characterization for the descriptive 3rd person. Equivalently, in underspecification systems (Harley and Ritter 2002) lack of specification for the [participant] feature is interpreted as a negative specification for that feature, i.e. [–participant] again. Here however we maintain that features are privative, i.e. they are only positively specified, nor is there any underspecification.

In the light of these assumptions, consider the singular paradigm of object clitics in Italian, as summarized in (24). The 1st and 2nd person forms in (24a) are characterized by a specialized lexical base m-/t-, denoting ‘speaker’ and ‘hearer’. The 3rd person forms in (24b) have a lexical base l- followed by nominal class inflections -o/-a. The same lexical base l- turns up as the determiner of nouns, in which case its referential value is definiteness, as in (24c); the nominal class endings -o/-a are the same seen on nouns (here zi- ‘uncle/aunt’).

(24)

a. mi, ti
   me, you
b. lo, la
   him, her
c. lo zio, la zia
   the uncle, the aunt


Morphological analysis then suggests that Italian clitic pronouns consist of ‘speaker’, ‘hearer’ and ‘definiteness’ properties, as well as nominal class ones, and that the l- base in particular lexicalizes D (definiteness). These conclusions are supported by the semantic analysis of 3rd person pronouns in Kratzer (2009: 221), according to whom “the alleged ‘3rd person’ features are in fact gender features, a variety of descriptive feature... If [a descriptive feature] is to grow into a pronoun, it has to combine with a feature [def] that turns it into a definite description. If [def] is the familiar feature that can also be pronounced as a definite determiner in certain configurations, it should head its own functional projection, hence be a D. It would then not originate in the same feature set as descriptive features, which are nominal, hence Ns”. Manzini and Savoia’s (2002, 2004, 2005, 2007) categorization for so-called 3rd person pronouns is essentially identical, i.e. a D category for the Definiteness morphology (l- in Romance) embedding an N, i.e. nominal class, category for its inflection.

In short, reference accrues to 3rd person forms not through their lack of P (Participant/Person) categorization, nor through their negative P specifications — but through their positive D categorization. The logical space is partitioned not into P and not-P (i.e. [+participant] and [–participant] in conventional notation), but into P and D. In these terms, mutual exclusion between two l- pronouns turns out to be mutual exclusion between two D’s. That l- properties are involved in mutual exclusion, and not the N inflection or the clitic as a whole, is shown by the Sardinian variety in (25), where the dative l- clitic in (25a) and the accusative l- clitic in (25b) do not combine. However the dative-accusative cluster takes the form in (25c), where the l- dative is followed by an accusative reduced to the sole nominal class vowel.

(25)

a. li daCa Ɂustu
   to.him he.gives this
   ‘He gives this to him’
b. mi lu/lɔr daCa
   to.me it/them he.gives
   ‘He gives it/them to me’
c. li u/ɔr daCa
   to.him it/them he.gives
   ‘He gives it/them to him’

Gavoi

If the double-l constraint really targets D properties, it becomes likely that some syntactic/LF principle is involved, rather than a morphological one. Specifically, since D is an operator, certain properties can be imputed to it on the basis of what we know about operators; for instance, in LF movement terms, we may say that insertion in the clitic string of a lo clitic in Spanish (8a) causes D to take scope over the entire clitic string. Alternatively, using the minimalist Agree notation, one may say that each clitic domain is associated with a set of abstract operators, including D, acting as probes — and that insertion of lo in (8a) values D, as in (26). In the present context, valuation of the D probe by the clitic means roughly that the clitic (or its N descriptive content, cf. the discussion surrounding (24)) values the variable introduced by D.6

(26)

[D[lo] [lo [I

When it comes to the ill-formedness of the Spanish sequence *le lo in (8b), nothing prevents us from setting up a Minimality model like the one proposed by Rezac (2008) for the PCC. Thus we will say that in (27), le values the D operator, to which it is closer, and prevents lo from valuing it — presumably resulting in a violation of Full Interpretation at the interface. (27)

[D[le,*lo] [le [lo [I

However, two observations are in order. First, the asymmetry introduced by Minimality between the two l- clitics in (27) appears to be unnecessary, since a weaker, symmetric model is sufficient to yield the desired result. Assuming that D can be valued only by a single argument (as in standard Agree), valuation by le will block valuation by lo, as in (27), but also vice versa, as in (28), if le and lo are deemed to be equidistant from the probe (for instance, because they are internal to the same phase). As before, we can then assume that Full Interpretation is not met at the interface, since (informally) scope remains unassigned to one of the two arguments. (28)

[D[*le, lo] [le [lo [I

What is more, the D probe itself in (26)-(28) is redundant. It is perfectly possible to model the mutual exclusion between two l- clitics in Spanish by assuming that there is a single D probe per clitic domain and that a single l- clitic can value it under Agree. However, by hypothesis the lo, le clitics of Spanish have the internal structure displayed in (29), consisting of a D operator and an N inflectional element. Therefore the D probe is redundant with respect to the D operator properties lexically represented by the l- morphology. If we eliminate the Probe/Agree part of the structure, i.e. the shaded area in (26)-(29), we can still obtain the mutual exclusion between the two clitics on the basis of the simple assumption that there is at most one D operator per clitic string. So, two D operators, as in (29), are excluded. As far as we can see, the Probe/Agree encoding makes no explanatory contribution. Nevertheless, in homage to the fact that it is ordinarily employed by minimalist literature, we will keep a notation compatible with it.

6 In other words, both D and the clitic play a role in interpretation. Uninterpretable features are not part of the present model (cf. Brody 2003; Manzini and Savoia 2005, 2007, 2011a).

(29)

a. [D[le,*lo] [D l [N e]] [D l [N o]] [I
b. [D[*le, lo] [D l [N e]] [D l [N o]] [I

Recall that our aim is not simply to provide the basis for the ill-formedness of Spanish (8b), with the structure in (29), but also for the well-formedness of (8c), i.e. the so-called Spurious se. Treatments available in the literature make two crucial assumptions. The first one is that violation of the double l- constraint, as in (29), is a necessary precondition for the lexicalization of the Spurious se. The other key assumption is that se is inserted because of its impoverished feature specifications.

Let us begin with this second assumption. In the discussion that precedes, we rejected the idea that 3rd person may be characterized by a [–participant] feature or by the absence of participant features, embracing instead privative (i.e. positive only) features without underspecification. If so, we will also want to reject the idea that an element like se may be characterized as a default, without any feature specifications, or perhaps endowed only with 3rd person features (Harris 1994) — and capable of entering contexts as diverse as the reflexive and the Spurious se in virtue of this underspecification. Manzini (1986) and Manzini and Savoia (2005, 2007, 2011a) argue in favor of a positive characterization of the denotational content of se in Romance as the free variable of the clitic system. In particular, in all Romance languages se is associated with middle-passive voice, including passive, anticausative, reflexive and impersonal interpretations. These interpretations follow if se is characterized as a variable. If it is bound by an antecedent, a coreference reading (i.e. reflexive) or a chain reading (i.e. passive/anticausative) is obtained. If, in the absence of antecedents, the se variable is closed by a generic operator, we obtain the impersonal reading (cf. Chierchia 1995).


Consider then what happens if se is inserted in the context in (29) as an alternative to le. One possible outcome is that se is interpreted as a reflexive. Though this is not normally emphasized in presentations of the data, sentences like Spanish (8c) are ambiguous between the Spurious se reading ‘Maria sent it to him’ and the reflexive reading ‘Maria sent it to herself’. In this perspective, the Spurious se phenomenon corresponds essentially to a different interpretation of a configuration independently attested for the reflexive reading. Specifically, we propose that the Spurious se corresponds to a reading of the se variable within the scope of the D operator. The latter acts as a definiteness closure for the variable — which is interpreted as having definite reference. If a Probe/Agree terminology is preferred, one could presumably say that the D probe in (30) can be valued by both lo and se, since the latter provides only a partial valuation and does not interfere with valuation by lo. (30)

[D[se,lo] [Q se=x][D l [N o]] [I

At this point, the account of Spurious se is no longer necessarily predicated on some form of repair, hence of backtracking/Late Insertion. The lexicalization pattern in (30) is generated and interpreted without any need for (29) to be first generated, then excluded and repaired. In other words, the well-formedness of (30) is not causally linked to the ill-formedness of (29) — though there is a conceptual link, provided by the notion of D scoping/probing over the clitic string.

In his discussion of repairs to the PCC, Rezac (2006) is aware of the possibility that they may be treated just as ‘paraphrases’, i.e. as alternative means of lexicalization. He rejects this possibility, on the basis of the observation that the supposed paraphrase is restricted to a particular context. Our discussion of Spanish (29)-(30) shows that there is no contradiction between the notion of alternative lexicalization/paraphrase and that of contextual restriction. The same environment, defined by the insertion of the definite clitic lo, determines the ill-formedness of (29), because of the single D operator/Agree constraint — and also determines the well-formedness of (30) with the non-reflexive reading. For, only in the scope of an independently merged/valued D operator can se receive a definite reading. (29) and (30) therefore correspond to different lexical choices (numerations), which, given the lo context, become either impossible (le) or acquire a meaning not available in other contexts (se).7

In discussing (27)-(28), we concluded that an asymmetric account of double-l (in terms of dative intervention under Minimality) and a symmetric account (in terms of equidistant dative and accusative) are empirically equivalent. One may wonder whether the Spurious se repair means that it is dative intervention that matters, after all. In reality, even the sparse exemplification of Romance varieties provided above is sufficient to show that the accusative may equally be targeted by repairs. Thus in Gavoi in (25), the dative keeps the same lexicalization as in isolation, while it is the accusative that reduces to a pure nominal class vowel (without l- definiteness base). Similarly, when double-l violations are avoided via simplification of the cluster, the dative may be suppressed, as in Aliano in (10), but also the accusative, as in Mascioni in (9). In other words, the overall evidence from descriptive repairs supports the symmetric status of dative and accusative clitics under mutual exclusion.

The parameter between Spanish (8) and Italian (7), where two l-clitics are possible within the same string, also seems well within reach of the present model. In Probe/Agree terminology, one could presumably invoke multiple valuation of the D probe in Italian (31), i.e. Multiple Agree. In other words, the parameter would be between Agree (Spanish) and Multiple Agree (Italian). In a bare syntax approach of the type favoured here, essentially nothing needs to be said for Italian; simply no constraint applies to l/D lexicalizations. If we maintain the generalization that each clitic string is associated with a single D operator, we can assume that Italian (31) is interpretable to the extent that the two lexicalizations of D undergo pair-quantifier formation in the sense of May (1989) (cf. also the discussion of Déprez 1999 in §4).

(31)

[D[glie, lo] [D gli [N e]] [D l [N o]]

For reasons of space, we cannot provide analyses (or even data) for the full range of parametrization of double-l violations and their alternative lexicalizations in Romance varieties, for which we refer the reader to the discussion by Manzini and Savoia (2002 et seq.). In the present discussion we have rejected the view that double-l phenomena are to be attributed to a constraint introducing a violation, which in turn leads to some form of repair. Of particular relevance is the fact that the double-l/D constraint is highly parametrized. In Probe/Agree terminology, its presence or absence in a language can be formalized in terms of Agree vs. Multiple Agree. Equivalently, we may simply refer to languages allowing for one vs. many D lexicalizations. This less abstract formulation leads Manzini and Savoia (2005, 2007) to suggest that a form of Economy is involved. In other words, in a language like Spanish one lexicalization of l- Definiteness properties suffices for the entire clitic domain — call it Economy of lexicalizations. This means on the one hand that, since more than one lexicalization is unnecessary, it becomes impossible — yielding the local anti-identity effect. On the other hand, other clitics, specifically the free variable se, can be interpretively associated with the unique l- lexicalization, yielding the definite (so-called ‘spurious’) reading, unavailable to them in other contexts.8 In what follows we apply this perspective, developed in connection with double-l, to the analysis of the other local anti-identity phenomena introduced in §1-§2.

7 Under conventional Merge, lo is merged first, so that no look ahead is implied by the merger of se. Proceeding top-down (for instance under Form Dependency in the sense of Manzini (1996)), definite se may be taken to select an l- clitic (here lo). In either instance, the representational account in the text can be projected onto a derivation without implying non-local choices.

4. Double-n

In §2.3 we reviewed double-n mutual exclusions, concluding that they bear remarkable surface similarity to the double-l phenomenon. Some languages (e.g. Italian) admit of the local co-occurrence of two n-words, while other languages (e.g. English) do not; other languages yet (French) admit it to a varying degree. Repairs consist in not lexicalizing one of the two n-words or in introducing an alternative lexicalization.
An apparent asymmetry between double-l and double-n, also reviewed in §2.3, is represented by the fact that the co-occurrence of two n-words in a local domain yields the impossibility of a certain interpretation (the single negation or Negative Concord interpretation) and not ungrammaticality. We already suggested that the asymmetry is only apparent. The local domain for double-l/D is the clitic string (the inflectional D domain), while the local domain for double-n/Neg is the scope of a logical negation Neg. Two l- clitics are ungrammatical in the same string in Spanish, but not in two different strings. Similarly two n-words are ungrammatical in English in the scope of the same negation, but not in the scope of two different negations. The only real difference is that the clitic string is a syntactically defined domain. On the contrary a double negation operator can be inserted at the LF interface subject only to pragmatic constraints, removing ungrammaticality.

8 Apart from se, any oblique can be suppletive for the dative, hence also the locative or the partitive in languages that possess these clitic forms (Manzini and Savoia 2002 et seq.).

In order to maximize consistency with the discussion that precedes, we will consider not the Germanic/Romance Negative Concord macro-parameter, but micro-parameterization within the Romance languages. Let us begin as usual by reviewing the lexical properties of the items involved. We assume that n-words in Romance do not lexicalize the logical operator negation, but are just Negative Polarity Items (NPIs), hence variables read within the scope of an abstract negation and existential closure. The basic evidence in favour of this conclusion (Longobardi 1992, Acquaviva 1994 on Italian) is that n-words can be read in the scope of non-negative operators, in particular the question operator, as in Italian (32a,b), but also conditionals, as in (32c). The typological literature suggests that “in questions, negation is neutralized [...]: Can you hear nothing? and Can you hear anything? have identical truth conditions” (Haspelmath 1997: 121). However we know that non-negative nessuno is involved in (32a,b) because the non clitic required by negative readings is omitted. Furthermore the negative and non-negative reading of n-words is not neutralized in a conditional like (32c).

(32)

a. b. c.

E’ venuto nessuno? is come anybody ‘Has anybody come?’ Gli ho chiesto se era arrivato nessuno him I.have asked if was arrived anybody ‘I asked him if anybody had arrived’ Se arriva nessuno, dimmelo if arrives anybody, tell.me.it ‘Tell me, if anybody arrives’

Sentences like (32) are not possible in standard French. One may then want to conclude that French n-words are truly negative. We prefer the conclusion that n-words in French are NPIs. One piece of evidence in favor of this is provided by the n-clitic. The literature quoted takes the n-clitic to instantiate a negative operator even in Italian. By the same logic adopted in the discussion of (32), however, the Italian n-clitic must be just an NPI, since it is licensed not only by the logical negation, but also by other modal operators. Like other n-words it occurs in questions, as in (33a), but also in contexts where negative and non-negative readings have clearly different truth conditions, in particular comparatives, as in (33b), or comparative-like contexts, like (33c). These non-negative occurrences have been independently studied in the literature as instances of ‘spurious’ or ‘expletive’ negation (Belletti 2000 on Italian). In French (34) exactly the same holds as in Italian (33). This suggests that in French as well n-words do not actually introduce a negative operator, but simply require being read in its scope (being NPIs).

(33) a. Mi chiedo se non sia venuto
        me I.ask if not he.is come
        ‘I wonder if he has come’
     b. E’ più alto di quanto non pensi
        he.is more tall than how.much not you.think
        ‘He is taller than you think’
     c. E’ arrivato prima che non pensassimo
        he.is arrived before that not we.thought
        ‘He arrived before we thought (he would)’

(34) a. Jean en veut plus que Marie n’en a
        Jean of.them wants more than Mary not of.them has
        ‘John wants more of them than Mary has’
     b. … avant qu’il ne soit trop tard
        before that it not be too late
        ‘… before it is too late’

What interests us here directly is the mutual exclusion between pas and rien in (21b). Despite their closeness, Italian and French do not represent the best of minimal pairs, since Italian does not have a sentential negation adverb comparable to French pas. A much closer comparison is with Northern Italian varieties (Zanuttini 1997, Manzini and Savoia 2005, 2011a). In (35) we report data from the Piedmontese variety of Mezzenile, in which the sentential negation adverb can always combine with a negative argument, even if they are adjacent, as in (35a). Déprez (1999) similarly notes that doubling of pas by other n-words is possible in some varieties of French, such as Quebecois French in (36).

(35) a. u f@i Iint Iɛnte                    Mezzenile
        he does not nothing
        ‘He doesn’t do anything’
     b. u j ɔnt Iint ʧaˈma IyI
        they Loc have not called nobody
        ‘They haven’t called anybody’

(36) J’ ai pas vu parsonne                  Québec
     I have not seen anybody
     ‘I haven’t seen anybody’

In other Piedmontese varieties, more restrictive patterns are observed than in Mezzenile, yet less restrictive than in French. Thus in Fontane in (37) the combination of the sentential negation adverb with the negative argument ‘nobody’ is possible, as in (37a), though the combination with ‘nothing’, as in (37b), is not attested.

(37) a. u i vɛŋ Ient Iyŋə                   Fontane
        it Loc comes not nobody
        ‘There doesn’t come anybody’
     b. e mɔIʤu Ient
        they eat nothing
        ‘They eat nothing’

Our hypothesis is that the mutual exclusion of the sentential negation adverb and the negative polarity argument in French (21b) is strictly comparable to the dative-accusative mutual exclusion that gives rise to Spurious se in Spanish. In Probe/Agree terms, we can say that the logical negation operator cannot be valued by both pas and rien, as schematized in (38); recall that if pas and rien are just NPIs, an abstract negation operator is required at LF, theory-independently. The mutual exclusion resulting from the violation in (38) can then be modelled as a repair, perhaps consisting in “deleting structure” (Richards 2010).

(38) a. [¬ [pas, *rien]   [I veux   [pas   [rien
     b. [¬ [*pas, rien]   [I veux   [pas   [rien

However, this is unnecessary under the approach introduced here for double-l. We will say that inserting rien as the internal argument is sufficient to satisfy all of the properties (i.e. negative polarity properties) that would otherwise be lexicalized by the sentential negation adverb pas, excluding its insertion on Economy grounds. Manzini and Savoia (2005, 2011a) make this more precise by assuming that a so-called sentential negation adverb like pas is an eventive NPI, i.e. an eventive variable restricted by the predicate VP, while rien introduces a variable corresponding to the internal argument. Since negating the internal argument also negates the event (cf. I eat nothing → I don’t eat, I eat no cabbage → I don’t do cabbage-eating, etc.), pas is redundant with respect to rien, and hence excluded by Economy.

We expect that, alongside languages of the type of French, there might be languages like Mezzenile in (35) that have no double-n constraint, exactly as there are languages which have no double-l constraint. In Agree terminology one could say that in Mezzenile the negation operator can be valued by more than one NPI (i.e. the negation probe allows for Multiple Agree). In present terms, Mezzenile is a language in which no Economy of lexicalization applies to n-words, so that the same logical negation can have any number of them in its scope, as in (39).

(39) [¬ [Iint, Iɛnte]   [I f@i   [Iint   [Iɛnte
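The Agree vs. Multiple Agree parameterization just invoked can be given a toy illustration. The following Python fragment is ours, not part of the chapter's formal system, and its predicate names are purely illustrative: a single-Agree language lets exactly one NPI lexicalize the content of the logical negation in a local domain, while a Multiple-Agree language places no such Economy bound on the NPIs in its scope.

```python
# Illustrative sketch of the Agree vs. Multiple Agree parameter for
# Negative Concord, cf. the schemas in (38)-(39). Toy model only.

def scope_ok(npis, multiple_agree):
    """Check whether the NPIs in the scope of one logical negation
    form a licit lexicalization under the language's parameter."""
    if not npis:
        return False              # the abstract negation needs a valuer
    if multiple_agree:
        return True               # Mezzenile-type: any number of NPIs
    return len(npis) == 1         # French-type: Economy allows just one

# French (21b): pas and rien are mutually exclusive under one negation.
assert scope_ok(["rien"], multiple_agree=False)
assert not scope_ok(["pas", "rien"], multiple_agree=False)

# Mezzenile (35a): neg adverb and 'nothing' co-occur (Negative Concord).
assert scope_ok(["Iint", "Iɛnte"], multiple_agree=True)
```

Two n-words in the scopes of two distinct negations simply amount to two separate evaluations, which is why English double negation (two operators) is fine while two n-words under a single negation are excluded.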

Our approach captures an important fact about mutual exclusion between n-words, often discussed in the Negative Concord literature, namely that it is constrained by lexical properties. Thus in a language like Fontane in (37) the mutual exclusion holds only between the sentential negation adverb and the ‘anything’ argument, while the ‘anybody’ argument co-occurs with the adverb. The crucial difference between ‘anything’ and ‘anybody’ seems to reside in the fact that ‘anybody’ is endowed with a human restriction, while ‘anything’ does not have any restriction (it could refer to an individual, an event, etc.). Roughly speaking, then, Economy of lexicalizations, and hence the double-n constraint, does not apply in Fontane in the presence of a lexical restriction (‘nobody’).

Déprez (1999) also favors a lexical construal of the Negative Concord parameter, but bases it on a much deeper, logical divide. She argues that while in Quebecois French (or here Mezzenile) n-words are NPIs, in standard French they are numerical quantifiers. Negative Concord between two n-arguments, as in French (21a), depends on the creation of a pairwise numerical quantification over n-words; this is impossible with pas in (21b), which has a different logical structure, not being a numerical quantifier. However, consider a variety like Fontane. It seems unlikely that we would want to attribute different logical properties to ‘nobody’ (a numerical quantifier, incompatible with negative Ient ‘not’) and to ‘nothing’ (an NPI, compatible with Ient ‘not’).

More evidence that parametrization does not depend on the logical properties of the lexical items involved comes from yet other Piedmontese varieties, where the sentential negation adverb is mutually exclusive with n-arguments in simple tenses, as in (40a), but not in compound tenses, as in (40b). The mutual exclusion in (40a) cannot depend on a different logical status of the n-adverb and the n-argument, as negative and numerical quantifiers respectively, since they happily co-occur in the Negative Concord reading in (40b). One possible construal of the contrast in (40) is based on the conclusion that compound tenses are syntactically bisentential, though some event unification (‘restructuring’) operation ultimately yields a single event/situation reading (Manzini and Savoia 2005, 2011a; cf. Kayne 1993). If so, in (40b) the two n-words are not local enough to trigger Economy of lexicalizations, though restructuring means that they can be read in the scope of the same logical negation (Negative Concord).9

(40) a. i mɔIʤ Iente                       S. Bartolomeo Pesio
        I eat nothing
        ‘I don’t eat anything’
     b. i ø Ieŋ maIˈʤɒ Iente
        I have not eaten nothing
        ‘I haven’t eaten anything’

The line of reasoning followed so far can in principle be extended beyond Romance. In particular, the view of the Romance vs. Germanic (English) parameter as involving deep, logical properties (i.e. English n-words are negative quantifiers) seems unnecessary. Instead we may assume that English n-words, like their Romance counterparts, are indefinites, i.e. NPIs, though specialized for the scope of the negation. In English, furthermore, Economy of lexicalizations is generalized, leading to the mutual exclusion of any two n-words. Penka and Zeijlstra (2010: 782) arrive at a similar conclusion, namely that n-words “crosslinguistically are analyzed as non-negative indefinites … The difference between NC and non-NC languages can then be attributed to a parameter fixing … whether one interpretable negative feature can check multiple instances of uninterpretable features or a single one”, i.e. to the Agree vs. Multiple Agree parameter, as briefly described here in the discussion surrounding (38) and (39).

9  A further problem that remains open is that in French, mutual exclusion between pas and other n-words is not removed in compound tenses. Discussing Richards (2010) in §1, we commented that the relevant notion of locality is not directly addressed in the present paper. In reality, if we compare the discussion of double-l and double-n, no single notion of locality seems to be involved. Implicitly, we have in fact suggested that the notion of local domain is relativized, much in the sense of Rizzi (1990), coinciding with the clitic string (i.e. the D inflectional string) for clitics and with the sentence (the domain of the logical Neg) for NPIs.

In short, the separation between the different traditions of semantic analysis (‘Negative Concord’) and of morphological analysis (‘Spurious se’) has long stood in the way of recognizing that double-l and double-n phenomena can actually be unified under the same syntactic/LF account, implying neither morphological complexities (‘Late Insertion’) nor deep logical parameters. The most significant feature of the unification that we propose is that no violation is involved, hence no repair, but only varying conditions of lexicalization.

5. Negative imperatives

Next, we turn to negative imperatives, beginning with an analysis of the lexicon they involve. The examples in §2.2 present 2nd person singular forms, which are treated in the literature (Rivero 1994, Zanuttini 1997) as ‘true’ imperatives. This is not to say that these forms are morphologically specialized. For instance, in Italian, III and IV conjugation imperatives like regg-i ‘hold-2sg!’, dorm-i ‘sleep-2sg!’ are syncretic with the 2nd singular of the present indicative, i.e. ‘you hold’, ‘you sleep’. In the I conjugation, imperatives like lav-a ‘wash-2sg!’ are syncretic with the 3rd singular present indicative, i.e. ‘s/he washes’. What is true is that all of these forms are very elementary, consisting of the verb root followed by a vowel, which in the I and IV conjugations is uncontroversially the thematic vowel. Graffi (1996) argues in fact that the same holds of the II-III class vowels (e.g. regg-i ‘hold’).
In short, morphological analysis suggests that so-called 2nd singular imperatives are uninflected for either person or tense/mood/aspect properties (cf. Manzini and Savoia 2007 on Albanian). This in turn invites the conclusion that the modal properties of a positive imperative in the 2nd singular are not provided by the verb form itself, but are contributed by the C modal position where it sits. As for the negation, there is fairly direct evidence that it may lexicalize modality, essentially as proposed by Zanuttini (1997). For instance, several languages (including Albanian, cf. Manzini and Savoia 2007) have distinct lexicalizations for what we may call the declarative negation, co-occurring with the indicative, and the modal negation, co-occurring with the subjunctive and the imperative.

Now, consider French, where the verb moves to C in positive imperatives but remains in I in negative imperatives, so that enclisis in positive imperatives alternates with proclisis in negative imperatives, as in (17). In Probe/Agree terms, the modal operator present in imperatives, namely the necessity operator in the schematic structure in (41) for (17b), can be valued by the negation or by the imperative in C, but not by both. Doing away with Agree formalism, we can say that in (41) the modal properties contributed by the C position and by the negation fall under Economy of lexicalizations and are mutually exclusive.

(41) a. [¬ (*ne, donne-C)   [ne   [C donne   [le   [lui
     b. [¬ (ne, *donne-C)   [ne   [C donne   [le   [lui

Suppose instead that the imperative is associated with the I position. The result is well-formed in French, as in (17c) with the structure in (42), because there is no redundancy in modal properties between the negation and the non-modal verb in I. In Probe/Agree terminology, just the modal negation values the modal operator probe, or perhaps the non-modal verb in I also values it, but only partially, as in (42). This is close to what Zanuttini (1997) proposes in Minimality terms, namely that Minimality is satisfied if the negation moves instead of the imperative, to check the modal.

(42) [¬ (ne, donne-I)   [ne   [le   [lui   [I donne

As mentioned in §2.3, the Minimality approach of Zanuttini cannot easily account for the existence of languages like S. Mauro in (16), where the negation coexists with the C position of the imperative; it is difficult to see what would stop Minimality from applying. Abandoning Minimality in favour of Probe/Agree terminology, we can assume that in S. Mauro the verb in C and the negation can both value the modality operator, as an instance of Multiple Agree, as schematized in (43). In bare syntax terms, in a language like S. Mauro no Economy of lexicalizations applies between the modal properties of the negation and those of the verb. On the assumption that there is a single logical operator for imperative modality, the correct interpretation follows from pair-quantifier formation (May 1989), as already proposed for double-l (i.e. double-D) languages in §3.

(43) [¬ (nu, ʧam)   [nu   [CI ʧam   [al
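As a purely illustrative sketch (the labels below are ours, not the chapter's formalism), the valuation of the imperative's necessity operator in (41)-(43) can be modeled as a choice among items carrying modal properties: in a French-type grammar, Economy lets only one of them value the operator, while an S. Mauro-type grammar allows Multiple Agree; a verb left in I contributes no modal properties and so co-occurs freely with the modal negation.

```python
# Toy sketch of the valuation of the imperative modal operator,
# cf. (41)-(43). Labels are illustrative assumptions of ours.

MODAL_PROPERTIES = {
    "neg": True,        # the modal negation (e.g. French ne, S. Mauro nu)
    "verb_in_C": True,  # a true imperative sitting in the C modal position
    "verb_in_I": False, # a verb left in I carries no modal properties
}

def modal_valued(items, multiple_agree):
    """The necessity operator must be valued; in a single-Agree grammar
    at most one modal item may value it (Economy of lexicalizations)."""
    valuers = [i for i in items if MODAL_PROPERTIES[i]]
    if not valuers:
        return False
    return multiple_agree or len(valuers) == 1

# French: negation plus imperative-in-C is excluded, as in (41)...
assert not modal_valued(["neg", "verb_in_C"], multiple_agree=False)
# ...but negation plus verb-in-I (proclisis) is fine, as in (42).
assert modal_valued(["neg", "verb_in_I"], multiple_agree=False)
# S. Mauro: both the negation and the verb in C value the modal, (43).
assert modal_valued(["neg", "verb_in_C"], multiple_agree=True)
```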

Consider next Italian, which resorts to suppletion by the infinitive, as in (18). Non-negated matrix infinitives in Italian also have an imperative reading, as in (44). This reading is however associated with a generic EPP argument, revealed in (44) by the 3rd person reflexive; 2nd person (singular or plural) reference is impossible. Furthermore, sentences like (44) get only a deontic reading. What Portner (2007) calls the bouletic reading (‘you should by all means fasten your seat belt if you like to’) is not possible, though it is available with an ordinary imperative, e.g. (18a).

(44) Allacciar-si/*ti/*vi la cintura!
     fasten-oneself/yourself/yourselves the seat.belt
     ‘Fasten your seat belt!’

Crucially, the negative imperative formed with the infinitive, as in (18c), has an addressee interpretation of the EPP argument (binding 2nd singular anaphors), as well as the same nuances of necessity (deontic and bouletic) as ordinary imperatives. The suppletion problem is then formally identical to the one we faced with Spurious se in §3: a lexical form independently attested in a language (se in §3, the infinitive here) takes on an additional interpretation in a particular context, which is also characterized by mutual exclusion. A Minimality-based account is bound to proceed via the postulation of a repair mechanism, consisting of an added probe; this is essentially what the empty auxiliary of Zanuttini (1997) amounts to (cf. §2.2). In present terms, on the contrary, the idea is that the 2nd person interpretation and full modal value of the infinitive in Italian (18c) are conferred simply by the context of insertion, i.e. the negation, that triggers the mutual exclusion, but without any causal link between the two.

We begin by observing that the characterization of the imperative operator provided so far, in (41)-(43), is oversimplified. In particular, whatever operator governs the imperative interpretation is probably associated with addressee reference. For instance, Zanuttini (2008) proposes a functional projection, Jussive Phrase, which “has an operator in its specifier that... takes as input a proposition, consisting of the predicate saturated by the subject, and yields as output a property. This property has a presupposition that its argument, corresponding to the subject, refers to the addressee(s)... this operation is what is at the basis of the observation... that the subject of imperatives is not the individual that is being talked about (the subject of predication) but rather the individual that is being talked to”. Let us then say that the imperative operator actually requires two arguments. One of them is a property, provided by the imperative predicate or, in negative environments, by the eventive variable lexicalized by the negation clitic (cf. §4); the other argument is the addressee.

Returning to Italian (18b), the mutual exclusion between ordinary imperatives and negation can be modelled as in French, except that we now assume that the imperative operator has two arguments, the first of which is an addressee and the second either a predicate or an eventive variable, i.e. the so-called negation. The negation and the ordinary imperative are then mutually exclusive in valuing the second argument of the modal. The schema in (45) for (18b) is otherwise the same as for French (41).

(45) a. [¬ (Addr) (*non, da-C)   [non   [C da   [glie   [lo
     b. [¬ (Addr) (non, *da-C)   [non   [C da   [glie   [lo

As in the Spurious se suppletion (§3), it is not the impoverished content of the Italian infinitive that allows for suppletion in negative imperative contexts, but rather its positively specified properties, specifically the fact that the infinitive is a modal form of the verb, capable of carrying deontic necessity in isolation, as in (44), with the structure in (46a). In (44), however, the infinitive is not sufficient to satisfy the second argument of the transitive imperative operator, which explains why it cannot have an imperative reading proper, i.e. an addressee-oriented one, as in structure (46b). In negative contexts, as in structure (46c) for (18c), we may on the contrary assume that the second argument of the imperative operator is supplied by the negation (effectively an eventive NPI, cf. §4). Therefore the infinitive can legitimately occur in the scope of the imperative operator, which confers on it addressee reference and the whole range of necessity readings (in particular the bouletic one).

(46) a. [¬ (allacciare)           [C allacciar   [si
     b. [¬ (Addr) (*allacciare)   [C allacciar   [vi
     c. [¬ (Addr) (non, dare)   [non   [C dar   [glie   [lo
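The argument structure of the imperative operator in (46) lends itself to a small well-formedness check. The sketch below is our own illustration (all names are invented): the addressee-oriented operator needs its second argument saturated either by a true imperative in C or by the negation qua eventive variable, while a bare infinitive does not qualify, reproducing the contrast between (46a,c) and *(46b).

```python
# Illustrative check of the transitive imperative operator of (46).
# Only a true imperative in C or the negation (an eventive variable)
# can saturate the operator's second argument; a bare infinitive cannot.

SATURATORS = {"imperative_in_C", "neg"}

def imperative_reading(addressee_oriented, items):
    """Is the relevant reading available for this set of items?"""
    if not addressee_oriented:
        # Deontic-only reading of a bare infinitive, as in (46a).
        return "infinitive" in items or bool(items & SATURATORS)
    # Addressee-oriented operator: second argument must be saturated.
    return bool(items & SATURATORS)

assert imperative_reading(False, {"infinitive"})            # (46a): deontic
assert not imperative_reading(True, {"infinitive"})         # *(46b)
assert imperative_reading(True, {"neg", "infinitive"})      # (46c)
assert imperative_reading(True, {"imperative_in_C"})        # plain imperative
```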

6. Concluding remarks

In this work, we have addressed three phenomena to which local anti-identity constraints have been applied. One of them is generally thought of as morphological (double-l), another as syntactic (negative imperatives) and a third one as semantic (Negative Concord). Correspondingly, the OCP is generally invoked for the double-l constraint and Minimality for negative imperatives; as for Negative Concord, Minimality/the OCP is perceived as irrelevant in the face of semantic constraints. In reality, some notion of identity is involved in all of these phenomena (including negative imperatives, whose Minimality account implies that negation and imperatives share some property). Locality is also crucial, since mutual exclusion is defined only for a given clitic string, a given proposition (i.e. a given scope of the logical Neg), etc., and not for larger domains.

We have argued that, at least for the phenomena considered, the surface effect of mutual exclusion derives from the fact that some languages enforce a local Economy of lexicalization (also formalizable in terms of Agree vs. Multiple Agree). Identical lexicalizations are thus avoided. The other property shared by all of these phenomena is that violations of local anti-identity (i.e. of Economy of lexicalizations) do not lead to ungrammaticality. In the present conceptualization, this depends on the fact that the very same context that defines Economy of lexicalizations, and thus mutual exclusion, also defines the well-formedness conditions for alternative lexicalizations. Therefore there is no causal link between violation and repair, but only a conceptual link between different lexical choices (alternative numerations). In a nutshell, what we have achieved is on the one hand a unification of the various phenomena mentioned, and on the other hand a demonstration that descriptive repairs do not require global mechanisms of backtracking, look-ahead and Late Insertion.

One may wonder where our proposal leaves conventional anti-identity constraints, namely the OCP and Minimality.
As for the former, we may expect that if we switch from LF primitives (D, Neg, modality) to phonological primitives, an Economy formulation of the type entertained here could apply to the phonological OCP as well. The results of Nasukawa and Backley (this volume) seem to us consistent with ours (i.e. no local anti-identity constraint per se). In morphosyntax, what remains outside the present account is the analysis of phenomena generally construed as involving Minimality violations on core (i.e. A-/A’-) movement, like wh-islands. Even if it turns out that Minimality cannot wholly be reduced to other principles (like the present Economy of lexicalizations), the discussion that precedes shows that the divide between them cannot simply run along the PF vs. syntax/LF boundary, since Economy of lexicalizations covers all three domains (PF, syntax, LF). Indeed, the importance of the PF vs. syntax/LF divide may be overestimated in current minimalist theorizing (Berwick and Chomsky 2011; cf. Kaye, Lowenstamm and Vergnaud 1990 for a classical statement of the contrary perspective).

References

Acquaviva, Paolo
1994  The representation of operator-variable dependencies in sentential negation. Studia Linguistica 48: 91–132.
Adger, David, and Daniel Harbour
2007  The syntax and syncretisms of the person case constraint. Syntax 10: 2–37.
Anagnostopoulou, Elena
2005  Strong and weak person restrictions: a feature checking analysis. In Clitics and Affixation, Lorie Heggie and Fernando Ordoñez (eds.), 99–235. Amsterdam: John Benjamins.
2008  Notes on the person case constraint in Germanic (with special reference to German). In Agreement Restrictions, Roberta D’Alessandro, Susann Fischer and Gunnar Hrafn Hrafnbjargarson (eds.), 15–47. Berlin/New York: Mouton de Gruyter.
Archangeli, Diana, and Doug Pulleyblank
1994  Grounded Phonology. Cambridge, MA: MIT Press.
Berwick, Robert, and Noam Chomsky
2011  The biolinguistic program: the current state of its development. In The Biolinguistic Enterprise, Anna Maria di Sciullo and Cedric Boeckx (eds.), 19–41. Oxford: Oxford University Press.
Bejar, Susana, and Milan Rezac
2009  Cyclic agree. Linguistic Inquiry 40: 35–73.
Belletti, Adriana
2000  Speculations on the possible source of expletive negation in Italian comparative clauses. In Current Studies in Italian Syntax: Essays Offered to Lorenzo Renzi, Guglielmo Cinque and Giampaolo Salvi (eds.), 19–37. Amsterdam: North Holland.
Bianchi, Valentina
2005  On the syntax of personal arguments. Lingua 116: 2023–2067.
Bonet, Eulalia
1994  The person-case constraint: a morphological approach. In The Morphology-Syntax Connection, MIT Working Papers in Linguistics, Heidi Harley and Colin Phillips (eds.), 33–52.
1995  Feature structure of Romance clitics. Natural Language and Linguistic Theory 13: 607–647.

Brody, Michael
2003  Towards an Elegant Syntax. London: Routledge.
Chierchia, Gennaro
1995  Impersonal subjects. In Quantification in Natural Languages, Emmon Bach, Eloise Jellinek, Angelika Kratzer and Barbara Partee (eds.), 107–143. Dordrecht: Kluwer.
Chomsky, Noam
1995  The Minimalist Program. Cambridge, MA: MIT Press.
2001  Derivation by phase. In Ken Hale: A Life in Language, Michael Kenstowicz (ed.), 1–52. Cambridge, MA: MIT Press.
Déprez, Viviane
1999  The roots of negative concord in French and French based creoles. In Language Creation and Language Change, Michel DeGraff (ed.), 329–375. Cambridge, MA: MIT Press.
Giannakidou, Anastasia
1998  Polarity Sensitivity as (Non-)veridical Dependency. Amsterdam: Benjamins.
Graffi, Giorgio
1996  Alcune riflessioni sugli imperativi. In Italiano e dialetti nel tempo: Saggi di grammatica per Giulio Lepschy, Paola Benincà, Guglielmo Cinque, Tullio De Mauro and Nigel Vincent (eds.), 133–148. Roma: Bulzoni.
Grimshaw, Jane
1997  The best clitic: constraint conflict in morphosyntax. In Elements of Grammar, Liliane Haegeman (ed.), 169–196. Dordrecht: Kluwer.
Grohmann, Kleanthes
2011  Anti-locality: too close relations in grammar. In The Oxford Handbook of Linguistic Minimalism, Cedric Boeckx (ed.), 260–290. Oxford: Oxford University Press.
Halle, Morris, and Alec Marantz
1994  Some key features of Distributed Morphology. In Papers on Phonology and Morphology, MIT Working Papers in Linguistics 21, Andrew Carnie, Heidi Harley and Thomas Bures (eds.), 275–288.
Harley, Heidi, and Elizabeth Ritter
2002  Person and number in pronouns: a feature-geometric analysis. Language 78: 482–526.
Harris, James
1994  The syntax-phonology mapping in Catalan and Spanish clitics. In Papers on Phonology and Morphology, MIT Working Papers in Linguistics 21, Andrew Carnie, Heidi Harley and Thomas Bures (eds.), 321–353.

Harris, James, and Morris Halle
2005  Unexpected plural inflections in Spanish: reduplication and metathesis. Linguistic Inquiry 36: 195–222.
Haegeman, Liliane, and Raffaella Zanuttini
1991  Negative heads and the NEG criterion. The Linguistic Review 8: 233–251.
Haspelmath, Martin
1997  Indefinite Pronouns. Oxford: Clarendon Press.
2004  Explaining the ditransitive person-role constraint: a usage-based approach. Ms., Max Planck Institute, Leipzig.
Hauser, Marc, Noam Chomsky, and W. Tecumseh Fitch
2002  The faculty of language: what is it, who has it and how did it evolve? Science 298: 1569–1579.
Kaye, Jonathan, Jean Lowenstamm, and Jean-Roger Vergnaud
1990  Constituent structure and government in phonology. Phonology 7: 193–231.
Kayne, Richard
1991  Romance clitics, verb movement and PRO. Linguistic Inquiry 22: 647–686.
1993  Towards a modular theory of auxiliary selection. Studia Linguistica 47: 3–31.
1994  The Antisymmetry of Syntax. Cambridge, MA: MIT Press.
2010  Comparisons and Contrasts. Oxford: Oxford University Press.
Kratzer, Angelika
2009  Making a pronoun: fake indexicals as windows into the properties of pronouns. Linguistic Inquiry 40: 187–237.
Leben, William
1973  Suprasegmental phonology. Ph.D. dissertation, Massachusetts Institute of Technology.
Longobardi, Giuseppe
1992  In defence of the correspondence hypothesis: island effects and parasitic constructions in Logical Form. In Logical Structure and Linguistic Theory, James Huang and Robert May (eds.), 149–196. Dordrecht: Kluwer.
Manzini, M. Rita
1986  On Italian si. In The Syntax of Pronominal Clitics, Hagit Borer (ed.), 241–262. New York: Academic Press.
1996  Adjuncts and the theory of phrase structure. Mimeo, University of Girona Summer School.

2012  From Romance clitics to case: split accusativity and the Person Case Constraint. In Romance Languages and Linguistic Theory 2009: Selected Papers from ‘Going Romance’ Leiden 2009, Irene Franco, Sara Lusini and Andrés Saab (eds.), 1–19. Amsterdam: John Benjamins.
Manzini, M. Rita, and Leonardo M. Savoia
2002  Clitics: lexicalization patterns of the so-called 3rd person dative. Catalan Journal of Linguistics 1: 117–155.
2004  Clitics: cooccurrence and mutual exclusion patterns. In The Structure of CP and IP, Luigi Rizzi (ed.), 211–250. Oxford: Oxford University Press.
2005  I dialetti italiani e romanci: Morfosintassi generativa, 3 vols. Alessandria: Edizioni dell’Orso.
2007  A Unification of Morphology and Syntax: Studies in Romance and Albanian Varieties. London: Routledge.
2010  Syncretism and suppletivism in clitic systems: underspecification, silent clitics or neither? In Syntactic Variation: The Dialects of Italy, Roberta D’Alessandro, Adam Ledgeway and Ian Roberts (eds.), 86–101. Cambridge: Cambridge University Press.
2011a  Grammatical Categories. Cambridge: Cambridge University Press.
2011b  Mesoclisis in the imperative: phonology, morphology or syntax? Lingua 121: 1101–1120.
May, Robert
1989  Interpreting Logical Form. Linguistics and Philosophy 12: 387–435.
Martineau, France, and Viviane Déprez
2004  Pas rien/Pas aucun en français classique: variation dialectale et historique. Langue française 143: 33–47.
Nasukawa, Kuniya, and Phillip Backley
this volume  Contrastiveness: the basis of identity avoidance.
Neeleman, Ad, and Hans van de Koot
2005  Syntactic haplology. In The Blackwell Companion to Syntax, vol. IV, Martin Everaert and Henk van Riemsdijk, with Rob Goedemans and Bart Hollebrandse (eds.), 685–710. Oxford: Wiley-Blackwell.
Nevins, Andrew
2007  The representation of third-person and its consequences for person-case effects. Natural Language and Linguistic Theory 25: 273–313.
Penka, Doris, and Hedde Zeijlstra
2010  Negation and polarity: an introduction. Natural Language and Linguistic Theory 28: 771–786.

Perlmutter, David
1971  Deep and Surface Constraints in Syntax. New York: Holt, Rinehart and Winston.
Portner, Paul
2007  Imperatives and modals. Natural Language Semantics 15: 351–383.
Rezac, Milan
2006  Escaping the Person Case Constraint. Linguistic Variation Yearbook 6: 97–138.
2008  The syntax of eccentric agreement: the Person Case Constraint and absolutive displacement in Basque. Natural Language and Linguistic Theory 26: 61–106.
Richards, Norvin
2010  Uttering Trees. Cambridge, MA: MIT Press.
Riemsdijk, Henk C. van
2008  Identity avoidance. In Foundational Issues in Linguistic Theory: Essays in Honor of Jean-Roger Vergnaud, Robert Freidin, Carlos Otero and Maria-Luisa Zubizarreta (eds.), 227–250. Cambridge, MA: MIT Press.
Rivero, Maria-Luisa
1994  Negation, imperatives and Wackernagel effects. Rivista di Linguistica 6: 39–66.
Rizzi, Luigi
1990  Relativized Minimality. Cambridge, MA: MIT Press.
Roberts, Ian
1994  Two types of head movement in Romance. In Verb Movement, Norbert Hornstein and David Lightfoot (eds.), 207–242. Cambridge: Cambridge University Press.
2010  Agreement and Head Movement: Clitics, Incorporation and Defective Goals. Cambridge, MA: MIT Press.
Yip, Moira
this volume  Linguistic and non-linguistic identity effects: same or different?
Zanuttini, Raffaella
1997  Negation and Clausal Structure. Oxford: Oxford University Press.
2008  Encoding the addressee in the syntax: evidence from English imperative subjects. Natural Language and Linguistic Theory 26: 185–218.
Zeijlstra, Hedde
2004  Sentential negation and negative concord. Ph.D. dissertation, University of Amsterdam.


Semantic versus syntactic agreement in anaphora: The role of identity avoidance Peter Ackema 1. Introduction There are several well-known differences between stronger forms and weaker forms of pronouns, both where it regards their syntactic and semantic behaviour (see Cardinaletti and Starke 1999) and where it concerns the discourse status of their antecedent, in particular how accessible to the hearer this must be (see Ariel 1990). One striking generalisation about strong pronouns that has been made in these respects is that they must always have a [+human] antecedent. In this paper I will argue for the following four points. (i) A more precise characterisation of the relevant property of strong, as opposed to weak, pronouns is that they must agree semantically rather than syntactically with their antecedent. (ii) So-called ‘semantic agreement’ implies a lack of any syntactic agreement. (iii) This accounts for cases where what is arguably a strong pronoun obligatorily has a non-human referent. (iv) This behaviour of strong pronouns is the result of an instance of identity avoidance, such as can be found in many other cases in natural language. The evidence for these claims comes from the different behaviour of strong and weak pronouns where it concerns agreement in gender with their antecedent in modern standard Dutch. The paper is structured as follows. In §2 I briefly introduce the phenomenon of identity avoidance in general. In §3 I discuss the possibility that identity avoidance imposes a penalty on syntactic agreement, but not on socalled ‘semantic agreement’. §4 discusses the distinction between strong and weak pronouns and introduces the central hypothesis that, if there is an opposition between strong and weak forms for the same pronoun, the strong pronoun does not agree syntactically with its antecedent because of


identity avoidance. §5 and §6 test the hypothesis by discussing the question which kind of antecedent strong and weak pronouns can take in standard Dutch. The gender system of this language is interesting in this respect, since there is a mismatch in the number of genders distinguished in pronouns (three) and in nouns (two). Thus, there cannot be a one-to-one correspondence between ‘semantic agreement’ with the biologically feminine/masculine/neither status of the referent of a noun phrase and ‘syntactic agreement’ with the syntactic gender features of the noun to start with (which could obscure mismatches between the two to some extent). §7 discusses a potential extension of the analysis to bound pronouns and their antecedents.

2. Identity avoidance

Natural language seems to have an aversion to situations in which identical elements appear in close proximity. In phonological theory the importance of dissimilation phenomena has long been recognised, and it has sought to capture these via principles such as the Obligatory Contour Principle (Leben 1973; see Bye 2011 for a recent overview). But syntax and morphology have their own fair share of phenomena that indicate that these modules of grammar also exhibit signs of this aversion. Several such phenomena are discussed in Menn and MacWhinney (1984), Mohanan (1994), Ackema (2001), Neeleman and Van de Koot (2005), Van Riemsdijk (2008), Richards (2010), Hiraiwa (2010), and Nevins (2012), amongst others.

Broadly speaking, there are two different forms of ‘identity’ that can play a role in identity avoidance effects, namely what one could describe as morpho-phonological identity and morpho-syntactic identity (see Nevins 2012). The first type refers to identity of the overt form of two elements.
An example of this type is the phenomenon that in Serbo-Croat, which is a language that in general fronts all wh-phrases in a clause, a wh-phrase is exceptionally left in situ when it is identical in form to another wh-phrase and would therefore end up next to an identical form if both were fronted (see Bošković 2002):

(1) a. Ko šta kupuje?
       who what buys
       ‘Who buys what?’
    b. *Ko kupuje šta?
       who buys what

(2) a. *Šta šta uslovljava?
       what what conditions
    b. Šta uslovljava šta?
       what conditions what
       ‘What conditions what?’

In other cases of such haplology, where such an avoidance strategy may not be available, one of the offending elements can be deleted (as in the case of two adjacent er pronouns in Dutch, see for instance Van Riemsdijk 1978) or replaced by an element with a different form (as in the case of two adjacent clitics si in some Italian dialects, see for instance Grimshaw 1997). The final option is for a language to just tolerate a particular instance of haplology. The latter option certainly occurs as well. This indicates that identity avoidance is not an absolute constraint, not in UG, nor, presumably, in individual grammars (at least, I am not aware of a language that does not tolerate any adjacent identical elements under any circumstances). It will be central to the argument below that, when formulated as a constraint, identity avoidance not only is violable, but is also gradient, so that less severe violations in one option can be tolerated if they prevent a more severe violation in the alternative option. This general outlook is familiar from a theory like Optimality Theory.

The second type of ‘identity’ that can play a role in identity avoidance is identity in morpho-syntactic features. Thus, there are cases in which a language displays an effect when two elements occur close together that do not share the same form, but do share a particular feature or feature set. The effect can be one of deletion again, as for example in the case of the deletion of possessive markers in Romanian in the context of preceding definite determiners (Ortmann and Popescu 2001, Neeleman and Van de Koot 2005):

(3) a. castel-ul alb al băiat-ul-ui
       castle-DEF.M white POSS.SG.M boy-DEF.M-DAT.M
       ‘the boy’s white castle’
    b. castel-ul (*al) băiat-ul-ui
       castle-DEF.M POSS.SG.M boy-DEF.M-DAT.M
       ‘the boy’s castle’


Although the determiner and the possessive marker do share one phoneme here, they are not identical in form, and as noted by Neeleman and Van de Koot (2005) it is unlikely that phonological identity is the trigger of the effect in this case, since indefinites whose stem ends in ul do not require deletion of a following al possessive. Rather, they suggest, it is the fact that definite determiner and possessive marker share a set of gender and number features that gives rise to the effect.

The examples given above might give the impression that identity avoidance only shows its effects when the two offending elements are adjacent. This is not always the case, however, particularly in the ‘morpho-syntactic’ identity cases (although in the ‘morpho-phonological identity’ cases, too, there are some instances which show a long-distance effect of identity avoidance, see Nevins 2012: §3.4). For instance, the ban on multiple accusative NPs within the same vP in Japanese (the ‘double o constraint’, see Hiraiwa 2010) is not lifted by intervening material such as the adverb in (4) (= (66b) from Nevins 2012):

(4) *Taroo-wa Hanako-o tikarazukude kusuri-o nomaseta.
    Taro-TOP Hanako-ACC forcibly medicine-ACC drink-cause-PST
    ‘Taro forcibly made Hanako drink the medicine.’

Thus, it can differ from case to case what the domain is in which identity avoidance makes itself felt. This is important for what follows, as I will argue that there is at least one case where this domain may in principle even cross a sentence boundary.

3. Agreement and identity avoidance

If morpho-syntactic feature identity can trigger identity avoidance effects, then perhaps surprisingly, we may expect such effects to occur in what is one of the most canonical relations of feature identity in grammar, namely in agreement relationships. In what follows, it will indeed be crucial that from the point of view of identity avoidance, agreeing with another element is bad. Let us therefore adopt the following general constraint:1

(5) Don’t agree (in domain D)
    *[Fα]i … [Fα]i

where Fα is a variable over features and co-indexation indicates that a syntactic agreement relation is established. Clearly, formulated in this general way this constraint must be violable and can be counteracted by other grammatical principles that can demand agreement between two particular elements (compare Optimality Theory). I will assume furthermore that the constraint is gradable. By this I mean that the more instances of agreeing features are present in a structure, the worse it is for (5). Of course, the structures to be compared with respect to the number of offending features should be identical in all other relevant respects. In most models that adopt the possibility of competition between syntactic structures or derivations, including syntactic OT and some versions of Minimalism, competition is limited to candidates that share the same semantics, or at least the same thematic relationships, and the same lexical items (see Grimshaw 1997 for discussion). In the cases to be discussed below, the two structures to be compared only differ in whether a strong or a weak form of a pronoun is used.

Indications are that agreement can indeed trigger an identity avoidance effect. Thus, at least according to the analysis in Ackema and Neeleman (2003, 2004), certain cases of ‘agreement weakening’ in Dutch and Standard Arabic are the result of a rule that deletes one of the agreeing features in case the agreeing elements find themselves in a particular local relationship at PF. The central hypothesis of the present paper is that not only are there cases of deletion, but also cases of agreement avoidance that result from identity avoidance, namely in some cases of anaphora. In this respect, it is important to stress again that (5) relates to syntactic agreement between features. It explicitly does not refer to referential identity of the two elements involved.

1 As a reviewer points out, in a theory of agreement such as that of Chomsky (1995) there is an asymmetry between the features involved in an agreement relation, in that in the usual case one will be uninterpretable and the other interpretable, the latter providing a value for the former via the Agree operation. In that sense, the features may be said to be not truly identical. Although (5) can be formulated in such a way as to make it compatible with this theory, I will assume instead that there is no inherent distinction between the features on both elements in an agreement relation, and that agreement truly is feature identification. I cannot discuss this here, but see Ackema and Neeleman (2013) for motivation.

What this means is that there is a contrast between syntactic agreement and so-called ‘semantic agreement’ between a pronoun and an antecedent with respect to (5). When there is syntactic agreement between a pronoun and its antecedent for a morpho-syntactic feature like gender, (5) will be violated. In contrast, in cases of semantic agreement a pronoun’s features are determined by properties of the referent of its antecedent (such as the sex of that referent), rather than by formal morpho-syntactic features (such as gender) of that antecedent. Hence, in those cases there is actually a lack of syntactic agreement, and thereby no violation of (5) is incurred. There is a tension then between the principle in (5) and a general principle that says that a pronoun and its antecedent should agree and share morpho-syntactic features. This will be crucial in the account below of the different behaviour of strong and weak pronouns with respect to syntactic versus semantic agreement.

In this account I will assume, following for instance Corbett (2006) and Audring (2009), that where a pronoun shares morpho-syntactic features with an antecedent, there is indeed an agreement relationship between the two, not just in the case of bound pronouns, but in general. Clearly, this kind of agreement is not subject to the same type of syntactic restrictions as agreement between, say, a verb and a subject. After all, a syntactic agreement relationship between an antecedent and a pronoun can be established even across sentence boundaries, whereas agreement between a verb and a DP is subject to much-discussed locality restrictions. This is no bad thing, I think. It does not have to be the case that each and every agreement relationship is subject to the same structural demands. It may be that agreement as a general phenomenon is not subject to any particular structural constraint at all. Rather, certain types of agreeing elements, in particular those without much if any independent referential content, such as verbal agreement inflection or self-reflexives, need an antecedent that is very ‘accessible’ (compare Ariel 1990), and this need can become grammaticalised and turn into a strict grammatical locality constraint. Arguably, there are indeed different locality restrictions on different types of agreement.
For example, there is evidence that the structural conditions on person agreement differ from those on number agreement; see Baker (2008, 2011) and Preminger (2011) for recent discussion. The restrictions on how local the antecedent for a pronoun must be are not grammaticalised like this (in contrast to the restrictions on how local the antecedent for a self-reflexive must be, see §7 for some discussion), but this does not mean the pronoun cannot stand in a syntactic agreement relation with its antecedent. And indeed, it is not the case that there are no restrictions at all on where a pronoun, rather than an independent R-expression, can be used, but such restrictions are pragmatic in nature. Factors that influence when a pronoun can be used and when an R-expression is required are described in detail in, for example, a theory like Ariel’s (1990, 1991) Accessibility Theory, but I will not go into this here. For what follows, the following points made in this section are crucial:

(6) a. Syntactic feature agreement is bad with respect to identity avoidance (=(5)).
    b. The more instances of agreeing features a structure contains, the worse this is with respect to (5).
    c. A pronoun that shares formal morpho-syntactic gender and number features with its antecedent stands in a syntactic agreement relationship with that antecedent.
    d. A pronoun whose features are determined by properties of the referent of the antecedent (‘semantic agreement’), rather than by the formal morpho-syntactic features of that antecedent, is not in a syntactic agreement relation with the antecedent and therefore does not violate (5).
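Purely as an informal illustration (my own sketch, not part of the paper's formal apparatus), the points in (6) can be rendered as a miniature OT-style evaluation in Python: each candidate form is scored by the number of syntactically agreeing feature instances it contains, semantic agreement incurs no violations at all, and the candidate(s) with the fewest violations survive. The candidate names and feature counts are hypothetical.

```python
# Illustrative sketch only: an OT-style evaluation of the gradient
# constraint in (5), "Don't agree". One violation is counted per
# syntactically agreeing feature instance (point (6b)); semantic agreement
# involves no syntactic feature identity, so it incurs no violations
# (point (6d)). Candidate data below are hypothetical.

def dont_agree_violations(candidate):
    """Count violations of (5) for one candidate form."""
    if candidate["agreement"] == "semantic":
        return 0
    return candidate["agreeing_feature_instances"]

def best_candidates(candidates):
    """Return the candidate(s) incurring the fewest violations of (5)."""
    lowest = min(dont_agree_violations(c) for c in candidates)
    return [c for c in candidates if dont_agree_violations(c) == lowest]

# A weak pronoun has one structural layer fewer than its strong
# counterpart, hence one agreeing feature instance fewer under
# syntactic agreement.
candidates = [
    {"form": "strong, syntactic agreement", "agreement": "syntactic",
     "agreeing_feature_instances": 3},
    {"form": "weak, syntactic agreement", "agreement": "syntactic",
     "agreeing_feature_instances": 2},
    {"form": "strong, semantic agreement", "agreement": "semantic",
     "agreeing_feature_instances": 0},
]

print([c["form"] for c in best_candidates(candidates)])
```

Of course, in the paper the outcome is also shaped by independent principles demanding agreement; the sketch only shows how gradient violation counting compares candidates.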

This leads us to what this paper will focus on: the different behaviour of strong and weak pronouns where it concerns syntactic versus semantic agreement.

4. Strong versus weak pronouns

Very many languages make an opposition in their inventory of pronouns between strong forms and weak forms for pronouns with the same grammatical function. This can express itself in a difference of phonological shape: weak forms are often (but not always) reduced compared to their strong counterparts in that, for example, they contain a schwa instead of a full vowel, and/or cannot be independent phonological words but must phonologically cliticise, and/or are shorter. For example, in Dutch, the masculine and feminine pronouns show the following opposition between strong and weak forms (I leave aside the neuter forms for the moment, as the proper classification of what the strong and weak form of the neuter pronoun is will be a crucial issue below):2

2 There is a difference between the masculine and feminine weak subject forms ie and ze in that, arguably, the former is a special clitic while the latter is not. For example, ie cannot stand in sentence-initial position, whereas ze can (see Ackema and Neeleman 2003 for some discussion). This difference will be mostly irrelevant for what follows, one possible exception being discussed at the end of this section.

(7)               Subject                  Object
    Masculine     strong: hij (/hɛi/)      strong: hem (/hɛm/)
                  weak: ie (/i/)           weak: m (/əm/)
    Feminine      strong: zij (/zɛi/)      strong: haar (/har/)
                  weak: ze (/zə/)          weak: r (/ər/)
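As a purely illustrative aside (my own rendering, not part of the analysis), the opposition in (7) amounts to a small lookup table keyed by gender, grammatical function and strength:

```python
# Illustrative only: the paradigm in (7) as a lookup table. The keys and
# the helper function are hypothetical glue, not the paper's machinery.

PARADIGM = {
    ("masculine", "subject"): {"strong": "hij", "weak": "ie"},
    ("masculine", "object"): {"strong": "hem", "weak": "m"},
    ("feminine", "subject"): {"strong": "zij", "weak": "ze"},
    ("feminine", "object"): {"strong": "haar", "weak": "r"},
}

def pronoun(gender, function, strength):
    """Look up a Dutch masculine/feminine pronoun form from table (7)."""
    return PARADIGM[(gender, function)][strength]

print(pronoun("feminine", "object", "weak"))  # r
```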

As has been noted many times, besides their difference in form, strong and weak pronouns behave differently in various syntactic and semantic ways as well. Cardinaletti and Starke (1999) provide a detailed overview of these differences. I will illustrate a few of them with the Dutch pronouns mentioned in (7). For a start, weak forms cannot normally be stressed, whereas strong forms can:3

(8) a. Ik zag HEM, niet HAAR.
       I saw him not her
       ‘I saw HIM, not HER.’
    b. *Ik zag M, niet R.

Possibly related to this is the fact that weak pronouns cannot be coordinated:

(9) a. Dit product is voor hem en haar.
       this product is for him and her
       ‘This product is for him and her.’
    b. *Dit product is voor m en r.

Weak pronouns also cannot be modified, in contrast to strong ones:

(10) a. Alleen zij kan zo’n boek schrijven.
        only she can such-a book write
        ‘Only she can write such a book.’
     a’. *Alleen ze kan zo’n boek schrijven.
     b. Zij met die hoed zal de vergadering voorzitten.
        she with that hat will the meeting chair
        ‘She with the hat will chair the meeting.’
     b’. *Ze met die hoed zal de vergadering voorzitten.

3 Weak forms can be stressed in metalinguistic use, when the form itself is talked about: ik had het over de zwakke vorm M, niet R ‘I was talking about the weak form m, not r’. But in such usage literally anything can be stressed, even affixes for example.

With respect to their syntactic distribution, Cardinaletti and Starke remark that weak pronouns cannot occur in what may be taken to be the base position for DPs with the relevant grammatical function. This is particularly clear for the weak object pronouns in Dutch, which cannot appear in the basic object position to the immediate left of the main verb’s base position. Strong pronouns can appear in this position (they can also occur further to the left because of the availability of scrambling in Dutch):4 (11)

a. b. c. d.

Karel heeft gisteren haar een boek geleend. Karel has yesterday her a book lent ‘Carl lent her a book yesterday.’ Karel heeft haar gisteren een boek geleend. *Karel heeft gisteren r een boek geleend. Karel heeft r gisteren een boek geleend.

One of the semantic generalisations about strong versus weak pronouns mentioned by Cardinaletti and Starke will become important below. This is that strong pronouns must obligatorily refer to human entities, whereas weak pronouns can refer to non-human entities as well: (12)

Strong pronoun → [+human] referent

This holds for the Dutch pronouns in (7) as well, with the caveat that when a weak form is used to refer to a non-human entity, the masculine form is used by default, rather than the feminine (see §6.1). The difference between strong and weak forms in this respect is illustrated in (13).5 However, we will see below that things are somewhat subtler than (12) suggests. 4

There is a difference between (11a) and (11b) with respect to the discourse status of haar: in (11a) the pronoun must be focused, in (11b) it need not be. Hence, it may be that (11c) is impossible because the weak pronoun cannot be in focus. But whatever the ultimate cause, on an observational level it is indeed the case that weak pronouns cannot occur in situ. 5 Of course, non-human things can be ‘humanised’ by a speaker, in which case strong pronoun use is fine. For example, a sailor might refer to his/her ship with the strong forms zij and haar.

170

Peter Ackema

(13)

a.

context: Wat is er met Jan aan de hand? what is there with John at the hand ‘What is the matter with John?’

Ik heb hem / m al dagen niet gezien. I have him(STRONG) him(WEAK) yet days not seen ‘I haven’t seen him for days.’ b.

context: Waar is die fijne pen toch gebeleven? where is that nice pen yet stayed ‘Where has that nice pen gone?’

Ik heb *hem / m al dagen niet gezien. I have him(STRONG) him(WEAK) yet days not seen ‘I haven’t seen it for days.’ With respect to (13b), it is important to realise that in texts one will very often encounter the written form hem in contexts where it refers to nonhuman entities. However, written Dutch texts are entirely unreliable where it concerns the question whether the pronouns used are strong or weak, since in many cases what looks like a written strong form actually represents a spoken weak form. From primary school instruction onwards, it is imprinted on people not to write ‘colloquial’ (i.e. spoken) forms like ie, m or r, but to write hij, hem and haar instead (interestingly, this does not seem to hold to anything near the same extent for the weak form ze versus strong zij).6 Of course, such written weak forms do occur as well, especially when it is the aim of the writer to be quite informal or explicitly represent spoken language, but it remains the case that very often an apparently strong form in writing will be pronounced as a weak form. Therefore, written language is literally useless as a source of evidence when determining the behaviour of strong versus weak pronouns in Dutch. We can make clearer that it is really the case that when hem is used as a strong form, it 6

That this pressure is alive and well is evidenced by this quote from a recent ‘Taalfouten top 10’ (‘Language error top 10’) compiled by a Dutch secondary school teacher (cited in Bonset 2007): “ ‘Ie’ is geen persoonlijk voornaamwoord; het moet ‘hij’ zijn. Verder is ‘me’ geen bezittelijk voornaamwoord; het moet ‘mijn’ zijn. Als we schrijven hoe we iets uitspreken, schrijft iedereen straks wat anders.” This translates as: “Ie is not a personal pronoun; it should be hij. Also, me is not a possessive pronoun, it should be mijn. If we write the way we pronounce things, then everyone will write something different before long.”

Semantic versus syntactic agreement in anaphora

171

cannot refer to a non-human thing like a pen, by making use of one of the other distinctions between strong and weak forms mentioned above. For example, when hem is modified, it absolutely cannot have a [–human] antecedent: (14)

Ik heb alleen hem gezien. I have only him seen ‘I have seen only him/*it.’

ok: hem = Jan, *hem = pen

Why is this difference between strong and weak pronouns relevant to the issue of identity avoidance in cases of agreeing anaphoric elements discussed in §3? Crucial here is that, if Cardinaletti and Starke’s (1999) analysis of the distinction is on the right track, strong pronouns share more features with their antecedent than weak pronouns do. Cardinaletti and Starke argue that all the differences between strong and weak pronouns can be reduced to one fundamental difference, namely that strong pronouns contain an extra layer of functional structure that is lacking in weak pronouns. They call the extra head that is present a ‘nominal complementiser’, to emphasise the parallel with clauses, where complementisers provide the highest possible layer of structure. Hence, the difference in structure between strong and weak pronouns is as follows for Cardinaletti and Starke, where CN represents the ‘nominal complementiser’: (15)

a. strong:

b. weak:

CNP CN0

XP XP

pronoun

pronoun The precise nature of the extra head is not relevant for the account below. What is crucial is that, whatever the nature of the extra structure, the extra structure will lead to more instances of agreeing features being present in case the pronoun is in a syntactic agreement relation, with the consequence that, while weak pronouns do not altogether avoid violating (5) when syntactically agreeing with an antecedent, strong pronouns violate this condition more in that case. Consider why. Suppose there is agreement for one feature (though the number and nature of the agreeing features does not affect the basic argument) between an

172

Peter Ackema

antecedent DP and a pronoun, say for gender. Since both weak and strong pronouns can agree for this feature, it cannot originate in the extra layer of functional structure the strong pronoun contains, but must originate in a head lower down in their extended projection. Because of the nature of extended projections, the feature will then be present throughout the projection above this point (cf. Grimshaw 1991). When a syntactic agreement relation is then established between pronoun and antecedent DP, all instances of the feature violate (5). Since it has a larger extended projection above the point where the gender feature is introduced, the strong pronoun violates (5) more than the weak pronoun does.7 This is schematised in (16), where ‘FX’ represents the feature in syntactic agreement with an identical feature in the antecedent. (16)

Strong pronoun

Weak pronoun

CNP [+FY +FX +FC] CN[+FC]

XP [+FY +FX] X[+FX]

YP[+FY]

XP [+FY +FX] X[+FX] YP[+FY] Y[+FY]

Y[+FY] Whatever the exact nature of the projections, if the structure of the strong pronoun is identical to that of the weak pronoun with the addition of one layer on top this always means there is an extra instance of the agreeing feature present in the strong pronoun. 8 As a corollary, strong pronouns should have a stronger predilection to avoid syntactic agreement and opt for semantic agreement instead (thereby avoiding violations of (5)) than do weak pronouns. Let us assume the strongest possible hypothesis in this respect:

7

Of course, the agreeing features in the antecedent violate (5) in just the same way. In §7 I will briefly discuss why, if in a syntactic antecedent-dependent relation there is a violation of (5), a strategy that aims at reducing the number of offending features on one of the elements will affect the dependent rather than the antecedent. 8 This is so even in case the agreeing feature does originate in this highest layer itself, since in that case the weak pronoun simply cannot agree for this feature.

Semantic versus syntactic agreement in anaphora

(17)

173

Where there is an opposition between strong and weak pronouns, strong pronouns agree semantically, not syntactically.

Another way of saying this is that, in any syntactic agreement context, a weak pronoun must be chosen if possible; in those contexts where semantic agreement is allowed, a choice between strong and weak form is allowed (where the choice will be made on pragmatic grounds such as those discussed in Ariel’s Accessibility Theory, just like the choice between strong pronouns and R-expressions, see the end of the previous section). There is an overlap between (17) and the observation that strong pronouns must always refer to humans in (12), but I will show below that (17) is the more basic generalisation, from which (12) follows in some cases as a corollary, whereas in other cases (17) will correctly predict an exception to (12). The generalisation in (17) is based on the idea that strong and weak forms are compared with respect to how bad they are with regards to identity avoidance in agreement relations (5), that is, it is based on the idea of competition. This is the relevance of the ‘where there is an opposition between strong and weak pronouns’ bit: if a weak pronoun is really impossible to begin with, there is no option but to use the ‘strong’ pronoun (which then is not really strong, as this is a meaningless term in the absence of an opposition), with the consequence that it may agree syntactically in that case. The question is whether this competition holds per pronoun pair for an entire language, or whether it holds per pronoun pair in particular contexts. By this I mean that it could be the case that in any language that has an opposition between a strong and a weak form for a particular pronoun, the strong form simply always agrees semantically. 
But it could also be the case that, even if a language has a strong-weak opposition for a particular pronoun, the strong pronoun can agree syntactically in those syntactic contexts where the weak pronoun is barred for independent reasons, for instance because it is a special clitic with a restricted syntactic distribution. In connection with the latter, there is some evidence from Dutch that is suggestive, although the judgements are rather subtle (also because of the confound mentioned in fn. 9). Above I claimed that, in line with Cardinaletti and Starke’s generalisation in (12), Dutch masculine and feminine strong pronouns must be [+human], and as noted I will argue that (12) in fact derives from (17) in these cases. However, there is a case where what is formally the strong form of the masculine pronoun, namely hij, can nevertheless easily refer to non-human entities (without these being ‘humanised’), namely in sentence-initial position. An example is (18). Precisely in

174

Peter Ackema

this position the weak counterpart ie cannot occur because of its special clitic status (see fn. 2). Wherever ie can occur, for example directly after the complementizer in an embedded clause, there is at least a preference to use this rather than hij to refer to non-human entities, as illustrated by (19a,b).9 (18)

context:

Waar is die fijne pen toch gebeleven? where is that nice pen yet stayed ‘Where on earth has that fine pen gone?’

Hij ligt daar op de tafel. he lies there on the table ‘It is lying there on the table.’ (19)

a. context:

Waar is Jan toch gebeleven? where is Jan yet stayed ‘Where on earth has Jan gone?’

Ik geloof dat hij / ie daar dronken onder de tafel ligt. I believe that he(strong) / he(weak) there drunk under the table lies ‘I think he is lying there under the table, drunk.’ b. context:

Waar is die fijne pen toch gebeleven? where is that nice pen yet stayed ‘Where on earth has that fine pen gone?’

Ik geloof dat ??hij / ie daar op de tafel ligt. I believe that he(strong) / he(weak) there on the table lies ‘I think it is lying there on the table.’ It should be noted, though, that when there is another explicit indication that hij is being used as a strong form, for instance if it is modified, it can9

It should be noted again that there is pressure to avoid writing weak forms like ie, so that in written texts many instances of embedded clauses starting with a sequence dat hij ‘that he’ will be found where hij refers to a non-human entity. I contend that in these cases written hij usually represents spoken /i/ rather than spoken /hɛi/. The judgements in (18) and (19) of course should be taken to be about spoken /i/ versus /hɛi/. To me the contrast between using /hɛi/ rather than /i/ to refer to a non-human in (18) (where it is entirely unmarked) and doing this in (19b) (where it is odd at least) is quite sharp.

Semantic versus syntactic agreement in anaphora

175

not refer to a non-human entity even in sentence-initial position. Thus, in (20) hij cannot refer to a pen for instance: (20)

Alleen hij ligt daar op de tafel. Only he lies there on the table ‘Only he/*it is lying on the table.’

This would indicate that hij (in contrast to hem) may function as both a strong and a weak form in Dutch. In that case (19b) would just indicate that ie is preferred as the weak form for the pronoun over the weak version of hij, in those contexts where the former can occur. I will leave this matter open here. In what follows I will show that in at least one case study, the strong hypothesis in (17) is likely to be correct. This case study concerns gender agreement between pronouns and their antecedent in Dutch. 5. Pronouns and gender In cases of anaphora, the choice of a masculine, feminine or neuter pronoun depends, of course, on properties of the antecedent. In a language like English, which has lost grammatical gender as a property of nouns, semantic agreement is the only possibility, so the determining factor is the natural gender of the referent of the antecedent. If the antecedent refers to a male entity, a masculine pronoun like he will be used in anaphora; if the antecedent refers to a female entity, a feminine pronoun like she will be used; elsewhere we get it.10 In languages that have a grammatical gender system, a conflict can arise between the grammatical gender of a noun and the natural gender of its referent. For example, in Dutch the grammatical gender of the word meisje ‘girl’, with its obviously female referent, is neuter (the word is formally a diminutive and all diminutives are neuter in Dutch). Amongst other things, 10

Throughout I focus on referring pronouns. Interesting complications with the pronominal gender system arise when considering pronouns bound by a quantifier. Heim and Kratzer (1998) argue that in such cases the pronoun’s φ-features act as a restriction on the variable introduced by the pronoun. Cases such as Only Mary did her homework are problematic for this assumption, since (for many speakers) this statement is not restricted such as to apply only to homework done by female students. For discussion of these, and similar, cases, see for example Heim (2008) and Spathas (2010).

176

Peter Ackema

this shows itself by the fact that this word obligatorily combines with the neuter rather than the non-neuter definite determiner. As a consequence, in principle there is a choice of which pronoun to use in case the antecedent is a noun like meisje. The choice of pronoun can either be based on the grammatical gender of the antecedent noun, so that there is syntactic agreement between pronoun and antecedent. Alternatively, it can be based on the natural gender of the referent of the antecedent, so that there is semantic agreement, but, as discussed in §3, crucially no syntactic agreement. The situation is especially interesting in languages that have a grammatical gender system, but make less distinctions in this than there are gender distinctions in the pronominal paradigm, such as Dutch and French, which have two grammatical genders, but the usual three-way gender distinction for pronouns. Let us consider the Dutch situation by way of example, as recently discussed in detail by Audring (2009). In standard Dutch, nouns either carry neuter or non-neuter grammatical gender. Following the usual terminology, I will refer to the latter as common gender.11 Different grammatical gender manifests itself, amongst other things, in a different choice of definite determiner (de for common nouns versus het for neuter nouns) and a different choice of relative pronoun (die for common nouns versus dat for neuter nouns, although even relative pronouns sometimes exhibit semantic agreement rather than syntactic agreement; see also Corbett 2006). Personal pronouns show a distinction between three genders, just as in English for example. The strong and weak forms of the masculine and feminine pronouns were given in (7). The neuter pronoun can take the form het, pronounced /hɛt/, or t, pronounced /ət/ (there is no further distinction between subject forms and object forms for neuter pronouns). 
Given their respective phonological forms, it seems plausible to assume that het is a strong form, opposed to weak t, and indeed it appears to be a fairly common assumption that the two stand in such an opposition. For example, Audring (2009: 97) introduces “the neuter pronoun with the full form het and the clitic form (e)t”.12 However, I will argue

11 Several dialects of Dutch have retained a three-way grammatical gender system. The data and judgements in this paper should all be taken to reflect the standard variant.

12 Although in an earlier footnote on p. 62 she finds it questionable whether the full form het actually occurs in the spoken language at all. I certainly agree that, just as with most other weak forms (see above), there is indeed pressure not to use t as a written form, so that in very many instances written het represents the spoken /ət/ form. But this does not mean that a spoken pronoun /hɛt/ does not exist as well.

Semantic versus syntactic agreement in anaphora

177

below (§6.2) that such a classification is incorrect, and that both are equally weak forms.

The mismatch between two grammatical genders and three pronominal genders ensures that the choice of a particular pronoun cannot be entirely governed by syntactic agreement in the first place. There must be instances of semantic agreement in any case. At first sight, one might expect the cases of semantic agreement to be limited to choices between masculine and feminine pronouns with common gender antecedents, since a match seems possible between grammatically neuter nouns and neuter pronouns. This is clearly not the case, however. First of all, semantic agreement can override syntactic agreement even when the antecedent is neuter. Thus, it is perfectly fine (the preferred option, even) to refer to neuter nouns that have a clearly male or female referent, such as the diminutive forms jongetje ‘little boy’ and meisje ‘girl’, with a masculine or feminine pronoun. Less well known, and perhaps more surprising, is the reverse situation. But as Audring (2009) extensively documents, there are in fact many cases in which common-gender antecedents trigger semantic agreement that results in a neuter pronoun. Audring shows that this can happen when the antecedent has semantic properties such that it is low in individuation. Mass nouns provide a good example, as in (21) (from Audring 2009: 97–98), where wijn and puree are common gender nouns, witness de/*het wijn and de/*het puree. (21)

a. een decanteerfles, daar stop je je wijn in en dan kan t luchten.
   a decanter there put you your wine in and then can it(NEUTER) breathe
   ‘A decanter, you put your wine in it and then it can air.’
b. Ik vind puree van echte aardappelen altijd lekkerder want het is wat steviger.
   I find puree of real potatoes always tastier because it(NEUTER) is somewhat firmer
   ‘I always find puree made from real potatoes tastier, as it is a bit firmer.’

The question, then, is when we get semantic agreement and when syntactic agreement. Audring discusses a number of factors that could be relevant, opposing personal pronouns with respect to these factors to other elements that can agree with the noun in gender (and which, to varying degrees, show a greater predilection for syntactic agreement than personal pronouns do; compare also Corbett’s 1979 Agreement Hierarchy). If the reasoning in §4 holds water, then with regard to the behaviour of personal pronouns themselves the distinction between strong and weak forms must be important, since according to (17), where they stand in opposition to a weak pronoun, strong pronouns should always agree semantically rather than syntactically. Let us see to what extent this is borne out by the Dutch data.

6. Strong pronouns and semantic agreement

6.1. Masculine and feminine pronouns

Regarding masculine and feminine pronouns, we have in fact already seen that their strong forms must indeed agree semantically. This is because, in their case, the generalisation about semantic agreement reduces to the generalisation in (12) that strong pronouns must have a [+human] referent. Or, to be precise, it reduces to the observation that, where there is an opposition between a strong and a weak form, the strong form must have a referent with a readily identifiable male or female biological gender, which of course includes a number of higher animals. But we can take [+human] as a, slightly anthropocentric, shorthand for that. Here are some more examples with masculine pronouns showing this:13 (22)

context: Had je Jan / die snelheidscamera niet gezien?
         had you John / that speed camera not seen
         ‘Didn’t you see John / that speed camera?’

a. Nee, ik had m niet gezien.
   no I had him(WEAK) not seen
   ‘No, I had not seen him/it.’
b. Nee, ik had hem niet gezien.
   no I had him(STRONG) not seen
   ‘No, I had not seen him/*it.’

13 Recall that sentence-initial hij can easily refer to non-human things, but that this is not an exception to the generalisation, either because it does not stand in opposition to weak ie in this position (so it is not a strong form in the relevant, competitive, sense), or because it may be that hij itself functions as a weak form in the relevant cases; see the discussion of (18)-(19)-(20) at the end of the previous section.

(23) a. Meestal doet-ie het.
        usually does-he(WEAK) it
        ‘Usually he does it.’ / ‘Usually it functions.’
     b. Meestal doet hij het.
        usually does he(STRONG) it
        ‘Usually he does it.’ / *‘Usually it functions.’

Feminine strong pronouns, too, must refer to [+human] referents. In fact, all feminine pronouns must refer to [+human] entities, even the weak ones:14

(24) a. Ik zie haar.
        I see her(STRONG)
        ‘I see her/*it.’
     b. Ik zie r.
        I see her(WEAK)
        ‘I see her/*it.’

(25) a. Zij ligt daar.
        she(STRONG) lies there
        ‘She/*It is lying there.’
     b. Ze ligt daar.
        she(WEAK) lies there
        ‘She/*It is lying there.’

This can be accounted for by the assumption that syntactic agreement with a common-gender antecedent always results, as a default rule, in use of the masculine pronoun. In that case, the feminine pronoun, whether strong or weak, can only be used in semantic agreement. For the data discussed so far, then, the generalisation in (12) seems to suffice, and there would appear to be no reason to think that the proper generalisation is really (17), the one that I argued follows from considerations of identity avoidance. However, things are different when we take into account the neuter pronouns in Dutch.

14 As noted before, this includes things that are ‘humanised’, as in the case of the sailor referring to a ship with ze ‘she’ and haar ‘her’ (note that schip ‘ship’ is a neuter noun, so this would be an instance of semantic agreement of the type meisje ... zij ‘girl ... she’). Recall also (from fn. 11) that these data reflect the standard variant, with its two-way grammatical gender system opposing common and neuter gender. In those variants of Dutch that have retained a three-way grammatical gender system, so where feminine still exists as a distinct grammatical gender for nouns, weak forms can of course agree syntactically with such a noun, whether or not it refers to a human.

6.2. Neuter pronouns

As noted, a conventional assumption concerning the neuter personal pronoun in Dutch appears to be that het, or rather spoken /hɛt/ (cf. fn. 12), is its strong form and t (/ət/) is its weak form. If so, it can be observed that the weak neuter pronoun has a predilection for referring to non-human entities (26a), although it is not impossible for it to take a [+human] referent under syntactic agreement, as in (26b): (26)

a. Ik zag t.
   I saw it(WEAK)
   ‘I saw it/*him/*her.’
b. A: Heb jij dat rare mannetje onlangs nog gezien?
      have you that strange man-DIM recently yet seen
      ‘Did you see that strange little man recently?’
   B: Ja, ik zag t gisteren nog in de supermarkt.
      yes I saw it(WEAK) yesterday yet in the supermarket
      ‘Yes, only yesterday I saw him in the supermarket.’

What about the strong neuter pronoun, supposedly het? As mentioned in §5, it is shown in detail by Audring (2009) that there can be semantic agreement that involves non-human referents for neuter pronouns, namely when we are dealing with a referent that is low in individuation. If the generalisation in (17), which claims that where there is a strong-weak opposition strong pronouns must agree semantically, is correct, we would expect it to be possible to use the apparently strong neuter pronoun het in such cases (as well as weak t), but not in cases like (26b) where the pronoun clearly agrees syntactically rather than semantically (the antecedent is a grammatically neuter noun with a human, individuated, referent). But in fact, (26b) remains possible when we replace t by het. That would appear to be good news for the generalisation in (12) instead, given that the antecedent is human in (26b). On the other hand, it is unexpected for (12) that, as it turns out, replacing t by het does not make a difference in (26a) either, as here there is a clear preference for the antecedent not to be human. What is going on here?


The first part of an answer to this is that the usual classification regarding het is wrong: het, also when pronounced /hɛt/, is just as weak a form as /ət/; it is never a strong one. In contrast to the opposition between, for instance, the masculine forms /hɛm/ and /əm/, the opposition between /hɛt/ and /ət/ is a purely phonological one; it is not a strong-weak opposition in the sense of Cardinaletti and Starke (1999) and others. This can be demonstrated clearly by considering the tests Cardinaletti and Starke mention as distinguishing between strong and weak forms (see §4). On all criteria that indicate that something is a strong form, het (here representing spoken /hɛt/) fails in the same way that t (/ət/) fails. First, het cannot appear in the base position for objects in Dutch, left-adjacent to the main V position: (27)

a. Ik heb gisteren hem/*het gezien.
   I have yesterday him/it seen
   ‘I saw him/*it yesterday.’
b. Ik heb hem/het gisteren gezien.

Second, it cannot be coordinated: (28)

a. Met Klaas en hem/*het gaat het niet goed.
   with Klaas and him/it goes it not well
   ‘Things do not go well for Klaas and him/it.’
b. Hij en zij/*het zijn de beste dingen in mijn leven.
   he and she/it are the best things in my life
   ‘He and she/it are the best things in my life.’

Third, it cannot be modified: (29)

a. Alleen zij/*het is de oorzaak.
   only she/it is the cause
   ‘Only she/it is the cause.’
b. Wie/Wat begrijp je niet? Hem/*Het met die rare principes.
   who/what understand you not him/it with those strange principles
   ‘Who/What don’t you understand? Him/It with those strange principles.’

And finally, it cannot be stressed and contrasted (modulo fn. 3):

(30) Ik ga binnenkort weg, maar ik denk dat ik HEM/*HET nog wel zal zien.
     I go shortly away but I think that I him/it yet well will see
     ‘I will leave shortly, but I think that I will still see HIM/IT at least.’

So it seems there is no strong form of the neuter personal pronoun in Dutch at all. One may think that this indicates that we need both the generalisation ‘strong pronouns must refer to humans’ in (12) and ‘strong pronouns must agree semantically’ in (17), because (if Audring’s generalisation that semantic agreement for neuter pronouns involves non-individuated, and therefore likely non-human, antecedents is correct) it would then actually be derived that there can be no strong neuter pronoun, as it would have to refer to a human and a non-human at the same time. But in that case, of course, it is impossible to see the second generalisation (17) as the more fundamental one and make the first generalisation (12) follow from it. What may really be going on, however, is the following.

6.3. Dat as the strong counterpart of weak het/t

I propose that there is a strong counterpart to the weak neuter personal pronoun het/t, namely dat. This is identical in form to the neuter distal demonstrative pronoun. Of course, demonstratives themselves can be used anaphorically. The anaphoric use of both neuter demonstrative dat and common gender die is extensively discussed by Audring (2009), for instance. My contention, however, is that, besides its use as a demonstrative, dat also fills the strong neuter personal pronoun slot in Dutch, and that in this role it does indeed comply with (17), rather than (12). In other words, the Dutch personal pronoun system in (7) should be extended as follows:15

15 The second use of dat, as a demonstrative, is not part of the personal pronoun system and hence falls outside of this opposition and the generalisation about it (the same holds, incidentally, for the common gender demonstrative die). Nor is a third use of dat, namely as the neuter relative pronoun, part of this system.

(31)
               Subject                      Object
  Masculine    strong: hij (/hɛi/)          strong: hem (/hɛm/)
               weak: ie (/i/)               weak: m (/əm/)
  Feminine     strong: zij (/zɛi/)          strong: haar (/har/)
               weak: ze (/zə/)              weak: r (/ər/)
  Neuter       strong: dat (/dɑt/)          strong: dat (/dɑt/)
               weak: het/t (/hɛt/, /ət/)    weak: het/t (/hɛt/, /ət/)

Let us consider what evidence there is for this. First of all, it is probably unsurprising that dat does behave as a strong form as such, since it will do so anyway in its use as a demonstrative. Thus, in all examples in (27) to (30) where het was not allowed, use of dat is fine: (32)

Ik heb gisteren dat gezien.

(33) a. Met Klaas en dat gaat het niet goed.
     b. Hij en dat zijn de beste dingen in mijn leven.

(34) a. Alleen dat is de oorzaak.
     b. Wat begrijp je niet? Dat met die rare principes.

(35) Ik ga binnenkort weg, maar ik denk dat ik DAT nog wel zal zien.

But how can we show that there is a personal pronoun dat that complies with (17)? If it does, it should be bad, contrary to the generalisation in (12), to refer to humans with this strong form, because of Audring’s generalisation that when neuter pronouns agree semantically they take non-individuated antecedents. Now, when clearly used as a demonstrative, there is no question that dat can agree syntactically, and hence take a grammatically neuter noun that refers to a human, such as meisje ‘girl’, as its antecedent:

(36) Q: Welk meisje?
        which girl
        ‘Which girl?’
     A: Dat daar.
        that there
        ‘That one over there.’


In fact, Audring (2009: 162) finds that demonstratives in general are “even less frequent switchers” than personal pronouns, meaning that, to a greater extent than personal pronouns, they disprefer semantic agreement in favour of syntactic agreement. However, consider now the following data in this light. Whenever dat is used in a way that is not clearly demonstrative, there appears to be a contrast with weak het/t in its ability to refer to humans. In (26b) we already saw an example showing that weak t can syntactically agree with a neuter noun that refers to a human. Replacing t with dat in examples of this type is quite bad, however:16

(37) Q: Heb jij dat rare mannetje onlangs nog gezien?
        have you that strange man-DIM recently yet seen
        ‘Did you see that strange little man recently?’
     A: Ja, ik zag t/?*dat gisteren nog in de supermarkt.
        yes I saw it(WEAK)/it(STRONG) yesterday yet in the supermarket
        ‘Yes, I saw him in the supermarket only yesterday.’

(38) Ik heb gisteren een fantastisch zesjarig pianistje gehoord.
     I have yesterday a fantastic six-year-y pianist-DIM heard
     Ik denk dat t/?*dat een grote toekomst tegemoet gaat.
     I think that it(WEAK)/it(STRONG) a big future towards goes
     ‘I heard a fantastic six-year-old pianist yesterday. I think he/she has a great future.’

(39) Het slachtoffer van het verkeersongeval had een zware klap gehad en
     the victim of the traffic-accident had a heavy hit had and
     het was niet zeker of het/?*dat vlug bij zou komen.
     it was not sure if it(WEAK)/it(STRONG) quickly by would come
     ‘The victim of the accident had been hit badly and it was not certain whether he/she would regain consciousness soon.’

One might think that the preference for weak het/t over strong dat in these examples is a matter of pragmatics, since the antecedent is quite close by, hence quite accessible. This is not likely to be the explanation for the contrast, however. If we force a context in which a strong form must be used,16

16 In all these cases with a neuter noun referring to a human, there is a preference to apply semantic agreement and therefore use a masculine or feminine pronoun instead of a neuter one in the first place. But the point is that syntactic agreement is not impossible, but resists the strong form dat, as opposed to the weak form het/t.


for example if the anaphoric pronoun is contrastively stressed, dat still cannot be used in these cases. If anything, this makes things even worse, as (40) shows. Note that the (semantically agreeing) strong masculine pronoun hem ‘him’ is fine in such a context.

(40) Q: Heb jij dat rare mannetje en z’n vrienden onlangs nog gezien?
        have you that strange man-DIM and his friends recently yet seen
        ‘Did you see that strange little man and his friends recently?’
     A: Nou, ik heb HEM/*DAT gisteren nog in de supermarkt gezien,
        well I have him/it(STRONG) yesterday yet in the supermarket seen
        maar z’n vrienden niet.
        but his friends not
        ‘Well, I saw him in the supermarket only yesterday, but not his friends.’

Furthermore, if we replace the human antecedent by one that is appropriate as antecedent for dat in a semantic agreement relation, such as a mass noun, use of dat in the same type of contexts is acceptable:

(41) Q: Hou jij van chocoladepudding?
        hold you of chocolatepudding
        ‘Do you like chocolate pudding?’
     A: Ja, ik heb het/dat gisteren nog in de supermarkt gekocht.
        yes I have it(WEAK)/it(STRONG) yesterday yet in the supermarket bought
        ‘Yes, I bought it in the supermarket only yesterday.’

(42) De tomatensoep was een mislukking dus het was niet zeker of de
     the tomatosoup was a failure so it was not sure if the
     gasten het/dat wel wilden eten.
     guests it(WEAK)/it(STRONG) well wanted eat
     ‘The tomato soup was a failure, so it was uncertain whether the guests would want to eat it.’

So, non-demonstrative dat really does have a dislike of human antecedents, in line with what (17) predicts. There are two caveats to be made here. The first concerns spoken versus written language again. In written language, applying semantic rather than the supposedly ‘correct’ syntactic agreement is frowned upon by some


prescriptivists. Combined with the pressure not to write weak forms, this means we can expect instances of dat with human antecedents in written language even when dat is not being used in an obviously demonstrative sense. An example is the one with which Audring (2009: 13) introduces the very possibility of using dat anaphorically:

(43) Ken je zijn dochtertje?
     know you his daughter.DIM
     ‘Do you know his daughter?’
     Dat is al zeven.
     that is already seven
     ‘She’s seven already.’

However, Audring herself already remarks that, compared with other possible choices of anaphor in this case (such as the semantically agreeing feminine pronoun ze or the common gender demonstrative die), dat is “the preferred option in writing” while the other options “sound more natural in spoken language”. As before, it is the actual spoken language that is of concern here, and in the spoken language dat in cases like (37)-(40) is really quite unnatural indeed, that is to say, quite bad.

The second caveat is that, just as it is possible to ‘humanise’ a non-human thing and then have a (strong or weak) masculine or feminine pronoun agree semantically with it (compare §4), it is not impossible to ‘dehumanise’, or at least ‘de-individuate’, a human and then have a (strong or weak) neuter pronoun agree semantically with it. Thus, an example like (44) is possible with a strongly pejorative meaning:

(44) Q: Ken jij Miep?
        know you Miep
        ‘Do you know Miep?’
     A: Oh god, ik heb dat ooit nog als buurvrouw gehad.
        oh god I have that ever yet as neighbour had
        ‘Oh God, she was my neighbour at one point.’

Apart from these caveats, I contend that examples like (37)-(40) are representative of the dislike of dat for taking a [+human] antecedent. As noted, this cannot follow from dat being a demonstrative, since, as Audring has shown, the demonstratives have a greater rather than a lesser preference than personal pronouns for syntactic rather than semantic agreement (and see (36)). However, if there is indeed an incarnation of dat in the language that is a


personal pronoun, which stands in a competitive opposition with weak het/t where it concerns identity avoidance under agreement, then these data are as predicted by (17) (given Audring’s generalisation that semantic agreement for neuter pronouns involves antecedents low in individuation).

I conclude, then, that the Dutch data suggest that (17), repeated here as (45), may be the fundamental generalisation where it concerns the behaviour of strong versus weak pronouns.

(45) Where there is an opposition between strong and weak pronouns, strong pronouns agree semantically, not syntactically.

The generalisation that strong pronouns must have a [+human] antecedent (12) is a corollary of this in the case of masculine and feminine pronouns, but the behaviour of the Dutch neuter pronoun, which does not comply with (12) but does comply with (17), shows that (17) is the more fundamental generalisation. In turn, (17) follows from considerations of identity avoidance, under the assumptions that
(i) pronouns that share the grammatical gender feature of their antecedent stand in a syntactic agreement relation with that antecedent, regardless of the structural configuration that holds between the two;
(ii) syntactic agreement is a violation of identity avoidance;
(iii) because of their larger extended projection, strong pronouns contain more instances of agreeing features than weak ones, and hence violate identity avoidance more in cases of syntactic agreement with the antecedent;
(iv) semantic ‘agreement’ implies the absence of syntactic agreement.

Of course, this should be tested further on a wider range of data from other languages. Here, though, I will conclude with one further possible piece of evidence in favour of this approach.

7. A potential extension to Binding Theory

So far, we have considered the consequences of identity avoidance under agreement for the choice of weak versus strong pronoun in cases where coreference between the anaphoric pronoun and the antecedent is not a problem as such. Of course, there are also grammaticalised restrictions on possible coreference relations, known under the name of Binding Theory. In this section I will consider whether we can find evidence in this domain as well for the idea that semantic agreement avoids violations of identity avoidance.


As noted by Prinzhorn, Van Riemsdijk and Schmitt (2010), Principle C of Binding Theory can be seen as another case of identity avoidance:

(46) a. *She_i saw Mary_i in the mirror.
     b. *He_i thinks that Bill_i is brilliant.

The question is which notion of ‘identity’ plays a role here. Clearly, it cannot be that a pronoun cannot c-command any DP that happens to have the same phi-features, since the examples in (46) are perfectly fine if there is no binding relationship.17 If the reasoning in §4-§6 above holds water, the mere fact that she and Mary refer to the same person, that is, that there is ‘semantic agreement’ between the two, should not trigger any effect either. It is only when there is syntactic agreement between the two that identity avoidance effects are to be expected. As it happens, there are a number of theories of binding that argue that an agreement relationship is established between the binder and the bindee; see for instance Reuland (2005) and Rooryck and Vanden Wyngaerd (2011) for recent discussion. If so, Principle C becomes understandable as a violation of (5). Provided that, as discussed above, we see (5) as a violable and gradient constraint, it actually accounts for the relationship between Principles A, B and C. Consider why.

17 The exact structural relationship the two elements need to stand in for a Principle C effect to arise is immaterial here. Bruening (2012), for example, argues that it is not c-command, but a notion of ‘precede and phase-command’ that is relevant.

The general idea is quite familiar from the literature: suppose that, in general, there is a principle that says that, when choosing a bound anaphoric element, the element with the least independent referential content should be chosen. However, this principle is counteracted by a locality principle that says that certain types of anaphors must be bound within a local domain. Which anaphors these are, and which domain counts as local enough, may vary parametrically somewhat per language (for discussion of such issues see for instance Wexler and Manzini 1987 and Reinhart and Reuland 1993), but for ease of reference let us call them self-reflexives. Hence, in principle a self-reflexive should be used as bound anaphor (Principle B: do not use a pronoun), unless this is impossible because the antecedent is not local enough (Principle A). (Another reason why it may be impossible to use a self-reflexive can be that the language’s lexicon simply lacks such elements altogether; indeed, in that case the next least contentful element, namely a pronoun, can be used as a locally bound anaphor; see Pica 1984 and Reinhart and Reuland 1993.) Outside of the local domain, a pronoun is preferred as bound anaphoric element over an even more contentful R-expression (Principle C). Theories based on this general idea of competition between elements with varying degrees of referential content as candidates for anaphor go back at least to Reinhart (1983), and can also be found in Burzio (1989, 1991), Richards (1997) and Safir (2004), for example.

Given the discussion in §3, the principle that determines that the ‘least contentful’ element should be chosen as bound anaphor can be taken to be (5). The principle can be violated, so an agreement relationship can be established between binder and bindee. But as noted, the principle is also gradient, meaning that the fewer features agree, the better. Under the assumption that reflexives contain fewer features than pronouns (Burzio 1991, Reinhart and Reuland 1993), which in turn contain fewer features than R-expressions (see §4), the mutual relations of these elements with respect to binding (which involves agreement) follow from (5) plus the appropriate locality condition on the use of self-reflexives.

Note that there is an asymmetry in this case between the binder, the controller of the agreement relationship, and the bindee, the target of the agreement relationship. It is the target of the agreement relationship that avoids violations of (5) as much as possible. If this asymmetry were not there, a distinction between Principle B and Principle C could not be made. After all, from the point of view of identity avoidance alone, syntactic agreement between Jim and he is as bad in (47a) as it is in (47b). (47)

a. Jim_i hopes that he_i can take a day off.
b. *He_i hopes that Jim_i can take a day off.

Apparently, when there is an identity avoidance issue in agreement, something is done to the target rather than the controller. The same asymmetry arises in the cases of agreement weakening discussed in Ackema and Neeleman (2003), where some agreeing features are deleted in the verb (the target of the agreement relation), not in the subject (the controller). Ackema and Neeleman suggest a functional explanation for this asymmetry: a hearer determines what the referent of a subject is on the basis of the features of that subject, rather than on the basis of the agreeing features of the verb. Hence, partially deleting features in the subject would mislead the hearer. Only when there is no subject to begin with (as in cases of pro drop, for instance) does the hearer rely on the agreeing features on the verb to determine the missing subject’s reference (and indeed, pro drop and agreement weakening are mutually incompatible; see also Ackema and Neeleman 2012 for discussion). Essentially the same explanation carries over to the case at hand: a hearer will rely on the controller of the agreement relation to determine the reference of a pair of coreferent agreeing elements, rather than on the target. Hence, an element with fewer independent referential features can be used for the target in the agreement relation, but not for the controller.

Of course, another question is which of the two elements can in principle count as the antecedent in an anaphoric relationship (the controller of the agreement relation) and which as the dependent element (the target of the agreement relationship) in the first place. This is covered by Williams’s (1997: 588) ‘General pattern of anaphoric dependence’ (GPAD), which essentially states that a dependent element in an anaphoric relationship must either follow the antecedent or be in a clause that is subordinate to the clause containing the antecedent.

Summarising so far, the relationship between Binding Principles A, B and C can be seen as the result of a competition between elements to avoid the violation of identity avoidance that results from agreement, this competition being counteracted by locality constraints on certain anaphors. One result of this is that R-expressions, the most contentful nominal expressions, should always lose out as a possible bindee: Principle C. However, if the central hypothesis discussed in §4-§6, namely that semantic agreement is a way of avoiding the identity avoidance violation that syntactic agreement gives rise to, holds water, and if (17)/(45) is correct, it is predicted that Principle C effects should be voided when the R-expression is bound by a strong instead of a weak pronoun. At least, that should be the case where the strong form stands in opposition to a weak one.
This prediction appears to be correct, as it can account for examples such as (48) and (49), which are clearly better than examples in which Condition C is violated.18

18 A reviewer notes that, given this reasoning, Principle C should be voided regardless of whether the strong pronoun is contrastive, as in (48) and (49), or not. The reviewer provides the following example, which, given that in section 6 it was argued that dat is the strong version of the neuter personal pronoun in Dutch, seems to contradict this:
(i) *Ik zag dat_i goedkoop verkocht worden om van de tomatensoep_i af te komen.
    I saw that cheaply sold become to of the tomatosoup off to come
    ‘I saw the tomato soup being sold cheaply, to get rid of it.’


(48) Niemand vindt Piet_i een goede zanger.
     noone finds Pete a good singer
     ?Zelfs HIJ_i vindt Piet geen goede zanger.
     even he finds Piet no good singer
     ‘Noone thinks Pete is a good singer. Even he himself thinks Pete is not a good singer.’

(49) Alleen zij_i zelf denkt dat Marie_i de verkiezingen gaat winnen.
     only she self thinks that Mary the elections goes win
     ‘The only person who thinks Mary is going to win the elections is Mary herself.’

Similarly, Evans (1980) notes that in English examples like (50b) are possible, in contrast to (50a).

(50) a. Everyone here admires someone on the committee. Joan admires Susan, Mary admires Jane, and he*_i admires Oscar_i.
     b. Everyone has finally realized that Oscar is incompetent. Even he_i has finally realized that Oscar is incompetent.

Note, however, that this example remains bad if the pronoun is actually contrastive, as in (ii).
(ii) *Ik zag alleen DAT_i goedkoop verkocht worden om van de tomatensoep_i af te komen.
Also, a similar example in which the pronoun does not c-command the intended antecedent remains bad, as in (iii).
(iii) *Ik zag dat_i goedkoop verkocht worden maar de tomatensoep_i was toch van goede kwaliteit.
     I saw that cheaply sold become but the tomatosoup was yet of good quality
     ‘I saw the tomato soup being sold cheaply, but it was of good quality all the same.’
This probably indicates that we are not dealing with a Principle C effect in (i). Rather, (i) does not comply with Williams’s (1997) GPAD, since the pronoun neither follows the intended antecedent nor is subordinate to it. Surprisingly, (49) does not comply with the GPAD either; at least, this example seems fine even if there is no mention of Marie in a preceding sentence (so that the GPAD would be satisfied by the pronoun following it; compare (48)). If so, the question is why, and to what extent, contrastively used pronouns can circumvent the GPAD in a way that non-contrastive pronouns cannot, a question on which I admittedly have no insights to offer here.


Peter Ackema

Evans argues that a pronoun can c-command a coreferential NP as long as it does not pick up its reference from that NP (which in (50a) it needs to do as there is no earlier mention of Oscar, while in (50b) there is). However, note that there is another difference between (50a) and (50b) as well. Although standard English written language does not formally distinguish between strong and weak pronouns, the pronoun in (50b) must be strong, as it is modified. When it is strong, Principle C effects seem to be ameliorated even if there is no earlier antecedent for the pronoun, that is, even if it does depend on the NP that it binds for its reference. For Dutch, an example like (49) shows this. For English, judgements appear to be quite variable (which might be because, in the absence of a preceding instance of the antecedent, they are out of line with the GPAD mentioned above, see also fn. 18), but on the whole examples like (51) do appear to be better again than Principle C violations like (46) or (47b) (which, of course, also violate the GPAD as given there, and which do not seem to improve either if an earlier instance of the antecedent is mentioned).19

(51) a. ?Everyone here admires most others on the committee, but only hei himself admires Oscari.
     b. ?Maybe SHEi thinks that Maryi will win the elections, but no one else does.

8. Conclusion

Overall, the data discussed in this paper provide sufficient evidence for the idea that strong and weak forms of pronouns can be in competition in case they syntactically agree with their antecedent, with the strong pronouns losing out because they violate identity avoidance more than the weak ones as a result of their having a more extended structure and thus more instances of the agreeing feature(s). In case there is no syntactic agreement (but semantic ‘agreement’ instead), there is no such competition, hence strong pronouns are fine. As a result, strong pronouns agree semantically when they stand in opposition to weak pronouns.

19 Each of (51a) and (51b) was accepted by several native speakers. However, not everyone accepted both; some people thought one better than the other, though which one was considered the less acceptable one varied.

Semantic versus syntactic agreement in anaphora


Acknowledgements

This paper has its origins in discussions about pronouns I had with Caroline Heycock and Ad Neeleman. Special thanks to them, though needless to say they are not to be held responsible for any of the views expressed here. Thanks as well to two perceptive anonymous reviewers for this volume.

References

Ackema, Peter
2001 Colliding complementizers in Dutch: another syntactic OCP effect. Linguistic Inquiry 32: 717–727.
Ackema, Peter, and Ad Neeleman
2003 Context-sensitive Spell-out. Natural Language and Linguistic Theory 21: 681–735.
2004 Beyond Morphology. Oxford: Oxford University Press.
2012 Agreement weakening at PF: a reply to Benmamoun and Lorimor. Linguistic Inquiry 43: 75–96.
2013 Subset controllers in agreement relations. Morphology 23: 291–323.
Ariel, Mira
1990 Accessing Noun-Phrase Antecedents. London: Routledge.
1991 The function of accessibility in a theory of grammar. Journal of Pragmatics 16: 443–463.
Audring, Jenny
2009 Reinventing pronoun gender. Ph.D. dissertation, Free University, Amsterdam. LOT Dissertation Series 227.
Baker, Mark
2008 The Syntax of Agreement and Concord. Cambridge: Cambridge University Press.
2011 When agreement is for number and gender but not person. Natural Language and Linguistic Theory 29: 875–915.
Bonset, Helge
2007 Onderwijs in spelling en interpunctie in de onderbouw. Enschede: SLO.
Bošković, Željko
2002 On multiple wh-fronting. Linguistic Inquiry 33: 351–383.
Bruening, Benjamin
2012 Precede-and-Command Revisited. Ms., University of Delaware.



Burzio, Luigi
1989 On the non-existence of disjoint reference principles. Rivista di Grammatica Generativa 14: 3–27.
1991 The morphological basis of anaphora. Journal of Linguistics 27: 81–105.
Bye, Patrik
2011 Dissimilation. In The Blackwell Companion to Phonology, Marc van Oostendorp, Colin J. Ewen, Elizabeth V. Hume and Keren Rice (eds.), 1408–1433. Oxford: Wiley-Blackwell.
Cardinaletti, Anna, and Michal Starke
1999 The typology of structural deficiency: a case study of the three classes of pronouns. In Clitics in the Languages of Europe, Henk C. van Riemsdijk (ed.), 145–233. Berlin/New York: Mouton de Gruyter.
Chomsky, Noam
1995 The Minimalist Program. Cambridge, MA: MIT Press.
Corbett, Greville
1979 The agreement hierarchy. Journal of Linguistics 15: 203–224.
2006 Agreement. Cambridge: Cambridge University Press.
Evans, Gareth
1980 Pronouns. Linguistic Inquiry 11: 337–362.
Grimshaw, Jane
1991 Extended projection. Ms., Brandeis University. Published in Words and Structure, Jane Barbara Grimshaw (ed.) (2005), Stanford: CSLI.
1997 The best clitic: constraint conflict in morphosyntax. In Elements of Grammar, Liliane Haegeman (ed.), 169–196. Dordrecht: Kluwer.
Heim, Irene
2008 Features on bound pronouns. In Phi Theory: Phi Features Across Interfaces and Modules, Daniel Harbour, David Adger and Susana Bejar (eds.), 35–56. Oxford: Oxford University Press.
Heim, Irene, and Angelika Kratzer
1998 Semantics in Generative Grammar. Oxford: Blackwell.
Hiraiwa, Ken
2010 Spelling out the double-o constraint. Natural Language and Linguistic Theory 28: 723–770.
Leben, William
1973 Suprasegmental phonology. Ph.D. dissertation, Massachusetts Institute of Technology.
Menn, Lise, and Brian MacWhinney
1984 The repeated morph constraint: toward an explanation. Language 60: 519–541.

Mohanan, Tara
1994 Case OCP: a constraint on word order in Hindi. In Theoretical Perspectives on Word Order in South Asian Languages, Miriam Butt, Tracy Holloway King and Gillian Ramchand (eds.), 185–216. Stanford: CSLI.
Neeleman, Ad, and Hans van de Koot
2005 Syntactic haplology. In The Blackwell Companion to Syntax, vol. IV, Martin Everaert and Henk van Riemsdijk with Rob Goedemans and Bart Hollebrandse (eds.), 685–710. Oxford: Wiley-Blackwell.
Nevins, Andrew
2012 Dissimilation at distinct stages of exponence. In The Morphology and Phonology of Exponence, Jochen Trommer (ed.), 84–116. Oxford: Oxford University Press.
Ortmann, Albert, and Alexandra Popescu
2001 Haplology involving morphologically bound and free elements: evidence from Romanian. In Yearbook of Morphology 2000, Geert Booij and Jaap van Marle (eds.), 43–70. Dordrecht: Kluwer.
Pica, Pierre
1984 On the distinction between argumental and nonargumental anaphors. In Sentential Complementation: Proceedings of the International Conference Held at Ufsal, Brussels, June 1983, Wim De Geest and Yvan Putseys (eds.), 185–194. Dordrecht: Foris Publications.
Preminger, Omer
2011 Asymmetries between person and number in syntax: a commentary on Baker’s SCOPA. Natural Language and Linguistic Theory 29: 917–937.
Prinzhorn, Martin, Henk C. van Riemsdijk, and Viola Schmitt
2010 Description of Identity in Grammar workshop, GLOW Newsletter 65.
Reinhart, Tanya
1983 Anaphora and Semantic Interpretation. London: Croom Helm.
Reinhart, Tanya, and Eric Reuland
1993 Reflexivity. Linguistic Inquiry 24: 657–720.
Reuland, Eric
2005 Agreeing to bind. In Organizing Grammar: Linguistic Studies in Honor of Henk van Riemsdijk, Hans Broekhuis, Norbert Corver, Riny Huybregts, Ursula Kleinhenz and Jan Koster (eds.), 505–513. Berlin/New York: Mouton de Gruyter.



Richards, Norvin
1997 Competition and disjoint reference. Linguistic Inquiry 28: 178–187.
2010 Uttering Trees. Cambridge, MA: MIT Press.
Riemsdijk, Henk C. van
1978 A Case Study in Syntactic Markedness. Dordrecht: Peter de Ridder.
2008 Identity avoidance: OCP-effects in Swiss relatives. In Foundational Issues in Linguistic Theory, Robert Freidin, Carlos Peregrín Otero and Maria Luisa Zubizarreta (eds.), 227–250. Cambridge, MA: MIT Press.
Rooryck, Johan, and Guido Vanden Wyngaerd
2011 Dissolving Binding Theory. Oxford: Oxford University Press.
Safir, Kenneth
2004 The Syntax of Anaphora. Oxford: Oxford University Press.
Spathas, Giorgos
2010 Focus on anaphora. Ph.D. dissertation, Utrecht University. LOT Dissertation Series 264.
Wexler, Kenneth, and M. Rita Manzini
1987 Parameters and learnability in binding theory. In Parameter Setting, Thomas Roeper and Edwin Williams (eds.), 41–76. Dordrecht: Reidel.
Williams, Edwin
1997 Blocking and anaphora. Linguistic Inquiry 28: 577–628.

Part III Syntax

Exploring the limitations of identity effects in syntax

Artemis Alexiadou

1. Introduction

In the literature, there have been several proposals dealing with a number of phenomena in different languages that all seem to be constrained by a ban on multiple objects of the same type that are too close together. Such a case is illustrated in (1) below, taken from Richards (2010: 3):

(1)

a. I know everyone danced with someone, but I don’t know who with whom.
b. *I know everyone insulted someone, but I don’t know who whom.

(1) is an example of multiple sluicing in English. While sluicing may involve multiple arguments (1a), it is impossible if the sluicing remnants are both DPs. To handle these facts and others like them, several researchers have offered various formal characterizations of what it means to be ‘of the same type’ and ‘too close together’. A basic divide among the various conditions that have been put forth is whether or not these apply to the interface between narrow syntax and phonology or belong to the core computational component proper. A selection of these approaches is offered in (2), in chronological order.1

(2)

a. Unlike category condition (Hoekstra 1984)
   No head can govern a phrase of the same category.
b. Unlike feature condition (Van Riemsdijk 1988)
   *{ [+Fi] _ [+Fi]P }, where Fi = N or V

Unlike category condition Hoekstra (1984) no head can govern a phrase of the same category Unlike feature condition Van Riemsdijk (1988) * { [+Fi]_[+Fi]P } where Fi = N or V

But see also Kayne (1982), Stowell (1981), Moro (2000), and Lohndal and Samuels (2010).



c. Categorial identity thesis (Van Riemsdijk 1998)
   Within a projection, the following well-formedness condition holds: *[αN, βV] [γN, δV] (where α, β, γ, δ range over + and −), unless either (i) α=β and γ=δ, or (ii) at most one of α, β, γ, δ has the value +.
d. The subject-in-situ generalization (SSG) (Alexiadou and Anagnostopoulou 2001)
   By Spell-Out, vP can contain only one argument with a structural Case feature.
e. LCA-reduction of the MLC (Lechner 2004)
   Prohibit phase-internal movement across symbols with identical feature specification.
f. Constraint on Direct Recursion (Heck 2010)
   Ban merging of categories with identical features.
g. Distinctness (Richards 2010)
   If a linearization statement <α, α> is generated, the derivation crashes.

As has been often noted, all the proposals above can be viewed as subcases of a more general ban on multiple adjacent objects, similar to the Obligatory Contour Principle (OCP) in phonology, given in (3), see Leben (1973).

(3) Adjacent identical tones are disallowed.

The proposals in (2) are not all identical. (2a,b,c,f) are conditions that apply to the structure-building operations, (2d,e) seem to be restricted to the syntactic component and regulate movement, while (2g) applies to the interface between narrow syntax and phonology, i.e. it is not a condition that applies to the core computational component. In this paper, I will focus on two of the above proposals, namely (2d) and (2g), which have both been developed to deal partially with the same set of phenomena, and which differ in that, crucially, the SSG is part of the computational component and regulates the distribution of arguments, while Distinctness applies to the interface between narrow syntax and phonology. In §2 and §3, I will first introduce both conditions and the phenomena they were set out to explain. In §4, I will then turn to a direct comparison of their empirical coverage. Finally, in §5, I will briefly explore the option that there is a division of labour between the two, and claim that the conditions that regulate the distribution of arguments should be set apart from other ‘identity’ phenomena that can well be accounted for under any of the other conditions in (2).

2. The subject in situ generalization

2.1. Formulating the subject in situ generalization

As is well known, in French and English there is a transitivity restriction on subject inversion in constructions containing an expletive subject. While expletive constructions are well-formed with intransitive verbs (4a, 5a), transitive expletive constructions are ungrammatical (4b, 5b):

(4)

a. Il est arrivé un homme. (expl-VS)
   expl is arrived a man
   ‘There has arrived a man.’
b. *Il a lu un élève le livre. (*expl-VSO)
   expl has read a student-NOM the book-ACC

(5) a. There arrived a man. (expl-VS)
    b. *There finished somebody the assignment. (*expl-VSO)


It is generally agreed that the inverted subjects remain in vP-internal positions (see Bobaljik and Jonas 1996, Déprez 1991 and references therein). In these languages, there are constructions where the subject can remain vP-internal with transitive predicates. These constructions involve movement of the object to a position outside the vP. These are stylistic inversion (SI) in French and quotative inversion in English. I discuss SI here (see Kayne and Pollock 1978, Déprez 1991, Collins and Branigan 1997, among many others). SI, which involves postposing of the subject in wh-questions, relative clauses and subjunctive sentential complements, is disallowed when the vP contains a direct object (6):

(6) *Je me demande quand achèteront les consommateurs les pommes.
    I wonder when will-buy the consumers-NOM the apples-ACC
    ‘I wonder when the consumers will buy the apples.’

If, however, the direct object itself is wh-extracted or cliticized, SI becomes possible again:



(7) a. Que crois-tu que manquent un grand nombre d’étudiants?
       what believe-you that be-absent-from a great number of students
       ‘What do you believe that a great number of students is missing?’
    b. Tes cours, à quelle occasion les ont manqués un grand nombre d’étudiants?
       your courses at which occasion them-have been-absent-from a great number of students
       ‘Your courses, when have students missed them?’

The object must either be moved out of the vP, as in (7), or surface as a PP, as in (8):

(8) ?Quand écrira ton frère à sa petite amie?
    when will-write your brother to his little friend
    ‘When will your brother write to his girlfriend?’

The above facts motivated the generalization in (9):

(9) Subject-inversion with vP-internal subjects is prohibited in the presence of vP-internal DP objects.

Alexiadou and Anagnostopoulou (2001, 2007) proposed that the condition in (10) regulates the availability of vP-internal subjects and objects across languages:

(10) The subject-in-situ generalization (SSG)
     By Spell-Out, vP can contain only one argument with a structural Case feature.

2.2. The mechanics of the SSG

More specifically, the generalization captured by the SSG can be further decomposed into two parts:

(11) i. If two DP arguments are merged in the vP domain, at least one of them must externalise.
     ii. If two arguments remain vP-internal, one of them must surface as a PP.



The two clauses of (11) can be understood if the SSG derives from the Case constraint in (12). According to (12), the presence of two arguments with an unchecked structural Case feature in the vP domain is prohibited in the overt syntax.

(12) By Spell-Out, vP can contain only one argument with an unchecked Case feature.

The two clauses in (11) describe two alternative strategies that can be employed to circumvent (12). A first option is that one of the two arguments leaves the vP, moving to (or through) its Case checking position (T or v, and from there it can move further to C; clause i of (11)). A second option is that one of the two arguments is a PP lacking a structural Case feature (clause ii of (11)). In both situations there is only one argument with an unchecked Case feature in the vP domain, conforming with (12). The intuition is that there is a link between v-to-T raising and the SSG. In configurations violating the SSG, (13), v and T fall together either overtly (in French/Icelandic, and for Fox and Pesetsky 2005 and Johnson 1991 also in English) or covertly (in a traditional Emonds 1976/Pollock 1989-style analysis of English). The Case features of the arguments must be checked after v-to-T raising takes place, creating a complex head with two Case features as in (14):

(13) *There finished somebody the assignment.

(14) [Tmax [V V v] T] (the complex head formed by v-to-T raising)
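The bookkeeping behind (12) can be illustrated with a small toy sketch. The following Python fragment is purely expository: the function name and the encoding of arguments are invented for illustration and are not part of Alexiadou and Anagnostopoulou's formal system.

```python
# Illustrative toy model of the Case constraint in (12): each vP-internal
# argument is a pair (label, has_unchecked_structural_case). By Spell-Out,
# the vP may contain at most one argument whose structural Case feature is
# still unchecked.

def ssg_ok(vp_arguments):
    """True if the vP-internal arguments satisfy the constraint in (12)."""
    unchecked = [label for label, case in vp_arguments if case]
    return len(unchecked) <= 1

# (4a): a single vP-internal DP argument is fine.
print(ssg_ok([("un homme", True)]))  # True

# (13)/(4b): subject and object both vP-internal, both with unchecked Case.
print(ssg_ok([("somebody", True), ("the assignment", True)]))  # False

# (8): the object surfaces as a PP and so lacks a structural Case feature.
print(ssg_ok([("ton frère", True), ("à sa petite amie", False)]))  # True
```

Externalizing one argument, or realizing one as a PP, each reduce the unchecked set to at most one member, mirroring the two repair strategies in (11).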

The complex head in (14), with two active Case features, is an illicit item.2 In this analysis, the SSG (10) results from the improper amalgamation of two Case-bearing heads v and T, as stated in (15):

2 There are several reasons why this might be so, which are discussed in detail in Alexiadou and Anagnostopoulou (2001). They all crucially rely on the assumption that T and v cannot directly enter into Case checking after head adjunction because they fail to c-command outside the non-terminal node dominating them.



(15) v and T cannot both bear active Case features when they form a complex head.

As a consequence of (15), it is necessary that at least one Case feature be checked before the complex head is formed:

(16) T or v must be eliminated before the complex head is formed.

Intuitively, a local relationship between an argument and its Case-checking head must be established, which is destroyed by the formation of a complex head with two active Case features. The clearest example of the effects of (15)/(14) is instantiated by the transitivity restriction in English/French (13). In these cases, the numeration contains a v and a T which both bear weak Case features that can be eliminated without phrasal pied-piping. The derivation proceeds as follows:3

(17) i. First, v is merged, and the object does not raise overtly.
     ii. Then, T is merged.
     iii. The expletive is merged, eliminating the EPP feature of T.
     iv. v raises to T overtly or covertly, resulting in the formation of a complex head Tmax with two unchecked Case features.

In conclusion, the SSG can be viewed as a universal principle that regulates argument externalization. See Alexiadou and Anagnostopoulou (2001, and especially 2007) for details.

3 Note that Alexiadou and Anagnostopoulou’s (2001) account of the SSG is incompatible with cyclic Agree. Alexiadou and Anagnostopoulou (2007) concluded that the SSG cannot be (directly) expressed in a system based on cyclic Agree. To this end, we proposed that it is possible to adopt counter-cyclic Agree, as formulated by the ‘T-v-Agree Hypothesis’ below:

T-v-Agree Hypothesis
v enters Agree with T and then Case valuation takes place, creating a configuration of Case checking ambiguity (v and T could value the Case of SUB or OBJ).

Under this hypothesis, the Agree relation between the v-T heads emulates the effects of a complex head in the older system.



3. Distinctness

3.1. Formulating Distinctness

As already mentioned, Richards’s (2010) proposal can be viewed as a general theory of ‘syntactic OCP’ (cf. Hoekstra 1984, Van Riemsdijk 1998). This is repeated in (18):

(18) Distinctness
     If a linearization statement <α, α> is generated, the derivation crashes.

According to (18), syntactic nodes with the same label must not be located too close together in the tree: they must be separated by a phase boundary, or else they cannot be ordered with respect to each other. Linearization fails whenever the objects to be linearized in a strong phase are insufficiently distinct. The linearization domain includes a statement <DP, DP>, which will cause the derivation to crash. Let us see how Distinctness works when applied to locative inversion:

(19) a. [Into the room] walked a man.
     b. [Into the room] walked a man in the afternoon.
     c. *[Into the room] kicked a man a ball.

In (19), the subject is in some post-verbal position and the verb has apparently raised past it. To account for (19), Richards (2010: 14) needs to assume that the base position of the subject is not in fact the highest position in the vP phase. He introduces a projection above vP, vcP, projected by the head vc. vc is related to v in the way C relates to T. v inherits its ability to Agree with and license objects from the phase head vc. vc is responsible for making v transitive, and it is absent when v is intransitive. vc is a phase head, so any phrase that is to exit the lowest phase will move to its edge. In locative inversion, the PP moves to its specifier. To get the word order, one must assume that v raises to vc. After the PP moves, vc will trigger Spell-out of its complement. All that is left to linearize are the two DPs a man and a ball. The linearization process has no way of distinguishing between the two instances of DPs; the ordering statement is self-contradictory and causes the derivation to crash:4

(20)

*[vcP [PP into the room] [vc’ kicked [vP [DP a man] [v’ v [VP [DP a ball] [V’ V]]]]]]
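The logic of (18) can also be illustrated with a small toy sketch: treat a Spell-Out domain as a sequence of node labels, generate ordering statements, and let any self-identical pair crash the derivation. The following Python fragment is purely expository; the function name and the encoding of domains are invented for illustration and are not Richards's own implementation.

```python
# Illustrative toy model of Distinctness (18): a Spell-Out domain is a
# sequence of node labels; linearization generates ordered statements
# <X, Y>, and a self-identical statement such as <DP, DP> cannot be
# ordered, so the derivation "crashes".

from itertools import combinations

def linearize(domain):
    """Return the ordering statements for one Spell-Out domain, or raise."""
    statements = list(combinations(domain, 2))
    for x, y in statements:
        if x == y:  # a self-contradictory statement like <DP, DP>
            raise ValueError(f"Distinctness violation: <{x}, {y}>")
    return statements

# (19a): PP, V and a single DP are pairwise distinct, so ordering succeeds.
print(linearize(["PP", "V", "DP"]))

# (19c)/(20): the two DPs (a man, a ball) share one Spell-Out domain.
try:
    linearize(["PP", "V", "DP", "DP"])
except ValueError as err:
    print(err)  # Distinctness violation: <DP, DP>
```

On this toy encoding, relabelling one DP (adding structure, e.g. as a PP or KP) or moving one DP out of the domain removes the offending pair, mirroring the repair strategies discussed in the next section.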

3.2. The mechanics of Distinctness

The way Distinctness is to be understood is as follows: when two DPs are included within a strong phase, they cannot be linearized, as they are of the same type, i.e. they bear the same label. When, however, one of the two DPs becomes sufficiently distinct, linearization is possible. In principle, there are three ways in which elements can become distinct: by adding structure, by deleting structure, or by movement. Movement operations can be seen as Distinctness-driven in that they keep the two argument DPs (subject and object) in separate Spell-Out domains (cf. Moro 2000). In what follows, I illustrate these three ways, based on Richards (2010).

3.2.1. DP internal arguments: adding structure

As is well-known, nominal arguments are accompanied by functional structure not found with the corresponding verbs, i.e. the preposition of is obligatory only in (21a):

4 Lohndal (2012) points out that (19c) is grammatical in a V2 language like Norwegian. It is not clear how to account for this under Richards’s analysis.



(21) a. the destruction of the city
     b. *the destruction the city
     c. They destroyed the city.

Why is this so? For Richards, this follows from Distinctness. Consider the structural representation of (21b), given in (22):

(22) *[DP the [NP destruction [DP the [NP city]]]]

In the process of linearizing (22), the grammar will generate the ordered pair <D, D>, which will cause the derivation to crash. The offending nodes are not linearly adjacent, but they are structurally close enough together to prevent linearization from succeeding. To save this, a P is introduced, namely of; assuming that P heads a phase, its complement will be spelled out, and as a result, the two Ds will be spelled out in different Spell-out domains, as in (23):5

(23) [DP the [NP destruction [PP of [DP the [NP city]]]]]

Now note that if the two arguments surface with the same preposition, the string is ungrammatical (24c). If, however, the PPs are introduced by different prepositions, Distinctness is not violated.

(24)

a. the singing of the children
b. the singing of songs
c. *the singing of songs of the children
d. the singing of songs by the children

This suggests that in (24d) of is a Kase head, while by heads a PP. As both K and P are phase heads, they are sufficiently distinct.

3.2.2. Construct state: deleting structure

The construct state is presented by Richards as an example of avoiding Distinctness violations in which structure is deleted. As is well-known, in Hebrew

5 As Terje Lohndal (p.c.) points out, it is not clear how an of-insertion mechanism is supposed to work, assuming a version of the strict cycle condition and/or the No Tampering Condition.



there are two ways to express possession relations within an NP. This is illustrated below: either the possessor is introduced by a šel ‘of’ phrase (25a), or it appears in the construct state (25b). In the latter case, the head noun is bare, i.e. it surfaces without a determiner (cf. the ungrammaticality of (25c)).

Hebrew

How does it come about? According to Richards, in the process of linearizing (25c), the grammar will generate the ordered pair , which will cause the derivation to crash. The solution here is to either add functional structure (25a), or to delete functional structure (25b/26):6 (26) [NP house [DP the [NP teacher]]] To explain this pattern, it is crucial for Richards to assume that linearization happens prior to vocabulary insertion. Following the architecture of grammar put forth in the framework of Distributed Morphology, lexical heads undergo Early insertion, while functional heads undergo late insertion. When lexical projections are linearized the lexical items have already been inserted and the linearization makes reference to a rich array of properties distinguishing the heads from each other. When functional heads are linearized, vocabulary insertion has not yet taken place, hence information that might serve to distinguish different heads is not yet present.

6

Richards does not comment on analyses of the construct state, according to which (25b) involves N-to-D movement, and hence N is in D, see e.g. Ritter (1991).

Exploring the limitations of identity effects in syntax

209

3.2.3. Differential object marking: ban on multiple a marking: movement In Spanish, specific animate objects are marked via a: (27) Laura escondio a un prisionero durante does años. Laura hid a one prisoner for two years ‘Laura hid a specific prisoner for two years.’ Spanish disallows both objects of ditransitives to be marked by a: (28) ??Juan le presentó a Maria a Pedro. Juan 3.dat introduced a Mary a Peter ‘John introduced Mary to Peter.’ But if one of the arguments is extracted or extraposed, multiple as are allowed. In this case, the two arguments are, according to Richards, linearized in two different Spell-Out domains: (29) a. A Pedro, Juan le presentó a María. A Peter Juan 3.dat introduced a Mary ‘To Peter, Juan introduced Mary.’ b. Juan le presento a Maria, a Pedro. Juan 3 dat introduced a Maria, a Peter ‘Juan introduced Maria, to Peter.’ 4. Distinctness vs. the SSG The question that arises at this point is whether or not the two conditions discussed in §2 and §3 can be seen as the one being a sub-case of the other. In fact, the SSG, according to Richards, is a sub-case of distinctness. This has obvious advantages. From a theoretical point of view, it is immediately explained why a constraint like (12) is imposed on syntactic derivations. On the empirical side, the effects of SSG are unified with a range of different phenomena that have received independent explanations in the literature (such as Doubl-ing, double infinitive filters, multiple sluicing; see below and Richards 2010 for details). (30) *The police are stropping drinking on the campus

double-ing

210

Artemis Alexiadou

In order to answer this question the two approaches will be directly contrasted with one another in the subsequent sub-sections. What we will first note is that Distinctness, unlike the SSG, and in fact unlike also all the other proposals in (2), escapes a firm formal definitition not only across languages, but also within a language. Importantly what counts as being of the same type is subject to inner- and cross-linguistic variation, not a desirable result. At the theoretical level, defining Distinctness across domains and languages is far from trivial. In fact, Richards claims that languages vary in the extent in which they draw distinctions between projections with the same label. In e.g. English, all DPs are treated as identical, in other languages, DPs are the same only if they have identical features for case, grammatical gender, and/or animacy. On the contrary, the SSG, which is based on Case theory, is uniformly defined across languages for those domains that can be shown to be sensitive to properties of Case-checking/licensing. In other words, what counts as distinct differs within a language and across languages, while structural Case features are uniformly defined.7 The cases to be discussed in the followin sub-sections substantiate this criticism. 4.1. DP vs. AP internal syntax In §3, I pointed out that adding structure was one way in which violations of Distinctness could be avoided. The case in point was complements of nouns, and the relevant examples are repeated below: (31) a. *the destruction the city/*the destruction of the city of the barbarians b. the destruction of the city c. the destruction of the city by the barbarians Recall that of, being a phase head, introduces a distinct Spell-Out domain. The derivations are repeated below:

7

Note that with respect to this, all the other proposals in (2) also fare better than Distinctness, since they are mostly defined on the basis of merge/structure building operations.

Exploring the limitations of identity effects in syntax

211

(32) * [DP the [NP destruction [DP the [NP city ]]]] (33) * [DP the [NP destruction [PP of [DP the [NP city ]]]]] As already mentioned, a first problem that arises with this approach is that of is once categorized as a P head, in e.g. (31b), and once as a K head, in e.g. (31c). In other words, it is not clear why the preposition cannot be uniformly categorized as a K or P head. An alternative, SSG based account, could work as follows. Assuming that nominalizations are ergative in nature, as proposed in Williams (1987) and elaborated in Alexiadou (2001), (31c) is a domain that contains only one structural Case feature. From this perspective, of is always a case marker, and as nominals can only license one structural Case, the second argument, if it appears, it must bear ergative/lexical case (see Alexiadou 2001 for details). Crucially, the domain contain the two arguments of the derived nominals can only include one DP with a structural Case feature. A second problem that arises with Distinctness concerns of-insertion with the complements of adjectives. Adjectives also require prepositional objects, much like nominalizations do in English, a fact that is unexpected under Distinctness. This is illustrated in (34), and (35) offers a structural representation of (34a). Note that in (35) nothing would prevent a linearization of A and D as these labels are sufficiently distinct. From this perspectiv, the ungrammaticality of (34a) is surprising: (34) a. *proud his father b. proud of his father (35) [AP proud [DP his [NP father ]]] This, however, follows from classic Case theory, which regulates the availability of DP complements in English, as in (36): (36) V and P allow a DP complement. N and A do not allow a DP complement. Richards (2010: 67) suggests that the facts in (34a,b) can be accounted for by assuming that of is inserted to avoid Distinctness violations caused by the interaction of the functional structure of adjectives with that of their complements. 
Still, however, the functional structure of adjectives should be significantly different from that of their complements. The puzzle thus remains unexplained.

Further problems arise once we start looking at the details of what counts as distinct across languages.

4.2. Which features count for Distinctness?

Greek, Spanish and Romanian allow VSO orders with two vP-internal DPs, as discussed in Alexiadou and Anagnostopoulou (2001); see (37), a Greek example.

(37) an ehi idi diavasi [vP prosektika [o Janis to vivlio]]
     if has already read carefully the-John-NOM the book-ACC
     ‘If John has already read the book carefully.’

Alexiadou and Anagnostopoulou (2001) argue that such orders do not challenge the SSG, because the Case of the in situ subject is realized on the pronominal verbal agreement, which has the status of a clitic and checks its (phi and Case) features overtly on T as a result of verb-raising (see Alexiadou and Anagnostopoulou 1998). From this perspective, the inverted in situ subject does not have an unchecked structural Case feature, despite appearances to the contrary. Thus a link was established between the above-mentioned property of Greek, Spanish, and Romanian verbal subject agreement and the clitic doubling parameter, which permits the formation of such feature-chains between clitics and in situ DP arguments in clitic doubling languages like Greek, Spanish and Romanian, and prohibits them in non-clitic doubling languages like French, Italian and Catalan.

(37) does raise, at first sight, a problem for Distinctness, as it involves two DP arguments that are in the same Spell-Out domain. Distinctness could offer a solution suggesting that case morphology counts (see also below). For instance, Greek and Romanian have case morphology, albeit heavily syncretic. For (37), one could assume a linearization statement of the type in (38), which would then not violate Distinctness.

(38) <DP[NOM], DP[ACC]>

But this raises an issue with Spanish, which lacks case morphology:

Exploring the limitations of identity effects in syntax

(39) Todos los días compra Juan el  diario.                    (Spanish)
     every day      buys   Juan the newspaper
     'Juan buys the newspaper every day.'

(38) could crucially not be used for (39). Note that in (39), insertion of the special marker a in Spanish does not take place. As a result, (39) should violate Distinctness, contrary to fact. In principle, there are two ways out for Distinctness: (i) one could suggest that one of the arguments is outside the vP, but there is no conclusive evidence that this is the case in Spanish; (ii) another option would be to assume that the linearization statement in Spanish makes reference to animacy, see (41). This could also help deal with (40):

(40) ekopse to  agori          to  luludi.                     (Greek)
     cut    the child-NEUT.NOM the flower-NEUT.ACC
     'The boy cut the flower.'

In (40), both the subject and the object are in neuter gender and belong to the same declension class. In this case, NOM and ACC are syncretic, and the linearization statement should be as in (41):

(41)

But now inner-linguistic variation is introduced concerning what counts as distinct: for (37) case morphology is the key ingredient, but for (40) animacy features are the ones that are relevant. For the SSG, neither case morphology nor animacy is an issue, since the account depends on the formation of clitic-doubling chains, independently available in these two languages.
A related set of problems concerns multiple sluicing and multiple wh-fronting. According to Richards, linearization in German is sensitive to features like [NOM] and [ACC], see (38), i.e. case morphology makes DPs distinct in this language. This is why (42) does not violate Distinctness, contrary to its English counterpart in (1b), although both wh-phrases are within the same Spell-Out domain:


(42) Ich habe jedem Freund ein Buch gegeben, aber ich weiß nicht mehr
     I   have every friend a   book given    but  I   know not   anymore
     wem  welches.
     whom which
     'I gave every friend a book, but I do not know anymore whom which.'

If case morphology indeed makes DPs distinct, this predicts that German will not show any SSG effects. This prediction is borne out: subjects and objects may both remain vP-internal in German (see e.g. Haider 1993, 2005; Fanselow 2001; Wurmbrand 2004 and others). Evidence for this comes from two sources. First, adverbial placement demonstrates that both arguments remain inside the vP:

(43) weil  schon   oft   ein junger Hund einen Briefträger
     since already often a   young  dog  a     mailman
     gebissen hat.
     bitten   has
     'Since a young dog has already often bitten a mailman.'

Second, in contexts of vP-fronting, both arguments can be topicalized:

(44) [Ein junger Hund einen Briefträger gebissen] hat hier schon   oft.
      a   young  dog  a     mailman     bitten    has here already often
     'It has happened often here already that a young dog has bitten a mailman.'

For the SSG account, German, like Greek, Spanish and Romanian, would be a doubling language. In fact, German has been argued to permit feature-chains between null clitics and in situ DP arguments, qualifying essentially as a clitic doubling language (Haider 1985, Fanselow 2001).
Note that Distinctness establishes a link between these two phenomena, i.e. multiple sluicing and the availability of vP-internal arguments, a link that is hard to establish for the SSG. This is so for the following reason: for Distinctness it only matters that the DPs are in the same Spell-Out domain, although this domain may very well be different, as is the case here. The Spell-Out domain for multiple sluicing is the CP, and the Spell-Out domain for (44) is the vP. This link is not easily established under the SSG, as this regulates movement of arguments out of the vP, and in


the case of multiple sluicing, wh-movement has taken place so that at least one of the elements has vacated the vP and is in Spec,CP.
But should the two phenomena be related to one another? The conditions under which multiple sluicing is allowed across languages are still poorly understood. Merchant (2001) cites the availability of multiple wh-fronting as a prerequisite. But clearly, as Merchant also acknowledges, German is not a language that allows multiple wh-fronting, yet it allows multiple sluicing, and this is why a Distinctness-based account is in principle attractive. But does Distinctness underlie the phenomenon? The following casts doubt on this. Merchant (2001: 112) cites the English example in (45) as acceptable, in sharp contrast to (1b):

(45) ?Everybody brought something (different) to the potluck, but I couldn't tell you who what.

Merchant (op. cit.) observes that multiple sluicing in English is grammatical in environments where an appropriate pair-list reading can be generated. It is not clear how Distinctness can account for the grammaticality of (45) as opposed to the unacceptability of (1b).
The next problem that arises in the context of multiple sluicing and multiple wh-fronting is that of phonological identity/syncretism. As we saw in e.g. (40), syncretism is sometimes not an issue. This is also the case in German multiple sluicing, where phonological identity does not seem to be the key issue, i.e. (46) is fine (from Richards 2010: 48):

(46) Ein Auto hat ein Haus  zerstört,  aber ich weiss nicht mehr
     a   car  has a   house destroyed  but  I   know  not   anymore
     welches Auto welches Haus
     which   car  which   house
     'A car has destroyed a house, but I don't know anymore which car which house.'

Note that the Greek counterpart of (46), given in (47), is also fine, supporting the correlation made above between the conditions on multiple sluicing and the SSG.8

Note that not all speakers agree with the judgments in (47). In fact Richards (2010: 47) reports it as unacceptable. Still (39) is fine for all speakers. For Richards, different linearization statements should hold to explain (39), and (47), for those speakers that judge it as ungrammatical.


(47) ena aftokinito katestrepse ena spiti, ala de  ksero    pio   aftokinito
     a   car        destroyed   a   house  but NEG know-1SG which car-NEUT.NOM
     pio   spiti.
     which house-NEUT.ACC
     'A car has destroyed a house, but I don't know anymore which car which house.'

However, in Serbian multiple wh-questions, identity and syncretism play a key role (data from Richards 2010: 50f.). Serbian distinguishes between case and gender, but if multiple wh-fronting would bring DPs with the same gender and case into proximity, it is avoided, (48a). In this language, even if the cases are different but syncretic, multiple wh-fronting is out. In (48b), the two DPs bear distinct cases, accusative and genitive respectively, but these are syncretic:

(48) a. *Kojem     je  čovjeku kojem     dječaku mrsko  pomogati?
         which.DAT AUX man.DAT which.DAT boy.DAT boring help.INF
        'Which man doesn't feel like helping which boy?'
     b. ??Kojeg     je  čovjeka kojeg     dječaka sram
         which.GEN AUX man.GEN which.GEN boy.GEN ashamed
        'Which man is ashamed of which boy?'

Thus it seems that in some languages syncretism leads to Distinctness violations, while in other languages it does not. In other words, what counts as distinct varies within a language and across languages (see also Lohndal 2012 for further discussion).
The data discussed in this section raise a more general question for Distinctness: why should morphological richness affect syntax? This is especially unexpected under views according to which case morphology is realized on the basis of the hierarchical relations between the arguments (Marantz 2000, Sigurðsson 2009, Schäfer 2012, and others). To account for (48b), Richards must assume that some mechanism interferes with the syntactic representation prior to lexical insertion, such as Impoverishment, which deletes the Case feature in the presence of the feature masculine.
Impoverishment is generally considered a very powerful tool, to be appealed to only if a treatment on the basis of underspecification fails. Impoverishment rules are language-specific but unrestricted. It is not clear why underspecification would fail in the context of (48), but not in the context of e.g. (45) and/or (39).
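The cross-linguistic variation just described can be rendered as a toy model. The sketch below is not part of the chapter's proposal; it simply illustrates how "what counts as distinct" can be parameterized per language, as the text describes: German individuates DPs by case morphology, Greek (for examples like (40)) by animacy, and Serbian by the surface, possibly syncretic, case form. All feature bundles and language settings are invented for exposition.

```python
# Toy Distinctness check: two DPs in one Spell-Out domain clash if the
# language's linearization statement assigns them identical labels.

def linearization_label(dp, language):
    """Return the label a language's linearization statement assigns to a DP."""
    if language == "German":      # case morphology makes DPs distinct
        return ("D", dp["case"])
    if language == "Greek":       # animacy features make DPs distinct
        return ("D", dp["animate"])
    if language == "Serbian":     # syncretic surface forms collapse together
        return ("D", dp["form"])
    return ("D",)                 # bare <D, D>: DPs are never distinct

def violates_distinctness(dps, language):
    """True if two DPs in the same Spell-Out domain receive identical labels."""
    labels = [linearization_label(dp, language) for dp in dps]
    return len(labels) != len(set(labels))

# Two dative wh-phrases, as in the Serbian example (48a):
wh1 = {"case": "DAT", "animate": True, "form": "kojem"}
wh2 = {"case": "DAT", "animate": True, "form": "kojem"}
print(violates_distinctness([wh1, wh2], "German"))   # same case -> True

# Nominative subject vs. accusative object, as in the German example (46):
subj = {"case": "NOM", "animate": True, "form": "welches"}
obj = {"case": "ACC", "animate": False, "form": "welches"}
print(violates_distinctness([subj, obj], "German"))  # distinct cases -> False
```

On this rendering, the problem the text raises is visible directly: the predicate `linearization_label` has to be stipulated per language, which is exactly the inner- and cross-linguistic variation that a uniform Distinctness condition leaves unexplained.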


4.3. Object movement in linker constructions

A final problem is raised by the Khoisan language Ju'hoansi, which has certain constructions where a particle, called a "linker" by Collins (2003), appears between the direct object and a secondary object or nominal ad-positional phrase. In (49a) the linker ko appears between the theme and a locative phrase, in (49b) between the theme and an instrument, and in (49c) it occurs between the two objects of a double object construction, the beneficiary and the theme (from Collins 2003: 1–2):

(49) a. Uto dchuun-a  |Kaece ko n!ana n!ang.
        car hit-TRANS |Kaece ko road  in
        'A car hit Kaece in the road.'
     b. Mi ba     ||ohm-a    !aihn ko |'ai.
        my father chop-TRANS tree  ko axe
        'My father chopped the tree with an axe.'
     c. Besa komm ||'ama-|'an Oba ko tcisi.
        Besa EMPH buy-give    Oba ko things
        'Besa bought Oba some things.'

The conditions under which -a and ko surface are closely related, though not identical. The transitivity suffix -a and the particle ko are both disallowed with transitive verbs, as shown in (50), while they are both required when a locative phrase is added to transitive verbs, as shown in (49a) above.

(50) a. Uto dchuun-(*a) |Kaece
        car hit-TRANS  |Kaece
        'The car hit |Kaece.'
     b. *Uto dchuun-(a) |Kaece ko.
         car hit-TRANS  |Kaece ko
        'The car hit |Kaece.'
     c. *Uto dchuun-(a) ko |Kaece
         car hit-TRANS  ko |Kaece
        'The car hit |Kaece.'

When a locative phrase is added to intransitives, -a is required, but ko is unacceptable:


(51) a. Ha  ku  u
        3SG ASP go
        'He was going.'
     b. Ha  ku  u-a      Tjum!kui.
        3SG ASP go-TRANS Tjum!kui
        'He was going to Tjum!kui.'
     c. Lena koh  djxani-a    tju   n!ang.
        Lena PAST dance-TRANS house in
        'Lena danced in the house.'
     d. *Lena koh  djxani-a    ko tju   n!ang.
         Lena PAST dance-TRANS ko house in

In order to account for the distribution of -a in Ju'hoansi, Collins (2003) argues that: (a) locative phrases are nominal and have a Case feature to check; (b) the transitivity suffix -a is inserted to check the Case of locative phrases. This explains why -a is added in transitives and intransitives. In (49a), transitive v checks the Case of either the DP |Kaece or the PP n!ana n!ang, and the transitivity suffix -a checks the Case of the other argument:

(52)

[tree diagram (52): a vP with the subject DP in its specifier; the complex head [v a v]; and a VP containing the DP |Kaece and the PP complement of V 'hit']

ko is obligatory in transitives. Collins (2003: 15–16) argues that ko is a Last Resort mechanism. It is inserted to provide a landing site for movement in constructions that would otherwise violate a condition which he labels the Multiple Case Condition (MCC):

(53) Multiple Case Condition
     By Spell-Out, VP can contain no more than one argument with a (valued) undeleted Case feature.
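As a minimal sketch, the MCC in (53) can be stated as a predicate over the arguments remaining in VP at Spell-Out. This is only an illustration of the condition as quoted in the text, not Collins's actual formalism; the argument records are invented.

```python
# Toy check of the Multiple Case Condition (53): by Spell-Out, VP may
# contain at most one argument whose (valued) Case feature is undeleted.

def satisfies_mcc(vp_arguments):
    """vp_arguments: list of dicts with a boolean 'case_undeleted' flag."""
    undeleted = [a for a in vp_arguments if a["case_undeleted"]]
    return len(undeleted) <= 1

# A (49a)-style configuration before ko-insertion: the DP and the nominal
# locative both sit in VP with undeleted Case, so the MCC is violated.
dp, locative = {"case_undeleted": True}, {"case_undeleted": True}
print(satisfies_mcc([dp, locative]))  # False

# After ko provides a landing site and the DP moves out of VP:
print(satisfies_mcc([locative]))      # True
```

The two calls track the Last Resort logic described below: ko-insertion is motivated only in the first configuration, where leaving both arguments in VP would make the predicate fail.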


In (52) above, the complex functional head [v a v] has two sets of uninterpretable phi-features, one for a and one for v. Even though two Agree relations can be established, Agree (v, DP) and Agree (a, PP), there are two Case features internal to the VP that need to be deleted at Spell-Out. In order to avoid a violation of the MCC, ko is merged, providing a landing site for one of the two arguments, as in (54):

(54)

[tree diagram (54): a v' headed by the complex [v a v], taking a koP complement; |Kaece has moved to Spec,koP, leaving a trace inside VP, whose V takes the PP as its complement]

Being a Last Resort operation, ko-insertion is triggered only if a violation of the MCC would ensue, which explains why ko is obligatorily absent in intransitives.9 The MCC is a version or a close relative of the SSG. The MCC forces movement of either the direct object or the ad-positional phrase out of the VP when both have structural Case. As in the SSG cases, if one of the VP constituents is extracted by A'-movement, the result is acceptable without ko, as shown in (55):

(55) a. Kaece komm uto dchuun-a  (*ko) n!ama n!ang.
        Kaece EMPH car hit-TRANS       road  in
        'Kaece, the car hit in the road.'
     b. N!ama n!ang komm uto dchuun-a  (*ko) Kaece.
        road  in    EMPH car hit-TRANS       Kaece
        'In the road, the car hit Kaece.'

This pattern is strongly reminiscent of the conditions licensing SI in French. These cases do not raise a problem for the SSG, as the account relies on

See Collins (2003) for further discussion of the constraints on movement of the lower argument in Ju'hoansi double object constructions, which follow from locality considerations.


case-checking relations. In principle, they should not raise a problem for Distinctness either: one could argue that the linker head is a phase head, so that the two DPs end up in different Spell-Out domains. However, these cases are indeed problematic for Distinctness: since PPs and DPs are sufficiently distinct, it is not clear why (i) the transitivity affix -a should be inserted in the case of intransitives when a locative phrase is added, and (ii) insertion of ko is obligatory in the case of transitives. I thus conclude that, when it comes to explaining argument externalization and Case issues, the SSG is superior to Distinctness.

5. The division of labor between Distinctness and the SSG

As has become clear from the discussion up to now, we can clearly distinguish two types of phenomena: those for which both accounts fare equally well and which crucially involve/regulate DP movement (i.e. they are Case-related), and those that can only be captured under Distinctness or a more general condition such as the OCP. In my view, English quotative inversion, French stylistic inversion, the realization of DP-internal arguments, and the phenomena discussed in §4, with the exception of multiple sluicing/fronting, belong to the first group. To these one could add the following data involving Romance causatives in (56). Causatives are also subject to a transitivity restriction: when the caused predicate is transitive, its causee is marked like an indirect object, i.e. it is introduced via the preposition à (56b); when the caused predicate is intransitive, the causee is marked like an object (56a). A detailed discussion of such constructions goes well beyond the scope of this paper, but if one adopts the view that causatives are ergative in nature (Bok-Bennema 1991), and hence allow one structural Case in the embedded clause, an appeal to the SSG is again possible.
(56c) is unacceptable, as the embedded clause contains two DP internal arguments which both have structural Case.

(56) a. Jean a   fait manger  Paul.
        Jean has made eat-INF Paul
        'Jean made Paul eat.'
     b. Jean a   fait manger  la  tarte à  Paul.
        Jean has made eat-INF the pie   to Paul
        'Jean made Paul eat the pie.'


     c. *Jean a   fait manger  la  tarte Paul.
         Jean has made eat-INF the pie   Paul
        lit. 'Jean made eat the pie Paul.'

But there are other cases that cannot easily be accounted for by the SSG, involving doubling of infinitival morphology in Italian (Longobardi 1980) and double -ings in English (Ross 1972):

(57) a. *Giorgio comincia ad amare    studiare.
         Giorgio begins   to love-INF study-INF
     b. Giorgio vuole cercare di eliminare i   rischi.
        Giorgio wants search  to avoid     the risks
        'Giorgio wants to search to avoid the risks.'

(58) a. It continued raining.
     b. *It's continuing raining.

Does the above suggest that something like Distinctness regulates the domains that are not part of argument externalization/Case? I believe two issues arise here. First, are two conditions necessary, or just one? Second, are the conditions part of the computational system or not?
With respect to the first issue, Richards himself acknowledges (2010: 140) that Distinctness may not be all the grammar needs: 'Parts of case theory can be made to follow from Distinctness. But Case theory still has a residue, which will be beyond Distinctness. We still need to understand what role Case plays, in driving movement of DPs. Movement operations for DPs are simply the result of general EPP requirements.' In fact, the SSG was linked to a generalized EPP requirement; see Alexiadou and Anagnostopoulou (2007). I thus believe that two conditions are necessary: a condition like the SSG, which regulates DP-movement, and a condition like Distinctness, which regulates phenomena such as the ones in (57) and (58). In my view, while it is appealing to unify the two, their properties are so distinct that they do not seem to be amenable to one and the same condition. In some cases, as was shown with multiple sluicing, the criteria that allow multiple DPs are simply more complex than formal identity.
With respect to the second question, as already pointed out, there are a number of proposals that precede (or follow) Distinctness that deal with cases like (57) and (58). These crucially differ from Distinctness in that they are part of syntax, as they are stated as conditions on projection formation. Distinctness applies after syntax. Certainly, one would need to contrast Distinctness with these approaches in detail and examine whether any difference can be observed with respect to empirical coverage and/or the predictions each of these theories makes before giving a final answer. Let me only note here that if the OCP is viewed as a 'surface' condition, then Distinctness conforms to that, as it applies at the interface between narrow syntax and phonology. It can thus be thought of as a late 'filter', unlike the conditions on phrase structure-building, which are heavily loaded with information as to their feature constitution.

Acknowledgements

I am indebted to the participants in the GLOW workshop Identity in Grammar in Vienna in May 2011 and to Elena Anagnostopoulou and Terje Lohndal for their comments and input.

References

Ahn, Sang-Cheol, and Gregory K. Iverson
  2004  Dimensions in Korean laryngeal phonology. Journal of East Asian Linguistics 13: 345–379.
Alexiadou, Artemis
  2001  Functional Structure in Nominals: Nominalization and Ergativity. Amsterdam: John Benjamins.
Alexiadou, Artemis, and Elena Anagnostopoulou
  2001  The subject in situ generalization, and the role of case in driving computations. Linguistic Inquiry 32: 193–231.
  2007  The subject in situ generalization revisited. In Interfaces + Recursion, Hans-Martin Gärtner and Uli Sauerland (eds.), 31–60. Berlin/New York: Mouton de Gruyter.
Bok-Bennema, Reineke
  1991  Case and Agreement in Inuit. Dordrecht: Foris Publications.
Collins, Chris
  2003  The internal structure of vP in Ju'hoansi and Hoan. Studia Linguistica 57: 1–25.
Collins, Chris, and Phil Branigan
  1997  Quotative inversion. Natural Language and Linguistic Theory 15: 1–41.


Déprez, Viviane
  1991  Two types of verb movement in French. MIT Working Papers in Linguistics 13: 47–85.
Fanselow, Gisbert
  2001  Minimal Link, phi-features, Case and theta-checking. Linguistic Inquiry 32: 405–437.
Fox, Danny, and David Pesetsky
  2005  Cyclic linearization of syntactic structure. Theoretical Linguistics 31: 1–45.
Haider, Hubert
  1985  The case of German. In Studies in German Grammar, Jindrich Toman (ed.), 65–101. Dordrecht: Foris Publications.
  1993  Deutsche Syntax-Generativ. Tübingen: Narr.
  2005  How to turn German into Icelandic, and derive OV-VO contrasts. Journal of Comparative Germanic Linguistics 8: 1–53.
Heck, Fabian
  2010  Categories, recursion, and bare phrase structure. Ms., Universität Leipzig.
Hoekstra, Teun
  1984  Transitivity: Grammatical Relations in Government and Binding Theory. Dordrecht: Foris Publications.
Johnson, Kyle
  1991  Object positions. Natural Language and Linguistic Theory 9: 577–636.
Kayne, Richard, and Jean-Yves Pollock
  1978  Stylistic inversion, successive cyclicity, and Move NP in French. Linguistic Inquiry 9: 595–621.
Leben, William
  1973  Suprasegmental phonology. Ph.D. dissertation, Massachusetts Institute of Technology.
Lechner, Winfried
  2004  Extending and reducing the MLC. In Minimality Effects in Syntax, Arthur Stepanov, Gisbert Fanselow and Ralf Vogel (eds.), 205–241. Berlin/New York: Mouton de Gruyter.
Lohndal, Terje
  2012  Without specifiers: phrase structure and events. Ph.D. dissertation, University of Maryland.
Lohndal, Terje, and Bridget Samuels
  2010  Linearizing empty edges. Paper presented at On Linguistic Interfaces (OnLI) II, University of Ulster, Belfast.
Longobardi, Giuseppe
  1980  Some remarks on infinitives: a case for a filter. Journal of Italian Linguistics 5: 101–155.


Marantz, Alec
  2000  Case and licensing. In Arguments and Case: Explaining Burzio's Generalization, Eric J. Reuland (ed.), 11–30. Philadelphia: John Benjamins.
Merchant, Jason
  2001  The Syntax of Silence. Oxford: Oxford University Press.
Moro, Andrea
  2000  Dynamic Antisymmetry. Cambridge, MA: MIT Press.
Richards, Norvin
  2010  Uttering Trees. Cambridge, MA: MIT Press.
Ritter, Elisabeth
  1991  Two functional categories in noun phrases: evidence from Modern Hebrew. In Perspectives on Phrase Structure: Heads and Licensing, Susan Rothstein (ed.), 37–62. New York: Academic Press.
Ross, John Robert
  1972  Doubl-ing. Linguistic Inquiry 3: 61–86.
Riemsdijk, Henk C. van
  1988  The representation of syntactic categories. In Proceedings of the Conference on the Basque Language, Basque World Congress Vol. I, 104–116.
  1998  Categorial feature magnetism. Journal of Comparative Germanic Linguistics 2: 1–48.
Schäfer, Florian
  2012  Local case, cyclic Agree and the syntax of truly ergative verbs. In Local Modelling of Non-Local Dependencies in Syntax, Artemis Alexiadou, Tibor Kiss and Gereon Müller (eds.), 273–304. Tübingen: Niemeyer.
Sigurðsson, Halldór
  2009  The No Case generalization. In Advances in Comparative Germanic Syntax, Artemis Alexiadou, Jorge Hankamer, Thomas McFadden, Justin Nuger and Florian Schäfer (eds.), 249–280. Amsterdam: John Benjamins.
Williams, Edwin
  1987  The thematic structure of derived nominals. In Papers from the 23rd Annual Regional Meeting of the Chicago Linguistic Society, Part One: The General Session, Barbara Need, Eric Schiller and Anna Bosch (eds.), 366–375. Chicago, IL: Chicago Linguistic Society.
Wurmbrand, Susi
  2004  Licensing case. Ms., University of Connecticut.

Constraining Doubling

Ken Hiraiwa

1. Introduction

A number of studies have revealed that human language has a mechanism for avoiding adjacent identical elements. One of the most notable achievements in this respect is the discovery of the Obligatory Contour Principle (OCP) in phonology (see Leben 1973, Goldsmith 1976, Odden 1986, Mohanan 1994, Yip 1998, among others). Phonology and morphology are conceived of as involved in "externalization" (Chomsky 2007), and hence one might wonder whether such phenomena are among the variable and complex phenomena inherent to externalization at the sensorimotor interface system. Since the discovery of the OCP, however, various syntacticians have proposed that a similar principle is also at work in the domain of syntax (see Perlmutter 1971; Ross 1972; Menn and MacWhinney 1984; Ackema 2001; Ackema and Neeleman 2003; Neeleman and Van de Koot 2006; Van Riemsdijk 1998, 2008; Hiraiwa 2010a, b; Richards 2010; Corver and Van Koppen 2011).
Neeleman and Van de Koot (2006) observe that syntactic haplology phenomena are subject to two conditions: an identity condition and a syntactic adjacency condition. In the discussion below, I will call elements that have identical phonological forms PF-identical elements. A puzzling but significant property is that PF-identical elements are sometimes (but not always) prohibited from appearing adjacent to each other in the following surface configurations. In the configuration (1a), the PF-identical morphemes M1 and M2 are linearly adjacent and prohibited. The adjacency at work is more lax in the ill-formed case (1b), in that the PF-identical morphemes are not linearly adjacent but rather the phrases containing them are. Let us call the former head adjacency and the latter phrasal adjacency.

(1) a. ... M1 M2 ...            (head adjacency)
    b. ... [X M1] [Y M2] ...    (phrasal adjacency)

As an example of head adjacency, consider the Dutch data (2) from Neeleman and Van de Koot (2006). The demonstrative die is followed by a


PP in the grammatical example (2a). But as shown in example (2b), the same demonstrative cannot be followed by a relative clause whose relative pronoun is PF-identical with the demonstrative die.

(2) Neeleman and Van de Koot (2006: 688–689)
    a. die  met  dat  rooie haar
       that with that red   hair
       'the one with the red hair'
    b. ??die die  dat  rooie haar heeft
        that that that red   hair has
       'the one that has the red hair'

As an example of phrasal adjacency, Ross (1972) observed that the use of two adjacent verbs with the morpheme -ing results in ungrammaticality, as shown in (3d), and proposed the Doubl-ing Constraint (see Richards 2010 for a reinterpretation of Ross's insight).

(3) Ross (1972: 61)
    a. It continued to rain.
    b. It continued raining.
    c. It's continuing to rain.
    d. *It's [continuing] [raining].

(4) The Doubl-ing Constraint (Ross 1972: 78)
    All surface structures containing a subtree of the form,

    [tree diagram: a clause S whose verb V_a bears -ing, immediately dominating an embedded clause S whose verb V_b bears -ing]

in which the node corresponding to Va in remote structure was immediately dominated by Si, and the node corresponding to Vb in remote


structure was immediately dominated by Sj, and in which no S node intervened in remote structure between Si and Sj, are ungrammatical.

It is also well known since Harada (1973) that Japanese is subject to the double-o constraint, which prohibits double accusative -o NPs from appearing adjacent to each other, as shown in (5b) (see Kuroda 1992 and Hiraiwa 2010a, b and the references cited therein for detailed discussion).

(5) Harada (1973: 114–116)
    a. Keesatu-ga    [NP sono doroboo-ga nigeteiku tokoro-o]  tukamaeta.
       policeman-NOM     DEM  thief-NOM  run.away  place-ACC  caught
       'The policeman caught the thief as he ran away.'
    b. ??Keesatu-ga    [NP sono doroboo-o] [NP nigeteiku tokoro-o]  tukamaeta.
        policeman-NOM      DEM  thief-ACC      run.away  place-ACC  caught
       'The policeman caught the thief as he ran away.'

Common to both of the examples above is the fact that multiple adjacent PF-identical elements are prohibited. Thus, breaking either PF-identity or adjacency will make illicit forms grammatical. Neeleman and Van de Koot (2006) note two main strategies to avoid adjacent repetition: deletion and suppletion/coalescence. In addition to these, it is also possible to avoid adjacent repetition by inserting intervening material between two PF-identical elements or by moving one of them away (see Hiraiwa 2010a).
In recent developments of the minimalist program (see Chomsky 2000, 2001, 2004, 2005, 2007, 2008), narrow syntax is conceived of as a computational system that builds up structure by (internal or external) Merge and Agree. In this conception of the general architecture, the output of narrow syntactic computation undergoes various modifications in both the Morphology and the Phonology components on the way to PF. It is then predicted that a principle that has been considered to govern phonological phenomena may also play a role in deriving surface strings from outputs of narrow syntax, as Van Riemsdijk (2008) and others have claimed.
The Morphology and the Phonology components can use various strategies to modify syntactic outputs in order to avoid (1). In this model, I argue that the notion of phase plays a significant role in interfacing narrow syntax and the two interface levels, PF and LF. In particular, some researchers have proposed that phases receive empirical support from considerations of interface properties (Legate 2003; Adger 2007; Ishihara 2004, 2007; Kayne 2005; Kratzer and Selkirk 2007; Hiraiwa 2002, 2010a, b; Richards 2010; and Uriagereka 1999, 2012; for a general comment, see Grohmann 2009). In an earlier work on the double-o constraint (Hiraiwa 2010a, b), I proposed that the OCP also applies in syntactic computation: more specifically, at Spell-Out.

(6)

A Phase Theory of the Double-o Constraint (Hiraiwa 2010a: 753)
Multiple identical occurrences of the structural accusative Case value cannot be morphophonologically realized within a single Spell-Out domain at Transfer.

In this theory, the prohibition against multiple accusative NPs is explained as follows: in (7), CP and vP being phases, more than one accusative NP appearing within VP or within TP undergoes Spell-Out in the same domain and hence violates (6). Put differently, syntactic adjacency within a certain domain is the crucial factor. A square indicates a Spell-Out domain.

(7)

[tree diagram (7): a CP dominating TP, vP and VP, with XP in Spec,CP, YP in Spec,TP (leaving a trace in Spec,vP), WP in Spec,VP and ZP as the complement of V; squares mark the Spell-Out domains]

The important insight of (6) is that what is prohibited is not mere linear adjacency, but rather, syntactic adjacency within a Spell-Out domain. Let us generalize (6) as follows.


(8)


A Phase Theory of Phrasal Adjacency Constraint
Multiple phrases containing PF-identical elements cannot be morphophonologically realized within a single Spell-Out domain at Transfer.
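The phase-based filters in (6) and (8) lend themselves to a simple procedural rendering: inspect each Spell-Out domain separately and reject a derivation in which one domain realizes the same PF-exponent twice. The sketch below is only a toy model of this idea, not Hiraiwa's formal system; the domain partitions and exponent labels are invented for exposition.

```python
# Toy phase-based OCP: one list of PF-exponent strings per Spell-Out domain.

from collections import Counter

def violates_phase_ocp(spellout_domains):
    """True if any single Spell-Out domain realizes some exponent twice."""
    return any(
        count > 1
        for domain in spellout_domains
        for count in Counter(domain).values()
    )

# A (5b)-style derivation: two accusative -o phrases in one domain.
print(violates_phase_ocp([["-o", "-o"], ["-ga"]]))  # True

# Repair by movement: one -o phrase has vacated into a higher domain,
# so each domain now contains at most one occurrence of -o.
print(violates_phase_ocp([["-o"], ["-o", "-ga"]]))  # False
```

The second call illustrates the central point of (6) and (8): what matters is not linear adjacency of the two identical exponents in the whole string, but whether both end up in the same Spell-Out domain.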

The goal of this article is to investigate how far such a phase-theoretic notion of phrasal adjacency can be extended to head adjacency (9), by focusing on three types of DP-internal doubling phenomena in Japanese (see also Van Riemsdijk 2002 for a similar distinction between phrasal and head adjacency). (9)

A Phase Theory of Head Adjacency Constraint
Multiple PF-identical head-adjacent elements cannot be morphophonologically realized within a single Spell-Out domain at Transfer.

As I will show, doubling does not always result in ungrammaticality. I will demonstrate that the proposed phase-based theory (9) explains exactly when doubling is allowed and when it is disallowed, once a phase is carefully identified.
The present article is organized as follows. §2 first investigates cases where a genitive case particle no and a pronominal no appear adjacent to each other and one of them is obligatorily deleted. Then, I look at more complicated cases and show that not all adjacent instances of no are prohibited. I argue that the syntactic OCP operates phase by phase and that linear adjacency should be formulated in terms of a phase. §3 examines cases where linearly adjacent instances of the conjunction particle to are sometimes prohibited but sometimes allowed, and shows that the hypothesis proposed in the previous section correctly predicts their (un)grammaticality. §4 shows that exactly the same analysis applies to the disjunction particle ka. I then examine cases of disjunctive coordination of two embedded questions and show that a disjunction particle is missing there due to exactly the same mechanism. §5 is a conclusion.

2. *no no

2.1. Part One

Kuno (1973b) observes that a prohibition against two adjacent PF-identical particles is more general in Japanese than one might think. He gives the


following examples, where the genitive case particle no and the pronominal no 'one' cannot surface adjacent to each other.1 In (10a), the relativized head noun hon 'book' can be replaced with the pronominal no. This establishes the existence of a pronominal no. Now consider (10b). When the same pronoun is preceded by a genitive phrase NP-no, the surface form with two head-adjacent instances of no is ungrammatical, and one of them is obligatorily deleted, as shown in (10c).

(10) Kuno (1973b: 119)
     a. [[RC John-ga  katta]  hon]  to   [[RC Mary-ga  katta]  {hon/no}]
            John-NOM bought  book  CONJ     Mary-NOM bought  book/PRON
        'the book that John bought and the one that Mary bought'
     b. [[John-no] hon]  to   [[Mary-no] {hon/*no}]
         John-GEN  book  CONJ   Mary-GEN  book/PRON
        'John's book and Mary's (book)'
     c. [[John-no] hon]  to   [[Mary no]]
         John-GEN  book  CONJ   Mary PRON
        'John's book and Mary's (book)'

An important question is why the underlying two adjacent PF-identical elements are reduced to one. Okutsu (1974) and Kamio (1983) proposed, without giving evidence, that it is the genitive case particle no that must undergo haplology in such cases (see also Murasugi 1991).

(11) a. *kono hon-wa   John-no  no   da.
         DEM  book-TOP John-GEN PRON COP
        'This book is John's.'
     b. kono hon-wa   John-no  da.
        DEM  book-TOP John-GEN COP
        'This book is John's.'

In this article, I do not commit myself to the question of which of the two occurrences of no is actually deleted and simply follow Okutsu/Kamio's analysis. Instead, what is important for us is to understand that this phenomenon is another case of the ban on multiple adjacent PF-identical elements.2

See Hiraiwa (2012) for full discussion on the syntax of the pronominal no and ellipsis in NPs. 2 How do we know that what is deleted is the genitive case particle not the pronominal no? In this respect, some data in Murasugi (1991) does not tell us

Constraining Doubling

231

However, it is not the case that two adjacent PF-identical particles are always prohibited. The following examples are grammatical even though two instances of no are head-adjacent. (12) Murasugi (1991: 64) a. Taro-no no Taro-GEN field ‘Taro’s field’ b. no-no hana field-GEN flower ‘flowers of the field’ Murasugi (1991) observes “[T]hese examples suggest that the rule applies only when the genitive case particle no directly precedes the pronoun no.” In (12a), the genitive case particle no precedes a noun no ‘field’. In (12b), the same noun is followed by the genitive case particle no. Similarly, the following examples are perfectly well-formed, despite the fact that multiple adjacent occurrences of no appear. This shows that the identity avoidance constraint only works on the same PF-identical morphemes. The examples (13a) and (13b) are fine because no of Sano or nori is not the same morpheme as the genitive case particle no, just as the English examples (14) are fine. (13) a. Sano-no hon Sano-GEN book ‘Mr. Sano’s book’ b. Sano-no nori Sano-GEN glue ‘Mr. Sano’s glue’ (14) a. sing-ing b. ring-ing Now, we must reevaluate Kuno/Okutsu/Kamio’s analysis that argues that the elliptical form NP-no in (10c) and (11b) is derived from NP-no no by anything decisive. But there are dialects where a genitive case-marking is not required in DP-GEN DP (e.g. Akita dialect and Shuri Okinawan), which suggests that what is missing may be the genitive case particle no in Japanese. There are also languages, such as Korean (An 2009) and Dagaare (Bodomo 1997), that use no genitive case particle in [NP NP N].

232

Ken Hiraiwa

deleting the first genitive case particle no and leaving the second pronominal no. Their *no no constraint is easily subsumed under the general phase theory of head adjacency (9). If a DP constitutes a single Spell-Out domain by itself, the two instances of no are predicted to become illicit, as shown in the following derivation, assuming that the genitive case particle no is realized as a D head. (15)

[DP [nP [CP/AP … ] [n' [n no]]] D]

(16) [DP [nP [DP [NP boku] [D no]] [n' … [n no]]] D]

Importantly, there is corroborating evidence for this haplology approach. Some speakers of certain dialects (e.g. the Nagoya dialect) can use the allomorph n instead of the genitive case particle no when it is preceded by a certain element such as a first or second person pronoun, and followed by the pronoun no.3

3 The exact condition for the use of the allomorph n, although it is an interesting topic in itself, is immaterial for our purpose here. The most common use of n in Standard Japanese is found when it is followed by a particular type of light noun such as ti 'home' and toko(ro) 'place' (see Hiraiwa 2013 for light nouns). In this case, no haplology takes place because no and n are not PF-identical.

   Ken-ga ore-n {ti/toko(ro)}-ni kita.
   Ken-NOM 1SG-GEN home/place-DAT came
   'Ken came to my home/place.'

(17) a. *Sore-wa {ore/kimi}-no no da.
        it-TOP 1SG/2SG-GEN PRON COP
        'It's mine/yours.'
     b. Sore-wa {ore/kimi}-n no da.
        it-TOP 1SG/2SG-GEN PRON COP
        'It's mine/yours.'

There are also some speakers of certain dialects (e.g. the Osaka dialect) who use the allomorph n for the pronoun no in the same context. Again, haplology does not occur.

(18) Sore-wa {ore/kimi}-no n da.
     it-TOP 1SG/2SG-GEN PRON COP
     'It's mine/yours.'

As expected, however, the data in (19) show that it is impossible to replace both occurrences of no with the allomorph n, because that also results in a violation of the constraint on doubling (9).

(19) a. *Sore-wa {ore/kimi}-no no da.
        it-TOP 1SG/2SG-GEN PRON COP
        'It's mine/yours.'
     b. *Sore-wa {ore/kimi}-n n da.
        it-TOP 1SG/2SG-GEN PRON COP
        'It's mine/yours.'

Thus, we consider the use of the allomorph to be a strategy for avoiding two adjacent occurrences of no within the DP domain, as shown below.

(20) [DP [nP [DP [NP ore] [D {n/no}]] [n' … [n {no/n}]]] D]
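The two repair strategies just reviewed, haplology and the use of an allomorph, can be pictured procedurally. The following toy Python sketch is purely illustrative and is not part of the original proposal; the encoding of particles as a list of strings, and the function names, are hypothetical simplifications.

```python
# Toy sketch of the two repairs described in the text: haplology
# (deletion of one of two adjacent identical particles, following
# Okutsu/Kamio) and allomorphy (realizing one particle as n, as in
# the Nagoya and Osaka dialect data). The list-of-strings encoding
# is a hypothetical simplification, not the author's formalism.

def haplology(particles):
    """Collapse adjacent identical particles to a single copy."""
    out = []
    for p in particles:
        if not (out and out[-1] == p):
            out.append(p)
    return out

def allomorphy(particles, allomorph="n"):
    """Replace the first of two adjacent identical particles with an
    allomorph, so that the pair is no longer PF-identical."""
    out = list(particles)
    for i in range(len(out) - 1):
        if out[i] == out[i + 1]:
            out[i] = allomorph
    return out

print(haplology(["no", "no"]))    # ['no'], cf. (11b) John no da
print(allomorphy(["no", "no"]))   # ['n', 'no'], cf. (17b) ore-n no da
```

Note that replacing both tokens with n would simply recreate a PF-identical pair, which is exactly what (19) shows to be ungrammatical.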

2.2. Part Two

Strictly speaking, the data so far do not require a notion of phase. One could say that multiple head-adjacent PF-identical elements are prohibited within a noun phrase. However, there is good evidence that an articulated partition of phrase structure plays a role here. Murasugi (1991) notes an interesting example where two adjacent occurrences of no, genitive or pronominal, are perfectly permitted within a DP, in contrast with (11).

(21) Murasugi (1991: 64)
     [akai no]-no hyoosi
     red PRON-GEN front.page
     'the front page of the red one'

In the example (21), the second no is a genitive case particle, and the first no is derived from pronominalization of an NP (e.g. hon 'book') by no. Thus, the underlying structure for (21) is represented as (22).

(22) [akai hon]-no hyoosi
     red book-GEN front.page
     'the front page of the red book'

What this suggests is that the Spell-Out domains for the first no and the second no are different. In fact, the structure contains two DPs — hence two Spell-Out domains.

(23) [DP [nP [DP [nP [AP akai] [n' [n no]]] [D no]] [n' [NP hyoosi] n]] D]

The same paradigm can also be reproduced with relative clauses. The entire relative clause headed by the pronominal no can be followed by the genitive case particle no.

(24) a. [Ken-ga katta hon]-no nedan
        Ken-NOM bought book-GEN price
        'the price of the book that Ken bought'
     b. [Ken-ga katta no]-no nedan
        Ken-NOM bought PRON-GEN price
        'the price of the one that Ken bought'

Furthermore, it is important to point out that the above structure [NP-no no] can be followed by a genitive case particle no without triggering the identity effect, as shown in (25).

(25) a. [boku-no kuruma]-no mae-ni
        1SG-GEN car-GEN front-at
        'in front of my car'
     b. [boku-no no]-no mae-ni
        1SG-GEN PRON-GEN front-at
        'in front of mine'
     c. Ken-no hon-no hoo
        Ken-GEN book-GEN way
        'Ken's book'
     d. Ken-no no-no hoo
        Ken-GEN PRON-GEN way
        'Ken's book'

The structure of (25b) and (25d) is represented below. (26)

[nP [DP [nP [DP [NP boku] [D no]] [n' [NP …] [n no]]] [D no]] [n' [NP …] [n mae/hoo]]]

The data in (21), (24) and (25) are important because they show that mere linear adjacency considerations do not lead to a correct prediction. They provide evidence for a phase-theoretic partition: in (26), the first and the second instances of no are embedded low enough relative to the third instance of no, and hence no haplology is required between the former and the latter.

(27) A Phase Theory of Head Adjacency Constraint (=(9))
     Multiple PF-identical head-adjacent elements cannot be morphophonologically realized within a single Spell-Out domain at Transfer.

The fact that what is at issue is clearly not phrasal adjacency but rather head adjacency is supported by the data in (28). While the example (28b) is ungrammatical due to the two head-adjacent instances of no, an infinite number of adjacent genitive phrasal modifiers can be tolerated within nP, as shown in the examples (28a) and (28c); the structure of the former is given in (29). If phrasal adjacency were at work here, the examples (28a) and (28c) would never be allowed in Japanese.
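The constraint in (27) lends itself to a procedural paraphrase. The following toy sketch is purely illustrative and is no part of the original proposal: it encodes each Spell-Out domain as a list of (PF form, particle-or-not) pairs in surface order and flags two adjacent particle heads with identical PF forms inside one domain. The encoding and the function name are hypothetical.

```python
# Toy paraphrase of constraint (27): within a single Spell-Out domain,
# two head-adjacent particles with identical PF forms are flagged.
# Hypothetical encoding: each domain is a list of (pf_form, is_particle)
# pairs; lexical material like the noun no 'field' in (12a) is marked
# is_particle=False, so only particle-particle pairs count.

def violates_27(domains):
    for domain in domains:
        for (form1, prt1), (form2, prt2) in zip(domain, domain[1:]):
            if prt1 and prt2 and form1 == form2:
                return True
    return False

# *John-no no (11a): genitive no and pronominal no in one DP domain
print(violates_27([[("no", True), ("no", True)]]))    # True
# [akai no]-no (21): the two no's sit in different Spell-Out domains
print(violates_27([[("no", True)], [("no", True)]]))  # False
# Taro-no no 'Taro's field' (12a): the second no is a lexical noun
print(violates_27([[("no", True), ("no", False)]]))   # False
```

The point of the second case is precisely the phase-theoretic one: linear adjacency alone would wrongly flag (21), whereas the domain partition does not.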


(28) a. Chomsky-no saikin-no haadokabaa-no akairo-no gengogaku-no hon
        Chomsky-GEN recent-GEN hard.cover-GEN red-GEN linguistics-GEN book
        'Chomsky's recent hard-bound red book on linguistics'
     b. *Chomsky-no saikin-no haadokabaa-no akairo-no gengogaku-no no
        Chomsky-GEN recent-GEN hard.cover-GEN red-GEN linguistics-GEN PRON
        'Chomsky's recent hard-bound red one on linguistics'
     c. Chomsky-no saikin-no haadokabaa-no akairo-no gengogaku-no no
        Chomsky-GEN recent-GEN hard.cover-GEN red-GEN linguistics-GEN PRON
        'Chomsky's recent hard-bound red one on linguistics'

(29) [DP [nP [DP NP-no] [DP NP-no] [DP NP-no] [DP NP-no] [DP NP-no] [NP [N hon]] n] D]


Finally, it is predicted that if head adjacency is broken up by an intervening element, the sentence will become grammatical. This prediction is borne out. Even though the sentence is ungrammatical when two occurrences of no are head-adjacent, as in (30a), when there is an adjective intervening between them the sentence becomes perfectly natural, as the grammaticality of (30b) indicates.

(30) a. *Chomsky-no no-wa dore?
        Chomsky-GEN PRON-TOP which
        'Which is Chomsky's?'
     b. Chomsky-no yomiyasui no-wa dore?
        Chomsky-GEN easy.to.read PRON-TOP which
        'Which is Chomsky's easy one?'

(31) [DP [nP [DP Chomsky-no] [n' [AP yomiyasui] [n' [NP …] [n no]]]] D]

To summarize, I have demonstrated that the ban on multiple head-adjacent instances of no is conditioned by phase-based Spell-Out and that haplology remedies the otherwise illicit derivation.

3. *to to

Kuno (1973a,b) observes that in Japanese, conjunction coordination takes the form [A CONJ B], as shown in (32a). Japanese is a language that optionally allows coordination particle doubling, and when doubling takes place, another


to appears at the right periphery, taking the form [A CONJ B CONJ], as shown in (32b).4

(32) a. [Stalin to Roosevelt]-ga kaidan sita.
        Stalin CONJ Roosevelt-NOM meeting did
        'Stalin and Roosevelt had a meeting.'
     b. [Stalin to Roosevelt to]-ga kaidan sita.
        Stalin CONJ Roosevelt CONJ-NOM meeting did
        'Stalin and Roosevelt had a meeting.'

4 I will ignore this issue, as it is immaterial for my purpose here. For coordination, see Johannessen (1998) and Progovac (1998a,b) and the references therein. For various syntactic approaches to coordination in Japanese, see Kasai and Takahashi (2001) and Chino (2013).

When there are more than two conjuncts, each non-final conjunct must be followed by the coordination particle to, as in [A CONJ B CONJ C]. When doubling occurs, another to appears at the right periphery, as in [A CONJ B CONJ C CONJ].

(33) Kuno (1973b: 117)
     a. [Stalin to Roosevelt to Churchill]-ga kaidan sita.
        Stalin CONJ Roosevelt CONJ Churchill-NOM meeting did
        'Stalin and Roosevelt and Churchill had a meeting.'
     b. [Stalin to Roosevelt to Churchill to]-ga kaidan sita.
        Stalin CONJ Roosevelt CONJ Churchill CONJ-NOM meeting did
        'Stalin and Roosevelt and Churchill had a meeting.'

Kuno (1973b) makes an interesting observation here. Suppose that the first conjunct consists of an already coordinated noun phrase [A CONJ B]. If it is coordinated with another noun phrase, this leads to the form in (34a). However, if coordination particle doubling occurs inside the first conjunct, two occurrences of to end up appearing adjacent to each other and the whole expression is severely degraded, as shown in (34b).

(34) Kuno (1973b: 118)
     a. [[Stalin to Roosevelt] to Churchill (to)]-ga kaidan sita.
        Stalin CONJ Roosevelt CONJ Churchill CONJ-NOM meeting did
        'Stalin and Roosevelt and Churchill had a meeting.'

240

Ken Hiraiwa

     b. *[[Stalin to Roosevelt to] to Churchill (to)]-ga kaidan sita.
        Stalin CONJ Roosevelt CONJ CONJ Churchill CONJ-NOM meeting did
        'Stalin and Roosevelt and Churchill had a meeting.'

In order to understand why the sequence in (34b) results in ungrammaticality, it is necessary to clarify the syntax of coordination and doubling. In this article, I will follow Chino's (2013) analysis. Her main proposals are summarized in (35).

(35) a. Logical connectives are head-initial in Japanese (and perhaps universally).
     b. Coordination particle doubling results from a CONJP shell structure (see Progovac 1998a,b) and a local movement of the lower CONJP1 to the specifier of CONJP2.

In addition to this, let us assume that CONJ is a phase head.

(36) A CONJ1 head is a phase head (but a CONJ2 head that appears in doubling is not).

Let us see how this works. (37) is the structure of a coordination [A CONJ B] in Japanese.

(37) [CONJ1P [DP A] [CONJ1' [CONJ1 to] [DP B]]]

When doubling takes place, another CONJP is merged above the existing CONJP, and the lower CONJ1P is moved to the specifier of the higher CONJP, leading to the order A CONJ B CONJ.

(38) [CONJ2P [CONJ2' [CONJ2 to] [CONJ1P [DP A] [CONJ1' [CONJ1 to] [DP B]]]]]

(39) [CONJ2P [CONJ1P [DP A] [CONJ1' [CONJ1 to] [DP B]]] [CONJ2' [CONJ2 to] tCONJ1P]]

With this in mind, consider the derivation of the example (34b), which has three conjuncts.

(40) [DP [CONJ1P [CONJ2P [CONJ1P [DP A] [CONJ1' [CONJ1 to] [DP B]]] [CONJ2' [CONJ2 to] tCONJ1P]] [CONJ1' [CONJ1 to] [DP C]]] D]

It is clear why the example (34b) is ungrammatical: in the final Spell-Out domain, two instances of to, CONJ2 and CONJ1, are head-adjacent in the DP phase (the Spell-Out domain being the complement of D) and hence violate (9).5

Now suppose that A is coordinated with the conjunct [B CONJ C]. Then, we obtain the form (41a). But if doubling also occurs, we obtain (41b). As Kuno observes, what is of particular interest is the fact that the surface form (41c) is indeed grammatical despite the two adjacent occurrences of to at the right periphery. A linear view of adjacency is incapable of explaining the asymmetry here.

5 There seem to be speakers (including one reviewer) who find (34b) grammatical. There are two possible explanations in the configuration (40). One possibility is that in the grammar of those speakers, a CONJ2 head is also a phase head. The other possibility is that in the grammar of those speakers, the trace of CONJ1P interferes with head adjacency (see Jaeggli 1980). I leave the issue for future research. The same reviewer suggests that the ungrammaticality of (34b) is due to two adjacent identical syllables /to/ in [Ru:zuberuto to]. However, that such phonological adjacency has no relevance here has already been shown in (12) and (13), where even three identical syllables can appear adjacent to each other (see Murasugi 1991). Notice also that (32b) is perfect despite the same sequence of two identical syllables.


(41) Kuno (1973b: 118)
     a. [Stalin to [Roosevelt to Churchill]]-ga kaidan sita.
        Stalin CONJ Roosevelt CONJ Churchill-NOM meeting did
        'Stalin and Roosevelt and Churchill had a meeting.'
     b. [Stalin to [Roosevelt to Churchill to]]-ga kaidan sita.
        Stalin CONJ Roosevelt CONJ Churchill CONJ-NOM meeting did
        'Stalin and Roosevelt and Churchill had a meeting.'
     c. [Stalin to [Roosevelt to Churchill to] to]-ga kaidan sita.
        Stalin CONJ Roosevelt CONJ Churchill CONJ CONJ-NOM meeting did
        'Stalin and Roosevelt and Churchill had a meeting.'

The phase-based theory of head adjacency offers an explanation. The derivation is as follows.

(42)

[DP [CONJ2P [CONJ1P [DP C] [CONJ1' [CONJ1 to] [CONJ2P [CONJ1P [DP A] [CONJ1' [CONJ1 to] [DP B]]] [CONJ2' [CONJ2 to] tCONJ1P]]]] [CONJ2' [CONJ2 to] tCONJ1P]] D]


In the final structure, the two instances of to, CONJ2 and CONJ2, although they are linearly adjacent, are in different Spell-Out domains, and hence the outcome is grammatical.

Summarizing this section, I have demonstrated that the phase theory of head adjacency explains the asymmetry in coordination particle doubling, given the assumption that a CONJ1 head is a phase head.

4. *ka ka

4.1. Disjunctive coordination

The disjunction particle ka in Japanese also allows optional doubling. Just like the conjunction particle to, when doubling takes place, another ka appears at the right periphery, taking the form [A DISJ B DISJ], as shown in (43b).

(43) a. [Okinawa ka Hokkaido]-ni iki-tai.
        Okinawa DISJ Hokkaido-DAT go-want
        'I want to go to Okinawa or Hokkaido.'
     b. [Okinawa ka Hokkaido ka]-ni iki-tai.
        Okinawa DISJ Hokkaido DISJ-DAT go-want
        'I want to go to Okinawa or Hokkaido.'

When there are more than two disjuncts, each disjunct except the last one must be followed by ka, as in [A DISJ B DISJ C]. When doubling occurs, another ka appears at the right periphery, as in [A DISJ B DISJ C DISJ].

(44) a. [Okinawa ka Hokkaido ka Hawai]-ni iki-tai.
        Okinawa DISJ Hokkaido DISJ Hawaii-DAT go-want
        'I want to go to Okinawa or Hokkaido or Hawaii.'
     b. [Okinawa ka Hokkaido ka Hawai ka]-ni iki-tai.
        Okinawa DISJ Hokkaido DISJ Hawaii DISJ-DAT go-want
        'I want to go to Okinawa or Hokkaido or Hawaii.'

Now, again, let us imagine contexts where the constituency is [[A DISJ B (DISJ)] DISJ C] or [C DISJ [A DISJ B]]. In the former case, adjacent PF-identical elements surface if doubling occurs inside the first constituent. (45b) is ungrammatical, as expected.


(45) Context: There are two prizes for the game: either a trip to Hawaii or a choice between a trip to Okinawa or Hokkaido. A participant asks what the prizes are and someone answers:
     a. [[Okinawa ka Hokkaido] ka Hawai (ka)] dayo.
        Okinawa DISJ Hokkaido DISJ Hawaii DISJ COP
        'It's a choice of either Okinawa or Hokkaido, or Hawaii.'
     b. *[[Okinawa ka Hokkaido ka] ka Hawai (ka)] dayo.
        Okinawa DISJ Hokkaido DISJ DISJ Hawaii DISJ COP
        'It's a choice of either Okinawa or Hokkaido, or Hawaii.'

In contrast, in the latter case, doubling is grammatical, just as we have seen for conjunctive coordination. All of these patterns naturally follow as long as the disjunction particle DISJ1 is a phase head, on a par with CONJ1.

(46) Context: There are two prizes for the game: either a trip to Hawaii or a choice between a trip to Okinawa or Hokkaido. A participant asks what the prizes are and someone answers:
     a. Hawai ka [Okinawa ka Hokkaido] dayo.
        Hawaii DISJ Okinawa DISJ Hokkaido COP
        'It's Hawaii or a choice of either Okinawa or Hokkaido.'
     b. *Hawai ka [Okinawa ka Hokkaido] ka dayo.
        Hawaii DISJ Okinawa DISJ Hokkaido DISJ COP
        'It's Hawaii or a choice of either Okinawa or Hokkaido.'
     c. Hawai ka [Okinawa ka Hokkaido ka] ka dayo.
        Hawaii DISJ Okinawa DISJ Hokkaido DISJ DISJ COP
        'It's Hawaii or a choice of either Okinawa or Hokkaido.'

4.2. Disjunction and Q-complementizer

A more interesting pattern emerges in the interaction of a disjunction particle and a question complementizer. The disjunction particle ka can also coordinate TPs. In (47b), the declarative complementizer to takes two TPs coordinated by the disjunction particle ka.

(47) a. Boku-wa [CP [TP Ken-ga kuru] to] itta.
        1SG-TOP Ken-NOM come C said
        'I said that Ken would come.'


     b. Boku-wa [CP [[TP Ken-ga kuru] ka [TP Naomi-ga iku]] to] itta.
        1SG-TOP Ken-NOM come DISJ Naomi-NOM go C said
        'I said that Ken would come or Naomi would go.'

As is true of some languages (Jayaseelan 2001), the disjunction particle is homophonous with a question complementizer in Japanese, as shown in (48).

(48) Boku-wa [Ken-ga kuru ka] tazuneta.
     1SG-TOP Ken-NOM come Q asked
     'I asked whether Ken would come.'

Now consider the following embedded question. The complement of the verb of asking is clearly an embedded question. The question here is what the syntactic category of each instance of ka is.

(49) Boku-wa [Ken-ga kuru ka Naomi-ga kuru ka] (dotti nano ka) tazuneta.
     1SG-TOP Ken-NOM come ?? Naomi-NOM come ?? which COP Q asked
     'I asked whether Ken would come or Naomi would come.'

There are three possible structures for (49). One possibility is that the first ka coordinates two TPs and the second ka is a Q-complementizer. The second possibility is that the two TPs are coordinated by the first ka and the second ka is a result of doubling. The last possibility is that both instances of ka are Q-complementizers and a silent disjunction particle coordinates the two embedded questions. How do we determine the structure?

(50) a. [CP [[TP … ] DISJ [TP … ]] Q]
     b. [CP [[TP … ] DISJ [TP … ]] DISJ]
     c. [CP … Q] (DISJ) [CP … Q]

There is good reason to think that both are Q-complementizers. First, notice that the disjunctive adverb soretomo needs to be licensed by a Q-complementizer. Thus, (51a) is ungrammatical because the embedded clause is headed by the declarative complementizer to. On the other hand, (51b) is fine because the complementizer is the Q-complementizer ka.


(51) a. *Boku-wa [[Ken-ga kuru] ka soretomo [Naomi-ga kuru]] to itta.
        1SG-TOP Ken-NOM come DISJ DISJ.Adv Naomi-NOM come C said
        'I said that either Ken would come or Naomi would come.'
     b. Boku-wa [Ken-ga kuru ka soretomo Naomi-ga kuru ka] tazuneta.
        1SG-TOP Ken-NOM come ?? DISJ.Adv Naomi-NOM come Q asked
        'I asked whether Ken would come or whether Naomi would come.'

Second, let us look at so-called stripping patterns in Japanese.

(52) a. Boku-wa [Ken-ga kuru ka] tazuneta.
        1SG-TOP Ken-NOM come Q asked
        'I asked whether Ken would come.'
     b. Boku-wa [Ken-ga ka] tazuneta.
        1SG-TOP Ken-NOM Q asked
        'I asked whether Ken would come.'

As shown in (52b), the Q-complementizer ka must be retained in Japanese sluicing/stripping (Fukaya and Hoji 1999, Hiraiwa and Ishihara 2012). Note that the stranded ka in (52b) cannot be a disjunction particle, because case-marked NPs cannot be coordinated in Japanese, as shown in (53).

(53) a. [Ken ka Naomi]-ga kuru yo.
        Ken DISJ Naomi-NOM come SFP
        'Ken or Naomi will come.'
     b. ?*[[Ken-ga] ka [Naomi-ga]] kuru yo.
        Ken-NOM DISJ Naomi-NOM come SFP
        'Ken or Naomi will come.'

Then, the example (54), the stripping version of (49)/(51b), shows that the two instances of ka have to be Q-complementizers.

(54) Boku-wa [Ken-ga ka] soretomo [Naomi-ga ka] tazuneta.
     1SG-TOP Ken-NOM Q DISJ.Adv Naomi-NOM Q asked
     'I asked whether Ken would come or whether Naomi would come.'


Therefore, the correct structural analysis of (49) must be (50c): there must be a silent disjunction particle coordinating two alternative questions, as represented in (56).

(55) (=(49))
     Boku-wa [[Ken-ga kuru ka] ∅ [Naomi-ga kuru ka]] (dotti nano ka) tazuneta.
     1SG-TOP Ken-NOM come Q DISJ Naomi-NOM come Q which COP Q asked
     'I asked whether Ken would come or Naomi would come.'

(56)

[VP [DP [DISJ1P [CP [TP …] [C ka]] [DISJ1' [DISJ1 ka] [CP [TP …] [C ka]]]] D] V]

The obvious question to ask is why the disjunction particle ka is silent. The answer is already at hand: if the disjunction particle were phonologically realized, the surface string would contain two head-adjacent PF-identical elements within a single Spell-Out domain.

Finally, if the above analysis is correct, the ungrammaticality of (57) has nothing to do with adjacent PF-identical elements. It is simply bad because the doubled final ka cannot appear there in the absence of a preceding disjunction particle.

(57) *[Ken-ga kuru ka] ∅ [Naomi-ga kuru ka] ka (dotti nano ka) tazuneta.
     Ken-NOM come Q DISJ Naomi-NOM come Q DISJ which COP Q asked
     'I asked whether Ken would come or Naomi would come.'


5. Conclusion

In this article, I have examined three cases of identity avoidance in Japanese: *no no, *to to, and *ka ka, and argued that head-adjacent PF-identical elements within a single Spell-Out domain cannot undergo Spell-Out. The present work provides empirical support for the idea that syntactic objects are transferred to the PF interface phase by phase (Chomsky 2001, 2004, 2008).

The remaining question is why there seem to be two different types of adjacency in the domain of identity avoidance in human language: head adjacency and phrasal adjacency. How they are to be unified is a challenging but important issue for us to tackle in the future. Possibly related to this issue is the question of why a chain almost always results in erasure of all but one copy (see Nunes 2004 for a linearization approach to Chain Reduction). Whether this is ultimately linked to identity avoidance in the sense discussed in this paper also remains an open issue.

Acknowledgements

This research has been funded by the JSPS Grant-in-Aid for Young Scientists (B) (No. 22720168). I thank Tomo Fujii, Akira Watanabe, and two anonymous reviewers for helpful comments. I am also grateful to participants of my graduate syntax seminar at Meiji Gakuin University and Kwansei Gakuin University: Shin'ya Asano, Yukiko Chino, Sanae Ezaki, Eriko Hirasaki, Koyuki Ichida, Kazuya Kudo, Nobuhiro Ohuchi, and Hajime Takeuchi.

References

Ahn, Sang-Cheol, and Gregory K. Iverson 2004 Dimensions in Korean laryngeal phonology. Journal of East Asian Linguistics 13: 345–379.
Anderson, John M., and Colin J. Ewen 1987 Principles of Dependency Phonology. Cambridge: Cambridge University Press.
Ackema, Peter 2001 Colliding complementizers in Dutch: another syntactic OCP. Linguistic Inquiry 32: 717–727.


Ackema, Peter, and Ad Neeleman 2003 Context-sensitive spell-out. Natural Language and Linguistic Theory 21: 681–735.
Adger, David 2007 Stress and phasal syntax. Linguistic Analysis 33: 238–266.
An, Duk-Ho 2009 A note on genitive drop in Korean. Nanzan Linguistics 5: 1–16. Center for Linguistics, Nanzan University.
Bodomo, Adams 1997 The Structure of Dagaare. Stanford, CA: CSLI.
Chino, Yukiko 2013 The syntax of coordination in Japanese. M.A. dissertation, Meiji Gakuin University.
Chomsky, Noam 2000 Minimalist inquiries: the framework. In Step by Step: Essays on Minimalist Syntax in Honor of Howard Lasnik, Roger Martin, David Michaels and Juan Uriagereka (eds.), 89–155. Cambridge, MA: MIT Press.
Chomsky, Noam 2001 Derivation by phase. In Ken Hale: A Life in Language, Michael Kenstowicz (ed.), 1–52. Cambridge, MA: MIT Press.
Chomsky, Noam 2004 Beyond explanatory adequacy. In Structures and Beyond: The Cartography of Syntactic Structures, vol. 3, Adriana Belletti (ed.), 104–131. Oxford: Oxford University Press.
Chomsky, Noam 2005 Three factors in language design. Linguistic Inquiry 36: 1–22.
Chomsky, Noam 2007 Approaching UG from below. In Interfaces + Recursion = Language?, Uli Sauerland and Hans-Martin Gärtner (eds.), 1–29. Berlin/New York: Mouton de Gruyter.
Chomsky, Noam 2008 On phases. In Foundational Issues in Linguistic Theory, Robert Freidin, Carlos P. Otero and Maria Luisa Zubizarreta (eds.), 133–166. Cambridge, MA: MIT Press.
Corver, Norbert, and Marjo van Koppen 2011 NP-ellipsis with adjectival remnants: a micro-comparative perspective. Natural Language and Linguistic Theory 29: 371–421.
Fukaya, Teruhiko, and Hajime Hoji 1999 Stripping and sluicing in Japanese and some implications. In The Proceedings of the 18th West Coast Conference on Formal Linguistics (WCCFL), Sonya Bird, Andrew Carnie, Jason D. Haugen and Peter Norquest (eds.), 145–158. Somerville, MA: Cascadilla.
Goldsmith, John 1976 Autosegmental phonology. Ph.D. dissertation, Massachusetts Institute of Technology. Published 1979, New York: Garland.


Grohmann, Kleanthes K. 2009 Interfaces and phases. In InterPhases: Phase-theoretic Investigations of Linguistic Interfaces, Kleanthes K. Grohmann (ed.), 1–22. Oxford: Oxford University Press.
Harada, Shoichi I. 1973 Counter Equi NP deletion. In Annual Bulletin 7: 113–147. Research Institute of Logopedics and Phoniatrics, Tokyo: University of Tokyo.
Hiraiwa, Ken 2002 Facets of case: on the nature of the double-o constraint. In The Proceedings of the 3rd Tokyo Psycholinguistics Conference (TCP 2002), Yukio Otsu (ed.), 139–163. Tokyo: Hituzi Shobo.
Hiraiwa, Ken 2010a Spelling out the double-o constraint. Natural Language and Linguistic Theory 28: 723–770.
Hiraiwa, Ken 2010b The syntactic OCP. In The Proceedings of the 11th Tokyo Conference on Psycholinguistics, Yukio Otsu (ed.), 35–56. Tokyo: Hituzi Shobo.
Hiraiwa, Ken 2012 NP-ellipsis revisited: a comparative syntax of Japanese and Shuri Okinawan. Ms., Meiji Gakuin University, Tokyo.
Hiraiwa, Ken 2013 Decomposition of indefinite pronouns. The Proceedings of the Workshop on Syntax and Semantics at Fuji Women's University, Sapporo, Fuji English Review 1: 53–68.
Hiraiwa, Ken, and Shinichiro Ishihara 2012 Syntactic metamorphosis: clefts, sluicing, and in-situ focus in Japanese. Syntax 15: 142–180.
Ishihara, Shinichiro 2004 Prosody by phase: evidence from focus intonation–wh-scope correspondence in Japanese. In Interdisciplinary Studies on Information Structure 1: Working Papers of SFB 632, Shinichiro Ishihara, Michaela Schmitz and Anne Schwarz (eds.), 77–119. Universität Potsdam.
Ishihara, Shinichiro 2007 Major phrase, focus intonation, and multiple spell-out (MaP, FI, MSO). The Linguistic Review: Special Issue: Prosodic Phrasing and Tunes 24: 137–167.
Jaeggli, Osvaldo 1980 Remarks on To contraction. Linguistic Inquiry 11: 239–246.
Jayaseelan, Karattuparambil A. 2001 Questions and question-word incorporating quantifiers in Malayalam. Syntax 4: 63–93.
Johannessen, Janne Bondi 1998 Coordination. Oxford: Oxford University Press.


Kamio, Akio 1983 Meisiku no koozoo (The structure of noun phrases). In Nihongo no Kihon Koozoo (The basic structure of Japanese), Kazuko Inoue (ed.), 77–126. Tokyo: Sanseido.
Kasai, Hironobu, and Shoichi Takahashi 2001 Coordination in Japanese. In The Proceedings of Formal Approaches to Japanese Linguistics 3 (FAJL3), MIT Working Papers in Linguistics 41, Maria Cristina Cuervo, Daniel Harbour, Ken Hiraiwa and Shinichiro Ishihara (eds.), 19–32.
Kayne, Richard 2005 Movement and Silence. Oxford: Oxford University Press.
Kratzer, Angelika, and Elisabeth Selkirk 2007 Phase theory and prosodic spellout: the case of verbs. The Linguistic Review: Special Issue: Prosodic Phrasing and Tunes 23: 93–135.
Kuno, Susumu 1973a Nihon Bumpoo Kenkyuu (Studies in Japanese Grammar). Tokyo: Taishukan.
Kuno, Susumu 1973b The Structure of the Japanese Language. Cambridge, MA: MIT Press.
Kuroda, Sige-Yuki 1992 Japanese Syntax and Semantics: Collected Papers. Dordrecht: Kluwer.
Leben, William 1973 Suprasegmental phonology. Ph.D. dissertation, Massachusetts Institute of Technology.
Legate, Julie 2003 Some interface properties of the phase. Linguistic Inquiry 34: 506–516.
Menn, Lise, and Brian MacWhinney 1984 The repeated morph constraint: toward an explanation. Language 60: 519–541.
Mohanan, Tara 1994 Case OCP: a constraint on word order in Hindi. In Theoretical Perspectives on Word Order in South Asian Languages, Miriam Butt, Tracy Holloway King and Gillian Ramchand (eds.), 185–216. Stanford, CA: CSLI.
Murasugi, Keiko 1991 Noun phrases in Japanese and English. Ph.D. dissertation, University of Connecticut.


Neeleman, Ad, and Hans van de Koot 2006 Syntactic haplology. In The Blackwell Companion to Syntax, vol. IV, Martin Everaert and Henk van Riemsdijk with Rob Goedemans and Bart Hollebrandse (eds.), 685–710. Oxford: Wiley-Blackwell.
Nunes, Jairo 2004 Linearization of Chains and Sideward Movement. Cambridge, MA: MIT Press.
Odden, David 1986 On the role of the Obligatory Contour Principle in phonological theory. Language 62: 353–383.
Okutsu, Keiichiro 1974 Seisei Nihon Bunpooron (The Generative Grammar of Japanese). Tokyo: Taishukan.
Perlmutter, David 1971 Deep and Surface Structure Constraints in Syntax. New York: Holt, Rinehart and Winston.
Progovac, Ljiljana 1998a Conjunction doubling and 'avoid conjunction principle'. In Topics in South Slavic Syntax and Semantics, Mila Dimitrova-Vulchanova and Lars Hellan (eds.), 25–40. Amsterdam: John Benjamins.
Progovac, Ljiljana 1998b Structure for coordination (Part I and Part II). GLOT International 3(7): 3–6 and 3(8): 3–9.
Richards, Norvin 2010 Uttering Trees. Cambridge, MA: MIT Press.
Riemsdijk, Henk C. van 1998 Syntactic feature magnetism: the endocentricity and distribution of projections. Journal of Comparative Germanic Linguistics 2: 1–48.
Riemsdijk, Henk C. van 2002 The unbearable lightness of GOing: the projection parameter as a pure parameter governing the distribution of elliptic motion verbs in Germanic. Journal of Comparative Germanic Linguistics 5: 143–196.
Riemsdijk, Henk C. van 2008 Identity avoidance: OCP-effects in Swiss relatives. In Foundational Issues in Linguistic Theory: Essays in Honor of Jean-Roger Vergnaud, Robert Freidin, Carlos P. Otero and Maria Luisa Zubizarreta (eds.), 227–250. Cambridge, MA: MIT Press.
Ross, John Robert 1972 Doubl-ing. Linguistic Inquiry 3: 61–86.


Uriagereka, Juan 1999 Multiple spell-out. In Working Minimalism, Samuel D. Epstein and Norbert Hornstein (eds.), 251–282. Cambridge, MA: MIT Press.
Uriagereka, Juan 2012 Spell-Out and the Minimalist Program. Oxford: Oxford University Press.
Yip, Moira 1998 Identity avoidance in phonology and morphology. In Morphology and its Relation to Phonology and Syntax, Steven G. Lapointe, Diane K. Brentari and Patrick M. Farrell (eds.), 216–246. Stanford, CA: CSLI.

Recoverability of deletion

Kyle Johnson

1. Introduction

There is a trivial observation about Ellipsis which should follow from whatever ingredients license it. That observation is what Katz and Postal (1964) called the Recoverability Condition on Deletion, and it is elegantly argued for in Fiengo and Lasnik (1972). It amounts, simply, to the observation that a phrase cannot elide unless there is an antecedent for that phrase. A theory of ellipsis should guarantee that an elided phrase is understood as an anaphor, then, and we should also hope to have a successful account of how that anaphora is resolved: the antecedence conditions on ellipsis.

One way of ensuring that elided material be recoverable is to build it into the ellipsis process itself. That is presently the most popular view, with a couple types of execution. One execution builds on the idea that an elided phrase is a kind of word. As a word, it has a particular denotation and a particular morphology. The morphology is silence, and the denotation is whatever it is that its antecedence conditions require. The idea that ellipsis is a kind of silent word is found in many places. A good extended explication of the idea, with references to many of the others, is in Hardt (1992).

Another execution has many of the same features as the silent word idea, but allows for ellipsis to be a phrase with internal structure. This approach comes from Merchant (2001), who suggests that elided phrases are permitted only when they are in construction with a licensing head. (The idea that ellipses are licensed by nearby heads has antecedents in Zagona (1988b,a) and Lobeck (1987a,b, 1992).) The heads that license ellipsis are responsible, on Merchant's proposal, for specifying how the elided phrase is pronounced and semantically interpreted.
He suggests that these heads are equipped with a feature — an "e-feature" — which says that its complement phrase is silent and has the denotation required to produce the requisite antecedence conditions. Just like the pronoun approach, then, the e-feature bundles the properties of silence and anaphora that ellipsis seems to combine. The salient difference with the pronoun approach is that it allows the elided phrase to continue to be a phrase, and not a word.


Both the pronoun approach and the e-feature approach have the virtue of allowing elided phrases to invoke a specific set of antecedence conditions. The e-feature or pronoun can be equipped with a denotation that is built specifically for the case of ellipsis. That is a virtue because the antecedence conditions for ellipsis are not known to be identical to any invoked by other anaphors. Many treatments of ellipsis follow Tancredi (1992), and attempt to relate the antecedence conditions on ellipsis to those on deaccented material.1 But there are several differences between those conditions that have not been resolved. At present, then, the antecedence conditions on ellipsis appear to be proprietary. To the extent that this is true, it supports the view that they should be built into the denotation of the ellipsis, or its licensor, in the ways just described. If, however, the antecedence conditions on ellipsis could be reduced in full to some other form of anaphora — say, for instance, to the antecedence conditions that hold of deaccented material — then we could entertain a second possibility. We could see ellipsis as invoking no special antecedence conditions at all. All there would be to ellipsis is whatever it is that licenses the unpronounceability of certain constituents. How those ellipses gain the meanings they do would follow from a general condition on the recoverability of unspoken material. Tancredi (1992) is a proponent of this view, and it is given an extended defense in Fox (2000). They argue that the conditions on deaccented material are, by themselves, able to distinguish ellipses from spoken, but deaccented, strings. This paper argues for the second view. I will discuss a case of ellipsis in which there is no antecedent, and consequently no antecedence conditions whatsoever are invoked. Ellipses do not always induce antecedence conditions, then. 
In the cases we will see, the licensor for ellipsis allows the elided clause to be displaced: to be literally spoken somewhere unexpected. A licensor for ellipsis should therefore be thought of as an instruction about how to pronounce a certain phrase. It amnesties that phrase from being pronounced in the "normal" position. When that happens and the result is no pronunciation of the phrase at all, a meaning must be recovered to furnish the rest of the sentence with enough content to produce a denotation. In that case, an antecedent is required, and the antecedence conditions kick in. However, when a licensor amnesties a phrase from being pronounced in the normal position and the result is an alternative placement for the pronunciation of the phrase, there is no need for an antecedent. That is what we will see.

1. See Rooth (1992) and Merchant (2001) for two prominent examples.


Where we will see it is in "Andrews amalgams." After sketching an account of Andrews amalgams, we will look at the central use they make of the licensing condition on Sluicing. There is no anaphora in an Andrews amalgam, however, and so no invocation of antecedence conditions. They are a construction in which an ellipsis process is invoked, but not the antecedence condition which normally accompanies ellipsis.

2. Andrews Amalgams

Lakoff (1974) introduced sentences like (1), and credited their discovery to Avery Andrews. I will follow Kluck (2011) and call them "Andrews amalgams."

(1) Sally will eat [I don't know what] today.

The peculiarity of (1) that makes it interesting is how the bracketed clause (the "interrupting clause") manages to function as the object of the verb, eat, in the "hosting clause." We need to find a way of giving (1) a syntax that allows it to have a meaning parallel to (2).

(2) Sally will eat something today but I don't know what.

Marlies Kluck and Maximiliano Guimarães wrote dissertations in the last few years that provide very complete analyses of Andrews amalgams, and my discussion will take their work as a starting point.2 Two properties of Andrews amalgams that help steer us towards an account are, first, that the interrupting clause has the word order characteristic of root clauses. This is indicated, among other things, by the fact that they must have verb second word order in those languages, like Dutch, where that is required of root clauses.

2. See Guimarães (2004) and Kluck (2011). Kluck puts Andrews amalgams together with "Horn amalgams," illustrated below.

Sally ate [I think it's nattoo] yesterday.

My focus will be on Andrews amalgams, and the account I will provide does not trivially extend to Horn amalgams. Since part of my goal will be to explain where amalgams are possible, that is a serious shortcoming and it should be kept in mind.


(3) a. Bob heeft [je raadt nooit hoeveel koekjes] gestolen. Bob has [you guess never how many cookies] stolen ‘Bob has stolen you’ll never guess how many cookies.’ b.*Bob heeft [je nooit hoeveel koekjes raadt] gestolen. Bob has [you never how many cookies guess] stolen ‘Bob has stolen you’ll never guess how many cookies.’ (Kluck 2011, (14): 55) A companion fact is that most of the interrupting clause is not in the scope of material in the host clause. This can be appreciated by seeing that a pronoun in the interrupting clause cannot function as a variable bound by a quantifier in the host clause ((4a) is ungrammatical) and a name in the interrupting clause does not trigger disjoint reference effects with material in the host clause ((4b) is grammatical). (4) a.*Almost every student1 kissed [he1 didn’t even remember how many classmates]. b.?He1 had been kissing [the professor1 (himself ) didn’t even remember how many students]. compare: *He1 had been kissing many students that the professor1 (himself ) didn’t remember. (based on Kluck 2011, (170): 97 and (195): 102) These facts lead to the conclusion that the interrupting clause and the clause it interrupts are, in some sense, two independent sentences. That also corresponds to the fact that the interpretation we aim for conjoins these two clauses, as (2) does. On the other hand, Kluck shows that the interrupting clause is positioned syntactically within the host clause in just the places that it should be if it were fulfilling the semantic role that it seems to fulfill. In (1), for instance, it seems to provide the object of ate, and it can be in just the places that objects can be in this sentence. (5) a. i. Sally will eat [I don’t know what] today. ii. Sally will eat [the rutabagas] today. b. i. Sally will eat today [I don’t know what]. ii. Sally will eat today [the rutabagas]. c. i. *Sally will [I don’t know what] eat today. ii.*Sally will [the rutabagas] eat today.


And when an interrupting clause seems to provide the object of a preposition, it can only be in the position that objects of prepositions can be. (6) a. i. Sally stepped into [I won’t say what] on her way home. ii. Sally stepped into [the nattoo pot] on her way home. b. i. *Sally stepped into on her way home [I won’t say what]. ii.*Sally stepped into on her way home [the nattoo pot]. c. i. *Sally stepped [I won’t say what] into on her way home. ii.*Sally stepped [the nattoo pot] into on her way home. And when an interrupting clause seems to play the role of an adjective, it can only appear in the positions that adjectives can. (7) a. i. Sally ate [I don’t know how smelly] a dinner. ii. Sally ate [so smelly] a dinner. b. i. *Sally ate a dinner [I don’t know how smelly]. ii.*Sally ate a dinner [so smelly]. c. i. Sally became [I don’t know how sick] yesterday. ii. Sally became [so sick] yesterday. d. i. *Sally became yesterday [I don’t know how sick]. ii.*Sally became yesterday [so sick]. We therefore want the interrupting clause to be enough a part of the hosting clause syntactically that we can use the normal syntax to determine its position. One way of doing that would be to let a part of the interrupting clause be in the hosting clause, but leave all the rest of it outside. Van Riemsdijk (1998, 2000, 2006), Wilder (1998) and Guimarães (2004) suggest doing that with multidominant phrase markers. This would give to (1) a representation something like (8). (8)


Then we could rely on the linearization scheme's desire to map phrases onto contiguous strings to put all of the interrupting clause where the shared material is. Imagine, for concreteness, that there is a violable constraint on the linearization of syntactic structures of the form in (9).

(9) Contiguity
Assign a violation mark to a phrase if the strings assigned to each of its immediate daughters do not have an adjacent edge.

Contiguity requires that phrases map onto contiguous strings of formatives. This is sometimes violated (as when movement applies), but Contiguity will otherwise enforce it. Let us assume that the linearization of a phrase marker satisfies Contiguity as best as it can, minimizing its violation. Contiguity, then, will be violated only to the extent required by other properties of the phrase marker being linearized.

Contiguity will force the interrupting clause to be adjacent to the shared material. To see why, consider how Contiguity will judge the two strings in (10). (Throughout, I will assume that the principles responsible for linearizing English phrase markers do the right thing; that is, they put heads and Specifiers first.)

(10) a. Sally [should]a a [ [eat]b [I don't [know]c ] b,c [what]] today
b. Sally [should]a [I don't [know]c ] a [ [eat]b b,c [what]] today

I've bracketed constituents that are sisters and attached matching labels to their edges. The grammatical string, (10a), has one violation of Contiguity: what is not adjacent to eat and that causes the VP headed by eat to be assigned a violation mark. (10b), by contrast, has two violations of Contiguity: know is not adjacent to what, causing its VP to get a violation mark, and should is not adjacent to the VP headed by eat, causing the IP which should heads to receive a violation mark. As this comparison highlights, the best place for the interrupting clause will be adjacent to the shared material.
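The violation tally in (10) can be checked mechanically. The sketch below is my own toy encoding, not part of the chapter's proposal: each daughter is represented by the set of words it dominates, and a daughter's "string" is approximated by the span from its first to its last word.

```python
def violations(phrases, order):
    """Count daughter pairs whose strings lack an adjacent edge,
    i.e. violations of Contiguity (9), for a given word order."""
    pos = {w: i for i, w in enumerate(order)}
    count = 0
    for left, right in phrases:
        a = [pos[w] for w in left]
        b = [pos[w] for w in right]
        # adjacent edge: one daughter's span ends right before the other begins
        if max(a) + 1 != min(b) and max(b) + 1 != min(a):
            count += 1
    return count

# Daughter pairs from (8)/(10): the VP headed by eat, the VP headed by know,
# and the IP pairing should with the VP headed by eat.
phrases = [({"eat"}, {"what"}),
           ({"know"}, {"what"}),
           ({"should"}, {"eat", "what"})]

order_10a = "Sally should eat I don't know what today".split()
order_10b = "Sally should I don't know eat what today".split()

print(violations(phrases, order_10a))  # 1 (the VP headed by eat)
print(violations(phrases, order_10b))  # 2 (the VP headed by know, and the IP)
```

On this toy encoding (10a) beats (10b), matching the tally in the text.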
That incurs a violation of Contiguity by separating the shared material from its other sister, but because the shared material has two sisters, there is no avoiding one violation of Contiguity. Putting the interrupting clause anywhere else will not only separate it from the shared material, thereby invoking the unavoidable violation of Contiguity, it will also cause two other sisters to be separated. Thus, the best place for the interrupting clause will be adjacent to the shared phrase; this is the only spot that incurs no unnecessary violations of Contiguity.3

Adopting (8), then, provides a way of accounting for where the interrupting clause gets linearized. There are other facts which can be construed as evidence for (8) as well.4 For one thing, the shared material, unlike the rest of the interrupting clause, does seem to be within the scope of stuff in the hosting clause. Unlike what we saw in (4), a quantifier in the hosting clause can bind a pronoun in the shared material ((11a) is grammatical), and a name in the shared material does invoke disjoint reference effects with material in the hosting clause ((11b) is ungrammatical).

(11) a. Almost every student1 kissed [you can't imagine how many of his1 classmates]. (based on Kluck 2011, (169): 97)
b. *He1 kissed [I don't know how many of the professor1's students].

That follows from (8) because (8) puts the wh-phrase inside the hosting clause, and therefore within the scope of the material it contains.

The central problem with (8) is that the shared material, the wh-phrase, seems to make different semantic contributions to the hosting clause and the interrupting clause. In the interrupting clause, the wh-phrase is part of a sluice. The what in (8) is not the object of know, but the remnant wh-phrase of an embedded question that has been sluiced. Kluck (2011) makes an excellent case for this. Among other things, we find the characteristic "form-matching" facts that are indicative of ellipsis.5 The morphological form of the wh-phrase in a sluice matches that which would be found from its putative source.

(12) a. Er will jemandem schmeicheln, aber sie wissen nicht wem.
he will someone-DAT flatter, but they know not who-DAT
'They will flatter someone, but they don't know who.'
b. *Er will jemandem schmeicheln, aber sie wissen nicht wen.
he will someone-DAT flatter, but they know not who-ACC
'They will flatter someone, but they don't know who.'

And the same is true in precisely the same way of Andrews amalgams.

(13) a. Bea wollte [ich weiss nicht mehr wem] schmeicheln.
Bea wanted [I know not anymore who-DAT] flatter
'Bea wanted to flatter I don't know who.'
b. *Bea wollte [ich weiss nicht mehr wen] schmeicheln.
Bea wanted [I know not anymore who-ACC] flatter
'Bea wanted to flatter I don't know who.'
(Kluck 2011, (83): 184)

To put the sluice in (8), we'd have to embrace something like (14).

(14)

3. With one exception: the very front of the sentence. Kluck shows that this position is unavailable for the interrupting clause even when that would put it adjacent to the shared material. So something independent rules out this option. (I will connect this to (48).)

4. Uli Sauerland raises an interesting problem that speaks against (8), however. He observes that an elided VP can find its antecedent in an Andrews amalgam, and when it does, it doesn't seem to distinguish the two VPs in it that (8) claims exist.

Sally ate I don't know what yesterday, and Jill did too.

The second conjunct in the above example has the meaning: and Jill ate I don't know what yesterday, too. I suspect this is because neither of the independent VPs in (8) are salient enough to act as antecedents for an ellipsis (see Hardt and Romero (2004) for an account of saliency), and so the ellipsis is resolved by creating an antecedent from the information in both VPs (see Webber (1978) for a parallel case and an account).

5. This is an argument found in Ross (1969) and extended in Merchant (2001).


We want the hosting clause to mean something like "Sally ate something." So what must be interpreted as "something" in the host clause. But in the interrupting clause, that very same what must be interpreted as, well, "what." That's the problem.

Kluck (2011) proposes a solution to this problem that builds on Tsubomoto and Whitman (2000). She points to examples like (15), which has many of the same characteristics as Andrews amalgams.

(15) Sally ate something — I don't know what — yesterday (Kluck 2011: 293ff)

The parenthetical clause in (15) also acts as if it isn't within the scope of the hosting clause. Building on an unpublished manuscript by Jan Koster, and on the program developed in Mark de Vries's dissertation and subsequent papers,6 Kluck suggests that there is a special Merge operation that joins otherwise independent clauses to material in the host clause. This special form of Merge does not subordinate the embedded clause in a way that scopal phenomena, like variable binding, are sensitive to. For (15), we might have the representation in (16).

(16)

The dashed line is meant to signify the special form of Merge involved here. An Andrews amalgam, Kluck suggests, is essentially (15), but with a silent object. If we represent the silent object with "e," then (1) would have the structure in (17).

6. See de Vries (2006), for instance.

(17)

What this structure approximates, then, is something like what is happening in (18). (18) Sally ate, but I don’t know what. In (18) too, there is an implicit object of ate, and we might equate that implicit argument with “e.” On Kluck’s view, Andrews amalgams are just (18), but with the special instance of Merge responsible for bringing the two clauses together. How does this capture the fact that the shared material, and only the shared material, in the interrupting clause seems to be within the scope of the hosting clause? This, Kluck argues, follows from the fact that the shared material has been moved out of an elided clause that is identical to the hosting clause. Thus, because movement induces reconstruction effects (see (19)), it will appear as if the shared material is in the scope of the hosting clause when it is actually only within the scope of the partially elided interrupting clause. (19) A: Almost every linguist1 criticized some of his1 work. B: Sure, sure: but you’re not saying how much of his1 work almost every linguist1 criticized. (based on Kluck 2011, (3): 172)


On this view, (11a) would get the representation in (20), and reconstruction would explain why his can be bound by almost every linguist. (20)

A problem for Kluck’s analysis is that the implicit argument in (18) has properties that are not found in Andrews amalgams. First, implicit arguments of the sort found in (18) are sensitive to the verb they are arguments of. Some verbs do not allow their arguments to be implicit. The direct object of send, for instance, cannot be silent. (21) *Sally sent to Mary. compare: Sally sent something to Mary. But an Andrews amalgam is licit in these cases. (22) Sally sent [I don’t know what] to Mary. The silent argument that is needed for the Andrews amalgam in (22) needs to be blocked in (21). An Andrews amalgam can be anywhere an overt DP, AP or PP can be, whereas implicit arguments are in a much narrower range of cases. These don’t look like the same thing.


Second, although the implicit object in (18) is interpreted as an existentially quantified variable, its existential force cannot out-scope anything else in the sentence. Thus, in (23) the existential quantifier binding the implicit object is necessarily within the scope of negation. (23) Sally can’t eat. ≈ Sally can’t eat anything. For this reason, Sluices are not licensed by antecedents whose implicit argument is in the scope of negation.7 (24) *Sally can’t eat and I don’t know what. But this is not the case for Andrews amalgams. (25) Sally can’t eat [I don’t know what]. ≈ There’s something Sally can’t eat and I don’t know what that is. I think these facts teach us that there is no implicit argument introduced into the host clause by the interrupting clause. The term that is understood as “something” in the host clause is not a silent existential to which the interrupting clause is joined. Instead, I want to explore the idea that the term which is understood as “something” in the host clause is just the shared wh-phrase. That will mean that the shared wh-phrase gets a different interpretation in the hosting and interrupting clauses. I believe we should embrace the apparently problematic conclusion that the shared wh-phrase gets a different interpretation in each of the clauses it stands in. What we need to do is understand how a wh-phrase is capable of getting different interpretations. That a wh-phrase can have two meanings is, in some sense, uncontroversial. When a wh-phrase moves, it must involve two interpretations. If we adopt the remerge theory of movement, a constituent question gets a representation like that in (26).

7. See Romero (2000) for discussion of this fact.


(26)

The wh-phrase is in two positions, though pronounced in English only in the higher of these. We want it to introduce a variable in the lower position that is bound by something near the higher position. In many languages, we can see that the variable that a wh-phrase invokes is bound by a functional head. In Japanese, for instance, where wh-phrases are pronounced in their lower position, we can see that what binds the variable introduced by that wh-phrase depends on what nearby functional heads there are. In (27), for example, dono gakusei ('which student(s)') functions as the variable-part of a constituent question.

(27) (Kimi-wa) dono-gakusei-ga nattoo-o tabe-tagatte-iru-to omoimasu-ka?
(you-TOP) which-student-NOM natto-ACC eat-desirous-be-C think-Q
'Which student do you think wants to eat natto?'

That is because the question complementizer ka binds that variable. In (28), however, dono gakusei is a variable bound by the universal quantifier mo.

(28) [dono gakusei-ga syootaisita sensei]-mo odotta.
[which student-NOM invited teacher]-mo danced
'The teacher that every student invited danced.'
(Shimoyama 2006, (1b): 139)

A nice way of unifying English with these languages is to let the silent Q complementizer bind the variable introduced by which dish in (26). This
means that which dish cannot be semantically interpreted in its higher position, and the denotation that wh-phrases have in English is, like the interpretation of dono-phrases in Japanese, just an open variable which gets closed by some local operator.8 Note, then, that a phrase which resides in two positions, as which dish does in (26), need be semantically interpreted in only one of those positions. The normal requirement that everything in a syntactic representation must be interpreted by the semantic component must be allowed to permit (29).

(29) If a term is in more than one position in a phrase-marker, it need be semantically interpreted in only one of them.

There is a difference between English and Japanese style languages, and that's that the wh-phrases of English can only show up when they are bound by the Q morpheme. Unlike Japanese, wh-phrases in English are not capable of getting their quantificational force from some other operator. We can describe this difference between English and Japanese with (30).

(30) The Question morpheme in English must be pronounced as the determiner of the DP it binds.

Whether (30) holds in a language or not might correlate with whether the functional heads which act as binders are overtly pronounced.9 They are in Japanese, but they are not in English. If so, (30) might be seen as a reflex of the more general Principle of Full Interpretation.10

(31) Principle of Full Interpretation
Every terminal in a phrase marker must have a morphological reflex.

In this case, the Q morpheme has its morphological reflex in the determiner. I'll follow Adger and Ramchand (2005), Kratzer (2005) and Cable (2010) and take this to happen through the agency of Agreement. Cable (2010) argues for a locality condition on the Agreement relation which can be seen as the cause for the wh-phrase's movement. Q and the determiner it Agrees with must be essentially adjacent to each other, on Cable's view.11 In what follows, I'll assume that what forces movement of a wh-phrase in English is not semantic; perhaps it is (31).

8. See Hagstrom (1998, 2000) and Kishimoto (2005).
9. See Cheng (1991), but also Cable (2010) for some qualifications.
10. This is inspired by Chomsky (2000).
11. Cable makes use of two operators, one that resides in Complementizer position and another that is in construction with the phrase that moves. My discussion here conflates those two.

To implement the idea that a wh-phrase introduces a variable bound by a higher operator, I'll adopt the approach to questions introduced by Hamblin (1973). On that view, the denotation of a wh-phrase is a set of alternatives. The denotation of what, for instance, is a set of alternative things. Alternatives combine semantically in a point-wise fashion, so that the denotation of eat what, for instance, is a set of alternative VP meanings, each differing just in the thing that serves as object. If what introduces the alternatives nattoo, durian and balut, for instance, then ⟦eat what⟧ will be the alternatives ⟦eat nattoo⟧, ⟦eat durian⟧ and ⟦eat balut⟧. When all the material in a clause has been combined up in this way, what we'll have is a set of alternative propositions. This is what the Q morpheme combines with to form a question. On Hamblin's treatment, a question is a set of alternative propositions of a certain kind, and so the difference between what Q gets and what it produces is minimal. See Beck (2006) and Cable (2010) for discussion.

Now reconsider the multidominant representation of Andrews amalgams. If we add to it the elements of the account of wh-phrases just sketched, we get (32).

(32)


In (32), what is in three places. I'll step through these positions and describe how it is evaluated in each.

(33) Daughter of VP†
a. Not pronounced here because (single) wh-phrases must be pronounced closer to Q (and because this is within the sluice).
b. Semantically interpreted as the object of ate.

(34) Daughter of CP
a. Pronounced here because English requires the Specifier of a CP to be phonologically filled by a wh-phrase.
b. Not semantically interpreted here.

(35) Daughter of VP
a. Pronounced here for whatever reason the objects of transitive verbs must be overt in English.
b. Semantically interpreted as the object of ate.

The alternatives that what provides are used by Q to make the complement of know a question. Its semantic contribution to the interrupting clause, then, is as the term that generates the alternatives from which a question denotation is derived. In the host clause, however, something different happens. We must assume that there is a silent operator which uses the alternatives generated by what to produce a meaning equivalent to one produced by something. A similar thing happens with the wh-phrases in Tlingit, the language that forms the backbone of Cable (2010)'s analysis, and Cable argues that in Tlingit sentences where wh-phrases get such an interpretation there is an existentially bound choice function that operates on the alternatives. Let us adopt a similar strategy for our case. I'll give an informal description of how it works. The choice function, F, takes a set of alternatives as its argument and returns one of those alternatives as its output.

(36) F(A) = a, where A is a set of alternatives and a is one of those alternatives.

Assume that clauses can contain a silent existentially quantified F. If one of these is in the host clause of (32), this would produce a representation like (37).


(37)

The meaning of TP† will be a set of alternatives of the form indicated in (38).

(38) { ⟦Sally ate x yesterday⟧ : x an alternative introduced by what }

If we assume that the denotation of a clause is a description of a particular situation, perhaps the one under discussion, then the set in (38) can be understood to be a set of alternative descriptions of that situation. The root TP in (37) will have a meaning that is derived from letting F choose one of these alternatives. That meaning is paraphrased by (39).

(39) ∃F [F(⟦TP†⟧) is a true description of the situation]

(39) says that one of the ways of describing the situation is in the set of alternatives provided by ⟦TP†⟧. That is truth-functionally equivalent to the meaning of Sally ate something yesterday,12 and so this derives the desired meaning for the host clause.

12. See Cable (2010: 72ff) for a proof.
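The composition just described can be mimicked in a few lines. This is my illustrative sketch with made-up alternative values, not the chapter's formalism: Q returns the Hamblin alternative set as a question meaning, while existential closure over a choice function asks whether some alternative truly describes the situation.

```python
# Toy alternative set for "what", using the chapter's example foods.
what = ["nattoo", "durian", "balut"]

# Point-wise composition: one proposition per alternative, approximating
# the alternative set the text sketches for the meaning of TP-dagger.
tp_alts = [f"Sally ate {x} yesterday" for x in what]

def Q(alts):
    """Hamblin question meaning: the set of alternative propositions."""
    return set(alts)

def exists_F(alts, holds):
    """Existentially closed choice function, roughly: some F picks
    an alternative that truly describes the situation."""
    return any(holds(p) for p in alts)

# A situation in which Sally ate durian:
holds = lambda p: p == "Sally ate durian yesterday"

print(exists_F(tp_alts, holds))  # True, i.e. "Sally ate something yesterday"
```

The same alternative set thus feeds either a question (via Q, in the interrupting clause) or an existential declarative (via F, in the host clause).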


What this analysis claims, then, is that the set of alternatives that a wh-phrase introduces can be used either to form a question or to form a declarative existential sentence. In the case of Andrews amalgams, both of these possibilities are exploited: the question meaning forms the sluice of the interrupting clause, and the declarative existential meaning is delivered by the host clause. If this is correct, it must be ensured that the existential meaning is not produced by wh-phrases outside of amalgams. It is not the case that a wh-in-situ, for instance, can be interpreted as anything but an interrogative phrase. Sentences like (40) do not have an interpretation like that produced for (41).

(40) What did Sally give to whom?
(41) What did Sally give to someone?

What makes amalgams different from cases like (40) is that the wh-phrase in the amalgam is in two different sentences, only one of which is an interrogative. If we can derive (42), then we will correctly limit to amalgams the ability of a wh-phrase to be interpreted as a non-interrogative indefinite.

(42) A wh-phrase's denotation must always be part of the meaning of a question.

I don't know how to derive (42), but it seems likely to me that strengthening the relationship between Q and the determiner that expresses its morphology to something more semantic is in the right direction. See Beck (2006) for some techniques that might be exploited here.

Unlike Kluck's solution, this account provides an explanation for why Andrews Amalgams can be found with obligatorily transitive verbs: the wh-phrase serves as the object of those verbs. And it explains why the indefinite in the hosting clause can have wider scope than implicit arguments are able to. The scope of the wh-phrase in the host clause will be wherever the existentially bound choice function is inserted. Implicit arguments must get their existential force in some other way, a way that allows them only the narrowest of scopes.
I do not know what existentially closes off the free variable introduced by implicit arguments, nor why that has the consequences it has for their scope. Nor do I know what determines where the existentially quantified choice function which I am suggesting interprets the alternatives introduced by wh-phrases can be. I cannot therefore provide an explanation for why these two things have the scopes they do. But unlike Kluck's account, mine can explain why they do not have the same scopes.

3. Sluicing

There are two important problems with the account just sketched. Both of these problems have to do with the shape that Andrews amalgams can have. One problem is that the account does not determine which of the two sentences that make up the amalgam is the host and which interrupts. The other problem is that the account does not force the interrupting clause to have a sluice in it. This section argues that these two problems are connected. The solution has the consequences for licensing ellipses that were advertised at the beginning of the paper.

The account in the previous section forces the two sentences that make up an Andrews amalgam to be linearized as one string. Contiguity requires that the shared material be as adjacent as possible to the sister it has in each of the sentences it stands in. I argued that this drives the interrupting clause inside the host clause, putting it as close to the wh-phrase as possible. In fact, Contiguity drives one of the sentences into the other, but it doesn't determine which sentence that is. Reconsider the structure we are entertaining for Andrews amalgams and let's walk through how Contiguity judges it.

(9) Contiguity
Assign a violation mark to a phrase if the strings assigned to each of its immediate daughters do not have an adjacent edge.


(43)

We can ignore what Contiguity would say about the phrases within CP‡ because it is elided. Contiguity will assign one violation mark if the daughters of VP‡ are not adjacent, that is, if ate and what are not adjacent. And it will assign one violation mark if the daughters of VP† are not adjacent, that is, if know and what are not adjacent. Because the other principles of English linearization require what to follow both ate and know, it is not possible for what to be simultaneously adjacent to both of them. Unavoidably, then, VP‡ or VP† will trigger a violation of Contiguity. All other violations of Contiguity are avoidable, however, and so the winning linearizations will be ones in which VP† alone violates Contiguity or ones in which VP‡ alone violates Contiguity. In the linearization that actually emerges, (44), VP‡ violates it.

(44) Sally ate I don't know what yesterday.

But the linearization in (45), where only VP† violates Contiguity, is also allowed.

(45) I don't know Sally ate what yesterday.
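The violation counting just walked through can be sketched as a small program (an illustration added here; reducing VP‡ and VP† to the word sets of their daughters is a simplification, not the chapter's formalism):

```python
# Count Contiguity violations (definition (9)) for candidate strings.
# Each phrase is reduced to the words of its two immediate daughters; a
# phrase violates Contiguity if no word of one daughter is string-adjacent
# to a word of the other in the candidate string.

def violates_contiguity(candidate, daughter_a, daughter_b):
    words = candidate.split()
    pos_a = [i for i, w in enumerate(words) if w in daughter_a]
    pos_b = [i for i, w in enumerate(words) if w in daughter_b]
    return not any(abs(i - j) == 1 for i in pos_a for j in pos_b)

phrases = {"VP(ate, what)": ({"ate"}, {"what"}),
           "VP(know, what)": ({"know"}, {"what"})}

for cand in ["Sally ate I don't know what yesterday",   # (44)
             "I don't know Sally ate what yesterday"]:  # (45)
    marks = [name for name, (a, b) in phrases.items()
             if violates_contiguity(cand, a, b)]
    print(cand, "->", marks)
```

Each candidate incurs exactly one mark, one from each VP, so Contiguity alone cannot choose between them.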


This must be prevented. Something chooses (44) over (45); something specifies which clause is driven inside the other. In Kluck (2011), this is connected to the observation that the clause which is linearized inside the other clause is understood to be the speaker's comment about the proposition denoted by the other clause. She does this by building this meaning into the operation that merges the interrupting clause into the host clause. The proposal here cannot make use of that idea, since the interrupting clause is not being subordinated to the host clause, but I would like to preserve Kluck's idea that these two facts are connected. Luis Vicente observes that in discourses there is an ordering on clauses of the sort that we see here too. The ordering in (46a) is much more natural than the ordering in (46b), for instance.

(46) a. Sally ate something yesterday. I don't know what.
     b. I don't know what. Sally ate something yesterday.

Even without the anaphora invoked by the sluice in (46), there is a preference for putting the sentence that denotes the proposition about which you wish to comment before the sentence that denotes the comment.

(47) a. Paul Ryan claims to run marathons. I hate lying stooges to American capitalism.
     b. #I hate lying stooges to American capitalism. Paul Ryan claims to run marathons.

(47a) is a coherent discourse, where the second sentence expresses my comment about the matter introduced in the first sentence. (47b), by contrast, cannot have that structure and so does not make a coherent discourse. Let's assume, then, that the principles structuring a discourse have the effect described in (48).

(48) The left edge of P must precede the left edge of a clause that is a comment on P.

This will correctly choose (44) over (45). The other problem with the account concerns Sluicing, and it's this problem that bears on the nature of the licensor for ellipsis. The problem is that an Andrews amalgam must have a sluiced clause in it.
If (43) is pronounced with the material inside CP‡ spoken, the result is ungrammatical.


(49) * Sally ate I don’t know what Sally ate. Moreover, if neither of the sentences that are brought together in an Andrews amalgam have material that can be sluiced, the result is ungrammatical. An Andrews amalgam only happens when a sluice can, and does, occur. To see that Andrews amalgams are licensed only in environments where Sluicing happens, we need to know first what those environments are. Sluicing elides the TP part of questions, and nothing else. It distinguishes TPs embedded in indirect questions from TPs embedded in free relatives, for instance. That is responsible for the contrast in (50). (50) a. Mary ate something, and I’ll tell you what Mary ate. b.*Mary danced some time last week, and I danced when Mary danced. As we’ve seen, interrupting clauses can contain indirect questions from which a TP has been dropped. But if the interrupting clause contains a free relative, then the TP inside it cannot drop. (51) *Mary danced I’m pleased when. This is perfectly general. Andrews amalgams only arise when the host clause matches a clause in the interrupting clause that has sluiced. What we’re looking for is something that will allow Andrews amalgams only where sluices are permitted.13 13

Michal Starke points out, however, that not all kinds of sluices are possible in Andrews amalgams, as the contrast in (i) illustrates. (i) a. Sally ate some balut, but I don’t know what else. b. *Sally ate I don’t know what else yesterday. This particular case might follow from the requirement that else places on the antecedent of sluices. There is no antecedent in an Andrews amalgam. What I will derive here is that an Andrews amalgam requires a sluice, not that every kind of sluice can produce an Andrews amalgam. An anonymous review offers another interesting example where this happens. Sluicing can apply to a clause inside a subject, as in (iia), but Andrews amalgams aren’t grammatical in this environment, as the contrast with (iib) indicates. (ii) a. He devoured some cookies, but how many cookies is completely unclear. b.* He devoured [how many cookies] is completely unclear.

Recoverability of deletion

277

Deriving this property is one of the central goals of Guimarães (2004). His proposal has two parts. He suggests a structure, slightly different from (43), that will ensure that the standard linearization schemes fail. Then he devises a linearization scheme purpose built for that structure that has the right outcome. I will adopt his structure, but suggest a different way of resolving the linearization problem it creates. Guimaräes’s structure aims to explain why (49) is ungrammatical. What is needed is something that prevents the sluiced material from being spoken. Note that in Andrews amalgams, the sluiced material is part of the same string which holds its antecedent. We could use this feature of amalgams to force sluicing if we could find a way of preventing the sluiced material from being pronounced in the same string as its antecedent. One way of achieving that goal in frameworks which allow multidominant phrase markers is to let the material which can only be pronounced once be the same material given two positions. Under normal circumstances, when a phrase has two positions in a phrase marker, the linearization algorithm cannot let the material associated with that phrase show up twice in the string.14 For instance, consider how the structure in (52) gets linearized. (52)

The amalgam in (iib) is improved if it occurs in extraposed position, rather than in subject position, as in (iii). (iii)

He devoured it’s completely unclear how many cookies.

This is reminiscent of Ross (1967)’s Sentential Subject Constraint, and so suggests that the sluice in an amalgam is, like movement, sensitive to island effects. And yet, as Vidal Valmala Elguea points out, in general, Andrews amalgams are not sensitive to island effect. I find (iv) an improvement on (iib), for instance. (iv) He devoured I don’t know anyone who knows how many cookies. See Nunes (1999, 1996, 2004) for discussion of the special cases. The technique described here for preventing a double pronunciation of material in two positions is in its essentials the one proposed in Nunes’s work. 14


YP is an immediate daughter of both XP and BP. Imagine that the linearization statements for the structure that (52) illustrates are satisfied if y precedes the material in AP or follows b. (YP, for instance, might be a wh-phrase that has moved in English, with b representing a verb and XP representing a CP.) The linearization scheme will be blocked from allowing y to both precede the material in AP and follow b, however. The strings that standard linearization schemes will permit are those in (53), but not (54).

(53) a. yab
     b. aby

(54) yaby

(54) is prevented by Kayne (1994)'s Antisymmetry.

(55) Antisymmetry
     If α precedes β in a linearization, then β cannot precede α in that linearization.

In (54), y both precedes and follows a, and this is what Antisymmetry forbids. In general, then, there must be a mechanism that permits a structure like (52) to avoid a violation of Antisymmetry. There must be a mechanism that allows one of the positions a phrase occupies to be ignored by the linearization algorithm, yielding the licit possible linearizations in (53). Multidominant representations, then, yield licit strings only when something licenses a partial linearization of them, thereby avoiding violations of Antisymmetry. Guimarães exploits this method to explain the ungrammaticality of (49). He suggests that the host clause and the clause that is sluiced are the same clause. We can incorporate this suggestion into the analysis here with (56).
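The Antisymmetry check in (55) can be stated as a few lines of code over precedence pairs (an illustration added here, using the abstract strings in (53) and (54)):

```python
# Antisymmetry (55): collect every ordered pair <a, b> with a preceding b
# in the string, and reject the string if any pair also occurs reversed.

def precedence_pairs(tokens):
    return {(a, b) for i, a in enumerate(tokens) for b in tokens[i + 1:]}

def satisfies_antisymmetry(tokens):
    pairs = precedence_pairs(tokens)
    return not any((b, a) in pairs for (a, b) in pairs)

print(satisfies_antisymmetry(list("yab")))   # True: (53a)
print(satisfies_antisymmetry(list("aby")))   # True: (53b)
print(satisfies_antisymmetry(list("yaby")))  # False: in (54), y precedes and follows a
```

Pronouncing shared material in both of its positions always produces such a reversed pair, which is why one position must be exempted from linearization.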


(56)

In (56), it is not the wh-phrase that is shared, but instead a TP containing that wh-phrase which is shared. This TP combines with the Complementizer heading the CP which know selects, and it combines with a Complementizer that forms the CP which constitutes the host clause. This shared TP will have the set-of-alternatives meaning described in the previous section (see (38)). When that meaning is combined with the Complementizer holding Q, the resulting CP will have the meaning of a question. When that denotation is combined with the complementizer that lies in the hosting clause, it will produce a CP with the meaning of a declarative existential sentence. This requires that the choice function, F, which operates on the alternatives to produce this reading, be very high in the hosting clause. It cannot be in the shared part, but must instead reside in the complementizer or higher.15 The semantics sketched in the previous section, then, transfers over straightforwardly to (56).

15 This predicts, correctly I believe, that the existential quantification in the hosting clause must be sentence-wide. In the previous section, we saw that the existential quantification associated with the object in sentences like Sally hasn't eaten I don't know what yesterday need not be within the scope of negation. In fact, however, this existential quantification can't be within the scope of negation.

So also do the effects of Contiguity. The boxed TP in (56) must be linearized so that its left edge precedes the left edge of TP†. This, recall, is meant to follow from the ordering that seems natural for comments and the propositions they are comments on. Just as before, this pits the VP headed by ate against the VP headed by know: only one of these can satisfy Contiguity. If the boxed TP must start the string, then letting the VP headed by ate violate Contiguity permits a linearization in which no other violations of Contiguity arise. That is the best linearization possible, and it corresponds, correctly, to Sally ate I don't know what. Adopting (56), then, fits the semantic and linearization interpretations developed in the previous section without problem. But it changes how the material in the interrupting and host clauses scope. The facts discussed in the previous section — that the wh-phrase is in the scope of both clauses and that the rest of the interrupting clause is not in the scope of the hosting clause — continue to be accounted for by (56). But (56) makes the perverse prediction that the hosting clause will be in the scope of the interrupting clause, and this seems false. Principle C effects are not triggered in (57b), and a bound interpretation of his in (57a) isn't possible.

(57) a. *His1 favorite child ate no father1 can guess what.
     b. Sally1's daughter ate she1 can only guess what.

If (56) is to be maintained, we need an explanation for why the host clause does not seem to be in the scope of material found in the interrupting clause. I think we should adopt a solution to this problem suggested to me by Omer Preminger. The essential ingredient in that solution is that the semantics offered for (56) will screw up any of the phenomena that are used to diagnose what the scopal relation between interrupting and hosting clause is. Let me illustrate this solution by considering how the bound variable interpretation in (57a) would arise.

Under the present proposal, (57a) would have the representation in (58).


(58)

The semantics requires that the denotation of TP‡ be capable of fitting into both of the independent sentences: TP† and CP†. I have focused on how that allows what to get a different interpretation in each of these sentences, but everything else in the shared TP‡ must also have a denotation that finds the right evaluation in both of TP† and CP†. Arguably, this is not possible for the bound pronoun his1. The bound variable interpretation of his is achieved by giving it the same index as that of its binder, here no father, and then interpreting those indices in the appropriate way. Presumably this will require the index on the pronoun to confer the bound variable denotation on the pronoun only if it is in the scope of the other index-bearing expression, that is, its binder. This is what ensures that a pronoun can be interpreted as a variable bound to something if it is in the scope of that binder. The normal semantics, then, will allow the pronoun to have the bound variable interpretation in TP†. But, for the same reasons, it will prevent that pronoun from having a bound variable interpretation in CP†. In CP†, there is no other coindexed expression that allows the index on his to be interpreted in the requisite way. There is no way, then, to give the pronoun in (58) the denotation of a bound variable and also let TP‡ have a denotation that fits the interpretations given to both TP† and CP†. The absence of a bound variable interpretation for the pronoun in (57a) is not showing us that the host clause isn't in the scope of the independent clause, on this view. It arises because the host clause is shared material. In general, the semantic interpretation of Andrews amalgams will prevent any term in the shared material from having a meaning that depends on being in the scope of some item, X, if X is only found in the interrupting clause. But this is precisely the sort of configuration needed to determine whether the host clause is in the scope of material in the interrupting clause. That is, to show that the host clause is the shared material, and therefore in the scope of stuff in the interrupting clause, requires finding an example in which the interpretation of something in the host clause arises only when it is in the scope of something that is just in the interrupting clause. The semantics, then, robs us of most, perhaps all, methods of determining whether the host clause is in the scope of material in the interrupting clause. Perhaps the disjoint reference effect in (57b) still provides a successful method of determining what the relative scopes of host and interrupting clause are? In (57b), the absence of a disjoint reference effect between Sally and she is expected if Sally is not in the scope of she.

(57b) Sally1's daughter ate she1 can only guess what.

This is a test for scope that is free of the problem described for (57a), and so perhaps it indicates that the host clause is not, as (56) predicts, within the scope of terms in the interrupting clause. And yet, here too there is a confound that blocks this conclusion. On one popular explanation for the type of disjoint reference effect that would be expected to arise in (57b), it arises when an alternative method of expressing the same referential dependency could be achieved by bound pronoun anaphora. Reinhart (1983a,b) argues that the disjoint reference effect in examples such as (59a) arises just where the bound variable anaphora in (59b) is licensed.

(59) a. *She1 can only guess what Sally1's daughter ate.
     b. She1 can only guess what her1 daughter ate.

If that is correct, then the absence of a disjoint reference effect in (57b) would follow from the ungrammaticality of the bound variable interpretation for the pronoun in (57a). It too would be expected even if Sally, and so the host clause, is in the scope of she. If this thinking is on the right track, everything we've seen about Andrews amalgams is consistent with giving them the parse in (56). This representation makes the host clause the same clause that occupies the position where Sluicing occurs. As a consequence, Antisymmetry blocks (49), where it is pronounced in both positions.

(49) Sally ate I don't know what Sally ate.

Sally and ate both precede and follow know, among other words, and Antisymmetry forbids this. (56), then, provides an explanation for why those Andrews amalgams which have a configuration that permits Sluicing must invoke Sluicing. What we require now is an account of why Andrews amalgams must always have a configuration which permits Sluicing. This, I will argue, follows from the thesis that Andrews amalgams always involve shared material in a double-rooted structure. Structures with this geometry will always violate Antisymmetry, and so will require something that amnesties them from that violation. They will require something that allows the linearization algorithm to ignore one of the positions that the shared material occupies. The configuration that licenses Sluicing is that something. To avoid the violations of Antisymmetry that structures like (56) invoke, Guimarães (2004) suggests something else. He devises an ingenious linearization algorithm that applies to old-fashioned phrase markers as well as ones like (56), and in both cases produces the right result. This algorithm is engineered so that anytime there is a structure such as (56), it will linearize the shared material so that it begins the string, and place the other clause in the correct interior position. That, as it turns out, is too powerful, however. As we've seen, Andrews amalgams are always built upon sluices. What is needed, then, is something that allows the Antisymmetry violation to be avoided only in contexts where Sluicing is allowed. Guimarães's linearization scheme would allow Andrews amalgams in other environments as well.
We should conclude that the same thing which licenses Sluicing licenses the shared TP in an Andrews amalgam to be linearized according to both of its positions. Suppose, for instance, that the question complementizer licenses Sluicing. For this complementizer to license both Sluicing and Andrews amalgams, it must have the effect described in (60).

(60) A term that licenses ellipsis allows its sister to not be submitted to the linearization algorithm.


When the sister to an ellipsis licensor is not submitted to the linearization algorithm, the words in that phrase are not assigned the positions in the resulting string that are associated with being in that particular position. If the question complementizer is the licensor, for instance, then the words in the TP that is its sister will not be linearized so that they all follow the question complementizer. In the case of Sluicing, this has the effect of eliding that material. In the case of Andrews amalgams, this has the effect of allowing the material to be linearized according to its other position in the phrase marker. Because a violation of Antisymmetry would result if the shared material in an Andrews amalgam had to be linearized in both its positions, the ellipsis licensor is required to amnesty one of those positions from the linearization algorithm. This is why Andrews amalgams can only occur where sluices are licensed, and why sluicing is required in Andrews amalgams.

If (60) is the correct way of describing the licensing condition on ellipsis, however, it does not fit with the ways described at the outset that couple the licensing condition with a specification of its antecedence condition. If the licensing condition on ellipsis were necessarily coupled with a denotation that specified what its antecedent must be, then it would not be able to function grammatically in Andrews amalgams. There is no antecedent to the "elided" phrase in an Andrews amalgam. In an amalgam, the licensing condition on ellipsis is doing nothing more than allowing the phrase that is unspoken in one position to be spoken elsewhere. These cases, then, speak on behalf of a theory of ellipsis which divorces the conditions that allow a phrase to be elided from the conditions that indicate how that elided phrase's meaning is recovered. I think this means that the way an elided phrase recovers its meaning is not given by a procedure dedicated to ellipsis.
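The effect of (60) can be sketched with invented clause pieces (the word lists and functions below are illustrative stand-ins for the structure in (56), not the chapter's formalism):

```python
# The shared TP occupies two positions. If the ellipsis licensor exempts the
# copy under the question complementizer from linearization, each word is
# ordered only once; otherwise words recur, and a repeated word both
# precedes and follows the intervening material, which Antisymmetry forbids.

SHARED_TP   = ["Sally", "ate", "what"]  # the doubly-dominated material
INTERRUPTER = ["I", "don't", "know"]

def pronounce(licensor_present):
    # the host-clause position of the TP is always linearized; the sluice-site
    # copy is skipped when the licensor exempts it (the wh-remnant stays out)
    sluice_copy = [] if licensor_present else ["Sally", "ate"]
    return SHARED_TP[:2] + INTERRUPTER + [SHARED_TP[2]] + sluice_copy

def violates_antisymmetry(words):
    return len(words) != len(set(words))  # any repeated word yields a reversed pair

good = pronounce(True)   # Sally ate I don't know what
bad  = pronounce(False)  # *Sally ate I don't know what Sally ate, cf. (49)
print(" ".join(good), violates_antisymmetry(good))  # False: licit
print(" ".join(bad), violates_antisymmetry(bad))    # True: blocked
```

Only the presence of the licensor, not any antecedence condition, separates the licit string from the blocked one, which is the point of the argument.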
Andrews amalgams teach us that ellipsis is nothing more than allowing a phrase to go unpronounced in a particular position. The antecedence conditions that normally accompany that operation have nothing to do with it. They arise, I speculate, because the sentences that are partially pronounced will not otherwise secure a meaning. The antecedence conditions on ellipsis must be entirely reduced to something else, perhaps the conditions that hold of deaccented material.

Acknowledgements

My thanks to Marlies Kluck for teaching me everything about amalgams, and to Vidal Valmala Elguea for a careful and insightful critique of the paper. His many ideas could fill an additional (and better!) paper on this subject. I've also been helped by discussions with Rajesh Bhatt, Luis Vicente, Michal Starke and Uli Sauerland. A sharp audience at New York University helped shake out a few other problems. And, finally, my thanks to an anonymous reviewer.

References

Adger, David, and Gillian Ramchand
1953 Merge and move: wh-dependencies revisited. Linguistic Inquiry 36: 161–193.

Beck, Sigrid
2006 Intervention effects follow from focus interpretation. Natural Language Semantics 14: 1–56.

Cable, Seth
2010 The Grammar of Q: Q-particles, Wh-Movement and Pied-Piping. Oxford: Oxford University Press.

Cheng, Lisa Lai-Shen
1991 On the typology of wh-questions. Ph.D. dissertation, Massachusetts Institute of Technology.

Chomsky, Noam
2000 Minimalist inquiries: the framework. In Step by Step: Essays on Minimalist Syntax in Honor of Howard Lasnik, Roger Martin, David Michaels and Juan Uriagereka (eds.), 89–156. Cambridge, MA: MIT Press.

Fiengo, Robert, and Howard Lasnik
1972 On nonrecoverable deletion in syntax. Linguistic Inquiry 3: 528.

Fox, Danny
2000 Economy and Semantic Interpretation. Cambridge, MA: MIT Press.

Guimarães, Maximiliano
2004 Derivation and representation of syntactic amalgams. Ph.D. dissertation, University of Maryland.

Hagstrom, Paul
1998 Decomposing questions. Ph.D. dissertation, Massachusetts Institute of Technology.
2000 The movement of question particles. In Proceedings of the 30th North East Linguistic Society (NELS), Masako Hirotani, Andries Coetzee, Nancy Hall and Ji-yung Kim (eds.), 275–286. Amherst, MA: Graduate Linguistic Student Association.

Hamblin, Charles
1973 Questions in Montague grammar. Foundations of Language 10: 41–53.

Hardt, Daniel
1992 VP ellipsis and semantic identity. In Proceedings of the Stuttgart Ellipsis Workshop, Steve Berman and Arild Hestvik (eds.). Stuttgart.

Hardt, Daniel, and Maribel Romero
2004 Ellipsis and the structure of discourse. Journal of Semantics 21: 375–414.

Katz, Jerrold J., and Paul Postal
1964 An Integrated Theory of Linguistic Descriptions. Cambridge, MA: MIT Press.

Kayne, Richard S.
1994 The Antisymmetry of Syntax. Cambridge, MA: MIT Press.

Kishimoto, Hideki
2005 Wh-in-situ and movement in Sinhala questions. Natural Language and Linguistic Theory 23: 1–51.

Kluck, Marlies
2011 Sentence Amalgamation. Groningen: Landelijke Onderzoekschool Taalwetenschap.

Kratzer, Angelika
2005 Indefinites and the operators they depend on: from Japanese to Salish. In Reference and Quantification: The Partee Effect, Gregory N. Carlson and Francis Jeffrey Pelletier (eds.), 113–142. Stanford, CA: CSLI.

Lakoff, George
1974 Syntactic amalgams. In Papers from the 10th Annual Regional Meeting of the Chicago Linguistic Society, Michael W. La Galy, Robert Fox and Anthony Bruck (eds.), 321–344. Chicago, IL: Chicago Linguistic Society.

Lobeck, Anne
1987a Syntactic constraints on ellipsis. Ph.D. dissertation, University of Washington, Seattle.
1987b VP ellipsis in infinitives: Infl as a proper governor. In Proceedings of the 17th North East Linguistics Society (NELS), J. McDonough and B. Plunkett (eds.), 425–441. Amherst, MA: Graduate Linguistics Students Association.
1992 Licensing and identification of ellipted categories in English. In Proceedings of the Stuttgart Ellipsis Workshop, Steve Berman and Arild Hestvik (eds.). Stuttgart.

Merchant, Jason
2001 The Syntax of Silence: Sluicing, Islands, and the Theory of Ellipsis. Oxford: Oxford University Press.

Nunes, Jairo
1996 On why traces cannot be phonetically realized. In Proceedings of the 26th North East Linguistic Society (NELS), Kiyomi Kusumoto (ed.), 211–226. Amherst, MA: Graduate Linguistics Students Association.
1999 Linearization of chains and phonetic realization of chain links. In Working Minimalism, Samuel Epstein and Norbert Hornstein (eds.), 217–249. Cambridge, MA: MIT Press.
2004 Linearization of Chains and Sideward Movement. Cambridge, MA: MIT Press.

Reinhart, Tanya
1983a Anaphora and Semantic Interpretation. Chicago, IL: University of Chicago Press.
1983b Coreference and bound anaphora: a restatement of the anaphora questions. Linguistics and Philosophy 6: 47–88.

Riemsdijk, Henk C. van
1998 Trees and scions – science and trees. In Festweb Page for Noam Chomsky. Cambridge, MA: MIT Press.
2000 Wh-prefixes, the case of wäsch in Swiss German. In Naturally! Linguistic Studies in Honour of Wolfgang Ulrich Dressler, Chris Schaner-Wolles, John Rennison and Friedrich Neubarth (eds.), 423–431. Torino: Rosenberg and Sellier.
2006 Towards a unified theory of wh- and non-wh-amalgams. In In Search of the Essence of Language Science: Festschrift for Professor Heizo Nakajima, Yubun Suzuki, Mizuho Keiso and Ken-ichi Takami (eds.), 43–59. Tokyo: Hitsuji Shobo.

Romero, Maribel
2000 Antecedentless sluiced wh-phrases and islands. In Ellipsis in Conjunction, Kerstin Schwabe and Ning Zhang (eds.), 195–220. Tübingen: Niemeyer.

Rooth, Mats
1992 Ellipsis redundancy and reduction redundancy. In Proceedings of the Stuttgart Ellipsis Workshop, Steve Berman and Arild Hestvik (eds.). Stuttgart.

Ross, John Robert
1967 Constraints on variables in syntax. Ph.D. dissertation, Massachusetts Institute of Technology.
1969 Guess who? In Papers from the Fifth Regional Meeting of the Chicago Linguistic Society, Robert I. Binnick, Alice Davison, Georgia M. Green and Jerry L. Morgan (eds.), 252–286. Chicago, IL: Chicago Linguistic Society.

Shimoyama, Junko
2006 Indeterminate phrase quantification in Japanese. Natural Language Semantics 14: 139–173.

Tancredi, Christopher
1992 Deletion, deaccenting and presupposition. Ph.D. dissertation, Massachusetts Institute of Technology.

Tsubomoto, Atsuro, and John Whitman
2000 A type of head-in-situ construction in English. Linguistic Inquiry 31: 176–183.

Vries, Mark de
2006 The syntax of appositive relativization: on specifying coordination, false free relatives, and promotion. Linguistic Inquiry 37: 229–270.

Webber, Bonnie
1978 A formal approach to discourse anaphora. Ph.D. dissertation, Harvard University.

Wilder, Chris
1998 Transparent free relatives. ZAS Working Papers in Linguistics 10: 191–199.

Zagona, Karen
1988a Proper government of antecedentless VPs in English and Spanish. Natural Language and Linguistic Theory 6: 95–128.
1988b Verb Phrase Syntax: A Parametric Study of English and Spanish. Dordrecht: Kluwer.

On the loss of identity and emergence of order: Symmetry breaking in linguistic theory

Wei-wen Roger Liao

1. Introduction

This article explores the topics of symmetry and symmetry breaking in linguistics, which have been relatively understudied concepts in theoretical linguistics, yet are considered essential and fundamental in many other theoretical sciences. These topics have gained attention in recent years, as a "biolinguistic" approach is adopted to study the design of the language faculty.1 Echoing the central theme of this volume, we shall approach symmetry and symmetry breaking from the perspectives of identity and loss of identity, and the dynamics between the two conflicting forces. This approach can be directly carried over to a group-theoretical consideration of symmetry, where the notion of symmetry is defined by the identity functions among a group of objects under transformations. We will show that when symmetry is understood in a group-theoretical sense, symmetry and symmetry breaking can be more easily detected in linguistic patterns. We also apply Curie's principles of symmetry and symmetry breaking in the theory of linguistics, and by doing so, we can more precisely characterize both the symmetric and the asymmetric sides of the design of the human language faculty. In this article, we claim that symmetry breaking can be adopted as a unified source for fundamental linguistic principles. More radically, we argue that the architecture of linguistic computation (or the "Y-model") from the internal Narrow Syntax to the external linguistic interfaces is a natural reflection of the process of symmetry breaking; therefore, each linguistic pattern is a piece of the broken symmetry derived from the intrinsic symmetry of Narrow Syntax.

1 To my knowledge, Sportiche (1983) makes the first attempt at incorporating the concept of symmetry into transformational generative syntax. It is not until Kayne (1994), however, that the topics of symmetry and asymmetry are seriously examined in generative grammar. (For topics related to (a)symmetry in (bio)linguistics, see Brody 2003, Citko 2011, Di Sciullo 2002, Freidin and Vergnaud 2001, Hiraiwa 2005, Jenkins 2000, Moro 2000.)


Before beginning the linguistic discussion, it is important to illustrate the notions of symmetry, symmetry breaking, and their relations to the loss of identity. For simplicity, we shall keep our discussion rather informal. Under group-theoretical considerations, symmetry is defined by identity (or invariance) under transformation. To illustrate, as shown in (1), under a rotational transformation, an equilateral triangle has three members in its symmetry group (there will be more when other transformations are taken into consideration):

(1) The symmetry group of an equilateral triangle under rotation
    [diagram: three equilateral triangles with vertices labeled 1, 2, 3, each rotated 120 degrees from the last]
Each 120-degree rotation results in an identical equilateral triangle (should we not mark the numerals for reference, it would be as if the transformations never took place). The above is a simple case of a symmetry group. Modern developments in theoretical physics further suggest that many apparently asymmetric patterns can actually be regarded as "broken" symmetry, in which the original symmetry seems to be lacking in each produced effect (as with a broken mirror, we may only see its pieces). In such cases, however, the theory of symmetry (breaking) indicates that the underlying symmetry can be restored in more generalized forms, for example, when we consider global phenomena (sometimes, this is only possible in theory). To illustrate with a concrete example, we may think of a perfectly symmetric pencil standing on its lead tip on a flat surface — a highly symmetric (yet extremely unstable) state. With even a slight disturbance, the pencil will fall in one direction, and each observed falling event of the pencil is not symmetric (since every time the pencil falls down, it falls in a random direction) (see (2)). The symmetry in the original system is not lost, however. It is only hidden from view. If we consider all the possible falling events of the pencil and all the possible directions it may fall, we obtain another (more abstract) type of symmetry: a rotational symmetry (which can be represented by an imaginary circle). This generalized symmetry is transformed from the original symmetry in the system. The "broken" symmetry of the system is, therefore, restored by rotational symmetry (Lederman and Hill 2004: 191):

On the loss of identity and emergence of order
Wei-wen Roger Liao

(2) The restored symmetry of the falling pencil
    [figure: the directions in which the pencil may fall, together forming a circle around its tip]

This type of hidden (broken) symmetry illustrates the “conservation” principle of symmetry (and lack of symmetry) pioneered by Pierre Curie in 1894. This principle is summarized in (3):

(3) Curie’s Principle of Symmetry (Curie 1894, Koptsik 1983, Stewart and Golubitsky 1992)
    a. If certain causes produce certain effects, then the symmetries of the causes reappear in the effects produced.
    b. Equivalently, if certain effects reveal a certain lack of symmetry, then the lack of symmetry will be reflected in the causes that give rise to it.

In other words, the underlying symmetry is never lost, although we are frequently misled by the apparent asymmetries in the produced effects. Another side of the principle of symmetry concerns the dynamics between symmetry and lack of symmetry in a system. According to the principle, when different asymmetric causes are imposed on a system, they may cumulatively lower or break the symmetries in the observed effects, but the effects will reflect only the exact amounts of asymmetry present in their causes; no extra asymmetries will be imposed on the system. This amounts to saying that symmetries will be maximally preserved (though they may be translated into another symmetric form) except where symmetry-breaking factors interfere.

The concept of symmetry breaking has proven useful in many branches of science for uncovering the intrinsic symmetry behind apparent asymmetry and for accounting for the pattern formation (and crystallization) that results from the loss of symmetry and its transformation. The question that concerns us here is whether symmetry breaking can also be adopted in theoretical linguistics, to reconstruct the symmetry (and/or asymmetry) of language design and to describe the formation of linguistic patterns. In terms of identity and non-identity relations in linguistics, the principle of symmetry predicts that an identity relation in the input form should be conserved even when its output form requires non-identity (for various reasons to be determined). The identity relation may be preserved in a more abstract form, or perhaps translated into another form of identity, but it will not be lost from the system. Researchers should bear in mind that, in the study of “symmetry breaking” in linguistics (as in all other scientific branches), it is important not to be distracted by surface asymmetry (so as to look for a robust asymmetric cause), but rather to restore the intrinsic symmetry that is hidden in the seemingly asymmetric effects.2

This paper is organized as follows. §2 discusses two major linguistic principles that can be rephrased in terms of symmetry and symmetry breaking; the existence of symmetry conservation under the Y-model of linguistic computation provides an indication of intrinsic symmetry in Narrow Syntax. §3 extends the general ideas of Prinzhorn and Vergnaud (2004): we reconstruct a possible form of symmetric syntax through the primitive syntactic relations that remain invariant under transformation, and we discuss an application of symmetric syntax in the nominal domain, which provides a solution to a constituency paradox in classifier constructions. §4 concludes the paper.

2. (Broken) Symmetry in Linguistic Patterns

Under the (inverted) Y-model in (4) (Chomsky 1995), linguistic computations boil down to three core components: Narrow Syntax and the two interface levels, PF (phonetic form) and LF (logical form) (we assume no articulated architecture in the Lexicon, which simply provides atomic elements for linguistic computation). The linguistic objects built in Narrow Syntax are sent to the interface levels: PF interfaces with the sensory-motor system, and LF interfaces with the conceptual-intentional system. Instructions are then given to the interfaces according to the features of the lexical items, with respect to sounds (or phonological gestures) at PF and meanings at LF:

2 See Stewart (1998) and Stewart and Golubitsky (1992), who point out that the term “symmetry breaking” can sometimes be misleading (which is often seen in linguistics as well; cf. Haider 2013). They suggest that the phenomenon be renamed “symmetry sharing” and should be understood as such.

(4) The Y-model

                 Lexicon
                    |
              Narrow Syntax
                /       \
              PF         LF
               |          |
      Sensory-Motor    Conceptual-Intentional
        Interface         Interface

Prinzhorn and Vergnaud (2004) and Leung (2007) point out that both of the interface levels, PF and LF, possess an asymmetry in their core operations. These are listed in (5a) and (5b), respectively:

(5) a. The elements concatenated by PF operations are non-commutative.
    b. The elements concatenated by LF operations are non-associative.
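The contrast in (5) can be made concrete with a small sketch (Python, purely illustrative and not part of the original text): string concatenation, like a PF string, is associative but not commutative, while pair formation, like an LF hierarchy, is not associative.

```python
def commutative(op, xs):
    """Does op(a, b) == op(b, a) for all choices of operands?"""
    return all(op(a, b) == op(b, a) for a in xs for b in xs)

def associative(op, xs):
    """Does op(op(a, b), c) == op(a, op(b, c)) for all choices of operands?"""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a in xs for b in xs for c in xs)

words = ["every", "pug", "bit"]

# PF-style concatenation: a flat string records linear order but forgets grouping.
concat = lambda a, b: f"{a} {b}"
assert not commutative(concat, words)   # 'every pug' != 'pug every'
assert associative(concat, words)       # grouping leaves no trace in the string

# LF-style composition: nested pairs record hierarchical grouping.
pair = lambda a, b: (a, b)
assert not associative(pair, words)     # (('every','pug'),'bit') != ('every',('pug','bit'))
```

The example sentence discussed next instantiates exactly these two properties.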

Consider, for example, the simple English sentence Every pug bit its owner. PF defines the linear order of the phonological string: every > pug > bit > its > owner is not equivalent to its > owner > bit > every > pug. The linear order is non-commutative. LF, on the other hand, defines a hierarchical order with respect to the meaning of the sentence. The structural domination is non-associative, since [Every pug [bit [its owner]]] is not equivalent to [[[Every pug] bit] its owner]. In the former, every pug structurally contains its owner (and a bound-variable reading is possible), while in the latter the structural containment is reversed. The central question that remains at this point is this: under the Y-model, what are the (a)symmetric properties of Narrow Syntax? With respect to this question, two opposite conjectures about Narrow Syntax can be entertained, as listed in (6) below:3

3 The current minimalist syntactic theory, however, assumes that syntax is inherently non-associative and commutative, a bias towards LF. Syntactic derivations reflect the hierarchical structure that is linearized at PF at Spell-out, and the hierarchical structure may then need to be adjusted by (inaudible) LF rules that, in theory, behave exactly like syntactic rules (but may override syntactic locality constraints). On this assumption, syntax is not elegant enough, owing to the unwanted redundancy between syntax and LF (e.g. the redundancy between syntactic movements and LF chain formation; see Brody 1995 and subsequent work).

(6) a. Narrow Syntax is both associative and commutative (and therefore highly symmetric), and the symmetry is broken at the interfaces (due to external factors).
    b. Narrow Syntax is not associative or commutative (and is inherently asymmetric), and each part of the asymmetry is reflected at PF and at LF, respectively.

The anti-symmetric syntax of Kayne (1994) argues for the latter view. The alternative option in (6a), on the other hand, is pursued in this article. The conceptual reason for adopting this alternative is that linear orderings (e.g. precedence) and hierarchical orderings (e.g. c-command and structural dominance) may be external to Narrow Syntax. Under the symmetry principle, support for our hypothesis can be found by examining two properties of linguistic computation. First, we predict that it is possible to find a form of symmetry among different modules of linguistic computation, even when the inherent symmetry is broken. Second, we predict that the inherent symmetry is maximally preserved, being transformed into another type of symmetry, except where interface conditions require otherwise. In this section, we discuss two well-grounded principles of syntactic theory that support our symmetric view: the Mirror Principle in §2.1 and X-bar theory in §2.2.

2.1. Generalized Mirror Principle

In its original formulation (Baker 1988), the Mirror Principle states that morphological affixation is a mirror image of the syntactic derivation, as in (7):

(7) Syntax:      [F1 [F2 [F3 … V]]]
    Morphology:  [V…-AffF3-AffF2-AffF1]
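The mirror relation in (7) amounts to reversing the order of the functional heads when they are spelled out as affixes on the verb. A minimal sketch (Python, added here purely as an illustration):

```python
# Syntactic embedding [F1 [F2 [F3 ... V]]], from outermost to innermost head.
syntax = ["F1", "F2", "F3", "V"]

# Morphology spells out the verb first and then the affixes, innermost first:
heads = syntax[:-1]                        # the functional heads F1, F2, F3
word = "V" + "".join(f"-Aff{f}" for f in reversed(heads))
print(word)  # → V-AffF3-AffF2-AffF1
```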

A step beyond the Mirror Principle is found in Williams (2003), who argues that mirror effects are rooted in a more general principle of conservation (attributed to Panini). The general idea in Williams is that syntactic derivations can be viewed as mapping mechanisms among several subcomponents of the computational system (or representational levels): the Thematic Level, the Case Level, the Functional Level, etc. Each mapping between two adjacent levels also represents the growth of syntactic structure. Williams argues that a shape-conservation principle governs the mappings among the different representational levels. Simply put, the principle says that when a representational level L1 is mapped to another level L2, a representation in L2 is used that “best” conserves the original form in L1 at the “lowest” cost (subject to an economy principle). To illustrate, let us consider an example from Williams (2003), reproduced as (8):

(8) Syntax:     [supply [gun [to an army]]]VP
    Compounds:  [army [gun [supply-er]]]N

Interestingly, a mirror image (a type of symmetry) is found between English compounds and their corresponding verb phrases. The question raised by Williams is what governs the mirror mapping in such cases. Specifically, why do we not find compounds such as *gun army supplier (while army supplier is a legitimate compound in English)? Williams argues that the mirror image in this case best conserves the original hierarchy of the thematic structure across the two levels, as shown in (9):

(9) a. supply > theme > goal
    b. [goal < theme < supply-er]

More generally, this shape-conservation principle can be subsumed under the principle of symmetry and symmetry breaking. That is, when an asymmetry steps in (in this case, the suffix -er), the system conserves the symmetry by looking for the best alternative word order in the symmetry group that keeps the hierarchical relation identical. The best candidate that conserves the right ordering at minimal cost is then the mirror image of the original order, illustrated in (10):

(10) Mirror transformation
     [A > B > C] → [C < B < A]

Let us consider another possible output, in which theme and goal keep their original precedence: [[theme > goal] < supply-er] (output: gun army supplier). This form requires an additional mirror transformation targeting a sub-string of the input. It thus involves an unnecessary loss of symmetry in the whole system (an asymmetry in the effect that is not found in the cause) (see (11)):

(11) a. Mirror Transformation: [A > B > C] → [C < B < A]
     b. Mirror Transformation on a sub-string: [[C < B] < A] → [[B > C] < A]

This case illustrates an underlying rule of symmetry and symmetry breaking: when asymmetries enter the system, the symmetries of the original system are maximally “preserved” except where the asymmetric causes interfere. This claim is further supported by evidence found when we remove the asymmetric causes. That is, we expect that in languages without suffix forms like English -er or -ing, the orders in syntax and in compound forms should be identical. This is borne out by Chinese and Japanese V-O and O-V compounds. Chinese is a VO language, while Japanese is an OV language, and for these compounds neither language has to employ a suffix such as English -ing (e.g. water-spraying, gun-loading). As expected, Chinese V-O compounds predominantly have the order V-N (this syntax-morphology isomorphism is also found in other types of compounds; see Liao 2014), while (native) Japanese O-V compounds have the opposite order, N-V (Shibatani 1990: 240). Examples are listed in the table in (12).

(12) Chinese and Japanese compounds

     Chinese V-O compounds                      Japanese O-V compounds
     a. sha-ren    kill-man     ‘man-killing’         a. hito-gorosi  man-kill
     b. zhi-xie    stop-blood   ‘blood-stopping’      b. ti-dome      blood-stop
     c. guo-nian   pass-year    ‘year-passing,        c. tosi-kosi    year-pass
                                 the New Year’
     d. zhuo-se    apply-color  ‘coloring’            d. iro-zuke     color-apply
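The economy reasoning behind (10) and (11) can be sketched as follows: the attested compound requires one global mirror of the thematic order, while the unattested *gun army supplier would require an extra mirror on a sub-string (Python, purely illustrative; the sequences stand in for the thematic hierarchy):

```python
def mirror(seq):
    """One application of the mirror transformation: reverse the sequence."""
    return list(reversed(seq))

thematic = ["supplier", "theme", "goal"]     # supply > theme > goal

# One global mirror yields the attested order goal < theme < supply-er
# (army gun supplier): a single asymmetric step, matching its single cause (-er).
attested = mirror(thematic)
print(attested)      # → ['goal', 'theme', 'supplier']

# *gun army supplier would need a second mirror applied to a sub-string:
# an extra asymmetry in the effect with no corresponding cause.
unattested = mirror(thematic)
unattested[:2] = mirror(unattested[:2])
print(unattested)    # → ['theme', 'goal', 'supplier']
```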

Of course, there are exceptions to this perfect mirror parallelism. It is also possible to find V-N compounds in Japanese, but close scrutiny shows that, historically, they are in fact loanwords from Chinese (i.e. the Sino-Japanese compounds, also from Shibatani 1990: 240):

(13) Sino-Japanese V-O compounds in Japanese
     a. satu-zin      kill-man
     b. si-ketu       stop-blood
     c. etu-nen       pass-year
     d. tyaku-syoku   apply-color

Chinese also has O-V compounds, but only when the verb and the object are both bi-syllabic (or longer). Again, these exceptions are limited. They generally occur under marked conditions, and the fact that they are marked (and need explanation) is a good indication that they do not follow from the general principle of symmetry hidden in the underlying linguistic design; it is exactly the asymmetric factors that call for explanation in the theory:

(14) a.  xiu-che        fix-car       ‘car-fixing’
     a’. qi.che-xiu.li  car-fix       ‘car-fixing’
     b.  xi-yi          wash-clothes  ‘laundry’
     b’. yi.fu-qing.xi  clothes-wash  ‘laundry’

Another instance that illustrates symmetry breaking in linguistics comes from argument alignment in causative constructions (also briefly discussed in Williams 2003). Causative constructions usually involve the combination of two verb phrases: [VP1 make John] + [VP2 (John) read the book]. In French, this combination yields an output form in which VP2 “adjoins” to the causative verb, and the original object of the causative VP1 (the subject of VP2) is dislocated to the end of the VP chunk with dative Case:

(15) [faire Jean] + [Jean lire ce livre] → [faire [lire ce livre] à Jean]
     ‘[make John] + [John read the book] → [make [read the book] to John]’

Williams argues that the shift of Case (Nominative → Accusative → Dative) and the new alignment of the dative argument (à Jean) can be attributed to the need to represent a complex theta structure with a simple Case frame. The theory is reminiscent of the analysis of Rouveret and Vergnaud (1980) in terms of Case Filters. Their idea is that the lower verb phrase raises to a position between the causative verb and its object: [make [VP read the book] John tVP]. The raising transformation maintains the direction and adjacency of Case assignment between the raised verb and its direct object, at the cost of losing the Case configuration between the causative verb and the inner subject. The subject is then supplied with an additional Case assigner (with dative Case). It is plausible that the two analyses can be bridged under the general principle of symmetry (breaking). On such an analysis, the raising of the verb to the causative verb is a cause of the loss of symmetry; in order to maintain the maximal symmetry of the system, the common Case frame [V > Accusative > Dative] is adopted, and the dative argument is dislocated to the “mirror” position out of the same consideration of global symmetry conservation.

We find additional support for this claim in the East Asian languages. In Japanese and Korean causative constructions, where the typical word order is SOV, the verb adjoins to the causative verbal suffix. Unlike in the Romance counterpart constructions, this “opposite” verb raising does not twist the original argument alignment and Case-assignment configuration, and therefore no special trick is needed to readjust the position of the dative argument. (16) shows an example from Japanese:

(16) a. [S [S O V]-CAUSE] → [S [S O] V-CAUSE]
     b. Taroo-ga    piza-o     tabe-ta.
        Taroo-NOM   pizza-ACC  eat-PAST
        ‘Taroo ate pizza.’
     c. Hanako-ga    Taroo-ni  piza-o     tabe-sase-ta.
        Hanako-NOM   Taro-DAT  pizza-ACC  eat-CAUSE-PAST
        ‘Hanako made/let Taro eat pizza.’        (Miyagawa 1999)

Additionally, in Chinese (as well as in English), where the free-standing causative verb does not trigger verb raising at all, the original word orders are preserved in isomorphic symmetry, as illustrated in (17):

(17) a. [S CAUSE [S V O]] → [S CAUSE [S V O]]
     b. Zhangsan du-le      na    ben  shu.
        Zhangsan read-PERF  that  CL   book
        ‘Zhangsan read that book.’
     c. Lisi rang      Zhangsan du-le      na    ben  shu.
        Lisi make/let  Zhangsan read-PERF  that  CL   book
        ‘Lisi made/had Zhangsan read that book.’

We may conclude that the Generalized Mirror Principle is essentially governed by the principles of symmetry and symmetry breaking. What is crucial to our assumption here is that symmetry, either in the form of isomorphism or mirror image, is found among different modules (or levels) of linguistic computation. Given the Y-model, we may view these symmetric patterns as a “restored” symmetry of the intrinsic symmetry of the core of the computational system, that is, Narrow Syntax.

2.2. X-bar theory: ripples of syntax

Another indication of an intrinsic symmetry in Narrow Syntax comes from X-bar theory. X-bar theory possesses both symmetric and asymmetric properties (to be discussed below), and it therefore plays an important role in the discussion of symmetry and symmetry breaking in syntactic computation. On the side of symmetry, X-bar theory dictates that the architecture of phrase structure is a recursion of an invariant X-bar schema [XP YP [X’ X ZP]], which, in group-theoretical terms, is a form of translational symmetry. On the other hand, the internal structure of the X-bar phrase, as Kayne (1994) points out, is asymmetric: the specifier phrase of XP always asymmetrically c-commands XP, which in turn asymmetrically c-commands its complement phrase. Therefore, according to Kayne’s (1994) Linear Correspondence Axiom (LCA), only structures conforming to the X-bar schema can be legitimate syntactic objects (with well-defined linear and hierarchical ordering). We may therefore describe X-bar theory as locally asymmetric yet globally symmetric. Chomsky (1995), however, contends that X-bar theory is an architectural principle, and he views linearization (or the LCA) as an external PF property that is not inherent in Narrow Syntax. Let us assume Chomsky’s position on the external condition. Furthermore, we shall argue that the symmetry and asymmetry of the X-bar structure are fully expected under the principles of symmetry and symmetry breaking. Asymmetry that comes from PF “minimally” breaks the symmetry of Narrow Syntax, which is nonetheless preserved in another form of symmetry (i.e. translational symmetry).4 Under this view, X-bar structures are consequences of symmetry breaking (rather than of a principle of structure building). Each local X-bar domain is formed in order to satisfy the external condition required for linearization (and/or hierarchical relations), and, subject to the principle of symmetry, the global symmetry is an indication of the intrinsic symmetry of syntax, displayed as a more general type of symmetry.

A physical analogy for this local asymmetry and global symmetry is wave forms. When one throws a pebble into calm water (i.e. an asymmetric force is imposed on a highly symmetric surface), we see ripples. Like other types of wave forms, under ideal conditions each wave is locally asymmetric: if we observe a given point on the surface of the water, we see a continuous uni-directional movement of up and down; unlike the original symmetric surface, its position is not invariant. On the other hand, viewed globally, each wave takes the shape of a recursive form that recurs over an invariant period of time. Like X-bar structures, wave forms are locally asymmetric but globally symmetric. In the words of the symmetry principle, this is caused by breaking of the intrinsic symmetry: the asymmetry is observed in its pieces, but the symmetry can be discovered in a higher-order form (recall Curie’s principle).

The mirror and isomorphic patterns of the Mirror Principle and the global symmetry of X-bar theory, along with other types of symmetric patterns hidden in fundamental linguistic principles, serve as solid indications that the source of syntactic computation, i.e. Narrow Syntax, is highly symmetric.5 This hidden characteristic of Narrow Syntax is discussed in §3.

3. Underlying symmetry in syntax

Recall that we opt for the symmetric view of Narrow Syntax in (6a), repeated as (18) below:

(18) Narrow Syntax is both associative and commutative (and therefore highly symmetric), and the symmetry is broken at the interfaces.

4 Dynamic antisymmetry (Moro 2000) is a theory that provides clear illustrations of the interplay between symmetry and asymmetry. According to Moro (2000), certain types of symmetric structures are allowed in Narrow Syntax, and for the purpose of linearization these symmetric structures need to be “broken” (through syntactic movements). Our theory of symmetry breaking follows the same path, yet pursues a more radical alternative. As will be developed in §3, we take Narrow Syntax to be a highly symmetric structure (not linearly or hierarchically defined) constructed by primitive syntactic relations, and these relations are broken in order to satisfy the asymmetric external conditions.
5 See Sportiche (1983) for a discussion of the Projection Principle from the perspective of the invariance and symmetry of syntactic transformations, and see Freidin and Vergnaud (2001) for an analysis deriving Binding Principle C from the perspective of symmetry.


In this section, we conceptualize an approach to the intrinsic symmetric structures of Narrow Syntax through recursive syntactic parallelisms, which reveal the primitive syntactic relations that remain invariant across various syntactic transformations. Guided by the work of Prinzhorn and Vergnaud (2004), we shall reconstruct Narrow Syntax as a direct product of the syntactic relations (to be referred to as a Merge-marker). Such markers of Narrow Syntax are symmetric in a group-theoretic sense in two respects. First, Merge-markers are not defined with respect to linear or hierarchical ordering; they are Cartesian products that are not subject to any particular ordering of application. Linear and hierarchical orders are regarded as external conditions imposed on Narrow Syntax by the interfaces; Merge-markers therefore need to be broken in order to satisfy the interface conditions. We shall argue that symmetry breaking is realized by imposing orderings on the primitive syntactic relations, from which different cross-linguistic syntactic patterns can be derived. Second, the primitive syntactic relations remain invariant in each cycle/phase of a derivation (to be elaborated), and they are also invariant across the different cross-linguistic patterns that result from symmetry breaking.

3.1. Symmetry of syntactic relations: parallelisms in syntax

A plausible way to look for symmetry in Narrow Syntax is through recursive patterns in observable syntax. Earlier attempts at identifying structural invariances can be found in pioneering works such as Sportiche (1983), Van Riemsdijk (1998), Bowers (2001), and Hiraiwa (2005), in which invariant transformational principles, meta-features, abstract relations, and super-categories are sought to regulate syntactic computations. Here, we attempt to identify the intrinsic symmetry of Narrow Syntax through invariant syntactic relations, which we take to be the hidden constants that underlie Narrow Syntax. The general ideas presented in this section were originally proposed in Prinzhorn and Vergnaud (2004) and Vergnaud (2009), who hypothesize that Narrow Syntax is a Cartesian product of primitive syntactic coupled domains (CDs) and that standard phrase structures are then constructed from this Cartesian product (in ways that are not explicitly discussed). In what follows, we elaborate these ideas in a more precise way from the perspectives of symmetry and symmetry breaking.


We shall refer to a core hypothesis developed in Prinzhorn and Vergnaud (2004) as the Prinzhorn-Vergnaud Conjectures, summarized in (19):

(19) The Prinzhorn-Vergnaud Conjectures
     a. Narrow Syntax is a Cartesian product of the primitive syntactic coupled domains (CDn): Narrow Syntax = CD1 ⊗ CD2 ⊗ CD3.
     b. Merge reflects the relation between any two adjacent nodes in Narrow Syntax, and each node defines a syntactic role for the inserted lexical items.
     c. Phrase structures are constructed from the abstract Narrow Syntax.

We argue that the primitive relations include (i) the nominal-verbal domain, represented as {N, V}; (ii) the functional-substantive domain, represented as {Fn, Sb} (by which each substantive item is given a syntactic role); and (iii) a connective pair {k, k’}, which extends the structures. Narrow Syntax is therefore the Cartesian product in (20a), whose graph representation is shown in (20b). We shall refer to the graph structure in (20b) as a M(erge)-marker, in contrast to the Phrase-marker (P-marker) of the standard assumptions:

(20) (← (19a))
     a. Narrow Syntax = {N,V} ⊗ {Fn,Sb} ⊗ {k,k’}
        = {(N,Fn,k), (N,Fn,k’), (N,Sb,k), (N,Sb,k’), (V,Fn,k), (V,Fn,k’), (V,Sb,k), (V,Sb,k’)}
     b. M(erge)-marker
        [diagram: the eight triples arranged as the vertices of a cube, with edges connecting triples that differ in exactly one coordinate]
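The Cartesian product in (20a) can be spelled out mechanically; a minimal sketch (Python, purely illustrative):

```python
from itertools import product

# The three primitive coupled domains of (20a).
NV = ["N", "V"]          # nominal-verbal domain
FnSb = ["Fn", "Sb"]      # functional-substantive domain
kk = ["k", "k'"]         # connective pair

# Narrow Syntax as the direct product {N,V} ⊗ {Fn,Sb} ⊗ {k,k'}.
narrow_syntax = list(product(NV, FnSb, kk))
print(len(narrow_syntax))                 # → 8 nodes, as listed in (20a)
print(("N", "Sb", "k") in narrow_syntax)  # → True
```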

Each node in the M-markers can be realized by a lexical item through lexical insertion, depending on the feature matrix of the given node. For example, (N,Sb,k) defines a nominal substantive item, and (N,Fn,k) defines a nominal functional item. Note that the strict locality of the core syntactic relations is directly derived from this theory, because each coupled relation is, by definition, computed in a local fashion. This is illustrated by the structure in (21), which focuses on the bottom dimension:

(21) [diagram: the M-marker cube with each node labeled by its categorial realization]
     Modal = (V,Fn,k’)    V = (V,Sb,k’)    D = (N,Fn,k’)    N = (N,Sb,k’)
     Asp   = (V,Fn,k)     V = (V,Sb,k)     D = (N,Fn,k)     N = (N,Sb,k)

First of all, the one-dimensionally contrastive pair (N,Sb,k) and (V,Sb,k) reflects the N-V relation between a substantive V and a substantive N, which plays a role in the theta relations, one of the set of strictly local relations (Williams 1994). Another one-dimensionally contrastive pair, (V,Fn,k) = Asp and (V,Sb,k) = V, reflects the Substantive-Functional relation between a functional V (e.g. Asp) and the substantive V that is paired with it. Similarly, the minimally contrastive pair (N,Fn,k) = D and (N,Sb,k) = N represents the relation between the functional D and the substantive N. In this respect, the proposed theory differs from the standard assumption that a substantive element is embedded under a series of functional projections. Instead, we argue that each functional projection is coupled with a substantive item, and it is the functional item that defines the LF role (and the syntactic category) played by the substantive item.6 It is thus predicted that a substantive item may occur in many functional environments and play the grammatical role of a functional item. This analysis gives us a novel way of looking at “grammaticalization” in synchronic linguistics; examples of this type are abundant in an analytic language like Chinese (see §3.2). The other fundamental relation involves grammatical connective pairs (cf. Den Dikken 2006, Kayne 2005, Larson 2009), represented here as {k, k’}. We argue that the parallelisms of the Chomskyan derivational phases (i.e. the selections between C-T and v-V in Chomsky 2001) reflect a dimension of the couplings of {k, k’} (see Liao 2011 for details). Turning back to the structure in (21), as predicted, a non-connected pair that is contrastive in two dimensions (e.g. a substantive N, (N,Sb,k) = the N root, and a functional verbal projection, (V,Fn,k’) = Aspect) does not engage in a direct syntactic relation, and any relation between them must be mediated by the intermediate elements (through D or the root V). The principle in (22) is therefore derived:

(22) The Locality Principle of Syntactic Relations
     A syntactic relation R between a node X and a node Y is established if X and Y are adjacent nodes in a Merge-marker; that is, X and Y are a one-dimensionally contrastive pair, e.g. X = (A, B, C) and Y = (A’, B, C).

Essentially, the proposed theory holds that the “Narrowest” Syntax consists of a set of Merge-markers, each of which represents a domain in which the primitive syntactic relations are satisfied. For example, a simple sentence involves at least the two M-markers shown in (23) in its base structure:

(23) a. The C-T domain
        [diagram: an M-marker linking C, T, Dwh, DCase, and their paired substantive V and N nodes]
     b. The v-V domain
        [diagram: an M-marker linking Modal, Asp, D, V(v), and the internal and external argument nominals (Nint, Next)]

6 The functional items may be morphologically realized as independent lexical items (as the grammaticalized items in an analytic language like Chinese), or they may be uniformly realized by the same lexical item, giving rise to the effect of verb raising in highly inflectional languages. The choice between the two strategies is likely to follow the macro-parameters discussed in Huang (2005).
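The adjacency criterion in (22), one-dimensional contrast, can be stated directly over the node triples of (20); a minimal sketch (Python, purely illustrative):

```python
from itertools import product

nodes = list(product(["N", "V"], ["Fn", "Sb"], ["k", "k'"]))

def local_relation(x, y):
    """(22): X and Y stand in a syntactic relation iff they differ in exactly one coordinate."""
    return sum(a != b for a, b in zip(x, y)) == 1

# D = (N,Fn,k) and N = (N,Sb,k) are a one-dimensionally contrastive pair...
assert local_relation(("N", "Fn", "k"), ("N", "Sb", "k"))
# ...but the N root and a functional verbal projection such as Asp are not:
assert not local_relation(("N", "Sb", "k"), ("V", "Fn", "k"))

# Each node has exactly three adjacent nodes: the M-marker graph is a cube.
for n in nodes:
    assert sum(local_relation(n, m) for m in nodes) == 3
```

Relations between non-adjacent nodes must then be mediated by an intermediate node, exactly as the text states for the N root and Aspect.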

Turning back to the hypothesis in (19c), Prinzhorn and Vergnaud (2004) point out that phrase structures can actually be translated into a multi-linearly ordered structure. In our language of symmetry breaking, this is the point where the asymmetries of the “output conditions” step in and break the underlying symmetry of Narrow Syntax. Consider the standard representation in (24):

(24)      Z        (Z = X or Y)
         / \
        X   Y

The structure in (24) contains three components: X, Y, and the projection label Z, where Z is always identified either with X or with Y, and nothing else. The structure can therefore be alternatively represented as in (25), which is formally equivalent to (24) (cf. also Boeckx 2008):

(25) a. X-Y (where X and Y merge and Y projects)
     b. X-Y (where X and Y merge and X projects)
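The equivalence between (24) and (25) can be modeled by carrying the choice of projecting member alongside the merged pair; a minimal sketch (Python, purely illustrative):

```python
from collections import namedtuple

# A merged object keeps its two members and a label Z identified with one of them.
Merged = namedtuple("Merged", ["left", "right", "label"])

def merge(x, y, projects):
    """Merge x and y; per (24), the projection label Z must be X itself or Y itself."""
    assert projects in (x, y), "Z must be identified with X or with Y"
    return Merged(x, y, projects)

# (25a): X and Y merge and Y projects; (25b): X and Y merge and X projects.
a = merge("X", "Y", projects="Y")
b = merge("X", "Y", projects="X")
print(a.label, b.label)  # → Y X
```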

A standard X-bar structure, as in (26a), can then also be represented in this fashion, with added brackets that keep the derivational history, as in (26b). We represent the ordering with the convention in (26c), where the Head-Complement structure is generated earlier than the Spec-Head structure (the double arrow >> represents the sequence of structural generation):

(26) a.      XP
            /  \
           Z    X’
               /  \
              X    Y
     b. [Z [X Y]]
     c. Head-Complement (H-C) >> Spec-Head (S-H)

The symmetry-breaking process, in which P-markers are constructed from M-markers, proceeds in the same fashion if we replace the H-C and S-H relations with the primitive syntactic relations. One asymmetric direction is defined within each primitive pair, and a further asymmetry is defined among the primitive relations. Let us assume that in English the relevant orderings within the primitive syntactic relations are (N→V), (Subt→Func), and (k→k’), and the relevant ordering among the relations is (Subt, Func) >> (N,V) >> (k, k’) (where (X,Y) = X→Y). The asymmetric phrase structures constructed from the M-markers are displayed in (27), taking the C-T domain as an example:7

(27) a. The C-T domain
        [diagram: the C-T M-marker linking C, Dwh, T, DCase, and their paired V and N nodes, as in (23a)]

     b. First, applying (Subt→Func) to (a):
        [diagram: the Subt→Func ordering imposed on each Functional-Substantive pair (C-V, T-V, Dwh-N, DCase-N)]

7 Note that in the P-markers, the linear order between N and V is free/unspecified.

     c. Second, applying (N→V) to (b):
        [diagram: the result of imposing the N→V ordering on (b), pairing the D-N units with the C-V and T-V units]

     d. Third, applying the connective {k, k’} to (c):
        [diagram: the C and T units, each a D-N/C-V complex, linked by k’ and k respectively]

The structural extension pair {k, k’} has a function similar to that of the Transformation-markers of early generative grammar (Chomsky 1964, 1965): they introduce certain transformational relations into the base structures. Suppose, then, that the pair {k, k’} is mapped to embedding structures (assumed to be the default case; Williams 2003). We then derive the following representation, where the empty set indicates that further embedding is possible:

     e. {k, k’} = Tembed
        [diagram: the C-T phrase structure, with D-N paired under C and under T and an empty-set position allowing further embedding]

If we apply the same operation to the v-V domain, we can derive a similar structure, with Mod replacing C and Asp replacing T:

(28) The Mod-Asp (v-V) domain, with Mod in place of C and Asp in place of T
     [tree diagram not reproducible here]

Ultimately, a structure is derived in (29), which connects (27) and (28). We arrive at a phrase structure that is not much different from standard structures; however, many structural properties are directly accounted for if it is assumed that phrase structures are actually constructed from the more abstract Merge-markers, where the core syntactic relations are established, and where many structural computations are actually applied.8

8 Note that the P-marker is a Calder-like structure (29). That is, it provides a blueprint for linearization at PF. This suggests that (29) can be mapped to more than one linear order. For a discussion of (a)symmetric rules in a parametric theory of merge and linearization, see Fukui and Takano (1998, 2000) and Saito and Fukui (1998).

(29) The combined structure connecting the C-T domain (27) and the Mod-Asp domain (28)
     [tree diagram not reproducible here]

Prinzhorn and Vergnaud (2004) further assume that the global ordering of the primitive syntactic relations may be responsible for macroparameters of linguistic variation (Baker 1996, Huang 2005). For example, suppose a language adopts an order different from that of English: (N, V) >> (Subt, Func) >> (k, k’). The resulting phrase structures mapped from the Merge-markers can be illustrated as follows (using the Mod-Asp domain as an example):

(30) Applying the ordering (N, V) >> (Subt, Func) >> (k, k’) to the Mod-Asp domain
     [tree diagram not reproducible here]

Such phrase structures seem compatible with languages with syntactic N-V incorporation, such as Mohawk:

(31) Wa’-ke-[nákt-a-hnínu]-’
     Fact-1sS-bed-∅-buy-PUNCTUAL
     ‘I bought the/a bed.’          (from Baker 1996: 279)

Wiltschko (2002) argues that in Mohawk, not only do V and N form a constituent, but D is also incorporated into a higher functional projection (e.g. an Agreement head). This analysis is expected under the theory proposed here. The differences between English and Mohawk, then, lie only in the sequences in which ordering is applied to the relational pairs. English orders the (Subt, Func) relation first, with the result that head selection (N and D) applies prior to theta selection, while Mohawk fixes the order of the (N, V) theta relation first, which gives rise to the alternative pattern in (30). We can expect other patterns of phrase structure to be generated by different orderings among the primitive syntactic relations. This is an interesting empirical task that we are not able to pursue in this article; we leave it for future research.9

From a symmetry point of view, the transition from a highly symmetric Merge-marker to an asymmetric Phrase-marker is comparable to a symmetry-breaking process. That is, each type of phrase structure represents a unique pattern generated by breaking the inherent symmetry in order to satisfy the interfaces, and although they may appear different in their surface forms, the phrase markers are globally related to one another by the invariant set of primitive syntactic relations. In this sense, each type of phrase marker can be regarded as a broken piece of the intrinsic symmetry of the abstract Merge-marker.

9 If the proposed symmetry-breaking framework for crosslinguistic variation is on the right track, it will confirm the idea that the sources of macroparameters simply lie in different mapping orderings from Merge-markers to Phrase-markers, and that languages share the same Merge-markers as the underlying universal syntactic representations.

3.2. The paradox of nominal syntax and merge-markers

We have argued that, conceptually, the Y-model can be viewed as a symmetry-breaking process. The next step is to look for empirical consequences of the symmetry-breaking view. Due to limitations of space, we focus on a revealing case in the nominal syntax of classifier languages, especially East Asian languages. We show that the standard phrase structural analyses result in a paradox, and that the paradox can be resolved if the syntactic computation is actually performed at a more abstract level (i.e. Merge-markers).

With respect to the nominal syntax of noun and classifier, there have been two major competing proposals. On the one hand, it has been proposed that classifiers and nouns form two independent constituents (Fukui and Takano 2000, Huang 1982, among others). On the other, classifiers are thought to be extended projections of nouns (Borer 2005, Li 1999, Simpson 2005, among others), so that the two belong to a single constituent. The contrast is illustrated in (32):

(32) a. Dual-constituency analysis: the noun phrase ((D)-NP) and the classifier phrase (NumP/CLP, containing Num and CL) are two independent constituents
        [tree diagrams not reproducible here]
     b. Single-constituency analysis: [DP D(em) [NumP Num [CLP CL [NP N]]]]

The debate between the two types of analyses is strengthened by the fact that each analysis has its own advantages. The dual-constituency analysis successfully captures the parallelism between N and CL. One property shared by both N and CL is the ability to take a case marker:

(33) Case-marking of classifiers in Korean (Park 2008)
     Ku-nun [chayk-ul] [sey kwen-ul] ilkessta.
     he-TOP book-ACC three CL-ACC read
     ‘He read three books.’

Second, both N and CL may be assigned a theta-role, as observed in Van Riemsdijk (1998). The examples are from Chinese, where both the noun ‘water’ and the classifier ‘bottle’ may receive a theta-role, depending on the choice of verb:

(34) a. John he-le [san ping shui].          (theme = water)
        John drank three bottle water
        ‘John drank three bottles of water.’
     b. John da-po [san ping shui].          (theme = bottle)
        John break three bottle water
        ‘John broke three bottles of water.’

Despite being conceptually appealing, the dual-constituency analysis has great difficulty capturing the strong selectional relation between CL and N, because it predicts that the classifier and the noun belong to two separate constituents. Given the common assumption that heads engaged in selection are subject to strict locality, the dual-constituency analysis fails to represent this strict locality. The local selection between CL and N, then, provides strong support instead for the single-constituency analysis: since CL is the immediate extended projection of N, the selection is easily accounted for within the latter analysis. The classifier constructions hence lead us to a conceptual paradox in the standard theories. This paradox is reminiscent of the sub-extraction problem noted in Kayne (2005), shown in (35), which involves the same paradox of single versus dual constituency:

(35) Moneyi, John has lots of [money]i.

Lots of money in English looks like a single DP/NP constituent, so the extraction of an inner element should be blocked. However, no such blocking effects are observed, and Kayne therefore points out that lots and money actually belong to two separate constituents as a result of a series of remnant movements, as illustrated in (36) (the capital OF stands for an unpronounced counterpart of of):

(36) a. have [SC [money] [lots]]
     b. OFCase [VP have [SC [money] [lots]]]                      (Merging OF)
     c. [money [OFCase [VP have [SC tmoney [lots]]]]]             (NP movement to Spec, OFP)
     d. of [money [OFCase [VP have [SC tmoney [lots]]]]]          (Merging of)
     e. [VP have [SC tmoney [lots]]] [of [money] [OFCase tVP]]    (remnant movement to Spec, ofP)

Two crucial aspects of Kayne’s analysis are as follows: (i) the local selection between lots and money (and between CL and N) is established in a small clause in the underlying structure, and (ii) the movements resulting in dual constituency are triggered by a pair of connectives (of, OF).
It might be possible to solve the tension between the single- and dual-constituency analyses of CL and N if we assume a similar analysis for Chinese, where (k, k’) is realized by the pair of connectives (de, DE). This is shown in (37):

(37) a. he san ping (de) jiu
        drink three bottleCL DE wine
        ‘drink three bottles of wine’
     b. drink [SC [wine] [three-bottleCL]]
     c. DECase [VP drink [SC [wine] [three-bottleCL]]]                  (Merging DE)
     d. [wine [DECase [VP drink [SC twine [three-bottle]]]]]            (NP movement to Spec, DEP)
     e. (de) [wine [DECase [VP drink [SC twine [three-bottle]]]]]       (Merging de)
     f. [VP drink [SC twine [three-bottle]]] [(de) [wine] [DECase tVP]] (remnant movement to Spec, deP)

However, Kayne’s analysis still cannot explain why both CL and N may receive theta-roles from the verb. In fact, Chinese has verb-copying constructions, which indicate that each of the dual constituents may contain a verb of its own:10

(38) Zhangsan [[he jiu] DEi [he san ping] de]
     Zhangsan drink wine drink three bottle
     ‘Zhangsan drank three bottles of wine.’

Again, the VP parallelism poses problems for both the standard analysis and Kayne’s analysis, as neither is able to give a satisfactory account of the “chaining” of the two verbs while maintaining the local selection of CL and N.10 In view of the VP-parallelism in (38), we shall reinterpret Kayne’s analysis by assuming that (k, k’) is a pair of connectives that connects two parallel structures, as assumed in the Merge-markers. We shall call this the CL-N domain (see (39); F[unit] stands for the functional item responsible for the mass/count distinction; see Liao and Vergnaud 2014 for discussion):11

10 Verb-copying constructions are different from VP-conjunction constructions in that the two VPs in verb-copying constructions share a single aspectual marking (only marked on the second verb), as in (i), whereas in VP-conjunction constructions both verbs can have their own aspectual marking; the two VPs in verb-copying constructions also share a single manner adverb (modifying either VP), as in (ii):
     i.  Zhangsan he(*-le) jiu he-le san ping.
         Zhangsan drink-ASP wine drink-ASP three bottle
         ‘Zhangsan drank three bottles of wine.’
     ii. Lisi (zixi-de) kan shu (zixi-de) kan san ben.
         Lisi carefully read book carefully read three CL
         ‘Lisi read three books carefully.’

(39) The CL-N domain: parallel pairings {F1, F2}, {Num, F[unit]}, {V1, V2} and {N(CL), N}
     [tree diagram not reproducible here]

Like the other domains discussed in §3.1, the CL-N domain is constructed by the Cartesian product of the primitive syntactic relations. The correspondence between the feature matrices and the lexical realizations is displayed in (40) (where (k, k’) = (of, OF)):

(40) {N, V} ⊗ {Fn, Sb} ⊗ {k, k’} =
     F1  = (V, Fn, k’)    (V, Fn, k) = F2
     Num = (N, Fn, k’)    (N, Fn, k) = F[unit]
     V1  = (V, Sb, k’)    (V, Sb, k) = V2
     CL  = (N, Sb, k’)    (N, Sb, k) = N
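The combinatorics behind (40) can be made concrete with a short sketch: three binary oppositions yield eight feature matrices, one per lexical slot. This is purely illustrative (the label dictionary mirrors (40) but is our own device, not part of the theory):

```python
from itertools import product

# The three primitive binary oppositions used in (40): category {N, V},
# substantive/functional {Sb, Fn}, and the structural-extension pair {k, k'}.
CATEGORY = ("N", "V")
RELATION = ("Sb", "Fn")
EXTENSION = ("k", "k'")

# The Cartesian product {N,V} x {Sb,Fn} x {k,k'} yields 2*2*2 = 8 feature matrices.
matrices = list(product(CATEGORY, RELATION, EXTENSION))

# Labels copied from (40); the dictionary itself is our illustrative device.
LABELS = {
    ("V", "Fn", "k'"): "F1",  ("V", "Fn", "k"): "F2",
    ("N", "Fn", "k'"): "Num", ("N", "Fn", "k"): "F[unit]",
    ("V", "Sb", "k'"): "V1",  ("V", "Sb", "k"): "V2",
    ("N", "Sb", "k'"): "CL",  ("N", "Sb", "k"): "N",
}

for m in matrices:
    print(f"{LABELS[m]:8}= {m}")
```

Each of the eight matrices corresponds to exactly one of the labelled items tabulated in (40).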

After lexical instantiation, (41b) illustrates the Merge-marker of the classifier construction in (41a); due to the asymmetric ordering requirements of phrase structures, the Merge-marker can be translated into the Phrase-marker in (41c), subject to the ordering (Sb, Fn) >> (N, V) >> (k, k’) (with the same technique introduced in the last section):11

11 The specific contents of F1 and F2 of the verbal domain are dependent on other domains, like Aspect-Modal and C-T, when the CL-N domain is extended and combined with other domains. We shall not discuss the specific mechanisms here, leaving them instead as a topic for future research.

(41) a. [he san ping] (de) [he jiu] DE
        drink three bottle drink wine
     b. The Merge-marker, pairing ‘three’, ‘bottleCL’ and ‘drink1’ with ‘wine’ and ‘drink2’ under the paired functional heads F1/F2 and F[unit]
        [tree diagram not reproducible here]
     c. The Phrase-marker, with {k, k’} = {de, DE}: kP contains [VP drink1 three bottle] plus k (de), and k’P contains [VP drink2 wine] plus k’ (DE)
        [tree diagram not reproducible here]

The verb-copying construction, then, can be viewed as resulting from a reversed pattern of spelling out the order of (de, DE), with a different pronunciation rule for the corresponding verbs, as illustrated in (42):

(42) [k’P [VP2 he jiu] DEi [kP [VP1 he san ping] de]]
          drink wine        drink three bottle

The proposed analysis allows us to capture Kayne’s “small clause plus remnant movement” analysis in a more precise way. Under our analysis, the local selectional relations between N and CL and between the two verbs are actually established in the underlying Merge-markers, and the difference in surface structures is only apparent, resulting from a symmetry-breaking process from the Merge-markers to the phrase markers. The tension between the single- and dual-constituency analyses is therefore neutralized.

4. Conclusion

We have shown that many linguistic principles and theories are governed by an underlying symmetry principle that is widely assumed in other sciences of natural objects. The phenomenon of symmetry breaking, derived from the symmetry principle, is also argued to be responsible for the emergence of different linguistic patterns and orderings. Applying the principle of symmetry, we argue that the standard Y-model of linguistic computation can be understood as a symmetry-breaking process, and that Narrow Syntax is highly symmetric. We construct a symmetric syntax by viewing syntax as a Cartesian product generated by the primitive syntactic relations, yielding what we call Merge-markers. The breaking of the symmetric Merge-marker into the asymmetric Phrase-marker, triggered by the asymmetric requirements of the interfaces, gives rise to various syntactic patterns, yet the syntactic relations among them remain invariant. Empirically, the abstract level of the Merge-marker also proves useful in that it provides a solution to the single- vs. dual-constituency paradox in the classifier constructions.

References

Baker, Mark 1988

Incorporation: A Theory of Grammatical Function Changing. Chicago: University of Chicago Press.
1996 The Polysynthesis Parameter. Oxford: Oxford University Press.
Boeckx, Cedric
2008 Bare Syntax. Oxford: Oxford University Press.
Borer, Hagit
2005 In Name Only. Structuring Sense, Vol. 1. Oxford: Oxford University Press.
Bowers, John
2001 Syntactic relations. Ms., Cornell University.
Brody, Michael
1995 Lexico-Logical Form: A Radically Minimalist Theory. Cambridge, MA: MIT Press.
2003 Towards an Elegant Syntax. London: Routledge.
Chomsky, Noam
1964 Current Issues in Linguistic Theory. The Hague: Mouton.
1965 Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
1995 The Minimalist Program. Cambridge, MA: MIT Press.
Citko, Barbara
2011 Symmetry in Syntax: Merge, Move, and Labels. Cambridge: Cambridge University Press.
Curie, Pierre
1894 Sur la symétrie dans les phénomènes physiques. Symétrie d’un champ électrique et d’un champ magnétique. Journal de Physique 3: 393–417.
Dikken, Marcel den
2006 Relators and Linkers: The Syntax of Predication, Predicate Inversion, and Copulas. Cambridge, MA: MIT Press.
Di Sciullo, Anna-Maria
2002 Asymmetry in Grammar. Amsterdam: John Benjamins.
Freidin, Robert, and Jean-Roger Vergnaud
2001 Exquisite connections: some remarks on the evolution of linguistic theory. Lingua 111: 639–666.
Fukui, Naoki, and Yuji Takano
1998 Symmetry in syntax: merge and demerge. Journal of East Asian Linguistics 7: 27–86.
2000 Nominal structure: an extension of the symmetry principle. In The Derivation of VO and OV, Peter Svenonius (ed.), 321–362. Dordrecht: Kluwer.
Haider, Hubert
2013 Symmetry Breaking in Syntax. Cambridge: Cambridge University Press.
Hiraiwa, Ken
2005 Dimensions of symmetry in syntax: agreement and clausal architecture. Ph.D. dissertation, Massachusetts Institute of Technology.
Huang, C.-T. James
1982 Logical relations in Chinese and the theory of grammar. Ph.D. dissertation, Massachusetts Institute of Technology.
2005 Syntactic analyticity and the other end of parameter. Ms., Harvard University.
Jenkins, Lyle
2000 Biolinguistics: Exploring the Biology of Language. Cambridge: Cambridge University Press.
Kayne, Richard
1994 The Antisymmetry of Syntax. Cambridge, MA: MIT Press.
2005 Movement and Silence. Oxford: Oxford University Press.
Koptsik, V. A.
1983 Symmetry principles in physics. Journal of Physics C: Solid State Physics 16: 23–34.
Larson, Richard K.
2009 Chinese as a reverse ezafe language. Yuyanxue Luncong [Journal of Linguistics] 39: 30–85.
Lederman, Leon M., and Christopher T. Hill
2004 Symmetry and the Beautiful Universe. Amherst, NY: Prometheus Books.
Leung, Tsz-Cheung Tommi
2007 Syntactic derivation and the theory of matching contextual features. Ph.D. dissertation, University of Southern California.
Li, Yen-hui Audrey
1999 Plurality in a classifier language. Journal of East Asian Linguistics 8: 75–99.
Liao, Wei-wen Roger
2011 The symmetry of syntactic relations. Ph.D. dissertation, University of Southern California.
2014 Morphology. In The Handbook of Chinese Linguistics, C.-T. James Huang, Y. H. Audrey Li and Andrew Simpson (eds.), 3–25. Oxford: Wiley-Blackwell.
Liao, Wei-wen Roger, and Jean-Roger Vergnaud
2014 On merge-markers and nominal structures. In Primitive Elements of Grammatical Theory: Papers by Jean-Roger Vergnaud and His Collaborators, Katherine McKinney-Bock and Maria Luisa Zubizarreta (eds.), 237–274. New York: Routledge.
Miyagawa, Shigeru
1999 Causatives. In The Handbook of Japanese Linguistics, Natsuko Tsujimura (ed.), 236–268. Oxford: Blackwell.
Moro, Andrea
2000 Dynamic Antisymmetry. Cambridge, MA: MIT Press.
Park, So-Young
2008 Functional categories: the syntax of DP and DegP. Ph.D. dissertation, University of Southern California.
Prinzhorn, Martin, and Jean-Roger Vergnaud
2004 Some explanatory avatars of conceptual necessity: elements of UG. Ms., University of Southern California.
Riemsdijk, Henk C. van
1998 Categorial feature magnetism: the endocentricity and distribution of projections. Journal of Comparative Germanic Linguistics 2: 1–48.
Rouveret, Alain, and Jean-Roger Vergnaud
1980 Specifying reference to the subject: French causatives and conditions on representations. Linguistic Inquiry 11: 97–202.
Saito, Mamoru, and Naoki Fukui
1998 Order in phrase structure and movement. Linguistic Inquiry 29: 439–474.
Shibatani, Masayoshi
1990 The Languages of Japan. Cambridge: Cambridge University Press.
Simpson, Andrew
2005 Classifiers and DP structure in Southeast Asian languages. In The Oxford Handbook of Comparative Syntax, Guglielmo Cinque and Richard Kayne (eds.), 806–838. Oxford: Oxford University Press.
Sportiche, Dominique
1983 Structural invariance and symmetry in syntax. Ph.D. dissertation, Massachusetts Institute of Technology.
Stewart, Ian
1998 Life’s Other Secret. New York: John Wiley and Sons.
Stewart, Ian, and Martin Golubitsky
1992 Fearful Symmetry: Is God a Geometer? Oxford: Blackwell.
Vergnaud, Jean-Roger
2009 Defining constituent structure. Ms., University of Southern California.
Williams, Edwin
1994 Thematic Structure in Syntax. Cambridge, MA: MIT Press.
2003 Representation Theory. Cambridge, MA: MIT Press.
Wiltschko, Martina
2002 Agreement morphemes as determiner: reanalyzing the polysynthetic properties of Mohawk. University of British Columbia Working Papers in Linguistics 10: 169–182.


Part IV General

Linguistic and non-linguistic identity effects: Same or different?

Moira Yip

The topic of this volume is identity in language, but it is informative to begin by looking at larger issues of the role of identity in human and non-human cognition, and only then relate it to language. That is the goal of this chapter. Endress, Nespor and Mehler (2009) postulate that humans are endowed with a perceptual or memory primitive (POMP) devoted to identity relations. This is neither specific to language, nor to humans, but it is recruited by language, where it has a range of effects which we will examine in detail later. The presence of this primitive ability means that both the presence and the absence of identity are potential carriers of information. In acoustics, for example, signal modulation (Traunmüller 1994) introduces non-identity into the signal, as when the period of silence during the closure of a stop consonant interrupts the vocalic carrier signal. A further consequence of our ability to notice identity is that false or inadvertent identity can cause confusion and be a hindrance to speech processing and communication.

I begin by looking at evidence that the ability to detect or compute identity is clearly not limited to language, or indeed to humans, since it can be detected in different modalities, and by different species. I then restrict my attention to language, but always remembering that the speaker/hearer can draw on this general identity POMP, supplemented by more language-specific skills. I will discuss a selection of cases to illustrate the breadth of the phenomenon. In language, identity is widely used to attract the hearer's attention, as in child-directed speech, mnemonics, or poetic rhyme, but it is also used for grammatical purposes, as in reduplication. The avoidance of identity is also common, possibly caused by the need to eliminate potential interference.
Examples include classic OCP effects such as the co-occurrence restrictions on homorganic consonants in Semitic roots (McCarthy 1981), anti-gemination effects (McCarthy 1986), echo-words (Yip 1998), and haplology (Yip 1998). Identity can pay attention to phonological objects, to morphological, syntactic or semantic properties, or to a combination of two or more of these. In the phonological domain, identity avoidance has its roots in production (tongue twisters; Walter 2007) and in perception (laryngeal co-occurrence restrictions; Gallagher 2010). In the domain of syntax/semantics, its roots may lie in processing challenges, as suggested by the evidence for interference effects from false identity. Such effects have been argued to account for difficulties in handling center-embedding (Bever 1974, Gordon, Hendrick and Johnson 2001), and even as a reason for the preference for SOV word order (Gibson et al. 2011).

1. Introduction: identity in different modes, and in different species

1.1. Non-human species

There are many dimensions of identity beyond those found in spoken language, and other species share the ability to detect identity. Murphy (2008) shows that rats can distinguish between identical and non-identical tones, and Giurfa et al. (2001) show that bees can distinguish identical and non-identical colours, patterns and odours. The bees were trained in a simple maze to choose the path with the ‘same’ stimulus as the one displayed at the maze entrance. They were trained on colours, but tested on grid patterns, or vice versa. The results of the transfer tests were highly significant, P < .005 or better.

Figure 1: (from Giurfa et al. 2001). Each pair of bars shows which path was preferred given a particular sample pattern at the maze entrance.

The bees were also trained across modalities, where they were trained on odours, but tested on colours. Giurfa and colleagues then tested for non-identity: the bees were now trained to choose the path with a ‘different’ stimulus from the one at the maze entrance. They were trained on colours, but tested on grid patterns, or vice versa. Again, the transfer test results were highly significant, P < .001.

Given that even relatively simple organisms can detect identity, we might ask whether it is put to use in animal communication. In one sense, the answer is clearly yes. Consider birdsong, famously an instance of learned vocal communication. All song-learning necessarily involves mimicry, and the ability to monitor and correct for accurate imitation — in other words, the ability to detect identity. Note too that in many species song learning continues in adult life. Counter-singing is a phenomenon in which birds like the song sparrow repeat another bird’s song back to it for territorial or other purposes, and identity is central to the enterprise (Vehrencamp 2001). In other species, same/different songs are used to identify friend and foe. The skylark (Briefer et al. 2007) sings long (120-second) continuous songs with a repertoire of up to 300 syllables. 77% of the song content is composed of multi-syllable phrases repeated by the same bird and others in the community (i.e. a dialect). Strangers use different phrases, and this difference is used to identify friend vs. enemy. Finally, on a more sophisticated level, humpback whales have been claimed to use rhyme to remember songs (Guinee and Payne 1988).

I now turn to our own species, but not yet to language.

1.2. Cross-modal identity in humans

There is considerable evidence that humans can detect identity in different modalities, including across modalities. The classic Stroop effect (Stroop 1935) shows that the naming of sensory colours is inhibited if the colour of the typeface and the meaning of the word conflict. For example, suppose the task is to name the colour of the ink, but the word is green, printed in red ink.
Interference is also found when the task is instead to identify the word, i.e. green. This is called the reverse Stroop effect. The link between this phenomenon and identity issues comes from an extension of this line of research. Dyer (1973) shows that word-to-colour and colour-to-word same-different matching tasks are, not surprisingly, easier if the word and the colour match. Fascinatingly, these tasks are also facilitated by phonological priming, so that the word rat is recognized as being printed in red ink more quickly than the word cat. However, these effects are task-dependent. Durgin (2000) shows that calculation of identity is not slowed by interference from the meaning of the word if the task is itself non-verbal, such as pointing to a red patch if the word blue is printed in red ink. But pointing to a blue patch in the same circumstances is slowed by the conflicting ink colour, presumably because it involves a chain of inference from the semantics of the word to finding the blue patch that is an instance of those semantics. Durgin says: “If distracting information represents a possible response, then response competition may ensue when the distracter-based response would be in conflict with the correct response.” More generally, false identity is perceptually problematic. Identity may thus be avoided unless it is being made use of. The Stroop effect literature shows two things: identity can be cross-modal, and dangerous!

There is also evidence that non-identical items enjoy a perceptual advantage, whereas identical ones are at a disadvantage. The first relevant body of literature concerns the so-called “oddball effect”, the perceptual salience of a novel stimulus after a repetitive series of stimuli. Oddballs are clearly more salient: hence the infant head-turning and sucking experimental paradigms. Pariyadath and Eagleman (2007) find that “in a repeated presentation of auditory or visual stimuli, an unexpected object of equivalent duration appears to last longer.” The implication: non-identical elements have a perceptual advantage. The inverse of this is a perceptual disadvantage for identical items, even if non-adjacent. Kanwisher (1987) documents a very strong effect whereby subjects fail to detect repeated elements properly, if at all. For example, if words are presented visually at five words per second (at the high end of the normal reading rate), subjects find it very hard to detect repeated words, even if they are not next to each other and bear different case.
The effect is found in visual domains, and also in language tasks, as further shown in related work by Bavelier (1994) and many others. An excellent summary can be found in Walter (2007: 168). In language, more general cognitive effects like those discussed above are a plausible source of the well-known anti-repetition Obligatory Contour Principle (or OCP) effect proposed by Leben (1973). I now turn finally to language itself.


2. Human language

I begin with ways in which the presence of identity is put to use in human language, in both paralinguistic and linguistic ways.

2.1. Presence of identity

2.1.1. Paralinguistic uses of identity

From birth onwards, we are exposed to large amounts of repetition. Child-directed speech shows a high prevalence of reduplication (Ferguson 1977). For Mandarin, Shi, Morgan and Allopenna (1998) say that “..about half of all bisyllabic lexical items were reduplications, almost all of which were distinctive baby-talk forms.” For Cantonese, Wong (2000) says that 36% of nouns were special reduplications not present in adult language, such as jam2 nai1nai1 ‘drink milk’. This sort of reduplication carries over into adult language in the diminutive, affective, hypocoristic, and onomatopoetic vocabulary, in words like itsy-bitsy, eensy-weensy, pitter-patter and tick-tock. Given the perception deficit associated with repetition and discussed by Kanwisher and Bavelier, this might seem odd, but the difficulty of detecting or distinguishing the second occurrence does not mean that the collective duo cannot be detected effectively, nor indeed that two may not be better than one when it comes to learning.

In fact identity can aid memory. Rhyme, a form of repetition, is not only used for artistic effect in verse, but is also widely used as a mnemonic. Indeed, illiterate medieval minstrels used this as an aid to remembering their long songs (Van de Weijer, p.c.). In order to remember a large number, one might attach an image to each digit, using this rhyme: One is a gun / Two is a shoe / Three is a tree / Four is a door / Five is knives / Six is sticks / Seven is oven / Eight is plate / Nine is wine / Ten is hen. Then the number 3.1417 is remembered via a picture of {tree, gun, door, gun, oven}. Phonological identity (rhyme) is the link between the visual image of the tree and the numerical digit ‘3’.
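The digit-rhyme peg system just described amounts to a simple lookup from digits to rhyming images. A minimal sketch (mapping ‘ten’ to the digit 0 is our own assumption, since the rhyme itself counts from one to ten):

```python
# Peg words from the rhyme ("One is a gun ... Ten is a hen").
# Assumption: 'hen' ("ten") stands in for the digit 0.
PEGS = {
    "1": "gun", "2": "shoe", "3": "tree", "4": "door", "5": "knives",
    "6": "sticks", "7": "oven", "8": "plate", "9": "wine", "0": "hen",
}

def images(number: str) -> list:
    """Map each digit of a number to its rhyming peg image."""
    return [PEGS[ch] for ch in number if ch in PEGS]

print(images("3.1417"))  # ['tree', 'gun', 'door', 'gun', 'oven']
```

Running it on 3.1417 yields exactly the picture sequence {tree, gun, door, gun, oven} described above, with rhyme doing the work of linking sound to image.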
Both of these uses of identity imply functional advantages of repetition, and it is thus not surprising that it is recruited by grammar itself.


2.1.2. Grammatical uses of identity

Identity (and non-identity) abound in the grammar, and identity calculations may be total or partial. As the papers in this volume show, the relevant dimension may be phonological, morphological, syntactic or semantic. They may also involve complex combinations of two or more of the above, either from within one module (e.g. two phonological dimensions such as Place and Manner, as in Semitic root constraints, see §3.1), or across modules (e.g. paying attention to both sound and meaning, as in Mandarin ta haplology, see §3.4).

Reduplication is probably the best-studied type of identity in language. It may involve either full or partial repetition. There is a vast literature on the details of exactly how a partial reduplicant is formed (McCarthy and Prince 1995 and many others), and for this reason I shall spend little time on it here. Suffice it to say that from the information perspective either type can carry grammatical information, such as plurality (e.g. Hopi saaqa ‘ladder’, sasaaqa ‘ladders’) or aspect (e.g. iteration in Mandarin Chinese kan ‘look’, kanyikan ‘look a little’, ignoring tonal changes). It can also convey other semantic information, such as intensity (e.g. Tagalog sira-sira ‘thoroughly damaged’) or diminutives (e.g. Swati lingi-lingisa ‘resemble a little’).

There is also reduplication in syntax, as in sentences like “He walked and walked and walked.” Typically, even in these syntactic cases, the identity must be spelled out, not just present at LF. Consider comparative repetition in a sentence like The distant ship became tinier and tinier and more and more blurred. One can replace tinier and tinier with more and more tiny, but one cannot say *tinier and more tiny. Sometimes it is not clear if the reduplication is in the morphophonology or in the syntax, and the two may be stylistic variants. Consider Mandarin A-not-A questions.
Formally, one reduplicates the entire verb: xihuan-bu-xihuan ‘like-not-like?’, but informally one may copy just the first syllable: xi-bu-xihuan. Cantonese has clearly moved to morphophonological syllable reduplication only: chong-m-chongyee ‘like-not-like?’ or hai-m-hai-claasi ‘highclass-not-highclass?’. Copying the entire verb is out: *chongyee-m-chongyee (Law 2001). Given the utility of identity in the form of reduplication, it is something of a puzzle that language has many, many cases where non-identity is required. So many, in fact, that Leben (1973) proposed the Obligatory Contour Principle, or OCP, as a universal principle of grammar. However, the literature on perception suggests that the reason may lie in the need to eliminate potential interference. Walter (2007) proposes that there are three distinct causes of identity avoidance: production constraints based in articulation (the Biomechanical Repetition Avoidance Hypothesis, or BRAH), perception constraints arising from the difficulty of perceiving the second of two repeated items, and a purely cognitive constraint on repeating syntactic features. In the next section I look at a selection of cases of identity avoidance in grammar.

3. Avoidance of identity

I start with a classic OCP case, Semitic roots (McCarthy 1981), then move on to echo-words, anti-gemination effects, and haplology.

3.1. Paralinguistic uses of identity

As is well known, in many languages, including Arabic, roots avoid non-identical homorganic consonants, but the prohibition is gradient. The pairs in (1a) below are virtually non-existent, but the pairing in (1b) occurs at about half the expected rate.

(1)

Observed/Expected ratios for two non-identical Arabic root consonants that share place of articulation (from Coetzee and Pater 2008)

a. Pharyngeals          .08
   Dorsals              .03
   Labials              .00
   Coronal sonorants    .09
   Coronal fricatives   .05
   Coronal stops        .17

b. Coronal fricative + Coronal stop   .52
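The observed/expected statistic behind (1) is simple arithmetic over a root list: count how often a consonant pair actually occurs, and divide by the count expected if the two root slots were filled independently of each other. A minimal sketch; the mini-lexicon of biconsonantal roots below is invented for illustration and is not Coetzee and Pater’s data:

```python
from collections import Counter

def oe_ratio(roots, pair):
    """Observed/expected ratio for an unordered consonant pair in a
    list of (C1, C2) root skeletons, assuming slots are independent."""
    n = len(roots)
    observed = sum(1 for r in roots if set(r) == set(pair))
    slot1 = Counter(r[0] for r in roots)   # marginal counts, first slot
    slot2 = Counter(r[1] for r in roots)   # marginal counts, second slot
    a, b = pair
    expected = (slot1[a] / n * slot2[b] / n + slot1[b] / n * slot2[a] / n) * n
    return observed / expected if expected else float("nan")

# Invented mini-lexicon with an OCP-style gap: the coronal pair t+s
# co-occurs far less often than chance would predict, so O/E << 1.
roots = ([("t", "b")] * 10 + [("s", "b")] * 10 +
         [("b", "t")] * 5 + [("b", "s")] * 5 + [("t", "s")])
print(round(oe_ratio(roots, ("t", "s")), 2))  # 0.27
```

An O/E value near 1 means the pair occurs about as often as chance predicts; values near 0, as in (1a), signal an active co-occurrence restriction.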

Coetzee and Pater model this in a Harmonic Grammar using a set of OCP-like constraints that are assigned weights during learning, controlled by lexical statistics. See also Zuraw (2000).

Boersma (1998) and Walter (2007) both agree that avoidance of repetition can be caused either by articulatory factors (the difficulty of repeated gestures) or by perceptual factors. But they differ as to the domains within which these different factors apply. Boersma proposes that only the avoidance of adjacent identical items is rooted in perception, and that non-adjacent cases have an articulatory explanation. Walter, on the other hand, sees phonological cases as driven mainly by articulatory pressures, whereas any sort of identity avoidance (phonological, syntactic, semantic) can be laid at the feet of perceptual factors. For example, the separation of two strident fricatives by an epenthetic vowel in the English plural is perceptually driven for Boersma, but articulatorily driven (by the BRAH) for Walter (p. 37).

However, contra Boersma, it seems that even non-adjacent effects can have perceptual causes. Gallagher (2010) looks at laryngeal co-occurrence restrictions in Quechua and other languages. South Bolivian Quechua does not allow two ejectives in a root. These restrictions affect non-adjacent consonants, and for Boersma would presumably be articulatorily based, but Gallagher argues that in these languages “laryngeal co-occurrence phenomena … maximize the perceptual distinctness of contrast between words”. This conclusion is based on the results of an experiment which shows that ejectives are hard to perceive in the context of another ejective. Pairs of words that contrast one vs. zero ejectives, [kap’i] vs. [kapi], are more reliably distinguished than words that contrast one vs. two ejectives, [k’api] vs. [k’ap’i] (p < 0.0001), even though each pair has a difference of exactly one ejective per word. The data were CVCV bisyllables spliced together from native Quechua recordings, and presented in a native Quechua sentence frame, but the subjects were English speakers, so the results are claimed to show a fundamental perceptual difference.

Figure 2: Percent correct for 1 vs. 2 contrast categories by place of articulation, averaged across all subjects


An alternative hypothesis for the Quechua facts might be interference-based: suppose that the presence of one C’ renders another C’ hard to detect, partly because localizing laryngeal constriction in the signal is hard. In that case, C’aC’a vs. C’aCa would also be hard, because of competitors in the surroundings. To investigate this hypothesis, C’aCa vs. CaC’a stimuli were tested in a follow-up study (Gallagher p.c.). If localization of glottalization were hard, these might be readily confusable. However, performance was at ceiling, like C’aC’a vs. CaCa, suggesting that they are processed like all roots that differ in both consonants.

Walter also offers some proposals that might explain the prevalence of Place (and Manner)-based OCP restrictions over restrictions based on other features of segments, such as laryngeal features. She says (p. 26) that repetition of homorganic consonants is difficult for three reasons: “First, it involves sustained activity without a rest period for the relevant articulator. Second, the impossibility of overlapping coarticulation between identical gestures lengthens the necessary transition time. Third, it requires rapid reversal of an articulator’s trajectory.” These factors apply particularly to the major articulators, and less so to the structures of the glottis. Tongue twisters, of course, are designed to exploit this. Here are some examples from a variety of languages, mainly taken from http://www.uebersetzung.at/twister/.

(2)

A selection of tongue twisters

English: She sells seashells on the seashore.

French: Un chasseur sachant chasser sait chasser sans son chien de chasse.
‘A hunter who knows how to hunt can hunt without his hunting dog.’

Mandarin Chinese: Sì shí sì zhī shí shī zi shì sǐ de.
‘44 stone lions are dead.’

German: Zehn zahme Ziegen zogen zehn Zentner Zucker zum Zittauer Zug.
‘Ten tame goats pulled ten hundredweights of sugar to the train of Zittau.’


Basque: Itsasoan dabiltzan itsasontziaren ontzian, itsasoko itsazkiak, itxita daude.
‘In the box of the ship on the sea the sea fish are held.’

Hebrew: Sara shara shir same’akh, shir same’akh shara Sara.
‘Sara sang a happy song, a happy song sang Sara.’

Xhosa: Ndacol’ icik’ eQonca, ndayibeka kwase Qonca.
‘I found something in Qonca but I left it there.’

3.2. Echo-words: close but not too close

A very different case of identity avoidance, with a quite different cause, is found in echo-words. Echo-words combine total reduplication with fixed affixal material, but with the crucial twist that they usually ban complete output identity. There is thus an interesting tension between identity and non-identity. In Kannada, there is a process that replaces the first CV of the initial copy with [ɡi-] (Lidz 2001): pustaka-gistaka ‘books and stuff’. The process applies to all lexical categories, and even to phrases. If the input has an initial /ɡi-/, like the word /ɡiDa/ ‘plant’, some speakers go ahead as normal, but four other options are also attested: ɡiDa-biDa, ɡiDa-viDa, ɡiDa-paDa, or simple blocking. Why? The simplest answer is that in the disfavoured totally identical output ɡiDa-ɡiDa the affixal material /ɡi-/ is undetectable. The output can easily be mistaken for complete reduplication, with no affix at all. In other words, complete identity would be misleading, and is functionally suboptimal. Note that this is not quite the same proposal as the perceptual difficulty of noticing a second occurrence of a repeated entity that has been documented by Kanwisher and Bavelier. I am not suggesting that the affixal /ɡi-/ will not be noticed (although this is also possible). It is more that, even if it is noticed, it will still not be detectable as an affix.

3.3. Anti-gemination

McCarthy (1986) was the first to identify the anti-gemination phenomenon whereby vowel deletion is blocked between two identical consonants. For example, Afar deletes the unstressed medial vowel in xamilí > xamlí, but not in sababá (*sabbá). Rose (2000) takes a rather different view, arguing that the OCP applies across vowels too (cf. Semitic roots), so sababá and *sabbá both violate the OCP. Instead, in her view, the vowel is retained to avoid a geminate. Boersma (1998) and Blevins (2004, 2005) suggest that a sequence of two identical consonants is perceptually indistinguishable from an underlying geminate (which Afar has), and thus information risks being lost if the vowel is deleted. This explanation is similar to the reasons advanced in the previous section for the behaviour of echo-words. Walter (2007) takes a different view. She attributes anti-gemination effects to the articulatory difficulty of re-articulation. Specifically, between two identical consonants vowels tend to be longer, to allow for the extra difficulty of repeating the same consonant twice. This greater length makes vowel reduction less likely, and if deletion is the end point of a historical reduction process, it also makes deletion less likely.

3.4. Haplology

Identity effects are by no means limited to phonology. Haplology is the name for a process whereby two identical morphemes would be expected to occur next to each other, but one is simply left out. The phenomenon is particularly interesting because the identity is often computed on a combination of phonological and morphosyntactic identity. It can act word-internally, in which case it is a morphological process, or between words, in which case it is clearly syntactic. In the Mandarin data below, the third person singular pronoun ta can occur next to another ta if they have distinct referents, as in (3a). However, if they are both adjacent and co-referential, as in (3b), the sentence is ungrammatical. Instead, one is deleted, as in (3c). Note that this sentence has only the co-referential reading.
And if ta is coreferential with a preceding adjacent full NP, not another ta, as in (3d), the sentence is also fine. (3)

a. Wo wen ta_i ta_j mingtian lai bu lai
   I ask he_i he_j tomorrow come not come
   ‘I asked him_i whether he_j would come tomorrow.’

b. *Wo wen ta_i ta_i mingtian lai bu lai
   I ask he_i he_i tomorrow come not come
   ‘I asked him_i whether he_i would come tomorrow.’


c. Wo wen ta_i ∅_i/*j mingtian lai bu lai
   I ask he_i he_i/*j tomorrow come not come
   ‘I asked him_i whether he_i/*j would come tomorrow.’

d. Wo wen Lao Wang_i ta_i/∅_i mingtian lai bu lai
   I ask Lao Wang_i he_i tomorrow come not come
   ‘I asked Lao Wang_i whether he_i would come tomorrow.’

It is clear that identity is computed over both reference and phonological identity. Walter (2007) argues that the roots of this sort of anti-identity lie in perception: a second identical ta is too hard to detect, and is thus omitted. Although she agrees that most cases require phonological identity, she discusses a case in Semitic where it is not involved. See also Neeleman and Van de Koot (2005).

Co-reference is not the only syntactic or semantic factor that can play a role. Consider ellipsis. Ellipsis can be viewed as a sort of haplology, with the relevant identity being syntactic isomorphism rather than reference or phonological identity. In the semantic domain, there is evidence that identity on dimensions such as human vs. non-human, or common nouns vs. pronouns or names, can cause processing problems when such items are adjacent. In the next section I look more closely at the possible reasons for identity avoidance in syntax and semantics.

4. Syntactic and semantic interference effects from false or potential identity

4.1. SOV to SVO word order: adjacent NPs cause processing problems

A long-standing puzzle has been why SVO word order is cross-linguistically so common. Gibson et al. (2011) provide experimental evidence that the default word order for human language is SOV, not SVO. They observe that SOV order emerges in home-sign and in experimental gestural tasks (even for those whose L1 is SVO). However, in the same tasks the word-order preference shifts to SVO when S and O are similar, by which the authors mean when both NPs are human. They hypothesize that memory demands are “… sensitive to the number of similar elements between heads and their dependents”.
In an SOV sentence with two human NPs, the object intervenes between the subject and the verb, and the processor has to deliberately ignore it in order to identify the subject. They thus suggest that SOV word order increases memory demands. This may be resolved in some languages by case-marking, and in others by a historical change to SVO (Newmeyer 2000). If Gibson and colleagues are right, this is an instance of identity defined across broad cognitive classes (human vs. non-human, or potential agent vs. non-agent), in which such identity hinders processing, very much in line with Walter (2007).

4.2. Interference in syntactic processing

A second case comes from the extensive literature on the extra difficulty of processing object relatives (4b) rather than subject relatives (4a).

(4)

Subject vs. object relatives:
a. The banker praised by the barber climbed the mountain.
b. The banker the barber praised climbed the mountain.

Object relatives like (4b) often contain sequences of NPs and sequences of Vs. The example below contains three of each. (5)

The reporter the politician the commentator met trusts said the president won’t resign.
     NP           NP              NP          V     V     V

These act as competitors in the parsing process, hindering successful processing. Rendering them less similar on some dimension improves matters (Bever 1974): (6)

The reporter everyone I met trusts said the president won’t resign.
     NP        NP    NP  V    V     V
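The competitor intuition behind (5) and (6) can be made concrete with a toy cue-based retrieval sketch, in the spirit of the interference findings discussed below; the feature sets and the one-over-matches accuracy formula are invented for illustration, not taken from any published model:

```python
def retrieval_accuracy(memory_items, cue):
    """Toy similarity-based interference: the parser retrieves a noun
    phrase by matching cue features, and every stored item that also
    matches the cue competes with the target, diluting accuracy."""
    matches = [item for item in memory_items if cue <= item]  # subset match
    return 1.0 / len(matches) if matches else 0.0

cue = {"NP", "description"}  # retrieval cue for a "the N"-type subject

# Three similar description NPs, as in (5):
similar = [{"NP", "description"}, {"NP", "description"}, {"NP", "description"}]
# Mixed NP types, as in (6): a description, a quantifier, a pronoun:
mixed = [{"NP", "description"}, {"NP", "quantifier"}, {"NP", "pronoun"}]

print(retrieval_accuracy(similar, cue))  # 0.3333333333333333
print(retrieval_accuracy(mixed, cue))    # 1.0
```

With three interchangeable description NPs the cue matches all of them and retrieval accuracy drops; making the NPs dissimilar leaves a unique match, mirroring the processing improvement reported for mixed NP types.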

In a series of experiments, Gordon, Hendrick and Johnson (2001, 2004) and Gordon et al. (2006) show that if one NP is a description and the other is an indexical pronoun or a name, object relatives get easier. They claim that successful processing requires storing two things: the NPs themselves, and their order. Semantic similarity of the two items causes interference in retrieving the order information. Of course, object relatives might also become easier because pronouns and names are less likely to be modified by relatives. But the improved processing is also seen in clefts (though less so): It was the dancer/Jill that the banker/Joe phoned.

A potential confound for the semantic claim in Gordon, Hendrick and Johnson’s 2001 paper is that the two types of NP in their stimuli also differed phonologically and syntactically. The descriptions had two words, “the N”, and the noun was almost always polysyllabic; the names and pronouns were all single monosyllabic words. A typical example is The banker John praised climbed the mountain. But later papers used monosyllabic generics as descriptions, and longer names, so the word- and syllable-count differences cannot play a central role. Gordon, Hendrick and Johnson discard a number of alternative explanations in favour of semantic similarity-based interference.

An important postscript brings us back to where we started: identity computations are a general cognitive skill that can be recruited by, but is not limited to, the grammar, and phenomena like the object/subject asymmetry are not necessarily grammatical. Van Dyke and McElree (2006) show that the processing of object relatives can be slowed by interference from items in memory that are not contained within the sentence itself. Subjects were asked to remember a list of nouns such as ‘table, sink, truck’, and then to process sentences like this: It was the boat that the guy who lived by the sea sailed/fixed in two sunny days. It turns out that if the words are potential alternates to the head noun, and thus competitors for the key role in the sentence, processing is slowed. In this example ‘truck’ slows processing in the “fixed” sentence but not in the “sailed” sentence. The lines of research in this section confirm that potential identity can interfere with processing in various ways, and as a consequence non-identity can be functionally beneficial.

5. Conclusion

I have given a swift overview of identity and non-identity phenomena in a variety of domains not restricted to humans, nor to language. Identity and its absence are clearly detectable in both linguistic and non-linguistic domains. It is also clear that while there are functional advantages to identity (e.g. in acquisition), there are also disadvantages (e.g. in processing). Unsurprisingly, language may recruit and grammaticalize identity for its own purposes (reduplication), and the same is true for the avoidance of identity (the OCP). The challenge for the linguist then becomes to disentangle when

a particular (non-)identity phenomenon requires a grammatical statement, and when it can safely be attributed to more general cognitive abilities. In certain cases there is disagreement on which functional advantage underlies some phenomenon, but there is no a priori reason why there might not be multiple causes. So, for example, it could be that epenthesis takes place in English kisses both because of the articulatory difficulty of producing two strident coronal fricatives in a row, and also because of the difficulty of perceiving the second of two such fricatives. These dual pressures would surely make epenthesis even more likely to arise in language, and become enshrined as part of the grammar.

Acknowledgements

Thanks to Gillian Gallagher, Ted Gibson and Peter Gordon for useful discussions, and to the participants at the GLOW workshop in Vienna.

References

Bavelier, Daphne
1994 Repetition blindness between visually different items: the case of pictures and words. Cognition 51: 199–236.

Bever, Thomas G.
1974 The ascent of the specious, or there’s a lot we don’t know about mirrors. In Explaining Linguistic Phenomena, David Cohen (ed.), 173–200. Washington, DC: Hemisphere.

Blevins, Juliette
2004 Evolutionary Phonology: The Emergence of Sound Patterns. Cambridge: Cambridge University Press.
2005 Understanding antigemination: natural or unnatural history. In Linguistic Diversity and Language Theories, Zygmunt Frajzyngier, David Rood and Adam Hodges (eds.), 203–234. Amsterdam: John Benjamins.

Boersma, Paul
1998 Functional Phonology: Formalizing the Interactions Between Articulatory and Perceptual Drives. Landelijke Onderzoekschool Taalwetenschap 11. The Hague: Holland Academic Graphics.


Briefer, Elodie, Thierry Aubin, Katia Lehongre, and Fanny Rybak
2007 How to identify dear enemies: the group signature in the complex song of the skylark Alauda arvensis. The Journal of Experimental Biology 211: 317–326.

Coetzee, Andries W., and Joe Pater
2008 Weighted constraints and gradient restrictions on place co-occurrence in Muna and Arabic. Natural Language and Linguistic Theory 26: 289–337.

Durgin, Frank
2000 The reverse Stroop effect. Psychonomic Bulletin and Review 7: 121–125.

Dyer, Frederick N.
1973 Same and different judgments for word-color pairs with “irrelevant” words or colors: Evidence for word-code comparisons. Journal of Experimental Psychology 98: 102–108.

Endress, Ansgar D., Marina Nespor, and Jacques Mehler
2009 Perceptual and memory constraints on language acquisition. Trends in Cognitive Sciences 13: 348–353.

Ferguson, Charles A.
1977 Baby talk as a simplified register. In Talking to Children: Language Input and Acquisition, Catherine E. Snow and Charles A. Ferguson (eds.), 209–235. Cambridge: Cambridge University Press.

Gallagher, Gillian
2010 Perceptual distinctness and long-distance laryngeal restrictions. Phonology 27: 435–480.

Gibson, Edward, Kimberly Brink, Steven Piantadosi, and Rebecca Saxe
2011 Cognitive pressures explain the dominant word orders in language. Paper presented at CUNY 2011, Stanford, CA.

Giurfa, Martin, Shaowu Zhang, Arnim Jenett, Randolf Menzel, and Mandyam V. Srinivasan
2001 The concepts of ‘sameness’ and ‘difference’ in an insect. Nature 410: 930–933.

Gordon, Peter C., Randall Hendrick, and Marcus Johnson
2001 Memory interference during language processing. Journal of Experimental Psychology: Learning, Memory, and Cognition 27: 1411–1423.
2004 Effects of noun phrase type on sentence complexity. Journal of Memory and Language 51: 97–114.


Gordon, Peter C., Randall Hendrick, Marcus Johnson, and Yoonhyoung Lee
2006 Similarity-based interference during language comprehension: evidence from eye tracking during reading. Journal of Experimental Psychology: Learning, Memory, and Cognition 32: 1304–1321.

Guinee, Linda N., and Katharine B. Payne
1988 Rhyme-like repetitions in songs of humpback whales. Ethology 79: 295–306.

Kanwisher, Nancy G.
1987 Repetition blindness: type recognition without token individuation. Cognition 27: 117–143.

Law, Ann
2001 A-not-A questions in Cantonese. UCL Working Papers in Linguistics 13: 295–317.

Leben, William R.
1973 Suprasegmental phonology. Ph.D. dissertation, Massachusetts Institute of Technology.

Lidz, Jeffrey
2001 Echo formation in Kannada and the theory of word formation. The Linguistic Review 18: 375–394.

McCarthy, John J.
1981 A prosodic theory of nonconcatenative morphology. Linguistic Inquiry 12: 373–418.
1986 OCP effects: gemination and antigemination. Linguistic Inquiry 17: 207–263.

McCarthy, John J., and Alan S. Prince
1995 Faithfulness and reduplicative identity. In UMass Occasional Papers in Linguistics 18: Papers in Optimality Theory, Jill Beckman, Suzanne Urbanczyk and Laura Walsh Dickey (eds.), 249–384. Amherst, MA: Graduate Linguistic Student Association.

Murphy, Robin A., Esther Mondragón, and Victoria A. Murphy
2008 Rule learning by rats. Science 319: 1849–1851.

Neeleman, Ad, and Hans van de Koot
2005 Syntactic haplology. In The Blackwell Companion to Syntax, vol. IV, Martin Everaert and Henk van Riemsdijk, with Rob Goedemans and Bart Hollebrandse (eds.), 685–710. Oxford: Wiley-Blackwell.

Newmeyer, Frederick J.
2000 On the reconstruction of ‘Proto-world’ word order. In The Evolutionary Emergence of Language: Social Function and the Origins of Linguistic Form, Chris Knight, Michael Studdert-Kennedy, and James Hurford (eds.), 372–388. Cambridge: Cambridge University Press.


Pariyadath, Vani, and David Eagleman
2007 The effect of predictability on subjective duration. PLoS ONE 2(11): e1264.

Rose, Sharon
2000 Rethinking geminates, long-distance geminates, and the OCP. Linguistic Inquiry 31: 85–122.

Shi, Rushen, James L. Morgan, and Paul Allopenna
1998 Phonological and acoustic bases for earliest grammatical category assignment: a cross-linguistic perspective. Journal of Child Language 25: 169–201.

Stroop, J. Ridley
1935 Studies of interference in serial verbal reactions. Journal of Experimental Psychology 18: 643–662.

Traunmüller, Hartmut
1994 Conventional, biological and environmental factors in speech communication: a modulation theory. Phonetica 51: 170–183.

Van Dyke, Julie A., and Brian McElree
2006 Retrieval interference in sentence comprehension. Journal of Memory and Language 55: 157–166.

Vehrencamp, Sandra L.
2001 Is song-type matching a conventional signal of aggressive intentions? Proceedings of the Royal Society B: Biological Sciences 268: 1637–1642.

Walter, Mary Ann
2007 Repetition avoidance in human language. Ph.D. dissertation, Massachusetts Institute of Technology.

Wong, Yam-man, Wendy
2000 Reduplication in the early lexical development of Cantonese-speaking children. BSc dissertation, University of Hong Kong.

Yip, Moira
1988 The obligatory contour principle and phonological rules: a loss of identity. Linguistic Inquiry 19: 65–100.
1989 Feature geometry and co-occurrence restrictions. Phonology 6: 349–374.
1998 Identity avoidance in phonology and morphology. In Morphology and its Relation to Phonology and Syntax, Steven G. Lapointe, Diane K. Brentari and Patrick M. Farrell (eds.), 216–246. Stanford, CA: CSLI.

Zuraw, Kie
2002 Aggressive reduplication. Phonology 19: 395–440.

On the biological origins of linguistic identity

Bridget Samuels

1. Introduction: types of identity, types of evidence

As should be obvious from the variety of topics addressed in this volume, the notion of identity is pervasive throughout all branches of linguistics, yet it takes a multitude of forms. My goal in this chapter is to address the biolinguistic origins of some versions of identity, with the hope of shedding some light on how they may have arisen in our species. I have discussed many of these topics in other places (Samuels 2009a,b, 2011, 2012a,b), though not always in the context of identity, and borrow from those discussions here. A recurring theme throughout this chapter will be that evidence can be found just about anywhere: from non-linguistic domains, from other mammals, and also from more distantly related clades, such as birds. As I have argued before (Samuels 2009b), it may well be that the animal abilities for which I provide evidence are only analogous (rather than homologous) to the representations and operations of identity found in human language. Nevertheless, it is worth emphasizing that discovering analogs to human cognitive capabilities can have important implications for how we understand the evolution of language, and for the ways that evolution can be channeled by physical and developmental constraints (Gould 1976, Gehring 1998, Hauser, Chomsky and Fitch 2002). In other words, analogs serve to highlight ‘Third Factor’ principles of biological and physical design (Chomsky 2005, 2007) which might be at play. Because my own specialization lies in phonology and its interface with syntax, I will focus on some notions of identity found in those realms (for a larger sample, see Nevins 2012). However, the same type of argument could be made from other subfields as well.
For example, there is both behavioral and neurophysiological evidence that monkeys can be trained to distinguish between categories that are novel to them, such as dogs and cats (Freedman et al. 2001), and even pigeons can do the same with arbitrary symbols like the letter ‘A’ (Vauclair 1996). Understanding these abilities and their limitations could be relevant to how humans learn lexical semantics and categories such as noun classes. In the sections that follow, I will discuss phonological identity classes and identity avoidance in phonology, then contrast the notion of copying in syntax with reduplication in morphophonology.

2. Phonological identity classes

Communication is only possible because humans are able to create identity where it does not exist, by creating abstract categories from the chaos of real utterances. Somehow, we are able to get past the fact that every time a linguistic sound is uttered it is different (in fundamental frequency, formant structure, length, pitch contour, degree of articulatory closure, background noise) and categorize the sounds that we hear (or signs that we see). In short, we rely on the marvel of categorical perception, which is notable for having a crucial property: tokens from within a single category are perceived as being alike, and indeed are very difficult to distinguish, while tokens from different categories are distinguished with much greater ease. As Jackendoff (2011: 22) notes, we discretize the phonological signal into categories (phonemes) because “the only way to develop a large vocabulary and still keep vocal signals distinguishable is to digitize them. This is a matter of signal detection theory, not of biology…. So we might want to say that the digital property of phonology comes by virtue of ‘natural law,’” or what in recent years has become known as ‘Third Factor’ principles (Chomsky 2005). Not only does phonology group acoustically diverse tokens into the identity classes that we call phonemes, phonological features identify groups of phonemes that behave identically with respect to certain phonological processes. That is to say, the phonemes that trigger or undergo a particular phonological rule typically share a feature that may be described in terms of acoustics (for instance, all of the sounds’ first formants fall within a given frequency range) or articulation (all the sounds are produced with vibration of the vocal folds).
Phonologists call these groups of similar sounds “natural classes.” The standard view, as expressed by Kenstowicz (1994: 19), is that “the natural phonological classes must arise from and be explained by the particular way in which UG organizes the information that determines how human language is articulated and perceived.” For the present purposes, then, we might ask how the identity classes in phonology (phonemes and features) have come to be, both over the course of human evolution and during routine language acquisition.

In order to address this question, we may turn to the converse of identity: distinguishing between categories. The human auditory system matures early, and many studies have shown that the youngest infants are capable of discriminating phonetic contrasts that are utilized in the various languages of the world (Werker and Tees 1984). Perhaps even more tellingly, not only are the sets of contrasts used cross-linguistically available to human infants, many of them are also perceived by non-humans. Take, for example, the distinction between the voiceless stops /p t k/ and their voiced counterparts /b d ɡ/. English speakers perceive a category boundary at +15/+20 ms of voice onset time (VOT), which is known as a short-lag/long-lag distinction (Hay 2005). The discrimination peak at this category boundary thus coincides with what is known as the positive auditory discontinuity: a bias in the auditory system, common to humans and most other mammals, which produces a non-linear mapping between acoustic inputs and the percepts they produce. It is also important to note that, while auditory discontinuities seem to provide natural boundaries for speech categories (Kuhl 1993, 2000), these are psychoacoustic biases that exist independently of human speech. Category boundaries that coincide with auditory discontinuities are the most salient to infants and the easiest for adults to learn, though other boundaries are also readily learnable (Hay 2005), even by language-impaired children (Wright 2006). Cortical re-mapping or perceptual warping is the process by which category boundaries are mapped onto auditory discontinuities (or to other, more arbitrary areas of the acoustic space), yielding categorical perception. Despite early notions that categorical perception is what makes speech ‘special’, categorical perception has since been demonstrated in humans for non-speech sounds, faces, and colors.
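The category-boundary idea can be caricatured in a few lines of code: a single boundary on the VOT continuum collapses acoustically distinct tokens into two identity classes, discarding all within-category detail. The 20 ms boundary follows the short-lag/long-lag figure cited above; the category labels and the sample tokens are invented for illustration:

```python
def label_vot(vot_ms, boundary_ms=20.0):
    """Toy categorical-perception mapping for an alveolar stop:
    every token on the VOT continuum is reduced to one of two
    identity classes by a single category boundary."""
    return "/t/" if vot_ms >= boundary_ms else "/d/"

# Seven acoustically distinct tokens collapse into just two percepts:
continuum = [0, 5, 10, 15, 25, 40, 60]  # VOT in ms
print([label_vot(v) for v in continuum])
# ['/d/', '/d/', '/d/', '/d/', '/t/', '/t/', '/t/']
```

The 5 ms difference between the 10 ms and 15 ms tokens is erased (both map to /d/), while the same-sized difference between 15 ms and 25 ms straddles the boundary and yields distinct percepts, which is the signature of categorical perception described above.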
It has also been shown that macaques, baboons, and mice perceive conspecific calls categorically (Cheney and Seyfarth 2007), and that crickets, frogs, blackbirds, sparrows, quail, finches, budgerigars, marmosets, and other animals also perform categorical labeling (see references in Hauser 1996, Guenther and Gjaja 1996, and Kluender et al. 2006). In oscine birds which exhibit ‘closed-ended’ song learning, we find that neurogenesis accompanies this process (see Anderson and Lightfood 2002 §9.5.2). Birds also show the perceptual magnet effects that characterize the warping of the cortical map (Kluender et al. 1998). In humans, it has been shown that the sensory cortex also undergoes re-mapping in a number of less usual circumstances: for instance, when a person is blinded or deafened, the other senses can take over the brain areas which formerly served the now-absent sense, and the same occurs with amputees (Ramachandran and Blakeslee 1998). It is now possible to record single neurons from the auditory cortex of mammals that largely share our auditory system, in order to understand

344

Bridget Samuels

how basic auditory processing might subserve human phonemic systems. Specifically, Mesgarani et al. (2008) have recorded the neural responses of ferrets listening to human speech. They then determine what properties of the auditory stimulus cause each individual neuron to respond the most strongly. Some neurons respond best to the high front vowels because they are tuned to the frequency band where these vowels have a strong formant. Other neurons are tuned to a broader spectrum of frequencies but are sensitive to sudden bursts of noise, so they respond most strongly to the stop consonants. In short, ferrets have neurons capable of detecting acoustic properties that correlate with phonetic properties of phonologically active classes. Assuming the same capabilities in pre-linguistic humans (‘pre-linguistic’ in both the ontogenetic and the phylogenetic sense, perhaps) is plausible because the auditory system is highly conserved among mammals. Creating a phonemic system would then reduce to tagging properties that are already being perceived with phonological features, and in turn using those features to carve out categories. In terms of how we might tie perception to production and leverage these perceptual abilities into a full-fledged phonemic system, one possibility is provided by Coen (2006). Coen develops a computational model of warping the cortical map that is capable of learning both bird songemes and human vowel categories. The idea behind his work is that:

In a notion reminiscent of a Cartesian theater — an animal can ‘watch’ the activity in its own motor cortex, as if it were a privileged form of internal perception. Then for any motor act, there are two associated perceptions — the internal one describing the generation of the act and the external one describing the self-observation of the act. The perceptual grounding framework described above can then cross-modally ground these internal and external perceptions with respect to one another.
The power of this mechanism is that it can learn mimicry…. [It yields] an artificial system that learns to sing like a zebra finch by first listening to a real bird sing and then by learning from its own initially uninformed attempts to mimic it. (Coen 2006: 19)

The success Coen has in modeling the human vowel system is consistent with the results of de Boer (2001) and Oudeyer (2006), who also model the emergence of vowel systems. These three models differ in the parameters they assume and the methods they use, yet they all approximate attested vowel systems very closely, even without priors such as the ultimate number of categories to be established. In addition to the multimodal input produced by mimicry, Oudeyer (2006) and Guenther and Gjaja (1996) also emphasize the role of self-monitored experimentation (‘motor babbling’) in

On the biological origins of linguistic identity

345

connecting auditory and articulatory representations to produce phonological categories. Coen’s model in particular can also utilize input from multiple modes of external perception (e.g. sight and sound). Though it is certainly not necessary for the construction of a phonological system, or else the blind could never learn to talk, giving a role to visual input in phonological category building explains three facts that have long been known (the first two of which are discussed in Coen 2006 §2.1): first, that watching the movement of a speaker’s lips can greatly aid comprehension; second, that speech sounds which are acoustically ambiguous can usually be distinguished by unambiguous visual cues; third, that visual input can affect an auditory percept, as in the famous ‘McGurk Effect’ auditory illusion (McGurk and MacDonald 1976), in which a subject presented with (for instance) a synchronized visual /ɡa/ and auditory /ba/ perceives /da/. Recent neurological studies corroborate this behavioral evidence: it has been shown that both visual and somatosensory input reach the auditory cortex in macaques, and that watching lip movements produces a response in the supratemporal auditory cortex in humans (see Brosch, Selezneva and Scheich 2005, Ghazanfar et al. 2005, Ghazanfar, Chandrasekaran and Logothetis 2008 and references in Budinger and Heil 2006). Also, Weikum et al. (2007) have shown that visual information alone is sufficient to allow four- to six-month-old infants to discriminate between languages.

3. Identity avoidance in phonology

Let us now turn our attention to an apparent restriction on identity in phonological representations. Since Leben (1973), the Obligatory Contour Principle (OCP) has been a mainstay of phonological theory. The OCP serves to ban consecutive identical features (e.g. on autosegments that are adjacent on some tier).
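The OCP as just defined, a ban on consecutive identical elements on a tier, can be sketched as a simple scan over an autosegmental tier (a toy illustration of my own, not a formalization proposed in the text):

```python
# Toy OCP check: flag adjacent identical autosegments on a single tier.

def ocp_violations(tier):
    """Return the indices at which an element is identical to its successor."""
    return [i for i in range(len(tier) - 1) if tier[i] == tier[i + 1]]

# A tonal tier H L H H L: the adjacent H H pair starting at index 2 is flagged.
print(ocp_violations(["H", "L", "H", "H", "L"]))  # [2]
```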
This may be seen as a phonological manifestation of a more general identity-avoidance constraint of the shape *XX, as I will discuss later for syntax. While many phenomena that fall under the purview of the OCP pertain to tone (for instance, banning adjacent high tones), others are within the segmental realm. One such ‘antigemination’ rule of vowel deletion comes from Biblical Hebrew. This syncope rule turns underlying /ka:tab-u:/ into a surface form [ka:θvu:] after spirantization of /t/ and /b/, but the rule applies only when the flanking consonants (adjacent on the consonantal tier) are underlyingly non-identical (McCarthy 1986, see also Yip 1988). Hence, syncope does not occur in /sa:bab-u:/, which surfaces as [sa:vavu:] after spirantization of /b/. Before discussing the synchronic representation of rules like the one I have just described, I will first recap arguments by Walter (2007) that OCP effects in phonology arise from two sources, an articulatory one and a perceptual one (see also Yip 1988). On the articulatory side, Walter shows that repeating phonological gestures within a short time frame is highly effortful and physiologically taxing. She argues on the basis of biomechanical, experimentally-supported evidence that gestural repetition is difficult for three reasons (Walter 2007: 26-27):

(1) a. it involves sustained activity without a rest period for the relevant articulator
    b. the impossibility of overlapping coarticulation between identical gestures lengthens the necessary transition time
    c. it requires rapid reversal of an articulator’s trajectory

These difficulties can be mitigated by weakening or deleting one of the repeated gestures, or by increasing the time between them, all of which are attested in natural languages (Walter 2007: 36). Production difficulties alone cannot account for all OCP effects, however, because we often observe repetition avoidance on longer timescales (i.e. even across multiple syllables) than would be expected if the basis were purely articulatory. To account for the remaining data, Walter proposes that perceptual difficulties are to blame. Importantly from a biolinguistic perspective, the perception of repetition is known from the visual processing literature to be faulty as well, so this is not a purely phonological or even linguistic deficit. Kanwisher, Yin and Wojciulik (1999) discuss a number of known problems in the visual processing of repetition, among them ‘repetition blindness’, or the failure to perceive a repeated stimulus. This is attested for stimuli including visually-presented words and is robust to intervention by a dissimilar word as well as visual changes between the repeated stimuli, such as changing from upper to lower case. See Walter (2007: 169) for an extensive list of linguistic and non-linguistic stimulus types for which repetition blindness has been found experimentally. Repetition blindness would presumably result in the deletion of one of the repeated units, whereas other forms of perceptual difficulties could result in other repairs. For example, Kingston et al. (2006) demonstrate that in some cases, two successive stimuli are perceived as being distinct when acoustically this is not the case.


These perceptual mechanisms can serve as diachronic driving forces in the evolution of phonological OCP effects. The scenario above would fall under the category of ‘CHANGE’-type sound change in the Evolutionary Phonology model proposed by Blevins (2004). CHANGE is defined as when “the phonetic signal is misheard by the listener due to perceptual similarities of the actual utterance with the perceived utterance” (Blevins 2004: 32). Fruehwald and Gorman (2011) use a slightly different type of explanation, still consistent with Blevins (2004), in their discussion of the emergence of anti-gemination in the English past tense (e.g. coronal stop-final verbs that take -ǝd as a suffix). They invite us to consider the following Middle English past tense forms, in which schwa is optionally syncopated:

(2) a. packed: [pækǝd] ~ [pækt]
    b. waited: [weitǝd] ~ [weit:]

Note that, when syncopation occurs with a coronal stop-final stem, the result is a geminate. Fruehwald and Gorman (2011) propose that such forms would occasionally have been misparsed as instances of present-tense forms (e.g. /weit/). They further suggest that children could misinterpret the lower rate of unambiguous syncopation in coronal-final verbs as evidence that such verbs cannot be syncopated in the past tense. The modern antigemination pattern then emerges as syncopation is posited for all forms except coronal stop-final ones. Again, no OCP constraint is necessary to motivate this reanalysis, yet identity avoidance is the result. The evidence for an ‘emergent OCP’, as Walter (2007) calls it, is strong: given the articulatory and perceptual motivations for repetition avoidance, one may conclude that it is not necessary to posit a hard-wired OCP (even as a violable principle) in UG. Though it is beyond the scope of this piece to discuss in more detail, this conclusion is consistent with a view that the OCP and many other purported constraints in the phonological grammar emerge from extralinguistic factors (Samuels 2009a, 2011). See Reiss (2008) for a simultaneously more humorous and developed argument against the OCP in particular and phonological constraints in general. With this said, even if we can ground OCP effects in extra-linguistic principles from a diachronic standpoint, synchronically we must still be able to represent the cross-linguistic pattern of identity-avoidance data. Reiss (2003a) focuses on the typology of rules like the Biblical Hebrew one described earlier in this section, which deletes a vowel between non-identical consonants. Anti-gemination processes like this one have typically been explained by OCP-driven repair. As Odden (1988) points out, anti-gemination is but one piece of a larger typological puzzle. There are syncope rules which exhibit antigemination effects, there are others that apply blindly regardless of whether they create geminates, and there is a third type which only applies in the case where flanking consonants are identical, producing what Odden dubs ‘antiantigemination.’ For example, in Koya (Taylor 1969, Odden 1988) a final vowel deletes when it finds itself between flanking consonants (e.g. across a word boundary) that are identical:1

(3)    Underlying         Surface
    a. na:ki ka:va:li     na:kka:va:li    ‘to me it is necessary’
    b. a:ru ru:pa:yku     a:rru:pa:yku    ‘six rupees’

The same typology, Odden (1988) shows, is found for vowel insertion rules: there are some which apply only when the flanking consonants are identical, some which apply blindly, and some which apply only if the flanking consonants are non-identical. Crucially, the cross-linguistic typology of such rules shows that we must be able to account for antigemination cases where a rule fails to apply when two segments in its structural description differ by a single, arbitrary feature (either selected from the entire set of features used by the language, or from a particular subset of features). However, as Baković (2005) notes, there are two unattested cases: those in which two segments are required to differ in the values of every feature (complete non-identity), and those in which two segments are required to have the same value for one member of a selected subset of features, but it does not matter which one (variable partial identity). Thus, the set of possible patterns is somewhat constrained; see fn. 2 for further discussion. In autosegmental representation, the attested effects are obtained by employing an OCP constraint to ban structures like the one below, in which two segments are linked by sharing the same feature value, [+F]:

(4)  C1    C2
       \   /
       [+F]

While feature-value identity can be conveniently expressed in autosegmental notation as in (4), Reiss (2003a,b) makes the point that because the autosegmental/feature geometric approach does not use variables, it cannot account for attested rules which require that two segments differ by any arbitrary feature, or any from among a particular subset of features. In order to account for such rules, Reiss proposes a system of ‘feature algebra’ incorporating variables and quantifiers. In this approach, the attested varieties of rules are described using conditions on rule application, without any OCP-style constraints. For example, to capture antigemination, one could write a rule of the form A → Ø / B_C, and then add the condition that B ≠ C. This possibility was raised and rejected by Yip (1988), but finds proponents in Odden (1988) and Reiss (2003a,b). The basis of feature algebra is the notion that a segment (here, C1 or C2) is an abbreviation for a feature matrix represented in the following manner:

(5) Segments as feature matrices (Reiss 2003b: 222)

1 Retroflexion is ignored: for instance, a retroflex [ḍ] and coronal [d] count as sufficiently identical. Thus, verka:ḍi digte becomes verka:ḍdigte.

Fi denotes a feature, such as [nasal], and Greek letter variables denote the value (±) that feature Fi has for a given segment. The subscript outside a pair of parentheses containing αFi denotes the segment in question; thus, these subscripts are always 1 for C1 and 2 for C2. Such representations are meant to replace feature geometric, autosegmental representations (see Samuels 2009a, 2011 for the development and exposition of this idea). For present purposes, the important consequence of adopting a representation like (5) is that it is still possible to represent the equivalent of (4), where segments C1 and C2 have the same value for feature Fn:

(6) (αFn)1 = (βFn)2

This is actually a special case of a more general condition in which Fi belongs to some set of features G ⊆ F, with F being the complete set of phonological features.

(7) Identity Condition
    ∀Fi ∈ G such that (αFi)1 = (βFi)2


In cases where segments must share all feature values in order to undergo a phonological rule (e.g. for antigemination to occur), G = F. As I have discussed in more detail elsewhere (Samuels 2009a, 2011), Odden (2011) points out that all the attested patterns of identity and non-identity conditions on phonological rules reduce to variations of (7).2 The requirement that two segments differ in some way (e.g. antiantigemination) is simply the negation of the Identity Condition:

(8) Non-Identity Condition
    ¬∀Fi ∈ G such that (αFi)1 = (βFi)2
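Conditions (7) and (8) amount to quantified agreement checks over feature matrices. A minimal sketch, assuming segments are modeled as mappings from features to ± values (the function names and the toy feature set are my own, not Reiss’s or Odden’s):

```python
# Sketch of the Identity Condition (7) and Non-Identity Condition (8).
# A segment is modeled as a dict from feature names to '+'/'-' values;
# the helper names and toy feature set are assumptions for illustration.

def identity_condition(c1, c2, G):
    """(7): c1 and c2 agree on every feature in the selected set G."""
    return all(c1[f] == c2[f] for f in G)

def non_identity_condition(c1, c2, G):
    """(8): the negation of (7), i.e. disagreement on at least one feature in G."""
    return not identity_condition(c1, c2, G)

F = {"voice", "coronal", "labial"}  # toy stand-in for the full feature set F
t = {"voice": "-", "coronal": "+", "labial": "-"}
d = {"voice": "+", "coronal": "+", "labial": "-"}

# Antigemination gate (G = F): syncope only between non-identical consonants.
print(non_identity_condition(t, d, F))  # True: deletion may apply
print(non_identity_condition(t, t, F))  # False: deletion blocked

# Identity gate over a proper subset of features (here, place features).
print(identity_condition(t, d, {"coronal", "labial"}))  # True: homorganic
```

Setting G to the full set F yields the antigemination gate, while a proper subset such as the place features yields a homorganicity requirement of the kind discussed below for Yapese.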

The conditions in (7) and (8) permit antigemination and antiantigemination of the attested types, while ruling out the unattested conditions mentioned earlier, namely complete non-identity and variable partial identity. I will illustrate the application of these principles with two examples: syncope in Afar, which is subject to the Non-Identity Condition, and syncope in Yapese, which is subject to the Identity Condition. First, let us look at the Afar case. The data, originally from Bliese (1981), have been treated by McCarthy (1986), Yip (1988), Reiss (2003a,b), and Baković (2005). The alternations in question are the following:

(9)    Underlying    Surface
    a. digib-e       digbé      ‘he married’
    b. xamil-i       xamlí      ‘swamp grass’
    c. danan-e       danané     ‘he hurt’
    d. xarar-e       xararé     ‘he burned’

Descriptively, the second vowel in a word deletes if it is unstressed and the flanking consonants are not completely identical. In order to prevent deletion from applying in (9c,d) it is necessary to add a Non-Identity Condition which permits deletion only if, for flanking consonants γi and γj, ¬∀Fi ∈ F such that (αFi)γi = (βFi)γj. This solution will give the correct result for the forms in (9). However, more careful inspection of the Afar data reveals that the situation is a bit more complicated. Syncope does not happen when the second syllable is closed:

(10) a. digibté *digbté    ‘she married’ (cf. digbé ‘I married’)
     b. wagerné *wagrné    ‘we reconciled’ (cf. wagré ‘he reconciled’)

2 Adopting the Non-Identity Condition as stated in (8) and proposed by Odden (2011) obviates Baković’s criticism of Reiss’ original formulation of the Non-Identity Condition: ∃Fi ∈ G such that (αFi)1 ≠ (βFi)2. This version allows the unattested cases of complete non-identity and variable partial identity, while (8) does not.

One way to express this is discussed in Samuels (2011), Ch. 5. In short, we want to add a further condition that will allow deletion only when the segment immediately to the right of γj is [-cons]. This can be represented formally in the search-and-delete framework for which I have argued in Samuels (2009a, 2011). A similar approach can account for syncope in Yapese (Jensen 1977, Odden 1988, Reiss 2003a,b), which only applies if the flanking consonants are homorganic and the first consonant is word-initial;3 this is a case which requires the Identity Condition.

(11)   Underlying    Surface
    a. ba puw        bpuw      ‘it’s a bamboo’
    b. ni te:l       nte:l     ‘take it’
    c. rada:n        rda:n     ‘its width’

In this case, the vowel only deletes if ∀Fi ∈ {coronal, dorsal, labial} such that (αFi)γi = (βFi)γj. The connection between biology and the synchronic representation of effects like these is admittedly less obvious than the connections between their diachronic emergence and difficulties with perception and production, as outlined earlier in this section. Nevertheless, understanding the type of operational and representational complexity that is necessary to capture the attested range of effects provides a starting point for the investigation of their possible analogs and origins.

3 Or postvocalic. Let us abstract away from this because it will add the unnecessary complication of the rule potentially bleeding itself. See Samuels (2009a), Ch. 4 for discussion of this issue.

4. Reduplication and token identity

In morphology, multiple notions of identity are also of importance; again, see Nevins (2012) for a wider variety of examples than I am able to present here. To mention but one well-known phenomenon, the Person Case Constraint (also known as the ‘Me-lui Constraint’) ensures in certain languages that if a dative third person clitic is present, an accusative clitic in the same clause must also be third person. And, in a larger sense, identity classes define morphological alternations: parts of speech and finer-grained distinctions such as noun declensions and verbal conjugations serve similar functions to natural classes in phonology, defining which morphemes undergo which rules and appear in which contexts. Here, I will focus on the phenomenon of reduplication, which is characterized by the copying of phonological material to serve a morphological function.4 Reduplication can be very informally defined as a process that causes some portion of a word to appear twice. A given instance of reduplication can be either total (encompassing an entire morpheme/word) or partial, and in many languages both appear, with different morphological functions. Just as the base may contain some elements that are not copied in the case of partial reduplication, the reduplicant may contain elements not found in the base (‘fixed segmentism’). However, I will argue following Raimy (2003) that reduplication does not create token identity relations akin to copying (i.e. internal Merge) in syntax. Raimy (1999, 2000a,b) develops a framework in which reduplication stems from linearization requirements in phonology. The starting assumption for this view is Marantz’ (1982) argument that reduplication involves the introduction of a (typically) phonetically null morpheme. Reduplication and affixation are then two sides of the same coin; they are different surface manifestations of two morphemes coming together. Each time a string enters the phonological workspace, before anything else happens to it, it must be combined with the string which is already present in the workspace.
Reduplicated structures arise when this process creates a ‘loop’ in the strings, as will become clear shortly. In Raimy’s framework, each word comes specified with precedence relations that order its segments. These are denoted by arrows or as ordered pairs; both X → Y and (X, Y) are read as ‘X precedes Y.’ In the usual case, a lexical representation consists of a linear string initiated by a ‘start’ boundary symbol # and terminated by an ‘end’ boundary symbol %.
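On these assumptions, the default precedence relations of a lexical entry are fully determined by its segmental string; a minimal sketch (the function name is my own):

```python
# Minimal sketch: generate a word's default precedence relations, as in (12).

def precedence_pairs(segments):
    """Chain #, the segments in order, and % into ordered precedence pairs."""
    symbols = ["#", *segments, "%"]
    return list(zip(symbols, symbols[1:]))

print(precedence_pairs("kæt"))
# [('#', 'k'), ('k', 'æ'), ('æ', 't'), ('t', '%')]
```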

4

See Samuels (2006) and Nevins (2012) for evidence that some other morphologically-created structures which give the false or ambiguous appearance of reduplication, such as infixation that inadvertently creates a string of two identical adjacent syllables, may actually be avoided.

(12) /kæt/ is shorthand for:   # → k → æ → t → %
     or as ordered pairs:      (#, k), (k, æ), (æ, t), (t, %)

However, another morpheme may be added to this string, in which case its precedence relations will be added to those found in (12). A morpheme, as discussed above, may be null except for directions to add new precedence relations. The new precedence relationship may create a loop (specifically here a ‘backward’ loop) in the string:

(13) (#, k), (k, æ), (æ, t), (t, k), (t, %)

In this case, the direction (t, k) has been added, while the instructions (t, %) and (#, k) remain. This creates a set of precedence relations for the string that is ‘non-asymmetric’ (see discussion in Raimy 2003). For precedence to be asymmetric, if A precedes B, then the statement that B precedes A cannot be true. But notice that in (13) we have added the direction that /t/ (directly) precedes /k/ while it is still the case (via transitivity) that /k/ precedes /t/. The precedence relations in this string are therefore no longer asymmetric — the string is visibly looped, not linear — and as a result the string is by hypothesis unpronounceable. In this framework, while looped or non-asymmetric structures may be produced, they must be fully linearized prior to pronunciation. Specifically, the asymmetric, linearized output of a looped structure like (13) is the shortest path through the looped string, where as many precedence relations as possible are realized. Concretely, this means traversing backwards loops as soon as possible, and where there are conflicts between lexically-present and morphologically-added links, the latter have priority. Fitzpatrick (2006) formalizes these principles with a fixed ranking of Optimality Theoretic constraints, ECONOMY and COMPLETENESS, plus an additional constraint, SHORTEST. In the event that multiple nested loops begin at the same point, SHORTEST ensures that the shorter inner loop is taken first (see Samuels 2011 for skepticism).
For the sake of space, I omit discussion of Idsardi and Shorey’s (2007) alternative, which achieves the same results without using any constraints, instead employing a modified version of Dijkstra’s shortest path algorithm (Dijkstra 1959). Whichever implementation one chooses, linearizing (13) should yield (14):

(14) # → k → æ → t → k’ → æ’ → t’ → %
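For the simple case of a single backward loop over a linear base, the shortest-path intuition can be sketched as follows (an assumed simplification of my own; the constraint-based and Dijkstra-style implementations mentioned above are more general):

```python
# Assumed simplification: linearize a base string plus ONE added backward link.
# The loop is taken as soon as it becomes available, then the path runs on to %.

def linearize(segments, backward_link):
    src, dst = backward_link  # e.g. (2, 0): the segment at index 2 newly precedes the one at 0
    # Traverse up to and including src, jump back to dst, replay dst..src, continue.
    return segments[: src + 1] + segments[dst : src + 1] + segments[src + 1 :]

# (13): /kæt/ with the added relation (t, k) yields total reduplication, cf. (14).
print(linearize("kæt", (2, 0)))  # kætkæt
```

The output simply concatenates symbols: the second k, æ, t are new tokens of the same types as the originals, which is exactly the point taken up next.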


Crucially, as I have indicated with prime marks in (14), the segments created by unwinding the looped representation are new tokens of the same type as the originals; they are not taken to stand in an identity or correspondence relationship with the originals, nor is there any other dependency created between them (Raimy 2003). This stands in sharp contrast to the relationship that is created when a syntactic element is copied during internal Merge. There are several reasons to believe that this is the state of affairs for reduplication. One is that the different tokens created by reduplication can receive different treatments in the phonology. For example, in Washo, coda devoicing interacts with reduplication (Jacobson 1964, Kager 1999, Raimy 2003). Reduplicating a form such as wedi ‘it’s quacking’ produces the form wetwedi with coda devoicing occurring in a transparent or surface-true fashion, as it would in an unreduplicated form. The correct result is easily achieved by reduplicating, then allowing coda devoicing to apply—but if the instance of /d/ created by reduplication were token-identical to the one in the base, then devoicing should apply uniformly to both and result in *wetweti (or alternatively, not apply at all, resulting in *wedwedi), counter to fact (see Raimy 2000b and Samuels 2009a for extensive discussions of ‘backcopying’ and other over-/under-application cases). A complementary argument against looped representations creating token identity, sketched by Gagnon (2007), goes as follows. Imagine that lexical representations are governed by a principle of economy. Such a notion has been pervasive in generative phonology (for instance, to motivate underspecification) and has been explicitly stated as the principle of Lexical Minimality (Steriade 1995, cf. Samuels 2009a).
If this principle holds, then to minimize the number of segments that must be maintained in the lexical representation, we ought to represent banana as follows:

(15) # → b → a → n → a → %, with an added backward link from the second /a/ to /n/

Roughly, the problem is that allowing lexical representations like these makes wild predictions.5 Consider a hypothetical language that aspirates voiceless stops in the onset of stressed syllables. Now consider three lexical items in this language: kat, pat, and tat. The first two are straightforward, but according to the principle of economy that led us to (15), tat should be represented in the following manner:6

(16) # → t → a → %, with an added backward link from /a/ to /t/

Now assume that there is token identity between the two instances of /t/ created when (16) is linearized as tat. When aspiration applies, it should do the same thing to both tokens; essentially, the Washo examples just described should be impossible. We could expect only overapplication or underapplication in the reduplicated forms:

(17)    Overapplication    Underapplication
     a. khat               khat
     b. phat               phat
     c. thath              tat

5 See Samuels (2009a), Ch. 4 for additional, more theory-bound arguments.
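The contrast behind (17) can be made concrete with a toy sketch (my own illustration, not from the text): the rule aspirates the onset of the stressed, here initial, syllable, and token identity forces any change to affect every occurrence of the shared token.

```python
# Toy sketch of the prediction in (17). Aspiration marks the onset of the
# stressed (initial) syllable; identity_groups lists index sets that would
# count as "the same token" under the token-identity hypothesis.

def aspirate_stressed_onset(segments, identity_groups):
    out = list(segments)
    targets = {0}  # onset of the initial (stressed) syllable
    for group in identity_groups:
        if 0 in group:
            targets |= group  # token identity: the change hits every occurrence
    for i in targets:
        out[i] = out[i] + "h"
    return "".join(out)

print(aspirate_stressed_onset("tat", []))        # that: independent tokens, transparent result
print(aspirate_stressed_onset("tat", [{0, 2}]))  # thath: overapplication, as in (17c)
```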

Gagnon (2007) reviews cases that could potentially show such patterns and concludes that there is no evidence for cases of over- or underapplication of the type in (17). Thus, the conclusion that reduplication does not create token identity is supported. Even though I have argued here that reduplication does not create token identity, it is no less important to think about reduplication from a biolinguistic point of view. Indeed, it seems that repetitive structures are commonplace in animal songs. Yip (2006) discusses how zebra finch songs are structured, building on work by Doupe and Kuhl (1999) and others. The songs of many passerine songbirds consist of a sequence of one to three notes (per Coen 2006, ‘songemes’) arranged into a ‘syllable.’ The definition of a syllable in birdsong is a series of notes/songemes bordered by silence (Williams and Staples 1992, Coen 2006), which of course is very unlike syllables in human language. Birdsong syllables, which can be up to one second in length, are organized into ‘motifs.’ While Yip (2006) equates motifs with human prosodic words, others equate them with phrases. There are multiple motifs within a single song. Examples from numerous species discussed by Slater (2000) show that the motif is typically a domain of repetition; the shape of a song is (a^x b^y c^z)^w where a, b, c are syllables and parentheses indicate a motif. That is to say, a song is a string of syllables, each repeated an arbitrary number of times to create a motif, and the motif itself is also repeated some number of times. Payne (2000) shows that virtually the same can be said of humpback whale songs, which take the shape (a…n)^w, where the number of repeated components, n, can be up to around ten. Of course, it is very difficult to say just how far we can or should push the parallelism between song structures and reduplication. Schneider-Zioga and Vergnaud (2009) have contended that reduplication arises at the planning level, while birdsong exhibits repetition only at the level of execution. While this is a possible interpretation of the type of repetition found in bird and whale song, it is speculative at this stage; we do not know enough about the mental representations in these species to rule out the possibility that there is repetition at the planning level in their vocalizations. And regardless of the level at which repetition occurs, there must be some computational mechanism that makes it possible. While evidence for this mechanism in birds and whales may be hard to come by, the similarities between its output and that of reduplication in human language on the surface are at least suggestive.

6 Debatably here, since the savings of a single segment does not obviously outweigh the creation of two precedence statements. I use this very short example because it illustrates my point concerning the predictions made by this theory; it would be easy to construct an example with the same force in which the economy argument is more plausible.

5. Conclusions

As I noted in the earlier discussion of phonological categorization, communication would simply be impossible without the ability to abstract categories — identity classes — out of the linguistic signal. I couched this discussion in terms of sound, but it is also true for morphology, for syntax, and for semantics. Categories are what make declensions and conjugations possible; they are behind lexical and functional parts of speech; they are the basis of linguistically relevant notions such as animacy.
To take but one well-known example, it is by virtue of their shared identity as nouns that we can use the word ‘dog’ in the same sentential frames as ‘cat.’ I have only been able to touch on a few selected notions of (non-)identity and their biolinguistic contexts here, even after narrowing my focus to the morphophonological side of language. There are certainly many other worthy topics. For example, Lohndal and Samuels (2013) discuss an identity-avoidance effect that occurs after Vocabulary Insertion, preventing two identical elements from being linearized. We suggest that this effect is yet another example of the *XX type of anti-identity principle. While this OCP-like notion may have originated in phonological theory, its manifestations are many and widespread; see Van Riemsdijk (2008) for a syntactic example, as well as Alexiadou (this volume) and Nevins (2012) for recent overviews of different *XX-style effects. The same basic idea underlies Moro’s (2000) dynamic antisymmetry, as well as Bošković’s (2002) analysis of exceptions to multiple wh-fronting that occur precisely when the wh-words are homophonous. Not only is identity in language a ubiquitous and multifarious concept, the landscape of identity and anti-identity effects is puzzling. We have already seen that there is gemination, antigemination, and antiantigemination. For every instance of haplology, there is a word that fails to undergo it, like haplology itself. And, as reduplication shows us, we cannot even be sure when we see two copies of a linguistic object that they are in fact identical, technically speaking. Moreover, anti-identity is not as simple as a templatic *XX constraint that can be relativized to different features and deployed at any level of representation as a language desires (cf. Boeckx 2009). Rather, identity is created in a number of ways — through category-formation; through copying for internal Merge — but in some cases, it may be disfavored for reasons that range from perceptual difficulty to articulatory fatigue. Thus, the evolutionary and cognitive histories of these effects are not unitary, either. While the origins and mechanisms behind the various notions of identity are not yet well understood, I hope the present chapter can serve as an illustration of how we can begin to think of each of them individually in a biolinguistic way.

Acknowledgements

Thank you to Henk van Riemsdijk and Kuniya Nasukawa for giving me the opportunity to contribute to this volume, and to Jeroen van de Weijer for helpful comments on a previous draft. I am also grateful to Cedric Boeckx, Michaël Gagnon, Morris Halle, Bill Idsardi, Terje Lohndal, Dave Odden, Eric Raimy, Charles Reiss, and Bert Vaux. Each of them has shaped my views on linguistic identity and my identity as a linguist.

References

Alexiadou, Artemis
this volume Exploring the limitations of identity effects in syntax. In Identity Relations in Grammar, Kuniya Nasukawa and Henk van Riemsdijk (eds.), 201–226. Berlin/New York: Mouton de Gruyter.

Bridget Samuels

Anderson, Stephen R., and David W. Lightfoot
2002 The Language Organ: Linguistics As Cognitive Physiology. Cambridge: Cambridge University Press.

Baković, Eric
2005 Antigemination, assimilation and the determination of identity. Phonology 22: 279–315.

Blevins, Juliette
2004 Evolutionary Phonology. Cambridge: Cambridge University Press.

Bliese, Loren F.
1981 A Generative Grammar of Afar. Dallas, TX: Summer Institute of Linguistics and the University of Texas at Arlington.

Boeckx, Cedric
2009 What happens when syntax faces the sensori-motor systems, phylogenetically and ontogenetically. Paper presented at ConSOLE XVIII, Universitat Autònoma de Barcelona, Bellaterra, Barcelona.

Boer, Bart de
2001 The Origins of Vowel Systems. Oxford: Oxford University Press.

Bošković, Željko
2002 On multiple wh-fronting. Linguistic Inquiry 33: 351–383.

Brosch, Michael, Elena Selezneva, and Henning Scheich
2005 Nonauditory events of a behavioral procedure activate auditory cortex of highly trained monkeys. Journal of Neuroscience 25: 6797–6806.

Budinger, Eike, and Peter Heil
2006 Anatomy of the auditory cortex. In Listening to Speech: An Auditory Perspective, Steven Greenberg and William A. Ainsworth (eds.), 91–113. Mahwah, NJ: Lawrence Erlbaum Associates.

Cheney, Dorothy L., and Robert M. Seyfarth
2007 Baboon Metaphysics. Chicago: University of Chicago Press.

Chomsky, Noam
2005 Three factors in language design. Linguistic Inquiry 36: 1–22.
2007 Approaching UG from below. In Interfaces + Recursion = Language?, Uli Sauerland and Hans Martin Gärtner (eds.), 1–29. Berlin/New York: Mouton de Gruyter.

Coen, Michael Harlan
2006 Multimodal dynamics: self-supervised learning in perceptual and motor systems. Ph.D. dissertation, Massachusetts Institute of Technology.

Dijkstra, Edsger W.
1959 A note on two problems in connexion with graphs. Numerische Mathematik 1: 269–271.

Doupe, Allison J., and Patricia K. Kuhl
1999 Birdsong and human speech: common themes and mechanisms. Annual Review of Neuroscience 22: 567–631.

Fitzpatrick, Justin M.
2006 Sources of multiple reduplication in Salish and beyond. In Studies in Salishan (MIT Working Papers on Endangered and Less Familiar Languages 7), Shannon T. Bischoff, Lynika Butler, Peter Norquest and Daniel Siddiqi (eds.), 211–240.

Freedman, David J., Maximilian Riesenhuber, Tomaso Poggio, and Earl K. Miller
2001 Categorical perception of visual stimuli in the primate prefrontal cortex. Science 291: 312–316.

Fruehwald, Josef, and Kyle Gorman
2011 Cross-derivational feeding is epiphenomenal. Studies in the Linguistic Sciences: Illinois Working Papers 2011: 36–50.

Gagnon, Michaël
2007 Token identity vs. type identity. Paper presented at the CUNY Phonology Forum Conference on Precedence Relations, CUNY Graduate Center, New York.

Gehring, Walter J.
1998 Master Control Genes in Development and Evolution: The Homeobox Story. New Haven, CT: Yale University Press.

Ghazanfar, Asif A., Chandramouli Chandrasekaran, and Nikos K. Logothetis
2008 Interactions between the superior temporal sulcus and auditory cortex mediate dynamic face/voice integration in rhesus monkeys. Journal of Neuroscience 28: 4457–4469.

Ghazanfar, Asif A., Joost X. Maier, Kari L. Hoffman, and Nikos K. Logothetis
2005 Multisensory integration of dynamic faces and voices in rhesus monkey auditory cortex. Journal of Neuroscience 67: 580–594.

Gould, Stephen Jay
1976 In defense of the analog: a commentary to N. Hotton. In Evolution, Brain, and Behavior: Persistent Problems, R. Bruce Masterton, William Hodos and Harry J. Jerison (eds.), 175–179. New York: Wiley.

Guenther, Frank H., and Marin N. Gjaja
1996 The perceptual magnet effect as an emergent property of neural map formation. Journal of the Acoustical Society of America 100: 1111–1121.

Hauser, Marc D.
1996 The Evolution of Communication. Cambridge, MA: MIT Press.

Hauser, Marc D., Noam Chomsky, and W. Tecumseh Fitch
2002 The faculty of language: what is it, who has it, and how did it evolve? Science 298: 1569–1579.

Hay, Jessica
2005 How auditory discontinuities and linguistic experience affect the perception of speech and non-speech in English- and Spanish-speaking listeners. Ph.D. dissertation, University of Texas.

Idsardi, William J., and Rachel Shorey
2007 Unwinding morphology. Paper presented at the CUNY Phonology Forum Workshop on Precedence Relations, CUNY Graduate Center, New York.

Jackendoff, Ray
2011 What is the human language faculty? Two views. Language 87: 586–624.

Jacobson, William H.
1964 A grammar of the Washo language. Ph.D. dissertation, University of California, Berkeley.

Jensen, John Thayer
1977 Yapese Reference Grammar. Honolulu, HI: University of Hawaii Press.

Kager, René
1999 Optimality Theory. Cambridge: Cambridge University Press.

Kanwisher, Nancy, Carol Yin, and Ewa Wojciulik
1999 Repetition blindness for pictures: evidence for the rapid computation of abstract visual descriptions. In Fleeting Memories, Veronika Coltheart (ed.), 119–150. Cambridge, MA: MIT Press.

Kenstowicz, Michael
1994 Phonology in Generative Grammar. Oxford: Blackwell.

Kingston, John, Della Chambless, Daniel Mash, Jonah Katz, Eve Brenner, and Shigeto Kawahara
2006 Sequential contrast and the perception of co-articulated segments. Poster presented at the 10th LABPHON, Paris.

Kluender, Keith R., Andrew J. Lotto, and Lori L. Holt
2006 Contributions of nonhuman animal models to understanding human speech perception. In Listening to Speech: An Auditory Perspective, Steven Greenberg and William A. Ainsworth (eds.), 203–220. Mahwah, NJ: Lawrence Erlbaum Associates.

Kluender, Keith R., Andrew J. Lotto, Lori L. Holt, and Suzi L. Bloedel
1998 Role of experience for language-specific functional mappings of vowel sounds. Journal of the Acoustical Society of America 104: 3568–3582.

Kuhl, Patricia K.
1993 Innate predispositions and the effects of experience in speech perception: the Native Language Magnet Theory. In Developmental Neurocognition: Speech and Face Processing in the First Year of Life, Bénédicte de Boysson-Bardies, Scania de Schonen, Peter W. Jusczyk, Peter MacNeilage, and John Morton (eds.), 259–274. Norwell, MA: Kluwer Academic.
2000 Language, mind, and the brain: experience alters perception. In The New Cognitive Neurosciences, Michael S. Gazzaniga (ed.), 99–115. Cambridge, MA: MIT Press.

Leben, William
1973 Suprasegmental phonology. Ph.D. dissertation, Massachusetts Institute of Technology.

Lohndal, Terje, and Bridget Samuels
2013 Linearizing empty edges. In Syntax and its Limits, Raffaella Folli, Christina Sevdali and Robert Truswell (eds.), 66–79. Oxford: Oxford University Press.

Marantz, Alec
1982 Re reduplication. Linguistic Inquiry 13: 435–482.

McCarthy, John J.
1986 OCP effects: gemination and antigemination. Linguistic Inquiry 17: 207–263.

McGurk, Harry, and John MacDonald
1976 Hearing lips and seeing voices. Nature 264: 746–748.

Mesgarani, Nima, Stephen V. David, Jonathan B. Fritz, and Shihab A. Shamma
2008 Phoneme representation and classification in primary auditory cortex. Journal of the Acoustical Society of America 123: 899–909.

Moro, Andrea
2000 Dynamic Antisymmetry. Cambridge, MA: MIT Press.

Nevins, Andrew
2012 Haplological dissimilation at distinct stages of exponence. In The Morphology and Phonology of Exponence, Jochen Trommer (ed.), 84–116. Oxford: Oxford University Press.

Odden, David
1988 Antiantigemination and the OCP. Linguistic Inquiry 19: 451–475.
2011 Rules v. constraints. In The Handbook of Phonological Theory (2nd ed.), John A. Goldsmith, Jason Riggle and Alan C. L. Yu (eds.), 1–39. Oxford: Blackwell.

Oudeyer, Pierre-Yves
2006 Self-Organization in the Evolution of Speech. Oxford: Oxford University Press.

Payne, Katherine
2000 The progressively changing songs of humpback whales: a window on the creative process in a wild animal. In The Origins of Music, Nils Lennart Wallin, Björn Merker and Steven Brown (eds.), 135–150. Cambridge, MA: MIT Press.

Raimy, Eric
1999 Representing reduplication. Ph.D. dissertation, University of Delaware.
2000a The Phonology and Morphology of Reduplication. Berlin/New York: Mouton de Gruyter.
2000b Remarks on backcopying. Linguistic Inquiry 31: 541–552.
2003 Asymmetry and linearization in phonology. In Asymmetry in Grammar, vol. 2, Anna Maria Di Sciullo (ed.), 129–146. Amsterdam: John Benjamins.

Ramachandran, V. S., and Sandra Blakeslee
1998 Phantoms in the Brain: Probing the Mysteries of the Human Mind. New York: William Morrow.

Reiss, Charles
2003a Quantification in structural descriptions: attested and unattested patterns. The Linguistic Review 20: 305–338.
2003b Towards a theory of fundamental phonological relations. In Asymmetry in Grammar, vol. 2, Anna Maria Di Sciullo (ed.), 214–238. Amsterdam: John Benjamins.
2008 The OCP and NOBANANA. In Rules, Constraints, and Phonological Phenomena, Anna Maria Di Sciullo (ed.), 252–301. Oxford: Oxford University Press.

Riemsdijk, Henk C. van
2008 Identity avoidance: OCP effects in Swiss relatives. In Foundational Issues in Linguistic Theory: Essays in Honor of Jean-Roger Vergnaud, Robert Freidin, Carlos Otero and María Luisa Zubizarreta (eds.), 227–250. Cambridge, MA: MIT Press.

Samuels, Bridget
2006 Reduplication and verbal morphology in Tagalog. Ms., Harvard University.
2009a The structure of phonological theory. Ph.D. dissertation, Harvard University.
2009b The third factor in phonology. Biolinguistics 3: 355–382.
2011 Phonological Architecture: A Biolinguistic Perspective. Oxford: Oxford University Press.
2012a The emergence of phonological forms. In Towards a Biolinguistic Understanding of Grammar: Essays on the Interfaces, Anna Maria Di Sciullo (ed.), 193–213. Amsterdam: John Benjamins.

2012b Animal minds and the roots of human language. In Language, From a Biological Point of View, Cedric Boeckx, María del Carmen Horno-Chéliz and José Luis Mendívil Giró (eds.), 290–313. Newcastle upon Tyne: Cambridge Scholars Publishing.

Schneider-Zioga, Patricia, and Jean-Roger Vergnaud
2009 Feet and their combination. Paper presented at the CUNY Phonology Forum Conference on the Foot, CUNY Graduate Center, New York.

Slater, Peter J. B.
2000 Birdsong repertoires: their origins and use. In The Origins of Music, Nils Lennart Wallin, Björn Merker and Steven Brown (eds.), 49–64. Cambridge, MA: MIT Press.

Steriade, Donca
1995 Underspecification and markedness. In The Handbook of Phonological Theory, John A. Goldsmith (ed.), 114–174. Oxford: Blackwell.

Vauclair, Jacques
1996 Animal Cognition: An Introduction to Modern Comparative Psychology. Cambridge, MA: Harvard University Press.

Walter, Mary Ann
2007 Repetition avoidance in human language. Ph.D. dissertation, Massachusetts Institute of Technology.

Weikum, Whitney M., Athena Vouloumanos, Jordi Navarra, Salvador Soto-Faraco, Núria Sebastián-Gallés, and Janet F. Werker
2007 Visual language discrimination in infancy. Science 316: 1159.

Werker, Janet F., and Richard C. Tees
1984 Cross-language speech perception: evidence for perceptual reorganization during the first year of life. Infant Behavior & Development 7: 49–63.

Williams, Heather, and Kirsten Staples
1992 Syllable chunking in zebra finch (Taeniopygia guttata) song. Journal of Comparative Psychology 106: 278–286.

Wright, Beverly A.
2006 Perceptual learning of temporally based auditory skills thought to be deficient in children with Specific Language Impairment. In Listening to Speech: An Auditory Perspective, Steven Greenberg and William A. Ainsworth (eds.), 303–314. Mahwah, NJ: Lawrence Erlbaum Associates.

Yip, Moira
1988 The Obligatory Contour Principle and phonological rules: a loss of identity. Linguistic Inquiry 19: 65–100.

2006 Is there such a thing as animal phonology? In Wondering at the Natural Fecundity of Things: Essays in Honor of Alan Prince, Eric Baković, Junko Ito and John McCarthy (eds.), 311–323. Santa Cruz, CA: Linguistics Research Center, University of California, Santa Cruz.

Language index

Arabic 14, 30, 34, 69, 70, 74, 165, 329
Basque 334
Brazilian Portuguese 78
Catalan 214
Chinese 296-298, 303, 311-313, 328, 331
  Cantonese 329, 330
  Mandarin 329, 330, 333, 335
Chumash 48
Dagaare 231
Dakota 28
Diyari 40-42
Dutch 3, 6, 41, 52, 53, 59, 161-182, 187, 190, 192, 225, 257
English 3, 5, 14, 25-28, 49, 59, 61-63, 66, 70-78, 81, 82, 125, 136, 137, 144, 145, 149, 175, 176, 191, 192, 199, 201-204, 210-215, 220, 221, 231, 260, 261, 267-270, 274, 278, 286, 293-298, 306, 309, 310, 312, 330, 331, 337, 343, 347
  American 80
Estonian 78
French 51, 78, 114, 131, 133, 134-138, 144-149, 151, 153, 176, 201, 203, 204, 212, 219, 220, 297, 331
  Québecois 146, 148
German 7, 78, 213, 214, 215, 331
Germanic 145, 149
  Swiss German 126
Greek 212-215, 349
Hebrew 60, 78, 207, 208, 332
  Biblical Hebrew 345, 347
Italian 124, 128, 129, 132-136, 138, 139, 143-146, 150, 152, 153, 155, 157, 163, 212, 221
Italian dialects
  Aliano 129, 143

  Fontane 147, 148
  Gavoi 139, 143
  Mascioni 129, 143
  Mezzenile 146-148
  S. Bartolomeo Pesio 149
  S. Mauro Pascoli 133
Japanese 6, 8, 13, 16, 28, 29, 32, 33, 78, 164, 227, 229, 231, 232, 236-240, 244-249, 267, 268, 296-298
  Akita 231
  Nagoya 232
  Northern Tohoku 28, 29
  Osaka 233
  Shuri Okinawan 231
  Sino-Japanese 296, 297
Ju’hoansi (Khoisan) 217-219
Kaqchikel 28, 30
Kiowa 132
Klamath 28
Korean 28, 213, 298, 311
Koya 348
Maidu 28
Maori 78
Mapila Malayalam
Middle English 349
Mohawk 309, 310
Navajo 28
Ponapean 30
Quechua 330, 331
  Cuzco Quechua 28
  South Bolivian Quechua 330
Romance 6, 124-128, 133, 139, 141, 143, 145, 149, 220, 298
Romanian 163, 212, 214
Scots Gaelic 41
Serbian 216
Serbo-Croat 162
Spanish 78, 124, 129, 130, 132, 137, 140-144, 147, 209, 212-214

Swahili 60, 78
Swati 328
Swedish 27, 30, 78
Tagalog 47, 48, 328
Tlingit 270
Tuareg 49, 51
Vata 1
Xhosa 332
Zulu 30, 33

Subject index

acoustic correlates 17
acoustic patterns 17-23, 102
acoustics 323, 342
acquisition 16, 31, 77, 83-86, 336, 342
affixation 294, 352
Agree 124, 130, 140-144, 147-154, 164, 204, 205, 219, 227
alliteration 39
ambisyllabicity 109
Andrews amalgams 7, 257, 262, 263-266, 269, 272, 273, 276, 277, 282-284
Antecedence condition on ellipsis 255-257, 284
anti-gemination 323, 329, 332, 333, 345, 347, 350, 357
anti-locality 127
antisymmetry 8, 278, 283, 284, 299, 357
articulation 5, 17, 19, 59-62, 64, 75-80, 329, 333, 342
aspiration 4, 14, 19, 21, 23, 27, 30, 31, 33, 355
association lines 42, 110
autosegmental spreading 42
babbling 5, 59, 60-81, 344
baboons 343
bees 324, 325
Binding Theory 2, 187
biolinguistic perspective 8, 346
Biomechanical Repetition Avoidance Hypothesis (BRAH) 329, 330
birdsong 7, 325, 355, 356
blackbirds 343
bracketing notation 109
budgerigars 343
Canterbury Tales, The 50
Case theory 210, 211, 221

Categorial Identity Thesis 200
categorical perception 342, 343
causatives 220
Chain reduction 249
CHANGE 347
classifier 7, 292, 310-316
clitic doubling languages 212
Coarticulation-Hypercorrection Theory 14
coda 18, 28, 39, 45, 49, 61, 63, 65, 66, 70, 101, 354
coda-onset sequence 5, 102, 104, 106, 107, 112, 114, 115
Coloured Containment 51
COMPLETENESS 353
complex onset 5, 114
compound expressions 21
consonant elements 17-19, 21, 22, 27, 30, 33
consonant harmony 60
consonant-vowel interaction 20
Constraint on Direct Recursion 200
construct state 207, 208
contour 5, 102, 114, 115, 120, 136, 342
contrastiveness 4, 13
cooccurrence restrictions 323
coordination particle doubling 239, 240, 244
copy theory 1, 2
copying 1, 40-43, 54, 313, 315, 328, 342, 352, 357
co-reference 334
Correspondence Theory (CT) 5, 46-48, 50, 51, 54
cortical re-mapping 343
crickets 343
Curie’s Principle of Symmetry 291

deletion 2, 13, 48, 127, 163-165, 227, 255, 332, 333, 345, 346, 350, 351
dependency 24-26, 102, 104-106, 109-115, 126, 143, 282, 354
dependency phonology 15
Dijkstra’s shortest path algorithm 353
Directionality 104, 111
dissimilation 2, 13, 14, 130, 132, 162
Distinctness 6, 127, 128, 132, 200, 205-207, 209-216, 220-222, 330
Distributed Morphology 126, 127, 130, 208
distributional regularities 102-104, 106
Don’t agree 165
double infinitive filter 209
double-ing 209, 221
Double-l constraint 129, 132, 137, 139, 148, 154
Double-n constraint 136, 137, 148
Double-o constraint 227, 228
double-rooted structure 283
Doubly Filled Comp Filter (DFC) 2, 3, 126
dynamic antisymmetry 8, 299, 357
echo-words 323, 329, 332, 333
Economy 148, 149, 151, 154, 295, 353-355
Ejectiveness 31
Element Theory 15-21, 26, 27, 30, 31, 34, 106, 111, 115, 117
elements 15-34, 103, 106, 107, 111, 112, 115-117
Empty Category Principle 114
empty nucleus 114
endocentric dependency 112
epenthesis 337
epenthetic vowel 42, 52, 330
Evolutionary Phonology 347
exocentric dependency 112
externalization 204, 220, 221, 225
extrinsic ordering 108
features

  distinctive features 2, 34, 42, 52, 54, 70, 71, 119, 331, 342, 344, 345, 348, 349
  feature geometry 5, 15, 19
  feature spreading 42
ferrets 344
finches 343
frogs 343
General pattern of anaphoric dependence (GPAD) 190
Generalized Mirror Principle 294, 298
government 105
Government Phonology 104
gradient OCP 64-66, 69, 70, 74-77, 81
grammaticalization 303
Graph Theory 109
haplology 1, 2, 126, 127, 163, 225, 230, 232, 233, 236, 238, 323, 328, 329, 333, 334, 357
Harmonic Grammar 329
head adjacency 7, 225, 229, 232, 236, 238, 242-244, 249
head-dependency relations 21, 22, 24, 104
headedness 23, 26-30
head-movement 125
Horn amalgams 257
humpback whales 325
identical transvocalic consonants 5, 59, 60, 61, 69, 76, 77, 81
identity avoidance 2-8, 13, 15, 16, 50, 112, 126, 161-165, 167, 171, 173, 180, 186, 187-190, 231, 249, 323, 329, 330, 332, 334, 341, 345, 347
Identity Condition 225, 349-351
identity sensitivity 1, 7
imperfect rhyme 50, 54
Impoverishment 130, 216
internal Merge 41, 43, 46, 352, 354, 357
Intonational Phrase 44

Koot, Hans van de 127, 162-164, 225-227, 334
Laryngealisation 19
Last Resort mechanism 218
lexical identity 52, 104
Lexical Minimality 354
lexicon 14, 32, 33, 59-61, 69, 76, 81, 130, 150, 188, 292, 293
Licensing Inheritance 106, 112
linear adjacency 127, 228, 229, 236
Linear Correspondence Axiom (LCA) 127, 200, 299
linear order 102, 193, 194, 305, 306, 308
linearization 44, 46, 127, 200, 205-208, 211-215, 223, 249, 260, 274, 277, 278, 280, 283, 284, 299, 300, 308, 352
Locality 104, 111
Locality Principle of Syntactic Relations 304
locative inversion 205
long-distance agreement 45
Loop Theory 5, 48, 54
Lyman’s Law 32, 33
macaques 343, 345
mammals 341, 343, 344
marmosets 343
maximal projection 105
Me-lui Constraint 352
Merge 8, 142, 143, 202, 204, 210, 219, 227, 240, 263, 264, 275, 285, 302, 305, 308
Merge-marker 301, 302, 304, 308-311, 314-316
mice 343
mimicry 325, 344
minimalist 41, 124, 126, 127, 130, 136, 140, 141, 154, 227, 293
Minimality (relativized) 3, 123, 124, 126, 128, 131, 134-136, 140, 143, 151-154
Mirror Principle 294, 298
modalities 7, 323-325

mora-based syllable theory 113-114
Morphological Merger 125
MRC Psycholinguistic Database 61, 62
multidominance 5, 44, 46, 47-49, 51, 54
multidominant phrase markers 259, 277
Multiple Agree 132, 136, 143, 144, 148, 150, 151, 154
Multiple Case Condition (MCC) 218, 219
multiple sluicing 199, 209, 213, 214, 215, 220, 221
multiple wh-fronting 215, 216, 357
nasalization 19, 29, 48
natural classes 21, 71, 72, 79, 342, 352
Neeleman, Ad 127, 162-165, 167, 189, 225-227, 334
Negative Concord 6, 125, 135-138, 144, 145, 148-150, 154
negative imperatives 6, 125, 133, 135, 150, 151, 154
NO CROSSING CONDITION (NCC) 42
*no no constraint 232
nominal complementizer 3
Non-Identity Condition 350
non-linear representations 103, 108, 109
nucleus 18-20, 39, 49, 63, 101-107, 112-115
Obligatory Contour Principle (OCP) 2, 13, 37, 60, 123, 126, 200, 225, 326, 328, 345
obstruent voicing 13, 19, 21, 23, 27, 28, 31, 33
oddball effect 326
onset 5, 18, 19, 25, 28, 32, 39, 43, 45, 50-54, 61-66, 70, 76, 77, 101-109, 112-117, 354
Optimality Theory 37, 40, 46, 108, 126, 163, 165
oscine birds 343

Panini 294
Particle Phonology 15, 23
perceptual or memory primitive (POMP) 323
perceptual warping 343
Person Case Constraint (PCC) 124, 128, 130-133, 140, 142, 352
phoneme(s) 108, 111, 115, 164, 342
phonetic interpretation 5, 29, 107, 112-116
phonotactic constraints 102, 103
phonotactic domains 5, 102, 103, 106, 107, 112, 116
phrasal adjacency 7, 225, 226, 229, 236, 249
poem 39, 46, 49, 51
precedence 42, 48, 49, 102, 110, 111, 115, 116, 294, 295, 352-355
Principle C 2, 187-192, 280, 300
Principle of conservation 294
Principle of Full Interpretation 268
proper government 107, 114
prosodic constituent 32, 34, 40, 45, 49, 50, 114
prosodic domain 4, 15, 26, 30, 32, 34, 101
Prosodic Morphology 40
protophonation 75, 76, 80
quail 343
quotative inversion 201, 220
rats 324
Recoverability Condition on Deletion 255
redundancy-free 5, 103, 107, 108
reduplication 1, 39, 40-49, 327, 328, 332, 336, 342, 351-357
remnant movement 312-315
Rendaku 32
repair 13, 123-132, 135, 136, 138, 142-144, 147, 150, 152, 154, 346, 347
repetition 5, 39, 59-61, 75, 76, 127, 227, 327-331, 346, 347, 355, 356
repetitive babbling 5, 62, 75, 77, 81

retroflexion 348
rewrite rules 108
RHTEMPLATE(foot) 47
rhyme 5, 39, 40, 43-54, 105, 113, 323, 325, 327
Richards, Norvin 127, 128, 132, 147, 149, 162, 189, 199, 205-216, 221, 225-227
Riemsdijk, Henk C. van 2-4, 7, 13, 126, 127, 162, 163, 187, 199, 200, 205, 225, 227, 229, 259, 301, 311, 356
sC-sequences 115
semantic agreement 6, 161, 162, 165-167, 172-190
Semitic root 323, 328, 329, 333
Sentential Subject Constraint 277
Shakespearean couplet 45
Shortest attract 128
similarity avoidance 5, 74, 76, 81
skylark 325
sluicing 7, 199, 247, 257, 275-277, 283, 284
sonority 103, 104
sparrow 325, 343
SPE 17
Spell-out domains 206-209, 220, 234
spontaneous voicing 29
Stanford Phonology Projects 78
stanza 39
strong and weak pronouns 6, 161, 162, 166-168, 171, 173, 187, 192
Stroop effect 325, 326
stylistic inversion 201, 220
Subject Auxiliary Inversion 3
Subject-in-situ generalization (SSG) 200, 202-204, 209-215, 219-221
suppression of headedness 27, 30
syllable 1, 2, 5, 13, 15, 25, 26, 28, 33, 34, 40, 45, 48-50, 54, 63, 76-79, 101, 105, 108-110, 114, 115, 242, 325, 328, 330, 346, 351-356
symmetric view of Narrow Syntax 300

symmetry breaking 7, 8, 289-292, 295-299, 301, 305, 306, 310
syncopation 347
syncretism 215, 216
syntactic adjacency 225, 228
syntactic agreement 6, 161, 162, 165-167, 171-180, 183-190, 192
taxonomic phonemics 115
‘Third Factor’ principles 341, 342
three-dimensional theory of phrase structure 7
underspecification 138, 141, 216, 354
UG (Universal Grammar) 163, 342, 347

Unlike category condition 199
Unlike feature condition 199
Utterance 44
V-O compounds 296, 297
voice onset time (VOT) 343
vowel elements 17-19, 27, 30, 33
vowel harmony 16, 27
vowel insertion 348
wh-question 41, 201, 216
word-final empty nucleus 20
word-order preference 334
working memory 60
Y-model 289, 292, 293, 299, 310, 315