The Syntax of Meaning and the Meaning of Syntax
POTSDAM LINGUISTIC INVESTIGATIONS
POTSDAMER LINGUISTISCHE UNTERSUCHUNGEN
RECHERCHES LINGUISTIQUES À POTSDAM
Edited by / Herausgegeben von / Edité par Peter Kosta, Gerda Haßler, Teodora Radeva-Bork, Lilia Schürcks, Nadine Thielemann and/und/et Vladislava Maria Warditz
Editorial Board: Tilman Berger (University of Tübingen, Germany), Željko Bošković (University of Connecticut, USA), Sarah Dessì Schmid (University of Tübingen, Germany), Anna Maria di Sciullo (UQAM / Université du Québec à Montréal, Montreal, Canada), Steven Franks (Indiana University, Bloomington, USA), Atle Grønn (University of Oslo, Norway), Holger Kuße (Dresden University of Technology, Germany), Hans-Georg Wolf (University of Potsdam, Germany), Ghil'ad Zuckermann (University of Adelaide, Australia)
Vol./Bd 31
Notes on the quality assurance and peer review of this publication Prior to publication, the quality of the work published in this series is double blind reviewed by an external referee appointed by the editorship. The referee is not aware of the author's name when performing the review; the referee's name is not disclosed.
Peter Kosta
The Syntax of Meaning and the Meaning of Syntax Minimal Computations and Maximal Derivations in a Label-/Phase-Driven Generative Grammar of Radical Minimalism
Bibliographic Information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available in the internet at http://dnb.d-nb.de. Library of Congress Cataloging-in-Publication Data A CIP catalog record for this book has been applied for at the Library of Congress. Cover illustration: Cornelia Scheide: Untitled, 2020. Courtesy of the artist
Printed by CPI books GmbH, Leck ISSN 1862-524X ISBN 978-3-631-67132-0 (Print) E-ISBN 978-3-653-06463-6 (E-PDF) E-ISBN 978-3-631-70446-2 (EPUB) E-ISBN 978-3-631-70447-9 (MOBI) DOI 10.3726/978-3-653-06463-6 © Peter Lang GmbH Internationaler Verlag der Wissenschaften Berlin 2020 All rights reserved. Peter Lang – Berlin · Bern · Bruxelles · New York · Oxford · Warszawa · Wien All parts of this publication are protected by copyright. Any utilisation outside the strict limits of the copyright law, without the permission of the publisher, is forbidden and liable to prosecution. This applies in particular to reproductions, translations, microfilming, and storage and processing in electronic retrieval systems. This publication has been peer reviewed. www.peterlang.com
For Erika who provided me with comfort and peace and gave me love while working on this book
Preface
I want to thank all who accompanied me with advice and constructive criticism in discussions at conferences while I was writing this book. In the first place, I thank Diego Gabriel Krivochen, to whom I owe nearly everything. I thank Diego for inspiring me with the ideas of Radical Minimalism, which he introduced, invented, and conceived, a fact which resulted in lifelong collegiality and friendship in the form of joint ideas and publications and even one book (Krivochen and Kosta 2013). I would also like to thank Diego Krivochen for reading this book and giving valuable hints for improvements. Furthermore, I owe my gratitude to Anna Maria di Sciullo, who gave me the needed forum to express my ideas in front of leading specialists within the International Network in Biolinguistics. She was also the one who originally gave me the idea of combining the generative framework with Biolinguistics, and this influence can be seen throughout all the chapters of this book. I want to express my sincere gratitude to my wife, Erika Corbara, who has accompanied my research from the very start until the very end. She has provided me with all that is necessary to conduct research: ideal working conditions at my office and at home, understanding, support, and, above all, love. On the technical side, I would also like to express my gratitude to my secretary, Mrs. Monika Kruschinski, for helping me to prepare the layout of the manuscript; to Peter Lang Verlag, with Mr. Michael Ruecker and John Britto Stephen, for having agreed to publish this book; and to all colleagues at Peter Lang Publisher for the excellent typesetting. Last but not least, I would like to dedicate this book to my wife Erika Corbara, who accepted that I worked on it even at night and provided me with a friendly atmosphere. Thank you, dear Erika. It is thanks to my assistant Alina Liebner, M.A., and my research assistant Anne Hänsch that the index has been improved in an exemplary manner. All remaining defects and flaws are my responsibility.
Contents

Introduction ... 15

1 Third Factor "Relevance" between Semantics, Pragmatics, and Syntax ... 19
1.1 The Two Types of Meaning ... 19
1.1.1 Proposition, Truth Values, and "Meaning" ... 19
1.1.2 Russell's Definite Descriptions ... 20
1.2 Alternative Analyses ... 21
1.2.1 Generalized Quantifier Analysis ... 21
1.2.2 Fregean Analysis ... 22
1.2.3 Mathematical Logic ... 23
1.3 About "Meaning" and Illocution ... 24
1.4 How Do Syntax, Semantics, and Pragmatics Interact? ... 27
1.5 The Acquisition of Meaning ... 30
1.6 Ambiguity and the Quantum Mind ... 31
1.7 Conclusion ... 35

2 The Language of Thought Hypothesis, Classes, and Relations ... 37
2.1 General Remarks ... 37
2.2 Crash-Proof Grammars and Crash-Proof Syntax Theory ... 42
2.3 On Thought and Language ... 44
2.3.1 Philosophers on Universals: Realists ... 45
2.3.2 Reference and Meaning in Standard Philosophy ... 47
2.4 William of Ockham and Ockham's Razor ... 50
2.5 Classes and Relations Are Determined by Biological Needs and Endowment: Eric Lenneberg on Language and Arithmetics ... 61
2.5.1 Small Number of Basic Lexical Categories ... 62
2.5.2 Categorization and the Function of Labels at the Interfaces as Interpretative Tools for Narrow Syntax ... 65
2.6 Division of Labor: I-grammars and a Theory of Meaning (Mental Lexicon) ... 77
2.6.1 Small Number of Axioms of the Model of I-Grammar ... 78
2.6.2 The Architecture of Mental Grammar: The Computing Mind ... 81
2.6.2.1 The Puzzle: Derivation and Time ... 83
2.6.2.2 Economy and Derivation ... 84
2.6.2.3 Raising, Lowering vs. Merge and Late Insertion: Don't Move! ... 88
2.7 Some Further Evidence: Interfaces Driven Syntax and/or Late Insertion ... 89
2.7.1 Shortness of Evidence and Earliness Principle ... 91
2.7.2 Small Amount of Logical Constants: Feature Inheritance vs. Feature Sharing ... 99

3 Gender and Animacy between Displacement and Agreement ... 103
3.1 Words, Phrases, and Sentences ... 103
3.2 Crash-Proof and Crash-Rife Grammars ... 104
3.3 Phase Impenetrability and Phase Interpretability ... 106
3.3.1 Gender and Animacy Declension and Agreement Classes ... 108
3.3.2 The Structure of Noun Phrase or Determiner Phrase in Slavic ... 113
3.3.3 Pronominalization ... 115
3.3.4 Numerals and Animacy ... 116
3.4 Mixed Gender Agreement in Russian DPs ... 118
3.4.1 Gender ... 120
3.4.2 Gender Assignment Systems ... 121
3.4.3 Gender Agreement ... 122
3.5 Theoretical Background on DP-Internal Agreement ... 124
3.5.1 DP Structure ... 124
3.5.2 Location of Gender in the Nominal Phrase ... 125
3.5.3 Interpretability of Gender ... 127
3.5.4 Agree and Feature Sharing ... 128
3.6 Mixed Agreement ... 130
3.6.1 Hybrid Nouns ... 130
3.6.2 Proposals That Distinguish Concord and Predicate Agreement ... 132
3.6.3 Concord vs. Index Agreement (Wechsler and Zlatić 2000, 2003) ... 133
3.6.4 Multiple Levels of φ-Feature Interpretation (Sauerland 2004) ... 134
3.6.5 Assignment Strategies between Semantics, Morphology and Syntax in Slavic ... 135
3.6.6 Further Evidence for Animacy in Word Formation: Possessives in Czech and Compounds in German ... 136
3.7 Further Perspectives of Research and Exploration of Gender and Animacy ... 139

4 Adjectival and Argumental Small Clauses vs. Free Adverbial Adjuncts – A Phase-Based Approach within the Radical Minimalism with Special Criticism of the Agree, Case and Valuation Notions ... 141
4.1 On Definition of Secondary Predicates and Small Clauses: The Puzzle ... 141
4.2 The Framework: Radical Minimalism ... 142
4.2.1 The Architecture of the System ... 150
4.2.2 Nom Agree Case and Instr Case in Russian Secondary Predicates ... 151
4.2.3 Argument-Small Clauses: Subjects of Causatives (on SC and pro, PRO) ... 158
4.2.4 PredP (Bowers 1993, Bailyn and Citko 1999) ... 159
4.2.5 Against a Valuation Feature–Based and in Favor of a PHASE-Based Approach ... 161
4.2.6 Expletive pro in Small Clauses: An Incorporationist Perspective ... 167
4.3 Further Perspectives of Research on Predicative Instrumental vs. Nominative ... 172

5 Case and Agree in Slavic Numerals – Valuation of Features at the Interfaces within a Phase-Based Model ... 175
5.1 Introduction ... 175
5.1.1 A Short Overview of the Puzzle ... 176
5.1.2 Higher Numerals in Russian ... 178
5.2 Three Types of Cardinal Numerals in Polish ... 179
5.2.1 The Classification of Pawel Rutkowski ... 179
5.2.2 The Accusative Hypothesis for Higher Numerals 5: Animacy and Masculine Personal Gender (Hence Virile V) in Numerals ... 182
5.3 Agree and Valuation in Russian in a Phase-Based Model ... 183
5.3.1 Interim Conclusion ... 189
5.3.2 Animacy and Masculine Personal Gender (Hence Virile V) in Numerals ... 189
5.3.3 Classification of Numerals in Polish (PL) and SCB and Animacy (V vs. NV) ... 191
5.3.4 Lower Numerals in Russian (Paucals) ... 193
5.4 Summary ... 195

6 On Phases, Escaping Islands, and Theory of Movement (Displacement) ... 199
6.1 Wh-Movement ... 199
6.2 Freezing Effects and Generalizations ... 201
6.3 Improper Movement Revisited ... 203
6.4 Object Shift ... 204
6.5 Clitic Doubling in Bulgarian and English Resumptive Pronouns ... 205
6.6 Scrambling and Clitic Left-Cleft Dislocation in Czech DP? The Case of ten, ta, to a DET-CL-Hypothesis ... 207
6.6.1 Scrambling and Islands: In Czech and Elsewhere ... 208
6.6.2 Scrambling in Complex NPs and Escaping Islands ... 215
6.7 Summary ... 222

7 On the Causative/Anti-causative Alternation as Principle of Affix Ordering in the Light of the Mirror Principle, the Lexical Integrity Principle and the Distributed Morphology ... 223
7.1 Mirror Principle and Functional Hierarchies ... 224
7.1.1 Preliminary Observations and Comments ... 224
7.1.2 Affix Ordering in Morpho-Syntax and Semantic Classes ... 236
7.1.3 Higher Subjects: Benefactive or Experience Dative Subjects in Impersonals vs. Anti-causatives ... 241
7.2 On Merge, Mirrors, and Syntactic Derivation of Causatives and Anti-causatives ... 241
7.2.1 Recent Approaches on Derivation of Grammatical Categories: Theta Hierarchy and Argument Structure Hierarchy ... 241
7.2.2 Nominals and Passives ... 245
7.2.3 The Derivation of Adjectival Passives ... 251
7.2.4 Three Levels of Representation: a-Structure, Thematic Structure and Event Semantics ... 252
7.2.5 Putting the Pieces Together ... 258
7.3 Conclusion ... 260

8 Radical Minimalist Hypothesis and Early Grammars ... 263
8.1 Basic Notions and Assumptions on Phase Theory and Minimalism ... 263
8.2 Some Further Important Notions on Phase Theory ... 268
8.3 Merge and Features ... 270
8.4 Proposal: Radical Minimalist Hypothesis and Early Grammars ... 273
8.4.1 The Logical Problem of Language Acquisition and RMH ... 273
8.4.2 Word Order in Children's Language ... 292
8.5 Adult Grammar Derivations ... 296

Bibliography ... 301
Index ... 359
Introduction
This monograph is a collection of contributions of the author mainly concerned with the theoretical approach of Minimalism (Chomsky 1993, 1995, 1999, 2000, 2001, 2004, 2005, 2007, 2008, 2010, 2012, 2015, 2020) in its modified version as Radical Minimalism (Krivochen 2011a, 2011b, 2011c, Krivochen and Kosta 2013, Kosta and Krivochen 2014a, b, c, Krivochen 2018). The eight chapters focus on the division of labor between Narrow Syntax and the Meaningful Units of the sentence. The book focuses on the role of the Mental Lexicon (understood as a selection of Roots and Labels), the Labeling Mechanism, and the participation of the SM/CI interfaces within a Crash-proof Grammar of Human Language. The data are taken from languages of different genetic origin and type. The study is based on the idea that Language and Thought are closely connected and must be studied within the physical laws of Anti-Entropy and Dynamical Frustration theory (Krivochen 2018). As opposed to Krivochen (2018), we do not reject the idea of FLN, but we start out from a refined model different from the Minimalist Model in Chomsky (1993–2020, forthcoming). Since our goal is similar to that of Krivochen (2018), namely "to propose a neurocognitively implementable theory of language (in connection to the physical properties of cognitive systems)," we will try to show that "generative" must be understood as a "fully explicit" description of syntactic structures mapped into cognitive frames of thought. As Krivochen (2018) puts it, a theory of grammar must "account for the relations between Conceptual Structures (CS) (Taylor et al. 2007, 2010, Uriagereka 2012, Jackendoff 2002, Lobina 2014, 2017, among others) and linguistic [(i.e. morphosyntactic and phonetic/phonological)] structures, as well as how (and why) hierarchical, n-dimensional structures are 'flattened' for externalization as sound waves" (cf. Krivochen 2018). Moreover, we also require such a theory to establish reasonable limits for the time of computation, thus "limiting the properties of what is actually computable in a classical sense (e.g., objects that can be computed in exponential time are not computable in polynomial time), and revisiting it with experimental predictions about what can be computed in polynomial time within an interactive model of computations" (Krivochen 2018, and p.c.). To summarize this point, the sense in which we understand "generative" properly contains the conventional Chomskyan sense and complements it with a
"structure assignment in real-time" touch. Another innovative point in this book is how we treat the notions of Phases and Labels. While we introduce the notion of features in a more or less traditional Minimalist way in Chapters 2–7, the notions of Labels and Phases are specified in Chapter 2 in a more sophisticated way than, for example, in the conventional classical approaches (cf. Adger 2003: 69–70). We introduce the notion of the unspecified Root of a Syntactic Object (SO), which gets its categorial status only in a specific syntactic context (cf. Chapter 2 at large, under the notion of Occam's Razor, § 2.3., footnote 14). Even though some chapters represent articles with different topics published elsewhere, the collection has a certain logic and system. In each article, first, the Model of Radical Minimalism provides the theoretical basis, and, second, the relation between Sound and Meaning at different levels of grammar (namely Syntax, Semantics, Pragmatics, Morphology, and Phonology/Prosody) is at issue. Whereas Chapter 1 gives an introduction to a very different theory of Meaning than that known from traditional descriptive approaches, Chapter 2 is concerned with the relation between Thought and Language and critically reviews classical theories of Meaning from Plato, Aristotle, and Descartes up to the very new developments of the late Nineteenth and Twentieth Centuries, as discussed in detail in Gottlob Frege's (1892) concepts of "compositionality of sentential meaning" and "sense," and in Saul Kripke's criticism of the concept of proper names as descriptors. Chapter 3 is the central theoretical chapter of the book, introducing two fundamental principles of UG, one which operates at the level of Narrow Syntax (NS, Phase Impenetrability Condition, PIC) and the other which works at the level of Interpretation (Principle of Phase Interpretability, PPI). This chapter tries to argue for a modified theory of "Crash-proof" grammars and gives arguments based on selected syntactic phenomena in several Slavic and other languages. Chapter 4 is a revised version of an article published on lingbuzz in early 2014 concerned with secondary predication, the notions of depictives and resultatives, and also the stage- and individual-level predicates within the concept of small clauses. It also explains the distribution of Nominative, Accusative, and Instrumental Case in Russian secondary predicates. Chapter 5 applies the notion of Phase (cf. Citko 2014) and Label (cf. Adger 2013) in a field that has not yet been analyzed under the perspective of Radical Minimalism, namely the so-called heterogeneous vs. homogeneous assignment of Case to Cardinal Numerals in Russian and Polish. Chapter 6 is a summary of the notion of Islands, trying to give further arguments in favor of a Label-/Phase-based approach within the Theory of Radical Minimalism. Chapter 7 has been published in Zeitschrift für Slawistik 2015; 60 (4), 570–612 and is reprinted by
kind permission of the de Gruyter publisher in full version. Chapter 8 goes back to a talk given as an invited speaker at the International Biolinguistic Conference at the Université du Québec à Montréal (UQAM 2011) but has been enlarged and modified. It was originally to be published elsewhere, but that publication could not be realized, so the full version appears here. This last chapter is concerned with the fields of Biolinguistics, Neurolinguistics, and Language acquisition of normal and impaired speakers. It serves both as a further external argument and as empirical evidence in favor of a theory of Generative grammar, which relates Sound to Meaning within a Crash-proof Grammar of Radical Minimalism.
1 Third Factor "Relevance" between Semantics, Pragmatics, and Syntax*
Abstract: The research concerning the division of labor between semantics, pragmatics, and syntax as the three basic faculties of Natural Language has always been a primary concern of linguistics since its very beginning. There is an ongoing discussion about which domains of the Language Faculty in the Narrow Sense (FLN) can be language-specific for syntax, which concerns the distinction between functional and lexical categories, and which role the interfaces play. But little is known about the division of labor between semantics and pragmatics. In my contribution, I shall try to find out which parts of pragmatics could be included in FLN. If pragmatics can be considered as a relation between the language system, language as a performance of the system, the language user, and the way he/she refers to the world he/she is manipulating, how can we account for pragmatics as a potential third factor including the pragmatic behavior of humans? We believe that this can be done if we account for a principle that is superior to both FLN and human behavior, namely the relevance principle. Can a particular module based on relevance be a part of the language faculty in the narrow sense (cf. Hauser, Chomsky, and Fitch 2002)?
Keywords: Relevance Theory, Quantum Mind Hypothesis, Radical Minimalism, Pragmatics, Reference
1.1 The Two Types of Meaning
1.1.1 Proposition, Truth Values, and "Meaning"
Semantics, as the domain of natural languages, has to do with the relation between signs and things or objects. This view is ancient. It is found in Plato's Kratylos (427–347 BC): Words "name" or "refer to" things. It works well for proper nouns like Erika, Iuventus, and Daimler Mercedes Benz. It is less clear when applied to abstractions, to verbs, and to adjectives – indeed wherever there
* Chapter 1 has already been published in Czech translation as Peter Kosta (2015), "Třetí faktor 'relevance' mezi sémantikou, pragmatikou a syntaxí", in: Čítanka textů z kognitivní lingvistiky II, eds. Božena Bednaříková & Monika Pittnerová, Univerzita Palackého v Olomouci, Filozofická fakulta, Olomouc 2015, 57–76. I thank Ing. Aleš Prstek, Director of Palacký University Press, for the consent he has given the publisher Peter Lang to print the original English version in this book as Chapter 1.
is no immediately existing referent (thing) in the physical world to correspond to the symbol (word). An abstract notion such as love can be interpreted only if we have a concept of the sign love. As an abstract notion, it remains with no more meaning than the proposition in (1):
(1) The present King of France is bald
Under the premise that this proposition would refer to a non-existent King because France is not a Monarchy anymore, the whole proposition is false. France is presently a republic and therefore has no king. Bertrand Russell pointed out that this raises a puzzle about the truth value of the sentence “The present King of France is bald.” The sentence does not seem to be true: If we consider all the bald things, the present King of France is not among them, since there is no present King of France. But if it is false, then one would expect that the negation of this statement, that is, “It is not the case that the present King of France is bald,” or its logical equivalent, “The present King of France is not bald,” is true. But this sentence does not seem to be true either: the present King of France is no more among the things that fail to be bald than among the bald things. We, therefore, seem to have a violation of the Law of Excluded Middle. Is it meaningless, then? One might suppose so (and some philosophers have; see below) since “the present King of France” certainly does fail to refer. But on the other hand, the sentence “The present King of France is bald” (as well as its negation) seems perfectly intelligible, suggesting that “the Present King of France” cannot be meaningless.
1.1.2 Russell's Definite Descriptions
Russell (1905) proposed to resolve this puzzle via his theory of descriptions. A definite description like "the present King of France," he suggested, is not a referring expression, as we might naively suppose, but rather an "incomplete symbol" that introduces quantificational structure into sentences in which it occurs. The sentence (1), "The present King of France is bald," for example, is analyzed as a conjunction of the following three quantified statements:
(2) There is an x such that x is presently King of France: ∃x[PKoF(x)] (using 'PKoF' for 'presently King of France')
(3) For any x and y, if x is presently King of France and y is presently King of France, then x=y (i.e., there is at most one thing which is presently King of France): ∀x∀y[[PKoF(x) & PKoF(y)] → y=x]
(4) For every x that is presently King of France, x is bald: ∀x[PKoF(x) → B(x)]
More briefly put, the claim is that “The present King of France is bald” says that some x is such that x is presently King of France and that any y is presently King of France only if y = x, and that x is bald: (5) ∃x[PKoF(x) & ∀y[PKoF(y) → y=x] & B(x)]
This is false since it is not the case that some x is presently King of France. The negation of this sentence, that is, “The present King of France is not bald,” is ambiguous. It could mean one of two things, depending on where we place the negation “not.” On one reading, it could mean that there is no one who is presently King of France and bald: (6) ~∃x[PKoF(x) & ∀y[PKoF(y) → y=x] & B(x)]
On this disambiguation, the sentence is true (since there is indeed no x that is presently King of France). On a second reading, the negation could be construed as attaching directly to “bald,” so that the sentence means that there is presently a King of France, but that this King fails to be bald: (7) ∃x[PKoF(x) & ∀y[PKoF(y) → y=x] & ~B(x)]
In this disambiguation, the sentence is false (since no x is presently King of France). Thus, whether “the present King of France is not bald” is true or false depends on how it is interpreted at the level of logical form: If the negation is construed as taking wide scope (as in ~∃x[PKoF(x) & ∀y[PKoF(y) → y=x] & B(x)]), it is true, whereas if the negation is construed as taking narrow scope (with the existential quantifier taking wide scope, as in ∃x[PKoF(x) & ∀y[PKoF(y) → y=x] & ~B(x)]), it is false. In neither case does it lack truth value. So we do not have a failure of the Law of Excluded Middle: “the present King of France is bald” (i.e., ∃x[PKoF(x) & ∀y[PKoF(y) → y=x] & B(x)]) is false, because there is no present King of France. The negation of this statement is the one in which “not” takes wide scope: ~∃x[PKoF(x) & ∀y[PKoF(y) → y=x] & B(x)]. This statement is true because there does not exist anything which is presently King of France (cf. also a detailed analysis with more examples in Zimmermann, Sternefeld 2013: 205–231, Chapter 9: Presuppositions).
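For concreteness, the Russellian truth conditions in (5)–(7) can be checked against a toy model. The following sketch is purely illustrative: the three-element domain, the predicate extensions, and the helper names are assumptions of the example, not part of Russell's analysis.

# Toy model-theoretic check of (5)-(7). Domain and extensions are assumptions.
domain = ["a", "b", "c"]

def PKoF(x):
    # "x is presently King of France" -- true of nothing in this model
    return False

def B(x):
    # "x is bald" -- arbitrary toy extension
    return x == "a"

def russellian(f, g):
    """(5): some x is f, nothing other than x is f, and x is g."""
    return any(f(x)
               and all(not f(y) or y == x for y in domain)
               and g(x)
               for x in domain)

print(russellian(PKoF, B))                   # False: (5) fails, there is no such king
print(not russellian(PKoF, B))               # True:  (6), negation with wide scope
print(russellian(PKoF, lambda x: not B(x)))  # False: (7), negation with narrow scope

The three printed values reproduce the verdicts in the text: the affirmative sentence and the narrow-scope negation are false, while the wide-scope negation is true, so no truth-value gap arises.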
1.2 Alternative Analyses
1.2.1 Generalized Quantifier Analysis
Stephen Neale (1990), among others, has defended Russell's theory and incorporated it into the theory of generalized quantifiers. In this view, "the" is a
quantificational determiner like "some," "every," "most," etc. The definite article "the" has the following denotation (using lambda notation):
(8) λf.λg.[∃x(f(x)=1 & ∀y(f(y)=1 → y=x) & g(x)=1)]
That is, the definite article "the" denotes a function which takes a pair of properties f and g to truth just in case there exists something that has the property f, only one thing has the property f, and that thing also has the property g. Given the denotation of the predicates "present King of France" (again PKoF for short) and "bald" (B for short):
(9) a. λx.[PKoF(x)]
    b. λx.[B(x)]
We then get the Russellian truth conditions via two steps of function application: “The present King of France is bald” is true just in case ∃x[PKoF(x) & ∀y[PKoF(y) → y=x] & B(x)]. In this view, definite descriptions like “the present King of France” do have a denotation (specifically, definite descriptions denote a function from properties to truth values—they are in that sense not syncategorematic, or “incomplete symbols”). Still, the view retains the essentials of the Russellian analysis, yielding precisely the truth conditions Russell argued for.
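The generalized-quantifier denotation in (8) can also be rendered as a curried function over characteristic functions; the toy domain and predicate extensions in the following sketch are, again, illustrative assumptions rather than part of the analysis.

# "the" in (8) as a curried function: it takes a property f and returns a
# function over properties g. Domain and predicates are toy assumptions.
domain = ["a", "b", "c"]

def the(f):
    def takes_g(g):
        return any(f(x)
                   and all(not f(y) or y == x for y in domain)
                   and g(x)
                   for x in domain)
    return takes_g

PKoF = lambda x: False       # "presently King of France": true of nothing
B    = lambda x: x == "a"    # "bald": toy extension

# Two steps of function application, as in the text:
print(the(PKoF)(B))          # False -- exactly the Russellian truth conditions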
1.2.2 Fregean Analysis
The Fregean analysis of definite descriptions, implicit in the work of Frege and later defended by Strawson (1950), among others, represents the primary alternative to the Russellian theory. On the Fregean analysis, definite descriptions are construed as referring expressions rather than quantificational expressions. Existence and uniqueness are understood as a presupposition of a sentence containing a definite description, rather than part of the content asserted by such a sentence. The sentence "The present King of France is bald," for example, is not used to claim that there exists a unique present King of France who is "bald"; instead, that there is a unique present King of France is part of what this sentence presupposes, and what it says is that this individual is "bald." If the presupposition fails, the definite description fails to refer, and the sentence as a whole fails to express a proposition. The Fregean view is thus committed to the kind of truth-value gaps (and failures of the Law of Excluded Middle) that the Russellian analysis is designed to avoid. Since there is presently no King of France, the sentence "The present King of France is bald" fails to express a proposition, and therefore fails to have a truth value, as does its negation, "The present King of France is not bald." The
Fregean will account for the fact that these sentences are nevertheless meaningful by relying on speakers' knowledge of the conditions under which either of these sentences could be used to express a true proposition. The Fregean can also hold on to a restricted version of the Law of Excluded Middle: for any sentence whose presuppositions are met (and thus expresses a proposition), either that sentence or its negation is true. On the Fregean view, the definite article "the" has the following denotation (using lambda notation):
(10) λf: ∃x(f(x)=1 & ∀y(f(y)=1 → y=x)). [the unique y such that f(y)=1]
That is, "the" denotes a function which takes a property f and yields the unique object y such that y has the property f, if there is such a y, and is undefined otherwise. The presuppositional character of the existence and uniqueness conditions is here reflected in the fact that the definite article denotes a partial function on the set of properties: it is only defined for those properties f which are true of exactly one object. It is thus undefined on the denotation of the predicate "presently King of France," since the property of "presently being King of France" is true of no object; it is similarly undefined on the denotation of the predicate "Senator of the US," since the property of being a US Senator is true of more than one object.
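The partiality of the Fregean denotation in (10) can be made concrete in a sketch in which a failed presupposition yields no truth value at all; the domain and the predicate extensions are, once more, illustrative assumptions.

# Fregean "the" as a partial function: defined only if exactly one individual
# satisfies f; otherwise the description fails to refer and the sentence
# receives no truth value (None). Domain and predicates are toy assumptions.
domain = ["a", "b", "c"]

def the(f):
    witnesses = [x for x in domain if f(x)]
    return witnesses[0] if len(witnesses) == 1 else None

def sentence(g, f):
    referent = the(f)
    if referent is None:
        return None            # presupposition failure: truth-value gap
    return g(referent)

PKoF    = lambda x: False               # true of no individual
senator = lambda x: x in ("a", "b")     # true of more than one individual
B       = lambda x: x == "a"

print(sentence(B, PKoF))     # None: existence presupposition fails
print(sentence(B, senator))  # None: uniqueness presupposition fails
print(sentence(B, B))        # True: defined case -- the unique bald thing is bald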
1.2.3 Mathematical Logic
In much more formal work, authors use a definite description operator symbolized using ιx. The operator is usually defined so as to reflect a Russellian analysis of descriptions (though other authors, especially in linguistics, use the ι operator with a Fregean semantics). Thus
(11) ιx (ϕx)
means "the unique x such that ϕx," and
(12) ψ(ιx(ϕx))
is stipulated to be equivalent to "There is exactly one ϕ and it has the property ψ":
(13) ∃x(∀y(ϕ(y) ⇔ y = x) & ψ(x))
Under a pragmatic analysis of the proposition (1), however, the utterance becomes true (and not false) if we include the notion of the deictic center. The pragmatic notion of deixis (going back to Bühler’s 1934 Sprachtheorie) is a very good tool to combine the Fregean notion of truth value and reference with the pragmatic point of view of the Speaker: Since the Fregean view is committed to
the kind of truth-value gaps we observe in a sentence such as (1) The present King of France is bald, the Fregean will account for the fact that these sentences are nevertheless meaningful by relying on speakers' knowledge of the conditions under which either of these sentences could be used to express a true proposition. The Fregean view can thus hold on to a restricted version of the Law of Excluded Middle: For any sentence whose presuppositions are met (and thus expresses a proposition), either that sentence or its negation is true. If we consider the notion of the deictic center as a corrective of the truth values of sentences, then the Speaker can make a true utterance if the King of France refers to a known concept of a French king whose attribute was "bald." Moreover, if this attribute enters a name, it can even pick out an individual king: "the Bald" (French: le Chauve) became the byname of Charles II (823–877), who was King of West Francia from 843 to 877. We can thus say that the propositional meaning of a sentence has to include the speakers' knowledge of the truth conditions under which either of these sentences could be used to express a true proposition. This is also the reason why a context-free interpretation of propositions is limited only to a minimal number of propositions, maybe only to descriptions. There is indirect evidence from the Ontogeny of Language that this Fregean viewpoint, enriched with the notion of the deictic center, is on the right track. How do children develop concepts, and how does the comprehension of concepts mature? The interpretation of propositional meanings has to be conceived as a function of the stage of language acquisition in the individual. If at the initial state of language acquisition the lexical array of the mental lexicon consists of roots that have only a generic meaning, it would be the use of the mental lexicon that enables children to arrive at the concepts of adults step by step. This is exactly what we want to call procedural meaning (cf. §§ 1.4 and 1.5 of this chapter). Before we develop a theory of relevance that enables us to enrich generic meanings of roots of the mental lexicon with concepts of the target grammar, we need to introduce and repeat what we know about the interaction between meaning and pragmatics within the classical speech act theory (SAT).
1.3 About "Meaning" and Illocution
In the past four decades, the SAT has been one of the most influential, most widespread, and most vividly developing theoretical frameworks in linguistics. Originating in analytic philosophy, the SAT has been reflected in most of the linguistic disciplines. The core topic of the SAT, "how we do things with words," that
is, how people act by means of language or what it means to use language, can be identified with the investigation of human communicative competence, the competent (successful) use of language. In this viewpoint, the SAT deals with language/speech universals and with universals of human behavior. Even though several different speech act (SA) classification systems with different levels of comprehensiveness are known, certain classes of speech acts, namely assertions, promises, and requests/directives (under various other labels), can be found in any of them. Therefore, a question arises whether any language specifics (e.g., specific features of typologically related languages) can influence the performance of a particular speech act of a specific type. In other words, the crucial question concerned here can be formulated like this: Is there anything specific about speech act performance in Slavic and other languages? Habermas (1971, 1981) calls such a theory universal pragmatics. It should be emphasized, however, that Habermas' concept of "communicative competence," in contrast to Chomsky's narrower term "linguistic competence," describes general structures of possible speech situations as its object, and that it is "the object of this theory to re-design this system of rules by which we produce or generate situations of possible speech acts" (Habermas 1971: 102; cf. Brekle 1972: 127). Austin's (1962) primary formulation of SAT is based on a distinction between what he calls an utterance used for "stating" (describing) things, for conveying information (which, therefore, can be evaluated as true or false), and the "performative," an utterance used for "doing" things, for performing actions. The phrases "I now pronounce you man and wife," when uttered by a priest or mayor at a wedding, "I name/christen this ship Queen Elizabeth," "I promise I'll be there," and "I bet you five dollars" convey no descriptive information and are therefore neither true nor false; they simply perform actions simultaneously with the point of speech while they are pronounced: They perform the action referred to in the phrase (marrying, christening, promising, betting) by saying it. They can be evaluated as felicitous (successfully performing the act of marrying, christening, etc.) or infelicitous if the speech act does not fulfill all the necessary conditions, for example, if the person performing the act of marriage is not a person entitled to do so, or if somebody utters the formula of promise without the intention to accomplish the promised action, etc. Performative speech acts are utterances that are assigned an illocutionary force, a meaning given to them by the speaker's (producer's) intention (illocutionary point). The speaker's intention is realized in the performance of a certain utterance by means of it. A very important distinction is, further, the dichotomy made by the follower of Austin, John Searle. Searle's topic in the field of SAT is the notion of direct vs. indirect speech acts, and the explanation of the indirect speech acts mechanism
(cf. Searle 1969, 1975). His main point is that a Speaker performs a primary speech act (the act which is meant) by means of a secondary speech act (the act which is said). A sentence that contains the illocutionary force indicators for one kind of illocutionary act (IA) can be uttered to perform another type of illocutionary act, in addition to the "literal" illocutionary act, for example, in a pair of sentences (cf. Searle 1975: 61), cf. (14).
(14) A: Let's go to the movies tonight.
     B: I have to study for an exam.
The reaction B constitutes a rejection of the proposal A (primary illocutionary act), but in virtue of its meaning, it is still a statement (secondary illocutionary act). The secondary IA is literal, and the primary IA is not literal. Also, in some cases, the Speaker may utter a sentence and mean what he says and, at the same time, mean another illocutionary act with a different propositional content (which is the case of the well-known question "Can you reach the salt?" meant as a request). The performance and interpretation of indirect speech acts are based on asserting or questioning various felicity conditions of the speech act performed indirectly. The Speaker of an indirect speech act relies on the background information shared with the Hearer and on the Hearer's ability to make inferences. In the later investigation of the broad field of "indirectness," an important role belongs to principles of conversation (Grice 1975) and the theory of relevance (Sperber and Wilson 1986; cf. Hirschová 2009). The assumption that the literal referential meaning of a proposition and its meaning in a concrete context as an utterance (e.g., in a deictic situation) must be interpreted in two different ways, referring to two different situations and verbalized as two different speech acts – a primary speech act with the intended meaning of the proposition (say illocution A) and a secondary speech act with the literal propositional meaning (B) – can be illustrated with the following German sentence, uttered in the context of the "Oktoberfest" in a Munich "Biergarten" as a directive speech act but realized as a neutral assertive propositional act, cf. (15).
(15) Herr Ober, ich bekam ein Bier “Mr. waiter, I have received a beer.”
The literal lexical meaning of the verbal predicate bekommen "to receive" would be a suggestion or a statement or an assertion (B) about something that has happened, so a declarative speech act, the result being that the speaker has received a beer and should be content. But the illocutionary force (A) of the primary speech act can only be interpreted as a reminder or as insisting on getting some beer after having waited for so long without anything happening – so quite the opposite meaning,
inferring an illocutionary meaning of a directive speech act: "please, bring me a beer!" The interpretation (A) that feeds into the literal meaning of the proposition (B) results in a conversational implicature (A (B)) → A in the sense of Kosta (2011b), deriving the following steps: the speaker is not content at all because he did not get any beer, so he is asking again until he gets a beer. To be able to interpret the literal meaning of the proposition not as a pure statement (B) about something that happened but, on the contrary, as about something that did not happen but finally has to happen, we need the context and the situation in which the proposition has been uttered as an utterance and as a speech act. After this short overview and recapitulation of the problem of literal and inferred meaning, we will try to implement some ideas on how to account for the problem from the viewpoint of the theory of relevance (Sperber and Wilson 1986b) within an account of language design and neurological optimization (Krivochen 2012a, b).
1.4 How Do Syntax, Semantics, and Pragmatics Interact?
Assuming that FLN is a "mental organ" (a module) with a biological basis and has, therefore, the general properties of other physiological systems, then, according to Chomsky, we need to look for those factors that come into play in the development of this faculty within the species. These factors are (Chomsky 2005a: 6):
1) The genetic endowment, the initial genotypic state of the faculty, which determines the limits of variation.
2) Experience, which enables the initial state (conceived as an acquisition device, in any given module) and leads to the final phenotypic state, one of the possibilities licensed by the first factor.
3) Principles not specific to a faculty, including:
a) Principles of external data analysis
b) Principles of computational efficiency and architectural constraints related to the development of systems.
We believe there is a close relation between third-factor requirements, architectural constraints that affect the very basis of the systems, and the principles of relevance, which would strengthen the hypothesis that Relevance Theory (RT) is an internalist theory which works at a subpersonal level to provide principled explanations of the functioning of the inferential module. It is true that most applications of RT have to do with the area of Pragmatics, but this is not an obligatory thing to do, as Leonetti and Escandell (2011) say,
"(...) procedural meaning is a class of encoded linguistic meaning that plays a decisive role in triggering pragmatic inference, but it is not itself a part of any sort of pragmatic rules or routines (...)"
If our derivational model generates a Procedural-Conceptual dynamic (as has been made explicit in other works, mainly Krivochen, 2012b), it is not because there is a stipulation, but because our syntax is blind and unbounded, and constraints are third-factor principles. As there has been no clarification regarding the nature of these principles in Generative Grammar, we think a biologically based, computationally explicit version of RT can provide significant insight into the operations that take place at C-I. Let us expand on the aforementioned derivational dynamics. We take roots to have a conceptually "nominal" nature. N is the most basic conceptual category, the non-relational element, and the conceptual terminal that does not decompose (Jackendoff's 1983 [THING]).
(16) N + P = Adj / Adv / V (see Hale and Keyser 1993 et seq.)
(17) {cause Ø, {event Ø, {location Ø, √}}} = copy of the root's corresponding p-signature in PF
Complex categories are formed with √ + several procedural nodes that cumulatively influence (and determine the interpretation of) the underspecified conceptual content conveyed by the root (see Krivochen 2012b, Chapter 2). We first have an entity (N); then, it is located in space in relation to other entities (P, establishing a relation of central or terminal coincidence between Figure and Ground). Only after having undergone these stages can we conceive events, first uncaused, then caused, as the latter are more complex. The order of the bottom-up syntactic (linguistic) derivation is by default the order of purely conceptual hierarchical representations, built in C-I:
(18) {cause, {event, {location {thing, {location, thing}}}}}
Using traditional labels, this would be:
(19) [vP Ø [VP Ø [PP [DP] [P’ [P][DP]]]]]
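For illustration only, the hierarchy in (18)/(19) can be encoded as nested tuples; the following sketch is ours, and the merge helper, the tuple encoding, and the rendering of the node labels are assumptions of the example rather than part of the formalism.

# Toy encoding of (18): a non-relational entity (thing/N) is related to a
# Ground by a location node (P), then embedded under event and cause nodes.
def merge(label, *daughters):
    return (label, *daughters)

figure  = merge("thing")                       # N: non-relational entity
ground  = merge("thing")
located = merge("location", figure, ground)    # P: Figure-Ground relation
event   = merge("event", located)              # uncaused event layer
caused  = merge("cause", event)                # caused event layer

print(caused)
# ('cause', ('event', ('location', ('thing',), ('thing',))))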
Does this imply a contradiction with our earlier claim that the C-I – syntax interface is not transparent? The mirror instantiation is the simplest option, the first to be considered if we take the First Principle of Relevance to be valid. Other orderings are later-accessed options, nonetheless available for the system. Ontogenetically, nouns are acquired first, and the holophrastic period in language acquisition is largely (if not entirely) based on Ns. Nothing prevents using the naturalistic methodology in this research, and so our version of Relevance Theory can become a perfect complement to the
generative model, traditionally focused on the computational system. What is more, we believe that the very formulation of the Principles of Relevance legitimates this possibility. The First Principle of Relevance, which makes a strong claim with respect to the role of optimization of computations in the mental modules (without specifying a particular one; note that it says "human cognition," not "this or that faculty"), would correspond to factor (3b), principles of economy not specific to a faculty, which come to determine the nature of the acquired language (Chomsky 2005b: 6). Its formulation is as follows:
Cognitive Principle of Relevance: Human cognition tends to be geared to the maximization of relevance.
This is a compelling claim on principles of economy, since Relevance is defined as a cost-benefit relation. In any mental process, then, there will be Relevance if the benefit is higher than the cost: in our terms, if the interface effects justify extra steps in the derivation or extra elements in representations. Notice that we integrate and explain motivations for, for example, Movement (understood as Remerge) without resorting to features or other stipulations, but only to third-factor principles. The Second Principle of Relevance is formulated as follows:
Second Principle of Relevance: Every ostensive stimulus carries the presumption of optimal relevance.
This principle corresponds, we believe, with factor (3a), since it is a principle that involves an assumption about external data, be it linguistic or not. In deciding between different possibilities at a given interface, Relevance Theory is guided by the following principles, defined in Carston (1998):
“a) Set up all the possibilities, compare them and choose the best one(s) (according to some criterion/a) b) Select an initial hypothesis, test it to see if it meets some criterion/a; if it does, accept it and stop there; if it doesn’t, select the next hypothesis and see if it meets the criterion, and so on.”
These claims work extremely well as the formulation of the constraints of a Quantum Mind (see the sketch below), with some comments: notice that principle (b) works in a DC (a digital, serial computer), but not in a QC (a quantum computer): we do not need to proceed linearly, since a QC can compute many possible states of the physical system at once. We can improve the explanatory adequacy of Relevance Theory by enriching it with Radically Minimalist assumptions and get – as a result – a comprehensive model of the interaction between the syntactic workspace and the interfaces, whatever they are (since, as the reader must have noticed, there is no substantive claim regarding units or levels of representation in Relevance Theory).
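The contrast between Carston's strategies (a) and (b) can be made explicit with a small sketch; the candidate readings, the scoring function, and the threshold below are hypothetical and serve only to show how exhaustive comparison and serial satisficing can select different outputs.

# (a) exhaustive comparison vs. (b) serial testing against a criterion.
# Candidates, scores, and the threshold are hypothetical.
candidates = ["reading-1", "reading-2", "reading-3"]   # in accessibility order
score      = {"reading-1": 0.4, "reading-2": 0.7, "reading-3": 0.95}
THRESHOLD  = 0.6

def strategy_a(cands):
    """(a) Set up all possibilities, compare them, choose the best one."""
    return max(cands, key=lambda c: score[c])

def strategy_b(cands):
    """(b) Test hypotheses one by one; accept the first that meets the criterion."""
    for c in cands:
        if score[c] >= THRESHOLD:
            return c
    return None

print(strategy_a(candidates))   # 'reading-3': global comparison over all candidates
print(strategy_b(candidates))   # 'reading-2': first candidate satisfying the criterion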
Tab. 1: Quantum Mind and FLN. [Schema: Feed (roots/semantic primitives), constrained by the Conservation Principle, enters the Type-Array; the Type-Array feeds the dynamic workspace, where the GEN function applies; from the workspace, Analyze and Transfer relate the derivation to the C-I interface, and Analyze and Transfer likewise relate it to the S-M interface.]
The model we are defending is given in Tab. 1 above (cf. also Chapter 2). In such a model, the optimal scenario, and the one we have in mind, is that all operations are interface-driven and, thus, ruled by our formalized, biologically oriented version of Relevance Principles.
1.5 The Acquisition of Meaning
In the process of language acquisition, the child is not only able to "learn" how syntactic units and rules work (such as how the joining of simple lexical units into complex constituents and sets works, and which units can merge and which cannot); the child also acquires "meanings" (in the sense of Frege's term "sense," i.e., de Saussure's signifié). Forms without meanings are not only of little communicative value and thus non-relevant for comprehension; they also remain permanently without a concept, that is, the objects do not allow for concept formation when learning the meaning fails (e.g., this is the case in semantic impairments such as Wernicke's aphasia, that is, semantic disturbances tied to a specific brain region). Unlike syntax acquisition, which is accomplished at a specific time and whose development is relatively independent of the ontogeny of general cognitive structures, the acquisition of meanings represents a kind of never-ending process (since it is never finished even in the mental lexicon of adults, where new words are learned and meanings are modified in the semantic part of the lexicon), and it is apparently based on the interaction of various subsystems of cognition. The development of the semantic component requires basic cognitive structures
and processes that fall within the scope of perceptual and conceptual structure formation. The mental lexicon is part of our working (i.e., short-term and long-term) memory where the conceptual knowledge of all words in a language LX is stored (cf. Baddeley, Papagno, and Vallar 1988, Baddeley and Wilson 1986, Baddeley et al. 1988, Baddeley 1992). The basic elements of this subsystem are the mental lexicon entries, those lexical items, in which all phonological, syntactic, and semantic information of words are related to each other and stored. The interaction of linguistic representational units is expressed in the following scheme:
(20) LEX = {phon_le, syn_le, sem_le}
The index le indicates that these are abstract lexical representations of mental units, i.e., lexemes, which contain the morpho-phonological variants of each word as well as the idealized set of semantic features. The lexicon is thus the intersection, the actual interface, of more or less formal and conceptual structure formation. In the language acquisition process, the child must now first "learn" how to "acquire" the representational specification of the individual units of information (i.e., phon, syn, and sem); second, it must learn to recognize the relationships between these entities and learn how to "manipulate" them (e.g., the coupling between semantic and phonological representation, or between the conceptual-intentional system (CI) and the sensory-motor system (SM)). It must also develop the ability to relate situational demands to words and their environments; for example, the child must grasp the complex language-world relational structure and context-specific strategies of appropriate reference (see sections 1.1 and 1.2 of this chapter).
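For concreteness, a lexical entry in the sense of (20) can be sketched as a simple record bundling phonological, syntactic, and semantic information; the field names and the sample entry below are illustrative assumptions, not claims about the actual content of the mental lexicon.

# Minimal sketch of a mental-lexicon entry as in (20): LEX = {phon_le, syn_le, sem_le}.
from dataclasses import dataclass

@dataclass(frozen=True)
class LexicalEntry:
    phon: str            # idealized (morpho-)phonological representation
    syn: tuple           # syntactic information (e.g., category, features)
    sem: frozenset       # idealized set of semantic features

lexicon = {
    "love": LexicalEntry(
        phon="/lʌv/",
        syn=("root",),                          # category fixed only in context
        sem=frozenset({"ABSTRACT", "EMOTION"}),
    )
}

print(lexicon["love"].sem)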
1.6 Ambiguity and the Quantum Mind

Our concept of meaning, and especially the notion of ambiguity or ambiguous forms, is a crucial part of our argumentation. We have shown that ambiguous sentences are a problem for a theory of mind that has to work in a very effective and economical manner in order to process the load of information that is stored in the mental lexicon and has to be processed in working memory (cf. Baddeley and Wilson 1988, Baddeley 1986, 1992, Baddeley et al. 1988). For the time being, the ideal scenario can probably be found in the Theory of the Quantum Mind that we will propose here, adopting the idea from Salmani
Nodoushan's (2008) proposal of the Quantum Human-Computer Hypothesis (QHCH). As exemplified in Krivochen (2011c), the "model of the human mind" can be compared with quantum computers. The effectiveness of processing a great deal of information within a very short time span leads us to a model of the Quantum Mind. This model is supported not only by conceptual necessity and parsimony but also by empirical evidence in support of a minimalist model of mind (cf. Krivochen 2011a, b). The idea suggests that the QHCH must nevertheless be completed with a more local theory of mental faculties, and that is where Radical Minimalism (Krivochen 2011a) comes into play. Krivochen (2011a, b, c) aims at building a stipulation-free theory of the quantum mind-brain under Radically Minimalist tenets. The author takes language to be a physical system that shares the basic properties of all the bodily systems, the only difference being the properties of the elements that are manipulated in the relevant system. The author also draws the reader's attention to the proposal that in the physical world of language the operations are taken to be very basic, universal, and straightforward, as are the constraints upon them, which are determined by the interaction with other systems, not by stipulative intra-theoretical filters. Krivochen (2011e) ends with the proposal of a Strong, Radically Minimalist Thesis (SRMT); the term "Quantum Linguistics" is proposed and elaborated on for the first time in that paper. Now, we can also apply the notions of meaning and form, and especially the problem of ambiguity, to the Quantum model. What looks like ambiguity in natural languages at the level of the clause is preferably a set of different propositions (for which we will use the predicate(argument) notation), generated separately, since ambiguity as such does not exist in natural language. Spell-Out (S-O) narrows down the way: there is only one materialized form, but many possible interpretations. Since the mind is Quantum, it can parse all the possibilities behind a single phonological form, which are generated by the different Merge positions of procedural elements like Neg(ation) or Q(uantifier). Spell-Out, that is, the materialization of syntactic nodes via phonological matrices, obeys certain patterns which arise in the history of a language (SVO, SOV, etc.) and which are mere epiphenomena, or let us call them arbitrary in the sense of de Saussure (1916). The 3-D models (Krivochen 2012a) can operate on different levels simultaneously, so there is no extra processing cost, which would only exist in a digital serial computer. In the derivation, procedural elements are E-Merged in the position(s) in which they generate the intended interpretation, regardless of S-O, which will depend on the inventory of available pieces of vocabulary (VI) in a given language L and on Spell-Out patterns, inductively acquired by the learner in a speech community.
[Tab. 2 schema: Form 1, Form 2, and Form 3 – three distinct structural representations differing in the Merge position of the procedural element ~ relative to f(x) – all pass through Spell-Out to one unique phonological result at the SM interface (PF), while each receives its own interpretation (interpretation 1, interpretation 2, interpretation 3) at the C-I interface.]
Tab. 2: S-M and C-I Interfaces and Spell-Out: One Single Linearized Spell-Out at SM-Interface (PF), but Potentially Infinite Set of Interpretations at the C-I-Interface (LF)
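The one-to-many relation schematized in Tab. 2 can be made concrete with a small illustrative sketch (the bracketed structures, the toy linearization pattern, and the labels are our own assumptions, not a claim about any particular language): several underlying forms, differing only in the Merge position of the procedural element Neg, are linearized onto one and the same phonological string, while each keeps its own C-I interpretation.

# Three underlying forms: Neg is merged in different positions relative to f(x).
# The nested tuples are toy stand-ins for real syntactic objects.
underlying_forms = {
    "Form 1": ("Neg", ("f", "x")),   # Neg scopes over the whole predication
    "Form 2": (("Neg", "f"), "x"),   # Neg merged with the predicate
    "Form 3": ("f", ("Neg", "x")),   # Neg merged with the argument
}

def flatten(structure):
    """Collect the vocabulary items contained in a structure."""
    if isinstance(structure, str):
        return {structure}
    items = set()
    for part in structure:
        items |= flatten(part)
    return items

def spell_out(structure, pattern=("Neg", "f", "x")):
    """Materialize only the vocabulary items, in a language-particular linear
    pattern; the different Merge positions are neutralized at SM (PF)."""
    items = flatten(structure)
    return " ".join(word for word in pattern if word in items)

for name, form in underlying_forms.items():
    # One and the same phonological string, but three distinct C-I readings.
    print(spell_out(form), "<-", name, "(its own interpretation at C-I)")

All three forms print the same string, which is the point of Tab. 2: the distinctions live in the generative workspace and at C-I, not at PF.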
As a provisional conclusion, we will put forth that, because of the linear character of the audible (i.e., externalized) linguistic sign (de Saussure 1916), we can derive that the SM interface is not Quantum (two simultaneous states, e.g., materializations, are not tolerated as outcomes of the system), and neither is the C-I component, since there is an unambiguous 1-to-1 relation between Form and Interpretation. This allows us to solve cases of apparent ambiguity fast enough and without any computational overload. Spelling out, or any Transfer for that matter, is like opening Schrödinger's box (cf. Schrödinger 1935). The visual faculty seems to have the very same impediment, as optical illusions show (e.g., in the well-known figure-ground illusion one can see either two faces or a vase, but not both at a time). In this case, there is no linearity but figure-ground dynamics, related to the physical impossibility of focusing eyesight on more than one object at a time (in turn determined by brain architecture, mainly the interaction between the prefrontal cortex and the temporal and parietal lobes). We would like to make a distinction that we consider essential when building a theory about the mind: the distinction between Generative systems and Interpretative systems (Krivochen 2011c: 93). This distinction is not only
terminological but has major consequences for the Theory of the QHC, since we will demonstrate that only certain systems allow elements in their quantum state (i.e., comprising all possible outcomes), which, following Schrödinger (1935), we will call the ψ-state.
a) Generative Systems: Generation equals Merge, a free, unbounded, blind operation that takes elements sharing either ontological or structural format1 and puts them together. Examples are the syntactic component, the arithmetical capacity, the musical ability, and the pre-syntactic instance of the CI.
b) Interpretative Systems: Interface systems; they have to read the structural configurations built up by generative networks. Examples are the SM and Relevance Theory's inferential module (the post-syntactic instance of the CI).
An essential difference is that, as Generative systems are blind to anything but format (see Boeckx 2010a for a similar view, but with a different aim), they can manipulate objects in their ψ-state and transfer them to the interface systems. A structural relation between an element in its ψ-state and a procedural element / logical unit with specific characteristics collapses the quantum state onto one of the possible outcomes. The interfaces can "peer into" the syntax (i.e., have access to the syntactic workspace) to make sure a syntactic object (or any symbolic representation, for that matter) is transferable: this is what we call the operation "Analyze." A typical derivation, then, would have three steps, which occur cyclically (following Krivochen 2012b: 6; 2011c: 94).
1 See Krivochen (2011a) for an analysis of both. A brief definition is the following: “(…) Ontological format refers to the nature of the entities involved. For example, Merge can apply (‘ergatively’, as nobody / nothing ‘applies Merge’ agentively) conceptual addresses (i.e., roots) because they are all linguistic instantiations of generic concepts. With ontological format, we want to acknowledge the fact that a root and a generic concept cannot merge, for example. It is specially useful if we want to explain in simple terms why Merge cannot apply cross-modularly: ontological format is part of the legibility conditions of individual modules. Structural format, on the other hand, refers to the way in which elements are organized. If what we have said so far is correct, then only binary-branched hierarchical structures are allowed in human mind. The arguments are conceptual rather than empirical, and we have already reviewed them: Merge optimally operates with the smallest non-trivial number of objects. Needless to say, given the fact that ontological format is a necessary condition for Merge to apply (principled because of interface conditions, whatever module we want to consider), the resultant structures will always consist on formally identical objects (…)”
a) Narrow Syntax: Merge {α, β[ψ]}. This Merger collapses the quantum dimension on β for interface purposes.
b) Conceptual-Intentional System: Label {α, {α, β[D]}}
c) C-I: Analyze: Is {α, {α, β[D]}} fully interpretable? That is, are all of its elements fully legible/usable by the relevant Interface?
These three steps are obligatory, but there is a fourth step that depends on the result of Analyze: if the result is affirmative, the structure is transferred to the module that has analyzed it, which performs the necessary modifications according to its legibility conditions (as stated in the Conservation Principle). The relevant conclusion, and a powerful generalization regarding the architecture of the mind, is the following:
Interpretative Generalization: Interpretative systems are not quantum, but generative workspaces are.
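As a purely illustrative sketch of this cycle (not a definitive implementation of Radical Minimalism; the data structures and the toy legibility test are our own assumptions), the steps can be written out procedurally:

def merge(alpha, beta):
    """Step (a), Narrow Syntax: free, blind Merge of two objects sharing format.
    Merging a procedural element with beta collapses beta's psi-state for
    interface purposes (modelled here simply by marking beta as determined)."""
    collapsed_beta = {"item": beta, "state": "D"}   # psi-state collapsed to one outcome
    return (alpha, collapsed_beta)

def label(syntactic_object):
    """Step (b): the object is labelled, here trivially with its first member."""
    alpha, _ = syntactic_object
    return {"label": alpha, "content": syntactic_object}

def analyze(labelled_object):
    """Step (c): the interface peers into the workspace and asks whether the
    object is fully interpretable (all elements legible by that interface)."""
    _, beta = labelled_object["content"]
    return beta["state"] == "D"          # toy legibility condition

def transfer(labelled_object):
    """Fourth, conditional step: hand the object over to the interpreting module."""
    return ("transferred", labelled_object["label"])

# One derivational cycle: Merge -> Label -> Analyze -> (Transfer if interpretable).
so = merge("Neg", "f(x)")
lo = label(so)
result = transfer(lo) if analyze(lo) else "kept in the generative workspace"
print(result)   # ('transferred', 'Neg')

The only design point the sketch encodes is the one stated in the Interpretative Generalization: the collapse happens on the way out of the generative workspace, never inside the interpretative systems.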
1.7 Conclusion

In this short outline, we have tried to show how meaning in natural languages can be acquired. The primary working hypothesis is the following: in this first chapter of the book, we tried to find out which parts of pragmatics could be included in FLN. If pragmatics can be considered a relation between the language system, language as the performance of that system, the language users, and the way they refer to the world they are manipulating, how can we account for pragmatics as a potential third factor including the pragmatic behavior of humans? We believe that this can be done if we assume a principle that is superior to both FLN and human behavior, namely the Relevance Principle. Can a particular cognitive module based on relevance be part of the language faculty in the narrow sense (cf. Hauser, Chomsky, and Fitch 2002)? And how can we imagine a model of natural language? In the first section, we discussed the logical problem of reference. In the second section, we tried to show how reference (i.e., Frege's meaning) functions and how it can be set in relation to communicative intention or illocution in a particular utterance. In the third section, we presented a model of how the different levels (syntax = syn_le, semantics = sem_le, and phonetics = phon_le) interact with one another with respect to meaning formation and pragmatics. Finally, we tried to sketch a model of procedural meaning formation based on Radical Minimalism, the Theory of the Quantum Mind, and Relevance Theory, bringing the production and processing of information, that is, the syntactic, semantic, and phonological components, to the interfaces so that they are able to communicate with each other. The relevant conclusion, and a powerful generalization regarding the architecture of the mind, was the so-called Interpretative Generalization.
2 The Language of Thought Hypothesis, Classes, and Relations

2.1 General Remarks

In our first chapter, we introduced the notions of propositional meaning, sense, and procedural meaning, the latter derived from the first two but also including inference, presuppositions, and conversational implicatures. The driving force of the mind to produce and comprehend meaningful symbols must be found not outside but inside the computation. If Language (L) is nothing less and nothing more than the (maybe not quite perfect) most economical system for relating sound to meaning, then we must define what Language is as compared to non-Language (Non-L). The comparison brings up the idea of looking at the famous and still unresolved relation between thought and L (language). Ever since Frege's contribution to a theory of meaning (cf. Frege 1892) and the application of his theory of truth conditions to a fragment of English (by Lewis, Montague, and Cresswell, cf. also Heim and Kratzer 1998: 2), it has been known that the combination of meaningful elements must be based on the notions of compositionality and truth values. The latter can – in a nontrivial way – explain the relation between the whole and the parts of the sentence meaning, between the saturated part of the sentence (the argument) and its unsaturated part (call it "function" or predication, cf. Heim and Kratzer op.cit.) with respect to its truth conditions, but the notion of compositionality is far from being explained and applied in a generative grammar approach (cf. Heim and Kratzer 1998). It is known that the theory of meaning is mainly based on a theory of the meaning of proper names. Frege was convinced that proper names are the most meaningful elements in a sentence/utterance. However, he did not approach the problem as a syntactician: for him, the problem was one of unsaturated or saturated meanings of the proposition (a notion of predicate logic), rather than a problem of syntax. In the first part of this chapter, the problem needs to be recast in terms of syntactic notions and relations, such as the subject of TP or the object of vP/VP, in order to be resolved. Consider the following German examples in (i) and (ii):
(i) Peter ist Professor der Universität Potsdam ('Peter is a professor of the University of Potsdam')
(ii) Angela Merkel ist die Bundeskanzlerin der BRD ('Angela Merkel is the Federal Chancellor of the FRG')
In these two examples, the first arguments, Peter in (i) and Angela Merkel in (ii), are valid arguments, saturated even without the second part of the sentence – a function in the terminology of Heim and Kratzer (1998: 3). Without the first
part, the argument realized as a proper name, sentences such as (i) and (ii) would not be comprehensible, because it would not be possible to assign truth values to them, irrespective of the truth conditions. In contrast to (i) and (ii), where the first argument in the sentence is a proper name, let us consider sentences in which we fill the argument positions with appellatives and put proper names in the position of the unsaturated part of the sentence, the predicate:
(1) Angela Merkel is currently (the / *a) Chancellor of the Federal Republic of Germany
(2) Peter is (*the / a) Professor of the University of Potsdam
(3) The Chancellor of the Federal Republic of Germany is currently Angela Merkel
(4) #A / the Chancellor of the Federal Republic of Germany is currently Angela Merkel
(5) *The Professor / *a professor at Potsdam University is Peter
(6) *Professor of the Potsdam University is Peter
In the sentences (1)–(6), both names (appellatives, in 1–2) and proper nouns (nomina propria; 3–6) appear in different syntactic functions as placeholders. However, it is not possible to exchange their places freely without consequences for acceptability. Thus, the degree of acceptance of propositions with regard to their truth values seems to depend to a considerable extent on the syntactic function of these elements, namely of the proper names, i.e., on whether they are true arguments or semi-predicates. Moreover, it seems that the acceptability of (3)–(6) decreases if the unsaturated part of the sentence contains only the proper name (thus, in 5–6 the acceptability is lower than in 4, marked here with a #), because the lower adverb currently modifies the proper name "Angela Merkel" as if the proper name were a true predicate. Why is the use of the indefinite article in (1) out or even ungrammatical; why is the use of the definite article "the" as a modifier of the noun professor in (2) not acceptable while the indefinite article is; why is the indefinite article excluded and the definite article preferred in (4); why are both articles ungrammatical in (5); and, finally, in what does the contrast between (2) and (6) concerning the items without a proper name consist? For all these deviations and preferred readings, I have a relatively simple answer and even an explanation. The winner is the candidate called the axiomatic notion of economy, and this seems to be confirmed by the empirical data as well. In all these and similar cases, in the most real sense, it is the parsimonious part of the computation – local economy and silent movement or base-generation in situ – that is at work, because LF (covert) movement and Merge are preferred over overt movement and internal Merge; these two principles of UG seem to correspond to
the Ockhamite principle. The explanation is as simple as it is evident at first glance. It goes back to two working hypotheses or assumptions, which we will discuss in more detail as we proceed and which we will also verify by appropriate logical arguments, empirical evidence, and logical inferences.
1. Axiom: Only real names (appellatives) may occur as predicates, converted from nouns. They are taken out of the subset of the lexical array as arguments of the kind N, and in the course of the derivation they are merged in the position of a PredP or VP, converting into true predicates (or secondary predicates). From this we infer Axiom 2.
2. Axiom: Proper names (in the sense of "properly" proper names) cannot appear as pure predicates (inside a PredP) because they are not part of the lexical categories (V or N).
As the first piece of evidence for assumption 1, we provided the examples (5) and (6), in which the proper name Peter occupies the syntactic position of a predicate noun (i.e., it stands in an unsaturated "function"/predication position). In all these and similar cases, the result is an ungrammatical output. I explain these deviations by the fact that the proper name Peter in (5) and (6), together with the copula BE, is the only element in the structure that could take over the function of a pure predicate. The ungrammaticality disappears as soon as a lexical, "meaningful" predicate replaces the copula + proper name. A semi-predicate can thus be saturated only by the individual variable professor, which is converted into a predicate and becomes part of a secondary predicate. We call this operation, in which an individual variable (professor) takes over the functional significance of a predicate and thus behaves syntactically as a real predicate, functional raising or semantic conversion.
(7) #The Professor of Potsdam University is Peter
(7') Peter is a/the professor of Potsdam University
(8) #The professor of the University of Potsdam is Peter
(8') A professor of the University of Potsdam is Peter
In (8'), the indefinite article "a" is only morphologically a determiner. Semantically and syntactically, it functions as a quantifier with the meaning "one of the professors out of the set of all professors of the University of Potsdam is Peter." Thus, in (8') the noun phrase is not a plain DP, [DP a professor], but is instead an instantiation of the referential meaning of a kind of subset of individual terms, i.e., [QP [DP a professor]], where the QP is the superset (Obermenge) and the DP is the subset (Unter-/Teilmenge) of the QP. In the logical notation of formal semantics, we can also use the symbolism for sets in the sense of Heim and Kratzer (1998/2012: 4–12).
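To make this explicit in the set notation just alluded to, the quantificational reading of (8') can be rendered roughly as follows (our own illustrative formulas, in the spirit of Heim and Kratzer 1998, not a quotation from that work):

[[professor of the University of Potsdam]] = {y : y is a professor of the University of Potsdam}
[[a professor of the University of Potsdam]] = λP.∃x [x ∈ {y : y is a professor of the University of Potsdam} ∧ P(x)]
[[(8')]] = 1 iff ∃x [x is a professor of the University of Potsdam ∧ x = Peter]

On this rendering, the QP contributes existential quantification over the superset of professors, while the equation with Peter supplies the subset reading paraphrased above.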
Examples (1) and (2) are grammatically and semantically completely unobtrusive because proper names (such as Peter and Angela Merkel) do not belong to the same set as real names. If they do not belong to the same set, they cannot have the same "properties," and the same (syntactic) function cannot determine them. The logic of the argumentation also runs in reverse: the function of a term determines its membership in a set. In other words, x and y can become part of a set A only if x and y have the same property P and fulfill the same function f; thus: Let A be the set whose elements are x, y, and z, and nothing else: A := {x, y, z}, where all elements of A share the property P and the function f.
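Stated as a general membership condition in the same notation (our own rendering of the condition just given, not a quotation):

∀x [x ∈ A ↔ (P(x) ∧ x fulfills the function f)]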
Predicates are descriptors or labels: they are unsaturated functions over arguments, which refer to specific events, activities, properties, or the like. Since proper names do not have properties (they are pure placeholders in the sentence without intensional or extensional meaning), they cannot become predicates. I see the resolution of the problem in the notion of referential instantiation. Only terms that are subject to referential instantiation in the sense of Landau, either as individual terms or as term variables, can occur both as predicates and as arguments. At the same time, pure proper names (such as Peter, Hercules, Angela, or Erika) are not semantically saturated by their intensional meaning (intension) or reference (extension) but gain their compositional meaning through their binding to an operator, which is itself evaluated within the scope of a possible-world function. Thus, if names are operators, they could behave like pronouns and anaphors, because they can be identified or semantically determined through their binding relation to their antecedents in discourse (D-linking), namely through referential indices, but not through their intensional or referential semantics. We assume that names are relational concepts whose intensional meaning (Frege's Sinn, 1892) is only obtained by reference to other elements, namely to the operators that they are coreferential with. Cf.:
(9) Angela Merkel1, the Bundeskanzlerin1, has called for calm after the attack in Paris.
(10) Angela Merkel is Chancellor
(11) The Chancellor is Angela Merkel
(12) The Chancellor of Germany is X
(13) X is the Chancellor of Germany
We still need an explanation for the contrast between (4) and (5), repeated here as (14) and (15) vs. (16) and (17). Where does the low acceptability of (16) and (17) (marked with # and *, respectively) come from, relative to the perfectly reasonable interpretation of (14) and (15)?
(14) The Chancellor of the Federal Republic of Germany is Angela Merkel
(15) Angela Merkel is the Chancellor of the Federal Republic of Germany
(16) #The Professor / #a professor at Potsdam University is Peter
(17) *Professor at Potsdam University is Peter
If we analyze the sentences (14) and (15) in more detail, we find that they have the same truth conditions in a situation (s). Only the DP in (14) (The Chancellor of the Federal Republic of Germany) can be used as a marking B2 in the sense of Frege, realized also as the nominalization of a clause (Satznominalisierung, in the sense of Frege's category B1), and can be rendered, in the sense of an existence statement, in a lambda notation in which it then constitutes the second-order predicate.
(18) Preparation of meanings in lambda notation
[[Angela Merkel]] = λxλs. x is Angela Merkel in s
[[is Chancellor]] = λxλs. x is Chancellor in s
[[Angela Merkel is Chancellor]] = λs. Angela Merkel is Chancellor in s
The sentences (14) and (15) have the same truth conditions because they apply to a situation (s) in which they are true, while in another situation (s') they are false. That is, if they are uttered in a situation prior to 2005 (the inauguration of Chancellor Angela Merkel), they have the logical value 0 (false). By contrast, sentences like (5/16) #The Professor / #a professor at Potsdam University is Peter or (6/17) *Professor of the Potsdam University is Peter have no truth value because they have no reference.
(19) #The Professor / #a professor at Potsdam University is the famous rock guitarist Peter
(20) *Professor of the Potsdam University is Peter
(21) #A certain professor at the University of Potsdam is Peter.
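A toy sketch of this situation-relative evaluation (the two situations and their contents are invented for illustration; the only point is that one and the same proposition receives the value 0 in one situation and 1 in another, while a sentence without reference receives no value at all):

# A proposition in the sense of (18): a function from situations to truth values.
angela_merkel_is_chancellor = lambda s: s["chancellor"] == "Angela Merkel"

# Two toy situations, one before and one after the 2005 inauguration.
s_2004 = {"year": 2004, "chancellor": "Gerhard Schroeder"}
s_2010 = {"year": 2010, "chancellor": "Angela Merkel"}

print(angela_merkel_is_chancellor(s_2004))   # False -> logical value 0
print(angela_merkel_is_chancellor(s_2010))   # True  -> logical value 1

# A sentence whose subject DP has no referent, such as (20), denotes no
# proposition at all in this toy model, so no truth value can be assigned.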
Why are we able to interpret sentences such as (16), with the proper name "Peter," better than (17) and (20)? By the simple operation of "adjustment to referential values": referential values are those parts of the proposition which can be characterized by various referential nouns or verbs adjacent to the proper name "Peter." Since referential nouns, adjectives, and prepositions are all subject to lambda conversion into true predicates, we are able to "refer" to the proper name as part of a predication. This is one way to get around the unacceptability of bare proper names in predicate positions (unsaturated positions). Another way to improve acceptability is to extend the proper name with specific features of the name bearer. Thus, (21) seems to be more acceptable if we extend the DP by quantification. The quantifier "certain," as part of the DP a certain professor, converts the name Peter by the deictic relation to its saturated part, and the local adverbial (PP) at
the University of Potsdam narrows the potential class of Peters down to a more definite class of potential referents. The rest is done by inference or implicature. The extension of the proper name of an individual (definite) Peter, determined by the marking function of its possible predicates (e.g., Professor, University of Potsdam, musician, guitarist, 64 years old, the man with the white hair, etc.), therefore shifts the proper name towards a referential noun. Shifters like these are certainly not an inherent property of the intensional meaning of a proper name, because the proper name "Peter" itself has no intensional meaning. Thus, the proper name seems to be insulated from the labeling function of its possible predicates. It is merely an indefinite term (similar to a generic pronoun like any) or a universal quantifier (ranging over all Peters in the world). In other words, proper names are designators in the sense of Kripke (cf. Morris 2006). In this chapter, we shall argue for a theory of compositionality based on syntactic and morphosyntactic computation, one which gives more importance to the interfaces, leaving Narrow Syntax with three significant operations: free and blind (internal and external) Merge, Labeling/Categorizing, and Multiple Spell-Out. We postulate that Merge is just external Merge and that there is a minimal set of lexical categories but quite a rich set of formal features (Labels) which determine and restrict the computational procedure, peering from the interfaces CI/SM back into syntax. This initial idea presupposes a theory of computation in which interpretation at LF (the CI-interface) and PF rules (the SM-interface) communicate with Spell-Out after Merge (post-syntactically) via a procedure called Analyze and Remerge. Furthermore, this theory presupposes an approach based on Crash-proof Grammars, which provides an incremental, step-by-step control system disallowing intermediate outputs of non-well-formed strings. Thus, all operations which reconstruct or repair post-syntactically are not licensed, which means that we strictly reject a post-syntactic Repair or Remove operation2.
2.2 Crash-Proof Grammars and Crash-Proof Syntax Theory

The principles of Crash-proof Syntax were first introduced by Frampton and Gutmann (1999, 2000, 2002). In this approach, the Crash-proof Model is viewed as a logical consequence of Local Economy (Chomsky 1995). The
2 On Remove Operation cf. Müller (2015) and Chapter 3 of this book; on Crash-proof Syntax cf. 2.2
goal of this program is to explain the notion of the optimal design of the Language Faculty. As compared to the Minimalist Program (Chomsky 1995) and its subsequent development (Strong Minimalist Hypothesis, Chomsky 2004a, b, 2005a, 2007, 2010, 2020), crash-proof syntax does not wait until the interfaces to meet FI; rather, repair or reconstruction is prevented by the possibility of Analyze and Remerge. So there is a link between Syntax and the Interfaces, allowing Analyze to peer backward and forward in order to avoid a crash at every single step of the derivation in Narrow Syntax. Thus, each derivational step is checked against the notion of the optimal solution, and in case it crashes, another candidate must be chosen (Frampton and Gutmann 2002: 90). The main ideas and principles of crash-proof syntax were developed by Frampton and Gutmann (1999, 2000, 2002). Later, the main concepts of crash-proof syntax were critically discussed, expanded, and partially empirically reviewed during and after the "Exploring Crash-Proof Grammars" conference at Carson-Newman College (2008)3. A brief comparison between classic minimalism and crash-proof grammars is given in the following table.
Minimalist program: The syntactic system of the language faculty is designed in such a way that it optimally fulfills the design conditions of the interface systems. Crashes are only eliminated before the interface levels.
Crash-proof syntax: An optimal derivation system (at least from a computational perspective) only generates well-formed objects that meet the interface requirements. Crashes are eliminated at every step of the derivation.
Putnam (2010: 5–8) distinguishes two types of crash: "strict crash" and "soft crash."
3 Presented in: Frampton, John and Gutmann, Sam (2002). Crash-Proof Syntax. In Samuel D. Epstein and Daniel Seely (eds.), Derivation and Explanation in the Minimalist Program. Oxford: Blackwell, 90–105; Putnam, Michael T. (ed.) (2010). Exploring Crash-Proof Grammars (Language Faculty and Beyond). Amsterdam, Philadelphia: John Benjamins. Researchers dealing with the topic include John Frampton, Sam Gutmann, Michael Putnam, Samuel Epstein, Daniel Seely, Hamid Oulali, Thomas S. Stroik, Dennis Ott, Vicky Carstens, and others.
• Definition of "strict crash" (or "fatal crash"): If a syntactic object α cannot be interpreted in some or all of its characteristics at the interface level, α cannot be used or read off at the interface level.
• Definition of "soft crash" (or "non-fatal crash"): If a syntactic object α has some features that cannot be interpreted, α is neither readable nor applicable at the interface level. However, if α can be connected to another local derivation unit that "repairs" the violation(s) of α and makes α interpretable, α is applicable and readable at the interface level (cf. also Putnam and Stroik 2011; Stroik 2009; Stroik and Putnam 2013).
"Computational efficiency" of crash-proof syntax:
• Crash-proof syntax aims to make every step in the construction of a sentence crash-proof, so that the sentence is always well formed.
• A syntax with crashes allows long derivations, but they break down at the interface level:
(1) * it to be believed Max to be happy
• The derivation of this sentence can be continued; however, it is not detected that the fatal error is already there. The error only becomes visible at the interface level:
(2) * It was expected to seem to be believed Max to be happy
• A crash-proof theory should immediately determine that (1) is ill-formed. A system that is not crash-proof is computationally inefficient (Frampton and Gutmann 2002: 93).
We will try to develop this idea in Chapter 4, sections 4.2 and 4.3, as we have already tried to do in our recent approach of Radical Minimalism (Krivochen and Kosta 2013).
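The contrast between checking only at the interfaces and checking at every derivational step can be sketched as follows; the feature checks and step names are toy stand-ins of our own, not Frampton and Gutmann's actual algorithm:

def is_locally_wellformed(step):
    """Toy legibility test applied to an intermediate derivational object."""
    return "uninterpretable" not in step["features"]

def crash_proof_derivation(steps):
    """Crash-proof syntax: every step is checked; an ill-formed candidate is
    rejected immediately, and another candidate must be chosen."""
    structure = []
    for step in steps:
        if not is_locally_wellformed(step):
            return ("rejected at step", step["name"])   # no long doomed derivation
        structure.append(step["name"])
    return ("converged", structure)

# Toy derivation modelled on (1): the expletive-infinitive combination carries
# an uninterpretable feature from early on.
steps = [
    {"name": "merge(believed, Max to be happy)", "features": ["interpretable"]},
    {"name": "merge(it, to be believed ...)", "features": ["uninterpretable"]},
    {"name": "merge(expected, ...)", "features": ["interpretable"]},
]

print(crash_proof_derivation(steps))
# A crash-rife system would instead keep building the whole of (2) and only
# then discover the fatal error at the interface level.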
2.3 On Thought and Language

The Russian psychologist Lev Vygotsky was one of the most influential thinkers to introduce the notion of inner speech. He uses this metaphor in order to describe a stage in language acquisition and in the process of thought. In Vygotsky's conception, "speech began as a social medium and became internalized as inner speech, that is, verbalized thought" (Katherine Nelson, Narratives from the Crib, 2006). Quite different with regard to its motivation, but very similar to the idea of inner thought, is the notion of I-language in Noam Chomsky's externalization hypothesis (cf. sections 2.4 and 2.5).
These ideas have recently been developed under the presumption that thinking and thought both are identical operations of the computing mind, and the Language of Thought is something like a pre-state or initial state of “mental language” not yet externalized or materialized. “The Language of Thought Hypothesis” (LOT) postulates that thought and thinking take place in a mental language. This language consists of a system of representations that is physically realized in the brain of thinkers and has a combinatorial syntax (and semantics) such that operations on representations are causally sensitive only to the syntactic properties of representations. According to LOT, thought is, roughly, the tokening of a representation that has a syntactic (constituent) structure with an appropriate semantics. Thinking thus consists in syntactic operations defined over such representations. Most of the arguments for LOT derive their strength from their ability to explain certain empirical phenomena like productivity and systematicity of thought and thinking” (cf. Aydede 2010). The problem of LOT4 is that it is purely axiomatic and beyond provides no clear theoretical framework to prevailing semantic or syntactic theories. Even though some of the ideas seem to be original and intuitively understandable, the “theory” as such is not a theory but proves at its best pure speculation. The biggest problem I see in it is the unclear “linguistic” and philosophical terminology (e.g., terms like “meaning” and “constituent structure”), and the limitation on propositional attitudes of the Mind. Before we present some new ideas on the relation between thought and language, and in particular, between the meaningful part of the internal, intensional, and individual language (call it I-grammar following Chomsky 2005a) and the world, we would like to recapitulate some basic facts in the philosophy of language.
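To make the core claim concrete, here is a toy sketch, entirely our own and not drawn from the LOT literature, of an operation that is causally sensitive only to the syntactic shape of a mental representation, never to what the representation is about:

# Mentalese representations as constituent structures (nested tuples).
belief = ("AND", ("RAIN",), ("COLD",))

def eliminate_conjunction(representation):
    """A thinking step keyed purely to syntactic form: if the representation is
    an AND-structure, return its first conjunct, whatever its content is."""
    if representation[0] == "AND":
        return representation[1]
    return representation

print(eliminate_conjunction(belief))                     # ('RAIN',)
print(eliminate_conjunction(("AND", ("P",), ("Q",))))    # works for any content: ('P',)

The operation never inspects what RAIN or P stand for; it is defined over constituent structure alone, which is exactly the sense in which LOT takes thinking to be syntactic.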
2.3.1 Philosophers on Universals: Realists

Long before linguists dared to express their opinion about the relation between Language and Mind openly, it was the philosophers who initiated the discourse on the properties of entities. In metaphysics, the problem of universals refers to the question of whether the properties of entities exist a priori in our Mind or are dependent on the outer world5.
4 A revision of this theory has been undertaken partly by Jerry A. Fodor (2009) and partly by Jerry A. Fodor and Zenon Pylyshyn (2015).
5 This point of view is important for how we understand the term "syntax" of meaning. Whereas we share Noam Chomsky's position that Syntax in the narrow sense is just blind Merge, an unbounded recursive operation with potentially infinite output, the input to Syntax is the Mental Lexicon, containing basic categories which are themselves subject to computation in Syntax. Both modules work independently.
In answer to these questions, Plato introduced the notion of "ideas." Thought requires the existence of the ideas expressed in the general ideas a priori. The idea is that which remains the same in all subjects or actions, however much they may differ from each other. It is the form (eidos) or essence (ousia) of things. The ideas are identified by a kind of spiritual "vision" (theoria). This vision takes place in the dialogue, which requires the art of proper conversation and reasoning (dialectic). The "ideas" are the "archetype" (paradeigma) of all things. They are immutable and prior to specific things (universalia ante rem), the latter only participating in the "ideas" (methexis). Only "ideas" truly exist in the literal sense of the word. The visible individual things only represent more or less (in)complete images of the ideas inherent in them. Individual things arise, change, and pass away. Their place is between being and non-being. Compared to his early and middle writings, Plato's view in the late writings (Parmenides) gives a new perspective on ideas: Plato himself points to problems of the theory of ideas. Aristotle, in his Metaphysics, softened the idealistic approach of Plato with a new doctrine of abstraction. However, he also took a realist position on universals. For him, cognition (Erkenntnis) is only possible if the General (katholou) has existence in itself (on hē on). However, for Aristotle, this existence of Universals was not independent of the particular thing. Universals are nothing detached (chorismos). The notion or concept of a "Universal" can only be assumed if individual things exist a priori. Art arises "when from many notions gained by experience one universal judgment about a class of objects is produced" (Met. I, 1, 981 a 5–5). The universal is an abstraction, something peeled off from individual things. Thus, the existence of individual things takes priority over the General (Aristotle), whereas ideas are prior to things (Plato). Taking up the idea of the individual thing and the universal notion, we can derive our idea, first introduced in Chapter 1, about reference and propositional meaning.
2.3.2 Reference and Meaning in Standard Philosophy

It should be clear that the problem of reference and meaning is one of the crucial problems of standard philosophy. If the sentence 1. The present King of France is bald is devoid of meaning, then why can we still refer to it in such a way that we can say it has some "Sinn" (sense = intensional meaning) but no meaning (extensional meaning or denotation, referential meaning) (Frege 1892)? Would not a sentence without reference be meaningless? The answer seems to lie between the two "ideas" of Plato and Aristotle. If we consider sentence 1. a real sentence with a real content and a real referent x, about which the predication P has been pronounced, then we do not refer to an individual referent x at a particular time t in a specific location l, but to an idea, or better: to a class term which could potentially, but need not, refer to a certain individuum. In language, meanings usually do not refer to individual items or entities but to abstract relational classes. This derives from the simple fact (and truism) that language is not the world, but nothing more than an abstract system of symbols which "stand for," but do not replace, the concrete individual objects or events of the world. The same point we have just made about individual items and classes of notions can be applied to events. Events are semantic concepts or classes of actions, states, and events, rather than phenomenological entities. Thus, the propositional meaning of the statement Peter works in the university is compositional, consisting of an event (action) and the location of an argument in space. In contrast, the propositional meaning of the statement The sun is shining can be analyzed as consisting of a one-place predicate describing a state of an x (a physical entity). The statements Erika ran a mile and Ivana ate a cake do not refer to an actual real outer-world situation or event, but rather to the propositional meaning of an achievement/result of a physical action (Erika ran a mile) or of an accomplishment (Ivana ate a cake). These examples demonstrate why some proper names do not refer; they just designate and accomplish their function as designators. Names as designators (in the sense of Kripke) are subject to the computing Mind of I-grammars just as much as referential expressions are; but more than that, the former serve as "indices" rather than as real physical or psychological notions of the outer world (referents). They substitute for the distinct referential meaning of the object, place, event, or situation they are assigned to. Nevertheless, this designation as descriptors does not have any natural or semantic connection to the real referents, but instead goes back to what Plato's Hermogenes would call the relation between name/form and
meaning/referent which in his opinion is arbitrary and given by pure convention (θεσει) (in the sense of de Saussure 1916)6. [383 St.1 A] HERMOGENES: Willst du also, dass wir auch den Sokrates unserer Unterredung hinzuziehen? KRATYLOS: Wenn du meinst. HERMOGENES: Kratylos hier, o Sokrates, behauptet, jegliches Ding habe seine von Natur ihm zukommende richtige Benennung, und nicht das sei ein Name, wie Einige unter sich ausgemacht haben etwas zu nennen, indem sie es mit einem Teil ihrer besonderen Sprache anrufen, sondern es gebe eine [B]natürliche Richtigkeit der Wörter, für Hellenen und Barbaren insgesamt die nämliche. Ich frage ihn also, ob denn Kratylos in Wahrheit sein Name ist, und er gesteht zu, ihm gehöre dieser Name. Und dem Sokrates? fragte ich weiter. Sokrates, antwortet er. Haben nun nicht auch alle andern Menschen jeder wirklich den Namen wie wir jeden rufen? Wenigstens der deinige, sagte er, ist nicht Hermogenes, und wenn dich auch alle Menschen so rufen. Allein wie ich ihn nun weiter frage, und gar zu gern wissen will was er eigentlich meint, erklärt er sich [384 St.1 A] gar nicht deutlich, und zieht mich noch auf, wobei er sich das Ansehen gibt als hielte er etwas bei sich zurück was er darüber wüsste, und wodurch er mich, wenn er es nur heraussagen wollte, auch zum Zugeständnis bringen könnte, und zu derselben Meinung wie er. Wenn du also irgendwie den Spruch des Kratylos auszulegen weißt, möchte ich es gern hören. Oder vielmehr, wie du selbst meinst, dass es mit der Richtigkeit der Benennungen stehe, das möchte ich noch lieber erfahren, wenn es dir gelegen ist.
6 As is known, the other position, ϕύσει ('by nature'), is taken up in the famous dialogue Kratylos. Socrates plays something of a mediator role. Plato's dialogue Kratylos is at the same time a model of modus ponens, whereby Kratylos represents the physei (ϕύσει) thesis, Hermogenes the antithesis (θέσει), and Socrates the synthesis of the two. Cited from Platon. Phaidon. Das Gastmahl. Kratylos. Bearbeitet von Dietrich Kurz. Griechischer Text von Leon Robin und Louis Méridier. In: Platon. Werke in acht Bänden. Griechisch und Deutsch. Herausgegeben von Gunther Eigler. Darmstadt 1974: Wissenschaftliche Buchgesellschaft.
Cf. Platon 1974. [383 St.1 A] HERMOGENES: Shall we then also let Socrates join our conversation? KRATYLOS: If you like. HERMOGENES: Kratylos here, Socrates, maintains that every thing has a correct name that belongs to it by nature, and that a name is not whatever some people have agreed among themselves to call a thing, calling it with a piece of their own particular language; rather, there is a [B] natural correctness of names, the same for Hellenes and barbarians alike. So I ask him whether Kratylos is truly his name, and he admits that this name belongs to him. And Socrates? I asked further. Socrates, he answers. And do not all other people really bear the name by which we call each of them? At least yours, he said, is not Hermogenes, even if all people call you so. But when I question him further and am very eager to know what he actually means, he does not explain himself [384 St.1 A] clearly at all and teases me, pretending that he is keeping something to himself that he knows about the matter, and by which, if he only wanted to say it, he could bring me to agree and to share his opinion. So if you can somehow interpret Kratylos' pronouncement, I would gladly hear it. Or rather, I would prefer even more to learn what you yourself think about the correctness of names, if it suits you.
Similarly, the Neoplatonic philosopher Porphyry took up Aristotle's system of categories in his introduction entitled "Isagoge", in which he explains the Aristotelian predicables as the mode of how we speak about things. This became the most important book of the scholastic Middle Ages. In this book, Porphyry explains why it is hard to understand the difference between names and objects of genera (genders, classes) and species (individual items). "As to genera and species, I shall avoid committing myself on the question whether they subsist or whether they exist only in the intellect, and also, if they subsist, whether they are material or immaterial, and whether they are separate from sensible things or exist only in and about them; for a task of this kind is very lofty and requires a more detailed investigation." (Isagoge, Book I, second comment)
2.4 William of Ockham and Ockham's Razor

Nominalism (from Latin nomen, "name, word") is usually used in contrast to (universals) realism, Platonism, or essentialism. Nominalist positions are distinguished by the fact that they generally deny the existence of terms, classes, properties, and other general objects (universals) and, on the other hand, admit only individual objects. We can distinguish several variants of nominalism. Strong nominalism rejects the acceptance of universals altogether. In conceptualism, it is held that general terms are formed only by the abstraction processes of consciousness and therefore do not correspond to real objects. General terms and definitions count as mere words that do not denote things. However, one can also find weaker positions, such as formalism, where only the assessment of the formal properties of conceptual distinctions stands in the foreground, or so-called constructivism, where general objects (understood as "classes") are conceived as constructs of abstraction. The representatives of classical empiricism or sensationalism are sometimes viewed as nominalists because, for them, the process of human knowledge starts from specific individual things, and only on this basis – through various abstraction and comparison processes – can general concepts be formed. The classification into Realists and Nominalists is, therefore, quite controversial. A good example is the theory of William of Ockham, who is regarded as an outstanding representative of nominalism in late scholasticism, and whose theory is, in contrast to other nominalist positions, characterized as "Platonic": "Every universal is an individual thing and therefore the only designation for a universal." "It can be shown with evidence that no universal is an extra-mental substance." (William of Ockham, Summa Logicae I, 15, 2)
At least three stages of the debate about nominalism must be distinguished, which have little to do with each other:
(1) the universals controversy, which dominated scholasticism;
(2) the current phase, which is dominated by empiricism; and, finally,
(3) the modern state of the discussion, in which the problems of the logical interpretation and logical notation of language are predominant.
William of Ockham is considered the founder of nominalism in late scholasticism. His conception of reality allows only individual substances; it reduces general terms to signs and rejects the theory of abstraction. Knowledge (Erkenntnis) comes about, according to him, through an intuitive cognition in which there is a direct causal link between the concept and the immediately present object. Because of this causal connection between the concept and the signified (Bezeichnetes), our knowledge relates to reality. The advantage of his theory is an "economical," that is, rather thrifty use of ontological entities, because it does not require the adoption of general objects or of concepts obtained by abstraction. This stance is known as Ockham's Razor in the history of philosophy and epistemology. Ockham's Razor is a methodological principle that always selects the simpler of two possible assumptions for describing or explaining X. The interpretation of this principle in connection with the problem of universals can become rather complicated. It is connected to the tradition of William of Ockham (and has often led to the alleged quotation "entia non sunt multiplicanda praeter necessitatem," which means "entities should not be multiplied beyond what is required"). Although these words are not found in Ockham, many passages can be cited from his writings that say something similar. We will develop these philosophical ideas further and assume, for the time being and until the contrary is proven, that I-languages do not contain abstractions or specific things of the outer world. Instead, they contain linguistic entities in our Mind as terms of lexical and functional classes, features, and relations. They compute abstract classes of entities and events in the mental lexicon, computing them via syntax to the interfaces SM and CI in the way we have shown in Chapter 1, Tab. 1. As a first approximation, we assume the following properties of these linguistic entities:
1. a small number7 of basic categories;
2. a small number of axioms;
7 "Small" means as small as required to exhaustively make sense of the computation of meaning and of syntactic processes mapped onto meaningful expressions, not just "small." That qualification, for me, yields the distinction between "perfect" (small number) and "optimal" (the smallest number with which we can still account for the relevant computations). I owe this improvement of my terminology "small" to Diego Gabriel Krivochen, p.c.
3. shortness of evidence;
4. a small number of logical constants;
5. optimal computation8 and comprehension of the evidence within a category (due to our general theory of language as part of the physical world, obeying physical laws).
We assume that properties 1–5 are crucial for our understanding of the notion of economy and minimal computation as put forward since the Minimalist Program (Chomsky 1995). How can we imagine "a small number of basic categories" within the syntactic-semantic interface approach we try to defend as a minimal notion of FLN and Human Computation (HC)? The most straightforward notion of HC would be to combine primitives of the mental lexicon and project them onto the syntax-semantics interfaces. If we understand the mental lexicon as the reservoir from which a human being derives both his/her syntactic and semantic knowledge of UG, and which seems to be the base of any human language (the SM/CI interface), we should be able to define UG as a notion comparable to what Collins and Stabler (2011) call "UG," under the assumption that this gives the basic notion of what we call minimal computational processing. The following definitions follow Collins and Stabler (2011), and the definition paragraphs are taken literally from them in order to discuss these points: "Definition 1. Universal Grammar. Universal Grammar is a 6-tuple: ⟨PHON, SYN, SEM, Select, Merge, Transfer⟩. PHON, SYN, and SEM are universal sets of features. Select, Merge, and Transfer are universal operations. Select is an operation that introduces lexical items into the derivation. Merge is an operation that takes two syntactic objects
8 For optimal computation with attractor neuronal networks, some testable and falsifiable proposals already exist. For example, in an article by Latham et al. (2003), the authors demonstrate that biologically plausible recurrent networks can perform optimal computations with noisy population codes, at least for uncorrelated or stimulus-independent noise in the input and for noise-free evolution. This is a first step toward understanding how spiking networks, which do not evolve noise-free, can perform optimal computations, and how they can do so when the noise is correlated and/or stimulus-dependent. Cf. Latham et al. (2003).
and combines them into a syntactic object. Transfer is an operation that maps the syntactic objects built by Merge to pairs that are interpretable at the interfaces. Select, Merge, and Transfer are defined later in the paper. Definition 1 captures what is invariant in the human language faculty. There is no need to augment the 6-tuple with a specification of the format of lexical items since the required format of lexical items is already given by the definitions of the operations" (Collins and Stabler 2011). Collins and Stabler (idem) further assume that UG specifies three sets of features: semantic features (SEM), phonetic features (PHON), and syntactic features (SYN). They further assume that these three sets do not overlap: "SEM may include features like eventive or (not stative); it could include thematic roles like agent, recipient, experiencer, and it could also include semantic values like λyλx.x breaks y. PHON may contain segments, phonological features like [+ATR], and ordering restrictions. SYN includes syntactic categories like N, V, P, Adj, but also subcategorization features, EPP features, and 'unvalued' features [uF] valued by Agree." From these axioms, they derive another crucial axiom for their interpretation of UG: "Definition 2. A lexical item is a triple of three sets of features, LI = ⟨Sem, Syn, Phon⟩, where Sem ⊂ SEM, Syn ⊂ SYN, and Phon ⊂ PHON. Note that for some lexical items, Sem, Syn, or Phon could be empty. Definition 3. A lexicon is a set of lexical items. While infinite lexicons are not excluded, since the lexical items are essential (not generated) and human minds are finite, only finite lexicons need to be considered. Null lexicons are also not excluded in principle. Definition 4. An I-language is a pair ⟨LEX, UG⟩, where LEX is a lexicon and UG is Universal Grammar" (cited from Collins and Stabler 2011). According to Ockham's razor, all of these properties cannot logically be fulfilled at the same time. For example, one could use only one operator in propositional logic (e.g., the Sheffer stroke, Sheffer 1913), but this would lead to a lot of very long and unintuitive proofs, so that such "simple" systems are not used in the practice of logical thinking. In addition, a distinction must be made between a qualitative and a quantitative interpretation of this principle. Usually, only the qualitative economy appears to be desirable. For example, we would not prefer a physical theory which states that there are only 10⁷⁹ elementary particles in the universe over another which states that there are 10⁸⁰ simply because it is quantitatively more parsimonious, that is, because the first assumption postulates fewer entities
than the other theory. Instead, we would prefer a theory that assumes fewer types of elementary particles than another, because it seems qualitatively simpler. These two interpretations are, however, not always easy to separate sharply. Lewis argues for his theory of possible worlds by pointing out that he accepts not the existence of many different kinds of objects, but of many objects of the same type, namely worlds. The problem, however, is that the set of all possible worlds already contains the set of all possible objects as objects in these worlds – we cannot assume all possible worlds to be equally real without doing so for all sorts of objects. Insofar, Lewis' claim that his theory does not violate the "important," that is, the qualitative, variant of Occam's razor fails. At present, we cannot state a priori what the "small set of lexical or functional categories" or the "small number of logical constants" of the Mental Lexicon will be. Postulate 2, "small number of axioms," might be related to the Theory of the Computing Mind in general, but postulate 3, "shortness of evidence," is external to the set of properties; it merely relates to the explanatory and descriptive power of our Theory and says something about the falsifiability of our Model. Thus, in the present Theory of the Human Mind and the Language Faculty, we want to define UG in terms of three independent levels which act as the driving and triggering mechanisms of external and internal Merge and which are crucial for the projection and concatenation of elementary lexical items into elaborate notions of human grammars, namely roots, phases, and interface(s). We strongly rely on the Theory of Radical Minimalism (RM) as developed in Krivochen's work and adopted in Krivochen and Kosta (2013) and Kosta and Krivochen (2014). For the time being, some essential facts and assumptions must be introduced. We start from a syntactic approach in which all syntactic (including morphological) operations, that is, the structure-building process of Merge, start out from the √ROOT9 and proceed up to a PHASE. The derivation begins with the choice of a lexical item derived from a lexical list of the mental lexicon (select units of lexical fields
9 We take the notion √ROOT from Panagiotidis (2014) and from the seminal work of Heidi Harley. It also corresponds to Definition 11 in Collins & Stabler: "For any syntactic object X and any stage S = ⟨LA, W⟩ with workspace W, if X ∈ W, X is a root in W. When X is a root, we will sometimes say simply that X is undominated in W, or when W is understood, simply that X is undominated. The Merge operation is constrained to apply to a root (see the definition of derive-by-Merge below), and the operation acts at the root in the sense that it embeds a root into a more complex syntactic object." It is clear that a root cannot yet be defined in terms of any syntactic notion but is taken as a purely semantic notion, since it is not dominated by any categorical notion of hierarchy and/or dependence. Thus, a root √RX itself can be categorized only in a certain syntactic workspace (or projection). For example, if √RX = √open and √RX is merged with a functional category Y = CAUS (to make), it can evidently become only a lexical category marked for an event feature; and if √RX is merged with a functional category Z = DET (the, a(n)), it can evidently become only a functional projection DP (an/the opening), marked, for example, for individual-term status and definiteness. Thus, only the combination of the feature [lexical category] vs. [functional category] and the workspace turns a root into a category (in the sense of labeled bare phrase structure).
→ Lexical Array) and combines it with a second unit by a simple operation, i.e., external merger (External Merge), in the corresponding area of the search order (Probe) (cf. Krivochen and Kosta 2013, Kosta and Krivochen 2014, Kosta 2015a, b).
(22) If the functional characteristics of the two lexical units (LIφ1 and LIφ2) match, they can be read off at one of the two basic levels/interfaces (sensory-motor / SM or conceptual-intentional / CI), where they are assessed and possibly filtered out. The result is the construction of complex phrases, expanded on the basis of the (semantic) principle of Dynamic Full Interpretation (see Krivochen and Kosta 2013: 70).
(23) Merge is a free, unbounded operation that applies to two (the smallest non-trivial number of elements) distinct (see below) objects sharing format, either ontological or structural. Merge is, on the most straightforward assumptions, the only generative operation in the physical world. Recursive Merge yields digital infinity (Ott 2009: 257).
(24) Types of Merge:
1) Merge (α, β), α ≠ β – but α and β share format – Distinct binary Merge (Boeckx 2010a, Krivochen 2011, 2012) or External Merge (Adger 2011: 9);
2) Merge (α, β), α = β – Self Merge (Adger 2011: 9)10;
3) Merge (α, β, γ...), α ≠ β ≠ γ – Unrestricted distinct Merge
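Purely as an illustration (a sketch under our own simplifying assumptions, not a formalization endorsed by Collins and Stabler, Adger, or RM, and with class and function names that are merely expository), the lexical-item format cited above and the three logically possible Merge types in (24) can be rendered in a few lines of Python:

```python
from dataclasses import dataclass

# A lexical item as a triple of feature sets (Definition 2 above);
# any of the three sets may be empty.
@dataclass(frozen=True)
class LexicalItem:
    sem: frozenset = frozenset()   # subset of SEM
    syn: frozenset = frozenset()   # subset of SYN
    phon: frozenset = frozenset()  # subset of PHON

# A lexicon is simply a (finite) set of lexical items (Definition 3).
Lexicon = set

def merge(*objects):
    """Free, unbounded Merge: build an unordered set from its inputs.
    The three cases in (24) fall out of how many distinct inputs there are."""
    distinct = set(objects)          # set formation collapses identical tokens
    if len(objects) == 2 and len(distinct) == 2:
        kind = "distinct binary Merge (external Merge)"
    elif len(distinct) == 1:
        kind = "self Merge: {a, a} = {a}"   # collapses to a singleton
    else:
        kind = "unrestricted distinct Merge (n-ary)"
    return frozenset(distinct), kind

# Example: only the first case yields a new two-membered object.
run = LexicalItem(sem=frozenset({"event"}), phon=frozenset({"/run/"}))
v = LexicalItem(syn=frozenset({"v"}))
print(merge(run, v))    # binary, distinct
print(merge(run, run))  # self Merge collapses to {run}
```

The sketch is only meant to make visible why option 2) adds nothing to the set and why option 3) departs from binarity; the argument for keeping only 1) is given in the text below.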
(25) No Labelling needed
Merge is the simplest universal of FLN, maybe even for FLB (cf. Hauser, Chomsky, and Fitch 2002).
(26) Labels or Features?
The Merge algorithm brings up the problem of labels: how to signal headedness and how to account for endocentricity, as it seems to be a pervasive feature of human
certain syntactic workspace (or projection). For example, if √RX = √open and √RX is merged with a functional category Y = CAUS (to make), it can evidently be only a lexical category marked for an event feature, and if √RX is merged with a functional category Z = DET (the, a(n)), it can evidently become only a functional projection DP (an/the opening), marked, for example, for individual term and definiteness. Thus, only the combination of the feature [lexical category] vs. [functional category] and the workspace derives a root to become a category (in the sense of labeled bare phrase structure). 10 Adger (2011: 9) gives the following definitions: [(24a)] Merge (A, B), A distinct from B, → {A, B} (External Merge), [(24b)] Merge (A, B), A part of B, → {A, B/A} (where B/A signifies that A is contained in B) (Internal Merge), [(24c)] Merge (A, B), A = B, → {A, A} = {A} (Self-Merge). As we can easily see, Self-Merge does not project new heads, so it is ruled out: [(25)] a. Merge x with x = {x, x} = {x}, b. Merge {x} with {x} = {{x}, {x}} = {{x}}, c. Merge {{x}} with {{x}} = {{{x}}, {{x}}} = {{{x}}} ...
language (see Adger 2013 for a cognitively oriented introduction). Chomsky attempted to solve it with a simple rule: he proposed that there are two kinds of Merge, pair-merge and set-merge (Chomsky 1998: 58). The former concerns adjunction, which is still a problem in minimalist theory, since no satisfactory theory of its derivation has yet been proposed (that we know of; see Uriagereka 2005 for a Markovian take on adjunction), insofar as current theories (see, e.g., Hornstein 2009: Chapter 4) rest on a series of stipulations over phrase markers and draw the distinction between complements and non-complements in a non-principled way. Thus, if we externally pair-merge α to β, it is always β that projects. Chomsky (2004a) has suggested that adjuncts are assembled in a parallel derivational space and then introduced into the main tree by means of an old mechanism recently revived: a generalized transformation, which, simplifying, introduces a sub-tree into a terminal node of another sub-tree, as we have pointed out with non-monotonic operations. Asymmetries between arguments and adjuncts are thus theoretically enhanced in purely structural terms. In set-merge, there is some “requirement” of α which is satisfied by its merger with β (say, argument structure, possibly coded in terms of selectional features), and it is α that projects a label. The labeling algorithm that Chomsky proposes for Merge(n), where n always equals two distinct elements, can be summarized as follows (Chomsky 2008: 145; also 2013: 43):
(27) i. In {H, α}, H an LI, H is the label ii. If α is internally merged to β, forming {α, β}, then the label of β is the label of {α, β}.
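For concreteness only, the labeling algorithm in (27) can be stated as a small recursive procedure; the representation of syntactic objects as nested pairs and the helper names below are our own assumptions, not part of Chomsky's or the author's formalism:

```python
# A syntactic object is either a lexical item (a string here, for simplicity)
# or a 2-tuple of syntactic objects built by Merge.
def is_lexical_item(so):
    return isinstance(so, str)

def label(so, internally_merged=None):
    """Labeling algorithm of (27):
    (i)  in {H, a} with H a lexical item, H is the label;
    (ii) if a is internally merged to b, forming {a, b},
         the label of b is the label of {a, b}."""
    if is_lexical_item(so):
        return so
    a, b = so
    if internally_merged == a:            # clause (ii): a moved to the edge of b
        return label(b)
    if is_lexical_item(a):                # clause (i)
        return a
    if is_lexical_item(b):
        return b
    raise ValueError("{XP, YP}: no label determined by (27) alone")

# External Merge of the LI 'v' with the phrase ('eat', 'apples'): 'v' labels.
vp = ('v', ('eat', 'apples'))
print(label(vp))                            # -> 'v'
# Internal Merge of 'what' to a clause: the clause's label wins (clause ii).
cp = ('what', ('C', ('you', ('ate', 'what'))))
print(label(cp, internally_merged='what'))  # -> 'C'
```

Note that the sketch has to raise an error for {XP, YP} configurations, which is precisely the point where the algorithm becomes stipulative, as discussed below.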
We will argue in favor of 1) Merge (α, β), α ≠ β –but α and β share format- Distinct binary Merge (Boeckx 2010a, Krivochen 2011, 2012, Krivochen and Kosta 2013) within our framework of Radical Minimalism (RM), deriving binarity (though not 2-D binary-branching as in Kayne 1994) from interface requirements, not from intra-theoretical axioms. Most of this discussion is based on Krivochen (2012c), although additional arguments are provided in Krivochen and Kosta (2013: 45–46) and Kosta and Krivochen (2014a). Against “Self Merge”: RM’s definition of the generative function, Merge allows us to dispense with notions like “Unary Merge” and “Primary Merge” (De Belder 2011, de Belder and van Craenenbroeck 2011: 11).
“(...) When an element {α} is the first one to be taken from the resource by Unary Merge, it is included in an empty derivation, i.e., the object under construction is the empty set Ø (see also Zwart 2010: 8). The output of this instance of Merge is no different from any other: it yields an ordered pair, in this case ⟨{α}, Ø⟩.” (cf. Krivochen and Kosta 2013: 45–46).
Notice that, for interface purposes (the only ones that matter in a “free, unbounded Merge” system, since DFI deals with legibility, and syntax is only generative, blind to the characteristics of the elements it manipulates, except for their format), ⟨{α}, Ø⟩ equals {α} (which crashes at the semantic interface for independent reasons), since Ø is “completely and radically empty: it has no category, no grammatical features, no specification of any kind” (p. 12). The application of the operation Merge thus does not generate an interpretable object, or, at least, nothing “more interpretable” than {α} alone, as Ø is the “empty set,” a Root Terminal Node. Unary Merge and Primary Merge can both be ruled out in favor of simple interface-driven binary Merge by appealing independently to specific legibility conditions (C-I specific requirements to build an explicature) and to DFI, which is a universal, strictly interface condition, applicable to any derivation in any mental faculty. According to Adger (2011), the basic units with which syntax operates are:
(28) a. RLex, the set of LIs, which he identifies with roots
     b. CLex, the set of category labels
Self-Merge combines α and β, α = β, and CLex provides labels for the structures built by Merge. The effect of Unary Merge is to create unary branched structures that are extended projections of the root, of the type {...{{{√}}}...}. Besides the apparent criticism that Self Merge is trivial at the interface level supposing that DFI is valid (see above), we have found problems with labels and functional nodes. If CLex is a set from where a function “Label” takes an element and provides an unlabeled syntactic object with a label, some complications arise: Ad 1) There is a potential violation of the Inclusiveness Condition: labels should be recognized from a configuration at the relevant interface, not created and inserted via an arbitrary algorithm. Ad 2) The labeling algorithm is stipulative (i.e., there is no principled way of determining the label of an object). Examples: Label ({√cat}) = N by Root Labeling (Adger 2011: 12). The algorithm is as stipulative as Chomsky’s. Moreover, an additional ad-hoc operation is assumed. Labels are introduced in NS, where they play a role only if associated with the interfaces PF and LF. Besides, there is an a priori set of labels, whose nature and justification is unclear. Roots should never contain Labels in advance because then we run into a circle definition. In Adger’s proposal, labels also take care of categorization, which is also unnecessary in Narrow Syntax (NS). No functional/procedural elements are taken into account. Moreover, “there are no functional elements qua lexical items” (Adger 2011: 10). As far as RM is concerned, labels may show the
results of category recognition at the interface for expository purposes, but in no way can they categorize, since they have no entity in NS.
Ad 3) The function of EVAL = Evaluator in our Theory
Chomsky’s (1995) version of the Minimalist Program had a global evaluator, namely, the Minimal Link Condition (MLC) or Fewest Steps11, which was soon abandoned in favor of more local constraints (in phase-theoretical terms), from 1998 on. Too much has already been said on Chomsky’s phase system (see Boeckx and Grohmann 2007 for an overview and comparison with the Barrier system and Krivochen 2010b for a summary of current proposals and critical readings, as well as our own proposal), and we will not get into it for the time being. On the other hand, OT’s EVAL has been explicitly formulated and modified over the years and consists of a set of hierarchically ordered constraints, which apply to representations (or “candidates”) generated by GEN. It is responsible for filtering out suboptimal representations that do not fulfill conditions established by constraints. There is a competition among elements, as in DM, but not one for matching the features of a syntactic terminal node (i.e., a morpheme); rather, it is a competition between outputs that violate as few constraints as possible (optimally, but rarely, none). The main question, however, is not about quantity but about hierarchy: a candidate violating a single high-ranked constraint is rejected even against one violating more than one lower-ranked constraint. The hierarchy relation that holds between constraints in a language L gives rise to several problems. It is in the evaluator that most of the problems we find with traditional OT lie.
Ad 4) Constraints of UG and problems of Hierarchy of Labels
Let us assume there is a number X of constraints. Some of these, say, X-4, apply to an L1, and a different subset, X-3, applies to L2. This is a serious problem since there are constraints that could plausibly apply to a single language (which takes us back to EST rules). This criticism could be avoided if one accepts the claim that constraints are universal, which in turn licenses a different criticism: What is the difference between a constraint and a parameter?12 Moreover, how can L1 metalinguistically account for the set of constraints that do not apply to it?
11 Defined in Müller (2011a: 14) as follows: Fewest Steps: If two derivations D1 and D2 are in the same reference set and D1 involves fewer operations than D2, then D1 is to be preferred over D2.
12 An example, taken from Müller (2011b: 7), is the following: Wh-criterion: Wh-items are in SpecC [wh]. θ-Assignment: Internal arguments of V are c-commanded by V. In our opinion, these so-called constraints are nothing more than the two possible settings
The development of a metalanguage is problematic (see Kempson’s criticism of feature-composition semantics for a similar argument). Regarding the ontology of the hierarchy, we find the following problem: let us assume that there is a hierarchy between constraints, so that they have the form ⟨Cx > Cy⟩, with Cx ranked above Cy. Which is the principle ruling the hierarchy? In syntactic X-bar representations, for example, we have the three axioms, mainly endocentrism. Here, there seems to be no ruling principle. If there is not, there is a fundamental problem with the very concept of hierarchy. But, what is worse, if there is, then there is an even bigger problem: X-bar phrase structure can be seen as a set-theoretical phrase structure (see especially Di Sciullo and Isac 2008, Panagiotidis 2010). If this is so, that is, if there is a set relation between constraints such that Cx cannot apply unless Cy, which is higher in the hierarchy, does, then the application of the most specific constraint can stand in for long chains of constraints. This would work in the same way as simplifying feature specifications like [+concrete], [+animate], [+human] to simply [+human], since the latter presupposes (as it requires) the other two. If there is a true hierarchy, things should work this way, and many constraints could be eliminated in favor of the most specific ones. But the drawback would be that the more specific the constraint, the less likely it is to be universal. So, the elimination of redundancy, which would be desirable in any system, is apparently fatal for an OT evaluator. In OT, the EVAL function is in charge of filtering out illegitimate/suboptimal representations generated by GEN. That evaluation, in most current versions of OT, applies at a global level (see Embick 2010 for discussion), even though globalism does not necessarily follow from an OT-like architecture. The main problem we see here is that global evaluation is both computationally inefficient and biologically implausible. If we consider OT-like constraints in a local domain, things get only a little better: the problems with the ontology of the hierarchy remain. In turn, extremely local optimization (Heck and Müller 2007, Müller 2011b) leads to a proposal in which all SO are stipulative phases, as in the strongest interpretations of Epstein and Seely (2002); bear in mind that the word “optimization” is used, not just “evaluation.” If an SO is an optimal candidate, there is no reason why it should not be transferred ipso facto, regardless of interface legibility conditions.13 Our local evaluation system (qualifiable
of the “Wh-parameter,” with the extra elements and relations “Spec-of,” “Wh-feature,” “c-commanded-by.” On parameter setting, cf. the critical discussion in Chapter 8 of this book. 13 On further problems of OT regarding predictability and explanatory adequacy cf. Kosta (2002).
as “harmonic serialism”) is more flexible, allowing the working memory to host structures of variable complexity before Transfer applies and the interfaces take what they can minimally read. Given the fact that the Faculty of Language (FL) is in our proposal not an autonomous system but a workspace originating from the intersection of two systems (call them Conceptual-Intentional / Sensory-Motor, Semantics / Phonology) and the activation of the pre-frontal cortex (following recent neural models of working memory), it exists within those systems (see also Stroik and Putnam 2013). Therefore, it is only natural that the so-called external systems (which are not external at all, in our proposal, as there is nothing to be “external” to in our non-existence framework) can have access to the derivational steps, that is, the computations in WX. This conception of “invasive interfaces14” (i.e., the fact that the interfaces can access the workspace and analyze the result of each derivational step in order to look for minimal fully interpretable units to “take”) is of essential importance for our architecture, since it completely changes the traditional minimalist view on Transfer: in our model, it is not the case that the generative component “hands over” a certain object to the interfaces according to some criterion of delimitation (e.g., the presence of [u-FF], see Gallego 2010); instead, it is the interfaces which are active and take the smallest pieces of structure they can read, independently of each other. We have already claimed that it is interface legibility requirements that trigger the application of the GEN function (i.e., Merge), and we have dismissed otherwise logical possibilities (Self Merge and Unrestricted distinct Merge) on the basis of interface conditions on interpretation. The interfaces Analyze the result of each derivational step, which only they can trigger, and try to find minimal portions of legible/relevant information which they can grab. Let us call those minimal, fully interpretable units Phases. The evaluation is performed by both “performance systems” separately, which has as a consequence the fact that PF-Phases and LF-Phases do not need to coincide. Once one of these units is found in real time, it is Transferred; that is, the relevant interface takes the piece of information and stores it to perform further computations (e.g., reconstruction effects in C-I). This situation of independent evaluation we will call Asymmetry between the interfaces. The derivational dynamics resulting from what we have said above is as follows:
(29) a. Narrow Syntax: Merge (α, β) = {α, β}
     C-I: Label recognition: {α, {α, β}}
     C-I / S-M: Analyze: is {α, {α, β}} fully interpretable in the relevant system?
     [C-I / S-M Transfer {α} iff it is]
14 Cf. the proposal made in Boeckx (2007), stemming from different assumptions and arriving at different conclusions with this same idea of invasive interfaces.
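The interface-driven dynamics in (29) a. can be pictured, purely schematically, as a loop in which the interfaces inspect the workspace after every application of Merge and store whatever minimal chunk they can fully read; the workspace representation and the legibility predicate below are our own illustrative assumptions, not part of the formal model:

```python
def derive(numeration, fully_interpretable_at):
    """Schematic 'invasive interfaces' loop for (29a):
    NS merges blindly; C-I and S-M Analyze each step and Transfer
    (independently of each other) any minimal unit they can read."""
    workspace = list(numeration)
    transferred = {"C-I": [], "S-M": []}
    while len(workspace) > 1:
        beta = workspace.pop()
        alpha = workspace.pop()
        so = (alpha, beta)                 # Merge(alpha, beta) = {alpha, beta}
        workspace.append(so)
        for interface in ("C-I", "S-M"):   # Analyze at each interface
            if fully_interpretable_at(interface, so):
                transferred[interface].append(so)   # Transfer/store the minimal unit
    return workspace, transferred

# Toy run with arbitrary toy criteria: C-I only reads chunks containing a
# predicate, S-M only chunks containing overt phonological material.
def toy_criterion(interface, so):
    flat = str(so)
    return "eat" in flat if interface == "C-I" else "apples" in flat

print(derive(["v", "eat", "apples"], toy_criterion))
```

Because the two interfaces apply their own criteria, the chunks they store need not coincide, which is the Asymmetry between PF-Phases and LF-Phases mentioned above.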
Additional theory-internal assumptions:
(29) b. 1. Phase Impenetrability Condition
In phase α with head H, the domain of H is not accessible to operations outside α; only H and its edge are accessible to such operations. (Chomsky 2000: 108)
The general configuration predicted so far is given in (29) c. and (29) d.:
(29) c. [ZP Z… [XP X [HP α [H YP]]]] (cf. Chomsky 2001: 13)
     d. [CP C… [TP T [vP DP [v VP]]]]
In (29) c., Z and H, given in bold, are phase heads, and there is no intermediate phase head X between them. This configuration is instantiated in (29) d., in which C and v are phase heads, and T is not a phase head. According to (29) b., as soon as HP is complete, the complement of H is spelled out. HP is complete when H no longer projects. Being inaccessible means that if there are any unvalued (or unchecked) features, they are left behind (cf. Citko 2014: 1431–1432). Elements which must remain accessible to operations beyond the phase must undergo internal Merge (IM) to the phase head or its projections (Chomsky 2000, 2001). Spell-Out/Transfer sends syntactic structures to the interfaces for interpretation in chunks. Syntactic domains relevant to Transfer are called phases. Proposed phase heads are v* and C. The complement of the phase head is Transferred. 2. In addition to the PIC (ultimately stipulative), which determines how the result of Merge will be interpreted for the purposes of further computations, there is a corollary to (5 i), which was examined by Uriagereka (1998) and, in less detail, Chomsky (1994). If Label is part of Merge, then the label of Merge (X, Y) is either X or Y, no other option being acceptable under Minimalist assumptions. In the course of the study, we will try to give enough empirical evidence to justify our approach of computing syntactic categories from more primitive notions such as semantic roots.
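As a purely illustrative aid (our own toy encoding, not a formalization of Chomsky's or Citko's definitions), the effect of the PIC in (29) b.–d. can be sketched as a function that computes which positions of a phase remain accessible once the complement of the phase head has been transferred:

```python
# Toy phase: a head, its edge (specifiers / adjoined material), and its complement.
PHASE_HEADS = {"C", "v*"}

def accessible_after_transfer(phase):
    """PIC (Chomsky 2000: 108): in phase a with head H, the domain (complement)
    of H is not accessible to operations outside a; only H and its edge are."""
    head, edge, complement = phase["head"], phase["edge"], phase["complement"]
    if head not in PHASE_HEADS:
        # Non-phase heads (e.g. T) do not trigger Transfer of their complement.
        return [head] + edge + complement
    return [head] + edge          # the complement has been spelled out

vP = {"head": "v*", "edge": ["DP_subject"], "complement": ["V", "DP_object"]}
TP = {"head": "T", "edge": [], "complement": ["vP"]}

print(accessible_after_transfer(vP))  # ['v*', 'DP_subject']  -- the object is frozen
print(accessible_after_transfer(TP))  # ['T', 'vP']           -- T is not a phase head
```

Elements that must escape this freezing effect have to be internally merged to the edge first, which is exactly what the clitic-climbing data discussed later in this chapter exploit.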
2.5 Classes and Relations Are Determined by Biological Needs and Endowment: Eric Lenneberg on Language and Arithmetics
Little is known about the language faculty with respect to meaning; we take here the biolinguistic position introduced by Eric H. Lenneberg (1994), who strictly rejects a purely nominalist position; instead, he argues both from a logical and
neurological perspective, with appealing and convincing arguments in favor of a relational approach to language in which words are acquired as classes (not as individual items) via a relational structure or faculty which only the human species shares (irrespective of which class of words is at stake). Lenneberg shows this not only for arithmetics, numbers, and words but also for sensual objects (such as color terms), sensations (such as fear and love), and also for some other domains (cf. Lenneberg 1994). Conceptual systems are mapped in the way Semantics maps onto Syntax, but Externalization happens via Phon. Only man (and by extension chimpanzees) can acquire this; in man it is genetically endowed, and in the chimpanzee only some of these properties are “visible,” but only man can build up a conceptual system in at least two, maybe four, domains: Arithmetics, Language, Music, and Arts (cf. Mukherji 2010). Whereas the first operation, the reflection of the outer world in our mind, belongs to the interface between psycholinguistics, neurolinguistics, and philosophy, the second, the conceptualization of grammar and lexicon, belongs to linguistics proper. We shall try to answer the second, linguistic question of the collaboration between I-grammar and a theory of meaning in the sense of the mental lexicon in the next section.
2.5.1 Small Number of Basic Lexical Categories If we want to remain consistent within our presumptions of parsimony, the model of semantics must contain only a limited number of basic categories of Mental Lexicon (ML). The categories we bear in mind must be not only sufficient but also necessary. Moreover, these elementary lexical units (ELU) of the lexicon must meet at least three requirements of the semantic theory of Mental Lexicon. (i) ELU must be elementary, that is, an elementary unit must not be disassembled into smaller segments of meaning15;
15 Contrary to what has been assumed by some cognitive linguists, an ELU is not subject to semantic decomposition (cf. Jackendoff, Ray 1983 Semantics and Cognition) because it is semantically underspecified in the sense that it does not have any categorical feature nor does it have any s-selectional or c-selectional properties. Thus, we do not really speculate which primes are more important than other primes. Rather, our notion of lexical and functional categories is guided from a syntactic-decomposition perspective in the sense of semantically underspecified √roots determined by c-selection for different lexical categories during syntactic derivation, cf. Panagiotidis (2014: 4).
(ii) ELU must constitute the appropriate input for the generation of larger syntactic units, phases.
(iii) ELU must meet the criterion of compositionality.
Ad (i) How can we decide which category of the ML is elementary? In other words, how can we know that a category or an ELU is not decomposable into smaller semantic features? The problem of the decomposition of lexical meaning16 immediately comes to our mind. Does a decomposition of the meaning of the ELU √correre, √run, √laufen, √běžet help us in analyzing the structures (30)–(35)? We assume this not to be the case. However, it does contribute to the information about event structure (and in this sense, we might even speak about the decomposition of the event structure of the predicate), the predicate-argument structure, and the Label, which is nothing else than the syntactic category, which come into play under a syntactic notion of derivation. If we consider the following sentences, we intuitively know that the complement of the predicate “to run” cannot be derived just from the one meaning of the predicate; rather, it seems to be restricted to three different syntactic categories (c-selection), determined by a bundle of different distinctive syntactic features, namely unergativity and non-achievement vs. ergativity/unaccusativity. Altogether, these syntactic features form a Label and determine which Phase they are assigned to (or better, whether they are a Phase or a non-Phase).
(30) Erika ha corso per due minuti. [c-selection: unergative V] [event structure: non-achievement]
     Erika has run for two minutes.
     “Erika has been running for two minutes.” (duration of the event)
(31) Erika è corsa a casa in due minuti. [c-selection: unaccusative V] [event structure: achievement]
     Erika is run at home in two minutes.
     “Erika ran back home in two minutes.”
(32) Erika ist zwei Meilen gelaufen. [c-selection: unergative V] [event structure: non-achievement]
     Erika is two miles run.
     “Erika has run two miles.”
(33) Erika ist in zwei Meilen zu Hause angekommen. [c-selection: unaccusative V] [event structure: achievement]
     Erika is in two miles at home arrived.
     “Erika arrived back home in two miles.”
(34) Erika běžela dvě míle/minuty domů. [c-selection: unergative V] [event structure: non-achievement/duration]
     Erika ran-IPF.ASP two miles/minutes home
     “Erika was running home for two miles.”
(35) Erika doběhla (za dvě míle) domů. [c-selection: unaccusative V] [event structure: achievement at the end of the time span of two miles or two minutes]
     Erika ran-PF.ASP after two miles home
     “Erika arrived home in two miles.”
16 Cf. Pustejovsky (1996).
In Italian, the difference between the compositional meanings of the sentences (30) and (31) is expressed with two different grammatical categories of the same lexical verb: in (30) with an unergative (intransitive) verb in which the external argument is the agent (Theta-role) and is base-generated, and the durative frame PP per due minuti denotes only the duration of the process of running without a limit at the end (thus, the verb is a process verb controlled by the subject). In (31), however, the same ELU correre “to run” has to take a complement PP (a casa) and another, so-called frame complement PP in due minuti, which denotes the achievement of the interval in which the running has come to an end. The verb takes the auxiliary essere (just like an unaccusative verb) because the achievement of the last interval at the same time marks the result of the action of running; this means that a new state has been reached by the exhaustion of the time span. In German, the example (32) represents the non-telic verb laufen “to run,” which can take only the PP über zwei Meilen “two miles long,” whereas in (33) the verb ankommen “to arrive” is telic and can be composed only with a PP which denotes the achievement of the interval in which the running has come to an end. In Czech, a West Slavic language in which the category of aspect is grammaticalized, (34) has a verb of motion in the imperfective form běžet “to run” and denotes duration with no limitation of the beginning or end of the action in which the event happened (two miles). This imperfective verb can subcategorize only for the phrase with an accusative of time (dvě míle), where both the beginning and the achievement of the action are unmarked. On the contrary, in (35) the determinative, secondary perfective verb (delimitative Aktionsart doběhnout “to run to an end and reach the limit”) must take a PP za dvě míle “after two miles,” which denotes the event as a momentary, short concept where the time of the event is marked only at its limit with the preposition za “after,” as a short time span which has been saturated and which, much like the Italian telic example with an unaccusative verb, denotes the achievement of the interval in which the running has come to an end. These six examples
clearly show that the c-selection and event structure of the predicate are crucial but not prior to computation in syntax. In fact, it is the syntactic context of the syntactically not yet specified root in the Mental Lexicon which restricts or predicts its syntax, and not vice versa (contrary to what any approach of LFG would predict and state). This argumentation is in line with what we said about the role of Syntax as a computation (Merge and Recursion) of semantic units into syntax. The fact that argument-predicate structure and event structure are determined by the syntactic structure of a sentence (and not vice versa) can be seen in the following examples in (36)–(39), which show the differences between causative and anti-causative verbs:
(36) Petr otevřel dveře. (Transitive/Causative)
     Peter opened door
     “Peter opened the door.”
(37) Dveře se otevřely. (Anti-causative)
     The door Refl opened
     “The door opened.”
(38) *Dveře se otevřely Petrem. (Passive)
     The door Refl opened Peter
     *“The door opened by Peter.”
(39) Dveře se otevřely průvanem.
     The door Refl opened draft
     “The door was opened by the draft.”
Only an argument with the Theta-role Agent can be inserted into a transitive or Causative context as SO (cf. 38), and only a Thematic Object can become part of a structure which is either Unaccusative (as in the examples 31/35) or an Anti-Causative (as in 37). Anti-Causatives, just like impersonal passives, block in Czech the assignment of an Agent Theta-role to the external argument and thus prevent an Agent Theta-role from being assigned in (37), so that a Passive cannot be built (cf. 38). We analyze these examples in detail in Chapter 7 of this book.
2.5.2 Categorization and the Function of Labels at the Interfaces as Interpretative Tools for Narrow Syntax We can see that the syntactic structures are not determined by the individual lexical meaning of the verbal root √otevřít “to open” but rather by its categorial status much in the way how we differentiate between individual lexical meanings of words and their categorial status as representative of a class17. The restrictions 17 Cognitive Linguists (Langacker Ronald W. 1990. Concept, Image, and Symbol. The Cognitive Basis of Grammar. Berlin, New York: Mouton de Gruyter) usually replace the
of the kind we can see in examples (30)–(39) of section 2.5.1 of similar or same kind of categorial meanings of syntax that shape semantics are introduced throughout different chapters of this book. Ad (ii) A root/ELU must constitute the appropriate input for generation of larger syntactic objects (SO). Previously, we have argued for a theory in which Merge is just external Merge, and that the Mental lexicon includes a finite set of lexical categories and a set of formal features that determine and restrict the computation procedure, peering from the Interfaces back to Narrow syntax. This major idea presupposes a theory of computation where interpretation at LF (CI-interface) and PF rules (SM-interface) communicate with Spell-Out after First Merge (postsyntactically) via a Procedure called Analyze and Remerge. In the subsequent part of this chapter, we try to give a short exposition of this idea on a simple example of Clitization in Czech, French, Italian (and partly in German18) where we can see that pronominal argumental clitics operate first on the level of narrow syntax (NS) to be assigned Theta-Roles in the Theta-Domain (VP) and case in a K-Phrase; notion of modularity and division of labor between syntax, morphology, phonology, and semantics by some vague notions of “Cognitive commitment” and “Generalization commitment.” Whereas a lexical or functional category in Generative Linguistics is defined by necessary and sufficient conditions of the taxonomy of the categories themselves, Cognitive Linguistics replaces taxonomy by intuitive notions of the mind (something like Prototype Semantics does, cf. Kosta 2009c) by replacing grammar with psychological notions which are hardly better than lexical or functional categories they criticize (cf. e.g. Lakoff, George. 1991. Cognitive versus Generative Linguistics: How commitments influence results. Language and Communication. 11(1–2), 53–62). Typically, their assumptions fail if we look at the methods and analysis the representatives use in order to reject Generative Syntax in more detail, cf. Evans, Vyvyan, and Melanie Green. 2006. Cognitive Linguistics. An Introduction. Edinburgh: Edinburgh University Press. – Cf. the criticism in Panagiotidis (2014), 1.5. The notional approach revisited: Langacker (1987) on the notion Noun and Verb. 18 We do not want to say that German and some German dialects are clitic languages; rather we will assume that there are some elements that could be candidates for cliticization/clisis as a phonological operation at PF (but not operative in NS) that, however, is more or less a problem if we want to postulate a theory of pronominal clitics in general. In the broad sense, cliticization, clisis, and pronominal and verbal clitics should not be identified under the same term since there are many more phenomena called cliticization or clisis which include also non-pronominal elements, while languages with pronominal or verbal auxiliary clitics and their prosodical and syntactic properties build a rather small subset of languages, cf. e.g. Corbara, PhD diss. and Kosta (2002), Kosta and Zimmerling (2014), Zimmerling and Kosta (2013), and Uhlířová, Kosta, and Veselovská (2017).
only then do they Clitic-climb (CC), and then the formal features are deleted before Spell-Out at PF. In some cases, the loss of F-Features can lead to a template which is not a legitimate object of the grammar. This forces the system to look back to the interfaces and reconstruct at LF, with a second trial of Remerge. In the previous examples, one could see a clear trend: there are verbs that represent the most important ELU of the clause. Firstly, verbs, as the prototypical representatives of predication, determine both the external and the internal arguments only after they have been projected as verbal categories into syntax. Secondly, the theta-grid of the verb not only determines the number of arguments, but also the syntactic connectivity and compatibility with the other constituents in the sentence via Theta-roles (e.g., Adjuncts: Adverbs of time and of place) when put into syntax (the unambiguous assignment of Theta-roles to their arguments within a verbal frame is determined by the Theta-Criterion, cf. Kosta 1992, chapter 2). Therefore, we tentatively consider the verb or predicate the center of a syntactic unit (even a minimal one, for example, in Small Clauses, as demonstrated later in Chapter 4). Thirdly, children build up their grammar (syntax) in different phases of language acquisition only after they have recognized and interiorized the notion of/the difference between individual terms, predicates (events/states/achievements/accomplishments), and operators; in other words: the existence of small clauses gives us a justification for our bottom-up approach from roots to sentences. Fourthly, it follows that it is always the verb which is the head of the sentence. These four basic assumptions also match all previously known grammatical models (Dependency and Valency Grammar, HPSG, GB, the Minimalist Program, LFG, Categorial Grammars, and even Construction Grammars). On the other hand, it is clear that our Zero-Hypothesis of the verbal root as the head of the sentence, based on four axioms, is not to be taken as a free ride but must prove correct for the present model by the empirical facts of natural languages. What seems, however, to be the case is that some parts of the intensional meaning of the syntactically not yet specified root (in the sense we have described under Chapter 1) apparently determine which ELU can be concatenated with which ELU and how ELUs are combined and connected with other ELUs in syntax. Thus, it is hardly imaginable that a referentially identical root such as √swim will be merged with a referentially identical root √swim under the same category without additional functional nodes or devices: *√swim√swim19. If two identical roots √swim and √swim are merged, there would be no criteria to differentiate
19 If we concatenate two identical forms with two different Labels with each other, one of the categories being nominal (a noun), the other verbal, we derive to swim a swim, which is a
them if they had the same label, say *v: thus, the same token must be somehow differentiated by functionally different Labels; in fact, it is the Labels which are responsible for the identification of two identical roots or of two different roots. Cf.:
(40) (i) Given any two formally non-distinct syntactic objects A, A, Merge(A, A) = {A, A, ...}
     (ii) Given any two formally distinct syntactic objects A, B, Merge(A, B) = {A, B}
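Merely as an illustration of (40) (our own toy encoding, not part of the RM formalism), the point that set formation alone cannot keep two non-distinct tokens apart, while interface-assigned Labels can, may be sketched as follows:

```python
# Without labels, two occurrences of the same root collapse in a set.
root = "√swim"
print({root, root})                       # {'√swim'} -- one member only

# With interface-recognized Labels, the two tokens become formally distinct
# syntactic objects and survive Merge as a two-membered set, cf. 'to swim a swim'.
token_v = (root, "L(verbal)")             # categorized as a verb
token_n = (root, "L(nominal)")            # categorized as a noun
merged = frozenset({token_v, token_n})
print(len(merged))                        # 2 -- the Labels keep the tokens apart
```

This is only a mnemonic for the discussion that follows: it is the Labels, recognized at the interfaces, that allow two otherwise identical tokens to be told apart.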
The Merge operation both in (40) (i) and in (40) (ii) in Narrow syntax does not differentiate between categorical or other formal features between two merged categories (since the Labels are unvalued and thus not visible). The recognition or awareness of two similar, identical or different syntactic objects (SO) can only be activated after Narrow syntax has received a feedback from the Interfaces, this means after Narrow syntax and Merge have arrived the interface PF/LF. An operation without this connection would lead to a potential infinite string of identical objects or non-distinct categories (the same tokens), which would display in fact a non-binary flat tree. Thus, interfaces always interact and invade Narrow syntax in order to activate the Generator with the instance Analyze. The Case (40) (i) without Syntax as instance exists quite frequently in natural phonology of natural languages, mostly expressing non-categorical syntactic functions or non-syntactic hierarchies but a pure linear phonology (sometimes assigned an expressive function of gradation and sometimes expressing quantification in natural languages). However, even in these cases, the linear phonology of identical material must be able to differentiate different meanings. Thus, for example, in some Amazonian languages, the same tautological linear structure may become a part of functional phonology and/or morpho-syntax (cf. examples in Mundurukú of the Mundurukú branch of the Tupí family, a language in Amazonas, which uses reduplication to express different numerals or tones to mark differences in meaning, cf. Aikhenvald 201520), and in this case there must be another label identified, which distinguishes two previously (phonologically apparently) identical objects on a deeper syntactic level (and in semantic terms) which allow for differentiation between them as an instance of
typical case of figura etymologica in which the output is licensed as well-formed because we have two different Labels: L(nominal) and L(verbal). This idea goes back to Panagiotidis’ original insights in Panagiotidis (2014). Cf. also Collins (2002) on “Eliminating labels.” 20 Mundurukú has over 120 classifier morphemes which characterize the referent in terms of shape, and they can be used in combination with verbs, nouns, demonstratives, and modifiers. A good example of reduplication and tone variation can be seen in -ba4
self-merge. This means that self-merge is allowed, but only if two different Labels can be assigned to two different SO. What has been assumed for two SO with identical formal features can be repeated in the case of two SO with identical semantic features. Consider the following idioms in English, German, and Russian in (41).
(41) a. Rhyme and Reason, Null and Void
     b. etwas hat Hand und Fuß, Null und nichtig
     c. Elki-Palki “Damn it” (literally: fir trees and sticks)
The two words in a binominal construction have both the same meaning and thus are idioms with a general (maybe multiplied) meaning, “something is ordered,” “something is not valid,” and “something is messy.” The binominal construction is neither formally nor semantically decomposable nor motivated (transparent). Last but not least, we are faced with the problem of derivation at different levels of the GEN system. Until now, we have introduced and analyzed only cases which clearly show syntactic derivation. But there are many more cases where the data do not really tell us from the surface which kind of derivation is at stake. We should differentiate at least between the following levels of derivation: (i) phonology (level of syllables), (ii) morphophonology (at the level of word formation), and (iii) syntax. Let us look at some data where these cases can be compared with each other: Phonology, Morphology, or Syntax?
(42) a. Petrovi bylo zakázáno jezdit do Československa, dokud komunisté vládli.
        PeterDAT AUX prohibitedPPP.impers.Pass to visit to Czechoslovakia until communists reigned
     b. Petrův zákaz jezdit do Československa, dokud komunisté vládli.
     c. Petrovi zakázali jezdit do Československa, dokud komunisté vládli.
        PeterDAT they-prohibited3Pl.indef.pers to visit to Czechoslovakia until the communists reigned
refering to a long rigid object which in combination with the general root a2ko3 forms the meaning “banana fruit” (numbers indicate tones in the original source) and xep3 xep3- mean 2 x 1 = two, so that xep3 xep3- pa4 a2ko3-ba4 is literally one-one-Classifier for cardinals-root banana-Classifier for a long rigid object “two bananas.” We can see that in cases where the syllable, expressing the number, is phonologically identical, it is the reduplication itself and the Classifier for cardinals which derive the meaning of the quantified noun phrase. Cf. Aikhenvald (2015): 297–298.
The sentences (42) (a)–(c) include a √root in three different formal settings: in (42) (a), bylo zakázáno, the form is an impersonal passive of the root √prohibit-, seemingly derived in syntax; in (42) (b), the root √prohibit- appears as a resultative nominalization zákaz, apparently derived in the lexicon; and in (42) (c), the root √prohibit- appears as zakázali, the so-called indefinite personal form 3rd Person Plural with the generic arbitrary meaning of pro, and is thus derived in syntax. But at the same time, we are faced with the problem of having a morphophonological alternation in the quantity of the vowels between the three roots, namely in (42) (a) zakázáno with vowel length twice, on the stem and the derivational suffix, in (42) (b) zákaz with one length on the first syllable, and in (42) (c) zakázali with one length on the second syllable. How can NS “know” in advance which of the three different roots has to be chosen from the LA if the Mental Lexicon consisted just of a list of formally indefinable and undifferentiated roots?
• Either we must consider a pre-syntactic level including word formation rules, much in the way of Distributed Morphology (Marantz 1984), that determines the categorial status of the roots before projecting into syntax, or
• We have to equip and enrich the Gen-Analyze system with a post-syntactic but pre-PF “feed-in mechanism” in order to equip PF with information about the appropriate morphophonology, including phonological (both suprasegmental and syllable-structural) information, after syntax has determined the categorial and functional status of the ELU.
In Cáha and Ziková (2016), these and similar data of the nominalization zákaz vs. the verbal root zakázat are derived both syntactically and phonologically (with zákaz being the starting root and zakázat being derived in syntax by applying a post-syntactic length deletion rule). But what will happen if another nominalization derived from an already syntactically formed root comes into play? – cf. bylo zakázáno “it has been prohibited” ⇒ zakázání kouření “the prohibiting of smoking.” The form zakázání – as opposed to the nominal form zákaz – is clearly a verbal root and entails all verbal categories including voice (features which a default verbal root should contain), and it is marked semantically for a dynamic processual event, whereas the nominal form zákaz is really projected in syntax as a nominal root with the semantic information [+result, −dynamic/−processual] (cf. also Karlík and Kosta on the nominalization of subordinate clauses in Czech). Again, how does the system of NS know which root to choose in advance from the LA if there were no Labels? We believe that Labels are the crucial answer
to the problem of syntactic derivation. As we will demonstrate later, resultative nominals are by character/structure real nominals (and not derived verbs); they are not derived in syntax, rather they are merged already in ML, since they behave syntactically very differently than gerundive nominals, and this fact can be observed even in Czech nouns ending in –ní/-tí. Thus, a rule of derived and phonologically shortened roots should be reconsidered (cf. 2.6.1.). We will now introduce further evidence from some languages which will demonstrate the need and necessity of Labels in or even before Narrow Syntax. Let us consider the following examples from Czech:
(42) Petr nechal děti[uα] sníst bonbony[uβ]   Narrow Syntax (NS)
     Peter CAUS childrenACC.PL eat sweetsACC.PL
⇒ Pronominalization:
(42’) a. *Petr je[vα] je[vα] nechal sníst   *1st Spell-Out = Crash (at PF)
         Peter themACC.PL themACC.PL CAUS eat
⇒     b. ??Petr je[α] je[β] nechal t[α] sníst t[β]   LF Reconstruction
⇒     c. Petr je nechal děti[α] sníst bonbony[β]   2nd Spell-Out = √
⇒ Remerge:
      d. *Petr je[β] nechal děti[α] sníst bonbony[β]   *3rd Spell-Out = Crash (RM-effect)
      e. Petr je[β] jim[α] dal dětem[α] sníst bonbony[β]   4th Spell-Out after Reanalyze, with the Selection of another Causative Verb from the Mental Lexicon allowing for two different Labels on the Clitics
The Czech example (42’a–e) is apparently an instance of Clitic Climbing in a “second-causative” construction: clitic climbing of the lower clitic je[β] (referring to the object bonbony AccPl) over the upper clitic leads to a PF with two identical labels21 (je[α] je[α]), that is, to a PF duplication of identical phonological output (homophony). The formal features of the two distinct SOs cannot be identified without further context. Only LF reconstruction can identify the two SO (CL) as two different tokens subcategorized with the same Case feature Accusative but derived from two different syntactic positions, thus being two non-identical CL (a lower CLacc is assigned Case Accusative by the lower lexical verbal head, the upper 21 Label A and A are identical if they either entail the same φ-features (for Agree; for nouns Number and Gender) and/or they are assigned the same Case. In (42’a), the well-formed Spell-Out is initiated by Reanalyze, leading to the Selection of another Causative Verb from the Mental Lexicon allowing for two different Labels on the Clitics, namely Dative Plural jim vs. Accusative Plural je.
by the upper functional Causative head nechat ‘to let, to make’, similar to ECM cases). This reconstruction effect feeds into a new Remerge situation with a second Spell-Out of the lower copy and Clitic Climbing of the upper CL to a CP-domain. What if the lower Clitic moves first, crossing the upper clitic? This case evidently leads to a Relativized Minimality effect (cf. 42’ d). The explanation could be that the lower accusative NP, as the target of a typical argumental transitive verb (with Theme as the lower Theta-role), cannot move over a higher argumental (direct object) NP. Thus, it cannot cross another argumental Clitic NP in the Accusative (with the Theta-role Agent), because the outer specifier position of the DP is occupied by the argument and constitutes a barrier qua Relativized Minimality (Rizzi) and thus is not accessible for successive movement of the lower accusative Clitic NP (following the PIC).
(43) *Petr je[β] nechal děti[α] sníst bonbony[β]   NS/LF Interface
     Peter themAccCl CAUS children eat candiesAccCl
     “Peter made the children eat them (the candies)”
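The step-wise logic behind (42') and (43) (build, try to spell out, crash on two identical clitic labels, reconstruct and reanalyze) can be caricatured in a few lines; the data structures and the collision test below are our own illustrative assumptions and do not formalize the author's derivational system:

```python
def spell_out(clitics):
    """Toy PF check: Spell-Out crashes if two adjacent clitics
    end up with identical labels (the je[a] je[a] homophony of (42'a))."""
    labels = [label for _, label in clitics]
    if len(labels) != len(set(labels)):
        return None                     # Crash at PF
    return " ".join(form for form, _ in clitics)

def derive_with_reanalysis(candidates):
    """Try each candidate (e.g. different causative verbs / Case frames from
    the Mental Lexicon) until one yields distinct Labels on the clitics."""
    for verb, clitics in candidates:
        pf = spell_out(clitics)
        if pf is not None:
            return verb, pf             # well-formed Spell-Out
        # otherwise: LF reconstruction + Reanalyze, pick the next candidate
    return None

candidates = [
    ("nechal", [("je", "ACC"), ("je", "ACC")]),   # (42'a): two ACC clitics -> crash
    ("dal",    [("je", "ACC"), ("jim", "DAT")]),  # (42'e): distinct Labels -> OK
]
print(derive_with_reanalysis(candidates))         # ('dal', 'je jim')
```

The point of the toy loop is only to show why a system with invasive interfaces can repair a PF crash by re-selecting from the Mental Lexicon rather than by filtering at the very end of the derivation.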
Another possible explanation would follow from the Minimal Link Condition: in cases of two competing NPs, only the higher one (with minimal distance to the goal and with the prominent Theta-role Agent22) can target the landing position in C0, leaving the lower copy (Theta-role Theme) stranded. The original idea of the MLC was that “at a given stage of derivation, a longer link from α to K cannot be formed if there is a shorter legitimate link from β to K” (Chomsky 1995: 295). Later, in Chapter 4 of the Minimalist Program, Chomsky relies on the notions of “checking relation” and “attract,” cf. (44).
(44) Minimal Link Condition (MLC)
     K attracts α only if there is no β, β closer to K than α, such that K attracts β. (Chomsky 1995: 311)
22 We can see that Narrow Syntax cannot be entirely blind to the hierarchy of Theta-roles, because the reverse scope of Theta-roles would lead to a weird interpretation: if the Agent (children) were ranked lower than the Theme (candies), the interpretation would be something like “it was the candies that ate the children,” a situation not viable in any of the possible worlds. But in fact, the MLC prevents the lower argument, which is assigned the Theta-role Theme, from crossing the higher argument Agent, by virtue of an NS rule (the MLC) which ranks arguments not only by functional roles in NS (subject vs. object) but also by thematic roles at LF.
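The "attract closest" logic of the MLC in (44) can be given a toy procedural rendering (our own simplification, keeping only relative structural distance and the matching feature; nothing beyond what (44) itself states is intended):

```python
def attract(probe_feature, goals):
    """MLC as 'Attract Closest': K may attract a goal bearing the relevant
    feature only if no matching goal is closer to K.
    `goals` is ordered by increasing distance from the probe K."""
    for goal, features in goals:          # scan outward from K
        if probe_feature in features:
            return goal                   # the closest matching goal wins
    return None

# Superiority / wh-island flavour: two wh-phrases compete for one C[wh] probe.
goals = [("to whom", {"wh"}), ("what", {"wh"})]   # 'to whom' is closer to C
print(attract("wh", goals))   # -> 'to whom'; attracting 'what' over it violates the MLC
```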
In NS, we have many examples of island effects which could be explained in the same way. Traditionally, the explanation for a Wh-movement violation was the extraction of a Wh-element out of another Wh-phrase. This Wh-island is created by an embedded sentence that is introduced by a wh-word. A good example is (45): the complement of wonder in (45) b. is a wh-island. The contrast with (45) a. serves to show that it is the wh-element to whom that blocks the extraction of what.
(45) a. what did you think [Bert gave t to Bob]
     b. *what did you wonder [to whom Bert gave t t]
The ill-formedness of (45) b. is usually explained as a Subjacency violation. In the Minimalist Program, wh-islands are analyzed as an effect of the Minimal Link Condition23; cf. also Chapter 3 of this book. The same blocking effects can be shown in the case of Dative and Accusative object clitics in Czech and French and their asymmetrical behavior in Control vs. Causative constructions. In French, clitic climbing is allowed in causatives, but not in control constructions; exactly the same evidence is found in Slavic languages, which allow clitic climbing but not out of certain control constructions. French:
(46) a. Jean les a fait manger à Paul (Causative)
        Jean themCL.ACC.PL AUX CAUS eat to Paul
        “Jean made Paul eat them.”
     b. *Jean les veut manger (Subject Control)
        Jean themCL.ACC.PL wants eat
        “Jean wants to eat them.”
     c. *Jean les a dit à Paul de manger (Object Control)
        Jean themCL.ACC.PL AUX told to Paul to eat
        “Jean told Paul to eat them.”
23 The MLC states that derivations with shorter links are preferred over derivations with longer links, also known as Shortest Link or Shortest Move. In the Minimalist Program, the MLC accounts for superiority condition effects, wh-islands, and super raising. In chapter 4 of the Minimalist Program, the MLC is incorporated into the definition of Attract.
(47) a. Jan je dal jíst dětem. (je = bonbóny) (Causative)
        Jan themCL.ACC.PL CAUS eat childrenDAT.PL
     b. Jan je chce jíst. (Subject Control)
        Jan themCL.ACC.PL wants to eat
     c. *Jan je řekl Pavlovi sníst. (Object Control)
        Jan themCL.ACC.PL told PaulDAT eat
Directive verbs (verbs of the order type) allow clitic climbing out of object control clauses only marginally:
(48) a. *Jan je poručil/nařídil/rozkázal... [lower Dative Phrase NP Pavlovi [PRO sníst]] = Jan poručil/nařídil/rozkázal [lower Dative Phrase NP Pavlovi] [PRO sníst banány]
The direct object/Accusative Clitic cannot cross the lower indirect Object in the Dative of the embedded clause, because the lower Dative NP is presumably in a lower Applicative Phase position out of which it cannot move further up. If the lower Dative addressee Theta-role NP is an applicative NP with the Theta-role Benefactive or Malefactive, the status improves from ungrammaticality to a merely marginal semantic unacceptability (marked with #), cf. (48) b.:
(48) b. #Jan je poručil/nařídil/rozkázal [higher ethic Dative Phrase NP Pavlovi [PRO sníst]]
The explanation could be that the lower applicative phrase, as the target of typical argumental Dative constructions (often with Addressee as Theta-role), serves as a Barrier and is impenetrable for the further computation of another argumental (direct object) NP. Thus, it cannot even be crossed by an argumental Clitic NP in the Accusative (with the Theta-role Theme), because the outer specifier position of the Dative NP/DP is occupied by the argument and constitutes a barrier qua Relativized Minimality (Rizzi), and thus it is not accessible for successive movement of the accusative Clitic NP (following the PIC). In the case of optional quasi-argumental or pseudo-argumental positions (such as those with a Dativus ethicus, Benefactive, or Malefactive Dative), the NP is not argumental but rather an adjunct; in the derivation, adjuncts are inserted into the structure post-syntactically as late insertion (much in the sense of late adjunct insertion as formulated in Krivochen and Kosta 2013 and Kosta and Krivochen 2014b).
(48) c. #Jan je poručil/nařídil/rozkázal [higher applicative Phrase NP Pavlovi] [PRO sníst t]]
The marginal #-interpretation in (48) b. is not due to ungrammaticality but to processing (short-term memory) problems caused by non-locality, i.e., the distance from the externally merged position. This is confirmed by subject control verbs, which do not allow for an addressee interpretation of the Dative NP at all, but they do allow for
a benefactive or malefactive Theta-role interpretation of the Dative NP, which is assumed to stand higher in the tree, presumably in a higher applicative position:
(48) d. Pavel je chtěl [NPDat higher applicative Phrase Petrovi] sníst (banány)
        Paul themCL.Acc wanted (of) Peter eat (bananas)
In Italian, CL climbing out of object control verbs is, however, not licensed.
(49) a. Giovanni ha ordinato a Paolo di mangiarle
        Giovanni AUX ordered to PaulDAT to eat-themCL.ACC
     b. *Giovanni le ha ordinato a Paolo di mangiare
        Giovanni themCL.ACC AUX ordered to PaulDAT to eat
It is possible to use the verb ordinare in the sense of “to order,” but then the clitic cluster must be topicalized and thus stands in CP:
(49) c. Giovanni [TOP gliele] ha ordinate
        Giovanni to-himCL.DAT.SG-themCL.ACC.PL AUX orderedPPP.ACC.PL
Clitic Climbing is allowed in Subject Control clauses if the Dative Clitic has the Theta-role Malefactive/Benefactive and not Addressee, that is, if it is base-generated in a high applicative phrase.
(50) Paolo gliele voleva mangiare
     Paul CL-of-him CL-them wanted eat
The asymmetry between ditransitives and causatives shows up in contexts with both Dative CL and Accusative CL:
(51) a. Petr chtěl Pavlovi darovat zesilovač
        Peter wanted PaulDAT make-a-present amplifierACC
     b. Petr mu ho chtěl darovat   √ CLDAT > CLACC
     c. Petr *ho mu chtěl darovat   *CLACC > CLDAT
We should expect the same asymmetry to disappear when a causative verb is chosen instead of a subject control verb; and this is exactly what happens. The prediction is borne out: both word orders are allowed, CLDAT > CLACC in (52) b. and CLACC > CLDAT in (52) c.:
(52) a. Petr dal Pavlovi ušít oblek
        PeterNOM gaveCAUS (to) PaulDAT sew dressACC
     b. Petr mu ho dal ušít. √
        Peter himCL.DAT itCL.ACC
     c. Petr ho mu dal ušít. √
Following Pylkkänen’s PhD dissertation (2002/2008), benefactives are typically in a higher applicative position, and they are also typical of causative constructions such as
(53) He let his wife eat the food
as opposed to the addressee-oriented double transitive construction in
(54) I baked him a cake
(55) a. High Applicative (Chaga): [VoiceP He [Voice [ApplP wife [ApplBen [VP eat food]]]]]
     b. Low Applicative (English): [VoiceP I [Voice [VP bake [ApplP him [Appl cake]]]]]
The situation with the asymmetry of case assignment of Dative and Accusative in object control constructions shows up in German even in clauses with full DPs (non Clitic), where the precedence of Dative over Accusative is evident:
(56) a. Paul befahl dem Peter das Kind abzuholen
        Paul asked theDAT PeterDAT theACC childACC pick-to-up
        “Paul asked Peter to pick up the child.”
     b. #Paul befahl das Kind dem Peter abzuholen
        Paul asked theACC childACC theDAT PeterDAT pick-to-up
Here we do not even get the malefactive interpretation of the Dative DP dem Peter. In German, we cannot do Clitic Climbing except in one case, with the pronoun es being a light or weak pronoun, following Starke and Cardinaletti (1999):
     c. Er befahl *es, ihm abzuholen
        He asked it him pick-to-up
        “He asked him to pick it (the child) up.”
     d. Er befahl ihm, es abzuholen
        He asked him it pick-to-up
        “He asked him to pick it (the child) up.”
We know that in German, as a rule, midrange scrambling should allow for displacement and free scrambling between indirect and direct objects:
(57) a. Paul gibt dem Professor das/ein Buch
        Paul gives the professorDAT the/a bookACC
     b. Paul gibt das Buch/ein Buch dem Professor
        Paul gives the/a bookACC the professorDAT
     c. Paul gibt es ihm / Paul gibt’s ihm
     d. *Paul gibt ihm es / *Paul gibt‘m es
Again, the situation with both full DPs and Clitics changes even in German, when we have causative constructions:
(58) a. Paul ließ den Peter das Kind abholen
     b. Paul ließ das Kind [FOK den PÉter] abholen
     c. Paul ließ es ihn abholen.
     d. Paul ließ ihn es abholen.
The explanation for this complex scenario must lie within the grammar's division of labor between syntax (Case assignment) and semantics (Theta-role assignment), and especially within the ability or inability to transmit F-features to NPs in long-distance environments in non-permeable Phase contexts. Object control constructions are such a context: they involve a TP, which does not itself count as a Phase under standard assumptions of Phase theory (only vPs, CPs, and DPs are real phases, following Chomsky 2001a; cf. also Citko 2014). This leads us to the conclusion that Causatives are in fact not separate Phases but a subpart of the vP Phase, and this allows for double case assignment in the sense we have already postulated for transitive and causative constructions.
2.6 Division of Labor: I-grammars and a Theory of Meaning (Mental Lexicon)

To differentiate between different degrees of grammaticality, we have to recall the difference between *grammatically not well formed and #semantically non-interpretable. In the first case, either a deep-rooted syntactic principle of UG has been violated, or Labels in the Narrow syntax of a language-specific Grammar remain unevaluated.
In other words, either some universal constraint introduced in Ross (1967) has not been respected, or Labels in the Narrow syntax of a language-specific Grammar remain unevaluated. In the second, semantic case, the interpretation of a category is not possible in syntax because some Labels of tokens are not identifiable/interpretable. While the present study will recall and deepen the first type of violation of UG principles in the following paragraphs, it attempts to explore in more depth the different degrees of grammaticality caused by violations of semantic (LF) or syntactic (NS) rules. As we have already stated, the present study therefore does not deal with the semantic decomposition of lexical meaning; rather, it is concerned with the interaction of roots and their projection in the clause. In our opinion, the theoretical approach of Noam Chomsky's Generative framework is correct because it postulates – for independent reasons – an autonomous status for the module of Narrow syntax, and considers the most important feature of the Narrow Syntax of FLN in Minimal Computation to be expressed by two fundamental operations applicable only to human language: Merge and Recursion. In the end, the two basic modules of FLN – namely the SM and CI interfaces and Syntax – must be involved in a division of labor in the brain in order to perform the basic operation Merge, so that the two interfaces SM and CI can be read off before Spell-Out. Here we are interested in particular in the question which parts of the Mental Lexicon are selected for the operation Merge already from the lexical array and which parts have significance only at the interfaces (especially at the SM/CI interfaces). We base our major idea on the existence of Crash-proof grammars as part of Minimal Computation in syntax: such a syntax does not wait to fulfill FI only at the Interfaces but instead peers back and forward in order to avoid a Crash at every single step of the derivation in Narrow syntax; each derivational step is thus checked against the notion of an optimal solution, and in case it crashes another candidate has to be chosen (Frampton and Gutman 2002: 90). We are still following our major line of methodological principles (1)-(4), taking up now principles (2)-(4):
2. small number of axioms;
3. shortness of evidence;
4. small amount of logical constants.
2.6.1 Small Number of Axioms of the Model of I-Grammar

Let us take a closer look at some facts, as a first approximation to the problem of the relation between ELU, Narrow syntax (NS), SM/PF, and CI/LF.
To demonstrate how FLN computes ELU, we need a mechanism which selects and merges ELU within the model of FLN, and we reconsider the model of computation in Tab. 1:

Tab. 1: FLN. Feed (roots/semantic primitives) enters the Type-Array under the Conservation Principle; in the dynamic workspace the GEN function applies; the output is then Analyzed and Transferred to the C-I and S-M interfaces.
This model is not yet complete, since it does not consider the possibility of deriving roots via syntactic projection and taking the syntactic output as potential input for further computations, for example, at a deeper level (of word formation or morphology). To further develop this idea, we can consider cases in which derivational products of syntax (such as the derived nominals of passives) may serve as input material for the Mental Lexicon, which itself contains a computational level for word formation before serving again as LA input for further computations in syntax. Consider Cz. zákaz – zakázat – zakázáno and zakázání. The Mental Lexicon contains as first input in its LA the root √kaz, which serves as output for word formation via prefixation, deriving the new word zá-kaz "prohibition"; this itself serves as a new MLU for the input in the LA {zákaz}, which projects into Narrow Syntax the phrase [VP zakázat kouřit] "to prohibit smoking" or [VoiceP zakázáno kouřit] "it is prohibited to smoke." The first word of this phrase can further serve as a stem and a possible input for a new LA input {zakázán}, which can become part of a syntactic computation at the level of word formation or syntax with the output zakázání "the/a prohibiting", itself a potential input for the LA {zakázání} that becomes a projected output of the syntax [NP zakázání kouření] "the prohibiting of smoking." We can see that, under the perspective of applying the principle of Recursivity, FLN applies this UG principle not only in syntax but at every level of human language creativity: the level of words, the level of word formation, the level of morphology, and the level of syntax.
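To make the feedback loop just described more concrete, the following toy sketch (in Python, purely illustrative and not part of the theory) models how the output of one cycle re-enters the lexical array as input for the next, using the Czech √kaz example; the function names and the simplified derivational steps are my own assumptions, not an implementation of FLN.

```python
# Toy illustration: the output of one derivational cycle re-enters the
# lexical array (LA) as input for the next cycle, mirroring
# zákaz -> zakázat -> zakázáno / zakázání.

def prefixation(root: str, prefix: str) -> str:
    """Word formation: derive a new lexical item from a root."""
    return prefix + root

def project(head: str, complement: str) -> list:
    """Narrow syntax: project the head over a complement (bracketed phrase)."""
    return [head, complement]

def nominalize(stem: str, suffix: str = "ání") -> str:
    """Further word formation over a syntactically used stem."""
    return stem + suffix

# Cycle 1: root -> word formation -> new LA item
la = {"√kaz"}
zakaz = prefixation("kaz", "zá")          # 'zákaz' ("prohibition")
la.add(zakaz)

# Cycle 2: the new item projects in narrow syntax
vp = project("zakázat", "kouřit")         # [VP zakázat kouřit]

# Cycle 3: the syntactic output feeds word formation again, and the result
# re-enters the LA for yet another syntactic projection
zakazani = nominalize("zakáz")            # 'zakázání' ("the prohibiting")
la.add(zakazani)
np = project(zakazani, "kouření")         # [NP zakázání kouření]

print(la)
print(vp, np)
```

The sketch only illustrates the recursive re-use of outputs as inputs; it says nothing about how the actual morphological operations are constrained.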
The system abstracted from real natural language is a Model of Mental Grammar; call it, for expository reasons, FLN. This means that FLN is not a real natural language; rather, it is an axiomatic model which concentrates on two central properties of FLN: (I) Generation and (II) Interpretation.
(59) Generation equals the standard Operation Merge:
     i.   Merge (α, β), α ≠ β – but α and β share ontological or structural format – Distinct binary Merge (Boeckx 2010a, Krivochen 2011b for a principled explanation)
     ii.  Merge (α, β), α = β – Self Merge (Adger 2011)
     iii. Merge (α, β, γ …), α ≠ β ≠ γ – Unrestricted distinct Merge
For the time being, our main working hypothesis will be the assumption that Merge is (i) distinct binary Merge, and that all other types of Merge, be it (ii) Self Merge or (iii) unrestricted distinct Merge, are not part of the Generator (cf. Krivochen and Kosta 2013: 43). Furthermore, Merge (both internal and external) is a structure-building operation driven or triggered by structure-building features.

(60) Generation = Merge
     a. Merge (internal and external Merge) is a structure-building operation.
     b. Merge is triggered by structure-building features [•F•].

(61) Agree
     a. Agree relates functional heads and arguments to ensure Argument encoding: exchange of Case/φ-features.
     b. Agree is triggered by probe features [*F*].
     c. Agree triggers Probe-Goal relational dependencies/displacement in a Searching Domain (ΣΔ) in NS.
Generalized MLC (Takano 1996, Kitahara 1997, Müller 1998, Fitzpatrick 2002, Rackowski and Richards 2005): in a structure α ... [... β ... γ ...] ..., movement to [•F•] can only affect the category bearing the [F]-feature that is closer to [•F•] (where β is closer to α than γ if β dominates or c-commands γ). A last but very important prerequisite for our theoretical take on derivation is that we believe that a derivational Model of FLN has to explain the relation between Spell-Out as a crucial point of departure (which mediates between Narrow Syntax (NS) and the interfaces of the C-I/SM modules, LF-syntax and PF-syntax). What is the order of computation, and how can it be falsified? To answer this intriguing question, let us first consider the single modules and their functions within the FLN Model in Tab. 1 (Chapter 1), repeated in Tab. 2 (Chapter 2), which descends from the Theory of Radical Minimalism (Krivochen 2010, 2011, Krivochen and Kosta 2013).
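Purely as an illustration of how the closeness condition in the Generalized MLC could be checked mechanically, the following sketch (my own toy encoding, not the authors' formalism) represents a structure as parent-child pairs and tests whether a candidate for movement to [•F•] is the closest bearer of [F].

```python
# Illustrative sketch of the Generalized MLC: beta blocks movement of gamma
# to [•F•] if beta also bears [F] and dominates or c-commands gamma.
# The tree and feature assignments below are invented for the illustration.

PARENT = {            # a small toy tree: alpha > {beta, delta}, delta > {gamma}
    "beta": "alpha",
    "delta": "alpha",
    "gamma": "delta",
}
FEATURES = {"beta": {"F"}, "gamma": {"F"}}   # both bear the relevant [F]

def dominates(x, y):
    """x dominates y if x lies on the path from y up to the root."""
    while y in PARENT:
        y = PARENT[y]
        if y == x:
            return True
    return False

def c_commands(x, y):
    """x c-commands y if a sister of x dominates (or is) y (binary branching)."""
    sisters = [n for n, p in PARENT.items() if p == PARENT.get(x) and n != x]
    return any(s == y or dominates(s, y) for s in sisters)

def mlc_allows_movement(candidate, bearers):
    """candidate may move to [•F•] only if no other [F]-bearer is closer."""
    return not any(
        other != candidate and (dominates(other, candidate) or c_commands(other, candidate))
        for other in bearers
    )

bearers = [n for n, fs in FEATURES.items() if "F" in fs]
print(mlc_allows_movement("beta", bearers))   # True: beta is the closest bearer
print(mlc_allows_movement("gamma", bearers))  # False: beta c-commands gamma
```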
2.6.2 The Architecture of Mental Grammar: The Computing Mind

In this chapter, we will explain the architecture of the Mental Grammar – call it, as a first approximation, the Computing Mind (CM). Our goal will be to assume a hand-in-hand division of labor between NS and the interfaces SM/CI, which work in parallel within the Dynamic Workspace, comparable to a Model of the Quantum Mind (Krivochen 2011)24 and the Frustrated Mind (Kosta and Krivochen 2017). We will make just a few crucial and fundamental assumptions, which we will try to prove in the course of this book:

(1) FLN structure generation only comprises External Merge (both monotonic and non-monotonic, see Uriagereka 2002), a strong commitment to a non-transformational, monostratal theory (Culicover and Jackendoff 2005); and only humans can make use of the finite means provided by the sound-meaning interfaces to weakly generate potentially infinite pairs Exp = (π, λ) with information interpretable by the performance systems.

(2) FLB displays the major human language-specific property, namely that the linking of sound to meaning (and vice versa) in an intentional way arises as an emergent property of the nonlinear dynamics of the interactions between the syntactic engine and the C-I/S-M systems. While the sole presence of sound-meaning pairs might be shared with non-human species (e.g., superior primates), their computational manipulation at different levels of formal complexity is hypothesized to be human-specific, related to the structure of the genome.

(3) Only humans' computational capacities, dynamically ranging from finite-state grammars (Markovian processes) to Turing-computability, allow full recursion (i.e., not only head-tail recursion or so-called true recursion, but also inter-sentential recursion via referential dependencies and cross-clausal connectives: conjuncts and disjuncts, Quirk et al. 1985). Only humans' computational capacities are able to account for the whole range of sound-meaning relations and the internal structure of each interface (cf. Kosta and Krivochen 2013, Kosta and Krivochen 2014).

The leading idea consists in avoiding superfluous levels of representation, preserving conceptual and formal uniformity within the Model, and avoiding as many ad-hoc stipulations as possible.

24 Cf. Krivochen, Diego Gabriel. 2011. The Quantum Human Computer Hypothesis and Radical Minimalism: A Brief Introduction to Quantum Linguistics. International Journal of Language Studies 5 (4), 87–108.
The assumption that human language is a perfect system of symbols relating Sound to Meaning in a principled, non-trivial way, using finite means in order to generate a potentially infinite number of strings, remains a postulate to be confirmed or at least partly verified in this book.

The model of grammar that we want to propose is similar to the model of the Minimalist Program in its latest development, but, at the same time, it differs in terms of the individual components, especially with regard to the role of Spell-Out and the interfaces PF/SM and LF/CI. The Mental Lexicon (ML) includes lexical, functional, and mixed categories, which we characterize in terms of their input function as follows. Lexical categories and functional categories form independent modules of the Mental Lexicon. The lexical categories (N, A, V, P) contain (a) information on the intensional semantics of the root/root entries (based on the opposition between individual terms and events); (b) the Theta-grid information, which contains the predicate-argument structure and the information about the thematic hierarchy; (c) the event structure, which contains the information about states, activities, accomplishments, and achievements of predicates; (d) the granularity information, which contains the distinction (mass/count/individual) in nouns; and (e) the operator information, which contains scope and truth values. Verbal and nominal semantic features such as [Modality], [Time: reference time/R, time of speech/S, event time/E], [Voice], [Aspectuality], [Determination], etc. are not part of the lexical entry and thus not noted in the Mental Lexicon, but depend on the setting of the Operator (Speaker/discourse relation) and the respective functional category in a certain local semantic workspace. The abstract functional categories (light vP, CP, VoiceP, CausP, ModP, DP) contain abstract grammatical features (such as nominal formal features: Case, φ-features (Person, Number, Gender, Determination); and verbal formal features: Voice, Mode, Aspect). The Syntax component is a blind mathematical-algebraic recursive procedure that operates on structures of the ML: it contains elementary Phrase structure rules, external Merge as the starting rule, and internal Merge as a copy-delete rule or Type-token mechanism (for displacement reasons). Spell-Out is the "hub" that connects the two interfaces PF/SM and LF/CI. It can analyze both pieces of information (formal characteristics at PF and semantic features at LF), interpret them, and send the information to Transfer, determining whether a structure is well formed/ill-formed (PF) or interpretable/non-interpretable (CI). PF and LF cannot interpret themselves and do not interpret each other. It is the major function of Spell-Out (or call it Interpret) to Analyze and Interpret these levels of representation. Since Spell-Out fulfills the interpretative and analyzing functions, it can also stop at each derivational path (i.e., it is crash-rife), and it can filter out the non-optimal candidate(s).
Filtering out means that the sub-optimal candidate of a phase is not sent to Transfer. Ungrammaticality means a Crash before being sent to Transfer. If a crashed structure is nevertheless "transferred" to Transfer, it ends up as an ungrammatical sentence. Only the best candidate at both interfaces is selected by Spell-Out at the end and sent to Transfer. Transfer has no monitoring function, since it can look neither backward nor forward; it operates as the last and final component in the derivation. Transfer is subject to the Phase Impenetrability Condition (PIC). Phases are thus not instances of syntax, but the final Output of the PF/LF interfaces.
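The division of labor just sketched, with Spell-Out analyzing both interface representations and passing only converging candidates to Transfer, can be caricatured in a few lines of code. Everything here (the predicates pf_well_formed and lf_interpretable and the candidate format) is an assumption of the illustration, not a claim about the actual interface conditions.

```python
# Caricature of Spell-Out as the filtering "hub": only candidates that are
# well formed at PF *and* interpretable at LF are sent to Transfer.
# The two predicates below are placeholders, not real interface conditions.

def pf_well_formed(candidate: dict) -> bool:
    return candidate.get("pf_ok", False)

def lf_interpretable(candidate: dict) -> bool:
    return candidate.get("lf_ok", False)

def spell_out(candidates: list) -> list:
    """Analyze both interfaces; filter out crashing candidates."""
    return [c for c in candidates if pf_well_formed(c) and lf_interpretable(c)]

def transfer(candidates: list) -> None:
    """Transfer has no monitoring function: it simply ships what it receives."""
    for c in candidates:
        print("Transferred:", c["string"])

candidates = [
    {"string": "Peter does not like dogs", "pf_ok": True,  "lf_ok": True},
    {"string": "*Peter likes not dogs",    "pf_ok": False, "lf_ok": True},
    {"string": "#John buttered his bread with socks", "pf_ok": True, "lf_ok": False},
]
transfer(spell_out(candidates))   # only the first candidate reaches Transfer
```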
2.6.2.1 The Puzzle: Derivation and Time

In order to explain in more detail our Model of Mental Grammar (or Computing Mind), let us have a brief look at the interplay and mutual interdependence between the interfaces PF and LF and Spell-Out (and all the other levels).
(62) a. Peter doesn't like dogs.
     b. Peter does not like dogs.
     c. *Peter likes not dogs.
(63) a. Peter hasn't liked dogs for a long time.
     b. Peter did not like dogs for a long time.
(64) a. Peter can't swim.
     b. Peter cannot swim.
     c. *Peter swims not.
(65) a. *Peter hasn't any dog.
     b. *Peter has not any dogs.
     c. Peter has no dogs.
     d. Peter does not have any dog.
     e. Peter doesn't have any dogs.
(66) a. Do you have a dog?
     b. *Have you a dog?
     c. Have you got a dog?
     d. *Do you have no dog?
     e. Don't you have any dog?
The apparent problems arise with the wrong usage of the main verb have vs. the auxiliary verb have in English, but also with the distribution of negative and positive polarity items and of main vs. auxiliary verbs. Thus, in negative polarity contexts, present-tense root clauses display a construction in which the negated verb cannot be the main verb but must be the auxiliary or the expletive do-verb (62). In negated past tense clauses, both auxiliaries, have and do, can be used (cf. 63 a., b.), and they are both optimal candidates.
In modal root clauses (64), the modal auxiliary must be negated, not the main verb. In possessive root clauses, the main verb is to have "to possess", and thus negation of it should be ruled out (cf. 65 a., b.), but the structure can be passed to Transfer if the negation is not not but no (cf. 65 c.). This is so because has is here the main verb and the negation no negates the object DP rather than the verb. In yes/no root clauses (cf. 66 a.-e.), main verbs can only be used with the expletive do (cf. 66 a.), or if the possessive form is have got "to receive" (cf. 66 c.). Why is (66) c. possible? Simply because it is the auxiliary have which is in the scope of the polarity operator (yes/no), while got is the past participle of to get "receive" (gotten) and thus the main verb. Why is (66) d. ungrammatical and (66) e. grammatical?
2.6.2.2 Economy and Derivation

All these sentences have to be analyzed by Spell-Out mutually and simultaneously at PF and LF: all rules which apply at PF, such as do-support (cf. Pesetsky 1989) or the contraction of negation and auxiliary verbs (isn't, doesn't, hasn't as auxiliary, but not *hasn't as a main verb), must be read at PF, and all rules which concern LF must be interpreted and Spelled-Out at LF. Some of these rules, such as contraction and do-insertion, must be read simultaneously and independently of each other, but they apply at both interfaces to the same output; this means they must feed into the same input (cf. 66 d. as opposed to 66 e.). The question remains: at which stage of the derivation does the insertion of affixes into lexical roots take place? Since we do not assume that external or internal Merge are triggered by formal features (and we have very powerful empirical evidence from language acquisition for this, cf. Kosta and Krivochen 2014c), we believe that morphology insertion happens post-syntactically (similarly to the model of Distributed Morphology, Halle and Marantz 1993) and is interface-driven. We can see this in many ways, for example, not only in the existence of prosodic rules of contraction, clitic templates, and do-insertion but also in many other cases which we will reconsider in the course of this study. Let us first analyze the structures, here repeated as (67)–(69):
(67) a. Peter doesn't like dogs.
     b. Peter does not like dogs.
     c. *Peter likes not dogs.
(68) a. Peter hasn't liked dogs for a long time.
     b. Peter did not like dogs for a long time.
(69) a. Peter can't swim.
     b. Peter cannot swim.
     c. *Peter swims not.
The standard explanation has been that in English we have certain instances of V-to-I movement. Thus, overt verb movement includes standard copula constructions (70) and other examples including the auxiliary verb be (71); possessive constructions such as (65) c. and (72), at least in some British dialects (British English); other examples involving auxiliary have (cf. 68 a., 73); and standard modal auxiliaries (69 and 74). These data correlate with the observation that exactly these verbs (but not main verbs) also undergo Auxiliary Inversion (cf. 75 a.-e.).
(70) Peter is usually available.
(71) Peter is usually sleeping (at this late hour).
(72) Peter has usually no money.
(73) Peter has usually sought help by his father.
(74) Peter might usually have received help by his father.
(75) a. Is Peter usually available?
     b. Is Peter usually sleeping?
     c. Has Peter usually no money? *US English / British English √
     d. Has Peter usually sought help by his father?
     e. (*) Might Peter usually have received money from his father?
The reasoning for these contrasts was articulated in terms of strong vs. weak features of Main verbs vs. Auxiliary (or weak) verbs in English. The same parametric differences go far beyond the border of parametric variation between lexical vs. functional categories of one single language. If we think about what has been said with respect to the differences between adverb placement and verb movement in French vs. English and the standard "explanations" of Mainstream Generativists (MSG), it seems to me that UG principles and language-specific rules are confused, and this is exactly what costs Generative Grammar credibility. We try to make a strong point about Generativist approaches to Models of FLN in order to avoid shortcomings and possible sources of misinterpretation. Consider the case of V-to-I Raising vs. Lowering in interaction with Negation and Adverbs in French and English (Pollock 1989, Chomsky 1989, Pesetsky 1989).

(76) V-to-I Raising in French
     a. Marie ne parle pas t français
     b. Marie parle souvent t français
(77) *I-to-V Lowering in French
     a. *Marie ne t pas parle français
     b. *Marie t souvent parle français
(78) *V-to-I Raising in English
     a. *Peter speaks not t French
     b. *Peter speaks usually/often French
(79) I-to-V Lowering in English
     a. Peter [INFL t] often/usually speaks English
     b. Peter [INFL does] [NegP NEG not [VP speak English]]
These examples have been accounted for in terms of two basic assumed principles concerning UG vs. language-specific constraints on structures. Chomsky (1989) proposes a corollary of two basic principles of UG, namely:

(80) Principles of Economy
     a. Principle of Least Effort: If two derivations from a given D-structure (call it now ML + PSG) each yield legitimate outputs and one contains more transformational steps than the other, only the shorter derivation is grammatical.
     b. Last Resort Principle: UG principles are applied whenever possible, with language-particular rules used only to "save" a D-structure yielding no output.

Chomsky, in addition, uses Lasnik's Filter (1981) as a trigger and prime mover for the various transformations and insertions found in the French and English verbal auxiliary systems. Lasnik's Filter requires morphemes designated as affixes to be "supported" by lexical material at PF. It is informally stated under (81).

(81) Lasnik's Filter: An affix must be lexically supported at PF. (Pesetsky 1989: 1)
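The Least Effort principle in (80) a. can be read as a simple comparison over converging derivations: among those that yield a legitimate output, the one with fewer transformational steps wins. The sketch below is only a schematic rendering of that comparison, with invented step counts; it is not a model of the actual French or English derivations.

```python
# Schematic rendering of "Least Effort" (80a): among converging derivations,
# pick the one with the fewest transformational steps.
# The step lists and convergence flags are invented for illustration only.

def least_effort(derivations):
    converging = [d for d in derivations if d["converges"]]
    if not converging:
        return None                      # Last Resort territory: nothing to save
    return min(converging, key=lambda d: len(d["steps"]))

derivations = [
    {"name": "V-to-I raising",  "steps": ["raise V to I"],              "converges": False},
    {"name": "affix lowering",  "steps": ["lower I to V"],              "converges": True},
    {"name": "do-insertion",    "steps": ["insert do", "attach affix"], "converges": True},
]
winner = least_effort(derivations)
print(winner["name"])   # 'affix lowering': shorter than do-insertion
```

On this toy reading, do-insertion loses to affix lowering whenever lowering converges, which is the intuition behind the ban on *Bill does speak French discussed below.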
To be more specific and clear about the data of V-to-I movement in (76)-(79), Chomsky (1989) suggests that the length of the derivation makes the crucial difference. At this stage of the theory it is assumed that the ECP must be respected as one of the most important UG principles (82), based on two modules of GB theory: Proper Government (83) and Theta-Government (84).

(82) The Empty Category Principle (ECP): [e] must be properly governed (Chomsky 1981: 250).
(83) Proper Government: α properly governs β iff α c-commands β and (i) α theta-governs β or (ii) α antecedent-governs β, where Theta-governing elements are those which are suitable to assign Theta-roles.
(84) Theta-Government: α theta-governs β iff (i) α is an X0 category which theta-marks β and (ii) α and β are sisters (Chomsky 1986: 15, Kosta 1992: 199).
The central principle of the Government Theory (LGB Chomsky 1981) is known as the Empty Category Principle or ECP. This is a trace-licensing
condition that requires that traces must be properly governed at the level of logical form. In order to understand this condition (82), we must first clarify what the term "trace" means and what "proper government" means. To answer these two questions, we first consider the following examples:
(85) a. [Who] do you think that Peter loves?
     b. *[Who] do you think that loves Peter?
(86) a. [Who] do you think Peter loves?
     b. [Who] do you think loves Peter?
(87) a. *[How] is it time to fix the car?
     b. [Who] is it time to visit?
(88) a. [John] seems to be patient.
     b. *[John] seems that everybody loves.
Examples such as (85) b. and (88) b. should both be ungrammatical because in both cases the ECP is violated at S-structure (the trace not being c-commanded by its potential "antecedent"). The French examples in (77) clearly demonstrate that Lasnik's Filter (81) has been violated and, in addition, that the ECP has been violated at S-structure, because the upper trace is not properly governed. To explain the grammatical output of the English example in (79), Chomsky (1989) has to assume two steps: first, at S-structure, Lowering of the affix to the verb (Affix Lowering) in order to satisfy Lasnik's Filter (81), and then, in order to circumvent an ECP violation, covert Raising at LF of the whole complex [Infl-verb] to the upper INFL projection, cf. Tab. 2. Given the ECP, it is somewhat surprising that Affix Lowering should be allowed at all, as Pesetsky puts it (1989: 2). Chomsky must assume this sort of "round trip" derivation with two steps, Lowering (back) and Raising (forth), because otherwise the ECP would be violated and the grammatical output could not be justified by a deeper UG principle. Thus, the "Least Effort" principle (80 a.) guarantees that a shorter step in the derivation is preferred over a longer one (Economy of Derivation), while the "Last Resort" principle (80 b.) ensures that a UG principle is preferred over a language-particular rule (UG over L). Hence the "round trip" option arises only in case the one-way ticket is unavailable. The theory relies strongly on the distinction between strong vs. weak features (verbs). Pesetsky (1989) mentions a new problem with the theory of do-insertion: if do-insertion is the only option available to satisfy Lasnik's Filter, why then is do-insertion not the general option for all verbs in English? Pesetsky (1989: 2) gives the example in (89), with do-support in a non-focus, non-contrastive context, which is ruled out in English:
Tab. 2: Lowering at S-Structure and Covert Raising at LF of the Whole Complex [Infl-Verb] to the Upper INFL Projection to Account for ECP Violations. (The diagram shows the affix –s in T0 lowering onto V0 speak at S-structure, below the adverb often/usually, yielding speak-s, and the complex speak-s covertly raising back to T0 at LF.)
(89) *Bill does speak French
2.6.2.4 Raising, Lowering vs. Merge and Late Insertion: Don't Move!

In order to explain this ungrammaticality, Pesetsky (1989: 2) mentions an additional condition: an absolute ban on the use of language-particular insertion rules like do-support when there is an alternative legal derivation involving UG movement – be it Raising or Lowering. This is the Last Resort Principle mentioned in (80) b., here modified as (90). Assumptions:
(90) Merge over Move Principle: Merge a category α with a category β in a category γ before you Move.
(91) Earliness Principle: Merge as soon as an ELU is available/accessible to the intention (i) of the speaker (S1).
Let us reconsider the examples in (62), here repeated as (92), under the conditions in (90) and (91).

(92) a. Peter doesn't like dogs.
     b. Peter does not like dogs.
     c. *Peter likes not dogs.
(93) Lexical array: {Peter, like, dog, -s[V], -s[N], do}
Each single ELU in (92) can be merged with an item it is selected for. Principle (96) guarantees that an item which has not been merged before Move is not accessible anymore for the next possible Phase.

(94) Merge [VP like] with NP [dogs] → [VP [V like [NP dogs]]]
(95) Merge [VP [V like [NP dogs]]] with [Neg] → [Neg [VP [V like [NP dogs]]]]
(96) Late Insert Principle over Move: If a merged category α does not yield a correct input with a merged category β for γ, insert material m in order to enable a concatenation {α, β} to serve as input for a category γ before you Move to a domain Δ.
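To see how the steps in (94)–(96) could be strung together mechanically, here is a toy derivation of (92) a./b. as opposed to (92) c.; the bracketed-list representation and the licensing condition on Neg are assumptions of the sketch, not the actual formulation of the Late Insert Principle.

```python
# Toy derivation for (92): Merge builds [Neg [VP like dogs]]; since finite
# negation needs an auxiliary host in English, material ("do") is inserted
# late, before any movement, in the spirit of (96).
# The licensing condition below is an assumption of this sketch.

def merge(alpha, beta):
    """External Merge as simple structure building (distinct binary Merge)."""
    return [alpha, beta]

def needs_support(structure) -> bool:
    """Assumed PF condition: Neg over a bare main verb lacks a host."""
    return structure[0] == "not" and structure[1][0] not in {"do", "can", "have"}

def late_insert(structure):
    """Insert 'do' so that {alpha, beta} can serve as input for the next step."""
    return merge("do", structure) if needs_support(structure) else structure

vp   = merge("like", "dogs")               # (94)  [VP like [NP dogs]]
negp = merge("not", vp)                    # (95)  [Neg [VP like dogs]]
tp   = merge("Peter", late_insert(negp))   # (96)  do-insertion before Move

print(tp)   # ['Peter', ['do', ['not', ['like', 'dogs']]]]
```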
Although this is consistent with the observed empirical data (cf. the contrast between (62) a. Peter doesn't like dogs and b. Peter does not like dogs vs. (62) c. *Peter likes not dogs), how can a blind engine operation such as Merge look ahead in order to license the strings in (62) a. and (62) b. but not (62) c.?
2.7 Some Further Evidence: Interfaces Driven Syntax and/or Late Insertion

In the first part of this section, we will introduce some further evidence in order to test the assumption that Move is a very costly operation and should be delayed as long as possible (e.g., by Greed or Procrastinate). In fact, we want to argue for a displacement theory which is driven neither by the need in Narrow syntax to Move in order to Agree, nor the reverse, to Agree in order to Move (cf. Hornstein 2009: 127). We will rather take a very critical view of contemporary theories on which movement presupposes agreement in UG. Instead, we want to critically review the possibility of late insertion of certain elements as part of language-specific, PF-related phenomena – an interface-driven approach which we will develop further. For the time being, it will be stated that, among others, instances of do-insertion, affixation of auxiliary (light) verbs and modals, but also late Merge of adjuncts, are clear cases of late insertion as instances of language-particular rules.
They are thus "inserted" partly late at S-structure (before Spell-Out) in a grammar G1 of a language L1 under language-specific rules (LSR of G1/L1), sometimes pre- and sometimes even post-syntactically (after S-structure) at PF, because they are not constrained by the Narrow syntax rules of this language. Thus, auxiliary light verbs, negation, and modals show contraction, deletion, and assimilation/dissimilation properties of the PF in English; Clitics show unrestricted Clitic Climbing in French, Italian, and Czech in Causatives but restricted Climbing in Control constructions; and also the (LF) language-specific linear ordering of adverbials in English (Location before Time, e.g. John arrived in London at 5 p.m. // John arrived at 5 p.m. in LONDON) is not determined by UG, but rather by language-specific rules. In contrast to such language-specific rules, main verbs seem to follow a more universal rule (including existential verbs, unaccusatives, unergatives, anticausatives, transitives, and passives), and they are – as opposed to light Auxiliaries – "Merged" early in the derivation within their predicate-argument structure. (Adjuncts are by definition adjoined to the structure by late Adjunction, cf. Kosta and Krivochen 2012, Krivochen and Kosta 2013.) All auxiliaries are base-generated as adjunctions in the light vP (functional) projection and must move up to T0 to check their features (since they have interpretable strong features, and T0 by definition is defective and has weak and uninterpretable AGR features in English). Main verbs in English, but also in Russian and most Slavic languages, have weak features; thus raising of main verbs is not triggered by T0-features at all. Rather, main verbs in English and other languages remain in situ and do not raise to T0 (only raising verbs, ECM verbs, Modal verbs, and Auxiliaries must move to T0). We can see in the following examples that only modal auxiliaries but not main verbs move up to T0 and are then "spelled out" at PF, including PF rules (contraction, etc.).
(97) Peter should (not) obey the principles of UG.
(98) Peter shouldn't obey the principles of UG.
(99) Peter must not obey the principles of UG.
(100) Peter mustn't obey the principles of UG.
(101) Peter wouldn't, couldn't, might not obey the principles of UG.
(102) Peter won't (will not) obey the principles of UG.
(103) *Peter wants not > *Peter wantsn't
The examples of English modal and light (tense) Auxiliaries clearly demonstrate that modals and Auxiliaries show contraction, deletion, assimilation / dissimilation properties of the PF in English in (97)-(102), but not in the volitional verb to want, cf. (103). We assume that the verb to want in English is not a modal auxiliary but rather a main verb, and thus it cannot Merge late (after Move) but must Merge in its original target position and remain in situ, thus yielding late do-insertion as Last Resort in (104).
(104) [TP Peter [T doesn't] [NegP t [VP want [CP/TP to obey the principles of UG]]]]
      (doesn't: PF-insertion; want: Merge in situ)
Let us introduce some arguments which involve the earliness and economy considerations of the early Minimalist framework.
2.7.1 Shortness of Evidence and Earliness Principle

Shortness of evidence means that if we discover but one example in which Move is banned due to restrictions on functional categories higher up the tree (e.g., lack of features in T0), we can generalize this type of evidence and hypothesize about not just this one category but about many more categories of UG. In our theory, it is NOT a parametric theory of auxiliary vs. main verbs (by Theta-theory) which predicts restrictions on movement (cf. Pesetsky 1989) and which can explain these facts. Rather, we want to take a lexical stance on parametric variation in the ML between lexical and functional categories. Some more theoretical axioms are needed in order to develop our own line of argumentation:

a) There are no narrow-syntactic operations of valuation that can explain the above-mentioned differences. All language-specific differences can be explained as an interaction between interface phenomena at PF (SM) and LF (CI). Syntax means the application of unbounded Merge and some further fundamentals of Phase Theory (which we will try to introduce in the next chapter).
b) Any properties that are predicted of the cognitive system are to be justified at all three levels of scientific practice (Marr 1982): computational theory (parsing of Phases), interfaces, and lexicon.
c) There are no mapping rules (i.e., no transformations over kernel strings). Interfaces are opaque, but this does not mean there are linking rules: it means each cognitive system handles a different kind of information format. Phases are primitive syntactic categories that follow two simple UG rules: one Phase – one functional node, and Phase Impenetrability for Extraction. The role of the interfaces is to find low-entropy correlates. There is only free structure building (i.e., free, unbounded Merge), all dependencies between symbolic objects being established at the relevant interface system.
d) Complexity (both cognitive and neural) is an emergent property of a nonlinear, dynamical system.

The ELU selected from the Mental Lexicon can be merged in the appropriate dimension of the workspace under the assumption that they fit together.
The gloss "fit together" has to be specified. What does it mean to "fit together"? Assume a potential SO is an ELU taken from the LA; it can only become a member of a word class α if it has features of this category. Let us start with a very simple observation: verbs such as to work, to operate, to construct, to shoot, to pain, etc. can easily be associated with the word class [V]; moreover, they can all be members of this class because they share typical features with this word class, that is, {V}-features: for example, they can express verbal meaning (e.g., a process), they can take an external argument, they can (at least potentially) assign a Theta-role, they can assign structural or lexical Case, and they can be associated with Person, Number, Tense, Mode, Voice, and CAUS features. On the other hand, Nouns such as a/the work, a/the operation, a/the construction, a/the shot, a/the pain can easily be associated with a different word class [N], because they all share the same {N}-features of Nouns: Number, Gender, Declension Class, definite/indefinite Article, they receive or get assigned a Theta-role, Case, etc. Consider the following examples:
(105) a. Peter worked in the garden for two hours and then he relaxed. After a rest, he could continue to work / working.
      b. Peter's rest after working for two hours in the garden gave him a lot of strength to keep going on.
      c. *Peter's rest after (a/the) work for two hours in the garden gave him a lot of strength to keep going on.
      d. Peter's break after (a/the) work of two hours in the garden gave him a lot of strength to keep going on.
(106) a. Petr pracoval v zahradě po dobu dvou hodin. Poté, co si odpočinul, mohl pokračovat pracovat / s prací.
      b. Petrovo dvouhodinové odpočívání po práci v zahradě mu dodalo hodně síly, aby pokračoval s prací.
      c. Petrův odpočinek (*po dobu dvou hodin) po práci v zahradě mu dodal hodně síly, aby pokračoval s prací.
      d. Petrův odpočinek po dvou hodinách práce v zahradě mu dodal hodně síly, aby pokračoval s prací.
A quick look at the sentences in (105)–(106) shows that in both English and Czech the a-clauses are sentences, while the b-examples seem to be transformations of these clauses through different types of nominalization (in the sense of Chomsky 1970). In (105) b./(106) b. it is the derived type of nominalization, in (105) c./(106) c. the less complicated type of gerundive or event-like nominals. In Czech, the productivity of the two types is the reverse of that in English – a fact which has already been mentioned in Chomsky (1970).
"Many differences have been noted between these two types of nominalization. The most striking have to do with the productivity of the process in question, the generality of the relation between the nominal and the associated proposition, and the internal structure of the nominal phrase. Gerundive nominals can be formed fairly freely from propositions of subject-predicate form, and the relation of meaning between the nominal and the proposition is quite regular. Furthermore, the nominal does not have the internal structure of a noun phrase…" Chomsky (1970: 187)25

An important observation in "Remarks on Nominalization" (Chomsky 1970) was that, apart from all differences in the surface structure (syntax, morphology, and phonology), the propositional/referential meaning of the two sets of data (a and b), ceteris paribus, remains more or less unchanged, but only in gerundive nominals. On the contrary, we can see that in derived nominals, whose internal structure is NP/DP, much information cannot be expressed as compared to the propositions in the a-set examples. Thus, examples (105) c./(106) c. are grammatically ruled out, because they do not allow a modification of the event-time span of duration: their nounness does not allow for a verbal modification of duration, namely with an eventive aspectual feature [e +duration]. Because these derived nominals are by character/structure real nominals (and not derived verbs), they are not derived in syntax; rather, they are merged already in the ML, since they behave syntactically very differently from gerundive nominals, and this fact can be observed even in Czech nouns ending in –ní/-tí. They are not part of the syntactic operation Merge, but rather of a word-derivational process in the lexicon. According to Lees (1960), all verbs in English have a corresponding gerund, but not all of them give a corresponding derived nominal (cf. Alexiadou 2001: 2). In Czech, the productivity of derived nominals and event nominals (gerundives) is the reverse of English: derived nominals are potentially more frequent, whereas eventive nominals ending in –ní/-tí (traditionally so-called deverbal substantives) are relatively rare, some of them restricted to certain written text sorts. We have consulted the Czech National Corpus, which gives the relative distribution of these two forms.
25 Chomsky, Noam. 1970. Remarks on Nominalization. In Roderick A. Jacobs, and Peter S. Rosenbaum (eds.), Readings in English Transformational Grammar. Boston: Ginn, 184–221.
Most of the gerundive forms can be found in specialized professional literature (odborná literatura) and in correspondence.

[Chart: relative distribution of odpočívání vs. odpočinek across the corpus genres Beletrie, Odborná literatura, Publicistika, and Korespondence (0–100 %).]
In Czech, the distribution of eventive Nominals (deverbal nouns) and resultative Nominals (real nouns) goes clearly in favor of a lexicalization of resultative Nouns. This means that the eventive nominals are restricted to those Verbs which often do not have a corresponding noun form (hledání "the searching for" does not have a form "hlídka" or *hled), because the potentially corresponding form has been lexicalized and already bears another lexical semantics, for example, hlídka "patrol, guard," prohlídka "tour, inspection, medical examination." The meaning "search" is attested in the corpus only as an archaic form. In our search of the Czech National Corpus (ČNK), we have found the following distribution:
[Figure: distribution in the written corpus.]
The proportions in the spoken corpus are nearly the same as in the written corpus of formal texts, so we can conclude that the deverbal forms conserve the lexical meaning of the verbal form they are derived from, and in this meaning they also display verbal categories such as aspect.
It is premature to say much about the distribution in the corpus; one would have to conduct a statistical study of all (potentially) competing forms with the ending –ní/-tí as opposed to forms which are derived nominals proper. The tendency, however, is that the forms ending in –ní/-tí are restricted to the eventive aspectual forms (for example, we can usually have both aspects, perfective and imperfective), whereas the nominal, i.e. non-eventive, forms are mostly resultative, for example, pití (impf.), napití (perf.) // nápoj (perfective-resultative).
[Figures: distribution of pití vs. napití in the written and the spoken corpus.]

The derivation of the forms pití vs. napití shows more about the verbality or nounness of these two forms. Both the imperfective and the perfective form of the verb are transitive (+ Gen) and exclude reflexivization. The distribution and productivity in Czech26 go clearly in favor of the non-reflexive forms, even though the propositional equivalent is restricted to the reflexive form of the verb, cf. the following ČNK concordance:

[ČNK concordance: KWIC lines for napil se followed by a Genitive complement (vody, piva, vína, limonády, minerálky, whisky), drawn from fiction by authors such as Chase, Feist, Hemingway, Morrell, Potok, Murakami, Kundera, Fuks, and Bondy.]
26 It can be shown that, in contrast to Czech, Polish preserves the reflexive particle after derivation to a nominal, but also that deverbal nominals are more complex and more "verbal" than verbs proper. The question of the complexity and online comprehension of derived nominals in Polish is treated and analyzed in detail in two psycholinguistic studies, cf. Błaszczak et al. (2015) and Błaszczak et al. (2018).
The only two forms attested with a nominal are not reflexive; here the reflexive particle belongs to the raising verb zdát se "to seem":

Y: Deníky Bohemia, 1. 9. 2008: … moc dobré. Na první napití se sice zdá trochu víc hořké, …
From the viewpoint of the economy of language, at the SM interface (the formal side), some formal features in the a-data and in the b-sets are either underspecified or at least implicit if not covert, and aspectuality, for example, is not present at all in the c-sets of data. As a first approach to these examples we want to raise the question of whether nominal categories or classes of words express the featural characteristics of the verb or just rephrase them in another manner. The answer to this question cannot be a straightforward one, nor will it be a trivial one, as our examples (105)–(106) have demonstrated and as much recent work on nominalization has shown for other languages (cf. for English and Spanish Krivochen 2012; for English, Greek, and other languages Alexiadou 2001;
for Slavic languages in comparison to many other languages Schürcks et al., eds., 2014; for Polish from the viewpoint of psycholinguistics Błaszczak 2015 and Błaszczak et al. 2018; and also Kosta on nominalization in Passives and Anti-/Causatives, Kosta 2015a)27. As long as the distinction of word classes (partes orationis) can be clearly established on the basis of their characteristics (e.g., grammatical or formal features), it will hardly be necessary to accept an additional level that allows for the operation of blind Merge. But when we suddenly realize that there are other elementary lexical units that do not allow an unambiguous assignment (because they share both nominal and verbal features), or whenever we discover ELU that are underspecified, our theory begins to wobble, and the sharp distinction of word classes gets fuzzier and fuzzier until it fails. This is a very welcome result and not a disadvantage, since one of the principles natural languages seem to be exposed to is "plastic underdeterminacy." What seems to resemble "fuzziness" between categories is simply a biological fact, and this can be observed in the fact that features are either shared or inherited but never just randomly repeated (a more detailed explication of the relation between language as a natural object and physics is given in the PhD dissertation by Diego Krivochen 2018). We will try to explain this observation by the biological principles of anti-entropy and conservation in terms of Radical Minimalism (cf. also Chapter 8). In fact, this case is much more common in natural languages than the clear-cut division between word classes or categories which we would wrongly have assumed in former stages of the generative enterprise (cf. Panagiotidis 2014). This is why we have chosen another type of classification of lexical categories, based on root semantics and not on the partes orationis, as is still the case in LGB (Chomsky 1981) with the not always straightforward distinction between NP, VP, AP, and PP by the features ±nominal, ±verbal.
2.7.2 Small Amount of Logical Constants: Feature Inheritance vs. Feature Sharing And it is mainly this observation which forces us to give up our division into four word classes and discrete modules (like it was in the GB framework, Chomsky 1981, developed for Slavic languages in Kosta 1992), while clearly replacing them in the following chapters. Thus, we are again guided by a naturalistic position to natural languages (cf. Uriageraka 2002: 27), being, like all other objects of the physical surroundings of the world, guided by principles of structural and conceptual economy, anti-enthropy, conservation, but also “discrete unboudedness” and “plastic underdeterminacy.” 27 Cf. also the work on Nominalization in Chomsky 1970 and Karlík and Kosta 2020. On the complexity of Noun Phrase in different languages cf. also Kosta (2020b).
Considering word classes or lexical categories (in LGB, Chomsky 1981, and in the MP, Chomsky 1995) as a set of categorial features (±V, ±A, ±N, and ±P), we want to ask: are they sufficient and necessary? Since we want to be guided by the Parsimony criterion of the minimum number of categories and axioms, it will not be easy to open more classes than those already adopted. What other options are we left with? Our first take on this matter starts with a classical notion of the Minimalist Program, namely Feature inheritance and Feature sharing.

Working Hypothesis 2: Two ELU can undergo an Operation of Merge if they either
   a. share features of the same word class; or
   b. inherit features from one class to another.

To be grammatically compatible means that two ELU A and B (a) share the same syntactic characteristics and/or (b) have inherited these characteristics. Consider the two cases. (I) Feature sharing is the classical case of Agree:

(107) Peter3Sg sings3Sg a song
(108) Ivan3Sgm pel3Sgm pesnju
In (107), the Noun Peter shares with the finite Verb sings the Agreement features Person (3rd) and Number (Singular), but not Case. In (108), the Noun Ivan is marked with 3rd Person and Singular, and by morphology it must be masculine; its Gender is thus masculine. But the l-participle does not share the feature of the 3rd Person; rather, it inherits this feature by Agree with the subject NP. Thus, the feature Person is not specified on -l; it is unspecified; we will use the term "underspecified." Out of context, this form could be either first, second, or third Person (1–3) Sg Masc. Consider the other type of feature-sharing mechanism in syntax, namely Feature inheritance, as in (109).

(109) a. Peter3Sgm and Mary3Sgf sing3Pl a song
      b. Ivan3Sgm i Maša3Sgf peliPl pesnju
If we look carefully at example (109), the crucial property of feature inheritance cannot be just that proposed for phases in Chomsky (2008); rather, it should be a more general mechanism which must operate across-the-board. Since both (109) a. and (109) b. are examples of coordinate NPs, the plural form of the V cannot simply be inherited from the NP Peter and/or the NP Mary, because then we would expect the form of the Verb to be 3Sg by Agree. The assumption about this constraint is borne out, because (110) a.-c. are ungrammatical:
(110) a. *Peter and Mary sings a song
      b. *Ivan3Sgm i Maša3Sgfem pelSgm pesnju
      c. *Ivan3Sgm i Maša3Sgfem pelaSgfem pesnju
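One mechanical way to picture the contrast between sharing a fully specified feature, as in (107), valuing an underspecified l-participle, as in (108), and resolving coordination to plural, as in (109)–(110), is the following sketch; the feature dictionaries and the resolution rule are my own simplifications and do not answer the derivational-timing question raised below.

```python
# Toy Agree/feature resolution: a fully specified verb must match its subject;
# an underspecified l-participle is valued by the subject; coordinated
# subjects resolve number to plural. The representations are simplifications.

def resolve_coordination(*nps):
    """&P: number resolves to plural; person/gender resolution is ignored here."""
    return {"person": 3, "number": "pl"}

def agree(subject, predicate):
    """Value the predicate's unvalued ('_') features from the subject;
    a clash on valued features means the derivation crashes (None)."""
    valued = dict(predicate)
    for feat, val in subject.items():
        if predicate.get(feat) == "_":
            valued[feat] = val                     # inheritance/valuation
        elif feat in predicate and predicate[feat] != val:
            return None                            # crash: feature clash
    return valued

peter = {"person": 3, "number": "sg"}
mary  = {"person": 3, "number": "sg"}
sings = {"person": 3, "number": "sg"}                      # fully specified: sharing
pel   = {"person": "_", "number": "sg", "gender": "m"}     # underspecified participle

print(agree(peter, sings))                                 # (107): features shared
print(agree(peter, pel))                                   # (108): person valued as 3
print(agree(resolve_coordination(peter, mary), sings))     # (110a): None -> crash
```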
So, how can the Verb (the l-participle), which is underdetermined with respect to the category of Person (like pel, pela), get to "know" during the computation which feature it "shares" or "inherits" if the V is externally merged prior to Move (internal Merge) of the N to SpecTP for Case assignment under Agree? A similar paradox within the theory of feature inheritance comes to mind when considering the typical cases of Causative verbs, which are considered to be derived both analytically and synthetically by incorporation of the lexical head (V) adjoined to the head of the causative CausP (by substitution or adjunction). In both cases, how can the mechanism and derivation of Merger be analyzed under a syntactic, feature-driven inheritance approach? How can a causative head (probe) search for a verbal head (goal) if the verbal head is not specified for causativity beforehand? Are the categorial characteristics of the causative head more important than the features of the word class verb in order to trigger Move? How does Select analyze features? Are the features of a causative head inherited from CP (which, as far as we know, does not entail Causativity Operators) or from TP (which by definition is a defective phase), or where do these features come from?

(111) a. The customer gave a suit made
      b. Zákazník dal (oblek) ušít (oblek)
      c. Der Kunde ließ seinen Anzug nähen
(112) a. The farmer watered the flowers
      b. Farmář napojil koně
      c. Der Bauer tränkte die Pferde
There is another problem with the theory of feature sharing and inheritance. As long as we have the analytical type of Causative verbs (CAUS1 and CAUS2)28, we do not need to bother about morphology and morphosyntax, because morphology feeds into syntax; thus it is the input for syntactically driven incorporation (cf. 111 a.-c.), where the Causative light verb to give or to let incorporates into the main verb (ušít, sew, nähen), assigning the complement an ACC.
28 CAUS1 is the first-level Causative, derived from transitive or intransitive Verbs (as in to sew a costume vs. to let the costume be sewn); CAUS2 is a causative assigning two Cases to its complements, namely ACC and Dative or double ACC: Pierre a fait manger le gâteau aux enfants (ACC and Dative), Peter hat die Kinder den Kuchen essen lassen (ACC and ACC).
But in the case of the synthetic type of CAUS1, the causative verbs seem to be suppletive forms, morphologically quite different from their transitive counterparts (in 112 a. to drink water, in 112 b. pít vodu, in 112 c. das Wasser trinken). How can a theory which wants to explain not only the selection but above all the inner order and system of the Mental Lexicon capture the fact that these verbs are derivates of each other? The resolution of these puzzling data derives from a property which will be established in Chapter 3 under the notion of the Phase interpretability condition. If no direct causative relation can be established in syntax between a causative head of the light vP and the lexical head of the lexical VP (due to the fact that lexical roots are seemingly underspecified for any functional features), there must be a mediation between the lexical and the functional head; above all, an element must be merged which shares the features of the lower head and then of the upper head as Edge features or Labels. The first approximation – still somewhat "pre-theoretical" – is to establish a relation between different levels of representation, namely (1) the lexical representation with Theta-roles, (2) the argument-predicate structure, and (3) the event structure of the predicate, which operate at different derivational steps of Merge and Agree somewhere between morphology and syntax. Since our point of view is neither standard (MGG) nor late Minimalism, but rather a phase-theoretical take based on the theory of Radical Minimalism (Krivochen and Kosta 2013, Kosta and Krivochen 2014a, b), we shall establish these relations in the course of Chapter 3.
3 Gender and Animacy between Displacement and Agreement

3.1 Words, Phrases, and Sentences

It seems to be a trivial statement to say that words and phrases cannot be combined randomly to form sentences. Careful observation by linguists suggests that the composition of a sentence is constrained at a number of levels of representation – morphological, syntactic, semantic/pragmatic, etc. For example, rules of syntax govern the order in which sentence elements appear and the grammatical roles they can play. Sentences that violate syntactic constraints (e.g., "John hoped the man to leave" instead of the correct "John asked the man to leave") are easily perceived as anomalous. Similarly, the need for meaningful coherence constrains the selection of words at a semantic and pragmatic level (consider "John buttered his bread with socks" instead of "John buttered his bread"). From a linguist's point of view, violations of syntactic constraints are clearly distinct from violations of semantic/pragmatic constraints. However, it is not at all certain that these anomaly types are distinct with respect to the psychological processes involved in language comprehension. The informational types proposed by linguists are based on linguistic description and observation and might, therefore, have only an indirect relationship to the informational types actually involved in the process of comprehension (Swinney 1982). Thus, one basic question about language concerns the identification of the informational types functionally involved during comprehension. An underlying assumption of much recent work in psycholinguistics is that a relatively direct mapping exists between the representational levels proposed by linguistic theory and the processes and representations employed during comprehension (cf. Berwick and Weinberg 1983, 1984, Frazier and Clifton 1989, J. A. Fodor 1983, J. D. Fodor 1978, Fodor, Bever, and Garrett 1974, Ford, Bresnan, and Kaplan 1982, Forster 1979, Kimball 1973, Marcus 1980, Norris 1987). Distinct sets of cognitive processes are thought to interpret a sentence at each posited level of representation, and distinct mental representations are claimed to result from these computations. In sharp contrast, other theorists have proposed that a semantic interpretation is assembled directly, without an intervening syntactic representation (Ades and Steedman 1982, Bever 1970, Bates, McNew, MacWhinney, Devescovi, and Smith 1982,
1985, Johnson-Laird 1977, MacWhinney, Bates, and Kliegl 1984, Riesbeck and Schank 1978). Our view of the interplay between the Sensu-Motoric Interface (SM) of the Language Faculty and the Conceptual-Intentional Interface (CI) is that Syntax and Semantics are two independent levels of representation of the cognitive modules of the Mind of Grammar and are mediated only so as to converge at the input of Spell-Out. Two possible scenarios arise when the information at one or at both of the two interfaces is underdetermined or wrong and the system does not process it further. Either the machinery of Analyze (as introduced in Chapter 2) finds a way out of this dilemma, in the sense that the level of Analyze re-initiates the analysis and re-establishes the mapping of semantics onto syntax in such a way that SM and CI converge on an interpretable output, which can then be used as a well-formed string and as input for further computation (in the terms of Chomsky's MP, 1995 passim, the Principle of Full Interpretation serves as a filter and allows for further processing in our mind); or the system crashes. This kind of scenario is more appropriate than the original idea of having either a well-formed or an ill-formed output only at Spell-Out, such that sentences simply converge or crash. We believe that our proposal is more appropriate and can be justified by evidence about how speakers of natural languages produce and comprehend syntax and semantics. In order to explain this approach to computation, we must recall some basic principles of the difference between crash-proof and crash-rife grammars as introduced by Frampton and Gutmann (2002: 90).
3.2 Crash-Proof and Crash-Rife Grammars

Recall that this book argues for a model of the grammars of natural languages which is based on the principle of mutual interaction between derivational steps, narrow syntax, and the interfaces. While Chomsky's Minimalism is based on the idea that FI is evaluated only at the interfaces, we plead for a model in which the interfaces and syntax are connected by a mechanism called Analyze, which allows for looking back and forward in order to converge. Thus, crash-proof syntax does not wait until Crash or Converge in order to fulfill FI at the interfaces; instead, it peers back and forward in order to avoid a Crash at every single step of the derivation in narrow syntax. Each derivational step is thus checked against the notion of an optimal solution, and in case it crashes, another candidate has to be chosen (Frampton and Gutmann 2002: 90).
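To make the contrast concrete, the following minimal sketch (in Python; the function names and the dictionary-based feature representation are invented here for illustration and are not part of Frampton and Gutmann's formalism) contrasts a crash-rife strategy, which assembles a complete derivation and checks Full Interpretation only at the end, with a crash-proof strategy, which filters candidates locally at every derivational step:

# A toy contrast between "crash-rife" and "crash-proof" derivations.
# Feature bookkeeping is deliberately simplified: a derivational step is a dict
# of features, and a step "converges" here iff none of its features is unvalued.

def converges(step):
    return all(value is not None for value in step.values())

def crash_rife(steps):
    """Assemble the whole derivation first and check FI only at the end (Spell-Out)."""
    derivation = list(steps)
    return derivation if all(converges(s) for s in derivation) else None

def crash_proof(candidate_sets):
    """At every derivational step, keep only candidates that already converge locally."""
    derivation = []
    for candidates in candidate_sets:
        good = [c for c in candidates if converges(c)]
        if not good:
            return None                 # no locally optimal continuation: stop early
        derivation.append(good[0])      # pick an optimal (here: the first) candidate
    return derivation

# The second step offers a crashing and a converging candidate; crash-proof
# syntax discards the unvalued one before the derivation is ever completed.
steps = [[{"phi": "3sg"}], [{"case": None}, {"case": "nom"}]]
print(crash_proof(steps))                            # [{'phi': '3sg'}, {'case': 'nom'}]
print(crash_rife([{"phi": "3sg"}, {"case": None}]))  # None: the finished derivation crashes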
Let us recall the two basic notions, namely "Crash" and "Converge," as introduced in the Minimalist Program (Chomsky 1995).
(i) Crash: A derivation (D) crashes if it does not converge at the interfaces.
(ii) Converge: A derivation (D) converges whenever the conditions at LF and PF are fulfilled (Principle of Full Interpretation / FI) (Chomsky 1993, 1995: 219)29

29 "The language L determines a set of derivations (computations). A derivation converges at one of the interface levels if it yields a representation satisfying FI at this level, and converges if it converges at both interface levels, PF and LF; otherwise, it crashes." (Chomsky 1995: Chapter 4: Categories and Transformations, pp. 219–220)
Metaphorically, this model could be compared to an early warning system in modern devices, such as the speed control in a TomTom car navigation system or even the controls of an aircraft in the cockpit. A syntactic structure is not created until all features in the incremental steps leading up to Spell-Out have been checked against each other. The difference with respect to traditional models is not so much that the models are incompatible, but where and when Generate takes place. In our model of grammar, the creation of simple and complex structures proceeds only gradually from phase to phase, and strictly locally. What this means is that the hierarchical structure must already be correct before Spell-Out, before the linearization of SM comes into play. As emphasized by Noam Chomsky in his most recent contribution (see Chomsky 2020), the linear structures are not crucial (they are generated post-syntactically, as a result of the characteristics of the CI/SM interfaces), but the hierarchical ones are the central issue of narrow syntax. To name just one example from Chomsky (2020): "Crucially, grammatical status and semantic interpretation are determined by structural hierarchy while linear order is irrelevant, much as in the case of verb-initial versus verb-final." To illustrate the difference between the notions "linear distance" and "structural distance," he considers the following sentences [(4)]–[(7)]:
[(4)] Birds that fly instinctively swim.
[(5)] The desire to fly instinctively appeals to children.
[(6)] Instinctively, birds that fly swim.
[(7)] Instinctively, the desire to fly appeals to children.
Examples [(4)] and [(5)] are ambiguous because the adverb "instinctively" can be interpreted in both ways under linear order (equidistance) ("fly instinctively" // "instinctively swim/appeal"), but in [(6)] and [(7)] the ambiguity disappears because the adverb can be construed only with the linearly more remote matrix verb (swim, appeals), not with the linearly closer verb to fly. Not only do examples like these reinforce the fact that CI and SM work relatively independently,
but also that it is not the linear order which decides the interpretation of sentences but rather the structural dependency at the Phase level. In addition to the explanation given by Chomsky (2020: 19–20), it can be stated that "instinctively" is a sentential adverb which can only be construed with Phases. Since IPs are not Phases, the sentential adverb (which is in SpecCP) can be construed only with CPs. The structures of [(6)] and [(7)] are thus [(6')] and [(7')], respectively, cf.:
[(6’)] [CP1/MoodP SpecCP Instinctively [CP2 [Top DP birds [CP3 C that fly] [IP swim]]]] [(7’)] [CP1/MoodP SpecCP Instinctively [CP [Top DP the desire [CP C to fly] [IP appeals to children]]]]
Since sentential adverbs are, in the definition of Kosta (2003a, b), propositions over propositions, they must adjoin to the hierarchically highest position of the leftmost periphery for sentence mood (CP1), and only CP phases can move at LF. IP is not a phase and cannot move (cf. also Abels 2003, Bošković 2020: 27–29, fn. 1; Wurmbrand 2001).
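The contrast between linear and structural distance can also be illustrated with a small sketch (a toy illustration only; the flat word list and the simplified clause representation are our own assumptions, not Chomsky's formalization): construal by linear proximity wrongly picks the embedded verb in [(6)], whereas construal over the hierarchical structure picks the matrix verb, as required.

# A toy illustration of linear vs. structural distance. The flat word list and the
# simplified clause representation below are our own assumptions, not a parse.

# (6) Instinctively, birds that fly swim.
words6 = ["instinctively", "birds", "that", "fly", "swim"]
clause6 = {"matrix_verb": "swim", "embedded_verb": "fly"}    # hierarchy, schematically

def construe_linearly(words, adverb_index):
    """Associate the adverb with the linearly closest verb (the wrong prediction)."""
    verbs = [(i, w) for i, w in enumerate(words) if w in {"fly", "swim", "appeals"}]
    return min(verbs, key=lambda pair: abs(pair[0] - adverb_index))[1]

def construe_structurally(clause):
    """Associate the fronted adverb with the matrix verb of the clause it is
    attached to, i.e. the hierarchically closest verb (the right prediction)."""
    return clause["matrix_verb"]

print(construe_linearly(words6, 0))    # 'fly'  : linearly closer, but not a possible reading of (6)
print(construe_structurally(clause6))  # 'swim' : the only available construal of (6)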
3.3 Phase Impenetrability and Phase Interpretability

Consider first example (1) a., which yields a well-formed sentence, as opposed to its ungrammatical minimal pair (1) b.
(1) a. There1 seems t1 to be a man in the room. √ Merge over Move
    b. *There seems a man1 to be t1 in the room. * Move over Merge
The ungrammaticality of sentence (1) b. is caused by a violation of a UG principle of Economy: if there is any element to be moved, it may be moved only after another candidate has (first) been merged (Merge over Move), or movement has to be delayed as long as possible before Spell-Out (the principle of Procrastinate). In (1) a., Move is delayed as much as possible (given the elements available in the numeration). These examples give a rather nice motivation for the notion of phase instead of X-bar phrase structure rules. Another conceptual argument for phases concerns the uninterpretability of features: if Spell-Out has to remove uninterpretable features to avoid a crash at the interfaces, it must know which features are interpretable and which are uninterpretable. However, narrow syntax is supposed to be blind as a bat and would thus need to be able to look ahead (up the tree, to LF and PF) in order to determine the interpretability of a feature. A transfer of derivational chunks to the interfaces remedies this issue, the search space being reduced to a local domain (a Phase). Thus, visibility of features from the outside can avoid ill-formed structures. If only the edges of a phase, thus only Specifiers and Complements (in the terminology of X-bar notions), can have their features interpreted at the interfaces, the Phase Impenetrability Condition (PIC) in (2) can be reduced to a Phase Interpretability Condition in (3), cf.:
(2) Phase Impenetrability Condition (PIC)
A phase (CP, vP, VP) is not accessible for further computation (internal or external Merge) if
(i) a merged element α has not reached the edge of the phase,
(ii) the interfaces PF or LF cannot interpret the phi-/wh-features of α[φ] if α in a phase {α[φ] {β[•]}} has not reached a phase edge position.
We can see in (2) (ii) that in a merged SO, only α can be read off by the interfaces for a feature [φ], since it is at the edge of a phase, while β is internal to the merged phase and cannot be interpreted (we use a dot • for a non-interpretable feature in β[•]). In order to get a Label, the element β[•] has to move via internal Merge to a higher position in order to be interpreted. This situation is typical of any kind of movement (be it wh-, NP-, or head movement of a non-valued/interpretable lexical or functional category). While the reduction of the levels of representation and the introduction of Phases made this stage of Minimalism a highly parsimonious model of FLN, the reasoning behind establishing condition (2) of UG has actually led to a deadlock, because the restrictions on cyclic movement only rephrased the old problem of MGG concerning the ban on extraction out of islands and, moreover, the ban on moving an element again once it has been extracted (Freezing effects, cf. Rizzi 2004, 2006). We believe that the Phase Interpretability Condition is the positive answer to this problem. We show its version under (3):

(3) Principle of Phase Interpretability (PPI)
The formal features (φ/EPP/wh-) of an element α of a phase π are interpretable at LF iff they are valued at PF.

(4) The visibility condition for valuation (UG principle)
An element α of an element LEX is valued iff it is labeled either at the edge XP or YP or moved/adjoined cyclically to
(i) an Agree position (φ of the category [Lex_ φ]), or
(ii) a Case position (which is always an A-position), or
(iii) an Expletive position30

30 (i) φ of the category [Lex_ φ] = AGREE; (ii) Case of the category [Lex_Case] = structural case; (iii) an Expletive is valued at the Edge of a Phase by EPP, by assumption.
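A minimal sketch of how (3) and (4) are intended to interact is given below (the position labels and the dictionary representation of syntactic objects are invented for exposition; this is not a full implementation of the PPI):

# A toy check of the Principle of Phase Interpretability (3) together with the
# visibility condition for valuation (4). The position labels and the dictionary
# representation of syntactic objects are invented for exposition.

VALUING_POSITIONS = {"edge", "agree", "case", "expletive"}   # cf. (4i)-(4iii)

def is_valued(element):
    """(4): an element counts as valued iff it sits in, or has cyclically moved to,
    a valuing position (phase edge, Agree, Case, or Expletive position)."""
    return element["position"] in VALUING_POSITIONS

def interpretable_at_lf(element):
    """(3): formal features (phi/EPP/wh-) are interpretable at LF iff valued at PF."""
    return is_valued(element)

alpha = {"features": ["phi", "wh"], "position": "edge"}      # at the phase edge
beta  = {"features": ["phi"],       "position": "internal"}  # phase-internal, cf. (2ii)

print(interpretable_at_lf(alpha))   # True:  readable by the interfaces
print(interpretable_at_lf(beta))    # False: must move via internal Merge to be valued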
We believe that principle (3), with the sub-conditions (4i), (4ii) and (4iii), is a stronger, and thus a more parsimonious, axiom than (2) because it can explain more data. In the following examples, we want to show how different displacement operations can be derived from this simple UG principle. We show this by recalling some basic facts on different types of agreement and displacement
operations, namely in the subsequent paragraphs of this chapter, in which we want to test whether PPI is an axiom or a falsifiable hypothesis. The major concern is a category in Slavic which has a long tradition of research, both in functional and in generative (formal) approaches: Gender vs. Animacy. These two categories are chosen because of their status between Morphosyntax and Semantics.
3.3.1 Gender and Animacy: Declension and Agreement Classes

The work on Animacy and Gender is motivated by the broad and manifold variety of their expression across the roughly 7,000 languages of the world. In Slavic, for instance, there is a differentiated grammaticalization of the categories Gender and Animacy (the latter as a sub-gender), expressed by different strategies of Case assignment; in other languages, such as Ket, the expression of animate objects is encoded by different verbal affixes (cf. further below). The grammatical categories of Gender and Animacy belong to the most pervasive and most discussed problems of nominal categories in Slavic and elsewhere. In the Slavic languages, every noun is characterized by two features of Gender, which pertain to two distinct semantic domains (the reason why it is also interesting from the viewpoint of cognitive approaches to the semantics of natural languages). One of the features is known as Gender proper (a grammatical category known in most languages as Masculine, Feminine and, in many, also Neuter or Genus Commune), as opposed to the natural biological category of natural gender or "sex" (which is not only a matter of linguistics but of life, thus the division between male and female). For the sake of comprehension, I shall refer to the first category as the (grammatical category of) Gender, differentiating between Masculine, Feminine and/or Neuter (or sometimes Genus Commune), while in the case of natural sex I shall speak of Sex or referential Gender and use the terms "male" and "female." The other category is known as the "sub-gender" of Animacy (cf. Stankiewicz 1986: 129, Corbett 1980: 43, Corbett 1991, Kosta 2003). Gender proper involves reference to sex, but only in cases when the noun refers to an animate object. Unlike "natural gender" (sex), grammatical Gender does not specify the natural sex of the referent, but rather signals the presence or absence of a sexual property, or the absence of reference to sex, as a cognitive reflex of how we imagine and classify the world in the different grammars of the 7,000 languages of the world. This encoding can be very different from the picture we have, for example, in Indo-European or, more specifically, in, say, Russian.
Such specification is obligatory for all nouns (substantives), regardless of whether they are animate or inanimate. The feminine gender cannot designate male human beings, and the neuter gender cannot specify sex, since it is ne-uter in the etymological sense of the word, neither of the two, masculine or feminine. In some languages (e.g. in Russian, Czech and Polish), the neuter gender tends to consist almost exclusively of substantives of the inanimate sub-gender. In contrast to the feminine and the neuter, the masculine gender can designate both male and female human beings, and it includes both sexual and asexual entities. The three genders in Slavic thus form two oppositions: in one of them, the unmarked masculine gender is opposed to the marked feminine and neuter genders, whereas in the other the two marked genders, the feminine and the neuter, are opposed to each other as positive vs. negative members. Within the category of Animacy, the inanimate is opposed to the animate gender as marked vs. unmarked, since the inanimate gender cannot designate animate beings, whereas the animate gender encompasses both animate and inanimate entities, cf.

(5) Ivan poprosil ètogo očenʼ umnogo studenta, [CP TP vP VP]
'Ivan asked this very clever student'

(5a) [DP SpecDP [D' D0 ètogo [AP SpecAP očen' [A' A0 umnogo [NP N0 studenta [Number, Gender and Case ← Animacy]]]]]]
How can we derive a special declension class of nouns in which the grammatical gender is determined by the morphology (Declension Class), while the referential subset of this class comprises more than one class (e.g. Declension Class I, ending in a zero, with a referential class of male referents and a subdivision of female referents, such as vrač or brigadir, or, conversely, the nouns of the Declension Class II type žena with a subdivision of referential masculines of the type mužčina)? Pereltsvaig (2006, 2007a) places reference on D, following Longobardi (1994) and others, contra Baker (2003). Pereltsvaig distinguishes the grammatical φ-features, which come from N, from the referential features on D, which may have the same or different values as the corresponding grammatical features. This is illustrated in (5)b., adapted from Pereltsvaig (2006):

(5) b. [DP D [ref. gender: FEM] [ref. number: SG] [ref. person: 3] [gram. gender: MASC] [gram. number: SG]
        [NP brigadir [ref. gender: __] [ref. number: __] [ref. person: __] [gram. gender: MASC] [gram. number: SG]]]
Concord, including determiner marking, reflects grammatical features. Subject-predicate agreement reflects reference features, if any. For example, brigadir "foreman" in (5) c. is grammatically masculine but can refer to males or females. In our theory, a unifying approach that takes a unified subset of features into consideration can actually predict the Agreement at a higher level of the derivation.

(5) c. naš brigadir naxodilas' v dekretnom otpuske. Russian
       our-M foreman.M was-F in maternity leave
       "Our foreman was on maternity leave." ["Komsomol'skaya Pravda", 17 Feb. 1962] (Pereltsvaig 2006: 485)
Criticism: The problem with the system assumed by Pereltsvaig (2006, 2007a) and Longobardi (2005), and contra Baker (2003), is that (a) the term "Declension Class" does not play any role and (b) the term "Agreement Class" does not play any role either; yet these are exactly the two most important distinctions relevant for Russian and any other Slavic Gender/Animacy language (e.g. also Czech, Polish, and BCS). Since Animacy/Inanimacy is expressed by different Case assignment on the complement of the transitive verbs which govern it, it should be "Case" sensitive
and trigger the Case assignment of Genitive-Accusative to the complements only of masculine animate nouns, while it remains underspecified with inanimate nouns, which thus receive the default Case Accusative (the default structural Case of the direct object). Thus, in (6), a double object construction, Genitive-Accusative is assigned only to the first direct object, because the noun is masculine animate, while the second object, a noun of direction with the Theta-role Goal, is masculine inanimate and is thus assigned the default Case Accusative:

(6) Mama          položila    mal'čik-a                 na stol
    Mom.Nom.Sg    put.Pst     boy.Gen-Acc.Masc.Anim     on table.Acc.Masc.Inanim
    Agent                     Patient                   Goal
    'Mom put the boy on the table.'
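The Genitive-Accusative syncretism illustrated in (6) can be stated as a simple selection rule over a toy lexicon (the lexicon entries, transliterated forms and function below are illustrative assumptions only, not a full model of Russian case morphology):

# Accusative form selection in Russian as a toy rule over a mini-lexicon.
# The lexicon entries and transliterated forms are illustrative assumptions only.

LEXICON = {
    "mal'čik": {"gender": "masc", "animate": True,  "nom": "mal'čik", "gen": "mal'čika"},
    "stol":    {"gender": "masc", "animate": False, "nom": "stol",    "gen": "stola"},
}

def accusative(noun):
    entry = LEXICON[noun]
    if entry["gender"] == "masc" and entry["animate"]:
        return entry["gen"]        # Genitive-Accusative syncretism for animate masculines
    return entry["nom"]            # default (Nominative-like) Accusative for inanimates

print(accusative("mal'čik"))   # mal'čika : the animate direct object in (6)
print(accusative("stol"))      # stol     : na stol, the default Accusative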
It is important to stress that the Animate/Inanimate distinction is expressed in Russian, and likewise in most Slavic languages, only in the structural Cases (Nominative/Accusative, and in NPs also Genitive), which by definition are assigned in Syntax, while inherent or lexical Cases (in the sense of Kosta 1992, Chomsky 1981) such as Dative, Locative, or Instrumental/Prepositive are assigned in the lexicon by the subcategorizing governor (Predicate); these remain sensitive to the animate/inanimate distinction of the referent, but the distinction has no effect on the Case-assigning property, cf. (7).

(7) a. Ètomu            mal'čiku           ne    sdat'     èkzamen31
       this.Dat.Sg.Ben  boy.Dat.Sg.Mal     not   to.pass   exam
    b. Ètomu            gruzoviku          ne    proexat'
       this.Dat.Sg.Ben  truck.Dat.Sg.Mal   not   to.come.through

31 The abbreviations are Mal for the Theta-role Malefactive and Ben for Benefactive.
Therefore, it may be interesting to look at the range and variety of means by which languages express these two categories of Gender and Animacy, and also at whether this category is expressed by different Case assignment or by other means of grammar. The category of Animacy (Russian kategorija oduševlënnosti ili neoduševlënnosti, literally "animatedness") is a classifying category in the Slavic languages (thus also in Russian, Polish and Czech) by which a noun, in Polish also depending on number, is assigned either to the class of grammatically animate or to the class of grammatically inanimate nouns (Gabka et al., RSG 2: Morphology, 3.4, The category of animacy [Belebtheit], pp. 203–208). In Russian, this category of grammatically animate nouns is grammatically encoded only for the domain of humans and animals (but not for plants or heavenly bodies, etc.), but it is important to understand the category of Animacy in two ways:
• as a purely classifying category of grammar (the so-called Declension or Agreement Classes in Slavic) (Corbett 1982, 1991, forthcoming), whereas in Ket Animacy is sensitive to Case marking and to the morphology of the verb;
• as a semantically motivating category (motivated by the reference of the objects).

Until now, there have been different approaches to the category of Animacy. Roughly speaking, it is in fact the division of labor between the PF and LF interfaces which seems to play the critical role, so that Phonology, Morphology, and Syntax, as domains which are components of the formal structure of a natural language (thus part of the Sensu-Motoric/PF interface), may express this category in a very intriguing but also very specific way in different languages. On the other hand, if languages are instances of the parameterization of UG in the sense in which we have adopted the Principles-and-Parameters approach in Krivochen and Kosta (2013), it is quite obvious that these differences are to be expected. In the Slavic languages, the animate/inanimate distinction is internally parameterized across the languages and also with respect to the categories: in the North Slavic languages (thus in East and West Slavic), the Case assignment of Genitive-Accusative to animate masculine nouns vs. Nominative-Accusative to inanimate masculine nouns is the rule, but there is a sharp further differentiation (parameterization) with respect to sub-genders and the singular/plural distinction. Klenin considers Animacy a central category of the human mind which can be encoded by different verbal means: "The grammaticalized expression of animacy and personhood correlates with such referential features as definiteness, and the strength of expression of animacy/personhood may correlate inversely with the strength of expression of definiteness. The correlation of animacy/personhood with sex-based gender and grammatical number may be complex and even counter-intuitive. The core expression of animacy in Slavic is GA case syncretism, which, in its core usage, occurs in the singular of certain paradigms of masculine-gender words. All Slavic languages that have retained a fully articulated case system have retained GA syncretism in the masculine singular of nouns referring to animate beings. The Eastern South Slavic languages (Macedonian, Bulgarian) fall outside the general Slavic pattern. Slavic GA syncretism has generally (with the partial exception of Slovak) arisen as a replacement for older NA syncretism. GA syncretism occurs in agreeing forms in the absence of GA syncretism in the head, if the head is eligible for animacy marking on grounds of (masculine) gender and (animate) reference but is morphologically ineligible for GA syncretism (a-stem noun paradigm). Different Slavic languages show a variety of extensions of animacy/personhood marking beyond the
masculine singular, creating typologically diverse and sometimes complex morphosyntactic patterns.” (Klenin 2009: 152).
3.3.2 The Structure of the Noun Phrase or Determiner Phrase in Slavic

The structure of the noun phrase (NP) or determiner phrase (DP) in Slavic has been much discussed (cf. e.g. the discussion in Krivochen and Kosta 2013: 137, where Left Branch Extraction is closely connected to the assumption of the presence or absence of a D0 head, and Schürcks, Giannakidou and Etxeberria 2014), and there are many opinions about the real structure. Two opposite groups can be differentiated: the NP-ists, who reject the idea that in most Slavic languages there exists a functional projection of a functional head D0/DP similar to that of languages with a definite article, and the DP-ists, who suggest that Determination is not a question of PF or morphology, but rather of Semantics, and thus assume that it is not the presence of a morphologically overt article which makes a nominal a DP rather than a bare NP in Slavic (as opposed to Germanic and Romance languages and to Bulgarian and Macedonian within Slavic), but rather the Det feature, which can be abstractly assigned to a phonologically empty head D0, cf. among others the arguments in Krivochen and Kosta (2013: 173 passim). Späth (2006: 78–80) convincingly argues in favor of the semantic universality of Determination and differentiates between "Determiniertheit/Indeterminiertheit" (determinedness/indeterminedness) in Syntax, which is reflected by the presence or absence of a det element, and "Determination" as a universal feature in Semantics. We adopt here the position that many independent factors in syntax, such as the LBE facts and further syntactic behavior, militate against a bare NP and speak in favor of a DP layer in all Slavic languages. One important argument is the fact that the head D serves as the goal of φ-features. We give the representation under (8):

(8) [DP SpecDP [D' D0 {φ-features} [NP SpecNP [N' N0]]]]
    N0 = animate, countable, etc., and lexical Case
    D0 = φ-features = Number, Gender and structural Case
Since it must be the functional head D0 which is the bearer of the grammatical Agreement features for Gender, Number and Case (Späth 2006: 1–3)32, it is also the locus to which structural Case is assigned by the structural configuration of C-command. Gender is usually an arbitrary category, which does not correspond to the semantic-referential "properties of the referent," even if Gender can sometimes be, one way or another, motivated by natural sex. The same holds true for the arbitrary features individuality, countability, etc. of the nominal head of the [NP, N0]. We assume that Animacy is encoded in the ML on the lexical noun and only then checked against the functional head in the Specifier-Head Agreement relation, thus: DP-D0. The nominal head N0 is specified, among others, for the following S(emantic)-features: [±animate, ±virilis, ±individual, ±countable, ...], and, of course, it is specified as an individual member of a subset of an intensional class (in the sense of Heim and Kratzer 1998). The functional head D0 of the functional projection DP is specified just for F(unctional)-features [Gender, lexical Case and Number]. Lexical Cases (such as Dative, Instrumental, Locative, Prepositional, Allative, Illative, etc.) are assigned in Narrow Syntax via external Merge with the governing lexical category (N, A, V, or P). Structural Cases (e.g. Nominative/Accusative, and in the DP also Genitive; Ergative and Absolutive in Ergative languages) are assigned via internal Merge (Move) to the structural position of the goal-probe domain33. In the case of "semantic" or "formal" agreement (ad sensum or ad formam), both heads incorporate and we receive a non-split resolution between semantics and morphology; this is the most frequent case in naturally motivated, sex-oriented Gender assignments, which usually prevail in the animate group of lexical categories and very seldom in the inanimate group (maybe only in the case of mythological figures or metaphoric and metonymic verbal shifters). The agreement on number must clearly be a matter of a division of labor between a semantic feature ±animate on the lexical root of the noun N0 and a function of the φ-feature in D0. Some types of numerals (such as distributives) in Russian, Polish, and BCS can only be realized on masculine animates but never on inanimates (which would be things and feminine referents). As we will demonstrate on several cases of formal and semantic (Dis-)Agreement in parts 3.3 and 3.4, there is a mismatch between natural sex and
grammatical gender, and it follows that there will also be a mismatch in Agreement between formal Agreement and semantic Agreement. We will see that this kind of mismatch is frequent in professional names/titles (cf. Russian vrač "doctor," Russian and Polish professor, which can refer to a male or a female referent), and sometimes also in proper names (often also for reasons of PF, as in Italian, where Andrea can be masculine or feminine, due to the PF requirement in Italian that every single word may have only open syllables, except for foreign words). In Russian, certain derivational affixes can be "bi-sexual." Polish shares the sub-category of animacy with the other Slavic languages. In contrast to other languages, Polish has developed a more detailed sub-category of personality, which is marked only in the Plural, by all structural cases, not only within the Acc=Gen syncretism but also by the grammatical endings of the Nominative. (The Acc=Genitive syncretism is shared partially by Slovak.) All animate masculine nouns take Acc=Gen in the Singular, but only personal nouns take it in the Plural. As a result of the different scope of case syncretism, school grammar textbooks have established a different number of gender classes in the Singular and the Plural. It is the general masculine which takes the function of the unmarked gender for people of both sexes in the Plural. The functions of Polish animacy (including virility) have been changing for centuries. Apart from the reference to natural gender or affiliation to the animate world, the Acc=Gen syncretism nowadays marks determinate inanimate objects and applies to more and more groups of nouns, for example, masculine names of dances, dishes, currency units, and sports. Given the determinative function of morphological animacy, special attention must be paid to numerals and pronouns, which can be marked for animacy even in non-structural cases.

32 (i) φ of the category [Lex_ φ] = AGREE; (ii) Case of the category [Lex_Case] = structural case; (iii) an Expletive is valued at the Edge of a Phase by EPP, by assumption.
33 On the relation between morphological Case (m-Case), syntactic Case (s-Case), inherent and lexical Case and structural Case cf. Kosta, Peter and Anton Vladimirovič Zimmerling. 2019. Case Assignment. In Jan Fellerer and Neil Bermel (eds.), Oxford Encyclopedia of Slavonic Languages. Oxford, New York: Oxford University Press.
3.3.3 Pronominalization

It is quite significant that, despite the clear distribution of full pronouns and clitic pronouns in Syntax in languages such as Polish, Czech, Lower and Upper Sorbian and, for example, BCS and Bulgarian, a very interesting syntactic behavior can be observed with regard to the category of animate vs. inanimate items and pronominalization in IS contexts, where the focused pronoun must be spelled out at PF in the long form, as in the dialogue sequence in (9):
(9) Sp1: - Dlaczego wymyłeś swoją filiżankę, a nie mój kubek?
    ('Why did you wash out your cup and not my mug?')
    Sp2: - No jakże, wymyłem przecież jego, a nie ją. // - No jakże, wymyłem przecież *go, a nie ją.
    ('Well, I did wash it [full form jego], and not it [ją].')
Here it is rather the full (accented) form of the personal pronoun than the clitic form that is used in reference to inanimate referents if the referent has been topicalized or focused. In anaphoric function, the clitic pronoun is otherwise the default form (on the problem of clitics, adverbs and Topic-Comment structure in Slavic cf. e.g. Kosta 1997, 1998). It would be interesting to see how this peculiar relation between Syntax, PF and Semantics in the context of Animacy and Gender can be explained from a comparative and typological perspective, since data from different diachronic stages of Slavic (Kosta and Zimmerling 2012) and also from the dialects (Zimmerling and Kosta 2013) have already shown how important the difference between PF-related explanations (Tobler-Mussafia, Wackernagel, etc.) and a Syntax/Semantics-interface-driven choice of clitics and full pronouns can become. Thus, the distribution of pronouns with regard to Animacy can show interesting properties in Syntax and Semantics. While the clitic forms (Accusative and Dative) "go/mu" must be used as defaults in anaphoric function if they show up in a non-accentuated second, clitic position (Wackernagel position), regardless of whether the referents are animate or inanimate, for example,

(10) Pomogłem mu go sprzedać.
     'I helped him (Peter) to sell it/him (a cup or a slave).'
the personal pronouns "on/ona" are never used in reference to inanimate referents in a deictic function. One can only say "Jakie TO ładne" ('how pretty THIS is'), not "jaki ON ładny," pointing at a building (Pol. budynek, masc., inanim.). Only the neuter pronoun TO can be used in a deictic, demonstrative function in reference to an inanimate item.
3.3.4 Numerals and Animacy

Numerals have no singular form (except for the number 1), so the only division in the syntactic agreement runs between virile and all other forms.

(11) a. Ci                  czterej           leniwi            studenci              spali             na ławkach
        these.Nom.Pl.Vir    four.Nom.Pl.Vir   lazy.Nom.Pl.Vir   students.Nom.Pl.Vir   slept.3Pl.Vir     on park benches
        'These four lazy students slept on park benches.'
     b. Te                   cztery          ładne        studentki             spały na ławkach w parku.
        these.Nom.Pl.NonVir  four.NonVir     beautiful    students.Nom.Pl.Fem   slept.3Pl.F.NonVir on benches in park
        'These four beautiful (female) students slept on benches in the park.'
     c. Te cztery okna były otwarte.
        these.Nom.Pl four windows.Nom.Pl were.3Pl.Inanim open.PPP.Nom.Pl.Inanim
        'These four windows were open.'
There is another variant of syntactic mis-agreement, which is obligatory for the numerals from 5 upwards:

(12) Tych          czterech/pięciu     uczniów                spało            na ławkach
     these.Gen.Pl  four/five.Gen.Pl    students.Gen.Pl.Masc   slept.3Sg.Neut   on park benches

There is no consensus on how to label the case of czterech/pięciu in (12). The grammatical interpretation varies among all the structural cases Nom., Acc., and Gen. (Przepiórkowski 2004, Willim 2015). The interpretation of this position against the Slavic and broader background helps us to disambiguate the interpretation and to determine the role of structural cases in the animacy-based syncretism. Polish collective numerals – similar to Russian – require a narrow animacy-based gender class of young beings: troje piskląt/dzieci "three nestlings/children". More interesting is the secondary function of collective numerals, which appear in formal incongruence with virile nouns while agreeing with them ad sensum.

(13) Przyszło         troje        studentów.
     came.3Sg.Neut    three.Coll   students.Gen.Pl.Masc or Fem (ad sensum)
     "Three students came."
The ad sensum agreement in (13) shows that the group of three students is of mixed sex, but the Gender is declined after the general masculine declension class. Corpus research and the analysis of syntactic agreement should make it possible to describe the complex relation between animacy, gender, and other syntactic categories in Polish against the Slavic background.

Declension Classes and Gender/Animacy

In Russian, the Masculine Gender entails the following morphologically significant declension classes:

Masculine Gender / Ending in NomSg
I. Declension Class: -∅ (дед "grandfather", космонавт "cosmonaut")
II. Declension Class: -/a/ (папа "dad", дядя "uncle", мужчина "man", парнишка "lad", староста "(village) elder")
Declension Class (neuter endings): -/o/ (воронко "funnel"), -/'e/ (волчище "wolf"), -/je/ (подмастерье "apprentice")
Adjectival Declension: -/oj/ (портной "tailor"), -/yj/ (взрослый "adult")

Feminine Gender / Ending in NomSg
II. Declension Class: -/a/ (космонавтка "female cosmonaut", корова "cow")
III. Declension Class: -∅ (мать "mother", дочь "daughter", мышь "mouse")
Adjectival Declension: -/aja/ (больная "a sick female person", заведующая "a female manager")

Animate Neuters (very seldom)
Ending in -/o/ (лицо "person"), -/išče/ (чудовище "monster")

Table: Declension-Gender Classes in Standard Russian
In Russian, the same problem arises when we look at a special subgroup of Declension Class (I), also in combination with the grammatical category of numerals, which in other languages serve as specific classifiers. In combination with Declension Class II masculines, there is a ban on combining this form with the numerals 3–4 and with all compound cardinal numerals in which the numbers 3–4 appear.
(14) a. Пришли           *три / *четыре    умные         студента
        there.came.3Pl   three / four      intelligent   students
     b. Четверо               студентов               пришло.
        four.Nom.Coll.Neut    students.Gen.Pl.Masc    came.3Sg.Pret
        'A group of four students came.'
3.4 Mixed Gender Agreement in Russian DPs

Mixed agreement results when some elements in a sentence agree with the grammatical gender of a hybrid noun, while other elements agree with the semantic gender of the referent. In the present proposal, grammatical gender is a feature on N, and agreement with grammatical gender propagates up from N. D is the locus of reference and introduces the referent-derived φ-features. Via feature sharing, features on D can value not only elements further up in the derivation, as in previous proposals, but also elements downward in the DP, allowing DP-internal modifiers to display semantic agreement. A null blocking morpheme Б prevents feature sharing between the two agreement domains. As in many languages, nouns in Russian are classified into genders. For most nouns in Russian, gender is derived from the noun's declension class and is unrelated to semantics. This lexically assigned gender, whether based on declension class or semantics, is the grammatical gender of the noun. Within a phrase or a clause, other elements (e.g., modifiers, predicates, and personal pronouns) must generally all agree with the noun, meaning they are marked for the same gender as the
noun. Many animate nouns have one gender grammatically (masculine, feminine, or neuter) but may refer to individuals of any gender. For some of these, called hybrid nouns, some elements in a sentence may agree with the gender of the referent instead of the grammatical gender of the noun, producing mixed agreement. For example, the noun vrač "doctor" is grammatically masculine (Declension Class I, ending in a zero), but may refer to either a male or a female doctor. Consider the following scenario under (15) a. vs. b.:

(15) a. Наш врач пришел
        our-M doctor arrived-M
        'Our doctor arrived.' (referring to a male doctor)
     b. Наш врач пришла
        our-M doctor arrived-F
        'Our doctor arrived.' (referring to a female doctor) (Corbett 1991: 180)
There have been numerous syntactic proposals that attempt to explain mixed agreement, but in my view, none of them contains the entire answer. Existing proposals may be divided into two main categories. The first category of proposals (including Wechsler and Zlatić 2003; Pereltsvaig 2006; Rappaport 2014) distinguishes between DP-internal agreement (concord) and subject-predicate agreement for animate referents. Concord is based on the grammatical gender of the noun, while subject-predicate agreement is based on the semantic gender of the referent. In most of these proposals, D is the locus of reference and introduces the referent-derived φ-features for the predicate to agree with. These models correctly account for the common case of mixed agreement as in (15). However, while subject-predicate mixed agreement is by far the most common case in Russian, Corbett (1991) points out that it is also sometimes possible for DP-internal modifiers to display semantic agreement, as in (16). None of the above models can account for this.

(16) Ivanova -- xoroš-aja vrač Russian
     Ivanova good-F doctor
     'Ivanova is a good doctor.' (Corbett 1991: 231)
Proposals in the second category (Matushansky 2013, Pesetsky 2013, Landau 2015) do allow attributive adjectives to agree with semantic gender, and even allow mixed agreement inside a DP. Each of these proposals makes use of a movable boundary that prompts a switch in the gender (or number) feature. Elements below the boundary agree with the grammatical gender of the noun, while elements
above the boundary (both inside and outside DP) agree with the gender introduced at the boundary. These proposals account for the data, but they do so in a way that does not provide for a locus of reference. The boundary element may be introduced at various points inside DP, or in some cases even adjoined to or outside DP. This leaves no fixed location for reference, and no reason is given for semantic gender to enter the derivation where it does. In her Master's thesis (2010), Katherine E. King offers an account that incorporates the strengths of each of these types of accounts without their main shortfalls. Following this account, I assume that grammatical gender is a feature on the noun, similar to the other proposals discussed. Agreement with grammatical gender propagates up from N via feature sharing. Additionally, I assume that reference is on D, as in the first category of proposals. However, instead of reference features only being able to value elements further up in the derivation, they can also value elements downward in the DP, again via feature sharing. A null blocking morpheme Б (the Cyrillic letter B, pronounced /be/) prevents feature sharing between the two agreement domains.
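The division of labor just described can be sketched schematically as follows (a toy rendering of the idea with invented class names and a simple boolean flag for Б; it is not King's actual formalization): grammatical gender spreads from N within its own domain, while the referential gender on D reaches DP-internal modifiers only if the blocking morpheme is absent.

# A schematic model of the two gender-agreement domains inside DP. The classes and
# the boolean flag for the null blocking morpheme Б are invented for exposition.

class Node:
    def __init__(self, label, gender=None):
        self.label, self.gender = label, gender

def share_gender(source, targets):
    """Feature sharing, crudely: every target ends up carrying the source's gender."""
    for t in targets:
        t.gender = source.gender

def dp_internal_agreement(n, d, modifiers, blocker_present):
    """N values concord in its own domain; the referential gender on D also reaches
    DP-internal modifiers, unless the null morpheme Б separates the two domains."""
    share_gender(n, modifiers)              # grammatical concord propagates up from N
    if d.gender is not None and not blocker_present:
        share_gender(d, modifiers)          # semantic agreement reaches the modifiers
    return [m.gender for m in modifiers]

n = Node("vrač", gender="masc")             # grammatical gender, from the noun
d = Node("D", gender="fem")                 # referential gender, for a female doctor
print(dp_internal_agreement(n, d, [Node("xoroš-")], blocker_present=True))   # ['masc']
print(dp_internal_agreement(n, d, [Node("xoroš-")], blocker_present=False))  # ['fem'], cf. (16)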
3.4.1 Gender

In many languages, some or all nouns are classified by gender. For example, Romance languages use a masculine/feminine division (17), as does Dizi (18). German (19), Russian (20), Tamil (21) and many others use a masculine/feminine/neuter division. The Algonquian languages use an animate/inanimate classification (22). The Bantu languages have ten or more genders, usually referred to as noun classes (23). As we see in the examples below, even in systems with similar gender divisions, nouns may fall into different genders in different languages.

(17) French
FEMININE: pomme 'apple', fleur 'flower', maison 'house', femme 'woman', fille 'girl'
MASCULINE: arbre 'tree', ananas 'pineapple', homme 'man'

(18) Dizi (Omotic) (Corbett 1991: 11)
FEMININE (females and diminutives): dade 'girl', kuocin 'woman', kieme 'small pot'
MASCULINE (all others): dad 'boy', yaaba 'man', kiemu 'pot'

(19) German
FEMININE: Blume 'flower', Birne 'pear', Ananas 'pineapple', Frau 'woman'
MASCULINE: Baum 'tree', Tisch 'table', Apfel 'apple', Pfirsich 'peach', Mann 'man'
NEUTER: Haus 'house', Wasser 'water', Mädchen 'girl'
(20) Russian34
FEMININE: gruša 'pear', voda 'water', tablitsa 'table', ženščina 'woman', devuška 'girl'
MASCULINE: persik 'peach', tsvetok 'flower', dom 'house', mužčina 'man'
NEUTER: jabloko 'apple', derevo 'tree'

(21) Tamil (Dravidian) (Corbett 1991: 9)
MASCULINE (gods and male humans): aaɳ 'man', civaɴ 'Shiva'
FEMININE (goddesses and female humans): peɳ 'woman', kaaɭi 'Kali'
NEUTER (all others): maram 'tree', viiʈu 'house'

(22) Ojibwa (Algonquian) (Corbett 1991: 20)
ANIMATE: enini 'man', enim 'dog', mettik 'tree', epatemiss 'button'
INANIMATE: essin 'stone', peka:n 'nut', mettik 'piece of wood'

(23) Kilega (Bantu) noun classes (1–10) in singular/plural pairs (Carstens 2010)
a. musikila/basikila 1young man/2young man 'young man/men'
b. mubili/mibili 3body/4body 'body/bodies'
c. liínyo/ményo 5tooth/6tooth 'tooth/teeth'
d. kishúmbí/bishúmbí 7chair/8chair 'chair/s'
e. nzogu/nzogu 9elephant/10elephant 'elephant/s'
Corbett (2006) describes a typological survey of 256 languages. In this survey, just under half have some kind of gender system. Of those 112 languages, 50 have two genders, 26 have three genders, 12 have four genders, and 24 have five or more genders.
3.4.2 Gender Assignment Systems

Among languages that use gender classifications, different languages use various systems to assign gender to nouns. Many systems are wholly or mostly semantic; others are mostly formal (non-semantic). In the above survey, the split is fairly even: 53 languages have a strict semantic or predominantly semantic assignment system, while 59 have a mostly formal system. In languages that use semantic gender assignment, nouns are assigned gender based on their meaning.
34 The motivation for Gender in Russian is, of course, not always as straightforward as it would seem. Evgenia Markovskaya (2012: 144–148) proposes a derivational, syntactic, minimalist analysis of gender distribution in event-denoting non-neuter deverbal nominals in Russian. She thus refines the common view articulated in more conventional work (Corbett 1982, 1991, 2006), which holds that the gender of non-derived inanimate nouns is predicted by their phonological shape, while the gender of derived words is determined by nominalizing affixes which are associated with a particular gender value (cf. Markovskaya 2012: 137).
In some, there are few, if any, exceptions, and apparent exceptions have meaning within the culture; for example, certain animals are assigned genders reserved for humans because they are important in the mythology. In other languages, most gender assignment is based on semantic properties, but there are significant exceptions. In the Dravidian language Tamil (21), there are three genders: gods and male humans are masculine (male rational), goddesses and female humans are feminine (female rational), and all others are neuter (non-rational) (Corbett 1991). English also has a semantic gender system, reflected only in the pronouns: he refers to male humans, she refers to female humans, and it is used for all others, with some variation for animals, infants, and a small number of others (Corbett 1991). Other languages use mostly formal, non-semantic assignment systems. For example, gender in Russian generally corresponds to the declension class of the noun, which is based on morphology. Noun gender in the Bantu languages is also based on morphology. Other languages use a phonological gender assignment system. French gender appears to be random; however, Corbett (1991) cites evidence that it, too, is phonological in nature. According to Corbett, there are no purely formal systems. All have at least some semantic basis, and semantic rules usually take precedence over the formal ones. For example, the Russian word djadja "uncle" has a form usually used for feminine words, but djadja is masculine because it can only refer to a male. There are exceptions in the other direction, too. German Mädchen "girl" is neuter, even though it refers to a female. We will see further on how exceptions of this nature can have other consequences.
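The generalization that semantic rules take precedence over formal, declension-based assignment can be sketched as follows (the ending-based heuristic and the transliterated examples are deliberately crude illustrative assumptions, not a description of the full Russian system):

# A toy gender-assignment procedure: a formal, declension-based heuristic with a
# semantic override, as in the djadja case. The ending rule is deliberately crude.

def formal_gender(noun):
    """Rough heuristic: -a nouns pattern with Declension Class II (feminine),
    zero-ending nouns with Class I (masculine)."""
    return "fem" if noun.endswith("a") else "masc"

def assign_gender(noun, refers_only_to_males=False):
    if refers_only_to_males:        # the semantic rule takes precedence
        return "masc"
    return formal_gender(noun)

print(assign_gender("gruša"))                               # fem  (formal rule)
print(assign_gender("djadja", refers_only_to_males=True))   # masc (semantic override)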
3.4.3 Gender Agreement

The main evidence for gender is agreement, meaning that the forms of other elements in a sentence vary with features on the noun. Following Corbett (2006), I use agree and agreement (lower-case) to denote that lexical items are marked for the same gender (or other feature), regardless of how that agreement comes about, whether by Chomsky's (2000, 2001) Agree or by some other mechanism. We discuss Agree and other mechanisms in detail below. Following Corbett (1991, 2006), agreement requires a controller and one or more agreement targets. The targets surface in different forms based on the features of the controller. In the case of nominal agreement, the noun is the controller, and the other items in the sentence (determiners, adjectives, verbs, and pronouns) are the targets. In (24), we see examples of gender agreement with Russian nouns. In (24a.), the noun gruša "pear" is feminine, and the targets – the determiner èta, the
adjective spelaja, and the verb upala – are likewise all marked feminine. In (24b.), the noun persik "peach" is masculine, and the determiner, adjective, and verb are all marked masculine. In (24c.), the noun jabloko "apple" is neuter, and the targets are all marked neuter.

(24) a. Èt-a spel-aja gruša sejčas upal-a. Russian
        this-F ripe-F pear.F just fell-F
        'This ripe pear just fell.'
     b. Èt-ot spel-yj persik sejčas upal.
        this-M ripe-M peach.M just fell-M
        'This ripe peach just fell.'
     c. Èt-o spel-oje jabloko sejčas upal-o.
        this-N ripe-N apple.N just fell-N
        'This ripe apple just fell.'
The nouns in (24) are all inanimate. The gender and gender agreement of nouns with animate referents is more nuanced. Some animate nouns can refer only to individuals of one sex, which may or may not match the grammatical gender. For example, the feminine Russian noun sestra "sister" can refer only to female individuals and always takes feminine agreement. Likewise, the Polish noun babsztyl "hag" can refer only to women, but it is grammatically masculine and always takes masculine agreement (Rappaport 2014). Agreement with these nouns is straightforward and always follows the grammatical gender of the noun. Other animate nouns can refer to either male or female individuals. These can be divided into three types. The first type is what Rappaport (2014) refers to as fixed gender nouns. These nouns have a fixed agreement pattern (either masculine or feminine) regardless of the sex of the referent. For example, Russian kit "whale" may refer to either a male or a female whale, but always takes masculine agreement. Likewise, Russian osoba "person" always takes feminine agreement, but may refer to either a male or a female person. (Corbett (1991) and others call these nouns epicene, but as epicene is also used variously for other categories, I will use the more descriptive term "fixed gender.") The second type Rappaport calls dual gender nouns; they are often referred to as common gender nouns or sometimes also as epicenes. The gender of these nouns depends on the sex of the referent, if there is one, or on a default if not. An example is Russian sirota "orphan." These nouns may be thought of as not having an inherent grammatical gender, and agreement is always semantic.
The third type are hybrid nouns (Corbett 1991), or mixed gender nouns in Rappaport's terminology. Examples are Russian vrač "doctor" and èkspert "expert." These nouns have an inherent grammatical gender, but can take different agreement on different agreement targets in the same sentence, as shown in (25).

(25) Naš vrač prišl-a Russian
     our-M doctor arrived-F
     'Our (female) doctor arrived.' (Corbett 1991: 180)
Hybrid nouns are introduced in more detail in our study and are the primary topic of Kosta (2020c). Agreement is frequently divided into two domains. The first is concord, in which determiners and adjectives must all be marked for the same gender as the noun. The second is predicate agreement, in which the predicate (verb or adjective) must be marked for the same gender as the noun. In addition, bound pronouns and personal pronouns display agreement with their antecedents. Regardless of whether concord, predicate agreement, and pronoun agreement are the same mechanism or different mechanisms, I follow Corbett (2006) in referring to them all as agreement. I distinguish specific agreement targets (e.g., modifiers, predicates, and DP-internal elements) as needed. This chapter focuses primarily on DP-internal agreement and on its relationship with predicate agreement. Pronoun agreement is outside the scope of this chapter.
3.5 Theoretical Background on DP-Internal Agreement

3.5.1 DP Structure

Abney (1987) proposed the DP Hypothesis, according to which the functional projection for a nominal phrase is a Determiner Phrase (DP), headed by D and taking an NP complement (26).

(26) [DP Spec [D' D NP]]
Articles reside on D, whereas other determiners, such as demonstratives and possessive pronouns, are in SpecDP, possibly originating lower. Since then, there has been some controversy as to whether the DP Hypothesis applies to all languages. Some (e.g., Chierchia 1998, Baker 2003, Bošković 2005) argue that languages without articles, such as Chinese and the Slavic languages (except for Bulgarian, Macedonian and some Slavic dialects), lack the DP projection. Others (Longobardi 1994, Pereltsvaig 2007b, Rappaport 2014) maintain that these languages still have a DP projection, with an obligatorily null D. I assume here that the DP layer exists, even in article-less languages such as Russian. I use a simplified structure, DP > NP, with adjectives adjoined to NP and little n omitted. This simplified structure does not deny the presence of other functional projections between DP and NP, such as NumP, DemP, PossP, nP, or AP, cf.

(27) [DP D ∅ [NP [A novyj 'new'] [NP dom 'house']]] 'a/the new house'
3.5.2 Location of Gender in the Nominal Phrase

Numerous proposals have been made for where the gender feature is located, both where it originates and where it is interpreted. Some put gender on its own functional projection, GenderP or the like: for example, Picallo (1991) proposes GenP for Catalan; Bernstein (1993) proposes WMP (Word Marker) for Romance; and Picallo (2008) proposes cP (class) for Romance in general. The hierarchies are similar to (28) a.
(28) a. DP > NumP > GenP > NP
Others do away with a separate projection for gender, arguing instead that gender is a feature on some other head, either Num or N. Accounts vary as to whether the category hierarchy and the locations of φ-features hold crosslinguistically. Ritter (1993) proposes the same categories crosslinguistically (DP > NumP >
NP), but argues that the gender feature enters in different places for Hebrew and Spanish. Carstens (2010) also uses this structure for Romance, placing person on D, number on Num, and gender on N, as in (28) b., but proposes a different structure for Bantu involving NP adjunction.
(28) b. DP [Person] > NumP [Number] > NP [Gender]
Danon (2011) presents crosslinguistic evidence that the φ-features person, number, and gender are valued on different nodes (again D, Num, and N, respectively), and uses the same structure across languages. The structure in (28) b. is commonly, though not universally, proposed for φ-features in nominals. None of these proposals distinguishes between formal and semantic gender. This is not unexpected, given that the language groups under consideration – Romance and Bantu – use mostly non-semantic gender assignment (Corbett 1991). Next, I present two accounts that do make that distinction. Kramer (2009) argues that no single feature can account well for both formal and semantic gender, the latter of which she calls natural gender. Instead, focusing on Amharic, Kramer (2009) uses two gender features to distinguish natural gender from formal gender (grammatical gender that is not semantically assigned). Formal gender is located on √root. Natural gender (generally for animates of known gender) is on little n. Agreement is with the higher feature: natural gender if it is specified; otherwise, formal gender if it is specified; and otherwise default (masculine for Amharic). Kramer (2014) modifies the earlier proposal, putting both formal gender and natural gender on little n, based on evidence from Amharic including the interaction of gender and number and the morphosyntax of nominalizations. In the modified proposal, formal gender is uninterpretable, while natural gender is interpretable. If natural gender is known, little n is licensed with i[±FEM]. If natural gender is unknown, irrelevant, or not applicable, then little n is licensed with u[±FEM], or simply n (with no gender feature) to indicate default agreement. In this way, Kramer (2014) also accounts for a nominal having either interpretable or uninterpretable gender. Neither Kramer (2009) nor Kramer (2014) distinguishes a referential gender that is different from natural gender, and neither accounts for mixed agreement. Dobrovie-Sorin (2012) also incorporates both interpretable gender and uninterpretable gender into a proposal for Romance and English. Dobrovie-Sorin uses a nominal hierarchy of DP > nP > NP. Gender features are valued on little n, and they may be interpretable or uninterpretable. If the gender feature on little n is checked by lexical gender features of N, it is uninterpretable. If the gender feature on little n remains unchecked (in the case of Romance Adj-to-N conversion,
for which the DP contains no N), then it is interpretable. In either case, Det has unvalued gender features, which are valued by concord agreement and are then visible outside the nominal. Again, while this proposal allows for both interpretable and uninterpretable gender, it cannot account for mixed agreement of grammatical and referent gender. In the subsequent sections, we look at proposals that do account for mixed gender agreement (cf. 3.6.). But before we tackle this issue, we concentrate on the problem of interpretability of Gender in syntax (3.5.3) and Agree and feature sharing properties (3.5.4).
3.5.3 Interpretability of Gender

There are various views on whether gender is interpretable. Some researchers consider a noun's grammatical gender to be always interpretable (e.g., Danon 2011) or always uninterpretable (e.g., Picallo 2008). Matushansky (2013) says nominal gender is interpretable if it is semantic (based on inherent properties of nouns), but uninterpretable if it is purely grammatical (e.g., based on declension class); it depends on the language which type of assignment is predominant. Dobrovie-Sorin (2012) proposes a system for feature checking that leaves gender sometimes interpretable and sometimes uninterpretable. Kramer's (2014) model also says semantic gender is interpretable and grammatical gender is uninterpretable; in her system, only one may exist in a given nominal. Carstens (2010) argues that grammatical gender is intrinsically valued (it enters on N) but uninterpretable (it has no semantic meaning) for both Romance and Bantu languages. This position forces a distinction between valued/unvalued features and interpretable/uninterpretable features. It also requires reconsidering the need for uninterpretable features to be checked and deleted, and the need for all features to be interpretable in some location (Brody's 1997 Thesis of Radical Interpretability, as cited in Carstens 2010). Following Pesetsky and Torrego (2007), Carstens (2010), and Matushansky (2013), I reject the Valuation/Interpretability Biconditional (Chomsky 2001), which states that a feature F is uninterpretable if and only if F is unvalued. Instead, I consider formal grammatical gender features to be inherent in the lexicon but uninterpretable, and semantic gender features to be interpretable35.

35 The original idea in Chomsky's Minimalist Inquiries (2000), namely that the presence of uninterpretable features is "an imperfection" (Citko 2014: 14) from the perspective of the Strong Minimalist Thesis (because it is not forced by Bare Output Conditions) and that it drives another imperfection, namely movement, is somewhat suspect and has been replaced in "Beyond Explanatory Adequacy" by internal Merge (Chomsky 2004a).
128
Gender and Animacy between Displacement and Agreement
(2010), I also assume that it is not necessary for uninterpretable features to be checked and deleted; the uninterpretable gender feature on a noun may be inherently valued without ever undergoing Agree, and it does not need to be deleted.
3.5.4 Agree and Feature Sharing

The traditional definition of Agree (Chomsky 2000, 2001) requires a single unvalued, uninterpretable probe and a single valued, interpretable goal. When Agree applies, the uninterpretable feature is valued and deletes. As discussed above, it is desirable that uninterpretable gender features can be inherently valued without deleting. For the purposes of concord, it is also desirable for one valued feature (e.g., on the head noun) to value multiple unvalued features (e.g., on modifiers) without any of them deleting, so that they are all available at the PF interface. To accommodate such patterns, we turn to feature sharing. Feature sharing (Frampton and Gutmann 2000, Pesetsky and Torrego 2007) is an alternative to traditional Agree. It is a mechanism that allows interpretable or uninterpretable features to be shared among elements inside DP, regardless of where they are initially valued, and without any of them deleting. Values are not simply copied from one feature location to another; rather, two features join to become one feature, collocated on two (or more) elements. Features may be valued or unvalued regardless of their interpretability. A sample derivation is shown in (29). In (29) a., A enters with a valued feature iF, while B enters with an unvalued feature uF. (The interpretability of the features on A and B could also be switched, with no effect on feature sharing.) Agree applies in (29) b., and F is now shared between A and B, with one value for the two locations. The italicized val denotes that the value was not inherent at that location but was determined by Agree. However, once the features are shared, there is no distinction between inherent and non-inherent feature values; the font simply helps us visualize where the value originated. I follow Pesetsky and Torrego (2007) in assuming that the interpretability of a feature stays tied to a location even after sharing. If C is merged with an unvalued feature F, as in (29) c. (i), Agree again applies between C and B, and C is now valued with the same value. If, instead, C is merged with an already-valued feature F, as in (29) c. (ii), then Agree cannot take place because C and B are both valued, and no new sharing happens.
(29) a. [BP B[uF:_] [AP A[iF:val] X]]
     b. [BP B[uF:val] [AP A[iF:val] X]]   (Agree: F shared between B and A)
     c. (i)  [CP C[uF:val] [BP B[uF:val] [AP A[iF:val] X]]]   (Agree: F shared among C, B and A)
        (ii) [CP C[uF:val2] [BP B[uF:val] [AP A[iF:val] X]]]   (no Agree: C and B are both valued)
In addition, two unvalued occurrences of a feature can undergo sharing, and both can later be valued by another occurrence of the feature when it is merged and shared with them. In (30) a., A and B both enter with unvalued features F. Because they are of the same feature type, Agree applies in (30) b., and F is now shared between A and B, even though both are unvalued. In (30) c., C is merged with a valued feature F. In (30) d., Agree again applies between C and B, and F is now shared between A, B, and C, all valued with the same value.
(30) a. [BP B[iF:_] [AP A[uF:_] X]]
     b. [BP B[iF:_] [AP A[uF:_] X]]   (Agree: a single unvalued F now shared by B and A)
     c. [CP C[uF:val] [BP B[iF:_] [AP A[uF:_] X]]]   (C merged with a valued F)
     d. [CP C[uF:val] [BP B[iF:val] [AP A[uF:val] X]]]   (Agree: F shared by C, B and A and valued)
I follow these authors in redefining Agree as a feature-sharing mechanism. I assume that valuation can happen in either direction, as outlined by Frampton and Gutmann (2000). For reasons of space, we cannot pursue this approach in full here but only highlight its main idea.
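To make the feature-sharing mechanism more tangible, the following is a minimal, purely illustrative Python sketch (it is not part of Frampton and Gutmann's or Pesetsky and Torrego's formal systems; the class names, the hosts list and the agree function are expository assumptions). It models a feature occurrence as a single object that several nodes can come to share, so that valuing it once values it at every location, while two already-valued occurrences refuse to share, as in (29) c. (ii).

```python
# Illustrative sketch of Agree as feature sharing: one feature occurrence,
# several hosting nodes. All names are expository assumptions.

class Feature:
    def __init__(self, name, value=None):
        self.name = name
        self.value = value        # None = unvalued ([uF:_] / [iF:_])
        self.hosts = []           # every node on which this single occurrence sits

class Node:
    def __init__(self, label, fname, value=None, interpretable=False):
        self.label = label
        self.interpretable = interpretable   # interpretability stays tied to the location
        self.feature = Feature(fname, value)
        self.feature.hosts.append(self)

def agree(x, y):
    """Share one feature occurrence between x, y and everything already sharing with them."""
    fx, fy = x.feature, y.feature
    if fx is fy or fx.name != fy.name:
        return False                          # nothing to share
    if fx.value is not None and fy.value is not None:
        return False                          # (29c-ii): both valued, no new sharing
    keep, drop = (fx, fy) if fx.value is not None else (fy, fx)
    for host in drop.hosts:                   # relink hosts of the dropped occurrence
        host.feature = keep
        keep.hosts.append(host)
    return True

# (30): A and B share an unvalued F; merging a valued C values all three at once.
A, B = Node("A", "F", interpretable=False), Node("B", "F", interpretable=True)
C = Node("C", "F", value="val")
agree(A, B)   # (30b): one unvalued occurrence shared by A and B
agree(B, C)   # (30d): A, B and C now host one valued occurrence
print(A.feature.value, A.feature is B.feature is C.feature)  # val True
```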
3.6 Mixed Agreement

3.6.1 Hybrid Nouns

Some nouns take different agreement depending on the type of agreement target (Corbett 1991, 2006). For example, the German noun das/ein Mädchen "the/a girl," while semantically female, always produces neuter agreement, except in the case of a personal pronoun, where either feminine sie "she" or neuter es "it" may be used, as seen in (31).

(31) Schau dir dieses Mädchen an, wie gut sie/es Tennis spielt (German)
     look you this-N girl at how good she/it tennis plays
     'Do look at this girl, see how well she plays tennis.' (Corbett 1991: 228)
Nouns such as this are referred to as hybrid nouns. Agreement with hybrid nouns can be with their grammatical gender (neuter, in the case of 31), or with the gender that matches the sex of the referent (feminine for a female referent). I follow Corbett (2006) in referring to agreement with the grammatical features of the noun as syntactic agreement, and agreement based on real properties of the referent as semantic agreement. This does not imply that the mechanism for either of these is or is not syntactic in nature. When different agreement targets can show different agreement with the same controller, this is called mixed agreement. Hybrid nouns occur across languages, and semantic agreement may happen with various agreement targets, as shown in the examples below. The Russian noun vrač “doctor” normally produces masculine agreement. However, when it refers to a female doctor, a predicate may take either masculine or feminine agreement, as in (32) a. vs. (32) b.:
(32) a. vrač prišël (Russian)
        doctor arrived-M
     b. vrač prišl-a
        doctor arrived-F
     'The (female) doctor arrived.' (Corbett 1991: 232)
In Swahili, the noun rafiki “friend” is in morphological class 9/10, but because it is animate, it can also take 1/2 agreement. In (33) we see syntactic agreement in the possessive y-angu “my” and semantic agreement in the predicate a-mefika “arrived.” In (34), both attributive elements show semantic agreement.
(33) rafiki y-angu a-mefika
     friend 9-my 1-arrived
     'My friend has arrived.'
(34) rafiki mw-ema w-angu
     friend 1-good 1-my
     'my good friend'

In Spanish, the title Majestad "Majesty" is feminine, whether it refers to a male or female sovereign; it takes feminine concord, as in (35), but adjectives may show semantic (masculine) agreement, as in (37).

(35) Su Majestad suprem-a
     his majesty supreme-F
     'His Supreme Majesty'

(37) Su Majestad está content-o
     his majesty is happy-M
     'His Majesty is happy.'
The equivalent French title is similar and can show semantic agreement in personal pronouns, as seen in (38).
(38) Sa Majesté fut inquiète, et de nouveau il envoya La Varenne à son ministre (French)
     his-F majesty was worried-F and of new he sent La Varenne to his minister
     'His Majesty was worried, and again he sent La Varenne to his minister.' (Corbett 1991: 227)
Previous Proposals

There have been numerous proposals that attempt to solve the mixed-agreement puzzle. Most of them make some use of multiple levels of gender features in order to differentiate between the grammatical gender of the noun and the semantic gender of the referent. Existing proposals may be divided into two main categories: proposals that distinguish between DP-internal agreement (concord) and subject-predicate agreement, and proposals with a movable agreement boundary.
3.6.2 Proposals That Distinguish Concord and Predicate Agreement

The first category of proposals includes Wechsler and Zlatić (2000, 2003), Sauerland (2004), Pereltsvaig (2006), Steriopolo and Wiltschko (2010), and Rappaport (2014), among others. These proposals distinguish between DP-internal agreement (concord) and subject-predicate agreement. DP-internal gender agreement (concord) is based on the grammatical gender of the noun. In most of these proposals, D is the locus of reference and introduces the referent-derived φ-features for the predicate to agree with. There is considerable support in the literature for placing reference on D, including Abney (1987), Longobardi (1994), Longobardi (2005), and others. These reference features are visible outside the DP, allowing predicates to agree with the semantic gender of their subject's referent. These models correctly account for the most common case of mixed agreement, between subject and predicate: in examples like Russian vrač "doctor" with a female referent, the modifier agrees with the grammatical gender of the (masculine) noun, while the predicate agrees with the semantic gender of the (female) referent. However, while subject-predicate mixed agreement is the most common case of mixed agreement in Russian, it is also sometimes possible for DP-internal modifiers to display semantic agreement, as in (39). None of the above models can account for this.
(39) xoroš-aja vrač'
     good-F doctor
     'good (female) doctor'
In all of these proposals, the syntactic-to-semantic gender switch happens at D. Semantic gender may be used only by elements outside the DP, not by adjectives inside the DP. Most of these authors consider nominal-internal semantic agreement to be very rare. While it is less common than subject-predicate mixed agreement, nominals such as (39) are attested, especially in more recent years, and must be accounted for. The following subsections discuss the proposals in the first category.
3.6.3 Concord vs. Index Agreement (Wechsler and Zlatić 2000, 2003)

Wechsler and Zlatić's (2000, 2003) proposal is a nonderivational account that lays some important groundwork in the domain of mixed agreement. In Wechsler and Zlatić's model, nouns have two separate sets of agreement features, Concord features and Index features. Concord features are used by adjectives and other DP-internal modifiers (concord), while Index features are used in predicate agreement, which Wechsler and Zlatić call "index agreement." According to Wechsler and Zlatić (2000: 801), concord and index agreement "result from different grammatical processes" and differ in their domains and relevant features. Wechsler and Zlatić outline the relationship chain shown in (40), where the normal case is that all features match. Mismatches between any adjacent pair are possible but marked.

(40) Declension ⬄ Concord ⬄ Index ⬄ Semantics
In Slavic languages, Concord features usually follow the declension class of the noun, although they can differ, as in Russian djadja "uncle," which has a typically feminine morphological form but always takes masculine agreement. Index features usually match semantics, but they can also differ, as with the Serbian/Croatian word devojce "girl," which denotes a female individual but takes neuter agreement. Finally, Concord and Index features usually match, but sometimes they do not; this produces mixed agreement. For example, the Serbian/Croatian word deca "children" has feminine singular morphology and takes feminine singular concord, but it denotes a plural entity and takes plural predicate agreement, as seen in (41):
(41) Ta dobra deca dolaze. (Serbian/Croatian)
     that-F.SG good-F.SG children come-3PL
     'Those good children came.' (Wechsler and Zlatić 2003: 51)
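Purely as an illustration (the record layout below is my own simplification, not Wechsler and Zlatić's HPSG formalization), a hybrid noun like deca can be thought of as one lexical entry carrying two separate feature bundles, one consulted by concord and one by index (predicate) agreement:

```python
# Illustrative sketch: one lexical entry with separate CONCORD and INDEX bundles.
# The dictionary layout is an expository assumption, simplified from the model.

DECA = {
    "form": "deca",                                 # Serbian/Croatian 'children'
    "CONCORD": {"gender": "fem", "number": "sg"},   # read by DP-internal modifiers
    "INDEX":   {"number": "pl"},                    # read by predicate agreement
}

def agreement_features(noun, target_kind):
    """Modifiers read CONCORD; predicates (index agreement) read INDEX."""
    return noun["CONCORD"] if target_kind == "modifier" else noun["INDEX"]

print(agreement_features(DECA, "modifier"))   # {'gender': 'fem', 'number': 'sg'} -> ta dobra
print(agreement_features(DECA, "predicate"))  # {'number': 'pl'} -> dolaze (3PL)
```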
In Wechsler and Zlatić's model, relative pronouns use index agreement, while referential pronoun agreement is considered to be pragmatic, not syntactic. Wechsler and Zlatić (2003) acknowledge semantic agreement on adjectives, describing it as variation: they explain it as a different grammar available to some speakers, in which Concord is semantic agreement. Mixed syntactic/semantic agreement within DP is not allowed.
3.6.4 Multiple Levels of φ-Feature Interpretation (Sauerland 2004)

Sauerland (2004) proposes a φP projection on top of DP, as in (42), where all φ-features are interpreted. Sauerland writes from a semantic perspective and does not hypothesize as to the nature of the lower projections or where the features enter the derivation.
(42) φP > DP
Sauerland allows for more than one φ-head above DP, where the feature values on each may differ. For example, in Russian example (43), the features [fem, sg] on the higher φ-head are interpretable and are used for predicate agreement, while the features [masc, sg] on the lower φ-head remain uninterpretable and are only available for concord agreement within the DP.
(43) a. vrač' prišl-a
        doctor arrived-F
        'The (female) doctor arrived.' (Sauerland 2004: 9)
     b. [TP [φP φ[fem, sg] [φP φ[masc, sg] [DP vrač' [masc, sg]]]] [T´ T[fem, sg] [VP prišla [fem, sg]]]]
Sauerland makes use of the MLC (Minimal Link Condition), according to which agreement (concord or predicate agreement) must be with the closest head bearing matching features. He uses the same model to account for other types of agreement mismatches, including number or person mismatches and coordination. Sauerland's proposal can distinguish only between DP-internal agreement and predicate agreement, because φP can only be located outside DP. The entire nominal phrase agrees with the noun's grammatical gender, interpreted on the lower φ-head, while the predicate may agree with a different gender, interpreted on the higher φ-head. Thus, the proposal cannot account for mixed agreement within DP.
3.6.5 Assignment Strategies between Semantics, Morphology and Syntax in Slavic

As we have already seen, languages differ in the number of genders they distinguish (as chart 1 shows, there are languages with no gender, with two genders, or with more than two genders; cf. Corbett, Greville G. 2013. Number of Genders. In: Dryer, Matthew S. & Haspelmath, Martin (eds.), The World Atlas of Language Structures Online. Leipzig: Max Planck Institute for Evolutionary Anthropology; available online at http://wals.info/chapter/30, accessed on 2017-12-02). But the interaction between animacy and gender is even more complex, and we can show that at least three different Gender-assignment strategies can be distinguished according to animacy:
(1) Semantic Assignment: the lexical information of Gender in the Mental Lexicon is checked against the animacy value of the Lexical Unit (LU) w.r.t. the features |±animate, ±human, ±count|; cf. Czech masculine pán, muž, lidé, which are |+animate|, |+human|, |+count|, as opposed to Czech národ, which is |−animate|, |+human|, |−count|. The semantic information is interpretable already at the level of the Mental Lexicon; this is often referred to as referential Gender assignment. In this case, grammatical Gender and natural Gender (sex) coincide and are projected via Agreement into the syntax.
The semantic rule is then that nouns carrying information not only about ±count but also about ±animate project this information up to the top of the tree along an agreeing path. An LU with the referential value +animate can then be either masculine or feminine; an LU with the referential value −animate is by default neuter, a mass noun, or a collective, and it always projects this semantic value into the syntax by default. Further subdivisions and fine-grained properties of animacy can be differentiated in a more complex system of representations, for example a specialized sub-gender such as +human +rational vs. +human −rational (e.g., ±virilis in Polish or Sorbian vs. the category of non-rational referents; e.g., feminine,
non-rational masculine nouns (animals) and neuters must be specified by special semantic features mapped onto syntax).
(2) Morphological Assignment via Declension Classes: the semantic or referential value of gender in the Mental Lexicon is underspecified and thus non-interpretable. Recall that semantically (referentially) underspecified inanimate nouns are assigned gender arbitrarily; this means that gender is assigned either as masculine, as in Czech hrad, stroj, or as feminine, as in židle, píseň, kost, or as neuter, e.g. město, stavení, and this referential indifference is resolved at the level of morphology in terms of declension classes (in the sense of Corbett 1991).
(3) Syntactic Assignment via Agreement: this assignment strategy is chosen if either the semantic information or the morphological assignment (declension class) is underspecified on the head of the NP/DP. The referential value of the head of the NP/DP must then be determined by the Agree relation via valuation and Move.
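Purely for illustration, the three assignment strategies can be thought of as a cascade: try semantic (referential) assignment first, then the declension class, and fall back to syntactic valuation via Agree. The following Python sketch is an expository simplification (the lexicon entries, the way declension classes are encoded, and the way the syntactic context is passed in are my own assumptions, not part of the proposal):

```python
# Illustrative sketch of the three Gender-assignment strategies as a cascade.
# Lexical entries and declension-class mappings are simplified assumptions.

CZECH_LEXICON = {
    # noun: (semantic gender if referentially specified, declension class)
    "pán":   ("masc", "pán"),    # +animate, +human: semantic assignment
    "muž":   ("masc", "muž"),
    "hrad":  (None,   "hrad"),   # inanimate: gender resolved by declension class
    "kost":  (None,   "kost"),
    "město": (None,   "město"),
}

DECLENSION_GENDER = {"pán": "masc", "muž": "masc", "hrad": "masc",
                     "kost": "fem", "město": "neut"}

def assign_gender(noun, agree_value=None):
    semantic, decl = CZECH_LEXICON.get(noun, (None, None))
    if semantic is not None:              # (1) semantic (referential) assignment
        return semantic, "semantic"
    if decl in DECLENSION_GENDER:         # (2) morphological assignment (declension class)
        return DECLENSION_GENDER[decl], "declension"
    if agree_value is not None:           # (3) syntactic assignment via Agree/valuation
        return agree_value, "agree"
    return None, "unresolved"

print(assign_gender("pán"))    # ('masc', 'semantic')
print(assign_gender("město"))  # ('neut', 'declension')
```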
3.6.6 Further Evidence for Animacy in Word Formation: Possessives in Czech and Compounds in German

Another line of argument in favor of semantic assignment of Gender and Animacy in certain domains of grammar is the problem of Possessives. Ludmila Veselovská, in her recent book (2018), has developed an interesting line of argumentation in favor of distinguishing two different types of Possessives, one of which is closely connected to the category +animate and restricted to humans: the Czech possessive suffixes -ov and -in can be combined only with stems bearing +human features. "Looking at the form of the Possessives more closely, we can see that the Czech Possessive suffixes do not combine arbitrarily with nominal stems. Both suffixes in fact require a stem with a rather specific feature content – a true Czech Possessive can be derived only from nominal stems that are singular, animate and marked by an intrinsic semantic Gender." (Veselovská 2018: 110). Her examples (6.7 a–d) are given here under my own numbering as (44) a.–d.:

(44) a. *stol-ov-a noha
        table-POSS(M) leg
        'table's leg'
     b. noha stolu
        leg table-GEN
        'the leg of the table'
     c. *fakult-in tajemník
        faculty-POSS(F) secretary
        'faculty's secretary'
     d. tajemník fakulty
        secretary faculty-GEN
        'secretary of the faculty'
The idea that the hierarchy of increasing animacy is reflected both at the level of word formation and in syntax can be shown in the following parallel contexts: the more animate, human suffixes combine only with
nominal stems with animate Theta-roles, for example agents, experiencers, and possessors, while the non-animate Possessives are grammaticalized only as adnominal possessive Genitives, lying lower in the animacy hierarchy. The same kind of increasing animacy from low to high can be seen at the level of syntax proper, because the animate Theta-roles (Agent, Experiencer, Addressee, and human Possessor as a higher Applicative phrase, cf. Kosta 2015a) are located higher in the tree than the inanimate Theta-roles (e.g., effectum, thema, local, and instrumental). Cf.:

(45) Petr pracuje, pro miluje a píše knihy ve své pracovně na PC
     Agent – Experiencer – Thema – Local – Instrumental
     human/animate – inanimate – inanimate
     'Peter works, pro loves and pro writes books in his office on the PC'
In German, this fact is confirmed both in word formation and in syntax. The German counterparts of (44) a.–d. are given in (46) a.–d. vs. (47) a.–d.
(46) a. *Das Bein des Tisches b. Das Tischbein c. *Die Helden des Krieges d. Die Kriegshelden
(47) a. Das Bein des Vaters b. *Das Vatersbein c. Die Kriege des Alexander des Großen d. *Die Alexander der Große-Kriege
Distributed Gender Hypothesis (Steriopolo and Wiltschko 2010)
Steriopolo and Wiltschko (2010) present the Distributed Gender Hypothesis, which divides gender into three levels, shown in (48).

(48) D      D-gender       ← Discourse Gender
     n      n-gender       ← Grammatical Gender
     √root  √root-gender   ← Semantic Gender
Semantic gender, or √root-gender, is valued male or female as part of the semantic information of the root. It applies to animates, such as "father" (male, even with no referent) and "cow" (female, similarly), but also to inanimates in languages where gender is predictable from the meaning of the root. For example, in the Omotic language Dizi, females (dade "girl") and diminutives (kieme "small pot") are feminine, and all others (dad "boy," kiemu "pot") are masculine. Semantic gender is interpretable. Grammatical gender, or n-gender, is valued masculine or feminine. It is purely grammatical, uninterpretable, and determined arbitrarily. An example is the Russian word čelovek "person," which is not semantically male or female but is grammatically always masculine. If both are present, grammatical gender overrides semantic gender. For example, the German word Mann "man" is semantically male and triggers masculine agreement, as in (49). The diminutive suffix -chen is grammatically neuter and triggers neuter agreement even though the noun is still semantically male, as in (50).
(49) ein gut-er Mann
     a-M good-M man
     'a good man'

(50) ein gut-es Männ-chen
     a-N good-N man-DIM
(51) a. Vrač' prišl-a.
        doctor arrived-F
        'The (female) doctor arrived.'
     b. [D(female) D(female) [n[masc] n[masc] √vrač']]
The authors acknowledge that they have no good explanation for why both D-gender and n-gender are available on hybrid nouns such as vrač “doctor” but not on other nouns such as čelovek “person.” With this proposal, the grammatical-to-referential gender switch happens at D. If both D-gender and n-gender are present, then any lexical items found on D
should agree with D-gender, and all adjectives and other items below D should agree with n-gender.

(52) D      D-gender
     n      n-gender
     √root  √root-gender
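As a purely illustrative aside (the encoding below is my own, not Steriopolo and Wiltschko's formalism), the agreement rule just stated – targets on or above D use D-gender when it is present, targets below D use n-gender – can be sketched as follows:

```python
# Illustrative sketch of the agreement rule described for the Distributed Gender
# Hypothesis. The dictionary encoding of gender levels is an expository assumption.

def target_gender(noun, position):
    """position: 'on_or_above_D' (e.g. D itself, the predicate via D) or 'below_D' (e.g. adjectives)."""
    if position == "on_or_above_D" and noun.get("D_gender"):
        return noun["D_gender"]
    return noun.get("n_gender") or noun.get("root_gender")

vrac = {"root": "vrač", "n_gender": "masc", "D_gender": "fem"}   # hybrid noun, female referent
print(target_gender(vrac, "below_D"))        # masc -> DP-internal concord
print(target_gender(vrac, "on_or_above_D"))  # fem  -> predicate agreement, as in (51a)
```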
3.7 Further Perspectives of Research and Exploration of Gender and Animacy

Finally, I would like to explain why I have decided to include the grammar of indigenous languages in my description. First of all, while studying the grammars of many languages of the world, I have noticed that animacy and gender are quite intriguing: not only do they interact with each other, but they also involve nominal and verbal categories of various kinds that one would not have expected to be relevant for animacy and/or gender. These include quantification, numeral phrases, so-called noun markers and noun classifiers, but also special affixes on verbal stems and even introflection, all of which can express animacy and gender. Systems with semantic gender assignment vary in their transparency. Gender choice in Dyirbal is semantically based, but not straightforward (Aikhenvald 2016: 19). In this language, four genders are expressed through article-like noun markers (but not on nouns themselves). Three of them are associated with one or more concepts:
(53) gender 1 (noun marker bayi): male humans and non-human animates
     gender 2 (noun marker balan): female humans; fire; drinkable liquids; fighting
     gender 3 (noun marker balam): non-flesh food
     gender 4 (noun marker bala): a residue gender covering everything else, including body and other parts, place names, and flesh food (meat and fish)
All animates in this Gender system are distributed between gender 1 and gender 2 (except bees that are in gender 4). Three general principles determine gender membership of a noun:
• I. If a noun has a characteristic X (on the basis of which its gender would be chosen) but is associated with a characteristic Y through belief or legend, it will be assigned to a different gender on the basis of characteristic Y. This is a principle of mythological association, or the Myth-and-Belief principle (following Aikhenvald 2016: 19). Along these lines, birds as a class are the spirits of dead women; birds are therefore classed as members of gender 2 (feminine balan) rather than gender 1 (bayi), which their non-human but animate status would otherwise dictate. There are also exceptions to this principle based on beliefs. The willy wagtail belongs to gender 1 (masculine), bayi jigirrjigirr, since it is believed to be the metamorphosis of a legendary man (and the way the bird wiggles its tail is reminiscent of how men dance corroborees). Non-edible snakes are members of gender 1 (bayi). An exception is balan bima "death adder," which is also a legendary woman and thus belongs to gender 2.
• We see some parallels even in highly grammaticalized gender systems. For example, in Russian, a person who has died is called mertvec "dead man," and as the direct object of a transitive verb it is case-marked with the genitive-accusative, just like any other masculine animate (e.g., "man" or "wolf"). As opposed to this, the body after three days is called trup and is never case-marked with the genitive-accusative but only with the plain accusative; it thus belongs to the inanimate masculine gender.
• II. If the referent of a noun with a characteristic X is perceived to have a physical association with a characteristic Y, then this may be reflected in the gender choice for this noun. For example, fruit and vegetables belong to gender 3, etc. (cf. Aikhenvald 2016: 19).
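Purely as an illustration of how these principles interact (the rule encoding below is my own simplification, not Aikhenvald's or Dixon's analysis), the Myth-and-Belief override of the basic semantic assignment can be sketched as:

```python
# Illustrative sketch of Dyirbal gender choice with the Myth-and-Belief override (principle I).
# Lexical entries and property labels are expository simplifications.

BASE_RULES = [                       # (characteristic, gender), checked in order
    ("female_human", 2), ("male_human", 1), ("animate", 1), ("non_flesh_food", 3),
]

MYTH_OVERRIDES = {                   # association through belief or legend
    "bird": 2,                       # birds are the spirits of dead women
    "willy_wagtail": 1,              # believed to be the metamorphosis of a legendary man
    "death_adder": 2,                # balan bima, a legendary woman
}

def dyirbal_gender(noun, properties):
    if noun in MYTH_OVERRIDES:       # belief/legend overrides the semantic characteristics
        return MYTH_OVERRIDES[noun]
    for prop, gender in BASE_RULES:
        if prop in properties:
            return gender
    return 4                         # residue gender

print(dyirbal_gender("bird", {"animate"}))  # 2 (not 1)
print(dyirbal_gender("pot", set()))         # 4
```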
4 Adjectival and Argumental Small Clauses vs. Free Adverbial Adjuncts – A Phase-Based Approach within Radical Minimalism with Special Criticism of the Agree, Case and Valuation Notions

4.1 On the Definition of Secondary Predicates and Small Clauses: The Puzzle

In recent work on Russian and Slavic syntax, depictive secondary predicates are mostly considered to be "non-sentential adjuncts on the predicate layer of the clause" (cf. Schroeder, Hentschel, Boeder eds. 2008: preface, i). As opposed to adverbials, which modify the sentence or the VP, depictive secondary predicates modify the arguments (either the subjects or the objects). In fact, while adverbials are modifications of events (VP-adverbs) or propositions (sentence adverbs, cf. Kosta 2003a, b), depictive predicates are modifications of arguments. In the nineties there was a highly controversial discussion within the generative framework regarding the description of depictive secondary predicates or small clauses (henceforth, SC). To mention only a few studies: only a few focused on their semantic properties (cf. e.g. Steube 1994, Hentschel 2008); most were concerned with their syntactic status (small clause vs. AP-, NP- or VP-adjunction, cf. Stowell 1978, Stowell 1981, Williams 1984, Aarts 1992, Cardinaletti and Guasti eds. 1995, Staudinger 1997 vs. Wilder 1994, Emonds 2007, Hentschel 2008) and/or their Case assignment properties (cf. Bailyn 1995, Bailyn 2001, Bailyn and Citko 1999, Bowers 1997, Bowers 2001, Franks 1995, Strigin 2008, Bondaruk 2004, Bondaruk 2013a, b). Only a few studies also address the diachrony; for Slavic, see, among others, Moser (1994), Hentschel (1993, 1994), Menzel (2008), Klemensiewicz (1926) and Timberlake (2014a, b, 2004). Small Clauses (SCs) are structures with clausal characteristics in that they contain a subject phrase and a predicate phrase. They are, however, generally believed not to contain a complementizer position or an INFL node (cf. Aarts 1992).
The bracketed sequences in the S-Structures (1)–(4) are examples of Small Clauses:

(1) Jai sčitaju [egoj pjanymj / *pjanyji / pjanogo]
    'I consider him drunk'
(2) Jai sčitaju [egoj durakomj]
    'I consider him a fool'
(3) Oni vypil čajj xolodnymj
    'He drank the tea(ACC) cold'
(4) Oni vypil čajj golyji / golymi / *golyjj / golymj / ?xolodnyjj
    'He drank the tea(ACC) naked / cold'
The non-trivial question, which has until now not been considered in analyses of secondary predicates, is the relation and division of labor between semantics and syntax (distribution of Theta-roles, Case assignment and Binding), which interests us most here. More specifically, what is the difference between sentences like (1)–(4) w.r.t. semantics, syntax and pragmatics? This is part of a book I am currently working on (cf. the article Kosta 2019). For the time being, I shall try to demonstrate how their Case assignment and Agreement behavior can be explained within the phase-oriented approach of Radical Minimalism (cf. Krivochen and Kosta 2013; Kosta and Krivochen 2012).
4.2 The Framework: Radical Minimalism

To begin with, we must outline the framework within which we will be working: Radical Minimalism. It is a theory that attempts to provide principled explanations for (linguistic) phenomena without ignoring the interactions between mental workspaces or biological-computational plausibility, and, perhaps most importantly of all, it seeks the elimination of all intra-theoretical stipulations via interface conditions and (we will claim, universal) economy principles in derivation and interpretation, grounded in biology and mathematics. Let us review some of the basic tenets of Radical Minimalism and then discuss the implications of this framework:
(5) 1. Language is part of the "natural world"; therefore, it is fundamentally a physical system.
    2. As a consequence of 1, it shares the basic properties of physical systems, and the same principles can be applied (Heisenberg's uncertainty, the Conservation Principle, locality, etc.), the only difference being the properties of the elements that are manipulated in the relevant system.
    3. The operations are taken to be very basic, simple and universal, as are the constraints upon them, which are determined by the interaction with other systems, not by stipulative intra-theoretical filters.
    4. 2 and 3 can be summarized as follows: Strong Radically Minimalist Thesis (SRMT): All differences between physical systems are "superficial" and rely only on the characteristics of their basic units [i.e., the elements that are manipulated], which require minimal adjustments in the formulation of operations and constraints [that is, only notational issues]. At a principled level, all physical systems are identical, make use of the same operations and respond to the same principles.36
We claim that there is only one generative operation (in the physical world in general, and in the mind-brain in particular), call it Merge, which is free, "blind" (i.e., insensitive to the inner characteristics of the objects it manipulates; we follow and extend the thesis of Boeckx 2010a that only format is relevant37) and unbounded, and an operation Transfer that provides us with a way of delivering structured information across modules. Merge is an inherently (derivationally) diachronic operation that generates structures, endocentricity being merely a C-I interface requirement for interpretation purposes (i.e., take a referential variable from a sortal or extending-into-time perspective: NP / VP; see Borer 2005, Panagiotidis 2010 for details). Transfer, in a sense that will be clarified below, takes place as soon as it can, this timing being determined by the formation of a fully interpretable configuration in terms of "bare output conditions," what we call, adopting the Chomskyan term (though not the whole theory associated with it), a phase38. In a "dumb" (i.e., blind and free) syntax, there are no syntactic constraints at all, so there is no point in positing feature-driven operations (e.g., Chomsky 1998, Pesetsky and Torrego 2004, 2007, Müller 2011a), as they represent a substantive complication of the theory rather than the null hypothesis. Merge applies because any two objects share a common format, and the relevant
36 Such a pretension of universality regarding physical explanation is somewhat reinforced. 37 From our proposal (which stems partly from Tegmark 2007), it follows that the same generative mechanism underlies the mathematical structure of a sentence, the Fibonacci sequence, Theodorus' Spiral and DNA, to mention some clear examples. If we are on the right track, the explanatory and justificatory power of the theory would not be limited to language alone, which would be the optimal scenario. To give an example, Theodorus' Spiral is generated by merging, in a non-trivial way, the sides of right triangles and their hypotenuses (a = 1, 1, √2; b = 1, √2, √3; c = 1, √3, √4 …). 38 Cf. Chomsky (1998, 1999, 2000, 2001, 2005) for a definition of phasehood based on feature valuation.
interfaces take the results of structure-building operations as soon as those objects are fully interpretable: we have a free Generator and Invasive Interfaces, capable of accessing the syntactic workspace at every point in the derivation (so-called "extremely local evaluation," cf. Müller 2011b) and taking the minimal object they can fully read. The role of features, which has been of extreme importance in Minimalist syntax39, is questioned, and the very notion of a distinctive feature as a primitive of syntactic theory is put to the test. Our argument goes as follows: let us assume that we start with a fully interpretable Relational Semantic Structure (see Mateu Fontanals 2000a, b for the original presentation, Krivochen 2010b, d for developments within this framework40), built by structuring primitive, very simple and atomic generic semantic elements (à la Jackendoff 1987, for the earliest presentation, but see also Jackendoff 2002; Levin and Rappaport 2011, Mateu Fontanals 2000a, b). Hale and Keyser's (1993) "l-syntax" and Pustejovsky's (1995) "Generative Lexicon," which have been related to those positions as well, instantiate processes like conflation/incorporation, although their abstract pre-syntactic structures are lexical, not purely semantic, and the presence of syntactic principles like the ECP is strong. The reason why such a primitive semantic structure is fully interpretable is simple: the atomic units of which such a structure is built are very few and underspecified and, moreover, there are no superfluous elements: a conceptual structure "embodies" a global semantic-pragmatic intention following strict faithfulness constraints (nothing is either added or taken away). These conceptual templates are built with input from experience, as the actants in events in the phenomenological world are assigned an interpretation. Information enters the mind chaotically, but it is organized in very much a Kantian way, into categories that exist as a priori forms: mainly, space (if we accept that time is conceptualized as a metaphor of space, in favor of which there is both linguistic and neurological evidence: see Talmy 2000 and
39 For an exhaustive analysis and references, see Adger (2011), Adger and Svenonius (2011). 40 We will not analyze initiatives like the Lexicon Project or Pustejovsky's (1995) "Generative Lexicon" simply because our interest is precisely to separate language from semantic structures, not to offer univocal mapping rules. A syntactic theory of semantic structures (in the broad sense of "how semantic structures are generated") is both possible and desirable, without the need to resort to strictly linguistic concepts. Therefore, the mere mention of an LEX (for example) is out of the question. Contrary to the aforementioned approaches, we impoverish the so-called syntactic component to a single algorithm which applies without faculty restrictions, as this would be the optimal scenario.
Dehaene 2011). However, the existence of more complex conceptual templates must be posited, as our awareness and understanding of the phenomenological world around us is not limited to things in a location (be it concrete or abstract). In previous works, we have outlined a theory of semantic primitives, which we will sum up here. Ontogenetically (and, perhaps, phylogenetically), the most primitive category is the noun, denoting things (i.e., sortal entities in the sense of Borer 2005)41. Things, however, are not isolated, but related in various ways, the most basic of which is a purely spatial relation in terms of central or terminal coincidence (Hale and Keyser 1993, 1997; Mateu Fontanals 2000a). We have, then, two categories so far, one conceptual, the other procedural: N and P, respectively. Further up in the structure, a spatial relation between two entities is a static event, and we have thus derived uncaused verbs (i.e., unaccusatives; different kinds of unaccusative Vs arise when the nature of the P node is varied: telic and atelic, static and dynamic unaccusative Vs). The most complex structures appear when the event has an external initiator, which requires the presence of a cause primitive. We now have caused events, which may or may not include a spatial (i.e., prepositional) relation between entities: their linguistic instantiation is either a (di)transitive or an unergative verb, respectively. Having this conceptual (pre-linguistic) skeleton, we can fill the places with the available information: participants, time, place, etc. Such an approach to pre-linguistic semantic structures has the following advantage: if these structures are built via the same algorithm that applies in other faculties (and in other systems within the physical world, as Tegmark 2007 suggests), the theory is substantially simplified. Moreover, as we will see below, if the driving force behind these structures is the speaker's intention (i.e., the I part of C-I, which has been systematically swept under the rug in Minimalist accounts), there is a principle driving the selection of the Array: choose only those elements that minimally instantiate the semantic structure via linguistic means. Under the simplest assumptions, these generic semantic elements include non-relational elements (i.e., logical arguments) and relational primitives, which link those elements. We will assume the following typology of primitives, based on Mateu Fontanals (2000a).
41 At this point, we are talking about “nouns,” but as the reader will immediately find out, so-called nouns are actually an interface reading of a local relation between a root and a distributionally specified procedural element D, such that N = {D, √}. Moreover, when we refer to “entities,” we do so in a very underspecified way so that we will talk about “events” as “entities,” in a sense that will be clear by Chapter 4. See also Krivochen (2012c) for details.
Non-relational element: X (Hale and Keyser's N, Jackendoff's [THING]), a logical argument.
Relational (predicative) primitives:
Cause (CAUSE / HAVE)
Event (BE – stative / GO – dynamic)
Location (TO – terminal coincidence / WITH – central coincidence)
Those primitives, which we have adopted and adapted from Mateu Fontanals' work (in turn based heavily on Jackendoff 1987, 2002, Hale and Keyser 1993, 1997a, b, 2002), can be linguistically represented (e.g., in a sentence like [John made Mary cry], [made] Spells Out [cause]), but this does not mean that they are necessarily linguistic; rather, they can be linguistically instantiated and Spelled Out if the language L in question has a Vocabulary Item available. On the other hand, other categories like Tense, Aspect and Modality require a linguistic structure to modify. Let us take Tense as an example: it would be a mistake to confuse linguistic Tense with either physical Time (external and objective, defined as the direction towards which entropy increases) or conceptual temporal notions, what we have called "Zeit" in earlier works (Krivochen 2012a). Moreover, the conceptual Zeit is expressible in locative terms (a proposal based on Talmy 2000 and related work). An event, regardless of its linguistic instantiation (i.e., a V, a gerundive nominal or a derived nominal), is expressible as an ordered pair (eX, tY), where e is the generic event denoted by a bare root and t is its anchor in the mental timeline, which is clearly spatial42. Tense, then, is not a viable pre-linguistic primitive, and Zeit can be subsumed under Location (a position that has its roots in Einstein's work on special relativity and its philosophical implications). Aspect and Modality, other possible candidates, as they are commonly defined need to have scope over a defined event (i.e., a {T, V} relation) and a fully fledged dictum, respectively, both eminently linguistic entities. As such, in our opinion, they are not candidates for non-linguistic semantic primitives. With these tools, and provided that the non-relational element "X" comprises the whole set of generic entities (whose identity is irrelevant for the generative component), an "Unaccusative" Relational Semantic Structure (hereafter, RSS) would look as follows:
(6) [event GO [location BOYfigure [[TO] HOUSEground]]]
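Purely for exposition (this is not part of the RM formalism; the type names below are my own), the RSS in (6) can be modelled as a small nested data structure, which makes it explicit that it contains relational and non-relational primitives and nothing else – in particular, no features:

```python
# Illustrative sketch: the RSS in (6) as plain nested data, with no features at all.
# GO/TO and the Event/Location/X types follow the typology of primitives above;
# everything else is an expository assumption.
from dataclasses import dataclass

@dataclass
class X:                    # non-relational element (logical argument)
    concept: str            # e.g. "BOY", "HOUSE" (not tied to any particular language)

@dataclass
class Location:             # spatial relation: TO (terminal) / WITH (central coincidence)
    relation: str
    figure: X
    ground: X

@dataclass
class Event:                # BE (stative) / GO (dynamic)
    kind: str
    location: Location

# (6): [event GO [location BOY [[TO] HOUSE]]]
rss_6 = Event("GO", Location("TO", X("BOY"), X("HOUSE")))
print(rss_6)
```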
42 See Talmy (2000) for a recent review of the foundations of Localism, and Dehaene (2011) for further evidence of the localist nature of human cognition, particularly regarding the “number sense.” See also D’Espósito (2007) for an overview of cognitive / neural models of working memory, which are essential for the localist theory.
The elements of the RSS are not linguistic but, at most, pre-linguistic (or, in the strongest interpretation, completely extra-linguistic). This means that an RSS is not necessarily instantiated as a sentence (i.e., a linguistic unit); rather, it conveys pure semantic substance with the potentiality (but, crucially, not the necessity) of being instantiated via language. As such, the English words we have used serve only expository purposes, since RSSs do not belong to any particular natural language. They can be seen as semantic genotypes (this time, it is a metaphor), conveying semantic substance so underspecified that its instantiation is by no means fixed beforehand. According to the Conservation Principle (Krivochen 2011b; Lasnik, Uriagereka and Boeckx 2005 for an earlier and somewhat different presentation under different assumptions), this information must be carried along the whole derivational path (i.e., information cannot be erased, but must be instantiated in a way in which it can be manipulated by the relevant module: we see here the first important departure from the concept of deletion), which implies that the aforementioned concepts will have to be instantiated in such a way that they can be manipulated by the syntax: in our model (as in Distributed Morphology and some versions of Exo-Skeletal Models – XSM from now on – like De Belder's 2011 or Borer's 2009), those concepts take the form of roots. So far, we have no features or procedural instructions, only semantic primitives, either relational (predicates, interpretable only as determining how the relation between other conceptual objects is to be read) or not (arguments). Apparently, features should be added at this point in the derivation, when a semantic object is transformed into a syntactic object, if we accept that syntactic operations are always feature-driven, as in the strongest constructivist models (e.g., Chomsky 1998, Pesetsky and Torrego 2007, Lasnik, Uriagereka, and Boeckx 2005). (Un-)Interpretability depends on valuation (Chomsky 1999) and, in turn, valuation depends on the category on which those features appear. Those features that enter the derivation unvalued on a category must be eliminated for the derivation to converge. Our objection here is: assuming the input of language is already structured and syntactic (i.e., built via Merge), what is the point of adding features in the first place if the system will then eliminate (some of) them?43 This, without taking into account the stipulation
43 Even if the reader rejects the so-called interpretability-valuation correlation posited in Chomsky (1999), the objection regarding the unprincipled character of feature and value assignment holds. Moreover, the advantage that Chomsky's system gained by unifying two characteristics into one is lost, thus complicating the theoretical apparatus with no (further) empirical support.
that underlies the whole system regarding the fact that a feature [F] enters the derivation valued in category P but not in category Q. Even if the reader does not accept our use of the Conservation Principle, this second objection is valid within an orthodox Minimalist framework. Feature valuation-deletion also entails the following problem, first noticed by Epstein and Seely (2002): the timing of Spell-Out. If we accept the orthodox view that Spell-Out deletes the features that are uninterpretable by LF (i.e., those which have entered the derivation unvalued and have therefore acted as probes, copying the value of a c-commanded goal), then we have to indicate to the system which of the features present at a given derivational point had entered the derivation unvalued. But, in order to do so, we would have to look back at the derivational point immediately before valuation, which is impossible in a derivational approach (even one not as strong as Epstein's 1999), as the derivation is a diachronic process and past states of the system are no longer accessible. The situation can be summed up like this:

(7) Spell-Out timing:
a. Prior to valuation. Result: crash. Uninterpretable features get to the interface levels.
b. After valuation. Result: crash. There is no way of knowing which features entered the derivation unvalued (and were, therefore, uninterpretable by LF).
Chomsky (1999) attempted to solve the problem by stipulating that Spell-Out (i.e., Transfer to PF) takes place "shortly after" valuation, but we do not see how this could help solve the problem. Epstein and Seely also tried to provide an explanation by saying that the transfer takes place within a transformational cycle, that is, not before and not after valuation, but during the process. This is:

(8) Input: [α … uF …] [ … i-F …] → Transformational Cycle → Output: α
    (Transfer to the Interface applies during the cycle)
Interestingly enough, their conception of "traces" is that they are "remnants" of previous derivational stages: they are not entities in their own right, but "diachronic" occurrences of the same element (we will return to the conception of traces as tokens below). This takes seriously the temporal dimension of Merge, its inherent diachrony in an on-line derivational system like the one we argue in
favor of. The system performs a derivational backtracking to the immediately previous derivational step, that is, to the input of the transformational rule, to see which feature(s) was/were unvalued and, therefore, uninterpretable. For us, that is not a satisfactory answer either, since it is simply stated, not principled, and it does not follow from any interface condition or from conceptual necessity that the application of a rule R will always result in a legible object. If it does not, and the object gets transferred (a strong version of the "every phrase is a phase" desideratum), the model is clearly crash-rife, which is not a desirable state of affairs, as it implies a considerable computational overload at the interfaces, which receive much more than they can actually interpret. Moreover, it relies on a computational system that has access to the inner characteristics of the manipulated elements, a substantive complication we will avoid with our "blind" generator. Our solution is far more radical: we simply eliminate features from the picture. If syntax (in the wide sense of "generative engine," as in an OT-like architecture) only cares about putting things together, why should it bother about valuation? After all, feature valuation is an operation that only makes sense when convergence at the interface levels is taken into account, but in the syntax proper (or "narrow syntax," using traditional terminology) it is perfectly superfluous, since nothing "converges" or "crashes" in the syntax: there is no way to determine well-/ill-formedness in a purely generative working space. What we propose regarding all so-called (un-)interpretable features is that they do not exist at all, particularly (but not necessarily) considering the proposal made in Chomsky (1999) that uninterpretability is concomitant with unvaluation. That would be the strong (and optimal) thesis. Instead of a number of features (a number that has increased over the years) which enter the derivation valued or unvalued (with an arbitrary number of possible outcome states) depending on the category they compose, we have a minimal number of (interface-required) interpretable quantum dimensions conveying, in abstracto, all possible outcomes licensed by the interface systems, but adopting one value or another in a local relation with another element whose distribution is specific enough to provide the interfaces with an unambiguous object. We will express this by using the following notation: for any dimension D, [DX] expresses its quantum state (i.e., comprising all possible outcomes). Take, for example, Case. As we have proposed in earlier works, there are only three Case Spheres, any further refinement being a morphological epiphenomenon (on the PF side) or an inference (on the LF side). Should we accept this claim, the Case dimension would look as follows:

(9) CaseX = NOM + ACC + DAT
We will come back to this below, when deriving a sentence following RM claims.
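As a purely expository aside (the class and method below are invented for illustration and are not part of the proposal), the idea of a dimension in a "quantum state" that only adopts a definite value in a local relation with a sufficiently specific element can be sketched as follows:

```python
# Illustrative sketch of a "quantum" dimension such as Case_X = NOM+ACC+DAT:
# it carries all interface-licensed outcomes in abstracto until a local relation
# with a distributionally specific element fixes one of them. Names are expository.

class Dimension:
    def __init__(self, name, outcomes):
        self.name = name
        self.outcomes = set(outcomes)   # all possible outcomes, in abstracto
        self.value = None               # nothing fixed yet

    def collapse(self, local_context):
        """Fix a value once a local element renders the configuration unambiguous."""
        candidate = local_context.get(self.name)
        if candidate in self.outcomes:
            self.value = candidate
        return self.value

case = Dimension("Case", {"NOM", "ACC", "DAT"})   # (9): Case_X = NOM+ACC+DAT
case.collapse({"Case": "ACC"})                    # e.g. a local relation in the vP domain
print(case.value)                                 # ACC
```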
4.2.1 The Architecture of the System

In this section, we will compare our version of the architecture of the mental grammar with other proposals, mainly Optimality Theory (Prince and Smolensky 2004, Smolensky and Legendre 2006) and the orthodox Minimalist Program (Chomsky 1995 et seq.)44. Our goal will be to point out problems we find in those approaches that Radical Minimalism can solve without resorting to extra stipulations, but only to general interface conditions. Our architecture shares some components with OT and also with the traditional Chomskyan version of the MP, but we will see that both the local (the derivational engine) and the global (the architecture of the cognitive system we assume) characteristics of the system differentiate RM from its predecessors. The model of the "faculty of language" as a dynamic workspace that we will assume from now on is as follows, without entering into details (which will be provided further below).
Tab. 1: The Architecture of the RM-System. A Type-Array feeds roots/semantic primitives, under the Conservation Principle, into a dynamic workspace in which the GEN function applies to tokens; the C-I and S-M interfaces Analyze the workspace and Transfer fully interpretable objects.
44 For a presentation of OT-syntax, see Müller (2011b); for a Radically Minimalist version of OT and detailed discussion of the present state of OT-syntax, see Krivochen (2012b)
Thus, if we want to analyze secondary predicates, we have to decide which status they have at the level of syntax and semantics (e.g., within the C-I and S-M interface systems) and which roles the different levels play before Spell-Out. For me, it is at present important to stress that I do not come from the theoretical background of the Mainstream Generative Framework (henceforth, MGF); rather, my recent studies together with my colleague Diego Krivochen (cf. Kosta and Krivochen 2012, Krivochen and Kosta 2013, Kosta and Krivochen 2014) place me within the more Minimalist theoretical approach called Radical Minimalism, as briefly outlined here. Within this framework, thoroughly described in Krivochen and Kosta (2013), a theory of language that not only wants to capture (observe) and describe the data but also seeks to explain how a child learning any language in the world can acquire these structures in a relatively short time, on the basis of poor evidence and the lack of negative evidence, must be able to observe the variety of structures, describe and classify their intra-linguistic and cross-linguistic differences (presumably on the basis of a theory of language), and explain their learnability from the standpoint of universal and specific properties of the structures themselves.
4.2.2 Nom Agree Case and Instr Case in Russian Secondary Predicates

While saying only a few words on the semantic status of the so-called SCs in this section, I shall concentrate on the latter question, namely the behavior of two syntactically crucial properties – the Agree and Case assignment properties – in Russian secondary predicate constructions. My analysis will include not only adjectival SCs of the type in (10) but also nominal SCs (11) and adverbial SCs (12)–(13).

(10) Jai sčitaju egoj pjanymj / *pjanyjj / pjanogo
     'I consider him drunk'
(11) Jai sčitaju egoi durakom
     'I consider him a fool'
(12) Oni vypil čajj xolodnymi
     'He drank the tea(ACC) cold'
(13) Oni vypil čajj golyji / golymi / *golyjj / *xolodnyjj
     'He drank the tea(ACC) naked'
In (10), an ECM matrix verb takes the embedded SC: the NP ego is marked with Acc case, and the argument position of the secondary predicate – an adjective – is assigned Instrumental case. Also, in (10) only the direct object of the matrix clause, not the subject, can be the antecedent of the secondary predicate. Note, however, that both individual-level predicates such as byt' (and the zero copula) and stage-level
predicates such as stat' "to become" can take the Instrumental as a depictive predicate modifying the subject (external argument) in Russian. With an individual-level predicate, however, the situation changes with the event or tense: the nominative is preferred for individual-level readings in the present, and the instrumental can be used with the individual-level predicate byt' only if the situation already lies in the past:
(14) a. он хороший человек 'he is a good person'
     b. он очень хороший человек 'he is a very good person'
     c. он очень хороший профессор 'he is a very good professor'
     d. он был очень хороший человек 'he was a very good person' (NOM)
     e. он был хорошим человеком 'he was a good person' (INSTR)
     f. он был очень талантливым студентом 'he was a very talented student' (INSTR)
It seems that the nominative in the past tense with the verb byt' also has a qualifying meaning, describing the individual's quality in the past as it rests, in a way, in the memory of the speaker, whereas the instrumental in the past tense has the meaning of a characterization of a person during the time span we are talking about, with everything else inferred by conversational implicature (cf. Kosta 2019): for example, that person X is no longer alive (14e), or is a student who is no longer a student (14f) (on conversational implicature and the three levels of meaning, cf. Chapter 1 of this book). Another peculiarity that deserves attention is the possibility of scrambling (moving) part of the SC while leaving a remnant in situ, which leads to a contrastive focus reading:
(15) Студентом я был “боГАТЫМ” — повышенная стипендия да ещё подрабатывал в институтской лаборатории и сантехником в общежитии. (NKRJa)
It is also possible to pied-pipe an SC, which very much resembles the behavior of adverbials, cf. Kosta (1998: 148). I therefore assume that Small Clauses behave similarly to adverbial VP-adjuncts, which generally remain within the lexical VP and either right-adjoin or left-adjoin to VP, depending on the language. In contrast, adverbial adjuncts of the sentential class (cf. Kosta 2013a, b) are hierarchically ordered in syntax according to their modal or temporal affiliation, moving either to SpecTP (temporal adverbs such as often) or to the SpecCP position (epistemic, factive, verificative or evidential adverbs, cf. Kosta 1998: 148, 2013a, b). Overt movement of adjuncts does not take place for the sake of checking and valuing strong features – as it usually does in theories such as Pesetsky and Torrego (1994, 1997) or Biskup (2011)45 – but for reasons that are typical for
unbound processes such as pied-piping (pied-piping was first mentioned by Haj Ross 1967; see also Růžička 1998 and Chomsky 1995).

(16) Кафедру возглавляет выпускник класса Л. Л. Христиансена, кандидат искусствоведения, президент Ассоциации колокольного искусства России, профессор А. С. Ярешко, который ещё будучи студентом консерватории преподавал на отделении руководителей народного хора музыкально-теоретические дисциплины. (NKRJa: Воспитываем подвижников этномузыкантов // «Народное творчество», 2003)

45 As Petr Biskup (2011) tries to demonstrate (but with the wrong examples, cf. the end of this footnote), the reason why manner adverbs move is not the need to check F-features in syntax, but information-structural reasons. Thus, if a manner adverb moves out of its base-generated position, right- or left-adjoined to the lower VP, it is because it has to check an EPP feature above the domain of the left periphery (CP); it thereby gets either focused or backgrounded, as in (b) as opposed to the unmarked reading in (a): (a) Zatím to vypadá dobře; (b) Zatím to dobře vypadá. I find example (b) nearly ungrammatical, and my native-speaker intuition is justified by the entries in the largest subcorpus of the Czech National Corpus (Český národní korpus, ČNK), syn2000, which does not give a single entry/token for the word order (b), but 12 entries, i.e. 0.1 % w.r.t. the entire corpus.
The National Corpus of the Russian Language shows that ECM verbs like "to consider" assign the Instrumental case to the secondary predicate (be it an adjective or a noun) in object position, but never in subject position, whereas subject-control verbs such as "to be" and "to become" assign the Instrumental case to a secondary predicate controlled by the subject of the matrix clause. We give some examples for both. Subject-control verbs:
(17) — Он тебе не чужой. Да и мне… А не хочешь благословлять, не надо. Просто пожелай ему, чтобы он стал хорошим врачом. [Людмила Улицкая. Казус Кукоцкого [Путешествие в седьмую сторону света] // «Новый Мир», 2000]
(18) — Ничего лучшего ты бы не смог придумать! Папаша, да ты, мы видим, просто молодец! Вечер провели в квартире у Чистых прудов, которую московский ФК снимал для своего председателя у бывшего советского министра хлебозаготовок. Последний махал метлой у ворот, придуриваясь под простого мужика; вскоре он стал хорошим капиталистом. [Василий Аксенов. Новый сладостный стиль (2005)]
(19) За коммунизм воюют «добрым оружием» — молоком, маслом, насущным хлебом справедливости! Но и этим добрым оружием должны отважно воевать настоящие бойцы. — Он повернулся к Бахиреву, склонив голову набок, посмотрел на него. — Вот тут говорили о товарище Бахиреве, что он стал хорошим бойцом. [Г. Е. Николаева. Битва в пути (1959)]
Out of all the documents we found in the National Corpus, there was not a single example in which subject control verbs like byt’ or stat’ would assign a Nominative instead of the Instrumental. Given the correlation between the notion of background and the CP phase, this means that a backgrounded manner adverbial moves across the vP and targets the CP phase, cf. Biskup (2011: 164). In cases of subject control in the control infinitive of passive constructions, after NP movement of the internal argument into the external subject position (21), and also in object control in the control infinitive (20), nothing changes in the case assignment within the adjoined small clause. The adjunct always bears instrumental case and is coreferential with the object of the matrix clause in (20) and with the derived PRO subject of the embedded clause in (21).
(20) Ja poprosila ego ne byt’ žestokim
(21) Ja poprosila ego ne byt’ isključennoj iz školy
(Both examples are from Růžička 1999: 29, ex. 55 and 56.)
The broad spectrum of functions and uses of the Russian instrumental in different syntactic positions, described with the Jakobsonian notion of a peripheral case, such as subject and object positions, can typically be detected in contexts of secondary predication46. Strigin (2008: 383seq.) mentions the following uses of the Instrumental in Russian, taken from the seminal work of Nichols (1978), where even intransitive verbs like rabotat’ “to work” or žit’ “to live,” simple transitive verbs like vybrat’ “to elect” or služit’ “to serve,” and unaccusatives like vernut’sja “to return” assign Instr to their complement. It has to be mentioned, however, that the Instr phrase is very closely attached to the verb and that it is not just a simple adjunct but more probably a complement, thus an argumental position (A-position) to which the case Instr is assigned. Under the standard analysis, this position must then also be, by definition, a Theta-position visible to case assignment. We are left with the question of how a verb like rabotat’ or žit’ can assign Instr case in the first place.
46 In Polish, predicative adjectives in subject control constructions usually agree with the subject of the matrix clause and appear in the Nominative, unless the subject of the matrix clause is assigned a case other than nominative, in which case the predicative adjective is assigned the instrumental, cf. (a) Marek_i pragnął [PRO_i być najlepszy w czytaniu], (b) Marek_i kazał Marii_j [PRO_j być bardziej pewną siebie /*pewna siebie], (c) Marek_i twierdzi, że ważne jest [PRO_i/arb być pewnym siebie /*pewny siebie], (d) Jest mu_k źle [PRO_k być starym], cf. Bondaruk 2004: 229seq. All examples are taken from the work of Anna Bondaruk (2004, 2013a, b), who works within Minimalism and whose work I very much appreciate.
Type 1:
(22) a. On rabotaet inženerom
        he works engineer:Ins
        ‘He works as an engineer’
     b. Ego vybrali prezidentom
        he:Acc elected:3Pl president:Ins
        ‘They elected him president’
     c. Kamni im služat oporoj
        rocks they:Dat serve support:Ins
        ‘Rocks serve them as support’
     d. On igral vratarem
        he:Nom played goalkeeper:Ins
        ‘He played goalkeeper’
Type 2:
(23) a. On sidel grustnyj
        he:Nom sat sad:Nom
        ‘He sat there sad’
     b. On vernulsja geroem
        he returned hero:Ins
        ‘He returned a hero’
Type 3:
(24) a. Snačala mašinu vzvešivajut pustuju
        first truck:Acc weigh:3Pl empty:Acc
        ‘First they weigh the truck empty’
     b. On vypil čaj xolodnym
        he drank tea:Acc cold:Ins
        ‘He drank the tea cold’
Type 4:
(25) a. Rebenkom on žil v Pariže
        child:Ins he lived in Paris
        ‘As a child, he lived in Paris’
     b. Xolodnym ètot čaj ne vkusnyj
        cold:Ins this tea:Nom not tasty:Nom
        ‘Cold, this tea is not tasty’
It seems that the case assignment pattern, especially the contrast between Instr and Nom, is rooted in event semantics, as has often been stressed in the literature but never really explained (cf. Steube 1994). Strigin (2008) tries to derive all uses of secondary predicates (depictives) from one general meaning, leaving the rest to inference and context. I maintain that the major difference between Instr and Nom as the case which modifies the subject of the matrix verb is the difference between stage-level and individual-level predicates. Other uses such as (22c) are conditioned by the proper meaning of the Instr case as Instrument. In (22a), it is a temporary profession as an engineer which is predicated over the subject (on).
Similarly, in (22d) it is not a quality of the individual which makes him a goalkeeper but the profession or activity, which itself cannot be permanent. In (23b), it is the result of a change of state which made the person a hero (no one is born a hero, as we all know). In (24b), the tea may have changed its quality while being drunk: in the beginning it might still have been warm, and then, maybe because the person did not pay attention and waited too long before drinking it, it got cold; so in the final stage the tea was cold and the person drank it cold. Both examples under (25) must be interpreted as events that took place in a certain time span (period) and that do not hold in the present state. So if we say that a child will not grow, we might also say that this is not a change of state. Any kind of change of state belongs to the class of stage-level predicates; this is also known from languages such as Spanish or Italian, which have two different verbs “to be”: one for stage-level predicates (estar, stare) and one for individual-level predicates (ser, essere). One can also see this differentiation when asking how are you: one has to say ¿cómo estás? and, in Italian, come stai, not cómo eres, come sei, meaning how are you just now, not in general. In (23a), it is rather the individual property (quality) of the person who is sad than his momentary disposition, and maybe the fact that sidet’ is a state and not a change-of-state predicate qualifies and even forces the selection of Nom. But as far as I know, even the Instr would not be excluded in this context if the meaning were something like in the present situation I saw him sitting sad in his chair. Thus, we are left to explain the use of Instr in depictives where the Instr phrase stands as a second internal argument or adjunct after ECM matrix verbs such as sčitat’ “to consider,” which assigns accusative case to the direct object, while the secondary predicate – a noun – is assigned Instrumental case. This Instrumental comes as a default case, maybe because there is no other possibility to assign another (structural) case in adjunct position. Adjuncts in Russian mostly bear inherent (or lexical) cases. This is also confirmed by the fact that in passives, in which the object (Patient) of the active clause moves to the subject position and is assigned case there, the agent is either suppressed (impersonal passive form) or expressed with the Instr case:
(26) a. Dom stroitsja
        ‘The house is being built’
     b. Dom postroen inženerom
        ‘The house has been built by an engineer’
     (cf. Kosta 1992, chapter 5)
Bailyn (1999: 20) assumes that the head of a Predicate Phrase is an inherent Instrumental case assigner. But how can an empty head such as a zero copula assign case? Thus, it must either be the matrix verb which assigns two cases – Accusative to the direct object and Instrumental to the argument of the secondary predicate – just as ditransitive verbs such as to give assign two cases, namely Dative to the first internal argument (Addressee or Goal) and Accusative to the second internal argument (Theme), or as unaccusatives do after having incorporated into a light vP or a Stage Level Predicate Phrase, formerly also called light vP (cf. Kosta 2011: 276, Kosta and Zimmerling 2020). Or do we have to do with incorporation of a lexical verb into the head of a light verb (as in the case of causative constructions, cf. Kosta 2010, 2011, 2015), which would then allow for double case assignment? In (24b) and (25b), as it would seem at first sight, the adjective, as the overt lexical part of the secondary predicate, has a depictive and not an attributive reading: it is not the cold tea that he drank; rather, the adjective “cold,” being part of the predicate, has a depictive reading and refers to the object. The variation between case assignment, argument position (external or internal argument), control, and the semantics of the predicate seems to play an important role w.r.t. case assignment between the predicate and the subject and/or object. Let us first consider those examples in the National Corpus of the Russian Language where the ECM verb считать ‘to consider’ heads an SC. In (27), we can see that an SC itself can introduce an adjunct clause containing a degree phrase, DegP, and embedding a restrictive DP clause. ECM verbs assign two cases:
(27) Можно сказать, что я считаю его работающим несравненно лучше и гораздо более актуальнее и приспособленнее к современной жизни, чем, скажем, христианство, которое, на мой личный взгляд, изжило себя полностью и превратилось … (Александр Клейн. Контракт с самим собой // «Пятое измерение», 2003)
ECM verbs are – just like any other light verb – not base-generated in the lexical head of the lexical VP; rather, similarly to modals and auxiliaries of different provenience (e.g., causatives, do-expletives, predicate parts of functional verb phrases, German “funktionales Verbgefüge,” like in Rechnung stellen, and other idiomatic chunks), they are base-generated in a semantically more or less empty node, namely v. This, to my mind, is what allows a light verb to be merged with another, lexical predicate, which then bears the lexical or idiomatic meaning, forming a complex predicate structure. But contrary to Bailyn (1999), we do not assume a PredP in its own right, because we do not believe that an empty node can assign case; rather, the incorporation of a lexical verb into a light verb allows for the assignment of two cases: accusative and instrumental.
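The incorporation analysis assumed here can be sketched as follows (my own simplified linearization, given only for illustration; the labels are assumptions, not a structure taken from Bailyn or Bowers):
[vP Subj [v′ [v V+v] [VP [DP Obj:Acc] [V′ tV [NP/AP SecPred:Instr]]]]]
Once the lexical V has incorporated into the light v, the complex head licenses Accusative on the direct object, while the secondary predicate remaining in the domain of the lexical VP is read off with default Instrumental.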
4.2.3 Argument-Small Clauses: Subjects of Causatives (on SC and pro, PRO)
Further confirmation of our hypothesis of a vP–VP shell structure comes from causative verbs (cf. Kosta 2010, 2011, 2015a). Under MGF, subjects of causative verbs entailing argument-Small Clauses that are selected by the V must be represented structurally. Otherwise, the sentence – regardless of the theory of SC – is rendered ungrammatical by the predication condition of Williams (1980).
The contrast in (29) vs. (30) can be explained by the presence or absence of a light causative verb, which strictly governs the empty category pro in the surface object position of the matrix clause and binds it into the SC, coindexed with PRO (the subject of the embedded small clause, or better, lexical VP clause).
(29) Questa musica rende pro_arb [SC PRO_arb allegri]
(30) *This music renders pro_arb [SC PRO_arb happy]
In Russian, the presence or absence of an empty category pro in object position is highly problematic from the viewpoint of RM (cf. Kosta 1992 vs. Krivochen and Kosta 2013). It seems, however, that there is new evidence that in certain contexts Russian, being itself only a semi-pro-drop language, replaces the arbitrary object pro of such generic sentences with the generic lexeme ‘people’:
(30′) Эта музыка делает [pro_arb/людей_arb]47 [SC счастливыми]
Just as in examples (1)–(4), causative matrix verbs assign the direct object Accusative case and assign the lower part of the predication – the adjoined Adjective Phrase of the lower lexical VP – Instrumental case. In Russian, we find that secondary predicates, whether arguments (6) or adjuncts (1–5, 7, 10), are marked with Instrumental case. Thus, Instrumental seems to be the standard marking pattern of Russian secondary predicative SCs (Pesetsky 1982, Bailyn 1995, Bailyn and Citko 1999: 19, Kosta 1992: chapter 5; Franks 1995, chapter 6: Secondary Predication, Krivochen and Kosta 2013: 80).
47 In our book (Krivochen and Kosta 2013: 96ff., section 4.2.2; also, when dealing with object pro, section 4.1.3, p. 87), we treated these kinds of sentences as containing a Universal Quantifier at LF, which may or may not be materialized (apparently, in Russian, it does materialize), and which serves to license a predicate in the embedded clause without resorting to PRO. Because there is no other element to fulfill that role, the predicate takes that UQ as its subject in order to prevent a crash at the interfaces.
4.2.4 PredP (Bowers 1993, Bailyn and Citko 1999)
The theoretical framework which usually serves as the starting point of analysis is Bowers’ (1993) PredP view of predication (adapted for Russian in Bailyn and Rubin 1991 and Bailyn 1995, and then revised in Bailyn and Citko 1999). In this approach, predicational structures are headed by the functional projection PredP. Thus, under the PredP analysis, the structure of a small clause
(31) a. Ja_i sčitaju ego_i durakom
        ‘I consider him a fool’
     is as shown in (31b) (adapted from Bowers 1993 and Bailyn and Citko 1999: 18):
     b. [TP I [PredP [t_i consider_j [VP him_k t_j [PredP t_k Pred [NP a fool]]]]]]
     c. [tree-diagram representation of (31b), not reproduced here]
Under Minimalist assumptions, the morphology of both adjectival and nominal predicates follows from two universal assumptions on the well-formedness of linguistic structures:
(32) a. UNIVERSAL: All NPs (including predicates) must have Case checked in an appropriate configuration.
     b. UNIVERSAL: All APs (including secondary predicates) must be in an agreement relation with an appropriate head. (cf. Bailyn & Citko 1999: 19)
In our view, the structures (31b) and (31c) are not minimalist at all; instead, they assume empty nodes that cannot assign case in the first place if they are not Theta-marked or L-marked by a predicate (e.g., relational prepositions do not assign Case in their own right because they do not have a Theta-role of their own to assign, cf. Kosta 2011: 276). We have already given an alternative approach in section 4.2.3, and we give another alternative in 4.2.6, which seems to be even more minimalist in spirit and which has already been outlined in our recent work (Krivochen and Kosta 2013: 80seq.), after we have clarified and rejected some further possibilities, namely the feature-based approach in 4.2.5.
4.2.5 Against a Valuation Feature-Based and in Favor of a Phase-Based Approach
There are several points which militate against a new projection PrP as proposed by Bowers (1993, 2001) and Bailyn and Citko (1999), and also against a feature-based approach in syntax. Early minimalism, ranging from Chomsky (1989) to Chomsky’s (1995) Minimalist Program (MP), incorporated a weakly derivational approach. The computational system (narrow syntax, C_HL) manipulates a selection of lexical items (LI) by means of a step-by-step application of the operations Merge and Move, until Spell-Out occurs. Then, PF and LF are created, the two levels of representation interfacing with the syntax-external modules A-P and C-I, respectively. Chomsky’s (2000) Minimalist Inquiries (MI) sought to reduce derivational complexity by chopping the lexical array (LA) up into sub-arrays, each feeding C_HL to derive a particular phase – a derivational cycle. Phases are well-defined chunks of a derivation – vP, CP, and DP (more on the reasons for this choice below) – each of which, upon completion, is transferred to the interfaces and thus no longer burdens the computation with its weight. This entails a theory of Multiple Spell-Out (cf. Uriagereka 1999). There is no PrP phase in this theory, and no valuation. We believe that a feature-based approach cannot be the solution. Anna Bondaruk (2013: 82seq.) assumes that the contrast between two sentences with secondary predicates – one with a non-defective, φ-feature-bearing Pred head assigning Instr (33), the other with a defective Pred head with no φ-features and thus with no case to assign (34) – can be explained by the presence of the non-defective head:
(33) [tree diagram from Bondaruk (2013a): non-defective Pred head with φ-features assigning Instr; not reproduced here]
(34) [tree diagram from Bondaruk (2013a): defective Pred head without φ-features; not reproduced here]
In order to account for the differences, and in particular for the nominative case marking in (34), Bondaruk (2013a) adopts the mechanism of case agreement along the lines proposed in Bondaruk (2013b). The proposal is based on two main mechanisms: 1) feature sharing, and 2) probing by a maximal projection. But if the same verb form to be (Pol. jesteś) selects two different Pred heads with different values,
it would be highly problematic to assume that in one case it is the property of the Pred head to assign instrumental just because there is a full set of φ-features, while in the other case there are no features for the probe to find at all. How does the head (call it the probe) “know” how and where to find the appropriate goal if the same verb selects the same Pred head from the same lexical array? There is another problem with the theoretical approach chosen by Bondaruk (2013a, b), namely the blindness of narrow syntax to features. A rather nice conceptual argument for phases concerns the uninterpretability of features in narrow syntax: if uninterpretable features have to be removed before Spell-Out to avoid a crash at the interfaces, the system must know which features are interpretable and which are uninterpretable. However, narrow syntax is supposed to be blind as a bat and would thus need to be able to look ahead (up the tree, to LF and PF) to determine the interpretability of a feature. A transfer of derivational chunks to the interfaces remedies this issue, the search space being reduced to a local domain (a phase) and the rest being left to the interfaces (cf. my plea for an interface-oriented approach in Kosta and Krivochen 2014a, b, Krivochen and Kosta 2013). If we want to work within a feature-driven valuation approach – just as an intellectual challenge and exercise for thought, and for the sake of the argumentation within Valuation Theory (Pesetsky and Torrego 2007) – we would have to consider other features than just the Agree and Case-assignment f-features. We would be forced to assume that there are features to be checked in a functional head which decide between stage-level and individual-level predicate readings. Given that there is only one lexical root √, which is highly underspecified and maybe even semantically empty (like the auxiliaries have and be and the expletive verb to do in English), there must be another projection, call it a Phase, which must be the domain hosting these features. Since there are only three possible functional domains or workspaces/phases where functional features can be hosted and checked (namely CP, which selects TP; vP, which selects VP; and DP, which selects NP), we must rely on these categories. One possible solution would be that stage-level predicates are events and must check strong eventive features against vP (since vP is aspectual, it is the locus of aspectual features). So first, the lexical item (presumably merged under the lexical VP first), as a probe, has to search the first accessible phase domain, which is not VP but vP. The lexical item itself has unvalued and uninterpretable event features, but interpretable lexical semantics. It needs to be checked against the head of v, which itself entails not only features of event structure but maybe (per inheritance) also modal and tense features of the CP phase. The prediction would then be that only events, change-of-state verbs and actions probe within this working space and value their event-features.
As soon as they are valued, they are deleted, and the head of v allows the lexical head V to assign Instr to its specifier (argument) or complement (adjunct). The case Nom (if no stage-level feature is available) is checked against the Spec of the vP (just as in the case of unaccusative verbs, cf. Kosta 2010, 2011, Kosta and Zimmerling 2020). No Agree relation via long-distance agreement is needed, cf. (35).
(35) [CP[±ev-features] … [vP Nom [VP[±ev-features] Instr [√ root]]]] (linearized rendering of the original tree diagram)
Empirical support for phases comes from the EPP feature on T: how does C_HL decide between attracting a subject DP and merging an expletive there? Given the economical preference for Merge over Move (Move is more “costly” than Merge, since Move ≈ Copy + Merge), insertion of there should be expected in every instance, and raising should never be possible. Phases circumvent this issue, since lexical sub-arrays can be defined for every cycle. To get an idea of the technical side of this argument, first consider the following two examples, which illustrate Merge-over-Move. They share one and the same numeration, but one derivation yields the ungrammatical (36b):
(36) Num = {there1, T2, seem1, to1, be1, someone1, here1}
     a. There seems to be someone here.
     b. *There seems someone to be here.
Let us take a closer look at the steps of the derivation of (36a):
A. [TP T[EPP] to be someone here] – Merge T
B. [TP there T[EPP] to be someone here] – Merge there & check EPP
C. [VP seems [TP there T[EPP] to be someone here]] – Merge seems
D. [TP T[EPP] [VP seems [TP there T[EPP] to be someone here]]] – Merge T
E. [TP there T[EPP] [VP seems [TP there T[EPP] to be someone here]]] – Move there & check EPP
Now consider the derivation of the ungrammatical (36b), based on the same numeration, taking another option at step B:
A. [TP T[EPP] to be someone here] – Merge T
B. [TP someone T[EPP] to be someone here] – Move someone & check EPP
C. [VP seems [TP someone T[EPP] to be someone here]] – Merge seems
D. [TP T[EPP] [VP seems [TP someone T[EPP] to be someone here]]] – Merge T
E. [TP there T[EPP] [VP seems [TP someone T[EPP] to be someone here]]] – Merge there & check EPP
The derivational step B of (36b) violates Merge-over-Move, moving someone instead of merging the expletive there available in the Num, which is why the derivation produces an ill-formed sentence. Defining different sub-arrays, as in (37), given the phasehood of vP and CP, can capture this issue derivationally:
(37) a. {{C, T}3 {seem, there, T, to}2 {be, someone, here}1} There seems to be someone here. – (2)
     b. {{C, there, T}3 {seem, T, to}2 {be, someone, here}1} *There seems someone to be here. – (2)
     c. {{C, T}3 {seem, T, to}2 {be, someone, here}1} Someone seems to be here. – w/o expletive
The only way to capture this problem while avoiding the look-ahead problem within a highly non-minimalist valuation approach is to define two different lexical arrays, one in which the verb to be is classified as a stage-level predicate and is merged (or adjoined) at the lowest possible domain, or incorporated, as in the theory we will adopt from Radical Minimalism. If Bondaruk (2013a, b) has to operate with one and the same lexical array (and not two different Pred heads in possibly two different positions), she needs to decide which of the two competing constructions to select, but this is not possible, because in her model both lexical arrays would be identical for narrow syntax. Phases consist of three parts: the phase head H (v/C), its complement ZP (the internal domain), and its edge YP (the specifier domain). After completion, the phase is inaccessible to further operations, as formally captured by the Phase Impenetrability Condition (PIC) (first formulated in Chapter 3, under (2) in this book):
(38) Phase-Impenetrability Condition (PIC): “In phase α with the head H, the domain of H is not accessible to operations outside α, only H and its edge are accessible to such operations.” (Chomsky 2000: 108)
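Schematically (my own illustration of (38), not a configuration taken from Chomsky 2000): in [P … [α YP [H ZP]]], where α is a phase with head H, edge YP and complement ZP, a higher probe P can still access YP and H, but not any material inside ZP, which has already been transferred to the interfaces.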
Our view of case assignment is different: it is based on accessibility, not on feature-related assumptions. Under the theory explained in Krivochen (2012: 78ff.) and Krivochen and Kosta (2013: 93ff.), case is not strictly assigned, but read off a local syntactic configuration. We have already pointed out that an empty zero Pred head should not be able to assign case, except under special stipulations, and have argued that V-to-v incorporation might help solve the problem.
Defectivity in a certain head is a stipulation that should be avoided if possible; therefore, we will stick to fully LF-interpretable nodes. Let us consider a Hale and Keyser (2002)-like structure for transitives:
(39) [TP [vP [VP [PP]]]]
with vP comprising causativity, VP eventivity, and PP introducing two arguments: theme and location48. If an argument is accessible to T, this local configuration will be read off at LF and PF as Nominative, as argued in past works. Accessibility has to do with the absence of an intervening head and with the interfaces filtering out sub-optimal candidates, such that a relation that would be forced by orthodox Minimality could be overlooked if necessary (as in the case of Latin long-distance anaphors, which are bound outside their governing category, thus ignoring their closest possible governor). This means that, if there is no initiator, the outer argument of P can receive Nom case, since the more local relation between v and SpecPP could be ignored because it would lead to a crash at the interfaces. If, however, there is an argument between T and SpecPP, then this argument will be Nominative (the subject/initiator of transitive caused events) and the theme will receive an Acc-related case, in turn related to its semantic contribution. In this respect, P is essential not only for Dative interpretation (a locative case par excellence), but also for other locative-related cases, like Genitive (e.g., the Source Genitive in Ancient Greek) and Ablative (e.g., Latin ubi-adjuncts). Instrumental case seems to us to be a variant of non-Nominative which, like any other case, is interpreted within the vP domain. The contrast Bondaruk (2013) notices can thus be subsumed under the two variants of P (Hale and Keyser 2002):
Theme [TO] Location (e.g., prepositional indirect object constructions, like ‘Give the book to Mary’)
Location [WITH] Theme (e.g., double object constructions, like ‘Give Mary the book’)
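Under a Hale and Keyser (2002)-style decomposition, the two variants of P can be sketched roughly as follows (my own linearizations, for illustration only):
[vP they [VP give [PP book [P′ [TO] Mary]]]] – Theme [TO] Location (prepositional indirect object construction)
[vP they [VP give [PP Mary [P′ [WITH] book]]]] – Location [WITH] Theme (double object construction)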
Semantically interpretable variants of the P head can account for variation in Case “assignment” without the need to resort to feature processes, if the definition of Locality is semantically sensitive (as we proposed in Krivochen and Kosta 2013: 179) and thus derivable as a third-factor (i.e., interface-based) constraint over syntactic representations.
48 It is possible that each of these nodes configures a cycle, perhaps with v–V constituting a single cycle (i.e., a single phase), if the definition of phase is sensitive to the kind of information conveyed by each projection (locative, causative-eventive, and temporal, respectively).
Perhaps this needs to be explained in terms of two competing constructions of Small Clauses, one assigning the instrumental and the other assigning the accusative. I shall now proceed with section 4.2.6, in which our own theory of Radical Minimalism and case will be presented.
4.2.6 Expletive pro in Small Clauses: An Incorporationist Perspective
In this section, we will focus on the derivation of so-called small clauses (SC), in which the null element pro plays a very important role49. To account for the derivation of SCs in an elegant way, we will take up an intuition from the beginning of Abney’s thesis, namely that there is an Agr-like node in nominal structures. That very same Agr node would be present, for example, in SCs. Bosque and Gutiérrez-Rexach (2008: 423) define SCs as “quasi-propositional units without verbal inflection” (our translation). The absence of morphological verbal inflection is often confused with the absence of agreement, especially within the Anglo-Saxon linguistic literature. In English, SCs show no overt agreement between the predicate and the argument. Thus, a sentence like
(40) I consider [him intelligent]
has been analyzed as a transitive sentence with an “ECM” verb and an SC as its complement, whose structure is as follows (adapted from Chomsky 1986)50:
(40′) [AP him [A′ intelligent]]
A is a predicate and thus licenses a specifier (Hale and Keyser 1993), the external argument and so-called subject of the SC. In Minimalist accounts, exceptional accusative case is assigned to [him] by the node v*, to which V adjoins via head-to-head movement during the derivation.
49 On pro-drop cf. Franks (1995, chapter 5), Kosta (1992), but above all the recent study by Krivochen and Kosta (2013) as well as Corbara (2012), and, from a diachronic point of view, Meyer (2012) and Timberlake (2014).
50 We will ignore proposals that take SCs to lack a head and take the argument and the predicate to be sister nodes in a configuration like (i), since, even within traditional X-bar theoretic assumptions, this violates endocentricity: (i) [SC [DP] [AP]]
This node would have a valued (thus interpretable, following Chomsky 1999) [Transitivity] feature and unvalued φ-features to check with the external argument of AP, the closest goal in its c-command domain. While this acknowledges the fact that [him] is an argument of [intelligent] and not of [consider], the failure of this proposal lies in the fact that it does not provide an explanation for the morphological concord between Prn and A, visible, for example, in Spanish, Polish, Serbian, or in Czech and Slovak (in ECM and AcI constructions, and in constructions of resultative state)51:
(41) a. Los considero [los inteligentes] (Sp.)
        CL.Acc.Pl consider.1Sg.Pres.Impf [CL.Acc.Pl intelligent.Pl]
        ‘I consider them intelligent’
     b. Znalazłem go pijanego /*pijanym (Pol.)
        pro found.1Sg.Pret him.CL.Acc.Sg drunk.Acc.Sg /*drunk.Instr.Sg
     c. Našao sam ga pijanog /*pijanim (Srb.)
        pro found.1Sg.Pret him.CL.Acc.Sg.m drunk.Acc.Sg.m /*drunk.Instr.Sg.m
     d. Švejk jí měl pěkně vyčištěnou. (Cz.)
        Švejk her.CL.Acc.Sg.f had.3Sg.Pret nicely cleaned.Acc.Sg.f
     e. Mat’ ju našla vyplakanú (Slk.)
        mother her found.3Sg.Pret crying.Acc.Sg.f
     (examples 41b, c and e from Bailyn and Citko 1999: 27, [19a–c])
This concord was accounted for in the early 1990s from the perspective of Agr projections. According to this analysis, there is an AgrA node dominating AP, and the external argument of the adjectival SC would raise to take inflectional features in a Spec-Head relation with AgrA0. This proposal was abandoned by most researchers when Chomsky (1995) argued against Agr projections on the grounds that they are nothing more than φ-feature receptacles, without interpretable information for the interfaces, thus implying a violation of Full Interpretation; and Spec-Head relations were replaced by probe-goal relations, which can take place at a distance, Spec-Head configurations being the result of independent EPP-related phenomena.
51 In Kosta and Zimmerling (2020), section 6.1, a similar analysis is given for copular predicative Nom-Nom, (Nom-Acc) and Nom-Instr Small Clauses and unaccusatives. The difference between Nom-Nom and Nom-Instr case assignment is motivated by the distinction between individual-level predicates (ILP) and stage-level predicates (SLP), where the head ILP0 contains -inchoative/+stative or +resultative features and assigns Nom via Spec-head agreement to SpecILP, whereas the head of a lower V0 “probes” against the head SLP0, checks unvalued +inchoative/-resultative features, and assigns Instr to SpecSLPP.
Krivochen (2010a) intends to recover Abney’s idea of D (D ≠ Determiner) as an Agr node, but with a fundamental difference with respect to traditional agreement projections: the presence of interpretable features. Krivochen (2010a) proposed that the procedural category D comprises four semantically interpretable dimensions:
(42) a. Referentiality
     b. Definiteness
     c. Person / Number
     d. Case
If Referentiality is a dimension, then it can take a negative value (since there is no referent-assignment process affecting the SC, as it is not a sortal entity, see Panagiotidis 2010), and the projection would not be a referential DP (i.e., a denotative definite description), but it would keep the agreement φ-features. [-Ref] is just as interpretable as [+Ref], so there is convergence at the interface levels without a violation of the Principle of Full Interpretation. Every SC, then, would be a DP projection (in traditional X-bar terms; a {D} structure with a procedural D nucleus in the atomic phrase-structure theory of Krivochen 2011e), which takes the projection of the lexical predicate A as a complement, with which it forms an extended projection (Grimshaw 1991). Given this scenario, the derivation of the small clause in (40) would be (43), following Krivochen (2010a):
(43) [DP Him [D′ D[-Ref, u-φ] [AP him [A′ intelligent]]]]
But this representation is not yet correct in Radically Minimalist terms, since (among other complications) it assumes that A is a primitive category of the grammatical system, along the lines of Hale and Keyser (1993 et seq.).
Following Mateu Fontanals (2000a, b), Jackendoff (1990, 1997, 2002) and our own work within Radical Minimalism (see especially Krivochen 2011d, e, Krivochen and Kosta 2013: 80), we will decompose A into a configuration of the type [P…√], since to ascribe a property to an entity is to conceptually locate that entity within the sphere of the property, from a localist point of view (or to assert that it is part of the predicate’s extension, from a logical point of view). Moreover, individual-level adjectival predication involves an atelic static eventive node, which is why a [BE] {event} node is necessary for semantic interpretation. We do not need to posit unvalued referential features in D, since we have already abandoned the probe-goal relation and Agree as an operation of the computational system. Such a modification yields a simpler (and theoretically more correct) representation, namely (44):
(44) [D [D] [event [BE] [P Him [[WITH] √intellig-]]]]
In this representation, the root would incorporate à la Baker (1988) onto P and then onto {event} and D0, motivated by the “defective” character of P and the eventive node at PF (Hale and Keyser 2002). This process can also be accounted for via our External Merge Theory of Movement in terms of Merge of a token of the root in the aforementioned nodes; when Chain Reduction (see Nunes 2004 and Chapter 5 of Krivochen and Kosta 2013) takes place at the morphological component, our Anti-Spell-Out principle comes into play. For semantic purposes, however, all the properties of the relevant nodes count, and therefore the final interpretation is that of an individual-level predicate (by virtue of the nature of P), which applies to a sortal element in an atelic, static way (by virtue of the procedural instructions conveyed by the eventive node). {D} transforms the whole clause into an argument, nominal in nature for conceptual purposes. The representation can also explain those SCs in which the argument is placed after the predicate, as in:
(45) Consideran [SC inteligente a María] (Sp.)
     consider.3Pl.Pres.Impf [intelligent to Mary]
     ‘They consider Mary intelligent’
The SC, in this case, has no theme in informational terms, becoming a thetic clause. The ungrammaticality of similar structures in English is derived from interface conditions, namely from the semantic underspecification of Spanish and English roots, which leads us to review the proposed structures. In Spanish, the root √consid- (and the other roots that can be instantiated as verbs taking an SC as a complement), within a “lexical” verbal/eventive level, allows the incorporation of yet another root to form a complex predicate, [V √consider-intelligent], although this incorporation is not strictly necessary. The root in question is semantically underspecified, but not severely so, so there is no collapse at the explicature level if the root is left alone in V0. Incorporation enables the construction of an explicature with a more specific eventive reference (in combination with Time and Aspect nodes), while the root in situ leaves a “lighter” main V. The structure of (45), therefore, is (46):
(46) Ellos [[V √consider-intelig-] [D [D][event [BE] [P María [[WITH] √intellig-]]]]]
In English, on the other hand, it seems to be the case that the verbal roots with which SCs appear are semantically specified enough to saturate the valence of the terminal node V0, so the addition of another root is not necessary for the construction of an explicature. Therefore, optimally, there is no incorporation, since it would involve extra processing effort with very little benefit, if any at all. The only option left, according to basic economy principles, is to have a “heavy” verb that selects an SC in which the subject has raised for thematic reasons, as mentioned above. Taking up the question of how to explain the assignment of Instr to the lower nominal or adjectival secondary predicates, we can use the same mechanism of decomposition of lexical categories and incorporation. For example, the light verb is a kind of semantically defective predicate which does not have the ability to assign a Theta-role to its internal argument in its governing category and thus, by definition, cannot assign case on its own. It needs to attract the lexical verb. Only then can it assign a Theta-role to the internal argument and accusative case in SpecvP. There is, however, still the problem of how to explain the assignment of Instr case to a position which seems to be either an argument (complement) or an adjunct position (on the differentiation between A- and A′-positions from the viewpoint of RM cf. Kosta and Krivochen 2014). The scenario would then be the following: rabotat’ (V, intransitive) vs. rabotat’ uchitelem (vP, transitive).
At the level of the system, there seem to be two different verbs: (1) rabotat’, with the general meaning “to work,” and (2) rabotat’ uchitelem, with the stage-level meaning “to exercise the profession of a teacher.” Since (2) is a stage-level predicate which is true only for the time span of the profession, it must have been derived, etymologically and also derivationally, from the more general (intransitive) lexical verb rabotat’ (not limited by any temporal, modal or aspectual restrictions) via incorporation of the general lexical verb into vP. Semantic decomposition of the verb rabotat’ uchitelem is thus an important null hypothesis (prerequisite) for my analysis:
Lexical Array: {√RABOT-, √UCHIT’-}
(i) Merge the root √RABOT- with the lexical VP
(ii) Merge the root √UCHIT’- with the complement/adjunct of the lexical VP →
(iii) Re-Merge (vP) with the head of the light vP (Merge over Move (phase model))
(iv) The product of the incorporation assigns a theta-role to VP (via L-marking); V assigns Instr by default to the complement/adjunct.
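The output of steps (i)–(iv) can be sketched roughly as follows (my own linearization; the exact attachment site of the Instr phrase is an illustrative assumption, not taken from the works cited):
[vP [v √RABOT- + v] [VP t√RABOT- [√UCHIT’-:Instr]]]
The incorporated complex licenses the lower VP via L-marking, and the remaining root is spelled out as the noun uchitelem with default Instrumental.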
To summarize, we have accounted for the derivation of Small Clauses without the need to resort to pro_expl to capture cross-linguistic changes in the argument-predicate phonological order.
4.3 Further Perspectives of Research on Predicative Instrumental vs. Nominative
In this chapter, I have been able to show how the system works in Russian and, at least to some extent, in Spanish and English. This analysis should, however, be extended to other Slavic languages. Since I work in the domain of comparative Slavic syntax, I would like to sketch some further perspectives of research, pointing especially at Polish syntax as compared to Russian and, later, Czech. I would like to mention the same problem in Polish, which has already been addressed in some recent functionally oriented studies on the predicative instrumental vs. nominative. Gerd Hentschel is perhaps the best-known specialist working on secondary predicative constructions in Polish and other Slavic languages (cf. the summary in Hentschel 2009), but his work is not formal and describes the data without any theoretical framework. Hentschel also mentions studies in which the use of the predicative Instrumental vs. Nominative has been assumed to be closely connected to the opposition change of state vs. permanence52. I believe that the long-standing discussions and still ongoing hot debates on this distinction have to do with the unclear status of the definition of change of state vs. permanence. Moreover, the facts are more complicated, since we often have to do with the dynamics of language evolution, and we cannot capture this very fine-grained functional difference synchronically, since everything is in flux, so to speak. There is also the fact of language contact, which can be seen especially in the case of Czech and Sorbian, which have largely lost the instrumental in predicative position (under the influence of German), and, on the other hand, the tendency toward typological language shift in the South Balkan languages Macedonian and Bulgarian (which are more or less “case-less”), with Slovenian (maybe also due to language contact) and Serbian, Croatian and Bosnian somewhere in between these two influences.
52 “Sehr oft ist über den “primären” prädikativen Instrumental gesagt worden, er stehe bei begrenzter Gültigkeit des für einen Referenten prädizierten Sachverhalts (z.B. Jakobson 1936, Wierzbicka 1984), der Nominativ entsprechend bei unbegrenzter Gültigkeit. (Die “begrenzte Gültigkeit” der Prädikation ist notabene ja auch ein Teil der gängigen Definition von Depiktiven.) Dem ist zumindest mit Bezug auf die modernen slavischen Sprachen ebenso oft widersprochen worden (z.B. Kacnel‘son 1972, Klemensiewicz 1926).” Gerd Hentschel zeigt aber, dass für frühere Sprachzustände der Instrumental zumindest bevorzugt bei Prädikatsnomina steht, die auf einen Übergang von einem Zustand (im weitesten Sinne) A nach B (bzw. non-A nach A oder A nach non-A) abheben (Hentschel 1993, Moser 1994). “So zeigen früheste Belege den Instrumental vornehmlich bei resultativ-prädikativen Komplementen, bei autonymen Nominalgruppen (also oft bei Eigennamen), nach Verben des Nennens bzw. Benennens, aber auch in allen Kontexten, wo die Kopula oder das Auxiliarverb sein eine Lesart eines stage level Prädikats, also werden, annehmen, so im Aorist, Futur und auch in modalen Kontexten wie dem Imperativ.” (Hentschel 2009: 378). Hentschel fails to mention formal approaches, and of course, at the time he wrote this article, my own research was not known to him; cf. now Kosta and Zimmerling (2020, in print), chapter 6.1 with figures 1 and 2, for a new take regarding this distinction, in which stage-level predicates assign Instrumental from an aspectual head Asp0 to the specifier of a lower Stage Level Predicate Phrase.
5 Case and Agree in Slavic Numerals – Valuation of Features at the Interfaces within a Phase-Based Model
5.1 Introduction
The quantification of nominal phrases in Slavic is one of the most analyzed but still poorly understood and controversial points of discussion in both traditional descriptive and generative studies. Numeral phrases in Polish and Russian, as in other West and East Slavic languages, exhibit puzzling behavior. In particular, to judge by the morphosyntax of the constituents of the numeral phrase, the internal structure of the phrase appears to depend upon the external context of that phrase. The problem is a well-known paradox of Slavic syntax, and a considerable literature has been devoted to it. The purpose of this chapter, which complements other work by Babby (1987), Franks (1995), Kosta (2008), Rappaport (2002, 2003) and Rutkowski (2006), is to develop a Minimalist theory of formal features, case syncretism, and lexical representation which effectively removes the paradox by maintaining (despite appearances) the usual independence of internal phrase structure and external context, while placing the morphosyntactic variation where such anomalous facts belong: in the lexicon and at the interfaces. In the first part, I shall give a short overview of the puzzle concerning the problems of case and number assignment and agreement in numeral and numerative constructions in Russian and Polish. In the second part, I shall try to offer a solution to some problems so far unresolved, namely in what way Agreement and Case assignment are driven by interface conditions of either PF (SM) or LF (C-I) in the recent approach of the Strong Minimalist Thesis (Chomsky 2005). The patterns of Case assignment will be analyzed under the notions of Agree and feature checking within the descriptive generalization of the so-called heterogeneous vs. homogeneous paradigm. This section tries to refine the analysis of the “heterogeneous” vs. “homogeneous” morphosyntax of numeral expressions, making use of recent syntactic developments, namely the emergence of the case-assigning mechanism Agree. The key insight is taken from Gilbert Rappaport’s (2002) approach to case assignment, refined and enriched by the mechanism of interpretability and valuation of features from the recent studies on the syntax of valuation and the interpretability of features developed by David Pesetsky and Esther Torrego (2004).
5.1.1 A Short Overview of the Puzzle
Quantified nominal phrases in Russian and other Slavic languages include numeral phrases (such as odin, dva, tri, sorok, pjat’desjat, … pervyj, vtoroj …), different kinds of quantifiers (such as každyj, vse, nekotorye, kto-to, kto-nibud’, malo, mnogo, množestvo, men’šinstvo), but also other quantifying expressions (e.g., numeratives and container constructions such as stakan, košelek, korobočka, kilo, litr …).
Crucially, the syntactic distribution of quantified numerative and numeral phrases depends on the semantics of countability of NPs (cf. Krifka 1989). Unlike mass nouns, individual items can be quantified in number. Mass nouns, on the other hand, always need a combination of a numeral and a numerative (e.g., stakan, etc.) to become countable items, cf. odin stakančik vody “one glass (of) water,” dva stakančika vody “two glasses (of) water.” A central problem for the analysis of quantified (numeral) expressions in Russian concerns case and number assignment and agreement. In his book Parameters of Slavic Morphosyntax, Steven Franks (1995) mentions that such expressions display many unusual and mysterious morphosyntactic properties, among which are that (a) the numeral sometimes governs the nominal material following it and sometimes agrees in case with it, and (b) the numeral phrase sometimes induces subject-verb agreement and sometimes does not (cf. Franks 1995: 93). Gilbert Rappaport (2003) describes the phenomenon as a paradox that lies in the fact “that the numeral acts like a nominal head of phrase in the direct cases (1a), but like a modifier of the quantified noun in the oblique cases (1b)”, cf.:
(1) a. videtʹ pjatʹ-ACC=NOM krаsivych-GEN.PL ptiček-GEN.PL
       ‘to see five beautiful birds’
    b. voschiščatʹsja pjatʹju-INS krаsivymi-INS.PL ptičkаmi-INS.PL
       ‘to be enthralled by five beautiful birds’
    (examples from Rappaport 2003)
The syntax of (1a) is that of, say, videtʹ stаju-ACC.SG krаsivych-GEN.PL ptiček-GEN.PL “to see a flock of beautiful birds,” in which stаja “flock” is the head of the direct object noun phrase (in the accusative case) and krаsivych ptiček “beautiful birds” is a genitive complement of stаja. In (1b), each constituent of the phrase stands in the instrumental case required by the governing verb, as if pjatʹju-INS krаsivymi ptičkаmi-INS.PL “five beautiful birds” were a noun phrase headed by “birds”-INS.PL, with “five” and “beautiful” as modifiers. Following Babby (1987 and elsewhere), we term the morphosyntactic pattern displayed in (1a) HETEROGENEOUS, and that displayed in (1b) HOMOGENEOUS.
To actually posit a contrast in syntactic structure to account for this contrast in morphosyntax would lead to overgeneration (deriving constructions which are actually not grammatical), and a complicated apparatus would be required to rule out the ungrammatical possibilities. The challenge has been to find alternative means of accounting for the apparent variation in structure. We define a NUMERAL to be a category (part of speech) that displays heterogeneous agreement in the direct cases (nominative and accusative) and homogeneous agreement in the remaining, oblique cases. This category is split into two in Russian: LOWER NUMERALS assign singular grammatical number under heterogeneous agreement (videtʹ dve rekí-GEN.SG “to see two rivers”), while HIGHER NUMERALS do not (videtʹ pjatʹ rek-GEN.PL “to see five rivers”). In contrast, the number odin “one” displays homogeneous morphosyntax regardless of syntactic context and is therefore an adjective. Certain other numbers (e.g., tysjačа “thousand,” million “million”) display heterogeneous morphosyntax in all contexts and are nouns. Thus, not all numbers are categorially numerals (Rappaport 2003). This contrast between homogeneous and heterogeneous NPs (or DPs) is itself not difficult to formulate, but it is difficult to explain by independent properties of the Agree/Case system within the approach of Minimalism in its recent development (Chomsky 2005). Let us first consider some of the examples that I have already discussed at large in Kosta (2008: 248).
(2) Nado skazat’ čto èti dve zadači budut vypolnjat’sja orkestrom Volga-Band
    must say.Inf Comp these.NomPl two.CardNum exercises.GSgF will.be.3Pl conducted orchestra.Instr Volga-Band
    ‘It should be mentioned that these two exercises will be conducted by the Volga-Band’
In this example, the nominal phrase stands in a case position to which narrow syntax assigns a direct case (nominative), and the numeral dve “two” acts like the nominal head of the phrase, assigning Genitive to the quantified noun zadača as if it were a complement. When this same quantified NP stands in a position assigned an oblique case, the quantified noun acts like the head of the phrase, with the numeral falling in line with the other modifiers in agreeing with that quantified noun. The latter descriptive observation is exactly what happens in example (3), where the lower numeral četyre “four” stands in a case position governed by the head of the PP s “with,” which governs the Instrumental in Russian, and all the material must agree in Number and Case with the numeral:
(3) Tak, v oktjabre “Volga-Bend” budet vystupat’ s četyr’mja amerikanskimi muzykantami.
The first pattern can descriptively be called the heterogeneous pattern of Case assignment and Agreement, whereas the latter can, for expository reasons, be called the homogeneous pattern.
In example (3), we can see that in Russian this descriptive generalization holds for the lower numerals 2–4 (called Paucal in the newer terminology), where the numeral četyre is part of the homogeneous pattern, whereas in example (4), where the paucal tri “three” stands in the structural case position of Nominative, the heterogeneous pattern must be chosen. Furthermore, this generalization is valid for masculine and feminine gender:
(4) Saratov posetjat tri dirižera i odin trubač.
    Saratov visit.3Pl three.Nom director.GSgM and one trumpeter
    ‘Three directors and one trumpeter will visit Saratov’
5.1.2 Higher Numerals in Russian
The same observation can be made for the contrast between the higher numerals (“5” and higher, i.e. ∃(x) ≥ 5) and their nominal complements in direct case position (structural Nominative and Accusative) vs. oblique case position: in a structural context (with the quantifier in nominative or accusative case), the quantifier assigns Genitive Plural to its complement, but in an oblique context, where either the verb or the preposition assigns an oblique case (in (5) the Instrumental and in (6) the Prepositive), the numeral and the rest of the nominal material behave like nouns within a noun phrase, where all the elements agree in Case and Number. However, the numeral itself has the morphology of a singular noun:
(5) Prožila ona rovnym sčetom pjat’ let, čtoby 1 janvarja 1998 g. obratit’sja skromnymi pjat’ju rubljami ….
    lived.Pret.3Sg she exact.Instr bill.Instr five.Acc.Sg years.Gen.Pl Comp 1 January 1998 get.along modest.Instr.Pl five.Instr.Sg rubel.Instr.Pl
    ‘She spent exactly five years, in order to get along (come along) with a modest five rubles’
(6) Kstati, o pjati rubljax, točnee, o pjatirublevoj kupjure.
    in.fact about five.Prep.Sg.Fem rubel.Prep.Pl.M more.precisely about five-ruble.Prep.Sg.Fem note.Prep.Sg.Fem
    ‘By the way, concerning the five rubles – more precisely, the five-ruble note.’
It is worth mentioning, for the analysis in part 3 of this chapter, that the nominal pattern with the agreeing (homogeneous) paradigm can be split, so that the modifier of the NP, skromnymi “modest,” and the nominal head rubljami need not be adjacent. We will see how these data compare to the situation with Polish numerals.
5.2 Three Types of Cardinal Numerals in Polish
5.2.1 The Classification of Paweł Rutkowski
Similarly to Russian, Polish numerals are not a homogeneous class. The semantic set of cardinals can be divided into three distinct syntactic subclasses (cf. Rutkowski 2006: 90; Rutkowski 2001, 2002a, Rutkowski and Szczegot 2001; see also Neidle 1988, Franks 1995, and Giusti and Leko 1996 for similar classifications proposed for other Slavic languages):
– A-numerals (adjectival numerals) – the four lowest numerals (odin, jeden “one,” dva, dve, dwa “two,” tri, trzy “three” and četyre, cztery “four”)
– N-numerals (nominal numerals) – very large numerals such as tysjača, tysiąc “thousand,” milion “million,” milliard(a) “billion,” etc.
– Q-numerals (numerals proper) – numerals such as pjat’, pięć “five,” pjatnadcat’, piętnaście “fifteen,” pjat’desjat, pięćdziesiąt “fifty” or pjat’sot, pięćset “five hundred” (this is the biggest subclass).
These three subclasses differ in terms of case assignment. N-numerals resemble nouns because they always assign genitive to the quantified noun. Q-numerals require the noun to take genitive only when the larger nominal expression is in a structural case (nominative or accusative) position; in the context of inherent cases (genitive, dative, locative and instrumental), Q-numerals agree in case with the noun. Finally, A-numerals agree with the quantified noun in all case contexts. These three patterns of morphosyntactic behavior are illustrated below; note that the verb lubić “like” assigns accusative, whereas the verb doradzać “advise” requires dative (all examples taken from Rutkowski 2006).
N-numerals:
(7) a. Cezary lubi milion osób. [structural case context]
       Cezary likes million-ACC people-GEN
       ‘Cezary likes one million people.’
    b. *Cezary lubi milion osoby
        Cezary likes million-ACC people-ACC
(8) a. Cezary doradza milionowi osób. [inherent case context]
       Cezary advises million-DAT people-GEN
       ‘Cezary advises one million people.’
    b. *Cezary doradza milionowi osobom
        Cezary advises million-DAT people-DAT
Q-numerals:
(9) a. Cezary lubi pięć osób. [structural case context]
       Cezary likes five-ACC people-GEN
       ‘Cezary likes five people.’
    b. *Cezary lubi pięć osoby [structural case context]
        Cezary likes five-ACC people-ACC
(10) a. Cezary doradza pięciu osobom. [inherent case context]
        Cezary advises five-DAT people-DAT
        ‘Cezary advises five people.’
     b. *Cezary doradza pięciu osób [inherent case context]
         Cezary advises five-DAT people-GEN
A-numerals:
(11) a. Cezary lubi trzy osoby.
        Cezary likes three-ACC people-ACC
        ‘Cezary likes three people.’
     b. *Cezary lubi trzy osób
         Cezary likes three-ACC people-GEN
(12) a. Cezary doradza trzem osobom.
        Cezary advises three-DAT people-DAT
        ‘Cezary advises three people.’
     b. *Cezary doradza trzem osób
         Cezary advises three-DAT people-GEN
The following table summarizes this complicated pattern of case assignment:
(13)
Genitive assignment      N-numerals   Q-numerals   A-numerals
in structural contexts   +            +            −
in inherent contexts     +            −            −
Tab. 1: Genitive Assignment in Polish Numeral Expressions (Rutkowski 2006: 90–91)
These three subclasses differ in terms of case assignment. N-numerals resemble nouns because they always assign genitive to the quantified noun. Q-numerals require that the noun takes genitive only when the larger nominal expression is in a structural case (nominative or accusative) position. In the context of inherent cases (genitive, dative, locative and instrumental), Q-numerals agree in case with the noun. Finally, A-numerals agree with the quantified noun in all case contexts. These three patterns of morpho-syntactic behavior are illustrated below: note that the verb lubić “like” assigns accusative, whereas the verb doradzać “advise” requires dative (Rutkowski 2006: example (7)/(12)). (14)
[DP D0 [QP [Spec A-numeral] [Q′ [Q0 Q-numeral] [NP N-numeral]]]]
This analysis can explain the complex pattern of case assignment in Q-type expressions. If functional (as opposed to lexical) elements are inserted into the syntax after inherent case assignment but before structural case assignment, their inability to assign case in inherent case contexts is straightforward: the noun has already been assigned an inherent case value. Thus, Q-numerals, being functional, can only assign genitive in structural contexts (see Veselovská 2001, Rutkowski 2001, 2002a for a more detailed analysis).
5.2.2 The Accusative Hypothesis for Higher Numerals (5 and Above) by Miechowicz-Mathiasen
A slightly different proposal, especially for the higher numerals (5 and above), which constitute a larger group and are opposed to the lower (paucal) numerals 1–4, is made by Miechowicz-Mathiasen (2012). Her paper addresses the Accusative Hypothesis, a descriptive claim about Polish numeral expressions with higher numerals according to which these numerals are intrinsically accusative. Contrary to what she proposes, I would like to take a more conservative and general stance and define the category that quantifies over numerals as the functional category Q, whose projection QP heads different types of projections, namely PartP, PaucP, and NP, for Case reasons. Miechowicz-Mathiasen (2012) analyses Polish numerals as a Numeral Phrase whose head can carry different features:

Tab. 2: Miechowicz-Mathiasen (2012): Number head types
a. Num0 [+Q] [+plural] – numerals 5 and above
b. Num0 [+Q] [+paucal/plural] – numerals 2–4
c. Num0 [±plural] – unquantified noun phrases (also with 1)
She further proposes that "in the process of numeralisation, the once nominal nouns instead of merging in their NP (N0), merge in the counted noun's Num0, which explains both why they have lost their own φF[α]-features and why they exhibit the gender properties of the counted noun." While we believe that this could be a plausible explanation for the mixed character of the numerals between lexical and functional categories, it is not clear how Case assignment and the Agree pattern work in her system. In particular, we are not persuaded that the examples she gives necessarily prove the numeral pięć "five" in Polish to be an old accusative, because this numeral can be either ACC tych pięciu mężczyzn or GEN tych pięciu mężczyzn. The other arguments in favor of an accusative hypothesis for pięciu are not convincing either. Instead, the examples Miechowicz-Mathiasen gives show that the form pięciu could be either a Gen-Acc for animate male nouns or a partitive Genitive.
(15) a. Czekałam około godziny/minuty/tygodnia.
        waited-1SG.PRET around hour-GEN/minute-GEN/week-GEN
        'I've waited about an hour/minute/week.' (structural GEN checked by około)
     b. Było około pięć / pięciu tysięcy Polaków.
        was-3SG.N around five-ACC / five-GEN thousands-GEN Poles-GEN
        'There were around five thousand Poles.' (structural ACC & GEN checked by około)
     c. Nie było około *pięć / pięciu tysięcy Polaków.
        NEG was-3SG.N around five-ACC / five-GEN thousands-GEN Poles-GEN
        'There were around five thousand Poles absent/missing.' (GEN assigned by the negative existential verb nie było)
     d. Pomogli około pięciu tysiącom Polaków.
        pro helped-3PL around five-DAT thousands-DAT Poles-GEN
        'They helped around five thousand Poles.' (lexical case DAT checked by V)
     e. Opiekuję się około pięcioma tysiącami Polaków.
        care-1SG self around five-INSTR thousands-INSTR Poles-GEN
        'I am taking care of around five thousand Poles.' (lexical case INSTR checked by V)
(all examples cited from Miechowicz-Mathiasen 2012)
As we can see, the form pięciu cannot be an Accusative but is rather a frozen Partitive Genitive form of the i-stem noun pięć, because the preposition około, which governs the numeral and is adjacent to it, does not even affect the Case assignment. The form pięciu in Polish is thus a frozen Case form that is not assigned Case at all. The only case that the verb assigns is the lexical (inherent) Instrumental in (15e).
5.3 Agree and Valuation in Russian in a Phase-Based Model
This section tries to refine the analysis of "heterogeneous" vs. "homogeneous" morphosyntax of numeral expressions, with a focus on Russian, making use of recent syntactic developments, namely the emergence of the case-assigning mechanism Agree. The key insight is taken from Gilbert Rappaport's approach to case assignment (2002), refined and enriched by the mechanism of interpretability and valuation of features from the recent studies on the syntax of valuation and the interpretability of features developed by David Pesetsky and Esther Torrego (2004) and partly adopted for Russian by Gilbert Rappaport (2003a, b). Since the Minimalist Program is a logical step towards a more economical version of the language knowledge described as I-language in previous work on generative grammar (cf. e.g. Chomsky 1981, 1986a, b), it should be based on the notions of economy of representation and derivation. Thus, we assume only two levels of representation: the articulatory-perceptual interface (PF) and the conceptual-intentional interface (LF), accompanied by the single notion of Full Interpretation.
The principle of Full Interpretation (FI) ensures that convergent derivations are only those where no uninterpretable element remains at the point where the derivation enters the semantic component, cf.:

Tab. 3: Principle of Full Interpretation (FI)
Principle of Full Interpretation (FI). Interface representations must be fully interpretable for the relevant performance systems, in particular:
i. A PF-representation may contain no symbol that is not interpretable for the articulatory-perceptual systems (A-P).
ii. An LF-representation may contain no symbol that is not interpretable for the conceptual-intentional systems (C-I) (Chomsky 1995, 63).
Since the function of the derivation is to generate sound-meaning pairs, more precisely, pairs of representations at the ARTICULATORY-PERCEPTUAL (A-P) and CONCEPTUAL-INTENTIONAL (C-I) interfaces, all features (grammatical information) must be interpretable at one or the other interface. In order to be interpretable, a feature must take the form of a type and a value. For example, plurality, interpretable at the C-I interface, would be represented on a nominal lexeme upon insertion in syntactic structure by the feature [Number: Plural]. On the other hand, there are semantically uninterpretable features (illegible at the C-I interface) which must be represented in the syntax, because (a) they are interpretable at the A-P interface (i.e., are PHONOLOGICALLY INTERPRETABLE) and (b) their value is determined by syntactic context. Examples include case and concord features on an adjective. This latter type of feature is inserted into syntactic structure in the form of a feature containing a type without a corresponding value, such as [Case:] or [Number:]; the value of such a feature must be determined during the course of the derivation. Thus, features can be VALUED or UNVALUED. As a matter of definition, we will say that two instances of a feature MATCH if they contain the same type, regardless of their value. Second, AGREE is a basic operation of the narrow syntax, implementing case assignment and predicate agreement (cf. Chomsky 2000, 2001). For our purposes, the following simplified definition taken from Gilbert Rappaport’s work (2002) is sufficient:
(16) “Two categories Agree iff all of the following conditions are satisfied: • one of the categories c-commands the other; • each of the categories is ACTIVE (i.e., contains some unvalued feature); • there is at least one matching feature shared by the two categories; and • for each pair of matching features, at least one must be unvalued.” (Rappaport 2002: 333)
In other words, Rappaport states that "the value of any valued matching feature [namely of two categories, PK] is copied onto its unvalued counterpart; and semantically uninterpretable features in the Agreeing categories are deleted from the Syntactic/Semantic derivation and passed on to Morphological Form." As a result, portions of the sentence's structure are passed on to Morphological Form before the higher syntactic structure containing these portions exists. This is actually the mechanism which accounts for "island conditions," or opacity: when structure has been passed on to Morphological Form, it is no longer accessible to operations (Agree) applying higher in the tree, and this is precisely what the Phase Impenetrability Condition (PIC) captures. A rather nice conceptual argument for phases concerns the uninterpretability of features: if Spell-Out has to remove uninterpretable features to avoid a crash at the interfaces, it must know which features are interpretable and which are uninterpretable. However, narrow syntax is supposed to be blind as a bat, and thus would need to be able to look ahead (up the tree, to LF and PF) in order to determine the interpretability of a feature. A transfer of derivational chunks to the interfaces remedies this issue, the search space being reduced to a local domain (a Phase). With these theoretical preliminaries out of the way, we now return to the morphosyntax of numeral phrases in Russian. How can we account for the peculiar behavior of quantified NPs in Polish and Russian from the perspective of Valuation? Our working hypothesis is the following: in the case of the homogeneous paradigm in Russian, the numerals behave like the A-class numerals (adjectival numerals) in Polish, i.e., they are modifiers of N0 in a functional projection above the NP, namely QP. Since the nominal head N0 has features for gender and number valued in the lexicon, the modifier in Spec-QP agrees in gender and number with N0 by the operation Agree. This is the reason why the numerals of this class agree in gender and number with the nominal: the material inside the NP (modifiers, complements) receives its F-features via the operation Agree in Narrow Syntax from the nominal head N0 (the former percolation of formal features). What about case? Inherent case is assigned to the noun by the lexical property of a lexical verb (V0) and/or a lexical preposition (P0). This means that in the homogeneous paradigm case is assigned to the head of the NP, because lexical verbs and lexical prepositions assign inherent (lexical) case as a property specified in the lexicon. We can see this in the earlier examples (5) and (6), where, in an oblique context, the lexical verb obratit'sja "to come along, to satisfy oneself with something" assigns the lexical case Instrumental to its complement, and in example (6) the lexical preposition assigns the case Praepositive to its complement. The Numeral in
(5) behaves more like a modifier (adjective), agreeing in Case and number with the N0 rubl'; however, the numeral itself has the morphology of a singular noun.
In the case of the heterogeneous paradigm, the numerals (≥ 5) belong to the Polish Q-class just as much as the quantifiers mnogo, malo, etc., which are not valued in the lexicon for case and gender. They are modifiers of the QP with the semantic value of an unspecified large amount or of a number of at least five. They typically head a QP that quantifies over a Partitive Phrase (PartP) with either an empty Part0 head (in articleless languages) or a Part0 head with a partitive preposition that assigns the Genitive Plural to the embedded NP. We can see this under (17).

(17)
[DP [QP [PartP [NP [VP … ]]]]]
In order to justify such an analysis, we can compare the situation in Russian and Polish with French and Bulgarian, which are languages with a DP. In French, it is the preposition de that assigns the partitive Case to the DP, used for all constructions with the quantifier beaucoup or a mass noun (cf. du beurre, du lait, du pain). In Italian, it is the particle di which combines with mass nouns. In Bulgarian, whose nominal and numeral system no longer inflects for Case, the non-compound higher numerals such as pet "5" assign case silently (plus Nominative), or with the additive preposition na "to" in compound higher numerals (petnadeset, literally "five to ten" = "15"), while the generalized quantifier of the Q-class (mnogo) assigns case with the preposition ot "of" (cf. (18)–(19)).
(18) a. Pjat’ studentov govorili/ govorilo s professorom b. Pięciu uczniów rozmawiało z profesorem c. Pet studentite govoriha s profesora
(19) a. Mnogo studentov govorilo s professorom              [partitive]
     b. Mnogo ot studentite govoriha s profesora
     c. Molti degli studenti hanno parlato col professore   [partitive]

(20) a. Mnogie studenty govorili s professorom              [attributive, collective]
     b. Mnogo studenti govoriha s profesora
     c. Molti studenti hanno parlato col professore
(21) a. Student el mnogo xleba
        the/a student ate much bread-GEN.SG.PART
     b. Studentat jal mnogo xljab
        the/a student ate much bread-NOM.SG⁵³
     c. L'élève a mangé beaucoup de pain
        the student has eaten much of bread
     d. Lo studente ha mangiato un sacco di pane
        'The student has eaten a bag of bread (= a lot of bread).'
If we assume that it is the head of the PartP which has a valued Case feature [Case: Quantitative]⁵⁴ (Rappaport 2003), and that this valued (val) but uninterpretable (unint) feature combines with higher or Q-numerals standing in the Spec-QP position and bearing unvalued (unval) Case features [Case: Quantitative], we can trigger Merge of the higher numeral and of all higher quantifiers by Move and Agree. We can then assume that the higher numerals, Q-numerals and the quantifiers mnogo, malo first merge with Spec-PartP and check their (unval) Case features against the (val) Case features before remerging (Move) to the Spec-QP position above. Empirical support for phases comes from the examples with the heterogeneous class of Q-numerals and quantifiers ((18) and (19)) vs. the homogeneous class of A-numerals and quantifiers ((20)). How does CHL decide between attracting and merging the one or the other? Given the economic preference of Merge over Move (Move is more costly than Merge, since Move ≈ Copy + Merge), insertion of a numeral should be expected in every instance and Raising should never be possible.
53 In modern BG, the inflection is impoverished, so that partitivity and case assignment are abstract, as in English. The neutral default expression is the NOM case. The Case Quantitative is assigned by the head of the PartP (= Part0) abstractly, with the feature of partitivity, to the NP in Nom.PL.
54 Rappaport (2003) assumes an "abstract" QUANTITATIVE case for higher numerals as well, but he analyzes and derives them in a more ad-hoc, stipulative manner.
Phases circumvent this issue, since lexical subarrays can be defined for every cycle. Thus, Q-numerals belong to a class that has unvalued formal features (F-features), i.e. the Case feature [Case: Quantitative], in the lexicon, while the Case feature [Case: Quantitative] is valued on Part0. On the other hand, Q-numerals and non-inflected quantifiers of this class have interpretable semantic features (s-features) with the meaning of "partitivity" in the lexicon, but uninterpretable s-features on the head of the PartP. Neither Part0 nor Q-numerals or non-inflected quantifiers share any property that would allow for any F-features other than Case, which is what is to be expected from a purely Case-assigning phase. To get an idea of the technical side of this argument, first consider the following two examples, which illustrate Merge-over-Move. They share one and the same numeration, but one derivation yields the ungrammatical (22b).

(22)
a.  Mnogo studentov govorilo s professorom      [partitive]
b. *Mnogo studenty govorili s professorom
Lexical Array: Num = {mnogo, student, govorit', s, professor}
Let us take a closer look at the steps of the derivation of (22a):
A. [VP [govorilo s professorom]] – Merge V + PP
B. [PartP mnogo [VP govorilo s professorom]] – Merge VP + PartP
C. [PartP [Part0 mnogo [NP [N0 studentov [VP [V govorilo s professorom]]]]]] – Merge PartP + NP & check Case
And now let us compare the derivation of (22a) with the ill-formed derivation in (22b), repeated in (23).

(23) *Mnogo studenty-NOM.PL govorili-3P.PL s professorom

(23) is ungrammatical because the NP studenty is not assigned the proper case [Case: Quantitative];
instead, it is wrongly assigned the Case [Case: Nominative], and the resulting mismatch of Agree and Case features between the modifier mnogo and studenty yields ungrammaticality. We assume that the reason is that the non-agreeing quantifier mnogo has moved to the Specifier of QP instead of merging with the PartP. Since Merge-over-Move is the preferred solution for economy reasons, example (22b) crashes (is ruled out). Consider now the reverse option, when an A-quantifier of the inflected type is taken from the Lexical Array: Num = {mnogie, student, govorit', s, professor}

(24) Mnogie-NOM.PL studenty-NOM.PL govorili-3P.PL s professorom      [collective]
     many          students        spoke          with (the) professor
Suppose that the A-quantifier mnogie is already inflected for number in the lexicon, receiving the valued feature φF[α], [α] = {PlurTantum}, as valued/interpretable only in the process of Merge and Move via Agree. Because it is an adjective, it has unvalued features for Number, Gender, and Case in the lexicon, and it can value these unvalued features only in the course of the derivation under the operation Agree. Since the PartP is not a phase with edge features for agreement (in Spec-PartP), the A-quantifier must first merge with Spec-NP and only then, for scope reasons and checking, with Spec-QP by cyclic overt movement. Since the head of the NP has valued features (φF[α]) for Number, Gender and Animacy (cf. Rappaport 2003), it can percolate these F-features to the A-quantifier in Spec-NP by the operation Agree. After having checked the unvalued F-features, it must check the s-features of the QP in Q0 via covert or overt movement to Spec-QP. The derivation is shown in (25).
(25) a. [VP [V govoril- s professorom]] – Merge V + PP
     b. [NP mnogie [N0 studenty] [VP [V govorili s professorom]]] – Merge VP + NP
     c. [QP mnogie [NP [N0 studenty]]] [NP mnogie [N0 studenty] [VP [V govorili s professorom]]] – Move NP to Spec-QP & check F-features and Q-features via Agree
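The economy reasoning behind the contrast between (22a) and (22b)/(23) can be restated as a toy computation. In the Python sketch below, the cost metric, the crash condition and the labels merge_with_PartP / move_to_SpecQP are illustrative assumptions; only the preference Merge < Move (Move = Copy + Merge) and the requirement that the counted NP bear [Case: Quantitative] are taken from the text.

# A toy sketch of Merge-over-Move for the subtree 'mnogo studentov/studenty'.
# Everything except the Merge < Move preference and the Case requirement is
# an illustrative simplification.

MERGE_COST = 1
MOVE_COST = 2   # Move = Copy + Merge, hence costlier than pure Merge

def derive(quantifier_strategy: str):
    """Return (total_cost, converges) for one of the two competing derivations."""
    cost = 0
    np_case = None

    if quantifier_strategy == "merge_with_PartP":
        # mnogo first merges in PartP; Part0 values [Case: Quantitative] on the NP,
        # spelled out as genitive plural: studentov.
        cost += MERGE_COST
        np_case = "Quantitative"
    elif quantifier_strategy == "move_to_SpecQP":
        # mnogo is moved to Spec-QP without merging in PartP;
        # the NP surfaces with default nominative: studenty.
        cost += MOVE_COST
        np_case = "Nominative"
    else:
        raise ValueError("unknown strategy")

    converges = (np_case == "Quantitative")   # a Case/Agree mismatch crashes
    return cost, converges

for strategy in ("merge_with_PartP", "move_to_SpecQP"):
    cost, ok = derive(strategy)
    print(f"{strategy}: cost={cost}, {'converges' if ok else 'crashes'}")
# merge_with_PartP: cost=1, converges  -> Mnogo studentov govorilo s professorom
# move_to_SpecQP:   cost=2, crashes    -> *Mnogo studenty govorili s professorom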
5.3.1 Interim Conclusion
So far we can see that higher numerals and the quantifiers of the Q-type behave in the same way: morphologically, they are non-inflected forms with unvalued features for Number, Gender, and Case. Since they are morphologically non-inflected categories, they are merged with a Part0 that is neutral with respect to F-features and Agree (Number, Gender) but which has a valued Case feature [+Quantitative]. Higher numerals and the quantifiers of the Q-type merge with the PartP in order to assign the Case [Case: Quantitative] to their NP complement. The numerals of the A-type, as well as quantifiers with the morphology and syntax of the A-type, are on the other hand inserted in Spec-NP and undergo Merge and Move to QP in order to check F-features and Q-features via Agree.
5.3.2 Animacy and Masculine Personal Gender (Hence Virile, V) in Numerals
An important change that affected all numerals in Polish (though the higher ones in a particular way) was the developing category of masculine personal gender, hence virile (V), as opposed to literally the rest: non-virile (NV). Its exponence entailed the introduction of ACC=GEN syncretism, which meant that ACC forms of virile nouns, pronouns, and numerals modifying virile nouns were substituted with GEN ones.
The system of numerals in the Slavic languages is highly developed and varied, not only because of the abovementioned distinction between N-numerals, A-numerals, and Q-numerals, but also because of the opposition of Gender (masculine, feminine, and neuter) and the opposition within the subgender of Animacy – masculine personal gender, hence virile (V), opposed to literally the rest: non-virile (NV) (cf. Rappaport 2003b, Kosta 2003c). In grammar, the notion of "animacy" is two-fold. On a narrow understanding, it is a concept that refers specifically to Polish (and a few other Slavic languages) and defines animacy as a subcategory of masculine personal gender, hence virile. Animate forms are assigned the Gen-Acc in the singular in complement position (for example: widzę tego-Gen-Acc.Sg psa-Gen-Acc.Sg "I see the dog"), whereas non-animate referents take Acc forms (for example, widzę ten stół "I see this table"). In Polish, the category of animacy further divides into the category of virile animates, that is, męskoosobowość (masculine personal gender, V), which shows the Gen-Acc assigned not only in the Sg but also in the Pl in the structural position of a complement (cf. widzę tego-Gen-Acc.Sg studenta-Gen-Acc.Sg, widzę tych-Gen-Acc.Pl studentów-Gen-Acc.Pl), and non-virile (NV), whose complement form is the Gen-Acc in the accusative singular but identical to the nominative in the accusative plural, cf. widzę tego psa, widzę te psy "I see that dog, I see these dogs." Together with the V and NV (męskożywotny) subcategories of the masculine declension and the masculine inanimate type (męskorzeczowy), the feminine, and the neuter, we arrive at five gender classes in Polish (cf. Kosta 2003). The concept of animacy is understood similarly in Croatian and Serbian ("kategorija živosti"). There, however, the category of animacy is not divided into męskoosobowy (V) and męskożywotny (non-personal, NV), because there is a genuine form of the accusative in the plural which is not identical to either the nominative or the genitive forms. We deal only with the opposition in the singular masculine between "animate" (e.g., Vidim studenta, psa "I see the student/the dog") and "non-animate" (e.g., Vidim stol/sto "I see the/a table"). There are also broader concepts of "animacy" as a grammatical category, in which grammatical units are classified according to the criterion "animate" vs. "non-animate" in all genders and numbers, that is, in masculine, feminine, and neuter gender, in Sg, Pl (and in the former Dual, as in Old Russian) in East Slavic, as well as very narrow concepts. On this understanding of the term, one can pick out different grammatical phenomena; animacy in this sense can be found in many languages, but it may relate to various phenomena. Learners especially of these three Slavic languages often describe the numeral system as "the worst thing." Native speakers
of Russian, SCB, and Polish apparently do not have such problems with numbers. However, language use is changing, and this also applies to the system of numerals. Recall the disappearance of the dual in Polish, Czech and other Slavic languages between the 12th and 17th centuries. In this chapter, I want to point to some trends in the data that show a simplification and a transition from nouns to functional categories in the numeral system of Polish. We want to demonstrate that in Polish the category of animacy has played a decisive role in the recategorization of the Case–Agree system of numerals.
5.3.3 Classification of Numerals in Polish (PL) and SCB and Animacy (V vs. NV)
Before I deal with the case assignment of numerals in Polish, I would like to review their system. Given the importance of the numerals, we divide them into the following groups:
– cardinal numbers (e.g., PL: dwaj, dwa, dwie, trzy; SCB: dva, dvije/dve, tri)
– collective numerals (e.g., PL: dwoje, troje, pięcioro; SCB: dvoje, troje, petoro)
– ordinal numbers (e.g., PL: drugi, trzeci; SCB: drugi, treći)
– fractional numerals (e.g., PL: jedna trzecia; SCB: trećina)
– multiple numerals (e.g., PL: dwukrotny, dwojaki; SCB: dvostruk, dvojak).
Tab. 4: Animacy (Virilis Non-virilis Opposition) and Numbers
In Tab. 4, we can see that the category of animacy/virility is crucial for the distribution and use of the cardinal numbers in PL. The virile forms of the numerals 2–4 are dwaj, trzej, etc., while the non-virile forms are dwie, trzy. Crucial for our analysis in part 3 of this chapter will be the fact that the virile forms dwaj, trzej, and czterej can be substituted by dwóch, trzech, czterech, and pięciu, which, as genitive plural forms of two, three, four and five, agree with the genitive plural form of
the NP complement, while the finite verb appears in the 3rd person singular (neuter), i.e. the non-personal form, cf.:

(26) a. Czterej pancerni śpiewali.
        four-VIRILIS armored.infantry-NOM.PL sang-3P.PL
     b. Czterech pancernych śpiewało.
        four-GEN.PL armored.infantry-GEN.PL sang-3P.SG.N
This fact leads us to the conviction that the origins of this Case assignment must be closely connected with the categories of collectivity and partitivity. We take the genitive form here to be a partitive genitive, semantically referring to a specific set of individuals or tokens of a set. The 3rd person singular neuter form of the verb is typical of impersonal forms of Slavic verbs (see Kosta 2009b). We believe that in Russian paucals the mismatch between the Case and Number of the numeral and its modifier has the same source and thus deserves the same analysis. How can we account for the fact that in Russian the noun agrees in Number with the former Dual, grammaticalized or recategorized as Gen.Sg, while the adjective has the form of the Gen.Pl? Gilbert Rappaport (2003b) derives the form of the adjective from a feature Animacy assigned with the Gen-Acc. This would, however, presuppose that only animate male forms can be assigned the Gen-Acc, but never inanimate nouns. As we can see, this is not true, because inanimate nouns, such as inanimate masculine and feminine forms, display the same case-marking feature and the same mismatch when accompanied by an adjective (cf. (27)). We believe instead that the apparent mismatch in Number can be explained by the properties of the functional head of the Paucal Phrase (PaucP), which heads a nominal phrase (NP) but at the same time is dominated by a PartP. It is obvious that the checked cases Gen.Sg (for PaucP) and Gen.Pl (for PartP) are structural, and structural cases are characteristic of functional heads rather than lexical ones. Let us suppose that the numerals 5 and above have changed their lexical status and have become functional heads, headed by a Quantifier Phrase for scope reasons. Because they co-occur with NPs, we can assume further that they are part of the extended projection of the counted noun (Miechowicz-Mathiasen 2012). This could explain the mixed dependency relations; that is, the nominal behavior of numerals would then be expected rather than surprising. Moreover, it would explain why they have lost their intrinsic agreement features: as functional heads, similarly to heads such as v or T, they can only take agreement on, rather than trigger it; also, their ability to check structural case goes through without stipulation. Having clarified the status of the Case in the numeral pięciu, we turn back to the problem of the relation between Case assignment and Agree with lower numerals in Russian.
5.3.4 Lower Numerals in Russian (Paucals)
In Russian, the genitive singular is assigned to nouns with plural referents after certain numerals in nominative case positions. We take this to be the defining property of the lower numerals: dva "two," tri "three," chetyre "four," and oba "both." Russian represents a marked situation, in contrast to Polish, for example, in which the corresponding numerals have no morphosyntactic effect and combine with the plural in whatever case the syntactic context requires; e.g., dostałem {dwa / trzy / cztery / oba} listy-ACC=NOM.PL "(I) received {two / three / four / both} letters." The fact that the Quantitative Genitive is assigned after higher, but not lower, numerals in such a closely related language suggests that in Russian the genitive after lower numerals is independent of that after higher numerals: Polish has the latter without the former. Lower numerals in Russian, then, can be treated as analogous to higher numerals, associated with an "abstract" case expressed by syncretism, except that the value of that case differs in the two kinds of numerals. We assume a PAUCAL case for Russian, which is spelled out on the head noun of the PaucP as the genitive singular. There is, incidentally, fragmentary evidence in Russian of a distinct phonological expression of the paucal category: four words have specially stressed oxytonal forms found only when they follow lower numerals:
Tab. 5: Paucal and Oxytonal Forms from Former Dual (from Rappaport 2003)
The reason for the oxytonal accent of these four words is that the numeral 2 governed the dual in Old Russian, and the masculine dual forms were originally oxytonal. This oxytonal accent has been generalized as the new masculine plural morpheme -á, cf. professorá, gorodá, uchiteljá, and beregá. These oxytonal forms contrast with the stem accent of the genitive singular forms, cf. proféssora, góroda, uchítelja, and bérega. The problem with these forms is that, when they appear after the paucal numerals (2–4) in an NP with adjectives, the morphological form of the adjective in the paucal case is subject to variation. In the masculine gender, the paucal case of adjectives could take the form of either the nominative or the genitive-accusative case as recently as the nineteenth
century, although the nominative is virtually impossible today. In the feminine, either case form is permissible for the paucal today. Thus:
(27) a. dva {malen'kich-GEN-ACC.PL / *malen'kie-NOM.PL} stola-GEN.SG.MASC
        'two small tables'
     b. dve {malen'kich-GEN-ACC.PL / malen'kie-NOM.PL} devočki-GEN.SG.FEM
        'two little girls'
If we compare the form dva with the Old Russian and Old Polish forms in the course of historical development, we can see that these were historically old dual forms. Common Slavic had a complete singular–dual–plural number system, although the nominal dual paradigms showed considerable syncretism, just as they did in Proto-Indo-European. The dual was fully operable at the time of the Old Church Slavic manuscripts, and it has subsequently been lost in most Slavic dialects in the historical period. Of the living languages, only Slovene, Chakavian, and Sorbian have preserved the dual number as a productive form. In all of the remaining languages, its influence is still found in the declension of nouns that commonly come in pairs (eyes, ears, shoulders), in certain fixed expressions, and in the agreement of nouns when used with numbers. In all the languages, the words "two" and "both" preserve characteristics of dual declension. The following table shows a selection of forms for the numeral "two":
As in the case of higher numerals, two options are made available in the lexicon for lower numerals. The case feature can be valued as [Paucal] on Merge, invoking heterogeneous morphosyntax (1a); the numeral itself is spelled out as a form of the paucal case. Whereas Gilbert C. Rappaport (2003) tries to explain the apparent mismatch between number, case, and gender (animacy) between the paucal, the noun and the modifier with a special Case feature and an ad-hoc stipulated animacy feature, in addition to an ad-hoc stipulated "Paucal Case Syncretism Readjustment rule," we believe we can give an analysis that does not need any additional rules. Furthermore, we believe that our analysis is in line with the newest developments of Phase theory, based on considerations of parsimony and economy. In Russian, the lower numerals 2–4 belong to the A-class of numerals morphologically, that is, they are inflected for Case in the homogeneous morphosyntax but, as opposed to Polish, they are not inflected for animacy and Gender. Contrary to Rappaport (2003), we do not simply assume a feature [Paucal] on Merge, invoking heterogeneous morphosyntax (1a), but a projection above the NP and below the PartP, called PaucP, cf.:
(28)
[DP èti [NumP dva – 3rd step [PartP malen'kich (GEN PL) – 2nd step [PaucP stola – 1st step [NP [AP malen'ki-] [N0 stol-]]]]]]
We assume that the head of the PaucP – Pauc0 – becomes valued for this Case feature [Paucal] by Merge. Let us take a closer look at the steps of the derivation of (28):
A. [NumP dva [NP stol-[+SG +MAS]]] – Merge {AP + NP}
B. [PaucP [NumP dva stola-[GEN +SG +MAS]]] – Merge {DP + {NumP + NP}}, check EPP & assign GEN.Sg in PaucP & send to Transfer
C. [PartP Part φF[Quantitative] {AP malen'ki-} φF[ ]] – Merge AP with PartP & assign GEN.Pl in PartP & send to Transfer
D. [QP dva malen'kich stola] – Remerge AP dva with QP & check s-feature via Spec-Head Agree & send to Transfer
Recall that narrow syntax is supposed to be blind as a bat, and thus would need to be able to look ahead (up the tree, to the C-I and A-P interfaces) in order to determine the interpretability of a feature. A transfer of derivational chunks to the interfaces remedies this issue, the search space being reduced to a local domain (a Phase). Under this perspective, we are not faced with the look-ahead problem of derivational approaches. All steps are licensed by the valuation of unvalued formal features, which have to be checked within the search domain of probe and goal. As soon as one phase is completed, it is sent to Transfer, and the rest of the structure can be built bottom-up.
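The cyclic Transfer just described can be illustrated with a small Python sketch. The phase labels, the feature names and the transfer buffer below are illustrative assumptions; only the logic – value features locally, then send the phase's domain to the interfaces and freeze it, as in steps B and C of (28) – follows the text.

# A minimal sketch of phase-by-phase Transfer under the PIC. Everything except
# the freeze-after-Transfer logic is an illustrative simplification.

class Phase:
    def __init__(self, label, domain_features, edge_features):
        self.label = label
        self.domain = dict(domain_features)   # features inside the phase domain
        self.edge = dict(edge_features)       # features at the phase edge
        self.transferred = False

    def value(self, feature, value):
        """Value an unvalued feature; a domain feature can only be valued before Transfer."""
        if self.transferred and feature in self.domain:
            raise RuntimeError(f"{self.label}: the domain is frozen by the PIC")
        target = self.domain if feature in self.domain else self.edge
        if target.get(feature) is None:
            target[feature] = value

    def transfer(self):
        """Send the domain to the interfaces; afterwards only the edge remains visible."""
        self.transferred = True
        return self.domain

# Mimicking steps B and C of the derivation of 'eti dva malen'kich stola' in (28):
interfaces = []

pauc = Phase("PaucP", {"Case(stol-)": None}, {"Num(dva)": "paucal"})
pauc.value("Case(stol-)", "GEN.SG")      # assign GEN.Sg in PaucP
interfaces.append(pauc.transfer())       # send the PaucP domain to Transfer

part = Phase("PartP", {"Case(malen'ki-)": None}, {"Part0": "Quantitative"})
part.value("Case(malen'ki-)", "GEN.PL")  # assign GEN.Pl in PartP
interfaces.append(part.transfer())       # send the PartP domain to Transfer

print(interfaces)   # each chunk reaches the interfaces already fully valued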
5.4 Summary
In the present chapter, we have tried to give an analysis of numeral phrases based on the valuation and interpretation of Case features and the operation AGREE. We have tried to demonstrate that most of the problems regarding the numerals that could not be resolved until now within the former approaches of Government
and Binding Theory and early Minimalism (Franks 1995, Rappaport 2002, 2003a, b, Babby 1987, Kosta 2008), can be explained by the Phase-based approach. External evidence from languages with impoverished morphology gave strong support in favor of a hierarchy of functional projections within the domain of numerals and quantifiers. While previous analyses of Russian and Polish numerals and quantifiers concentrated on the different Case assignment properties in homogeneous and heterogeneous paradigms and often needed ad-hoc formulated rules, our analysis follows from a simple mechanism within the theoretical assumptions of Phase Theory (Chomsky 2000) and the Theory of Valuation (Pesetsky and Torrego 2004). It also seems that the analysis presented here can explain several unresolved problems of Slavic syntax, and possibly of the syntax of other languages, with a very simple mechanism. A layer beyond the NP that is not necessarily DP is motivated by a number of prima facie unrelated numeral and quantifying constructions which seem to involve a functional layer projected immediately above NP. If present, the head hosts a formal feature that can be checked in one of the following ways:
• by merging a Partitive head with a nominal complement and assigning the abstract Case Quantitative, with the F-features of Genitive Plural, to its complement – the heterogeneous pattern in structural case positions;
• by merging a Paucal head with its nominal complement via valuation of the Case features as Genitive Singular;
• by merging a modifier – the homogeneous pattern in oblique (lexical, inherent) case positions;
• by merging a pseudo-partitive head (examples of the type 23b);
• by numeral and quantifier raising of the numeral/quantifier to QP.
The PartP configuration is likely to be involved in other nominal constructions, but this issue requires further research. The model outlined above relies on the assumption that the Part0 head is not associated with any fixed semantic value but rather with an abstract Case [Case: Quantitative]. This makes the proposed analysis less language-specific than the accounts proposed by Kosta (2008) and by Rutkowski and Progovac (2005). Thanks to subsuming a variety of nominal constructions under one label, the PartP model avoids an unnecessary proliferation of functional layers in the region between DP and NP. Moreover, it shows that case assignment occurs early in the derivation as a result of Merge, while the valuation of unvalued formal features runs hand in hand with the operation Agree, which is thus subordinate to the processes of Merge and Case assignment in Narrow Syntax. The good news for
the present analysis is also that we can get rid of the problem of “look ahead” or “look back” (i.e., peering into narrow syntax by the interfaces) due to the Phase-Impenetrability Condition. After completion, the phase is inaccessible to further operations, as formally captured by (PIC). “In phase α with the head H, the domain of H is not accessible to operations outside α, only H and its edge are accessible to such operations” (Chomsky 2000:108).
6 On Phases, Escaping Islands, and Theory of Movement (Displacement)
In this chapter, we shall give some evidence that a radical interface-based approach of Radical Minimalism (cf. Kosta and Krivochen 2014, Krivochen and Kosta 2013) is to be preferred over an approach which stipulates data by ad-hoc features. The generalized theory of Strong Minimalism (Chomsky 2005) assumes that any derivation in syntax must follow principles of economy and parsimony. Furthermore, derivations should respect and even obey local economy (cf. Rizzi 1997). Another important observation is that a theory which is based on phase-by-phase principles of crash-proof derivation, in which each derivational step must be carried out bottom-up within a Phase (following the Phase Impenetrability Condition (PIC) and Edge Features), seems to be supported by the data. Thus, any principle of UG which serves as a common basis of different syntactic derivations must apply without exception and uniformly to any given natural language. We apply this idea, repeated again in Chomsky (2020), and show this on cases of strong islands, wh-movement and Scrambling in Slavic languages (Bulgarian, Russian, and Czech). As opposed to previous assumptions, in which we rejected any feature-based approach, this theoretical approach respects the need for a label-driven syntax.

Roadmap
We want to show in the following how different displacement operations can be derived from a simple UG principle, the Phase Impenetrability Condition (PIC) in (2), and the Phase Interpretability Condition in (3) from Chapter 3, with some additional conditions on visibility. The presentation proceeds as follows and specifies the syntactic operations already introduced, but not properly explained in detail, in Chapter 3.
6.1 Wh-Movement
Recall that in standard Mainstream Generative Grammar (MGG), wh-movement is derived by a [+wh] feature in an Operator position which attracts the wh-word to Spec-CP via cyclic movement.⁵⁵

55 Chomsky, N. 1977. On wh-Movement. In P. Culicover, T. Wasow, and A. Akmajian (eds.), Formal Syntax. New York: Academic Press, 71–132.
Examples of short wh-movement:
(1)  What did you lose?
(1ʼ) [Spec-CP_OP+wh what [C did [φ/Case you [φ/Case wh- v* [V lose [DP [NP wh-]]]]]]]
In this book (Chapter 3: On Phases, Moves and Islands), I have tried to argue for a model of the grammars of natural languages which is based on the principle of mutual interaction between derivational steps, narrow syntax, and the interfaces. This axiom can be supported by the following arguments:
1. Argument: Crash-proof grammars are superior to crash-ripe ones
While Chomsky’s Minimalism is based on the idea that FI is reached only before the interfaces, we plea for a model in which the interfaces and syntax is connected with a mechanism called Analyze which allows for looking back and forward in order to Converge. Thus, Crash-proof syntax does not wait until Crash or Converge in order to fulfil FI at Interfaces but instead it peers back and forward in order to avoid Crash at every single step of the derivation in Narrow syntax; thus, each derivational step is checked against the notion of optimal solution and in case it crashes another candidate has to be chosen (Frampton and Gutman 2002: 90). Let us recall the two basic notions, namely “Crash” and “Converge” as introduced in Minimalist Program (Chomsky 1995).
(2) Crash: A derivation (D) crashes if it does not converge at the interfaces.
(3) Converge: A derivation (D) converges whenever the conditions at LF and PF are fulfilled (principle of Full Interpretation / FI) (Chomsky 1993, 1995).
Now, let us recall the basic principles of RM from Chapter 3: Phase Impenetrability and Phase Interpretability. Consider first the example (4a), yielding a well-formed sentence, as opposed to its ungrammatical minimal pair (4b).
(4) a.  There seems t_there to be a man in the room.
    b. *There seems a man to be t_a-man in the room.
1. Argument: PIC
If there in (4a) is in the numeration set, we will have competition between Merge and Move at the embedded IP, and Merge will win. Or, stated differently, Move is always delayed as much as possible (given the elements available in the numeration). These examples give a rather nice motivation for the notion of phase instead
of X-bar phrase structure rules. A rather nice conceptual argument for phases also concerns the uninterpretability of features: if Spell-Out has to remove uninterpretable features to avoid a crash at the interfaces, it must know which features are interpretable and which are uninterpretable. However, narrow syntax is supposed to be blind as a bat, and thus would need to be able to look ahead (up the tree, to LF and PF) in order to determine the interpretability of a feature. A transfer of derivational chunks to the interfaces remedies this issue, the search space being reduced to a local domain (a Phase). Thus, visibility of features from the outside can avoid ill-formed structures. If only the edges of a phase, i.e. only Specifiers and Complements (in the terminology of X-bar notions), can have their features interpreted at the interfaces, the Phase Impenetrability Condition (PIC) in (5) can be reduced to a Phase Interpretability Condition in (6), cf.:
(5) Phase Impenetrability Condition (PIC)
A phase (CP, vP, VP) is not accessible for further computation (internal or external merge) if:
i. a merged element α has not reached the edge of the phase,
ii. the interfaces PF or LF cannot interpret the phi-/wh-features of α[φ] if α in a phase {α[φ] {β[•]}} has not reached a phase edge position.
Problems and predictions: We can see in (5ii) that in a merged SO, only α can be read off by the interfaces for a feature [φ], since it is at the edge of a phase, while β is internal to the merged phase and cannot be interpreted (we use a dot • for a non-interpretable feature in β[•]). In order to get a Label, the element β[•] has to move via internal Merge to a higher position in order to be interpreted. This situation is typical for any kind of movement (be it wh-, NP- or head movement of a non-valued/interpretable lexical or functional category).
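The edge/domain asymmetry stated in (5) can be mimicked by a small Python sketch. The dictionary representation of a phase and the function names visible_at_interfaces and escape are illustrative assumptions; only the generalization that higher operations see the edge, not the domain, and that a domain-internal element must reach the edge by internal Merge, is taken from the text.

# A small sketch of edge-based accessibility: only phase-edge material is
# visible to the interfaces and to higher operations; phase-internal material
# must first move to the edge.

def visible_at_interfaces(phase):
    """Return the elements of a phase that higher operations can still see."""
    return list(phase["edge"])

def escape(phase, element):
    """Internal Merge of a domain-internal element to the phase edge."""
    if element in phase["domain"]:
        phase["domain"].remove(element)
        phase["edge"].append(element)
    return phase

vP = {"edge": ["what[wh]"], "domain": ["V", "DP-object"]}
print(visible_at_interfaces(vP))     # ['what[wh]'] - only the edge is readable

vP2 = {"edge": [], "domain": ["V", "what[wh]"]}
escape(vP2, "what[wh]")              # the wh-phrase moves to the edge first
print(visible_at_interfaces(vP2))    # ['what[wh]'] - now accessible to a higher probe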
6.2 Freezing Effects and Generalizations
While the reduction of the levels of representation and the introduction of phases made this stage of Minimalism a highly parsimonious model of FLN, the reasons for establishing condition (5) of UG actually led to a deadlock, because the restrictions on cyclic movement merely rephrased the basically old problem of MGG concerning the ban on extraction out of islands and, moreover, the ban on an element, once extracted, moving again (freezing effects, cf. Rizzi 2004, 2006; for the newest explanation cf. Bošković 2020).
(6) Principle of Phase Interpretability (PPI)
The formal features (φ/EPP/wh-) of an element α (probe) of a phase π are externally merged with an element β in the domain Δ iff they are valued at PF by categorial Labels and then deleted.
We will concentrate on morphosyntax and PF. The formal condition on the valuation of formal features is formulated under (7).

(7) The visibility condition for valuation (UG principle)
An element α of an element LEX is valued if it is labeled either at the edge XP or YP, or is moved/adjoined cyclically to:
i. an Agree position (φ of the category [Lex_φ]), or
ii. a Case position (which is always an A-position), or
iii. an Expletive position.
Only phases can undergo movement (cf. Bošković 2016, ex. 11). Now, given the Phase Impenetrability Condition (PIC) in (5), which requires that movement out of a phase XP proceed via the edge of XP, movement out of a phase must proceed successive-cyclically, targeting the edge of the phase. The PIC has interesting consequences within Chomsky's (2013) labeling system. Chomsky (2013) proposes a theory of labeling where, in the case that a head and a phrase merge, the head projects (i.e., provides the label for the resulting object). Regarding the case where two non-minimal projections (i.e., phrases) merge, Chomsky suggests two ways of implementing labeling, via prominent feature sharing or via traces, the crucial assumption with the latter being that traces are ignored for the purpose of labeling. To illustrate the former: when what is merged with the interrogative C (actually the CP at the point of merger) in (8), both the wh-phrase and the CP have the Q-feature; what is projected (i.e., determines the label of the resulting object) is then the Q-feature. This is obviously reminiscent of Spec-Head agreement, where the shared feature is what is involved in the agreement.

(8)  I wonder [CP whati [C' C [John bought ti]]]
(9)  Whati do you think [CP t'i [C' that [John bought ti]]]
(10) v [VP think [? what [CP that [John bought ti]]]]
(for further considerations and details cf. Bošković 2016)
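The two labeling options just summarized can be stated as a small decision procedure. In the Python sketch below, the dictionary encoding of syntactic objects and the function label are illustrative assumptions; only the three cases – head projects, trace ignored, label via a shared prominent feature – follow Chomsky's (2013) proposal as described above.

# A sketch of the labeling options: head + phrase, phrase + phrase with a trace,
# and phrase + phrase with a shared prominent feature (e.g. Q in (8)).

def label(alpha, beta):
    """Return the label of {alpha, beta}, or None if the object cannot be labeled."""
    # Case 1: head + phrase -> the head projects.
    if alpha.get("head"):
        return alpha["category"]
    if beta.get("head"):
        return beta["category"]
    # Case 2: phrase + phrase, one of them a trace/lower copy -> ignore the trace.
    if alpha.get("trace"):
        return beta["category"]
    if beta.get("trace"):
        return alpha["category"]
    # Case 3: phrase + phrase -> label via a prominently shared feature.
    shared = set(alpha.get("features", [])) & set(beta.get("features", []))
    return f"<{','.join(sorted(shared))}>" if shared else None

what = {"category": "DP", "features": ["Q"]}
interrogative_cp = {"category": "CP", "features": ["Q"]}
print(label(what, interrogative_cp))        # '<Q>' - feature sharing, as in (8)

what_trace = {"category": "DP", "features": ["Q"], "trace": True}
embedded_cp = {"category": "CP", "features": []}
print(label(what_trace, embedded_cp))       # 'CP' - the trace is ignored, as in (9)

v_head = {"category": "v", "head": True}
vp = {"category": "VP", "features": []}
print(label(v_head, vp))                    # 'v' - the head projects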
One of the lines of research within the domain of locality of movement that has attracted a considerable amount of attention concerns freezing effects. Many researchers have argued that movement out of moved elements is not possible. The most explicit early statement of the effect goes back to Culicover and Wexler (1977) and Wexler and Culicover (1980), with early minimalist works such as
Ormazabal, Uriagereka, and Uribe-Echevarria (1994) and Takahashi (1994) providing a new perspective on the effect. Many other works have argued for generalizations along the lines of (1), also providing empirical evidence for it; see Ross (1967: 160, 1974), Postal (1972), Huybregts (1976), Freidin (1992), Diesing (1992), Collins (1994), Müller (1998, 2011a, b), Lasnik (1999), Stepanov (2001), Rizzi (2006), Boeckx (2008), Gallego (2009), Lohndal (2011), Uriagereka (2012), among many others (see Corver in press for a review of the relevant literature and arguments).

[(2)] ?*I wonder [CP whoi [DP friends of ti]j [vP tj hired Mary]]
Notice in this respect that, as discussed in Stepanov (2007) and Takahashi (1994) with respect to a number of languages, movement from subjects that remain in SpecvP is possible, which led Stepanov (2007) and Takahashi (1994) to blame the ungrammaticality of (2) on the moved status of the subject in this construction. The following contrast from Spanish illustrates the different behavior of moved and unmoved subjects with respect to extraction.
[(3)] a. ¿De qué conferenciantesi te parece que mez van a impresionar v [v*P [DP las propuestas ti] [tz tv]]?
         of what speakers CL-2SG seem-PRES.3SG that CL-1SG go-PRES.3PL to to-impress the proposals
         (cf. Krivochen and Kosta 2013, Kosta and Krivochen 2014a)
6.3 Improper Movement Revisited
Improper movement has formerly been explained as an illicit movement from an A-bar position to an A-position, as in (11).
(11) * [IP Johni seems [CP t'i [IP ti knows it all]]]
Since we do not really know in which position (A or A-bar) the intermediate trace t'i actually stands (it could well be in Spec-vP, which is an A-position of unaccusative and transitive verbs, cf. Kosta and Krivochen 2014a), we cannot really be comfortable with this explanation. Besides, the existence of traces has been denied under the phase-based approach. The explanation with the help of condition (6) follows immediately: the NP John cannot move further than to the first edge position of a phase v*, that is, Spec-vP, because in this position it has already been assigned Nominative Case under long-distance Agree with the finite verb, which moved to the position v to check the EPP features. In this position, it could be interpreted by the interfaces (being a proper edge feature) and then deleted. After deletion, the EPP features are invisible/inaccessible for further computation.
6.4 Object Shift
Early Minimalism introduced the already mentioned EPP features, which were assumed to be fixed in the language under consideration as part of an Elementary Lexical Unit (= an underspecified root √ in the sense of Panagiotidis 2014). Chomsky (2001) introduced the notion of EPP features in (11ʼ), "where Objectθ is the thematic position of the moved object before overt Move, and ObjectS is an outer specifier position of v* created by OS triggered by the case feature on the light verb v* (cf. Vikner 1994, 2006)" (Broekhuis & Woolford 2013: 133), thus deriving (11ʼ).

(11ʼ) [α ObjectS [Subject v* [V ... Objectθ]]]
Broekhuis and Woolford (2013) make the important observation that v*, as the Case assigner, is associated with the object shifted to the position ObjectS via an abstract EPP-feature. As the authors show, Icelandic OS depends on the information structure of the clause: OS is only possible when the object is part of the presupposition of the clause (cf. also Holmberg 1999). From this they conclude that OS is at least partly motivated by LF considerations of the IS, so that LF-related features of IS can and thus must be introduced, cf.:

(9) a. Jón keypti ekki bókina          bókina ∈ focus
       Jón bought not the.book
    b. Jón keypti bókina1 ekki t1      bókina ∈ presupposition
    (Broekhuis & Woolford 2013: 133)
Chomsky mentions the constellation in (12), in which the phonological border of v*P means a position not c-commanded by phonological material within v*P. Thus, it is crucial that an object in situ, which has been merged directly from the lexical array, takes or can take a position visible for further computation; it can then be moved to a higher position if features other than Case or Agreement features must be checked. We assume that this is the case if either the EPP or a feature of the left periphery (an IS feature such as wh- or focus) must be valued. The EPP feature is assigned only if this operation has an effect on the outcome. In the same vein, we could argue that a Focus or Topic feature (or a wh-feature) must be checked either overtly by internal Merge (Move) via cyclic movement or covertly at LF. We summarize the findings so far in Tab. 1.
Tab. 1: Ranking of Candidates for Object-Shift (OS)
(12) [β C [Spec T ...[α XP [Subj v* [VP V ... Obj]]]]] (Chomsky 2001a: 33 (56))
(13) a. v* is assigned an EPP-feature only if that has an effect on outcome.
     b. The EPP position of v* is assigned Int.
     c. At the phonological border of v*P, XP is assigned Intʼ.
     (Chomsky 2001a: 35 (61))
6.5 Clitic Doubling in Bulgarian and English Resumptive Pronouns
Notwithstanding the fact that EPP-features are dubious in nature, since we do not know what they are good for except satisfying the EPP, we keep them for the time being as an often-used argument for object shift. Apart from their assumed property of attracting material to the left-most edge of a phase (CP), in most languages I know, fronting of objects is only possible if they are licensed by a Case feature lower in the derivational path before they move to their target position, either by a resumptive pronoun, as in English (14), or by clitic doubling, as in Bulgarian (15). We give some examples for each case:

(14) Johni, I know himi for sure.

(15) a. Tatkoto-DEF go-ACC.clitic celuna Maria                (Radeva-Bork 2011)   BLG
        father-DEF him-ACC kissed Maria
        "Maria kissed the father"
     b. *Tatkoto-DEF celuna Maria        (interpretation: father = object; *OS without CL)
        father-DEF kissed Maria
        "Maria kissed the father"
     c. Tatkoto-DEF celuna Maria         (interpretation: father = subject in situ)
        "The father kissed Maria"
As we can see in (14) and (15), the "interpretability" as subject/object of the uppermost DP in Bulgarian and of the lower DP is not satisfied at LF if the Case-marking category (which in Bulgarian can only be a clitic or a full pronoun, but not the full DP) is omitted, yielding the false output (15b). Thus, Bulgarian, a highly analytical language without overt case marking on full DPs, can save the OS interpretation only by overt Case marking of a lower constituent – the clitic go-ACC.clitic – via the strategy of clitic doubling (CD). The clitic can only refer to the father due to its Case marker ACC and its phi-features [Gender, Num]. Now, there is an evident contradiction in the model of Minimalism (Chomsky 1993, 1995, 1999, 2000, 2005) with respect to the order of derivational steps and timing. If the formal features for Case and Number/Gender are deleted on the head of the phase
from which they have been inherited, after internal Merge and valuation, how can the system decide between the OS interpretation and the neutral S-V-O word order interpretation, and how is it that the clitic must move further up, still marking Case, if Case has been deleted before? As discussed in Radeva-Bork (2012), there are other environments of Bulgarian CD in which the interpretation of the sentence is not straightforward despite the presence of a doubling clitic. An instance of one such environment is a sentence with two arguments that have the same phi-features, as in (16) below. For further environments of CD in Bulgarian, see Radeva-Bork (2012).

(16)
Majkata               ja                      srešta         Maria.
mother-3P.FEM.SG.DEF  her-CL.3P.FEM.SG.ACC    combed-3P.SG   Maria-3P.FEM.SG.DEF
In this case, both arguments are valid candidates for the clitic-doubled associate. In other words, the sentence can mean either (17a) or (17b).
(17) a. 'Maria combed the mother.'
     b. 'The mother combed Maria.'
Although for most Bulgarian speakers interpretation (17a) may be more accessible, grammatically both interpretations are equally legitimate. The problem of Labeling at the level of S-structure, or even PF, is far from self-evident. But examples from many Indo-European languages, and also from Semitic languages such as Arabic (Standard and Lebanese Arabic), clearly show that clitic left dislocation and Case marking feed into the distribution of clitic-left-dislocated NPs. It has also become clear by now that the relationship between referentiality and resumption is more complex than has been observed in the literature. It is not the case, for instance, that the antecedent of a resumptive element must always be referential. This was an important conclusion arrived at in the investigation of restrictive relatives in Arabic (cf. Aoun et al.). Clitic left dislocation (henceforth CLLD) is characterized by the presence of a lexical noun phrase in the left-peripheral domain of a clause and a weak pronominal element related to it inside the clause. Typical examples of this construction in Arabic are given in Aoun et al. under ex. (18) a./b.
(18) a. naadia šeef-a saami mbeeriḥ                          Lebanese Arabic
        Nadia saw.3MS-her Sami yesterday
        'Nadia, Sami saw her yesterday.'
     b. at-tilmiiðat-u raʔaa-ha saami l-baariḥa              Standard Arabic
        the-student.FS-NOM saw.3MS-her Sami the-yesterday
        'The student, Sami saw her yesterday.'
In Standard Arabic, the dislocated noun phrase typically appears with Nominative Case marking, as shown in (18b). In Lebanese Arabic, the CLLDed noun phrase in matrix contexts can be found either before elements of the complementizer phrase (19a) or after them (19b).
(19) a. naadia šu ʔaalət-la l-mʕallme?
        Nadia what told.3FS-her.DAT the-teacher.FS
        'Nadia, what did the teacher tell her?'
     b. šu naadia ʔaalət-la l-mʕallme?
        what Nadia told.3FS-her.DAT the-teacher.FS
        'What Nadia did the teacher tell her?'
Cinque (1977, 1990) distinguishes between clitic-left dislocation and left dislocation (LD), a construction also called hanging topic. LD, as Cinque defines it, is a root clause phenomenon, and only one LDed phrase is allowed per sentence. But as Aoun et al. have recently shown, in Lebanese and Standard Arabic CLLD can appear in many contexts, although CLLDed phrases cannot be indefinite NPs. This phenomenon can be compared with scrambled or topicalized NPs in Czech, which can never be indefinite, as one can see under (16); this is confirmed for Russian by the PhD dissertation of Natalia Slioussar (Slioussar 1998).
6.6 Scrambling and Clitic Left-Cleft Dislocation in Czech DP? The Case of ten, ta, to: a DET-CL-Hypothesis
We want to briefly recall the problem of Scrambling, embedding it into a theory of Dynamic Full Interpretation. The first question is: Is Scrambling a displacement operation, called internal Merge (or Movement), or is Scrambling First Merge (Base-Generation)? We briefly mention the basic dichotomy in the classical Minimalist approaches towards Scrambling, namely (i) the movement and (ii) the base-generation approach.
• According to (i), there is one underlying word order, and the variety of alternate word order arrangements in a clause is the result of A- vs. A-bar movement.
• According to (ii), there is no single basic order for constituents, and the variable word order is the result of free generation of constituents in an arbitrary order (for more cf. Kosta 2006, 2009a, Krivochen and Kosta 2013).
Furthermore, a second question arises immediately: Is Scrambling A'-movement, A-movement, or a Movement at all? Standard approaches to Scrambling assume that it is an instance of optional overt A'-movement, triggered by an uninterpretable feature whose nature varies
with the authors (see Boeckx 2003, Molnárfi 2003 for examples). Thus, there is one underlying word order, and the variety of alternate structures is thought to be the result of Move-α by adjunction of an XP (NP/DP, PP, AP or AdvP) to CP, TP, PP, DP or vP/VP (although some authors claim that only TopP can be a suitable landing site for scrambled constituents, as the process is related to discoursal requirements). The base-generation analysis generates all constituent orders considered in the former approaches at the level of D-structure (whereas Move-α applies only after D-structure); in other words, the major constituents do not have a fixed syntactic position at D-structure (cf. Corver and van Riemsdijk 1994, Kosta 2006). The advantages of the movement approach are clear: universality and structural homogeneity underlying apparent surface chaos. However, the postulation of a single base phrase marker that is the input for an optional transformation rule is highly problematic. In the first place, no clear criterion is made explicit in the literature as to why the rule applies "optionally," and, moreover, whether that optionality depends on syntactic requirements, interface requirements or neither (e.g., purely stylistic reasons, which, in any case, could be subsumed under the C-I interface, see Fanselow 2003 for a related view). If the algorithm that applies is Move-α, then it must apply after D-structure, in more recent terms, after there is a fully fledged phrase marker. If this is the case, then the adjunction hypothesis requires positions to be created ad hoc to host the moved constituents, which is far from being the optimal and most economical scenario. We will try to provide an interface-based account of scrambling in a real-time derivational approach, along the lines of Krivochen and Kosta (2013).
6.6.1 Scrambling and Islands: In Czech and Elsewhere
Under a movement analysis, it is predicted that scrambling displays properties generally associated with standard movement-derived structures. There must be an antecedent-trace relation, and this relation is apparently unbounded. Scrambling also obeys island constraints. Webelhuth (1989) has shown that Scrambling in German is sensitive to Ross's (1967) island constraints on movement transformations. The following ill-formed German sentences illustrate the sensitivity of Scrambling to island effects such as the Left Branch Condition (I), the Coordinate Structure Constraint (II), and the PP-island condition (III) (examples taken from Webelhuth 1989; cf. also Corver and van Riemsdijk 1994: 3 passim, Kosta 2006: 55).
I. Left Branch Condition
Ross (1986: 127) proposed the Left Branch Condition (LBC), which blocks movement of the leftmost constituent of an NP. The condition has been used in the literature to block extraction of determiners, possessors, and adjectives out of NP.
(20) a. *Whosei did you see [ti father]?
b. *Whichi did you buy [ti car]?
c. *Thati he saw [ti car]
d. *Beautifuli he saw [ti houses]
e. *How muchi did she earn [ti money]?
As already noted by Ross, some languages, for example Latin and some Slavic languages such as SC, allow LBE, as illustrated for Serbo-Croatian (SC) in (21), whereas Czech (22) is as much subject to islands as German. (Pied-piping of the LBE remnant is also possible. The Latin example in (23) is taken from Uriagereka 1988.)
(21) a. Čijegi si vidio [ti oca]?
whose are seen father
'Whose father did you see?'
b. Kakvai si kupio [ti kola]?
what-kind-of are bought car
'What kind of a car did you buy?'
c. Tai je vidio [ti kola].
that is seen car
'That car, he saw.'
d. Lijepei je vidio [ti kuće].
beautiful is seen houses
'Beautiful houses, he saw.'
e. Kolikoi je zaradila [ti novca]?
how-much is earned money
(22) a. *Číhoi jsi viděl t otce
b. *Číhoi jsi si koupil t auto
c. *Číhoi dohodil komu Kecal t dceru
d. *Za jaki vydělala dlouho 50 korun t
(23) Cuiami amat Cicero [ti puellam]?
whose loves Cicero girl
'Whose girl does Cicero love?' (Krivochen and Kosta 2013)
In the light of these phenomena, Bošković (2005) investigates LBE, "focusing on adjectival LBE, with the goal to use it to shed light on the structure of NP, in particular, the structural position of AP within the traditional NP." The point of departure of his analysis is Uriagereka's (1988: 113) observation that LBE is allowed only in languages that do not have overt articles. Thus, Bulgarian, which
Uriagereka mentions, and Macedonian, the two Slavic languages that have overt articles, differ from SC, Russian, Polish, and Czech, which do not have overt articles, in that they disallow LBE. In addition, Latin differs from modern Romance languages in that it allowed LBE and did not have overt articles. We will show that this generalization is wrong for Colloquial Czech (obecná čeština). The restrictions on LBE seem to be based on the NP/DP distinction. Languages with overt articles disallow LBE because the D head blocks, by Relativized Minimality, extraction out of its NP complement, while languages without a DP allow extraction, since there is no intervening head. Notice that the mention of Latin as a non-overt-article language implies that no article equals no D head, and that D is in fact a sort of phase head, making its domain impenetrable to external probes. There is an alternative approach, based on a conjectured link between LBE and scrambling, considered by Bošković: "The alternative is based on the conjecture that the right way to divide LBE and non-LBE languages does not depend on the presence/absence of DP, but the possibility of scrambling. More precisely, whether or not a language allows LBE depends on whether or not it allows scrambling, only scrambling languages allowing it. In this respect, note that Slavic languages that allow LBE, such as Russian, SC, Polish, and Czech, are all scrambling languages. Regarding Bulgarian, which disallows LBE, although Bulgarian displays some freedom of word order, its word order is noticeably more rigid than in SC, a closely related language, which could be interpreted as if Bulgarian had no scrambling. As for Romance, modern Romance languages do not have scrambling and disallow LBE. Latin, on the other hand, had scrambling and allowed LBE. English is another example of a non-scrambling language disallowing LBE." (Bošković 2005)
Independently of the syntactic variation across languages and the morphosyntactic expression of Det (so-called article vs. article-less languages), there can be no doubt that, semantically and pragmatically (D-linking) speaking, determination of nouns is a deep-rooted cognitive category which belongs to the semantic universals of human cognition and thus must be expressed in one way or another. In formal semantics, usually four types of nouns are differentiated, expressing four basic types of nominal determination, namely I. sortal <e,t>, II. individual e, III. relational <e,<e,t>>, and IV. functional <e,e> (cf. Löbner 2011, Schürcks, Giannakidou, and Etxeberria 2014: 1 passim, Friedrich 2009 for Russian). A new hypothesis: Standard written Czech is a language with no overt articles, thus projecting just an NP, and not a DP language. But we will show that the colloquial substandard variety obecná čeština uses an article-like demonstrative pronoun, yet still does not exclude left branch extractions in certain structures. We predict that it is not the overt expression of an article which prevents LBE in Czech
but rather the type of functional modifier, together with the fact that certain types of nominal determination via modifiers disallow extraction or separation from the nominal head for semantic reasons of compositionality. So-called left branch extraction (LBE) is possible in BCS and only partly possible in Czech in wh-adjuncts. (24) a. is out because the NP muže (GenSg of muž) has moved to TOP (internal Merge = Move) before it has been externally merged with the jak-phrase in the first step of the derivation. Thus, the UG principle already introduced in Chapter 3, (3) of this book (Merge precedes Move), and generalized under the notion of the PIC, has been violated. If, on the contrary, the jak-Adverb is merged with the Edge of the DP [AP silného] first, it can then, in the next derivational space, undergo overt Focus-Movement with the Edge of the DP Phase while the head of the DP undergoes covert Movement at LF to FOKUS in the FORCE Left-periphery domain; therefore (24) b. is fine. The two examples illustrate two different types of Movement: the one in (24) a. is not licensed because it violates a general UG principle (*Move-over-Merge, thus *PIC), whereas the one in (24) b. is licensed because it obeys Merge-over-Move and thus the PIC. Illicit displacement types such as (24) a. vs. licit displacement types such as (24) b. can be regarded as prototypes of Freezing vs. Anti-Freezing Effects, which, however, are subsumed under a more general principle. Thus, for some types of displacement we can dispense with the notion of Freezing vs. Anti-Freezing56 in favor of a more general principle.
(24) a. *Jak muže viděl Jan t silného? (LBE)
how man saw Jan strong
'How strong a man did Jan see?'
b. [Jak silného] viděl Jan t MUže? √ (Topicalisation: GradP-AP-FocN)
how strong saw Jan man
'How strong a man did Jan see?'
There are languages such as Chinese (Mandarin) or Japanese, in which wh-phrases in overt syntax (S-structure) can or even have to remain in their base-generated position (in situ). It looks as if these languages have neither long nor short wh-movement at S-structure.

56 Freezing and Anti-Freezing are best exemplified in the work of Gereon Müller and Wolfgang Sternefeld (1993, 1994, 1996) and Müller (2011a) for German and English. Cf. also section 6.2 and the work on Freezing Effects mentioned there for further discussion.
(25) a. ni kanjian-le shei?
you – see – ASP – whom
'Whom did you see?'
b. Qing wèn, ta xing shenme?
'Can you tell me, she is called what?'
To assign the correct semantic interpretation to sentences such as (25) a./b., the question words shei/shenme have to move at LF (covert movement) into a c-commanding domain up to SpecCP, so that we get the following LF reading:
(25) c. [SpecCP sheii [IP ni kanjian-le ti]]
whom – you – see – ASP
'Whom did you see?'
Another, structurally entirely different language is Ekegusii, a Bantu language spoken in parts of western Kenya, in and around the town of Kisii. A sentence such as How strong is the man that John saw? can be expressed only as in (25) d., but never as in (25) e.
(25) d. Naki omosacha Yohana aroche abwate chinguru ching'ana. (Ekegusii, Bantu)
how-man John saw have strong how-much [Merge-over-Move]
'How strong is the man John saw?'
e. *Naki-omosacha oroche Yohana chinguru [*Move-over-Merge]
how-man saw John strong
There are two question markers (wh-words), which must license both the nominal extracted part naki omosacha 'how-man' and the adjective chinguru 'strong' together with ching'ana 'how much.' We assume that the wh-words in Ekegusii license cyclic movement and thus obviate the ban on Move-before-Merge by overt cyclic Movement, establishing a wh-chain between the two wh-words. An extraction of subparts of a phase (DP) is thus forbidden, just as in Czech. Both Chinese and Ekegusii confirm the Merge-over-Move principle. In Chinese, there is no overt Movement of wh-words, so that such a principle is not in play (which does not mean that it is not viable), and in Ekegusii a subpart of a phase can only be extracted if it first merges with a wh-word (naki 'how') and only then, being licensed by another wh-operator, undergoes overt Movement (internal Merge) to SpecCP. Thus, the theoretical assumptions of hypotheses (2) and (3) from Chapter 3 can also be checked by comparing them with data from genetically and structurally unrelated languages. In examples (24) c. and d., the same contrast cannot
be seen in Mandarin, a language that is not suspected of having the same structural properties in morphology and syntax. In Mandarin, wh-Movement is a covert (LF) Movement; the narrow syntax leaves the adverbial degree phrase consisting of the wh-word and the adjective or adverb in situ (the same holds for wh-objects, cf. Kosta 1992: 236–37). If we consider the structure of the adjunct phrase in Czech, it is a degree Phrase headed by the Adjective but containing the degree particle jak 'how' in the SpecAP position. The degree particle jak is the head of its own projection (DegrP) and takes as its sister node, under c-command, the complement AP silného. Thus, it cannot be separated from its complement, because the complement contains (being itself a head) the degree features of the upper head of the degree Phrase, and a head X of XP must be adjacent to its complement Y of YP. Both Phases contain a subset of features for Degree (call it [+∂]) which is transparent on the edges. If one of the feature-sharing subparts
of the chain moves to a higher SpecZP, the features will be non-transparent in the probe-goal relation of the searching domain. In other words, they are not available anymore. In addition, only real Phases can move, but not parts of a Phase.
Tab. 2: Move Precedes Merge
Explanation of the ungrammaticality of (24) a., Move-before-Merge: Before the DegrP can Merge, the noun muže moves to Top. On the contrary, (24) b. is grammatical because the GradP, AP, and NP build a complex category and merge before they move. And since only Phases – but not single heads – can move, the whole Phase {vP}, which subcategorizes and L-marks the GradP, AP, and NP, allows partial movement, leaving the head at the very edge for feature transmission: the features are [+degree [+∂]].
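The intuition that only material at the phase edge remains transparent to a higher probe, while the rest of the phase is opaque, can be stated schematically. The toy sketch below (in Python, with hypothetical names such as Phase and accessible_to_outside_probe) is merely an illustration of this PIC-style accessibility condition under simplified assumptions, not a claim about the actual computational system.

```python
# Illustrative sketch of a PIC-style accessibility condition (hypothetical names):
# a phase exposes only its head and edge to a higher probe; the domain is opaque.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Phase:
    head: str                                         # e.g. "D" or "v"
    edge: List[str] = field(default_factory=list)     # specifiers / edge material
    domain: List[str] = field(default_factory=list)   # complement, opaque after Transfer


def accessible_to_outside_probe(phase: Phase) -> List[str]:
    """Only the phase head and its edge are visible to a higher probe."""
    return [phase.head] + phase.edge


dp = Phase(head="D", edge=["tu[+def]"], domain=["knihu[phi, Case]"])
print(accessible_to_outside_probe(dp))  # ['D', 'tu[+def]']; 'knihu' cannot be probed directly
```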
Both the movement and the base-generation approach entail many problems. First of all, analyses based on a simple distinction DP/NP and *LBE/LBE, or *Scrambling/Scrambling and *LBE/LBE, must fail because they seem to be contradicted by many counterexamples from German (26) and Czech (27), both being Scrambling languages in the middle field, but one projecting a DP (German, with definite and indefinite overt articles), while other languages, such as Czech and Lower/Upper Sorbian, are in fact languages with definite articles projecting a DP rather than an NP:
(26) *... weil meines Bruders gestern [- - Auto] gestohlen wurde because my brother’s yesterday car stolen was (27) * protože mého bratra včera bylo ukradeno [auto - -]
We suggest that NPs are subject to island sensitivity even in Czech, and these effects are to be explained by an interface theory, not by adding stipulations to the generative component. Things change if the left-branch extraction takes place out of non-complex NPs and/or if the operation includes pied-piping of the extracted element and the wh-word. This leads us to ask the following question:
6.6.2 Scrambling in Complex NPs and Escaping Islands
Is left branch extraction in Slavic category-sensitive? Are some Slavic languages DP-languages? What happens when some additional material comes into play, e.g. the demonstrative (short) form ten, ta, to:
(28) a. Jakou knihu čte Petr?
which book reads Peter
b. ??/*Tu knihuDef1 (čte [DP t1])2 jakou t1 ([čte [DP t1]1]2) Petr1
'Which book is Peter reading?'
c. [DP Tu knihu]1_TOP/Def čte2 [DP Petr] t2 [DP tu knihu]
'The book, Peter is reading.'
(29) a. To péro je nové. the pen is new ‘The pen is new.’ b. Mé péro je nové. my pen is new ‘My pen is new’. c. To péro je mé. the pen is mine ‘The pen is mine.’
d. Nové péro je mé.
new pen is mine
'The new pen is mine.'
(30) a. ta pěkná děvčata
these pretty girls
'these pretty girls'
b. její dlouhé vlasy
her long hair
'her long hair'
(31) a. pěkná ta děvčata
b. dlouhé její vlasy
long her hair
(32) a. Pavel1 tu knihu2 [vP odpoledne t1 koupil t2].
Pavel.Nom the book.Acc in the afternoon bought
'Pavel bought the book in the afternoon.'
b. Pavel1 koupil3 tu knihu2 [vP odpoledne t1 t2 t3].
Pavel.Nom bought the book.Acc in the afternoon
'Pavel bought the book in the afternoon.'
c. Pavel1 tu knihu2 [vP t1 koupil t2].
Pavel.Nom the book.Acc bought
'Pavel bought the book.'
d. Pavel1 koupil3 tu knihu2 [vP odpoledne Jirkovi t2 t3].
Pavel.Nom bought the book.Acc in the afternoon Jirka.Dat
'Pavel bought the book for Jirka in the afternoon.'
Agree, Phi-Features, and DP-internal Scrambling
We can now derive the possibility of phase-internal scrambling in Czech, because the Case feature and the Agreement phi-features of the entire NP [tu knihu] are at the edge of a phase [DP], as given in (33):
(33) [C Spec T .....[φ/Case DP tu [NP knihu [...............
The element tu is thus in fact a definite article in Czech and not a demonstrative; in its new grammaticalized function it is a real Determiner, equivalent to the Determiners in English, French, German and Italian. Czech is a language which has a DP. If the interface systems can read the Edges, they can also interpret them, and only then can the features φ/Case be deleted at PF. The Phase DP is at this stage done, but it can serve as input for further computation if there is another parallel working space (e.g. IS) to check different features (e.g., Top or Foc). In the light of the data considered, we must modify the proposal we gave in Krivochen and Kosta (2013), and we offer the following provisional conclusions, only partly agreeing with Bošković:
• (1) Czech is not an LBE language. In Czech (as opposed to Slavic LBE languages such as SBC), degree particles, quantifiers, demonstrative and possessive pronouns are not adjectives but modifiers and islands, and they thus cannot be extracted out of a specifier position; they can only be extracted and pied-piped together with the entire phase with the head, or they can be moved if Labels mark the edge of the Phase.
• (2) There is no evidence that SBC has a syntactic category D, as opposed to Czech, which has a definite article of the ten, ta, to class of former demonstrative pronouns, as opposed to real demonstrative pronouns of the tento, tato, toto; tamten, tamta, tamto and onen, ona, ono class.
• (3) Demonstratives of the ten, ta, to class in Czech are not Adjectives adjoined to A' as AdjP modifiers to an A head; rather, they are Determiners and heads of a DP phase. They are proclitics or maybe even affixes and build a CL-Host-Unit which cannot be subextracted.
Topicalization vs. Focus-Movement and wh-Movement
An account of Topicalization should generally explain why certain features drive unrestricted movement in certain embedded structures while other features cannot do this. The problem arising with object Raising (also known as "tough movement") was first noticed and studied within Generative Grammar during the 1980s, particularly in Chomsky (1986). The relevant examples are of the following type (the examples are equivalent in all languages).
(34) a. Este problema es difícil de resolver. (Sp.) b. This problem is difficult to solve. (Engl.) c. Ètu problemu trudno rešit’. (Russ.) (examples taken from Krivochen and Kosta 2013: 112 under 4.3.2 Object Raising: “Tough Movement”)
(35) a. Mi domando, a Gianni, se, ieri, QUESTO, alla fine della riunione, avremmo potuto dirgli (non qualcos’altro)3 ‘I wonder, to Gianni, if, yesterday, THIS, at the end of the meeting, we could have said to him (not something else)’ b. * A chi QUESTO hanno detto (non qualcos’altro) ‘To whom THIS they said (not something else)?’ c. * QUESTO a chi hanno detto (non qualcos’altro) ‘THIS to whom they said (not something else)?’ d. * A GIANNI che cosa hanno detto (non a Piero) ‘TO GIANNI what they said (not to Piero)?’ e. * Che cosa A GIANNI hanno detto (non a Piero) ‘What TO GIANNI they have said (not to Piero)?’
If we compare these data with instances of contrastive topic in Italian and other languages, we see that neither direct nor indirect object Raising into a contrastive topic phrase shows up in a wh-movement environment:
(35) f. Ich frage mich, ob wir dem Johann gestern gerade DIEses, am Ende der Sitzung, hätten sagen können und nichts Anderes.
g. Ptám se, zda-li jsme Janovi měli říct t zrovna TOTO na konci schůze a nic jiného
h. Ptám se, kdo by měl Janovi říct zrovna TOTO na konci schůze a nic jiného
i. *Wem gerade DIESES sagten sie (und nichts Anderes)
j. *Komu TOTO řekli (a nic jiného)? // Komu řekli TOTO (a nic jiného)
k. *TOTO komu řekli (a nic jiného)? // Komu toto řekli (a nic jiného)
l. *JANOVI co řekli (a ne Petrovi)? // Co řekli JANOVI (a ne Petrovi)
m. *Co JANOVI řekli (a ne Petrovi)? // Co Janovi ŘEKLI (verum-focus-reading)
Why can Topicalization and wh-Movement not happen simultaneously in the same derivation? Recall: the [+wh]-feature and the [+top]-features should simultaneously attract their probe(s), since all operations within a phase (except external Merge (EM)) apply (simultaneously) at the phase level (cf. Hiraiwa 2001). Chomsky (2007: 6) compares derivations within the phase in this way to the construction of formal proofs: "Intuitively, the proof 'begins' with axioms and each line is added to earlier lines by rules of inference or additional axioms. But this implies no temporal ordering. It is simply a description of the structural properties of the geometrical object 'proof'." Why are embedded wh-clauses better than root wh-clauses in contexts of object shift? Let us consider the potential structure under consideration for (35) a., f., g.
(36) Force/CP ... INT ... FocP ... Wh ... (embedded clause) ... Finite SpecTP ... vP ... wh-in situ ... (NEG)
[+mode] [+wh/Foc-feature] [+top] [+Foc-feature in situ]
Ptám se, kdo by měl Janovi říct zrovna TOTO na konci schůze a kdo NE
In a paper of early Minimalism, Chomsky (1989) proposed two principles which choose among competing transformational derivations. He calls them principles of "Economy of Derivation." These are the Least Effort principle and the Last Resort principle, given in (37) a.-b. (The nomenclature is partially my own: Chomsky uses the term "Principle of Least Effort" for (37) a.-b. together.)
(37) a. Least Effort Principle: If two derivations from a given D-structure each yield legitimate outputs, and one contains more transformational steps than the other, only the shorter derivation is grammatical.
b. Last Resort Principle: "UG principles are applied wherever possible, with language-particular rules used only to 'save' a D-structure yielding no output."
We cannot, of course, talk of D-structure or S-structure, but rather of external Merge and internal Merge (Move), assuming that external Merge precedes internal Merge/Move. Thus, we can understand (37) as a pre-instance of (38).
(38) Merge over Move
In any given derivation from an equal LA {α, β},
i. two elements {α}, {β} have to be externally merged, yielding the structure {γ {α, β}}, in which
ii. γ dominates both α and β,
iii. and neither α dominates β, nor β dominates α.
iv. Additional rule: No element of this string can move out of the structure before it has been merged with the other (Anti-Freezing / Ban on Freezing).
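Read procedurally, clause iv. of (38) amounts to a simple well-formedness condition on derivational histories: no element may undergo internal Merge (Move) before it has been externally merged. The sketch below (in Python, with purely hypothetical operation labels and toy derivations) is meant only as an illustration of that condition, not as part of the grammar itself.

```python
# Toy check for clause (iv) of (38): an element may only undergo Move (internal
# Merge) after it has been externally Merged ("Merge precedes Move").

def respects_merge_over_move(derivation):
    """derivation: ordered list of (operation, element) pairs, where operation is
    'merge' (external Merge) or 'move' (internal Merge)."""
    merged = set()
    for operation, element in derivation:
        if operation == "merge":
            merged.add(element)
        elif operation == "move" and element not in merged:
            return False  # Move applied before external Merge: the starred configuration
    return True


licit = [("merge", "alpha"), ("merge", "beta"), ("move", "alpha")]
illicit = [("move", "alpha"), ("merge", "alpha")]
print(respects_merge_over_move(licit), respects_merge_over_move(illicit))  # True False
```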
The ban on internal Merge (Move) applying before external Merge is thus a reformulation of the Economy Principle. Let us now consider the structure (39) a. of (39) b.
(39) a. ...Force/CP ... INT ... FocP ... Wh ... (embedded clause) ... Finite SpecTP ... vP ... wh-in situ ... (NEG)
b. [+mode] [+wh/Foc-feature] [+top] [+Foc-feature in situ]
Ptám se, kdo by měl Janovi říct zrovna TOTO na konci schůze a kdo NE
I ask myself who Int Mode Jan.Dat tell only THIS at the end of the meeting and who does NOT.
Recall that in standard MGG, wh-movement is derived by a [+wh]-feature in an Operator position which attracts the wh-word to Spec-CP via cyclic movement. The Label or [+wh]-feature of wh-Movement is different from the Label of Topicalization, which is [+top]5. We assume that Topicalization is associated with the salient feature [+definiteness] and also with the finiteness of T0 via an unvalued D-feature in T0 (with much evidence coming from Clitic Doubling languages, cf. Corbara, in prep., Radeva-Bork 2012, Schick 1999), and that both features must be checked in the Spec-TP position (via specifier-head Agreement).
We further define SpecTP as the goal position of the subject merged in SpecvP, because it is evidently the position of the Finite Clause in which the Case NOM can be assigned. When the Label [+top/+def/+fin] is valued in SpecTP, the finite clause is merged, yielding {TP{vP{VP}}}. BUT: since T0 inherits its F-features from C0 only when C0 is merged, the lower subject or object of the vP and the verb must stay in their base-merged SpecvP and v positions before moving to SpecTP/T0 for Case and Agree, prior to overt wh-Movement. We thus derive the Economy of Derivation Principle as follows:
(40) Principles of Economy of Derivation (revised)
i. If two derivations from a given numeration of the LA each yield legitimate outputs, and one contains more transformational steps than the other, only the candidate yielding a grammatical output with fewer steps is optimal; this output counts as done or serves as input for further computations.
ii. Special case, wh-Movement before Topicalization: Since T inherits its features from C only after/when C is Merged, and the wh-phrase(s) must check their wh-feature in Spec-CP before TP is merged, the lower subject or object of the finite clause must stay either in its base-merged SpecvP position or in its SpecTP [TOP] position and can never move further than to TP.
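Clause i. of (40) can be read as a comparison among convergent derivations built from the same lexical array, keeping the one with the fewest steps. The sketch below is a minimal illustration of that comparison under these simplifying assumptions; the derivation records and the function optimal_derivation are hypothetical names introduced only for exposition.

```python
# Toy comparison for clause i. of (40): among derivations from the same LA that
# converge (yield legitimate outputs), only the one with the fewest steps is optimal.

def optimal_derivation(candidates):
    """candidates: list of dicts with keys 'steps' (list of operations) and
    'converges' (bool). Returns the shortest convergent candidate, or None."""
    convergent = [c for c in candidates if c["converges"]]
    if not convergent:
        return None  # nothing converges: Last Resort territory
    return min(convergent, key=lambda c: len(c["steps"]))


d1 = {"steps": ["merge", "merge", "agree", "move"], "converges": True}
d2 = {"steps": ["merge", "merge", "agree", "move", "move"], "converges": True}
d3 = {"steps": ["merge", "move"], "converges": False}
print(optimal_derivation([d1, d2, d3]) is d1)  # True: convergent and with fewer steps
```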
Recall that in standard MGG, wh-movement is derived by a [+wh]-feature in an Operator position that attracts the wh-word to Spec-CP via overt cyclic movement. The Label or [+wh]-feature of wh-Movement is different from the Label of Topicalization, which is [+top]6. We assume that Topicalization is associated with A-positions and that the salient feature [+definiteness] is the driving force (with much evidence coming from Clitic Doubling languages, cf. Corbara, in prep., Radeva-Bork 2012, Schick 1999), and that this feature must be checked both in the derived position under external Merge (SpecvP) and later in the Spec-TP position, which is the first prominent position of the Subject of a Finite Clause and the position in which the Case NOM is assigned. If the Label [+top] has been valued in SpecvP and then associated with the merged finite clause, it can be checked against SpecTP. Since T inherits its features from C when C is Merged, and the wh-phrase must check its wh-feature in Spec-CP before TP is merged, the lower subject or object of the finite clause must stay either in its base-merged SpecvP or in its SpecTP [TOP] position and can never move further than to TP. The ungrammatical examples in (35) all confirm this analysis, because in every single case the internal Merge via DP-movement and verb Movement has taken place before the wh-Movement could take place. If our analysis is correct, then we would expect that any other type of wh-question with wh remaining in situ
(and by extension in all languages where wh-movement is covert, e.g. classical Mandarin) should, for economy, allow the DP to be merged first, because wh does not move at S-structure. This prediction is borne out in echo wh-questions, cf. (41) a.-d.
(41) a. (tell me once again, I didn’t quite understand) You said that (only) THIS Peter should tell WHO? b. (Wie war es noch einmal, ich verstand es nicht ganz) Du sagtest, dass (nur) DIESES Peter WEM sagen sollte? c. (Jak to bylo, docela jsem tomu neporozuměl) Řekl jsi, že jen TOTO Petr řekl KOMU? d. (Kak èto bylo, skaži, ja ne očen ponimaju) Ty skazal čto vot ETO Petr skazal KOMU?
A very important restriction is that we have to differentiate between Topicalization and Focalization: only the former is restricted to Spec-TP as its target, while wh-Movement is always a Focus-Movement, probing all potential Spec-XPs cyclically and then reaching Spec-CP/FocP as its final destination for valuing the focus feature. Thus, [+wh] = [focus]. If we assume that wh-movement has precedence in the derivation, any kind of NP-movement that happens before wh-movement should be banned for economy.
(42) a. [ForceP wh- Für wen] wurde letztes Jahr [TOP das HAUS] [+ Foc geBAUT]? b. *Das Haus für wen wurde letztes Jahr gebaut
If, however, the DP [Das Haus für wen] is taken as one unit of the DP phase in which one part is focalized, the structure is licensed by the assumption that contrastive focus features can be assigned to XPs and their subparts in any given constellation without forcing focus movement to the left periphery (ForceP) or any other type of Remove operation. We can see that in restricted and topicalized DPs, (43) a. vs. b., wh-words cannot be instances of wh-movement; rather, they are CPs, c-selected by the head noun of the DP. This means that the Labels assigned to topicalized DPs as heads of relative Clauses must differ from the Labels assigned to wh-XPs. As opposed to this evidence, in mutually comparative contexts (28) a.-d., in which the wh-word is part of a wh-movement Operation in syntax, the wh-word by definition (34') must first satisfy the filter (34') (ii) before successive cyclic internal Merge can be activated:
(43) a. [wh- Wer auch immer] es getan hat, [Top DP DERJENIGE] muss es zurückzahlen.
b. [DP DERJENIGE] muss es zurückzahlen, [wh- wer auch immer] es getan hat.
6.7 Summary
1. Any Movement (be it internal or external Merge) must fulfill the Merge-over-Move principle, summarized under the notion of the PIC in Chapter 3, (3).
2. Anti-Freezing vs. Freezing Effects can be summarized under the PIC (3).
3. Labels are superior to features because they are not only more general notions of UG, but are also controlled by the interfaces in a way which protects the system from superfluous derivational steps and from the non-economical Operation Remove*.
4. Any Model of Grammar should be based on Crash-proof Grammars.
7 On the Causative/Anti-Causative Alternation as Principle of Affix Ordering in the Light of the Mirror Principle, the Lexical Integrity Principle and the Distributed Morphology*57
Summary
During the past 40 years, research on (Anti-)Causativity has belonged to the central themes of general and comparative or typological linguistics. In this respect, it is astonishing that, from the Slavic side, this subject has in my opinion been treated only very marginally, if at all. My interest was motivated by the fact that Causatives and Anti-Causatives require an analysis which touches an interface of morphology, semantics, lexicon, word formation, and syntax. It can therefore also be captured within the Minimalist Program (with the inclusion of Distributed Morphology). Furthermore, the theme comprises important observations concerning questions of affix ordering, syntactic structures, and verb movement. Most syntactic accounts of affix ordering and verb movement follow the theory of incorporation by Mark Baker (1988). In this theory, syntactic incorporation is assumed to be an instance of the general syntactic rule Move Alpha, that is, a syntactic operation that derives morphologically complex words from morphologically basic elements (roots, stems, or affixes) by head-to-head movement via incorporation. Whereas the traditional view of morphology and word formation is that word formation takes place in the lexicon, and that morphological rules are different in nature and apply to different primitive elements than syntactic rules, we shall try to advocate an analysis in which the phenomenon of Anti-Causatives and Causatives has to be derived from the different ROOT semantics of verbs, projecting different trees and syntactic structures by the operations AGREE and MERGE. The Causative Alternation (CAL) will serve as a criterion to distinguish between externally and internally caused causation; with the help of the CAL, Unaccusativity will be divided into two subgroups: alternating Unaccusative (AU-) verbs and non-alternating Unaccusative (NAU-) verbs. In the following, an alternative distinction between AU- and NAU-verbs will be developed, namely the presence/absence of information about how the process in question was caused. The universal concept of the encyclopedic lexicon in English, German, Russian, and Czech (partly also other European and non-European languages) seems to
* This article is dedicated to Erika Corbara who made my life a miracle.
57 This article has been published in Zeitschrift für Slawistik in 2015 under the same title and is reprinted here by kind permission of the de Gruyter publishing house. Peter Kosta (2015): "On the Causative/Anti-Causative Alternation as Principle of Affix Ordering in the Light of the Mirror Principle, the Lexical Integrity Principle and the Distributed Morphology", in: Zeitschrift für Slawistik 2015; 60(4): 570–612. ISSN: 2196-7016. DOI: 10.1515/slaw-2015-0038
assume four different ROOTS of verbs at base to classify the Anti-Causativity opposition: √agentive (murder, assassinate, cut), √internally caused (blossom, wilt, grow), √externally caused (destroy, kill, slay), √cause unspecified (break, open, melt). Moreover, it will be shown that unergative/causative pairs depict an independent phenomenon which does not affect considerations about the CAL (in agreement with Alexiadou et al. 2014a, b, Kosta 2010, 2011, Marantz 1997, but contra Levin and Rappaport Hovav 1995, Reinhart 2000). This contribution is organized as follows: In section 1, I propose a formulation of the MP based on syntactic features; the examples will be taken from Causatives and Anti-Causatives that are derived by affixes (in Russian, Czech, Polish, German, English, and some other languages of different types and origins) by head-to-head movement. In section 2, I review some basic facts in support of a syntactic approach to Merge of Causatives and Anti-Causatives, proposing that Theta-roles are also syntactic features that merge functional affixes with their stems in a well-defined way. I first try to give some external evidence by showing that Causatives and Anti-Causatives obey a principle of thematic hierarchy postulated early in the generative literature by Jackendoff (1972: 43), and later reformulated in terms of an argument-structure-ordering principle by Grimshaw (1990: Chapter 2). Crucial for my chapter is the working hypothesis that every syntactic theory that tries to capture the data not only descriptively but also explanatorily should proceed from three levels of syntactic representation: a-structure, where the relation between the predicate and its arguments (and adjuncts) is established, thematic structure, where the Theta-roles are assigned to their arguments, and event structure, which decides about the aspectual distribution and division of events.
Keywords: Anti-Causatives, Argument Structure-Ordering Principle, Causatives, Causative Alternation, Distributed Morphology, Event Semantics, Lexical Functional Grammar, Mental Lexicon, Minimalist Program, Montague Grammar, Unaccusatives, Thematic Hierarchy, Mirror Principle, Functional Hierarchy, Universal Grammar
7.1 Mirror Principle and Functional Hierarchies
7.1.1 Preliminary Observations and Comments
The starting point of the controversy between two fundamentally opposing theoretical viewpoints can be briefly sketched as follows: (1) the syntactic approach holds that all morphological operations, that is, all form-building processes from single stems or roots up to phases, start by taking non-derived lexical items from a list (call it the lexical array, LA) and combining them via a simple operation, that is, external (and possibly internal) Merge in the corresponding search-goal domain (Chomsky 2005, Krivochen and Kosta 2013). If the "features" of two lexical items (LI(phi1) and LI(phi2)) can be read off at one of the interface levels (S-M or C-I), they will be valued against each other and finally erased, building a complex phrase, following and extended by the (semantic) principle of Dynamism of Full Interpretation (cf. Krivochen and Kosta 2013: 70). Both valued and unvalued features are subject
to a mandatory syntactic operation, Merge, but the information increases or decreases independently of the S-M interface. (2) As opposed to this, in the LFG approach grammatical-function-changing operations like passivization are said to be lexical. This means that the active-passive or causative-anti-causative relation, for example, is a relation between two types of verb rather than two trees. Active and passive verbs are both listed in the lexicon and involve alternative mappings of the participants to grammatical functions. There are some alternatives in the recent literature to the two approaches just mentioned, which we cannot discuss here (cf. Alexiadou, Borer, and Schäfer eds. 2014, Panagiotidis 2014). Through the positing of productive processes in the lexicon and the separation of structure and function, LFG is able to account for syntactic patterns without the use of transformations defined over syntactic structure. An alternative approach is the theory of thematic representations proposed by Grimshaw (1990). To both alternatives I shall return in the theoretical part 2. Let us consider two simple examples of causative and anti-causative formation as presented in Kosta (2010) and Kosta (2011). Causative constructions (hereafter CC) are grammatical expressions describing a complex situation which consists of two components (Song 2001: 256–259): (i) the causing event (CAUSER-EVENT), where the CAUSER initiates or causes something, and (ii) the caused event (CAUSEE-EVENT/STATE), where the CAUSEE performs an action or is subject to a change of state as a result of the initiated or caused action of the CAUSER. The following Japanese sentence describes such a situation of causativization:
(1) Kanako ga Ziroo o ik-ase-ta (Japanese)
Kanako.NOM Ziro.ACC go-CAUS-PAST
'Kanako made Ziro go.'
In (1) the intransitive verb ik "to go" and the causative affix -ase- clearly combine into a single word at some stage of the derivation. Thus, we are led to an analysis similar to one developed in Baker (1988: 147–149), where CC in English and Chichewa (Bantu) in (2) and (3) are treated as Verb incorporation:
(2) a. Bill made his sister leave before the movie started. (Engl.)
b. The goat made me break my mother's favorite vase.
(3) a. Mtsikana ana-chit-its-a kuti mtsuko u-gw-e. (Chichewa)
girl AGR-do-make-ASP that waterpot AGR-fall-ASP
'The girl made the waterpot fall.'
b. Aphuntitsi athu ana-chits-its-a kuti mbuzi zi-dy-e udzu
teachers our AGR-do-make-ASP that goats AGR-eat-ASP grass
'Our teachers made the goats eat the grass.'
(4) a. Mtsikana anau-gw-ets-a mtsuko. (Chichewa)
girl AGR-fall-made-ASP waterpot
'The girl made the waterpot fall.'
b. Catherine ana-kolol-ets-a mwana wake chimanga
Catherine AGR-harvest-made-ASP child her corn
'Catherine made her child harvest corn.'
The English sentences in (2) a., b. are biclausal in all respects. In particular, they are biclausal in meaning, with an embedded clause appearing as a semantic argument of the causative predicate in the main clause. For each of the two clauses, two morphologically different verbs are to be seen: make (as the causative verb) and leave/break, respectively. The Chichewa sentences in (3) are exactly the same biclausal type of sentence, but (4) and (1) are only similar semantically; there are differences in morphology and syntax. Japanese and Chichewa contain only one verb each, which happens to be morphologically complex. In addition, sentences like (4) can be paraphrases of (3). In our analysis (Kosta 2010, 2011), we have shown that both types of CC – the biclausal and the monoclausal, or better monolexical – can be found in Slavic languages, here demonstrated in Russian, Czech, and Polish:
(5) a. Bill nechal jeho sestru odejít dřív, než film začal. (Cz)
Bill made his sister leave before movie started
b. Kozel zastavil menja slomať ljubimuju vazu moej materi (Ru)
goat made me break my mother's favorite vase
c. Bill spowodował, żeby siostra odeszła, niż film się zaczęł. (Pl)
Bill made his sister leave before movie started
(6) a. Dívka upustila sklenici na zem. (Cz)
the girl.Nom dropped a glass.Acc on the floor.Acc
b. Dívce spadla sklenice na zem.
the girl.Dat fell a glass.Nom on the floor.Acc
c. Devushka uronila stakan na pol. (Ru)
girl.Nom dropped glass.Acc on floor.Acc
d. U devushki upal na pol stakan.
at girl.Gen fell on floor.Acc glass.Nom
e. Dziewczyna upusciła szklankę na podlogę. (Pl)
the girl.Nom dropped a glass.Acc on the floor.Acc
f. Szklanka wypadła dziewczynie na ziemię.
a glass.Nom fell the girl.Dat on the floor.Acc
The examples (6) a.-f. are not just translations of a periphrastic causative construction (CC) such as The girl let the glass drop on the floor. (6) a., c., e. are the incorporating type of CC, where the prefix u- (presumably base-generated in a head CAUS0) attracts and then incorporates the verbal head of the lexical verb V0 via head-to-head movement, making of a lexical verb a causative verb which now assigns accusative case to the chain sklenice … trace. The semantic differences between a periphrastic CC and its incorporated counterpart are also to be mentioned: For instance, (6) c. does not imply that the girl dropped the glass herself; this is just the most salient interpretation. The glass could fall by itself or be dropped by somebody else (although this is a more problematic interpretation, but one can imagine a context for it). The semantic difference between these two types of CC is even bigger if we take the AC-clauses in (6) b., d., which only imply that the glass is related to the girl (it belongs to her, or she works in a bar and handles glasses, etc.). The prefixes and stems of the incorporating CC and the incorporating AC are apparently different: If one wants to stress that the glass fell accidentally, one can either say "devushka vyronila stakan iz karmana" (you can "uronit" something on purpose, but normally not "vyronit" – unless you have specific contexts where, for example, you pretend that you did something accidentally) or "stakan vypal/vyskol'znul u devushki iz ruk" ("the glass slipped out of the girl's hands"), cf. (iii). The fact that there is a semantic difference between (6) a., c., e. vs. (6) b., d., f. will have consequences for a syntactic theory of causation as put forward in section 2 of this chapter. Thus, if we want to mirror the semantic difference between the periphrastic CC, the incorporating type of CC and the AC type, we should represent it in syntax as follows:
The periphrastic CC and the incorporating CC alternate with so-called Anti-Causative Constructions (ACC), as we can further observe under (7) vs. (8), in which anti-causativity is expressed by adding to the same transitive verb otevřít 'to open' an anti-causative marker se – homonymous with the Reflexive se-verbs – an operation which changes not only the syntactic structure (reducing or absorbing the external Agent argument or Causer Petr and moving the internal Theme argument dveře to the specifier position of a Cause Phrase) but also the Case and the surface structure.
(7) Petr otevřel dveře (Cz)
[Petr VoiceP [CAUS [dveře √otevřel]]]
(8) Dveře se otevřely
[SpecCaus the door [CAUS [the door √open]]]
In Krivochen's paper (2014), the presence of a functional projection of Voice in (7) (as argued for in Kosta 2010, 2011, following Schäfer 2008, Alexiadou 2006) is rejected, whereas the presence of a CAUS projection is accepted. Our arguments in favor of a Voice head come from the syntactic restrictions on passivization in one class of verbs where the Causer remains unspecified. This is exactly example (8), where – as Krivochen admits – the Causer remains unspecified. This can be confirmed by the fact that only this class of verbs allows the Causative alternation, as demonstrated under (7) vs. (8), where passivization of (8) is also disallowed, resulting in an ungrammatical sentence:
(9) Dveře se otevřely (*Petrem) (Cz)
door AC opened (by Peter)
'(The) door opened (*by Peter)'
Krivochen discusses these examples and rejects the presence of a Voice projection altogether, for reasons of economy of representation. This is in fact due to the fact that Krivochen does not differentiate between controlled Cause by an animate Agent (Fillmore's semantic case "agentive," cf. also Kosta 1992) and Cause in the narrow sense (which would correspond to the inanimate cause, including also physical and other sources of cause). The empirical facts, however, show the importance of such a differentiation and provide good arguments for it, because it is mirrored not only on the level of Phonology and Morphology (be it epiphenomenal or not) but in Syntax, mapped from Semantics! In other words, the ordering and mapping of Roots into the RS and Syntax are not just epiphenomenal but follow and reflect deep-rooted semantic differences and features of the Lexicon projected into the syntactic structure.
For instance, Schäfer (2008) and Alexiadou et al. (2006) justify the need for VOICE projections in the light of empirical data and contrasts like the following (taken from Kosta 2011: 284).
(10) a. John / the explosion / Will's banging broke the window. (Engl.)
b. John / exploze / rána Willa rozbil(a) okno (Cz)
c. Okno bylo rozbito Johnem / explozí / ránou Willa
(11) a. *The window broke by John / by the explosion / by Will's banging (Engl.)
b. *Okno se rozbilo Johnem / explozí / ránou Willa (Cz)
Spanish (a Romance language) allows constructions of the type of (12), as in:
(12) La ventana se rompió por la explosion (Spanish)
Kosta (2010) notices this fact and makes the caveat that an agent cannot be introduced by means of a PP:
(13) *The window broke from Mary
However, in Krivochen's view the mistake here seems to be the choice of preposition, as (14) is grammatical:
(14) The window broke because of John
While I take it that passive examples such as (10) c. are grammatical, I repeat that all examples in (11) are ungrammatical and are predicted to be excluded: an animate Agent is excluded because in these examples there is only a Causer phrase, which projects a Causer with an inanimate adjunct that adjoins to the specifier of the CausP and not to a VoiceP. Krivochen (2014) himself admits that examples such as (12) (in Krivochen 2014, p. 106, ex. 22) in Romance languages are grammatical. But this is because "explosion" is by definition not an Agent but a causa. Thus, Krivochen's example (12) [22] is misunderstood, because it does not contradict our prediction that Anti-Causatives cannot be passivized, which can be seen in (14'), in which the by-Phrase is an instance of an agentive PP in Passives in English:
(14') *The window broke by John
It is not true that the preposition “from” in (13), repeated here as (13’) is the reason for the ungrammaticality: (13’) *The window broke from Mary
Actually, the same ungrammaticality would result if we used the preposition "by," typical of Agent by-Phrases in English Passives (cf. 14'). The reason why I used these examples was misinterpreted in Krivochen's statement (Krivochen 2014). The actual reason why I introduced these non-trivial examples was to show that by-Phrases (with an animate Agent argument) can only be introduced in English Passives but not in ACs (Anti-Causatives, which by definition cannot be passivized), and this was an additional argument for stating that only Passives (which are in most if not all examples derived from transitive Actives by NP-Movement) project a Voice Phrase, where the Agent takes the position of an adjunct due to the presence of a passive morpheme argument (cf. Kosta 1992, chapter 5, citing Baker, Johnson, and Roberts 1989). The reason why Krivochen's example (14) is grammatical is that the phrase "because of John" is not an Agent phrase but a metalinguistic or meta-textual comment on the reason why something happened. This is not the same thing and should not be mixed up with the grammatical passive vs. anti-causative distinction, which is restricted by the syntax-semantics interface. This prediction is also precisely borne out in my theory, because the verb "to break" belongs to the class of verbs that do not have an agent (VoiceP) but an unspecified or underspecified Cause Phrase. Alternatively, we can interpret Krivochen's example as an adjunct clause, in which "because of John" is in fact part of a sluicing (IP-ellipsis) construction (cf. Kosta 2004: 17). For the same reason, I have introduced analytic Passives in Slavic and demonstrated that they behave in a different manner than so-called impersonal se/si-Passives in Slavic and Romance. While only the latter disallow Agent by-Phrases (being ergative/unaccusative verbs or impersonal passives), the former can at least allow for an optional Instrumental phrase, which is the exact equivalent of the Agent by-Phrase in English. I repeat here the arguments in favor of our analysis from Kosta (2010: 256 passim). The discussion of the causative alternation – as discussed and developed in many influential articles and monographs (cf. Marantz 1984, BJR 1989, Kosta 1992, 2010, 2011, Levin and Rappaport Hovav 1995, Reinhart 2000, Schäfer 2008, Alexiadou 2006a, b, recently also in Schäfer, in print) – has repeatedly shown – and this is a proven fact – that Passives proper (in this sense I speak about the analytical Passives in Indo-European languages) and anti-causatives differ in the following respects: (A) modification and control, (B) verbal restrictions in the case of passivization.
(A) Modification and Control
Passives, but not anti-causatives, can be modified (i) by an Agent by-phrase and (ii) by Agent-oriented Adverbs (so-called subject adverbs), and (iii) allow control into embedded final clauses:
(i) Passive-Agents vs. *Anti-Causative Agents
(15) a. The boat was sunk by Bill. (Passive) Engl.
b. *The boat sank by Bill. (Anti-Causative)
(16) a. Das Boot wurde durch Bill versunken. (Passive) Germ.
b. *Das Boot sank durch Bill. (Anti-Causative)
(17) a. Loď byla potopena Bilem. (Passive) Cz.
b. *Loď se potopila Bilem. (Anti-Causative)
(18) a. Łódź została zatopiona przez Billa. (Passive) Pl.
b. *Łódź zatonęła przez Billa. (Anti-Causative)
(19) a. Лодка была потоплена Биллом. (Passive) Ru.
b. *Лодка затонула Биллом. (Anti-Causative)
(ii) Agent-oriented Adverbs
(20) a. The boat was sunk on purpose. (Passive) Engl.
b. *The boat sank on purpose. (Anti-Causative)
(21) a. Das Boot wurde absichtlich versenkt. Germ.
b. *Das Boot sank absichtlich.
(22) a. Loď byla potopena naschvál/záměrně. Cz.
b. *Loď se potopila naschvál/záměrně.
(23) a. Został celowo zatopiony jacht. Pl.
b. Łódź zatonęła celowo.
(24) a. Лодка была намеренно потоплена. Ru.
b. *Лодка затонула намеренно.
(iii) Control into embedded sentences
(25) a. [[IA_arb] The boat was sunk [PROarb] to collect the insurance] Engl.
b. *The boat sank to collect the insurance
(26) a. [[IA_arb] Das Boot wurde versenkt, [PROarb] um eine Versicherungsprämie zu kassieren] Germ.
b. *Das Boot sank, [[PROarb] um die Versicherungsprämie zu kassieren]
(27) a. Loď byla potopena, [aby [PROarb] se dostali peníze od pojišťovny] Cz.
b. *Loď se potopila, [aby [PROarb] se dostali peníze od pojišťovny]
(28) a. Łódź została zatopiona, [[PROarb] żeby wyłudzić premię ubezpeczniową] Pl.
b. *Łódź zatonęła, żeby [[PROarb] wyłudzić premię ubezpeczniową]
c. ŁódźAcc zatopionąAcc [[PROarb], żeby wyłudzić premię ubezpeczniową]
(29) a. Лодка была потоплена [[PROarb], чтобы получить страховую премию] Ru.
b. *Лодка затонула [[PROarb], чтобы собрать страховку]
(B) Syntactic Restrictions: Theoretically, every transitive verb can be passivized. However, a minor subclass of transitive verbs can also form anti-causatives, which, by definition, cannot be passivized. This will be demonstrated with the examples of four verb groups. Every extension of a verbal projection is only really projected if the head in the lexicon bears its categorial features, giving the following verbal roots:
(30) (i) √agentive verbs (murder, assassinate, cut) project a CausP and a VoiceP (+caus, +voice)
(ii) √internally caused (blossom, wilt, grow) project only a CausP (+caus)
(iii) √externally caused (destroy, kill, slay) project a VoiceP and a CausP (+voice, +caus)
(iv) √cause unspecified (break, open, melt) project only a VoiceP (+voice)
I believe that all verbs can be classified with respect to the features CAUS and VOICE.
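As a purely illustrative aside, the feature-based classification in (30) can be pictured as a small lookup table from root classes to the functional projections they license. The following sketch is not part of the theory's formal machinery; the class names, feature values, and function names are merely a convenient encoding of (30) as printed.

```python
# Illustrative sketch only: the four root classes of (30) as [±caus, ±voice]
# feature bundles, and the projections each class licenses.
ROOT_CLASSES = {
    "agentive":          {"caus": True,  "voice": True,  "examples": ["murder", "assassinate", "cut"]},
    "internally_caused": {"caus": True,  "voice": False, "examples": ["blossom", "wilt", "grow"]},
    "externally_caused": {"caus": True,  "voice": True,  "examples": ["destroy", "kill", "slay"]},
    "cause_unspecified": {"caus": False, "voice": True,  "examples": ["break", "open", "melt"]},
}

def projections(root_class):
    """Read off the verbal projections licensed by a root class, following (30)."""
    feats = ROOT_CLASSES[root_class]
    projs = []
    if feats["voice"]:
        projs.append("VoiceP")
    if feats["caus"]:
        projs.append("CausP")
    return projs

for cls in ROOT_CLASSES:
    print(cls, "->", projections(cls))
```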
(i) Group: to break, brechen, rozbít
(31) a. Bill broke the glass.
     b. Bill zerschlug das Glas.
     c. Bil rozbil sklo.
(32) a. The glass was broken by Bill.
     b. Das Glas wurde von Bill zerschlagen.
     c. Sklo bylo rozbito Billem.
(33) a. The glass broke.
     b. Das Glas zerschlug.
     c. Sklo se rozbilo.
(ii) Group: to cut, schneiden, krájet, stříhat, řezat
(34) a. The baker cut the bread.
     b. Der Bäcker schnitt das Brot ab.
     c. Pekař krájel chléb.
(35) a. The bread was cut by the baker.
     b. Das Brot wurde vom Bäcker abgeschnitten.
     c. Chléb byl krájen pekařem.
(36) a. *The bread cut.
     b. *Das Brot zerschnitt.
     c. *Chléb se ukrojil.
(iii) Group: to read, lesen, číst
(37) a. John read the book yesterday.
     b. Johann las das Buch gestern.
     c. Jan četl včera knihu.
(38) a. The book was read yesterday by John.
     b. Das Buch wurde von Johann gestern gelesen.
     c. Kniha byla čtena včera Janem.
(39) a. *The book read yesterday.
     b. *Das Buch las gestern.
     c. *Kniha se četla včera. (only the impersonal passive is possible)
Concerning the two differences, namely modification and control, it is generally agreed that the reason is the presence vs. absence of implicit external arguments in passives vs. anti-causatives. The fact that passives contain an implicit external argument can – according to the passive theory of Baker, Johnson and Roberts (1989) – be explained by the fact that Passives are derived by NP-Movement.
7.1.2 Affix Ordering in Morpho-Syntax and Semantic Classes
As we shall see below, the type of CC and the CAL depend on the semantic class of the verb, and we distinguish the four groups already mentioned (cf. 30). These semantic classes are subject to both morphological and syntactic restrictions. Let us first recall some basics regarding the morphology and word formation of causatives and anti-causatives. Since Perlmutter's (1978) formulation of the Unaccusative Hypothesis (UH), most theories have classified intransitive verbs as either unaccusatives or unergatives. The argument-structure terminology of the UH proceeds from the assumption that "unaccusative predicates select a single internal argument, while unergative predicates select a single external argument" (cf. Harves 2009: 415). In syntax, unaccusative predicates project their subjects VP-internally, namely in the position of the direct object (DO), while unergative predicates project their subjects VP-externally, similarly to transitive verbs with base-generated subjects. My own understanding of the syntactically distinct status of unaccusative predicates goes back to my joint work with Jens Frasek (Kosta and Frasek 2004), where we developed several syntactic tests which can be considered relatively reliable empirical diagnostics – not based on mere argumentation or unproven hypotheses – for differentiating the two classes of intransitive verbs, both globally and crosslinguistically. For instance, Perlmutter (1978) demonstrated that the impersonal passive in Dutch can only be derived from genuinely unergative, but not unaccusative, predicates, as shown by the contrast between (40) and (41).
(40) Dutch impersonal passives < unergative predicates
     a. Er wordt hier door de jonge lui veel gedanst.
        it is here from the young people much danced
        "It is danced much by the young people"
     b. Er wordt in deze kamer vaak geslapen.
        it is in this room often slept
        "It is slept often in this room"
     (Perlmutter 1978: 168)

(41) *Dutch impersonal passives < unaccusative predicates
     a. *Door de lijken wird al ontbonden.
        from the corpses is already decomposed
     b. *In dit ziekenhuis wordt (er) door de patienten dikwijls gestorven.
        in this hospital is (it) from the patients often died
     (Perlmutter 1978: 169)
As we have already seen, one of the syntactic characteristics that differentiates unergatives and unaccusatives is that unergatives have an external argument as a base-generated subject bearing the Theta-role Agent, while unaccusatives derive their subject via internal movement (or remerge) from the internal argument position, and its Theta-role is Theme or Patient. This is also the reason why the two structures must be derived in two different ways:

(42) a. Unaccusative verb: ___ [VP V NP]
     b. Unergative verb: NP [VP V]
(43) a. Человек падает. b. Muž padá. c. Der Mann fällt. ‘The man falls’
(44) a. Человек смеется. b. Muž se směje. c. Der Mann lacht. ‘The man laughs’
The same distinction can be seen in the following pairs:
(45) Der Schwimmer ertrank.
     'The swimmer drowned'
(46) Das Boot sank.
     'The boat sank'
In Slavic languages, this difference is often expressed by different affixes, as opposed to German and English, which usually express it with different lexical stems. In Slavic languages such as Russian and Czech, the "unergativity" and "unaccusativity" affixes at the same time express "telicity" and the achievement or result of a process or change of state, that is, perfective aspect. This is why we assume that the functional projection of Aspect must be above the lexical VP but below the light vP which projects the external argument (either as a base-generated external subject in the case of unergatives, or as a derived subject remerged from the position of the internal argument – the deep object – in the case of unaccusatives).

(47) a. Пловец утонул (unergatives)
     b. Plavec se utopil/utonul
     c. Der Schwimmer ertrank
     d. The swimmer drowned
(48) a. Лодка затонула (unaccusatives)
     b. Loď se potopila
     c. Das Boot sank
     d. The boat sank
Despite all superficial similarities in morphology, semantics, and syntax, my working hypothesis in this chapter is the following: as opposed to the examples of unergativity (47) and unaccusativity (48), in which causation is typically not projected – which is why I do not consider unergatives or unaccusatives to be synonymous with anti-causatives or with verbs taking part in the causative alternation (CAL) – with causative verbs proper, causation is expressed by a usually non-active subject. In a broader sense, we can also include in the definition of CAL verbs where the causation is initiated but not expressed in a direct way, cf.:

(49) a. Tы меня напугал (Causative)
     b. Ty jsi mě polekal.
     c. Du hast mich erschreckt.
     d. You scared me.
(50) a. Já jsem se tě polekal (Anti-Causative)
     b. Ich erschrak vor dir.
     c. I was startled by you.
The kind of "event" by which the subject ("causer") has initiated the effect on the object ("causee") is not assigned the Theta-role Agent, because the addressed person ty "you" is just the causer and not the agent of the psychological state or effect which has been caused in the object (causee). This caused psychological state or process is expressed by the reflexive form polekal jsem se and in German by an impersonal form of the predicate, ich erschrak / ich bin erschrocken. The same content that is expressed by the Czech clause ty jsi mě polekal, German du hast mich erschreckt, English you scared me, and Russian Tы меня напугал can be expressed by a construction in which the holder of the psychological state "being scared" caused by the Causer is merged to the subject position, giving the structure in (51).
The order of affixes corresponds to the order of functional projections, in accordance with the Mirror Principle (Baker 1988). The sentence ty jsi mě polekal is the causative equivalent, where the subject ty is merged in SpecTP and then remerged in SpecCP; jsi (AUX) is merged in T0 and then remerged in C0, for independent reasons.

(52) a. Já jsem se tě polekal (Anti-Causative)
     b. Ich erschrak vor dir
     c. I was startled by you
I take it that this position is not – as assumed in Kosta (2011: 289) – just a transformation of the transitive clause (with a light vP), but that it is in fact an Anti-Causative projection, equivalent neither to a VoiceP nor to a CausP (against Schäfer 2008, in print, and p.c.). Quite compelling evidence for this assumption comes from pairs traditionally regarded as taking part in CAL, in which the Theta-roles change, as in the following clauses taken from Pesetsky (1995: 56).
(53) a. Bill was very angry at the article in the [Times] [Target] b. The article in the [Times] angered/enraged Bill. [Causer]
The truth conditions of the two sentences are noticeably distinct. For (53) a. to be true, Bill must have evaluated the article and formed a bad opinion of some aspect of it on emotional or rational grounds. In other words, Bill must find the article objectionable at some point. In (53) b., the situation is different: Bill might be mad at the article in (53) b. as well – the meaning of (53) b. is compatible with that of (53) a. Nonetheless, (53) b. is appropriate even if Bill thinks the article is splendid. It can be true, for example, if Bill's favorite columnist has written, in Bill's opinion, a great article revealing examples of government corruption. Thus, the article does CAUSE Bill to be angry, but the article itself is not the real Target of his anger – perhaps someone described in it or something inside the article is – and Bill is not necessarily angry at the article itself. I believe that many examples of this sort make the distinction between causative and anti-causative verbs quite clear. Anti-causative verbs like the one in (54) a. are anti-causatives in which there is no evidence for an explicit causer, cf.:
(54) a. John worried about the [television set]. [Subject Matter]
     b. [The television set] worried John. [Causer]
In (54) a., whenever John experienced the worry described in the example, he was thinking in some way about the television set. Perhaps he was worried that it might catch fire, or that it was perched too precariously and might fall. Whatever the real nature of John's specific concern was, [the television set] in (54) a. is the Subject Matter of Emotion. In (54) b., however, the DP [the television set] bears the familiar role of Causer: there is a causal relationship between the set and some state of worry. Especially because the so-called Causative Alternation does not preserve truth conditions, it seems that we really need – also for semantic reasons – to assume two different argument/Theta-role hierarchies: CausP (for causatives), with an explicit Agent or Causer, and Anti-CausP (for anti-causatives), with a suppressed or implicit Agent or Causer. This difference is reflected not only in different syntactic projections but also in different Theta-roles and a different a-structure (cf. section 2).
7.1.3 Higher Subjects: Benefactive or Experiencer Dative Subjects in Impersonals vs. Anti-causatives
Anti-causatives typically resemble intransitive or impersonal constructions in which the surface subject is the Patient or Theme (promoted – similarly to unaccusatives – from the internal object argument position) and is assigned Nominative, whereas the other argument often appears in a semantic (or inherent) Case such as the Benefactive or Experiencer Dative, depending on the semantics of the predicate. In the following clause, the Dative is associated with a Benefactive Theta-role.

(55) Tento román se mi dobře čte.
     this novel.Nom Refl to-me.ClDat well reads
     "I like to read this novel"
This type of impersonal construction, which Geniušienė (1987: 289) classifies as a "modal-deagentive reflexive" and Kemmer (1990: 150) calls a "propensative" use, is used to imply that the reasons for an action or lack of action are not internal but external: the animate human Dative subject is not responsible for his or her ability or inability to perform the action, nor for its quality (Zolotova 1985: 90). This class of verbs is not a transform of its potential active transitive counterpart:
(55) ≠ (56) Čtu tento román dobře.
            pro read this novel well
We will not comment on this here, but will try to take up these and similar examples in section 2.
7.2 On Merge, Mirrors, and the Syntactic Derivation of Causatives and Anti-causatives
7.2.1 Recent Approaches to the Derivation of Grammatical Categories: Theta Hierarchy and Argument Structure Hierarchy
Leaving aside the classical literature on the derivation of phrases, in recent approaches (cf. Williams 2007, 2008) it is assumed that there are at least five grammatical processes which target heads, in that they attract them or attach things to them: Affix Hopping, Morphological Lowering, Verb or Head Raising, and, less obviously, Adverb Placement and
the basic Merge operation. Edwin Williams' recent proposal tries to unify all these Mirror effects under the label "COMBINE": "Mirror effects arise under COMBINE but are size-relative, because COMBINE is parameterized for the size of the targeted head, where the values are 'XP,' 'X,' 'stem,' and 'root'" (cf. Williams 2008: 1). Before we come to a detailed analysis of the derivation of grammatical categories, we want to show how the theta hierarchy and argument structure of causatives (CC) and anti-causatives (AC) vs. nominals, passives, and adjectival participles can be linked to each other. In the traditional generative framework of mainstream generative grammar (hereafter MGG), argument structure was equated with the number of arguments related by a predicate. With the development of the P&P framework, especially fruitful and influential in many contributions within Government and Binding (LGB, Chomsky 1981, Kosta 1992), the Theta Criterion and the Projection Principle were introduced as quite powerful means to capture the descriptive adequacy of the theory of grammar. This happened on a par with the further development of lexicalist theories such as Lexical Functional Grammar (Bresnan 1982). In all these approaches, one reasonable idea has been pursued, namely: how can the information in the lexicon provide an explanation of syntax, or, more modestly, how can a theory of argument structure explain properties such as adjectival and verbal passives, middles, light verb constructions (e.g. unaccusatives), verbal compounds, causatives, and nominals, among other topics (Levin and Rappaport 1986, 1988, Zubizarreta 1985, 1987, Grimshaw 1986, 1990, Hale and Keyser 1986, 1988, di Sciullo and Williams 1987, Grimshaw and Mester 1988, to mention just a few).

In this chapter, we will develop, step by step, a theory which tries to account for the differences between the different types of causatives shown in section 1 (ex. (1)–(6)). First, we show how a prominence theory of theta and argument hierarchy can contribute to a better understanding of what semantics can tell us about the syntactic ordering of arguments (2.1). Then, we underpin this theoretical approach by analyzing nominals and passives (2.2) and the derivation of adjectival passives (2.3). In section 2.4, we give some evidence showing that some unaccusatives participate in CAL and some do not. We reject the theory of causation of Pustejovsky and Busa (1995) and Pustejovsky (1996), who argue that a description of the behavior of unaccusatives in terms of fixed classes does not capture the relatedness between the constructions involved in the diathesis alternation or in the unaccusative/unergative alternation for the same predicate (cf. Pustejovsky 1996: 188). We demonstrate that causatives and unaccusatives are not transforms of each other. Moreover, some psych agentive or non-agentive causatives do not participate in CAL. We shall also reject Pustejovsky's account of the description and explanation of Causation and Unaccusativity (Pustejovsky 1996: 188 passim). A short summary (3.) concludes this chapter.

In her very influential book, Jane Grimshaw (1990) develops an insightful theory of the representation of argument structure (a-structure). The term refers "to the lexical representation of grammatical information about a predicate" (Grimshaw 1990: 1). We want to take this theory as a possible basis of our own theoretical approach. As opposed to earlier approaches (Marantz 1984 or Levin and Rappaport 1986, 1988, 1995), Grimshaw's prominence theory of a-structure contrasts in a number of respects with the view that a-structures are sets. The fundamental assumption is that the a-structure of a predicate has its own internal structure, which affects the grammatical behavior of the predicate in many ways. Also, "(the) organization of the a-structure for a predicate is taken to be a reflection of its lexical semantics…" (Grimshaw, op.cit.: 3). As a consequence of such a theory, a-structure cannot be freely altered by rules, since "an argument has whatever a-structure properties it has by virtue of its role in the lexical meaning of the predicate and not by stipulation" (op.cit.). As a consequence, this prominence theory of a-structure is superior to set-based alternatives because it can explain both syntactic variability and the acquisition of language (along the lines of Landau and Gleitman 1985 or Pinker 1989). Let us recall the main points which explain the structure of the abovementioned classes of verbs under (30), repeated in (56).
(56) (i) √ agentive (murder, assassinate, cut)
     (ii) √ internally caused (blossom, wilt, grow)
     (iii) √ externally caused (destroy, kill, slay)
     (iv) √ cause unspecified (break, open, melt)
Early work on thematic relations suggested the existence of a thematic hierarchy (Jackendoff 1982: 43). My proposal is – following Grimshaw (1990: 7) – that this hierarchy is properly understood as the organizing principle of a-structures because a-structure is constructed in accordance with the thematic hierarchy, so that the structural organization of the argument array is determined by universal principles based on semantic properties of the arguments.
I will assume a version of the thematic hierarchy in which the Agent is always the most salient and thus the highest argument. Next ranked is the Experiencer, then Goal/Source/Location, and finally the Theme, a scheme represented under (57).
(57) (Agent (Experiencer (Goal/Source/Location (Theme)))) (Grimshaw 1990: 8)
For an agentive verb like murder, the a-structure prominence relations are those given in (58).

(58) murder (x      (y))
             Agent   Theme
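To make the organizing role of (57) concrete, the following is a minimal, purely illustrative sketch of how an a-structure ordering could be read off the thematic hierarchy; the hierarchy and role labels come from (57), while the function names and data format are hypothetical.

```python
# Illustrative sketch only: ordering a predicate's arguments by prominence
# according to the thematic hierarchy in (57).
THEMATIC_HIERARCHY = ["Agent", "Experiencer", "Goal/Source/Location", "Theme"]

def a_structure(predicate, roles):
    """Return the predicate with its arguments ranked by thematic prominence."""
    ranked = sorted(roles, key=THEMATIC_HIERARCHY.index)
    return (predicate, ranked)

print(a_structure("murder", ["Theme", "Agent"]))
# ('murder', ['Agent', 'Theme'])  -- the Agent is the outermost argument, cf. (58)
```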
The a-structure in (58) is not surprising, because in agentive clauses the most syntactically prominent argument in Nominative-Accusative languages is the subject. However, this is not always the case. Suppose we apply these principles to constructions of the type (55), introduced in section 1 and repeated here as (59).

(59) [= (55)] Tento román se mi dobře čte.
              this novel.Nom Refl to-me.ClDat well reads
              "I like to read this novel"
Despite the fact that the transitive verb číst "to read" usually takes an Agent as its external argument, here the subject is a Theme in the Nominative, and the next most prominent argument in the hierarchy, an Experiencer, takes the lowest argument position, so we get (60).

(60) (x      (y    (z)))
      Theme   Exp   ?
In addition, the sentence (55/59) would not be grammatical without the adverb dobře "well."

(61) *Tento román se mi ____ čte
A syntactic theory which postulates that all adverbs are just syntactic adjuncts, and thus completely outside the domain in which a-structure regulates occurrence, can easily explain facts like passives but not sentences with modifying adverbs. Moreover, the a-adjunct analysis of by-phrases correctly predicts why the syntactic tests for passives apply, but not those for anti-causatives (cf. section 1), cf.:
(62) a. The window was broken by the wind/boys/balls.
     b. *Windows break easily by the wind/boys/balls.
     c. *The window broke by the wind/boys/balls.
     (ex. from Grimshaw 1990: 143)
If by-phrases are a-adjuncts, the ill-formedness of (62) b./c. follows, since the a-adjuncts are unlicensed: neither the middle form nor the inchoative form of the verb break has a suppressed argument to license the by-phrase. By contrast, since passives are derived by the suppression of the external argument (B/J/R 1989, Grimshaw, op.cit.: 143), which is still syntactically active, the by-phrase (an a-adjunct) is licensed. In fact, if by-phrases were real adjuncts, it would not be clear why they cannot occur even with active verbs: in standard MGG approaches, no theta-criterion violation would result, by definition, since the theta-criterion regulates arguments, not adjuncts. For purposes of transparency and exposition, I shall adopt the notions "external argument" and "a-structure prominence" from Grimshaw (1990: 33 passim). The notion of maximal prominence makes the notion "external argument" of MGG approaches less clear: if we, following Williams (1981), understand an external argument as the argument that is realized outside the maximal projection of the predicate:
(63) arrest (x, y) or (Agent, Theme),
we cannot classify the causative verbs as we should. For the purposes of the present classification of causative and anti-causative verbs, the question of whether it is an external or an internal argument that is assigned the Theta-role Causer is far from clear. Let us therefore add some arguments in favor of a theory in which Theta-roles are linked to the a-structure in a more explicit way.
7.2.2 Nominals and Passives
A generally and widely accepted distinction has been made between arguments and adjuncts. Arguments can be selected and subcategorized, in the sense that they are under the control of individual predicates; they must be licensed, and they can only occur after they have been theta-marked by a predicate as a function of the predicate's argument structure (cf. Grimshaw 1990: 108). Adjuncts, quite on the contrary, are never theta-marked and are neither under the control of a verb nor licensed by a relationship to the predicate's a-structure. Their licensing conditions seem to be fed by extensional relations of situations in possible worlds, as I have already tried to speculate in footnote 4. Adjuncts are not subcategorized by the intensional structure or a-structure of the predicate; hence their form is free, and they are never required by a-structure.

The general question of the relationship between nouns and verbs has occupied a central place in theoretical investigation ever since Chomsky's important study (Chomsky 1970). We have known since then that, similarly to verbs, nouns can and do have obligatory arguments. This important property of nouns has been obscured by the fact that many nouns are ambiguous between an interpretation in which they do take arguments obligatorily and other interpretations in which they do not (cf. Grimshaw 1990: 45 passim for English and Babby 2002). In causative constructions of the transitive (psych-verb) frighten type, nominalization leads to an ambiguity – a change in the event structure – and it enables a by-phrase. Usually, a verb which expresses an event has an additional process meaning, and it takes a by-phrase as in passives:
(64) a. Tы меня напугал (Causative)
     b. Ty jsi mě polekal.
     c. Du hast mich erschreckt.
     d. You scared me.
     e. Przestraszyłeś mnie. ≠ Zostałem przestraszony przez ciebie.
(65) a. Moe ispuganie toboj
     b. Mé vyděšení/polekání tebou
     c. Das Erschrecken von mir durch dich
     d. The scaring of me by you
     e. Straszenie mnie przez ciebie
Contrary to this, with an Anti-Causative construction of the fear-class of verbs, nominalization is excluded with or without a by-phrase:
(66) a. Já jsem se tě polekal (Anti-Causative)
     b. Ich erschrak vor dir.
     c. I was startled by you.
(67) a. *Mé vyděšení/polekání se (tebou/tebe)
     b. *Das Erschrecken von mir durch dich
     c. *The startling of me by you
     d. *Moje przestraszenie przez ciebie
Derived nominals thus seem to confirm the hypothesis that anti-causatives do not license a by-phrase because they do not have an external argument.
Nevertheless, it seems to be the case that causatives proper share all other properties with other transitive verbs. In the prominence theory of representation developed by Grimshaw (1990), nominalization resembles passivization in that it is the external argument of the base verb which is suppressed in both cases. Thus, NP-movement (passives) and causatives share the two properties stipulated in Chomsky (1981: 103), repeated here for convenience and expository reasons:
(68) faire [manger la pomme par Pierre]
     ("to have the apple be eaten by Pierre")
     (Chomsky 1981: 103, ex. (6))
As Kayne (1975) observes, the embedded phrase has the properties of passive constructions. As in passives, no subject appears at D- or S-structure. Example (68) differs from similar constructions (i.e. passives, PK) in that it lacks passive morphology and involves no movement. Like real passives (derived by movement), causatives of the type (6) a./c./e. allow for by-phrases even in nominalizations. It follows that anti-causatives, as well as causatives in which the causativization is not brought about by an external argument, can be neither nominalized nor passivized with by-phrases. This prediction is borne out, cf.:
(70)
(71)
a.
Dívce the girlDat
spadla fell
sklenice a glassNom
b.
Spadnutí sklenice
na zem
a.
U devushki at girl.Gen
upal fell
b.
Padenie stakana (*u devushki/ *devushkoj) na pol Szklanka wypadła dziewczynie na ziemię Wypadek szklanki (*przez dziewczynke/*dziewczynki) na ziem Alla bambina è caduto il bicchiere per terra. Il cadere a/per terra del bicchiere (*alla bambina, *da bambina) Le verre de la fille a tombé par la terre La chute du verre (*de la fille/*par la fille) sur la terre
(*dívky/ *dívkouInstr) na pol stakan. on floor.Acc glass.Nom
a. b.
(72)
a. b.
(73)
a. b.
na zem. on the floorAcc
Cz
Cz Ru
Pl Pl Pl
It It F F
As we can see, the nominalization of an anti-causative verb only leads to ungrammaticality if the possessive phrase or the by-phrase is expressed; as a noun, it can only denote a state or result. Consider possessive NPs in nominals like the scaring of me (by you) or the enemy's destruction of the city, and by-phrases of passives like I have been scared by you or The city was destroyed by the enemy. In both cases, the possessive NP and the by-phrase can be suppressed in nominalizations and passives, so they cannot be part of the a-structure. For example, a possessive can never be assigned the Theta-role of the subject of the corresponding verb, because this Theta-role cannot be assigned by the noun. The same reasoning holds for passive argument structures: the subject argument of the active verb is suppressed in the passive, and similarly in nominals. Cf.:

(74) a. The enemy destroyed the city
        destroy (x      (y))
                 Agent   Theme
     b. The enemy's destruction of the city
        destruction (R (x-⊘     (y)))
                        Agent    Theme
     c. The city was destroyed by the enemy.
        destroyed (x-⊘    (y))
                   Agent   Theme
     (Grimshaw 1990: 108)
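The suppression pattern in (74) can be pictured, purely for illustration, as an operation over a-structures; the sketch below uses the suffix "-0" in place of the ⊘ of (74), and all names are hypothetical rather than part of the theory.

```python
# Illustrative sketch only: suppression of the external argument in passives and
# nominalizations, as in (74). A by-phrase (an a-adjunct) is licensed only if the
# a-structure contains a suppressed argument.
def suppress_external(a_structure):
    """Mark the most prominent (external) argument as suppressed ('-0' stands for the ⊘ of (74))."""
    external, *internals = a_structure
    return [external + "-0"] + internals

def licenses_by_phrase(a_structure):
    return any(role.endswith("-0") for role in a_structure)

active  = ["Agent", "Theme"]           # destroy (x (y))
passive = suppress_external(active)    # destroyed (x-⊘ (y)); same for the nominal

print(passive, licenses_by_phrase(passive))  # ['Agent-0', 'Theme'] True
print(active, licenses_by_phrase(active))    # ['Agent', 'Theme'] False
# Anti-causatives and middles have no suppressed external argument, so the
# by-phrases in (62) b./c. remain unlicensed.
```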
For the destroy/break class, the argument in the subject position of the active sentence is more prominent than the object along both dimensions, and the a-structure and the thematic hierarchy are parallel.

(75) a. The enemy destroyed the city
     b. destroy (x       (y))
                 Agent    Patient
                 Cause
For the frighten verbs, however, the first position in the thematic organization does not correspond to the first position in the causal dimension, since they are not occupied by the same semantic argument. Instead, the second element in the thematic dimension is associated with the first element in the causal dimension, and the first element in the thematic dimension corresponds to the second position in the causal dimension:

(76) a. The building frightened the tourists.
     b. frighten (x     (y))
                  Exp    Theme
                         Cause
In this respect, the source of the asymmetry between a-structure and thematic structure in non-agentive causatives of the frighten class seems to lie in a conflict between two hierarchies: the subject is most prominent in the causal hierarchy but not in the thematic hierarchy. This observation is confirmed and strengthened by examples of compounding which seem to violate the prominence theory. The ungrammaticality of examples like (77) a. and (77) b. is accounted for precisely by the asymmetry between the a-structure and the thematic structure of these compounds, which prevents the compounds from assigning the proper Theta-roles to their arguments:
(77) a. *A child-frightening storm
     b. *A storm-frightening child
(77) a. is ungrammatical because it requires the Theme to be theta-marked in a wider domain than the Experiencer, and (77) b. is impossible because it requires the non-Cause to be theta-marked in a wider domain than the Cause. Since there is no way to theta-mark without violating one or the other of the two sets or levels of prominence relations, there is no well-formed compound corresponding to non-agentive frighten.

The correlation and symmetry of the two levels of prominence relations vs. their misalignment can be demonstrated with two classes of verbs: the frighten class vs. agentive predicates like the break causative. Only the latter shows a nearly perfect alignment of the two dimensions, a-structure and theta hierarchy, because they coincide. The same coincidence can be observed in the class of agentive verbs like arrest or unergative verbs like work. But the fear class, too, seems to behave like the agentive predicates (cf. ex. 49 vs. 50). As Grimshaw assumes, the notion of cause cannot be the only reason for these differences, since verbs like arrest, work, and fear are not causatives (Grimshaw, op.cit.: 26). Grimshaw's proposal is that there is another dimension which determines the event structure of the predicates. I assume that each verb has an event structure associated with it which, when combined with elements in the clause, provides an event structure for the entire sentence. The event structure represents the aspectual analysis of the clause and determines such things as which adjuncts are admissible, what the scope of elements like almost, often, always or just will be, and even which modals and quantifiers can be added (cf. Vendler 1967, Dowty 1979, Bach 1986, Bach and Harnish 1979/1991, Pustejovsky 1988, Tenny 1988, 1989a, c, Grimshaw 1990). The event structure breaks events down into aspectual subparts. For example, a Vendler-Dowty "accomplishment" denotes a complex event (which consists of an activity) and a resulting state (cf. Pustejovsky 1988 for discussion).

(78) [event [activity] [state]]
Grimshaw (1990: 26) analyzes an accomplishment like x constructs y as an activity in which x engages in construction plus a resulting state in which existence is predicated of y. For x breaks y, the activity is one in which x engages in breaking, and the resulting state is one in which y is broken. A cause argument has a standard representation in such an analysis: it will always be associated with the first sub-event, which is causally related to the second sub-event. Thus we have a generalization concerning causativity: the first sub-event contains the Cause as its most prominent argument, be it explicit or implicit, and the second sub-event determines the argument corresponding to the element whose state is changed. Compare this now with the four classes which take part in one way or another in causativity, repeated in (79); a small illustrative sketch of the decomposition in (78) follows the list.

(79) (i) √ agentive (murder, assassinate, cut)
     (ii) √ internally caused (blossom, wilt, grow)
     (iii) √ externally caused (destroy, kill, slay)
     (iv) √ cause unspecified (break, open, melt)
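The following is a minimal, hedged sketch of how the accomplishment decomposition in (78) could be encoded as a data structure, with the Cause tied to the first sub-event; the class and field names are illustrative assumptions, not part of the analysis itself.

```python
# Illustrative sketch only: an accomplishment as in (78) is a complex event
# consisting of an activity sub-event (hosting the Cause argument, explicit or
# implicit) causally linked to a resulting state sub-event.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubEvent:
    kind: str                   # "activity" or "state"
    participant: Optional[str]  # argument associated with this sub-event

@dataclass
class Accomplishment:
    activity: SubEvent  # first sub-event: contains the Cause
    state: SubEvent     # second sub-event: the element whose state is changed

# "x breaks y": x engages in breaking, and y ends up broken.
break_event = Accomplishment(
    activity=SubEvent("activity", participant="x (Cause)"),
    state=SubEvent("state", participant="y (changed element)"),
)
print(break_event)
```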
For agentive predicates of the first group, the Agent will be the aspectually most prominent argument for all aspectual classes of verbs. Since it is also thematically most prominent, the subject of an agentive verb like cut is most prominent according to both the aspectual and the thematic hierarchy:

(80) Transitive agentive: (Agent  (Theme))
                            1       2
In the case of ditransitive verbs of the type to give, I have no particular evidence on the aspectual status of the Goal and the Theme; thus I will indicate their ranking with an x.

(81) Ditransitives: (Agent  (Goal  (Theme)))
                      1       x      x
If ditransitives show up in a causative context, it has been demonstrated that the causative alternation shows parallels between three-place (triadic) verbs and transitive CCs (den Dikken 1995: 239; see also Kosta 2011: 276). In the theory designed here, nominalization resembles passivization in that the external argument of the base verb is suppressed in both cases. The failure of the passives of the frighten class follows from the fact that verbs in this class have no external argument. Belletti and Rizzi (1988) demonstrate that Italian psych verbs of the preoccupare "to worry" class have no corresponding verbal passive, although they do allow the adjectival passive. The present theory predicts exactly the same for the frighten class in English, Czech, German, and Russian, in which non-agentive frighten has the a-structure of (82), and verbal passivization should be excluded. This prediction is borne out:

(82)  a. Já jsem se tě polekal
      b. Ich erschrak vor dir
      c. I was startled by you
(82') a. *Byl jsem tě polekán tebou
      b. *Ich wurde vor dir erschrocken durch dich
      c. ?I have been startled by you
7.2.3 The Derivation of Adjectival Passives
The verbal passive can suppress only an external argument, but to derive (82'), an internal argument (the Theme) would have to be suppressed instead. Thus frightened cannot be a verbal passive. The prediction is then that all passive forms like frightened here must be adjectival passives, and the evidence supports this prediction: they pass all the tests for adjectivehood with flying colors. They allow negative un-prefixation, they occur as complements to verbs that select APs (e.g. remain, etc.), and they are relatively unfussy about prepositions: frightened can occur with about, by, or at. Unlike Pesetsky (1987), I do not assume that a by-phrase indicates a verbal passive, since it can co-occur with unambiguously adjectival properties:

(83) a. Fred remains completely unperturbed by his student's behavior.
     b. Fred zůstává naprosto nedotknutý chováním svého studenta.
     c. Фред остается полностью невозмутимым поведением своего ученика.
     d. Fred pozostaje całkowicie niewzruszony zachowaniem swojego ucznia.
The second piece of evidence differentiating verbal passives from adjectival passives is that only the former allow a progressive form, which is incompatible with stative predicates; English adjectives derived from verbs are by and large states (similarly to nominalizations). Thus, a verb like depress can be used in the progressive only in the active and psychological-causative form; in the passive (which is an adjectival state), the progressive is ungrammatical. Czech has no progressive form, so this test is not applicable there.

(84) a. The situation was depressing Mary.
     b. *Mary was being depressed by the situation.
     c. *Mary was being depressed about the situation.
     d. Situace deprimovala Marii.
     e. Marie je deprimovaná situací. (adjectival passive = long-form adjective)
     f. Marie byla deprimována situací. (verbal passive, PPP)
With an agentive psychological verb like terrify, the paradigm changes in the expected way, and the progressive form is fully grammatical with by:

(85) a. The government is terrifying people.
     b. People are being terrified by the government.
     c. Vláda děsí lidi.
     d. Lidé jsou vládou zděšení. (adjectival passive = long-form adjective)
     e. Lidé jsou zděšeni vládou.
7.2.4 Three Levels of Representation: a-Structure, Thematic Structure, and Event Semantics
As we have already seen, a classification of causatives fails if one tries to postulate a direct link between causativity and unaccusativity. The class of unaccusatives is neither identical with, nor a subset of, the much bigger class of causatives. The work of Van Valin (1990), Zaenen (1993), Pustejovsky (1996: 188), and others has illustrated the problem of failing to link "unaccusativity" to lexical semantics. Pustejovsky and Busa (1995) argue that a description of the behavior of unaccusatives in terms of fixed classes does not capture the relatedness between the constructions involved in the diathesis alternation or in the unaccusative/unergative alternation for the same predicate (cf. Pustejovsky 1996: 188). Pustejovsky (1996, section 9.2, 188 passim) cites Chierchia (1989), who suggests that the lexical representation of unaccusatives is in fact an underlying causative. What we want to show is that causatives and unaccusatives are not transforms of each other, and that some psych agentive or non-agentive causatives do not participate in CAL. Moreover, we can also reject Pustejovsky's account of the description and explanation of Causation and Unaccusativity (Pustejovsky 1996: 188 passim). Pustejovsky combines Causation and Unaccusativity with the concept of underspecified event structure and argues that those unaccusatives which also have causative counterparts are logically polysemous because of the headless nature of the event structure representation of the predicate. For the Italian verb affondare "to sink" he postulates the unheaded event tree structure given in (86).
(86) [unheaded event tree structure for affondare; the diagram is not reproduced here]

1) Level of syntactic realization of the root √open:
   b) √open → Peter opened the door (Transitive)
   c) √open dominated by a DP → the opening
   d) √open dominated by a VoiceP → the door has been opened (by Peter)
   e) √open dominated by a CausP → Peter made Paul open the door
2) Level of a-structure:
   a) _______VP (y)
   b) (x)___vP__(y)
   c) _______DP (V)
   d) (y)__VoiceP_(by x)
   e) (x)__CausP_(z)_VP (y)
3) Level of thematic prominence:
   a) (Patient (Theme (Instr (…))))
   b) (Agent (Experiencer (Source/Goal (Theme))))
   c) (Theme (Effect (…)))
   d) (Theme (Agent))
   e) (Causer (Causee (Theme)))
4) Level of event structure: e …

Ancient Greek: ἥ καλή γυνή / ἥ γυνή ἥ καλή (2 tokens of the D type, partitive construction)
Does this mean that we have LF variation, as the semantic interface filters out some options in a language L but not in some other language L1? We assume not, as the simplest scenario. We see that the n! option is actually stipulative, since we are determining a priori that there is a 1-to-1 relation between types and tokens, and that is not so. How do we manage the derivation? With the Conservation Principle. There are, then, three options for combination:
(40) a. n!: permutation without repetition
     b. nⁿ: permutation with repetition
     c. ∞
We see that option (c) is the only one that does not stipulate any relation between types and tokens, but it must be limited in some principled way. That principled way is the Conservation Principle. Taking recursion seriously, in light of what we have just said, Σ (our generative system) could generate:
(41) {X, {X, {X, {X, …{Y, {Y, Z}}…}}}}
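As a toy illustration of the counting options in (40), the following sketch computes the n! and nⁿ figures for a small, hypothetical inventory of types; it is not part of the theory, and the mini-inventory is invented purely for the arithmetic.

```python
# Toy illustration of (40): n! orderings without repetition, n**n sequences of
# length n with repetition, and unbounded output once recursion as in (41) is
# admitted.
from itertools import permutations, product

types = ["D", "T", "P"]   # hypothetical mini-inventory of types
n = len(types)

without_repetition = list(permutations(types))    # n! = 6 orderings
with_repetition = list(product(types, repeat=n))  # n**n = 27 sequences

print(len(without_repetition), len(with_repetition))  # 6 27
# Option (c): with recursive embedding as in (41), the set of outputs is
# unbounded, so it must be constrained by something like the Conservation
# Principle rather than by a stipulated type-token ratio.
```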
However, assuming our architecture of the "grammar," the number of types is determined by CP, based on the RSS and through the A-List. Besides, the possibilities are narrowed down by biology: neurological connections that are recurrently used are reinforced; in our case, of the n possible derivations from an LA, only a subset S will actually occur (optimally, S = 1, but this is not usually the case), thus reinforcing certain neurological connections through statistical learning (Thornton and Tesan 2006), without resorting to ad hoc filters. Eliminating empty categories as such, and replacing them with covert tokens whose materialization depends purely on interface conditions, simplifies acquisition, since there are fewer elements to acquire; it also simplifies the theory, since we can dispense with the ECP and other principles ruling the distribution of traces.

We will present corpus data from English, German, and Russian which clearly demonstrate that children are able to assign the correct values already in the first phase of word combinations. We can only indicate here examples from the following corpora: first we present some L1 data from Russian (3.1.1). We will also compare the structures of early child clauses with the structures of the respective TL and discuss two hypotheses: the small clause hypothesis and the full competence hypothesis of clause structure (3.1.2). We will then analyze the problem of zero subjects (and zero objects) in the TL compared to child language, using the example of Russian, Polish, and Croatian, based on the largest available corpus in CHILDES (MacWhinney, 2nd edition).
8.4.2 Word Order in Children's Language
Languages differ with respect to the position of the lexical and functional heads and the complements of a phrase. Thus, in previous stages of the theory (GB, MP), word order differences were explained by the so-called head-direction parameter: English, French, and most Slavic languages (with the exception of Upper Sorbian) were analyzed such that the head of the VP has its complement to its right (English, French, and Slavic languages = head-initial), while Turkish, Japanese, Danish, and German have their complements on the left side of the head, that is, they were called head-final. This parametric variation was expressed as a "head-direction parameter," as in (42).

(42) Head-direction parameter: Is the language head-initial or head-final? (Rank the values: head-initial or head-final)
However, positing such a "parameter" is risky not only theoretically (as it violates Occam's razor, a perfectly reasonable meta-theoretical desideratum), but also empirically: English is head-initial in the verbal domain (i.e., objects follow verbs), but head-final in the nominal domain (i.e., modifiers precede nouns), so a single uniform "parameter" does not square with the empirical data. Reference to this (and, optimally, any other) parameter can be avoided if its observable consequences can be derived from independent interface-triggered phenomena in each domain. For example, theme-hood is a great excuse for putting something at the beginning of the clause (cf. Krivochen 2011b). In this case, we would be dealing with a C-I interface condition which has effects on word order, as meaning is read off the transferred structure (in a very Hale & Keyser fashion), which in turn determines word order as the optimal scenario (Kayne 1994; especially Uriagereka 1999), taking into account the availability in a language L of Vocabulary Items to be inserted in syntactic terminal nodes and phonological patterns at a local (i.e., word) level and a global (i.e., phrasal) level.

At the word level, our thesis would be that the nodes are spelled out mirroring the relation they maintain with the root, from the closest to the most detached (cf. Krivochen 2011d). From this claim we derive that procedural nodes always have a closer relation with the root than peripheral nodes like Agreement, as they generate categorial interpretations in the semantic interface. However, abandoning Kayne's LCA in favor of this alternative implies that both interfaces are somehow sensitive to similar factors, which seems quite hard to prove (we will come back to this in a moment). The obligatory question at this point is: how are derivations built so that the adequate node is in the adequate place to have scope over the root? This is where the Conservation Principle comes into play, which we repeat here for the reader's comfort:

(43) Conservation Principle: Dimensions cannot be eliminated, but they must be instantiated in such a way that they can be read by the relevant level so that the information they convey is preserved.
If we depart from a certain (unaccusative, unergative, or (di)transitive) conceptual structure (see Mateu 2000a, b), then there is no other way to build the (narrow) syntactic structure, or the CP would be violated. If we take seriously the position that language is part of the natural world, then the same principles that apply to other physical systems should optimally apply to language. In acquisition, the pattern to be learned would be precisely the kind of relation a node maintains with the root and with other nodes, which can be parsed at first but then works by analogy, as the linguistic system fossilizes with time. For example, a word-level derivation for "dishwasher" would be as follows:

(44) {D} [D] [cause] … (derivation tree not fully reproduced)
Stage III: Early multi-word utterances hardly deviate from the syntax of the target language (TL) in terms of the position of heads and complements; that is, in head-initial languages (English, French, and Russian) the complements of the lexical or functional head are expected to follow the head V of the VP, while in head-final languages (German, Japanese, and Turkish) the reverse word order is expected. Thus, the following data from an early language stage of a Russian girl (CHI; 2;7.29) show clear preferences for head-initial structures75:
(45) *MOT: a obaz'jane, rasskazhi
           of-monkey, tell (me)
     *CHI: [IPSpec obez'jany, obez'jany [VP zhivut [PP v zooparke]]]
           monkeys, monkeys live in zoo
     *MOT: v zooparke, da
           in zoo, yes?
     *CHI: [e] prygali
           (they) jumped
     *MOT: oni [VP [PP na maSHIny]FOK [VP prygali]]
           they on cars jumped
     *MOT: a chto eshcho delali
           and what (they) done, too?
75 For technical reasons, we have retained the original transcription from the CHILDES corpus (MacWhinney), rather than using the traditional ISO transliteration. Below, the notation [e] stands for an empty category whose functional and structural position (sentence subject or object phrase) is to be determined in greater detail.
     *CHI: [IPSpec [e] [VP sideli i [VP [DP peCHEN'je]FOK [VP [e] eli]]]]
           (they) sat and cake (they) ate
     *EVA: kto im dal pechen'je
           who them gave cake
     *CHI: [IP mama [VP dala pechen'je]]
           mother gave cake
     *MOT: a drugije morkovku eli
           and others carrot ate
     *CHI: [CP kto-to [IP I° dal [vP obez'jane [VP t banan]]]]
           someone gave monkey banana
     *MOT: banan?
           banana?
     *CHI: a pechen'je Tanino
           and cake (of) Tanya
[@Begin @Languages: ru @Participants: CHI Tanja Target_Child, EVA Eva Investigator, MOT Mother @ID: ru|tanya|CHI|2;7.29||||Target_Child|| @ID: ru|tanya|EVA|||||Investigator|| @ID: ru|tanya|MOT|||||Mother|| @Comment: Last edited 9-JUN-1999 by William Snyder.] (phrase structure in brackets = P.K.)
The fact that the child selects the unmarked SVO word order does not exclude that other word-order variants are also possible in Russian, as long as the change in word order is a "drastic interface effect," an operation triggered by the need to achieve optimal Relevance at either (or both) of the interfaces. Such changes are either lexically driven and follow the attested stages of lexical learning, from less to more complex syntactic structures and properties (unergatives > transitives > causatives > passives), or they are due to factors such as focus-background, topic-comment, or theme-rheme information structure; in other words, these changes are usually driven by the communicative perspective (information structure) operating on the syntax. As is apparent from the following examples, the child at the age of just 2;7 already begins to acquire the rules of information structure, see (46).

(46) *MOT: ty rasskazhi Eve, ty byla v zooparke
     *CHI: v zooparke
     *EVA: v zooparke
     *CHI: da, v zooparke shtrausy
     *MOT: shtrausy
     *CHI: [e] prixodili v mashinu
     *MOT: v mashinu [e] zagljadyvali, v okoshko
     *CHI: v mashinu prjamo
     *MOT: a zhirafy kakije tam, bol'shyje
     *CHI: zhirafy, xx, myshki zakryty
     *MOT: a myshek zakryli, chtoby oni ne bezhali.
     *MOT: a dva medvezhonka igralis'
     *CHI: a [CP bejbikaj [vP tj [NegP ne zakrylii [VP ti NP tj]]]]
     *MOT: da, malen'kix ne zakryli, tol'ko bol'shix
     *CHI: tok'ko tigra vot
     *MOT: a tigra zakryli, da
     *CHI: [CP vsexFoc [IP e [VP zakryli]]]
     *MOT: da, tigry bol'shije
     *CHI: a papa snimal
     *MOT: a papa na kameru snimal
     *MOT: a eshche [CP kogo [IP ty [VP videla]]]
     *MOT: olenja, da
     *CHI: olenja i kenguru
     *CHI: zebriki
     *MOT: zebry byli, da
     *CHI: sloniki, vot tam plavali
[@UTF8 @Begin @Languages: ru @Participants: CHI Tanja Target_Child, EVA Eva Investigator, MOT Mother @ID: ru|tanya|CHI|2;7.29||||Target_Child|| @ID: ru|tanya|EVA|||||Investigator|| @ID: ru|tanya|MOT|||||Mother|| @Comment: Last edited 9-JUN-1999 by William Snyder.]
In the elicited dialog (46), there are two instances of A-bar movement: first, cyclic Focus movement to the CP domain of the left periphery (Rizzi 1997), and second, wh-movement to the operator position Spec-CP. This would mean that at the age of 2;7.29 children already "learn" the rule of internal Merge. It is not obvious, though, how to derive and license the empty category noted [e].
8.5 Adult Grammar Derivations76
76 What follows has been taken and adapted from Krivochen (2011b, c).
We want to use here the classical notion Label (cf. Adger 2003: 69–70) in a more specific sense, where [ ] indicates semantic primitives, whereas { } indicates labels recognized at the C-I interface, where they are relevant to building an explicature, as labeling entails scope. This means that a Radically Bare Phrase Structure could dispense with the { } notation, but not with [ ]. The whole derivation linking semantic primitives to syntactic structures is as follows: roots enter the working area "uncategorized," in their (pre-categorial) ψ-state. The Conservation Principle on the one hand and interface conditions on the other constrain possible combinations, as the information conveyed by the RSS must be maintained, and that tells
us what the clausal skeleton could look like; and semantic interface conditions filter trivial merges (like {{a}} – see Adger 2011 – or the previously mentioned {α, Ø}) and other "ill-formednesses" ({D, P} and so on) post-syntactically (restrictivist theory). Categorization patterns will be analyzed later on. Syntax proceeds in a strictly bottom-up fashion, Merge applying to elements sharing ontological format (e.g., D and √) or structural format (e.g., in traditional terms, "non-terminals" and "Specifiers" merge this way). The primacy of monotonic Merge and the elimination of the categorial/sub-categorial distinction (which presupposes two levels of syntactic computation, that is, two derivational spaces) give us a unified syntactic model "all the way down" (Phoevos Panagiotidis, p.c.).

Notes:
(i) [ ] indicates semantic primitives, whereas { } indicates labels recognized at the C-I interface, where they are relevant to building an explicature, as labeling entails scope. This means that a Radically Bare Phrase Structure could dispense with the { } notation, but not with [ ].
(ii) The event and cause nodes are always phonologically empty (i.e., p-defective, see Hale and Keyser 2002). This means that there are no verbs as primitive categories.
(iii) √ are roots, pre-categorial linguistic instantiations of a-categorial (and severely underspecified) C-I generic concepts, following the Conservation Principle. The incorporation processes Mateu proposes are not likely to happen "in real (derivational) time," but historically, as they have to do with the insertion of a coined phonological piece at Spell-Out, for example, [manner] + [motion]. Thus, fly, for example, is taken as a simple root in both [Birds fly] (simple unergative construction) and [I have to fly home] (path-of-motion construction: "manner incorporation"). Compositionality does the rest of the work when building an explicature in C-I2.
(iv) "Categorial correlations": we will assume that D collapses the ψ-state to [entity] (without excluding [cause]) and T collapses it to [event]. Common sense may dictate that the primitive cause appears only in verbal (i.e., eventive) structures, but there is an aspect of the C-I1–syntax interface that we have mentioned elsewhere and that is essential here: this interface is not transparent (i.e., there is no exact correlation between a Relational Semantic Structure and its syntactic realization). Therefore, we can have alternations such as the following:
Relational Semantic Structures (Mateu 2000a, b) and their possible syntactic realizations:

Unaccusative RSS: e.g. [T arrive [r Mary [r AT] the house]] (Figure = Mary, Ground = the house)
Possible syntactic realizations: Mary arrived (at the house); Mary's arrival
[The tree diagrams, built from the nodes {time}/[time], {event}/[event], {P}/P, and {D}/[D], are not reproduced here.]

Unergative RSS: e.g. [R Mary [R CAUSE [T [T DO] shout]]] (Agent = Mary; the root is the non-relational element)
Possible syntactic realizations: Mary shouted; Mary's shout
[The tree diagrams, built from the nodes {time}/[time], {cause}/[cause], {event}/[event], {D}/[D], and √, are not reproduced here.]
How does each category emerge? Syntax can maintain a root in its ψ-state for as long as needed, since it is blind to the internal characteristics of the elements it manipulates; it is only sensitive to their format. Therefore, "uncategorized" roots can be manipulated by the syntax. Distributed Morphology's Categorization Assumption (see Embick and Noyer 2004, Panagiotidis 2009) is actually a semantic interface requirement, at best, which can be reformulated as follows: no root can reach the semantic interface without being within the area of influence of a suitable procedural head (D, T, P). [D…α…√] = N, where α is any number of procedural nodes that do not collapse categorial dimensions on the root, because they are not categorially specified enough (that is, they can appear in both N and V structures; see above for examples of how both [cause] and [event] are underspecified in this way). No immediate adjacency is needed; we just have to respect Minimality. D is specified enough with regard to content and distribution to determine interpretation, and the same goes for the other PPCC. The key is the rigidity of procedural meaning (see below).

[T…α…√] = V
[P…α…√] = A, Adv (Hale & Keyser, Mateu)
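The categorial correlations just listed can be pictured, purely as an illustrative sketch, as reading the category of a root off the closest categorially specified procedural head that dominates it, skipping underspecified nodes such as [cause] and [event]; the function and set names below are hypothetical conveniences, not part of the proposal.

```python
# Illustrative sketch only: categorization via the closest categorially
# specified procedural head, following [D...α...√] = N, [T...α...√] = V,
# [P...α...√] = A/Adv.
CATEGORIZERS = {"D": "N", "T": "V", "P": "A/Adv"}
UNDERSPECIFIED = {"cause", "event"}  # nodes that do not collapse the ψ-state

def categorize(dominating_heads):
    """dominating_heads: procedural heads above the root, closest first."""
    for head in dominating_heads:
        if head in CATEGORIZERS:
            return CATEGORIZERS[head]
        if head not in UNDERSPECIFIED:
            break  # an intervening specified head would violate Minimality
    return None    # root not interpretable at the semantic interface

print(categorize(["event", "cause", "T"]))  # 'V': T is the closest categorizer
print(categorize(["D"]))                    # 'N'
```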
A fully explicit derivation:
a) John poemed / sonneted Mary (locatum V)
That is, to our knowledge, a yet "uncoined" expression, but it could very well be generated, and it is perfectly interpretable at both interfaces. The derivational path, from the very beginning, would be the following:

Transitive RSS (C-I1) and its syntactic instantiation: [The tree diagrams are not reproduced here. Recoverable nodes of the RSS: R, John, Mary, [CAUSE], [GO], [WITH], [poem/sonnet], r. Recoverable nodes of the syntactic instantiation: {Mod} [+realis], {Time} [−pres], {Asp} [+perf], {cause} [cause], {event} [event], {D} [D], √John, √poem/√sonnet, √Mary.]
Lower-level explicature (C-I2):
Decoding: recover the underlying semantic structure (RSS), and interpret who the participants are and what role they play in the event (interpretation of a-structure and Theta-roles, Krivochen 2010c). The interpretation of "what [to poem/sonnet] means" occurs at this point, using the clues given by the syntactic structure.
Disambiguation: there are no ambiguous expressions in this example (but see Sperber and Wilson 2003 for an account of bank as an ambiguous expression).
Referent assignment: procedural features of D come into play (see Krivochen 2010a for extended discussion and data analysis in Spanish). Proper names are usually interpreted as definite, but with common names the interpretation depends on the local relation of {D} and features of Time, Aspect, and Modality.
Semantic enrichment: not determined by any element present in the syntax or LF (who John is, why he would want to "sonnet" Mary, etc.). Possibilities depend on the semantic properties of the root: √BOTTLE, for example, allows the location/locatum alternation, but the unergative option is hardly relevant (though not entirely clashing); therefore, it is not likely to be considered when building an explicature. "Unpredictability" is just "(our) not being used to that as the most relevant interpretation," assuming that the first option is the most systematic one.

This theory allows us to account for children's use of roots in ways that differ greatly from adult grammar, without implying that such use is wrong, as the result is in most cases perfectly interpretable at both interfaces, and that is all we need. Children manipulate roots more freely than adults: they have the generative system (Merge), the generic concepts that roots instantiate (more or less, depending on their contact with the phenomenological world), and some phonological signatures to be inserted in terminal nodes, but they are not constrained by the (extralinguistic) feeling of "awkwardness" that arises when a root is used in a (statistically) "non-standard" syntactic context. In child language there are no limits to coinage capability beyond interface requirements, and this gives children a wide range of expressive resources. Strong empirical evidence in our favor is that adults can parse these non-standard expressions and understand them, even though they would not produce them.

To conclude, we would like to stress that under our analysis of Radical Minimalism, many previously insufficiently explained non-trivial phenomena and data of normal and impaired language development can be explained in a rather elegant and economical way. Moreover, our theory supports our conviction that future cooperation between linguistics, biology, and molecular genetic diagnostics might be on the right track.
Bibliography

Aarts, Bas. 1992. Small Clauses in English: The Nonverbal Types (Topics in English Linguistics, Vol. 8). Berlin, New York: Mouton de Gruyter. Abels, Klaus. 2003. Successive Cyclicity, Anti-locality, and Adposition Stranding. PhD thesis, University of Connecticut. Abels, Klaus, and Neeleman, Ad. 2007. Left Right Asymmetries and the LCA. lingbuzz, http://ling.auf.net/lingBuzz/000279. Abney, S.P. 1987. The English Noun Phrase in its Sentential Aspect. PhD thesis, MIT. Adani, Flavia. 2008. The Role of Features in Relative Clause Comprehension: A Study of Typical and Atypical Development. Unpublished PhD dissertation, University of Milano-Bicocca. Adani, Flavia. 2009. Rethinking the Acquisition of Relative Clauses in Italian: Towards a Grammatically Based Account. Journal of Child Language 38 (1), 141–165. Adani, Flavia, van der Lely, Heather K. J., Forgiarini, Matteo, and Guasti, Maria Teresa. Ms. Grammatical Feature Dissimilarities Make Relative Clauses Easier: A Comprehension Study with Italian Children. Ms., University of Potsdam. Ades, A., and Steedman, M. 1982. On the Order of Words. Linguistics and Philosophy 6, 517–558. Adger, David. 2003. Core Syntax. Oxford: Oxford University Press. Adger, David. 2008. A Minimalist Theory of Feature Structure. lingBuzz/000583. Adger, David. 2010a. A Minimalist Theory of Feature Structure. In Anna Kibort, and Greville Corbett (eds.), Features: Perspectives on a Key Notion in Linguistics. Oxford: Oxford University Press, 185–218. Adger, David. 2010b. Variability and Grammatical Architecture. lingBuzz/001176. Adger, David. 2011a. Labels and Structures. lingBuzz/001252. Adger, David. 2011b. Labels and Structures. Draft book chapter, April 2011. Ms. Adger, David. 2013. A Syntax of Substance (Linguistic Inquiry Monographs, Vol. 64). Cambridge, MA, London, England: MIT Press. Adger, David, and Ramchand, Gillian. 2005. Move and Merge: Wh-Dependencies Revisited. Linguistic Inquiry 36, 161–194.
Adger, David, and Svenonius, Peter. 2011. Features in Minimalist Syntax. In Cedric Boeckx (ed.), The Handbook of Linguistic Minimalism. Oxford: Blackwell, 27–51. Adger, David, Harbour, Daniel, and Watkins, Laurel. 2009. Mirrors and Microparameters: Phrase Structure beyond Free Word Order. Cambridge: Cambridge University Press. Aikhenvald, Alexandra Y. 2012. The Languages of the Amazon. Oxford: Oxford University Press. Alexiadou, Artemis. 2001. Functional Structure in Nominals: Nominalization and Ergativity. Amsterdam: John Benjamins. Alexiadou, Artemis. 2005. A Note on Non-Canonical Passives: The Case of the Get Passive. In H. Broekhuis et al. (eds.), Festschrift für Henk van Riemsdijk. Berlin: Mouton de Gruyter. Alexiadou, Artemis, and Anagnostopoulou, Elena. 2004. Voice Morphology in the Causative-Inchoative Alternation: Evidence for a Non-unified Structural Analysis of Unaccusatives. In A. Alexiadou, E. Anagnostopoulou, and M. Everaert (eds.), The Unaccusativity Puzzle. Oxford: Oxford University Press. Alexiadou, Artemis, and Anagnostopoulou, Elena. 2009. Agent, Causers and Instruments PPs in Greek: Implications for Verbal Structure. MIT Working Papers in Linguistics 57, 1–16. Alexiadou, Artemis, and Doron, E. 2007. The Syntactic Construction of Two Non-active Voices: Passive and Middle. Paper presented at the Workshop on Global Selective Comparison, GLOW XXX, Tromsø. Alexiadou, Artemis, and Schäfer, Florian. 2006. Instrument Subjects Are Agents or Causers. In Donald Baumer, David Montero, and Michael Scanlon (eds.), Proceedings of the 25th West Coast Conference on Formal Linguistics. Somerville, MA: Cascadilla Proceedings Project, 40–48. Alexiadou, Artemis, and Schäfer, Florian. 2014. Towards a Non-uniform Analysis of Naturally Reflexive Verbs. In R. E. Santana-LaBarge (ed.), Proceedings of WCCFL 31, 1–10. Alexiadou, Artemis, Anagnostopoulou, Elena, and Schäfer, Florian. 2006. The Properties of Anticausatives Crosslinguistically. In M. Frascarelli (ed.), Phases of Interpretation. Berlin: Mouton de Gruyter. Alexiadou, Artemis, Borer, Hagit, and Schäfer, Florian (eds.). 2014. The Syntax of Roots and the Roots of Syntax (Oxford Studies in Theoretical Linguistics 51). Oxford: Oxford University Press. Alexiadou, Artemis, Gehrke, Berit, and Schäfer, Florian. 2014. The Argument Structure of Adjectival Participles Revisited. Lingua 149, 118–138.
Alloway, T.P., Rajendran, G., and Archibald, L.M.D. 2009. Working Memory in Children with Developmental Disorders. Journal of Learning Disabilities 42, 372–382. Aoun, Joseph E., Benmamoun, Elabbas, and Choueiri, Lina. 2010. CliticLeft Dislocation and Focus Constructions. In Joseph E. Aoun, Elabbas Benmamoun, and Lina Choueiri. The Syntax of Arabic. Chapter 8 – CliticLeft Dislocation and Focus Cnstructions, 190–213. DOI: http://dx.doi. org/10.1017/CBO9780511691775.008 Cambridge University Press. Arsenijević, Boban, Kresić, Marijana, Leko, Nedžad, Nevins, Andrew, and Willer-Gold, Jana. 2016. Introduction: Agreement Phenomena in Slavic Languages. In Arsenijević, Boban, Marijana Kresić, Nedžad Leko, Andrew Nevins, and Jana Willer-Gold (eds.) 2016. JSL Special Issue. Agreement in Slavic. JSL Volume 24, number 1, winter-spring 2016, 1–260. Austin, J. L. 1962. How to Do Things with Words. Oxford: Oxford University Press. Aydede, Murat. 2010. The Language of Thought Hypothesis. In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2010 Edition), http://plato.stanford.edu/archives/fall2010/entries/language-thought/ Babby, Leonard H. 1987. Case, Prequantifiers, and Discontinuous Agreement in Russian. Natural Language and Linguistic Theory 5: 91–138. Babby, Len. 2002. Argument Suppression and Case in Russian Derived Nominals. In Peter Kosta, and Jens Frasek (eds.), Current Approaches to Formal Slavic Linguistics (Linguistik Interational, Vol. 9). Contributions of the Second European Conference on Formal Description of Slavic Languages – FDSL II held at Potsdam University, November 20–22. 1997. Frankfurt am Main: Peter Lang, 9–21. Babyonyshev, Maria, Ganger, Jennifer, Pesetsky, David, and Wexler, Ken. 2001. The Maturation of Grammatical Principles: Evidence from Russian Unaccusatives. Linguistic Inquiry 32, 1–44. Bach, Emond. 1986. The Algebra of Events, Linguistics and Philosophy 9, 5–16. Bach, Kent 1994. Conversational Impliciture. Mind and Language 9. 124–162. Bach, Kent, and Harnish, Robert M. 1979/1991. Linguistic Communication: A Schema for Speech Acts. [Originally in: Linguistic Communication and Speech Acts. MIT Press. Cambridge, Mass. 3–18.] Reprinted in: S. Davis (ed.) Pragmatics. A Reader. New York/Oxford. Baddeley, A.D. 1986. Working Memory. Oxford: Oxford University Press. Baddeley, A.D. 1992. Working Memory. Science 255, 556–559.
Baddeley, A.D., and Wilson, B. 1988. Comprehension and Working Memory: A Single Case Neuropsychological Study. Journal of Memory and Language 27, 586–595. Baddeley, A., Papagno, C., and Vallar, G. 1988. When Long-Term Learning Depends on Short-Term Storage. Journal of Memory and Language 27, 586–595. Baeriswyl, F. 1989. Verarbeitungsprozesse und Behalten im Arbeitsgedächtnis. Heidelberg: Asanger. Bailyn, John F., and Citko, Barbara. 1999. Case and Agreement in Slavic Predicates. In Katarzyna Dziwirek, Hebert Coats, and Cynthia M. Vakareliyska (eds.), Formal Approaches to Slavic Linguistics, Vol. 7. Ann Arbor: Michigan Slavic Publications, 17–37. Bailyn, John and Rubin, E. 1991. The Unification of Instrumental Case Assignment in Russian. Cornell Working Papers of Linguistics 9, 99–126. Baker, Mark. 1985. The Mirror Principle and Morphosyntactic Explanation. Linguistic Inquiry 16, 373–415. Baker, Mark. 1988. Incorporation, Chicago: Chicago University Press. Baker, M. C. 1996. The Polysynthesis Parameter. Oxford: Oxford University Press. Baker, M.C. 2001. The Atoms of Language: The Mind’s Hidden Rules of Grammar. NewYork: Basic Books. Baker, Mark 2002. Building and Merging, Not Checking: The Nonexistence of (Aux)-SVO Languages. Linguistic Inquiry 33, 321–328. Baker, M.C. 2003a. Linguistic Differences and Language Design. Trends in Cognitive Sciences 7, 349–353. Baker, M.C. 2003b. Lexical Categories. Verbs, Nouns, and Adjectives. Cambridge: Cambridge University Press. Baker, M.C. 2005. Mapping the Terrain of Language Learning. Language Learning and Language Development 1, 93–124. Baker, M.C. 2008a. The Syntax of Agreement and Concord. Cambridge: Cambridge University Press. Baker, M.C. 2008b. The Macroparameter in a Microparametric World. In T. Biberauer (ed.), The Limits of Syntactic Variation, 351–374, Amsterdam: John Benjamins. Baker, M.C. and Collins, C. 2006. Linkers and the Internal Structure of vP, Natural Language and Linguistic Theory 24, 307–354. Baker, Mark, Johnson, Ken, and Roberts, Ian. 1989. Passive Arguments Raised. Linguistic Inquiry 20, 219–252.
Barry, J.G., Yasin, I., and Bishop, Dorothy V. M. 2007. Heritable Risk Factors Associated with Language Impairments. Genes, Brain & Behavior 6 (1), 66–76. Bates, E. S., McNew, B., MacWhinney, A., Devescovi, A., and Smith, S. 1982. Functional constraints on sentence processing. Cognition, 11, 245–299. Beasley, K., and Karttunen, L. 2003. Finite State Morphology. Stanford, CA: CSLI Publications. Belletti, Adriana, and Rizzi, Luigi. 1988. Psych Verbs and θ-Theory. Natural Language and Linguistic Theory 6, 291–352. Benítez-Burraco, Antonio, and Longa, Víctor M. 2010. Evo-Devo – Of Course, But Which One? Some Comments on Chomsky’s Analogies between the Biolinguistic Approach and Evo-Devo. Biolinguistics 4, 308–323. Berl, Madison M., Duke, Elizabeth S., Mayo, Jessica, Rosenberger, Lisa R., Moore, Erin N., VanMeter, John, Ratne, Nan Bernstein, Vaidya, Chandan J., and Gaillard, William Davis. 2010. Functional Anatomy of Listening and Reading Comprehension during Development. Brain & Language 114 (2010), 115–125. Bergen, Benjamin K. 2014. Universal Grammar. http://edge.org/responsedetail/25539, accessed 10 January 2016. Bernstein, Judy. 1993. The Syntactic Role of Word Markers in Null Nominal Constructions. Probus 5 (1–2), 5–38. Berwick, Robert C. 1997. Syntax Facit Saltum: Computation and the Genotype and Phenotype of Language. Journal of Neurolinguistics 10, 231–249. Berwick, Robert C. 2016. Why Only Us? Language and Evolution. Cambridge, MA/London, UK: MIT Press. Berwick, Robert C., and Epstein, S.D. 1995. Merge: The Categorial Imperative, Proceedings of the 5th AMAST Conference. Enschede: University of Twente. Berwick, Robert C., and Niyogi, P. 1996. Learning from Triggers. Linguistic Inquiry 27, 605–622. Berwick, Robert C., and Weinberg, A. S. 1983. The role of grammars as components of models of language use. Cognition 13, 1–61. Berwick, Robert C., and Weinberg, A. S. 1984. The grammatical basis of linguistic performance: language use and acquisition. Cambridge, Mass. & London: MIT Press. Berwick, Robert C., and Weinberg, A. S. 1986. The Grammatical Basis of Linguistic Performance. Cambridge, MA: MIT Press. Bever, T. G. 1970. The cognitive basis for linguistic structure. In J. R. Hayes (Ed.), Cognition and the development of language. New York: John Wiley & Sons.
Bhatt, Rajesh. 2002. The Raising Analysis of Relative Clauses: Evidence from Adjectival Modification. Natural Language Semantics 10 (1), 43–90. Bickerton, Derek. 1990. Language and Species. Chicago: Chicago University Press. Bishop, Dorothy V.M. 1994. Is Specific Language Impairment a Valid Diagnostic Category? Genetic and Psycholinguistic Evidence. Philosophical Transactions of the Royal Society of London 346, 105–111. Bishop, Dorothy V.M. 1997. Uncommon Understanding: Development and Disorders of Language Comprehension in Children. Hove: Psychology Press. Bishop, Dorothy V.M. 2002. Putting Language Genes in Perspective. Trends in Genetics 18 (2). 57–59. Bishop, Dorothy V.M. 2004. Specific Language Impairment: Diagnostic Dilemmas. In L. Verhoeven and H. Van Balkom (eds.), Classification of Developmental Language Disorders. Mahwah, NJ: Lawrence Erlbaum, 309–326. Bishop, Dorothy V.M., and Leonard, Laurence B. (eds.). 2000. Speech and Language Impairments in Children: Causes, Characteristics, Intervention and Outcome. Hove: Psychology Press. Bishop, Dorothy, and Norbury, C.F. 2008. Speech and Language Disorders. In M. Rutter, D. Bishop, Dorothy Pine, S. Scott, J. Stevenson, E. Taylor, & A. Thapar (eds.), Rutter’s Child and Adolescent Psychiatry, 782–801. Oxford: Blackwell. Bishop, Dorothy V.M., and Snowling, M.J. 2004. Developmental Dyslexia and Specific Language Impairment: Same or Different? Psychological Bulletin 130 (6), 858–886. Biskup, Petr. 2011 Adverbials and the Phase Model. Amsterdam: John Benjamins. Błaszczak, Joanna, and Dorota Klimek-Jankowska. 2015. Noun and verb in the mind. An interdisciplinary approach. In Joanna Błaszczak, Dorota KlimekJankowska & Krzysztof Migdalski (eds.), How categorical are categories? New approaches to the old questions of Noun, Verb, and Adjective (Studies in Generative Grammar 122), 75−112. Boston, MA: Walter de Gruyter, Inc. Błaszczak, Joanna, Anna Czypionka, and Dorota Klimek-Jankowska. 2018. Why are verbal nouns more verbal than finite verbs? New insights into the interpretation of the P200 verbal signature. Glossa: a journal of general linguistics 3(1), 78, 1–26, DOI: https://doi.org/10.5334/gjgl.365 Biskup, Petr. 2015. Labeling and Other Syntactic Operations. In L. Bauke, and A. Blümel (eds.), Labels and Roots. Berlin: de Gruyter, 91–116. Biskup, Petr. 2017. Decomposing Prepositional Cases in Russian and Polish. In M. Guhl, and Olaf Mueller-Reichau (eds.), Aspects of Slavic
Linguistics: Formal Grammar, Lexicon, and Communication. Berlin: de Gruyter, 50–68. Biskup, Petr. 2019. Prepositions, Case and Verbal Prefixes: The Case of Slavic. Amsterdam: John Benjamins. Bloomfield, Leonard. 1979. Language. London, Boston, Sydney: George Allen & Unwin. Bobaljik, Jonathan D. 1995. Morphosyntax: The Syntax of Verbal Inflection. PhD dissertation, MIT Press. Boeckx, Cedric. 2003. Free Word Order in Minimalist Syntax. Folia Linguistica 37 (1–2), 77–102. Boeckx, Cedric. 2006. Linguistic Minimalism. Origins, Concepts, Methods, and Aims. Oxford, New York: Oxford University Press. Boeckx, Cedric. 2008a. Aspects of the Syntax of Agreement. First published 2008 by Routledge New York London Boeckx, Cedric Treelets, not Trees. Talk presented at BCGL 3—Trees and Beyond. May 2123, 2008. Boeckx, Cedric. 2008b. The Nature of Merge. Consequences for Language, Mind and Biology. In M. Piatelli Palmarini, et al. (eds.), Of Minds and Language. Oxford: Oxford University Press, 44–57. Boeckx, Cedric. 2008c. Treelets, not Trees. BCGL 3 — Trees and Beyond. May 21–23, 2008 Boeckx, Cedric. 2009. The Nature of Merge. Consequences for Language, Mind and Biology. In Piatelli Palmarini et al. (eds.), Of Minds and Language. Oxford: Oxford University Press. Boeckx, Cedric 2010a. Defeating Lexicocentrism. lingBuzz/001130 Boeckx, Cedric 2010b. What Principles and Parameters got Wrong. ICREA/ UAB. Boeckx, Cedric. 2010c. Lingustic minimalism. In B. Heine, and H. Narrog (eds.), Oxford Handbook of Linguistic Analysis. Oxford: Oxford University Press, 485–505. Boeckx, Cedric, and Grohmann, K. 2004. Putting Phases into Perspective. Ms., Harvard University, and University of Cyprus. Boeckx, Cedric, and Grohmann, Kleanthes K. 2007. Putting Phases in Perspective. Syntax 10, 204–222. Boeckx, Cedric, Hornstein, Norbert, and Nunes, Jairo. 2010. Control as Movement (Cambridge Studies in Linguistics 126). Cambridge: Cambridge University Press. Boeckx, Cedric, Horno, María Carmen, and Mendívil-Giró, José Luis (eds.). 2012. Language, from a Biological Point of View. Newcastle: Cambridge Scholars Publishing.
Boeckx, Cedric. Noam Chomsky, J. Uriagereka, M. Piattelli-Palmarini, and P. Salaburu (eds.) Oxford: Oxford University Press, 44–57. Bondaruk, Anna. 2004. PRO and Control in English, Irish and Polish – A Minimalist Analysis. Lublin: Wydawnictwo KUL. Bondaruk, Anna. 2013a. Ambiguous Markers of Predication in Polish – Predicators or Prepositions? In Anna Bondaruk, and Anna MalickaKleparska (eds.), Ambiguity. Multifaceted Structures in Syntax, Morphology and Phonology. Lublin: Wydawnictwo KUL, 59–90. Bondaruk, Anna. 2013b. Copular Clauses in English and Polish. Structure, Derivation and Interpretation. Lublin: Wydawnictwo KUL. Borer, Hagit. 2005a. Structuring Sense, Vol. I: In Name Only and Vol. II: The Normal Course of Events. Oxford: Oxford University Press. Borer, Hagit. 2005b. The Normal Course of Events: Structuring Sense, Vol II. Oxford: Oxford University Press. Borer, Hagit. 2009. Roots and Categories. Handout presented at XIX Colloquium on Generative Grammar, University of the Basque Country. Borer, Hagit, and Wexler, Ken. 1987. The Maturation of Syntax. In Thomas Roeper, and Edwin Williams (eds.), Parameter-Setting and Language Acquisition. Dordrecht: Reidel. Borer, Hagit, and Wexler, Ken. 1992. Bi-unique Relations and the Maturation of Grammatical Principles. Natural Language and Linguistic Theory 10, 147–189. Bošković, Željko. 2005, April. On the Locality of Left Branch Extraction and the Structure of NP. Studia Linguistica 59 (1), 1–45. Bošković, Željko. 2008a. On Successive Cyclic Movement and the Freezing Effect of Feature Checking. In Jutta M. Hartmann, Veronika Hegedüs, and Henk van Riemsdijk (eds.), Sounds of Silence: Empty Elements in Syntax and Phonology. Amsterdam: Elsevier, 195–233. Bošković, Željko. 2008b. On the Operator Freezing Effect. Natural Language and Linguistic Theory 26, 249–287. Bošković, Željko. 2013. Phases beyond Clauses. In LiliaSchürcks, Anastasia Giannakidou, and Urtzi Etxeberria (eds.), The Nominal Structure in Slavic and Beyond. Berlin: Mouton de Gruyter, 75–128. Bošković, Željko. 2014. Now I’m a Phase, Now I’m Not a Phase: On the Variability of Phases with Extraction and Ellipsis. Linguistic Inquiry 45, 27–89. Bošković, Željko. 2015. From the Complex NP Constraint to Everything: On Deep Extractions across Categories. The Linguistic Review 32, 603–669. Bošković, Željko. 2020. On the Impossibility of Moving IPs and V-2 Clauses and Labeling. In Teodora Radeva-Bork, and Peter Kosta (eds.), Current
Developments in Slavic Linguistics. Twenty Years After (based on selected papers from FDSL 11). Berlin: Peter Lang, 27–46. Bošković, Željko. in press. On the Timing of Labeling: Deducing Comp-Trace Effects, the Subject Condition, the Adjunct Condition, and Tucking in from Labeling. The Linguistic Review: Special Issue on Labeling. Bošković, Željko, and Lasnik, Howard (eds.). 2006. Minimalist Syntax. The Essential Readings. Malden: Wiley-Blackwell. Bošković, Željko, and Takahashi, Daiko. 1998. Scrambling and Last Resort. Linguistic Inquiry 29, 343–366. Bowers, J. 1993. The Syntax of Predication. Linguistic Inquiry 24, 591–657. Bowers, J. 2001. Predication. In M. Baltin, and Ch. Collins (eds.), The Handbook of Contemporary Syntactic Theory. Oxford: Blackwell, 299–333. Brekle, H. E. 1972. Semantik. Eine Einführung in die sprachwissenschaftliche Bedeutungslehre. München: Fink. Bresnan, Joan & Mchombo, Sam. 1995. The Lexical Integrity Principle: Evidence from Bantu. Natural Language and Linguistic Theory 13, 181–254. Brody, Michael. 1993. Theta-theory and Arguments. Linguistic Inquiry 24, 1–23. Brody, Michael. 1997. Perfect Chains. In Liliane Haegeman (ed.), Elements of Grammar. Dordrecht: Kluwer, 139–167. Broselow, J., and McCarthy, J. 1983. A Theory of Internal Reduplication. The Linguistic Review 3, 25–88. Brugè, Laura. 2002. The Positions of Demonstratives in the Extended Nominal Projection. In Guglielmo Cinque (ed.), Functional Structure in DP and IP: The Cartography of Syntactic Structures, Vol. 1. New York: Oxford University Press, 15–53. Burzio, Luigi. 1981. Intransitive Verbs and Italian Auxiliaries. PhD dissertation, MIT Press. Burzio, Luigi. 1986. Italian Syntax, Dordrecht: Reidel. Bühler, Karl. 1934. Sprachtheorie. Die Darstellungsfunktion der Sprache. Jena: Gustav Fischer Verlag. Caha, Pavel. 2009. The Nanosyntax of Case. PhD dissertation, University of Tromsø. Cáha, Pavel, and Ziková, Markéta. 2016. Vowel Length as Evidence for a Distinction Between Free and Bound Prefixes in Czech. Acta Linguistica Hungarica 63 (3), 331–377. Cardinaletti, Anna, and Guasti, Maria Teresa (eds.). 1995. Syntax and Semantics. Vol. 28: Small Clauses. San Diego, New York, Boston.
Cardinaletti, Anna, and Starke, Michal. 1999. The Typology of Structural Deficiency: A Case Study of the Three Classes of Pronouns. In Henk van Riemsdijk (ed.), Eurotyp. Volume 5/Part 1: Clitics in the Languages of Europe. Berlin, New York: Mouton de Gruyter, 145–234. Carruthers, P. 2003. Moderately Massive Modularity. In O’Hear (ed.), Mind and Persons. Cambridge: Cambridge University Press, 96–91. Carruthers, P. 2006. The Architecture of the Mind. Oxford: Oxford University Press. Carstens, Vicki. 2010. Implications of Grammatical Gender for the Theory of Uninterpretable Features. In Michael T. Putnam (ed.), Exploring Crash Proof Grammars. Amsterdam: John Benjamins, 31–57. Carston, R. 1998. The Relationship between Generative Grammar and (Relevance-theoretic) Pragmatics. Ms. London: University College. Chachulska, Beata. 2008. Prädikativer Instrumental, Kasuskongruenz oder analytische Markierung bei sekundären Prädikaten im Polnischen. In Christoph Schroeder, Gerd Hentschel, and Winfried Boeder (eds.), Secondary Predicates in Eastern European Languages and Beyond. Oldenburg, 41–68. Chierchia, Gennaro. 1989. A Semantics for Unaccusatives and its Syntactic Consequences, unpublished Ms. Ithaca, NY: Cornell University. Chierchia, Gennaro. 1998. Reference to Kinds across Language. Natural Language Semantics 6 (4), 339–405. Chierchia, Gennaro, and McConnnell-Ginet, Sally.22000. Meaning and Grammar. An Introduction to Semantics. Cambridge, MA, London, UK: MIT Press. Chomsky, Noam. 1957. Syntactic Structures. The Hague: Mouton de Gruyter. Chomsky, Noam. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press. Chomsky, Noam. 1966. Cartesian Linguistics. New York: Harper & Row. Chomsky, Noam. 1981. Lectures on Government and Binding. Dordrecht: Foris. Chomsky, Noam. 1970. Remarks on Nominalization. In Roderick A. Jacobs, and Peter S. Rosenbaum (eds.), Readings in English Transformational Grammar. Boston: Ginn, 184–221. Chomsky, Noam. 1972. Remarks on Nominalization. In: Chomsky, Noam. (ed.) Studies on Semantics in Generative Grammar. The Hague: Mouton, 11–61. Chomsky, Noam. 1982. Some Concepts and Consequences of the Theory of Government and Binding. LI Monographs 6. Cambridge, MA: MIT Press. Chomsky, Noam. 1986a. Knowledge of Language: Its Nature, Origin and Use. New York: Praeger.
Chomsky, Noam. 1986b. Barriers. MIT LI Monographs 13. Cambridge, MA: MIT Press. Chomsky, Noam 1989. Some Notes on Economy of Derivation and Representation. In Itziar Laka, and Anoop Mahajan (eds.), Functional Heads and Clause Structure (MIT Working Papers in Linguistics, vol. 10), Cambridge, MA: Department of Linguistics and Philosophy, MIT [Published 1991 in Freidin (ed.), 417–54. Chomsky, Noam. 1991. Some Notes on Economy of Derivations and Derivations. In R. Freidin (ed.), Principles and Parameters in Comparative Grammar. Cambridge, MA: MIT Press, 417–454. Reprinted in: Chomsky (1995). Chomsky, Noam. 1993. A Minimalist Program for Linguistic Theory. In Ken Hale, and Samuel J. Keyser (eds.), The View from Building 20. Cambridge, MA: MIT Press, 41–58. [Reprinted as Chapter 3 of Chomsky 1995]. Chomsky, Noam. 1994. Bare Phrase Structure. Working Occasional Papers in Linguistics 5. Cambridge, MA: MIT Press. Chomsky, Noam. 1995. The Minimalist Program. Cambridge, MA: MIT Press. Chomsky, Noam. 1998. On Language. Chomsky’s Classical Works Language and Responsibility and Reflections on Language. New York. London: The New Press. Chomsky, Noam. 1999. Derivation by Phase. Cambridge, MA: MIT Press. Chomsky, Noam. 2000a. Minimalist Inquiries: The Framework. In Robert Martin, David Michaels, and Juan Uriagereka (eds.), Step by Step: Essays in Minimalist Syntax in Honor of Howard Lasnik. Cambridge, MA: MIT Press, 89–155. Chomsky, Noam. 2000b. The Architecture of Language. In Nirmalangshu Mukherji, Bibudhendra Narayan Patnaik, and Rama Kant Agnihotri (eds.), New Delhi: Oxford University Press. Edited version of a lecture delivered at the University of Delhi, January 1996.) Chomsky, Noam. 2001a. Derivation by phase. In Michael Kenstowicz (ed.), Ken Hale: A Life in Language. Cambridge, MA: MIT Press, 1–52. Chomsky, Noam. 2001b. New Horizons in the Study of Language and Mind. Cambridge: Cambridge University Press. Chomsky, Noam. 2001c. Su natura e linguaggio (On Nature and Language). In Adriana Belletti, and Luigi Rizzi (eds.), Siena, Italy: Edizioni dell’Università degli Studi di Siena. (Collection of text related to Noam Chomsky’s visit to the Certosa di Pontignano, University of Siena, November 1999.) Chomsky, Noam. 2002. On Nature and Language. In: Adriana Belletti and Luigi Rizzi (eds.), Cambridge: Cambridge University Press. (Collection of text related to Noam Chomsky’s visit to the Certosa di Pontignano, University of Siena, November 1999.)
Chomsky, Noam. 2004a. Beyond Explanatory Adequacy. In Adriana Belletti (ed.), The Cartography of Syntactic Structures (Vol. 3, Structures and beyond). Oxford: Oxford University Press. Chomsky, Noam. 2004b. The Generative Enterprise Revisited: Discussions with Riny Huybregts, Henk van Riemsdijk, Naoki Fukui and Mihoko Zushi, Berlin, New York: Mouton de Gruyter. Chomsky, Noam. 2004c. Biolinguistics and the Human Capacity. Talk delivered for the Research Institute for Linguistics, Hungarian Academy of Sciences, Budapest, May 17, 2004. A biolingvisztika és az emberi minõség. Magyar Tudomány 12, 1354–1366. Chomsky, Noam. 2005a. Three Factors in Language Design. Linguistics Inquiry 36 (1) (Winter 2005. 1–22. Expanded from a talk given at the annual meeting of the Linguistics Society of America, January 9, 2004). Chomsky, Noam. 2005b. What we Know: On the Universals of Language and Rights. Boston Review 30 (3–4), 23–27. Chomsky, Noam. 2005c. Universals of Human Nature. Psychotherapy and Psychosomatics 74 (5), 263–268. Based on address delivered while receiving honorary degree in Psychology at the University of Bologna, Italy, April 1, 2005. Chomsky, Noam. 2005d. with W. Tecumseh Fitch and Marc D. Hauser. The Evolution of the Language Faculty: Clarifications and Implications. Cognition 97 (2), 179–210. Chomsky, Noam. 2006. Biolinguistic Explorations: Design, Development, Evolution. International Journal of Philosophical Studies 15 (1), 1–21. Based on a talk given at the University College, Dublin, Ireland, January 20, 2006. Chomsky, Noam. 2007a. Language and Thought: Descartes and Some Reflections on Venerable Themes. In Andrew Brook (ed.), The Prehistory of Cognitive Science. Houndmills, Basingstoke, Hampshire, England: Palgrave Macmillan, 38–66. Chomsky, Noam. 2007b. Approaching UG from Below. In Uli Sauerland, and Hans-Martin Gartner (eds.), Interfaces + Recursion = Language? Chomsky’s Minimalism and the View from Syntax-Semantics. Berlin, New York: Mouton de Gruyter, 1–29. Chomsky, Noam. 2008. On Phases. In Robert Freidin, Carlos P. Otero, and Maria Luisa Zubizarreta (eds.), Foundational Issues in Linguistic Theory, Cambridge, MA: MIT Press, 133–166. Chomsky, Noam. 2010. Some Simple evo-devo Theses: How Might They Be True For Language? In Richard K. Larson, Vivene Depréz, and Hiroko Yamakido (eds.), The Evolution of Human Language. Cambridge: Cambridge University Press, 45–62.
Chomsky, Noam. 2013. Problems of Projection. Lingua 130, 33–49. Chomsky, Noam 2020. Minimal Computation and the Architecture of Language. In Teodora Radeva-Bork, and Peter Kosta (eds.), Current Developments in Slavic Linguistics. Twenty Years After (Based on selected papers from FDSL 11). Berlin: Peter Lang, 13–25. Chomsky, Noam, and Halle, Morris. 1968. The Sound Pattern of English. New York: Harper & Row. Chomsky, Noam, and Lasnik, Howard. 1977. Filters and Control, Linguistic Inquiry 8 (3), 425–504. Chomsky, Noam, and Lasnik, Howard. 1993. The Theory of Principles and Parameters. In Joachim Jacobs, Armin vonStechow, Wolfgang Sternefeld, and Theo Vennemann (eds.), Syntax: An International Handbook of Contemporary Research. Berlin: Walter de Gruyter, 506–569. Chomsky, Noam, Gallego, Ángel J., and Ott, Dennis. 2019. Generative Grammar and the Faculty of Language: Insights, Questions, and Challenges. Catalan Journal of Linguistics Special Issue, 2019, 229–261. Cinque, G. 1990. Types of Ā-Dependencies (Vol. 17, Linguistic Inquiry Monographs). Cambridge, MA: MIT Press. Cinque, G. 1999. Adverbs and Functional Heads: A Crosslinguistic Perspective (Oxford Studies in Comparative Syntax). New York: Oxford University Press. Cinque, Guglielmo. 2006. Restructuring and Functional Heads (The Cartography of Syntactic Structure, Vol. 4). Oxford: Oxford University Press. Citko, Barbara. 2008. Missing labels. Lingua 118, 907–944. Citko, Barbara. 2014. Phase Theory: An Introduction. Cambridge: Cambridge University Press. Clahsen, Harald. 1991. Child Language and Developmental Dysphasia. Linguistic Studies of the Acquisition of German (Studies in Speech Pathology and Clinical Linguistics, Vol. 2). Amsterdam, Philadelphia: John Benjamins. Clahsen, Harald. 1990/1991. Constraints on Parameter Setting: A Grammatical Analysis of Some Acquisition Stages in German Child Language. Language Acquisition 1, 361–391. Clahsen, Harald, Bartke, S., and Göllner S. 1997. Formal Features in Impaired Grammars: A Comparison of English and German SLI Children. Journal of Neurolinguistics 10, 151–171. Clahsen, Harald, Eisenbeiss, Sonja, and Penke, Martina. 1996. Lexical Learning in Early Syntactic Development. In Harald Clahsen (ed.), Generative Perspectives on Language Acquisition: Empirical Findings, Theoretical Considerations and Crosslinguistic Comparisons, Amsterdam, Philadelphia: John Benjamins, 129–159.
Clahsen, Harald, Penke, Martina and Parodi, Teresa. 1993/1994. Functional Categories in Early Child German. Language Acquisition 3, 395–429. Clifton, C., and Frazier, L. (1989). Successive cyclicity in the grammar and the parser. Language and Cognitive Processes, 4(2), 93–126. https://doi. org/10.1080/01690968908406359 Collins, Chris. 1994. Economy of Derivation and the Generalized Proper Binding Condition. Linguistic Inquiry 25, 45–61. Collins, Chris. 2002. Eliminating Labels. In Samuel David Epstein, and T. Daniel Seely (eds.), Derivation and Explanation in the Minimalist Program. Malden, MA: Blackwell, 42–64. Collins, Chris, and Postal, Paul Martin. 2012. Imposters: A Study of Pronominal Agreement. Cambridge, MA: MIT Press. Collins, Chris, and Stabler, Edwar. 2011. A Formalization of Minimalist Syntax. Submitted. http://ling.auf.net/lingbuzz/001691 Corbara, Erika. 2012. Ökonomie in der Sprache und die Pro-drop-Theorie aus der Perspektive des L1-Erwerbs. Masterarbeit Universität Potsdam. Corbett, Greville G. 1980. Animacy in Russian and Other Slavonic Languages: Where Syntax and Semantics Fail to Match. In: Chvany, Catherine V., and Richard D. Brecht. (eds.) Morphosyntax in Slavic. Michigan: Slavica Publishers 1980, 43–61. Corbett, Greville G. 1982. Gender in Russian: An Account of Gender Specification and Its Relationship to Declension. Russian Linguistics 6, 197–232. Corbett, Greville G. 1991. Gender. Cambridge: Cambridge University Press. Corbett, Greville G. 1993. The Head of Russian Numeral Expressions. In N. M. Fraser, G. G. Corbett, and S. McGlashan (eds.), Heads in Grammatical Theory. Cambridge: Cambridge University Press, 11–35. Corbett, Greville G. 2006. Agreement. Cambridge: Cambridge University Press, 1–328. Corver, Norbert. 1990. The Syntax of Left Branch Extractions. Doctoral dissertation, University of Brabant. Corver, Norbert, and Riemsdijk, Henk van (eds.). 1994. Studies on Scrambling: Movement and nonMovement Approaches to Free Word Order Phenomena. Berlin: Mouton de Gruyter. Costa, João. 1998. Word Order Variation: A Constraint-Based Approach. The Hague: Holland Academic Graphics. Costa, João. 2004. Subject Positions and Interfaces. The Case of European Portuguese. Berlin: Mouton de Gruyter.
Costa, João, and Duarte, Inês. 2002. Preverbal Subjects in Null Subject Languages Are Not Necessarily Dislocated. Journal of Portuguese Linguistics 1, 159–176. Costa, João, and Figueiredo Silva, Maria Cristina. 2006. On the (in) dependence Relations between Syntax and Pragmatics. In Valéria Molnar, and Susanne Winkler (eds.), Architecture of Focus. Berlin: Mouton de Gruyter, 83–104. Crain, S., and Steedman, M. 1985. On not being led up the garden path. In D. Dowty, L. Kartunnen, and A. M. Zwickey (Eds.), Natural language parsing: Psycholinguistic, computational, and theoretical perspectives. Cambridge, UK: Cambridge University Press. Croft, W. 1990. Typology and Universals (Cambridge Text books in Linguistics). Cambridge: Cambridge University Press. Culicover, Peter, and Jackendoff, Ray. 2005. Simpler Syntax. Oxford, New York: Oxford University Press. Culicover, Peter, and Wexler, Kenneth. 1977. Some Syntactic Implications of a Theory of Language Learnability. In Peter Culicover, Thomas Wasow, and Adrian Akmajian (eds.), Formal Syntax. New York: Academic Press, 7–60. Damonte, Federico. 2004. The Thematic Field: The Syntax of Valency-Enhancing Morphology. Ph.D thesis, University of Padua. Damonte, Federico. 2010. The Mirror Principle and the Order of Verbal Extensions: Evidence from Pular (Ms.) Danon, Gabi. 2011. Agreement and DP-Internal Feature Distribution. Syntax 14 (4), 297–317. De Belder, M. 2011. Roots and Affixes: Eliminating Categories from Syntax. PhD thesis, Utrecht University. De Belder, M., and Van Craenenbroeck, J. 2011. How to Merge a Root. LingBuzz 001226. De Lancey, S. 2001. Lectures on Functional Syntax. University of Oregon. den Dikken, Marcel. 2014. On Feature Interpretability and Inheritance. In Kosta, Peter et al. (eds.), Minimalism and Beyond: Radicalizing the Interfaces. Amsterdam, Philadelphia: John Benjamins, 37–55. Di Sciullo, Anna Maria 1997. Prefixed-Verbs and Adjunct Identification. In A.M. Di Sciullo (ed.), Projections and Interface Conditions. Essays on Modularity. New York: Oxford University Press, 52–74. Di Sciullo, Anna Maria. 2004. Morphological Phases. In The 4th GLOW in Asia 2003. Generative Grammar in a Broader Perspective. The Korean Generative Grammar Circle, 44–45.
Di Sciullo, Anna Maria. 2005a. Asymmetry in Morphology. Cambridge, MA: MIT Press. Di Sciullo, Anna Maria. 2005b. Affixes at the Edge. Canadian Journal of Linguistics 50, 83–117. Di Sciullo, Anna Maria. 2014. Minimalism and I-Morphology. In Peter Kosta, Steven Franks, Teodora Radeva-Bork, and Lilia Schürcks (eds.), Minimalism and Beyond: Radicalizing the Interfaces (Language Faculty and Beyond, Vol. 11). Amsterdam, Philadelphia: John Benjamins, 267–286. Di Sciullo, Anna Maria, and Boeckx, Cedric. (eds.). 2011a. The Biolinguistic Enterprise: New Perspectives on the Evolution and Nature of the Human Language Faculty. Oxford: Oxford University Press. Review Article: Kosta, Peter and Diego Gabriel Krivochen. Di Sciullo, Anna Maria, and Boeckx, Cedric. 2011b. Introduction: Contours of the Biolinguistic Research Agenda. In Anna Maria Di Sciullo, and Cedric Boeckx (2011), pp. Di Sciullo, Anna Maria, and Isac, D. 2008. The Asymmetry of Merge. Biolinguistics 2 (4), 260–290. Di Sciullo, Anna Maria, and Williams, Edwin. 1987. On the Definition of Word. Cambridge, MA: MIT Press. Diesing, Molly. 1992. Indefinites (Linguistic Inquiry Monograph Twenty). Cambridge, MA, London, UK: MIT Press. Dobrovie-Sorin, Carmen. 2012. Number as a Feature. In Laura Brugè, Anna Cardinaletti, Giuliana Giusti, Nicola Munaro, and Cecilia Poletto (eds.), Functional Heads: The Cartography of Syntactic Structures, Vol. 7. New York: Oxford University Press, 304–324. Donati, Caterina. 2006. On Wh-Head-Movement. In Lisa Lai Shen Cheng, and Norbert Corver (eds.), Wh-Movement on the Move. Cambridge MA: MIT Press, 21–46. Dowty, D. R. 1979. Word Meaning and Montague Grammar. Dordrecht: D. Reidel. Einstein, A. 1905. On the Electrodynamics of Moving Bodies. Annalen der Physik (ser. 4), 17, 891–921. Embick, D. 2010. Localism versus Globalism in Morphology and Phonology. Cambridge, MA: MIT Press. Embick, D., and Noyer, R. 2004. Distributed Morphology and the SyntaxMorphology Interface. Draft: 25/10/2004. Emonds, Joseph. 2006. Adjectival Passives. In M. Everaert, and H. van Riemsdijk (eds.), The Blackwell Companion to Syntax. Oxford: Blackwell, 16–60. Emonds, Joseph E. 2007. Discovering Syntax: Clause Structures of English, German and Romance. Berlin, New York: Mouton de Gruyter.
Epstein, S. 1999. Un-Principled Syntax and the Derivation of Syntactic Relations. In S. Epstein, and N. Hornstein (eds.), Working Minimalism. Cambridge, MA: MIT Press, 317–346. Epstein, S., and Seely, T. 2000. SPEC-ifying the GF Subject: Eliminating A-Chains and the EPP within a Derivational Model. Ms. University of Michigan. Epstein, S., and Seely, T. 2002. Rule Applications as Cycles in a Level Free Syntax. In S.D. Epstein, and T.D. Seely (eds.), Derivation and Explanation in the Minimalist Program. Oxford: Blackwell, 65–89. Epstein, S., and Seely, T. 2006. Derivations in Minimalism. Cambridge, MA.: Cambridge University Press. Escandell Vidal, M.V. 2006. Introducción a la Pragmática. Barcelona: Ariel. Escandell Vidal, M.V., and Leonetti, M. 2000. Categorías conceptuales y semántica procedimental. In Cien años de investigación semántica: de Michél Bréal a la actualidad, Tomo I. Madrid: Ediciones clásicas, 363–378. Escandel Vidal, M.V., Leonetti, M., and Ahern, A. (eds.). 2011. Procedural Meaning (Crispi Series). Bingley: Emerald. Evans, Vyvyan. 2014. The Language Myth: Why Language Is Not an Instinct. Cambrindge: Cambridge University Press. Evans, Nicholas, and Levinson, Stephen C. 2009. The myth of Language Universals: Language Diversity and Its Importance for Cognitive Science. Behavioral and Brain Sciences 32, 429–448. Evans, Vyvyan, and Green, Melanie. 2006. Cognitive Linguistics. An Introduction. Edinburgh: Edinburgh University Press. Fábregas, A. 2005. La definición de la categoría gramatical en una morfología orientada sintácticamente: nombres y adjetivos. PhD thesis, Universidad Autónoma de Madrid. Fábregas, A. 2010. An Argument for Phasal Spell-Out. Nordlyd 36, Special Issue on Nanosyntax, Peter Svenonius, Gillian Ramchand, Michal Starke, and Tarald Taraldsen (eds.), Tromsø: CASTL, 6–9. Falcaro, M., Pickles, A., Newbury, D. F., Addis, L., Banfield, E., Fisher, S. E., Monaco, A.P., Simkin, Z., Conti-Ramsden, G., and SLI Consortium. 2008. Genetic and Phenotypic Effects of Phonological Short-term Memory and Grammatical Morphology in Specific Language Impairment. Genes, Brain and Behavior 7, 393–402. Fanselow, Gisbert. 2003. Free Constituent Order: A Minimalist Interface Account. Folia Linguistica 37 (1–2), 191–232. Fedden, Sebastian, Audring, Jenny, and Corbett, Greville (eds.), 2018. Noncanonical Gender Systems Oxford: Oxford University Press.
Fischer, Silke. 2004. Toward an Optimal Theory of Reflexivization. Doctoral dissertation, Universität Tübingen. Fisher, Simon E. 2005. Dissection of Molecular Mechanisms Underlying Speech and Language Disorders. Applied Psycho-linguistics 26, 111–128. Fisher, Simon E., and DeFries, J.C. 2002. Developmental Dyslexia: Genetic Dissection of a Complex Cognitive Trait. Nature Review Neuroscience 3, 767–780. Fisher, Simon E., and Francks, C. 2006. Genes, Cognition and Dyslexia: Learning to Read the Genome. Trends in Cognitive Sciences 10, 250–257. Fisher, Simon E., and Marcus, G.F. 2006. The Eloquent Ape: Genes, Brains and the Evolution of Language. Nature Reviews Genetics 7, 9–20. Fisher, Simon E., and Scharff, C. 2009. FOXP2 as a Molecular Window into Speech and Language. Trends in Genetics 25 (4), 166–177. Fisher, Simon E.., Lai, C.S.L., and Monaco, A. 2003. Deciphering the Genetic Basis of Speech and Language Disorders. Annual Review of Neuroscience, 26, 57–80. Fisher, Simon E., Vargha-Khadem, F., Watkins, K.E., Monaco, A.P., & Pembrey, M.E. 1998. Localisation of a Gene Implicated in a Svere Speech and Language Disorder. Nature Genetics 18, 168–170. Fisher, Simon E., Marlow, A. J., Lamb, J., Maestrini, E., Williams, D.F., Richardson, A.J., et al. 1999. A Quantitative-Trait Locus on Chromosome 6p Influences Different Aspects of Developmental Dyslexia. American Journal of Human Genetics 64, 146–156. Fisher, Simon E.., Francks, C., Marlow, A.J., MacPhie, I.L., Newbury, D.F., Cardon, L.R., et al. 2002. Independent Genome-Wide Scans Identify a Chromosome 18 Quantitative-Trait Locus Influencing Dyslexia. Nature Genetics 30, 86–91. Fitch, W. Tecumseh. 2009. Prolegomena to a Future Science of Biolinguistics. Biolinguistics 3, 283–320. Fitzpatrick, Justin. 2002. On Minimalist Approaches to the Locality of Movement. Linguistic Inquiry 33, 443–463. Flax, J.F., Realpe-Bonilla, T., Hirsch, L.S., Brzustowicz, L.M., Bartlett, C.W., and Tallal, P. 2003. Specific Language Impairment in Families: Evidence for Co-occurrence with Reading Impairments. Journal of Speech, Language, and Hearing Research 46, 530–543. Fodor, Jerry A., Bever, T. G., and Garrett, M. F. 1974. The psychology of language. New York: McGraw-Hill.
Fodor, Jerry A. 1983. Modularity of Mind: An Essay on Faculty Psychology. Cambridge, MA: MIT Press. Fodor, Jerry A. 2008. LOT 2: The Language of Thought Revisited. Oxford: Claredon Press. Fodor, Jerry A., and Pylyshyn, Zenon W. 2015. Minds Without Meanings. An Essay on the Concept of Concepts. Cambridge, MA: MIT Press. Fodor, J. D. 1978. Parsing strategies and constraints on transformations. Linguistic Inquiry, 9, 427–473. Ford, M., Bresnan, J., and Kaplan, R. 1982. A competence based theory of syntactic closure. In J. Bresnan (Ed.), The Mental Representation of Grammatical Relations. Cambridge, MA: MIT Press. Forster, K. 1979. Levels of processing and the, structure of the language processor. In W. Cooper and E. C. T. Walker (Eds.), Sentence Processing. Hillsdale, NJ: Erlbaum. Frampton, John, and Gutmann, Sam. 1999. Cyclic Computation, a Computationally Efficient Minimalist Syntax. Syntax 2 (1), 1–27. Frampton, John, and Gutmann, Sam. 2000. Agreement is Feature Sharing. Ms. http://mathserver.neu.edu/~ling/pdf/agrisfs.pdf Frampton, John, and Gutmann, Sam. 2002. Crash-Proof Syntax. In Samuel D. Epstein, and Daniel Seely (eds.), Derivation and Explanation in the Minimalist Program. Oxford: Blackwell, 90–105. Franks, Steven. 1995. Parameters of Slavic Morphosyntax (Oxford Studies in Comparative Syntax). New York, Oxford: Oxford University Press. Franks, Steven. 2005. The Slavic Languages. In Guglielmo Cinque, and Richard S. Kayne (eds.), The Oxford Handbook of Comparative Syntax. Oxford: Oxford University, 373–419. Franks, Steven. 2017. Syntax and Spell-Out in Slavic. Bloomington: Slavica Publishers. Frazier, Lyn, and Clifton, Charles. 1989. Successive cyclicity in the grammar and the parser. Language and Cognitive Processes, 4(2), 93–126. Frege, G. 1892. Über Sinn und Bedeutung. In: Zeitschrift für Philosophie und philosophische Kritik. Neue Folge 100, 1892, 25-50. Reprinted in: Frege, Gottlob 1994. Funktion, Begriff, Bedeutung. Fünf logische Studien. Göttingen: Vandenhoeck & Rupprecht. Herausgegeben und eingeleitet von Günther Patzig, 40–65. Freidin, Robert. 1975. On the Analysis of Passives. Language 51, 384–405. DOI: 10.2307 /412862. Freidin, Robert. 1978. Cyclicity and the Theory of Grammar. Linguistic Inquiry 9, 519–549. http://www.jstor.org/stable/4178081.
Freidin, Robert. 1986. Fundamental Issues in the Theory of Binding. In Barbara Lust (ed.), Studies in the Acquisition of Anaphora. Dordrecht: Reidel, 151–188. Freidin, Robert (ed.). 1991. Principles and Parameters in Comparative Grammar, Cambridge, MA: MIT Press. Freidin, Robert. 1992. Foundations of Generative Syntax. Cambridge, MA, London, UK: MIT Press. Freidin, Robert. 1994a. Conceptual Shifts in the Science of Grammar: 1951– 1992. In Carlos P. Otero (ed.), Noam: Critical Assessments, vol. 1, tome 2. London: Routledge, 653–690. Freidin, Robert. 1994b. Generative Grammar: Principles and Parameters Framework. In R.E. Asher (ed.), The Encyclopedia of Language and Linguistics, Vol. 3. Oxford: Pergamon, 1370–1385. Freidin, Robert. 1997. Review Article on Chomsky 1995b. Language 73, 571– 582. DOI: 10.2307/415885. Freidin, Robert. 1999. Cyclicity and Minimalism. In Samuel David Epstein and Norbert Hornstein (eds.), Working Minimalism. Cambridge, MA: MIT Press, 95–126. Freidin, Robert. 2006. Generative Grammar: Principles and Parameters. In E. Keith Brown (ed.), The Encyclopedia of Language and Linguistics. Oxford: Elsevier, 94–102. Freidin, Robert. 2012. A Brief History of Generative Grammar. In Gillian Russell and Delia Graff Fara (eds.), The Routledge Companion to the Philosophy of Language. New York: Routledge, 895–916. Freidin, Robert. 2013. Chomsky’s Contribution to Linguistics: A Sketch. In Keith Allan (ed.), The Oxford Handbook of the History of Linguistics. Oxford: Oxford University Press, 439–467. Freidin, Robert. 2016. Chomsky’s Linguistics: The Goals of the Generative Enterprise. In Peter Graff and Coppe van Urk (eds.), Review Article of Chomsky’s linguistics. Cambridge, MA: MIT Working Papers in Linguistics, 2012, pp. viii, 692. ISBN 9780615567129. $98.95 (Hb). // Language. Vol. 92. No. 3 (2016), 671–723. Freidin, Robert. 2017. Cyclicity in Syntax. In Mark Aronoff (ed.), Linguistics: Oxford Research Encyclopedias. Oxford: Oxford University Press. http://linguistics.oxfordre.com/. Freidin, Robert, and Lasnik, Howard. 2011. Some Roots of Minimalism in Generative Grammar. In Cedric Boeckx (ed.), The Oxford Handbook of Linguistic Minimalism. Oxford: Oxford University Press, 1–26. Friederici, Angela D. 2004. Event-Related Brain Potential Studies in Language. Current Neurology and Neuroscience Reports 4, 466–470.
Friederici, Angela D. 2005. Neurophysiologic Markers of Early Language Acquisition: From Syllables to Sentences. TRENDS in Cognitive Science 9 (10), 481–488. Friederici, Angela D., and Weissenborn, Jürgen. 2007. Mapping Sentence form Onto Meaning: The Syntax Semantic Interface. Brain Research, 1146, 50–58. Friedmann, Naama. 2006. Speech Production in Broca’s Agrammatic Aphasia: Syntactic Tree Pruning. In Y. Grodzinsky, and K. Amunts (eds.), Broca’s Region. New York: Oxford University Press, 2006, 63–82. Friedmann, Naama, and Biran, M. 2003. When Is Gender Accessed? A Study of Paraphasias in Hebrew Anomia. Cortex 39 (3). 441–463. Friedmann, Naama, and Costa, João. 2011. Acquisition of SV and VS Order in Hebrew, European Portuguese, Palestinian Arabic, and Spanish. Language Acquisition 18 (1), 1–38. Friedmann, Naama, and Grodzinsky, Y. 1997. Tense and Agreement in Agrammatic Production: Pruning the Syntactic Tree. Brain and Language 56 (3). 397–425. Friedmann, Naama, and Gvion, A. 2002. FriGvi: Friedmann Gvion Battery for Assessment of Phonological Working Memory. Tel Aviv: Tel Aviv University. Friedmann, Naama, and Gvion A. 2003. TILTAN: Battery for the Diagnosis of Dyslexias. Tel Aviv: Tel Aviv University. Friedmann, Naama and Gvion, A. 2007. As Far as Individuals with Conduction Aphasia Understood These Sentences were Ungrammatical: Garden Path in Conduction Aphasia. Aphasiology 21 (6–8), 570–586. Friedmann Naama, and Novogrodsky R. 2011. Which Questions Are Most Difficult to Understand? The Comprehension of Wh Questions in Three Subtypes of SLI. Lingua 121 (3), 367–382. Friedmann Naama, and Shapiro, L.P. 2003. Agrammatic Comprehension of Simple Active Sentences with Moved Constituents: Hebrew OSV and OVS Structures. Journal of Speech Language and Hearing Research 46 (2), 288–297. Friedmann, Naama, Belletti, Adriana, and Rizzi, Luigi. 2009. Relativized Relatives. Types of Intervention in the Acquisition of A-Bar Dependencies. Lingua 119, 67–88. Friedrich, Svetlana. 2009. Definitheit im Russischen (Potsdam Linguistic Investigations, Vol. 4). Frankfurt am Main: Peter Lang. Frisch, Stefan. 2000. Verb-Argument-Struktur, Kasus und thematische Interpretation beim Sprachverstehen. Leipzig: Max-Planck-Institute of Cognitive Neuroscience (MPI Series in Cognitive Neuroscience; 12) = Diss. Univ. Potsdam.
Galaburda, A.M., LoTurco, J. Ramus, F., Fitch, R.H., and Rosen, G.D. 2006. From Genes to Behavior in Developmental Dyslexia. Nature Neuroscience 9 (10), 1213–1217. Gallego, Ángel J. 2007. Phase Theory and Parametric Variation. PhD thesis, Universitat autonoma de Barcelona. Gallego, Ángel J. 2009. Minimalism: A Look from Below. SEMINARI DE RECERQUES LINGÜISTIQUES Universitat de Girona, Girona 19 de juny de 2009. Ms. Pdf. Gallego, Ángel J. 2010. Phase Theory. Amsterdam: John Benjamins. Gazdar, G. 1979. Pragmatics: Implicature, Presupposition, and Logical Form. New York: Academic Press. Gehrke, Berit. 2011. Stative Passives and Event Kinds. In I. Reich, E. Horch, and D. Pauly (eds.), Proceedings of Sinn und Bedeutung, Vol. 15. Saarbrücken: Universaar – Saarland University Press, 241–257. Gehrke, Berit. 2012. Passive States. In V. Demonte, and L. McNally (eds.), Telicity, Change, and State: A Cross-Categorial View of Event Structure. Oxford: Oxford University Press, 185–211. Gehrke, Berit. 2013. Puzzled by Adjectival Passives. In R. Folli, C. Sevdali, and R. Truswell (eds.), Syntax and Its Limits. Oxford: Oxford University Press, 175–191. Goldberg, Adele. 2015. Strong evidence that the roots of binding constraints are pragmatic from Cole et al. http://dlc.hypotheses.org/865, (accessed 10 January 2016). Gopnik, M. 1990. Feature-blind Grammar and Dysphasia. Nature 344, 715. Gopnik, M., and Crago, M. 1991. Familial Aggregation of a Developmental Language Disorder. Cognition 39, 1–50. Gould, S.J. 2002. The Structure of Evolutionary Theory. Cambridge MA: Belknap Press of Harvard University Press. Greene, B. 1999. The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory. New York: Vintage Series, Random House Inc. Grewendorf, Günther. 2002. Minimalistische Syntax (UTB, Vol. 2313). Tübingen, Basel: Franke Verlag. Grice, H. Paul. 1975. Logic and Conversation. In Peter Cole, and Jerry L. Morgan (eds.), Syntax and Semantics, Vol. 3. New York, 41–58. Grigorenko, Elena L. 2009. Speaking Genes or Genes for Speaking? Deciphering the Genetics of Speech and Language. Journal of Child Psychology and Psychiatry 50 (1–2), 116–125. Grimshaw, Jane. 1986a. A Morpho-syntactic Explanation for the Mirror Principle. Linguistic Inquiry 17, 745–749.
Grimshaw, Jane. 1986b. Nouns, Arguments, and Adjuncts. Ms. Waltham, MA: Brandeis University. Grimshaw, Jane 1988. Adjuncts and Argument Structure. Lexicon Project Working Papers #21 and Center for Cognitive Science Working Papers #36. Massachusetts Institute of Technology. Grimshaw, Jane. 1990. Argument Structure (Linguistic Inquiry Monographs, Vol. 18). Cambridge, MA: MIT Press. Grimshaw, Jane. 1991. Extended Projection. Ms, Waltham, MA: Brandeis University. Grimshaw, Jane, and Mester, A. 1988. Light Verbs and θ-Marking. Linguistic Inquiry 19, 205–232. Grohmann, Kleanthes. 2003. Prolific Domains. On the Anti-Locality of Movement Dependencies. Amsterdam: John Benjamins. Grohmann, Kleanthes. 2004. Prolific Domains in the Computational System. In Actas de JEL 2004: Domains. Nantes: AAI, 211–216. Grohmann, Kleanthes, Drury, J., and Castillo, J.C. 2000. No More EPP. In R. Billerey, and B.D. Lillehaugen (eds.), WCCFL 19: Proceedings of the 19th West Coast Conference on Formal Linguistics. Somerville, MA: Cascadilla Press, 153–166. Habermas, Jürgen. 1971. Vorbereitende Bemerkungen zu einer Theorie der kommunikativen Kompetenz. In: Jürgen Habermas, and N. Luhmann (eds.), Theorie der Gesellschaft oder Sozialtechnologie –Was leistet die Systemforschung? Frankfurt am Main.: Suhrkamp, 101–141. Habermas, Jürgen. 1981. Theorie des kommunikativen Handelns. Bd. 1: Handlungsrationalität und gesellschaftliche Rationalisierung. Bd. 2: Zur Kritik der funktionalistischen Vernunft (Suhrkamp Taschenbuch Wissenschaft, 1175). Frankfurt am Main: Suhrkamp. Habermas, Jürgen. 2002. Prolegomenon to a Theory of Argument Structure. MIT Press. Haegeman, Liliane, and Gueron, J. 1999. English Grammar: A Generative Perspective. Oxford: Blackwell. Haegeman, Liliane, and Lohndal, T. 2010. Negative Concord and (Multiple) Agree: A Case Study of West Flemish. Linguistic Inquiry 41 (2), 181–211. Haider, Hubert. 1994. (Un-)heimliche Subjekte. Linguistische Berichte 153 (1994), 372–385. Haider, Hubert. 2001. Parametrisierung in der generativen Grammatik. In Haspelmath, M. et al. (eds.), Language Typology and Language Universals. Sprachtypologie und sprachliche Universalien (An International Handbook/Ein internationals Handbuch. Vol. 1, 1. Halbband, HSK, 20). Berlin, New York: Mouton de Gruyter. 283–293.
Haider, Hubert. 2006. Mittelfeld Phenomena (Scrambling in Germanic). In M. Everaert, and H.C. van Riemsdijk (eds.), The Blackwell Companion to Syntax, Vol. 3. Malden, MA: Blackwell, 204–274. Hale, Kenneth, andKeyser, Samuel J. 1986a. Some Transitivity Alternations in English (Lexicon Project Working Papers, Vol. 7). Cambridge, MA: MIT Press. Hale, Kenneth, and Keyser, Samuel J. 1986b. A View from the Middle (Lexicon Project Working Papers, Vol. 10). Cambridge, MA: MIT Press. Hale, Kenneth, and Keyser, Samuel J. 1988. Explaining and Constraining the English Middle. In Carol Tenny (ed.), (1988), 41–57. Hale, K., and Keyser, S.J. 1993. On Argument Structure and the Lexical Expression of Syntactic Relations. In Kenneth Hale, and Samuel J. Keyser (eds.), The View from Building (Essays in Honor of Sylvain Bromberger, Vol. 20). Cambridge, MA: MIT Press, 53–110. Hale, Kenneth, and Keyser, Samuel J (eds.). 1993. The View from Building (Essays in Linguistics in Honor of Sylvain Bromberger, Vol. 20). Cambridge, MA: MIT Press. Hale, Kenneth, and Keyser, Samuel J. 1997a. On the Complex Nature of Simple Predicators. In A. Alsina, J. Bresnan, and P. Sells (eds.), Complex Predicates. Stanford, CA: CSLI Publications, 29–65. Hale, Kenneth, and Keyser, Samuel J. 1997b. The Basic Elements of Argument Structure. Ms. Cambridge, MA: MIT Press. Hale, Kenneth and Keyser, Samuel J. 2002. Prolegomenona to a Theory of Argument Structure (Linguistic Inquiry Monographs, Vol. 39). Cambridge, MA: MIT Press. Halle, Morris. 1990. An Approach to Morphology. Proceedings of the Northeast Linguistic Society 20, 150–184. Halle, Morris. 1995. The Russian Declension: An Illustration of the Theory of Distributed Morphology. In Jennifer Cole, and Charles Kisseberth (eds.), Perspectives in Phonology. Palo Alto: CSLI Publications, 321–353. Halle, Morris and Marantz, Alec. 1993. Distributed Morphology and the Pieces of Inflection. In K. Hale, and Samuel J. Keyser (eds.), 111–176. Hanten, G., and Martin, R. 2001. A Developmental Short-term Memory Deficit: A Case Study. Brain and Cognition 45, 164–188. Harel, S., Greenstein, Y., Kramer, U., Yifat, R., Samuel, E., Nevo, Y., Leitner, Y., Kutai, M., Fattal, A., and Shinnar, S. 1996. Clinical Characteristics of Children Referred to a Child Development Center for Evaluation of Speech, Language, and Communication Disorders. Pediatric Neurology 15, 305–311.
Harley, H., and Noyer, R. 1999, April. State-of-the-Article: Distributed Morphology. GLOT 4 (4), 6–9. Harley, Heidi, and Ritter, B. 2002. Structuring the Bundle: A Universal Morphosyntactic Feature Geometry. In H. J. Simon, and H. Wiese (eds.), Pronouns-Grammar and Representation. Amsterdam, Philadelphia: John Benjamins, 23–39. Hartmann, Jutta M. 2005. Wh-Movement and the Small Clause Analyses of the English there Construction. In M. Salzmann, and L. Vicente (eds.), Leiden Papers in Linguistics 2.3, 93–106. Haspelmath, Martin. 2007. Pre-established Categories Don’t Exist: Consequences for Language Description and Typology. Linguistic Typology 11, 119–132. Hauser, Marc, Chomsky, Noam, and Fitch, Tecumseh. 2002, November. The Faculty of Language: What is It, Who Has It and How did It Evolve? Science 298, 1569–1669. Heck, F., and Müller, Gereon. 2007. Extremely Local Optimization. In E. Brainbridge, and B. Agbayani (eds.), Proceedings of the 26th WECOL. Fresno: California State University, 170–183. Heim, Irene, and Kratzer, Angelika. 1998/2012. Semantics in Generative Grammar (Blackwell Textbooks in Linguistics, Vol. 13). Oxford: Blackwell. Heine, Bernd & Kuteva, Tania. 2007. The Genesis of Grammar. Oxford: Oxford University Press. Heisenberg, W. 1999. Physics and Philosophy, New York: Prometheus Books. Hendrick, R. 2003. Minimalist Syntax. Oxford: Blackwell. Hentschel, Gerd. 1993a. Zur Kasusvariation des prädikativen Substantivs in Kopulasätzen: syntaktischer Wandel im Polnischen des 16. und 17. Jh. In Gerd Hentschel, and Roman Laskowski, (eds.), Studies in Polish Morphology and Syntax (Synchronic and Diachronic Problems). München: Sagner, 253–292. Hentschel, Gerd. 1993b. Zur Verbreitung des prädikativen Instrumentals im Polnischen des 16. Und 17. Jahrhunderts. Polonica XVI, 181–191. Hentschel, Gerd 2008. On the Classification of (non-resultative) Predicative Adjuncts. In Christoph Schroeder, Gerd Hentschel, and Winfried Boeder (eds.), Secondary Predicates in Eastern European Languages and Beyond. Oldenburg: BIS-Verlag, 97–123. Hentschel, Gerd 2009. Morphosyntaktische Markierung sekundärer Prädikate. In S. Kempgen, P. Kosta, T. Berger and K. Gutschmidt (eds.), Slavic
Languages. Slavische Sprachen. An International Handbook of their Structure, their History and their Investigation (Ein internationales Handbuch ihrer Struktur, ihrer Geschichte und ihrer Erforschung, Vol. 1). Berlin, New York: Mouton de Gruyter, 369–391. Hinzen, W. 2009. Hierarchy, Merge and Truth. In M. Piattelli-Palmarini et al. (eds.), Of Minds and Language. Oxford: Oxford University Press, 123–141. Hirschová, Milada. 2009. Speech Acts in Slavic Languages. In Sebastian Kempgen, Peter Kosta, Tilman Berger, and Karl Gutschmidt (eds.), Die slavischen Sprachen. The Slavic Languages. Ein internationales Handbuch zu ihrer Struktur, ihrer Geschichte und ihrer Erforschung. An International Handbook of their Structure, their History and their Investigation, Volume 1 (HSK 32.1). Berlin, New York: Walter de Gruyter, 1055–1090. Höhle, Barbara, and Weissenborn, Jürgen. 2000. The Origins of Syntactic Knowledge: Recognition of Determiners in One Year Old German Children. In Proceedings of the 24th Annual Boston Conference on Language Development. Somerville, MA: Cascadilla Press. Hoffmann, L. 1996. ‚Gegenstandskonstitution' und ‚Gewichtung': eine kontrastivgrammatische Perspektive. Jahrbuch Deutsch als Fremdsprache (1995), 104–133. Holmberg, A. 2003. Topic Drop or VP Focus. In Cecilia Falk, Lars-Olof Delsing, Gunlög Josefsson, and Halldór Á. Sigurðsson (eds.), Grammar in Focus. Vol. II: Festschrift for Christer Platzack 18 November 2003. Department of Scandinavian Languages, University of Lund, 159–166. Holmberg, A. 2005. Is There a Little Pro? Evidence from Finnish. Linguistic Inquiry 36 (4), 533–564. Horn, Denise, et al. 2010. Identification of FOXP1 Deletions in Three Unrelated Patients with Mental Retardation and Significant Speech and Language Deficits. Human Mutation 31, 1851–1860. Hornstein, Norbert. 1995. Logical Form. From GB to Minimalism. Oxford: Blackwell. Hornstein, Norbert. 2001. Move! A Minimalist Theory of Construal. Oxford: Blackwell. Hornstein, Norbert. 2009. A Theory of Syntax. Minimal Operations and Universal Grammar. Cambridge: Cambridge University Press. Hornstein, Norbert, and Idsardi, William. 2014. A Program for the Minimalist Program. In Peter Kosta, Steven L. Franks, Teodora Radeva-Bork, and Lilia Schürcks (eds.), Minimalism and Beyond: Radicalizing the Interfaces (Language Faculty and Beyond, Vol. 11). Amsterdam, Philadelphia: John Benjamins, 9–34. Hornstein, N., and Pietroski, P. 2009. Basic Operations: Minimal Syntax-Semantics. Catalan Journal of Linguistics 8, 113–139.
Hornstein, Norbert, Nunes, Jairo, and Grohmann, Kleanthes K. 2005. Understanding Minimalism (Cambridge Textbooks in Linguistics). Cambridge: Cambridge University Press. Huang, C.-T.J. 1982. Logical Relations in Chinese and the Theory of Grammar. PhD thesis, MIT. Huang, C.-T.J. 1989. Pro-Drop in Chinese: A Generalized Control Theory. In O. Jaeggli, and K. Safir (eds.), (1989), 185–214. Huybregts, M.A.C. 1976. Overlapping Dependencies in Dutch. Utrecht Working Papers in Linguistics 1, 24–65. Huybregts, Riny. 2017. Review Article: Phonemic Clicks and the Mapping Asymmetry: How Language Emerged and Speech Developed. Neuroscience & Biobehavioral Reviews 81 (Part B), 279–294. Hyams, N. 1986. Language Acquisition and the Theory of Parameters. Dordrecht: D. Reidel. Hyams, N., and Wexler, K. 1993. On the Grammatical Basis of Null Subjects in Child Language. Linguistic Inquiry 24 (3), 421–459. Ingram, T.T.S., and Reid, J.F. 1956. Developmental Aphasia Observed in a Department of Child Psychiatry. Archives of Disease in Childhood 31, 161–172. Israeli, Alina. 1997. Semantics and Pragmatics of the "reflexive" verbs in Russian. PhD dissertation, Yale University. München: Sagner. Jackendoff, Ray. 1983. Semantics and Cognition. Cambridge, MA: MIT Press. Jackendoff, Ray. 1987. The Status of Thematic Relations in Linguistic Theory. Linguistic Inquiry 18, 369–412. Jackendoff, Ray. 1990. Semantic Structures. Cambridge, MA: MIT Press. Jackendoff, Ray. 1993. Patterns in the Mind: Language and Human Nature. Hemel Hempstead: Harvester-Wheatsheaf. Jackendoff, Ray. 1997. The Architecture of the Language Faculty. Cambridge, MA: MIT Press. Jackendoff, Ray. 2002. Foundations of Language (Brain, Meaning, Grammar, Evolution). Oxford, New York: Oxford University Press. Jaeggli, Osvaldo, and Safir, Kenneth J. (eds.). 1989a. The Null Subject Parameter. Dordrecht, Boston: Kluwer. Jaeggli, Osvaldo, and Safir, Kenneth J. 1989b. The Null Subject Parameter and Parametric Theory. In Osvaldo Jaeggli, and Kenneth J. Safir (eds.), (1989a), 1–44. Jakobson, Roman. 1936. Beitrag zur allgemeinen Kasuslehre: Gesamtbedeutungen der russischen Kasus. Travaux du Cercle Linguistique de Prague 6, 240–288.
Jakobson, Roman. 1941/1968. Child Language, Aphasia and Phonological Universals. The Hague: Mouton de Gruyter. Johns, C. 2012. Null Subjects and the EPP: Towards a Unified Theory of Pro Drop. lingbuzz/000373. Johnson-Laird, P. N. 1977. Psycholinguistics without linguistics. In N. S. Sutherland (ed.), Tutorial essays in psychology. Vol. 1. Hillsdale, NJ: Erlbaum. Johnson, D.E., and Lappin, S. 1997. A Critique of the Minimalist Program. Linguistics and Philosophy 20, 273–333. Julien, Marit. 2002. Syntactic Heads and Word Formation. Oxford: Oxford University Press. De Smet, Hyo Jung, Baillieux, Hanne, De Deyn, Peter P., Mariën, Peter, and Paquier, Philippe. 2007. The Cerebellum and Language: The Story So Far. Folia Phoniatrica et Logopaedica 59, 165–170. Junghanns, Uwe, and Zybatow, Gerhild. 2009. Grammatik und Informationsstruktur. In S. Kempgen, Peter Kosta, T. Berger, and K. Gutschmidt (eds.), Slavic Languages. Slavische Sprachen. An International Handbook of their Structure, their History and their Investigation (Ein internationales Handbuch ihrer Struktur, ihrer Geschichte und ihrer Erforschung, Vol. 1). Berlin, New York: Mouton de Gruyter, 282–316. Kager, R. 1999. Optimality Theory. Cambridge: Cambridge University Press. Kail, Robert. 1994, April. A Method for Studying the Generalized Slowing Hypothesis in Children With Specific Language Impairment. Journal of Speech and Hearing Research 37, 418–421. Karlík, Petr, and Peter Kosta. 2020. Die Nominalisierung von Nebensätzen im Tschechischen. Zeitschrift für Slawistik 2020, Heft 4. Katz, J.J., and Fodor, J.A. 1963. The Structure of a Semantic Theory. Language 39, 170–210. Kauffman, Stuart A. 1993. The Origins of Order: Self-Organization and Selection in Evolution. London: Oxford University Press. Kayne, Richard. 1975. French Syntax: The Transformational Cycle. Cambridge, MA: MIT Press. Kayne, Richard. 1981. Unambiguous Paths. In Robert May, and Jan Koster (eds.), Levels of Syntactic Representation. Dordrecht: Reidel, 143–183. Kayne, Richard. 1994. The Antisymmetry of Syntax. Cambridge, MA: MIT Press. Kemmer, S., and Verhagen, A. 1994. The Grammar of Causatives and the Conceptual Structure of Events. Cognitive Linguistics 5 (2), 115–156.
Kimball, J. 1973. Seven principles of surface structure parsing in natural language. Cognition 2, 15–47. King, Katherine E. 2015. Mixed Gender Agreement in Russian DPs. MA thesis, University of Washington, Seattle. Kitahara, Hisatsugu. 1997. Elementary Operations and Optimal Derivations. Cambridge, MA: MIT Press (Linguistic Inquiry Monograph 31). Klenin, Emily. 2009. Animacy. Personhood / Belebtheit. Personalität. In S. Kempgen, P. Kosta, T. Berger, and K. Gutschmidt (eds.), Slavic Languages. Slavische Sprachen. An International Handbook of their Structure, their History and their Investigation (Ein internationales Handbuch ihrer Struktur, ihrer Geschichte und ihrer Erforschung, Volume 1). Berlin, New York: Mouton de Gruyter, 152–161. Koeneman, Olaf, and Zeijlstra, Hedde. 2014. The Rich Agreement Hypothesis Rehabilitated. Linguistic Inquiry 45, 571–615. Kosta, Peter. 1990. Zur formalen Lizensierung und inhaltlichen Determination leerer Subjekte im Russischen. In W. Breu (ed.), Slavistische Linguistik 1989: Referate des XV. Konstanzer Slavistischen Arbeitstreffens Bayreuth 18.-22.9.1989. München, 117–165. Kosta, Peter. 1992. Leere Kategorien in den nordslavischen Sprachen. Zur Analyse leerer Subjekte und Objekte in der Rektions-Bindungs-Theorie. Frankfurt am Main (online published Habilitationsschrift Frankfurt am Main. http://www.uni-potsdam.de/u/slavistik/wsw/habil/habil.htm). Kosta, Peter. 1997. Empty Categories, Null-Subjects and Null-Objects and How to Treat them in the Minimalist Program. Linguistics in Potsdam 2 (1995/1996), 7–38. Kosta, Peter. 1998. Über Argumentstruktur, Fokussierung und modale Satzadverbien im Tschechischen und Russischen. Zeitschrift für Slawistik 43 (2), 140–154 (Beiträge zum XII. Internationalen Slavistenkongreß in Kraków 1998). Kosta, Peter. 2002a. Minimalism and Free Constituent Order (in Russian as Compared to German). In Peter Kosta, and Frasek, J. (eds.), Current Approaches to Formal Slavic Linguistics (Linguistik International, Band 9). Frankfurt am Main, Berlin, Bern, Bruxelles, New York, Oxford, Wien: Peter Lang, 253–272. Kosta, Peter. 2003a. Syntaktische und semantische Besonderheiten von Adverb und Negation im Slavischen. Zeitschrift für Slawistik 48 (3), 377–404. Kosta, Peter. 2003b. Negation and Adverbs in Czech. In Peter Kosta, J. Błaszczak, J. Frasek, L. Geist, and M. Żygis (eds.), Investigations into Formal
Slavic Linguistics: Proceedings of the Fourth European Conference on Formal Description of Slavic Languages – FDSL 4, Potsdam, 28–30 November 2001. (Linguistik International 10.1-2. Frankfurt am Main, Berlin, Bern, Bruxelles, New York, Oxford, Wien: Peter Lang, 601–616. Kosta, Peter. 2003c. The New Animacy Category in Slavic Languages: Open Questions of Syntax, Semantics and Morphology. GERMANOSLAVICA. Zeitschrift für germanoslawische Studien. Jahrgang IX (XIV), Prag 2003, Nr. 2, S. 179–198. Kosta, Peter. 2004. Neakuzativita (ergativita) vs. neergativita v češtině, polštině a jiných slovanských jazycích na rozhraní morfologie a syntaxe. In Z. Hladka, and P. Karlík (eds.), Čestina – univerzália a specifika 5, Praha: Nakladatelství Lidové noviny 2004. (mit Jens Frasek), S. 172–194. Kosta, Peter. 2005. Die Faszination der Stille oder wie erwirbt ein Kind leere Kategorien? Unter der Einbeziehung primärsprachlicher Daten aus dem Korpus „CHILDES“. Zeitschrift für Slawistik 50 (4), 417–443. Kosta, Peter. 2006. On Free Word Order Phenomena in Czech as Compared to German: Is Clause Internal Scrambling A-Movement, A-Bar-Movement or Is It Base Generated? Zeitschrift für Slawistik 51 (3), 306–321. Kosta, Peter. 2008. Quantification of NPs/DPs in Slavic. In Peter Kosta, and Daniel Weiss (eds.), Slavistische Linguistik 2006/2007. Beiträge zum XXXII./ XXXIII. Konstanzer Slavistischen Arbeitstreffen in Boldern und Potsdam (03.09.-06.09.2007). München: Sagner, 247–266. Kosta, Peter. 2009a. Targets, Theory and Methods of Slavic Generative Syntax: Minimalism, Negation and Clitics. In S. Kempgen, P. Kosta, T. Berger, and K. Gutschmidt (eds.), Slavic Languages. Slavische Sprachen. An International Handbook of their Structure, their History and their Investigation (Ein internationales Handbuch ihrer Struktur, ihrer Geschichte und ihrer Erforschung, Volume 1). Berlin, New York: Mouton de Gruyter, 282–316. Kosta, Peter. 2009b. Word Order in Slavic. In Sebastian Kempgen, Peter Kosta, Tilman Berger, and Karl Gutschmidt (eds.), Slavic Languages. Slavische Sprachen. An International Handbook of their Structure, their History and their Investigation (Ein internationales Handbuch ihrer Struktur, ihrer Geschichte und ihrer Erforschung, Vol. 1). Berlin, New York: Mouton de Gruyter, 654–684. Kosta, Peter. 2009c. Prototypensemantik und Stereotypen / Prototype Semantics and the Notion of Stereotypes In Sebastian Kempgen, Peter Kosta, Tilman Berger, and Karl Gutschmidt (eds.), Slavic Languages. Slavische Sprachen. An International Handbook of their Structure, their History and their Investigation (Ein internationales Handbuch ihrer Struktur, ihrer
Geschichte und ihrer Erforschung, Vol. 1). Berlin, New York: Mouton de Gruyter, 828–847. Kosta, Peter. 2010. Causatives and Anti-Causatives, Unaccusatives and Unergatives: Or How Big is the Contribution of the Lexicon to Syntax. In Aleš Bican et al. (eds.), Karlík a továrna na lingvistiku Prof. Petru Karlíkovi k šedesátým narozeninám, Brno, 230–273. Kosta, Peter. 2011a. Causatives and Anti-Causatives, Unaccusatives and Unergatives: Or how big is the contribution of the lexicon to syntax? In Peter Kosta, and Lilia Schürcks (eds.), Formalization of Grammar in Slavic Languages. Contributions of the Eighth International Conference on Formal Description of Slavic Languages-FDSL VIII 2009. University of Potsdam, December 2–5, 2009. Frankfurt am Main: Peter Lang (Potsdam Linguistic Investigations. Vol. /Bd. 6), 235–295. Kosta, Peter. 2011b. Konversationelle Implikaturen und indirekte Sprechakte auf dem Prüfstein. In Michail I. Kotin, and Elizaveta G. Kotorova (eds.), Die Sprache in Aktion. Pragmatik. Sprechakte. Diskurs. Language in Action. Pragmatics. Speech Acts. Discourse, Heidelberg: Universitätsverlag WINTER, 55–69. Kosta, Peter 2013. How Can I Lie If I Am Telling the Truth? – The Unbearable Lightness of the Being of Strong and Weak Modals, Modal Adverbs and Modal Particles in Discourse between Epistemic Modality and Evidentiality. In Nadine Thielemann, and Peter Kosta (eds.), Approaches to Slavic Interaction. Amsterdam, Philadelphia: John Benjamins, 167–184. Kosta, Peter. 2014. Adjectival and Argumental Small Clauses vs. Free Adverbial Adjuncts – A Phase-based Approach within the Radical Minimalism, May 2014. https://ling.auf.net/lingbuzz/002088. Kosta, Peter. 2015a. On the Causative/Anti-causative Alternation as Principle of Affix Ordering in the Light of the Mirror Principle, the Lexical Integrity Principle and the Distributive Morphology. Zeitschrift für Slawistik 60 (4), 570–612. Kosta, Peter. 2015b. Třetí faktor „relevance“ mezi sémantikou, pragmatikou a syntax. In Božena Bednaříková, and Monika Pitnerová (eds.), Čítanka textů z kognitivní lingvistiky II. Olomouc: Univerzita Palackého v Olomouci, 57–76. Kosta, Peter. 2019. Adjectival and Argumental Small Clauses Vs. Free Adverbial Adjuncts – A Phase-based Approach within the Radical Minimalism. Bulletin of Moscow Region State University. Series: Linguistics 5, 40–55. DOI: 10.18384/2310-712X-2019-5-40-55. Kosta, Peter. 2020a. Extraction and Clitic Climbing out of Subject-/ObjectControl Clauses and Causative Clauses in Romance and Czech. In Teodora
Radeva-Bork, and Peter Kosta (eds.), Current Developments in Slavic Linguistics. Twenty Years After (based on selected papers from FDSL 11), 185–202. Kosta, Peter. 2020b. Multiple modification in Slavic as compared to Romance and Germanic Languages (the proof by degree phrases and comparatives). In: Digital Online-Festschrift on occasion of the 70th birthday of Petr Karlík. Edited by Pavel Caha and Marketa Zikova. Special Issue of «Rivista di Grammatica Generativa / Research in Generative Grammar» (eds. by Pavel Caha and Michal Starke). Kosta, Peter. 2020c. Revisiting the Gender-Animacy-Sub-Gender and Case assignment notions: When do Slavic languages agree and when do they disagree? SLS 15. The 15th Meeting of the Slavic Linguistics Society, Indiana University Bloomington, Indiana, USA, September 4–6, 2020 (virtual Zoom meeting. Ms.). Kosta, Peter, and Krivochen, Diego. 2012. Some Thoughts on Language Diversity, UG and the Importance of Language Typology: Scrambling and Non-Monotonic Merge of Adjuncts and Specifiers in Czech and German. Zeitschrift für Slawistik 57 (4), 377–407. Kosta, Peter, and Krivochen, Diego. 2014a. Flavors of Movement: Revisiting the A/A' Distinction. In Peter Kosta, Steven L. Franks, Teodora Radeva-Bork, and Lilia Schürcks (eds.), Minimalism and Beyond: Radicalizing the Interfaces (Language Faculty and Beyond, Vol. 11). Amsterdam, Philadelphia, 236–266. Kosta, Peter, and Krivochen, Diego. 2014b. Inner Islands and Negation: The Case of Which-Clauses and As-Clauses Revisited. In J. Witkoś, and S. Jaworski (eds.), New Insights into Slavic Linguistics. Frankfurt am Main: Peter Lang, 205–220. Kosta, Peter, and Krivochen, Diego. 2014c. Interfaces, Impaired Merge, and the Theory of the Frustrated Mind. Czech and Slovak Linguistic Review 1, 32–75. Kosta, Peter, Diego Gabriel Krivochen, and Hartmut Peters. 2011. Delayed Merge in L1 Acquisition as a Problem of Biolinguistics and Molecular Genetics. Kosta, Peter, and Schürcks, Lilia. 2007. The Focus Feature Revisited. In Peter Kosta and Lilia Schürcks (eds.), Linguistic Investigations into Formal Description of Slavic Languages. Frankfurt am Main: Peter Lang, 245–266. Kosta, Peter, and Schürcks, Lilia. 2009. Word Order in Slavic. In Sebastian Kempgen, Peter Kosta, Tilman Berger, and Karl Gutschmidt (eds.), Die slavischen Sprachen. The Slavic Languages. Ein internationales Handbuch zu ihrer Struktur, ihrer Geschichte und ihrer Erforschung. An International
Handbook of their Structure, their History and their Investigation Band 1/Vol. 1, Berlin, New York: Mouton de Gruyter, 654–684. Kosta, Peter, and Zimmerling, Anton. 2014. Slavic Clitics Systems in a Typological Perspective. In Lilia Schürcks, Anastasia Giannakidou, and Urtzi Etxeberria (eds.), Nominal Constructions: Slavic and Beyond (Studies in Generative Grammar SGG, Vol. 116). Berlin, New York: de Gruyter, 439–486. Kosta, Peter, and Zimmerling, Anton. 2020, in print. Case Assignment. In Jan Fellerer, and Neil Bermel (eds.), The Oxford Guide to the Slavonic Languages Oxford University Press. Kosta, Peter, Franks, Steven L., Radeva-Bork, Teodora, and Schürcks, Lilia (eds.). 2014. Minimalism and Beyond: Radicalizing the Interfaces (Language Faculty and Beyond, Vol. 11). Amsterdam, Philadelphia: John Benjamins. Kramer, Ruth. 2009. Definite Markers, Phi-Features, and Agreement: A Morphosyntactic Investigation of the Amharic DP. PhD dissertation, University of California, Santa Cruz. Kramer, Ruth. 2014. Gender in Amharic: A Morphosyntactic Approach to Natural and Grammatical Gender. Language Sciences 43, 102–115. Kratzer, Angelika. 1996. Severing the External Argument from Its Verb. In J. Rooryck, and L. Zaring (eds.), Phrase Structure and the Lexicon, Dordrecht: Kluwer, 109–137. Krifka, Manfred. 1999. Manner in Dative Alternation. West Coast Conference on Formal Linguistics. Tucson: Cascadilla Press, 260–271. Krivochen, Diego. 2010a. Referencialidad y Definitud en D. Un análisis desde la convergencia entre el Programa Minimalista y la Teoría de la Relevancia. UNLP/Universidad Nacional de Rosario. Krivochen, Diego. 2010b. Algunas notas sobre fases. UNLP. Presented at I Jornadas de Jóvenes Lingüistas, UBA, March, 2011. Krivochen, Diego. 2010c. Theta Theory Revisited. UNL. Under Review. Krivochen, Diego. 2010d. Prolegómenos a una teoría de la mente. UNLP. http:// www.academia.edu/406432/_2010d_Prolegomenos_a_una_teoria_de_la_ mente_Prolegomena_to_a_Theory_of_Mind. Krivochen, Diego. 2011a. A New Perspective on Raising Verbs: Comparative evidence from English and Spanish. lingBuzz/001172. Krivochen, Diego. 2011b. An Introduction to Radical Minimalism I: on Merge and Agree (and related issues). IBERIA 3 (2), 20–62. Krivochen, Diego. 2011c. An Introduction to Radical Minimalism II: Internal Merge Beyond Explanatory Adequacy. lingBuzz/001256. Krivochen, Diego. 2011d. Unified Syntax. Ms. UNLP lingBuzz/001298.
Krivochen, Diego. 2011e. The Quantum Human Computer Hypothesis and Radical Minimalism: A Brief Introduction to Quantum Linguistics. International Journal of Language Studies 5 (4), 87–108. Krivochen, Diego. 2012a. Towards a Geometrical Syntax. Ms. Universität Potsdam. http://ling.auf.net/lingbuzz/001444. Krivochen, Diego. 2012b. Prospects for a Radically Minimalist OT. Ms. Universität Potsdam. http://ling.auf.net/lingbuzz/001465. Krivochen, Diego. 2012c. The Syntax and Semantics of the Nominal Construction (Potsdam Linguistic Investigations, Vol. 8). Frankfurt am Main: Peter Lang. Krivochen, Diego. 2013. The Quantum Human Computer (QHC) Hypothesis and its Formal Procedures: Their Pragmatic Relevance. http://www.academia.edu/2334645/The_Quantum_Human_Computer_QHC_Hypothesis_and_its_Formal_Procedures_Their_Pragmatic_Relevance. Krivochen, Diego Gabriel. 2014. Language, Chaos and Entropy: A Physical Take on Biolinguistics. Iberia: IJTL 6, 27–74. https://ojs.publius.us.es/ojs/index.php/iberia/article/view/3006/2618. Krivochen, Diego Gabriel. 2015. On Phrase Structure Building and Labeling Algorithms: Towards a Non-uniform Theory of Syntactic Structures. The Linguistic Review 32 (3), 515–572. Krivochen, Diego Gabriel. 2016. Divide and…Conquer? On the Limits of Algorithmic Approaches to Syntactic-Semantic Structure. Czech and Slovak Linguistic Review 1, 15–38. Krivochen, Diego Gabriel. 2018. Aspects of Emergent Cyclicity in Language and Computation. Arguments for Mixed Computation. PhD thesis, School of Psychology and Clinical Language Sciences, University of Reading. Krivochen, Diego Gabriel. 2019. On Trans-Derivational Operations: Generative Semantics and Tree Adjoining Grammar. Language Sciences 74, 47–76. Krivochen, Diego, and Kosta, Peter. 2013. Eliminating Empty Categories: A Radically Minimalist View on their Ontology and Justification (Potsdam Linguistic Investigations, Vol. 11). Frankfurt am Main: Peter Lang. Kučerová, Ivona. 2017. MINIMALISTICKÝ PROGRAM. In: Petr Karlík, Marek Nekula, and Jana Pleskalová (eds.), CzechEncy – Nový encyklopedický slovník češtiny. URL: https://www.czechency.org/slovnik/MINIMALISTICKÝPROGRAM (last accessed: 1 March 2020). Lai, C.S.L., Fisher, S.E., Hurst, J.A., Vargha-Khadem, F., and Monaco, A.P. 2001. A Forkhead-domain Gene is Mutated in a Severe Speech and Language Disorder. Nature 413, 519–523.
Laka, Itziar. 2009. What Is There in Universal Grammar? On Innate and Specific Aspects of Language. In M. Piattelli-Palmarini, J. Uriagereka, and P. Salaburu (eds.), Of Minds and Language: A Dialogue with Noam Chomsky in the Basque Country. Oxford: Oxford University Press, 324–343. Lakatos, I. 1978. The Methodology of Scientific Research Programmes (Philosophical Papers, Vol. 1). Cambridge: Cambridge University Press. Lakoff, George. 1991. Cognitive Versus Generative Linguistics: How Commitments Influence Results. Language and Communication 11 (1–2), 53–62. Landau, Idan. 2015, November. DP-Internal Semantic Agreement: A Configurational Analysis. Natural Language & Linguistic Theory 34 (3), 1–46. Landau, B., and Gleitman, L.R. 1985. Language and Experience. Cambridge, MA: Harvard University Press. Langacker, Ronald W. 1987. Nouns and Verbs. Language 63, 53–94. Langacker, Ronald W. 1990. Concept, Image, and Symbol. The Cognitive Basis of Grammar. Berlin, New York: Mouton de Gruyter. Langacker, Ronald W. 2007. Cognitive Grammar. In Dirk Geeraerts, and Hubert Cuyckens (eds.), The Oxford Handbook of Cognitive Linguistics. Oxford: Oxford University Press, 421–462. Langacker, Ronald W. 2008. Cognitive Grammar: A Basic Introduction. Oxford: Oxford University Press. Larson, Richard. 1988. On the Double Object Construction. Linguistic Inquiry 19, 335–391. Lasnik, Howard, Uriagereka, Juan, and Boeckx, Cedric. 2005. A Course in Minimalist Syntax. Oxford: Blackwell. Lasnik, Howard. 1999. Minimalist Essays. Oxford: Blackwell. Latham, Peter E., Deneve, Sophie, and Pouget, Alexandre. 2003. Optimal Computation with Attractor Networks. Journal of Physiology-Paris 97, 683–694. Lenneberg, Eric (ed.). 1964. New Directions in the Study of Language. Cambridge, MA: MIT Press. Lenneberg, Eric H. 1967a. Biological Foundations of Language. New York: John Wiley and Sons. Lenneberg, Eric. 1967b. Language in the Light of Evolution and Genetics. In Biological Foundations of Language. New York: John Wiley and Sons. Lenneberg, Eric. 1969. On Explaining Language. Science 164 (3880), 635–643. Lenneberg, Eric H. 1972/1977. Biologische Grundlagen der Sprache (Stw, 217). Frankfurt am Main: Suhrkamp.
Lenneberg, Eric H. 1994. Of Language Knowledge, Apes and Brains. In: Noam Chomsky. Critical Assessments. Edited by Carlos P. Otero. Volume IV: From Artificial Intelligence to Theology: Chomsky's Impact on Contemporary Thought: Tome I. Part III Cognitive Neurology. London, New York: Routledge, 145–173. Leonard, Laurence B. 2000. Children with Specific Language Impairment. Cambridge, MA: MIT Press. Leonetti, M. 1998. A Relevance-Theoretic Account of the Property Predication Restriction. In V. Rouchota, and A. Jucker (eds.), Current Issues in Relevance Theory. Amsterdam: John Benjamins, 141–167. Leonetti, M., and Escandell, M.V. 2011. On the Rigidity of Procedural Meaning. In V. Escandell-Vidal, M. Leonetti, and A. Ahern (eds.), Procedural Meaning. Bingley: Emerald, 1–17. Leung, T. 2010. On the Mathematical Foundations of Crash-Proof Grammars. In Putnam, M. (ed.), Exploring Crash-Proof Grammars. Amsterdam: John Benjamins, 213–244. Levin, Beth, and Rappaport, Malka. 1986. The Formation of Adjectival Passives. Linguistic Inquiry 17, 623–662. Levin, Beth, and Rappaport, Malka. 1988. Non-event -er Nominals: A Probe into Argument Structure. Linguistics 26, 1067–1083. Levin, Beth, and Rappaport, Malka. 1989. An Approach to Unaccusative Mismatches. The Proceedings of NELS 19, 314–329. Levin, Beth, and Rappaport Hovav, Malka. 1995. Unaccusativity. At the Syntax – Lexical Semantics Interface. Cambridge, MA: MIT Press. Levin, B., and Rappaport Hovav, M. 2011. Lexical Conceptual Structure. In K. von Heusinger, C. Maienborn, and P. Portner (eds.), Semantics: An International Handbook of Natural Language Meaning, Vol. I. Berlin: Mouton de Gruyter, 418–438. Levy, Y., and Schaeffer, J.C. (eds.). 2007. Language Competence across Populations: Towards a Definition of Specific Language Impairment. Mahwah, NJ: Lawrence Erlbaum. Lewis, Barbara et al. 2006, December. The Genetic Bases of Speech Sound Disorders: Evidence from Spoken and Written Language. Journal of Speech, Language, and Hearing Research 49, 1294–1312. Lieberman, Philip. 1984. The Biology and Evolution of Language. Cambridge, MA: Harvard University Press. Lieberman, Philip. 2013. The Unpredictable Species: What Makes Humans Unique. Princeton, NJ: Princeton University Press.
Lobina, David. 2014. What Linguists Are Talking about When Talking About… Language Sciences 45, 56–70. Lobina, David. 2017. Recursion: A Computational Investigation into the Representation and Processing of Language. Oxford: Oxford University Press. Longa, Víctor M., and Lorenzo, Guillermo. 2012. Theoretical Linguistics Meets Development: Explaining FL from an Epigeneticist Point of View. In Boeckx et al. (eds.), (2012), 52–84. Longobardi, Giuseppe. 1994. Reference and Proper Names: A Theory of N-Movement in Syntax and Logical Form. Linguistic Inquiry 25 (4), 609–665. Longobardi, Giuseppe. 2005. Toward a Unified Grammar of Reference. Zeitschrift Für Sprachwissenschaft 24 (1), 5–44. Lyons, John. 1991. Chomsky. London: Fontana Press. MacWhinney, B. 1991. The CHILDES Project: Tools for Analyzing Talk. Hillsdale, NJ: Lawrence Erlbaum. MacWhinney, B. 1995. The CHILDES Project: Tools for analyzing talk (2nd edn). Hillsdale, NJ: Lawrence Erlbaum. MacWhinney, B. 2000. The CHILDES Project: Tools for Analyzing Talk (3rd edn). Mahwah, NJ: Lawrence Erlbaum. MacWhinney, B. 2005. Item-Based Constructions and the Logical Problem. ACL 2005, 46–54. MacWhinney, B., and Leinbach, J. 1991. Implementations Are Not Conceptualizations: Revising the Verb Learning Model. Cognition 29, 121–157. MacWhinney, B., Bates, E. S., and Kliegel, R. 1984. Cue validity and sentence interpretation in English, German, and Italian. Journal of Verbal Learning and Verbal Behavior, 23, 127–150. MacWhinney, B., Leinbach, J., Taraban, R., and McDonald, J. 1989. Language Learning: Cues or Rules? Journal of Memory and Language 28, 255–277. Madariaga, Nerea. 2015. Russian Patterns of Floating Quantification: (Non)Agreeing Quantifiers. In Peter Kosta, and Lilia Schürcks (eds.), 2007. Linguistic Investigations into Formal Description of Slavic Languages (Potsdam Linguistic Investigations, Vol. 1). Contributions of the Sixth European Conference held at Potsdam University, November 30-December 02, 2005. Frankfurt am Main: Peter Lang, 267–281. Malmberg, Bertil. 1991. Histoire de la linguistique. De Sumer à Saussure. Paris: Presses Universitaires de France.
Manning, Christopher D., Sag, Ivan, and Masayo Iida. 1999. The Lexical Integrity of Japanese Causatives. In R.D. Levine, and G.M. Green (eds.), Studies in Contemporary Phrase Structure Grammar, 39–79. Marantz, A. 1984. On the Nature of Grammatical Relations. Cambridge, MA: MIT Press. Marantz, A. 1997. No Escape from Syntax: Don’t Try Morphological Analysis in the Privacy of Your Own Lexicon. In A. Dimitriadis, L. Siegel, C. Surek-Clark, and A. Williams (eds.), Penn Working Papers in Linguistics 4:2: Proceedings of the 21st Annual Penn Linguistics Colloquium. Philadelphia: University of Pennsylvania. 201–225 Marantz, Alec. 2008. Phases and Words. Ms. Cambridge, MA: MIT Press. Marcus, M. 1980. A Theory of Syntactic Recognition for Natural Language. Cambridge, Mass.: MIT Press. Marcus, G., Ullman, M., Pinker, S., Hollander, M., Rosen, T., and Xu, F. 1992. Overregularization in Language Acquisition. Monographs of the Society for Research in Child Development 57 (4), 1–182. Marr, David. 1982. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. New York: Freeman. Martin, Roger, and Uriagereka, Juan. 2014. On the Nature of Chains in Minimalism. Talk Delivered at the Conference Minimalist Program: Quo Vadis? University of Potsdam, October 3–6, 2011. In Peter Kosta, Steven Franks, Lilia Schürcks, and T. Radeva-Bork (eds.), Minimalism and Beyond: Radicalizing the Interfaces (LFAB, Vol. 11). Amsterdam, Philadelphia: John Benjamins, 169–195. Mateu, J. 2000. Second-Order Characteristics of Spatial Marked Processes with Applications. Nonlinear Analysis: Real World Applications 1, 145–162. Mateu Fontanals, J. 2000a. Universals of Semantic Construal for Lexical Syntactic Relations. Ms. Bellaterra: Universitat Autónoma de Barcelona. Departament de Filología Catalana. Mateu Fontanals, J. 2000b. Why Can’t We Wipe the Slate Clean?. Bellaterra: Universitat Autónoma de Barcelona. Departament de Filología Catalana. Mateu Fontanals, J. 2008. On the l-Syntax of Directionality/Resultativity: The Case of Germanic Preverbs. In Dins A. Asbury et al. (eds.), Syntax and Semantics of Spatial. Amsterdam, Philadelphia: John Benjamins, 221–250. Matushansky, Ora. 2013. Gender Confusion. In Lisa Lai-Shen Cheng and Norbert Corver (eds.), Diagnosing Syntax. Oxford: Oxford University Press, 271–294. McFadden, Thomas, and Sundharesan, Sandhya. 2011. Nominative Case Is Independent of Finiteness and Agreement. Ms. http://ling.auf.net/ lingbuzz/001350. To appear in Papers from BCGL 5: Case at the interfaces. Syntax and Semantics.
McIntosh, Mary. 1984. Fulfulde Syntax and Verbal Morphology. Boston: Routledge. McMurray, Bob, and Wasserman, Edward. 2009. Variability in Languages, Variability in Learning? Behavioral and Brain Sciences 32, 458–459. Meisel, J. 1986. Word Order and Case Marking in Early Child Language. Evidence from Simultaneous Acquisition of Two First Languages: French and German. Linguistics 24, 123–185. Mendívil-Giró, José-Luis. 2012. The Myth of Language Diversity. In Boeckx et al. (eds.) (2012), 85–133. Mendívil-Giró, José-Luis. 2014. What Are Languages? A Biolinguistic Perspective. Open Linguistics 1, 71–95. doi: 10.2478/opli-2014-0005. Menzel, Thomas. 2008. On Secondary Predicates in Old Russian. In Christoph Schroeder, Gerd Hentschel, and Winfried Boeder (eds.), Secondary Predicates in Eastern European Languages and Beyond. Oldenburg: BIS-Verlag, 233–253. Meyer, Roland. 2004. Syntax der Ergänzungsfrage. Empirische Untersuchungen am Russischen, Polnischen und Tschechischen (= Slavistische Beiträge, Bd. 436). München: Sagner. Meyer, Roland. 2012. The History of Null Subjects in North Slavonic. A Corpus-based Diachronic Investigation. Habilitationsschrift, Universität Regensburg. Miechowicz-Mathiasen, Katarzyna. 2009. There Is No Independent EPP in Slavic, There Are Only EPP-Effects. In G. Zybatow, U. Junghanns, D. Lenertova, and P. Biskup (eds.), Studies in Formal Slavic Phonology, Morphology, Syntax, Semantics and Information Structure: Proceedings of FDSL-7, Leipzig 2007. Frankfurt am Main: Peter Lang, 169–181. Miechowicz-Mathiasen, Katarzyna. 2011a. On the Distribution and D-Like Properties of the Polish Pronominal Adjective "sam". Talk delivered at Workshop on Languages with and without Articles, Paris, March 3rd 2011. Miechowicz-Mathiasen, Katarzyna. 2011b. Case, Tense and Clausal Arguments in Polish. Ms. Poland: Adam Mickiewicz University. Miechowicz-Mathiasen, Katarzyna. 2012. Licensing Polish Higher Numerals: An Account of the Accusative Hypothesis. Generative Linguistics in Wrocław 2, 57–75. Miechowicz-Mathiasen, Katarzyna, and Krivochen, Diego Gabriel. 2012. Numerals, Numbers and Cognition: Towards a Localist Theory of the Mind. Poster presented at Cognitive Modules and Interfaces, International Workshop at SISSA, Trieste, Italy, September 19th. http://ling.auf.net/lingbuzz/001653
Miechowicz-Mathiasen, Katarzyna, and Dziubała-Szrejbrowska, D. 2012. The Role of Gender in the Rise of Numerals as a Separate Category. Journal of Historical Syntax 1 (1), 139. Miller, G.A., and Chomsky, Noam. 1963. Finitary Models of Language Users. In R.D. Luce, R.R. Bush, and E. Galanter (eds.), Handbook of Mathematical Psychology, Vol. 2. New York: Wiley, 419–492. Mintz, T.H., Newport, E.L., and Bever, T.G. 2002. The Distributional Structure of Grammatical Categories in Speech to Young Children. Cognitive Science, 26, 393–424. Mithun, Marianne. 1991. Active/Agentive Case Marking and Its Motivations. Language 67, 510–546. Miyake, A., Carpenter, P., and Just, M.A. 1994. A Capacity Approach to Syntactic Comprehension Disorders: Making Normal Adults Perform Like Aphasic Patients, Cognitive Neuropsychology 11, 671–717. Molnárfi, L. 2003. On Optional Movement and Feature Checking in West Germanic. Folia Linguistica 37 (1–2), 129–162. Montague, Richard. 1970a. English as a Formal Language. In B. Visentini et al. (eds.), Linguaggi nella Società e nella Tecnica. Milan: Edizioni di Comunità, 189–224. Montague, Richard. 1970b. Universal Grammar. Theoria, 36, 373–398. Montague, Richard. 1973. The Proper Treatment of Quantification in Ordinary English. In K.J. Hintikka, J.M.E. Moravcsik, and P. Suppes (eds.), Approaches to Natural Language. Dordrecht: Reidel, 221–242. Moreno, Juan-Carlos, and Mendívil-Giró, José-Luis. 2014. On Biology, History and Culture in Human Language: A Critical Overview. Sheffield: Equinox. Moro, A. 2000. Dynamic Antisymmetry. Movement as a Symmetry Breaking Phenomenon, Cambridge, MA: MIT Press. Morris, M. 2006. Kripke on proper names. In: An Introduction to the Philosophy of Language. Cambridge: Cambridge University Press, 74–93. (Cambridge Introductions to Philosophy). Moser, Michael. 1994. Der prädikative Instrumental. Aus der historischen Syntax des Nordostslavischen. Von den Anfängen bis zur petrinischen Periode. Frankfurt am Main. Mugari, Victor. 2012. An Event Semantic Structure Analysis of Shona Causative Constructions. International Journal of Linguistics 4 (2), 110–123. Mukherji, Nirmalangshu. 2010. The Primacy of Grammar. Cambridge, Mass.: MIT Press. Müller, Gereon. 1995. A-bar Syntax. Berlin: Mouton de Gruyter.
Müller, Gereon. 1998. Incomplete Category Fronting. Dordrecht: Kluwer. Müller, Gereon. 2000. Elemente der optimalitätstheoretischen Syntax. Tübingen: Stauffenburg Verlag. Müller, Gereon. 2003. Phrase Impenetrability and Wh-Intervention. Ms. IDS Mannheim. Müller, Gereon. 2004. On Decomposing Inflection Class Features: Syncretism in Russian Noun Inflection. In Olga Arnaudova, Wayles Browne, Maria Luisa Rivero, and Danijela Stojanovic (eds.), Proceedings of FASL 12. Otawa. Müller, Gereon. 2005. Pro-drop and Impoverishment. Ms. Universität Leipzig. http://www.unileipzig.de/~muellerg/. Müller, Gereon. 2007. Some Consequences of an Impoverishment-Based Approach to Morphological Richness and Pro-Drop. Ms. http://www.unileipzig.de/ ~muellerg/mu228.pdf. Müller, Gereon. 2011a. Constraints on Displacement: A Phase-Based Approach. Amsterdam: John Benjamins. Müller, Gereon. 2011b. Optimality Theoretic Syntax. Ms. Universität Leipzig. http://www.unileipzig.de/~muellerg/. Müller, Gereon. 2015. Structure Removal. A New Approach to Conflicting Representations. Lecture Notes, University Leipzig WiSe 2014/2015 Version of March 25, 2015. https://home.uni-leipzig.de/muellerg/mu943.pdf Müller, Gereon, and Sternefeld, Wolfgang. 1993. Improper Movement and Unambiguous Binding. Linguistic Inquiry 24, 461–507. Müller, Gereon, and Sternefeld, Wolfgang. 1994. Scrambling as A-bar Movement. In Norbert Corver and Henk van Riemsdijk (eds.), Studies on Scrambling. Berlin: Mouton de Gruyter, 331–385. Müller, Gereon, and Sternefeld, Wolfgang. 1996. A-bar Chain Formation and Economy of Derivation. Linguistic Inquiry 27, 480–511. Neale, Stephen 1990. Descriptions. Cambridge, MA, MIT Press. Nelson, Katherine (ed.). 2006. Narratives from the Crib. Cambridge, MA: Harvard University Press. Newbury, D.F., and Monaco, A.P. 2010. Genetic Advances in the Study of Speech and Language Disorders. Neuron 68, 309–320. Newbury, Dianne F., Fisher, Simon E., and Monaco, Anthony P. 2010. Recent Advances in the Genetics of Language Impairment. Genome Medicine 2, 6. Nodoushan, S. 2008. The Quantum Human Computer (QHC) Hypothesis. Journal of Educational Technology 4 (4), 28–32. Norris, D. 1987. Syntax, semantics, and garden paths. In A. Ellis (Ed.), Progress in the psychology of language, Vol 3. Hillsdale, NJ: Erlbaum.
Noyer, Rolf. 1999. DM’s Frequently Asked Questions. http://www.ling.upenn. edu/~rnoyer/dm/ Noyer, Rolf. 1992. Features, Positions, and Affixes in Autonomous Morphological Structure. PhD thesis, MIT, Cambridge, MA. Noyer, Rolf. 1998. Impoverishment Theory and Morphosyntactic Markedness. In S. La-pointe, D. Brentari, and P. Farrell (eds.), Morphology and Its Relation to Phonology and Syntax. Palo Alto: CSLI Publications, 264–285. Nunes, Jairo 2004. Linearization of Chains and Sidewards Movement (Linguistic Inquiry Monographs, Vol. 43). Cambridge, MA: MIT Press. Ockham, William of (G. de Ockham). 1974. Summa Logicae (Opera Philosophica, vol. I), ed. P. Boehner, G. Gál, S. Brown, St Bonaventure: Franciscan Institute Publications. Ormazabal, Javier, Uriagereka, Juan, and Uribe-Echevarria, Miriam. 1994. Word Order and wh-Movement: Towards a Parametric Account. Presented at 17th GLOW Colloqium Vienna. Osborne, T., Putnam, M., and Gross, T. 2011. Bare Phrase Structure, Label-Less Trees, and Specifier-Less Syntax. Is Minimalism Becoming a Dependency Grammar? The Linguistic Review 28 (3), 315–364. Retrieved 2 Mar. 2020, from doi: 10.1515/tlir.2011.009 Ott, Dennis. 2011. A Note on Free Relative Clauses in the Theory of Phases. Linguistic Inquiry 42(1), 183–192. Ott, Dennis. 2012. Local Instability. Berlin: de Gruyter. Ott, Dennis 2014. An Ellipsis Approach to Contrastive Left-Dislocation. Linguistic Inquiry 45, 269–303. Ott, Dennis. 2016a. Fragment Anchors Do Not Support the Syntactic Integration of Appositive Relative Clauses: Reply to Griffiths & de Vries 2013. Linguistic Inquiry 47, 580–590. Ott, Dennis. 2016b. Ellipsis in Appositives. Glossa 1: article 34. Ott, Dennis. 2017a. The Syntax and Pragmatics of Dislocation: A Nontemplatic Approach. In A. Monti (ed.), Proceedings of the 2017 Annual Conference of the Canadian Linguistic Association. http://cla-acl.ca/ actes-2017-proceedings/ Ott, Dennis. 2017b. Strong Generative Capacity and the Empirical Base of Linguistic Theory. Frontiers in Psychology 8, 1617. Ott, Dennis. 2018. VP-fronting: Movement vs. dislocation. The Linguistic Review 35(2), 243–282. Ouwayda, Sarah. 2013. Where Plurality Is: Agreement and DP Structure. In Stefan Keine, and Shayne Sloggett (eds.), Proceedings of NELS 42. Amherst, MA: GLSA Publications, 81–94.
Ouwayda, Sarah. 2014. Where Number Lies: Plural Marking, Numerals, and the Collective– distributive Distinction. PhD dissertation, University of Southern California. Panagiotidis, Phaivos. 2002. Pronouns, Clitics and Empty Nouns. ‘Pronominality’ and Licensing in Syntax (Linguistics Today, Vol. 46). Amsterdam, Philadelphia: John Benjamins. Panagiotidis, Phoevos. 2009. Four Questions on Categorization of Roots. Ms. University of Cyprus. de Chipre. Panagiotidis, Phaivos. 2010a. Categorial Features and Categorizers. The Linguistic Review 28, 325–346. Panagiotidis, Phoevos 2010b. Functional Heads, Agree and Labels. Ms. University of Cyprus. Panagiotidis, Phoevos. 2014a. Categorial Features: A Generative Theory of Word Class Categories. Cambridge: Cambridge University Press. Panagiotidis, Phoevos. 2014b. A Minimalist Approach to Roots. In Peter Kosta et al. (eds.), Minimalism and Beyond. Radicalizing the Interfaces (LFAB, Vol. 11). Amsterdam, Philadelphia: John Benjamins, 287–303. Partee, B. 1973. Some Transformational Extensions of Montague Grammar. Journal of Philosophical Logic 2, 509–534. Partee, B.H. 1975. Montague Grammar and Transformational Grammar. Linguistic Inquiry 6, 203–300. Partee, B.H. (ed.). 1976. Montague Grammar. New York: Academic Press. Partee, B.H., and Hendriks, H.L.W. 1997. Montague Grammar. In J. van Benthem, and A. ter Meulen (eds.), Handbook of Logic and Language. Amsterdam, Cambridge: Elsevier, MIT Press, 5–91. Pereltsvaig, Asya. 2000. Short and Long Adjectives in Russian: Against the Null-N° Analysis. Unpublished manuscript. McGill University. Pereltsvaig, Asya. 2006. Small Nominals. Natural Language and Linguistic Theory 24 (2): 433–500. Pereltsvaig, Asya. 2007a. Copular Sentences in Russian: A Theory of IntraClausal Relations. Dordrecht, the Netherlands: Springer. Pereltsvaig, Asya. 2007b. The Universality of DP: A View from Russian. Studia Linguistica 61 (1), 59–94. Perlmutter, D. 1971. Deep and Surface Structure Constraints in Syntax. New York, Holt: Rinehart and Winston. Pesetsky, David. 1989. Language-Particular Processes and the Earliness Principle. Ms. MIT Press. Pesetsky, David. 1995. Zero Syntax: Experiencers and Cascades. Cambridge, MA: MIT Press.
Pesetsky, David. 2007. Probes, Goals and Syntactic Categories. In Y. Otsu (ed.), Proceedings of the 7th Annual Tokyo Conference on Psycholinguistics, Keio University, Japan. Tokyo: Hituzi Syobo Publishing, 2560. Pesetsky, David. 2013. Russian case morphology and the syntactic categories. Cambridge, MA: MIT Press. Pesetsky, David, and Katz, J. 2011. The Identity Thesis for Language and Music. http://ling.auf.net/lingbuzz/000959. Pesetsky, David, and Torrego, E. 2000. T-to-C Movement: Causes and Consequences. In Kenstowicz, K. (ed.), Ken Hale: A Life in Language. Cambridge, MA: MIT Press, 355–426. Pesetsky, David, and Torrego, Esther. 2001. T-to-C Movement: Causes and Consequences. In Michael Kenstowicz (ed.), Ken Hale: A Life in Language. Cambridge, MA: MIT Press, 355–426. Pesetsky, David, and Torrego, Esther. 2004. The Syntax of Valuation and the Interpretability of Features. Ms. MIT Press/UMass Boston. Pesetsky, David, and Torrego, Esther. 2007. The Syntax of Valuation and the Interpretability of Features. In Simin Karimi, Vida Samiian, and Wendy K. Wilkins (eds.), Phrasal and Clausal Architecture: Syntactic Derivation and Interpretation. In Honor of Joseph E. Emonds. Amsterdam: John Benjamins, 262–294. Piattelli-Palmarini, M. 1980. Language and Learning: The Debate between Jean Piaget and Noam Chomsky. Cambridge, MA: Harvard University Press. Picallo, M. Carme. 1991. Nominals and Nominalizations in Catalan. Probus 3 (3), 279–316. Picallo, M. Carme. 2008. Gender and Number in Romance. Lingue E Linguaggio 7 (1), 47–66. Pinker, Steven. 1989. Learnability and Cognition. Cambridge, MA: MIT Press. Platon. 1974. Phaidon. Das Gastmahl. Kratylos. Bearbeitet von Dietrich Kurz. Griechischer Text von Leon Robin und Louis Méridier, In: Platon. Werke in acht Bänden. Griechisch und Deutsch. Dritter Band. Herausgegeben von Gunther Eigler. Darmstadt 1974: Wissenschaftliche Buchgesellschaft. Poeppel, D., and Embick, D. 2005. Defining the Relation between Linguistics and Neuroscience. In A. Cutler (ed.), Twenty-First Century Psycholinguistics: Four Cornerstones. Mahwah, NJ: Lawrence Erlbaum, 103–120. Pollock, Jean-Yves. 1989. Verb Movement, Universal Grammar and the Structure of IP. Linguistic Inquiry 20, 365–424. Postal, Paul. 1972. On some Rules that Are Not Successive Cyclic. Linguistic Inquiry 3, 211–222.
Postal, Paul. 1974. On Raising: One Rule of English Grammar and Its Theoretical Implications. Cambridge, MA: MIT Press. Prince, A., and Smolensky, P. 2004. Optimality Theory. Constraint Interaction in Generative Grammar. Oxford: Blackwell. Przepiórkowski, Adam. 2003. A hierarchy of Polish genders. In Piotr Bański and Adam Przepiórkowski (eds), Generative Linguistics in Poland: Morphosyntactic Investigations, 109–122, Warsaw. Institute of Computer Science, Polish Academy of Sciences. Pustejovsky, James 1988. The Geometry of Events, Studies in Generative Approaches to Aspect, Lexicon Project Working Papers 24. Cambridge, MA: Center for Cognitive Science, MIT. Pustejovsky, James 1996. The Generative Lexicon. Cambridge, MA: MIT Press. Pustejovsky, James, and Busa, F. 1995. Unaccusativity and Event Composition. In P.M. Bertinetto, V. Binachi, J. Higginbotham, and M. Squartini (eds.), Temporal Reference: Aspect and Actionality, Turin: Rosenberg and Sellier. Putnam, Michael (ed.). 2010. Exploring Crash-Proof Grammars (LFAB Series). Amsterdam, John Benjamins. Putnam, Michael. 2011. The Thing that Should Not Be: Rethinking the A-A’ Distinction. Universitet i Tromso CASTL Linguistics Colloquium, October 7, 2011. Putnam, Michael, and Stroik, Thomas. 2011. Syntax at Ground Zero. Linguistic Analysis 37 (3–4), 389–404. Pylyshyn, Z. 2007. Things and Places. How the Mind Connects with the World. Cambridge, MA: MIT Press. Quine, W. 1960. Word and Object. Cambridge, MA: Cambridge University Press. Quirk, Randolph and Greenbaum, Sidney and Leech, Geoffrey and Svartvik, Jan. 1985. A Comprehensive Grammar of the English Language. London: Longman. Rackowski, Andrea, and Norvin Richards. 2005. Phase Edge and Extraction: A Tagalog Case Study. Linguistic Inquiry; The MIT Press; Volume 36, Number 4, Fall 2005; pp. 565–599. Radeva-Bork, Teodora. 2012. Single and Double Clitics in Adult and Child Grammar (Potsdam Linguistic Investigations, Vol. 9). Frankfurt am Main: Peter Lang. Radford, Andrew. 1990. Syntactic Theory and the Acquisition of English Syntax: The Nature of Early Child Grammar of English. Oxford: Blackwell. Radford, Andrew. 2004. Minimalist Syntax: Exploring the Structure of English. Cambridge: Cambridge University Press.
Raitano, N.A., Pennington, B.F., Tunick, R.A., Boada, R., and Shriberg, L.D. 2004. Pre-literacy Skills of Sub-groups of Children with Speech Sound Disorders. Journal of Child Psychology and Psychiatry 45, 821–835. Rappaport, Gilbert C. 2000. The Slavic Noun Phrase in Comparative Perspective. In George Fowler (ed.), Comparative Slavic Morphosyntax. Bloomington: Slavica Publishers. Rappaport, Gilbert C. 2002. Numeral Phrases in Russian: A Minimalist Approach. In Gerry Greenberg and James Lavine (eds.), Russian Morphosyntax: A Festschrift for Leonard H. Babby. Journal of Slavic Linguistics 10 (1/2), 329–342. Rappaport, Gilbert C. 2003a. Case Syncretism, Features, and the Morphosyntax of Polish Numeral Phrases. In Piotr Banski and Adam Przepiorkowski (eds.), Generative Linguistics in Poland, Vol. 5. Warsaw: Academy of Sciences, 123–137. Rappaport, Gilbert C. 2003b. The Grammatical Role of Animacy in a Formal Model of Slavic Morphology. Published in American Contributions to the Thirteenth International Congress of Slavists (Ljubljana, 2003), ed. by Robert A. Maguire and Alan Timberlake. Volume 1: Linguistics, 149–166. (Bloomington: Slavica). Rappaport, Gilbert C. 2014. Determiner Phrase and Mixed Agreement in Slavic. In Schürcks, Lilia, Anastasia Giannakidou, and Urtzi Etxeberria (eds.), The Nominal Structure in Slavic and Beyond (Studies in Generative Grammar, Vol. 116), 343–390. Rappaport Hovav, Malka, and Levin, Beth. 2011. Lexicon Uniformity and the Causative Alternation. In Martin Everart, M. Marelj, and T. Siloni (eds.), The Theta system: Argument Structure at the Interface. Oxford: Oxford University Press. Reinhart, T. 1976. The Syntactic Domain of Anaphora. Doctoral dissertation, Massachusetts Institute of Technology. Reuland, Eric. 2009. Anaphora and Language Design. Cambridge, MA: MIT Press. Rice, Mabel L., Smith, Shelley D., and Gayán, Javier. 2009. Convergent Genetic Linkage and Associations to Language, Speech and Reading Measures in Families of Probands with Specific Language Impairment. Journal of Neurodevelopmental Disorders 1, 264–282. Richards, Norvin. 2001. Movement in Language. Oxford: Oxford University Press. Riemsdijk, Henk van. 1983. Correspondence Effects and the Empty Category Principle. In Y. Otsu et al. (eds.), Studies in Generative Grammar and Language Acquisition. Tokyo: International Christian University, 5–16.
Riesbeck, C. and Schank, R. 1978. Comprehension by Computer. Expectation based analysis of sentences in context. In: Levelt, W.J.M. and G.B. Flores D’Arcals (eds.). Generative Studies in the Perception of Language. New York: Wiley, 247–294. Rijkhoek, Paulien. 1998. On Degree Phrases and Result Clauses. PhD, Groningen. Ritter, Elizabeth. 1993. Where’s Gender? Linguistic Inquiry 24 (4), 795–803. Ritter, Elizabeth. 1995. On the Syntactic Category of Pronouns and Agreement. Natural Language & Linguistic Theory 13 (3), 405–443. Rizzi, Luigi. 1990. Relativized Minimality. Cambridge, MA: MIT Press. Rizzi, Luigi. 1982. Issues in Italian Syntax. Dordrecht: Foris. Rizzi, Luigi. 1986. Null Objects in Italian and the Theory of Pro. Linguistic Inquiry 17 (3), 501–577. Rizzi, Luigi. 1996a. Residual Verb Second and the Wh Criterion. In A. Belletti, and L. Rizzi (eds.), Parameters and Functional Heads. Oxford: Oxford University Press, 63–90. Rizzi, Luigi. 1996b. The Fine Structure of the Left Periphery. In Haegeman, L. (ed.), Elements of Grammar. Dodrecht, Kluwer, 281–337. Rizzi, Luigi. 2001. Relativized Minimality Effects. In M. Baltin, and C. Collins (eds.), The Handbook of Contemporary Syntactic Theory. Oxford: Blackwell, 89–110. Rizzi, Luigi. 2004a. Locality and Left Periphery. In A. Belletti (ed.), Structures and Beyond (The Cartography of Syntactic Structures, Vol. 3). Oxford University Press, 223–251. Rizzi, Luigi (ed.). 2004b. The Cartography of Syntactic Structures (The Structure of CP and IP, Vol. 2). Oxford: Oxford University Press. Rizzi, Luigi. 2005. On the Grammatical Basis of Language Development: A Case Study. In Guglielmo Cinque, and Richard Kayne (eds.), Handbook of Comparative Syntax. Oxford: Oxford University Press, 70–109. Rizzi, Luigi. 2006. On the Form of Chains: Criterial Positions and ECP Effects. In L. Cheng, and N. Corver (eds.), Wh-Movement: Moving on. Cambridge, MA: MIT Press, 97–133. Roberts, Ian. 1997. Comparative Syntax. London, New York, Sydney, Auckland: Arnold. Roberts, Ian. 2003. Some Consequences of a Deletion Analysis of Null Subjects. Paper presented at Meeting of the North-East Syntax Symposium, York.
Roberts Ian, and Roussou, Anna. 2003. Syntactic Change. A Minimalist Approach to Grammaticalization (Cambridge Studies in Linguistics, Vol. 100). Cambridge: Cambridge University Press. Rohrbacher, Bernhard 1999. Morphology-Driven Syntax: A Theory of V to I Raising and Pro-Drop. Amsterdam: John Benjamins. Rosen, Carol. 1984. The Interface between Semantic Roles and Initial Grammatical Relations. In David Perlmutter, and Carol Rosen (eds.), Studies in Relational Grammar, Vol. 2. Chicago: University of Chicago Press, 38–77. Rosen, Alexandr. 2015. Hybrid Agreement in Czech Predicates. In Peter Kosta, and Lilia Schürcks (eds.), 2007. Linguistic Investigations into Formal Description of Slavic Languages (Potsdam Linguistic Investigations, Vol. 1). Contributions of the Sixth European Conference held at Potsdam University, November 30-December 02, 2005. Frankfurt am Main: Peter Lang, 309–317. Ross, John. 1967. Constraints on Variables in Syntax. PhD dissertation, Massachusetts Institute of Technology, Cambridge, MA. Published as Ross (1986). Ross, John. 1986. Infinite Syntax! Norwood, NJ: Ablex. Ross, Haj [John R.]. 1984. Inner Islands. In Claudia Brugman, Monica Macaulay et al. (eds.), Proceedings of the 10th Annual Meeting of the Berkeley Linguistics Society. Berkeley: Berkeley Linguistics Society, 258–265. Russel, Bertrand. 1905. On Denoting. Mind 14, 479–493. Russel, Bertrand. 1945. A History of Western Philosophy. London: George Allen & Unwin. Russel, Bertrand. 2016. Philosophie des Abendlandes. München, Berlin, Zürich: Piper. Rutkowski, Paweł. 2006. Grammaticalisation in the Nominal Domain: The Case of Polish Cardinals. Paper presented at the 4th Workshop in General Linguistics, University of Wisconsin, Madison, February 17, 2006. Rutkowski, Paweł. 2007. The Syntactic Structure of Grammaticalized Partitives (Pseudo-Partitives. In T. Scheffler et al. (eds.), University of Pennsylvania Working Papers in Linguistics 13 (1), 337–350. Rutkowski, Paweł. 2012, September. Is nP Part of Universal Grammar? Journal of Universal Language 13 (2), 119–144. Rutkowski, Paweł, and Progovac, Liljana. 2005. Classification Projection in Polish and Serbian: The Position and Shape of Classifying Adjectives. In S. Franks et al. (eds.), Formal Approaches to Slavic Linguistics: The South Carolina Meeting 2004. Ann Arbor: Michigan Slavic Publications, 289–299. Rutkowski, Paweł, and Szczegot, K. 2001. On the Syntax of Functional Elements: Numerals, Pronouns, and Expressions Indicating Approximation.
In A. Przepiórkowski, and P. Bański (eds.), Generative Linguistics in Poland: Syntax and Morphosyntax. Warsaw: IPIPAN, 187–196. Růžička, Rudolf. 1995. Structural and Communicative Hierarchies in Participial Adjuncts. In Barbara Partee, and Petr Sgall (eds.), Discourse and Meaning. Amsterdam: John Benjamins, 337–346. Růžička, Rudolf. 1999. Control in Grammar and Pragmatics: A CrossLinguistic Study (Linguistik Aktuell. Linguistics Today, Vol. 27). Amsterdam, Philadelphia. Řezáč, Milan. 2004. Elements of Cyclic Syntax: Agree and Merge. PhD dissertation, University of Toronto. Safir, K. 1985. Syntactic Chains. Cambridge, MA: Cambridge University Press. Sagae, K., Lavie, A., and MacWhinney, B. 2005. Automatic Measurement of Syntactic Development in Child Language. In Proceedings of the 43rd Meeting of the Association for Computational Linguistics. Ann Arbor: ACL, 197–204. Sakuma, Jun’ichi. 2013. Reflexive Verbs and Anti-causativity in the Finnish Language. JSL 9, 21–32. Sauerland, Uli. 2004. A Comprehensive Semantics for Agreement. Paper presented at the Phi-Workshop, McGill University, Montreal, Canada. Schäfer, Florian. 2008. The Syntax of (Anti-)Causatives. External arguments in change-of-state contexts. Amsterdam, Philadelphia: John Benjamins. Schäfer, Florian (Ms.) The argument structure of adjectival participles revisited. In http://ifla.uni-stuttgart.de/institut/mitarbeiter/florian/papers/ADJ-PASS.pdf Schick, Ivanka. 1999. Doubling Clitics and Information-Structure in Modern Bulgarian (Studies in Slavic Linguistics, Vol. 11). München: Lincolm. Schrödinger, E. 1935. The Present Situation in Quantum Mechanics. Proceedings of the American Philosophical Society 124, 323–338. Searle, John R. 1969. Speech Acts. Cambridge: Cambridge University Press Searle, John R. 1975. Speech Acts and Recent Linguistics. Annals of the New York Academy of Sciences 263 (1), 27–38. Sheffer, H.M. 1913. A Set of Five Independent Postulates for Boolean Algebras, with Application to Logical Constants. Transactions of the American Mathematical Society 14, 481–488, JSTOR 1988701 Siskind, J.M. 1999. Learning Word-to-Meaning Mappings. In J. Muure, and P. Broeder (eds.), Cognitive Models of Language Acquisition. Cambridge, MA: MIT Press. Skinner, B.F. 1957. Verbal Behavior. New York Slioussar, Natalia 2007. Grammar and Information Structure. A Study with Reference to Russian. Doctoral dissertation, LOT Publications, Utrecht.
Smith, Peter W. 2013. The Syntax of Semantic Agreement in British English. Ms., University of Connecticut.
Späth, Andreas. 2006. Determinierung unter Defektivität des Determinierersystems: Informationsstrukturelle und aspektuelle Voraussetzungen der Nominalreferenz slawischer Sprachen im Vergleich zum Deutschen (Language, Context, and Cognition, Vol. 4). Berlin, New York: de Gruyter.
Sperber, Dan. 2005. Modularity and Relevance: How Can a Massively Modular Mind Be Flexible and Context-Sensitive? In P. Carruthers, S. Laurence, and S. Stich (eds.), The Innate Mind: Structure and Content. Oxford: Oxford University Press, 53–68.
Sperber, Dan, and Wilson, Deirdre. 1986a. Sobre la definición de Relevancia. In Luis Ml. Valdés Villanueva (ed.) (1991), En búsqueda del significado. Madrid: Tecnos, 583–598.
Sperber, Dan, and Wilson, Deirdre. 1986b. Relevance: Communication and Cognition. Oxford: Blackwell.
Stankiewicz, Edward. 1986. The Grammatical Genders of the Slavic Languages. In Edward Stankiewicz (ed.), The Slavic Languages. Unity in Diversity. Berlin, New York: Mouton de Gruyter, 127–141.
Starke, Michal. 2001. Move Reduces to Merge: A Theory of Locality. PhD thesis, University of Geneva.
Starke, Michal. 2009. Nanosyntax: A Short Primer to a New Approach to Language. In Peter Svenonius, Gillian Ramchand, Michal Starke, and Knut Tarald Taraldsen (eds.), Nordlyd 36.1, special issue on Nanosyntax. CASTL, Tromsø, 1–6.
Starke, Michal. 2010. Towards an Elegant Solution to Language Variation: Variation Reduces to the Size of Lexically Stored Trees. lingBuzz/001230.
Starke, Michal, and Cardinaletti, Anna. 1999. Cf. Cardinaletti, Anna, and Michal Starke (1999).
Staudinger, Bernhard. 1997. Sätzchen: Small Clauses im Deutschen. Tübingen: Max Niemeyer Verlag.
Stepanov, Arthur. 2001. Late Adjunction and Minimalist Phrase Structure. Syntax 4, 94–125.
Stepanov, Arthur. 2007. The End of CED? Minimalism and Extraction Domains. Syntax 10 (1), 80–126.
Steriopolo, Olga, and Wiltschko, Martina. 2010. Distributed Gender Hypothesis. In G. Zybatow, P. Dudchuk, S. Minor, and E. Pshehotskaya (eds.), Formal Studies in Slavic Linguistics: Proceedings of the Formal Description of Slavic Languages 7.5. New York: Peter Lang, 153–170.
Sternefeld, Wolfgang. 1997. The Semantics of Reconstruction and Connectivity. Arbeitspapiere des SFB 340, Universität Stuttgart und Tübingen.
Steube, Anita. 1994. Syntaktische und semantische Eigenschaften sekundärer Prädikationen. In Anita Steube, and Gerhild Zybatow (eds.), Zur Satzwertigkeit von Infinitiven und Small Clauses. Tübingen: Niemeyer, 243–264.
Stowell, Tim. 1978. What Was There before There Was There? In D. Farkas et al. (eds.), Papers from the Fourteenth Regional Meeting of the Chicago Linguistic Society. Chicago, IL: University of Chicago, 457–471.
Stowell, Tim. 1981. Origins of Phrase Structure. PhD dissertation, MIT, Cambridge, MA.
Strawson, Peter F. 1950. On Referring. Mind 59, 320–344.
Strawson, Peter F. 1959. Individuals: An Essay in Descriptive Metaphysics. London: Routledge.
Strigin, Anatoli. 2008. Secondary Predication and the Instrumental Case in Russian. In Christoph Schroeder, Gerd Hentschel, and Winfried Boeder (eds.), Secondary Predicates in Eastern European Languages and Beyond. Oldenburg: BIS-Verlag, 381–400.
Stroik, Thomas S. 2009. Locality in Minimalist Syntax (LI Monographs, Vol. 51). Cambridge, MA: MIT Press.
Stroik, Thomas S., and Putnam, Michael. 2013. The Structural Design of Language. Cambridge: Cambridge University Press.
Svenonius, Peter. 2000. Quantifier Movement in Icelandic. In Peter Svenonius (ed.), The Derivation of VO and OV. Amsterdam: John Benjamins, 255–292.
Swinney, D.A. 1982. The Structure and Time-Course of Information Interacting during Speech Comprehension: Lexical Segmentation, Access, and Interpretation. In J. Mehler, E.C.T. Walker, and M. Garrett (eds.), Perspectives on Mental Representation: Experimental and Theoretical Studies of Cognitive Processes and Capacities. Hillsdale: Erlbaum.
Škrabalová, Hana. 2007. Number Agreement with Coordinate Nouns in Czech. In Peter Kosta, and Lilia Schürcks (eds.), Linguistic Investigations into Formal Description of Slavic Languages (Potsdam Linguistic Investigations, Vol. 1). Contributions of the Sixth European Conference held at Potsdam University, November 30–December 02, 2005. Frankfurt am Main: Peter Lang, 319–332.
Takahashi, Daiko. 1994. Minimality of Movement. Doctoral dissertation, University of Connecticut.
Talmy, Leonard. 2000. Toward a Cognitive Semantics. Cambridge, MA: MIT Press.
Taylor, Alex, and Clayton, Nicola. 2012. Evidence from Convergent Evolution and Causal Reasoning Suggests that Conclusions on Human Uniqueness May Be Premature. Behavioral and Brain Sciences 35 (4), 241–242.
Taylor, Kristen, Moss, Helen, and Tyler, Lorraine. 2007. The Conceptual Structure Account: A Cognitive Model of Semantic Memory and Its Neural Instantiation. In John Hart, and Michael Kraut (eds.), The Neural Basis of Semantic Memory. Cambridge: Cambridge University Press, 265–301.
Taylor, Alex, Elliffe, Douglas, Hunt, Gavin R., and Gray, Russell D. 2010. Complex Cognition and Behavioural Innovation in New Caledonian Crows. Proceedings of the Royal Society B: Biological Sciences 277, 2637–2643.
Tegmark, Max. 2003, May. Parallel Universes. Scientific American, 41–51.
Tegmark, Max. 2007. The Mathematical Universe Hypothesis. Foundations of Physics. http://arxiv.org/pdf/0704.0646v2.pdf
Tenny, Carol (ed.). 1988. Studies in Generative Approaches to Aspect (Lexicon Project Working Papers, Vol. 24). Center for Cognitive Science, MIT.
Tenny, Carol. 1989a. The Aspectual Interface Hypothesis (Lexicon Project Working Papers, Vol. 31). Center for Cognitive Science, MIT.
Tenny, Carol. 1989b. Event Nominalization and Aspectual Structure. Center for Cognitive Science, MIT.
Tenny, Carol. 1989c. The Role of Internal Arguments: Measuring Out Events. Center for Cognitive Science, MIT.
Thornton, Rosalind, and Tesan, Graciela. 2007. Categorical Acquisition: Parameter-Setting in Universal Grammar. Biolinguistics 1, 49–98.
Timberlake, Alan. 2004. Чему еси слѣпилъ бра свои: Templates and the Development of Animacy. Russian Linguistics 21 (1).
Timberlake, Alan. 2014a. Goals, Tasks and Lessons in Historical Syntax. In S. Kempgen, P. Kosta, T. Berger, and K. Gutschmidt (eds.), Slavic Languages. Slavische Sprachen. An International Handbook of their Structure, their History and their Investigation. Ein internationales Handbuch ihrer Struktur, ihrer Geschichte und ihrer Erforschung, Vol. 2. Berlin, New York: Mouton de Gruyter, 1653–1674.
Timberlake, Alan. 2014b. The Simple Sentence. In ibid., 1675–1700.
Todorović, Neda, and Wurmbrand, Susi. 2020. Finiteness across Domains. In Teodora Radeva-Bork, and Peter Kosta (eds.), Current Developments in Slavic Linguistics. Twenty Years After (based on selected papers from FDSL 11) (Potsdam Linguistic Investigations, Vol. 29). Berlin, Bern, Bruxelles, New York, Oxford, Warszawa, Wien: Peter Lang, 47–66.
Tomasello, Michael. 2003. Constructing a Language: A Usage-Based Theory of Language Acquisition. Cambridge, MA: Harvard University Press.
Tomasello, Michael. 2010. Origins of Human Communication. Cambridge, MA: MIT Press.
Torrego, Esther. 1989 [revised 1991]. Experiencers and Raising Verbs. Ms., MIT.
Torrego, Esther. 1996. Experiencers and Raising Verbs. In R. Freidin (ed.), Current Issues in Comparative Grammar. Dordrecht: Kluwer.
Takano, Yuji. 1996. Movement and Parametric Variation in Syntax. Unpublished PhD dissertation, University of California, Irvine.
Tunick, R.A., and Pennington, B.F. 2002. The Etiological Relationship between Reading Disability and Phonological Disorder. Annals of Dyslexia 52, 75–95.
Ullman, M.T., and Pierpont, E.I. 2005. Specific Language Impairment Is Not Specific to Language: The Procedural Deficit Hypothesis. Cortex 41 (3), 399–433.
Ura, Hiroyuki. 2000. Checking Theory and Grammatical Functions in Universal Grammar (Oxford Studies in Comparative Syntax). New York, Oxford: Oxford University Press.
Uriagereka, Juan. 1988. On Government. PhD dissertation, University of Connecticut.
Uriagereka, Juan. 1998. Rhyme and Reason. Cambridge, MA: MIT Press.
Uriagereka, Juan. 1999a. Multiple Spell-Out. In N. Hornstein, and S. Epstein (eds.), Working Minimalism. Cambridge, MA: MIT Press, 251–282.
Uriagereka, Juan. 1999b. Minimal Restrictions on Basque Movements. Natural Language and Linguistic Theory 17, 403–444.
Uriagereka, Juan. 2002. Multiple Spell-Out. In J. Uriagereka (ed.), Derivations: Exploring the Dynamics of Syntax. London: Routledge, 45–66.
Uriagereka, Juan. 2012. Spell-Out and the Minimalist Program. Oxford: Oxford University Press.
Van Valin, R.D. 1990. Semantic Parameters of Split Intransitivity. Language 66, 221–260.
Van der Lely, H.K.J., and Stollwerck, L. 1996. A Grammatical Specific Language Impairment in Children: An Autosomal Dominant Inheritance? Brain and Language 52, 484–504.
Vargha-Khadem, F., and Passingham, R. 1990. Speech and Language Deficits. Nature 346, 226.
Vargha-Khadem, F., Watkins, K.E., Alcock, K., Fletcher, P., and Passingham, R. 1995. Praxic and Nonverbal Cognitive Deficits in a Large Family with a Genetically-transmitted Speech and Language Disorder. Proceedings of the National Academy of Sciences 92, 930–933.
Vargha-Khadem, F. et al. 1998. Neural Basis of an Inherited Speech and Language Disorder. Proceedings of the National Academy of Sciences of the U.S.A. 95, 12695–12700.
Vargha-Khadem, F. et al. 2005, February. FOXP2 and the Neuroanatomy of Speech and Language. Nature Reviews Neuroscience 6, 131–138.
Vendler, Zeno. 1967. Verbs and Times. Ch. 4 of Linguistics in Philosophy. Ithaca, NY: Cornell University Press, 97–121.
Vernes, Sonja C., Spiteri, E., Nicod, J., Groszer, M., Taylor, J.M., Davies, K.E., Geschwind, D.H., and Fisher, S.E. 2007. High-Throughput Analysis of Promoter Occupancy Reveals Direct Neural Targets of FOXP2, a Gene Mutated in Speech and Language Disorders. The American Journal of Human Genetics 81, 1232–1250.
Vernes, Sonja C. et al. 2011, July. Foxp2 Regulates Gene Networks Implicated in Neurite Outgrowth in the Developing Brain. PLoS Genetics 7 (7), 1–17.
Veselovská, Ludmila. 2001. Od bariér k minimalismu. Některé aspekty poslední vývojové změny chomskyánského modelu jazyka. Slovo a slovesnost 62, 274–292.
Veselovská, Ludmila. 2018. Noun Phrases in Czech. Their Structure and Agreements (Potsdam Linguistic Investigations, Vol. 23). Berlin, Bern, Bruxelles, New York, Oxford, Warszawa, Wien: Peter Lang.
von Stechow, Arnim. 2005, May. LF in einem Phasenmodell. Bemerkungen anhand von Fischers Bindungstheorie. GGS. Ms.
Watkins, K.E., Vargha-Khadem, F., Ashburner, J., Passingham, R.E., Connelly, A., Friston, K.J., Frackowiak, R.S.J., Mishkin, M., and Gadian, D.G. 2002. MRI Analysis of an Inherited Speech and Language Disorder: Structural Brain Abnormalities. Brain 125, 465–478.
Webelhuth, Gerd. 1989. Syntactic Saturation Phenomena and the Modern Germanic Languages. PhD thesis, University of Massachusetts, Amherst.
Webelhuth, Gerd (ed.). 1995. Government and Binding Theory and the Minimalist Program (Principles and Parameters in Syntactic Theory). Oxford, UK, Cambridge, USA: Blackwell.
Wechsler, Stephen, and Zlatić, Larisa. 2000. A Theory of Agreement and Its Application to Serbo-Croatian. Language 76 (4), 799–832.
Wechsler, Stephen, and Zlatić, Larisa. 2003. The Many Faces of Agreement: Morphology, Syntax, Semantics, and Discourse Factors in Serbo-Croatian Agreement. Stanford, CA: CSLI Publications.
Weissenborn, Jürgen. 2000. Der Erwerb von Morphologie und Syntax. In Hannelore Grimm (ed.), Sprachentwicklung (Enzyklopädie der
Psychologie, Themenbereich C: Theorie und Forschung, Serie III: Sprache). Göttingen, Bern, Toronto, Seattle: Hogrefe, Verlag für Psychologie, 141–169.
Weissenborn, Jürgen, Höhle, Barbara, Kiefer, D., and Cavar, Damir. 1998. Children’s Sensitivity to Word-Order Violations in German: Evidence for Very Early Parameter-Setting. In A. Greenhill, M. Hughes, H. Littlefield, and H. Walsh (eds.), Proceedings of the 22nd Annual Boston Conference on Language Development, Vol. 2. Somerville, MA: Cascadilla Press, 756–767.
Weissenborn, Jürgen, Goodluck, Helen, and Roeper, Thomas (eds.). 1992. Theoretical Issues in Language Acquisition. Hillsdale, NJ: Lawrence Erlbaum.
Wexler, Kenneth. 1994. Optional Infinitives, Head Movement and the Economy of Derivations. In David Lightfoot, and Norbert Hornstein (eds.), Verb Movement. Cambridge: Cambridge University Press, 305–350.
Wexler, Kenneth. 1998. Very Early Parameter Setting and the Unique Checking Constraint: A New Explanation of the Optional Infinitive Stage. Lingua 106, 23–79.
Wexler, Kenneth. 2000. Three Problems in the Theory of the Optional Infinitive Stage: Stage/Individual Predicates, Eventive Verbs and Finite Null-Subjects. In Roger Billerey, and Brook Danielle Lillehaugen (eds.), WCCFL 19 Proceedings. Somerville, MA: Cascadilla Press, 560–573.
Wexler, Kenneth. 2002. Lenneberg’s Dream: Learning, Normal Language Development and Specific Language Impairment. In Y. Levy, and J. Schaeffer (eds.), Language Competence across Populations: Towards a Definition of Specific Language Impairment. Mahwah, NJ: Lawrence Erlbaum, 11–60.
Wexler, Kenneth. 2004. Lenneberg’s Dream: Learning, Normal Language Development and Specific Language Impairment. In L. Jenkins (ed.), Variation and Universals in Biolinguistics. Amsterdam: Elsevier.
Wexler, Kenneth. Forthcoming. The Unique Checking Constraint as the Explanation of Clitic Omission in SLI and Normal Development. In Celia Jakubowicz, Lea Nash, and Kenneth Wexler (eds.), Essays in Syntax, Morphology and Phonology of SLI. Cambridge, MA: MIT Press.
Wexler, Kenneth, and Culicover, Peter. 1980. Formal Principles of Language Acquisition. Cambridge, MA: MIT Press.
Wierzbicka, Anna. 1980. The Case for Surface Case. Ann Arbor: Karoma.
Wiese, B. 1999. Unterspezifizierte Paradigmen. Form und Funktion in der pronominalen Deklination. Linguistik Online 4. www.linguistik-online.de/399.
Wilder, Chris. 1994. Small Clauses im Englischen und in der GB-Theorie. In Anita Steube, and Gerhild Zybatow (eds.), Zur Satzwertigkeit von Infinitiven und Small Clauses. Tübingen: Niemeyer, 219–241.
Wilder, Chris, Gärtner, Hans-Martin, and Bierwisch, Manfred (eds.). 1997. The Role of Economy Principles in Linguistic Theory (Studia Grammatica, Vol. 40). Berlin: Akademie Verlag.
Williams, Edwin. 1980. Predication. Linguistic Inquiry 11, 203–238.
Williams, Edwin. 1984. There-insertion. Linguistic Inquiry 15, 131–153.
Williams, Edwin. 1991. The Argument-Bound Empty Categories. In R. Freidin (ed.), Principles and Parameters in Comparative Grammar. Cambridge, MA: MIT Press, 77–98.
Williams, Edwin. 1994. Remarks on Lexical Knowledge. Lingua 92, 7–34.
Williams, Edwin. 1994. Thematic Structure in Syntax. Cambridge, MA: MIT Press.
Williams, Edwin. 2003. Representation Theory (Current Studies in Linguistics, Vol. 38). Cambridge, MA: MIT Press.
Williams, Edwin. 2006. The Subject-Predicate Theory of There. Linguistic Inquiry 37, 648–651.
Williams, Edwin. 2007. Dumping Lexicalism. In G. Ramchand, and C. Reiss (eds.), The Oxford Handbook of Linguistic Interfaces. Oxford: Oxford University Press, 353–382.
Williams, Edwin. 2008. Merge and Mirrors. http://ling.auf.net/lingbuzz/000747
Willim, Ewa. 2012. Concord in Polish Coordinate NPs as Agree. In Markéta Ziková, and Mojmír Dočekal (eds.), Slavic Languages in Formal Grammar. Proceedings of FDSL 8.5, Brno 2010. Frankfurt am Main: Peter Lang, 233–253.
Willim, Ewa. 2015. Case Distribution and Φ-Agreement with Polish Genitive of Quantification in the Feature Sharing Theory of Agree. Poznań Studies in Contemporary Linguistics 51 (2), 315–357. doi: 10.1515/psicl-2015-0013.
Wilson, Deirdre, and Sperber, Dan. 2003. Relevance Theory. In L. Horn, and G. Ward (eds.), Handbook of Pragmatics. Oxford: Blackwell, 607–628.
Wintner, S., MacWhinney, B., and Lavie, A. 2007. Formal Grammars of Early Language. ACL.
Witkoś, Jacek. 2008. On the Correlation between A-type Scrambling and the Lack of Weak Crossover Effects. Studia Anglica Posnaniensia 44, 297–328.
Wunderlich, Dieter. 1996. Minimalist Morphology: The Role of Paradigms. In G. Booij, and J. van Marle (eds.), Yearbook of Morphology 1995. Dordrecht: Kluwer, 93–114.
Wurmbrand, Susi. 1999. Modal Verbs Must Be Raising Verbs. In S. Bird, A. Carnie, J.D. Haugen, and P. Norquest (eds.), Proceedings of the 18th West Coast Conference on Formal Linguistics (WCCFL 18). Somerville, MA: Cascadilla Press, 599–612.
Wurmbrand, Susi. 2011. Reverse Agree. Ms., University of Connecticut.
Yang, Charles. 2002. Knowledge and Learning in Natural Language. Oxford: Oxford University Press.
Yus, F. 2010. Relevance Theory. In B. Heine, and H. Narrog (eds.), The Oxford Handbook of Linguistic Analysis. Oxford: Oxford University Press, 679–701.
Zaenen, A. 1993. Unaccusativity in Dutch: The Interface of Syntax and Lexical Semantics. In J. Pustejovsky (ed.), Semantics and the Lexicon. Dordrecht: Kluwer.
Zeijlstra, Hedde. 2011. There is Only One Way to Agree. Ms., Wrocław, Poland.
Zimmerling, Anton, and Kosta, Peter. 2013. Slavic Clitics: A Typology. STUF, Akademie Verlag, 66 (2), 178–214.
Zimmermann, Ilse. 2010. Where Are the Worlds? Paper presented at Sentence Types, Sentence Moods, and Illocutionary Forces: International Conference to Honor Manfred Bierwisch, ZAS, Berlin, November 4–6, 2010.
Zimmermann, Thomas Ede, and Sternefeld, Wolfgang. 2013. Presuppositions. In Introduction to Semantics. An Essential Guide to the Composition of Meaning. Berlin: de Gruyter, 205–231.
Циммерлинг, Антон Владимирович. 2013. Системы порядка слов славянских языков в типологическом аспекте. Москва: Языки славянских культур.
Zubizarreta, Maria Luisa. 1985. The Relation between Morphophonology and Morphosyntax: The Case of Romance Causatives. Linguistic Inquiry 16, 247–289.
Zubizarreta, Maria Luisa. 1987. Levels of Representation in the Lexicon and in the Syntax. Dordrecht: Foris.
Zwart, Jan Wouter. 2010. Prospects for Top-Down Derivation. Catalan Journal of Linguistics 8, 161–187.
Zwart, Jan Wouter. 2019. Head Movement and Morphological Strength. To appear in Oxford Research Encyclopedia of Linguistics, Morphology. https://ling.auf.net/lingbuzz/004516
Index A A-bar –– A-bar Movement 203, 207, 296, 330, 341, 359 A-bar chain 341, 359 A-bar/A’ –– A-bar chain/A’ chain 341, 359 –– A-bar Syntax 340, 359 –– Dependencies 321, 359 Accusative Case 140, 156, 159, 167, 176, 178, 227, 359 –– Cf. also Genitive- Accusative 111–112, 140, 193, 359 A-Chains –– A-Movement 207, 330, 359 Achievement 47, 63–64, 82, 238, 254, 256–257, 359 Acquisition, see Language Acquisition –– of Meaning 9, 30, 359 Adjacency 298, 359 Adjective attributive adjective S. 119, 359 –– Attributive 119, 131, 157, 187, 291, 359 –– predicative adjective 154, 359 Adjunct 56, 67, 74, 89–90, 141–142, 152, 154, 156–157, 159, 164, 166, 171–172, 211, 213, 224, 232–233, 244–246, 249, 254, 309, 315, 323, 325, 331–223, 349, 359 Adjunction (cf. Chomsky Adjunction) 56, 90, 101, 126, 141, 208, 214, 350, 359 Adverb –– Aspectual 258, 359 –– Lower 38, 359 –– Sentential 105–106, 359 –– Temporal 152, 342, 359
Affix –– Filter (Lasnik Filter) 86–87, 313, 359 –– Hopping 87, 241, 359 Agree –– Agree Domain cf. Domain Agreement –– Rich Agreement 329, 359 Alternation –– Anti-Causatives vs Causatives 65, 90, 223–224, 232–233, 235–236, 238, 240–247, 249, 251, 253, 255–257, 259, 261, 273, 331, 359 Ambiguity 31–33, 105, 246, 261, 308, 359 A-Movement –– A-position 107, 154, 202–203, 220, 359 Animacy –– Based gender 112, 117, 359 Anti-Causatives (see Causatives) 65, 90, 223–224, 232–233, 235–236, 238, 240–247, 249, 251, 253, 255–257, 259, 261, 273, 331, 359 Anti-Freezing (cf. Freezing) 107, 201–202, 211, 219, 222, 227, 308, 359 Anti-locality 301, 323, 359 Aphasia –– Brocca 321, 359 –– Wernicke 30, 359 A-position –– Argument Position 151, 157, 237, 241, 244, 287, 359 Applicatives –– Applicative phrase 74–75, 137, 359
360 Array (lexical) 24, 30, 39, 55, 78–79, 89, 145, 161, 163, 165, 172, 188, 204, 224, 259, 264, 266, 274, 290, 359 Articulatory-Perceptual –– Interfaces 183–184, 360 Asymmetry 60, 75–76, 249, 271, 275, 316, 327, 360 Attract 72–73, 164, 171, 187, 199, 202, 205, 218–220, 227, 241, 275, 360 B Barrier –– Minimal 24, 42, 52, 58, 60, 67, 72–73, 78, 106, 143–144, 149, 200, 269, 276, 313, 326, 353, 360 Binary Branching 56, 360 Binding Theory 196, 354, 360 C Case –– inherent Case 179–181, 183, 185, 196, 241, 360 –– lexical Case (see Dative, Locative, Instrumental) 92, 111, 113–114, 156, 183, 185, 360 –– morphological Case 114, 360 –– structural Case (see Nominative, Accusative) 111, 113–115, 117, 156, 178–181, 192, 196, 360 Category –– Functional 19, 54–55, 63, 66, 82, 85, 91, 107, 182, 191, 201, 259, 287, 314, 360 –– Lexical 19, 39, 42, 54–55, 62–63, 66, 82, 85, 91, 99–100, 107, 114, 171, 182, 201, 259, 268, 304, 360 Causative (cf. Anti-Causative, Causative Alternation) 65, 71–73, 75–77, 90, 98, 101–102, 157–159, 166, 223, 225–227, 231, 233, 236,
Index
238–243, 245–247, 249, 251–254, 256–257, 260–261, 295, 302, 328–330, 336, 338, 344, 349, 357, 360 Checking Theory (cf. Features, Labels) 353, 360 Child Language 273, 300–301, 313, 327–328, 339, 349, 360 Clause –– Cf. Small Clause 16, 67, 141–142, 152, 154, 158, 160, 167–169, 172, 292, 301, 309, 325, 331, 350–351, 355, 360 Clitic –– Auxiliar 66, 360 –– Proclitic 217, 360 –– Pronominal 66, 360 Clitic Cluster 75, 360 Clitic Doubling (CD) 205, 219–220, 360 Clitic-Left Dislocation S. 206–207, 300, 360 Cognition 29–30, 51, 62, 146, 210, 305, 312, 318, 322, 324, 327, 329, 337, 339, 344, 350 352, 360 Cognitive Grammar 335, 360 Cognitive Linguistics 66, 317, 328, 335, 360 Cognitive Principle of Relevance –– Relevance Theory 19, 27–29, 34–35, 283, 288, 336, 356–357, 360 Comparative –– Phrase 346, 360 Competence –– Cf. FLN, FLB 24–25, 58, 258, 273, 292, 319, 336, 355, 360 Complex NP Constraint (CNPC) 308, 360 Complexity –– Cf. Efficiency 60, 81, 91, 99, 161, 264, 281, 360
Index
Computational –– Capacity 81, 360 Concord –– Cf. Agreement 110, 119, 127, 132–135, 304, 323, 360 Condition on Domain Exclusivity 290, 360 Conjunction 20, 284, 360 Constraint(s) 27–29, 32, 58–59, 78, 86, 90, 100, 103, 143–144, 166, 208, 273–274, 276, 278, 284, 305, 308, 313–314, 319, 322, 341, 343, 345, 348, 355, 360 Construction Grammar 67, 360 Contrastive –– Focus (cf. Information Structure) 152, 221, 361 –– Topic (cf. Information Structure) 218, 361 Control 42, 73–77, 90, 105, 53–154, 157, 233–234, 236, 245, 257–258, 287, 290, 307–308, 313, 327, 331, 349, 361 Control Theory –– Object-Control (Verbs) 331, 361 –– Subject-Control (Verbs) 331, 361 Coordinate Structure Constraint (CSC) 208, 361 Coordination 135, 361 Copula –– Predicational 151, 160, 361 –– Zero 151, 157, 361 Copy –– Copy-delete 82, 361 Corpus 93–96, 117, 153–154, 157, 287, 292, 294, 339, 361 corpus data (cf. also introspective data) 287, 292, 361 Covert Movement 211–212, 361 Crash-proof and Crash-rife Grammars (cf. Grammars) 16, 42–43, 78, 104–105, 200, 336, 345, 361
361 Crash-Proof Grammars –– Crash-Rife-Grammars 42–43, 336, 345, 361 Cycle 148, 161, 164, 188, 264, 266, 317, 328, 361 Cyclic Movement 107, 199, 201, 204, 208, 212, 219–220, 267, 275, 308, 361 D Dative –– Dative Subjects 241, 361 –– Datives Addressee 74–75, 157, 361 –– Datives Benefactives 74–75 , 361 –– Lower Dative 74 , 361 Decomposition 62–63, 78, 171–172, 361 Definite Descriptions (cf. Russel’s Definite Descriptions) 20, 22, 361 Degree Phrase 157, 212–213, 332, 347, 361 Deictic –– Pronoun 116, 361 –– Relation 41, 361 Demonstrative 69, 116, 125, 210, 215–217, 309, 361 Dependency Grammar 342, 361 Depictives 16, 155–156, 361 Determiner 22, 39, 110, 113, 122–125, 169, 216–217, 284–286, 326, 346, 361 Determiner Phrase 113, 124, 346, 361 Diachrony, diachronic 116, 141, 143, 148–149, 168, 276, 325, 339, 361 Discourse Dislocation (cf. Displacement) 206–207, 303, 342, 361 Displacement (cf. Dislocation) 77, 80, 82, 89, 103, 107, 199, 207, 211, 289–290, 341, 361
362 Distributed Morphology (DM) 70, 84, 147, 223, 275, 288, 298, 316, 324–325, 361 Ditransitives, ditransitive Constructions 75, 157, 250–251, 255–256, 361 D-linking 40, 210, 361 Domain 61, 72, 80, 89, 106, 111, 133, 153, 163, 165–166, 168, 172, 185, 195–197, 201–202, 206, 210–212, 214, 244, 249, 265, 267, 269, 272, 277, 290, 293, 323, 346, 348, 350, 352, 361 do-support 84, 87–88, 361 double object constructions cf. ditransitive constructions 111, 166, 335, 361 doubling, cf. Clitic Doubling (CD) 205–206, 219–220, 286, 349, 361 DP (cf. Determiner) 28, 39, 41, 55, 61, 72, 74, 76, 82, 93, 106, 109–110, 113–114, 118–120, 124–126, 128, 132–136, 157, 161, 163–164, 167, 169, 181, 186, 195–196, 200, 203, 205, 207–208, 210–212, 214–217, 220–221, 240, 259, 264–265, 286, 295, 309, 315, 329–330, 333, 335, 342–343, 361 DP-Hypothesis 124–125, 362 E Earliness Principle 88, 91, 278, 343, 362 ECM-Verbs 90, 153, 157–158, 167, 362 Economy –– Conceptual e. (cf. also Parsimony, Parsimonial) 99, 362 –– Derivational e. 87, 142, 183, 218, 220, 310, 313, 339, 352, 362
Index
–– Global e. vs. Local e. 38, 43, 199, 362 –– Representational e. 183, 231, 310, 362 Edge (Phase) (cf. also Successive cyclic Movement) 61, 102, 106–107, 114, 165, 189, 197, 199, 201–203, 205, 211, 213–214, 216, 267, 275, 308, 362 Empty Category 86, 159, 273, 286, 292, 294, 296, 329, 334, 346, 356, 362 Empty Category Principle 86, 346, 362 Endocentricity 55, 143, 167, 362 EPP feature (cf. Extended Projection Principle) 53, 153, 164, 203–205, 265, 267, 275, 287, 362 Ergative (cf. Unaccusative) 114, 145, 233, 302, 362 Ergativity 63, 302, 362 Escape Hatch 267, 275, 362 Event 28, 47, 55, 63–65, 70, 82, 92–93, 121, 145–146, 152, 155, 163–164, 170–171, 224–225, 238, 246, 249–250, 252–261, 285, 297–299, 320–321, 334, 338, 342, 349, 362 Event semantics 155, 252, 341, 362 Event structure 63–65, 82, 102, 163, 224, 246, 249, 253–254, 256–261, 322, 362 Evidentiality 331, 362, 369 Exceptional Case Marking (ECM) 72, 90, 151, 153, 156–158, 167–168, 265, 269, 362 Existential 21, 84, 90, 183, 362 Expletive 83–84, 107, 114, 158, 163–165, 167, 202, 265–267, 362, 369
Index
Extended Projection Principle (EPP) 53, 107, 114, 153, 164–165, 169, 195, 202–205, 265–267, 275, 277, 287, 289, 317, 323, 328, 339, 362 Extended Standard Theory (EST) 58, 362 Externalization Hypothesis –– Of Language 44, 362 F Feature inheritance 99–101, 275, 362 Features –– Categorial f. 100, 235, 343, 362 –– interpretable f. vs. Uninterpretable f. 106, 127–128, 148–149, 163, 169, 184–185, 201, 214, 265, 270, 277, 310, 362 –– strong f. vs weak f. 85, 87, 90, 152, 362 –– valued f. vs. unvalued f. 127–129, 184–185, 189, 224, 276–277, 362 Figure-Ground 33, 362 Filter –– Lasnik Filter FLN 15, 19, 27, 30, 35, 52, 55, 78–81, 85, 107, 201, 283, 362 Focus –– Contrastive f. vs Verum f. 152, 218, 221, 362 Free-word-order-languages 287, 302, 307, 314, 330, 362 Freezing (cf. also AntiFreezing) 107, 201–202, 211, 219, 222, 227, 308, 362 Fregean analysis (cf. Frege) 16, 22–24, 35, 37, 40–41, 47, 319, 362 Functional –– Category/Categories 54–55, 62, 66, 82, 85, 91, 107, 182, 191, 201, 259, 287, 314, 362
363 –– Head(s) 80, 102, 113–114, 163, 165, 192, 259, 292, 294, 311, 313–314, 316, 328, 343, 347, 362 G Gender –– semantic gender 118–122, 126–127, 132–133, 136–139, 363 Gender (cf. also Sex) 72, 82, 92, 100, 103, 108–112, 114–128, 131–133, 135–140, 178, 182, 185–186, 189–190, 193–194, 205, 285–286, 310, 314, 317, 321, 329, 332–333, 338, 340, 344–345, 347, 350, 363 Generalized Quantifiers 21, 186, 363 Generative Grammar –– MGG (Main Stream Generative Grammar) 102, 107, 199, 201, 219–220, 242, 245, 363 Generative Semantics 334, 363 Generator 68, 80, 144, 149, 258, 363 Gerund 93, 363 Goal (cf. Probe-Goal-Searching Domain) 15, 43, 72, 80–81, 101, 111, 113–114, 128, 148, 150, 157, 163, 168, 170, 195, 209, 220, 224, 244, 250, 254–255, 259, 263–264, 268–269, 276–278, 289, 320, 344, 352, 363 Government and Binding Theory (cf. LGB) 86, 99–100, 242, 354, 363 Grammars –– Crash-proof G. 16, 42–44, 78, 200, 222, 336, 345, 363 –– Crash-rife. G. 104–105, 363 –– Early G.s 263, 273, 363 Grammatical functions 225, 284, 353, 363
364 Grammaticalization (cf. Lexicalization) 108, 348, 363 Greed (cf. also Procrastinate and Last Ressort Principle, Least Effort Principle) 89, 363 H Head –– Movement 107, 201, 316, 355, 357, 363 Head Driven Phrase Structure Grammar (HPSG) 67, 363 I Idiom 69, 363 Illocution 24, 26, 35, 363 Illocutionary Force 25–26, 265, 357, 363 Inclusiveness Condition 57, 363 Incorporation 101, 144, 157–158, 166, 171–172, 223, 225, 297, 304, 363 Index –– Agreement 133–134, 363 –– Features 133, 363 Index, indices 31, 40, 48, 133–134, 264, 290, 363 Individual level predicates (cf. also stage level predicates) 16, 151, 155, 163, 168, 363 Inflection(al) 167–168, 187, 281, 284–285, 307, 324, 341, 363 Information Structure 204, 295, 339, 349, 363 Interfaces –– Syntax-Semantics i. 52, 312, 326, 363 Inversion –– Subject-AuxiliaryInversion 85, 363
Index
K Kayne’s Linear Correspondence Axiom (LCA) 293, 301, 363 L Label, Label driven movement 16, 35, 68, 72, 107, 199, 201, 219–220, 275, 296, 363 Label (cf. also category, and feature) 15–16, 24, 28, 40, 42, 55, 57–58, 66, 68–69, 71–72, 78, 102, 202, 216, 221–222, 274–275, 284–285, 296–297, 301, 306, 312–313, 341, 363 Language Acquisition –– First Language Acquisition (L 1) 58, 89, 273, 281, 286, 292, 314, 332, 363 –– Second Language Acquisition (L 2) 58, 364 Language development 276, 281–284, 286, 300, 304, 326, 347, 355, 364 Last Resort 86–88, 90, 219, 309, 364 Late Insertion 74, 88–89, 364 Learnability 151, 286, 315, 344, 364 Least Effort (cf. Economy) 86–87, 219, 364 Left Branch Condition (LBC) 208–209, 364, 65 153, 204, 211, 221, 287, 296, 347, 364 Levels of representation –– D-Structure 86, 208, 214, 219, 364 –– Logical Form (LF) 21, 87, 278, 322, 326, 337, 364 –– Phonological Form (PF) 32, 288, 364 –– S-Structure 87–90, 142, 206, 211–212, 219, 221, 247, 364 –– Surface Structure 93, 231, 254, 329, 343, 364
Index
Lexical Functional Grammar (LFG) 65, 67, 225, 242, 258, 364 Lexical Integrity 223, 309, 331, 338, 364 Lexical Subarray cf. numeration 188, 364 Linear Correspondence Axiom (LCA) 293, 301, 364 Linearization 269, 342, 364 Linking 81, 91, 296, 364 Linking rules 91, 364 Locality 142, 166, 202, 278, 308, 318, 347, 350–351, 364 Locative 111, 114, 146, 166, 179, 181, 364 Logic –– Mathematical Logic 23, 364 Logical Form 21, 87, 278, 322, 326, 337, 364 Long-distance –– agreement 164, 364 –– Anaphora 166, 364 Look-Ahead (Problem) 195, 364 LOT = Language of Thought Hypothesis 45, 319, 349, 364 M Markedness 342, 364 Mathematical logic 23, 364 Meaning –– Extensional M. (cf. Extension) 40, 47, 364 –– Intensional M. (cf. Intension) 40, 42, 47, 68, 364 –– Procedural 24, 28, 35, 37, 298, 317, 336, 364 –– Propositional 24, 26, 37, 46–47, 364 –– Referential 26, 39, 47, 93, 364
365 Merge –– External 42, 55, 66, 80–82, 107, 114, 170, 201, 218–220, 222, 268, 274, 284, 291, 364 –– Internal (cf. Move) 38, 54–55, 61, 82, 84, 101, 107, 114, 127, 201, 204, 206–207, 211–212, 219–221, 224, 268, 274, 284, 287, 296, 333, 364 Merge-over-Move Principle –– Phase impenetrability Condition Methods 66, 283, 307, 330, 364 Middle (cf. Passive, Voice, Diathesis) 20–24, 46, 215, 245, 259, 302, 323, 364 Minimal Link Condition (MLC) 58, 72–73, 80, 135, 364 Minimalism, Radical (cf. Radical Minimalism) 7, 15–17, 19, 32, 35, 44, 54, 56, 80–81, 99, 102, 141–142, 150–151, 165, 167, 170, 199, 270, 276, 279, 300, 331, 333–334, 364 Minimalism. The Minimalist Program 43, 52, 58, 72–73, 82, 183, 263, 311, 314, 317, 319, 328–329, 353–354, 364 Minimality 72, 74, 166, 210, 277, 298, 347, 351, 364 Mirror Principle (MP) 223–224, 304, 315, 322, 331–322, 364 Modal verb 90, 357, 364 Modality 82, 146, 300, 331, 365 Modifier (cf. Specifier) 38, 68, 118–119, 124, 128, 132–133, 176–177, 179, 181, 185–186, 188, 192, 194, 196, 211, 217, 293, 365 Montague Grammar 258, 316, 343, 365
366 Mood 106, 357, 365 Morphology 16, 66, 69–70, 79, 84, 93, 100–102, 110–114, 122, 133, 135–136, 147, 160, 178, 186, 189, 196, 213, 223–224, 226, 231, 236, 238, 247, 274–275, 281, 287–288, 298, 302, 305, 308, 315–317, 324–325, 330–331, 339, 342, 344, 346, 348, 354–357, 365 Move –– internal Move (cf. internal Merge or Copy) 237, 365 Move Alpha 223, 365 Multiple-Spell-Out 42, 161, 264, 353, 365 N Nano Syntax 309, 317, 350, 365 native speaker 153, 190, 365 Negation (cf. Negative Islands) 20–24, 84–85, 90, 273, 329–330, 332, 365 Negative concord 323, 365 Negative Polarity Negative Polarity Items (NPI) 83, 365 No Tampering Condition (NTC) 271, 365 Nominalists –– Cf. Realists 50, 365 –– Cf. Universals 50, 365 Nominalization 41, 70, 92–93, 98–99, 126, 246–248, 252, 261, 285, 302, 310, 344, 352, 365 Null Subject Languages (NSL) 286, 315, 365 Null-Subject 274, 329, 355, 365 Null-Subject Parameter 327, 365 Numeration (cf. also Lexical Subarray) 106, 136, 164–165, 188, 200, 220, 264, 266, 365
Index
O object control (cf. Control) 73–77, 154, 331, 365 object shift 204–205, 218, 267, 365 Ockham’s Razor 50–51, 53, 263, 365 Optimality Theory (OT) 58–59, 150, 327, 338, 334, 365 P Parameter –– Directionality 338, 365 –– Headedness 55, 254, 365 –– Hierarchy 58–59, 82, 105, 125–126, 136–137, 196, 224, 240–244, 248–250, 255, 260, 326, 345, 365 –– Macroparameter 304, 365 –– Microparameter 302, 365 –– Null-Subject P. 327, 315, 365 Parser, Parsing 91, 279, 315, 319, 329, 365 Particle –– Discourse 331, 349, 354, 365 –– Negative 83, 109, 151, 169, 183, 251, 323, 365 –– Partitive 182–183, 186–188, 192, 196, 291, 348, 365 Passive 65, 70, 79, 90, 98, 154, 156, 224–225, 232–234, 236–237, 242, 244–248, 251–252, 261, 265, 269, 302, 304, 322, 365 Percolation (of features) 185, 365 Person 25, 70, 82, 92, 100–101, 110, 118–119, 123, 126, 135, 138, 140, 152, 156, 169, 192, 238, 285, 288, 365 Phase 16, 51, 54, 58–61, 63, 67, 74, 77, 83, 89, 91, 101–102, 105–107, 114, 141–143, 149, 154, 161, 163–166, 172, 175, 183, 185, 187–189, 194–197, 199–203, 205, 210–214, 216–218, 221, 263–269,
367
Index
275–278, 292, 302, 306–308, 310–313, 315, 322, 331, 341, 345, 366 Phase Impenetrability Condition –– PIC 16, 61, 72, 83, 106–107, 165, 185, 197, 199, 200–202, 211, 267, 366 Phi/φ-Features 71, 80, 82, 110, 113, 118–119, 125–126, 132, 134, 161, 163, 168–169, 205–206, 216, 268, 277, 289, 333, 366 Phonetic/Phonological Form (PF) 28, 33, 42, 57, 60, 66–68, 70–72, 78, 80, 82–84, 86, 89–91, 105–107, 112–113, 115–116, 128, 148–149, 161, 163, 166, 170, 175, 183–185, 200–202, 206, 216, 264–265, 268–269, 271–272, 366 Phrase Structure –– Bare 55, 296–297, 311, 342, 366 –– HPSG 67, 366 –– Rules 82, 106, 201, 366 Pied-Piping 153, 209, 215, 366 Polarity 83–84, 366 Polysynthesis (cf. polysynthesis Parameter) 304, 366 Positive Polarity 83, 366 Positive Polarity Items (PPI) Pragmatics 16, 19, 24–25, 27, 35, 142, 303, 310, 315, 322, 327, 331, 342, 349, 356, 366, 369, 370 Predication –– Secondary predication (cf. Small Clause) 16, 154, 159, 351, 366 predicative –– Adjective 154, 366 Preposition 41, 64, 160, 178, 183, 185–186, 232–233, 251, 259, 284, 307–308, 366 Presupposition 21–24, 37, 204, 322, 357, 366
Principle of Full Interpretation (cf. Full Interpretation Principle PFI) 104–105, 169, 184, 200 pro-drop 167, 289, 314, 327, 341, 348, 366 PRO Theorem 74, 154, 158–159, 234–235, 290, 307, 366 Processing 32, 35, 52, 74, 104, 171, 279, 305, 319, 337–338, 366 Procrastinate (cf. Economy, Principles of) 89, 366 Projection 54–55, 57, 61, 78–79, 87–88, 90, 113–114, 124–125, 134, 160–163, 166, 168–169, 181–182, 185, 192, 194, 196, 202, 213, 231–232, 235, 238–240, 242, 245, 268, 275, 309, 313, 315, 323, 348, 366 Proper Government (cf. proper and strict government) 86–87, 366 Proposition 19–20, 22–24, 26–27, 32, 37–38, 41, 43, 93, 106, 141, 265, 366 Prosody 16, 366 Q Quantifier –– Generalized quantifier (Theory) 21, 186, 366 Quantum Mind 29–32, 35, 81, 366 Quasi-Argument 74, 366 R Radical Minimalism (cf. Minimalism, Radical) 7, 15–17, 19, 32, 35, 44, 54, 56, 80–81, 99, 102, 141–142, 150–151, 165, 167, 170, 199, 270, 276, 279, 300, 331, 333–334, 366
368 Raising 39, 73, 85, 87–88, 90, 98, 164, 187, 196, 217–218, 241, 265–267, 306, 333, 345, 348, 353, 357, 367 Realists 45, 50, 367 Reconstruction 43, 60, 71–72, 351, 367 Recursion 65, 78, 81, 292, 312, 337, 367 Redundancy 59, 367 Reference 23, 31, 35, 40–41, 47, 58, 82, 108, 110, 112, 115–116, 118–120, 132, 144, 171, 254, 283, 290, 293, 310, 337, 345, 349, 367 Referential 26, 39–42, 47–48, 81, 93, 108, 110, 112, 114, 126, 134–136, 138, 143, 169–170, 206, 288, 367 Referentiality 169, 206, 367 Relative clause 221, 291, 301, 306, 342, 367 Relativized Minimality 72, 74, 210, 347, 367 Resultative 16, 70–71, 94–95, 168, 257–258, 367 Rheme 295, 367 R-pronoun (= Referential pronoun, cf. ABC of the binding theory) 134, 367 S Scope –– Narrow 21, 367 –– Wide 21, 367 Scrambling 356, 367 A-type Scrambling 199, 207–208, 214–216, 308, 314, 323, 330, 339, 356, 367 –– Free 77, 367 –– Midrange 77, 367 Sluicing 233, 367
Index
Small clause 16, 67, 141–142, 152, 158, 167, 172, 292, 301, 309, 325, 331, 350–351, 355, 367 Specifier-Head Agreement 114, 219 , 367 Specifiers 106, 201, 267, 297, 332, 367 Spell-out –– Multiple Spell-out 42, 161, 264, 353, 367 Stage-level-predicate replace by stage level predicate (cf. Individual level predicate, secondary predicate) 156, 163–165, 168, 172–173, 367 Subcategorization 53, 367 Subjacency (cf. Bounding Theory) 73, 367 Subject control 73–75, 153–154, 367 Successive cyclic movement 267, 275, 308, 367 Super Raising 73, 367 Superiority 73, 367 Syntax-Semantics Interface 52, 367 T Telicity 238, 322, 367 Tense 83, 90, 92, 146, 152, 163, 265, 268, 321, 339, 367 Theory of Meaning –– Mental Lexicon 77, 367 Theta-Criterion 67, 245, 254, 367 Theta-Hierarchy 242, 367 Theta-Roles 66–67, 72, 86, 102, 137, 142, 224, 240, 245, 254, 260, 265, 299, 367 Theta-Theory 91, 309, 367 Thought –– Cf. LOT = Language of Thought Hypothesis 37, 45, 303, 367
369
Index
–– Thought and Language 16, 37, 44, 47, 303, 312, 318, 334, 367 Topic 16, 24–25, 43, 116, 124, 204, 207, 217, 242, 273, 295, 301, 326, 367 Tough-Movement 217, 367 Transformations 86, 91–92, 208, 225, 367 truth values (cf. values) 19, 22–23, 37–38, 82, 367 U Unaccusative 63–65, 145–146, 154, 157, 164, 203, 223–224, 233, 236–238, 241–242, 252–254, 257, 259, 265, 273, 293, 298, 302–303, 310, 331, 336, 367 –– Ergative 114, 145, 233, 302, 368 Underspecification 171, 368 Unergative 63–64, 90, 145, 236–238, 242, 249, 252, 256–257, 273, 293, 295, 297, 300, 329, 368 Universals –– cf. Realists vs Nominalists 50, 368
V V2 Verb Second 347, 368 Values –– Truth V. 19–23, 37–38, 41, 82, 368 Verb Movement 85, 220, 223, 344, 355, 368 W Wh-in-situ 218–219, 368 Wh-Movement 73, 199–200, 211–212, 217–221, 287, 296, 316, 325, 342, 347, 368 Wh-questions 220, 368 Word order; cf. Linearity 75, 153, 206–208, 210, 214, 282–283, 287, 292–295, 302, 307, 314, 330, 332, 339, 342, 368 X X-bar-Structure 368 X-bar phrase 59, 106, 201, 368 X-bar-Theory 157, 368
Potsdam Linguistic Investigations
Potsdamer Linguistische Untersuchungen
Recherches Linguistiques à Potsdam
Edited by / Herausgegeben von / Edité par Peter Kosta, Gerda Haßler, Teodora Radeva-Bork, Lilia Schürcks, Nadine Thielemann and / und / et Vladislava Maria Warditz
Band 1
Peter Kosta / Lilia Schürcks (eds.): Linguistic Investigations into Formal Description of Slavic Languages. Contributions of the Sixth European Conference held at Potsdam University, November 30–December 02, 2005. 2007.
Band 2
Lilia Schürcks: Binding and Discourse. Where Syntax and Pragmatics Meet. 2008.
Band 3
Christiane Hümmer: Synonymie bei phraseologischen Einheiten. Eine korpusbasierte Untersuchung. 2009.
Band 4
Svetlana Friedrich: Definitheit im Russischen. 2009.
Band 5
Matthias Guttke: Strategien der Persuasion in der schriftkonstituierten politischen Kommunikation. Dargestellt an Parteiprogrammen der Neuen Rechten in Polen. 2010.
Band 6
Peter Kosta / Lilia Schürcks (eds.): Formalization of Grammar in Slavic Languages. Contributions of the Eighth International Conference on Formal Description of Slavic Languages – FDSL VIII 2009. University of Potsdam, December 2–5, 2009. 2011.
Band 7
Roman Sukač (ed.): From Present to Past and Back. Papers on Baltic and Slavic Accentology. 2011.
Band 8
Diego Gabriel Krivochen: The Syntax and Semantics of Nominal Construction. A Radically Minimalist Perspective. 2012.
Band 9
Teodora Radeva-Bork: Single and Double Clitics in Adult and Child Grammar. 2012.
Band 10
Anja Hennemann: A Context-sensitive and Functional Approach to Evidentiality in Spanish or Why Evidentiality needs a Superordinate Category. 2013.
Band 11
Diego Gabriel Krivochen / Peter Kosta: Eliminating Empty Categories. A Radically Minimalist View on Their Ontology and Justification. 2013.
Band 12
Christina Behme: Evaluating Cartesian Linguistics. From Historical Antecedents to Computational Modeling. 2014.
Band 13
Kathleen Plötner: Raum und Zeit im Kontext der Metapher. Korpuslinguistische Studien zu französischen und spanischen Raum-Zeit-Lexemen und Raum-Zeit-Lokutionen. 2014.
Band 14
Marion Eva Ernst: Produktnamen der Lebensmittelindustrie. Eine empirisch-strukturelle Untersuchung. 2014.
Band 15
Stefanie Wagner: Eine „unbekannte“ Sprache lesen oder Von der Entdeckung des Nissart durch Interkomprehension. 2015.
Band 16
Nataša Todorović: The Indicative and Subjunctive da-complements in Serbian: A Syntactic-Semantic Approach. 2015.
Band 17
Vladislava Warditz / Beatrix Kreß (eds.): Multilingualism and Translation. Studies on Slavonic and Non-Slavonic Languages in Contact. 2015.
Band 18
Nadia Varley: Optionality and overgeneralization patterns in second language acquisition: Where has the expletive ensconced itself? 2015.
Band 19
Verónica Böhm: La imperfectividad en la prensa española y su relación con las categorías semánticas de modalidad y evidencialidad. 2016.
Band 20
Roland Wagner: Reflexivität im tschechisch-deutschen Sprachvergleich. Möglichkeiten und Grenzen einer Prognose. 2016.
Band 21
Ray C. H. Leung: Institutional Construction of Gamblers’ Identities. A Critical Multi-method Discourse Study. 2017.
Band 22
Markéta Ziková: Licensing of Vowel Length in Czech. The Syntax-Phonology Interface. 2018.
Band 23
Ludmila Veselovská: Noun Phrases in Czech. Their Structure and Agreements. 2018.
Band 24
Carmen Conti Jiménez: Complejidad lingüística. Orígenes y revisión crítica del concepto de lengua compleja. 2018.
Band 25
Nadine Thielemann / Nicole Richter (eds.): Urban Voices: The Sociolinguistics, Grammar and Pragmatics of Spoken Russian. 2019.
Band 26
Forthcoming
Band 27
Davide Fanciullo: Temporal expression in nominals: tripartite deictics in the Bulgarian Rhodope dialects. 2019.
Band 28
Olga Flug: Russisch und Ukrainisch im Wandel. Eine korpusbasierte Untersuchung zur Destandardisierung am Beispiel der Anglisierung in der Werbesprache nach 1985. 2019.
Band 29
Teodora Radeva-Bork / Peter Kosta (eds.): Current Developments in Slavic Linguistics. Twenty Years After (based on selected papers from FDSL 11). 2020.
Band 30
Anja Hennemann: Topic and Focus Markers in Spanish, Portuguese and French. 2020.
Band 31
Peter Kosta: The Syntax of Meaning and the Meaning of Syntax. Minimal Computations and Maximal Derivations in a Label-/Phase-Driven Generative Grammar of Radical Minimalism. 2020.
www.peterlang.com