
Syntax − Theory and Analysis HSK 42.3

Handbücher zur Sprach- und Kommunikationswissenschaft
Handbooks of Linguistics and Communication Science
Manuels de linguistique et des sciences de communication

Co-founded by Gerold Ungeheuer
Co-edited (1985−2001) by Hugo Steger
Edited by Herbert Ernst Wiegand

Band 42.3

De Gruyter Mouton

Syntax − Theory and Analysis
An International Handbook
Volume 3
Edited by Tibor Kiss and Artemis Alexiadou

De Gruyter Mouton

ISBN 978-3-11-036298-5
e-ISBN (PDF) 978-3-11-036368-5
e-ISBN (EPUB) 978-3-11-039315-6
ISSN 1861-5090

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2015 Walter de Gruyter GmbH, Berlin/Munich/Boston
Typesetting: Meta Systems Publishing & Printservices GmbH, Wustermark
Printing and binding: Hubert & Co. GmbH & Co. KG, Göttingen
Cover design: Martin Zech, Bremen
Printed on acid-free paper
Printed in Germany
www.degruyter.com

This handbook is dedicated to the memory of our dear friend Ursula Kleinhenz (1965−2010). The light that burns twice as bright burns half as long.

Contents

Volume 3

VII. Syntactic Sketches
41. German: A Grammatical Sketch · Stefan Müller  1447
42. Hindi-Urdu: Central Issues in Syntax · Alice Davison  1478
43. Mandarin · Lisa L.-S. Cheng and Rint Sybesma  1518
44. Japanese · Takao Gunji  1559
45. Georgian · Alice C. Harris and Nino Amiridze  1588
46. The Bantu Languages · Leston Chandler Buell  1622
47. Tagalog · Paul Schachter  1658
48. Warlpiri · Kenneth L. Hale, Mary Laughren and Jane Simpson  1677
49. Creole Languages · Derek Bickerton  1710
50. Northern Straits Salish · Ewa Czaykowska-Higgins and Janet Leonard  1726
51. Syntactic Sketch: Bora · Frank Seifart  1764

VIII. The Cognitive Perspective
52. Syntax and Language Acquisition · Sonja Eisenbeiss  1792
53. Syntax and Language Disorders · Martina Penke  1833
54. Syntax and Language Processing · Claudia Felser  1875

IX. Beyond Syntax
55. Syntax and Corpora · Heike Zinsmeister  1912
56. Syntax and Stylistics · Monika Doherty  1942
57. Syntax and Lexicography · Rainer Osswald  1963
58. Computational Syntax · Emily M. Bender, Stephen Clark and Tracy Holloway King  2001
59. Reference Grammars · Irina Nikolaeva  2036
60. Language Documentation · Eva Schultze-Berndt  2063
61. Grammar in the Classroom · Anke Holler and Markus Steinbach  2095

Indexes
Language index  2127
Subject index  2133

Volume 1

Preface  vii

I. Introduction: Syntax in Linguistics
1. Syntax − The State of a Controversial Art · Tibor Kiss and Artemis Alexiadou  1
2. Syntactic Constructions · Peter Svenonius  15
3. Syntax and its Interfaces: An Overview · Louise Mycock  24

II. The Syntactic Tradition
4. The Indian Grammatical Tradition · Peter Raster  70
5. Arabic Syntactic Research · Jonathan Owens  99
6. Prestructuralist and Structuralist Approaches to Syntax · Pieter A. M. Seuren  134

III. Syntactic Phenomena
7. Syntactic Categories and Subcategories · Hans-Jürgen Sasse  158
8. Grammatical Relations · Beatrice Primus  218
9. Arguments and Adjuncts · Peter Ackema  246
10. The Morpho-Syntactic Realisation of Negation · Hedde Zeijlstra  274
11. The Syntactic Role of Agreement · Stephen Wechsler  309
12. Verb Second · Anders Holmberg  342
13. Discourse Configurationality · Katalin É. Kiss  383
14. Control · Barbara Stiebels  412
15. Pronominal Anaphora · Silke Fischer  446
16. Coordination · Katharina Hartmann  478
17. Word Order · Werner Frey  514
18. Ellipsis · Lobke Aelbrecht  562
19. Syntactic Effects of Cliticization · Anna Cardinaletti  595
20. Ergativity · Amy Rose Deal  654
21. Relative Clauses and Correlatives · Rajesh Bhatt  708
22. Voice and Valence Change · Edit Doron  749
23. Syntax and Grammar of Idioms and Collocations · Christiane Fellbaum  777

Volume 2

IV. Syntactic Models
24. Minimalism · Marc Richards  803
25. Lexical-Functional Grammar · Miriam Butt and Tracy Holloway King  839
26. Optimality-Theoretic Syntax · Gereon Müller  875
27. HPSG − A Synopsis · Stefan Müller  937
28. Construction Grammar · Mirjam Fried  974
29. Foundations of Dependency and Valency Theory · Michael Klotz  1004
30. Dependency Grammar · Timothy Osborne  1027
31. Categorial Grammar · Jason Baldridge and Frederick Hoyt  1045

V. Interfaces
32. Syntax and the Lexicon · Artemis Alexiadou  1088
33. The Syntax-Morphology Interface · Heidi Harley  1128
34. Phonological Evidence in Syntax · Michael Wagner  1154
35. The Syntax-Semantics Interface · Winfried Lechner  1198
36. The Syntax-Pragmatics Interface · George Tsoulas  1256

VI. Theoretical Approaches to Selected Syntactic Phenomena
37. Arguments and Adjuncts · Daniel Hole  1284
38. Models of Control · Tibor Kiss  1321
39. Theories of Binding · Silke Fischer  1357
40. Word Order · Klaus Abels  1400

VII. Syntactic Sketches

41. German: A Grammatical Sketch

1. Topological fields for description
2. German as an SOV language
3. German as a verb second language
4. The order of elements in the Mittelfeld
5. Extraposition
6. Subjects, passive, case, and agreement
7. Summary
8. Abbreviations
9. References (selected)

Abstract

This paper provides an overview of the most important grammatical properties of German. A large part of the paper is concerned with the basic clause types of German. I start with the Topological Fields Model, which is very useful as a descriptive tool, but − as will be shown − not sufficient for a thorough account of German clausal structure. I therefore explain additional theoretical assumptions that were made in order to assign structure to the observable linear sequences. After a sketch of an analysis of the basic sentence patterns in sections 2−5, I give an account of passive, case assignment, and subject-verb agreement in section 6.

1. Topological fields for description

Drach (1937) developed terminology for talking about regions in the German clause. The terminology was changed and adapted over the years. More recent and more appropriate variants can be found in Reis (1980) and Höhle (1986). The starting point for the motivation of topological fields is the set of sentences in (1). The verbs are realized adjacent to each other only in subordinated sentences like (1a). In finite sentences without a complementizer the finite verb occurs to the left of other verbs and of non-verbal arguments and adjuncts (1b).

(1) a. dass Max gestern das Eis gegessen hat [German]
       that Max yesterday the ice.cream eaten has
       ‘that Max ate the ice cream yesterday’
    b. Hat Max gestern das Eis gegessen?
       has Max yesterday the ice.cream eaten
       ‘Did Max eat the ice cream yesterday?’

Since all examples in this text are in German, the language tag [German] is omitted in the remainder of the text. The complementizer in (1a) and the finite verb in (1b) on the one side and the remaining verbal material on the other side form a bracket around the non-verbal material. The part of the clause which hosts gestern das Eis ‘yesterday the ice cream’ is called the Mittelfeld ‘middle field’, that hosting dass/hat is called the linke Satzklammer ‘left sentence bracket’ and that hosting gegessen hat/gegessen is called the rechte Satzklammer ‘right sentence bracket’. The rechte Satzklammer can contain non-finite verbs, the finite verb, or a verbal particle as in (2b).

(2) a. dass Max das Eis aufisst
       that Max the ice.cream up.eats
       ‘that Max eats up the ice cream’
    b. Isst Max das Eis auf?
       eats Max the ice.cream up
       ‘Does Max eat up the ice cream?’

Predicative adjectives in copula constructions and resultative constructions pattern with particles and should be assigned to the rechte Satzklammer too (Müller 2002). In sentences like (3) the rechte Satzklammer then consists of the adjective treu ‘faithful’ plus the copula, and of the resultative predicate leer ‘empty’ plus the matrix verb, respectively:

(3) a. dass er seiner Frau treu ist
       that he his wife faithful is
       ‘that he is faithful to his wife’
    b. dass er den Teich leer fischt
       that he the pond empty fishes
       ‘that he fishes the pond empty’

Additional fields can be identified to the left of the linke Satzklammer and to the right of the rechte Satzklammer. In (4a) Max is placed in the so-called Vorfeld ‘pre field’, and in (4b) the relative clause that modifies Eis ‘ice cream’ is extraposed: it is located in the Nachfeld ‘post field’.

(4) a. Max hat gestern das Eis gegessen.
       Max has yesterday the ice.cream eaten
       ‘Max ate the ice cream yesterday.’
    b. Max hat gestern das Eis gegessen, das Barbara gekauft hat.
       Max has yesterday the ice.cream eaten that Barbara bought has
       ‘Max ate the ice cream yesterday that Barbara bought.’
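The field decomposition described so far can be made concrete with a small toy data structure. The following is a hypothetical sketch, not part of the original chapter; the field names and example sentences follow the text, everything else is invented for illustration:

```python
# Toy model of the German topological fields (illustrative sketch only).
# A clause analysis is a dict mapping field names to strings; any field
# may be left empty, as the text emphasizes.

FIELDS = ("vorfeld", "linke_klammer", "mittelfeld", "rechte_klammer", "nachfeld")

def linearize(analysis):
    """Concatenate the filled fields in their fixed left-to-right order."""
    return " ".join(analysis[f] for f in FIELDS if analysis.get(f))

# Example (1b): a V1 question -- the finite verb sits in the linke Satzklammer.
question = {
    "linke_klammer": "Hat",
    "mittelfeld": "Max gestern das Eis",
    "rechte_klammer": "gegessen?",
}

# Example (4a): a declarative V2 clause -- the subject fills the Vorfeld.
declarative = {
    "vorfeld": "Max",
    "linke_klammer": "hat",
    "mittelfeld": "gestern das Eis",
    "rechte_klammer": "gegessen.",
}

print(linearize(question))     # Hat Max gestern das Eis gegessen?
print(linearize(declarative))  # Max hat gestern das Eis gegessen.
```

Nothing in this sketch enforces grammaticality; it merely encodes the descriptive claim that the fields are linearly ordered and only optionally filled.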

In addition to the fields already discussed, Höhle suggested a clause-initial field for conjunctions like und ‘and’, oder ‘or’, and aber ‘but’, and a field between this initial field and the Vorfeld for left-dislocated elements, as for instance der Montag ‘the Monday’ in (5). See Altmann (1981) on left dislocation.

(5) Aber der Montag, der passt mir gut.
    but the Monday it suits me well
    ‘But Monday suits me well.’

Höhle calls the latter field KL. It is sometimes also called the Vorvorfeld ‘pre pre field’. The examples above show that not all fields have to be filled in a German clause. For instance, in (5) the rechte Satzklammer and the Nachfeld are empty. The most extreme case is shown in (6a).

(6) a. Schlaf!
       sleep
       ‘Sleep!’
    b. (Jetzt) lies das Buch!
       now read the book
       ‘Read the book now!’

In imperatives the finite verb is serialized in the linke Satzklammer and the Vorfeld may remain empty. In (6a) there is only a finite verb, that is, only the linke Satzklammer is filled; all other fields are empty. Sometimes the fact that fields may be unfilled leads to situations in which the assignment to topological fields is not obvious. For instance, the rechte Satzklammer is not filled by a verb or verb particle in (7). The relative clause could therefore in principle stand to the left or to the right of the rechte Satzklammer, that is, it could be considered part of the Mittelfeld or part of the Nachfeld, depending on the decision made with respect to the location of the bracket.

(7) Er gibt der Frau das Buch, die er kennt.
    he.M gives the woman.F.SG the book.N.SG who.F.SG he knows
    ‘He gives the book to the woman he knows.’

Fortunately, there is a test that helps to determine the position of the rechte Satzklammer. The test is called Rangprobe ‘embedding test’ and was developed by Bech (1955: 72): one can fill the rechte Satzklammer by using a complex tense like the perfect or the future. The tense auxiliary takes the position in the linke Satzklammer and the non-finite verb is placed in the rechte Satzklammer. Applying this test to (7) shows that the non-finite verb has to be placed before the relative clause. Placing it after the relative clause results in ungrammaticality:

(8) a. Er hat der Frau das Buch gegeben, die er kennt.
       he has the woman the book given who he knows
       ‘He gave the book to the woman he knows.’
    b. *Er hat der Frau das Buch, die er kennt, gegeben.
       he has the woman the book who he knows given

As was pointed out by Reis (1980: 82), topological fields can contain material that is internally structured. For instance the Vorfeld in (9b) contains the non-finite verb gewusst in the rechte Satzklammer and the clause dass du kommst in the Nachfeld.

(9) a. Wir haben schon seit langem gewusst, dass du kommst.
       we have already since long known that you come
       ‘We have known for a long time that you are coming.’
    b. [Gewusst, dass du kommst,] haben wir schon seit langem.
       known that you come have we already since long

There is no obvious way to relate the clause type (declarative, imperative, interrogative) to the topological fields model. The reason for this is that irrespective of the clause type, all fields can remain empty (Müller 2014a). The Vorfeld is usually filled in declarative main clauses, but it may be empty as in instances of Vorfeldellipse ‘topic drop’; see Fries (1988), Huang (1984) and Hoffmann (1997):

(10) a. Das hab ich auch gegessen.
        that have I also eaten
        ‘I ate that too.’
     b. Hab’ ich auch gegessen.
        have I also eaten
        ‘I also ate him/her/it.’ or (with different intonation) ‘Did I also eat?’

On the other hand there are examples in which more than one constituent seems to be located in the Vorfeld. These will be discussed in section 3. Similarly, yes/no questions are usually verb-first utterances, as in the second reading of (10b). But with a question intonation V2 is possible as well:

(11) Das hab’ ich auch gegessen?
     that have I also eaten
     ‘Did I eat that too?’

Conversely, V1 sentences are not necessarily questions:

(12) a. Kommt ein Mann zum Arzt.
        comes a man to.the doctor
        ‘A man comes to the doctor.’
     b. Gib mir das Buch!
        give me the book
        ‘Give me the book!’

(12a) is a special form of declarative clause that is used at the beginning of jokes or stories (Önnerfors 1997: Chapter 6.1). (12b) is an imperative. Imperatives are not necessarily V1, as (13) shows (see also Altmann 1993: 1023):

(13) Jetzt gib mir schon das Buch!
     now give me already the book
     ‘Give me the book now!’

To make matters worse, there are even verbless sentences in German. As Paul (1919: 13, 41) noted, there is a variant of the copula that is semantically empty and hence may be omitted if the information about tense corresponds to the default value, present:

(14) a. Doch egal, was noch passiert, der Norddeutsche Rundfunk steht schon jetzt als Gewinner fest.
        but never.mind what still happens the North.German broadcasting.company stands already now as winner PART
        ‘But never mind what happens, it is already certain that the Norddeutscher Rundfunk (North German broadcasting company) will be the winner.’ (Spiegel, 12/1999: 258)
     b. Interessant, zu erwähnen, daß ihre Seele völlig in Ordnung war.
        interesting to mention that her soul completely in order was
        ‘It is interesting to point out that she was completely sane.’
     c. Ein Treppenwitz der Musikgeschichte, daß die Kollegen von Rammstein vor fünf Jahren noch im Vorprogramm von Sandow spielten.
        a stair.joke of.the music.history that the colleagues of Rammstein before five years still in.the before.program of Sandow played
        ‘It is an irony of musical history that the colleagues from (the band) Rammstein were still playing as the support group of Sandow a few years ago.’ (taz, 12. 07. 1999: 14)

(14b) is taken from Michail Bulgakow, Der Meister und Margarita. München: Deutscher Taschenbuch Verlag. 1997: 422. In the sentences in (14) the copula sein ‘be’ has been omitted; they correspond to the sentences in (15):

(15) a. Doch was noch passiert, ist egal, …
        but what still happens is never.mind
        ‘But never mind what happens …’
     b. Zu erwähnen, daß ihre Seele völlig in Ordnung war, ist interessant.
        to mention that her soul completely in order was is interesting
        ‘It is interesting to point out that she was completely sane.’
     c. Dass die Kollegen von Rammstein vor fünf Jahren noch im Vorprogramm von Sandow spielten, ist ein Treppenwitz der Musikgeschichte.
        that the colleagues of Rammstein before five years still in.the before.program of Sandow played is a stair.joke of.the music.history
        ‘It is an irony of musical history that the colleagues from (the band) Rammstein were still playing as the support group of Sandow a few years ago.’

So, the sentences in (14) are declarative clauses, but as Paul (1919: 13) noted, questions without a verb are possible as well:

(16) Niemand da?
     nobody there
     ‘Is anybody there?’ (Paul 1919: 13)

This situation leaves us in a state where it is very difficult to get a clear picture of the connection between order and clause type. The situation can be improved by stipulating empty elements, for instance empty pronouns in topic drop constructions and empty copulas for the constructions in (14) and (16). The empty copula would be placed after Treppenwitz in (14c) and before niemand in (16), and hence the sentences would have a verb in first or second position, respectively. However, see Finkbeiner and Meibauer (2014) and Müller (2014) for arguments for a constructional treatment of such structures. Similarly, the Vorfeld in (10b) would be filled by an empty element and hence the clause would be a verb second clause. With such fillings of the respective fields it is reasonable to state that prototypical declarative clauses are V2 clauses in German and yes/no questions prototypically are V1.

2. German as an SOV language

Starting with Fourquet (1957, 1970: 117−135), Bierwisch (1963: 34), and Bach (1962), German was analyzed as an SOV language, that is, the SOV order is considered the basic order and other orders like the V1 order in (17b) and the V2 order in (17c) are related to the SOV order in (17a).

(17) a. dass Max das Eis aufisst
        that Max the ice.cream up.eats
        ‘that Max eats up the ice cream’
     b. Isst Max das Eis auf?
        eats Max the ice.cream up
        ‘Does Max eat up the ice cream?’
     c. Das Eis isst Max auf.
        the ice.cream eats Max up
        ‘Max eats up the ice cream.’

The initial proposals by Fourquet, Bierwisch, and Bach were adapted and further motivated by Reis (1974), Thiersch (1978: Chapter 1), and den Besten (1983). (See also Koster 1975 on Dutch.) The analysis of German as an SOV language is nowadays standard in GB/Minimalism and is also adopted in various competing frameworks (GPSG: Jacobs 1986: 110; LFG: Berman 2003a: 41; HPSG: Kiss and Wesche 1991; Meurers 2000: 206−208; Müller 2005a, b). The following observations motivate the assumption that SOV is the basic order: verb particles and idioms, the order in subordinate and non-finite clauses (Bierwisch 1963: 34−36), and the scope of adverbials (Netter 1992: section 2.3). The relevant data will be addressed in the following subsections.

2.1. Non-finite forms, verb particles, and idioms

In contrast to SVO languages like English, non-finite verbs cluster at the end of the clause in German:

(18) a. [weil] er nach hause kommt
        because he to home comes
        ‘because he comes home’
     b. [weil] er nach hause gekommen ist
        because he to home come has
        ‘because he has come home’
     c. [weil] er nach hause gekommen sein soll
        because he to home come be should
        ‘because he should have come home’

In main clauses only the finite verb is placed in initial or second position; non-finite verbs stay in the position they take in embedded clauses:

(19) Er soll nach hause gekommen sein.
     he should to home come be
     ‘He should have come home.’

Verb particles form a close unit with the verb. The unit is observable in verb-final sentences only, which supports an SOV analysis (Bierwisch 1963: 35).

(20) a. weil er morgen anfängt
        because he tomorrow at.catches
        ‘because he starts tomorrow’
     b. Er fängt morgen an.
        he catches tomorrow at
        ‘He starts tomorrow.’

The particle verb in (20) is non-transparent. Such particle verbs are sometimes called mini idioms. In fact the argument above can also be made with real idioms: many idioms do not allow rearrangement of the idiom parts. This is an instance of Behaghel’s law (1932) that things that belong together semantically tend to be realized together. The exception is the finite verb: it can be realized in initial or final position despite the fact that this interrupts the continuity of the idiomatic material. Since the continuity can be observed in SOV order only, this order is considered basic. Verbs that are derived from nouns by back-formation often cannot be separated, and verb second sentences are therefore excluded (see Haider 1993: 62, who refers to unpublished work by Höhle 1991):

(21) a. weil sie das Stück heute uraufführen
        because they the play today play.for.the.first.time
        ‘because they premiered the play today’
     b. *Sie uraufführen heute das Stück.
        they play.for.the.first.time today the play
     c. *Sie führen heute das Stück urauf.
        they guide today the play PREFIX.PART

Hence these verbs can only be used in the order that is assumed to be the base order. Similarly, it is sometimes impossible to realize the verb in initial position when elements like mehr als ‘more than’ are present in the clause (Haider 1997; Meinunger 2001):

(22) a. dass Hans seinen Profit letztes Jahr mehr als verdreifachte
        that Hans his profit last year more than tripled
        ‘that Hans increased his profit last year by a factor greater than three’
     b. Hans hat seinen Profit letztes Jahr mehr als verdreifacht.
        Hans has his profit last year more than tripled
        ‘Hans increased his profit last year by a factor greater than three.’
     c. *Hans verdreifachte seinen Profit letztes Jahr mehr als.
        Hans tripled his profit last year more than

So, it is possible to realize the adjunct together with the verb in final position, but there are constraints regarding the placement of the finite verb in initial position.

2.2. Order in subordinate and non-finite clauses

Verbs in non-finite clauses and in subordinate finite clauses starting with a conjunction always appear finally, that is, in the rechte Satzklammer. For example, zu geben ‘to give’ and gibt ‘gives’ appear in the rechte Satzklammer in (23a) and (23b):

(23) a. Der Clown versucht, Kurt-Martin die Ware zu geben.
        the clown tries Kurt-Martin the goods to give
        ‘The clown tries to give Kurt-Martin the goods.’
     b. dass der Clown Kurt-Martin die Ware gibt
        that the clown Kurt-Martin the goods gives
        ‘that the clown gives Kurt-Martin the goods’

2.3. Scope of adverbials

The scope of adverbials in sentences like (24) depends on their order: the left-most adverb scopes over the following adverb and over the verb in final position. This was explained by assuming the following structure:

(24) a. weil er [absichtlich [nicht lacht]]
        because he deliberately not laughs
        ‘because he deliberately does not laugh’
     b. weil er [nicht [absichtlich lacht]]
        because he not deliberately laughs
        ‘because he does not laugh deliberately’

An interesting fact is that the scope relations do not change when the verb position is changed. If one assumes that the sentences have an underlying structure like in (24), this fact is explained automatically:

(25) a. Lachti er [absichtlich [nicht _i]]?
        laughs he deliberately not
        ‘Does he deliberately not laugh?’
     b. Lachti er [nicht [absichtlich _i]]?
        laughs he not deliberately
        ‘Doesn’t he laugh deliberately?’

It has to be mentioned here that there seem to be exceptions to the claim that modifiers scope from left to right. Kasper (1994: 47) discusses the examples in (26), which go back to Bartsch and Vennemann (1972: 137).

(26) a. Peter liest wegen der Nachhilfestunden gut.
        Peter reads because.of the tutoring well
        ‘Peter reads well because of the tutoring.’
     b. Peter liest gut wegen der Nachhilfestunden.
        Peter reads well because.of the tutoring

(26a) corresponds to the expected order in which the adverbial PP wegen der Nachhilfestunden outscopes the adverb gut, but the alternative order in (26b) is possible as well and the sentence has the same reading as the one in (26a). However, Koster (1975: section 6) and Reis (1980: 67) showed that these examples are not convincing evidence, since the rechte Satzklammer is not filled and therefore the orders in (26) are not necessarily variants of Mittelfeld orders but may be due to extraposition of one constituent. As Koster and Reis showed, the examples become ungrammatical when the rechte Satzklammer is filled:

(27) a. *Hans hat gut wegen der Nachhilfestunden gelesen.
        Hans has well because.of the tutoring read
     b. Hans hat gut gelesen wegen der Nachhilfestunden.
        Hans has well read because.of the tutoring
        ‘Hans read well because of the tutoring.’

The conclusion is that (26b) is best treated as a variant of (26a) in which the PP is extraposed.
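The left-to-right scoping claim for (24) can be paraphrased computationally: reading the Mittelfeld adjuncts from left to right corresponds to nesting them from the outside in. The following is a minimal toy sketch (hypothetical code, not from the chapter; the function name is invented):

```python
# Illustrative sketch: the leftmost adjunct takes widest scope, so the
# adjunct sequence is folded over the verb meaning from the right.

def scope(adjuncts, verb):
    """Build a bracketed term where earlier adjuncts scope over later ones."""
    result = verb
    for adj in reversed(adjuncts):
        result = f"{adj}({result})"
    return result

# (24a) weil er absichtlich nicht lacht
print(scope(["absichtlich", "nicht"], "lacht"))  # absichtlich(nicht(lacht))
# (24b) weil er nicht absichtlich lacht
print(scope(["nicht", "absichtlich"], "lacht"))  # nicht(absichtlich(lacht))
```

Since the fold does not mention the position of the finite verb at all, the sketch also mirrors the observation about (25): fronting the verb leaves the scope relations untouched.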

While examples like (26) show that the matter is not trivial, the following example from Crysmann (2004: 383) shows that there are examples with a filled rechte Satzklammer that allow for scopings in which an adjunct scopes over another adjunct that precedes it. For instance, in (28) niemals ‘never’ scopes over wegen schlechten Wetters ‘because of the bad weather’:

(28) Da muß es schon erhebliche Probleme mit der Ausrüstung gegeben haben,
     there must it PART severe problems with the equipment given have
     da [wegen schlechten Wetters] ein Reinhold Messner [niemals] aufgäbe.
     since because.of bad weather a Reinhold Messner never give.up.would
     ‘There must have been severe problems with the equipment, since someone like Reinhold Messner would never give up just because of the bad weather.’

However, this does not change the fact that the sentences in (24) and (25) have the same meaning independent of the position of the verb. The general meaning composition may be done in the way that Crysmann suggested. Another word of caution is in order here: there are SVO languages like French that also have a left to right scoping of adjuncts (Bonami et al. 2004: 156−161). So, the argumentation above should not be seen as the only fact supporting the SOV status of German. In any case the analyses of German that were worked out in various frameworks can explain the facts nicely.

3. German as a verb second language

The Vorfeld can be filled by arguments or adjuncts of the verb:

(29) a. Der Mann hat dem Jungen gestern den Ball gegeben. (subject)
        the man.NOM has the boy.DAT yesterday the ball.ACC given
        ‘The man gave the boy the ball yesterday.’
     b. Den Ball hat der Mann dem Jungen gestern gegeben. (accusative object)
        the ball.ACC has the man.NOM the boy.DAT yesterday given
     c. Dem Jungen hat der Mann gestern den Ball gegeben. (dative object)
        the boy.DAT has the man.NOM yesterday the ball.ACC given
     d. Gestern hat der Mann dem Jungen den Ball gegeben. (adjunct)
        yesterday has the man.NOM the boy.DAT the ball.ACC given

In addition, arguments and adjuncts of other heads can appear in the Vorfeld:

(30) a. [Um zwei Millionen Mark]i soll er versucht haben, [eine Versicherung _i zu betrügen].
        around two million Mark should he tried have an insurance to cheat
        ‘He is said to have tried to cheat an insurance company out of two million Marks.’ (taz, 04. 05. 2001: 20)
     b. „Weri glaubt er, daß er _i ist?“ erregte sich ein Politiker vom Nil.
        who.NOM believes he.NOM that he is excited REFL a politician from.the Nile
        ‘“Who does he think he is”, a politician from the Nile asked excitedly.’ (Spiegel, 8/1999: 18)
     c. Weni glaubst du, daß ich _i gesehen habe.
        who.ACC believe you that I seen have
        ‘Who do you believe that I saw?’ (Scherpenisse 1986: 84)
     d. [Gegen ihn]i falle es den Republikanern hingegen schwerer, [[Angriffe _i] zu lancieren].
        against him fall it the Republicans but more.difficult attacks to launch
        ‘It is more difficult for the Republicans to launch attacks against him.’ (taz, 08. 02. 2008: 9)

The generalization is that a single constituent can be put in front of the finite verb (Erdmann 1886: Chapter 2.4; Paul 1919: 69, 77). Hence, German is called a verb second language. Crosslinguistically, verb second languages are rare: while almost all Germanic languages are verb second languages, V2 in general is not very common among the languages of the world. Sentences like the ones in (29) and (30) are usually analyzed as the combination of a constituent and a verb first clause from which this constituent is missing (Thiersch 1978; den Besten 1983; Uszkoreit 1987). The examples in (30b, c) show that the element in the Vorfeld can originate from an embedded clause. Since the dependency can cross clause boundaries, it is called an unbounded dependency. In any case it is a non-local dependency, as all examples in (30) show. The vast majority of declarative main clauses in German are V2.
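The usual analysis of a V2 clause as a fronted constituent plus a verb-first clause, itself related to the SOV base order, can be caricatured in a few lines of code. This is a hypothetical sketch under simplifying assumptions (flat strings instead of syntactic structure), not the formal analysis of the works cited:

```python
# Illustrative sketch: deriving V1 and V2 orders from an underlying SOV
# clause by fronting the finite verb and, for V2, one further constituent.

def to_v1(sov, finite):
    """Move the finite verb from its clause-final base position to the front."""
    return [finite] + [w for w in sov if w != finite]

def to_v2(sov, finite, fronted):
    """Front one constituent out of the corresponding V1 clause."""
    return [fronted] + [w for w in to_v1(sov, finite) if w != fronted]

# Underlying SOV order of (29), with the verbs clause-final:
sov = ["der Mann", "dem Jungen", "gestern", "den Ball", "gegeben", "hat"]

print(" ".join(to_v2(sov, "hat", "der Mann")))
# der Mann hat dem Jungen gestern den Ball gegeben  -- cf. (29a)
print(" ".join(to_v2(sov, "hat", "den Ball")))
# den Ball hat der Mann dem Jungen gestern gegeben  -- cf. (29b)
```

Capitalization and the distribution over topological fields are ignored here; the point is only the shared assumption that V1 and V2 orders are derived from a single SOV base order.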
However, it did not go unnoticed that there appear to be exceptions to the V2 rule in German (Engel 1970: 81; Beneš 1971: 162; van de Velde 1978; Dürscheid 1989: 87; Fanselow 1993: 67; Hoberg 1997: 1634; G. Müller 1998: Chapter 5.3). Some examples are given in (31):

(31) a. [Zum zweiten Mal] [die Weltmeisterschaft] errang Clark 1965 …
        for.the second time the world.championships won Clark 1965
        'Clark won the world championships for the second time in 1965.'
        (Beneš 1971: 162)

     b. [Besonders schnell] [in die Zahlungsunfähigkeit] rutschen demnach junge Unternehmen und Betriebe mit Umsätzen unter 100.000 €.
        especially fast in the insolvency slip according.to.this young companies and firms with turnovers below 100,000 €
        'According to this, young companies and firms with a turnover below 100,000 € slip into insolvency especially fast.'

     c. „Wir erarbeiten derzeit Grundsätze für den Einsatz von Videoüberwachung“, sagte Jacob der taz. […] [Völlig] [auf die Überwachung] könne aber nicht verzichtet werden, um „Inventurverluste“ zu vermeiden.
        absolutely on the surveillance can but not go.without be to stocktaking.losses to avoid
        'But the surveillance cannot be completely stopped, since this is the only way to avoid stocktaking losses.'
        (taz, 17./18. 05. 2008: 6)

Example (31b) is from tagesschau, 03. 12. 2008, 20:00, http://www.tagesschau.de/multimedia/sendung/ts8914.html. A documentation and discussion of various combinations of constituents can be found in Müller (2003). My web page provides an updated list of examples. While the acceptability of examples like (31) is surprising, it is not the case that anything goes. As Fanselow (1993: 67) pointed out, the fronted constituents have to be parts of the same clause:

(32) a. Ich glaube dem Linguisten nicht, einen Nobelpreis gewonnen zu haben.
        I believe the linguist not a Nobel.prize won to have
        'I do not believe the linguist's claim to have won a Nobel prize.'

     b. *Dem Linguisten einen Nobelpreis glaube ich nicht gewonnen zu haben.
        the linguist a Nobel.prize believe I not won to have

This can be captured by an analysis that assumes an empty verbal head in the Vorfeld that corresponds to a verb in the rest of the sentence. The fronted constituents are combined with this empty verbal head. The analysis of (31a) is thus similar to the one of (33):

(33) [[Zum zweiten Mal] [die Weltmeisterschaft] errungen] hat Clark 1965.
     for.the second time the world.championships won has Clark 1965
     'Clark won the world championships for the second time in 1965.'

See G. Müller (1998: Chapter 5.3) and S. Müller (2005b) for analyses of this type with different underlying assumptions. The analyses share the assumption that apparent multiple frontings of the type discussed here are instances of partial fronting (see Müller 1998; Meurers 1999a; Müller 1999: Chapter 18) and that the V2 property of German can be upheld despite the apparent counter-evidence. This is the place for a final remark on SOV as the basic order: all facts that have been mentioned as evidence for SOV as the basic order can be and have been accounted for in approaches that do not assume an empty verbal head (Uszkoreit 1987; Pollard 1996; Reape 1994; Kathol 2001; Müller 1999, 2002, 2004b). However, such approaches do not extend to examples like (31) easily: Since no overt verbal element is present in the Vorfeld, the only way to account for the data seems to be the stipulation of an empty verbal head or an equivalent grammar rule (Müller 2005a). Head movement approaches assume this element anyway and hence do not require extra stipulations for examples of apparent multiple frontings.
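The analysis of V2 sentences as the combination of a fronted constituent with a verb-first clause lends itself to a mechanical illustration. The following toy linearizer is my own sketch, not part of the chapter's formalism; the flat list representation and the function name are illustrative assumptions:

```python
def v2_orders(finite_verb, mittelfeld, rechte_klammer):
    """Derive all V2 serializations from a verb-first pattern: exactly one
    constituent is fronted into the Vorfeld; the finite verb occupies the
    linke Satzklammer; the remaining constituents stay in the Mittelfeld;
    non-finite verbal material stays in the rechte Satzklammer."""
    orders = []
    for i, vorfeld in enumerate(mittelfeld):
        rest = mittelfeld[:i] + mittelfeld[i + 1:]
        orders.append([vorfeld, finite_verb] + rest + rechte_klammer)
    return orders

# cf. the example 'Gestern hat der Mann dem Jungen den Ball gegeben.'
for order in v2_orders('hat',
                       ['gestern', 'der Mann', 'dem Jungen', 'den Ball'],
                       ['gegeben']):
    print(' '.join(order))
```

Each output clause is a verb-first clause missing exactly one constituent, which surfaces in the Vorfeld, mirroring the analyses of Thiersch, den Besten, and Uszkoreit cited above.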

4. The order of elements in the Mittelfeld

German is a language with relatively free constituent order: the arguments of a verb can be ordered freely provided certain constraints are not violated. A lot of factors play a role: animate NPs tend to be ordered before inanimate ones, short constituents before long ones (Behaghel 1909: 139; Behaghel 1930: 86), pronouns tend to appear before non-pronouns in a Mittelfeld-initial position which is called the Wackernagelposition, and definite NPs before indefinite ones. See Lenerz (1977) and Hoberg (1981) for discussion. Another important constraint is that given information precedes new information (Behaghel 1930: 84). Höhle (1982) looked at German constituent order in information-structural terms and developed criteria for determining the unmarked constituent order. According to him the unmarked order is the one that can be used in most contexts. Applying Höhle's tests one can determine that the order in (34a) is the unmarked one:

(34) a. dass der Mann dem Jungen den Ball gibt          (nom, dat, acc)
        that the man.NOM the boy.DAT the ball.ACC gives
        'that the man gives the boy the ball'

     b. dass der Mann den Ball dem Jungen gibt          (nom, acc, dat)
        that the man.NOM the ball.ACC the boy.DAT gives

     c. dass den Ball der Mann dem Jungen gibt          (acc, nom, dat)
        that the ball.ACC the man.NOM the boy.DAT gives

     d. dass den Ball dem Jungen der Mann gibt          (acc, dat, nom)
        that the ball.ACC the boy.DAT the man.NOM gives

     e. dass dem Jungen der Mann den Ball gibt          (dat, nom, acc)
        that the boy.DAT the man.NOM the ball.ACC gives

     f. dass dem Jungen den Ball der Mann gibt          (dat, acc, nom)
        that the boy.DAT the ball.ACC the man.NOM gives

While the reference to utterance contexts makes it possible to determine the unmarked order, this does not tell us how the marked orders should be analyzed. One option is to derive the marked orders from the unmarked one by transformations or something equivalent (Ross 1967). In a transformational approach, (34b) is derived from (34a) by movement of den Ball 'the ball':

(35) dass der Mann [den Ball]i dem Jungen _i gibt
     that the man the ball the boy gives


Another option is to allow all possible orders and constrain them by linearization rules. This option is called base-generation in Transformational Grammar since the various constituent orders are generated by phrase structure rules before transformations apply, that is, they are part of the transformational base (Fanselow 1993). Non-transformational theories like LFG, HPSG, and CxG can implement analyses that are equivalent to movement transformations, but this is rarely done (see Choi 1999 for an example). Instead the analyses are surface-oriented, that is, one does not assume an underlying order from which other orders are derived. The surface-oriented approaches come in two varieties: those that assume flat structures or flat linearization domains (Uszkoreit 1987; Reape 1994; Bouma and van Noord 1998; Kathol 2001) and those that assume binary branching structures (Berman 2003a: 37, building on work by Haider 1991; Kiss 1995; Müller 2005a). One way to analyze (34b) with binary branching structures is to allow a head to combine with its arguments in any order. This was suggested by Gunji (1986) for Japanese in the framework of HPSG and is also assumed in many HPSG grammars of German. Fanselow (2001) makes a similar proposal for German in the Minimalist Program. The fact that adverbs can appear anywhere in the Mittelfeld is straightforwardly accounted for in analyses that assume binary branching structures:

(36) a. dass [der Mann [dem Jungen [den Ball [gestern gab]]]]
        that the man.NOM the boy.DAT the ball.ACC yesterday gave
        'that the man gave the boy the ball yesterday'

     b. dass [der Mann [dem Jungen [gestern [den Ball gab]]]]
        that the man.NOM the boy.DAT yesterday the ball.ACC gave

     c. dass [der Mann [gestern [dem Jungen [den Ball gab]]]]
        that the man.NOM yesterday the boy.DAT the ball.ACC gave

     d. dass [gestern [der Mann [dem Jungen [den Ball gab]]]]
        that yesterday the man.NOM the boy.DAT the ball.ACC gave

The verb is combined with one of its arguments at a time and the results of the combination are available for modification by adverbial elements. This also accounts for the iterability of adjuncts. In flat structures one would have to admit any number of adjuncts between the arguments. While this is not impossible (Weisweber and Preuss 1992; Kasper 1994), the binary branching analysis is conceptually simpler. Proponents of movement-based analyses argued that scope ambiguities are evidence for movement. While a sentence in the unmarked order is not ambiguous as far as quantifier scope is concerned, sentences with scrambled NPs are. This was explained by the possibility of interpreting the quantifiers at the base position and at the surface position (Frey 1993). So for (37b) one gets jedes > einem (surface position) and einem > jedes (reconstructed position).

(37) a. Es ist nicht der Fall, dass er mindestens einem Verleger fast jedes Gedicht anbot.
        it is not the case that he at.least one publisher almost every poem offered
        'It is not the case that he offered at least one publisher almost every poem.'


     b. Es ist nicht der Fall, dass er fast jedes Gedichti mindestens einem Verleger _i anbot.
        it is not the case that he almost every poem at.least one publisher offered
        'It is not the case that he offered almost every poem to at least one publisher.'

As it turned out, this account overgenerates and hence the scope data can be used as an argument against movement-based analyses. Both Kiss (2001: 146) and Fanselow (2001: section 2.6) point out that the reconstruction analysis fails for examples with ditransitive verbs in which two arguments are in a marked position but keep their relative order. For example, mindestens einem Verleger 'at least one publisher' in (38) is predicted to be interpretable at the position _i. This would result in a reading in which fast jedes Gedicht 'almost every poem' outscopes mindestens einem Verleger.

(38) Ich glaube daß mindestens einem Verlegeri fast jedes Gedichtj nur dieser Dichter _i _j angeboten hat.
     I believe that at.least one publisher almost every poem only this poet offered has
     'I believe that only this poet offered at least one publisher almost every poem.'

Such a reading does not exist. In recent analyses in the Minimalist Program (Chomsky 1995) it is assumed that movement of phrases is feature driven, that is, an element moves to a specifier position in a syntactic tree if it can check a feature at this position. Frey (2004a) assumes a KontrP (contrast phrase) and Frey (2004b) a TopP (topic phrase) in order to provide targets for movement (see also Rizzi 1997 for TopP and FocP 'focus phrase' in Italian and Haftka 1995; Grewendorf 2002: section 2.6, 2009; Abraham 2003: 19; Laenzlinger 2004: 224; Hinterhölzel 2004: 18 for analyses of German using TopP and/or FocP). Constituents have to move into the specifier position of one of these functional heads depending on their information-structural status.
Fanselow (2003) showed that such movement-based approaches fail, since there are cases of so-called altruistic movement (see Rosengren 1993: 290−291 and Krifka 1998: 90). That is, elements do not move because of their own properties, but rather in order to free positions for other elements. For instance, assuming the main accent at the default position immediately before the verb, the object will not be part of the focus in (39b).

(39) a. dass die Polizei gestern Linguisten verhaftete
        that the police yesterday linguists arrested
        'that the police arrested linguists yesterday'

     b. dass die Polizei Linguisten gestern verhaftete
        that the police linguists yesterday arrested

If the object stays in the position next to the verb as in (39a), it gets the structural accent (focus accent) and has to be interpreted as part of the focus. Fanselow gives the following generalization with respect to reorderings: a direct object can be placed at a marked position if the information structure of the sentence requires that another constituent is in focus or that the object is not part of the focus. In languages like German partial focussing can also be established by intonation, but choosing a marked constituent order helps in marking the information structure unambiguously, especially in written language. German differs from languages like Spanish (Zubizarreta 1998) in that the (altruistic) movement is optional in the former language but obligatory in the latter. It follows that it is not reasonable to assume that constituents move to certain tree positions to check features. However, this is the basic explanation for movement in current Minimalist theorizing. Fanselow (2003: section 4, 2006: 8) also showed that order restrictions that hold for topic and focus with respect to sentence adverbials can be explained in an analysis such as the one that was laid out above. The positioning of sentence adverbs directly before the focused part of the sentence is explained semantically: since sentence adverbials behave like focus-sensitive operators, they have to be placed directly before the element they take scope over. It follows that elements that are not part of the focus (topics) have to be placed to the left of sentence adverbs. No special topic position is necessary for the description of local reorderings.
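The combinatorics described in this section (six argument orders in (34), four adjunct positions in (36)) can be enumerated mechanically. The following minimal sketch uses flat string lists and a helper name of my own choosing; it is an illustration of what the binary branching analysis licenses, not a claim about any particular formalism:

```python
from itertools import permutations

ARGS = ['der Mann', 'dem Jungen', 'den Ball']

# (34): with arguments discharged in any order, a ditransitive verb
# licenses 3! = 6 Mittelfeld serializations.
arg_orders = [list(p) for p in permutations(ARGS)]
print(len(arg_orders))  # 6

def adjunct_positions(order, adjunct):
    """(36): since every intermediate verbal projection can be modified,
    an adjunct can be inserted before any argument or directly before
    the verb, i.e. at len(order) + 1 positions."""
    return [order[:i] + [adjunct] + order[i:] for i in range(len(order) + 1)]

# the four clauses of (36), in top-down order of adjunct attachment
for o in adjunct_positions(ARGS, 'gestern'):
    print('dass', ' '.join(o), 'gab')
```

Combined, the sketch yields 6 × 4 = 24 serializations for one adjunct, which is exactly what a flat analysis would have to stipulate by admitting adjuncts between any two arguments.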

5. Extraposition

In section 3 we discussed fronting data. In this section I discuss dislocations of elements to the right. Extraposition can be used to postpone heavy elements. This is useful since otherwise the sentence brackets may be too far away from each other to be processed successfully. (40) is an example of a train announcement:

(40) Auf Gleis drei fährt ein der ICE aus Hamburg zur Weiterfahrt nach München über …
     on platform three drives PART the ICE from Hamburg to.the continuation.of.the.journey to Munich via
     'The ICE train from Hamburg to Munich via … is arriving at platform three.'

The syntactic category of the extraposed element is not restricted. PPs, VPs, clauses and − as evidenced by (40) − even NPs can be extraposed. See Müller (1999: Chapter 13.1) and Müller (2002: ix−xi) for further naturally occurring examples of NP extraposition of different types. Despite the tendency to extrapose heavy constituents, extraposition is not restricted to heavy phrases:

(41) a. [[_i Bekannt] dazui] hatte sich die „Kämpfende Kommunistische Partei“, eine Neugruppierung aus den Resten der altterroristischen Roten Brigaden.
        confessed there.to had REFL the fighting communist party a reformation from the remainders of.the old.terrorist Red Brigades
        'The Fighting Communist Party, a reformation of remainders of the old terrorist group Red Brigades, confessed this.'
        (Spiegel, 44/1999: 111)


     b. „Würde der sich doch aufhängen, jetzt, dann wäre Ruhe.“
        would he REFL only hang now then would.be silence
        'If he would only hang himself now, peace would be restored.'
        (taz, 18. 11. 1998: 13)

In (41a) the pronominal adverb dazu is placed to the right of the non-finite verb, that is, it is in the Nachfeld of a complex Vorfeld. In (41b) the adverb jetzt is extraposed. The following example by Olsen (1981: 147) shows that sentential arguments may be realized in the Mittelfeld.

(42) Ist, dass Köln am Rhein liegt, auch in Amerika bekannt?
     is that Cologne at.the Rhine lies also in America known
     'Is it known in America as well that Cologne is located at the Rhine?'

Hence, it is plausible to assume that verbs take their arguments and adjuncts to the left but, due to extraposition, the arguments and adjuncts may appear in the Nachfeld to the right of the verb. In connection with the Subjacency Principle (Chomsky 1973: 271, 1986: 40; Baltin 1981, 2006) it was claimed for German that extraposition is a restricted process in which only two maximal projections may be crossed (Grewendorf 1988: 281; Rohrer 1996: 103). Which projections may be crossed is said to be due to language-specific parameterization (Baltin 1981: 262, 2006; Rizzi 1982; Chomsky 1986: 40). According to Grewendorf (1988: 81, 2002: 17−18) and Haider (2001: 285), NP is such a bounding node in German. As the data in (43) show, extraposition in German is clearly a nonlocal phenomenon that can cross as many NP nodes as we can come up with:

(43) a. Karl hat mir [eine Kopie [einer Fälschung [des Bildes [einer Frau _i]]]] gegeben, [die schon lange tot ist]i.
        Karl has me a copy of.a forgery of.the picture of.a woman given who already long dead is
        'Karl gave me a copy of a forgery of the picture of a woman who has been dead for a long time.'

     b. Ich habe [von [dem Versuch [eines Beweises [der Vermutung _i]]]] gehört, [dass es Zahlen gibt, die die folgenden Bedingungen erfüllen]i.
        I have of the attempt of.a proof of.the assumption heard that it numbers gives that the following conditions satisfy
        'I have heard of the attempt to prove the assumption that there are numbers for which the following conditions hold.'

(43a) shows an example of adjunct extraposition and (43b) shows that complement extraposition is possible as well. For discussion and corpus data see Müller (1999: 211, 2004a, 2007). Koster (1978: 52) provides Dutch examples parallel to (43a). See also Strunk and Snider (2013) for German and English data. A discussion of the differences between examples like (43) and the ungrammatical examples that have previously been discussed in the literature as evidence for subjacency constraints can be found in Crysmann (2013). The data from section 3 show that fronting to the left can cross clause boundaries. In contrast, extraposition seems to be clause bounded. The clause-boundedness constraint was first discussed by Ross (1967) and later termed the Right Roof Constraint (RRC). However, the Right Roof Constraint was called into question by Kohrt (1975) and Meinunger (2000). Kohrt's examples and most of Meinunger's examples can be explained as mono-clausal structures involving several verbs that form a verbal complex and, hence, do not constitute evidence against the RRC. But Meinunger (2000: 201) pointed out that sentences like (44) pose a challenge for the RRC:

(44) Peter hat, [dass er uns denjenigen Computer _i schenkt] fest versprochen, [den er nicht mehr braucht]i.
     Peter has that he us the.one computer gives firmly promised that he not anymore needs
     'Peter can't go back on his promise that he will give us the computer he no longer needs as a present.'

(45) shows a naturally occurring example:

(45) [„Es gibt viele wechselseitige Verletzungen _i“], befindet er, [in die sich einzumischen er nicht die geringste Neigung zeigt]i.
     it gives many reciprocal injuries finds he in which REFL to.involve he not the slightest inclination shows
     'He finds that there are many reciprocal injuries and he does not show the slightest inclination to get involved in these injuries.'
     (taz, 01. 04. 2009: 16)

However, (45) differs from (44) in that it could be explained as a parenthetical insertion of befindet er 'finds he' into a normal sentence (see Reis 1995 on parenthesis in German). According to the parenthetical analysis, (45) would not involve extraposition at all. While the above examples are marked − (44) is more marked than (45) −, it is an open question how these cases should be handled. For the corresponding restrictions on leftward movement it has been pointed out that both information structure (Goldberg 2006: Chapter 7.2; Ambridge and Goldberg 2008) and processing constraints (Grosu 1973; Ellefson and Christiansen 2000; Gibson 1998; Kluender 1992; Kluender and Kutas 1993) influence extractability.
So, a combination of similar factors may play a role for movement to the right as well, and hence the Right Roof Constraint would not be a syntactic constraint but the result of other restrictions.

6. Subjects, passive, case, and agreement

German is a language that allows for subjectless constructions. There are a few verbs like grauen 'to dread', schwindeln + dative/accusative 'to feel dizzy', and frieren + accusative 'to be cold' that can be used without a subject. (46) shows an example:

(46) Den Studenten graut vor der Prüfung.
     the student.DAT.PL dreads.3SG before the exam
     'The students dread the exam.'


The dative and accusative arguments of the verbs mentioned above are not subjects since they do not agree with the verb (46), they are not omitted in controlled infinitives (in fact, control constructions are not possible at all, 47a), and the verbs do not allow imperatives to be formed (47b) (Reis 1982).

(47) a. *Der Student versuchte, (dem Student) nicht vor dem Examen zu grauen.
        the student.NOM tried the student.DAT not before the exam to dread
        'The student tried not to dread the exam.'

     b. *Graue nicht vor der Prüfung!
        dread not before the exam
        'Do not dread the exam!'

As Reis (1982) argued, German subjects are always NPs in the nominative. The view that clauses are never subjects is not shared by everybody (see for instance Eisenberg 1994: 285). In particular in theories like LFG, in which grammatical functions are primitives of the theory, there is an ongoing debate concerning the status of sentential arguments: Dalrymple and Lødrup (2000); Berman (2003b, 2007); Alsina, Mohanan, and Mohanan (2005); Forst (2006). In any case, the status of sentential arguments does not affect the fact that subjectless constructions exist in German. German also allows for passivization of intransitive verbs, resulting in subjectless sentences:

(48) a. Hier tanzen alle.
        here dance all.NOM
        'Everybody dances here.'

     b. Hier wird getanzt.
        here is danced
        'Dancing is done here.'

     c. Die Frau hilft dem Mann.
        the woman.NOM helps the man.DAT

     d. Dem Mann wird geholfen.
        the man.DAT is helped
        'The man is being helped.'

tanzen is an intransitive verb. In the passive sentence (48b), no NP is realized. helfen is a verb that governs the nominative and the dative (48c). In passive sentences the subject is suppressed and the dative object is realized without any change (48d). The sentences in (48b) and (48d) are subjectless constructions. German differs from languages like Icelandic in not having dative subjects (Zaenen, Maling, and Thráinsson 1985).
One test for subjecthood that Zaenen, Maling, and Thráinsson (1985: 477) apply is the test for controllability of an element.


(49) *Der Student versucht, getanzt zu werden.
     the student tries danced to get
     Intended: 'The student tries to dance.' or 'The student tries to make somebody dance.'

Like (49), infinitives with passivized verbs that govern only a dative cannot be embedded under control verbs, as (50) shows.

(50) *Der Student versucht, geholfen zu werden.
     the student tries helped to get
     Intended: 'The student tries to get helped.'

This shows that the dative in (48d) is a complement and not a subject. There is a very direct way to analyze the passive in German (and other languages) that goes back to Haider (1984, 1986). Haider suggests designating the argument of the verb that has subject properties. This argument is the subject of unergative and transitive verbs. Unaccusative verbs do not have a designated argument, since it is assumed that their nominative argument has object properties (see Grewendorf 1989 for an extensive discussion of unaccusativity in German, Kaufmann 1995 for a discussion of semantic factors, and Müller 2002: Chapter 3.1.1 for problems with some of the unaccusativity tests). (51) shows some prototypical argument frames with the designated argument underlined: ankommen 'to arrive', tanzen 'to dance', auffallen 'to notice', lieben 'to love', schenken 'to give as a present', and helfen 'to help'.

(51)                              arguments
     a. ankommen (unaccusative):  ⟨NP[str]⟩
     b. tanzen (unergative):      ⟨NP[str]⟩
     c. auffallen (unaccusative): ⟨NP[str], NP[ldat]⟩
     d. lieben (transitive):      ⟨NP[str], NP[str]⟩
     e. schenken (transitive):    ⟨NP[str], NP[str], NP[ldat]⟩
     f. helfen (unergative):      ⟨NP[str], NP[ldat]⟩

In the valence frames in (51), str stands for structural case and ldat for lexical dative. Structural case is case that changes depending on the syntactic environment. For instance, the second argument of schenken can be realized as accusative in the active and as nominative in passive sentences:

(52) a. dass sie dem Jungen den Ball geschenkt hat
        that she.NOM the boy.DAT the ball.ACC given has
        'that she gave the boy the ball'

     b. dass dem Jungen der Ball geschenkt wurde
        that the boy.DAT the ball.NOM given was
        'that the ball was given to the boy'


I follow Haider (1986: 20) in assuming that the dative is a lexical case. As shown in (48d), the dative does not change in the werden passive. (Since arguments that are dative in the active can be realized as nominative in the bekommen 'become' passive, the status of the dative as structural or lexical case is controversial. See Müller 2002: Chapter 3 for a treatment of the bekommen passive and further references.) The arguments are ordered with respect to obliqueness (Keenan and Comrie 1977), which is relevant for many phenomena, for instance, topic drop as in example (10b), case assignment, and pronoun binding (Grewendorf 1985; Pollard and Sag 1992). The morphological rule that licenses the participle blocks the designated argument. (53) shows the participles and their blocked arguments.

(53)                                DA           SUBCAT
     a. angekommen (unaccusative):  ⟨⟩           ⟨NP[str]⟩
     b. getanzt (unergative):       ⟨NP[str]⟩    ⟨⟩
     c. aufgefallen (unaccusative): ⟨⟩           ⟨NP[str], NP[ldat]⟩
     d. geliebt (transitive):       ⟨NP[str]⟩    ⟨NP[str]⟩
     e. geschenkt (transitive):     ⟨NP[str]⟩    ⟨NP[str], NP[ldat]⟩
     f. geholfen (unergative):      ⟨NP[str]⟩    ⟨NP[ldat]⟩

The passive auxiliary combines with the participle and realizes all unblocked arguments (52b), while the perfect auxiliary deblocks the designated argument and realizes it in addition to all other arguments of the participle (52a). Having explained which arguments are realized in active and passive, I now turn to case assignment and agreement: In verbal domains, nominative is assigned to the least oblique argument with structural case. All other arguments with structural case are assigned accusative in verbal domains. See Yip, Maling, and Jackendoff (1987) as well as Meurers (1999b), Przepiórkowski (1999), and Müller (2008) for further details on case assignment along these lines. In the analysis developed here, the verb agrees with the least oblique argument that has structural case. If there is no such argument, the verb is 3rd person singular. Such an analysis of passive, as opposed to a GB analysis à la Grewendorf (1989), can explain the German data without the stipulation of empty expletive elements. The problem for movement-based analyses of the German passive in the spirit of Chomsky (1981) is that there is no movement. To take an example, consider the passive of (54a). The unmarked serialization of the arguments in the passivized clause is (54b), not the serialization in (54c), which could be argued to involve movement of the underlying accusative object (Lenerz 1977: section 4.4.3).

(54) a. dass das Mädchen dem Jungen den Ball schenkt
        that the girl.NOM the boy.DAT the ball.ACC gives.as.a.present
        'that the girl gives the boy the ball as a present'

     b. dass dem Jungen der Ball geschenkt wurde
        that the boy.DAT the ball.NOM given was
        'that the ball was given to the boy'

     c. dass der Ball dem Jungen geschenkt wurde
        that the ball.NOM the boy.DAT given was

The object in the active sentence is serialized in the same position as the subject of the passive sentence. Grewendorf captured this by assuming that there is an empty expletive element in the position where nominative is assigned and this empty element is connected to the subject which remains in the VP and gets case by transfer from the subject position. The same would apply to agreement information. Given recent assumptions about the nature of linguistic knowledge (Hauser, Chomsky, and Fitch 2002; Goldberg 2006; Tomasello 2003), analyses that assume empty expletive elements are not adequate since they cannot account for language acquisition. In order for the respective grammars to be learnable there has to be innate language specific knowledge that includes knowledge about subject positions and knowledge about the obligatoriness of subjects. In the analysis suggested here, no such knowledge is necessary.
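The interplay of designated-argument blocking (53), auxiliary selection, and structural case assignment can be summarized in a small program. This is my own illustrative encoding; the dictionary format and the function names are assumptions for the sketch, not the chapter's notation:

```python
# Valence entries as in (51)/(53): 'str' = structural case, 'ldat' = lexical
# dative; arguments are listed from least to most oblique.
LEXICON = {
    'geschenkt': {'da': ['str'], 'subcat': ['str', 'ldat']},  # cf. (53e)
    'geholfen':  {'da': ['str'], 'subcat': ['ldat']},         # cf. (53f)
    'getanzt':   {'da': ['str'], 'subcat': []},               # cf. (53b)
}

def realized_args(participle, auxiliary):
    """The passive auxiliary (werden) realizes only the unblocked SUBCAT
    arguments; the perfect auxiliary (haben) deblocks the designated
    argument (DA) and realizes it in addition."""
    entry = LEXICON[participle]
    if auxiliary == 'werden':
        return list(entry['subcat'])
    return entry['da'] + entry['subcat']

def assign_case(args):
    """Nominative goes to the least oblique structural argument, accusative
    to all other structural arguments; lexical dative is unaffected."""
    cases, nom_done = [], False
    for arg in args:
        if arg == 'str':
            cases.append('acc' if nom_done else 'nom')
            nom_done = True
        else:
            cases.append('dat')
    return cases

def agreement(args):
    """The verb agrees with the least oblique structural argument; if there
    is none, it defaults to 3rd person singular."""
    return 'agree-with-nominative' if 'str' in args else '3sg-default'

print(assign_case(realized_args('geschenkt', 'haben')))   # ['nom', 'acc', 'dat']
print(assign_case(realized_args('geschenkt', 'werden')))  # ['nom', 'dat'], cf. (52b)
print(assign_case(realized_args('geholfen', 'werden')))   # ['dat'], cf. (48d)
print(agreement(realized_args('geholfen', 'werden')))     # 3sg-default
```

The werden passive of geholfen comes out with only a dative argument and default third person singular agreement, matching the subjectless pattern of (48d) and (50), without any movement or empty expletive element being involved.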

7. Summary

In this article I sketched the main building blocks of German clausal syntax. I assume a binary branching verb-final structure. This structure is assumed for verb-initial and for verb-final clauses. In verb-initial clauses the verb is related to a trace in the rechte Satzklammer. The arguments of the verb can be discharged in any order and adverbs can appear between the arguments at any place in the Mittelfeld. The subject is selected by the verb like any other argument. This gives a straightforward account of subjectless sentences. While I hope to have been able to sufficiently motivate such an analysis throughout the individual sections, the analysis remains sketchy. Due to space limitations I could not go into the details, but the pointers to the relevant publications will enable the interested reader to get more information. Of course pointers to publications of authors working in different frameworks do not guarantee that a sketch can be turned into a consistent grammar fragment, but the reader may rest assured that the things that I presented here are consistent: they have been implemented in a downloadable, computer-processable grammar fragment that is described in detail in Müller (2013).

Acknowledgements

I thank Felix Bildhauer, Philippa Cook, Jakob Maché, Bjarne Ørsnes, and an anonymous reviewer for comments on an earlier version of this paper.


8. Abbreviations

The following is a list of abbreviations used throughout the paper that are not defined by the Leipzig Glossing Rules.

PART     particle
PREFIX   prefix

9. References (selected)

Abraham, Werner
2003 The syntactic link between thema and rhema: the syntax-discourse interface. Folia Linguistica 37.1−2: 13−34.

Alsina, Alex, KP Mohanan, and Tara Mohanan
2005 How to get rid of the COMP. In: Miriam Butt and Tracy Holloway King (eds.), Proceedings of the LFG 2005 Conference. Stanford, CA: CSLI Publications.

Altmann, Hans
1981 Formen der „Herausstellung“ im Deutschen: Rechtsversetzung, Linksversetzung, freies Thema und verwandte Konstruktionen. (Linguistische Arbeiten 106.) Tübingen: Max Niemeyer Verlag.

Altmann, Hans
1993 Satzmodus. In: Joachim Jacobs, Arnim von Stechow, Wolfgang Sternefeld and Theo Vennemann (eds.), Syntax − Ein internationales Handbuch zeitgenössischer Forschung 9.2, 1006−1029. (Handbücher zur Sprach- und Kommunikationswissenschaft.) Berlin, New York: Walter de Gruyter.

Ambridge, Ben, and Adele E. Goldberg
2008 The island status of clausal complements: Evidence in favor of an information structure explanation. Cognitive Linguistics 19: 349−381.

Bach, Emmon
1962 The order of elements in a Transformational Grammar of German. Language 38.3: 263−269.

Baltin, Mark
1981 Strict bounding. In: Carl Lee Baker and John J. McCarthy (eds.), The Logical Problem of Language Acquisition. Cambridge, MA/London: MIT Press.

Baltin, Mark
2006 Extraposition. In: Martin Everaert, Henk van Riemsdijk, Rob Goedemans and Bart Hollebrandse (eds.), The Blackwell Companion to Syntax, 237−271. (Blackwell Handbooks in Linguistics.) Oxford: Blackwell Publishing Ltd.

Bartsch, Renate, and Theo Vennemann
1972 Semantic Structures. A Study in the Relation between Semantics and Syntax. (Athenäum-Skripten Linguistik 9.) Frankfurt/Main: Athenäum.

Bech, Gunnar
1955 Studien über das deutsche Verbum infinitum. (Linguistische Arbeiten 139.) Tübingen: Max Niemeyer Verlag. 2nd unchanged edition 1983.

Behaghel, Otto
1909 Beziehung zwischen Umfang und Reihenfolge von Satzgliedern. Indogermanische Forschungen 25: 110−142.

Behaghel, Otto
1930 Von deutscher Wortstellung. Zeitschrift für Deutschkunde 44: 81−89.


VII. Syntactic Sketches

Behaghel, Otto 1932 Die deutsche Syntax. Eine geschichtliche Darstellung. Band IV: Wortstellung. Periodenbau. (Germanische Bibliothek.) Heidelberg: Carl Winters Universitätsbuchhandlung. Beneš, Eduard 1971 Die Besetzung der ersten Position im deutschen Aussagesatz. In: Hugo Moser (ed.), Fragen der Strukturellen Syntax und der Kontrastiven Grammatik, 160−182. (Sprache der Gegenwart − Schriften des IdS Mannheim 17.) Düsseldorf: Pädagogischer Verlag Schwann. Berman, Judith 2003a Clausal Syntax of German. (Studies in Constraint-Based Lexicalism.) Stanford, CA: CSLI Publications. Berman, Judith 2003b Zum Einfluss der strukturellen Position auf die syntaktische Funktion der Komplementsätze. Deutsche Sprache 3: 263−286. Berman, Judith 2007 Functional identification of complement clauses in German and the specification of COMP. In: Annie Zaenen, Jane Simpson, Tracy Holloway King, Jane Grimshaw, Joan Maling and Chris Manning (eds.), Architectures, Rules, and Preferences. Variations on Themes by Joan W. Bresnan. Stanford, CA: CSLI Publications. Bierwisch, Manfred 1963 Grammatik des deutschen Verbs. (studia grammatica 2.) Berlin: Akademie Verlag. Bonami, Olivier, Danièle Godard, and B. Kampers-Manhe 2004 Adverb classification. In: Francis Corblin and Henriëtte de Swart (eds.), Handbook of French Semantics, 143−184. Stanford, CA: CSLI Publications. Bouma, Gosse, and Gertjan van Noord 1998 Word order constraints on verb clusters in German and Dutch. In: Erhard W. Hinrichs, Andreas Kathol and Tsuneko Nakazawa (eds.), Complex Predicates in Nonderivational Syntax, 43−72. (Syntax and Semantics 30.) San Diego: Academic Press. Choi, Hye-Won 1999 Optimizing Structure in Context: Scrambling and Information Structure. (Dissertations in Linguistics.) Stanford, CA: CSLI Publications. Chomsky, Noam 1973 Conditions on transformations. In: Stephen R. Anderson and Paul Kiparsky (eds.), A Festschrift for Morris Halle, 232−286. New York: Holt, Rinehart & Winston.
Chomsky, Noam 1981 Lectures on Government and Binding. Dordrecht: Foris Publications. Chomsky, Noam 1986 Barriers. (Linguistic Inquiry Monographs 13.) Cambridge, MA/London, England: MIT Press. Chomsky, Noam 1995 The Minimalist Program. (Current Studies in Linguistics 28.) Cambridge, MA/London, England: MIT Press. Crysmann, Berthold 2004 Underspecification of intersective modifier attachment: Some arguments from German. In: Stefan Müller (ed.), Proceedings of the 11th International Conference on Head-Driven Phrase Structure Grammar. Stanford, CA: CSLI Publications. Crysmann, Berthold 2013 On the locality of complement clause and relative clause extraposition. In: Gert Webelhuth, Manfred Sailer and Heike Walker (eds.), Rightward Movement in a Comparative Perspective, 369−396. (Linguistik Aktuell/Linguistics Today 200.) Amsterdam/Philadelphia: John Benjamins Publishing Co.

41. German: A Grammatical Sketch


Dalrymple, Mary, and Helge Lødrup 2000 The grammatical functions of complement clauses. In: Miriam Butt and Tracy Holloway King (eds.), Proceedings of the LFG 2000 Conference. Stanford, CA: CSLI Publications. den Besten, Hans 1983 On the interaction of root transformations and lexical deletive rules. In: Werner Abraham (ed.), On the Formal Syntax of the Westgermania: Papers from the 3rd Groningen Grammar Talks, Groningen, January 1981, 47−131. (Linguistik Aktuell/Linguistics Today 3.) Amsterdam/Philadelphia: John Benjamins Publishing Co. Drach, Erich 1937 Grundgedanken der deutschen Satzlehre. Darmstadt: Wissenschaftliche Buchgesellschaft. 4., unveränderte Auflage 1963. Dürscheid, Christa 1989 Zur Vorfeldbesetzung in deutschen Verbzweit-Strukturen. (FOKUS 1.) Trier: Wissenschaftlicher Verlag. Eisenberg, Peter 1994 Grundriß der deutschen Grammatik. Stuttgart, Weimar: Verlag J. B. Metzler, 3. edition. Ellefson, Michelle R., and Morten Christiansen 2000 Subjacency constraints without universal grammar: Evidence from artificial language learning and connectionist modeling. In: Proceedings of the 22nd Annual Conference of the Cognitive Science Society, 645−650. Mahwah, NJ, Lawrence Erlbaum Associates. Engel, Ulrich 1970 Regeln zur Wortstellung. Forschungsberichte des Instituts für deutsche Sprache 5, Institut für deutsche Sprache, Mannheim. Erdmann, Oskar 1886 Grundzüge der deutschen Syntax nach ihrer geschichtlichen Entwicklung, Vol. 1. Stuttgart: Verlag der J. G. Cotta’schen Buchhandlung. Reprint: Hildesheim: Georg Olms Verlag, 1985. Fanselow, Gisbert 1993 Die Rückkehr der Basisgenerierer. Groninger Arbeiten zur Germanistischen Linguistik 36: 1−74. Fanselow, Gisbert 2001 Features, θ-roles, and free constituent order. Linguistic Inquiry 32 3: 405−437. Fanselow, Gisbert 2003 Free constituent order: A Minimalist interface account. Folia Linguistica 37 1−2: 191− 231. Fanselow, Gisbert 2006 On pure syntax (uncontaminated by information structure). 
In: Patrick Brandt and Eric Fuss (eds.), Form, Structure and Grammar: A Festschrift Presented to Günther Grewendorf on Occasion of His 60th Birthday, 137−157. (Studia grammatica 63.) Berlin: Akademie Verlag. Finkbeiner, Rita and Jörg Meibauer 2014 Richtig gut, das Paper! Satz, non-sententiale/unartikulierte Konstituente, Konstruktion? In: Finkbeiner, Rita and Jörg Meibauer (eds.), Satztypen und Konstruktionen im Deutschen. (Linguistik − Impulse und Tendenzen.) Berlin, Boston: de Gruyter. Forst, Martin 2006 COMP in (parallel) grammar writing. In: Miriam Butt and Tracy Holloway King (eds.), The Proceedings of the LFG ’06 Conference. Stanford, CA: CSLI Publications. Fourquet, Jean 1957 Review of: Heinz Anstock: Deutsche Syntax − Lehr- und Übungsbuch. Wirkendes Wort 8: 120−122.


Fourquet, Jean 1970 Prolegomena zu einer deutschen Grammatik. (Sprache der Gegenwart − Schriften des Instituts für deutsche Sprache in Mannheim 7.) Düsseldorf: Pädagogischer Verlag Schwann. Frey, Werner 1993 Syntaktische Bedingungen für die semantische Interpretation. Über Bindung, implizite Argumente und Skopus. (studia grammatica 35.) Berlin: Akademie Verlag. Frey, Werner 2004a The Grammar-Pragmatics Interface and the German Prefield. Forschungsprogramm Sprache und Pragmatik 52, Germanistisches Institut der Universität Lund. Frey, Werner 2004b A medial topic position for German. Linguistische Berichte 198: 153−190. Fries, Norbert 1988 Über das Null-Topik im Deutschen. Forschungsprogramm Sprache und Pragmatik 3, Germanistisches Institut der Universität Lund, Lund. Gibson, Edward 1998 Linguistic complexity: Locality of syntactic dependencies. Cognition 68 1: 1−76. Goldberg, Adele E. 2006 Constructions at Work. The Nature of Generalization in Language. (Oxford Linguistics.) Oxford, New York: Oxford University Press. Grewendorf, Günther 1985 Anaphern bei Objekt-Koreferenz im Deutschen. Ein Problem für die Rektions-Bindungs-Theorie. In: Werner Abraham (ed.), Erklärende Syntax des Deutschen, 137−171. (Studien zur deutschen Grammatik 25.) Tübingen: originally Gunter Narr Verlag now Stauffenburg Verlag. Grewendorf, Günther 1988 Aspekte der deutschen Syntax. Eine Rektions-Bindungs-Analyse. (Studien zur deutschen Grammatik 33.) Tübingen: originally Gunter Narr Verlag now Stauffenburg Verlag. Grewendorf, Günther 1989 Ergativity in German. (Studies in Generative Grammar 35.) Dordrecht: Holland, Providence: U. S.A.: Foris Publications. Grewendorf, Günther 2002 Minimalistische Syntax. (UTB für Wissenschaft: Uni-Taschenbücher 2313.) Tübingen, Basel: A. Francke Verlag GmbH. Grewendorf, Günther 2009 The left clausal periphery. Clitic left dislocation in Italian and left dislocation in German. 
In: Benjamin Shaer, Philippa Cook, Werner Frey and Claudia Maienborn (eds.), Dislocated Elements in Discourse. Syntactic, Semantic, and Pragmatic Perspectives, 49−94. (Routledge Studies in Germanic Linguistics.) New York: Routledge. Grosu, Alexander 1973 On the status of the so-called right roof constraint. Language 49 2: 294−311. Gunji, Takao 1986 Subcategorization and word order. In: William J. Poser (ed.), Papers from the Second International Workshop on Japanese Syntax, 1−21. Stanford, CA: CSLI Publications. Haftka, Brigitta 1995 Syntactic positions for topic and contrastive focus in the German middlefield. In: Inga Kohlhof, Susanne Winkler and Hans-Bernhard Drubig (eds.), Proceedings of the Göttingen Focus Workshop, 17 DGFS, March 1−3, 137−157. (Arbeitspapiere des SFB 340 No. 69.) Eberhard-Karls Universität Tübingen. Haider, Hubert 1984 Was zu haben ist und was zu sein hat − Bemerkungen zum Infinitiv. Papiere zur Linguistik 30 1: 23−36.


Haider, Hubert 1986 Fehlende Argumente: vom Passiv zu kohärenten Infinitiven. Linguistische Berichte 101: 3−33. Haider, Hubert 1991 Fakultativ kohärente Infinitivkonstruktionen im Deutschen. Arbeitspapiere des SFB 340 No. 17, IBM Deutschland GmbH, Heidelberg. Haider, Hubert 1993 Deutsche Syntax − generativ. Vorstudien zur Theorie einer projektiven Grammatik. (Tübinger Beiträge zur Linguistik 325.) Tübingen: Gunter Narr Verlag. Haider, Hubert 1997 Typological implications of a directionality constraint on projections. In: Artemis Alexiadou and T. Alan Hall (eds.), Studies on Universal Grammar and Typological Variation, 17−33. (Linguistik Aktuell/Linguistics Today 13.) Amsterdam/Philadelphia: John Benjamins Publishing Co. Haider, Hubert 2001 Parametrisierung in der Generativen Grammatik. In: Martin Haspelmath, Eckehard König, Wulf Oesterreicher and Wolfgang Raible (eds.), Sprachtypologie und sprachliche Universalien − Language Typology and Language Universals. Ein internationales Handbuch − An International Handbook, 283−294. Berlin: Mouton de Gruyter. Hauser, Marc D., Noam Chomsky, and W. Tecumseh Fitch 2002 The faculty of language: What is it, who has it, and how did it evolve? Science 298: 1569−1579. Hinterhölzl, Roland 2004 Language change versus grammar change: What diachronic data reveal about the interaction between core grammar and periphery. In: Carola Trips and Eric Fuß (eds.), Diachronic Clues to Synchronic Grammar, 131−160. Amsterdam/Philadelphia: John Benjamins Publishing Co. Hoberg, Ursula 1981 Die Wortstellung in der geschriebenen deutschen Gegenwartssprache. (Heutiges Deutsch. Linguistische Grundlagen. Forschungen des Instituts für deutsche Sprache 10.) München: Max Hueber Verlag. Hoberg, Ursula 1997 Die Linearstruktur des Satzes. In: Hans-Werner Eroms, Gerhard Stickel and Gisela Zifonun (eds.), Grammatik der deutschen Sprache, Vol. 7.2 of Schriften des Instituts für deutsche Sprache, 1495−1680. Berlin/New York, NY: Walter de Gruyter.
Hoffmann, Ludger 1997 Zur Grammatik von Text und Diskurs. In: Hans-Werner Eroms, Gerhard Stickel and Gisela Zifonun (eds.), Grammatik der deutschen Sprache, Vol. 7.1 of Schriften des Instituts für deutsche Sprache, 98−591. Berlin/New York, NY: Walter de Gruyter. Höhle, Tilman N. 1982 Explikation für „normale Betonung“ und „normale Wortstellung“. In: Werner Abraham (ed.), Satzglieder im Deutschen − Vorschläge zur syntaktischen, semantischen und pragmatischen Fundierung, 75−153. (Studien zur deutschen Grammatik 15.) Tübingen: originally Gunter Narr Verlag now Stauffenburg Verlag. Höhle, Tilman N. 1986 Der Begriff „Mittelfeld“, Anmerkungen über die Theorie der topologischen Felder. In: Walter Weiss, Herbert Ernst Wiegand and Marga Reis (eds.), Akten des VII. Kongresses der Internationalen Vereinigung für germanische Sprach- und Literaturwissenschaft. Göttingen 1985. Band 3. Textlinguistik contra Stilistik? − Wortschatz und Wörterbuch − Grammatische oder pragmatische Organisation von Rede?, 329−340. (Kontroversen, alte und neue 4.) Tübingen: Max Niemeyer Verlag.


Höhle, Tilman N. 1988 Verum-Fokus. Netzwerk Sprache und Pragmatik 5, Germanistisches Institut der Universität Lund, Lund. Höhle, Tilman N. 1991 Projektionsstufen bei V-Projektionen. Tübingen, ms. Höhle, Tilman N. 1997 Vorangestellte Verben und Komplementierer sind eine natürliche Klasse. In: Christa Dürscheid, Karl Heinz Ramers and Monika Schwarz (eds.), Sprache im Fokus. Festschrift für Heinz Vater zum 65. Geburtstag, 107−120. Tübingen: Max Niemeyer Verlag. Huang, C.-T. James 1984 On the distribution and reference of empty pronouns. Linguistic Inquiry 15 4: 531−574. Jacobs, Joachim 1986 The syntax of focus and adverbials in German. In: Werner Abraham and S. de Meij (eds.), Topic, Focus, and Configurationality. Papers from the 6th Groningen Grammar Talks, Groningen, 1984, 103−127. (Linguistik Aktuell/Linguistics Today 4.) Amsterdam/Philadelphia: John Benjamins Publishing Co. Kasper, Robert T. 1994 Adjuncts in the Mittelfeld. In: John Nerbonne, Klaus Netter and Carl J. Pollard (eds.), German in Head-Driven Phrase Structure Grammar, 39−70. (CSLI Lecture Notes 46.) Stanford, CA: CSLI Publications. Kathol, Andreas 2001 Positional effects in a monostratal grammar of German. Journal of Linguistics 37 1: 35−66. Kaufmann, Ingrid 1995 Konzeptuelle Grundlagen semantischer Dekompositionsstrukturen. Die Kombinatorik lokaler Verben und prädikativer Elemente. (Linguistische Arbeiten 335.) Tübingen: Max Niemeyer Verlag. Keenan, Edward L., and Bernard Comrie 1977 Noun phrase accessibility and universal grammar. Linguistic Inquiry 8 1: 63−99. Kiss, Tibor 1995 Infinite Komplementation. Neue Studien zum deutschen Verbum infinitum. (Linguistische Arbeiten 333.) Tübingen: Max Niemeyer Verlag. Kiss, Tibor 2001 Configurational and relational scope determination in German. In: Walt Detmar Meurers and Tibor Kiss (eds.), Constraint-Based Approaches to Germanic Syntax, 141−175. (Studies in Constraint-Based Lexicalism 7.) Stanford, CA: CSLI Publications.
Kiss, Tibor, and Birgit Wesche 1991 Verb order and head movement. In: Otthein Herzog and Claus-Rainer Rollinger (eds.), Text Understanding in LILOG, 216−242. (Lecture Notes in Artificial Intelligence 546.) Berlin/Heidelberg/New York, NY: Springer Verlag. Kluender, Robert 1992 Deriving island constraints from principles of predication. In: Helen Goodluck and Michael Rochemont (eds.), Island Constraints: Theory, Acquisition, and Processing, 223−258. Dordrecht/Boston/London: Kluwer Academic Publishers. Kluender, Robert, and Marta Kutas 1993 Subjacency as a processing phenomenon. Language and Cognitive Processes 8 4: 573−633. Kohrt, Manfred 1975 A note on bounding. Linguistic Inquiry 6: 167−171. Koster, Jan 1975 Dutch as an SOV language. Linguistic Analysis 1 2: 111−136.


Koster, Jan 1978 Locality Principles in Syntax. Dordrecht: Holland, Cinnaminson: U. S.A.: Foris Publications. Krifka, Manfred 1998 Scope inversion under the rise-fall contour in German. Linguistic Inquiry 29 1: 75−112. Laenzlinger, Christoph 2004 A feature-based theory of adverb syntax. In: Jennifer R. Austin, Stefan Engelberg and Gisa Rauh (eds.), Adverbials: The Interplay Between Meaning, Context, and Syntactic Structure, 205−252. (Linguistik Aktuell/Linguistics Today 70.) Amsterdam/Philadelphia: John Benjamins Publishing Co. Lenerz, Jürgen 1977 Zur Abfolge nominaler Satzglieder im Deutschen. (Studien zur deutschen Grammatik 5.) Tübingen: originally Gunter Narr Verlag now Stauffenburg Verlag. Meinunger, André 2000 Syntactic Aspects of Topic and Comment. (Linguistik Aktuell/Linguistics Today 38.) Amsterdam/Philadelphia: John Benjamins Publishing Co. Meinunger, André 2001 Restrictions on verb raising. Linguistic Inquiry 32 4: 732−740. Meurers, Walt Detmar 1999a German partial-VP fronting revisited. In: Gert Webelhuth, Jean-Pierre Koenig and Andreas Kathol (eds.), Lexical and Constructional Aspects of Linguistic Explanation, 129−144. (Studies in Constraint-Based Lexicalism 1.) Stanford, CA: CSLI Publications. Meurers, Walt Detmar 1999b Raising spirits (and assigning them case). Groninger Arbeiten zur Germanistischen Linguistik (GAGL) 43: 173−226. Meurers, Walt Detmar 2000 Lexical Generalizations in the Syntax of German Non-Finite Constructions. Arbeitspapiere des SFB 340 No. 145, Eberhard-Karls-Universität, Tübingen. Müller, Gereon 1998 Incomplete Category Fronting. A Derivational Approach to Remnant Movement in German. (Studies in Natural Language and Linguistic Theory 42.) Dordrecht/Boston/London: Kluwer Academic Publishers. Müller, Stefan 1999 Deutsche Syntax deklarativ. Head-Driven Phrase Structure Grammar für das Deutsche. (Linguistische Arbeiten 394.) Tübingen: Max Niemeyer Verlag. 
Müller, Stefan 2002 Complex Predicates: Verbal Complexes, Resultative Constructions, and Particle Verbs in German. (Studies in Constraint-Based Lexicalism 13.) Stanford, CA: CSLI Publications. Müller, Stefan 2003 Mehrfache Vorfeldbesetzung. Deutsche Sprache 31 1: 29−62. Müller, Stefan 2004a Complex NPs, subjacency, and extraposition. Snippets 8: 10−11. Müller, Stefan 2004b Continuous or discontinuous constituents? a comparison between syntactic analyses for constituent order and their processing systems. Research on Language and Computation, Special Issue on Linguistic Theory and Grammar Implementation 2 2: 209−257. Müller, Stefan 2005a Zur Analyse der deutschen Satzstruktur. Linguistische Berichte 201: 3−39. Müller, Stefan 2005b Zur Analyse der scheinbar mehrfachen Vorfeldbesetzung. Linguistische Berichte 203: 297−330.


Müller, Stefan 2007 Qualitative Korpusanalyse für die Grammatiktheorie: Introspektion vs. Korpus. In: Gisela Zifonun and Werner Kallmeyer (eds.), Sprachkorpora − Datenmengen und Erkenntnisfortschritt, 70−90. (Institut für Deutsche Sprache Jahrbuch 2006) Berlin: Walter de Gruyter. Müller, Stefan 2008 Depictive secondary predicates in German and English. In: Christoph Schroeder, Gerd Hentschel and Winfried Boeder (eds.), Secondary Predicates in Eastern European Languages and Beyond, 255−273. (Studia Slavica Oldenburgensia 16.) Oldenburg: BIS-Verlag. Müller, Stefan 2013 Head-Driven Phrase Structure Grammar: Eine Einführung. (Stauffenburg Einführungen 17.) Tübingen: Stauffenburg Verlag, 3. edition. Müller, Stefan 2014a Elliptical constructions, multiple frontings, and surface-based syntax. In: Paola Monachesi, Gerhard Jäger, Gerald Penn and Shuly Wintner (eds.), Proceedings of Formal Grammar 2004, Nancy, 91−109. Stanford, CA: CSLI Publications. Müller, Stefan 2014b Satztypen: Lexikalisch oder/und phrasal? In: Rita Finkbeiner und Jörg Meibauer (eds.), Satztypen und Konstruktionen im Deutschen. (Linguistik − Impulse und Tendenzen.) Berlin, New York, Heidelberg: de Gruyter. Netter, Klaus 1992 On non-head non-movement. An HPSG treatment of finite verb position in German. In: Günther Görz (ed.), Konvens 92. 1. Konferenz „Verarbeitung natürlicher Sprache“. Nürnberg 7.−9. Oktober 1992, 218−227. (Informatik aktuell.) Berlin/Heidelberg/New York, NY: Springer Verlag. Olsen, Susan 1981 Problems of seem / scheinen Constructions and their Implications for the Theory of Predicate Sentential Complementation. (Linguistische Arbeiten 96.) Tübingen: Max Niemeyer Verlag. Önnerfors, Olaf 1997 Verb-erst-Deklarativsätze. Grammatik und Pragmatik. (Lunder Germanistische Forschungen 60.) Stockholm: Almquist and Wiksell International. Paul, Hermann 1919 Deutsche Grammatik. Teil IV: Syntax, Vol. 3. Halle an der Saale: Max Niemeyer Verlag. 
2nd unchanged edition 1968, Tübingen: Max Niemeyer Verlag. Pollard, Carl J. 1996 On head non-movement. In: Harry Bunt and Arthur van Horck (eds.), Discontinuous Constituency, 279−305. (Natural Language Processing 6.) Berlin/New York, NY: Mouton de Gruyter. Published version of a 1990 manuscript. Pollard, Carl J., and Ivan A. Sag 1992 Anaphors in English and the scope of binding theory. Linguistic Inquiry 23 2: 261−303. Przepiórkowski, Adam 1999 On case assignment and “adjuncts as complements”. In: Gert Webelhuth, Jean-Pierre Koenig and Andreas Kathol (eds.), Lexical and Constructional Aspects of Linguistic Explanation, 231−245. (Studies in Constraint-Based Lexicalism 1.) Stanford, CA: CSLI Publications. Reape, Mike 1994 Domain union and word order variation in German. In: John Nerbonne, Klaus Netter and Carl J. Pollard (eds.), German in Head-Driven Phrase Structure Grammar, 151−198. (CSLI Lecture Notes 46.) Stanford, CA: CSLI Publications. Reis, Marga 1974 Syntaktische Hauptsatzprivilegien und das Problem der deutschen Wortstellung. Zeitschrift für Germanistische Linguistik 2 3: 299−327.


Reis, Marga 1980 On justifying topological frames: ‘positional field’ and the order of nonverbal constituents in German. Documentation et Recherche en Linguistique Allemande Contemporaine 22/23: 59−85. Reis, Marga 1982 Zum Subjektbegriff im Deutschen. In: Werner Abraham (ed.), Satzglieder im Deutschen − Vorschläge zur syntaktischen, semantischen und pragmatischen Fundierung, 171−211. (Studien zur deutschen Grammatik 15.) Tübingen: originally Gunter Narr Verlag now Stauffenburg Verlag. Reis, Marga 1995 Wer glaubst Du hat recht? on the so-called extractions from verb-second clauses and verb-first parenthetical constructions in German. Sprache und Pragmatik 36: 27−83. Rizzi, Luigi 1982 Violations of the wh island constraint and the Subjacency Condition. In: Luigi Rizzi (ed.), Issues in Italian Syntax, 49−76. (Studies in Generative Grammar.) Dordrecht: Holland, Cinnaminson: U. S.A.: Foris Publications. Rizzi, Luigi 1997 The fine structure of the left periphery. In: Liliane Haegeman (ed.), Elements of Grammar, 281−337. Dordrecht: Kluwer Academic Publishers. Rohrer, Christian 1996 Fakultativ kohärente Infinitkonstruktionen im Deutschen und deren Behandlung in der Lexikalisch Funktionalen Grammatik. In: Gisela Harras and Manfred Bierwisch (eds.), Wenn die Semantik arbeitet. Klaus Baumgärtner zum 65. Geburtstag, 89−108. Tübingen: Max Niemeyer Verlag. Rosengren, Inger 1993 Wahlfreiheit und Konsequenzen: Scrambling, Topikalisierung und FHG im Dienste der Informationsstrukturierung. In: Marga Reis (ed.), Wortstellung und Informationsstruktur, 251−312. (Linguistische Arbeiten 306.) Tübingen: Max Niemeyer Verlag. Ross, John Robert 1967 Constraints on variables in syntax. Ph.D. thesis, MIT. Reproduced by the Indiana University Linguistics Club. Scherpenisse, Wim 1986 The Connection Between Base Structure and Linearization Restrictions in German and Dutch. (Europäische Hochschulschriften, Reihe XXI, Linguistik 47.) Frankfurt/M.: Peter Lang. 
Strunk, Jan, and Neal Snider 2013 Extraposition without subjacency. In: Gert Webelhuth, Manfred Sailer and Heike Walker (eds.), Rightward Movement in a Comparative Perspective. (Linguistik Aktuell/Linguistics Today 200.) Amsterdam/Philadelphia: John Benjamins Publishing Co. Thiersch, Craig L. 1978 Topics in German syntax. Dissertation, M.I.T. Tomasello, Michael 2003 Constructing a Language: A Usage-Based Theory of Language Acquisition. Cambridge, MA: Harvard University Press. Uszkoreit, Hans 1987 Word Order and Constituent Structure in German. (CSLI Lecture Notes 8.) Stanford, CA: CSLI Publications. van de Velde, Marc 1978 Zur mehrfachen Vorfeldbesetzung im Deutschen. In: Maria-Elisabeth Conte, Anna Giacalone Ramat and Paolo Ramat (eds.), Wortstellung und Bedeutung: Akten des 12. Linguistischen Kolloquiums, Pavia 1977, 131−141. (Linguistische Arbeiten 61.) Tübingen: Max Niemeyer Verlag.


Weisweber, Wilhelm, and Susanne Preuss 1992 Direct parsing with metarules. In: Antonio Zampolli (ed.), 14th International Conference on Computational Linguistics (COLING ’92), August 23−28, 1111−1115. Nantes, France, Association for Computational Linguistics. Yip, Moira, Joan Maling, and Ray S. Jackendoff 1987 Case in tiers. Language 63 2: 217−250. Zaenen, Annie, Joan Maling, and Höskuldur Thráinsson 1985 Case and grammatical functions: The Icelandic passive. Natural Language and Linguistic Theory 3 4: 441−483. Zubizarreta, Maria Luisa 1998 Prosody, Focus and Word Order. Cambridge, MA: MIT Press.

Stefan Müller, Berlin (Germany)

42. Hindi-Urdu: Central Issues in Syntax

1. Urdu and Hindi, Hindi and Urdu
2. Sentence constituents and basic order
3. Alternations in clause structure
4. Basic clause structure
5. Finite and non-finite clauses
6. Finite subordinate clauses
7. Summary
8. Abbreviations
9. References (selected)

Abstract

Hindi-Urdu is an Indo-European language which preserves many syntactic and morphological traits of the older Indic language; it shows the influence of Persian and Arabic in vocabulary, as well as Sanskrit. It is primarily a head-final language, with inflections for case, tense, aspect and agreement. Complex predicates are a productive source of new vocabulary, along with verb-verb compounds. Subject properties are associated with dative noun phrases, reflexive pronouns and auxiliary verbs, in addition to nominative and ergative case, which marks transitive subjects. There is also differential object marking of direct objects. Finite clauses differ significantly from non-finite clauses, both in position and head direction. Agreement, reflexive binding and operator wh-scope are possible across non-finite clause boundaries, but are restricted within finite clauses.

1. Urdu and Hindi, Hindi and Urdu

Urdu and Hindi are two terms for essentially one language which originated out of the Indic dialect spoken in Delhi, in approximately the 16th and 17th centuries; see Masica (1991: 27−30) for a fuller account. Common to colloquial Hindi and Urdu is a large vocabulary which was mostly derived from Indic roots, but also borrowed from Persian and Arabic. English borrowings are increasingly common. The syntax and morphology of Hindi and Urdu are virtually identical, except for a small number of constructions influenced by Persian, or directly borrowed. Urdu is distinct from Hindi in its writing system, which is a modified Perso-Arabic script, and in the Persian and Arabic vocabulary used in formal registers. Hindi is now written in the Devanagari syllabic script also used for Sanskrit, and borrows much technical and formal vocabulary from Sanskrit. This sketch of Hindi-Urdu morphology and syntax can be supplemented in more depth by various useful reference grammars and pedagogical grammars which have insightful descriptions. Masica (1991) is a particularly clear and detailed survey of the Indic languages, of which Hindi-Urdu is one, allowing for comparison with more or less closely related languages. Subbarao (2012) is a linguistically based comparison of languages of South Asia, showing typological similarities and differences among languages of different families; there is much discussion of Hindi-Urdu. Platts (1990), Bailey (1963) and Schmidt (1999) focus on Urdu, though they contain much information which applies to Hindi as well. Descriptive and pedagogical grammars of Hindi with descriptive uses include Porizka (1963); McGregor (1995); Montaut (2006) and Kachru (1980, 2006). Linguistic analyses of various features of Hindi-Urdu are included as references in the text and bibliography.
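The two writing systems occupy disjoint Unicode ranges, so the script difference between Hindi and Urdu is easy to operationalize. The following sketch is purely illustrative and not from the chapter: the function name and the sample words are my own.

```python
import unicodedata

def dominant_script(word: str) -> str:
    """Classify a word as written in Devanagari (Hindi) or in the
    Perso-Arabic script (Urdu) by majority vote over the Unicode
    character names of its letters."""
    counts = {"Devanagari": 0, "Perso-Arabic": 0}
    for ch in word:
        name = unicodedata.name(ch, "")
        if name.startswith("DEVANAGARI"):
            counts["Devanagari"] += 1
        elif name.startswith("ARABIC"):
            counts["Perso-Arabic"] += 1
    return max(counts, key=counts.get)

print(dominant_script("हिन्दी"))  # Devanagari
print(dominant_script("اردو"))    # Perso-Arabic
```

The majority vote is deliberately crude; it ignores digits, punctuation, and the Latin script of English borrowings mentioned above.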

1.1. Basic features of the clause

Hindi-Urdu is a mostly head-final language with case clitics and agreement morphology on the verbal complex. It distinguishes finite from non-finite clauses in ways which very generally affect coindexing and scope relations. Hindi-Urdu has the verb-final structure shown in (1), and in most other respects the basic clause is consistently head-final (see section 4.4 below). Some important features of the basic clause are illustrated in (1) and (2), showing the same sentence in the future tense and the present perfect. Note that all the examples in this paper are from Hindi-Urdu.

(1) vee laRkee aap=koo eek ciTThii likh-eeNgee [Hindi-Urdu]
    those.NOM boy.M.PL.NOM you=DAT one letter.F.SG.NOM write-FUT.3.M.PL
    ‘Those boys will write you a letter.’

(2) un laRkooN=nee aap=koo eek ciTThii likh dii hai.
    those.OBL boy.M.PL.OBL=ERG you=DAT one letter.F.SG.NOM write give.PFV.F be.PRS.3SG
    ‘Those boys have written a letter to you.’

The unmarked order of constituents is subject, indirect object, direct object and verb; variations are nevertheless possible for discourse effect (see for example Kidwai 2000).

1480

VII. Syntactic Sketches

The case of the transitive subject varies; in present and future sentences such as (1), the subject is nominative, which is the direct, unmarked case. In past or perfect sentences, the subject has the ergative postposition =nee (see section 2.2 for information on case marking). Agreement morphology is expressed on the verbal complex. It reflects the properties of the nominative subject in (1), but if the subject is marked by a postposition case marker, agreement reflects the properties of the nominative direct object (see section 2.3 for information on agreement). Postpositional marking also requires a morphological change in some nouns and pronouns, to the oblique inflectional form, as shown in the subject of (2). Finally, the verbal complex in (2) combines the main verb likh ‘write’ with another verb ‘give’, which adds the idea of completion of the event and benefit to the indirect object (see section 4.2.1 for more information about complex predicates). I have used this pair of examples to point out briefly some of the important features of this language. These features will be described in greater depth in sections to follow. Here are some highlights of the topics to be included. In this language, there is extensive agreement for person, number and gender in both nominal and verbal categories. Case distinctions are expressed with postpositional clitics, except for nominative “direct” case, which is null. Transitive subjects have ergative case in perfective finite sentences, an instance of split ergativity. Verbs are inflected for tense and aspect, in many possible combinations. Verb compounding expresses several relations, including aspectual distinctions. Subordinate clauses differ in syntactic status as adjuncts or arguments, depending on whether their inflection is finite or non-finite. Non-finite clauses may be sentence-internal, in argument or modifier positions. Finite clauses are prohibited from argument positions.
Instead they must be adjoined, either to the matrix clause or to a sentence-internal nominal. Finite clauses are autonomous domains for agreement, reflexive binding and relative/question scope, while non-finite clauses are transparent to long-distance coindexing relations. In this chapter, I focus on the syntax and related phenomena of Hindi-Urdu as seen through the perspective of a (Chomskyan) generative theory of grammar, such as Chomsky (2004) and earlier work. I use this kind of syntactic theory because it is a useful way of organizing and labeling the linguistic data from a specific language, while providing a general definition of crucial categories and relationships shared by human languages. I refer to work done in different linguistic theories, as well as work which is basically descriptive. The references in each section give a much fuller account of the data and the problems at issue. Hindi-Urdu, like many languages, presents problems for generalizing from the most descriptive level of analysis. Both Hindi-Urdu and other Indic languages have properties which follow neither from their Indo-European roots nor from typological similarities to non-Indo-European languages of South and East Asia. Hindi-Urdu is an Indo-European language with ergative subject case, like Basque and Georgian. It is head-final and has non-nominative subjects like Japanese and Korean, but its verbal inflection and agreement patterns are unlike what is found in these languages. Unlike English, it has both locally and long-distance bound anaphors, which have only a subject antecedent. Like its Indo-European ancestors, Hindi-Urdu retains the correlative type of sentence-adjoined relative clause. Because of its origin as a lingua franca, it shows certain influences from Persian.
So the analysis of Hindi-Urdu cannot follow easily from the results achieved over the last thirty years in generative grammars for the analysis of other languages, like English, Spanish, Chinese and Japanese, for example, whatever the specific theory used. The organizing plan of this chapter is to start with basic clause structure, to note case and coindexing relations within the clause, then to categorize non-finite clauses, and finally to contrast complex sentences with non-finite and finite clauses. The references give further data, as well as different positions on how to analyze these constructions. The goal is to point out aspects of the language which present interesting problems for further research.

2. Sentence constituents and basic order

In this section, I give a brief overview of the lexical and functional categories in clause structure, pointing out important features of case and agreement, tense and aspect.

2.1. Sentence arguments and transitivity

In unmarked sentence orders, the verb is final; the following sentences have an intransitive verb in (3), a transitive verb in (4) and a ditransitive verb in example (5). Each verb has the appropriate number of arguments which it selects, plus an optional adverb modifier, such as jaldii ‘soon, quickly’ in (5).

(3) baccee saarii raat soo-tee haiN.
    child.M.PL.NOM whole.F night.F sleep-IPFV be.PRS.3PL
    ‘(The) children sleep through the whole night.’

(4) tum yah film kal deekh-oogee.
    you.FAM this film.F.SG.NOM tomorrow see-FUT.2PL
    ‘You will see this film tomorrow.’

(5) woo aadmii aap=koo ciTThii jaldii bheej-ee-gaa.
    that man.M.SG.NOM you.FORM=DAT letter.F.SG.NOM quickly send-FUT.PL-M.SG
    ‘That man will soon send you a letter.’

2.2. Case

Each of the arguments of the verb is case-marked. The indirect object aap=koo ‘you=DAT’ is marked by the clitic postposition =koo, which is obligatory (see Butt and King 2004 for discussion of the clitic status of postpositions in Hindi-Urdu). The subjects in (3)−(5) have the unmarked or direct case, which involves the absence of a postposition. I have glossed the case as nominative, the default case. This case is found also on direct objects, for instance in the examples (4) and (5). It is possible to mark the direct object with =koo if it is specific or animate, as in (6):

(6) tum is film=koo/ hamaaree doost=koo deekh-oogee.
    you.FAM this.OBL film.F.SG.OBL=ACC our.OBL friend=ACC see-FUT.2PL
    ‘You will see this film/our friend.’


The case situation in Hindi-Urdu is somewhat contradictory: subjects and direct objects may have nominative case, and direct objects may have either nominative case or the postposition =koo, which I have glossed as accusative. The dative use of =koo is obligatory and invariant, but =koo as a direct object marker depends on the specificity and animate reference of the object. See Legate (2004) for discussion of unmarked or zero default case, Butt (1993) for the referential properties of =koo and Aissen (2003) for a comprehensive account of the reference conditions for direct object marking in Hindi-Urdu compared with other languages.

Transitive subjects also have ergative case in the perfective finite sentences (7)−(9). The subject is marked by the ergative postposition =nee.

(7) *baccooN=nee saarii raat soo-yaa hai.
    child.M.PL=ERG whole.F night.F sleep-PFV.M.SG be.PRS.3SG
    ‘(The) children sleep through the whole night.’

(8) tum=nee yah film kal deekh-ii hai.
    you.FAM=ERG this film.F.SG.NOM yesterday see-PFV.F.SG be.PRS.3SG
    ‘You have seen this film yesterday.’

(9) us aadmii=nee aap=koo ciTThii jaldii bheej-ii.
    that.OBL man.M.SG=ERG you.FORM=DAT letter.F.SG.NOM quickly send-PFV.F.SG
    ‘That man quickly sent you a letter.’

The main use of the postposition =nee is to mark a transitive/ditransitive subject in finite sentences with perfective aspect. Agency is not the main factor, as experiencers such as the subject of (8) have ergative case (Davison 2004; see Butt and King 2004 and Woolford 2006 for a contrary view). Nevertheless, the majority of ergative marked subjects refer to volitional, causative agents. There are some options for ergative case on intransitive verbs, with varying degrees of grammaticality for different speakers or varieties of the language.

(10) kal raat kuttooN=nee bhauN-kaa.
     yesterday night dog.M.PL.OBL=ERG bark-PFV.M.SG
     ‘Yesterday night (the) dogs barked.’

(11) raam/ raam=nee zoor=see cillaa-yaa.
     Ram.NOM Ram=ERG force=with shout-PFV.M.SG
     ‘Ram shouted loudly.’ (Mohanan 1994: 71)

(12) (*)baccee=nee roo-yaa.
     child.M.SG.OBL=ERG cry-PFV.M.SG
     ‘The child cried (on purpose).’

The verb bhauNk-naa ‘bark’ may have an ergative-marked subject without the assumption that the dogs barked on purpose as in (10). Other verbs with subjects referring to human beings may convey that the act was done on purpose as in (11)−(12), but speakers I have consulted reject sentences like (12). See discussion of the semantic quality of the ergative in Mohanan (1994: 71−72), Butt (1995: 15) and Butt and King (2004).
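The core distribution of =nee just described amounts to a small rule. A minimal sketch under stated assumptions follows: the function name and its boolean features are hypothetical, and the speaker-variable intransitive uses in (10)−(12) and the lexical class exceptions of section 3.4 are deliberately left out.

```python
# Minimal sketch of the core rule for ergative =nee stated above:
# it marks transitive/ditransitive subjects of finite clauses with
# perfective aspect. Intransitive options like (10)-(12) and the
# verb-class exceptions of section 3.4 are not modeled here; all
# names are hypothetical, chosen for this illustration only.

def subject_marker(transitive, perfective, finite):
    if transitive and perfective and finite:
        return "=nee (ergative)"
    return "unmarked (nominative)"

# (4): transitive but future (non-perfective) -> unmarked subject tum
print(subject_marker(transitive=True, perfective=False, finite=True))
# (8): transitive, perfective, finite -> ergative subject tum=nee
print(subject_marker(transitive=True, perfective=True, finite=True))
```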


The cases used in (1)−(9) reflect grammatical functions and are structural cases in Chomsky’s terminology, with the exception of the dative =koo, which is a lexical case linked to the thematic role of goal. Other postpositions related to specific thematic roles are found in Hindi-Urdu; they will be discussed in the sections below.

2.3. Agreement

The verbal complex is marked for agreement in person, number and gender. Person and number are required in finite clauses, while number and gender are characteristic of non-finite inflection. Agreement is obligatory in sentences which have a nominative argument. Note that in (3), there is a feminine adverbial saarii raat ‘all night’, which has no postposition, but as a non-argument, it does not trigger agreement. Instead, the masculine plural subject baccee ‘children’ determines agreement. If there are two nominative arguments, as in (4), the subject takes precedence. If the subject has a postpositional case, as it does in (8)−(9), then the object triggers agreement. The agreement is the default third person masculine singular if both the subject and direct object have postpositions, as occurs when the subject is ergative and the direct object has the accusative postposition =koo, as in (13).

(13) tum=nee is film=koo kal deekh-aa hai.
     you.FAM=ERG this.OBL film.F.SG=ACC yesterday see-PFV.M.SG be.PRS.3SG
     ‘You have seen this film yesterday.’

The verbal complex may consist of the verb alone, as in (6), where the tense is future, and has features for person, number, and somewhat anomalously, also for gender. Or there may be a combination of a non-finite participle, imperfective or perfective, and a finite copula, as in (3) and (8). Both components of the verbal complex have the same agreement features. In the terminology of Bhatt (2005), the participle and copula are covalued and therefore show the same agreement features, number and gender on the participles, and number and person on the copula.
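The agreement pattern just described is a simple decision procedure: agree with a nominative subject if there is one, otherwise with a nominative object, otherwise fall back to the default. The sketch below is illustrative only; the function and feature names are my own, not the chapter's.

```python
# Illustrative sketch of the Hindi-Urdu agreement pattern described
# above: the verbal complex agrees with a nominative subject if there
# is one, otherwise with a nominative direct object, and falls back to
# default 3rd person masculine singular when both arguments carry
# postpositions. All names here are hypothetical.

DEFAULT = {"person": 3, "number": "sg", "gender": "m"}

def agreement(subject, direct_object=None):
    """Each argument is a dict with a 'case' key plus phi-features."""
    for arg in (subject, direct_object):
        if arg is not None and arg["case"] == "nom":
            return {k: arg[k] for k in ("person", "number", "gender")}
    return DEFAULT

# (3): nominative subject baccee 'children' controls agreement
baccee = {"case": "nom", "person": 3, "number": "pl", "gender": "m"}
# (13): ergative subject and =koo-marked object -> default agreement
tum_nee = {"case": "erg", "person": 2, "number": "pl", "gender": "m"}
film_koo = {"case": "acc", "person": 3, "number": "sg", "gender": "f"}

print(agreement(baccee))             # subject agreement, 3PL.M
print(agreement(tum_nee, film_koo))  # default 3SG.M
```

With an ergative subject and a nominative object, as in (8), the same function returns the object's features, matching the pattern in the text.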

2.4. Tense and aspect

In this section I survey briefly the tense and aspectual morphology of Hindi-Urdu. For a more detailed account, including the nuances of meaning, and the possibilities of combination in Hindi-Urdu, see Schmidt (1999), Montaut (2006), McGregor (1995), and Butt and Rizvi (2010).

The finite tenses are the present, represented by the copula hai ‘be.PRS’ and past thaa ‘be.PST’. These are indicative, contrasting with the subjunctive hoo. The future indicative is formed from the subjunctive, with the addition of a suffix -gaa, as in hoo-gaa, e.g. jaa-oo-gaa ‘go-FUT.3SG.M’; sentences (5) and (6) have future verb forms showing the complex pattern of agreement. These are all more or less suppletive forms of the verb hoo-naa ‘be-INF’. The infinitive suffix -naa itself could be regarded as non-finite tense, dependent for its tense reference on the tense of the matrix clause; infinitive clauses are discussed below with other embedded non-finite clauses.

Aspect is expressed by the imperfective suffix -taa (3), the perfective suffix -(y)aa in (8) and (9), and by a progressive auxiliary rahaa (14). Complex aspectual combinations can be formed from the participle affixes in combination with main verbs (15). See Butt and Rizvi (2010) for more examples of composed aspectual combinations and their meanings.

(14) kuttee bhauNk rahee haiN.
     dog.M.PL bark PROG.M.PL be.PRS.3PL
     ‘(The) dogs are barking.’

(15) kuttee bhauNk-tee rah-tee haiN.
     dog.M.PL bark-IPFV.M.PL stay-IPFV.M.PL be.PRS.3PL
     ‘(The) dogs are continually barking, keep on barking.’

The perfective participle is used in combination with the copula to express the present, past or future perfect (8). Used alone in a non-embedded sentence, the perfective expresses a kind of neutral past or aorist, as in (9)−(10), discussed in Montaut (2006: 103−106). Perfective and imperfective participles are used as subordinate clauses, often as modifiers, but also as complements; these will be discussed further below.

There is another aspectual form which is used only as a modifier. This is the conjunctive participle, a bare verb stem with the invariant suffix -kar, as in (16). It normally means that the embedded clause event is completed in relation to the matrix tensed verb, but it also may be used adverbially, with a perfective meaning which includes the resulting state overlapping with the matrix verb in (17).

(16) [PRO(i) yah khabar sun-kar] woo(i) xush hoo ga-ii.
     this news.NOM hear-CP 3SG.NOM happy be go-PFV.F.SG
     ‘[PRO(i) having heard this news] she(i) became happy.’

(17) woo(i) [PRO(i) sooc samajh-kar] ciTThii likh rahaa thaa.
     3SG.NOM think understand-CP letter.F.SG.NOM write PROG be.PST.3.M.SG
     ‘He was carefully writing a letter.’ (Literally, ‘having thought and understood.’)

For additional properties of the conjunctive participle -kar, see section 2.5.3 below. In many languages with tense inflection, person and number are represented in finite clauses, while number and gender are typical of non-finite inflection. This generalization applies to a limited degree to Hindi-Urdu, but as Butt and Rizvi (2010) point out, person and number are expressed only on the copula, the imperative and subjunctive/future. Past tense is expressed by the perfect participle without person features. See Davison (2002) for an explanation of how the perfect participle can have aorist, neutral past meaning, contrasting with an overt past marker in an eastern Hindi language, Kurmali. So is there a real difference between finite and non-finite clauses in Hindi-Urdu? I believe there is, and it is revealed in complex sentences. Coindexing, agreement and wh-scope relations may cross non-finite clause boundaries (5.3), but not finite clause boundaries (6.2).


2.5. Subject properties

As in many languages, the subject is hard to define by absolute criteria in Hindi-Urdu. The subject is not always in first position, in a language with some freedom of phrase order. It does not uniquely have nominative case, or uniquely determine agreement (8)−(9). But there are other clause relationships which (nearly) always define a subject. One of them is anteceding a reflexive pronoun, regardless of case. Subjects have other case choices, determined by specific semantic classes of predicates or constructions. These classes will be discussed below in the section on diatheses and verb classes. Subjects may be required to have dative case (18)−(19), genitive (20) or locative cases (21). This is true of biclausal sentences such as (19); see Davison (2008) for discussion of this analysis.

(18) baccooN(i)=koo apnii(i/*j) billii dikhaaii dii.
     child.M.PL=DAT self.GEN.F cat.F.SG.NOM sight give-PFV.F
     ‘The children saw, caught sight of their cat.’

(19) raam=koo [PRO(i) apnee(i/*j) bhaaii=koo khijaa-naa] nahiiN caahiyee.
     Ram=DAT self.GEN brother=DAT tease-INF not ought
     ‘Ram(i) ought not [PRO(i) to tease his(i/*j) brother].’

(20) raam=kee caar baccee thee.
     Ram=GEN.M.PL four child.M.PL.NOM be.PST.M.PL
     ‘Ram had four children.’ (Mohanan 1994: 139)

(21) raam=meeN dayaa bilkul nahiiN thii.
     Ram=in mercy.F.SG.NOM completely not be.PST.F.SG
     ‘Ram had no mercy at all.’ (Mohanan 1994: 139)

Dative subjects are required for psychological predicates such as (18), and constructions of obligation like (19), but see Bashir (1999) for some variation in meaning between uses of =koo and =nee in obligation sentences. Genitive case is used for inalienable possession in (20), while locative case is used for inherent qualities in (21). Other kinds of subject marking will be shown in section 3 on diatheses.

2.5.1. Binding of reflexive pronouns

The reflexive pronoun is invariant for person, number and gender, and the reference of the antecedent must be to an animate entity (see discussion in Davison 2001). The possessive apnaa ‘self’s’ is bound by a subject, regardless of case, shown in (22)−(23), and never bound by a non-subject, as in (24).

(22) baccii(i) apnee(i/*j) bhaaii=koo tang kar rahii hai.
     child.F.SG self.GEN.OBL.M brother=DAT vexed do PROG.F.SG be.PRS.3SG
     ‘The little girl is teasing/tormenting her brother.’


(23) baccii(i)=nee apnee bhaaii=koo tang ki-yaa (hai).
     child.F.SG=ERG self.GEN.OBL.M brother=ACC vexed do-PFV.M.SG be.PRS.3SG
     ‘The little girl (has) teased/tormented her brother.’

(24) eek baccee(i)=nee duusree baccee(j)=see apnaa(i/*j) khilaunaa chiin li-yaa.
     one child.OBL=ERG second child.OBL=from self’s.M.SG toy.M.SG.NOM snatch take-PFV.M.SG
     ‘One child snatched his own toy from another child.’

The full reflexive, apnee (aap) ‘self’s (self)’ also requires a subject antecedent, as in (25):

(25) maaN(i) baccee(j)=koo apnee aap(i/*j)-see kaisee alag kar sak-tii hai?
     mother child=ACC self’s self-from how separate do can-IPFV be.PRS
     ‘How can the mother(i) separate the child(j) from herself/*himself?’

Only subjects can control the null subject of the conjunctive participle, as in (26):

(26) [PRO(i/*j) yah baat sun-kar] pitaa(i)=koo beeTee(j)=par taras aa-yaa.
     this matter hear-CP father=DAT son=on pity come-PFV.M.SG
     ‘[PRO(i/*j) having heard this], the father(i) felt pity for (his) son(j).’

2.5.2. Auxiliary verb orientation

Non-nominative subjects, like nominative subjects, are in the semantic scope of subject oriented auxiliaries, such as baiTh-naa ‘to do something inadvertently’ (27) and paa-naa ‘manage’ (28):

(27) maiN /*=nee aap=kii Daak paRh baiTh-aa.
     I.NOM =ERG you=GEN.F mail.F.SG read sit-PFV.M.SG
     ‘I inadvertently read your mail (by mistake).’

(28) koosT gaarD=koo yah naaNw dikh nahiiN paa-ii.
     Coast Guard=DAT this ship.F.SG be.seen not manage-PFV.F.SG
     ‘The Coast Guard did not manage to spot this ship.’ (Also: ‘This ship did not manage to be visible to the Coast Guard.’)

The auxiliary ascribes properties to the subject, such as ability in (25), inadvertence in (27), or success in (28). The ambiguity of which argument is the subject will be discussed below in the section on diatheses, or conditioned variation in what the subject and object may be. The VV combination in (27) is discussed in 4.3.1; note that the intransitive baiTh ‘sit’ blocks the ergative =nee on the subject of the transitive verb paRh ‘read’.


2.6. Object properties

Indirect objects consistently have the lexical dative case =koo. They are semantic goals, and have no subject properties (see 3.1 below). Direct objects have a range of case forms, including the unmarked nominative, the postposition =koo as well as locative cases and other cases lexically selected by the clause predicate. Objects of N-V complex predicates have genitive case (4.3). Direct objects control agreement when they are nominative and the subject is not. They have some subject properties in passive sentences (3.1).

2.6.1. Case, differential object marking

There is variation in the case of direct objects in Hindi-Urdu which is determined by the semantic/pragmatic factors animacy and definiteness. This variation is an instance of Differential Object Marking (DOM), found in many languages, such as Spanish, Hebrew, Turkish, Persian and other Indic languages (Aissen 2003; Masica 1991: 364−369). Definite, specific indefinite and animate direct objects are morphologically marked either with a distinct accusative suffix, such as Persian -ra, or with a form identical to indirect object marking, as in Spanish and Hindi-Urdu, which uses the case clitic =koo. Languages differ in how semantic and pragmatic factors determine whether Differential Object Marking is required, optional or absent. In Hindi-Urdu, animacy is a stronger factor than definiteness/specificity, while in Persian, definiteness/specificity outranks animacy (Aissen 2003). There is much individual speaker variation. Wherever ergative subject marking is possible, differential object marking is also possible; exceptionally the DOM marker =koo can be used where the ergative is not possible for specific transitive verbs, as in (37). (See 3.2 below for verb classes in which ergative marking is not found.)

3. Alternations in clause structure

3.1. Diatheses

Natural languages typically lack an absolute one-to-one association of case and grammatical functions. Hindi-Urdu has many variations in case marking of arguments (Montaut 1991). The alternation of nominative and ergative case on transitive subjects in (1)−(2) is one example, dependent on the aspectual and other morphology of the verbal complex. There are also contrasts of largely synonymous predicates which allow ergative or dative subjects, syntactic alternations of passive and active sentences, as well as lexically derived intransitive and causative predicates which require different case marking from the corresponding transitive predicates.

Many psychological predicates require dative experiencers, which have the subject properties of controlling reflexives and participle null subjects. Dative case is privative. It is correlated with the absence of agency and volitionality. Ergative case is not privative; it is consistent with either volitionality or its absence. There are corresponding ergative-subject predicates which overlap in meaning with experiencer predicates.

(29) a. baccooN=nee acaanak eek bhaaluu deekh li-yaa.
        child.M.PL=ERG suddenly one bear.NOM see take-PFV.M.SG
        ‘The children suddenly/by chance saw a bear.’
     b. baccooN=koo acaanak eek bhaaluu deekhaaii di-yaa.
        child.M.PL=DAT suddenly one bear.NOM sight.F.NOM give-PFV.M.SG
        ‘The children suddenly/by chance saw a bear.’

The dative subject version in (29b) is marked in the sense that it has only a non-volitional interpretation. The ergative subject verb is not specified for volitionality but it is consistent with a range of interpretations, from non-volitional in (29a) to volitional in (30). See Davison (2004) for further discussion.

(30) baccooN=nee dhyaan=see eek bhaaluu deekh li-yaa.
     child.M.PL=ERG thought=with one bear.M.SG.NOM see take-PFV.M.SG
     ‘The children carefully looked at a bear.’

Another near minimal pair involves goal subjects:

(31) a. usee ghuus mil-tii hai.
        3SG.DAT bribe.F.SG.NOM get-IPFV.F.SG be.PRS.3SG
        ‘He/she gets bribes.’
     b. woo ghuus lee-taa hai.
        3SG.M.NOM bribe.F.SG.NOM take-IPFV.M.SG be.PRS.3SG
        ‘He takes bribes.’

The verb mil-naa ‘get/receive/obtain’ allows a dative goal subject, which involves a recipient who receives something in the normal course of events; it is used for shopping in the usual way as well as for situations in which something is given to a more or less passive recipient. It can also mean ‘to meet by chance’.

Syntactic passive sentences show shifts of case as well as grammatical function. With the addition of the auxiliary jaa-naa ‘go’ and perfective morphology on the main verb, the direct object in (32a) assumes subject properties in (32b, c), but the differential object marker =koo may or may not be omitted in passive sentences (Mohanan 1994: 183−184). The differential object marker is a structural case, unlike the goal =koo ‘DAT’, which is never omitted.
Yet it is unusual among languages with passive sentences for an accusative case to be retained optionally. Also somewhat unusual is the fact that the negative passive sentence in (32d) has an idiomatic ability meaning.

(32) a. pulis-nee coor-koo pakaR-aa.
        police-ERG thief-ACC seize-PFV.M.SG
        ‘The police caught the thief.’ (Mohanan 1994: 183)


     b. (pulis-see) coor pakaR-aa ga-yaa.
        police-INS thief.NOM seize-PFV.M.SG go-PFV.M.SG
        ‘The thief was caught (by the police).’ (Mohanan 1994: 183)
     c. coor=koo pakaR-aa ga-yaa.
        thief=DAT seize-PFV.M.SG go-PFV.M.SG
        ‘The thief was caught.’ (Mohanan 1994: 183)
     d. pulis-see coor=koo pakaR-aa nahiiN ga-yaa.
        police-INS thief=ACC seize-PFV.M.SG not go-PFV.M.SG
        ‘The police could not bring themselves to catch the thief.’ (Mohanan 1994: 183)

Passive movement does not apply to indirect objects to create a new subject. There are passive sentences like (33), retaining the dative case on the goal. The goal does not bind a reflexive or the null PRO subject of a conjunctive participle with -kar, which are strictly subject oriented.

(33) *[PRO(i) ghar badal-kar] usee(i) apnii(i) Daak pahuNc-aa-ii nahiiN ga-yii.
     house change-CP 3SG.DAT self.GEN mail.F.SG.NOM reach-CAUS-PFV.F not go-PFV.F
     ‘Having moved, he/she could not be forwarded self’s mail.’

This sentence also has a logophoric reading, ‘[PRO(i) having moved], I(i) could not forward my(i) mail to him/her(i).’

Many verbs in Hindi-Urdu, both transitive and intransitive, have causative counterparts. For example, the intransitive verb baiTh-naa ‘to sit’ combines with the suffixes -aa and -vaa to form causative verbs (often with phonological changes in the stem). (See Ramchand 2008 for a recent formal account of the event structure of Hindi-Urdu causative sentences.) Both the nominative subject and the instrumental agent of a passive sentence can be the antecedent of a reflexive; both the agent and theme have subject properties in passive sentences like (34c). The subject of baiTh- ‘sit’ binds a reflexive pronoun in (34a). In an active causative sentence, the semantic subject of baiTh is a syntactic object, and only the external argument/causative agent has subject properties in (34b).

(34) a. ravii(i) apnii(i/*j) saaiikal=par baiTh ga-yaa.
        Ravi.M.SG.NOM self.F.SG bicycle.F.SG=on sit go-PFV.M.SG
        ‘Ravi(i) sat on his(i/*j) bicycle.’
     b. vijay(j)=nee ravii=koo apnii(*i/j) saaikal=par biTh-aa-yaa.
        Vijay=ERG Ravi=DAT self.GEN bicycle=on sit-CAUS-PFV.M.SG
        ‘Vijay(j) seated Ravi(i) on self’s(*i/j) bicycle.’ (Mohanan 1994: 123)

     c. ravii(i) vijay(j)=see apnii(i/j) saaikal=par biTh-aa-yaa ga-yaa.
        Ravi.NOM Vijay=INS self.GEN bicycle=on sit-CAUS-PFV go-PFV.M.SG
        ‘Ravi(i) was seated by Vijay(j) on self’s(i/j) bicycle.’ (Mohanan 1994: 123)

The passive version (34c) shows that both the agent vijay and the object ravii count as an antecedent for the reflexive, though only the syntactic subject counts as the antecedent in the causative active sentence (34b). This fact about reflexive coindexing suggests that subject properties are not unique to a specific constituent or specific case. But see section 2.5 for instances in which subject properties are consistent for one constituent, regardless of case.

3.2. Causative verbs and the effects on sentence structure

The case of arguments is determined by lexical alternations, including causative derivation and detransitivization. An intransitive, transitive or ditransitive verb is embedded below a causative head, such as deekh-naa ‘see’ (35a), and bheej-naa ‘send’ (35b):

(35) a. siitaa(i)=nee baccee(j)=koo apnii(i/*j) tasviir dikh-aa-ii.
        Sita=ERG child=DAT self.GEN picture.F.NOM see-CAUS-PFV.F.SG
        ‘Sita(i) showed the child(j) self’s(i/*j) picture.’ (Montaut 1991: 75)
     b. maiN=nee raam=see miiraa=koo kitaab bhij-vaa-ii.
        I=ERG Ram=INS Mira=DAT book.F.NOM send-CAUS-PFV.F.SG
        ‘I had Ram send Mira the book.’ (Montaut 1991: 77)

The causee subject has dative or instrumental case, and though it is the thematic subject of the embedded verb, it does not bind a reflexive, as in (35). The causee, as a direct object, may acquire subject properties by passive movement, as in (34).

3.3. Derived intransitive verbs

Many transitive verbs have derived intransitive forms, which can be combined with instrumental agent phrases, as in (36a, b). For a fuller summary of the productivity of these derived forms, see Montaut (2004: 85−89).

(36) a. maalii=see peeR kaT ga-yaa (kaaT-naa ‘cut’).
        gardener=INS tree.M.SG.NOM be.cut go-PFV.M.SG
        ‘The gardener cut the tree (by mistake, inadvertently).’
     b. mujh=see yah kitaab bik nahiiN ga-ii (beec-naa ‘sell’).
        I=INS this book.F.NOM be.sold not go-PFV.F.SG
        ‘I could not bring myself to sell this book.’ (R. Pandharipande, p.c.)


3.4. Lexical classes of bivalent verbs and case requirements

Ergative case is normally the case of transitive subjects in Hindi-Urdu, but there is actually greater freedom of subject case in bivalent verbs, choices which are lexically determined. Most classes have a variety of semantic types of verb, except for Class C, which contains primarily psychological verbs. So I will use psychological verbs in all the examples, showing that the differences of classes are fundamentally structural rather than semantic. In Davison (2004), I proposed four classes of bivalent predicates, defined by the case of the subject and objects (37). Examples are given in (38)−(41):

(37) Tab. 42.1: Transitive verb classes, by case (Lexical case in bold)

             Case of subject         Case of direct object    Case of indirect object
  Class A    Obligatorily ergative   Nominative or dative     Dative
  Class B    Optionally ergative     Nominative or dative     *
  Class C    Dative                  Nominative               *
  Class D    Nominative              Lexical postposition     *
(38) Class A, obligatory ergative subject:
     bhaaluu=nee apnee daaNtooN=see baccooN=koo Dar-aa-yaa.
     bear=ERG self.GEN teeth=INS children=DAT fear-CAUS-PFV.M.SG
     ‘The bear frightened the children with its teeth; caused the children to be afraid of its teeth (by growling, showing teeth).’

(39) Class B, optionally ergative subject:
     a. jab maiN=nee maasTar=jii=see sawaal samajh-aa,
        when I=ERG teacher=HON=from question.NOM understand-PFV.M.SG
        too maiN=nee usee dubaaraa apnee aap hal kar-kee deekh-aa.
        then I=ERG 3SG.DAT again self.GEN self solution do-CP see-PFV.M.SG
        ‘When I understood the question from the teacher, then I saw it again solved.’ (Nespital 1997: 1122)
     b. maiN yah baat pahlee hii samajh-aa [ki raakeesh apnii zid=par dŗRh hai].
        I.NOM this matter first only understand-PFV.M.SG that Rakesh self.GEN obstinacy=on fixed be.PRS.3SG
        ‘I understood from the first that Rakesh had become fixed on his own obstinacy.’ (Nespital 1997: 1122)

(40) Class C, dative subject:
     tabhii usee [eek khaalii rikshaa aa-taa] dikh-aa.
     then.EMPH 3SG.DAT one empty riksha.NOM come-IPFV be.seen-PFV.M.SG
     ‘Just then he saw [an empty riksha coming].’ (Nespital 1997: 701)


(41) Class D, locative object:
     baccee bhaaluu=see Dar-tee haiN /Dar ga-yee.
     child.M.PL.NOM bear=from fear-IPFV be-PRS.3.M.PL fear go-PFV.M.PL
     ‘The children are afraid of the bear/became afraid of the bear.’

Classes A and B have ergative subjects as well as differentially case marked direct objects with =koo, sensitive to the referential properties of the object. The subject is always available as a binder of a reflexive. Classes C and D never have differential object case. There is an interesting alternation in Class C, which allows the experiencer to bind either a reflexive or a pronoun, and under certain circumstances, the internal object may bind a reflexive in the dative argument. Some verbs allow reversal of grammatical functions, so that the theme is subject and the experiencer is a non-subject.

In Davison (2004), I propose that Classes C and D consist of a VP projection only, allowing some freedom in which argument in Class C moves to the specifier of TP. Both are equidistant from TP, because there is no vP projection in (45b). Class A and B predicates, however, project a vP projection which introduces the external argument above VP, as in the tree structure in (42b). This vP projection plays some role − which is yet to be determined − in differential case marking with =koo, as the VP projection alone, in Classes C and D, does not allow =koo on direct objects. In vP, the internal object argument in VP cannot move to the specifier of TP, because the external subject argument is closer. Class A predicates are the only ones which project a dative indirect object, perhaps through a higher projection within vP, as in (42b), or a higher dative phrase above v. The structure must reflect the fact that the indirect object normally precedes the direct object. Bhatt (p.c.) speculates that the direct object with =koo moves to a higher vP projection which is higher than the indirect object.
See Butt (1993: 98−100) for discussion of a proposal involving weak and strong structural case positions which can apply to Hindi-Urdu. Case licensing of the direct object remains something of a puzzle, whether it is =koo or the unmarked nominative. The v head of vP might have an [acc] feature which values the =koo marker as accusative case, but the nominative would seem also to be an accusative case. Otherwise the Tense head would have to value two nominative arguments, the subject, which is closest, and the direct object within the vP, across a phase boundary. Monovalent verbs fall into several distinct classes in other languages. See Ahmed (2010) for a discussion of the question whether Hindi-Urdu verbs can be divided unambiguously into unaccusative and unergative verbs.
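The four-way classification in Tab. 42.1 and the generalization that only Classes A and B permit differential object marking can be captured in a small data structure. This is purely an illustrative transcription of the table; the dictionary layout and function name are my own.

```python
# Illustrative encoding of Tab. 42.1 (transitive verb classes by case),
# transcribed from the table above. The layout and names are
# hypothetical, chosen for this sketch only.

VERB_CLASSES = {
    "A": {"subject": "obligatorily ergative",
          "direct_object": "nominative or dative",
          "indirect_object": "dative"},
    "B": {"subject": "optionally ergative",
          "direct_object": "nominative or dative",
          "indirect_object": None},
    "C": {"subject": "dative",
          "direct_object": "nominative",
          "indirect_object": None},
    "D": {"subject": "nominative",
          "direct_object": "lexical postposition",
          "indirect_object": None},
}

def allows_dom(verb_class):
    """Classes A and B allow differentially case-marked (=koo) objects;
    Classes C and D never show differential object case."""
    return "dative" in VERB_CLASSES[verb_class]["direct_object"]

print([c for c in VERB_CLASSES if allows_dom(c)])  # → ['A', 'B']
```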

4. Basic clause structures

4.1. Transitive and intransitive structures

I want to sum up the preceding discussion of clause structure, grammatical functions and case of arguments by putting this information together in two illustrative phrase structure trees, one containing a class A main verb (42), and another with a class C verb (45).


(42) a. vee mujhee meerii Daak pahuNc-aa rahee haiN.
        3.M.PL.NOM I.DAT my.F mail.F.NOM reach-CAUS PROG.M.PL be.PRS.3PL
        ‘They are forwarding my mail to me.’
     b.

[The tree diagram in (42b), rendered here as labeled bracketing (complements precede heads):]

[TP [DP vee ‘they’ [Nom]] [T′[EPP] [AspP [vP [v′ [VP [DP.DAT mujhee ‘I’] [V′ [DP.NOM meerii Daak ‘my mail’] [V pahuNc ‘arrive’]]] v[causative]]] Aspect[Progressive]] Tense[Present]]]

The constituent order in (42a) is the unmarked order in Hindi-Urdu (and other head final languages). Variations of order are possible. Movement (scrambling) both to the left (43) and right (44) alters the canonical Subject-IO-DO-V order. (43) [meerii Daak] vee mujhee pahuNc-aa rahee haiN. my.F mail.F 3.M.PL.NOM I.DAT reach-CAUS PROG.M.PL be.PRS.3PL ‘They are forwarding my mail [emphatic] to me.’ [DO S IO V] (44) yah kitaab dii thii [siitaa=nee] [raam=koo]. Ram=DAT that book.F.SG.NOM give.PFV.F.SG be.PST.F.SG Sita=ERG ‘Sita had given that book to Ram.’ [DO V S IO] (Bhatt and Dayal 2007: 288) Movement to the left has different consequences from movement to the right. Mahajan (1990a) shows that leftward movement has effects on coindexing properties; the constituent which is moved to the left to some higher position is able to bind a lower TP internal constituent which would otherwise c-command it (see below in section 3, and Kidwai 2000 for further discussion). Rightward scrambling does not have the same effect on coindexing, but does restrict the scope of wh-phrases (Bhatt and Dayal 2007) (see section 6.3.2 below). The variations in (43) and (44) are in some sense marked, in that they have various effects on interpretation, and they support the conclusion that constituent order in this language is not free in some arbitrary way, but is basically linearized with heads preceding complements, except for CP and DP heads. The support for this conclusion will be given in the following sections. These variations and their effects are reported for sentences like the examples above, which have transitive, class A verbs. Below I will give my proposed structure for a


VII. Syntactic Sketches

sentence with a class C verb, lacking a vP phrase. (Alternatively, there would be a vP projection with radically different case and thematic role properties; see the remarks in sections 3.1 and 3.2 above.)

(45) a. siitaa=koo eek upaay suujh ga-yaa.
Sita=DAT one means.M.SG.NOM be.visible go-PFV.M.SG
'Sita saw a solution; a solution came to mind for Sita.'

b.

[TP DP[DAT] [T′[EPP] [AspP [VP [V′ DP[NOM].M.SG [V [V suujh 'be.visible'] [V ga-yaa 'go-PFV.M.SG']]]] Aspect[Perfective]] Tense[Aorist]]]

The dative experiencer DP, which originates in VP, moves to the specifier of TP to satisfy the requirement that a TP must have a subject. The nominative theme stays in place within VP, and as the only nominative argument, controls the agreement. The V has the perfective aspect standing for the simple past or aorist tense. What is new in this phrase structure is the syntactic combination of the main verb suujh ‘be visible, come to mind’ with another verb jaa-naa/ga-yaa ‘to go, go-PFV’. In the next section I will summarize briefly two kinds of verb combinations and their effect on aspect and case.

4.2. Combinations of verbs and other constituents

The clause structure of Hindi-Urdu involves tense/aspect nodes, which can be filled with an affix or a full finite auxiliary verb. In addition, Hindi-Urdu makes frequent and productive use of combinations involving verbs. There are two kinds of combinations: VV combinations, in which a main verb combines in the syntax with a vector or light verb, and combinations of N or A with V, which form lexical items. The VV construction is discussed in Hook (1974), Butt (1995), and Nespital (1997), while the compounding of a light main verb with N or A to form a lexical unit which behaves like a single predicate is discussed in Hook (1979), Verma (1990), Mohanan (1994), and Davison (2005).

42. Hindi-Urdu: Central Issues in Syntax


4.2.1. Verbal combinations VV

VV consists of a main verb followed by another (vector or light) verb which adds some aspectual or adverbial information to the representation of the event.

(46) a. baccaa roo paR-aa. [paR-naa 'fall']
child.NOM cry fall-PFV.M.SG
'The child burst into tears, suddenly began to cry.' (cf. Butt 1997: 121)

b. us=nee roo Daal-aa. [Daal-naa 'put down']
3SG=ERG cry put.down-PFV.M.SG
'He/she wept copiously on purpose.' (Butt 1997: 123)

c. baccee=nee kitaab paRh lii. [lee-naa 'take']
child.OBL=ERG book.F.SG read take.PFV.F.SG
'The child read the book (completely) (for his own benefit).'

d. maiN zaruurat=see zyaadaa khaa ga-yaa, peeT phuul ga-yaa, [jaa-naa 'go']
I.NOM necessity=from more eat go-PFV.M.SG belly.NOM swell go-PFV.M.SG
jhapakii lag ga-ii.
drowsiness.F.NOM strike go-PFV.F.SG
'I ate (up)/gulped more than was necessary, my belly swelled up, and I felt sleepy.' (Montaut 2006: 126)

Hook (1974) and Nespital (1997) show that the set of possible vector verbs is not completely fixed, and speakers differ in the details of which vector verbs occur with which main verbs. All vector verbs also occur as main verbs. They add information about inception (46a), intentionality or its absence (46b, d), and telicity (46c) (Hook 1974; Nespital 1997; Butt 1995, 1997). There are subtle constraints on which combinations are possible, and on what the contribution of the vector verb is (Nespital 1997). For example, verbs with stative inherent aspect tend not to combine with vector verbs. Vector verbs are either transitive, as in (46b, c), or intransitive, as in (46a, d), affecting the case of the subject of the combined VV. Typically transitive main verbs have transitive vectors, and intransitive main verbs have intransitive vectors, so that the subject case reflects transitivity in (46a, c, d). But mixed combinations are possible, as in (46b, d) and (27). The combination of vector and main verb is complex. The main verb argument structure prevails, while the vector verb determines the subject case. See Butt (1995, 1997) for a very clear and comprehensive account of how the event structure of the vector verb is bleached, losing some of its specifications, so that it refers to the inception of the event referred to by the main verb (46a).
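The division of labor just described, with the main verb contributing the argument structure and the vector verb determining subject case, can be sketched as a small Python model. The mini-lexicon and the perfective condition on ergative marking are my own simplification of the description above, not a full analysis:

```python
# Toy model of VV combination (section 4.2.1): the main verb contributes the
# argument structure; the vector verb's transitivity decides whether the
# subject is ergative in the perfective. Lexicon entries are illustrative
# assumptions, not an exhaustive inventory.

MAIN_VERBS = {                 # main verb -> argument structure
    "roo 'cry'": ["agent"],
    "paRh 'read'": ["agent", "theme"],
}

VECTORS = {                    # vector verb -> transitive?
    "paR 'fall'": False,
    "jaa 'go'": False,
    "lee 'take'": True,
    "Daal 'put down'": True,
}

def combine(main, vector, perfective=True):
    """Combine main verb and vector: arguments come from the main verb;
    ergative subject case requires a transitive vector plus perfective aspect."""
    case = "ERG" if perfective and VECTORS[vector] else "NOM"
    return {"arguments": MAIN_VERBS[main], "subject_case": case}

# (46a) baccaa roo paR-aa: intransitive vector -> nominative subject
print(combine("roo 'cry'", "paR 'fall'"))
# (46b) us=nee roo Daal-aa: mixed combination, transitive vector -> ergative
print(combine("roo 'cry'", "Daal 'put down'"))
```

The mixed combination in (46b), intransitive main verb with transitive vector, is exactly the case where the two sources of information come apart, which is why the model keys subject case to the vector rather than the main verb.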

4.2.2. Verbal compounds with nouns and adjectives

The combination of a noun or adjective with a verb such as transitive kar-naa 'do' or intransitive hoo-naa 'be, become' and aa-naa 'come' is a very productive source of new


predicates (Verma 1993), in fact the only source in recent times. The nouns are complex event nominals. The result combines the event structures and the arguments of the noun and the verb. There are several different surface forms which result in the case of transitive NV compounds (47c, d). These compounds often coexist with monomorphemic doublets. But the compounds are not fully compositional; yaad kar-naa 'do memory' means not only 'to remember' but also 'to miss someone', as in (47c, d), while khooj kar-naa means not only 'to search for' but also 'to discover', as in (47a). There can be subtle differences of meaning between the monomorphemic verb khooj-naa 'search, find' and the compound khooj kar-naa 'find, discover': the monomorphemic verb is unspecified for telicity, while the combination is telic (Hook 1974).

(47) a. ganapati singh=nee [eek naii bimaarii=kii khooj] kii hai.
Ganpat Singh=ERG one new illness=GEN search.F.SG.NOM do.PFV.F be.PRS
'Ganpat Singh has discovered a new disease.' (Bahl 1974: 222)

b. baccooN=koo billii=see Dar lag-taa hai.
child=DAT cat=from fear.NOM strike-IPFV be.PRS
'The children are afraid of the cat.'

c. us=nee moohan=koo bahut yaad ki-yaa.
3SG=ERG Mohan=DAT much memory.F.SG do-PFV.M.SG
'He/she remembered Mohan very much.' (Bahl 1974: xxix)

d. maiN=nee [un=kii yaad] kii.
I=ERG 3PL=GEN.F memory.F.SG.NOM do.PFV.F.SG
'I remembered, recalled them; I missed them.'

The light verb determines the case of the subject, which is ergative in (47a, c, d) or dative in (47b). The N itself has properties of a syntactic object. It has nominative case, and triggers verb agreement in (47a, b, d). Case marking of the object arguments in a sentence with a complex predicate follows more than one pattern. Genitive case is used for the logical object of N in (47a, d). A second option is locative case in (47b), which is selected by the root Dar 'fear'.
A final option is the normal differential object marker =koo in (47c), suggesting that the N and V have fused syntactically as a single unit with case valuing properties; see Hook (1979); Davison (2005). If locative or dative case were the only possibilities for the thematic object, then it would be possible to propose that N incorporates into V, perhaps by raising, so that argument structures would merge in this derived head. But this analysis does not account for the genitive case, which is the default case for a combination of N and a DP. The genitive case suggests instead that the event nominal N first combines with its DP thematic object, licensing its thematic object. This DP-N phrase then undergoes argument merger with V. These issues are discussed in greater detail in Davison (2005). I conclude that Hindi-Urdu does not form complex predicates by syntactic incorporation in (47a, b, c), because the N rather than N+V selects thematic object case.

42. Hindi-Urdu: Central Issues in Syntax

1497

4.3. Noun Incorporation

Incorporation in Hindi-Urdu is very restricted. There is a small number of N-V pairs which directly mark the direct object, like (47c) (see Hook 1979; Davison 2005). In addition, it has been proposed by Mohanan (1995) that a generic indefinite direct object is semantically incorporated with the main verb: kitaab beec-naa 'book sell-INF, do bookselling'. The incorporated object is generic and non-referential. Dayal (2011) shows that what is incorporated is a phrase (NumP) which, though singular in form, can have plural reference. This kind of incorporation is limited and non-compositional, for example, makkhii maar-naa 'lit. kill flies, waste time' (Dayal 2011: 134). In many languages, verbs of manner of motion incorporate directionality: 'float under the bridge'. Narasimhan (2001) shows that Hindi-Urdu manner of motion verbs require directionality to be expressed separately, without lexical incorporation.

4.4. Branching direction

Hindi-Urdu is mostly left-branching in its lexical categories, and in some of the functional categories, such as tense and aspect.

4.4.1. Lexical categories, left branching

I will discuss first the lexical categories V, A, N and P, which have complements to the left. Verbs have direct objects to the left, as we see in (4) and (5). Adjectives have complements to the left, with the case of the complement selected by the adjective, as in (48):

(48) [PRO(arb) jaa-nee=koo] taiyaar
go-INF.OBL=DAT ready
'Ready [PRO to go]'

Nouns take complements to the left:

(49) [zindagii=kii] (eek) kahaanii
life=GEN.F one story.F
'(A) story [of (one's) life]'

The genitive case is the typical unmarked case for noun complements. The genitive postposition has adjectival inflection, agreeing with the NP it modifies; see Payne (1995). Note that the genitive phrase may be outermost (8a). See Verma (1967) for an overview of the Hindi noun phrase. Postpositions fall into two categories, both left-branching. Case markers for ergative, dative, genitive and locative case are expressed as clitics to the right of their complements; see Butt and King (2004) for evidence that Hindi-Urdu postpositions are clitics and not suffixes. They require oblique morphology. Other postpositions do not express case, but instead have adverbial meanings, derived in many cases from nouns; because of their nominal category, they require genitive case on their complements, which is itself oblique in inflection:

1498

VII. Syntactic Sketches

(50) a. [un=kee aa-nee=kee] kaaraN
3PL=GEN.OBL come-INF=GEN.OBL reason
'because [of their coming]'

b. [qaanuun=kee] xilaaf
law=GEN.OBL opposition/contrary
'against [the law]'

A small number of adpositions, including Persian borrowings, occur as both prepositions and postpositions, as shown in (51a, b):

(51) a. maiN=nee [binaa [paRh-ee]] yah kitaab pheeNk dii.
I=ERG without read-PFV.OBL this book.F.SG throw give.PFV.F.SG
'I threw this book away without [having read (it)].'

b. [paRh-nee-kee] binaa
read-INF-GEN.OBL without
'without [reading]'

There are also Persian prepositions, such as baa- 'with' and bee- 'without', which form Hindi-Urdu compound words (Schmidt 1999: 250−252):

(52) a. baa-iimaan 'with faith, faithful'
b. bee-sharm 'without shame, shameless'

Another interesting construction borrowed from Persian is the ezafe construction, which in Hindi-Urdu, especially in Urdu, contrasts with the genitive =kaa postposition (Schmidt 1999: 246−247). The ezafe is a clitic on the possessor, which precedes the possessed, as in (53a) (Bögel and Butt 2010).

(53) a. [maalik=ee] makaan
master.M.SG=EZ house
'owner of the house, landlord'

b. [makaan=kaa] maalik
house.M.SG=GEN.M.SG owner.M.SG
'owner of the house, landlord'

4.4.2. Functional categories

The functional categories include Focus, Aspect, Tense and Negation, which are head-final, and Complementizer, which is head-initial. The focus particles 'also', 'only' and 'even' are final clitics which may precede or follow case clitics (54a, b):

42. Hindi-Urdu: Central Issues in Syntax

1499

(54) a. [caah-nee=kii] hii baat nahiiN hai.
want-INF=GEN.F.SG only matter not is
'It is not only a matter of wanting.' (Montaut 2006: 288)

b. siitaa [niitaa=see] bhii sundar hai.
Sita.NOM Nita=from even beautiful is
'Sita is even more beautiful than Nita.' (Montaut 2006: 294)

Tense and Aspect follow their verbal complements. Negation has several forms. Besides nahiiN 'not', there are variant forms of negation: in imperative sentences, mat is possible, and nonfinite clauses require na. The position of negation is somewhat less fixed. Constituent negation follows what is negated, with special intonation on the complement, as in (55a):

(55) a. siitaa=nee nahiiN kitaab khariid-ii, (kisii aur=nee khariid-ii).
Sita=ERG not book.F.SG.NOM buy-PFV.F.SG some else=ERG buy-PFV.F.SG
'Sitaa didn't buy a/the book, someone else did.' (Vasishth 2000: 111)

b. raam rooTii khaa-taa nahiiN thaa.
Ram.M.SG.NOM bread.NOM eat-IPFV.M.SG not be.PST.M.SG
'Ram did not (used to) eat bread.' (Vasishth 2000: 109)

c. raam rooTii nahiiN khaa-taa thaa.
Ram.M.SG.NOM bread.NOM not eat-IPFV.M.SG be.PST.M.SG
'Ram did not (used to) eat bread.' (Vasishth 2000: 109)

d. *raam rooTii khaa-taa thaa nahiiN.
Ram.M.SG.NOM bread.NOM eat-IPFV.M.SG be.PST.M.SG not
'Ram did not (used to) eat bread.' (Vasishth 2000: 110)

Sentential negation is sentence-final, as in (55d), only with special emphasis. Otherwise, it occurs before the main verb, or within the verb-auxiliary sequence. The variant orders could be derived by assuming that Negation is the head of a projection above vP and Aspect, as in (55b). In (55c), v-Aspect optionally raises to Tense, stranding Neg in surface structure. (See Mahajan 1990 for a proposal which raises Neg to Tense at LF, deriving the right c-command relation for negative polarity indefinites.) Hindi-Urdu is not an entirely harmonic language in terms of consistency of head direction. The CP and DP are head-initial (56).
See examples of finite complement clauses below (section 6.1).

(56) a. [CP [C ki 'that'] TP]
b. [DP [D vee 'those'] [NP loog 'people']]


In contrast to the nearly universal head-final structure of projection heads in this language, the complementizer ki 'that' and other finite clause markers like agar 'if' are always initial (56a). The complementizer and conjunctions in Hindi-Urdu are examples of a higher projection which may be disharmonic with lower phrasal projections (Biberauer et al. 2009); the highest projection may be head-initial when the lower projections are head-final. These highest projections have markedly different properties from nonfinite clauses, both in order within the complex sentence (section 2.1.2) and in cross-clausal coindexing relations (section 6.2).

5. Finite and non-finite clauses

Finiteness is notoriously difficult to define in a universal way (see the contributions to Nikolaeva 2007). The fundamental difference between finite and non-finite clauses in Hindi-Urdu lies in the inflection expressed on the verb. Person inflection is found only in finite clauses, a standard finding in many languages. Both finite and non-finite clauses express number and gender on both the tense and aspect projections. Curiously, however, person agreement may be absent in past tense finite clauses, without affecting their intrinsic finite qualities, such as licensing ergative case; see (9), (24) above. This occurs when the tense auxiliary hai 'is' or thaa 'was' is omitted in the simple past or aorist (30); see Montaut (2006). Even the past tense forms of hoo-naa 'be' are inflected only for number and gender, as in (17) and (20) above.

5.1. Finite inflection

Finite inflection includes indicative and subjunctive or contingent mood, as well as imperative inflection. Indicative inflection expresses past, present and future tense, as in the examples above; see also Butt and Rizvi (2010).

(57) a. Contingent/subjunctive:
siitaa=koo Dar lag-aa [ki kahiiN us=kaa bhaaii na aa pahuNc-ee].
Sita=DAT fear strike-PFV that somewhere 3SG=GEN brother not come arrive-CONT.3SG
'Sita was afraid [that her brother would turn up].' (Montaut 2006: 244)

b. Imperative (familiar form):
mujhee paisaa dee doo!
I.DAT money.NOM give give.IMP.2SG.FAM
'Give me money!'

With verbs of fearing, the contingent complement contains a pleonastic negation na 'not', as in (57a). The familiar imperative, contingent and future are related morphologically; compare (57a) and (1) above. Imperatives also include the verb stem with -iyee(gaa) (polite form), and the bare, non-agreeing infinitives, essentially non-finite forms used in finite contexts. Subordinate finite clauses will be discussed in section 6 below. They include complements, adverbial clauses, and relative clauses.

5.2. Non-finite subordinate clauses

Non-finite inflection includes the participles and the infinitive suffixes mentioned above in section 2.4. Non-finite clauses are distinguished from finite clauses in several ways: verbal inflection, the form of negation (na instead of nahiiN), and occurrence in sentence-internal position. They normally occur as complements or modifiers, typically to the left of a head, in unmarked phrase order. These include infinitives (58), imperfective participles (59), perfective participles (60) and the conjunctive participle (61). Note the case on the non-finite clause, selected by the matrix clause. In (59) and (60), I am assuming an analysis of the complement as a proposition, the complement of a small number of verbs: deekh 'see', sun 'hear', paa 'find' and lag 'strike, seem'.

(58) a. usee [PRO saaikal calaa-naa] aa-taa hai.
3SG.DAT bicycle.F drive-INF come-IPFV.M be.PRS.3SG
'He/she knows [(how) to ride a bicycle].'

b. ham [PRO wahaaN jaa-nee]=kii sooc rahee haiN.
we.M there go-INF.OBL=GEN.F think PROG.M.PL be.PRS.3PL
'We are thinking of [PRO going there].'

c. vee loog [PRO macchlii pakaR-nee]=kee liyee aa-yee.
3.M.PL people.M fish.F catch-INF=GEN.OBL for come-PFV.M.PL
'These people have come [PRO to catch fish].' (Subbarao 1984: 33)

(59) us=nee [billii=koo duudh pii-tee hu-ee] deekh-aa.
3SG=ERG cat.F=ACC milk.M.NOM drink-IPFV.OBL be-PFV.OBL see-PFV.M.SG
'He/she saw [the cat drinking the milk].'

(60) a. [woo bahut thak-ii hu-ii] lag-tii hai.
3.F.SG.NOM much be.tired-PFV.F be-PFV.F strike-IPFV.F be.PRS.3SG
'She seems very tired.'

b. [is leekhak=kii likh-ii hu-ii] kitaab mujhee bahut pasand hai.
this.OBL writer=GEN.F write-PFV.F be-PFV.F book.F.NOM I.DAT much liking is
'I very much like (the) book [written by this author].'

(61) a. maiN too abhii [PRO khabar paa-kar] aa-yaa huuN.
I.M.SG.NOM topic now.EMPH news get-CP come-PFV be.PRS.1SG
'Having heard the news, I have come just now / As soon as I heard the news, I came.' (Porizka 1963: 152)

b. us=nee [PRO sooc-samajh-kar] ciTThii likh-ii.
3SG=ERG think-understand-CP letter.F.SG write-PFV.F.SG
'He/she wrote the letter carefully.'

Non-finite clauses may be used as complements, as in (58)−(60), and also adverbially, as in (61a, b). The core meaning of -kar is 'having Ved', as in (61a), conveying that the event expressed by V is completed in relation to the matrix clause event. A common use, therefore, expresses temporal sequence. There is another use, shown in (61b), in which the conjunctive participle is used as a modifier and is part of the same event as the main clause. See Narasimhan (2001) for a discussion of this construction with manner of motion verbs. Whether complements or adverbials, non-finite forms have the privilege of occurring sentence-internally. Finite clauses, however, must be externalized by being adjoined to TP, vP or DP (section 6 below).

5.2.1. Non-finite clause types and lexical selection

Control verbs select infinitives, which may be postpositionally case-marked like nominals, as in (62) and (63). These inflected forms are determined by the matrix predicate in these sentences. Infinitives can also be marked by postpositional case clitics, such as the genitive in (63). The default case for subject and object infinitives is the direct or nominative case, as in (64).

(62) madhuu=nee [PRO baahar jaa-nee]=see inkaar ki-yaa.
Madhu.F=ERG outside go-INF.OBL=from refusal do-PFV.M.SG
'Madhu refused [PRO to go outside].' (Subbarao 1984: 36)

(63) raadhaa=nee moohan=koo [PRO kitaab paRh-nee]=kee liyee/par majbuur ki-yaa.
Radhaa=ERG Mohan=DAT book read-INF.OBL=GEN for/on forced do-PFV.M.SG
'Radhaa forced Mohan [PRO to read a book].' (Subbarao 1984: 34)

(64) [us=kaa na aa-naa] ajiib=sii baat hai.
3SG.OBL=GEN not come-INF.NOM strange=like matter be.PRS.3SG
'[For him/her not to come] is a strange thing.'

Non-finite inflection is itself inflected for oblique case; there is also gender agreement in the direct form, which will be shown in section 5.3.1. Raising/Exceptional Case Marking verbs select perfective and imperfective participles, as in (59) and (60a) above. I return to syntactic differences between finite and non-finite clauses in section 6.

5.2.2. A case restriction on controlled null subjects

Subordinate clauses may have null subjects, as well as overt genitive subjects like the one in (64) above. That subject could be null, with an arbitrary reference for the subject.


Other infinitives, as well as the conjunctive participle, have obligatorily null subjects, which are coindexed with (controlled by) a matrix antecedent; some examples include (61a, b). There is an interesting restriction, found in some other languages as well, on the case of the null (PRO) subject. We have seen sentences with dative subjects in (26) and (29a) above; they may be antecedents of the conjunctive participle and the subject-oriented reflexive. If the PRO subject itself has dative case by virtue of the predicate it is associated with, the sentence is strongly ungrammatical. Substituting a nominative-subject predicate makes the sentence entirely grammatical.

(65) a. [*PRO(i) kroodh aa-kar] woo(i) ghar=see nikal ga-yaa.
anger come-CP 3SG house=from go.out go-PFV.M.SG
'[PRO(i) having gotten angry] he(i) left the house.'

b. *maiN(i) [PRO(i) us=par kroodh aa-naa] nahiiN caah-tii huuN.
I.NOM 3SG.OBL=on anger come-INF not want-IPFV.F be.PRS.1SG
'I(i) don't want [PRO(i) to get angry at him/her].'

See Davison (2008) for more information about this case restriction, and its implications for the syntactic account of obligatory control. This kind of ungrammaticality can be used as a test for obligatory rather than optional control.

5.3. Properties of non-finite clauses in combination

The properties of single clauses have been described above, both independent clauses and the principal types of embedded non-finite clauses. This section reviews how clauses combine. I first note how agreement, reflexive binding and wh-scope may cross non-finite clause boundaries.

5.3.1. Non-finite complements and cross-clausal coindexing

Non-finite embedded clauses include infinitive/gerunds, participle complements and relative modifiers, and conjunctive participles/converbs marked by Verb-kar 'having Ved', which have been outlined in section 2.5.3 above. Here, I focus on infinitive/gerunds, whose -naa tense suffix has the nominal properties of case, number and gender, and also adjectival properties of agreement in number and gender. See Butt (1995) for an analysis of infinitives as nominalizations which allow multiple local agreement relations.

5.3.2. Agreement within and across clause boundaries

In single clauses, the highest nominative argument determines agreement (section 2.3 above). Nominative objects of an embedded complement clause have the option of controlling agreement (Butt 1995; Bhatt 2005). Agreement is found on the infinitive as well as the matrix clause (66a), or is absent (66b). Two or more degrees of embedding allow extended agreement (66c).

(66) a. Long-distance agreement:
laRkooN=nee [PRO yah kitaab paRh-nii /*-naa] caah-ii.
boys=ERG this book.F.NOM read-INF.F/INF.M want-PFV.F.SG
'The boys wanted [PRO to read this book].'

b. Default agreement (3SG masculine):
laRkooN=nee [PRO yah kitaab paRh-naa] caah-aa.
boys=ERG this book read-INF.M want-PFV.M.SG
'The boys wanted [PRO to read this book].'

c. Extended long-distance agreement:
maiN=nee [PRO [PRO gaaRii calaa-nii] siikh-nii] nahiiN caah-ii.
I=ERG car.F.NOM drive-INF.F learn-INF.F not want-PFV.F
'I didn't want [PRO to learn [PRO to drive a car]].' (P. Dasgupta p.c.)

Long-distance agreement is not possible if the complement is finite, as in (78) below. Nominative subjects of intransitive verbs may also trigger long-distance agreement:

(67) [bas-eeN Dipoo-see nikal-nii] shuruu hu-iiN.
bus-F.PL depot-from come.out-INF.F beginning.M be-PFV.F.PL
'Buses began to come out of the depot.' (K.V. Subbarao, p.c.)

Various explanations have been proposed to account for how agreement can be extended and why its extension is optional. These proposals include raising the embedded object to the matrix clause (Mahajan 1990a), making the nominal embedded clause agree with the matrix verb after it has agreed with the nominative object (Butt 1995), and complex predicate formation: the embedded verb combines with the matrix clause because of a (lexical) option, so that the infinitive object clause is realized as a projection which is not fully clausal, lacking a PRO subject (Bhatt 2005).
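The descriptive generalization, that a matrix verb may agree with a nominative argument inside a non-finite complement, but that a finite boundary or a case clitic on the infinitive blocks the relation, can be summarized in a toy Python sketch. This is my own simplification for exposition (it sets aside the optionality of long-distance agreement seen in the default-agreement option), not a formal analysis from the chapter:

```python
# Toy sketch of the agreement facts: search for an accessible nominative DP,
# descending through non-finite complements only. A finite complement or a
# case clitic on the infinitive (cf. the =see example) is opaque.
# Clause encodings are illustrative assumptions.

def agreement_controller(clause):
    """Return the nominative DP the matrix verb agrees with, or the
    3SG masculine default if no nominative is accessible."""
    if clause["nominatives"]:              # a local nominative always wins
        return clause["nominatives"][0]
    comp = clause.get("complement")
    while comp is not None:
        if comp["finite"] or comp.get("case_clitic"):
            return "default (3SG.M)"       # opacity: finite clause or clitic
        if comp["nominatives"]:
            return comp["nominatives"][0]  # long-distance agreement
        comp = comp.get("complement")
    return "default (3SG.M)"

# Ergative subject, nominative object inside a bare infinitive complement:
lda = {"nominatives": [],
       "complement": {"finite": False,
                      "nominatives": ["kitaab (F.SG)"]}}
print(agreement_controller(lda))           # long-distance agreement succeeds

# A case clitic on the infinitive blocks the relation:
blocked = {"nominatives": [],
           "complement": {"finite": False, "case_clitic": "=see",
                          "nominatives": ["ciTThiiyaaN (F.PL)"]}}
print(agreement_controller(blocked))       # falls back to the default
```

The same opacity condition reappears with finite complements in section 6.2, which is why the sketch treats finiteness and case clitics as a single blocking check.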

5.3.3. Anaphoric binding within and across clause boundaries

Reflexive pronouns in Hindi-Urdu are invariant for person, number and gender, and are subject-oriented (section 2.5.2). But they may be simplex (apnaa) or complex (apnee aap). Simplex reflexives in embedded non-finite clauses may be locally or long-distance bound; complex reflexives are only locally bound (Kachru and Bhatia 1978; Subbarao 1984; Gurtu 1992; Davison 1999, 2001).


(68) maaN=nee shyaam=koo [PRO apnee=koo/apnee aap=koo gumnaam patr bheej-nee]=see manaa kiyaa.
mother=ERG Shyam=DAT self's=DAT/self's self=DAT anonymous letters send-INF=from forbidden do.PFV.M.SG
'Mother forbade Shyam(i) [PRO(i) to send self anonymous letters].'
Ambiguous: apnee=koo = Mother, Shyam (local and long-distance reading)
Non-ambiguous: apnee aap=koo = Shyam/*Mother (only local reading) (Davison 1999)

(69) maaN=nee raadhaa(i)=koo [PRO(i) apnee=koo/apnee aap=koo deekh-nee] nahiiN di-yaa.
mother=ERG Radhaa=DAT self's=ACC/self's self=ACC see-INF.OBL not give-PFV.M.SG
'Mother did not allow Radha(i) [PRO(i) to look at self].'
Ambiguous: apnee=koo = Mother, Radha
Non-ambiguous: apnee aap=koo = Radhaa, *Mother

See the papers in Cole, Huang and Hermon (2001) for a discussion of how these properties of binding may be accounted for in a Chomskyan theory of syntax. There are clear parallels between long-distance agreement and long-distance binding, in that both are limited by finite inflection in embedded clauses. But long-distance reflexive binding is available in more embedded contexts than long-distance agreement. Reflexive binding is possible across a case-marked infinitive, such as -see 'from' in (68). Case marking of this sort blocks long-distance agreement (70):

(70) baccooN=nee [PRO ciTThiiyaaN likh-nee / *-nii]=see inkaar ki-yaa / *kiiN.
children=ERG letter.F.PL.NOM write-INF.OBL.M/INF.F=from refusal do-PFV.M.SG do.PFV.F.PL
'The children refused [PRO to write letters].'
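The binding pattern in (68)−(69), with simplex apnaa bound long-distance across non-finite boundaries but complex apnee aap bound only locally, and finiteness blocking both, can also be stated as a small search procedure. The following Python sketch is my own illustrative encoding of the description in the text, not an analysis proposed here:

```python
# Toy sketch of reflexive binding: walk up the chain of subjects from the
# reflexive's own clause outward. A finite boundary stops the search for
# both reflexives; the complex reflexive also stops after the local subject.
# The chain encoding is an illustrative assumption.

def possible_antecedents(reflexive, subject_chain):
    """subject_chain lists (subject, crossed_finite_boundary) pairs from the
    reflexive's own clause outward; return the subjects that can bind it."""
    antecedents = []
    for subject, crossed_finite in subject_chain:
        if crossed_finite:
            break                          # finiteness is opaque to binding
        antecedents.append(subject)
        if reflexive == "apnee aap":
            break                          # complex reflexive: local only
    return antecedents

# (68): non-finite embedded clause, so two boundaries are non-finite.
chain = [("Shyam (PRO)", False), ("Mother", False)]
print(possible_antecedents("apnaa", chain))      # both subjects: ambiguous
print(possible_antecedents("apnee aap", chain))  # only the local subject

# With a finite boundary before the matrix subject, only the local subject
# remains, matching the finite-complement pattern discussed in section 6.2.
finite_chain = [("usee (3SG.DAT)", False), ("Mother", True)]
print(possible_antecedents("apnaa", finite_chain))
```

Note that this search is deliberately blind to case clitics on infinitives: as (68) and (70) show, a case-marked infinitive blocks long-distance agreement but not long-distance binding, so the two toy models need different opacity conditions.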

5.3.4. Wh-scope in non-finite clauses

Non-finite clauses are also transparent for wh-scope, unlike finite clauses. An interrogative DP in a non-finite clause must have matrix interrogative scope. This is true for the complement infinitive (71) and the participial relative in (72):

(71) aap(i)=nee [PRO(i) kyaa kar-nee]=kaa phaisalaa ki-yaa?
you=ERG what do-INF.OBL=GEN decision do-PFV.M.SG
'What did you decide [PRO to do ___]?' (Not 'You decided [what PRO to do].')

(72) [[kis=kii likh-ii hu-ii] kitaab] aap=koo sab=see pasand hai?
who=GEN write-PFV be-PFV book.F you=DAT all=from liked is
*'Who(i) do you like best the book [which(j) __(i) wrote ___(j)]?'

The Hindi-Urdu sentence is fully grammatical with matrix interrogative scope, because the perfective participle modifier clause is non-finite. The finite counterpart in section 6.3.1 below is as ungrammatical as the English translation of (72). See section 6.3.2 below for the limits placed on wh-scope in finite clauses, and how these restrictions may be evaded.

6. Finite subordinate clauses

6.1. Finite clauses as verbal complements

Complement clauses are normally adjoined to the right (73), but are possible on the left (74). They are simply ungrammatical in argument position, in which they would receive case (75). Instead, finite clauses can be coindexed with an optional nominal in the argument position. There are many differing views on exactly how the construction is derived. For example, the finite complement clause might be generated in preverbal object position and extraposed to the right (or left), leaving a pronoun copy. Or the finite complement may be generated to the right as an adjunct, coindexed with an expletive in object position. Or the pronoun in object position may be the object, coindexed with the finite adjunct clause. There is still no agreement on the most plausible analysis. Finite clauses used as complements are optionally marked by the clausal prefix, the complementizer ki 'that':

(73) ham (yah) jaan-tee haiN [(ki) vee aa rahee haiN].
we.NOM this.NOM know-IPFV.M.PL be.PRS.3PL that 3PL come PROG be.PRS.3PL
'We know [that they are coming].' (cf. Subbarao 1984)

(74) [(*ki) vee aa rahee haiN], maiN aisaa/yah sooc-taa huuN.
that 3PL come PROG be.PRS.3PL I.NOM such/this think-IPFV be.PRS.1SG
'[They are coming], so/this I think.'

In many cases the pronominal yah 'this' and the complementizer ki 'that' in (73) may be omitted. Subbarao (1984), Mahajan (1990a) and Montaut (2004) explore the conditions for omission. Though the ki-clause is interpreted as the complement of jaan-tee 'know-IPFV', this clause cannot occur in preverbal object position (75), unless the ki-clause is right-adjoined to a noun phrase like yah baat 'this fact' (76). This restriction holds for Hindi-Urdu and some other Indic languages, but not for all; see Bayer (1999), who notes that there are either initial or final complementizers in Indic languages.
It is proposed in Davison (2007) that some Indic languages like Marathi and Bengali have an initial high complementizer position (force) on the left, as well as a lower projection on the right (focus or polarity) which licenses a complementizer. This expansion of the complementizer into different positions derives from Rizzi’s (1997) proposal about the left periphery of finite clauses.

42. Hindi-Urdu: Central Issues in Syntax

1507

(75) *ham [ki vah aa nahiiN sak-aa] nahiiN jaan-tee.
we.NOM that 3SG.NOM come not be.able-PFV not know-IPFV.M.PL
'We did not know [that he was not able to come].'

(76) ham yah baat [ki vee aa rahee haiN] jaan-tee haiN.
we.NOM this matter.NOM that 3PL come PROG are know-IPFV be.PRS.3PL
'We know *(this fact) [that they are coming].' (cf. Subbarao 1984)

A finite clause in (76) is adjoined to a clause-internal full DP, which occurs in argument position. There are two possible analyses of complement clauses. The syntactic complement could be the finite ki-clause, which is forced by some principle to adjoin to the right or left side of TP (see Manetta 2010). Alternatively, the syntactic complement may be the optional pronoun or NP yah, yah baat, aisaa 'this, this fact, such'. Some evidence for the second position was shown in a previous section: non-finite complements have case postpositions selected by the matrix verb (Subbarao 1984: 34−36) (see [62]−[63] above), and this case also carries over to the pronoun in object position when the subordinate clause is finite and adjoined to the right, as in (77).

(77) maaltii(i)=nee is (baat)=see inkaar ki-yaa [ki woo(i) baahar jaa-ee].
Malti=ERG this matter=from refusal do-PFV that 3SG outside go-CONT.3SG
'Malti refused/denied [that she would go outside].'

In sum, the actual complement is a nominal element in preverbal object position, coindexed with the adjoined finite clause. This issue remains controversial, as we note in section 6.3.3 below. I will nevertheless refer to these finite clauses as complements, as they are interpreted as the main verb complement either directly or indirectly.

6.2. Coindexing relations

Finiteness in complement clauses introduces opacity in coindexing relations, such as the long-distance agreement and reflexive binding which are possible in the non-finite complement clauses described in section 5.3.2 above. In contrast to the examples there, the sentences in (78) and (79) are ungrammatical if agreement in (78) or binding in (79) crosses a finite clause boundary.

(78) laRkooN=nee kah-aa /*kah-ii [ki unhooN=nee yah kitaab paRh-ii /*paRh-aa].
boys=ERG say-PFV.M.SG say-PFV.F.SG that 3PL=ERG this book.F.SG.NOM read-PFV.F.SG read-PFV.M.SG
'The boys said [that they read this book].'

The only possible agreement is found within the complement clause, in which the perfective verb agrees with the feminine singular direct object. The matrix verb cannot show
agreement with this object, unlike the sentence with a non-finite complement (section 5.3.1). Similarly, reflexive binding in (79) is not possible across finite clause boundaries, unlike the long-distance binding which is an option for embedded non-finite complements in section 5.3.2.

(79) maaN(i)=nee raadhaa(j)=see kah-aa [ki usee(j) apnee(*i/j)=koo deekh-naa nahiiN caahiyee].
     mother=ERG Radhaa=INS say-PFV that 3SG.DAT self=ACC see-INF not ought
     ‘Mother(i) told Radha(j) [that she(j) should not look at self(*i/j)].’

Within the finite ki-clause, the anaphor apnee is locally bound by the subject usee ‘3SG.DAT’. The matrix subject antecedent maaN=nee ‘mother=ERG’ is not available. In contrast, the matrix subject is a possible binder if the embedded clause is non-finite, as in (68) above. In sum, subordinate complement clauses in Hindi-Urdu are external to matrix object position, whether by base generation or adjunction later in the derivation. Coindexation is limited within the single finite clause, with respect to agreement and reflexive binding.

6.3. Adjunct clauses

Finite adverbial clauses introduced by conjunctions are also adjoined, probably to the highest projection of the matrix clause. The adverbial clause may precede or follow the matrix, modified clause. The conditional clause typically precedes in example (80).

(80) [agar tum is kuursii=par aisee baiTh-oogee] too woo TuuT jaa-eegii.
     if 2.FAM.SG this.OBL chair.F.OBL=on so sit-FUT.2.FAM.SG then 3SG break.INTR go-FUT.F.3SG
     ‘If you sit that way on this chair, it will break.’

Historically, adjunct clauses derive from the relative clauses found in earlier stages of Indic. Many of the conjunctions which introduce adverbial clauses belong to the Indo-European y-relative series, like yad ‘which, if’, which was replaced by the Persian agar ‘if’.

6.3.1. Finite relative clauses

Relative clauses in Hindi-Urdu and other Indic languages fall into several different categories, distinguished primarily by order, but also by form. The specific relations among the different types are a matter of considerable discussion; the broad outlines of the issues will be noted below. The correlative relative clause construction in modern Hindi-Urdu consists of a relative clause marked by a determiner/pronoun belonging to the relative j-series (in contrast to the interrogative k-series). The relative determiner joo ‘which’
marks the relativized constituent, which may be in situ, or at the left periphery of the clause. The matrix clause has a corresponding constituent, the one which is modified; it may be null, a pronoun or a full NP (see McCawley 2004 for more detailed discussion). The relative clause normally precedes in (81), but may also follow the matrix clause (82). Both versions must have a restrictive reading.

(81) [aap=nee joo kitaab-eeN(i) kal khariid-iiN] vee(i) khoo ga-ii haiN.
     you=ERG REL book.F-PL.NOM yesterday buy-PFV.F.PL 3PL.NOM get.lost go-PFV.F be.PRS.3PL
     ‘The books [which you bought t yesterday] are missing.’

(82) vee kitaab-eeN(i) khoo ga-ii haiN [joo (kitaab-eeN)(i) aap=nee kal khariid-iiN].
     3PL book.F-PL.NOM get.lost go-PFV.F be.PRS.3PL REL book.F-PL.NOM you=ERG yesterday buy-PFV.F.PL
     ‘The books have gotten lost [which you bought t yesterday].’

The two possibilities of ordering in correlative clauses reflect the possibilities in earliest Sanskrit (Hock 1989). An innovation, probably influenced by Persian, allows a relative clause to be right-adjoined to an NP, as in (83). This structure has the restrictive reading of the correlative structures, but it is also the only way that non-restrictive readings can be expressed in modern Hindi-Urdu (Dayal 1996).

(83) mujhee [woo aadmii [joo (*aadmii) siitaa=koo acchaa lag-taa hai]] pasand nahiiN hai.
     I.DAT 3SG man which man Sita=DAT good strike-IPFV be.PRS.3SG liking not be.PRS.3SG
     ‘I do not like the man [who Sita likes].’ (Mahajan 2000: 203)

Multiple relatives are possible in left-adjoined relative clauses, each relative corresponding to a correlate in the matrix clause:

(84) [jis(i)=nee joo(j) [PRO(i) kar-naa] caah-aa] us(i)=nee woo(j) kiyaa.
     which=ERG which.NOM do-INF want-PFV 3SG=ERG 3SG do.PFV
     Lit. ‘Who wanted to do what, he/she did that.’ (Bhatt 2003: 486)

There is disagreement among speakers about whether multiple relatives are also possible in relatives on the right.
There are various views on the derivation of correlative clauses. There is a general consensus that right-adjoined relative clauses like (82) are extraposed to the right from an NP-adjoined structure like (83) (Dayal 1996: 153; Bhatt 2003: 488). The left correlatives like (81) may be derived by leftward movement from an NP-adjoined relative clause (Mahajan 2000; Bhatt 2003), subject to locality conditions. Movement of a relative clause from a single head NP is a plausible account, but the existence of multiple relatives with multiple heads argues against movement of a single relative clause with multiple relative constituents. For this reason, Dayal (1996) treats correlatives as adjuncts to the matrix IP; Davison (2009) makes a case for a type of IP adjunction which accounts for locality conditions. Bhatt (2003) proposes a mixed analysis: relative clauses with a single relative are adjoined to the left of the head NP and optionally moved to adjoin to the left, while clauses with multiple relatives are base-generated as IP left adjuncts.

6.3.2. Interrogative scope

Interrogative phrases are distinguished by the k-series of determiners and pronouns, such as kaun/kis ‘who (direct/oblique)’, in (71) and (72) above, and (85) and (86) below. Unlike languages like English, there is no requirement for the interrogative phrase to move to the left periphery of the finite clause; instead, the interrogative usually appears in a preverbal focus position (Kidwai 2000). Only yes/no questions have an initial interrogative. As is the case in many languages, this interrogative is ‘what’, kyaa in Hindi-Urdu (see Davison 2007 for comparison of Hindi-Urdu with other Indic languages, in the way that yes/no-questions and clausal subordination are expressed).

(85) yah kitaab kis=koo sab=see acchii lag-ii?
     this book.F.SG.NOM who=DAT all=from good.F strike-PFV.F.SG
     ‘Who likes this book the best?’

(86) kyaa aap=koo yah kitaab sab=see acchii lag-tii hai?
     what you=DAT this book.F.SG.NOM all=from good.F strike-IPFV.F be.PRS.3SG
     ‘Do you like this book the best?’

Interrogative scope is restricted to the minimal finite clause. For example in (87), the interrogative force and scope of kaun ‘who’ is confined to the subordinate clause marked by the complementizer ki.

(87) pulis sooc rahii hai [ki coor kaun hai].
     police.F.SG think PROG.F be.PRS.3SG that thief.NOM who be.PRS.3SG
     ‘The police are thinking (about) [who the thief is ___ ].’
     Not: ‘Who are the police thinking [that the thief is ____]?’

The addition of kyaa ‘what’ in the matrix clause extends the scope of kaun ‘who’ to the matrix clause in (88):

(88) pulis kyaa sooc rahii hai [ki coor kaun hai].
     police.F.SG what think PROG.F be.PRS.3SG that thief.NOM who be.PRS.3SG
     ‘Who are the police thinking [that the thief is ____]?’
     Not: ‘The police are thinking (about) [who the thief is ___ ].’

The interrogative in the most deeply embedded clause can have scope over multiple clauses, provided that each successive matrix clause contains kyaa ‘what’ (89):


(89) us=nee kyaa kah-aa [ki pulis kyaa sooc rahii hai [ki coor kaun hai]]?
     3SG=ERG what say-PFV that police.F.SG.NOM what think PROG.F be.PRS.3SG that thief.NOM who be.PRS.3SG
     ‘Who did he/she say [that the police are thinking [that the thief is ___]]?’ (cf. Davison 1988)

Without kyaa ‘what’ in each of the clauses, the sentence fails to have matrix scope for kaun ‘who’. Explaining and giving a formal account of the role of kyaa ‘what’ has proved to be a surprisingly difficult task. There have been several important contributions to this question, particularly Dayal (1996) and previous work, Lahiri (2002) and Manetta (2010). The main difference among the accounts lies in the status of the matrix kyaa ‘what’ and the subordinate ki-clause. On the direct dependency analysis of Manetta (2010), kyaa is a meaningless expletive with interrogative features which agrees with the interrogative in the ki-clause. This clause is the real direct object of the matrix verb (and aligned to the right by some unspecified phonetic process). By contrast, on the indirect dependency analysis of Dayal and Lahiri, kyaa is the direct object of the matrix clause, and the ki-clause containing the interrogative is an adjunct to the matrix clause. The semantics of the matrix sentence with the object kyaa includes interrogative sentences such as the ki-adjunct clause, and effectively combines two questions into one, which gives the embedded interrogative scope over the matrix clause. See Manetta (2010) for a discussion of this account, and of another interrogative structure which involves movement. Finite clauses restrict interrogative scope where the kyaa ‘what’ strategy is not possible, in relative clauses, for example in (90). Compare this finite modifier clause with the non-finite participle in (72) above, and (94) below.

(90) *[joo(i) kitaab kis=nee likh-ii hai] (woo(i)) aap=koo sab=see acchii lag-tii hai?
     which book who=ERG write-PFV.F be.PRS.3SG 3SG.NOM you=DAT all=from good.F strike-IPFV.F be.PRS.3SG
     *‘Who(i) do you like best the book [which(j) __(i) wrote ___(j)]?’

The status of embedded clauses as adjuncts has some historical foundation. As arguments or modifiers, these clauses are external to the matrix clause, typically to the right in the case of complements, and to the left in the case of correlative clauses and adverbial finite clauses. The synchronic reason for externalization is unclear: finite complement clauses have categorial properties which are inconsistent with case marking. Non-finite clauses occur in sentence-internal argument and modifier positions, in part because of their nominal or adjectival properties which are reflected in their inflection, but even uninflected -kar conjunctive participles can be internal, adjoined to v/VP. Diachronically, all types of finite clauses in Indic languages have shown no markers of syntactic subordination until quite recently (Davison 2009).


6.3.3. Limits on interrogative scope

It is possible that all finite clauses are islands, restricting coindexing relations across finite clause boundaries. The scope of interrogatives and relatives is limited to the local finite clause. For example, the interrogative in (91) has scope over the entire matrix CP, including the relative modifier clause. But an interrogative in a finite relative clause is ungrammatical, in example (90) above. It cannot have scope over more than its local finite clause, and in the local clause it is incompatible with the typing of the clause as relative.

(91) Interrogative in main clause:
     [joo kitaab(i) us=nee t(i) likh-ii hai] woo kis=koo(j) sab=see acchii lag-ii?
     REL book 3SG=ERG write-PFV.F be.PRS.3SG 3.F.SG.NOM who=DAT all=from good.F strike-PFV.F.SG
     ‘Who likes best the book [that he/she wrote ___]?’

Interrogatives within a finite clause have scope only over that clause. This point is illustrated in sentence (87) above, in which the interrogative word kaun ‘who’ has scope over only the subordinate clause. In sentence (88), the presence of kyaa ‘what’ in the matrix clause extends the scope of kaun ‘who’. Similarly, relative phrases have scope only over a single finite clause, but no scope extension is possible by adding a pronoun yah ‘this’ in the matrix clause, in (92):

(92) *[us=nee (yah) kah-aa [ki [joo kitaab(i) aap=nee likh-ii]] woo(i) acchii hai.
     3SG=ERG this say-PFV that REL book.F.SG.NOM you=ERG write-PFV.F.SG 3SG.F.NOM good.F be.PRS.3SG
     ‘The book [which he said [that you wrote which]] is good.’

Note that this sentence is grammatical with a different constituent structure and different meaning, ‘He said [that the book [which [you wrote which]] is good].’ Here the relative phrase has scope over its minimal finite clause.

6.3.4. Obligatory wide scope and non-finite inflection

Interrogatives within non-finite clauses have only matrix scope, not local scope, in (71). This fact suggests that non-finite clauses are TP, lacking the Complementizer Phrase projection with the interrogative feature which licenses a local wh-scope interpretation in (87).

(93) vee [PRO kyaa kar-nee]=kii sooc rahee haiN?
     3.M.PL.NOM what do-INF.OBL=GEN.F think PROG.M.PL be.PRS.3PL
     ‘What are they thinking of [PRO doing what]?’
     Not: ‘They are thinking of [what PRO to do what].’


There are no non-finite relative clauses as such, but there is a modifier use of participles which is equivalent to a relative clause, shown in (94). The interrogative must have wide scope.

(94) [[kis=kii 0(i) likh-ii hu-ii] kitaab(i)] sab=see acchii hai?
     who=GEN.F write-PFV.F be-PFV.F book.F.SG all=from good.F be.PRS.3SG
     ‘Who is the book which who wrote which the best?’

The participle subject kis=kii ‘who=GEN’ has scope over both the participle clause and the matrix clause. No narrow scope reading is possible. The facts in (93)−(94) suggest that non-finite clauses have no Complementizer Phrase projection above the Tense Phrase, accounting for the absence of non-finite clauses marked by joo ‘relative’ phrases. Non-finite clauses must be full clauses in some sense, however, because they are domains for reflexive binding and local agreement (section 5.3).

7. Summary

7.1. The structure of the clause

I began this discussion of Hindi-Urdu with a survey of the clausal projections within the Tense Phrase, grammatical functions like subject and direct object, and the morphology of nominal case and verbal inflection which reflect and govern the composition of the clause. Case reflects grammatical functions in various ways, in part depending on finiteness, in part on lexical class and diatheses. There is a clear distinction between structural case related to grammatical functions, and lexical case determined by specific verbs and related to semantic roles. Finiteness of clauses affects not only morphology but also syntactic relations of many kinds. It affects where finite clauses may be combined with matrix clauses; they must be in some sense external and adjoined, as in examples (73)−(74), even if they are semantic complements. Non-finite clauses in contrast are clause-internal as complements and modifiers. Finiteness governs coindexing relations of several sorts. Verbal agreement, reflexive binding and wh-scope may hold across non-finite clause boundaries, while these relations are strictly local within finite clauses. Exactly how finite verb morphology is represented within the finite clause structure and how it limits these three coindexing patterns is an issue which deserves a more detailed theoretical account than is possible in the scope of this chapter.

7.2. Major features of clause structure and morphology in Hindi-Urdu

There are three themes in the outline of Hindi syntax and morphology in sections 1−3. The first is syntactic structure, which is largely left-branching. The verbal projections and the Aspect and Tense projections are head-final. Finite CP and DP have initial heads. Finite CPs are far more restricted than non-finite clauses; finite clauses may only be external adjuncts to TP and DP, while non-finite clauses may be complements or phrase-internal adjuncts in a variety of phrases. In section 5, we have seen how finite CPs limit the possibilities for agreement, coindexing and wh-scope.

A second theme is the relation of case to grammatical functions and thematic roles. A clear distinction is found in Hindi-Urdu between lexically selected cases like dative (for experiencers and goals) and locatives (for source, etc.), in contrast to structural cases which are valued and licensed in specific syntactic configurations. These include genitive, ergative case on transitive subjects, and the dative of specific, animate direct objects. The licensing of the structural cases is restricted by finiteness, aspect and lexical class; the four lexical categories of verbs discussed above may derive from different verbal projections which license case in different ways on the arguments. Verb compounding and diatheses also affect case licensing and the projection of arguments.

A third theme involves chains of related constituents, antecedents which bind reflexive pronouns, and nominative arguments which determine person, number and gender (phi-features) on verbs. These relations are, however, independent; reflexives are strictly subject-oriented, while agreement may be determined by nominative objects as well as subjects. Non-finite clauses allow coindexing relations (reflexive binding, agreement, wh-scope) to cross clause boundaries. Finite CPs also have wh-scope relations, primarily in relative clauses and constituent questions. These constructions involve a chain of a TP-internal constituent and the specifier position of CP. Finite clauses limit scope indexing more strictly than non-finite clauses.

8. Abbreviations

CONT contingent
CP conjunctive participle
EMPH emphatic
EZ ezafe
FAM familiar
FORM formal
HON honorific

Acknowledgments

Many thanks to an anonymous reviewer who made constructive suggestions and provided useful additional sources. I am grateful to the Obermann Center for Advanced Study of the University of Iowa for support for writing the first draft of this chapter. This research was supported in part by Arts and Humanities Initiative grants in 2008 and 2000 and a Career Development project from the University of Iowa. I have been developing the ideas in this work over many years, beginning with a project supported by a grant to Cornell University from the National Science Foundation in 1989, RII-88-00534. I’d like to thank K.V. Subbarao and Peri Bhaskararao and the Tokyo University of Foreign Studies for a major impetus to study case and grammatical functions. Thanks to Rashmi Gupta, Rajiv Sahay, Aushima Thakur, Sami Khan, Rajiv Ranjan and Makur Jain for native speaker judgements. Thanks also to Y. Romero, S. Cassivi, D. J. Berg, M. Yao, A. Comellas and A. Azar.


9. References (selected)

Ahmed, Tafseer 2010 Unaccusativity/unergative distinction in Urdu. Journal of South Asian Linguistics 3(1).
Aissen, Judith 2003 Differential object marking: iconicity vs. economy. Natural Language and Linguistic Theory 21: 435−483.
Bahl, Kali Charan 1974 Studies in the Semantic Structure of Hindi. Delhi: Motilal Banarsidass.
Bailey, T. Grahame 1967 Urdu. London: The English Universities Press Ltd.
Bashir, Elena 1999 The Hindi and Urdu ergative postposition -ne: Its changing role in the grammar. In: Rajendra Singh (ed.), Yearbook of South Asian Languages and Linguistics, 11−36. New Delhi: Sage Publications.
Bayer, Josef 1999 Final complementizers in hybrid languages. Journal of Linguistics 35: 233−271.
Bhatia, Tej 1993 Punjabi. London: Routledge.
Bhatt, Rajesh K. 2003 Locality in correlatives. Natural Language and Linguistic Theory 21: 485−541.
Bhatt, Rajesh K. 2005 Long-distance agreement in Hindi-Urdu. Natural Language and Linguistic Theory 23: 757−807.
Bhatt, Rajesh K., and Veneeta Dayal 2007 Rightward scrambling as rightward remnant movement. Linguistic Inquiry 38: 287−301.
Biberauer, Theresa, Glenda Newton, and Michelle Sheehan 2009 The Final-over-Final Constraint and predictions for diachronic change. Toronto Working Papers in Linguistics 31.
Bögel, Tina, Miriam Butt, and Sebastian Sulger 2008 Urdu ezafe and the morphology-syntax interface. In: Proceedings of the LFG08 Conference. Stanford: CSLI Publications.
Butt, Miriam 1993 Object specificity and agreement in Hindi/Urdu. In: Papers from the 29th Regional Meeting of the Chicago Linguistic Society, 89−103.
Butt, Miriam 1995 The Structure of Complex Predicates in Urdu. Stanford: Center for the Study of Language and Information.
Butt, Miriam 1997 Complex predicates in Urdu. In: Alex Alsina, Joan Bresnan and Peter Sells (eds.), Complex Predicates, 107−150. Stanford: CSLI Publications.
Butt, Miriam, and Tracy Holloway King 2004 The status of case. In: Veneeta Dayal and Anoop Mahajan (eds.), Clause Structure in South Asian Languages, 153−198. Dordrecht: Kluwer Academic Publishers.
Butt, Miriam, and Jafar Rizvi 2010 Tense and aspect in Urdu. In: Patricia Cabredo Hofherr and Brenda Laca (eds.), Layers of Aspect, 43−66. Stanford: Center for the Study of Language and Information.
Chomsky, Noam 2004 Beyond explanatory adequacy. In: Adriana Belletti (ed.), Structures and Beyond, 104−131. Oxford: Oxford University Press.
Cole, Peter, C.-T. James Huang, and Gabriella Hermon (eds.) 2001 Long Distance Anaphora. Irvine: Academic Press.

Davison, Alice 1988 Operator binding, gaps and pronouns. Linguistics 26: 181−214.
Davison, Alice 1999 Lexical anaphors in Hindi-Urdu. In: Kashi Wali, K.V. Subbarao, Barbara Lust and James Gair (eds.), Lexical Anaphors and Pronouns in Some South Asian Languages: A Principled Typology, 397−470. Berlin: Mouton de Gruyter.
Davison, Alice 2001 Long-distance anaphors in Hindi-Urdu: issues. In: Peter Cole, Gabriella Hermon and James C.-T. Huang (eds.), Long Distance Anaphora, 47−82. Irvine: Academic Press.
Davison, Alice 2002 Agreement features and properties of TENSE and ASPECT. In: Rajendra Singh (ed.), The Yearbook of South Asian Languages and Linguistics, 27−58. New Delhi: Sage Publications.
Davison, Alice 2004 Structural case, lexical case and the verbal projection. In: Veneeta Dayal and Anoop Mahajan (eds.), Clause Structure in South Asian Languages, 199−225. Dordrecht: Kluwer Academic Publishers.
Davison, Alice 2005 Phrasal predicates: How N combines with V in Hindi-Urdu. In: Tanmoy Bhattacharya (ed.), Yearbook of South Asian Languages and Linguistics, 83−116. Berlin: Mouton de Gruyter.
Davison, Alice 2007 Word order, parameters and the Extended COMP Projection. In: Josef Bayer, Tanmoy Bhattacharya, and Hany Babu (eds.), Linguistic Theory and South Asian Languages, 175−198. Amsterdam: John Benjamins.
Davison, Alice 2008 A case restriction on control: implications for movement. Journal of South Asian Linguistics 1(1): 29−54. www.jsal.org
Davison, Alice 2009 Adjunction, features and locality in Sanskrit and Hindi/Urdu correlatives. In: Aniko Liptak (ed.), Correlatives Cross-Linguistically, 223−262. Amsterdam: John Benjamins.
Dayal, Veneeta 1996 Locality in Wh-Quantification. Dordrecht: Kluwer Academic Publishers.
Dayal, Veneeta 2000 Scope marking, cross-linguistic variation in indirect dependency. In: Uli Lutz, Gereon Müller, and Arnim von Stechow (eds.), Wh-Scope Marking, 157−194. Amsterdam: John Benjamins Publishing Company.
Dayal, Veneeta 2011 Hindi pseudo incorporation. Natural Language and Linguistic Theory 29(1): 123−167.
Gurtu, Madhu 1992 Anaphoric Relations in Hindi and English. Delhi: Munshiram Manoharlal.
Hock, Hans Henrich 1989 Conjoined we stand: theoretical implications of Sanskrit relative structures. Studies in the Linguistic Sciences 19(1): 93−126.
Hook, Peter 1973 The Compound Verb in Hindi. Ann Arbor: University of Michigan Center for South and Southeast Asian Studies.
Hook, Peter 1979 Intermediate Hindi Structures. Ann Arbor: University of Michigan Center for South and Southeast Asian Studies.
Kachru, Yamuna 1980 Aspects of Hindi Grammar. Delhi: Manohar Publications.


Kachru, Yamuna 2006 Hindi. Philadelphia: John Benjamins Publishing Company.
Kachru, Yamuna, and Tej Bhatia 1978 Evidence for global constraints: The case of reflexivization. Studies in the Linguistic Sciences 5(1): 42−73.
Kidwai, Ayesha 2000 XP-Adjunction in Universal Grammar, Scrambling and Binding in Hindi-Urdu. Oxford: Oxford University Press.
Lahiri, Utpal 2002 Questions and Answers in Embedded Contexts. Oxford: Oxford University Press.
Legate, Julie 2008 Morphological and abstract case. Linguistic Inquiry 39(1): 55−101.
Mahajan, Anoop 1990 LF conditions on negative polarity licensing. Lingua 80: 333−348.
Mahajan, Anoop 2000 Relative asymmetries and Hindi correlatives. In: Artemis Alexiadou, Andre Meinunger, Chris Wilder and Paul Law (eds.), The Syntax of Relative Clauses, 201−230. Amsterdam: John Benjamins.
Manetta, Emily 2010 Wh expletives in Hindi-Urdu: the vP phase. Linguistic Inquiry 41: 1−34.
Masica, Colin P. 1991 The Indo-Aryan Languages. Cambridge: Cambridge University Press.
McGregor, R.S. 1995 Outline of Hindi Grammar. Oxford: Oxford University Press.
Mohanan, Tara 1994 Arguments in Hindi. Stanford: CSLI Publications.
Mohanan, Tara 1995 Wordhood and lexicality: Noun incorporation in Hindi. Natural Language and Linguistic Theory 13(1): 75−134.
Montaut, Annie 1991 Aspects, Voix et Diathèses en Hindi. Louvain: Peeters.
Montaut, Annie 2006 Hindi Grammar. Munich: LINCOM-Europa.
Narasimhan, Bhuvana 2001 Motion events and the lexicon: a case study of Hindi. Lingua 113: 123−160.
Nespital, Helmut 1997 Hindi Verb Dictionary. Allahabad: Lokbharati Press.
Nikolaeva, Irina (ed.) 2007 Finiteness. Oxford: Oxford University Press.
Payne, John 1995 Inflecting postpositions in Indic and Kashmiri. In: Frans Plank (ed.), Double Case, 283−300. Oxford: Oxford University Press.
Platts, John T. 1990 A Grammar of the Hindustani or Urdu Language. New Delhi: Munshiram Manoharlal Publishers.
Pořízka, Vincenc 1963 Hindština, Hindi Language Course. Prague: Státní pedagogické nakladatelství.
Ramchand, Gillian 2008 Verb Meaning and the Lexicon: A First-Phase Syntax. Cambridge: Cambridge University Press.
Rizzi, Luigi 1997 On the fine structure of the left periphery. In: Liliane Haegeman (ed.), Elements of Grammar, 281−337. Dordrecht: Kluwer Academic Publishers.


Schmidt, Ruth Laila 1999 Urdu: An Essential Grammar. London: Routledge.
Subbarao, K.V. 1984 NP Complementation in Hindi. Delhi: Academic Publications.
Subbarao, K.V. 2012 South Asian Languages: A Syntactic Typology. Cambridge: Cambridge University Press.
Vasishth, Shravan 2000 Word order, negation and negative polarity in Hindi. OSU Working Papers in Linguistics 53: 108−131.
Verma, Manindra K. 1967 The Structure of the Noun Phrase in English and Hindi. Delhi: Motilal Banarsidass.
Verma, Manindra K. (ed.) 1993 Complex Predicates in South Asian Languages. Delhi: Manohar.
Verma, Manindra K. 1990 The Notion of Subject in South Asian Languages. Madison: University of Wisconsin South Asian Studies Publication Series 2.

Alice Davison, Iowa City (USA)

43. Mandarin

1. Introduction
2. The nominal domain
3. The verbal domain
4. The sentence
5. Abbreviations
6. References (selected)

Abstract

This article provides an overview of the basic properties of the most common syntactic structures in the nominal, verbal and sentential domains in Mandarin. It takes into account different analyses that have been proposed for some of these. It especially highlights issues and properties that have been the subject of lively discussion in the linguistic literature.

1. Introduction

Mandarin belongs to the Sinitic language family, which in turn is part of the Sino-Tibetan language family, though there is some debate on the question of how close the relation between Sinitic and the other Sino-Tibetan languages (the Tibeto-Burman languages) is (Norman 1988; Handel 2008). Mandarin is spoken, as a first language, in a large area,


a broad band going from the North-East to the South-West of China. The language variation in this area, although it certainly exists, is smaller than one would expect on the basis of the huge area it is spoken in. Ramsey (1987) hypothesizes that this is partly due to the fact that very large Mandarin speaking areas (e.g., the three provinces in the North-East and Sichuan) were populated by Mandarin speakers only recently (in the last 200 years or so). This paper is about the northern variety of Mandarin. Other members of the Sinitic family include Cantonese, Min (Hokkien), Xiang, Gan and Wu, which are spoken in the South-East. They differ from Mandarin to a large extent in lexicon, phonology, morphology and syntax (Norman 1988; Chappell 2001). Mandarin (more specifically, northern Mandarin) is the official language of China. It is also the official language (or one of the official languages) in other places, such as Taiwan and Singapore (Chen 1999). In total, Mandarin is spoken by more than 1.3 billion people as their first or second language. Like all other Sinitic languages, Mandarin is a tone language. It has four tones: high level, high rising, low (or low dipping: it has a rise before pauses) and falling. Like phonemes, tone is distinctive; the tones are occasionally referred to as tonemes. Mandarin is often referred to as an analytic language. Sagart (2001: 123) puts it this way: “Modern Standard Chinese is a textbook example of an isolating language with little morphology, in which word order is the principal means through which grammatical meaning is expressed. Although a few suffixes are present (notably aspect markers and noun suffixes), these are often etymologically transparent and do not appear to be very ancient.” Although this is true to some extent, the picture this evokes of a language with a rigid word order without much morphology is not correct. 
First of all, as we will see below, due to such phenomena as topicalization, word order is actually quite free. The fact that we find pro-drop (the arguments of the sentence can remain covert) enhances the image of a language that relies on much more than just word order. Topicalization will be discussed in section 4 below; here is an example of pro-drop (see also section 3.1.3 below; see J. Huang 1984 for restrictions on object pro-drop). All non-English examples in this article are Mandarin examples. (1)

Q: Zhāng Sān kànjiàn-le Lǐ Sì ma?
   Zhang San see-PRF Li Si Q
   ‘Did Zhang San see Li Si?’

[Mandarin]

A: kànjiàn-le.
   see-PRF
   ‘(He) saw (him).’

Secondly, Mandarin actually has a lot of morphology. Although it does not have agreement and tense morphology, it has numerous suffixes, especially in the nominal and verbal domains (but Sagart is right that many of these are “etymologically transparent”). It also features a morphological process which is actually quite un-analytic by nature, viz., reduplication. We find reduplication in all domains, though mostly with verbs and adjectives, with different effects. While reduplication of adjectives leads to intensification, reduplication of verbs yields an effect of de-intensification. In verb reduplication, yi ‘one’ can be optionally inserted:

(2)

a. gāoxìng ‘happy’ → gāogāo xìngxìng ‘very happy’
b. cháng ‘taste’ → cháng (yi) cháng ‘taste a bit’

In this article, we discuss the syntax of Mandarin in three sections, concentrating separately on issues in the nominal domain, the verbal domain and the sentence.

2. The nominal domain

In this section we discuss several aspects of the nominal domain. We will first investigate what different forms the nominal phrase can take, after which we look at several of the constituting parts separately, including the classifier and the modifiers. In addition, we will look at the referential properties the different nominal phrases can have. Finally, we discuss issues related to mass and count.

2.1. Constituents and constituent order

A Mandarin noun phrase may contain the following elements: a demonstrative, a numeral, a classifier, modifiers and the head N itself. If a phrase contains all these elements, the base order is the following:

(3) Dem Nume Clf Modifier (de) N

Demonstratives always precede the numeral, the classifier and/or the N, the numeral always precedes the classifier and the classifier always precedes the N (except in certain cases, such as lists, as in: wǒ mǎi-le shū yī-běn [I buy-PRF book one-CLF]; see Tang 1996). Modifiers may (in some cases must) be followed by the element de (etymologically different from the two des used in the VP, see section 3), which we will discuss below (see also Tang 1990). The only deviation from this base order is that modifiers can also precede the demonstrative or the numeral:

(4) Modifier (de) (Dem) Nume Clf N

The effects of this variation in word order will be discussed briefly below. Aside from the complete form in (3), many other forms are possible. For a start, N may appear bare. It may also be preceded by just the classifier. In Mandarin, the distribution of such [Clf N] phrases is limited and there are good reasons to assume that a covert numeral is present. It is either covert for phonological reasons (it gets suppressed in fast speech), or for syntactic reasons (the numeral can be left empty in so-called governed positions) (see Cheng and Sybesma 1999). Whether there is always a numeral following the demonstrative is not clear. The demonstratives each come in two forms, as given in (5).

(5) a. zhè ‘this’      nà ‘that’
    b. zhèi ‘this’     nèi ‘that’

Of these, the forms in (5b) have presumably incorporated the numeral yī ‘one’. This fact does not seem to have any repercussions for the distribution within the bigger noun phrase, however. Both forms can be combined directly with the classifier, and both forms can also be followed by other numerals:

(6) a. zhè(i) běn shū
       this CLF book
       ‘this book’
    b. zhè(i) sān běn shū
       this three CLF book
       ‘these three books’

The only distributional difference (in those varieties that have both forms) is that the forms in (5b) cannot constitute a phrase by themselves:

(7) a. zhè/*zhèi shì shénme?
       this be what
       ‘What is this?’
    b. nà/*nèi bù hǎo.
       that not good
       ‘That is not good.’

To return to the demonstrative within the noun phrase, in spoken Mandarin the demonstrative is often attached to the noun directly (see for instance Wang 2005). We get phrases such as:

(8) zhè shū bù hǎo-kàn.
    this book not good-to.read
    ‘This book is not good.’

Significantly, the sentence in (8) has one more reading: ‘these books are not good’, with a plural interpretation of the subject. We return to this point in the following sub-section. Next, Tao (2006) discusses cases (previously discussed in Dù 1993 and Jìng 1995) with a numeral, but without a classifier ([9a] from Dù, [9b] adapted from Tao):

(9) a. mǎlù-shang lái-le yí tuōlājī
       road-top come-PRF one tractor
       ‘On the road came a tractor.’
    b. nèi-shí yǒu yí rén …
       that-time have one person
       ‘That time, there was someone …’

As Tao argues, the classifier may not be physically there, but its presence is reflected in the tone of the numeral. The citation tone of yī ‘one’ is high level. Before rising tones, it is realized as a falling tone, and before the level, falling and dipping tones, it becomes a rising tone (tone sandhi is not marked in the examples). The rising tone on yí in (9), unexpected if we consider the surface data only, can easily be explained if we assume, as Tao does, that yī ‘one’ is, or, in any case, originally was, followed by the classifier ge, which has a falling tone (despite its being so weak that we do not mark it in our transcriptions, as is general practice). Due to frequency effects, the sound of the classifier itself eroded, but the sandhi effects lasted.

Finally, N itself may also be elided. We find noun ellipsis licensed in two environments: following the modification marker de and following the classifier (as was observed by Arsenijevic and Sio 2007 for Cantonese; see also Shi and Li 2002):

(10) a. tā gāngcái chī-le yī-ge píngguǒ, nǐ yě yīnggāi chī yī-ge.
        3SG just.now eat-PRF one-CLF apple 2SG also ought eat one-CLF
        ‘He just ate an apple, you should also eat one.’
     b. tā bù xǐhuān nèi-běn shū, tā xǐhuān zhèi-běn.
        3SG NEG like that-CLF book 3SG like this-CLF
        ‘He does not like that book, he likes this one.’

(11) a. wǒ xǐhuān hóng-sè de xié, tā xǐhuān huáng-sè de.
        1SG like red-color MOD shoe 3SG like yellow-color MOD
        ‘I like red shoes, he likes yellow ones.’
     b. tā zuótiān mǎi-le yī-jiàn xīn de máoyī, wǒ mǎi-le yī-jiàn jiù de.
        3SG yesterday buy-PRF one-CLF new MOD sweater 1SG buy-PRF one-CLF old MOD
        ‘He bought a new sweater yesterday, I bought an old one.’

In sum, noun ellipsis cases and modifiers aside, the Mandarin noun phrase can have the following forms:

(12) a. Dem Nume Clf N
     b. Dem (∅one?) Clf N
     c. Dem N
     d. Nume Clf N
     e. ∅one Clf N
     f. N
     g. yí ∅ge N
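The inventory in (12) is small enough to state exhaustively. As a rough illustration (our own encoding, not part of the chapter's analysis), it can be checked as a set of category sequences; the labels and function name are ours:

```python
# The licit surface forms of the Mandarin noun phrase, per (12).
# Covert elements (the silent 'one' of 12b/e, the eroded ge of 12g)
# are simply absent from the surface sequence.
LICIT_NP_FORMS = {
    ("Dem", "Nume", "Clf", "N"),  # (12a)
    ("Dem", "Clf", "N"),          # (12b)
    ("Dem", "N"),                 # (12c)
    ("Nume", "Clf", "N"),         # (12d)
    ("Clf", "N"),                 # (12e)
    ("N",),                       # (12f)
    ("Nume", "N"),                # (12g)
}

def is_licit_np(categories):
    """Return True if the category sequence matches a form in (12)."""
    return tuple(categories) in LICIT_NP_FORMS
```

For instance, is_licit_np(("Dem", "Nume", "Clf", "N")) holds, while a classifier preceding its numeral does not match any form in (12).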

As noted, the only form that raises a question is the one in (12b): is there a covert numeral one between the demonstrative and the classifier? Whereas for (12e) and (12g) we have enough evidence to assume that we have a covert yī ‘one’ and an underlying ge respectively, we do not seem to have any evidence for any empty element in (12b). We return to this question shortly.
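Returning briefly to the tone sandhi facts behind (9) and (12g): the distribution of yī can be stated as a simple rule. The sketch below encodes the pattern exactly as formulated in the discussion of (9) above (tone numbers: 1 = high level, 2 = rising, 3 = dipping, 4 = falling); the function name is our own:

```python
def yi_sandhi(next_tone):
    """Surface tone of yī 'one' before a syllable of the given tone,
    following the description in the text: falling before rising tones;
    rising before level, falling and dipping tones."""
    if next_tone == 2:
        return 4          # yì before a rising tone
    if next_tone in (1, 3, 4):
        return 2          # yí before level, dipping and falling tones
    return 1              # citation tone (high level) otherwise

# The eroded classifier ge carries a falling tone (4), so an underlying
# [yī ge N] surfaces with rising yí, as in (9): yí (ge) tuōlājī.
```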

2.2. The classifier

In Mandarin, the classifier has a grammatical function as well as a more lexical aspect. To start with the former, disregarding our discussion on the different forms of the demonstrative, we can say that in Mandarin the distribution of the classifier is bound to the presence of a numeral (whether the numeral is overt or not): we only use the classifier when we count. Or, phrased from another perspective, when we do not count, we do not need the classifier.

(13) wǒ kàn-le liǎng (*běn) shū.
     1SG read-PRF two CLF book
     ‘I read two books.’

The classifier refers to the unit of counting. Despite that, classifiers are not the same as measure words, such as cupful or kilo, which exist in all languages. The difference is that whereas measure words create a unit of counting, classifiers simply name the unit that is already part of the semantic denotation of the noun (Croft 1994; Cheng and Sybesma 1998a). We briefly return to this in section 4.5 below. There is a relation between the classifier and number. We saw in (8) that an N not accompanied by a classifier is unspecified for number; (14) is another example of a bare N. Depending on the context, (14) can contain a reference to a single book, or a plurality of books:

(14) wǒ bǎ shū fàng-zài zhuōzi-shàng le.
     1SG BA book put-at table-top SFP
     ‘I put the book(s) on the table.’

When we add a classifier to (8), the number ambiguity disappears:

(15) zhè běn shū bù hǎo-kàn.
     this CLF book not good-to.read
     only: ‘This book is not good.’

Although this may be ascribed to a covert numeral one between the demonstrative and the classifier, we know from other varieties of Chinese that the classifier alone signals singularity, so there is no reason to assume an empty one. Mandarin numerals must be looked upon as multipliers: they denote multiplications of singularities (three times one book, instead of three books). The singular classifier has a counterpart for unspecified plural, xiē:

(16) zhè xiē shū bù hǎo-kàn.
     this CLF.PL book not good-to.read
     only: ‘These books are not good.’


The element xiē can only co-occur with demonstratives and the numeral yī (see Iljic 1994; see also Y.-H. Li 1998). With yī it is translatable as ‘some’ or ‘a few’:

(17) Zhāng Sān mǎi-le yī-xiē shū.
     Zhang San buy-PRF one-CLF.PL book
     ‘Zhang San bought a few books.’

Let us now turn to the lexical aspect of the classifier − the very reason why they are called “classifiers”. There are quite a number of different classifiers in Mandarin, each of which is used for a group of words which fall in the same category from one perspective or another. In Mandarin, the classification is primarily based on criteria formulated in reference to shape or function. As such, we have tiáo for long, thin, flexible things such as snakes and ropes; zhī for long, thin, stiff objects such as rifles and pens; zhāng for flat rectangular objects, such as cards and tables; lì for small round things such as seeds and marbles; kē for (slightly) bigger round things such as eggs, melons and stars; bǎ for things one can grab with one hand, such as chairs; liàng for vehicles; zhī (another one) for small animals; tóu for cattle and sheep; etc. (see for instance, Chao 1968; Y. Shi 1996; Zhang 2007). For abstract notions and ideas, as well as people, we use ge, which originally referred to a bamboo stake. The classifier ge is used with so many different categories of things that it is sometimes called a “general” classifier. Another reason for calling it that is that it is some kind of default. Yet another reason is that often, in conversations, the first mention of an object is accompanied by the “correct” classifier, while subsequent references to it contain ge (Erbaugh 2002; Loke 1994; H.-Y. Liu 2003; Myers 2000). The grammatical function of the classifier is quite robust in Chinese grammar. This is clear from what we observe in language acquisition and in language loss. As Erbaugh (2002) has shown, children acquire the use of the classifier very early. In the beginning, they only have one, ge. Even with words that do not go with ge in adult Mandarin, children use ge. The same is true for some aphasic patients (Ahrens 1994; Tzeng, Chen, and Hung 1991): they often fall back on ge when they lack access to the right classifier. In other words, these patients, like the children, make a lexical mistake rather than a grammatical one.
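The lexical pairings just listed, together with ge as the default, can be pictured as a simple lookup. This is a toy illustration under our own naming, based only on the pairings given above; the real system is far richer and partly conventionalized:

```python
# Toy sortal-classifier lexicon, from the pairings listed above.
SORTAL_CLASSIFIERS = {
    "shé": "tiáo",      # snake: long, thin, flexible
    "qiāng": "zhī",     # rifle: long, thin, stiff
    "zhuōzi": "zhāng",  # table: flat, rectangular
    "yǐzi": "bǎ",       # chair: graspable with one hand
    "chē": "liàng",     # vehicle
    "niú": "tóu",       # cattle
}

def classifier_for(noun):
    # Falling back on the general classifier ge mirrors the default
    # behaviour of young children and some aphasic speakers described
    # above: a lexical mistake rather than a grammatical one.
    return SORTAL_CLASSIFIERS.get(noun, "ge")
```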

2.3. Modification

Modifiers and nouns are separated from one another by the element de, regardless of what the nature of the modifier is. Here are some examples.

(18) a. dà (de) yú                          simple adjective
        big MOD fish
        ‘big fish’
     b. fēicháng dà de yú                   complex adjective
        extraordinarily big MOD fish
        ‘very big fish’
     c. Zhāng Sān de yīfu                   possessor
        Zhang San MOD clothes
        ‘Zhang San’s clothes’
     d. méi mǎi-guo shū de rén              r(elative) c(lause)
        NEG buy-EXP book MOD person
        ‘people who have never bought a book’
     e. tā chàng gē de shēngyīn             gapless RC
        3SG sing song MOD voice
        ‘the voice with which he sings’
     f. duì érzi de tàidu                   PP
        regarding son MOD attitude
        ‘the attitude towards his son’
     g. yǐqián de zǒngtǒng                  non-predicative modifier
        former MOD president
        ‘the former president’

In all these cases, except (18a) which combines two heads, de is obligatory. Even in (18a), de is not truly optional in the sense that omission or insertion correlates with a change in meaning. Consider the following minimal pairs:

(19) a. zāng shuǐ         dà yú          cōngmíng rén
        dirty water       big fish       intelligent person
        ‘dirty water’     ‘big fish’     ‘intelligent person’
     b. zāng de shuǐ      dà de yú       cōngmíng de rén
        dirty MOD water   big MOD fish   intelligent MOD person
        ‘dirty water’     ‘big fish’     ‘intelligent person’

In Paul’s (2005) terms, the adjectives in (19a) (without de) describe a “defining property” whereas the ones in (19b) describe an “accessory property”. Thus, with dà yú ‘big fish’, for instance, we have a fish that is naturally big, like a shark, whereas with dà de yú ‘big MOD fish’ we describe a fish that happens to be big − like a big herring. This only applies to the combination of two heads. Certain modifiers, in particular possessors and relative clauses, can appear without de in front of the demonstrative without the effect of changing from “accessory” to “defining”. With respect to the placement of relative clauses, Chao (1968) claims that relative clauses preceding the demonstrative are restrictive, while the ones between the demonstrative and the noun are non-restrictive. There has been a lot of discussion of this claim, addressing not only the question whether it is correct, but also the question whether Mandarin has non-restrictive relative clauses at all (see J. Huang 1982b; Tsai 1994a; Del Gobbo 2001). Lin (2004) shows convincingly that non-restrictive relatives do exist (if the head is a pronoun or a proper name; see also Del Gobbo 2010 for detailed discussion) and that relative clauses in both positions can be both restrictive and non-restrictive. The difference between the pre-demonstrative and the post-demonstrative relatives seems to be merely a matter of contrastiveness: the former is contrastive, while the latter is not (Lin 2004; Cheng 1998; Sio 2006).


A final question we need to address is what the status of the modification marker, de, is (see Paris 1979). It has been analysed as a C0 (Cheng 1986a), as a D0 (Simpson 2003, à la Kayne 1994), as the head of a Modification Phrase (Rubin 2003), as a marker of predicate inversion having taken place (Den Dikken and Singhapreecha 2004), as a marker of the division of the NP in two syntactico-semantic domains (Paul 2005), as a semantic type-lowerer (S. Huang 2006) − to mention just some of the more influential or relatively recent proposals. Very recently, Arsenijevic and Sio (2007) and Cheng and Sybesma (2009) have argued that de (and its Cantonese counterpart ge3), is a type of classifier. The idea behind this is that, just like when you count, you need the unit for counting (Sybesma 2009), in modification you also need to specify the unit which is the object of modification.

2.4. Referential properties

Noun phrases can have different referential properties. They can be definite, indefinite (in two different ways: specific and non-specific), generic, kind and non-referential. How does a language like Mandarin, which lacks definite and indefinite articles, express these notions? In this section we match these referential options with the different forms which noun phrases in Mandarin can take, which were listed in (12). The list in (12) can be summarized as in (20). Judging superficially, we have three different forms: (a) with the demonstrative; (b) without the demonstrative but with numeral and classifier; and (c) bare Ns.

(20) a. Dem (∅one?/Nume Clf) N (= 12a, b, c)
     b. ∅one/Nume Clf N (= 12d, e; this one includes 12g)
     c. N (= 12f)

Let’s start with definiteness. Both the forms in (20a) and those in (20c) are used to refer to entities which have been introduced into the conversational space and are known to both hearer and speaker. With respect to (20a), the distal demonstrative is used more generally than the proximal. It is occasionally claimed that the distal demonstrative, nà or nèi, is developing into a definite article. There are reasons, however, to think that the bare N is the purer definite form of the two and that the (20a) forms will always carry some of the contrastive aspects inherent to demonstratives; it seems that phrases with a demonstrative are just very specific indefinites (Sybesma and Sio 2008). These reasons have to do with the choice between the two forms in two different contexts. One is in the reference to unique objects, such as the sun and the queen, where the use of the bare N is much more natural than the use of a demonstrative (to some native speakers consulted, the latter form is not acceptable). The second contrast has to do with the translation of the second line in the following scenario:

(21) a. A man and a woman came into a bar.
     b. The man was already drunk.


Although both (22a) and (22b) are acceptable as renderings of (21b), the former is considered the better option by the majority of speakers consulted. There is, by the way, also a group of speakers who prefer (22c).

(22) a. nánde yǐjīng hē-zuì-le.
        man already drink-drunk-PRF
        ‘The man was already drunk.’
     b. nèi-ge nánde yǐjīng hē-zuì-le.
        DEM-CLF man already drink-drunk-PRF
        ‘That man was already drunk.’
     c. nèi nánde yǐjīng hē-zuì-le.
        DEM man already drink-drunk-PRF
        ‘That man was already drunk.’

As to kind-referring noun phrases, Mandarin uses bare N only. (23) is one example:

(23) shīzi hěn kuài jiù huì juézhǒng.
     lion very quick then will be.extinct
     ‘Lions will be extinct very soon.’

Turning to indefinites, we observe that the forms in (20b), with a (covert or overt) numeral, are always indefinite (see M. Wu 2006 for special cases where numeral phrases are purported to have a definite reading). Such indefinite noun phrases can also have a generic interpretation if they occur in environments which facilitate the presence of a generic operator (e.g., characterizing sentences; see Krifka et al. 1995), as in (24) (see also Cheng and Sybesma 2012):

(24) yī wèi hǎo lǎoshī bù jǐnjǐn jiāo xuéshēng zěnme xuéxí.
     one CLF good teacher not just teach student how study
     ‘A good teacher doesn’t just teach students how to study.’

The bare N in (20c) sometimes also expresses indefiniteness. All these forms can be non-specific indefinite and only the forms with an overt numeral can be specific indefinite. This is clear from the occurrence of indefinites following bǎ in the bǎ-construction (see section 3 below). Nouns in that position are “strong”, that is, definite or specific indefinite (see Sybesma 1992 and references cited there; see also Yang 2007, who challenged the claim in Sybesma 1992) and nominal phrases without an overt numeral are blocked from this position:

(25) *qǐng bǎ zhī bǐ jiè gěi wǒ yòng, hǎo ma?
      please BA CLF pen loan to 1SG use good Q
      ‘Please loan me a pen (any pen) for a moment, okay?’

It has been claimed for yí ∅CLF and yī ge, as well as for ∅one ge, that they are developing into indefinite articles (e.g. Chen 2003; Tao 2006).


Finally, only bare N can have a non-referential interpretation. Nouns are non-referential if they do not set up a referential frame (De Swart and Zwarts 2009). An example in English is: ‘John plays the piano. #It is a very old one.’ This sentence is anomalous because the piano in the first sentence is non-referential in the sense defined. In Mandarin bare N is often non-referential. We will see examples when we discuss the so-called dummy objects in section 3. Here is another example:

(26) Zhāng Sān mǎi shū qù le. #dōu hěn hòu.
     Zhang San buy book go PRF all very thick
     ‘Zhang San went to buy books. They’re all very thick.’

Matching form and referential properties, we see that forms with a demonstrative are always definite (or, more precisely, very specific; see the discussion above), forms without the demonstrative but with a numeral and classifier are all indefinite, and bare Ns can be definite, kind-referring, indefinite and non-referential. The question is what the underlying structure of all these different forms is. Works such as Cheng and Sybesma (1999, 2009), Sio (2006) and Sybesma and Sio (2008) have led to the idea that the Chinese noun phrase involves at least the following three functional layers on top of the lexical NP. The lowest of these three is the layer which marks the phrase as definite. Comparison with other varieties of Chinese led Cheng and Sybesma to associate this layer with the projection headed by the classifier, ClP. On top of this layer, we find the numeral phrase, NumeP, which undoes the definiteness and marks the phrase as indefinite. The third layer is associated with the demonstrative; it is called “specificity phrase” in Sio (2006): it marks the phrase as specific indefinite. The idea is that it is the underlying structure which determines the referential properties of the phrase. Thus, a superficially bare N which is definite is a ClP, and a superficially bare N which is indefinite is a NumeP.
A specific indefinite NumeP would also involve the SpecP. For an overview of recent discussion, see Cheng and Sybesma (2012).
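The form-to-reading pairings of this section can be tabulated. The encoding below is our own summary shorthand for the forms in (20), not part of the chapter's analysis:

```python
# Possible referential readings per surface form, per section 2.4.
READINGS = {
    "Dem (Nume) Clf N": {"definite/very specific"},                   # (20a)
    "Nume Clf N": {"specific indefinite", "non-specific indefinite",
                   "generic"},                                        # (20b)
    "bare N": {"definite", "kind", "non-specific indefinite",
               "non-referential"},                                    # (20c)
}

# On the layered analysis, the reading tracks the underlying category:
# a definite bare N is a ClP, an indefinite bare N a NumeP, and a
# specific indefinite additionally involves the SpecP layer.
def can_be(form, reading):
    return reading in READINGS.get(form, set())
```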

2.5. Matters of count and mass

The fact that one cannot count in Mandarin without the intervention of a classifier has inspired many to claim that Chinese nouns are all mass nouns and that the classifier turns them into count nouns. This claim is questionable for two reasons. First, Cheng and Sybesma (1998a) show that measure words, which (as we mentioned above) create a unit of measurement that is not naturally there, differ in several syntactic respects from classifiers, which simply name the unit of counting that is part of the semantic denotation of the noun. One such respect is the fact that, unlike (most) classifiers, measure words can be modified by adjectives such as dà ‘big’ and xiǎo ‘small’ (see [27]) (see X. Li 2009, who challenged this claim); the other is that measure words can be followed by the modification marker de, whereas this is not possible for (most) classifiers (as illustrated in [28]) (see Hsieh 2008 for the conditions under which de can follow a sortal classifier: either the quantity is approximate, or there is emphasis or contrastive focus on the classifier).


(27) a. yī dà hé qiǎokēlī
        one big box chocolate
        ‘one big boxful of chocolates’
     b. *yī dà ge nánzǐhàn
        one big CLF bloke
        Intended: ‘one big bloke’

(28) a. yī hé de qiǎokēlī
        one box MOD chocolate
        ‘one boxful of chocolates’
     b. *yī ge de nánzǐhàn
        one CLF MOD bloke
        Intended: ‘one bloke’

Cheng and Sybesma (1998a) conclude that the difference between mass and count nouns exists in Chinese, and that it is reflected at the level of the classifier. Secondly, Cheng, Doetjes, and Sybesma (2008) observe that bare nouns in Mandarin do not get the mass interpretation which is expected under the assumption that all nouns in Mandarin are mass and are only turned into count by the classifier. Whereas in English, bare nouns can easily shift to a mass reading, as in (29a) and (30a), we rarely get such a reading in Chinese.

(29) after the explosion,
     a. there was dog all over the wall.
     b. qiáng-shàng quán shì gǒu-*(ròu)
        wall-top all be dog-flesh
        ‘there was dog all over the wall.’

(30) a. He likes to eat elephant.
     b. tā xǐhuān chī dàxiàng-*(ròu).
        3SG like eat elephant-flesh
        ‘He likes to eat elephant.’

In both (29b) and (30b) we see that on its own the bare noun cannot be interpreted as a mass; we need to add the noun ròu ‘flesh, meat’ to get the intended meaning. Note that we use the lexical item dà-xiàng ‘elephant’ in (30) because elephant meat is not a typical or common culinary item. Some lexical items such as jī ‘chicken’ are more easily switched to a mass reading; cf. Chierchia (2010). Compare sān-pán jī ‘three-dish chicken’ with ?*sān-pán niú ‘three-dish cow’. The latter is not felicitous. See Cheng, Doetjes, and Sybesma (2008) for a discussion of the trigger of coerced readings. We conclude that Chinese nouns do not all start out as mass nouns.

3. The verbal domain

In this section, we discuss issues related to the verbal domain. We do so in three subsections: 3.1 is loosely organized around the question how the (Vendlerian) verb classes are instantiated in Mandarin (Aktionsart); 3.2 deals with a number of intransitive structures, and 3.3 is about tense and (viewpoint) aspect.

3.1. States, accomplishments and activities

3.1.1. The expression of states and related issues: Adj, Prep and nominal predicates

This section introduces the characteristics of how the different verb classes in Mandarin are instantiated, basically an overview from an Aktionsart point of view. Though the focus will be on accomplishments and activities, we start off with a brief discussion of issues related to the expression of states. The first thing to mention is that according to some researchers, the category of “adjective” does not exist in Mandarin (McCawley 1992). There is a group of predicative elements, generally called “stative verbs” in the literature in English, which by and large cover the semantic domain covered by adjectives in most Indo-European languages; the Chinese term is xíngróngcí ‘descriptive word’. These stative verbs differ from Indo-European adjectives in not needing a copular verb when they function as the main predicate of a sentence. To express a stative situation, they do, however, need something else, viz., a degree marker.

(31) a. Zhāng Sān gāo/cōngmíng.
        Zhang San tall/intelligent
        ‘Zhang San is taller/more intelligent (than someone else in the context).’
        not: ‘Zhang San is tall.’
     b. Zhāng Sān hěn gāo/cōngmíng.
        Zhang San very tall/intelligent
        ‘Zhang San is tall/intelligent.’

As these examples show, when used bare, Mandarin adjectives seem to have a comparative interpretation; the sentence in (31a) is only felicitous in a context in which, for instance, it was just asked which of two children was taller or more intelligent. To make a stative, that is, non-comparative, claim, we need an additional element, viz., a degree marker. In (31b) this is the non-emphatic hěn ‘very’, but other degree modifiers can also be used. It is unclear why stative verbs need this additional element when used predicatively. S. Huang (2006) proposes that these elements function as type-lifters: by lifting the semantic type of the stative verbs, they enable them to function as predicates. Grano (2012), approaching the problem from how Mandarin expresses comparative as well as positive semantics, argues that hěn ‘very’ is inserted to satisfy a requirement connecting T(ense) and the verb. For one other proposal, see C. Liu (2010). Aside from these “stative verbs”, there are other verbs expressing states, such as xǐhuān ‘like’, zhīdao ‘know (information)’ and rènshi ‘know (people)’, only some of which (an example is xǐhuān ‘like’) can be modified by degree modifiers.


Nouns functioning as predicates are generally accompanied by the copula shì. Nominal predicates can be bare Ns, but can also be preceded by (yí)-ge ‘one-CLF’, meaning ‘one, a’ (with slight meaning differences between nominal predicates with and those without):

(32) Zhāng Sān shì ((yí)-ge) tiāncái.
     Zhang San be one-CLF genius
     ‘Zhang San is a genius.’

The category Prep(osition) is also not unproblematic. First, it is not clear how many members the category has, if it exists at all. Although there are a small number of elements that only function prepositionally, most counterparts of prepositions in Indo-European languages can probably be considered as verbs that can function as the main or as a subordinate predicate in a sentence. Here are some examples, with zài ‘be.at/at’, gěi ‘give/to, for’ and yòng ‘use/with’:

(33) a. tā bù zài jiā.
        3SG not be.at home
        ‘She is not at home.’
     a′. wǒmen píngcháng zài jiā chī fàn.
        1PL normally be.at home eat rice
        ‘We normally eat at home.’
     b. tā méi yǒu gěi wǒ nàme duō.
        3SG not have give 1SG so much
        ‘He did not give me that much.’
     b′. tā gěi nǐ mǎi-le sān-běn shū.
        3SG give 2SG buy-PRF three-CLF book
        ‘She bought you three books.’
     c. wǒ bù xiǎng yòng tā-de yǔfǎ-shū.
        1SG not want use 3SG-MOD grammar-book
        ‘I don’t want to use his grammar.’
     c′. wǒmen píngcháng yòng kuàizi chī fàn.
        1PL normally use chopstick eat rice
        ‘We normally eat with chopsticks.’

In (33), we see the same element as a main predicate (33a, b, c) and in a subordinate function (the primed counterparts). How the primed sentences should be analysed is controversial. Some see these examples as evidence for the idea that Mandarin features serial verb constructions (Li and Thompson 1981), but others, such as Paul (2008), have reasons to doubt that there is any substance to this idea at all.

3.1.2. Accomplishments

Mandarin is the ideal language to illustrate the idea that accomplishments feature multiple layers. Accomplishments are often regarded as consisting of three layers: an initiation layer (represented by little vP), a process layer, represented by VP, and the layer expressing the resulting state (a small clause in our analysis below) (Chomsky 1995; Hoekstra 1988; Sybesma 1992). Whereas in an English accomplishment verb like kill these layers remain obscure, in the following Mandarin sentence we can actually see all three layers.

(34) Zhāng Sān bǎ Lǐ Sì shā sǐ le.
     Zhang San BA Li Si KILL dead PRF
     ‘Zhang San killed Li Si.’

This sentence can be analysed as involving the following underlying structure (ignoring the aspectual particle le for now; see section 3.3 below):

(35) [vP Zhāng Sān [v⁰ bǎ [VP -- [V⁰ shā [SC Lǐ Sì sǐ ]]]]]
                              KILL          dead

where bǎ instantiates little v, assigning an initiator role to Zhāng Sān, shā the V, expressing the process of going through the motions of killing (represented as “KILL”, in small caps), and the small clause Lǐ Sì sǐ ‘Li Si dead’ describing the state that the motions of killing result in. The subject of the small clause (Lǐ Sì) surfaces as the object of the sentence as a whole. The bare structure of the verbal complex of an accomplishment sentence (ignoring le) is given in (36), filled in with the lexical items of (35), repeated in (36a). The further derivation involves movement of Zhāng Sān, the subject of little v, to the matrix subject position and movement of the small clause subject Lǐ Sì into SpecVP, in both cases for reasons of Case (i.e., licensing) (see also Huang, Li, and Li 2009 and references therein). We also insert the element bǎ in the head of vP. The relevant part of the structure after these operations have taken place is given in (36b).

(36) [vP (Spec) Initiator/subj [v⁰ v [VP (Spec) -- [V⁰ V [SC SC-subj R(esult pred) ]]]]]
     a. [vP Zhāng Sān [v⁰ [VP -- [V⁰ shā [SC Lǐ Sì sǐ ]]]]]
                               KILL           dead
     b. [vP [v⁰ bǎ [VP Lǐ Sì [V⁰ shā [SC sǐ ]]]]]
                              KILL     dead

There is one more step, not represented above, which forms a complex head consisting of V, R(esult) and le. Although all layers can be overtly realized, they do not always have to be; (37) expresses the same meaning as the sentence in (34):

(37) Zhāng Sān shā-le Lǐ Sì.
     Zhang San KILL-PRF Li Si
     ‘Zhang San killed Li Si.’

There are good reasons, however, to assume that the underlying structure of (37) is the same as that of (34), namely (36). First of all, all sentences with bǎ have a counterpart without it, with virtually the same meaning. If there is any meaning difference at all, it has to do with the “information


structure”. As discussed in section 4, it is generally assumed that, in Mandarin, the new information focus is on the postverbal material in the sentence (Li, Thompson, and Zhāng 1998). As a result, old information tends to be moved towards the left. The difference between a bǎ-sentence and its non-bǎ-counterpart is that in the former the object precedes the verb while it follows it in the latter; the noun following bǎ has been called a secondary topic (e.g., Tsao 1987). The non-bǎ-counterpart of (34) is (38). The derivational difference with (34) lies in the movement of the complex V-R-le (shā-sǐ-le ‘KILL-dead-le’) into the head of vP, instead of insertion of bǎ into that position; see (39) (once more disregarding le).

(38) Zhāng Sān shā sǐ-le Lǐ Sì.
     Zhang San KILL dead-PRF Li Si
     ‘Zhang San killed Li Si.’

(39) [vP (Spec) Initiator/subj [v⁰ v [VP (Spec) -- [V⁰ V [SC SC-subj R ]]]]]
     a. [vP Zhāng Sān [v⁰ [VP -- [V⁰ shā [SC Lǐ Sì sǐ ]]]]]
                               KILL           dead
     b. [vP [v⁰ shā-sǐ [VP Lǐ Sì [V⁰ [SC ]]]]]
            KILL-dead

In other words, although the little v layer is not realized by a separate element, both the word order and the meaning of the VP tell us that it is there. This applies to (38), as well as to (37). The sentence in (37) seems to lack the result layer as well. Here too, there are reasons to assume that this is only apparent. First of all, if the analysis of the sentences above is more or less on the right track, especially with respect to the treatment of the object, which is really the subject of the resultative small clause, the fact that the object in (37) behaves in all relevant respects exactly the same as that in (34) suggests that (34) and (37) are structurally the same, which means that the sentence in (37) must involve a phonologically empty counterpart of sǐ ‘dead’ in (34). The sentence in (38) already shows that we can fill this element in, without any repercussions for meaning or word order. For the sake of completeness, the structure and derivation of (37) is given in (40) (disregarding le).

(40) [vP (Spec) Initiator/subj [v⁰ v [VP (Spec) -- [V⁰ V [SC SC-subj R ]]]]]
     a. [vP Zhāng Sān [v⁰ [VP -- [V⁰ shā [SC Lǐ Sì e ]]]]]
     b. [vP [v⁰ shā-e [VP Lǐ Sì [V⁰ [SC ]]]]]

(41a−c) are some more minimal pairs, showing the same pattern: R can be left empty; empty R always has an overt counterpart.

(41) a. tā mài (diào)-le wǒ-de zìxíngchē.
        3SG sell off-PRF 1SG-MOD bicycle
        ‘He sold my bicycle.’

     b. nǐ wàng (diào)-le tā-de míngzi ma?
        2SG forget off-PRF 3SG-MOD name Q
        ‘Did you forget his name?’
     c. wǒ kàn (wán)-le zhè-běn shū.
        1SG read done-PRF this-CLF book
        ‘I read/finished this book.’

Elements such as diào ‘off’ and wán ‘done, finished’ in (41) belong to a class of elements which Chao (1968) calls “phase complements”; we return to them below. In conclusion, although the sentence in (37) does not have overt instantiations for all the layers, like (34) does, we see that it is easy to recognize that they actually do exist in the underlying structure of the sentence. Facts and insights like these, and especially the realization that in accomplishments the R is always there, have led to the claim that Mandarin has no inherent accomplishment verbs (Tai and Chou 1975; Tai 1984). This can be illustrated by the famous sentence (taken from Tai 1984):

(42) Zhāng Sān shā-le tā sān cì, kěshì tā méi sǐ.
     Zhang San KILL-PRF 3SG three time but 3SG not.have die
     ‘Zhang San went through the motions of killing him three times, but he did not die.’

Aside from little v, all accomplishments clearly consist of an a-telic activity verb and another constituent, the small clause in our analysis above, to provide the telicity.
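The two derivations from the shared structure in (36) − inserting bǎ into v, or raising the V-R complex into v − can be summarized schematically. The sketch below is our own illustrative encoding of that analysis, not the authors' formalism; le is ignored, as above:

```python
def spell_out(subject, verb, sc_subject, result, use_ba):
    """Linearize [vP subj [v0 [VP -- [V0 V [SC sc-subj R ]]]]].

    If v is filled by ba, the SC subject raises to SpecVP and the verb
    and result stay low: S ba O V R, as in (34). Otherwise the complex
    head V-R raises into v: S V-R O, as in (38).
    """
    if use_ba:
        return [subject, "bǎ", sc_subject, verb, result]
    return [subject, f"{verb}-{result}", sc_subject]
```

spell_out("Zhāng Sān", "shā", "Lǐ Sì", "sǐ", use_ba=True) yields the order of (34), and with use_ba=False that of (38).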

3.1.3. Activities

The verb shā ‘KILL’ (not ‘kill’ − see above) is typical of the activity verbs in Mandarin in several respects. First of all, it is a-telic and can easily be connected with a small clause to make it telic; this is, in fact, true for activity verbs in all languages. What is characteristic of Mandarin activity verbs, however, is that (with one or two exceptions; see below) they must always have a complement: either the result denoting small clause or an object, be it a contentful object or what we may call a “dummy object” (Cheng and Sybesma 1998b). Let us illustrate this with reference to the verb chī ‘eat’:

(43) a. wǒmen chī-qióng-le Zhāng Sān.
        1PL eat-poor-PRF Zhang San
        ‘We ate Zhang San poor.’

     b. wǒmen hěn xǐhuān chī píngguǒ.
        1PL very like eat apple
        ‘We like to eat apples very much.’

     c. wǒmen zhèng-zài chī fàn ne.
        1PL just-be.at eat rice SFP
        ‘We are eating.’


In (43a), we see the verb chī ‘eat’ complemented by a result denoting small clause; ‘eat’ is an activity, a-telic, and the small clause provides the end point: we ate and the result of our eating was that Zhang San had no money left. But chī ‘eat’ can of course also be complemented by a regular object, like píngguǒ ‘apple(s)’ in (43b) or fàn ‘rice’ in (43c). The latter is what we call a “dummy object”: it does not have any referential value; it is there simply because the complement slot of the verb has to be filled. That the dummy object has no referential value is reflected in the translation. When you say you are eating in the sense of having a meal, you use the expression chī fàn ‘eat rice’, regardless of what you are actually eating (it could be a meat pie or a pizza). Where languages such as English can leave the object slot unfilled (eat in the sense of ‘eat something or other’), Mandarin has to use a dummy object. This is why we said that in Mandarin, activity verbs must always have a complement. The complement status of the resultative small clause, the contentful object and the dummy object is confirmed by the fact that they cannot co-occur.

The use of the dummy object may be related to the existence of pro-drop in Mandarin (see section 4 below): subjects and objects can be left out whenever it is clear from the context who or what is meant. Thus, the following sentence is a proper answer to a question like: What happened to the chicken?

(44) māo chī-le.
     cat eat-PRF
     ‘The cat ate it.’

We can analyse such sentences as involving an empty object which refers back to the ‘chicken’ (which may or may not be present as an empty topic in [44]; J. Huang 1984). What we see, then, is that languages can have empty objects: in English the empty object is non-referential (when do you think we should eat?, where eat means ‘eat something or other, have a meal’) and in Mandarin the empty object is referential in that it refers back to a salient object in the linguistic context, as we saw in (44). It seems reasonable to hypothesize that a language cannot have both referential and non-referential empty objects. Since Mandarin has referential empty objects, it needs dummy objects for the non-referential cases (Cheng and Sybesma 1998b).

Chī fàn ‘eat’ is not the only case. Going one step further, in fact, with the exception of xiào ‘laugh’ and kū ‘cry’, all Mandarin unergatives are transitive verbs. This is true not only for the counterparts of verbs that are transitive in languages like English (e.g. eat), but also for the ones that are not, such as walk and run. Here are a few examples. Notice that they can be divided into two groups: the ones with a dummy object and the ones with a dummy verb (for the latter, see Hale and Keyser 1993).

(45) a. zǒu-lù hé pǎo-bù
        walk-road and run-step
        ‘walk and run’

     b. dǎ-pēnti hé zuò-mèng
        hit-sneeze and make-dream
        ‘sneeze and dream’
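The complementation pattern just described is mechanical enough to be stated as a rule of thumb. The following sketch is our own toy restatement of it (not the authors' formalism): a referential object fills the complement slot; failing that, the verb's conventionalized dummy object is inserted, with xiào ‘laugh’ and kū ‘cry’ as the lexical exceptions. The function name, the toneless-pinyin forms and the small lexicon are illustrative assumptions only.

```python
# Toy model of the "dummy object" generalization for Mandarin activity
# verbs: the complement slot must be filled, by a referential object if
# there is one, otherwise by the verb's conventional dummy object.
# Verb-dummy pairings are the ones cited in the text (chī fàn, zǒu lù,
# pǎo bù); pinyin is written without tone marks for simplicity.

DUMMY_OBJECTS = {
    "chi": "fan",   # chī fàn 'eat (rice)', i.e. 'have a meal'
    "zou": "lu",    # zǒu lù  'walk (road)'
    "pao": "bu",    # pǎo bù  'run (step)'
}

NO_COMPLEMENT_OK = {"xiao", "ku"}  # xiào 'laugh', kū 'cry': the exceptions

def activity_vp(verb, obj=None):
    """Return the surface V(+O) string for an activity verb."""
    if obj is not None:           # a referential object fills the slot
        return f"{verb} {obj}"
    if verb in NO_COMPLEMENT_OK:  # the two verbs that may stay bare
        return verb
    if verb in DUMMY_OBJECTS:     # otherwise plug in the dummy object
        return f"{verb} {DUMMY_OBJECTS[verb]}"
    raise ValueError(f"no dummy object listed for {verb!r}")

print(activity_vp("chi", "pingguo"))  # chi pingguo  'eat apples'
print(activity_vp("chi"))             # chi fan      'eat (have a meal)'
print(activity_vp("ku"))              # ku           'cry'
```

The point of the sketch is simply that the choice between bare verb, dummy object and referential object is fully predictable from the lexicon, which is what the text's generalization amounts to.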


3.2. Intransitive structures

In this section we will briefly look at constructions with a single, internal argument. First of all, there are simple unaccusative verbs, like lái ‘come’ and chén ‘sink’. Unaccusatives have the characteristic that the only argument can occur in postverbal position when it is indefinite.

(46) a. wǒmen jiā lái-le bù shǎo rén, Zhāng Sān yě lái-le.
        1PL home come-PRF not few people Zhang San also come-PRF
        ‘A lot of people have come to our house; Zhang San has also come.’

     b. fāshēng-le yí-jiàn dà chē-huò, sǐ-le hěn duō rén.
        happen-PRF one-CLF big car-accident die-PRF very many people
        ‘A big car accident happened, many people died.’

Aside from simple unaccusatives, there are phrases consisting of two verbal elements, a V and an R. They are best analysed as involving the structure we discussed in the context of accomplishments: a V with a resultative small clause complement (Sybesma 1992). The nominal argument in such sentences is underlyingly the subject of the resultative small clause.

(47) a. zhèi-kè shù zhǎng-xié-le.
        this-CLF tree grow-inclined-PRF
        ‘This tree has grown tilted.’

     b. Zhāng Sān zhuàng-sǐ-le.
        Zhang San crash-die-PRF
        ‘Zhang San crashed to death.’

     c. fàn shāo-hú-le.
        rice cook-burnt-PRF
        ‘The rice got burnt.’

There are good reasons to think that the lower layer in accomplishments (V plus resultative small clause) is always unaccusative in that it has no external argument, the external argument in accomplishments being related to the “little v”-layer. Most of the V-R phrases in (47) can be preceded by the element gěi, which is actually the verb meaning ‘give’ (see [33b, b′] above):

(48) a. Zhāng Sān gěi zhuàng-sǐ-le.
        Zhang San give crash-die-PRF
        ‘Zhang San got crashed to death.’

     b. fàn gěi shāo-hú-le.
        rice give cook-burnt-PRF
        ‘The rice was burnt.’

In sentences such as those in (48), gěi functions as an element that introduces an external force. The difference between (47) and (48) is that the former is really unaccusative in that there is no external argument or external force, while in the latter an external force has been added, although it is not itself overtly realized. Concretely, in (47c) the rice just got burnt, it just happened. In (48b), on the other hand, someone did it (Shěn and Sybesma 2010; see also Cheng, Huang, Li, and Tang 1997).

The external force can be made explicit, in a way comparable to the addition of a by-phrase in English passives, by adding a bèi-phrase, although it must be stressed that structurally, English passives and Mandarin passives are quite different.

(49) a. Zhāng Sān bèi Lǐ Sì (gěi) zhuàng-sǐ-le.
        Zhang San BEI Lǐ Sì give crash-die-PRF
        ‘Zhang San was crashed to death by Li Si.’

     b. fàn bèi Lǐ Sì (gěi) shāo-hú-le.
        rice BEI Lǐ Sì give cook-burnt-PRF
        ‘The rice was burnt by Li Si.’

Sentences with bèi are generally referred to as passives. The element gěi ‘give’ is optional; see references mentioned above and Huang, Li, and Li (2009).
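The surface patterns in (47)−(49) add up to a small linearization template: the theme, optionally bèi plus an agent, optionally gěi, then the V-R complex. The sketch below is merely our own string-building illustration of that template (toneless pinyin; the function and its parameters are our labels, and nothing here is an analysis of the underlying syntax, which the text stresses is different from English passives).

```python
# Illustrative template for the unaccusative/passive patterns in (47)-(49):
#   Theme (bei Agent) (gei) V-R-le
# bèi introduces an explicit external force; gěi signals that an external
# force is present without naming it.

def passive_pattern(theme, verb_result, agent=None, gei=False):
    """Linearize the (47)-(49) surface pattern (toneless pinyin)."""
    parts = [theme]
    if agent is not None:
        parts += ["bei", agent]   # explicit external force, cf. (49)
    if gei:
        parts.append("gei")       # implicit external force, cf. (48)
    parts.append(verb_result)
    return " ".join(parts)

print(passive_pattern("fan", "shao-hu-le"))                 # cf. (47c)
print(passive_pattern("fan", "shao-hu-le", gei=True))       # cf. (48b)
print(passive_pattern("fan", "shao-hu-le", agent="Li Si"))  # cf. (49b), no gei
```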

3.3. Tense and aspect

3.3.1. Tense

Mandarin has no overt morphological reflex of tense. One often comes across statements to the effect that the temporal interpretation of a Mandarin sentence is determined by adverbial phrases and/or context (see references in Sybesma 2007). For instance, the sentences in (50) illustrate the use of an adverbial to manipulate tense:

(50) a. wǒmen zhù zài Táizhōng.
        1PL live at Taichung
        ‘We live in Taichung.’

     b. wǒmen nèi-shíhou zhù zài Táizhōng.
        1PL that-time live at Taichung
        ‘In those days we lived in Taichung.’

Whereas the sentence in (50a) is interpreted as a present tense sentence, the one in (50b) is past tense, presumably due to the adverbial nèi-shíhou ‘in those days’. Similarly, the sentence in (51b) is interpreted as reporting on a past event due to the context given in (51a).

(51) a. wǒmen zuótian qù Lúndūn mǎi shū …
        1PL yesterday go London buy book
        ‘we went book shopping in London yesterday …’

     b. zhèng yào jìn shūdiàn de shíhou, pèng-dào Zhāng Sān!
        just want enter bookstore MOD time run-into Zhang San
        ‘and when we were about to enter the bookstore, we ran into Zhang San!’

However, three factors complicate this picture: (i) the fact that a VP’s Aktionsart is a factor in determining its temporal interpretation; (ii) the fact that the use of adverbials and context to influence the temporal interpretation of a sentence is limited; and, finally, (iii) the fact that viewpoint aspect can be used for tense purposes. We will briefly discuss these issues here.

From works such as Smith and Erbaugh (2005) and Lin (2006), we know that there is a relation between the Aktionsart of the predicate and the (default) interpretation it gets. In particular, telic predicates are interpreted as referring to past events while states are seen as referring to states that are current. Here are some examples, from Lin.

(52) a. Zhāng Sān hěn máng.
        Zhang San very busy
        ‘Zhang San is very busy.’

     b. nǐ dǎ lánqiú ma?
        2SG play basketball Q
        ‘Do you play basketball?’

(53) a. Zhāng Sān dǎpò yī-ge huāpíng.
        Zhang San break one-CLF vase
        ‘Zhang San broke a vase.’

     b. tā dài wǒ qù Táiběi.
        3SG take 1SG go Taipei
        ‘He took me to Taipei.’

The example in (52a) is like (50a), a simple state, which, in the absence of any other cues, is interpreted as reporting on the current situation. In (52b), we find a simple, atelic activity which, in isolation, also has a stative/habitual interpretation. The sentences in (53) exemplify the generalization that telic predicates by default get a past tense interpretation. In short, in isolation and without adverbials, the temporal interpretation of a sentence is determined by the Aktionsart properties of the predicate. Interestingly, addition of a temporal adverb can change the interpretation, but not in all cases. In particular, in (53a), addition of míngtiān ‘tomorrow’, while fine in all other cases in (52) and (53), does not yield a grammatical sentence; as observed by Hsieh (2001: 276), we need huì ‘can, will’ or yào ‘want, will’ as well (see below).
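The default-interpretation facts just reviewed can be summarized as a small decision procedure. The sketch below is our own rule-of-thumb restatement of the Smith and Erbaugh / Lin generalizations (not their formal proposal): without other cues, telic predicates default to a past reading and states or bare activities to a present (stative/habitual) reading; a temporal adverb overrides the default, except that a bare telic predicate resists a future adverb unless a modal such as huì or yào is added. The function name and category labels are illustrative assumptions.

```python
# Toy decision procedure for the default temporal interpretation of a
# Mandarin predicate, following the generalizations discussed in the text:
#   - no adverb: telic -> past; state/activity -> present (cf. (52)-(53))
#   - a temporal adverb overrides the default,
#     except telic + future adverb without a modal (cf. (53a) + míngtiān)

def temporal_reading(aktionsart, adverb_time=None, modal=False):
    """aktionsart: 'state' | 'activity' | 'telic';
    adverb_time: 'past' | 'future' | None."""
    if adverb_time is None:
        return "past" if aktionsart == "telic" else "present"
    if aktionsart == "telic" and adverb_time == "future" and not modal:
        return "ungrammatical"  # needs huì/yào, per Hsieh (2001: 276)
    return adverb_time

print(temporal_reading("state"))                        # present, cf. (52a)
print(temporal_reading("telic"))                        # past, cf. (53a)
print(temporal_reading("telic", "future"))              # ungrammatical
print(temporal_reading("telic", "future", modal=True))  # future
```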
What is even more interesting is that the default interpretation can only be changed using linguistic means: we need an adverb or other lexical cues, or linguistic context; non-linguistic context cannot do the job. As observed in Sybesma (2007), if we take a deceased person as the subject of a state, we do not get a temporal switch. For example, the following sentence does not mean ‘Premier Zhao Ziyang was (or used to be) very busy’, which one would expect, knowing that Zhao Ziyang died in 2005, if non-linguistic context could change the temporal interpretation of a sentence; instead, it means ‘Premier Zhao Ziyang is very busy’ and is as strange or inappropriate as the English translation.


(54) Zhào Zǐyáng zǒnglǐ hěn máng.
     Zhao Ziyang premier very busy
     ‘Premier Zhao Ziyang is very busy.’

This can be taken as evidence for the presence of a T-node in the structure of the Mandarin sentence after all.

The final factor that needs to be mentioned in the context of tense in Mandarin is that, aside from Aktionsart, viewpoint aspect also plays a role in determining the temporal interpretation of a sentence. Just as is the case in many other languages (see Giorgi and Pianesi 1997), perfective aspect, in Mandarin expressed by le (see immediately below), is generally used to report on past events. Just like in languages such as Italian and Dutch, the Mandarin translation of the English I bought a book (yesterday) involves the perfective:

(55) wǒ (zuótiān) mǎi-le yī-běn shū.
     1SG yesterday buy-PRF one-CLF book
     ‘I bought a book (yesterday).’

Before moving on to the section on viewpoint aspect, we mention that future tense (insofar as it is tense) is expressed with the modal verbs yào (Northern speakers) and huì (Southern speakers):

(56) wǒmen jīntiān wǎnshang yào zài nǎr chī-fàn ne?
     1PL today evening will at where eat-rice SFP
     ‘Where shall we have dinner tonight?’

3.3.2. Aspect

As to viewpoint aspect, Mandarin is generally said to express the progressive, experiential and completive (or perfective) aspects with (more or less) morphological means. Other aspects are expressed using adverbials (e.g., píngcháng ‘normally’ for the habitual).

3.3.2.1. Progressive

The progressive aspect is expressed with the element zài ‘be.at’ in front of the verb; zài can be strengthened with zhèng ‘just’; an example was given in (43c). As we saw in (33a, a′), zài is a locative element, which can be used as a preposition but can also function as (part of) the main predicate. The progressive aspect is strengthened by attaching the sentence final particle ne to the sentence. We find zhèng-zài mostly with activity verbs.

Another element associated with the progressive is the suffix zhe. Like zhèng-zài, it can also be used in the main clause, see (57a) (though it seems more bookish than zhèng-zài), but it is more generally found in subordinate or adverbial clauses, modifying the main predicate, as illustrated in (57b).

(57) a. wǒmen chī-zhe fàn ne.
        1PL eat-PROG rice SFP
        ‘We are eating.’

     b. wǒmen dǎsuàn zǒu-zhe qù.
        1PL plan walk-PROG go
        ‘We plan to go on foot (lit. walking).’

In its subordinate use, zhe not only occurs with activity verbs, expressing that the action in question is going on; it also occurs with telic events, expressing that the projected endpoint has been reached and that the resulting state pertains (Cheng 1986b).

(58) a. tā chuān-zhe lán-sè de chènshān jìn-lai-le.
        3SG put.on-PROG blue-color MOD shirt enter-come-PRF
        ‘He came in, wearing a blue shirt.’

     b. tā shǒu-lǐ ná-zhe yī-tiáo kùzi jìn-lai-le.
        3SG hand-inside take-PROG one-CLF pants enter-come-PRF
        ‘He came in with a pair of pants in his hand.’

The verb chuān in (58a) means ‘wear’ in the sense of ‘put on’, and with zhe signalling that the projected endpoint has been reached and that the resulting state continues, we get the meaning ‘wear’ in the sense of ‘have on, be wearing’. In (58b) we see something similar: ná means ‘take’, ná-zhe means ‘hold’.

3.3.2.2. Experiential

The suffix guo expresses that a certain activity has taken place at least once (Iljic 1990). Its occurrence is restricted to verbs that denote eventualities which are in principle repeatable. Here is an example.

(59) nǐ chī-guo Zhōngguó-fàn ma?
     2SG eat-EXP China-food Q
     ‘Have you ever eaten Chinese food?’

Verb phrases with guo are negated with méi-yǒu ‘not-have’.

3.3.2.3. Completive

To express that an event has been completed, the element le is attached to the verb or verbal complex (“verb-le”). Verb-le is one of the most popular topics in Chinese linguistics (Lin 2006, and references cited there). Its completive effect is clearest in the context of telic events: with such events, it signals that the projected endpoint has been reached. We have already seen many examples. The negative counterpart of a sentence with verb-le has méi ‘not.have’ or méi-yǒu ‘not-have’; méi and méi-yǒu cannot co-occur with le (as in [60]):

(60) a. tā qí-lèi-le nèi-pī mǎ.
        3SG ride-tired-PRF that-CLF horse
        ‘he rode that horse tired’

     b. tā méi-yǒu qí-lèi-(*le) nèi-pī mǎ.
        3SG not-have ride-tired-PRF that-CLF horse
        ‘he did not ride that horse tired’

Verb-le has many faces. In some contexts, to give just one example, it seems to signal inchoativity (Smith 1990; Liú 1988), as the following example shows (adapted from Sybesma 1997):

(61) chī-le cái juéde yǒu yī-diǎr xiāngwèr.
     eat-PRF only feel have a-bit taste
     ‘Only after I started eating, I felt there was some nice flavor to it.’

Aside from le, Mandarin has a number of other elements which can be used to express completion. These are the elements referred to earlier as “phase complements”, such as diào ‘off’ and wán ‘finish’ in (41). In the example sentences in (41), we can still see them as somehow predicating of the object. In other words, we can analyse them as the predicate of the resultative small clause. In other cases, however, this is not possible and they express that the event has come to an end (Xuān 2008). In the following sentence, for instance, instead of only having a direct (or indirect) predication relation with one particular nominal constituent (the NP that is interpreted as the “object”), wán ‘finish’ seems to have scope over the entire event.

(62) wǒmén chī wán fàn zài zǒu.
     1PL eat finish food then leave
     ‘We’ll only leave after we’re done eating.’

We find this use of the phase complements in sentences in which there is no specific or referential object, such as dummy fàn ‘rice, food’ in (62), which does no more than plug the object slot, as argued above.
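The negation fact in (60) is a clean surface rule: méi(-yǒu) replaces, rather than co-occurs with, verb-le. The following is a minimal sketch of that rule in our own toy notation (toneless, hyphenated pinyin strings); it is not an analysis, just the descriptive generalization made executable.

```python
# Toy rule for negating a verb-le predicate, cf. (60): add méi-yǒu and
# obligatorily strip the perfective suffix -le, since *méi-yǒu ... -le
# is ungrammatical.

def negate_perfective(verb_complex):
    """Negate a predicate carrying verb-le (toneless pinyin, '-' joined)."""
    if verb_complex.endswith("-le"):
        verb_complex = verb_complex[: -len("-le")]  # drop -le under negation
    return f"mei-you {verb_complex}"

print(negate_perfective("qi-lei-le"))  # mei-you qi-lei, cf. (60b)
print(negate_perfective("mai-le"))     # mei-you mai
```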

4. The sentence

This section presents an overview of the sentential structure of Mandarin. As we have already seen from the numerous sentences above, Mandarin is basically a head-initial language, with SVO as the basic word order and with nominal and clausal complements following the verb (see also [63]).

(63) Lǐ Sì xiāngxìn Zhāng Sān bù xǐhuān Huáng Róng.
     Li Si believe Zhang San not like Huang Rong
     ‘Li Si believes that Zhang San does not like Huang Rong.’

We discuss some of the core features of the sentence structure of Mandarin by looking at the distribution of adverbs, verb copying, sentence final particles, and topic-focus structures, as well as the formation of questions.

4.1. Pre- and postverbal adverbs

With some exceptions, to be discussed shortly, adverbials occur preverbally. The unmarked position seems to be between the subject and the verb. However, with the exception of adverbs such as chángcháng ‘often’ and manner adverbs such as mànmānr-de ‘slowly’, they can also occur in pre-subject position. Manner adverbs are generally suffixed by de; other adverbs are not marked in any way. Here are some examples.

(64) a. {Yǐqián} Zhāng Sān {yǐqián} chángcháng kàn diànyǐng.
        before Zhang San before often watch movie
        ‘Zhang San used to often watch movies.’

     b. {Xiǎnrán} Zhāng Sān {xiǎnrán} gēn tāmen qù kàn diànyǐng le.
        obviously Zhang San obviously with 3PL go watch movie SFP
        ‘Obviously Zhang San went to watch a movie with them.’

     c. {Zuótiān} wǒmen {zuótiān} mànmānr-de zǒu huí jiā.
        yesterday 1PL yesterday slow-ADV walk back home
        ‘Yesterday we walked home slowly.’

Compared to the unmarked post-subject position, the adverbs in sentence-initial position tend to be more contrastive; they are more like topics (see below).

Two types of adverbial modifiers occur postverbally: durational and frequentative expressions and a certain type of manner adverb, as illustrated in (65) and (66) respectively. Durational modifiers can be integrated into the sentence in many different ways which are partly determined by the referential properties of the object if there is one (see Sybesma 1992 for an overview).

(65) a. tā kàn shū kàn-le sān-ge xiǎoshí.
        3SG read book read-PRF three-CLF hour
        ‘He read for three hours.’

     b. tā kàn-le nèi-běn shū sān cì.
        3SG read-PRF that-CLF book three times
        ‘He read that book three times.’

(66) a. Zhāng Sān kū de hěn shāngxīn.
        Zhang San cry DES very sad
        ‘Zhang San is crying very sadly.’

     b. tā kàn xiǎoshuō kàn de hěn kuài.
        3SG read novel read DES very fast
        ‘He reads novels very fast.’

Postverbal manner adverbs are separated from the verb by the element de (etymologically different from the de that is suffixed to preverbal manner adverbs), which forms a phonological unit with the verb. Semantically, post- and preverbal manner adverbials are quite different. To illustrate, consider the following examples:

(67) a. tā mànmānr-de zǒu-guò-lái.
        3SG slow-ADV walk-pass-come
        ‘He is slowly walking by.’

     b. tā zǒu-de hěn màn.
        3SG walk.DES very slow
        ‘He walks very slowly.’ or: ‘He is walking very slowly.’

The difference is that the preverbal manner adverb can only be used to modify events that are in progress; thus, (67a) can only be used while pointing at someone who is walking by. The postverbal one can also be used that way, but (67b) can also be used to describe someone more generally: you may utter (67b) while pointing at someone who is sitting on the couch watching TV: ‘he is a slow walker’.

There is no agreement on how sentences with postverbal adverbials with de should be analysed. There are factors (e.g., definiteness/specificity) which affect whether or not both a nominal complement and an adjunct can follow the verb (see J. Huang 1982b; Y.-H. Li 1990; see also Y. Li 1999). The restriction on postverbal elements also leads to a frequently discussed phenomenon, namely verb copying, to be discussed in the following section. For insightful discussion of the distribution of the different types of adverbials relative to the verb in Mandarin, see Ernst (1999, 2002).

4.2. Verb copying

The examples (65a) and (66b) reveal another interesting property of Mandarin. Although Mandarin is basically head-initial, there is a restriction on the number of postverbal phrases in a clause. This concerns both nominal complements and postverbal adjuncts. When more than one constituent should follow the verb in the same sentence, we often see that the verb is repeated. This can be seen in (65a), where we have an object and a durational expression, and in (66b), which has an object and a manner adverbial. In (68) we see the same phenomenon illustrated with resultatives:

(68) a. tā dǎ Lǐ Sì dǎ de hěn cǎn.
        3SG hit Li Si hit RES very miserable
        ‘He hit Li Si to the extent that Li Si became very miserable.’

     b. tā chī fàn chī de hěn bǎo.
        3SG eat rice eat RES very full
        ‘He ate and became very full.’

The verb copying in (68a, b) in fact illustrates two different verb copying strategies. Though the two sentences are superficially very similar, they have very different readings: in (68a), the resultative de-clause modifies the object argument Lǐ Sì, while in (68b), the resultative de-clause modifies the subject argument tā ‘he’. Cheng (2007), following Sybesma (1999a), proposes that the base structures of the two sentences are in fact different, which leads to different verb copying strategies. The sentence in (68a) has the base structure in (69a), comparable to its bǎ-counterpart (69b) (see discussion above).

(69) a. [νP tā ν [VP dǎ [deP Lǐ Sì hěn cǎn]]]
         3SG hit Lǐ Sì very miserable

     b. tā bǎ Lǐ Sì dǎ de hěn cǎn.
        3SG BA Li Si hit RES very miserable
        ‘He hit Li Si to the extent that Li Si became very miserable.’

The fact that Lǐ Sì is the subject of the clause labeled “deP” ensures the reading where the result is related to the object argument. To derive (68a), the subject of the resultative de-clause, Lǐ Sì, moves to SpecVP, while the verb dǎ ‘hit’ moves to little ν. Verb copying comes about in this case, according to Cheng, because the lower copy is not deleted after movement, due to morphological fusion of the verb and de. This yields the effect that the lower copy is not visible to the chain reduction operation (see Nunes 2004), thus allowing both copies of the verb to be pronounced.

The sentence in (68b) has a different base structure, given in (70), the difference being that here we have no vP-layer (see the discussion in section 3.2 above).

(70) [IP [VP chī fàn] [VP chī [deP tā hěn bǎo]]]
         eat rice      eat      3SG very full

Like Lǐ Sì in (69), tā ‘he’ starts out as an internal argument (inside the resultative deP), yielding the reading in which the resultative de-clause is related to the constituent that will eventually surface as the subject; unlike Lǐ Sì in (69), however, the argument tā ‘he’ in (70) moves to SpecIP. Because there is no vP, the verb remains in VP.
According to Cheng, the higher V-O combination (i.e., chī fàn ‘eat rice’ in [68b]) is derived by a process called “sideward movement” (see Nunes 2001, 2004): the object noun phrase (fàn ‘rice’) needs to be licensed by a verb, and in this case, a copy of the main verb is used for this purpose. This generates a structure in which the V-O complex is adjoined to the verb phrase; see J. Huang (1982b, 1992) for a similar structure. This yields the second verb copying strategy, since the chain-reduction operation cannot delete either of the copies. Note that cases comparable to (68b) do not have to have a dummy object; a full noun phrase can also appear (see Cheng 2007; see also Cheng and Vicente 2013 on verb copying triggered by verb fronting).
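The surface effect of both strategies can be stated as one descriptive constraint: at most one constituent may follow (each copy of) the verb, so when an object and a postverbal expression compete for that slot, the verb surfaces twice. The sketch below is our own procedural restatement of that surface generalization (toneless pinyin; it deliberately abstracts away from Cheng's derivational analysis and from the two distinct base structures).

```python
# Toy linearizer for the verb copying generalization: Mandarin allows only
# one postverbal constituent per verb, so object + postverbal phrase
# (duration, de-adverbial, resultative) forces a second copy of the verb.

def linearize(subject, verb, obj=None, postverbal=None):
    """Return the surface word order, copying the verb when needed."""
    if obj and postverbal:
        # one copy of the verb per postverbal phrase, cf. (65a), (66b), (68)
        return f"{subject} {verb} {obj} {verb} {postverbal}"
    tail = obj or postverbal
    return f"{subject} {verb} {tail}" if tail else f"{subject} {verb}"

print(linearize("ta", "kan", "shu", "san-ge xiaoshi"))
# -> ta kan shu kan san-ge xiaoshi, cf. (65a)
print(linearize("ta", "kan", "shu"))
# -> ta kan shu (a single postverbal constituent needs no copying)
```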


4.3. Sentence final particles

One of the eye-catching features of Chinese, including Mandarin, is the presence of sentence final particles (SFPs), which color the sentence one way or another. Let us start with the ones that are often discussed in the syntactic literature, the SFPs which are generally considered to be question particles: ma and ne.

(71) a. nǐ chī píngguǒ ma?
        2SG eat apple Q
        ‘Do you eat apples?’
        (adapted from Li and Thompson 1981)

     b. nǐ chī-bu-chī píngguǒ?
        2SG eat-not-eat apple
        ‘Do you eat apples?’

(72) a. Hóngjiàn xǐhuān shénme ne?
        Hongjian like what SFP
        ‘What does Hongjian like?’
        (adapted from B. Li 2006)

     b. Bàba ne?
        father SFP
        ‘What about father?’

Li and Thompson (1981) note that yes-no questions marked by the A-not-A form of the verb, such as the one in (71b), are used in neutral contexts (see J. Huang 1991; McCawley 1994, as well as the discussion below), while the one with the SFP ma in (71a) is associated with some presupposition. For instance, (71a) can only be used in a context where the speaker does not think that the hearer actually eats apples.

Ne is often considered to be a wh-particle. B. Li (2006) argues, however, that ne is an evaluative marker, used in declaratives (73a) as well as in wh- and A-not-A questions (73b) (examples adapted from B. Li 2006).

(73) a. Xiānggǎng zuìjìn xià xuě le (ne).
        Hong.Kong recently fall snow SFP SFP
        ‘It snowed in Hong Kong lately.’

     b. Hóngjiàn xǐ-bù-xǐhuān zhè běn shū (ne)?
        Hongjian like-NEG-like this CLF book SFP
        ‘Does Hongjian like this book?’

Proposing that ne is an evaluative marker, B. Li suggests that, in declaratives, it indicates that the proposition is considered extraordinary by the speaker, and that in questions (both wh- and yes/no questions) it signals that the question is considered to be of particular importance to the speaker. In cases such as (72b), B. Li considers ne to be a topic marker (following G. Wu 2006).
Aside from the ma that we see in (71a) − the one used in yes/no questions, which is called ma1 in B. Li (2006), there is another ma, marked as ma2, exemplified in (74):

(74) Wǒ shuō jīntīan shì xīngqīsān ma2 --- (nǐ shuō bú shì).
     1SG say today be Wednesday SFP 2SG say NEG be
     ‘I said it was Wednesday today --- (you said it wasn’t).’

B. Li argues that ma1 is actually not a marker for yes/no questions, and that ma1 and ma2 are one and the same particle. She suggests that ma is a degree marker, and that the same applies to the particle ba, in (75a).

(75) a. Hóngjiàn zài bàngōngshì ba.
        Hongjian at office SFP
        ‘(Probably) Hongjian is in his office.’

     b. Hóngjiàn zài bàngōngshì ma.
        Hongjian at office SFP
        ‘(Obviously/certainly) Hongjian is in his office.’

In particular, B. Li argues that in declaratives, ba marks a low degree of the speaker’s commitment to the assertion, while ma marks a high degree of the speaker’s commitment (as we can see from the contrast between [75a] and [75b], where ma is pronounced with low pitch). In questions, we see a similar difference: whereas with ma a question is quite urgent (as we just saw), ba is used when the question is in some sense not so urgent (the speaker thinks that s/he actually already knows the answer); compare the following example with (71a):

(76) nǐ chī píngguǒ ba?
     2SG eat apple SFP
     ‘You eat apples, right?’

Then there is a, a final particle that appears in various contexts, with questions and declaratives alike. Chu (2002) considers it to be a discourse marker: it eases the sentence into the conversational context. It does so in two ways: on the one hand it makes the sentence it is attached to less abrupt, and on the other hand, it alerts the hearer that the speaker means to make an especially relevant contribution to the exchange. Some examples can be seen in (77a, b), adapted from B. Li (2006).

(77) a. Bàba huí-lai le a?
        father return-come PRF SFP
        ‘Father is back?’

     b. Jùshuō Huá-Háng hěn piányi a.
        hearsay China-Airlines very cheap SFP
        ‘I heard that China Airlines was very cheap.’

Without a, the sentence in (77b), for instance, would be an out of the blue sentence, with no relation to the context. As is, it is probably uttered in the context of a conversation on traveling to Europe in which the previous speaker may have said that tickets are hardly affordable.


Based on co-occurrence restrictions between the particles, B. Li (2006) proposes that the left periphery of Mandarin has the following heads, in this particular order:

(78) Discourse > Degree > Force > Evaluative > Mood > Fin
     a           ba, ma            ne

For B. Li, the order put forth in (78) is a hierarchical order. The assumption made in B. Li is that these left-periphery heads are right-headed. To derive the correct word order, the complement of the head moves to the specifier position of the head, and subsequent movements of that type yield multiple SFPs (see also Sybesma 1999b).

The last SFP to be mentioned is le (also known as “sentence-le”, to be distinguished from the so-called “verb-le” discussed in section 3 above). Le differs from the SFPs discussed so far in terms of the semantics it adds to the sentence. Whereas the other particles modify the sentence by signalling higher or lower degrees of urgency, relevance or commitment, le stands much closer to the sentence (also literally: if it co-occurs with other SFPs, it always precedes them). It has been associated with an interpretation akin to certain functions associated with finiteness in other languages. More particularly, by adding le to a sentence, one enhances the link with the utterance time, thus enhancing the actuality. An interesting side effect of this is that le is often connected with a change of state: the actuality is so urgent that the implication is that whatever situation is described did not pertain before (Li and Thompson 1981). To illustrate:

(79) a. xià yǔ le.
        come.down rain SFP
        ‘It is raining now.’

     b. wǒmen bù qù le.
        1PL not go SFP
        ‘We are no longer going.’

Without le, the sentence in (79a) would simply mean ‘It is raining’ − a simple statement of fact. As is, with le, it implies that earlier on it was not raining (either objectively or subjectively: it is possible that it had been raining for hours, but that the speaker only discovers that it is raining now).
Similarly, (79b) without le would say ‘We are not going’. Le adds relevance to the moment of speech, implying that this is a new situation: earlier on, we were still going; now, this is no longer the case.
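The co-occurrence facts discussed in this section can be checked mechanically against B. Li's hierarchy in (78). The sketch below is our own toy encoding, under two simplifying assumptions that go beyond the text: that surface order runs from the innermost head outward (sentence-le closest to the clause, then ne, then ba/ma, then a), and that ba and ma compete for the single Degree slot. The rank values and function are illustrative only.

```python
# Toy consistency check on sequences of sentence final particles, encoding
# the hierarchy in (78) plus the observation that sentence-le precedes all
# other SFPs. Surface order is assumed to mirror the hierarchy inside-out.

RANK = {"le": 0, "ne": 1, "ba": 2, "ma": 2, "a": 3}

def sfp_sequence_ok(particles):
    """True if a left-to-right sequence of SFPs respects the ordering."""
    ranks = [RANK[p] for p in particles]
    # strictly increasing: each particle is a higher head than the last,
    # and no two particles may share a slot (e.g. ba and ma)
    return all(lo < hi for lo, hi in zip(ranks, ranks[1:]))

print(sfp_sequence_ok(["le", "ne"]))  # True, cf. (73a) xià xuě le ne
print(sfp_sequence_ok(["le", "a"]))   # True, cf. (77a) huí-lai le a
print(sfp_sequence_ok(["a", "le"]))   # False: le must precede other SFPs
print(sfp_sequence_ok(["ba", "ma"]))  # False: same Degree slot
```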

4.4. Topic and focus

Mandarin has been called a Topic-prominent language (Li and Thompson 1974). One reason for this claim is that it features so-called “aboutness” topics (such as nà-chǎng huǒ ‘that fire’ in the famous sentence reproduced here in [80], originally from Chao 1968, but subsequently quoted in virtually every work on topics in Chinese), which are not related to any element or constituent in the sentence, but which only have a relation with the sentence as a whole.

(80) nà-chǎng huǒ, xìngkuī xiāofángduì lái-de-kuài.
     that-CLF fire luckily fire.brigade come-DES-fast
     ‘As to that fire, fortunately the fire brigade arrived quickly.’

Topics occur in topic-comment sentences, where the topic presents old information and the comment new information. As a consequence, topics are definite or generic. Aside from the aboutness topic, Badan (2007) argues that Mandarin also distinguishes between Hanging Topics (HT) and Left-Dislocation Topics (LDT). These two types of topics can be distinguished based on the fact that LDTs can accommodate prepositional phrases while HTs cannot. The examples in (81)−(83) illustrate that HTs and LDTs further differ on several counts: (a) HTs can be resumed by an epithet, whereas LDTs cannot ([81a] vs. [81b]); (b) multiple LDTs are allowed but multiple HTs are not ([82a] vs. [82b]); and (c) HTs precede LDTs when they co-occur ([83a] vs. [83b]) (examples [81]−[83] are from Badan 2007).

(81) a. Zhāng Sāni, wǒ gěi [nà-ge shǎzi]i jì-le yī-fēng xìn.
        Zhang San 1SG to that-CLF imbecile send-PRF one-CLF letter
        ‘Zhang San, I sent a letter to that imbecile.’

     b. *gěi Zhāng Sāni, wǒ gěi [nà-ge shǎzi]i jì-le yī-fēng xìn.
        to Zhang San 1SG to that-CLF imbecile send-PRF one-CLF letter
        ‘To Zhang San, I sent a letter to that imbecile.’

(82) a. cóng zhè-jiā yínháng, tì Zhāng Sān, wǒ zhīdao wǒmen kéyǐ jièdào hěnduō qián.
        from this-CLF bank for Zhang San 1SG know 1PL can borrow much money
        ‘From this bank, for Zhang San, I know we can borrow a lot of money.’

     b. *zhè-jiā yínháng, Zhāng Sāni, wǒ zhīdao wǒmen kéyǐ cóng nàlǐ tì tāi jièdào hěnduō qián.
        this-CLF bank Zhang San 1SG know 1PL can from there for 3SG borrow much money

(83) a. Zhāng Sāni, cóng zhè-jiā yínháng, wǒ zhīdao wǒmen kéyǐ tì tāi jièdào hěnduō qián.
        Zhang San from this-CLF bank 1SG know 1PL can for 3SG borrow much money
        Lit.: ‘Zhang Sani, from that bank, I know that we can borrow a lot of money for himi.’

     b. *cóng zhè-jiā yínháng, Zhāng Sān, wǒ zhīdao wǒmen kéyǐ tì tā jièdào hěnduō qián.
        from this-CLF bank Zhang San 1SG know 1PL can for 3SG borrow much money

Whether topics are base-generated or moved has been a hotly debated issue (see Xu and Langendoen 1985; J. Huang 1982b; D. Shi 2000, among others). The issue may be settled if we establish more carefully what kind of topics we have. Badan (2007) argues that HTs are base-generated while LDTs involve movement (of a null operator).

43. Mandarin


Turning to focus, in Mandarin both so-called contrastive focus and so-called information focus appear "in-situ": in-situ focus carries phonological prominence (i.e., carries stress), as illustrated in (84a−c) (stress indicated by small caps).

(84) a. ZHĀNG SĀN chī-le yī-ge píngguǒ.          (SUBJECT focus)
        Zhang San eat-PRF one-CLF apple
        'ZHANG SAN ate an apple.'
     b. Zhāng Sān chī-le YĪ-GE píngguǒ.          (NUMERAL+CLASSIFIER focus)
     c. Zhāng Sān chī-le yī-ge PÍNGGUǒ.          (OBJECT focus)

The lián … dōu 'even' construction has been much discussed in Chinese linguistics (see Tsai 1994; Shyu 1995 among others). The examples in (85a, b) illustrate that an object which is under the scope of lián must raise to the left of dōu (see Cheng 1995, 2009; S. Huang 1996; Lin 1998; Tsai 1994; Hole 2004 for discussions of dōu). Further, the element lián is optional, and the lián-DP can be either post-subject or pre-subject.

(85) a. Zhāng Sān (lián) zhè-běn shū dōu kàn-wán le.
        Zhang San even this-CLF book DOU read-finish SFP
        'Zhang San read even this book.'
     b. (lián) zhè-běn shū Zhāng Sān dōu kàn-wán le.

Badan (2007) shows that the typical properties of an even-construction, i.e., additivity and scalarity, are expressed by two different elements in Mandarin: lián provides additivity, while dōu gives scalarity.

Aside from these "typical" cases of topics and foci, it should be noted that there are cases of object preposing which have a controversial status. These are cases in which the object is preposed to a post-subject position (similar to the position of the lián-DP in [85a]):

(86) Zhāng Sān zhè-běn shū yǐjīng kàn-wán le.
     Zhang San this-CLF book already read-finish SFP
     'Zhang San has already read this book.'

Badan (2007) shows that the object in this position does not have an information focus reading (i.e., it cannot be used to answer a what-question, e.g., what did Zhang San already read?). Nor is it a contrastive focus. Instead, it is a contrastive topic, since it requires a contrastive context. Badan argues that it is in a low Topic projection (comparable to the topic position in the lower periphery à la Belletti 2004). Such cases have been discussed in Ernst and Wang (1995) and Paul (2002), who come to similar conclusions.

4.5. The formation of questions

In section 2.3, we already encountered yes-no and wh-questions in connection with SFPs. Here we discuss the formation of questions (which in some cases will lead us back to the SFPs).


4.5.1. Yes-no questions

As in many languages of the world, yes-no questions in Mandarin can be marked simply by rising intonation. In addition to this intonational method, yes-no questions in Mandarin can be formed with the A-not-A form of the verb ([87a], see also [73b] above), as well as by putting the negation at the end of the sentence, forming so-called negative particle questions (87b). (Here we do not discuss again yes-no questions which are accompanied by the particle ma; following B. Li, we take these to be intonational questions accompanied by the strengthener ma, see above.)

(87) a. tā xiǎng-bu-xiǎng lái?
        3SG want-not-want come
        'Does he want to come?'
     b. tā lái-le méi-yǒu?
        3SG come-PRF not-have
        'Has he come or not?'

J. Huang (1991) shows that A-not-A questions such as (87a) are similar to constituent wh-questions in that the distribution and interpretation of the A-not-A form exhibit island effects. For instance, a sequence of the form [A not A] cannot be properly embedded in a sentential subject, as in (88).

(88) *[wǒ qù bu qù Měiguó] bǐjiào hǎo?
     1SG go not go America more good
     'Is it better for me to go to America or not?'
     (adapted from J. Huang 1991, 33c)

J. Huang (1991) posits a [+Q] operator in INFL0, which is spelled out by a reduplication rule together with insertion of the negation element (see McCawley 1994 for a discussion of the negation and of how these questions are similar to disjunctive yes-no questions). Note that J. Huang distinguishes two forms of A-not-A questions, namely [A not AB] and [AB not A], where B is the object of the verb A, as shown here (examples adapted from J. Huang 1991):

(89) a. nǐ xǐhuān-bu-xǐhuān zhè-běn shū?          [A not AB]
        2SG like-not-like this-CLF book
     b. nǐ xǐhuān zhè-běn shū bu xǐhuān?          [AB not A]
        2SG like this-CLF book not like
        'Do you like this book or not?'

According to J. Huang, unlike the [A not AB] form, [AB not A] is not derived by the reduplication rule. Instead, it is derived by a process of anaphoric ellipsis of the form [[AB] not [AB]]. In other words, the base sentence of (89b) is the one in (90).

(90) nǐ xǐhuān zhè-běn shū bu xǐhuān zhè-běn shū?
     2SG like this-CLF book not like this-CLF book
     'Do you like this book or not?'


By deleting the second B (i.e., the second occurrence of zhè-běn shū 'this book'), we derive the sentence in (89b).

Turning now to the negative particle questions (NPQs) illustrated in (87b), we note that Cheng, Huang, and Tang (1997) argue that such questions are derived from disjunctive háishì 'or' questions. NPQs such as (91a) are derived from (91b) by (a) deletion of háishì 'or', (b) anaphoric deletion of the second occurrence of lái 'come', and (c) reanalysis of the negation as a C0 question particle.

(91) a. tā lái bu?
        3SG come not
        'Is he coming?'
     b. tā lái háishì bù lái?
        3SG come or not come
        'Is he coming or not coming?'

This analysis can capture the fact that in Mandarin NPQs the negative "particle" is sensitive to the verbal aspect, as shown in (92a, b): the negation has to be compatible with the verbal form, which will be the case given a disjunctive question on a par with (91b).

(92) a. tā qù-le méi-yǒu/*bu?
        3SG go-PRF not-have/not
        'Did he go?'
     b. tā huì/néng qù bu/*méi-yǒu?
        3SG will/can go not/not-have
        'Will/can he go?'

4.5.2. Wh-questions

Wh-words in Mandarin stay in-situ in wh-questions, regardless of whether we are dealing with wh-arguments or wh-adjuncts. Moreover, wh-words stay in-situ not only in matrix questions but also in embedded questions.

(93) a. Zhāng Sān xiāngxìn Lǐ Sì mǎi-le shénme?
        Zhang San believe Li Si buy-PRF what
        'What does Zhang San believe that Li Si bought?'
     b. Zhāng Sān bù zhīdào Lǐ Sì wèishénme méi lái.
        Zhang San not know Li Si why not.have come
        'Zhang San doesn't know why Li Si didn't come.'

Though both arguments and adjuncts can stay in-situ, their behavior is not the same (Lin 1992; Tsai 1994). First, as (94a, b) show, adjuncts such as wèishénme 'why' cannot stay in islands while arguments can (see J. Huang 1982a; Aoun and Li 1993 among others). Second, Soh (2005) shows that wh-adjuncts in Mandarin show intervention effects, i.e., the wh-adjuncts cannot be under the scope of operator-like elements such as focus markers or negation, as in (95) (examples adapted from Soh 2005).

(94) a. Zhāng Sān xǐhuān nǎ-ge zuòjiā xiě de shū?
        Zhang San like which-CLF author write MOD book
        'For which x, x an author, such that Zhang San likes the books that x wrote?'
     b. *Zhāng Sān kàn-guo Lǐ Sì wèishénme xiě de wénzhāng.
        Zhang San read-EXP Li Si why write MOD article
        Intended: 'For what reason x is it such that Zhang San read the article that Li Si wrote because of x?'

(95) a. *nǐ {zhǐ/bù} rènwéi Lǐ Sì wèishénme kàn zhēntàn-xiǎoshuō?
        2SG only/not think Li Si why read detective-novel
        'What is the reason x such that you {only think/don't think} Li Si reads detective novels for x?'
     b. tā {zhǐ/bù} mǎi shénme?
        3SG only/not sell what
        'What is the thing x such that he {only sells/does not sell} x?'

Based on the difference wh-adjuncts and wh-arguments display in terms of intervention effects, Soh (2005) concludes that wh-adjuncts in Mandarin undergo covert feature movement (see also Pesetsky 2000). Soh further suggests that wh-arguments undergo covert phrasal movement, in line with what J. Huang (1982a) proposes, but in contrast with proposals along the lines of Reinhart (1998), which argue for a non-movement account based on choice function application.

5. Abbreviations

CLF   classifier
DES   descriptive marker
EXP   experiential marker
MOD   modification marker
NUME  numeral
SC    small clause
SFP   sentence-final particle

Acknowledgement

We would like to thank the reviewer for very helpful comments and suggestions.

6. References (selected)

Ahrens, Kathleen 1994 Classifier production in normals and aphasics. Journal of Chinese Linguistics 22(2): 202−246.
Aoun, Joseph, and Audrey Yen-Hui Li 1993 Wh-elements in-situ: syntax or LF? Linguistic Inquiry 24: 199−238.


Arsenijevic, Boban, and Joanna Sio 2007 Talking about classifiers, the Cantonese ge is a curious one. Paper presented at the Wednesday Syntax Meeting, Leiden University.
Badan, Linda 2007 High and low periphery: a comparison of Italian and Chinese. Doctoral dissertation, Università degli Studi di Padova.
Belletti, Adriana 2004 Aspects of the low IP area. In: Luigi Rizzi (ed.), The Structure of IP and CP. The Cartography of Syntactic Structures, Volume 2, 16−51. Oxford: Oxford University Press.
Chao, Yuen-Ren 1968 A Grammar of Spoken Chinese. Berkeley: University of California Press.
Chappell, Hilary (ed.) 2001 Sinitic Grammar. Synchronic and Diachronic Perspectives. Oxford: Oxford University Press.
Chen, Ping 1999 Modern Chinese. Cambridge: Cambridge University Press.
Chen, Ping 2003 Indefinite determiner introducing definite referent: a special use of 'yi "one" + classifier' in Chinese. Lingua 113(12): 1169−1184.
Cheng, Lisa Lai-Shen 1986a de in Mandarin. Canadian Journal of Linguistics 31: 313−326.
Cheng, Lisa Lai-Shen 1986b Clause structures in Mandarin Chinese. MA thesis, University of Toronto.
Cheng, Lisa Lai-Shen 1995 On dou-quantification. Journal of East Asian Linguistics 4: 197−234.
Cheng, Lisa Lai-Shen 1998 Marking modification in Cantonese and Mandarin. Paper presented at SOAS, London.
Cheng, Lisa Lai-Shen 2007 Verb copying in Mandarin Chinese. In: Norbert Corver and Jairo Nunes (eds.), The Copy Theory of Movement on the PF Side, 151−174. Amsterdam: John Benjamins.
Cheng, Lisa Lai-Shen 2009 On every type of quantificational expression in Chinese. In: Monika Rathert and Anastasia Giannakidou (eds.), Quantification, Definiteness, and Nominalization, 53−75. Oxford: Oxford University Press.
Cheng, Lisa Lai-Shen, Jenny Doetjes, and Rint Sybesma 2008 How universal is the Universal Grinder? In: Marjo van Koppen and Bert Botma (eds.), Linguistics in the Netherlands 2008, 50−62. Amsterdam: John Benjamins.
Cheng, Lisa Lai-Shen, James C. T. Huang, and C.-C. Jane Tang 1997 Negative particle questions: a dialectal comparison. Journal of Chinese Linguistics, Special issue, 66−112.
Cheng, Lisa Lai-Shen, and Rint Sybesma 1998a Yi-wan tang, yi-ge Tang: classifiers and massifiers. Tsing-Hua Journal of Chinese Studies, New Series 28: 385−412.
Cheng, Lisa Lai-Shen, and Rint Sybesma 1998b On dummy objects and the transitivity of run. In: Renée van Bezooijen and René Kager (eds.), Linguistics in the Netherlands 1998, 81−93. Amsterdam: John Benjamins.
Cheng, Lisa Lai-Shen, and Rint Sybesma 1999 Bare and not-so-bare nouns and the structure of NP. Linguistic Inquiry 30: 509−542.
Cheng, Lisa Lai-Shen, and Rint Sybesma 2005 Classifiers in four varieties of Chinese. In: Guglielmo Cinque and Richard Kayne (eds.), Handbook of Comparative Syntax, 259−292. Oxford: Oxford University Press.


Cheng, Lisa Lai-Shen, and Rint Sybesma 2005 A Chinese relative. In: Norbert Corver, Hans Broekhuis, Riny Huybregts, Ursula Kleinhenz, and Jan Koster (eds.), Organizing Grammar. Linguistic Studies in Honor of Henk van Riemsdijk, 69−76. Berlin: Mouton.
Cheng, Lisa Lai-Shen, and Rint Sybesma 2009 De as an underspecified classifier: first explorations. Yǔyánxué Lùncóng 39: 123−156.
Cheng, Lisa Lai-Shen, and Rint Sybesma 2012 Classifiers and DP. Linguistic Inquiry 43(4): 634−650.
Cheng, Lisa Lai-Shen, and Luis Vicente 2013 Verb doubling in Mandarin Chinese. Journal of East Asian Linguistics 22(1): 1−37.
Chierchia, Gennaro 2010 Mass nouns, vagueness and semantic variation. Synthese 174(1): 99−149.
Chomsky, Noam 1995 The Minimalist Program. Cambridge, Mass.: MIT Press.
Chu, Chauncey C. 2002 Relevance theory, discourse markers and the Mandarin utterance-final particle a/ya. Journal of Chinese Language Teachers' Association 37: 1−42.
Croft, William 1994 Semantic universals in classifier systems. Word 45: 145−171.
Del Gobbo, Francesca 2001 Appositives schmappositives. In: Maki Irie and Hajime Ono (eds.), University of California Irvine Working Papers in Linguistics 7, 1−25. Irvine: University of California.
Del Gobbo, Francesca 2010 On Chinese appositive relative clauses. Journal of East Asian Linguistics 19(4): 385−417.
den Dikken, Marcel, and Pornsiri Singhapreecha 2004 Complex noun phrases and linkers. Syntax 7: 1−54.
Dù, Yǒngdào 1993 Běijīng-huà zhōng de 'yi+N' [Yi+N in the Beijing dialect]. Zhōngguó Yǔwén 1993(2): 142.
Erbaugh, Mary 2002 Classifiers are for specification: complementary functions of sortal and general classifiers in Cantonese and Mandarin. Cahiers de Linguistique Asie Orientale 31(1): 33−69.
Ernst, Tom 1999 Adjuncts, the Universal Base, and word order typology. In: Pius Tamanji, Masako Hirotani, and Nancy Hall (eds.), NELS 29, 209−223. Amherst, Mass.: GLSA.
Ernst, Tom 2002 The Syntax of Adjuncts. Cambridge: Cambridge University Press.
Ernst, Tom, and Chengchi Wang 1995 Object preposing in Mandarin Chinese. Journal of East Asian Linguistics 4(3): 235−260.
Giorgi, Alessandra, and Fabio Pianesi 1997 Tense and Aspect: From Semantics to Morphosyntax. Oxford: Oxford University Press.
Grano, Thomas 2012 Mandarin hen and universal markedness in gradable adjectives. Natural Language and Linguistic Theory 30(2): 513−565.
Hale, Kenneth, and Samuel Jay Keyser 1993 On argument structure and the lexical expression of syntactic relations. In: Kenneth Hale and Samuel Jay Keyser (eds.), The View from Building 20: Essays in Linguistics in Honor of Sylvain Bromberger, 53−109. Cambridge, Mass.: MIT Press.
Handel, Zev 2008 What is Sino-Tibetan? Snapshot of a field and a language family in flux. Language and Linguistics Compass 2(3): 422−441.


Hoekstra, Teun 1988 Small clause results. Lingua 74: 101−139.
Hole, Daniel 2004 Focus and Background Marking in Mandarin Chinese: System and Theory behind cai, jiu, dou and ye. London: RoutledgeCurzon.
Hsieh, Miao-Ling 2001 Form and meaning: negation and question in Chinese. Doctoral dissertation, University of Southern California.
Hsieh, Miao-Ling 2008 The Internal Structure of Noun Phrases in Chinese. Taipei: Crane.
Huang, Cheng-Teh James 1982a Move wh in a language without wh movement. The Linguistic Review 1: 369−416.
Huang, Cheng-Teh James 1982b Logical relations in Chinese and the theory of grammar. Doctoral dissertation, MIT.
Huang, Cheng-Teh James 1984 On the distribution and reference of empty pronouns. Linguistic Inquiry 15: 531−574.
Huang, Cheng-Teh James 1991 Modularity and Chinese A-not-A questions. In: Carol Georgopoulos and Roberta Ishihara (eds.), Interdisciplinary Approaches to Language: Essays in Honor of S.-Y. Kuroda, 305−332. Dordrecht: Kluwer Academic Publishers.
Huang, Cheng-Teh James 1992 Complex predicates in control. In: Richard Larson, Sabine Iatridou, and Utpal Lahiri (eds.), Control and Grammar, 109−147. Dordrecht: Kluwer Academic Publishers.
Huang, Cheng-Teh James, Audrey Yen-Hui Li, and Yafei Li 2009 The Syntax of Chinese. Cambridge: Cambridge University Press.
Huang, Shi-Zhe 1996 Quantification and predication in Mandarin Chinese: a case study of dou. Doctoral dissertation, University of Pennsylvania.
Huang, Shi-Zhe 2006 Property theory, adjectives and modification in Chinese. Journal of East Asian Linguistics 15: 343−369.
Iljic, Robert 1990 The verbal suffix -guo in Mandarin Chinese. Lingua 81: 301−326.
Iljic, Robert 1994 Quantification in Mandarin Chinese: two markers of plurality. Linguistics 32: 91−116.
Jìng, Sōng 1995 Běijīng kǒuyǔ zhōng liàngcí de tuōluò [The drop of classifiers in Beijing dialect]. Xué Hànyǔ 1995(8): 13−14.
Kayne, Richard 1994 The Antisymmetry of Syntax. Cambridge, Mass.: MIT Press.
Krifka, Manfred, Francis Jeffry Pelletier, Gregory N. Carlson, Alice ter Meulen, Gennaro Chierchia, and Godehard Link 1995 Genericity: an introduction. In: Gregory N. Carlson and Francis Jeffry Pelletier (eds.), The Generic Book, 1−124. Chicago: University of Chicago Press.
Li, Boya 2006 Chinese final particles and the syntax of the periphery. Doctoral dissertation, Leiden University.
Li, Charles N., and Sandra A. Thompson 1974 Chinese as a topic-prominent language. Paper presented at the 7th International Conference on Sino-Tibetan Languages and Linguistics, Atlanta, GA.
Li, Charles N., and Sandra A. Thompson 1981 Mandarin Chinese: A Functional Reference Grammar. Berkeley: University of California Press.


Li, Charles, Sandra Thompson, and Bójiāng Zhāng 1998 Cóng huàyǔ jiǎodù lùnzhèng yǔqìcí "de" [The particle "de" as an evidential marker in Chinese [sic]]. Zhōngguó Yǔwén 1998(2): 93−102.
Li, Xuping 2009 Pre-classifier adjectival modification: the big/small issue of mass/count classifiers. Ms., Bar-Ilan University, Tel Aviv.
Li, Yafei 1999 Cross-componential causativity. Natural Language and Linguistic Theory 17: 445−497.
Li, Yen-Hui Audrey 1990 Order and Constituency in Mandarin Chinese. Dordrecht: Kluwer Academic Publishers.
Li, Yen-Hui Audrey 1998 Argument determiner phrases and number phrases. Linguistic Inquiry 29: 693−702.
Lin, Jo-Wang 1992 The syntax of zenmeyang 'how' and weishenme 'why' in Mandarin Chinese. Journal of East Asian Linguistics 1: 219−253.
Lin, Jo-Wang 1998 On existential polarity wh-phrases in Chinese. Journal of East Asian Linguistics 7: 219−255.
Lin, Jo-Wang 2004 On restrictive and non-restrictive relatives in Mandarin Chinese. Tsinghua Journal of Chinese Studies 33(1): 199−240.
Lin, Jo-Wang 2006 Time in a language without tense: the case of Chinese. Journal of Semantics 23(1): 1−53.
Liu, Cheng-Sheng Luther 2010 The positive morpheme in Chinese and the adjectival structure. Lingua 120: 1010−1056.
Liu, Hsin-Yun 2003 A Profile of the Mandarin NP. Possessive Phrases and Classifier Phrases in Spoken Discourse. München: Lincom.
Liu, Xunning 1988 Xiàndài Hànyǔ cíwěi 'le' de yǔfǎ yìyì [The grammatical meaning of the Modern Chinese affix 'le']. Zhōngguó Yǔwén 1988(5): 321−330.
Loke, Kit-Ken 1994 Is ge merely a "general classifier"? Journal of the Chinese Language Teachers' Association 29(3): 67−78.
McCawley, James D. 1992 Justifying part-of-speech assignment in Mandarin Chinese. Journal of Chinese Linguistics 20(2): 211−245.
McCawley, James D. 1994 Remarks on the syntax of Mandarin yes-no questions. Journal of East Asian Linguistics 3: 179−194.
Myers, James 2000 Rules vs. analogy in Mandarin classifier selection. Language and Linguistics 1: 187−209.
Norman, Jerry 1988 Chinese. Cambridge: Cambridge University Press.
Nunes, Jairo 2001 Sideward movement. Linguistic Inquiry 31: 303−344.
Nunes, Jairo 2004 Linearization of Chains and Sideward Movement. Cambridge, Mass.: MIT Press.
Paris, Marie-Claude 1979 Nominalization in Mandarin Chinese. The Morpheme de in the Shi…de Construction. Paris: D.R.L., Université Paris 7.


Paul, Waltraud 2002 Sentence-internal topics in Mandarin Chinese: the case of object preposing. Language and Linguistics 3(4): 695−714.
Paul, Waltraud 2005 Adjectival modification in Mandarin Chinese and related issues. Linguistics 43: 757−793.
Paul, Waltraud 2008 The serial verb construction in Chinese: a tenacious myth and a Gordian knot. The Linguistic Review 25(3−4): 367−411.
Pesetsky, David 2000 Phrasal Movement and its Kin. Cambridge, Mass.: MIT Press.
Ramsey, S. Robert 1987 The Languages of China. Princeton: Princeton University Press.
Reinhart, Tanya 1998 Wh-in-situ in the framework of the Minimalist Program. Natural Language Semantics 6: 29−56.
Rubin, Edward 2003 The structure of modifiers. Ms., University of Utah.
Sagart, Laurent 2001 Vestiges of Archaic Chinese derivational affixes in Modern Chinese dialects. In: Hilary Chappell (ed.), Sinitic Grammar. Synchronic and Diachronic Perspectives, 123−142. Oxford: Oxford University Press.
Shen, Yang, and Rint Sybesma 2010 Jùfǎ jiēgòu biāojì "gěi" yǔ dòngcí jiēgòu de yǎnshēng guānxi [Grammatical marker "gei" and its derivational relation with the VP]. Zhōngguó Yǔwén 2010(3): 222−237.
Shi, Dingxu 2000 Topic and topic-comment constructions in Mandarin Chinese. Language 76(2): 383−408.
Shi, Yuzhi 1996 Proportion of extensional dimensions: the primary cognitive basis for shape-based classifiers in Chinese. Journal of Chinese Language Teachers' Association 31(2): 37−59.
Shi, Yuzhi, and Charles N. Li 2002 The establishment of the classifier system and the grammaticalization of the morphosyntactic particle de in Chinese. Language Sciences 24: 1−15.
Shyu, Shu-ing 1995 The syntax of focus and topic in Mandarin Chinese. Doctoral dissertation, University of Southern California.
Simpson, Andrew 2003 On the status of 'modifying' DE and the structure of the Chinese DP. In: Sze-Wing Tang and Chen-Sheng Liu (eds.), On the Formal Way to Chinese Languages, 74−101. Stanford: CSLI.
Sio, Joanna Ut-seong 2006 Modification and reference in the Chinese nominal. Doctoral dissertation, Leiden University.
Smith, Carlota S. 1990 Event types in Mandarin. Linguistics 28: 309−336.
Smith, Carlota S., and Mary S. Erbaugh 2005 Temporal interpretation in Mandarin Chinese. Linguistics 43(4): 713−756.
Soh, Hooi Ling 2005 Wh-in-situ in Mandarin Chinese. Linguistic Inquiry 36: 143−155.
de Swart, Henriëtte, and Joost Zwarts 2009 Less form − more meaning: why bare singular nouns are special. Lingua 119: 280−295.


Sybesma, Rint 1992 Causatives and accomplishments: the case of Chinese ba. Doctoral dissertation, Leiden University.
Sybesma, Rint 1997 Why Chinese verb-le is a resultative predicate. Journal of East Asian Linguistics 6(3): 215−261.
Sybesma, Rint 1999a The Mandarin VP. Dordrecht: Kluwer Academic Publishers.
Sybesma, Rint 1999b Overt wh-movement in Chinese and the structure of CP. In: H. Samuel Wang, Feng-fu Tsai, and Chin-fa Lien (eds.), Selected Papers from the Fifth International Conference of Chinese Linguistics, 279−299. Taipei: The Crane Publishing Co.
Sybesma, Rint 2007 Whether we Tense-agree overtly or not. Linguistic Inquiry 38(3): 580−587.
Sybesma, Rint 2009 Classifiers, number and countability. Ms., Leiden University.
Sybesma, Rint, and Joanna Sio 2008 D is for Demonstrative. Investigating the position of the demonstrative in Chinese and Zhuang. In: Huba Bartos (ed.), The Linguistic Review, Special Issue on Syntactic Categories and their Interpretation in Chinese, 25(3−4): 453−478.
Tai, James H. Y. 1984 Verbs and times in Chinese: Vendler's four categories. Papers from the Parasession on Lexical Semantics, Chicago Linguistic Society, 289−296.
Tai, James H. Y., and Jane Y. Chou 1975 On the equivalent of "kill" in Mandarin Chinese. Journal of Chinese Language Teachers' Association 10(2): 48−52.
Tang, Chih-Chen Jane 1990 Chinese phrase structure and the extended X'-theory. Doctoral dissertation, Cornell University.
Tang, Chih-Chen Jane 1996 ta mai-le bi shi-zhi and Chinese phrase structure. The Bulletin of the Institute of History and Philology 67(3): 445−502.
Tao, Liang 2006 Classifier loss and frozen tone in spoken Beijing Mandarin: the YI+GE phono-syntactic conspiracy. Linguistics 44(1): 91−133.
Tsai, Wei-Tien Dylan 1994a On economizing the theory of A-bar dependencies. Doctoral dissertation, MIT.
Tsai, Wei-Tien Dylan 1994b On nominal islands and LF extractions in Chinese. Natural Language and Linguistic Theory 12: 121−175.
Tsao, Feng-fu 1987 A topic-comment approach to the ba construction. Journal of Chinese Linguistics 15: 1−53.
Tzeng, Ovid J. L., Sylvia Chen, and Daisy L. Hung 1991 The classifier problem in Chinese aphasia. Brain and Language 41(2): 184−202.
Wáng, Dàoyīng 2005 "Zhè", "Nà" de Zhǐshì Gōngnéng Yánjiū [The Study of the Deictic Function of "This" and "That"]. Shànghǎi: Xuélín.
Wu, Guo 2006 Zhǔwèiwèn − tán "fēiyíwèn xíngshì + ne" yíwènjù [The 'thematic question' − on 'non-interrogative constituent + particle ne' questions]. Yǔyánxué Lùncóng 32: 64−82.


Wu, Mary A. 2006 Can numerals really block definite readings in Mandarin Chinese? In: Raung-fu Chung, Hsien-Chin Liou, Jia-ling Hsu, and Dah-an Ho (eds.), On and Off Work: Festschrift in Honor of Professor Chin-Chuan Cheng on his 70th Birthday, 127−142. Taipei: Institute of Linguistics, Academia Sinica.
Xu, Liejiong, and D. T. Langendoen 1985 Topic structures in Chinese. Language 61: 1−27.
Xuān, Yuè 2008 Wánjiéduǎnyǔ jiǎshè hé Hànyǔ xūhuà jiéguǒ bǔyǔ yánjiū [On the telic phrase hypothesis and weak resultative complements in Chinese]. Doctoral dissertation, Peking University.
Yang, Ning 2007 The indefinite object in Mandarin Chinese: its marking, interpretation and acquisition. Doctoral dissertation, Radboud University, Nijmegen.
Zhang, Hong 2007 Numeral classifiers in Mandarin Chinese. Journal of East Asian Linguistics 16: 43−59.

Lisa L.-S. Cheng, Leiden (The Netherlands)
Rint Sybesma, Leiden (The Netherlands)

44. Japanese

1. Introduction
2. Headed structures
3. Unbounded dependencies
4. Word-order variation
5. Sentence levels
6. References (selected)

Abstract

This article sketches various constructions in Japanese. Concentrating on headed structures, including complement-head structures and adjunct-head structures, it discusses constructions such as causatives and passives, as well as unbounded dependencies including topicalization, relativization, and reflexivization. Other topics include word-order variation and the levels of sentences with respect to the pragmatic status of some of their constituents. Though the presentation is generally informal, formal apparatuses from recent phrase structure grammar theories are occasionally used.

1. Introduction

Typologically, Japanese is an SOV-order language. It is in fact a strictly head-final language. For example, a verb follows its complements, an auxiliary follows its verbal complement, a postposition follows its complement noun phrase, etc. Moreover, a noun follows its modifying adjectives, a sentential complement is marked by a post-sentential complementizer, a conjunction follows the conjunct, and so on. Thus, we have the following paradigm (all examples in this article are Japanese):

(1) a. Naomi-wo miru                [Japanese]
       Naomi-ACC see
       'see Naomi'
    b. [Ken-ga Naomi-wo miru] koto
       Ken-NOM Naomi-ACC see fact
       '(the fact) that Ken sees Naomi'
    c. Ken-to Naomi-to Marie
       Ken-and Naomi-and Marie
       'Ken, Naomi, and Marie'
    d. atui hon
       thick book
       'thick book'
    e. [hon-wo yomi]-tai
       book-ACC read-want
       'want to read a book'

In (1a), the verb miru 'see' follows its object complement Naomi-wo, where wo (sometimes also transcribed as o) is a postposition for accusative marking. In (1b), the subject complement Ken-ga (ga is a postposition for nominative marking) precedes the verb phrase Naomi-wo miru 'see Naomi'. In this case, the verb phrase is a phrasal head of the sentence Ken-ga Naomi-wo miru. Unlike English, Japanese verbal heads, whether lexical or phrasal, always follow their complements. In addition, the complementizer koto is the head of the entire phrase, making the sentence a nominal sentential complement. (1c) shows that the nominal conjunction to 'and' follows each nominal conjunct except the last one. In (1d), the adjective atui 'thick' precedes the noun hon 'book'. In this case, the noun is the head of the noun phrase atui hon 'thick book', and the adjective is the adjunct (modifier) of the head noun. (1e) shows the case of an auxiliary following a verb phrase. In this case, the auxiliary tai 'want' is a lexical head of the larger verb phrase hon-wo yomi-tai 'want to read a book', in which the inner verb phrase hon-wo yomi 'to read a book' is the complement. The "auxiliary" is a head of a verbal phrase in the sense that the tense and/or aspect of the sentence is carried by it. Moreover, the polarity of the sentence is determined by whether the auxiliary is of the negative form or not. Here are more examples:

(2)

a. Ken-ga hon-wo yomi-takat-ta.
   Ken-NOM book-ACC read-want-PST
   'Ken wanted to read a book.'
b. Ken-ga hon-wo yomi-taku-nai.
   Ken-NOM book-ACC read-want-NEG
   'Ken does not want to read a book.'

The structures of (2a) and (2b) are roughly as indicated in (3a) and (3b), respectively:

(3) a. [S [S Ken-ga [VP [VP hon-wo yomi] takat]] ta]
    b. [S [S Ken-ga [VP [VP hon-wo yomi] taku]] nai]

Thus, sentence-final auxiliaries are verbal heads taking either a verb phrase or a sentence as their complements. Moreover, the past tense morpheme and the negation morpheme are also heads taking a sentential phrase as their complements. In this regard, the order of SOV is more appropriately characterized as S(OV), where the substructure consisting of the object and the head verb (OV) is also a phrasal head. The head-final nature of Japanese is further exemplified by the internal structure of both the subject and the object; they consist of a noun phrase complement and a postpositional head following it. This head-final nature is also demonstrated by the fact that the verb may be followed by an auxiliary morpheme. Schematically, a typical Japanese sentence has the following fundamental structure:

(4) [Tree diagram: a sentence S branches into a complement (C) followed by a head (H), and each complement and head may in turn branch into a complement followed by its head, with every head final in its phrase.]

where C stands for a complement and H for a head. Each complement (and head) may consist of its own head and complement or adjunct. Japanese sentences are distinguished as belonging to one of several "levels" depending on the head morpheme, which adds a communicative force to the sentence. For example, the question marker no indicates that the content of the sentential complement is a question:

(5)

Ken-ga hon-wo yomi-tai-no?
Ken-NOM book-ACC read-want-Q
'Does Ken want to read a book?'

In the following, I will sketch various constructions in Japanese. The next section concentrates on headed structures; in particular, on complement-head structures, which include causatives and passives. Some specific cases of adjunct-head structures are discussed in the following section on unbounded dependencies, where both topicalization and relativization, typical examples of unbounded dependencies, are discussed as representative cases of adjunct structures. The case of reflexivization is included in this section since, unlike English, Japanese reflexives are not subject to the "clausemate" condition, which restricts the domain of reflexivization to a single clause. The next section deals with word-order variation involving "scrambling" and "floating" quantifiers. In the final section, the "levels" of sentences mentioned above are discussed with respect to the pragmatic status of some of the constituents.

The theoretical apparatus used for the description of specific syntactic constructions in Japanese is based on generative grammar, though the presentation is generally informal. In some cases, where more formality is necessary, the concepts of phrase structure grammar (cf. Gazdar et al. 1985; Pollard and Sag 1987, 1994; Sag, Wasow, and Bender 2003; see also IV. 27) are mostly used, but occasionally analyses based on other generative frameworks, especially Minimalism or its precursors (cf. Chomsky 1965, 1981, 1986, 1995; see also IV. 24), are mentioned for comparison. See Gunji (1987, 1999) and Gunji and Hasida (ed.) (1998a) for more formal presentations of the phrase structure grammar analysis of Japanese sketched in this article. The final section concentrates on the "functional" aspects of Japanese, rather than the "formal" aspects discussed in the preceding sections. The description in this section is accordingly more informal than that in the other sections.

2. Headed structures

2.1. Verbs and complements

Japanese verbs always follow their complements. Thus, a typical sentence consists of a series of complements followed by the verb in sentence-final position:

(6) a. Ken-ga heya-ni iru.
       Ken-NOM room-LOC be
       'Ken is in the room.'
    b. Ken-ga Naomi-wo heya-de mi-ta.
       Ken-NOM Naomi-ACC room-LOC see-PST
       'Ken saw Naomi in the room.'
    c. Ken-ga Naomi-ni hon-wo age-ta.
       Ken-NOM Naomi-DAT book-ACC give-PST
       'Ken gave Naomi a book.'
    d. Ken-ga Naomi-ni at-ta.
       Ken-NOM Naomi-DAT meet-PST
       'Ken met Naomi.'
    e. Ken-ni Eigo-ga hanaseru koto
       Ken-DAT English-NOM can.speak fact
       'the fact that Ken can speak English'
    f. Ken-ga Naomi-ga sukina koto
       Ken-NOM Naomi-NOM like fact
       'the fact that Ken likes Naomi'

In (6a), heya-ni 'in the room' is usually considered to be a locative adjunct (modifier) and not a complement of the verb iru 'be'. However, the exact distinction between a complement and an adjunct is sometimes very difficult to draw. Superficially, both complements and adjuncts precede the head verb, and they often have the same internal structure: a noun phrase followed by a postposition. Moreover, the same postposition can sometimes be used for different case markings (e.g., ni for dative and locative). In this article, I will not go into the details of the distinction between complements and adjuncts, and follow the somewhat intuitive distinction between transitive and intransitive verbs. The usual subject marker is the nominative postposition ga. Some verbs, however, take a dative subject, headed by ni (cf. [6e]). The direct object is usually marked by the accusative postposition wo (cf. [6b]), while the indirect object of a ditransitive verb is headed by the dative postposition ni (cf. [6c]). There are some transitive verbs that take a dative object (cf. [6d]), and there are some stative verbs (cf. [6e]) and adjectives (cf. [6f]) that take a nominative object. The choice of the postposition is lexically determined by the verbal head and is part of the subcategorization properties of verbal categories. Japanese adjectives in predicative use directly follow their subjects, without a copula. They have their own inflectional pattern when followed by a postverbal marker such as a tense marker. In this sense, Japanese adjectives actually constitute a subcategory of verbs with a distinct inflectional pattern. As for inflection, Japanese lacks the syntactic concepts of person, number, and gender, though, of course, it is possible to express these concepts linguistically in semantic and pragmatic terms.
Thus, the verbal inflection only occurs when verbal categories are followed by some particular morphological markers; there is no agreement in the sense of European languages. There has been some controversy over whether the object complement and the transitive verb form a constituent in Japanese phrase structure. Some have treated all the complements as sisters of the verb. Hence the subject, the object, and possibly other complements and adjuncts, are all arranged in a "flat" structure as follows:

(7) [S [PP Ken-ga] [PP Naomi-wo] [PP heya-de] [V mita]]

(cf. Inoue 1976; Shibatani 1978; Farmer 1984; Ishikawa 1985; among others, for this position). However, after X′-theory became widely acknowledged in Japanese syntax in the mid-1980s, more attention has been paid to the phrase structure of Japanese, and a more hierarchical structure such as the following has come to be preferred:

(8) [S [PP Ken-ga] [VP [PP Naomi-wo] [TVP [PP heya-de] [V mita]]]]

(cf. Hasegawa 1980; Saito 1985; Hoji 1985; Fukui 1986; Gunji 1987; among others, for such a position). Some of the arguments for the existence of a VP-node in Japanese concern the possibility of pronominal reference, where a structure-dependent notion such as "c-command" plays a crucial role in specifying the condition of coreference (cf. Saito and Hoji 1983). Similar arguments concern the possibility of reflexivization, to which I will return in section 3.3. These arguments, of course, are based on the assumption that the phrase structure is central to the determination of syntactic conditions. The approaches taking such an assumption include the Government-Binding theory (cf. Saito 1985; Hoji 1985; Fukui 1986; Nishigauchi 1990; among others) and phrase structure grammar (cf. Gunji 1987). Other approaches, notably Lexical-Functional Grammar (cf. Kaplan and Bresnan 1982), may not be required to assume a hierarchical phrase (constituent) structure (cf. Ishikawa 1985 for the case of Japanese). However, since essentially the same hierarchical structure is reproduced in their functional structure, and a similar notion of "f-command" is used in the description of pronominalization in Lexical-Functional Grammar, the need for a hierarchical structure of some form seems to be unanimously acknowledged in the case of Japanese syntax, contrary to some claims that Japanese is one of the so-called "nonconfigurational" languages like Warlpiri (cf. Hale 1980). A brief note before proceeding is in order. In some analyses, the subject and the object are assumed to be noun phrases, with the postposition being some kind of suffix. In these analyses, Ken-ga and Naomi-wo above are treated as NPs, while heya-de is treated as a PP. In this article, however, every phrase headed by a postposition is treated as a PP, emphasizing the status of the postposition as the head.
This also reflects the fact that the grammatical relation of such a phrase is determined solely by the head postposition; the complement noun phrase itself has no effect on the grammatical status of the postpositional phrase. In the phrase structure grammar analysis, complements of a verb are represented by the value of the argument structure feature ARG-ST, which is a list of the complements of the verb. For example, the transitive verb mi ‘see’ has the following ARG-ST feature: (9)

mi: [ARG-ST ⟨PP, PP⟩]

The members in the ARG-ST value are ordered according to their obliqueness. Thus, since the subject is the least oblique, it is always the first (leftmost) member of the ARG-ST
list. The direct object, if any, follows the subject, and the indirect object and other oblique objects come further to the right. While the ARG-ST feature represents a lexical property of the verb, valence features represent the status of saturation of a phrasal category. For valence features, SUBJ and COMPS features are used. They are canonically related to one another in the following way by an argument realization principle (see section 3 for noncanonical cases):

(10) mi: [ARG-ST ⟨[1]PP, [2]PP⟩, SUBJ ⟨[1]⟩, COMPS ⟨[2]⟩]

where [1], as the value of the SUBJ feature, indicates that it is identical to the first member of the ARG-ST list; similarly for [2]. Thus, canonically, the first member in the ARG-ST list is realized as the subject and the second as the direct object. The value of a valence feature becomes empty when the category takes a complement or a subject to form a phrase. For example, when a transitive verb takes one complement (object) and forms a verb phrase such as Naomi-wo mi 'see Naomi', the verb phrase has the following feature structure:

(11) Naomi-wo mi: [SUBJ ⟨[1]PP⟩, COMPS ⟨ ⟩]

Naturally, the values of the valence features of a full sentence like Ken-ga Naomi-wo mi are both empty.

(12) Ken-ga Naomi-wo mi: [SUBJ ⟨ ⟩, COMPS ⟨ ⟩]

It is often the case in Japanese that some of the complements are suppressed in the sentence. If a complement is obvious from the context, it will be replaced by a "zero pronoun", a phonologically null pronominal. In general, a "zero" pronoun will appear in Japanese where an overt pronoun would be used in English.

(13) A: Ken-ga Naomi-wo mita-no?
        Ken-NOM Naomi-ACC saw-Q
        'Did Ken see Naomi?'
     B: Un, ∅ ∅ mita.
        yes he her saw
        'Yes, he saw her.'

"Zero" pronouns are often used in the place of the first and second person pronouns. In this sense, Japanese is a highly context-dependent language.
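The canonical argument realization principle and the stepwise saturation of the valence features illustrated in (10)-(12) can be given a small computational sketch. The following Python fragment is purely illustrative: the list encoding of the ARG-ST, SUBJ, and COMPS features, the miniature lexical entry, and the combine routine are simplifying assumptions made for exposition, not part of any published implementation of this analysis.

```python
# Illustrative sketch (not from the literature): the canonical argument
# realization principle maps the first (least oblique) ARG-ST member to
# SUBJ and the remaining members to COMPS; combining the head with a
# dependent then empties the corresponding valence list.

def realize(arg_st):
    """Relate ARG-ST to the valence features SUBJ and COMPS."""
    return {"ARG-ST": list(arg_st),
            "SUBJ": list(arg_st[:1]),
            "COMPS": list(arg_st[1:])}

def combine(head, dependent):
    """Saturate one valence requirement of the head: complements are
    discharged before the subject (head-final order)."""
    result = dict(head)
    if head["COMPS"]:
        assert dependent == head["COMPS"][-1], "wrong complement"
        result["COMPS"] = head["COMPS"][:-1]
    else:
        assert dependent == head["SUBJ"][0], "wrong subject"
        result["SUBJ"] = []
    return result

# mi 'see' is transitive: ARG-ST <PP[ga], PP[wo]>, cf. (9)-(10)
mi = realize(["PP[ga]", "PP[wo]"])

vp = combine(mi, "PP[wo]")      # Naomi-wo mi, cf. (11)
s = combine(vp, "PP[ga]")       # Ken-ga Naomi-wo mi, cf. (12)

print(vp["SUBJ"], vp["COMPS"])  # ['PP[ga]'] []
print(s["SUBJ"], s["COMPS"])    # [] []
```

As in (11) and (12), the verb phrase retains an unsaturated SUBJ requirement, while the full sentence has both valence lists empty.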


2.2. Causativization

Japanese causativization is characterized by the auxiliary morpheme sase 'cause', which semantically corresponds to either coercive causation or permissive causation. The distinction essentially depends on the context. Thus, the following causative sentence can be ambiguous.

(14) Ken-ga Naomi-ni hon-wo yom-ase-ta.
     Ken-NOM Naomi-DAT book-ACC read-CAUS-PST
     'Ken made/let Naomi read the book.'

Note that ase is an allomorph of sase, where the first consonant of sase is dropped following a verb whose stem ends with a consonant. The following phrase structure tree is based on the assumption that causativization is a case of complement-head structure, where the causative morpheme as a head takes a verb phrase as a complement (the tense marker is omitted for simplicity) and forms a transitive verb phrase (TVP):

(15) [S [PP Ken-ga] [VP [PP Naomi-ni] [TVP [VP [PP hon-wo] [V yom]] [V sase]]]]

Some analyses treat a sequence like yomase as a complex verb (cf. Miyagawa 1980; Ishikawa 1985; Manning, Sag, and Iida 1999; among others) and assume that it behaves as if it were a single lexical item. In this article, the causative morpheme sase is assumed to be the head of a TVP and to take a verb phrase as its complement. One of the reasons for the current treatment is the fact that the complement can be a complex verb phrase consisting of coordinated VPs:

(16) Ken-ga Naomi-ni [VP[VP kare-no hon-wo yomi] [VP syohyoo-wo kak]] ase-ta.
     Ken-NOM Naomi-DAT he-GEN book-ACC read review-ACC write CAUS-PST
     'Ken made/let Naomi read his book and write a review.'

This sentence shows that the causative morpheme sase can be considered to attach not to a lexical verb but to a verb phrase. See Gunji (1999) for more discussion of this type of structure, and Manning, Sag, and Iida (1999) for counterarguments.


In yet other analyses, notably those within the Government-Binding framework, an empty pronoun PRO is posited as the subject of the complement of sase, making the complement a full sentence rather than a verb phrase:

(17) Ken-ga Naomii-ni [S PROi hon-wo yom] sase

The PRO is assumed to be given the same (semantic) "index" as the object PP by the "Control Theory"; hence it is made to be semantically identical to the object PP. That is, causativization is a case of object control. In the analysis where the causative morpheme sase takes a VP complement, object control is expressed as the lexical property of sase. The value of ARG-ST of this morpheme consists of three elements: the subject, the object, and the verb phrase. The fundamental idea is that the semantic value of the object in the ARG-ST value of the causative is identical to the semantic value of the subject (the SUBJ value) of the verb phrase that is in the ARG-ST value of the causative. To put this more straightforwardly, the following scheme will give the general idea:

(18) sase: [ARG-ST ⟨PP, PPi, VP[SUBJ ⟨PPi⟩]⟩]

where the indices represent the semantic values of the categories. The lexical information of sase shown in (18) explicitly states the object control property of causativization without involving an abstract empty category. The object-marker for causativization has been the source of some controversy, since two distinct markers are usually possible: the dative ni and the accusative wo. The former has sometimes been associated with permissive causation, the latter with coercive causation. However, such a distinction, if any, is easily overridden by the context and not universally acknowledged. Moreover, the choice of the object-marker is limited to ni when the embedded verb phrase has its own object marked by wo, which is known as the "Double wo Constraint" (cf. Shibatani 1976). That is, we cannot use wo in the place of ni in (14):

(19) *Ken-ga Naomi-wo hon-wo yom-ase-ta.
      Ken-NOM Naomi-ACC book-ACC read-CAUS-PST

Thus, the distinction between the ni-marked and the wo-marked objects is somewhat vague and not so clear-cut as is sometimes claimed.
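The object-control property stated in (18), namely the identity between the causee object and the unexpressed subject of the VP complement, can likewise be sketched computationally without positing an empty PRO. The encoding below (plain integers for semantic indices, Python dictionaries for categories) is an illustrative assumption introduced here, not a rendering of any published implementation.

```python
# Illustrative sketch (integers as semantic indices, dicts as signs;
# assumptions made here, not a published implementation): the causative
# sase shares the causee object's index with the SUBJ of its VP
# complement, expressing object control without an empty PRO.

import itertools

_fresh = itertools.count(1)          # supply of new semantic indices

def pp(index=None):
    return {"cat": "PP", "index": next(_fresh) if index is None else index}

def sase(vp_complement):
    """Build the ARG-ST of (18): <PP (causer), PP_i (causee),
    VP[SUBJ <PP_i>]>, with i shared between causee and VP subject."""
    causer, causee = pp(), pp()
    vp = dict(vp_complement)
    vp["SUBJ"] = [pp(index=causee["index"])]   # index sharing = control
    return {"ARG-ST": [causer, causee, vp]}

# hon-wo yom 'read the book': a VP whose object is already saturated
hon_wo_yom = {"cat": "VP", "SUBJ": [pp()], "COMPS": []}

entry = sase(hon_wo_yom)
causee = entry["ARG-ST"][1]
embedded_subj = entry["ARG-ST"][2]["SUBJ"][0]
print(causee["index"] == embedded_subj["index"])  # True: object control
```

The same index-sharing scheme carries over directly to the intransitive passive entry in (27), which has the same shape of argument structure.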

2.3. Passivization

Like causativization, Japanese passivization is characterized by the morpheme rare, which takes a verbal phrase as its complement. Passivization is slightly more complicated than causativization, since there are apparently two types of passive morpheme rare, which behave differently in both syntactic and semantic terms.


One type of passive morpheme subcategorizes for a transitive verb phrase (TVP) to form a larger TVP. This type corresponds to the passive in English and many other languages. The semantic function of this type of morpheme is to "promote" the object to the subject, with the subject "demoted" to an (oblique) object (marked by ni). We have the following example:

(20) a. Ken-ga Naomi-wo home-ta.
        Ken-NOM Naomi-ACC praise-PST
        'Ken praised Naomi.'
     b. Naomi-ga Ken-ni home-rare-ta.
        Naomi-NOM Ken-DAT praise-PASS-PST
        'Naomi was praised by Ken.'

As with the causative, the fact that rare does not simply attach to a lexical transitive verb, but instead subcategorizes for a transitive verb phrase, is exemplified by the following sentence:

(21) a. Naomi-ga Ken-ni kare-no hon-wo yom-ase-rare-ta.
        Naomi-NOM Ken-DAT he-GEN book-ACC read-CAUS-PASS-PST
        'Naomi was made to read Ken's book by him.'
     b. [S [PP Naomi-ga] [VP [PP Ken-ni] [TVP [TVP [VP kare-no hon-wo yom] [V sase]] [V rare]]]]

Note that the causative sase subcategorizes for a VP. The phrase consisting of the VP and sase is a TVP, subcategorizing for a subject and an object. This TVP becomes the complement of rare, forming another TVP. Some derivational analyses (e.g., the Government-Binding theory) assumed, just as for passivization in English, an application of the Move transformation for Japanese passivization. Thus, a passive sentence like (20b) was derived from a d-structure like the following:

(22) a. e Ken Naomi home-rare-ta.
        Ken Naomi praise-PASS-PST
     b. Naomii-ga Ken-ni ti home-rare-ta.
        Naomi-NOM Ken-by praise-PASS-PST
        'Naomi was praised by Ken.'


In this type of analysis, the coindexing between the trace left by the application of Move and the subject is assumed to assure the appropriate semantic role of the subject with respect to the verb. In a phrase structure like (21b), the semantic correspondence is assumed to be designated in the lexical property of the passive morpheme, just like the case of the causative we have seen. The argument structure of rare has the following value:

(23) rare (transitive): [ARG-ST ⟨PPi, PPj, TVP[SUBJ ⟨PPj⟩, COMPS ⟨PPi⟩]⟩]

Thus, semantically, the subject of the complement TVP corresponds to the (oblique) object of rare, while the object of the complement TVP corresponds to the subject of the passive morpheme. In this analysis, no empty category like the "trace" (ti) in (22b) is assumed to assure the semantic correspondence. The lexical property specified for rare in the lexicon represents both syntactic and semantic information. Japanese is known to have a second type of passivization. The morphologically identical morpheme rare subcategorizes for an intransitive verb phrase (VP), rather than a TVP, as its complement. The following is a typical example.

(24) a. Naomi-ga nai-ta.
        Naomi-NOM cry-PST
        'Naomi cried.'
     b. Ken-ga Naomi-ni nak-are-ta.
        Ken-NOM Naomi-DAT cry-PASS-PST
        '(lit.) Ken was (adversely) affected by Naomi's crying.'

Note that, in (24b), are is an allomorph of rare, which loses the first consonant following a consonant-ending verb stem. In this case, since the complement of the passive morpheme is an intransitive verb nak 'cry', which subcategorizes for only a subject, the syntactic function of the morpheme is only to "demote" the subject of the complement VP to an (oblique) object. The subject of the passive sentence is supplied independently of what the complement VP subcategorizes for. The phrase structure of (24b) is as follows:

(25) [S [PP Ken-ga] [VP [PP Naomi-ni] [TVP [VP nak] [V are]]]]

The following examples show that the complement of this type of rare is a verb phrase, not merely a lexical verb:


(26) a. Ken-ga Naomi-ni kodomo-wo home-rare-ta.
        Ken-NOM Naomi-DAT child-ACC praise-PASS-PST
        'Ken was affected by Naomi's praising his child.'
     b. Ken-ga Naomi-ni [VP[VP kare-no hon-wo yomi] [VP syohyoo-wo kak]] are-ta.
        Ken-NOM Naomi-DAT he-GEN book-ACC read review-ACC write PASS-PST
        'Ken was (adversely) affected by Naomi's reading his book and writing a review.'

The phrase structure grammar analysis of this type of passive morpheme is again based on the lexical property of this type of rare:

(27) rare (intransitive): [ARG-ST ⟨PP, PPj, VP[SUBJ ⟨PPj⟩]⟩]

which is exactly the same type of argument structure as that of the causative morpheme (cf. [18]). As with causatives, analyses with empty categories might assume an embedded sentence with PRO as the subject for this type of passive:

(28) Ken-ga Naomii-ni [S PROi nak] are
     Ken-NOM Naomi-DAT cry PASS

The existence of two types of passives in Japanese has been a source of controversy in transformational grammar. Instead of assuming a movement transformation for the passive of the type of (22) (often called the "direct" passive) (cf. McCawley 1972; Kuno 1973), some have argued that the two should have the same kind of underlying structure, namely, the one with an embedded sentence like (28) (cf. Kuroda 1965; Howard and Niyekawa-Howard 1976; Kuno 1983; among others). The latter, so-called "uniform", theory is known to have the defect that it fails to explain the nonambiguous interpretation of the reflexive in the "direct" passive, to which I will return in section 3.3. Even though the passive of the type of (24b) has often been called the "adversity" passive, the adversity connotation is sometimes missing. For example, even though (26a) is syntactically of the "adversity" type, the "adversity" interpretation is hardly present. Moreover, even a "direct" passive sometimes has an adversity connotation (cf. [21a]). Thus, passives like (24b) are sometimes called "indirect" passives without mention of any "adversity" connotation. In this article, these two kinds of passive are distinguished by their subcategorization property as intransitive and transitive passives, respectively. (See Kuno 1983 for an attempt to explain the origin of the "adversity" connotation in intransitive passives.)


2.4. Adjuncts

Japanese adjuncts are of two types: adnominal and adverbial. The former adjoins to nominal heads and includes adjectives of attributive use, postpositional phrases headed by no, and a closed class of lexical adnominals (rentaisi). As with complements, adjuncts precede the head they adjoin to. Here are some examples:

(29) a. utukusii hana
        beautiful flower
        'beautiful flower'
     b. Ken-no hon
        Ken-GEN book
        'Ken's book'
     c. ookina kuruma
        large car
        'large car'

The form of adjectives in attributive use is indistinguishable from their form in predicative use. In this sense, they can be considered to be cases of relative clauses. In fact, verbs can also be used attributively in the prenominal position:

(30) a. yoku hasiru kuruma
        well run car
        'car that runs well'
     b. kesa tunda hana
        this.morning picked flower
        'flower someone picked this morning'

I will discuss relative clauses in section 3.2. The genitive postposition no is unique in that it makes an adnominal phrase: all the other postpositions either make a postpositional phrase as a complement of a verb or an adverbial phrase as an adjunct to a verbal head. The semantic content of no is quite general and covers any plausible relationship between the two noun phrases connected by it. The lexical adnominal ookina is semantically identical to the adjective ookii, but the former does not have an inflected form and is never used predicatively:

(31) a. sono kuruma-ga ookii / ookikat-ta.
        that car-NOM large / large-PST
        'That car is large / was large.'
     b. *sono kuruma-ga ookina.
         that car-NOM large

The class of lexical adnominals includes tiisana 'small', sono 'that', kono 'this', etc.


The adverbials include inflected forms of adjectives, postpositional phrases headed by various postpositions, and a small closed class of lexical adverbs.

(32) a. utukusiku saita hana
        beautifully blossomed flower
        'flower that blossomed beautifully'
     b. Tegami-ga Ken-kara kita.
        letter-NOM Ken-from came
        'A letter came from Ken.'
     c. itumo utukusii hana
        always beautiful flower
        'flower that is always beautiful'

The adnominal postposition no can convert some of the adverbials to adnominals:

(33) a. Ken-kara-no tegami
        Ken-from-GEN letter
        'a letter from Ken'
     b. itumo-no hana
        always-GEN flower
        'the usual flower'

3. Unbounded dependencies

In this section, two productive adjunct formation processes are discussed, namely topicalization and relativization. Both involve a gap in a sentential constituent, and the gap can be related to a constituent separated from it by an unlimited number of sentence boundaries. Hence, they are treated as subcases of unbounded dependencies.

3.1. Topicalization

Japanese topicalization is just like its counterpart in English in that the topic appears at the sentence-initial position as an adverbial adjunct, with the verbal (sentential) head following the topic. The topic is always a postpositional phrase headed by wa. We have the following examples:

(34) a. Ken-wa Naomi-wo aisiteiru.
        Ken-TOP Naomi-ACC love
        'Ken loves Naomi.'
     b. Ano otoko-wa Ken-ga kiratteiru.
        that man-TOP Ken-NOM dislike
        'That man, Ken dislikes.'


     c. Ken-kara-wa tegami-ga konakatta.
        Ken-from-TOP letter-NOM did.not.come
        'From Ken, no letter came.'

As can be seen in (34a) and (34b), the usual postpositions for the nominative subject (ga) and the accusative object (wo) are suppressed when they are followed by the topic marker wa. Other postpositions remain and appear before wa; (34c) is an example of such a case. In this case, the adverbial adjunct phrase Ken-kara 'from Ken' is topicalized. The sentential heads in the above examples all have a gap somewhere: (34a) has a gap at the subject position, (34b) at the object position, and (34c) at an adverbial adjunct position just before the verb. Thus, derivational analyses assume an application of Move to derive these kinds of topicalized sentences, and hence they have a trace somewhere:

(35) a. Keni-wa ti Naomi-wo aisiteiru.
        Ken-TOP Naomi-ACC love
        'Ken loves Naomi.'
     b. Ano otokoi-wa Ken-ga ti kiratteiru.
        that man-TOP Ken-NOM dislike
        'That man, Ken dislikes.'
     c. Ken-karai-wa tegami-ga ti konakatta.
        Ken-from-TOP letter-NOM did.not.come
        'From Ken, no letter came.'

The phrase structure grammar analysis treats a gap by designating it as the value of one of the features concerning binding: GAP. Roughly speaking, a phrasal node dominating a gap has the category corresponding to the gap as the value of GAP. For example, the node corresponding to Ken-ga kiratteiru in (34b) has the following feature structure (the tense and aspect are ignored here):

(36) Ken-ga kiratteiru: [SUBJ ⟨ ⟩, COMPS ⟨ ⟩, GAP ⟨PP⟩]

Since the values of the valence features are empty, it is a kind of sentence, with the gap corresponding to the object. The argument realization principle allows the head verb to have the following feature structures:

(37) a. kiratteiru (canonical): [ARG-ST ⟨[1]PP, [2]PP⟩, SUBJ ⟨[1]⟩, COMPS ⟨[2]⟩, GAP ⟨ ⟩]
     b. kiratteiru (gapped): [ARG-ST ⟨[1]PP, [2]PP⟩, SUBJ ⟨[1]⟩, COMPS ⟨ ⟩, GAP ⟨[2]⟩]

(37a) is the canonical case. In (37b), on the other hand, the second member of ARG-ST, instead of being realized as a complement, is put in the value of GAP. The information about the gap as the value of GAP is propagated from the lexical verbal head kiratteiru to upper nodes by a principle concerning the inheritance of binding features. This principle essentially requires that the value of a binding feature of a mother node be identical to the union of the values of the binding feature of all its daughters. Thus, if a head has a gap, that information is propagated to the mother node, and this process is repeated up to the sentential head of the topic. At the very top node, the topic semantically binds the value of the GAP of the head. This mechanism requires information about the gap to be propagated only locally, in the sense that a node in the phrase structure tree is related only to its mother, sisters, and/or daughters. That is, a node cannot be directly related to an "aunt" node or a "grandmother" node, for example. The unbounded nature of topicalization is illustrated in the following examples:

(38) a. Sono hon-wa Naomi-ga Ken-ga yonda-to omotteiru.
        that book-TOP Naomi-NOM Ken-NOM read-COMP think
        'That book, Naomi thinks Ken has read.'
     b. Ken-kara-wa Naomi-ga moo tegami-ga konai-to itteiru.
        Ken-from-TOP Naomi-NOM more letter-NOM not.come-COMP say
        'From Ken, Naomi says that no letter will come.'

These are natural consequences of the assumption that both Move and the GAP propagation mechanism can cross the sentence boundary. Another class of examples of topicalization involves sentences without a gap:

(39) a. Zoo-wa hana-ga nagai.
        elephant-TOP trunk-NOM long
        'As for elephants, their trunks are long.'
     b. Susi-wa edomae-ga umai.
        sushi-TOP Edo.style-NOM delicious
        'As for sushi, the Edo style is delicious.'

In this type of topicalized sentence, the topic is generated at the sentence-initial position even in transformational analyses.
Since the semantic relationship between the topic and the sentential head in these sentences varies depending on the sentence and its context, we can only say that there is some kind of “topic-comment” relationship between the topic and the sentential head. I will return to the status of the “topic” in the final section.
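The propagation of GAP described above, whereby the mother's GAP value is the union of its daughters' values and the topic discharges a matching gap at the top node, can be sketched as follows. The tree encoding and function names are illustrative assumptions made here for exposition, not part of the formalism itself.

```python
# Illustrative sketch (tree encoding assumed here, not from the text):
# a mother's GAP value is the union of its daughters' GAP values, and
# the topic binds (discharges) a matching gap at the top node.

def gaps(node):
    """Compute a node's GAP value bottom-up; each node consults only
    its daughters, so the dependency is propagated strictly locally."""
    if "gap" in node:                      # a gap contributes itself
        return {node["gap"]}
    collected = set()
    for daughter in node.get("daughters", []):
        collected |= gaps(daughter)
    return collected

def topicalize(topic, sentence):
    """The topic discharges one matching gap of the sentential head."""
    g = gaps(sentence)
    assert topic in g, "no matching gap for the topic"
    return {"label": "S", "GAP": g - {topic}}

# (34b) Ano otoko-wa [Ken-ga __ kiratteiru]: an object gap PP[wo]
s = {"label": "S", "daughters": [
    {"label": "PP", "word": "Ken-ga"},
    {"label": "VP", "daughters": [
        {"label": "PP", "gap": "PP[wo]"},   # phonologically empty
        {"label": "V", "word": "kiratteiru"}]}]}

top = topicalize("PP[wo]", s)
print(gaps(s))        # {'PP[wo]'}: propagated up from the object gap
print(top["GAP"])     # set(): the dependency is discharged at the top
```

Because the propagation step crosses each node boundary uniformly, embedding the gapped clause under further verbs, as in (38), adds nothing new: the same union operation carries the gap across any number of sentence boundaries.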


3.2. Relativization

There is much parallelism between topicalization and relativization in Japanese. The only contrast is that, while in topicalization the topic is an adverbial adjunct to a sentential head, in relativization it is the relative clause that is an adnominal adjunct to a nominal head. Since Japanese does not have counterparts of relative pronouns, the relative clause immediately precedes the nominal head.

(40) a. Naomi-wo aisiteiru Ken
        Naomi-ACC love Ken
        'Ken, who loves Naomi'
     b. Ken-ga kiratteiru ano otoko
        Ken-NOM dislike that man
        'that man, who Ken dislikes'
     c. Ken-ga kiratteiru otoko
        Ken-NOM dislike man
        'a man who Ken dislikes'

In general, there is no syntactic distinction between restrictive and nonrestrictive relative clauses. The nature of the nominal head and the context usually determine which is intended. A usual interpretation of (40b) would involve a nonrestrictive relative clause, while that of (40c) would involve a restrictive one. The mechanisms used to analyze relative clauses are exactly the same as those for topicalization: Move in the derivational theory and the GAP feature in phrase structure grammar. Again, we can see the unbounded nature of these mechanisms.

(41) a. Naomi-ga Ken-ga yonda-to omotteiru hon
        Naomi-NOM Ken-NOM read-COMP think book
        'book which Naomi thinks Ken has read'
     b. Naomi-ga moo tegami-ga konai-to itteiru Ken
        Naomi-NOM more letter-NOM not.come-COMP say Ken
        'Ken, who Naomi says that no letter will come from'

As with topicalization, relative clauses without a gap are also abundant in Japanese:

(42) a. hana-ga nagai zoo
        trunk-NOM long elephant
        'elephants, whose trunks are long'
     b. edomae-ga umai susi
        Edo.style-NOM delicious sushi
        'sushi, of which the Edo style is delicious'

As has been mentioned in section 2.4, a single adjective or an intransitive verb can constitute a relative clause. In such cases, it will look exactly like a lexical adnominal adjunct:


(43) a. akai hana
        red flower
        'red flower'
     b. sinda hito
        died person
        'person who died'

Note that akai in (43a) and sinda in (43b) are considered to be sentences with subject gaps, rather than a lexical adjective or verb. In this respect, an attributive adjective and a relative clause whose head is an adjective are hardly distinguishable. In fact, the former can be considered to be simply a case of relativization.

3.3. Reflexivization

Reflexivization in Japanese is indicated by a special noun zibun, which corresponds to "self" without reference to gender, person, or number. The antecedent of zibun is a "subject" in some sense. It is not necessarily restricted to the subject in the same clause in which the reflexive appears. In this sense, the so-called "clause-mate" condition of English or a similar restriction does not apply in Japanese. Moreover, the antecedent subject of zibun need not be overtly expressed, either. This will result in apparent object control of the reflexive (see [47] below). Since zibun is syntactically an ordinary noun, it can be followed by any postposition, including the genitive no.

(44) a. Ken-ga zibun-wo butta.
        Ken-NOM self-ACC hit
        'Ken hit himself.'
     b. Ken-ga Naomi-ni zibun-no hon-wo miseta.
        Ken-NOM Naomi-DAT self-GEN book-ACC showed
        'Ken showed Naomi his/*her book.'
     c. Ken-ga Naomi-ga zibun-no hon-wo yabutta-to itta.
        Ken-NOM Naomi-NOM self-GEN book-ACC tore-COMP said
        'Ken said that Naomi tore his/her book.'

Note that in (44b), the object cannot be the antecedent of zibun. Since there are two subjects in (44c), one in the embedded clause and the other in the matrix clause, zibun is ambiguous in this sentence. One way to handle the unbounded nature of reflexivization in phrase structure grammar would be to posit a binding feature REFL, which takes a PP as its value (cf. Gunji 1987, 2010). Then, the lexical specification of zibun is as follows:

(45) zibun: N[REFL ⟨PPi⟩]i

That is, the semantic value of zibun is identical to the semantic value of the PP in the value of the REFL feature. The value of REFL is propagated in the same way as GAP and
becomes identical to a subject in the value of SUBJ at a verb phrase node. For example, the VP zibun-wo butta 'hit self' in (44a) has the following feature specification:

(46) a. butta: [SUBJ ⟨PPj⟩, COMPS ⟨PP⟩, REFL ⟨ ⟩]
     b. zibun-wo butta: [SUBJ ⟨PPj⟩, COMPS ⟨ ⟩, REFL ⟨PPi⟩]

Due to a construction-specific constraint on VP, the PPi in the value of REFL propagated from zibun can be identified with the PPj in the value of SUBJ in a VP (hence i becomes equal to j), in which case the value of REFL is no longer propagated upward. The PPj in the value of SUBJ in turn becomes identical to the subject of the sentence, Ken, and hence the reflexive eventually becomes coreferential with the subject of the sentence. Thus, it is assumed that the value of REFL is not directly bound by the subject of the sentence but is indirectly bound by being identical to a PP in the SUBJ value of a verb phrase. Alternatively, like English, we could assume a constraint similar to Binding Condition A (in HPSG) on the value of ARG-ST, in the form that the least oblique argument (i.e., the subject) binds a more oblique reflexive argument in the same ARG-ST list. However, since ARG-ST is strictly a lexical feature, there is no way to propagate the information that one of the arguments in the lower clause is a reflexive. So the unbounded dependency would not be captured in this approach, unless, say, we assume an intermediate head with its own ARG-ST to propagate such information. Actually, this approach is an interesting one if we take these intermediate heads to express some kind of functional information (cf. section 5). Further investigation may be worthwhile. The assumption that the value of REFL is indirectly bound by being identical to a PP in the SUBJ value of a verb phrase is crucial in explaining the apparent object control of the reflexive in causatives and in one type of passive (the intransitive passive):

(47) a. Ken-ga Naomi-ni zibun-no hon-wo yom-ase-ta.
        Ken-NOM Naomi-DAT self-GEN book-ACC read-CAUS-PST
        'Ken made/let Naomi read his/her book.'
     b. Ken-ga Naomi-ni zibun-no hon-wo yabuk-are-ta.
        Ken-NOM Naomi-DAT self-GEN book-ACC tear-PASS-PST
        'Ken was adversely affected by Naomi's tearing his/her book.'

Note that in these examples, we have two verb phrases with their own SUBJ features. For example, in (47a), whose phrase structure tree is shown in (48), the SUBJ value of the lower VP zibun-no hon-wo yom includes the PP that is semantically bound by the object Naomi due to the lexical semantics of the causative morpheme (cf. section 2.2).

VII. Syntactic Sketches

(48) [S [PP Ken-ga]
        [VP [PP Naomi-ni]
            [TVP [VP [PP zibun-no hon-wo] [V yom]]
                 [V sase]]]]

On the other hand, the SUBJ value of the higher VP Naomi-ni zibun-no hon-wo yom-ase in (48) also includes a PP, which becomes identical to the subject of the sentence, Ken. Thus, since the value of REFL can be identical to either one of the PPs in these SUBJ values, the reflexive is ambiguous in examples of this kind. Generally speaking, this kind of ambiguity occurs whenever there is an embedding of a VP whose PP in the SUBJ value is controlled by the object in the higher clause.

Analyses involving an empty pronoun can also explain the ambiguity of these constructions by allowing an empty category to be an antecedent of the reflexive. Since the empty pronoun is a subject in the embedded sentence and is controlled by the object, the reflexive can be controlled indirectly by the object, although it is only directly bound by the subject of the embedded sentence. Thus, in the structure below, Naomi controls the subject of the embedded clause, PRO (or pro), which in turn binds the reflexive in the embedded clause.

(49) [S Ken-ga Naomi_i-ni [S PRO_i zibun-no hon-wo yom] sase]

As mentioned in section 2.3, the so-called “uniform” theory of passivization faces a problem when both kinds of passive are assumed to have an embedded sentence. Note that the embedded subject in these structures will have to be able to control the reflexive. However, the reflexive is not ambiguous in the “direct” passive, as seen below:

(50) Ken-ga Naomi-ni zibun-no heya-de home-rare-ta.
     Ken-NOM Naomi-DAT self-GEN room-LOC praise-PASS-PST
     ‘Ken was praised by Naomi in his/*her room.’

A “uniform” theory would assume an embedded sentence as shown in (51). As in (49), since Naomi controls the subject in the embedded clause, and the latter can bind the reflexive in the embedded clause, the reflexive could potentially be coreferential with Naomi, which is not the case.
(51) [S Ken_j-ga Naomi_i-ni [S PRO_i PRO_j zibun-no heya-de home] rare]

Even though there has been a proposal to stipulate an additional constraint on the reflexive, such a constraint is known to be subject to counterexamples. One of the more recent attempts to explain the interpretation of the reflexive in “direct” passives exploits

44. Japanese


“functional” concepts (such as “involvement”) in syntax (cf. Kuno 1983, 1987). Since Japanese is known to make heavy use of “functional” information in determining syntactic and morphological structures (cf. section 5), such an attempt deserves to be pursued further. Even though, at the current stage, these “functional” concepts are often vague and lack formal definitions, “functional” and “formal” analyses can be used in complementary ways to enrich the description of the grammar of Japanese. See Iida (1992) for such an attempt.

In the non-uniform analysis, the non-ambiguity of zibun in (50) simply comes from the fact that there is only one VP node in the phrase structure, namely, Naomi-ni zibun-no heya-de home-rare ‘praised by Naomi in self’s room’. Since the PP in the SUBJ feature of this VP becomes identical only to the subject of the sentence, zibun can only be coreferential with Ken.

(52) [S [PP Ken-ga]
        [VP [PP Naomi-ni]
            [TVP [PP zibun-no heya-de]
                 [TVP [TVP home] [V rare]]]]]
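The indirect-binding mechanism discussed in this section can be simulated with a toy model. This is my own illustrative sketch, not Gunji's actual formalism: coreference is modeled by unifying referential indices (a minimal union-find), the lower VP's SUBJ index is controlled by the object (causative semantics), the higher VP's SUBJ index is identified with the sentence subject, and zibun's REFL index may be identified with either SUBJ index, yielding the ambiguity of (47a).

```python
class Indices:
    """Minimal union-find over referential indices; unified indices corefer."""
    def __init__(self):
        self.parent = {}

    def fresh(self, name):
        self.parent[name] = name
        return name

    def find(self, x):
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def unify(self, x, y):
        self.parent[self.find(x)] = self.find(y)

    def corefer(self, x, y):
        return self.find(x) == self.find(y)


def analyze_47a(refl_binds_lower_subj):
    """Sentence (47a): Ken-ga Naomi-ni [zibun-no hon-wo yom]-(s)ase-ta.
    All names here are hypothetical labels for the indices involved."""
    ix = Indices()
    for n in ('Ken', 'Naomi', 'zibun', 'lower_SUBJ', 'higher_SUBJ'):
        ix.fresh(n)
    ix.unify('lower_SUBJ', 'Naomi')    # object control by the causative
    ix.unify('higher_SUBJ', 'Ken')     # SUBJ becomes the sentence subject
    # REFL is identified with the PP in one of the two SUBJ values:
    ix.unify('zibun', 'lower_SUBJ' if refl_binds_lower_subj else 'higher_SUBJ')
    return ix

print(analyze_47a(True).corefer('zibun', 'Naomi'))   # object reading
print(analyze_47a(False).corefer('zibun', 'Ken'))    # subject reading
```

Both calls print True: the two derivational options give the two readings of zibun, while the "direct" passive (52), with only one VP node and hence one SUBJ index, would allow only the subject reading.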

4. Word-order variation

Japanese is often cited as a language of relatively free word order. In fact, complements of a verb can generally be freely ordered (“scrambled”).

(53) a. Ken-ga Naomi-ni hon-wo ageta.
        Ken-NOM Naomi-DAT book-ACC gave
     b. Naomi-ni Ken-ga hon-wo ageta.
        Naomi-DAT Ken-NOM book-ACC gave
     c. Hon-wo Ken-ga Naomi-ni ageta.
        book-ACC Ken-NOM Naomi-DAT gave
     d. Ken-ga hon-wo Naomi-ni ageta.
        Ken-NOM book-ACC Naomi-DAT gave
     e. Naomi-ni hon-wo Ken-ga ageta.
        Naomi-DAT book-ACC Ken-NOM gave
     f. Hon-wo Naomi-ni Ken-ga ageta.
        book-ACC Naomi-DAT Ken-NOM gave
        ‘Ken gave Naomi a book.’
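The pattern in (53), where any ordering of the three complements is grammatical as long as the verb stays clause-final, can be generated mechanically. A small sketch of my own (capitalization of the sentence-initial word is ignored):

```python
from itertools import permutations

# The three case-marked complements of (53) and the clause-final verb.
complements = ['Ken-ga', 'Naomi-ni', 'hon-wo']

# Scrambling: every permutation of the complements, verb always last.
sentences = [' '.join(p) + ' ageta.' for p in permutations(complements)]
for s in sentences:
    print(s)
print(len(sentences))  # 6, matching (53a-f)
```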


The default order is SOV; other orders are used if there is a motivation for putting a complement other than the subject in the sentence-initial position. That is, non-subject complements are put in the sentence-initial position when they bear some kind of “old information”. The rest of the complements bear “new information” and are relatively more important. I will return to the problem of old/new information in section 5. Some derivational analyses treat this kind of word-order variation as the result of the application of Move. Thus, the above sentences other than (53a) would have a trace or traces of the non-subject complements somewhere.

(54) a. Ken-ga Naomi-ni hon-wo ageta.
     b. Naomi_i-ni Ken-ga t_i hon-wo ageta.
     c. Hon_j-wo Ken-ga Naomi-ni t_j ageta.
     d. Ken-ga hon_j-wo Naomi-ni t_j ageta.
     e. Naomi_i-ni hon_j-wo Ken-ga t_i t_j ageta.
     f. Hon_j-wo Naomi_i-ni Ken-ga t_i t_j ageta.

It is known that scrambling doesn’t affect the propositional content of the sentences. Thus, both sentences in the following pair have the same interpretation:

(55) a. Sukunakutomo ni-sya-ga soko-no syain-wo suisensita.
        at.least two-company-NOM it-GEN employee-ACC recommended
        ‘At least two companies recommended its employee.’
     b. Soko-no syain-wo sukunakutomo ni-sya-ga suisensita.
        it-GEN employee-ACC at.least two-company-NOM recommended
        ‘At least two companies recommended its employee.’

In both sentences, the quantifier ni-sya ‘two companies’ in the subject binds the pronoun soko ‘it’ in the object. Since binding involves the hierarchical relationship among the constituents at LF (i.e., c-command) or a similar concept, the subject must c-command the object at LF in both (55a) and (55b). This is straightforward in the case of (55a). However, in (55b), the object precedes the subject at least at PF, which suggests that the quantifier in (55b) is at a hierarchically higher position at LF. In order to achieve this, some propose an LF reconstruction analysis (cf. Saito 1989, 1992). In this type of analysis, the object is moved back to the pre-movement position at LF. On the other hand, some argue that the same effect can be achieved if the movement occurs after Spell-Out and involves only PF. As Hoji (2003, fn. 24) puts it, it is not clear whether there is any empirical evidence to choose one over the other.

Another important property involving scrambling and movement was pointed out by Ueyama (1998, 2003) and concerns a sentence like the following:

(56) Sukunakutomo ni-sya-wo soko-no syain-ga suisensita.
     at.least two-company-ACC it-GEN employee-NOM recommended
     ‘In at least two companies, its employee recommended it.’

In (56), the quantifier ni-sya ‘two companies’ appears in the object and binds the pronoun soko ‘it’ in the subject.
This is not expected if the subject originally precedes the object


and then the object moves to the sentence-initial position at PF. Ueyama (1998, 2003) argues that, in this type of sentence, the object is base-generated at the sentence-initial position and no movement is involved at either LF or PF. She calls this type of sentence the “deep OS” type.

Another phenomenon in Japanese related to word order involves so-called “floating” quantifiers. A Japanese numeral phrase consists of a number and a classifier and appears in a variety of environments, as seen below:

(57) a. San-nin-no gakusei-ga kita.
        three-CLF-GEN student-NOM came
        ‘(The) three students came.’
     b. Gakusei-ga san-nin kita.
        student-NOM three-CLF came
        ‘Three students came.’
     c. San-nin gakusei-ga kita.
        three-CLF student-NOM came
        ‘Three students came.’

In (57a), the numeral phrase san-nin ‘three-CLF’ appears prenominally before the associated subject gakusei ‘student’ in combination with the genitive postposition no. The classifier nin is used to count people. The same numeral phrase appears postnominally in (57b). The interpretation is almost the same between the two sentences, except that the prenominal one tends to be used to refer to a definite set of people. (57c) is a scrambled version of (57b).

The sentences in (58) are counterparts of those in (57) where the numeral phrases are associated with the object. The classifier satu is used to count books.

(58) a. Gakusei-ga san-satu-no hon-wo katta.
        student-NOM three-CLF-GEN book-ACC bought
        ‘A student/students bought (the) three books.’
     b. Gakusei-ga hon-wo san-satu katta.
        student-NOM book-ACC three-CLF bought
        ‘A student/students bought three books.’
     c. Gakusei-ga san-satu hon-wo katta.
        student-NOM three-CLF book-ACC bought
        ‘A student/students bought three books.’
     d. San-satu gakusei-ga hon-wo katta.
        three-CLF student-NOM book-ACC bought
        ‘A student/students bought three books.’

Note that, in (58d), the subject intervenes between the numeral phrase and the object associated with it.
When a numeral phrase is associated with the subject of a transitive verb, it behaves somewhat differently from a numeral phrase associated with the object. The following three patterns are acceptable just like (58a−c):


(59) a. San-nin-no gakusei-ga hon-wo katta.
        three-CLF-GEN student-NOM book-ACC bought
        ‘Three students bought a book/books.’
     b. Gakusei-ga san-nin hon-wo katta.
        student-NOM three-CLF book-ACC bought
        ‘Three students bought a book/books.’
     c. San-nin gakusei-ga hon-wo katta.
        three-CLF student-NOM book-ACC bought
        ‘Three students bought a book/books.’

However, the following, which corresponds to (58d) in the sense that the object intervenes between the subject and the numeral phrase, is not generally acceptable (cf. Kuroda 1980).

(60) *Gakusei-ga hon-wo san-nin katta.
      student-NOM book-ACC three-CLF bought

There have been several approaches proposed for “floating” quantifiers, and (60) is a challenge to any of them. Since syntactic approaches (e.g., Miyagawa 1988, 1989; among others) utilize movement of the numeral phrase from a position closer to the associated noun phrase, some kind of constraint on movement is assumed to explain the unacceptability of sentences like (60). On the other hand, Gunji and Hasida (1998b), taking numeral phrases as adverbials, argue against such syntactic approaches, citing acceptable sentences such as the following:

(61) Amerikazin-ga Nihon-wo sanman-nin otozureta.
     Americans-NOM Japan-ACC 30,000-CLF visited
     ‘30,000 Americans visited Japan.’

Their argument involves semantic concepts such as “incremental theme” in the sense of Dowty (1991). The crucial difference between (60) and (61) is that the former involves the object hon ‘book’, which can be interpreted as an incremental theme. That is, the object has a semantic property such that an increase in the amount of the referent corresponds to an increase in the event involving the referent of the object. On the other hand, the object in the latter, Nihon ‘Japan’, is not an incremental theme semantically. See Gunji and Hasida (1998b) and Gunji (2005) for further details. There might be other potential approaches to “floating” quantifiers.
For example, a constraint on PF proposed by Ueyama (1998) to effectively forbid movement over an object in a non-canonical position might be utilized. This still seems to be one of the active topics in Japanese grammar.

5. Sentence levels

Perhaps the most perspicuous phenomenon in Japanese syntax that demonstrates the head-final property of this language is the sentence-final clusters of markers. What follow the verb stem include: voice markers such as sase (causative) and rare (passive), aspectual markers such as i (progressive, resultative, or experiential) and simaw (perfective), tense markers such as ru (present) and ta (past), modal markers such as daroo (supposition), etc. Each of these takes a distinct level of sentential complements. The following shows a schematic hierarchy of a sentential structure from the functional point of view (cf. Minami 1974; Takubo 1987). Note that the names for levels are somewhat tentative and show only the schematic hierarchical relationship among pragmatically characterized levels. Although it is not necessarily assumed to correspond directly to a syntactic structure, the study of such correspondence might be an important topic, as appears to be the case in the so-called cartography approach, cf. Rizzi (1997) and subsequent works by him and his colleagues, e.g., Rizzi (ed.) (2004).

(62) Statement = Judgment + Mood
     Judgment  = Topic + Comment
     Comment   = Event + Modal
     Event     = Process + Tense
     Process   = State + Aspect
     State     = Agent + Action
     Action    = Patient + Action
     Action    = Action + Voice
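Since each level's marker follows its complement, the head-final hierarchy predicts that the sentence-final markers surface in the order Voice < Aspect < Tense < Modal < Mood after the verb stem. A toy sketch of my own (the marker forms are assumed citation forms; real allomorphy, such as sase vs. ase or ta vs. da, is ignored):

```python
# Level markers in hierarchy order, innermost (Voice) first.
LEVELS = [('voice', 'rare'), ('aspect', 'tei'), ('tense', 'ta'),
          ('modal', 'daroo'), ('mood', 'ka')]

def build(stem, use):
    """Concatenate the markers of the selected levels after the stem,
    always in the fixed hierarchy order."""
    return stem + ''.join(marker for name, marker in LEVELS if name in use)

print(build('home', {'voice', 'tense'}))  # homerareta (home-rare-ta)
print(build('ki', {'tense', 'mood'}))     # kitaka (ki-ta-ka)
```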

The mood markers include those for question ka or no (colloquial), for the speaker’s sex wa (female speakers), for confirmation ne, etc. They all contribute to giving the sentence some kind of communicative force. In addition, specific adjuncts may adjoin to some of the levels. For example, manner adverbials may adjoin to the “action” level, restrictive sentential adverbials to the “event” level, and non-restrictive sentential adverbials to the “judgment” level, etc. Each of these functional levels is syntactically realized as a verbal category. For example, the “action” level is realized as either a transitive or an intransitive verb phrase, and the “state” level as a sentence. All the higher levels including “process”, “event”, “comment”, and “judgment” are realized as sentences headed by respective markers. Thus, if a sentence is headed by a tense marker, it is a sentence at the “event” level. As for the “statement” level, the categorial status of the “mood” markers is somewhat unclear. Traditionally, they are classified as postpositions, but the functions of these markers are quite different from other typical postpositions such as nominative and accusative markers. In fact, some theoreticians in traditional Japanese grammar have stressed the importance of these markers (the so-called tinzyutu-zi ‘statement marker’) and placed them at the center of their theories. Note the position of “topic”, headed by the topic marker wa, in this hierarchy. In general, the sentential head of the topic (“comment”) includes the markers for the aspect,


tense, and modality. On the other hand, the “agent”, usually headed by the nominative marker ga, appears inside the complement of an aspectual marker (“state”). Thus, we have the following contrast:

(63) a. Ken-ga kita-node kaetta.
        Ken-NOM came-because returned
        ‘Because Ken came, (someone) went home.’
     b. Ken-wa kita-node kaetta.
        Ken-TOP came-because returned
        ‘Because (someone) came, Ken went home.’

The postposition node ‘because’ takes a sentential complement at the “event” level. Thus, in (63a), the complement of node is Ken-ga ki-ta ‘Ken came’. The same postposition in (63b), however, cannot take Ken-wa ki-ta as the complement, since the “topic” is outside the “event” level. In fact, Ken-wa ki-ta in (63b) is not a constituent, though Ken-ga kita in (63a) is. We have roughly the following phrase structures for (63a) and (63b), respectively.

(64) a. [S Ken-ga ki-ta] node [S pro kaet-ta]
     b. Ken-wa [S [S pro ki-ta] node [S gap kaet-ta]]

The verb kaetta ‘returned’ in (63a) has a “zero” pronominal subject (designated by pro in [64a]). On the other hand, the topic Ken-wa in (63b) binds the gap subject of the verb kaetta. The subject of kita ‘came’ in (64b) is also a “zero” pronoun. The referents of these “zero” pronouns are determined by the context. If the “zero” pronominal positions are lexically filled, the sentences in (63) would have variants like the following:

(65) a. Ken-ga kita-node Naomi-ga kaetta.
        Ken-NOM came-because Naomi-NOM returned
        ‘Because Ken came, Naomi went home.’
     b. Ken-wa Naomi-ga kita-node kaetta.
        Ken-TOP Naomi-NOM came-because returned
        ‘Because Naomi came, Ken went home.’

In this sense, the sentential head to which a “topic” adjoins is a much higher level of sentence, a level of “comment” (about the “topic”) that constitutes some kind of “judgment” together with the “topic”. The topic marker wa is usually associated with old, or given, information, namely, information already introduced into the discourse by the speaker.
In (65b) above, the topic Ken corresponds to old information, while what is said about Ken, namely, the “event” that he went home because Naomi came, is new information. Since the subject appears inside an “event”, a subject marked by the nominative ga naturally corresponds to new information. In this respect, interrogative nominals such as dare ‘who’ and nani ‘what’ are never used with wa, since they are never old information. (66b) and (66e) show that interrogatives cannot be topicalized. (What is glossed as POLITE below is a politeness suffix.)


(66) a. Dare-ga ki-masi-ta-ka?
        who-NOM come-POLITE-PST-Q
        ‘Who came?’
     b. *Dare-wa ki-masi-ta-ka?
        who-TOP come-POLITE-PST-Q
     c. Ano hito-ga nani-wo sita-no?
        that person-NOM what-ACC did-Q
        ‘What did that person do?’
     d. Nani-wo ano hito-ga sita-no?
        what-ACC that person-NOM did-Q
        ‘What did that person do?’
     e. *Nani-wa ano hito-ga sita-no?
        what-TOP that person-NOM did-Q

Japanese interrogatives need not be fronted to the sentence-initial position; (66d) is merely a case of intrasentential “scrambling”, and both (66c) and (66d) are grammatical. Thus, the existence of “Wh-movement” before Spell-Out was sometimes doubted. At the level of LF, however, the existence of “Wh-movement” can be hypothesized (cf. Nishigauchi 1990). The contrast exhibited in (66) can be generalized as a constraint that interrogatives cannot appear inside the topic phrase:

(67) a. [dare-ga kita koto]-wo minna sitteiru-no?
        who-NOM came fact-ACC everyone know-Q
        ‘Who is the person such that everyone knows that he came?’
     b. *[dare-ga kita koto]-wa minna sitteiru-no?
        who-NOM came fact-TOP everyone know-Q
     c. [nani-wo tabeta hito]-ga byooki-ni nari-masi-ta-ka?
        what-ACC ate person-NOM sick-DAT become-POLITE-PST-Q
        ‘What is the thing such that the person who ate it became sick?’
     d. *[nani-wo tabeta hito]-wa byooki-ni nari-masi-ta-ka?
        what-ACC ate person-TOP sick-DAT become-POLITE-PST-Q

These facts all follow from the assumption that the topic bears old information; a bearer of new information such as an interrogative cannot be included in it.
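The generalization behind (66) and (67), that a wa-marked topic may not contain an interrogative anywhere inside it, can be stated as a simple containment check. A sketch of my own (word lists stand in for phrase structure):

```python
INTERROGATIVES = {'dare', 'nani'}  # 'who', 'what'

def topic_ok(phrase_words):
    """A topic bears old information, so no interrogative (a bearer of
    new information) may appear anywhere inside the wa-marked phrase."""
    return not any(w in INTERROGATIVES for w in phrase_words)

print(topic_ok(['Ken']))                   # Ken-wa: fine
print(topic_ok(['dare']))                  # *dare-wa, cf. (66b)
print(topic_ok(['dare', 'kita', 'koto']))  # *[dare-ga kita koto]-wa, cf. (67b)
```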

6. References (selected)

Chomsky, Noam 1981 Lectures on Government and Binding. Dordrecht: Foris.
Chomsky, Noam 1986 Barriers. Cambridge, MA: MIT Press.


Chomsky, Noam 1995 The Minimalist Program. Cambridge, MA: MIT Press.
Dowty, David R. 1991 Thematic proto-roles and argument selection. Language 67.3: 547−619.
Farmer, Ann 1984 Modularity in Syntax: A Study of Japanese and English. Cambridge, MA: MIT Press.
Fukui, Naoki 1986 A theory of category projection and its applications. Ph.D. dissertation, Massachusetts Institute of Technology.
Gazdar, Gerald, Ewan Klein, Geoffrey K. Pullum, and Ivan A. Sag 1985 Generalized Phrase Structure Grammar. Oxford: Oxford University Press.
Gunji, Takao 1987 Japanese Phrase Structure Grammar. Dordrecht: D. Reidel.
Gunji, Takao 1999 On lexicalist treatments of Japanese causatives. In: Levine, Robert, and Georgia Green (eds.), Studies in Contemporary Phrase Structure Grammar, 119−160. Cambridge: Cambridge University Press.
Gunji, Takao 2005 Measurement and quantification revisited. TALKS (Theoretical and Applied Linguistics at Kobe Shoin) 8: 21−36.
Gunji, Takao 2010 Saikika saikoo [Reflexivization revisited]. TALKS (Theoretical and Applied Linguistics at Kobe Shoin) 13: 1−14.
Gunji, Takao, and Kôiti Hasida (eds.) 1998a Topics in Constraint-Based Grammar of Japanese. Dordrecht: Kluwer.
Gunji, Takao, and Kôiti Hasida 1998b Measurement and quantification. In: Gunji, Takao, and Kôiti Hasida (eds.), Topics in Constraint-Based Grammar of Japanese, 39−79. Dordrecht: Kluwer.
Hale, Kenneth 1980 Remarks on Japanese phrase structure: Comments on the papers on Japanese syntax. In: Otsu, Yukio, and Ann Farmer (eds.), Theoretical Issues in Japanese Linguistics, 185−203. (MIT Working Papers in Linguistics 2.) Department of Linguistics and Philosophy, Massachusetts Institute of Technology.
Hasegawa, Nobuko 1980 The VP constituent in Japanese. Linguistic Analysis 6: 115−131.
Hoji, Hajime 1985 Logical form constraints and configurational structures in Japanese. Ph.D. dissertation, University of Washington.
Hoji, Hajime 2003 Falsifiability and repeatability in generative grammar: A case study of anaphora and scope dependency in Japanese. Lingua 113.4−6: 377−446.
Howard, Irwin, and Agnes M. Niyekawa-Howard 1976 Passivization. In: Shibatani, Masayoshi (ed.), Syntax and Semantics. vol. 5, Japanese Generative Grammar, 201−237. New York: Academic Press.
Iida, Masayo 1992 Context and binding in Japanese. Ph.D. dissertation, Stanford University. [Published from Stanford: CSLI Publications in 1996.]
Inoue, Kazuko 1976 Henkei Bunpoo to Nihongo [Transformational Grammar and Japanese]. Tokyo: Taishukan.
Ishikawa, Akira 1985 Complex predicates and lexical operations in Japanese. Ph.D. dissertation, Stanford University.


Kaplan, Ronald M., and Joan Bresnan 1982 Lexical-functional grammar: A formal system for grammatical representation. In: Bresnan, Joan (ed.), The Mental Representation of Grammatical Relations, 173−281. Cambridge, MA: MIT Press.
Kuno, Susumu 1973 The Structure of the Japanese Language. Cambridge, MA: MIT Press.
Kuno, Susumu 1983 Sin Nihon Bunpoo Kenkyuu [New Studies on Japanese Grammar]. Tokyo: Taishukan.
Kuno, Susumu 1987 Functional Syntax: Anaphora, Discourse, and Empathy. Chicago: Chicago University Press.
Kuroda, Shige-Yuki 1965 Generative grammatical studies in the Japanese language. Ph.D. dissertation, Massachusetts Institute of Technology. [Published from New York: Garland in 1979.]
Kuroda, Shige-Yuki 1980 Bun-kozo-no hikaku [Comparison of sentence structures]. In: Kunihiro, Tetsuya (ed.), Bunpoo [Grammar], 23−61. Tokyo: Taishukan. [Reprinted in Kuroda, Shige-Yuki 2005 Nihongo-kara Mita Seiseibunpoo [Generative Grammar as Seen from the Japanese Language], 109−144. Tokyo: Iwanami Publishers.]
Levine, Robert, and Georgia Green (eds.) 1999 Studies in Contemporary Phrase Structure Grammar. Cambridge: Cambridge University Press.
Manning, Christopher, Ivan A. Sag, and Masayo Iida 1999 The lexical integrity of Japanese causatives. In: Levine, Robert, and Georgia Green (eds.), Studies in Contemporary Phrase Structure Grammar, 39−79. Cambridge: Cambridge University Press.
McCawley, Noriko Akatsuka 1972 On the treatment of Japanese passives. Papers from the 8th Regional Meeting, Chicago Linguistic Society, 259−270.
Minami, Fujio 1974 Gendai Nihongo-no Koozoo [The Structure of Modern Japanese]. Tokyo: Taishukan.
Miyagawa, Shigeru 1980 Complex verbs and the lexicon. Ph.D. dissertation, University of Arizona. [Available as Coyote Papers: Working Papers in Linguistics from A → Z, vol. 1. University of Arizona.]
Miyagawa, Shigeru 1989 Structure and Case Marking in Japanese. (Syntax and Semantics, vol. 22.) San Diego: Academic Press.
Nishigauchi, Taisuke 1990 Quantification in the Theory of Grammar. Dordrecht: D. Reidel. [Revision of his Ph.D. dissertation, University of Massachusetts, 1986.]
Pollard, Carl J., and Ivan A. Sag 1987 Information-based Syntax and Semantics. vol. 1, Fundamentals. Stanford: CSLI Publications, Stanford University.
Pollard, Carl J., and Ivan A. Sag 1994 Head-driven Phrase Structure Grammar. Chicago: University of Chicago Press.
Rizzi, Luigi 1997 The fine structure of the left periphery. In: Haegeman, Lilian (ed.), Elements of Grammar, 281−337. Dordrecht: Kluwer.
Rizzi, Luigi (ed.) 2004 The Structure of CP and IP: The Cartography of Syntactic Structures, Vol. 2. Oxford: Oxford University Press.


Sag, Ivan A., Thomas Wasow, and Emily Bender 2003 Syntactic Theory: A Formal Introduction, 2nd ed. Stanford: CSLI Publications.
Saito, Mamoru 1985 Some asymmetries in Japanese and their theoretical implications. Ph.D. dissertation, Massachusetts Institute of Technology.
Saito, Mamoru 1989 Scrambling as semantically vacuous A′ movement. In: Baltin, Mark R., and Anthony S. Kroch (eds.), Alternative Conceptions of Phrase Structure, 182−200. Chicago: The University of Chicago Press.
Saito, Mamoru 1992 Long distance scrambling in Japanese. Journal of East Asian Linguistics 1.1: 69−118.
Saito, Mamoru, and Hajime Hoji 1983 Weak crossover and move α in Japanese. Natural Language and Linguistic Theory 1: 245−259.
Shibatani, Masayoshi 1976 Causativization. In: Shibatani, Masayoshi (ed.), Syntax and Semantics. vol. 5, Japanese Generative Grammar, 239−294. New York: Academic Press.
Shibatani, Masayoshi 1978 Nihongo-no Bunseki [Analysis of Japanese]. Tokyo: Taishukan.
Takubo, Yukinori 1987 Toogo koozoo to bunmyaku zyoohoo [Syntactic structures and contextual information]. Nihongo-gaku [Japanese Studies] (Meijishoin) 6.5: 37−48.
Ueyama, Ayumi 1998 Two types of dependency. Ph.D. dissertation, University of Southern California.
Ueyama, Ayumi 2003 Two types of scrambling constructions in Japanese. In: Barss, Andrew (ed.), Anaphora: A Reference Guide, 23−71. London: Blackwell.

Takao Gunji, Kobe (Japan)

45. Georgian

1. Simple sentences
2. Complex sentences
3. References (selected)

Abstract

Georgian is spoken in Georgia in the Caucasus; it is a member of the Kartvelian (South Caucasian) family. The main purpose of this chapter is to give an impression of the wealth of natural language by highlighting those typological facts that distinguish Georgian (and the Kartvelian family) from other languages. We may note from the outset that Modern Georgian is a typical SOV language, that it is non-configurational and is a candidate for ergativity, and that it has complex morphosyntax, partially inflectional,


but predominantly agglutinative. An additional goal of section 1 of the chapter is to illustrate the application of relational grammar. Section 1 of this chapter treats the syntax of simple sentences, while section 2 discusses complex sentences. All the material presented in the paper is from Georgian.

1. Simple sentences

1.1. Word order

While Georgian has SOV order, it is not a strict verb-final language. The most frequent orders are Subject-Indirect Object-Direct Object-Verb and Subject-Direct Object-Verb-Indirect Object, but the order of major constituents is very free (Apridoniʒe 1986). According to a recent experimental study (Skopeteas, Féry, and Asatiani 2009), sentence prosody is as important as word order in encoding information structure in Georgian. It has postpositions (magida-ze ‘table on’), possessors, adjectives, and numerals preceding head nouns (cemi ori didi c'igni ‘my two big book[s]’), standard of comparison − comparative adjective order (sen-ze didi ‘bigger than you’, lit. ‘you-on big’) or comparative adjective − standard of comparison order (upro didi vidre sen, lit. ‘more big than you’), and auxiliaries most often following main verbs (camosuli var ‘I have arrived’, lit. ‘arrived I.am’); there is some flexibility in the last characteristic. As discussed in section 2.1, relative clauses may precede or follow the head noun.

1.2. Subject, direct object, indirect object

Perhaps the most salient problem in the syntax of Georgian is the identification of subject, direct object, and indirect object and the encoding of these grammatical relations. In section 1.2.1 we describe the encoding systems in a pretheoretical way using traditional terminology; various analyses of these facts are discussed in sections 1.2.2−1.2.5.

1.2.1. Description

Georgian has three sets of tense-aspect-mood paradigms, traditionally called “series”; each appears to have a distinct way of assigning cases. Some of the differences can be seen in the following examples: the tense-aspect-mood paradigm (henceforth TAM) illustrated in (1) is in Series I, (2) in Series II, and (3) in Series III. The names used for the cases are traditional and should not be taken literally; for the transitive verb, illustrated here, the narrative (NAR) case marks the subject in Series II, while the so-called nominative marks the direct object in Series II and III and the subject in Series I.

(1) merab-i oršimo-ti γvino-s amoiγebs.
    Merab-NOM dipper-INS wine-DAT take.out
    ‘Merab will take out wine with a dipper.’


(2) merab-ma oršimo-ti γvino amoiγo.
    Merab-NAR dipper-INS wine.NOM take.out
    ‘Merab took out wine with a dipper.’

(3) merab-s oršimo-ti γvino amouγia.
    Merab-DAT dipper-INS wine.NOM take.out
    ‘Merab evidently took out wine with a dipper.’

However, these sentences only partially represent the facts, for there is also variation according to the class to which the verb belongs. Case marking variation is summarized in Table 45.1.

(4) Tab. 45.1: Case Assignment in Series I, II and III

                Subject of  Subject of  Subject of  Direct  Indirect
                Class 1     Class 3     Class 2     Object  Object
    Series I    NOM         NOM         NOM         DAT     DAT
    Series II   NAR         NAR         NOM         NOM     DAT
    Series III  DAT         DAT         NOM         NOM     -tvis
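Table 45.1 is, in effect, a lookup from series and grammatical relation to case. As a quick cross-check against examples (1)−(3), here is a sketch of my own (the key names S1/S2/S3/DO/IO are my labels, not the chapter's) encoding the table as a dictionary:

```python
# Case assignment of Table 45.1 as (series, role) -> case.
# Roles: subject by verb class (S1, S3, S2), direct object (DO),
# indirect object (IO). "-tvis" is a postposition rather than a case.
CASE = {
    ('I',   'S1'): 'NOM', ('I',   'S3'): 'NOM', ('I',   'S2'): 'NOM',
    ('I',   'DO'): 'DAT', ('I',   'IO'): 'DAT',
    ('II',  'S1'): 'NAR', ('II',  'S3'): 'NAR', ('II',  'S2'): 'NOM',
    ('II',  'DO'): 'NOM', ('II',  'IO'): 'DAT',
    ('III', 'S1'): 'DAT', ('III', 'S3'): 'DAT', ('III', 'S2'): 'NOM',
    ('III', 'DO'): 'NOM', ('III', 'IO'): '-tvis',
}

# Examples (1)-(3): a Class 1 transitive verb across the three series.
for series in ('I', 'II', 'III'):
    print(series, CASE[(series, 'S1')], CASE[(series, 'DO')])
```

This reproduces merab-i/γvino-s in (1), merab-ma/γvino in (2), and merab-s/γvino in (3).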

The classes referred to in Table 45.1 are sets of finite verb forms. They are defined morphologically (Harris 1981, 1985; Holisky 1981a) and are characterized by clusters of morphological, syntactic, and semantic traits that correlate for most verbs in the language. The correlations are summarized in Table 45.2; in the table, the syntactic and semantic characteristics are not signalled specifically by the morphological trait on the same line, but are independent characteristics of the same set of verbs.

(5) Tab. 45.2: Class Correlations in Modern Georgian

    Class  Morphological       Syntactic            Semantic
    1      a. preverb          a. transitive        a. active
           b. -s, -en          b. narrative case    b. telic
           c. -es              c. inversion
    2      a. preverb or e-    a. intransitive      a. inactive
           b. -a, -an          b. nominative case   b. telic or stative
           c. -nen             c. no inversion
    3      a. i- -(eb)         a. intransitive      a. active
           b. -s, -en          b. narrative case    b. atelic
           c. -es              c. inversion

Three morphological traits distinguish among the classes. Morphological trait (a) refers to the formation of the future, conditional, future subjunctive, and aorist TAMs. Morphological trait (b) gives the suffixes used to mark subjects of the third person singular and plural in the future TAM. (c) represents the suffix used to mark third person plural


subjects in the aorist. While there are additional morphological variations among the classes, these three are the most consistent. Syntactic trait (a) shows the transitivity of verb forms of this class, where a form is considered transitive if it has subject and direct object, and intransitive otherwise. Syntactic trait (b) shows the case used to mark the subject of verb forms of this class when they are in Series II TAMs. Line (c) refers to verbs that have dative subjects in Series III (“inversion”) and those that do not (“no inversion”); see also section 1.2.3 below. Semantic trait (a) shows whether verbs of this class are volitional and under the control of the subject (“active”) or not (“inactive”), (b) refers to a stative/non-stative distinction and within the latter to verbs that express an action marked for having an end point (“telic”) or without indication of such an end point (“atelic”). Many of these notions are discussed further below. Some verb forms are exceptions to one or more of these trait correlations. Class 1 is exemplified in (1)−(3) above; examples of other classes are given below. Class 4 is omitted here but is described in section 1.2.3. Verb agreement also encodes information about subjects, direct objects, and indirect objects, as summarized in Table 45.3. The allomorphs of the third person indirect object marker and the several variants of the third person plural subject marker are not included here. Morphological and syntactic constraints on the co-occurrence of these in a single verb form are given in (Harris 1981: 30−31.) The prefixal portion of the third person indirect object marker, s-, is replaced by u- under certain circumstances (Harris 1981: 89−90). (6)

Tab. 45.3: Three Sets of Agreement Markers

            Subject              Direct Object        Indirect Object
            Singular   Plural    Singular   Plural    Singular   Plural
1. person   v-         v- … -t   m-         gv-       m-         gv- … -t
2. person              … -t      g-         g- … -t   g-         g- … -t
3. person   -s/a/o     -en, etc.                      s-         s- … -t

In Series I and II, the subject is marked by the set of subject markers in Table 45.3, the direct object by the direct object markers, and the indirect object by the indirect object markers. In Series III, on the other hand, the notional subject of Class 1 and 3 verbs is marked instead by the indirect object markers, while the notional direct object of these verbs is marked by subject markers, and their indirect objects are marked with the postposition -tvis. For Class 2 verb forms in Series III, the subjects are marked by the subject markers, the indirect objects by indirect object markers. The facts stated above are described in all of the handbooks; various analyses of them are discussed in sections 1.2.2−1.2.4.
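The mapping just described is regular enough to state as a simple lookup. The following Python fragment is our own schematic illustration, not part of the chapter; the function name and the set labels "S", "DO", and "IO" are invented conveniences for the three marker sets of Table 45.3.

```python
# Schematic sketch of marker-set selection: in Series I and II all classes
# use the direct mapping, while in Series III, Class 1 and 3 verbs show
# "inversion": the notional subject takes indirect-object markers, the
# notional direct object takes subject markers, and the notional indirect
# object is marked with the postposition -tvis rather than by agreement.

def marker_sets(series, verb_class):
    """Map notional roles to the marker set (or postposition) encoding them."""
    if series == 3 and verb_class in (1, 3):
        return {"subject": "IO", "direct_object": "S", "indirect_object": "-tvis"}
    return {"subject": "S", "direct_object": "DO", "indirect_object": "IO"}

# Class 2 verbs keep the direct mapping even in Series III:
assert marker_sets(3, 2)["subject"] == "S"
# Class 1 and 3 verbs invert in Series III:
assert marker_sets(3, 1)["subject"] == "IO"
assert marker_sets(3, 3)["direct_object"] == "S"
```

This restates, in procedural form, only what the preceding paragraph says; the analytical question of which nominal is "really" the subject is taken up in sections 1.2.2−1.2.4.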

1.2.2. Is case marking in Series II ergative?

One of the points of debate in the analysis of the facts described in 1.2.1 concerns the issue of whether case marking in Series II is ergative. Contrasts of the sort illustrated in (7)/(8) have led to the view that case marking in this series is ergative.


VII. Syntactic Sketches

(7) kal-ma švil-i gazarda.
    woman-NAR child-NOM raise
    'The woman raised her child.'

(8) švil-i gaizarda.
    child-NOM grow
    'The child grew.'

Although we have translated these two verb forms differently into English, they are forms of a single root. The form in (7) is a Class 1 form, and that in (8) a Class 2 form. The prefixless ga-zard-a is a transitive form; the i- in ga-i-zard-a is one of several formants of Class 2. Traditionally, these Class 2 intransitives are considered to be derived from the corresponding transitives. (Both classes also contain unpaired forms, which govern the case marking characteristic of the class.)

On the basis of this distribution, case marking in Series II was traditionally considered ergative, for the subject of the intransitive in (8) and the direct object in (7) are marked with one case, while the subject of the transitive in (7) is marked with a distinct case (see also Boeder 1979: 459). In this analysis, Class 3, which contains mostly monovalent verbs, has been ignored, treated as irregular, or termed transitive; this class is illustrated in Series II in (9).

(9) k'ac-ma irbina.
    man-NAR run
    'The man ran.'

Holisky (1981a) showed that the verbs of Class 3 are, in fact, quite regular with respect to morphology, syntax, and semantics and constitute a large and productive type. Given their overall regularity, we cannot exclude these verbs when considering the issue of whether Series II case marking is ergative. Most of the verbs of Classes 2 and 3 are intransitive; thus, in Series II some intransitives (Class 2) govern nominative case subjects, while others (Class 3) govern narrative. Since the division of verbs governing nominative vs. narrative case subjects in Series II corresponds predominantly to those that have inactive vs. active semantics (see the definition above, following Table 45.2), the case marking pattern of this series in Georgian falls into the type often called active/inactive, or just active (Sapir 1917; Fillmore 1968: 54). Neither the case marking of major constituents nor other major rules of Georgian morphosyntax (Asatiani 1994: 28−29; Amiridze 2006: 8−12, 15−32; but see Harris 1981: 174) are truly ergative. Some of the western dialects differ from literary Modern Georgian in case marking patterns (Fähnrich 1967; Harris 1985: 113−115, 376−380).
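The active alignment just described can be made concrete in a few lines. The sketch below is our own simplification, not from the chapter: it encodes only the generalization that the Series II subject case follows verb class (Classes 1 and 3 narrative, Class 2 nominative), which is what makes the split among intransitives "active" rather than ergative.

```python
# Schematic illustration: Series II subject case is assigned by verb class,
# not by transitivity alone.  Class 3 verbs are mostly intransitive, yet
# they pattern with transitive Class 1 verbs in taking narrative subjects,
# so the alignment is active/inactive rather than ergative.

def series2_subject_case(verb_class):
    """Return the case of the subject in Series II for a given verb class."""
    return "NAR" if verb_class in (1, 3) else "NOM"

assert series2_subject_case(1) == "NAR"  # kal-ma ... gazarda, cf. (7)
assert series2_subject_case(2) == "NOM"  # švil-i gaizarda, cf. (8)
assert series2_subject_case(3) == "NAR"  # k'ac-ma irbina, cf. (9)
```

Because the intransitive Class 3 subject in (9) receives the same narrative case as the transitive subject in (7), the second assertion pair captures why an ergative analysis fails once Class 3 is taken seriously.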

1.2.3. Is there inversion in Series III?

The problem of inversion in Series III is closely tied to the identification of basic grammatical relations throughout the language. Inversion is the name of the construction in which notional subjects have dative case marking (see Table 45.1) and indirect object agreement (see Table 45.3), while notional direct objects (if present) have nominative case marking and subject agreement. This occurs with Class 1 and 3 verbs in Series III and with certain "affective" verbs in all series. The issue here is whether or not these facts are to be accounted for by means of a syntactic rule that changes grammatical relations.

Most traditional descriptions have taken the notional grammatical relations in Table 45.1 as basic (e.g. Čikobava 1961, 1968a; Tschenkeli 1958), saying that Series I and II simply have different case marking rules; they have assumed a rule of inversion for Series III. On this approach, the three sentences (1)−(3) containing Class 1 verbs and the three (10)−(12) containing Class 3 verbs have the same "logical" (or "real") subject, and (1)−(3) have the same logical direct object. (3) and (12) differ from the others in having undergone inversion, so that their logical subjects are their "grammatical" (or "morphological") indirect objects, and their logical direct objects, if they have them, are their grammatical subjects.

(10) merab-i t'iris. (Series I)
     Merab-NOM cry
     'Merab is crying.'

(11) merab-ma it'ira. (Series II)
     Merab-NAR cry
     'Merab cried.'

(12) merab-s ut'iria. (Series III)
     Merab-DAT cry
     'Merab evidently cried.'

In the inversion example, (12), the notional subject, Merab, is in the dative case and conditions u-, a marker of third person singular indirect object agreement. The suffix -a in (3) marks third person singular subject agreement with γvino ('wine'); in (12), where there is no overt final subject, the third person singular form appears by default.

Aronson (1970: 294), following Čikobava (1968a), seeks a more formal approach that will accord with the analysis described above. He defines the subject as "that member of the predication which can be marked by the verb for number in the third person". This principle works well except in Series III, exactly where the grammatical relations are at issue. In Series III, dative nominals condition number agreement, except when a first or second person nominative nominal (notional object) is present.

(13) turme st'udent'eb-s gamougzavnia-t gela.
     apparently students-DAT send.him-PL Gela.NOM
     'Apparently the students (have) sent Gela.'

(14) turme st'udent'eb-s gamougzavni-xar (šen).
     apparently students-DAT send-you.SG you.SG.NOM
     'Apparently the students (have) sent you.'

Without further restrictions, the principle stated above has the unfortunate consequence of suggesting that 'students' is the subject of (13), but not of (14), since 'students' conditions agreement in the former, but not in the latter. While there is dialectal variation in agreement of the sort illustrated in (13) and (14), these represent the grammatical norm.


Vogt (1971: 81) departs from the traditional view, defining the subject as that nominal that conditions one of the agreement affixes from the set called "subject markers" in Table 45.3. Consequently, in his view, 'Merab' would be subject in (1) and (2), but 'wine' would be subject in (3).

Harris (1981) takes a different approach, identifying as subject those nominals that have a particular cluster of properties, including triggering Tav-Reflexivization and conditioning Subject Agreement (see also Boeder 2002). In Series III, where this cluster of properties is divided between two nominals, one is viewed as the initial subject, while the other is a final subject resulting from the application of the rule of Unaccusative, applying with Inversion. According to this analysis, 'Merab' in (3) is the initial subject and final indirect object (henceforth "inversion nominal"). A similar approach is taken to the other relations. Thus, this analysis combines the insights of the traditional works and of Vogt (1971) by attributing different subject properties to different levels of derivation. On this view, the grammatical relations that head the columns in Table 45.1 refer, in verbs of Classes 1 and 3 in Series III, to initial, rather than final, status.

Working within the framework of relational grammar, Harris (1981) views the initial indirect object marked with -tvis in Series III as a chomeur, as predicted by a universal principle, the Chomeur Law (see section 1.5 below). The syntactic behavior of this tvis-nominal had not previously been described. One advantage of the analysis is that it accounts for this behavior and permits generalization with nominals marked with -tvis in other constructions (see section 1.5 below).
A problem with this analysis is that it makes reference both to initial subjecthood, in order to identify the trigger of Number Agreement, and to final indirect-objecthood, to identify the set of agreement markers used; this problem is taken up again in section 1.2.4.

In addition to the three classes of verb forms described in section 1.2.1, there is a Class 4, containing mostly affective verbs. These verbs have a single case pattern in all three series, illustrated in (15) and (16) from Series I.

(15) merab-s natela uq'vars.
     Merab-DAT Natela.NOM love
     'Merab loves Natela.'

(16) (me) mciva.
     I.DAT I.cold
     'I am cold.'

Most linguists have analyzed the Class 4 forms in all series in the same way they analyze the Class 1 and 3 forms in Series III. Class 4 forms meet the number agreement criterion of Čikobava (1968a) and Aronson (1970), and these authors would see 'Merab' as the subject in (15). Class 4 verbs have the same syntactic properties found by Harris for Class 1 and 3 inversion in Series III, and that analysis considers 'Merab' the initial subject and final indirect object, while 'Natela' is the initial direct object and final subject. However, Chanidzé (1963) and Šaniʒe (1973) treat Class 4 differently from Series III; he sees 'Natela' as the subject in (15), objecting that its translation equivalent in Russian is not sufficient reason for treating it as object.

Merlan (1982) excludes from consideration the intransitives of the type illustrated in (16) and cites the existence of this type as grounds for rejecting analyses that involve inversion, evidently feeling that optional elements are inadmissible in syntactic rules. In this discussion, Merlan (1982: 298) does not address the fact that the Inversion rule in Harris (1981) affects only the initial subject and so applies in the same way to transitives and intransitives. She argues (Merlan 1982: 299) that the traditional analyses made by native grammarians overlook the fact that there is a semantic difference between Class 4 and the other classes, and that because of this difference Class 4 verbs must have a different structure. That article does not propose ways to account for the various syntactic traits that the inversion nominal shares with subjects, as established in Harris (1981).

1.2.4. How complex is agreement?

Agreement may be regarded as a morphological process, but its relation to syntax has been one of the important theoretical issues discussed in recent years. As part of this debate, Anderson (1982, 1984a) supports the position that morphological operations depend upon syntactic information, and that morphology forms a distinct component that operates on the output of syntactic rules. He proposes to account for the fact that the first person subject marker v- does not co-occur with the second person object marker g- without deletion rules of the sort proposed in Harris (1981: 31), which he finds implausible. He accomplishes this through the device of so-called disjunctive ordering, which refers to the practice of arranging rules in blocks, where all the rules in a block apply disjunctively. Jensen and Stong-Jensen (1984) have argued that this device is too powerful and have demonstrated that these same Georgian facts can be accounted for through lexical morphology without the device of disjunctive ordering and without deletion, but at the expense of accuracy.

Observing that most of the subject properties of the inversion nominal are syntactic ones, while its indirect object properties are morphological ones, Anderson (1984b) proposes to account for the indirect object properties of this nominal directly through the morphology. On his analysis, 'Merab' in (12) is the subject in the syntax, and it is the indirect object in the morphology. At the center of the issue is the rule of Number Agreement for Series III as described in all of the handbooks of Georgian. In the modern language in Series I and II, subjects are the only nominals that condition the rule; in Series III, the nominal marked in the dative case conditions Number Agreement (one of its subject properties), though the morpheme -t that marks it is otherwise used only for objects. This is the rule that Harris (1981) formulates as a global rule.
Number Agreement is complicated still further by variation according to the person of the arguments involved, as illustrated in (13)/(14) in the preceding subsection. Anderson's analysis circumvents the problem that Number Agreement refers to two levels of derivation by making reference instead to syntactic subjecthood, in order to identify the trigger of Number Agreement, and to morphological indirect-objecthood, to identify the set of agreement markers used.

Aissen and Ladusaw (1988) address the question of whether certain complex types of agreement systems, for which multistratal analyses have been proposed, can be accounted for within monostratal theories. One of the complex agreement systems they consider is the Number Agreement described above for Modern Georgian. They propose a monostratal account of Number Agreement in Modern Georgian, based on a rule that refers to the inversion construction and on a convention that unmarked agreement (here Number Agreement conditioned by the subject) can be overridden by stipulating a marked agreement relation in the construction referred to.


K'iziria (1985) has turned our understanding of Number Agreement in Modern Georgian upside down. Although it had long been known that indirect objects, and occasionally even direct objects, can condition number agreement in some non-standard dialects (Čikobava 1968b; Harris 1985: 313−314), all native grammarians and others had considered the system described in the preceding paragraphs to be the norm (e.g. Normebi 1970: 182−183; Tschenkeli 1958: 484−490). K'iziria has now documented the existence of Number Agreement with objects outside the inversion construction in the literary language.

Tuite (1987, 1988, 1998) surveys Number Agreement in the Kartvelian languages with unprecedented completeness. He shows that agreement with indirect objects that are not initial subjects (that is, outside the inversion construction) is optional from a syntactic point of view, and he suggests that this sort of number agreement may depend upon pragmatic factors, such as topicality or empathy.

1.3. Aspect

Mač'avariani (1974) has studied morphological aspect in Georgian and has shown that the modern language distinguishes the following categories: perfective/imperfective, punctual/durative, and one-time/repeated. Holisky, in a series of works, has shown that in addition to these categories, lexical aspect plays an important role in the language. Holisky (1978) establishes diagnostics for stativity in Georgian and relates these to universals of language. Georgian also distinguishes the category telic/atelic, where a verb is telic if it is marked for having an end point (Holisky 1979). Holisky (1981b) explores the relationship between morphological and lexical aspect. Following studies of English aspect by Vendler (1967) and by Dowty (1977), she shows that aspect in Georgian is better understood through the use of four categories, illustrated below.

(17) Accomplishments:  gac'minda  's/he cleaned it'
     Activities:       itamaša    's/he played'
     States:           civa       'it is cold'
     Achievements:     ip'ova     's/he found it'

The illustrations provided here have the imputed property inherently, but derivation can alter these. Holisky (1981a) relates lexical aspect to morphological and syntactic categories, as briefly summarized in section 1.2.1.

1.4. Voice

Some linguists, including Schuchardt (1895), have proposed that languages with ergative case marking − here they include Georgian Series II (see section 1.2.1 above) − are characterized by structures in which all transitive verbs are passive. Čikobava (1942, 1961) has argued effectively that neither Georgian nor other languages with ergative case marking, including especially other languages of the Caucasus, are accurately characterized in this way. He observes that a passive, a marked syntactic construction, cannot exist in the absence of a corresponding unmarked active. Although it is now known that ergative case marking in some languages does originate through a reinterpretation of passives, it is doubtful that the Kartvelian family, to which Georgian belongs, can ever be reconstructed far enough back to determine whether the case marking of Series II originated in this way.

Relational grammar distinguishes a passive construction from an unaccusative. Both involve rules that promote direct objects to subjects; they differ in that the passive applies to transitive structures, while the unaccusative applies to a direct object in the absence of a subject. Both constructions are found in Georgian; (18) and (19) illustrate the passive, formed analytically from a past passive participle and an auxiliary.

(18) čven mier gamotkmuli-a varaud-i…
     us by stated-it.be conjecture-NOM
     'The conjecture … has been stated by us.'
     (From the scientific prose in Abesaʒe 1963: 17, l. 10)

(19) … mivlinebuli viq'avi udur ena-ze samušaod.
     sent I.be Udi language-on to.work
     'I was sent to work on the Udi language.'
     (From the scientific prose in Pančviʒe 1937: 295)

Syntactically, both examples involve initial direct objects (varaudi 'conjecture' and 'I') that are promoted to subjecthood. The deletion of unemphatic pronouns, as in (19), is frequent in Georgian; in this example, the person and number of the nominal are indicated by agreement in the auxiliary. Evidence for the initial relations is based on suppletion phenomena, while case marking, verb agreement, and syntactic rules outside the scope of the present chapter support the claim that these are final subjects (Harris 1981: 104−109). Both examples also involve initial subjects; in (18) this is čven ('we, us'), and in (19) it is unspecified. When the initial direct object is promoted to subject, the initial subject becomes a chomeur. In Georgian this is marked with the postposition mier ('by'); transitive subject chomeurs are also marked with mier in deverbal nominal constructions (see section 1.5 below).
Unaccusatives, formed synthetically with the prefix i- or its variants, are illustrated by (8) above and by (20).

(20) … orive porma i-brunvis…
     both form.NOM i-decline
     '… Both forms decline …'
     (From the scientific prose in Pančviʒe 1942: 399)

In (20), unlike (19), there is no implied agent, no unspecified subject. Evidence that the final subject, in this instance orive porma ('both forms'), is the initial direct object is drawn from a wide variety of phenomena, including case marking, suppletion, preverb alternation, inversion, and − in Old Georgian − plural agreement (Harris 1981: Ch. 13, 1982).

Both the passive and the unaccusative are Class 2 verb forms. Some verbs have both analytic passive forms and synthetic unaccusative forms with distinct syntax, semantics, and functions, thus providing a clear basis for distinguishing between them (examples in Harris 1981: 192−193). On the other hand, there are synthetic forms with the i- formant which have the syntax and semantics of the passive; that is, they permit agentive subjects marked with the postposition mier. Thus, there is not a one-to-one correspondence between morphology and syntactic construction with regard to unaccusatives and passives.

1.5. Other phenomena involving grammatical relations in simple sentences

Version in Georgian refers to syntactic processes that create possessive, benefactive, or other indirect objects and to the morphological marking of this on the finite verb form. Version has been studied by Boeder (1968) within a traditional framework and by Harris (1981: Ch. 6) within the framework of relational grammar; an analysis within the framework of construction grammar is proposed in Gurevich (2006).

It has been suggested that causatives in Georgian are derived from a biclausal underlying form, in which the matrix clause contains an abstract predicate CAUSE, which is realized as the causative suffix -ev/-evin (Harris 1981: Ch. 5). Davies and Rosen (1988) have shown, however, that the same range of facts can be accounted for in a monoclausal analysis for Georgian and other languages.

Work within the framework of relational grammar revealed for the first time that there exists a category of retired term in Georgian, the occurrence of which is predictable on the basis of universal principles. In Georgian, retired terms occur in a number of disparate constructions, including inversion, analytic passives, causatives, infinitives, and other non-finite verb forms. Syntactically, retired terms are characterized by the fact that they behave like initial terms (initial subjects, initial direct objects, or initial indirect objects) with respect to those phenomena that are stated on initial termhood, while they behave like non-terms with regard to those rules that are sensitive to final termhood. In Georgian, retired transitive subjects are marked with the postposition mier, retired intransitive subjects and direct objects with the genitive case alone, and retired indirect objects with the postposition tvis. The remarkably regular marking of retired terms is illustrated below by retired indirect objects from various constructions (see Harris 1981: Ch. 11 et passim).
The reader should bear in mind that final indirect objects are marked not with tvis but with the dative case, in all series, as for example in (21a). (On the analysis presented here, developed more fully in Harris 1981, in Series III it is the initial subject of verbs of Classes 1 and 3 that is the final indirect object; in this series the tvis-marked nominal is the retired indirect object.) (21b) is an inversion construction, with a retired indirect object; the corresponding (21a) in Series II has a final indirect object and no retired terms.

(21) a. vano-m deda-s daurek'a.
        Vano-NAR mother-DAT he.phone.her
        'Vano phoned (to) his mother.'

     b. vano-s dedis-tvis daurek'avs.
        Vano-DAT mother.GEN-tvis he.phone.her
        'Vano has evidently phoned (to) his mother.'


(22) illustrates a causative in Series II; the retired indirect object, 'mother', is marked the same way in other series as well.

(22) mama-m vano-s miacemina sačukar-i dedis-tvis.
     father-NAR Vano-DAT he.give.him.it.CAUS gift-NOM mother.GEN-tvis
     'Father made Vano give the gift to mother.'

(23) shows a retired indirect object with a masdar (deverbal noun), (24) with an infinitive of purpose, and (25) with an infinitive in object raising.

(23) q'vavil-eb-is micema masp'inʒlisa-tvis
     flower-PL-GEN giving host.GEN-tvis
     'giving flowers to the host'

(24) vašl-i viq'ide masc'avleblis-tvis misacemad.
     apple-NOM I.buy.it teacher.GEN-tvis give.INF
     'I bought an apple to give to the teacher.'

(25) sačukar-i šeuperebeli-a anzoris-tvis misacemad.
     gift-NOM unsuitable-it.be Anzor.GEN-tvis give.INF
     'The gift is unsuitable to give to Anzor.'

One of the contributions of relational grammar has been the prediction of retired terms according to universal principles and the study of their language-particular characteristics, such as this highly regular marking in Georgian.

1.6. Analytical constructions vs. synthetic verb forms

In modern Georgian, analytical constructions with the light verbs moxdoma 'happen' / moxdena 'make happen' (26a) have become popular (cf. the traditional synthetic verb form in [26b]). According to Amiridze and Gurevich (2006: 216), the analytical construction (26a) is interchangeable with a traditional synthetic form (26b) and is sometimes the only way to express a complex event in the absence of a synthetic verb (27a vs. 27b).

(26) a. samartaldamcav-eb-i axdenen današaul-ta mičkmalva-s.
        law.enforcement.official-PL-NOM they.make.it.happen crime-GEN.PL hiding.away-DAT
        Lit.: Law.enforcement.officials make-happen the hiding of crimes.
        'Law enforcement officials hide crimes.'

     b. samartaldamcav-eb-i čkmalaven današaul-eb-s.
        law.enforcement.official-PL-NOM they.hide.them crime-PL-DAT
        'The law enforcement officials hide crimes.'

     ([26a] and [26b] adapted from Amiridze and Gurevich 2006: 216)


(27) a. danarčen-is rek'onst'rukcia-s mexsiereba axdens.
        rest-GEN reconstruction-DAT memory.NOM it.makes.it.happen
        Lit.: of.rest reconstruction memory it.make.it.happen
        'The rest is reconstructed by the memory.'

     b. *danarčen-s mexsiereba arek'onst'ruirebs.
        rest-DAT memory.NOM it.make.it.reconstruct
        'The rest is reconstructed by the memory.'

     ([27a] and [27b] adapted from Amiridze and Gurevich 2006: 222)

The construction is argued to be a recent contact-induced change of the 20th century, under the influence of Russian as a model language (Amiridze and Gurevich 2006: 217−219). In both languages the analytical constructions have been used in official style. However, unlike in Russian, in Georgian the use extends to other areas of language use, such as academic writing (see [28a], which is a new alternative to the use of the passive in academic style [28b]), and occasionally to poetry (29).

(28) a. xdeboda … masala-ta moʒieba.
        it.be.happening material-GEN.PL searching.NOM
        Lit.: it.was.happening of.materials searching.
        'Searching for materials was carried out.'

     b. moiʒieboda masal-eb-i.
        it.be.searched material-PL-NOM
        'Materials were being searched.'

     ([28b] cited from the scientific prose in Tuite and Bukhrashvili 2002: 23)

(29) drodadro xdeba par-xmal-is daq'ra.
     from.time.to.time it.happen shield-sword-GEN dropping.down.NOM
     Lit.: From time to time it.happens of.shield.and.of.sword dropping.down
     'From time to time one gives up.'
     (An extract from a 1984 poem by Lebaniʒe 1987: 17)

The analytical construction in Georgian can replace synthetic verbs of any semantic class, and it is changing from a pragmatically marked to an unmarked use pattern. Although contact-induced changes may lead to changes in the overall typological profile of a language (Heine and Kuteva 2005: 157−165), it is still too early to say that the spread of the analytical construction at the expense of the corresponding synthetic verbs is shifting Georgian's typological profile from that of a flectional language toward that of an analytical language.

1.7. Formation of content questions

In Georgian, question words, such as vin ('who'), ra ('what'), rodis ('when'), sad ('where'), rogor ('how'), and rat'om ('why'), occupy focus position, immediately preceding the finite verb. Since the verb itself may occupy a variety of positions, the question word may be sentence-initial, as in (30b), or internal, as in (30a). (30c) is ungrammatical because the question word is separated from the verb.

(30) a. kalak-ši rodis c'axvedit?
        city-in when you.go
        'When did you go into the city?'

     b. rodis c'axvedit kalak-ši?
        when you.go city-in
        'When did you go into the city?'

     c. *rodis kalak-ši c'axvedit?
        when city-in you.go
        'When did you go into the city?'

In Georgian, question words cannot move outside the clauses in which they originate, as illustrated in (31).

(31) *ra / ra-s ggonia nino acxobs?
     what.NOM what-DAT you.think.it Nino.NOM she.bake.it
     'What do you think Nino is baking?'

Thus, although we must assume a movement rule to account for the placement of question words within their clauses, this rule differs from familiar WH-Movement in that it does not move the word to initial position and in that it does not cross clause boundaries. Additional details and exceptions to the generalizations stated here are discussed in Harris (1984).
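The positional restriction on question words lends itself to a toy formalization. The sketch below is our own illustration, not part of the chapter; the function and its word-list representation are invented, and it deliberately encodes only the adjacency constraint, ignoring the clause-boundedness facts of (31).

```python
# Schematic check of the Georgian focus-position constraint: a question
# word is licensed only in the position immediately preceding the finite
# verb, wherever that verb happens to stand in the sentence.

QWORDS = {"vin", "ra", "rodis", "sad", "rogor", "rat'om"}

def question_word_ok(words, verb_index):
    """True if every question word immediately precedes the verb at verb_index."""
    return all(i == verb_index - 1 for i, w in enumerate(words) if w in QWORDS)

assert question_word_ok(["kalak-ši", "rodis", "c'axvedit"], 2)      # (30a)
assert question_word_ok(["rodis", "c'axvedit", "kalak-ši"], 1)      # (30b)
assert not question_word_ok(["rodis", "kalak-ši", "c'axvedit"], 2)  # (30c)
```

The three assertions mirror examples (30a−c): the question word may sit anywhere the verb's left edge allows, so both verb-medial and sentence-initial orders pass, while intervening material between rodis and the verb fails.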

2. Complex sentences

Relative clauses are discussed in section 2.1, complement clauses in 2.2, and adverbial clauses in 2.3.

2.1. Relative clauses

Early subsections here describe various types of finite relative clauses (2.1.1), the possibility of extraposing them (2.1.2), the relations relativizable (2.1.3), and structural differences between restrictive and non-restrictive relatives (2.1.4). Headless relatives are described in subsection 2.1.5, and participles in 2.1.6.

2.1.1. Structural types

Modern Georgian employs a variety of strategies in the formation of relative clauses. In the written language there is a strong preference for relatives formed with a relative pronoun and a particle (PRT) -c (romelic 'which', vinc 'who', sadac 'where', etc.), as in (32) and (33).

(32) čit-i, romel-i-c bambi-dan gak'etebuli-a
     calico-NOM which-NOM-PRT cotton-from made-it.be
     'calico, which is made from cotton'

(33) c'inadadeba-ši, roml-is šemasmenel-i-c aris zmna
     sentence-in which-GEN predicate-NOM-PRT it.be verb.NOM
     'in a sentence whose predicate is a verb'
     (From the scientific prose of Enukiʒe 1985: 309)

The relative pronoun always occupies clause-initial position, as in these examples. We assume a movement rule to account for this, but note that the rule for question words in Georgian is not comparable (section 1.6 above). In Modern Georgian the case of the relative pronoun is determined strictly by features of the relative clause: the grammatical relations of the relative pronoun in its clause, and the class and series of the verb. The case of the head is determined by the same features of its own clause. The number of the pronoun is generally the same as that of the head noun, as in (34).

(34) sul sxva-a gamomtvlel-i mankan-eb-i, roml-eb-sa-c
     entirely other-it.be computing-NOM machine-PL-NOM which-PL-DAT-PRT
     šeuʒliat inpormaci-is miγeba da gadamušaveba
     they.can.it information-GEN receiving.NOM and processing.NOM
     'computers, which can receive and process information, are entirely different'
     (Cited in K'vant'aliani 1983: 70)

However, under a variety of circumstances a plural head noun may be followed by a relative pronoun that is singular in form; this happens especially when the relative word is inanimate, as in (35) (K'vant'aliani 1983: 71−77).

(35) p'erang-eb-i ecva, romel-i-c muxl-eb-amdis sc'vdeboda.
     shirt-PL-NOM he.wear.it which-NOM-PRT knee-PL-until it.reach
     'He wore shirts which reached to his knees.'
     (Cited in K'vant'aliani 1983: 71)

Masdars (deverbal nouns) and abstract nouns are not ordinarily put in the plural; but when they have plural meaning in context, relative pronouns referring to them may be in the plural, as in (36).
(36) ilap'arak'eben im did da p'asuxsageb sakmianoba-ze, roml-eb-sa-c
     they.talk those great and responsible business-on which-PL-DAT-PRT
     isini ak'eteben.
     they.NOM they.do.it
     'They will talk about the great, important business which they are doing.'
     (Cited in K'vant'aliani 1983: 74)


The head noun sakmianoba is in Georgian an abstract noun, formed with a suffix of abstract nouns, -oba, and having a meaning more precisely equivalent to 'busy-ness'. In instances of this sort, the relative pronoun is thought to agree with the semantic plurality of the head noun, rather than with its grammatical number.

The particle -c(a) marks relativized NPs or NPs containing relativized constituents, and it differentiates a relative pronoun (e.g. romel-i-c 'which') from the corresponding interrogative pronoun (e.g. romeli 'which?'). The particle is enclitic to the entire NP of which the relative pronoun is a constituent. In (32) and (34)−(36) the pronoun itself constitutes the entire NP; in (33), on the other hand, the pronoun is a constituent of the NP romlis šemasmenel-i-c ('whose predicate'), and the particle is enclitic to the full NP. The particle may be attached to a postposition, as in romel-tana-c (which-with-PRT, 'with whom') (see Aronson 1972: 140; Ʒiʒiguri 1969: 252 for examples). It is entirely possible, too, for the particle to be omitted when the relative pronoun is only one constituent of an NP within its clause; this is illustrated in (41) in the next subsection.

Since in structures of the type illustrated in (32)−(36) the relative clause follows the head noun and makes use of a relative pronoun, in what follows the structure is termed the (postnominal) relative pronoun strategy. All of the remaining types, seldom found in written Georgian but very common in the spoken language, make use of a particle rom/ro ('that'). The relativizer rom (unlike the complementizer of the same form) generally occurs in the "floating" position; that is, it occurs between the first constituent and the verb of its clause. In some instances it can be clause-initial (for examples see Harris 1994). The first of these types of relative clauses, referred to here as the postnominal gap strategy, is illustrated in (37).
In this example and those that follow in this subsection, the relative clause is bracketed.

(37) xalxi [k'areb-tan axlos ro idga], aq'aq'anda.
     people(SG) doors-at close that he.stand he.jabber
     ‘The people who sat close by the doors began to jabber.’

(38) ert-i matgani [tma-ši rom bandi akvs čac'nuli]
     one-NOM them hair-in that band she.have.it tied
     ‘one of them who has a band tied in her hair’

(Examples [37] and [38] cited by Vogt 1971: 51.)

In a gapped relative clause, the relative nominal is not overtly represented in its clause; in the postnominal gap type, the relative clause follows its head. The head noun bears the case required by its function in the main clause, and it is this fact which permits us to see the structure of the clause clearly. For example, in (38), the NP erti matgani (‘one of them’) would be in the dative case, which would be required by the verb of its clause, akvs (‘has’), if it were a constituent of that clause.

In addition to postnominal relative clauses, Modern Georgian has relative clause types that precede the head noun. We may consider first the prenominal gap strategy, illustrated in (39).

(39) [šen-gan ro miviγeb], im pul-it me gadavixdi val-s.
     you-from that I.receive.it that money-INS I.NOM I.pay.it debt-DAT
     ‘I will pay off the debt with the money which I receive from you.’
     (Tschenkeli 1958: 203)


VII. Syntactic Sketches

Like clauses produced with the postnominal gap strategy, this clause type lacks overt representation of the relative nominal and contains the particle ro(m). The prenominal gap type generally precedes not only the head noun, but the entire main clause, as in this example.

The final major relative clause type is the (prenominal) non-reduction strategy, in which the relative nominal is represented as a noun, rather than being “reduced” to a pronoun or gap. The main clause contains a pronoun (or in [44] an adverb) coreferential with the relative nominal; we assume that this pronoun acts as head.

(40) minda, [betania-ši rom k'olmeurnoba-a], is vnaxo.
     I.want.it Betania-in that collective.farm-it.be it.NOM I.see.it
     ‘I want to see the collective-farm that is in Betania.’
     (Cited by Vogt 1971: 51)

Relative clauses may also be formed with anaphoric pronouns, but this type is considered non-standard by speakers we have consulted and is not further discussed here.

2.1.2. Extraposition

The relative clause may be separated from its head or pronominal representation in the matrix clause with all types of relativization, as illustrated in (41)−(44). In these examples the demonstrative ‘that’ is glossed ‘that(DEM)’ to distinguish it from the subordinating particle ‘that’.

(41) Postnominal Relative Pronoun Strategy
     ert-i gumbat-i šeik'ra, [roml-is tav-ze cxovel-i taigul-i gamoisaxa].
     one-NOM dome-NOM it.attach which-GEN head-on living-NOM bouquet-NOM it.show
     ‘One dome was attached, on the top of which a living bouquet showed.’
     (Važa Pšavela, quoted in Ʒiʒiguri 1969: 236)

(42) Postnominal Gap Strategy
     im ek'lesia-ši minda šesvla, [lenin-is moedni-dan rom čans].
     that(DEM) church-in I.want.it going.into Lenin-GEN square-from that it.show
     ‘I want to go in that church that can be seen from Lenin Square.’

(43) Prenominal Gap Strategy
     [me rom gavak'ete], čven-s st'umar-s daalevine is γvino?
     I.NAR that I.make.it our-DAT guest-DAT you.cause.drink.it.him that(DEM).NOM wine.NOM
     ‘Did you have our guest drink the wine that I made?’


(44) Non-reduction Strategy
     [rest'oran-i mdinar-is nap'ir-ze ro(ma)-a], minda rom šen ik c'amiq'vano.
     restaurant-NOM river-GEN bank-on that-it.be I.want.it that you there you.take.me
     ‘I want you to take me to the restaurant that is on the bank of the river.’

2.1.3. Grammatical relations relativizable

With all relative clause strategies, a wide variety of grammatical relations are relativizable. For the relative pronoun strategy, a range of possibilities is illustrated in the two preceding subsections, including subject, possessor, and object of a postposition; although not illustrated here, all other noun functions can be relativized with this strategy. With the postnominal and prenominal gap types, subjects and objects can clearly be relativized. Many informants also permit relativization of objects of postpositions, as long as the relation of the gapped nominal within the relative clause is clear from the context. More restrictive informants do not like such structures, though they do occur in spoken language.

(45) Postnominal Gap Strategy
     es pilm-i, čem-i megobar-i rom mtavar rol-s tamašobs, axla gadis tbilis-ši.
     this.NOM film-NOM my-NOM friend-NOM that main role-DAT he.play.it now it.show Tbilisi-in
     ‘This film, that my friend plays a main role [in], is showing now in Tbilisi.’

(46) Non-reduction Strategy
     šen rom c'ign-i ačuke (im) megobar-s, is movida st'umr-ad.
     you.NAR that book-NOM you.give.it.him that(DEM) friend-DAT s/he.NOM s/he.come guest-as
     ‘The friend to whom you gave the book came to visit.’

Other interpretations of sentence (46) are possible, as described below in connection with adverbial clauses.

2.1.4. Restrictive and non-restrictive relatives

Restrictive relative clauses may optionally be distinguished from non-restrictive ones by inclusion of the distal demonstrative pronoun before the head noun. (47) provides an illustration with the postnominal gap type.


(47) ikneba es is mela ar iq'os, čem-s diasaxlis-eb-s rom it'acebda-o.
     it.be this.NOM that.NOM fox.NOM not it.be.SBJV my-DAT housewife-PL-DAT that he.carry.her-QUOT
     ‘Perhaps this is not the fox that carried off my housewives, he said.’
     (Važa Pšavela, cited by Ʒiʒiguri 1969: 332)

A more literal translation of (47) would be ‘… not that fox …’ This sort of construction is also used with the relative pronoun strategy (for examples, see Ʒiʒiguri 1969: 332−333).

In the spoken language, some non-restrictive relatives formed with a relative pronoun may sound artificial, as does (48).

(48) svaneti, romeli-c dasavlet sakartvelo-ši-a
     Svaneti which-PRT western Georgia-in-it.be
     ‘Svaneti, which is in western Georgia’

A more natural counterpart of (48) is (49).

(49) svaneti, dasavlet sakartvelo-ši ro(ma)-a
     Svaneti western Georgia-in that-it.be
     ‘Svaneti, which is in western Georgia’

However, in the written language the relative pronoun strategy is certainly used for both restrictive and non-restrictive varieties. (35) is a clear example of a restrictive relative, and (50) provides an illustration of a non-restrictive one.

(50) k'. lortkipaniʒe, romel-sa-c … damoc'mebuli akvs ikceodin …
     K'. Lortkipaniʒe which-DAT-PRT verified he.has ikceodin
     ‘K'. Lortkipaniʒe, who has verified [the occurrence of the form] ikceodin …’
     (From the scientific prose of K'ik'naʒe 1961: 277)

2.1.5. Headless relatives

The structural types discussed above have in common that they modify a head, at least in abstract structure. In Georgian there are also so-called headless relatives that modify no head in the matrix clause. These are formed primarily with relative pronouns and have the characteristics noted above for this clause type; an example is (51).

(51) vin-c mohk'vda, tav-sa mouk'vda-o.
     who.NOM-PRT he.die self-DAT he.die.him-QUOT
     ‘Whoever died, died for himself, they say.’
     (I.C., cited by Ertelišvili 1963: 73)

Such clauses have general meanings, ‘a person who’, ‘someone who’, ‘a place where’, etc. It is possible for a headless relative to have a correlative pronoun, as shown in (52).


(52) visa-c gač'irveba ar unaxavs, man lxena ar icis. who.DAT-PRT need.NOM not s/he.see.it s/he.NAR joy.NOM not he.know.it ‘Whoever has not known trouble does not know joy.’ (Proverb, cited by Ertelišvili 1963: 71) The structure in (52) then resembles a prenominal non-reduced structure, but differs from it (i) in having a relative pronoun, and (ii) in lacking rom.

2.1.6. Participles

In traditional Georgian linguistics, discussion of relative clauses is usually restricted to finite clauses, but recent works on other languages often include non-finite clauses as relatives. Accordingly, participles are briefly described here. Future participles, present participles, and past passive participles can all be used as pre-nominal modifiers, but it is the last-named type that is most likely to involve additional constituents, as in (53).

(53) avt'or-eb-i emq'arebian … mat mier ča-c'eril masala-s.
     author-PL-NOM they.use.it them by down-written material-DAT
     ‘The authors use … material recorded by them[selves].’
     (From the scientific prose of Čxubianišvili 1972: 132)

It would be possible to express the same meaning with a relative clause with the head masala-s; in that case, the relative clause could have either an active or a passive structure. With the participle, only the passive structure can occur. The participial clause may be compared with the passive finite clause (54) (see above for the structure of passives).

(54) es masala mat mier iq'o čac'eril-i.
     this material.NOM them by it.be down.written-NOM
     ‘This material was recorded by them.’

In (54) the passive form of the verb consists of an auxiliary and a past passive participle; when the participial clause occurs as a prenominal modifier, it is non-finite and accordingly lacks the auxiliary. In (54) the participle has the suffix -i of the nominative case; in (53) its form agrees with the dative case of its head, though this agreement is only minimally indicated (see Tschenkeli 1958: 38). The noun relativized, in this case masala (‘material’), is gapped from the participial clause. Other features of finite passives, including the optional agent, are present also in the reduced participial clause, as in (54).

2.2. Complementation

2.2.1. Grammatical relations

Sentential subjects and sentential direct objects are common in Georgian, as illustrated below. Complement clauses may also bear oblique grammatical relations; this is discussed in subsection 2.2.4.


(55) cnobili-a, rom labialuri tanxmovn-eb-is gavlen-it a xmovan-ma known-it.be that labial consonant-PL-GEN influence-INS a vowel-NAR šeiʒleba miiγos o an u xmovn-is saxe. it.possible it.receive.it.SBJV o or u vowel-GEN appearance.NOM ‘It is [well] known that under the influence of a labial consonant the vowel a can take on the appearance of the vowel o or u.’ (From the scientific prose of Melikišvili 1981: 80) (56) vpikrobt, rom am-is sapuʒvel-i šekmna … we.think.it that this-GEN basis-NOM it.created.it ‘We believe that it created the basis for this …’ In (55) the clause beginning rom is the subject of cnobilia (‘it is known’). In (56) the dependent clause is the direct object of vpikrobt (‘we think’).

2.2.2. Complement types

Modern Georgian has the following types of complementation: indicative, subjunctive, and deverbal noun (masdar). Infinitives occur in Georgian, but their use is limited to purpose clauses and object raising constructions. Noonan (1985) has classified complement-taking predicates into semantic types and has made cross-language predictions about the sort(s) of complements they will permit. Vamling’s (1987) research on Georgian complementation on the whole supports Noonan's predictions; the choice of indicative vs. subjunctive form, for example, is usually determined by the main-clause predicate, as shown by the following examples from Vamling (1987: 33−34).

(57) maxsovs, rom is gavak'ete/ *gavak'eto.
     I.remember.it that it I.do.it.IND I.do.it.SBJV
     ‘I remember that I did it.’

(58) vap'ireb is *gavak'eteb/ gavak'eto.
     I.intend.it it I.do.it.IND I.do.it.SBJV
     ‘I intend to do it.’

The list of predicates taking subjunctive complements promises to be very long, for it includes not only the verbs listed by Noonan (1985), but complex predicates like mizania (‘the goal is’), mtavari-a (‘the main [thing] is’), and the predicate in the main clause of (59), where the complement clause is gvepikra in the subjunctive.

(59) sc'ori ar ikneboda gvepikra, rom …
     correct not it.be we.think.it.SBJV that
     ‘It would not be correct for us to think that …’
     (From the scientific prose of Dondua 1967: 84)

Masdars (MAS) are often possible where either indicative or subjunctive may occur, as in (60)−(61), likewise from Vamling (1987: 33−34).


(60) maxsovs mis-i gak'eteba.
     I.remember.it it.GEN-NOM do.MAS.NOM
     ‘I remember doing it.’ (Vamling: ‘I remember that I did it’)

(61) vap'ireb mis gak'eteba-s.
     I.intend.it it.GEN do.MAS-DAT
     ‘I intend to do it.’

Vamling notes that the semantic categories cannot fully predict the complement type; a particular problem is presented by the two sets of predicates of fearing and of negative propositional attitude; predicates of both of these groups permit complements of all three types. The sentences below (from Vamling 1987: 37−38) illustrate the possibility of taking either indicative (62) or subjunctive (63).

(62) meeč'veba, rom mova.
     I.doubt.it that he.come.IND
     ‘I doubt that he’ll come.’

(63) meeč'veba, rom movides.
     I.doubt.it that he.come.SBJV
     ‘I doubt that he would come.’

The use of the subjunctive in complements in Georgian was found to be related primarily to determined time reference and irrealis states and events. Vamling (1987: 53−55) found that complement clauses could occur as masdars with those main clause verbs that had noun indirect objects, while the complement must occur as a finite clause with main clause verbs that had noun direct objects.

2.2.3. Complementizers

In terms of frequency of occurrence and flexibility of use, rom (‘that’) is the most important complementizer in Modern Georgian. As it occurs in almost every kind of dependent clause in Georgian, rom itself does not indicate the type of dependent clause but does serve to mark the dependent status of the clause (although it can occur in certain kinds of questions also). In noun clauses, rom occurs with both sentential subjects and sentential objects.

With indirect questions, the complementizer tu (‘whether’) occurs, as in (64); if the embedded question is a content question, tu may occur together with the question word or may be absent, as in (65). Additional conditions on the occurrence of tu are noted in Harris (1984).

(64) maint'eresebs, tkven-i ʒaγl-i tu ik'bineba.
     it.interest.me your-NOM dog-NOM whether he.bite
     ‘I am interested in whether your dog bites.’

(65) maint'eresebs, (tu) sad midixar.
     it.interest.me whether where you.go
     ‘I am interested in where you are going.’


Asyndetic constructions are also found, where a dependent clause is joined to a matrix clause without an overt conjunction. In some cases, the conjunction seems to have been deleted, as in (66b), which seems to be from (66a) (Basilaia 1974: 30).

(66) a. msurs, rom k'argad iq'os.
        I.want.it that well s/he.be.SBJV
        ‘I want him/her to be fine.’
     b. msurs k'argad iq'os.
        I.want.it well s/he.be.SBJV
        ‘I want him/her to be fine.’

In some cases, it would be stylistically awkward to have a conjunction, as in (59) above, where rom added to introduce the clause gvepikra would be infelicitous because of the rom which introduces the next clause; examples of this kind are numerous in our collection. Basilaia (1974: 30−32) has argued that omission of the complementizer is not determined by the potential ambiguity of the resulting sentence. Since several conjunctions, especially rom, are used in multiple functions, they do not necessarily disambiguate the relation between the main and dependent clauses. Therefore, even sentences in which a conjunction does occur may have an unclear relationship between the clauses, and this cannot be the determining factor in the omission of the complementizer.

In some examples, verb tense and mood help to disambiguate the relation between the two conjunctionless clauses; in other examples the correlative does this, as in (67) (Basilaia 1974: 38).

(67) čem-i ocneba isa-a, sc'avla-ši viq'o p'irvel-i.
     my-NOM dream.NOM that.NOM-it.be studying-in I.be.SBJV first-NOM
     ‘My dream is this: I would be first [best] in studying.’
     (I. Griš, cited in Basilaia 1974: 35)

In (67) the use of the subjunctive mood in the predicate of the embedded clause (viq'o ‘I would be’) contributes to marking this as the dependent clause. The boundary between the two clauses is primarily marked by intonation, though verb-final word order in the first clause may be an added clue.
(Note that pitch of the voice, in addition to the pause between the clauses, contributes to the intonational marking; Basilaia 1974: 39−41.) Finally, the correlative, by the case it bears, makes it clear that the dependent clause is the predicate nominal.

2.2.4. Correlatives

A variety of pronouns occur in main clauses as correlatives, but the most important are the demonstratives − the proximate es, the distal igi, and the remote is. When a correlative is used with a complement clause, the full range of complementizers is available: rom ‘that’, as in (68), an interrogative pronoun with or without tu (‘whether’), as in (69), or no complementizer, as in (67) above.


(68) nišnavs tu ara es imas, rom apxazur-adiγuri subst'rat'-is it.mean.it or not this.NOM that.DAT that Abxaz-Adighe substrate-GEN sak'itx-i zanur-svanur-ši saertod moxsnili-a? question-NOM Zan-Svan-in generally resolved-it.be ‘Does this mean or not that the question of an Abxaz-Adighe substrate in general in Zan-Svan is resolved?’ (From the scientific prose of Mač'avariani 1966: 170−171) (69) … mnišvneloba akvs imas, tu romel k'avšir-s significance.NOM it.have.it that.DAT whether which conjunction-DAT moixdens … c'inadadeba. it.bear.it clause.NOM ‘Which conjunction the … clause bears has significance’ or ‘It has significance which conjunction the … clause bears.’ (From the scientific prose of K'vač'aʒe 1950: 85) In (68) the subject pronoun, es (‘this.NOM’), refers to what has preceded; the correlative pronoun, imas (‘that.DAT’) refers to the dependent clause that follows. It is the contrast between the proximate form (es) and the remote (is, here in the dative case, imas) that reveals which pronoun refers forward and which backward; that is, in Georgian discourse proximate forms generally refer to what has come before, while remote forms refer to what is to come. (This generalization, pointed out to us by Nani Č'anišvili, holds true of the modern written technical language. Nevertheless, counterexamples exist in literature not far removed in time or place; see Ʒiʒiguri 1969: Ch. 6, for a sample.) The case forms of the pronouns reveal the grammatical relations they, together with the propositions they refer to, bear in the main clause. In this they follow the general rules stated above in this chapter; as the verb here is in a Series I form, the nominative pronoun refers to the logical subject of the clause, while the pronoun in the dative refers to the logical direct object. The verb akvs (‘has’) in the main clause of (69) is an inversion verb; the correlative imas and the complement clause represent the initial subject (final indirect object) of this verb. 
The complement is an indirect question and, as such, has a question word, here with tu (‘whether’) as the complementizer. Most often the correlative precedes its dependent clause, as in (68)−(69), but the opposite order is also found, as in (70). (70) rom masdarul-i k'onst'rukcia infinitivus finalis-is badal-i-a, that masdar-NOM construction.NOM infinitivus finalis-GEN substitute-NOM-it.be es k'argad čans šemdegi-dan-ac … this.NOM well it.show following-from-too ‘That the masdar construction is a substitute for the infinitivus finalis is shown clearly by the following …’ (From the scientific prose of Ʒiʒiguri 1969: 469) (70) illustrates a sentential subject, with a correlative in the proximate form, referring to what precedes it, and in the nominative case, representing its role as subject in the main clause. Although the translation makes use of a passive for greater naturalness in English, the voice of the main clause in Georgian is active, not passive.


It is possible for the complement clause to be separated from its correlative pronoun, as in (71).

(71) da es imas moc'mobs, rom eg sit'q'v-eb-i… c'evr-eb-s c'armoadgendnen …
     and this.NOM that.DAT it.confirm.it that this word-PL-NOM constituent-PL-DAT they.represent.it
     ‘… And this confirms that these words represented … constituents.’
     (From the scientific prose of Ʒiʒiguri 1969: 441)

Again it is imas, the remote correlative in the dative case, that refers to the clause that follows and indicates that it bears the relation of direct object in the main clause.

While the subject and direct object are the grammatical relations most often borne by complement clauses, with a correlative in the main clause it is possible for a dependent clause to bear an oblique relation. For example, in (72), mdgomareobs is an intransitive verb which may occur with an oblique nominal marked with the postposition -ši (‘in’); here the complement clause bears that oblique relation, and the correlative is marked with -ši.

(72) … sxvaoba ima-ši mdgomareobs, rom …
     difference.NOM that.DAT-in it.stand that
     ‘… The difference consists in [the fact] that …’
     (From the scientific prose of Basilaia 1974: 51)

In (73) and other examples like it, the complement clause and its correlative pronoun represent the initial direct object which is, however, not a direct object in final structure.

(73) … imis gasark'vevad, rogor-i-a damok'idebul-i c'inadadeba …
     that.GEN to.clarify what.kind-NOM-it.be dependent-NOM clause.NOM
     ‘… [in order] to clarify what kind of clause a dependent clause is …’
     (From the scientific prose of K'vač'aʒe 1950: 85)

It is entirely regular for the retired direct object of infinitives of purpose to be in the genitive case (Harris 1981: 155−157), as this one is.

In addition to the types described above, Georgian has so-called appositive clauses or noun complements. These seem to occur in or with the full range of grammatical relations, complement types, complementizers, and correlatives.
Two examples follow.

(74) cnobili-a is pakt'-i, rom mimartebit-i sit'q'veb-i … nacvalsaxelisa-gan c'armoišva.
     known-it.be that.NOM fact-NOM that relative-NOM words-NOM pronoun-from it.derive
     ‘The fact that the relative words [i.e. relative pronouns, etc.] derive from … pronouns is [well] known.’
     (From the scientific prose of Dondua 1967: 86)

The demonstrative pronoun is in (74) is evidently functioning as a correlative, even though here it modifies a noun pakt'i and in previous examples did not.


(75) ra sapuʒveli gvakvs… p'repiks-i mravlobit-is nišn-ad what basis we.have.it prefix-NOM plural-GEN marker-ADV davsaxot? we.designate.it.SBJV ‘What basis have we to designate the … prefix as the marker of the plural?’ (From the scientific prose of Čikobava 1946: 96) Although the verb of the complement clause in (75) is translated with an infinitive, it is a finite form in the subjunctive mood. In all examples in our collection, the complement of pakt'i (‘fact’) is in the indicative, as it is in (74); complements of sapuʒveli (‘basis’), varaudi (‘supposition’), and sabuti (‘evidence’), on the other hand, are in the subjunctive.

2.3. Adverbial clauses

Georgian has a wide variety of adverbial clauses, including clauses of time, location, manner, purpose, reason, condition, and concession, not all of which can be illustrated in this brief sketch. They may be set off by one of three devices: (i) a subordinating morpheme, (ii) one of two non-finite verb forms that are restricted primarily to adverbial clauses, or (iii) intonation (punctuation) alone.

2.3.1. Subordinating morphemes

Most types can be constructed either (i) with a subordinating conjunction that expresses lexical content and is specific to the particular type, such as roca (‘when’), particular to the adverbial clause of time, or tu (‘if’), particular to the conditional, or vinaidan (‘because’), particular to the clause of reason, or (ii) with the grammatical morpheme rom, which has no lexical meaning (see Thompson and Longacre 1985: 172). Because of the large number of conjunctions available, one can express subtleties of interclausal meaning-relations through the conjunction. An example of the expression of similar meanings with a lexical or grammatical subordinator is given in (76) and (77).

(76) gaiara ramdenime c'utma am mdgomareoba-ši, rodesa-c erti cxenosani gamočnda …
     it.pass some moment this situation-in when-PRT one horseman he.appear
     ‘Several moments passed in this situation, when a horseman appeared.’
     (Q'azb., cited by Ʒiʒiguri 1969: 75)

(77) ar gauvlia ert k'vires, rom amas meore šemtxveva-c daerto.
     not it.pass one week that this.DAT second incident-too it.added.to.it
     ‘Not a week had passed, when a second incident also occurred.’
     (Ak'ak'i, cited by Ʒiʒiguri 1969: 74)


With either variant, the ‘when’ clause may optionally precede or follow the main clause. With this ‘when’ type, as with most others, a correlative may optionally be used, as in (78).

(78) amis angariš-s mašin mogtxov, roca moval.
     this.GEN bill-DAT then I.request.it.you when I.come
     ‘I will ask you for the bill for this when I come.’

The correlative opens up the possibility of center-embedding the adverbial clause, as in (79).

(79) amis angariš-s, roca moval, mašin mogtxov.
     this.GEN bill-DAT when I.come then I.request.it.you
     ‘I will ask you for the bill for this when I come.’
     (Gr. Orbel., cited by Ʒiʒiguri 1969: 338)

The grammatical subordinator rom may also be used with a correlative, and with it the correlative amit'om or (i)mit'om (‘because of this, because of that’) is nearly always used, as in (80).

(80) mit'om vambob, rom mšoblebma da masc'avleblebma igulisxmon.
     therefore I.say.it that parents and teachers they.assume.it.SBJV
     ‘I say it because parents and teachers assume it.’
     (Ak'ak'i, cited by Ʒiʒiguri 1969: 348)

2.3.2. Special verb forms

At least two non-finite verb forms can signal specific adverbial clauses. The synchronic infinitive is usually called by a term that describes its diachronic origin − the masdar in the adverbial case − even in works where it is explicitly described as an infinitive. The -isas form does not have a commonly accepted name at all. Their synchronic syntax has been little discussed (see 2.4 below on this), though they have often been treated from a diachronic point of view. Use of the infinitive in Modern Georgian has been described in only two constructions − in adverbial clauses of purpose and in object raising constructions (Harris 1981: 154−156, 53−54); it is only the first of these that concerns us in this subsection.

(81) … [damok'idebul-i c'inadadeb-is dasak'avšireblad mtavar-tan] ixmareba ʒiritadad sam-i sax-is int'onacia …
     dependent-NOM clause-GEN join.INF main-with it.use basically three-NOM sort-GEN intonation.NOM
     ‘Mainly intonation of three sorts is used to join the dependent clause with the main [clause] …’
     (Basilaia 1974: 39)

The infinitive generally lacks overt expression of a subject, but in the following example gamosaxat'avad (‘express.INF’) does have a nominal (retired) subject, marked here by the postposition mier.


(82) [ori c'inadadebis mier movlena-ta an mokmedeba-ta dap'irisp'ireb-is two clauses by phenomenon-GEN.PL or action-GEN.PL opposition-GEN gamosaxat'avad] … gamoiq'eneba map'irisp'irebel-i int'onacia. express.INF it.use contrastive-NOM intonation.NOM ‘Contrastive intonation is used … for two clauses to express an opposition of phenomena or of actions.’ (Basilaia 1974: 25) The dependent clause is bracketed. Note that all terms of an infinitive are retired terms (Harris 1981: 165−167), and retired transitive subjects are regularly marked by mier (see section 1.5 above and Harris 1981: 171). A second type of adverbial clause uses a verb form in -isas; this type expresses simultaneity. Examples are given in (83) and (84). (83) [sazγap'ro pant'ast'ik'-is sp'ecipik'-is daxasiateb-isas] gansak'utrebul fairytale fantasy-GEN specific-GEN characterizing-while special mnišvnelobas iʒens misi šinaarsobrivi mxare. significance it.acquire.it its contentful side ‘In characterizing the specifics of fairytale fantasy, its content acquires special significance.’ (From the scientific prose of Kurdovaniʒe 1977: 129) (84) … ar aris ertnianoba [pazuri zmneb-is šemcveli k'onst'rukci-is not it.be unity phase verb-GEN containing construction-GEN analiz-isas]. analyzing-while ‘There is no unity in analyzing the construction containing phase verbs.’ (From the scientific prose of Enukiʒe 1985: 316) In all the examples we have collected of the -isas construction, the subject is omitted and is understood as ‘one’ or as ‘we’ or ‘you’ in their general senses. Note that, as in these two examples, it is not necessary that the subject of the -isas form be the same as a constituent in the main clause. The English translation of (83), considered ungrammatical by many, in popular usage has this same property.

2.3.3. Intonation alone

Basilaia (1974: 46−54) has pointed out that adverbial clauses may occur as true dependent clauses without any conjunction. Their dependent status is marked by intonation (or punctuation) and, in most cases, by use of a subjunctive verb form. A wide variety of adverbial clause types can be so marked, including temporal, causal, purpose, manner, conditional, and result clauses. An example is given below in (85).

(85) q'oveli γone ixmares, es xidi … gamomdgariq'o.
     all strength they.use.it this bridge it.prove.useful.SBJV
     ‘They used all [their] strength in order that the bridge … might prove useful.’
     (K'. Lort., cited by Basilaia 1974: 48)


2.4. Shared syntax of dependent clauses

The syntax of finite dependent clauses of all three major varieties is essentially like that of main clauses: there is no special word order, agreement, or case marking used for dependent clauses. Although the subjunctive might be considered a special morphology associated with dependency, the subjunctive occurs also in main clauses, while dependent clauses may have other forms, including the indicative.

On the other hand, non-finite clauses, which occur only as dependent clauses, have certain syntactic characteristics that set them apart. First, unlike finite clauses, non-finite ones have no agreement to encode subject, direct object, or indirect object. Second, the range of syntactic constructions possible is reduced in non-finite clauses. For example, causatives occur in non-finite forms with only a few specific verbs; passive and inversion constructions do not occur at all in masdars or forms derived from them. Third, the marking assigned to subjects and objects with non-finite forms is different from that assigned with finite forms. In particular, with non-finite forms, subjects of intransitives and direct objects are marked with the genitive case, subjects of transitives are marked with the postposition mier, and indirect objects are marked with the postposition -tvis (both postpositions governing the genitive case). It is noteworthy that the same marking is found with participles used as relative clauses (see [53]), with masdars as complement clauses (see [60]), with infinitives as purpose clauses (see [81]−[82]), and with the -isas forms based historically on masdars (see [83]−[84]). Examples of all possibilities can be found in Harris (1981: Chapters 10 and 11) and Lacabiʒe (1975).

The unmarked word order of non-finite clauses is the same, mutatis mutandis, as that of finite clauses.
For example in (82) the retired subject, ori c'inadadeba (‘two clauses’) is the first constituent, the retired direct object movlena-ta an mokmedeba-ta dap'irisp'irebis (‘an opposition of phenomena or of actions’) is second, and the verb gamosaxat'avad (‘to express’) is clause-final. We may observe that, loosely speaking, this is the order of noun phrases as well, in the sense that genitive attributes and other modifiers precede the head. With respect to the presence vs. absence of arguments, different kinds of clauses present various pictures. In relative clauses of the postnominal relative pronoun type and pre-nominal non-reduction type, in complement clauses of all types, and in adverbial clauses formed with subordinating morphemes, all arguments of the verb may be present. In pre- and postnominal relative clauses of the gap type, the one constituent that is the same as the head must be gapped. In infinitives the subject is ordinarily deleted on coreference with a term argument of the matrix clause, but in (82) the infinitive has a subject. Forms in -isas evidently do not permit overt expression of a subject. In the sections above, relative clauses, complement clauses, and adverbial clauses are distinguished according to function in the usual way (cf. Keenan 1985; Noonan 1985; Thompson and Longacre 1985). An exception is the so-called headless relative, which functions as a nominal. It is difficult, however, to find a formal criterion in Georgian which would group them in this way. We have noted already that the conjunction rom is used to signal dependent clauses of all three major varieties. Correlatives, too, occur with all three, as illustrated above. There are clauses that would be considered complements or adverbials by functional criteria, yet which have the formal marking -c associated primarily with relatives. (86) illustrates a noun complement with the “relative”

45. Georgian


pronoun ra-c; (87) illustrates a clause of location with the “relative” pronoun sada-c (see also Thompson and Longacre 1985: 183).

(86) … dasabutebas imisa, ra-c natkvami-a mtavar c'inadadeba-ši
     substantiation it.GEN what-PRT said-it.be main sentence-in
     ‘[expresses explanation or] substantiation of what is said in the main clause’
     (From the scientific prose of Basilaia 1974: 39)

(87) sada-c ševdivar, q'velgan dabrʒandi-s meubnebian.
     where-PRT I.enter everywhere sit.down-DAT they.say.it.me
     ‘Where[ever] I enter, they invite me to sit down.’
     (Ak'ak'i 404, cited by Ʒiʒiguri 1969: 60)

The temporal clause in (76) also contains a conjunction rodesa-c (‘when’) that has the enclitic particle that characterizes relative words.

In conclusion, there does not seem to be any formal marking or syntax that characterizes any one of the major types of dependent clause (relative, complement, adverbial) to the exclusion of the others. The particle -c marks relative words; but the relative clauses in (37)−(40) (and others) lack this marking, while the adverbial clauses in (76) and (86)−(87) have it. Some relative clauses are characterized by gaps, but others have relative pronouns or otherwise lack gaps. Masdars may be limited to complements, but the complement clauses in (55)−(59) are finite. Infinitives and -isas forms are limited to certain adverbial clauses, but not all adverbial clauses have them. All or most semantic types of adverbial clause may be marked with contentful subordinating conjunctions, such as tu (‘if’); but adverbial clauses can be marked by other means instead. In short, while we may distinguish these three major types by function, we cannot find formal correlates of these functions.

(Some of the data adduced in the present paper were elicited by the first author in the USSR in 1975 on a research trip supported by the International Research and Exchanges Board.)

3. References (selected)

Abesaʒe, Nia 1963 Rom k'avširi kartvelur enebši [The conjunction rom in the Kartvelian languages]. Tbilisis Saxelmc'ipo Universit'et'is Šromebi 96: 11−20.
Aissen, Judith L., and William A. Ladusaw 1988 Agreement and multistratality. In: Papers from the 24th Annual Regional Meeting of the Chicago Linguistic Society, Part 2, 1−15. Chicago Linguistic Society.
Amiridze, Nino 2006 Reflexivization Strategies in Georgian. (LOT Dissertation Series 127.) Utrecht, Netherlands.
Amiridze, Nino, and Olga Gurevich 2006 The sociolinguistics of borrowing: Georgian moxdoma and Russian proizojti ‘happen’. In: Rudolf Muhr (ed.), Innovation and Continuity in Language and Communication of Different Language Cultures, 215−234. Frankfurt a. M.: Peter Lang Verlag.
Anderson, Stephen R. 1982 Where’s morphology? Linguistic Inquiry 13: 571−612.


VII. Syntactic Sketches

Anderson, Stephen R. 1984a Rules as “morphemes” in a theory of inflection. In: 1983 Mid-America Linguistics Conference Papers, 3−21. Boulder.
Anderson, Stephen R. 1984b On representation in morphology: Case, agreement, and inversion in Georgian. Natural Language and Linguistic Theory 2: 157−218.
Apridoniʒe, Šukia 1986 Sit'q'vatganlageba Axal Kartulši [Word Order in Modern Georgian]. Tbilisi: Mecniereba.
Aronson, Howard I. 1970 Towards a semantic analysis of case and subject in Georgian. Lingua 25: 291−301.
Aronson, Howard I. 1972 Some notes on relative clauses in Georgian. In: The Chicago Which Hunt (Papers from the Relative Clause Festival), 136−143. Chicago Linguistic Society.
Asatiani, Rusudan 1994 Kartvelur Enata T'ip'ologiis Sak'itxebi [Issues in the Typology of Kartvelian Languages]. Tbilisi: Mecniereba.
Basilaia, Nik'andro 1974 Uk'avširo Rtuli C'inadadeba [The Conjunctionless Complex Sentence]. Tbilisi: Tbilisi University Press.
Boeder, Winfried 1968 Über die Versionen des georgischen Verbs. Folia Linguistica 2: 82−152.
Boeder, Winfried 1979 Ergative syntax and morphology in language change: The South Caucasian languages. In: Frans Plank (ed.), Ergativity, 435−480. New York: Academic Press.
Boeder, Winfried 2002 Syntax and morphology of polysynthesis in the Georgian verb. In: Nicholas Evans, and Hans-Jürgen Sasse (eds.), Problems of Polysynthesis, 87−111. (Studia Typologica, Beihefte [zur Zeitschrift] Sprachtypologie und Universalienforschung, volume 4.) Berlin: Akademie-Verlag.
Chanidzé, Akaki 1963 Le sujet grammatical de quelques verbes intransitifs en géorgien. Bulletin de la Société de Linguistique de Paris 58: 1−27.
Čikobava, Arnoldi 1942 Ergat'iuli k'onst'rukciis p'roblemisatvis k'avk'asiur enebši: am k'onst'rukciis st'abiluri da labiluri variant'ebi [On the problem of the ergative construction in Caucasian languages: Stable and labile variants of this construction]. Enimk'is Moambe 12: 221−239.
Čikobava, Arnoldi 1946 Mravlobitobis aγnišvnis ʒiritadi p'rincip'isatvis kartul zmnis uγvlilebis sist'emaši [On the basic principle of marking plurality in the conjugation system of the Georgian verb]. Iberiul-K'avk'asiuri Enatmecniereba 1: 91−127.
Čikobava, Arnoldi 1961 Ergat'iuli K'onst'rukciis P'roblema Iberiul-K'avk'asiur Enebši II [The Problem of the Ergative Construction in Ibero-Caucasian Languages]. Tbilisi: Sakartvelos SSR Mecnierebata Ak'ademiis Gamomcemloba.
Čikobava, Arnoldi 1968a Mart'ivi C'inadadebis P'roblema Kartulši, I [The Problem of the Simple Sentence in Georgian]. 2nd edition. Tbilisi: Mecniereba.
Čikobava, Arnoldi 1968b Mart'ivi c'inadadebis evoluciis ʒiritadi t'endenciebi kartulši [Basic tendencies of the evolution of the simple sentence in Georgian]. Mart'ivi C'inadadebis P'roblema Kartulši I, 269−280. Tbilisi: Mecniereba. (First published 1941.)


Čxubianišvili, Darejan 1972 Inpinit'ivis Sak'itxisatvis Ʒvel Kartulši [On the Question of the Infinitive in Old Georgian]. Tbilisi: Mecniereba.
Davies, William, and Carol Rosen 1988 Unions as multi-predicate clauses. Language 64: 52−88.
Dondua, K'arp'ez 1967 Damok'idebuli c'inadadebis ganvitarebis ist'oriidan ʒvel kartulši [From the history of the development of dependent clauses in Old Georgian]. Rčeuli Našromebi, I, 75−92. Tbilisi: Mecniereba.
Dowty, David 1977 Toward a semantic analysis of verb aspect and the English imperfective progressive. Linguistics and Philosophy 1: 45−77.
Enukiʒe, Leila 1985 K'onperencia ‘t'ip'ologiuri metodebi sxvadasxva sist'emis enata sint'aksši’ [Conference ‘Typological methods in the syntax of languages of various systems’]. Iberiul-K'avk'asiuri Enatmecniereba 23: 308−319.
Ertelišvili, Parnaoz 1963 Rtuli C'inadadebis Ist'oriisatvis Kartulši I: Hip'ot'aksis sak'itxebi [On the History of Complex Sentences in Georgian I: Questions of Hypotaxis]. Tbilisi: Tbilisis Saxelmc'ipo Universit'et'i.
Fähnrich, Heinz 1967 Georgischer Ergativ im intransitiven Satz. Beiträge zur Linguistik und Informationsverarbeitung 10: 34−42.
Fillmore, Charles J. 1968 The case for case. In: Emmon Bach, and Robert T. Harms (eds.), Universals in Linguistic Theory, 1−88. New York: Holt, Rinehart and Winston.
Gurevich, Olga 2006 Constructional morphology: The Georgian version. Doctoral dissertation, University of California, Berkeley.
Harris, Alice C. 1981 Georgian Syntax: A Study in Relational Grammar. Cambridge: Cambridge University Press.
Harris, Alice C. 1982 Georgian and the unaccusative hypothesis. Language 58: 290−306.
Harris, Alice C. 1984 Georgian. In: William S. Chisholm, Jr., Louis T. Milic, and John A. C. Greppin (eds.), Interrogativity, 63−112. (Typological Studies in Language 4.) Amsterdam: John Benjamins.
Harris, Alice C. 1985 Diachronic Syntax: The Kartvelian Case. (Syntax and Semantics 18.) New York: Academic Press.
Harris, Alice C. 1994 On the history of relative clauses in Georgian. In: Howard I. Aronson (ed.), Non-Slavic Languages of the USSR. Papers From the Fourth Conference, 130−142. Columbus, Ohio: Slavica Publishers.
Heine, Bernd, and Tania Kuteva 2005 Language Contact and Grammatical Change. (Cambridge Approaches to Language Contact.) Cambridge: Cambridge University Press.
Holisky, Dee Ann 1978 Stative verbs in Georgian and elsewhere: The classification of grammatical categories. In: Bernard Comrie (ed.), International Review of Slavic Linguistics 3, 139−162.


Holisky, Dee Ann 1979 On lexical aspect and verb classes in Georgian: The elements. In: Papers from the Conference on Non-Slavic Languages of the USSR, 390−401. Chicago Linguistic Society.
Holisky, Dee Ann 1981a Aspect and Georgian Medial Verbs. Delmar, NY: Caravan Books.
Holisky, Dee Ann 1981b Aspect theory and Georgian aspect. In: Phillip J. Tedeschi, and Annie Zaenen (eds.), Tense and Aspect, 127−144. (Syntax and Semantics 14.) New York: Academic Press.
Jensen, John T., and Margaret Stong-Jensen 1984 Morphology is in the lexicon. Linguistic Inquiry 15: 474−498.
Keenan, Edward L. 1985 Relative clauses. In: Timothy Shopen (ed.), Language Typology and Syntactic Description II: Complex Constructions, 141−170. Cambridge: Cambridge University Press.
K'ik'naʒe, Levan 1961 Uc'q'vet'lis xolmeobitis mc'k'rivi ʒvel kartulši [The screeve of imperfect indicative in Old Georgian]. In: Ak'ak'i Šaniʒe (ed.), Ʒveli Kartuli Enis K'atedris Šromebi, 7, 229−279. Tbilisi: Tbilisis Saxelmc'ipo Universit'et'is Gamomcemloba.
K'iziria, Anton 1985 Obiekt'is mier zmnis šetanxmeba mravlobit ricxvši tanamedrove kartulši [Agreement of the object with the verb in plural number in Modern Georgian]. Iberiul-K'avk'asiuri Enatmecniereba 24: 100−111.
Kurdovaniʒe, Teimuraz 1977 Pant'ast'ik'is arsi jadosnur zγap'arši [The essence of fantasy in fairytales]. Macne 4: 129−131.
K'vač'aʒe, Levan 1950 Rtuli C'inadadebis Sc'avlebis Metodik'a [Methodology of Teaching Complex Sentences]. Tbilisi: Sak. SSR Ganatlebis Saminist'ros P'ed. Mecn. Inst'it'ut'is Gamomcemloba.
K'vant'aliani, Leila 1983 Misamarti Sit'q'visa da Mimartebiti Nacvalsaxelis Šetanxmeba Kartulši [Agreement of Relative Words and Relative Pronouns in Georgian]. Tbilisi: Mecniereba.
Lacabiʒe, Lia 1975 Natesaobiti brunvis pormata semant'ik'uri da sint'aksuri k'avširi zmnastan tanamedrove kartulši [The syntactic and semantic relation of genitive case forms with the verb in Modern Georgian]. Macne 4: 150−157.
Lebaniʒe, Murman 1987 Rčeuli Lirik'a [Selected Lyrics]. Tbilisi: Sabč'ota Sakartvelo.
Mač'avariani, Givi 1966 Subst'rat'is sak'itxistvis dasavlur kartvelur (zanur-svanur) enobriv arealši [On the issue of substrate in the western Kartvelian (Zan-Svan) linguistic area]. Iberiul-K'avk'asiuri Enatmecniereba 15: 162−171.
Mač'avariani, Givi 1974 Asp'ekt'is k'at'egoria kartvelur enebši [The category of aspect in the Kartvelian languages]. Kartvelur Enata St'rukt'uris Sak'itxebi 4: 118−141.
Melikišvili, Irine 1981 Kartvelur enata ori izolirebuli bgeratpardobis axsnisatvis [On the explanation of two isolated sound correspondences]. Tanamedrove Zogadi Enatmecnierebis Sak'itxebi 6: 70−84.
Merlan, Francesca 1982 Another look at Georgian “inversion”. Papers from the Second Conference on the Non-Slavic Languages of the USSR. Folia Slavica 5: 294−312.


Noonan, Michael 1985 Complementation. In: Timothy Shopen (ed.), Language Typology and Syntactic Description II: Complex Constructions, 42−140. Cambridge: Cambridge University Press.
Normebi 1970 Tanamedrove Kartuli Salit'erat'uro Enis Normebi [The Norms of the Modern Georgian Literary Language]. Tbilisi: Mecniereba.
Pančviʒe, Vladimer 1937 Uduri ena da misi k'iloebi [The Udi language and its dialects]. Enimk'is Moambe 2: 295−316.
Pančviʒe, Vladimer 1942 “Inpinit'ivis” pormata c'armoeba da mnišvneloba udur enaši [Formation and meaning of “Infinitive” in the Udi language]. Sakartvelos SSR Mecnierebata Ak'ademiis Moambe 3.4: 199−205.
Šaniʒe, Ak'ak'i 1973 Kartuli Enis Gramat'ik'is Sapuʒvlebi, I, Morpologia [Foundations of Georgian Grammar, I, Morphology]. (Works of the Chair of the Old Georgian Language of the Tbilisi State University, 15.) Tbilisi: Tbilisi University Press.
Sapir, Edward 1917 Review of Het passieve karakter van het verbum transitivum of van het verbum actionis in talen van Noord-Amerika, by C. C. Uhlenbeck, 1916. International Journal of American Linguistics 1: 82−86.
Schuchardt, Hugo 1895 Über den passiven Charakter des Transitivs in den kaukasischen Sprachen. (Sitzungsberichte der Akademie der Wissenschaften 133, 1.) Wien.
Skopeteas, Stavros, Caroline Féry, and Rusudan Asatiani 2009 Word order and intonation in Georgian. Lingua 119.1: 102−127.
Thompson, Sandra A., and Robert E. Longacre 1985 Adverbial clauses. In: Timothy Shopen (ed.), Language Typology and Syntactic Description II: Complex Constructions, 171−234. Cambridge: Cambridge University Press.
Tschenkeli, Kita 1958 Einführung in die georgische Sprache. Vol. I. Zurich: Amirani Verlag.
Tuite, Kevin 1987 Indirect transitives in Georgian. In: Proceedings of the Thirteenth Annual Meeting of the Berkeley Linguistics Society, 296−309.
Tuite, Kevin 1988 Number agreement and morphosyntactic orientation in the Kartvelian languages. Doctoral dissertation, University of Chicago.
Tuite, Kevin 1998 Kartvelian Morphosyntax: Number Agreement and Morphosyntactic Orientation in the South Caucasian Languages. München: LINCOM Europa.
Tuite, Kevin, and Paata Bukhrashvili 2002 Central Caucasian religious systems and social ideology in the post-Soviet period. Amirani VII: 7−24.
Vamling, Karina 1987 Complementation in Georgian. Licentiate Thesis, University of Lund, Sweden.
Vendler, Zeno 1967 Verbs and times. In: Zeno Vendler (ed.), Linguistics in Philosophy, 97−121. Ithaca, NY: Cornell University Press.
Vogt, Hans 1971 Grammaire de la Langue Géorgienne. (Instituttet for sammenlignende kulturforskning, Serie B: Skrifter, LVII.) Oslo: Universitetsforlaget.


Ʒiʒiguri, Šota 1969 K'avširebi Kartul Enaši [Conjunctions in the Georgian Language]. Tbilisi: Tbilisis Saxelmc'ipo Universit'et'i.

Alice C. Harris, Amherst, Massachusetts (USA)
Nino Amiridze, Utrecht (The Netherlands)

46. The Bantu Languages

1. Introduction
2. Noun phrases
3. Prepositional phrases
4. Subject and object agreement
5. Argument structure and verbal morphology
6. Complementation and focus within the verb phrase
7. Non-verbal predication
8. Tense, mood, and aspect
9. Dependent clauses
10. Questions, focus, and topicalization
11. Conclusion
12. Abbreviations
13. References (selected)

Abstract

This article provides an overview of the syntactic properties of the Bantu languages, particularly those spoken in Eastern and Southern Africa. While largely theory-neutral, the discussion is informed by the Government and Binding and Minimalist frameworks, with the references focussing on the most recent syntactic literature. In addition to features that can be considered to be the most typical for these languages, such as the noun class system, verbal morphology, and agreement phenomena, aspects of these languages that have received particular attention in the literature and those which are particularly noteworthy are discussed, such as inversion constructions, tone cases, and the syntactic expression of focus. Attention is paid to various domains of the clause, including the nominal domain (noun phrases and prepositional phrases), the thematic and inflectional domains (subject and object agreement; argument structure and the related morphology; focus within the verb phrase; and tense, mood, and aspect), and the complementizer domain (dependent clauses and the left periphery).

1. Introduction

The Bantu language family forms a branch of the Niger-Congo language family. The Bantu languages are between 500 and 680 in number (Nurse and Philippson 2003: 2)


and boast around 240 million speakers (Grimes 2000). These languages are spoken on the African continent roughly from the northeast corner of Kenya on the east and the lower half of Cameroon on the west down to the southernmost tip of the continent.

The languages of this family are often referred to with a code (a Guthrie number) (Guthrie 1967; most recently updated in Maho 2003), consisting of a letter indicating the zone and a numeral identifying a specific language within the zone. For example, Zulu and Swahili, the languages most referred to in this article, have the codes S42 and G42, respectively.

Broadly speaking, there are two naming conventions for Bantu languages: one in which just the stem of the name is used, such as Swahili and Zulu, and another which also includes the noun class prefix, such as the corresponding Kiswahili and Isizulu. This article consistently uses the first convention, but an alternative language name is given in parentheses after the first occurrence if it occurs frequently in the syntactic literature.

In spite of the large number and broad geographic range of the Bantu languages, there are a number of characteristics which can be thought of as typically Bantu. Phonologically, almost all of the languages have an overwhelmingly CV syllable structure. Most of these languages are tonal (although Swahili is not), and tone is typically relevant for the morphosyntax. In addition to more mundane sounds such as aspirated stops, their phonemic inventories often include syllabic nasals, implosive voiced stops, and prenasalized stops. More relevant to this article are the morphological and syntactic features they share, such as the noun class system, the morphological structure of the verb word, and pervasive agreement phenomena, and it is these many common characteristics that justify considering them collectively as well as individually from a syntactic standpoint.
Although, broadly speaking, these languages are syntactically similar, there is a large degree of morphosyntactic variation between them (Marten, Kula, and Thwala 2007). As a syntactic sketch of the Bantu languages, this article has two main goals. The first is to give the syntactician a picture of the most typical and salient characteristics of these languages, and the second is to point out issues that pose special syntactic problems, especially those which have received attention in the literature. Syntactic issues have been framed within the line of syntactic theory that includes GB (Government and Binding) and Minimalism, making use of familiar notions such as movement and silent categories. A short sketch of such a large language family cannot pretend to be comprehensive or even totally impartial, and the emphasis here is on the languages of Eastern and Southern Africa, an emphasis that unfortunately reflects a longstanding imbalance in the syntactic literature itself. Space constraints do not allow for citing all relevant literature for any given topic, and the emphasis has been given to more recent work, in which the reader can find references to earlier literature. Glossing in cited examples has been modified (without indication) to help the reader compare across languages. For overview articles about the Bantu language family on a wide range of topics (including one on syntax: Bearth 2003), the reader is referred to Nurse and Philippson (eds., 2003), a useful volume which also contains sixteen profiles on individual Bantu languages and subgroups. Given the immense size of this language family, generalizations made in this overview (“always”, “never”, etc.) should be interpreted with due caution; exceptions in languages unknown to the author may well exist.


This article is organized as follows. The noun phrase and prepositional phrase are discussed in sections 2 and 3, respectively. Section 4 discusses subject and object agreement, which are essential for understanding the remaining issues, which are presented roughly by working from the bottom of the clause upwards. Sections 5, 6, and 7 discuss issues at the level of the verb phrase: argument structure and agreement; complementation and focus within the verb phrase; and non-verbal predication. Section 8 looks briefly at tense, mood, and aspect. And at the highest level of the clause, section 9 discusses dependent clauses and section 10 discusses questions, focus, and topicalization. Section 11 concludes the article.

2. Noun phrases

In this section we will consider various properties of nouns and noun phrases, including the noun class system, nominal derivation, and modification. For an overview of Bantu nominal morphology, the reader is referred to Katamba (2003).

2.1. The noun class system

Bantu nouns are divided into noun classes distinguished by prefixes to the noun stem and by the agreement they control. These classes are referred to with a numbering system (Meinhof 1948) which makes it possible to refer to them across languages. Many of these noun classes are associated in pairs, one singular and another plural, which together can be described as constituting a gender. Each class or pair of classes is associated with certain semantic concepts, but the strength of this association varies from class to class. For example, in Zulu, noun class 1 is essentially limited to singular human nouns (although such nouns can also be found, for example, in class 7). Most class 1 nouns have their plural in class 2. So, in this pairing we find the words úmúntù/ábántù ‘person/people’, úḿfúndì/ábáfúndì ‘student/students’, and úmákhélwànè/ábákhélwànè ‘neighbour/neighbours’. Most animals in Zulu are in the class pairing 9/10, abstract nouns derived from adjectival and nominal stems such as úbûhlé ‘beauty’ are in class 14, and infinitives are in class 15. Certain nominal derivatives are also associated with a particular noun class. Beyond this, class membership is largely arbitrary.

Up to 23 noun classes have been reconstructed in Proto-Bantu (see Katamba 2003), and the modern languages vary with respect to how many classes they have. Each noun begins with a noun class prefix, such as the m- in the Swahili class 1 noun mtu ‘person’. Sometimes, however, the prefix is either a zero morpheme or a consonant mutation, such as prenasalization. In many languages, including Zulu, the usual form of the noun includes an additional initial prefix called an augment, as described in section 2.4. As an example, compare the citation form ú-mú-ntù ‘person’ in Zulu with its Swahili cognate m-tu, which lacks an augment.

At least two syntactic issues arise in relation to noun class and the agreement system.
The first has to do with how to treat conjoined noun phrases. For example, when the subject of a clause is of the form X, Y and Z, different patterns of verbal agreement are attested (Marten 2000; Riedel 2009, chapter 7). Two of these patterns are that the subject


marker on the verb agrees either with X (first conjunct agreement) or with Z (last conjunct agreement), while in a third pattern the subject marker takes on the features of a default or generic plural noun class. The last case is illustrated in (1), where the subject marker agrees with neither of the two conjuncts, but rather with class 8.

(1) Sabuni na maji vi-ta-ku-saidi-a.
    9soap and 6water 8SM-FUT-2SG.OM-help-FS
    ‘Soap and water will help you.’
    (Krifka 1995: 1400) [Swahili]

The second issue has to do with a preference in some languages, such as Swahili, to use classes 1 and 2 as the agreement class for all animate (or human) nouns regardless of the morphological class they belong to. For example, Swahili jaji/majaji ‘judge’ is a 5/6 noun on the basis of its singular/plural morphological alternation, yet it is treated as a 1/2 noun with respect to the agreement features it triggers on other parts of speech. This animacy-based agreement pattern also manifests itself in conjunct agreement. While nonanimate conjuncts can trigger class 8 subject agreement as in (1), animate conjuncts trigger class 2.

2.2. Nominal derivation

Every noun belongs to some noun class, and the simplest type of nominal derivation consists of changing the noun class of an existing noun or creating a noun out of an adjective stem, with attachment of the necessary class prefix, as shown in Swahili in (2) and (3).

(2) kipofu/vipofu (7/8) ‘a blind person/blind people’
    upofu (14) ‘blindness’ [Swahili]

(3) -zuri ‘beautiful; good’
    uzuri (14) ‘beauty’ [Swahili]

More complex nominalization involves attachment of a suffix in addition to designation of a noun class. Agent nouns in Swahili, for example, can be formed by adding the suffix -ji to a verb stem and assigning the result to noun classes 1/2, as in (4).

(4) -imba ‘sing’
    mwimbaji/waimbaji (1/2) ‘singer/singers’ [Swahili]

Of particular interest to the syntactician are nominalizations that have some clausal properties (Carstens 1991: 56−70). For example, one type of agent noun in Swahili can contain an object, as in (5b). It is difficult to argue that (5b) is simply the product of morphological compounding, because nyimbo za huzuni, with its agreeing associative particle za, has internal syntactic structure. However, compounds like that in (5b) are also subject to constraints which distinguish them from typical structures containing a


verb phrase, such as the fact that no object marker (an agreeing morpheme, see section 4.1) can be prefixed to the verb stem.

(5) a. -imba ‘sing’
    b. mw-imba nyimbo za huzuni
       1-sing 10songs 10of sadness
       ‘a singer of sad songs’
       (Carstens 1991: 61) [Swahili]

Infinitives also constitute a sort of nominalization with clausal properties. These are discussed in the next section (section 2.3). For an overview of derivational morphology (both nominal and verbal), the reader is referred to Schadeberg (2003).

2.3. Special noun classes

There are two types of noun classes that deserve special attention because they have both nominal and non-nominal properties. These are infinitives and locative nouns.

Infinitives. Infinitives in Bantu languages belong to a designated noun class (typically class 15). This fact leads to issues involving the teasing apart of the verbal and nominal properties of infinitives. Among the nominal properties of infinitives is the ability to trigger agreement on other parts of speech, as shown in Xhosa (S41) in (6). (In examples [6] through [9], the infinitival noun class prefix has been glossed separately.)

(6) Oku ku-cula ku-lung-ile.
    15this 15-sing 15SM-good-PRF.CJ
    ‘This singing is good.’
    (Du Plessis and Visser 1992: 89) [Xhosa]

In this sentence, class 15 agreement is seen both on the demonstrative oku and on the subject marker. However, the infinitive also has verbal properties, such as the ability to license a negative prefix, as shown in Xhosa in (7), or an object marker.

(7) U-mqeshi u-thetha u-ku-nga-sebenz-i.
    AUG-1employer 1SM-prefer AUG-15-NEG-work-NEG
    ‘The employer prefers not to work.’
    (Du Plessis and Visser 1992: 91) [Xhosa]

The dual infinitival and nominal nature of infinitives has received due attention in the literature (Visser 1989; Creissels and Godard 2005; Mugane 2003; Myers 1987: 95−99). For example, syntactic differences have been noticed between infinitives behaving like a nominal direct object and those behaving like a clausal complement (du Plessis 1982) in Xhosa. First note that in a clause with a canonical noun object like intsimi ‘field’, an adverb cannot intervene between the verb and the object, as in (8). However, if the verbal complement is an infinitival clause, the situation is more complicated, as seen in (9). While the adverb kakhulu ‘much’ may intervene between the matrix verb and the


infinitival complement, as shown in (9a), this possibility disappears when the infinitive is modified by a demonstrative or an agentive associative phrase (here kwabo, literally ‘of them’). This pattern is explained by assuming that the demonstrative and agentive phrase force a nominal construal of the infinitive and that such a “nominal” infinitive clause has the same distribution as a canonical object like intsimi ‘field’ in (8).

(8) *U-mlimi u-lima kakhuhle i-ntsimi
    AUG-9farmer 1SM-plough well AUG-9field
    Intended: ‘The farmer ploughs the field well.’
    (Visser 1989: 159) [Xhosa]

(9) a. U-mlimi u-thanda kakhulu u-ku-lima i-ntsimi.
       AUG-9farmer 1SM-like much AUG-15-plough AUG-9field
       ‘The farmer likes to plough his field a lot.’
    b. *U-mlimi u-thanda kakhulu oku ku-lima kwabo i-ntsimi.
       AUG-9farmer 1SM-like much 15that 15-plough 15their AUG-9field
       Intended: ‘The farmer likes that ploughing of the field of theirs a lot.’
       (Visser 1989: 159) [Xhosa]

Locative noun classes. In some languages, there are three distinct locative noun classes. In these languages, class 16 designates specific locations (‘at’, ‘on’), class 17 designates directions and general locations (‘to’, ‘from’, ‘at’), and class 18 designates enclosed locations (‘inside’). These classes typically do not have any underived members, except perhaps for a single word meaning ‘place’. Rather, their presence as noun classes makes itself evident in agreement controlled by derived locative phrases. Consider the Swahili data in (10).

(10) a. Chumba hiki ki-me-kodi-w-a.
        7room 7this 7SM-PRF-rent-PASS-FS
        ‘This room has been rented.’ [Swahili]

     b. Ni-na-lal-a chumba-ni.
        1SG.SM-PRS-sleep-FS 7room-LOC
        ‘I sleep in the room.’
     c. Chumba-ni mu-li-kuw-a muzuri.
        7room-LOC 18SM-PST-be-FS 18pretty
        ‘The (inside of the) room was pretty.’

In (10a) chumba appears as an underived class 7 noun. In (10b), it appears in its locative form chumbani. In this sentence, chumbani corresponds to an English prepositional phrase. In (10c), however, we see that chumbani behaves as a full-fledged class 18 noun phrase, standing in subject position and triggering class 18 subject agreement on both the subject marker of the verb and on the adjective.

Locative noun phrases give rise to numerous syntactic issues. The first has to do with their categorial status. Were it not for the fact that locative noun phrases can control subject concord and adjectival agreement as in (10c), we would probably refer to them as prepositional phrases. However, although endowed with these nominal properties, a


locative noun phrase in postverbal position does not have all the properties that an object has. For example, locative phrases in sentences like (10b) in many Bantu languages cannot be replaced with an object marker (section 4.1) as can an object.

A second syntactic issue has to do with the structure of noun phrases containing modifiers. In some languages, such modifiers agree with the locative noun class (16, 17, or 18), while in other languages they agree with the embedded canonical noun, as exemplified with possessive pronouns in Swahili in (11) and in Zulu in (12). Note in (11) how in Swahili the agreement features of the possessive pronoun change depending on whether or not the head noun is in locative form, while (12) shows that in Zulu the possessive always bears the agreement features of the canonical noun.

(11) chumba changu, chumba-ni mwangu
     7room 7my 7room-LOC 18my
     ‘my room, in my room’ [Swahili]

(12) í-ndlù yámì, é-ndlì-nì yámì
     AUG-9room 9my LOC.AUG-9room-LOC 9my
     ‘my room, in my room’ [Zulu]

Thirdly, locative noun phrases participate in locative inversion constructions, discussed in section 4.2. Finally, in languages with eroded locative systems, the use of a locative class for expletive subject agreement complicates the analysis of various structures involving a preverbal locative phrase, as also discussed in section 4.2.

2.4. Augments

In some Bantu languages, including Zulu, the noun usually appears with an augment or pre-prefix preceding the noun class prefix. In many languages, the augment consists of a single vowel and is thus sometimes called the initial vowel. The augment is typically omitted only in restricted environments. Aspects of its distribution in Zulu are described briefly here. In Zulu the form of the augment is dependent on the class of the noun, as shown in (13).

(13) ú-múntù, á-bántù
     AUG-1person AUG-2people
     ‘person, people’ [Zulu]

This augment can be omitted under the scope of negation, giving an emphatic (and sometimes rude) interpretation of ‘not any’ (Halpert 2012a, b; Progovac 1983). The contrast between the presence and absence of the augment in this environment is shown in (14).

(14) a. À-ngì-thòl-ángà í-màlí.
        NEG-1SG.SM-find-NEG AUG-9money
        ‘I didn’t find the money/any money.’ [Zulu]

46. The Bantu Languages

1629

màlí. b. À-ngì-thòl-ángà NEG-1SG.SM-find-NEG 9money ‘I didn’t find any money at all.’ Most Bantu languages lack indefinite and definite articles. Although in some languages the augment has certain article-like properties, the interpretations given in the translation in (14a) show that the augment does not have the exclusive interpretation of either. Two prominent contexts in which omission of the augment is obligatory in Zulu are after a demonstrative, as in (15), and in a vocative phrase. Except when an augmentless form is licensed from inside the noun phrase, as in (15), the augment can never be omitted if the noun has raised to subject position or is otherwise dislocated. For more detailed descriptions of augments see von Staden (1973), Ferrari-Bridgers (2008), and Hyman and Katamba (1993). (15) Lézì zì-yá-sí-hlúph-à. (*í-)zìnkíngà 10these AUG-10problems 10SM-PRS-1PL.OM-annnoy-FS ‘These problems annoy us.’

[Zulu]
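The distribution of the Zulu augment just described can be summarized as a small decision table. The following Python sketch is an editorial illustration of that summary only; the context labels and return values are invented for exposition and are not Zulu grammatical terms.

```python
# Editorial decision table for the Zulu augment, summarizing the text:
# omission is obligatory after demonstratives and in vocatives, optional
# (with a 'not any' reading) under negation, and impossible on subjects
# and other dislocated nouns (unless licensed NP-internally, as in (15)).
AUGMENT_STATUS = {
    "after_demonstrative": "obligatorily absent",   # cf. (15)
    "vocative": "obligatorily absent",
    "under_negation": "optional",                   # cf. (14a) vs (14b)
    "subject_or_dislocated": "obligatorily present",
}

def augment_status(context):
    """Look up the augment's status in a syntactic context (default: present)."""
    return AUGMENT_STATUS.get(context, "present")

print(augment_status("under_negation"))        # optional
print(augment_status("after_demonstrative"))   # obligatorily absent
```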

2.5. Modification of noun phrases

Bantu noun phrases can be modified by various means, including with demonstratives, adjectives, lexicalized possessives ('my'), and associative phrases (of-phrases). All these types of modifiers can agree in noun class with the head noun. Relative clauses, which can also modify nouns, are discussed later in section 8.2. The Swahili example in (16) shows a noun phrase with several types of modifiers. Here we will discuss only three of the most syntactically noteworthy types of modification: adjectives, the associative marker, and quantifiers. For a discussion of the relative ordering of such modifiers, see Lusekelo (2009) and the references therein.

(16) kitabu changu cha hesabu  kile  kikubwa                             [Swahili]
     7book  7my    7of 9maths  7that 7big
     'that big maths book of mine'

Adjectives. Adjectives in some Bantu languages can be divided into agreeing adjectives, which take an adjectival prefix agreeing with the head noun, like the Swahili adjective -refu 'long' in (17a), and non-agreeing adjectives, which lack this prefix, like muhimu 'important' in (17b). The agreeing type of adjective is generally a closed class, with a relatively small number of members.

(17) a. kitabu kirefu, mtu     mrefu                                     [Swahili]
        7book  7long   1person 1long
        'a long book, a tall person'

     b. kitabu muhimu,    mtu     muhimu
        7book  important  1person important
        'an important book, an important person'


In the Swahili examples in (17a), the adjective bears only noun class agreement, but in some languages, like Zulu, adjectives functioning as modifiers within the noun phrase have the form of a relative clause and thus display both a relativization prefix and a subject marker in addition to adjectival agreement, as in (18).

(18) ú-ḿntwànà ó-ḿ-ncánè                                                 [Zulu]
     AUG-baby  REL.1SM-1-small
     'a/the little baby' (lit., 'baby that small')

It is important to note that there are solid morphological grounds against considering adjectives like -ncánè 'small' in (18) to be verbs. For example, adjectives in Zulu cannot take the same suffixes that verbs can, and, just like other non-verbal predication types, adjectives trigger an allomorph of certain pre-stem prefixes distinct from that occurring with verbs.

The associative particle. The associative particle, typically -a, is in many ways analogous to English of. This particle agrees with the noun it modifies (the head noun), as seen in (19). The particle has a variety of uses, some of which are of syntactic interest. Simple uses include possessor phrases as in (19). More syntactically complex uses include what are called infinitival relatives, exemplified in (20). For discussion of the use of associative phrases and other possessive phrases to express the subject of infinitives (as in [9b] above), see Carstens (1991: 187−190).

(19) vitabu vya mwalimu                                                  [Swahili]
     8books 8of 1teacher
     'the teacher's books'

(20) a. mavazi    ya  ku-vuti-a                                          [Swahili]
        6clothing 6of 15-attract-FS
        'attractive clothing'

     b. kisu   cha ku-kat-i-a     nyama
        7knife 7of 15-cut-APPL-FS 9meat
        'a knife for cutting meat'

Quantifiers. Some quantifiers behave like adjectives with respect to their agreement morphology. Note in (21), for example, how the quantifiers -engi 'much, many' and -tatu 'three' in Swahili take the same class 8 prefix vi- as the descriptive adjective -kubwa 'big'.

(21) a. vitabu vingi, vitabu vitatu                                      [Swahili]
        8books 8many  8books 8three
        'many books, three books'

     b. vitabu vikubwa
        8books 8big
        'big books'

Many languages have an agreeing word meaning 'all', such as -onke in Zulu, whose agreement paradigm, unlike that of adjectives, expresses person and number features when referring to first or second person. Plural forms of this word can sometimes be stranded (floated). The Zulu sentence in (22) illustrates both properties.

(22) Thìná sì-fìk-ê              sónkè.                                  [Zulu]
     we    1PL.SM-arrive-PRF.CJ  1PL.all
     'We have all arrived.'
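The concord pattern running through (16)-(21), in which the modifier's prefix is selected by the noun class of the head noun, can be sketched as a simple lookup. The Python sketch below is an editorial illustration restricted to the Swahili class 7/8 forms appearing in the examples above; the function names are invented for exposition, and morphophonological adjustments are ignored.

```python
# Toy concord table for Swahili noun classes 7 and 8, with forms drawn
# from examples (16), (19), and (21). Real paradigms cover many more
# classes and show morphophonological adjustments ignored here.
CONCORD = {
    7: {"adj": "ki", "assoc": "cha"},  # kitabu 'book' (class 7)
    8: {"adj": "vi", "assoc": "vya"},  # vitabu 'books' (class 8)
}

def agree_adjective(noun_class, stem):
    """Attach the adjectival concord prefix for the head noun's class."""
    return CONCORD[noun_class]["adj"] + stem

def associative_phrase(noun_class, possessor):
    """Build an agreeing associative (of-)phrase, as in (19)."""
    return CONCORD[noun_class]["assoc"] + " " + possessor

print(agree_adjective(7, "kubwa"))        # kikubwa, cf. (16)
print(agree_adjective(8, "kubwa"))        # vikubwa, cf. (21b)
print(associative_phrase(8, "mwalimu"))   # vya mwalimu, cf. (19)
```

Quantifiers like -tatu 'three' in (21a) pattern with the "adj" column here, which is exactly the observation made above about their agreement morphology.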

There are typically no negative quantifiers in a Bantu language. Furthermore, words for 'nobody' and 'nothing' must generally appear in a position c-commanded by verbal negation. This is shown in (23) with the Zulu word lúthò 'nothing' (which is distinct from íntò 'something'/'thing'). Here lúthò 'nothing' is the patient of the passivized verb. In (23a) lúthò is inside the verb phrase and hence c-commanded by the negative verb, and the sentence (which is an expletive subject construction) is grammatical. In contrast, in (23b) lúthò has raised to preverbal subject position, where it is not c-commanded by negation, and the sentence is ungrammatical.

(23) a. À-kù-bhàl-w-ángà         lúthò.                                  [Zulu]
        NEG-17SM-write-PASS-NEG  11nothing
        'Nothing was written.' (lit. 'There was written nothing.')

     b. *Lutho    a-lu-bhal-w-anga.
        11nothing NEG-11SM-write-PASS-NEG

For an overview of quantification in Bantu languages, see Zerbian and Krifka (2008).

3. Prepositional phrases

Bantu languages typically have a very small inventory of simple, underived prepositions. Swahili, for example, has na 'with', bila 'without', tangu 'since', and perhaps a couple more clearly monomorphemic prepositions. The associative marker discussed in section 2 might also be considered an agreeing preposition, and the preposition kwa 'for' could be considered the class 17 form of the associative marker. Other elements functioning as prepositions are more complex. Among these are words that are historically complex but which have now probably become reanalysed as simple, like katika 'in', which contains the morpheme kati 'middle'; adverbs used in combination with the associative marker, like baada ya 'after'; and words that have the form of an infinitival verb, like kutoka 'from; to leave'. It should also be remembered that locative nouns of the sort discussed in section 2.3 function largely in the same way as prepositional phrases.

Prepositions cannot be stranded. When the complement of a preposition is moved or extracted, resumption always takes place, typically with a pronominal enclitic, as shown in (24).

(24) a. Ni-li-onge-a       na   mwalimu.                                 [Swahili]
        1SG.SM-PST-talk-FS with 1teacher
        'I talked with the teacher.'

     b. mwalimu  ni-li-ye-onge-a          na-ye
        1teacher 1SG.SM-PST-1PRON-talk-FS with-1PRON
        'the teacher who I talked with'

4. Subject and object agreement

In this section we will consider the agreement properties of simple clauses. This will involve a discussion of subject and object markers and of inversion constructions.

4.1. Subject and object markers

A salient characteristic of the Bantu languages is their use of subject and object markers, agreeing prefixes that appear in the verb word. An object marker, if present, always appears immediately before the verb stem. The subject marker appears earlier in the word, often to the left of certain tense and aspect prefixes. The morphological make-up of the verb word shown in the simplified schema in (25) for Swahili is typical. Note that not all of the slots need be filled. The verb in the Swahili sentence in (26) displays all of these properties.

(25) negative marker − subject marker − tense/aspect morpheme − object marker − root − applicative, passive, etc. − final suffix

(26) Ha-tu-ta-ku-tayarish-i-a               chakula.                     [Swahili]
     NEG-1PL.SM-FUT-2SG.OM-prepare-APPL-FS  7food
     'We won't prepare any food for you.'
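The schema in (25) amounts to a fixed linear template over mostly optional morpheme slots. As an informal illustration, the Python sketch below assembles the verb in (26) from its glossed parts; the slot names are invented for exposition, and real Swahili morphophonology (vowel coalescence, allomorphy) is ignored.

```python
# Slot template for the Swahili verb word, following schema (25).
# Slots are filled left to right; unfilled slots are simply skipped.
SLOTS = ["negative", "subject_marker", "tense_aspect",
         "object_marker", "root", "extensions", "final_suffix"]

def build_verb(morphemes):
    """Concatenate the filled slots in template order."""
    return "".join(morphemes[slot] for slot in SLOTS if slot in morphemes)

# The verb in (26): Ha-tu-ta-ku-tayarish-i-a
# NEG-1PL.SM-FUT-2SG.OM-prepare-APPL-FS 'we won't prepare ... for you'
verb = build_verb({
    "negative": "ha",        # NEG
    "subject_marker": "tu",  # 1PL.SM
    "tense_aspect": "ta",    # FUT
    "object_marker": "ku",   # 2SG.OM
    "root": "tayarish",      # 'prepare'
    "extensions": "i",       # APPL
    "final_suffix": "a",     # FS
})
print(verb)  # hatutakutayarishia
```

A minimally filled template, as in (28a) below (ni-na-on-a 'I see'), uses only the subject marker, tense/aspect, root, and final suffix slots.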

Now consider the Swahili sentences in (27), which illustrate some of the basic distributional features of subject markers. In (27a), we see that a simple tensed verb has a subject marker. More interestingly, in (27b) we see that in a compound tense both the auxiliary verb (ku)wa and the lexical verb have a subject marker.

(27) a. Safia  a-me-tayarish-a    chakula.                               [Swahili]
        1Safia 1SM-PRF-prepare-FS 7food
        'Safia has made the food.'

     b. Wanawake wa-li-kuw-a   wa-me-tayarish-a   chakula.
        2women   2SM-PST-be-FS 2SM-PRF-prepare-FS 7food
        'The women had made the food.'

     c. A-na-kata-a       ku-tayarish-a chakula.
        1SM-PRS-refuse-FS 15-prepare-FS 7food
        'He/she refuses to make the food.'

In most Bantu languages, a lexical subject can be dropped, as in (27c).


Now let's consider object markers. The pattern in the Swahili sentences in (28) is typical. Sentence (28a) shows that an object marker need not occur with an overt, in situ object. Sentence (28b) shows that an object marker yields a pronominal interpretation if the lexical object is dropped. This use of an object marker to pronominalize the object is sometimes called object cliticization. Sentence (28c) shows that in a compound tense, the object marker appears on the lexical verb rather than on the auxiliary. This thus also serves as an example of a broader generalization, namely that Bantu languages do not exhibit clitic climbing such as found in Italian and Spanish.

(28) a. Ni-na-on-a        kitabu.                                        [Swahili]
        1SG.SM-PRS-see-FS 7book
        'I see the book.'

     b. Ni-na-ki-on-a.
        1SG.SM-PRS-7OM-see-FS
        'I see it.'

     c. Ni-li-kuw-a      ni-me-ki-on-a.
        1SG.SM-PST-be-FS 1SG.SM-PRF-7OM-see-FS
        'I had already seen it.'

Bantu languages show variation when we consider the co-occurrence of a lexical object and a corresponding object marker, a phenomenon known as object doubling. Some languages, like Swahili, allow such co-occurrence, even if the object does not appear to be displaced (as shown further below in [33]). In other languages, such as Zulu, the doubling of a postverbal object is possible, but in that case the object is always displaced (van der Spuy 1993). There are several types of evidence for this displacement. Here we will consider only word order, as illustrated in (29), in which the element to the right of the square bracket is dislocated. Without an object marker, the canonical S V IO DO word order in (29a) is licit, but if the indirect object is doubled with an object marker (the starred zi-), this order is no longer possible. Instead, the doubled indirect object must be dislocated, as in (29b). The conclusion drawn from this and other evidence is that in a language like Zulu, clitic doubling entails dislocation of the lexical noun phrase.

(29) a. Ú-mámà      ú-(*zi)-ník-ê        í-zìngánè      á-mákhékhè. ]    [Zulu]
        AUG-1mother 1SM-10OM-give-PRF.CJ AUG-10children AUG-6biscuits

     b. Ú-mámà      ú-zì-nìk-ê           á-mákhékhè ]  í-zìngánè.
        AUG-1mother 1SM-10OM-give-PRF.CJ AUG-6biscuits AUG-10children
        'Mother gave the children some biscuits.'

Three more points must be mentioned concerning object marking. The first is that many languages have stricter doubling requirements for animate (or human) objects than for inanimate ones. In Swahili, for example, while it is possible to double an inanimate object, such doubling becomes obligatory with an analogous animate object, as in (30).

(30) a. Ni-li-(ki)-on-a       kitabu.                                    [Swahili]
        1SG.SM-PST-7OM-see-FS 7book
        'I saw the book.'


     b. Ni-li-*(mw)-on-a      mwalimu.
        1SG.SM-PST-1OM-see-FS 1teacher
        'I saw a/the teacher.'

Secondly, object doubling in some languages facilitates a definite interpretation of the object. The final point is that some languages (a minority of them) allow multiple object markers, as in the Ruanda (D61, also called Kinyarwanda) example in (31). In this example, kí- encodes the direct object ('it'), mú- the indirect object ('to him'), and bá- the applicative object ('for them'). In these languages, questions arise as to how the object markers are ordered and what their interdependencies are. For example, in some languages, it is not possible to double a direct object of a ditransitive verb without also doubling the intervening indirect object. For a detailed discussion of all three issues, see Riedel (2009).

(31) Y_w-a-kí_x-mú_y-bá_z-er-e-ye.                                       [Ruanda]
     1SM-PST-7OM-1OM-2OM-give-APPL-ASP
     'He_w gave it_x to him_y for them_z.'
     (Kimenyi 1978: 180, indices added)

We now turn to the nature of subject and object markers, an issue which has been of interest since Bresnan and Mchombo (1987). First consider the Zulu sentences in (32).

(32) a. Ú-ḿfùndísì   ú-cház-à       í-nkíngò.                            [Zulu]
        AUG-1teacher 1SM-explain-FS AUG-9problem
        'The teacher is explaining the problem.'

     b. Ú-cház-à       í-nkíngò.
        1SM-explain-FS AUG-9problem
        'She is explaining the problem.'

We see in (32b) that suppression of the lexical subject results in a pronominal interpretation. This has led Bresnan and others working in the LFG (Lexical Functional Grammar) framework to argue that Bantu subject markers can be incorporated anaphoric pronouns, rather than grammatical agreement markers (Bresnan and Mchombo 1987). Under such an analysis, the apparent lexical subject in a sentence like (32a) is actually an adjunct rather than a subject, because the grammatical subject is the subject marker u-. The conclusions of this body of work are often cited, but it is important to bear in mind that the LFG framework does not have the same analytical options as GB and Minimalism. (See also Henderson 2006: 167−180.) Unlike GB, the LFG framework does not have any empty categories, silent elements such as silent pronouns (pro). Because the sentence must have a subject, in LFG we must look for an overt element to fulfill that role, leading to the conclusion that the subject marker is the subject. In contrast, a theory with pro at its disposal has two analytical options for the pattern in (32b), namely: (a) the subject marker is a pronoun, or (b) the subject marker is an agreement morpheme agreeing with a silent pro subject. Under the (a) option, Bantu languages are not pro-drop languages, because the subject is always the overt subject marker. Furthermore, under this option a lexical subject either must be an adjunct or it must occupy something analogous to a topic position. Under the (b) option, where the subject can be pro, one could assume that the lexical subject is the grammatical subject where present, while pro is the grammatical subject only in the absence of a lexical subject. A second possibility, available only under option (b), is that pro is always the element with which the subject marker agrees, with the consequence that lexical subjects are always adjuncts (as proposed by Kim 2004 for Spanish, for example). For the GB or Minimalist syntactician, the relevance of the LFG literature on Bantu subject markers is thus not the pronominal or agreement nature of the subject marker, but the syntactic and discourse properties of the lexical subject. Syntacticians working on Bantu languages within GB and Minimalist frameworks often crucially assume that subject markers are agreement morphemes rather than pronouns in their treatment of agreement and word order patterns (for example, Henderson 2006; Carstens 2005; Kinyalolo 1991; Riedel 2008).

Similar issues arise in the discussion of object markers. Here the discussion is more interesting because of the way the Bantu languages differ, as can be shown by comparing Swahili and Zulu. The Swahili sentence in (33) shows that with the indirect object mtoto 'child', the object marker is obligatory even though the lexical object is present (because it is animate). Furthermore, neither object in (33) seems to be displaced.

(33) Ni-li-*(m-)nunu-li-a       mtoto  biskuti.                          [Swahili]
     1SG.SM-PST-1OM-buy-APPL-FS 1child 9biscuit
     'I bought the child a biscuit.'

With this type of distribution, object marking is often characterized as grammatical agreement. In contrast, it was shown above in (29) that in Zulu the indirect object cannot be doubled by an object marker unless it is displaced. In the context of such a distribution, the object markers are often characterized as incorporated pronouns. In the same way as with subject markers, the object marker results in a pronominal interpretation if the lexical object is dropped (as in [28b] above). However, this fact alone does not allow us to conclude that the object marker is an incorporated pronoun, because it could just as easily be a morpheme agreeing with an object pro. The discussion of the pronominal versus agreeing nature of the object markers continues to this day (Baker 2008). However, Riedel (2009) has shown that there is no clear cluster of properties correlating to languages which are variously claimed to have agreeing or pronominal object markers.

4.2. Inversion constructions

In our discussion so far, the logical subject of the clause has also been the grammatical subject, in the sense that the subject marker agrees with it. We will now consider three different constructions where this is not the case: expletive subject constructions, locative inversion, and subject/object reversal. We will refer to these collectively as inversions. Inversions in Bantu languages have long been of interest to syntacticians (Bokamba 1979; Bresnan and Kanerva 1989).

As an example of an expletive subject construction, consider the Zulu sentence in (34a). In this sentence, the verb bears class 17 features rather than agreeing with the


logical subject úbàbá 'father'. The logical subject appears to be in a syntactically low position, such as inside the verb phrase. There are semantic differences between the expletive subject construction and its canonical subject-initial counterpart. In Zulu, for example, a preverbal subject cannot be focused by modifying it with a word meaning 'only', as shown in (34b), while a postverbal subject in the expletive subject construction can, as in (34a) (Buell 2008).

(34) a. Kú-cúl-ê         ú-bàbá        (kúphêlà).                        [Zulu]
        17SM-sing-PRF.CJ AUG-1A.father only
        '(Only) father sang.'

     b. Ú-bàbá        (*kúphêlà) ú-cúl-ìlè.
        AUG-1A.father only       1SM-sing-PRF.DJ
        'Father sang.'

Many Bantu languages also have locative inversion constructions (Salzmann 2001). In languages with contrasts in the agreement system between the different locative noun classes, these take the form of clauses in which the subject marker clearly agrees with the locative expression, which is locative in form (that is, either in a prepositional phrase or bearing locative noun class morphology). This is shown in Herero (R31, also "Otjiherero") in (35) and (36). Sentence (35a) has the canonical preverbal subject with an agreeing subject marker on the verb. Sentence (35b) is the locative inversion of this sentence, and the subject marker now bears class 18 features to agree with the preposed locative noun phrase. The contrast between the class 18 and 16 subject markers in (35b) and (36) shows that this is a genuine agreement phenomenon rather than a sort of default or expletive agreement, because the agreement covaries with the locative noun class of the locative phrase.

(35) a. Òvà-ndù  v-á-hìtí      mó-!ngándá.                               [Herero]
        2-people 2SM-PST-enter 18-9house
        'The guests entered the house/home.'
        (Marten 2006: 98)

     b. Mò-ngàndá mw-á-hìtí      òvá-ndù.
        18-9house 18SM-PST-enter 2-people
        'Into the house/home entered (the) guests.'

(36) Pò-ndjúwó  pé-tjáng-èr-à          òvá-nàtjè  ò-mbàpírà.             [Herero]
     16-9.house 16SM.HAB-write-APPL-FS 2-children 9-letter
     'At the house write (the) children a letter.'
     (Marten 2006: 115)

In some languages, the locative noun appears in canonical form (that is, not in a prepositional phrase or in morphologically locative form), and the subject marker agrees with a canonical (non-locative) noun class. An independent variable is that in some languages the verb must also bear a locative clitic in a locative inversion construction, as illustrated with the -mó enclitic in the Bukusu (J30, also “Lubukusu”) sentence in (37).

(37) Mú-músiirú mw-á-kwá-mó         kú-músaala.                          [Bukusu]
     18-3forest 18SM-PST-fall-18LOC 3-3tree
     'In the forest fell a tree.'
     (Diercks 2011: 703)

The literature on these locative inversions has paid the most attention to two issues. The first is the classes of verbs that admit the construction. In some languages the construction is restricted to unaccusative verbs, while others allow a much wider range. Furthermore, some languages with reduced locative systems use the same noun class for the subject marker in both locative inversion and expletive subject constructions. The second issue has thus been whether in locative inversions in these languages the locative phrase is a grammatical subject triggering agreement on the subject marker as in Herero, or whether they are simply expletive subject constructions with locative topics. This question is often difficult to answer. For example, Demuth and Mmusi (1997) argue that in a Tswana (S31, also "Setswana") sentence such as (38), the locative phrase controls agreement on the subject marker, as in Herero, while Zerbian (2006a) and Buell (2007) claim, on the basis of Northern Sotho (S33) and Zulu, respectively, that sentences like (38) are better analysed as expletive subject constructions with a left-peripheral locative topic.

(38) Kó-Maúng gó-tlá-ya   roná maríga.                                   [Tswana]
     17-Maung 17SM-FUT-go we   winter
     'To Maung we shall go in winter.'
     (Demuth and Mmusi 1997: 5)

Finally, some languages have an inversion construction known as subject/object reversal (Bokamba 1979; Morimoto 2000). In this construction the logical object appears in preverbal subject position, triggering agreement on the subject marker. This construction is illustrated in Ruanda in (39b). Subject/object reversal is distinct from the passive construction, exemplified in (40), in which the verb bears passive morphology and in which an agent, if present, appears in a prepositional phrase.

(39) a. Umuhuûngu a-ra-som-a       igitabo.                              [Ruanda]
        1boy      1SM-PRS-read-ASP 7book
        'The boy is reading the book.'

     b. Igitabo cyi-ra-som-a     umuhuûngu.
        7book   7SM-PRS-read-ASP 1boy
        'The BOY is reading the book.'
        (Kimenyi 1978: 141, translation modified)

(40) Umugóre y-a-boon-y-w-e           n' ûmugabo.                        [Ruanda]
     1woman  1SM-PST-see-ASP-PASS-ASP by 1man
     'The woman was seen by the man.'
     (Kimenyi 1978: 126)

There are generally severe restrictions on the types of verbs and arguments that can be used in the subject/object reversal construction. For example, the logical subject must


generally be a human agent while the logical object must be non-human. Furthermore, the logical object cannot be doubled with an object marker and the logical subject (the postverbal agent) cannot be dropped.

5. Argument structure and verbal morphology

In this section we consider the argument structure of the verb and how this relates to derivational morphology.

5.1. Argument structure

Most underived verbs in Bantu languages license either one or two arguments, that is, a logical subject and possibly also one object. Very few, such as those meaning 'give', also license a second object. However, a characteristic of the Bantu languages is a set of derivational suffixes that increase or decrease the number of arguments a verb licenses. The behaviour of these morphemes and their associated arguments in Bantu languages has played an important role in our understanding of argument structure more generally (Baker 1988; Marantz 1993). Of the morphemes that increase the valence of the verb, the most common are the causative and the benefactive. In the Bantu literature, benefactives are commonly referred to as applicatives, a term which is more correctly a cover term for several different types of arguments, which in some languages are all licensed by the same morpheme. For example, in Swahili, a single applicative suffix can variously license a benefactive, locative, or instrumental argument. Morphemes that decrease the number of noun phrases a verb licenses include the passive and the reciprocal. All derivational suffixes precede the TAM-dependent final suffix (glossed here as FS). The benefactive is illustrated in Swahili in (41b). Valence-changing morphemes can be combined in various ways. For example, a benefactive can be passivized.

(41) a. Juma a-na-kat-a     nyama.                                       [Swahili]
        Juma 1SM-PRS-cut-FS 9meat
        'Juma is cutting the meat.'

     b. Juma a-na-wa-kat-i-a         watoto    nyama.
        Juma 1SM-PRS-2OM-cut-APPL-FS 2children 9meat
        'Juma is cutting the meat for the children.'

Due to inherent differences in the valence of different underived verbs and the various ways derivational suffixes can be added, Bantu verbs can display a wide variety of argument structures. Syntacticians working in all frameworks have long concerned themselves with accounting for asymmetries between the different arguments (Alsina and Mchombo 1993; Harford 1993). The most famous of these asymmetries involves ditransitive verbs, such as the Zulu verb níkà 'give' in (42). A given Bantu language is often characterized as either symmetric or asymmetric with respect to these two objects (Bresnan and Moshi 1990). In all languages, the indirect object (more accurately called the


primary object) may raise to preverbal subject position under passivization, as in (43a), or alternatively be pronominalized with an object marker, as shown in (43b). In a symmetric language, such as Zulu, the same is also true of the direct object, as shown in (44).

(42) Ú-mámà      ú-ník-ê         í-zìngánè      á-mákhékhè.              [Zulu]
     AUG-1mother 1SM-give-PRF.CJ AUG-10children AUG-6biscuits
     'Mother gave the children some biscuits.'

(43) a. Í-zìngánè      zì-nìk-w-ê            á-mákhékhè.                 [Zulu]
        AUG-10children 10SM-give-PASS-PRF.CJ AUG-6biscuits
        'The children were given biscuits.'

     b. Ú-mámà      ú-zì-nìk-é           á-mákhékhè.
        AUG-1mother 1SM-10OM-give-PRF.CJ AUG-6biscuits
        'Mother gave them biscuits.'

(44) a. Á-mákhékhè    á-nìk-w-ê            í-zìngánè.                    [Zulu]
        AUG-6biscuits 6SM-give-PASS-PRF.CJ AUG-10children
        'The biscuits were given to the children.'

     b. Ú-mámà      ú-wá-ník-ê          í-zìngánè.
        AUG-1mother 1SM-6OM-give-PRF.CJ AUG-10children
        'Mother gave them to the children.'

In contrast, in asymmetric languages, only the indirect object is available to these operations (see, for instance, Rugemalira 1991 and Woolford 1993). In Swahili, for instance, the equivalents of (44) are ungrammatical. This pattern supports the idea that the indirect object is structurally higher than the direct object (also called the secondary object), as well as the idea that these operations are subject to locality conditions. That is, the reason that the Swahili direct object cannot be passivized in a ditransitive phrase is that the indirect object is in some sense structurally closer to the subject position. The difficulty in accounting for the difference between languages like Zulu and Swahili lies in accounting for the fact that the direct object is accessible in some languages but not in others. It should also be mentioned that the dichotomy between symmetric and asymmetric languages is not as clear-cut as the literature often suggests. For example, while Zulu appears to be symmetric on the basis of (43) and (44), an asymmetry does show up when one object is passivized while the other is pronominalized (Adams 2010: 136−151). Under the standardly assumed Predicate-Internal Subject Hypothesis, according to which the subject is introduced in the thematic domain under the inflectional region of the clause, the subject/object reversal and locative inversion constructions discussed in section 4.2 seem conceptually similar. In both cases there are two arguments, and in some languages only the hierarchically higher one (the logical subject) can raise to preverbal subject position (the grammatical subject position), while in other languages, either argument can reach this position.
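The symmetric/asymmetric contrast can be stated abstractly: passivization and object marking target objects from the top of the object hierarchy down, and languages differ in how far down they can reach. A minimal sketch of this generalization, with function and label names invented for exposition:

```python
# Informal model of the symmetric/asymmetric contrast described above.
# In an asymmetric language only the structurally highest (primary) object
# is accessible to passivization and object marking; in a symmetric
# language either object is.
def accessible_objects(objects, symmetric):
    """objects is ordered from structurally highest (primary/IO) to lowest (DO)."""
    return list(objects) if symmetric else list(objects[:1])

# Zulu (symmetric): both objects of 'give' are accessible, cf. (43)-(44).
print(accessible_objects(["IO", "DO"], symmetric=True))   # ['IO', 'DO']
# Swahili (asymmetric): only the primary object; the equivalents of (44) fail.
print(accessible_objects(["IO", "DO"], symmetric=False))  # ['IO']
```

As the text notes, this binary picture is an idealization: Zulu itself shows asymmetries once passivization and pronominalization are combined.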
Other purely syntactic issues involving argument structure include the peculiarities of certain argument types that cannot be attributed to competition with other arguments. For example, in Swahili, while a benefactive argument can be passivized, a reason applicative argument cannot, as shown in (45).


(45) a. Wazee   wa-li-imb-i-w-a           na   vijana.                   [Swahili]
        2elders 2SM-PST-sing-APPL-PASS-FS with 8young.people
        'The elders were sung to by the young people.'
        (Ngonyani 1998: 77)

     b. *Matusi  ya-li-kasirik-i-a          na   mwenyekiti.
        6insults 6SM-PST-get.angry-APPL-FS  with 1chairperson
        intended: 'Insults were gotten angry at by the chairperson.'

Because this fact holds regardless of whether another object is present, the problem cannot be stated in terms of a competing object being closer to the verb.

5.2. Relating morphology and syntax

As we have seen, the Bantu verb word can be composed of a number of morphemes related to different syntactic domains: the verb root and valence-changing morphemes are related to the thematic domain, TAM prefixes are related to the inflectional domain, and morphemes appearing only in relative clauses are related to the complementizer domain. Much work has been done to relate the morphological composition of these complex words to the syntactic structure of the clause (Baker 1988; Marantz 1993). It should first be noted that Bantu derivational morphology largely obeys the Mirror Principle (Baker 1985), a morphosyntactic mapping principle that states that the more deeply embedded a morpheme is in a word with respect to other morphemes, the more deeply embedded its corresponding head is in the corresponding syntactic structure. This can be seen, for example, in the way morpheme ordering reflects semantic scope in the following Zulu example:

(46) a. Á-másélà     á-[[fíhl-án]-él]-à     á-máphóyìsá.                 [Zulu]
        AUG-6thieves 6SM-hide-RECP-APPL-FS  AUG-6police
        'The thieves are hiding each other from the police.'

     b. Á-másélà     á-[[fíhl-él]-án]-à     í-màlí.
        AUG-6thieves 6SM-hide-APPL-RECP-FS  AUG-9money
        'The thieves are hiding the money from each other.'

46. The Bantu Languages

1641

two morpheme orderings correspond to distinct syntactic behaviours, a fact which hence supports the availability of two relative hierarchies in the syntax. Morpheme order can thus provide important clues to syntactic structure in cases where the semantic scope is not discernible. Mirror Principle violations (cases where the morpheme ordering contradicts the semantic scope) have also been given due attention (Hyman 2003). Other work has attempted to characterize the properties of particular morphemes from a structural viewpoint. One example is Pylkkänen (2002), who, using a model that encodes event structure in the syntactic structure, argues that the Bantu applicative head relates the applicative argument to an event, allowing it to take a VP complement of arbitrary internal structure. The analysis of the Bantu applicative stands in contrast with English double object constructions like John sent Mary a letter, in which the indirect object is related not to an event but to the direct object, with the consequence that the indirect and direct object must form a constituent inside the VP. Another example of analysing a verbal affix as a syntactic head is Ndayiragije (2003), who examines the reciprocal suffix. Across the Bantu languages this suffix shows certain properties which cannot be considered reciprocal in nature. Ndayiragije captures these properties by arguing that the suffix is a kind of v0 head which can license a PRO object argument. Finally, morphological facts have increasingly been brought to bear on the question of whether the Bantu conjugated verb is a single complex head. Julien (2003) argues on the basis of a large cross-linguistic survey that head movement results in suffixation but not in prefixation. A consequence of this proposal is that Bantu TAM prefixes should be analysed as in situ heads (Buell 2005: 18−24), an analysis which was already prefigured by Barrett Keach (1985), who split the Swahili verb word into an auxiliary and a lexical verb. 
This view is also supported by analyses in which the verb stem (i.e., verb root and valence-related suffixes) itself contains phrasal structure (Muriungi 2009).

6. Complementation and focus within the verb phrase

Here we will discuss some phenomena involving a special relationship between the verb and an element immediately following it. These are junctivity, tone cases, and IAV (Immediate After the Verb) position focus effects.

6.1. Junctivity

Many Bantu languages have verbal alternations that encode information structure or constituency rather than the semantics of tense, aspect, or mood. The two values of these alternations are variously termed conjoint/disjoint, conjunctive/disjunctive, and (for some languages) short/long. In languages having such an alternation, only the disjoint form can be used clause-finally, as shown in Zulu in (47). (The gloss 1A refers to a noun class distinct from class 1, but which controls class 1 agreement.)

(47) a. Ú-bàbá ú-fík-ìlè. [Zulu]
        AUG-1A.father 1SM-arrive-PRF.DJ
        ‘Father has arrived.’

     b. *Ú-bàbá ú-fík-ê.
        AUG-1A.father 1SM-arrive-PRF.CJ

The conjoint form is typically required to question a postverbal constituent in situ. In some languages, the alternation appears to encode focus. In Makhuwa (P31), for example, while an object can be immediately preceded by either a conjoint or disjoint verb form, it may have a focused interpretation only if preceded by a conjoint verb form, as seen in (48) (van der Wal 2009: 215−262). In other languages, such as Zulu, it has been argued that junctivity correlates with constituency rather than directly with focus: in that language a disjoint verb form is constituent-final, while a conjoint form is non-final (van der Spuy 1993; Buell 2005).

(48) a. Enyómpé tsi-náá-khúúrá maláshi. (disjoint) [Makhuwa]
        10cows 10SM-PRS.DJ-chew 6grass
        ‘The cows eat grass.’

     b. Enyómpé tsi-n-khúúrá malashí. (conjoint)
        10cows 10SM-PRS.CJ-chew 6grass
        ‘The cows eat GRASS.’ (Jenneke van der Wal, p.c.)

See also Creissels (1996) for Tswana and Meeussen (1959) for Rundi (D62, also called Kirundi).

6.2. Tonally encoded complementation

In some languages a noun appears in tonally distinct forms in accordance with a phenomenon referred to as a tone case system or (as appropriate only for certain languages) predicative lowering. Mbundu (H21, also called Umbundu), for example, has tonally distinct forms known as Predicative Case, Object Case (OC), and Common Case (CC) (Schadeberg 1986). Predicative Case is used to make a nominal predicate and is also used as the citation form. The CC form is used, for example, in preverbal subject position, while the OC form is used for the first complement of the verb. Although these two simple distribution facts might suggest that CC is nominative and OC is accusative, the system resembles neither accusative nor ergative case systems. One way in which it differs is that in a ditransitive construction, OC is only used on the first object following the verb, as can be seen in the distribution of èpako (CC) and épakò (OC) ‘fruit’ in (49). In both sentences, this noun is the direct object, yet it appears in Object Case only in (49a), where it is the first element following the verb. The so-called Object Case is further used for an immediately postverbal locative adjunct, as well as for the second member of a conjoined subject. Furthermore, Common Case is used instead of Object Case in several specific contexts in which the noun is an immediately postverbal object, such as after a post-auxiliary infinitive or a negative verb form.

(49) a. Ònjalí yácá épakò kòmál˜a. [Mbundu]
        parent.CC gave fruit.OC to.children.CC
        ‘The parent gave the fruit to the children.’

     b. Ònjalí yáh˜á ómál˜a èpako.
        parent.CC gave children.OC fruit.CC
        ‘The parent gave the children the fruit.’ (Schadeberg 1986: 434)

Tone cases can interact with junctivity. For example, in Makhuwa, a direct object appears in a tonally lowered form if it follows a conjoint verb form, but without tonal lowering if it follows a disjoint verb form (van der Wal 2006b: 220). This can be seen by comparing the citation form eparáthú ‘plate’ with the tonally lowered form eparathú in the two examples in (59) further below.

6.3. IAV focus effects

In many Bantu languages, the IAV position (Immediate After the Verb position, Watters 1979) is a privileged linear position for focused elements, such as a wh-phrase or a contrastively focused phrase. In languages with a conjoint/disjoint alternation, typically the conjoint verb form must be used in this case. However, the IAV requirement and the conjoint verb requirement are independent. This is illustrated in Zulu in (50). Although (50b) has the required conjoint verb form, the questioned phrase is not in the required IAV position.

(50) a. Ù-nìk-ê ú-ḿfánà í-ncwàdí. [Zulu]
        2SG.SM-give-PRF.CJ AUG-1boy AUG-9book
        ‘You gave the boy a book.’

     b. *Ù-nìk-ê ú-ḿfánà yìphí í-ncwàdí?
        2SG.SM-give-PRF.CJ AUG-1boy 9which AUG-9book
        Int. ‘Which book did you give to the boy?’

     c. Ù-m̀-nìk-ê [ yìphí í-ncwàdí ] ú-ḿfánà?
        2SG.SM-1OM-give-PRF.CJ 9which AUG-9book AUG-1boy
        ‘Which book did you give to the boy?’

IAV focus effects have given rise to a literature addressing the question as to whether the IAV position is a structural low focus position (as Belletti 2002 proposed for Italian), possibly headed by the junctivity morpheme. Aboh (2006) and Riedel (2009: 165) have argued for such a proposal, while others have argued against it (Hyman and Polinsky 2009; Buell 2006). A similar idea was earlier proposed for Rundi (Ndayiragije 1999), a language in which the postverbal focused element appears not in an immediately postverbal position, but far in the right periphery, as shown in (51). In both of these sentences the verb is in the conjoint form, giving a postverbal element a focused interpretation. However, while in most languages described the focus must be on the immediately postverbal element, in Rundi the focused element appears in clause-final position.

(51) a. Yohani a-á-zanye [ PROi ku-risha ] inkai. [Rundi]
        John 1SM-PST-bring:PRF 15-graze cows
        ‘John brought cows (not goats) to graze.’

     b. Yohani a-á-zanye inka [ PROi ku-risha ].
        John 1SM-PST-bring:PRF cows 15-graze
        ‘John brought cows to graze (not to sleep).’ (Ndayiragije 1999: 427)

7. Non-verbal predication

Bantu languages have a number of predication types which are headed by some element other than a verb. These include adjectival, possessive (‘to have’), existential, nominal, and locative predication, of which we will briefly examine the last two, using Swahili. Consider the nominal predicative clause in (52).

(52) a. Juma ni mwalimu. [Swahili]
        1Juma COP 1teacher
        ‘Juma is a teacher.’

     b. Juma a-li-kuw-a (ni) mwalimu.
        1Juma 1SM-PST-be-FS COP 1teacher
        ‘Juma was a teacher.’

In (52a), we find the copular particle ni. The sentence can only be interpreted as present tense. In (52b), while the verb (ku)wa appears, its purpose is not to serve as the head of predication, but rather to support certain types of tense, mood, or negation morphology requiring a verb, as shown by the fact that it can be followed by the copular particle ni.

For a second example of non-verbal predication, consider locative predication in Swahili. For this type of predication no copular particle is used. Rather, in the present tense, agreement morphology is attached to a locative pronominal enclitic, as in (53).

(53) Juma yu-ko shule-ni. [Swahili]
     1Juma 1SM-17PRON 9school-LOC
     ‘Juma is at school.’

Non-verbal predication in Bantu has only recently begun to interest syntacticians. It has been shown that while Zulu non-verbal predicates share certain properties with verbal ones, only the latter can participate in inversions of the type in (34a) above. Zeller (2012) and Buell and de Dreu (2013) have used facts such as these to argue for the presence of the PredP projection first proposed by Bowers (1993).

8. Tense, mood, and aspect

Bantu languages are famous for having rich tense and aspect systems (Nurse 2008; Botne and Kershner 2008). Two main factors lead to this richness in a given Bantu language: a large number of simple tenses and moods and the availability of compound tenses. To give an example of an inventory of simple tenses, in the affirmative indicative, Zulu has six simple forms: remote past, recent past, present, near future, remote future, and potential (‘can, would’). This number of tenses is then increased by complex tenses (one tensed form embedded under an auxiliary). As seen above in (27b), both the auxiliary verb and the lexical verb of a compound tense are conjugated. Typically, no element can intervene between the auxiliary and the lexical verb. Most Bantu languages lack adjective-like participles of the European type.

With regard to mood, a Bantu language typically has a distinct subjunctive or optative mood. The infinitive was already discussed in section 2.3. Of particular interest to the syntactician are non-subjunctive subordinate moods found in some languages. For example, the Swahili -ki- “tense” can be used in situative clauses without an overt complementizer, as in the conditional clause in (54a) and the depictive in (54b). This “tense” thus resembles the English present participle in usage, but unlike a true participle it bears subject person features. Similarly, Zulu has a participial submood (subordinate clause) counterpart for each principal submood (matrix clause) indicative tense. These forms are used in certain subordinate contexts, such as the lexical verb in a compound tense, in conditional clauses, and (in a slightly distinct form) in relative clauses.

(54) a. Mwalimu a-ki-j-a, mw-ambi-e ni-me-kwend-a ku-lal-a. [Swahili]
        1teacher 1SM-PTCP-come-FS 1OM-tell-SBJV 1SG.SM-PRF-go-FS 15-sleep-FS
        ‘When the teacher arrives, tell him that I have gone to bed.’

     b. Ni-li-mw-on-a simba a-ki-nyemele-a banda la ng’ombe.
        1SG.SM-PST-1OM-see-FS 9lion 1SM-PTCP-creep-FS 5shed 5of 10cows
        ‘I saw a lion stealthing toward the cow-stable.’ (Loogman 1965: 202)

9. Dependent clauses

We will now consider two different types of dependent clauses: raising and control structures, and relative clauses.

9.1. Raising and control

Infinitival clauses were first discussed in section 2.3. In European languages, infinitival subordinate clauses often appear in two syntactically interesting and well-studied contexts: raising and control structures. Raising of the sort found in English (Johni seems ti to be missing) with an infinitival embedded clause does not typically occur in Bantu languages (but see Zeller 2006 for such a case). However, Kimenyi (1978: 149−172) discusses cases of both subject-to-subject raising and subject-to-object raising in Ruanda. Many languages exhibit constructions with arguable subject-to-subject raising out of a tensed embedded clause, as exemplified in Shona (S10, also called Chishona) in (55). (Also see Carstens 2011.)

(55) Mbavháii í-no-fungir-w-a kuti ti y-áka-vánd-á mú-bako. [Shona]
     9thief 9SM-PRS-suspect-PASS-FS that 9SM-PST-hide-FS 18-cave
     ‘The thief is suspected to have hidden in the cave.’ (Harford 1985: 81)

Subject control is commonplace and is illustrated in Swahili in (56).

(56) Ni-na-fikir-i-a ku-nunu-a gari jipya. [Swahili]
     1SG.SM-PRS-think-APPL-FS 15-buy-FS 5car 5new
     ‘I’m thinking of buying a new car.’

Object control (i.e., Mary convinced Johni PROi to sing), shown in Swahili in (57a), is less widely attested in Bantu. Typically, a subjunctive complement clause is used instead, as in the more readily accepted variant in (57b).

(57) a. Mwalimu a-li-ni-lazimish-a ku-tok-a. [Swahili]
        1teacher 1SM-PST-1SG.OM-force-FS 15-leave-FS

     b. Mwalimu a-li-ni-lazimish-a ni-tok-e.
        1teacher 1SM-PST-1SG.OM-force-FS 1SG.SM-leave-SBJV
        ‘The teacher forced me to leave.’

9.2. Relative clauses

Bantu languages display a wide range of patterns for finite relative clauses with respect to placement of the relative morphology, word order, and subject and object marking. We will consider a few different patterns here to give the reader an idea of this range. (For overviews of Bantu relativization strategies, see Nsuka-Nkutsi 1982; Henderson 2006; and Zeller 2004b.)

Swahili has two relativization patterns, which can be termed analytic and synthetic. In the analytic form, shown in (58a), a left-peripheral word appears, composed of amba- (historically a verb stem meaning ‘say’) and a pronominal clitic agreeing with the element relativized. In an analytic relative clause, the verb may appear in essentially any indicative tense and the clause has the usual subject-initial word order.

(58) a. kitabu amba-cho mwalimu a-li-(ki)-som-a (analytic) [Swahili]
        7book REL-7PRON 1teacher 1SM-PST-7OM-read-FS

     b. kitabu a-li-cho-ki-som-a mwalimu (synthetic)
        7book 1SM-PST-7PRON-7OM-read-FS 1teacher
        ‘the book that the teacher read’

In the synthetic form in (58b), the pronominal enclitic appears internal to the verb word, and the subject must follow the verb (Demuth and Harford 1999). Furthermore, only a few tenses can form the basis of a synthetic relative. The synthetic forms raise interesting problems. For example, the fact that the enclitic appears inside the verb word suggests that the tense marker and the verb stem belong to distinct syntactic heads (Barrett Keach 1985), while the verb-initial word order suggests that the verb word is a single head which has moved over the subject to C0 (Ngonyani 1999).

Non-subject relatives in some languages, including Makhuwa (P31), have a pattern in which subject agreement is expressed as an enclitic rather than as a subject marker, as in (59b), in which the subject agreement appears as the suffix -ááwé. Except for their lack of an agreeing prefix, these enclitics are morphologically identical to possessive pronouns, as reflected in the gloss.

(59) a. Eliísá aa-ráp-íh’ eparathú. [Makhuwa]
        1Lisa 1IPFV.CJ-wash-CAUS 9plate
        ‘Lisa washed a plate.’

     b. eparáthú y-aa-ráp-íh-ááwé Eliísa
        9plate 9-IPFV-wash-CAUS-1POSS 1Lisa
        ‘the plate that Lisa washed’ (van der Wal 2010: 214)

Note that the “subject marker” in the relative clause in (59b) agrees with the extracted object rather than with the logical subject. In this case and others, there is a question as to whether the word-initial agreement morpheme is a subject marker (arguably an I0 or Agr0 head) or a higher, A′-related agreement head, such as an agreeing C0 head (Kinyalolo 1991; Letsholo 2002; see also Morimoto 2006 for topic agreement). While it is generally difficult morphologically to distinguish between the two possibilities, in some languages both morphemes can surface in a single verb form. This is shown in Lega (D25, also called Kilega) in (60), in which Kinyalolo glosses the agreeing C0 head u- as RM, for relative marker, while the subject marker mú- is also present.

(60) mwána u-mú-k-énd-a ná-gé ku-Ngando… [Lega]
     1child 1RM-2PL.SM-FUT-go-FS with-AGR 17-Ngando
     ‘the child with whom you will go to Ngando’ (Kinyalolo 1991: 23)

Many Bantu languages have a morphological distinction between subject and non-subject relatives which is only manifest with a noun class 1 subject. The contrast is shown in Zulu in (61). In both clauses, the relative clause has a class 1 subject. In (61a), the subject itself is relativized and the subject marker fused with the relative morpheme is o-, whereas in (61b), where the direct object is relativized, that prefix is a-. This problem has been analysed as an anti-agreement phenomenon (Schneider-Zioga 2007; Cheng 2006).

(61) a. ú-mákhélwànè [ ô-zò-cùl-à í-ngòmà ] [Zulu]
        AUG-1neighbour REL.1SM-FUT-sing-FS AUG-9song
        ‘the neighbour who will sing a song’

     b. í-ngòmà [ ú-mákhélwànè â-zò-yì-cúl-à ]
        AUG-9song AUG-1neighbour REL.1SM-FUT-9OM-sing-FS
        ‘the song that the neighbour will sing’


10. Questions, focus, and topicalization

In this section we look at a variety of issues related to the highest domain in the clause, the complementizer domain. These include questions, topicalization, right dislocation, focus, and clefts. For an in-depth study of these issues in Northern Sotho (S32), see Zerbian (2006b).

10.1. Questions

Bantu languages typically have two primary strategies for forming constituent questions: in situ questioning and cleft questions. These are illustrated in Zulu in (62a) and (62b), respectively.

(62) a. Í-zìngánè z-â-cùl-á yìphí í-ngòmà? [Zulu]
        AUG-10children 10SM-PST-sing-FS 9which AUG-9song
        ‘Which song did the children sing?’

     b. Kw-á-kú-yì-yîphí í-ngòmà í-zìngánè é-z-à-yí-cûl-à?
        17SM-PST-17SM-COP-9which AUG-9song AUG-10children REL-10SM-PST-9OM-sing-FS
        ‘Which song was it that the children sang?’

Instead of leaving the questioned constituent strictly in situ, in some languages, such as Sambaa, the constituent is moved to the IAV position (see section 6.3) (Riedel 2009: 165). In many Bantu languages, such as Zulu, subjects cannot be questioned in their canonical preverbal position, as shown in (63b). In Zulu, this seems to be related to a general ban on focused preverbal subjects, as shown with the subject modified by kúphêlà ‘only’ in (63c) (Buell 2008). In these languages, another strategy, such as a cleft question, must be used for questioning or focusing the subject. In some languages, like Swahili, however, the equivalent of (63b) is grammatical.

(63) a. Í-zìngánè zì-cúl-ìlè. [Zulu]
        AUG-10children 10SM-sing-PRF.DJ
        ‘The children sang.’

     b. *O-bani ba-cul-ile?
        AUG-2who 2SM-sing-PRF.DJ
        intended: ‘Who sang?’

     c. *I-zingane kuphela zi-cul-ile.
        AUG-10children only 10SM-sing-PRF.DJ
        intended: ‘Only the children sang.’

Yes/no-questions typically differ from statements only in prosody, but some languages also have interrogative particles, such as Swati’s (S43) yini and na, shown in (64).


Thwala (2006) has used the fact that these particles are intrinsically ordered, and the way in which they can be separated by dislocated items, to argue for remnant movement of different parts of the clause to the complementizer domain when they precede a question particle.

(64) Bafana ba-to-tseng-a imoto (yini) (na)? [Swati]
     2boys 2SM-FUT-buy-FS 10car Q Q
     ‘Will boys buy a car?’ (Thwala 2006)

10.2. Topicalization and right dislocation

Most Bantu languages allow topical elements in the left periphery of the clause, such as the temporal adverb ízòlò ‘yesterday’ and the object lélò phèphándàbà ‘this newspaper’ in the Zulu sentence in (65).

(65) Ízòlò lélò phèphándàbà ngì-lì-fùnd-ílè. [Zulu]
     yesterday 5that 5newspaper 1SG.SM-PST-read-PRF.DJ
     ‘Yesterday I read this newspaper.’

Many languages also allow right dislocation, as shown in Zulu with the subject úbàbá ‘father’ in (66), which, on the basis of the agreeing subject marker, is dislocated.

(66) Ú-fík-ìlè ízòlò ú-bàbá. [Zulu]
     1SM-arrive-PRF.DJ yesterday AUG-1A.father
     ‘Father came yesterday.’ (literally ‘He came yesterday, father.’)

A growing body of literature has concerned itself with topicalization and dislocation in Bantu languages (Zerbian 2006b; Morimoto 2000; Zeller 2009; Cheng and Downing 2009; Buell 2008). Because an object follows the verb both in its usual position and when it is right-dislocated, an important element in work on object marking involves determining when an object is or is not dislocated (Riedel 2008: 67−73).

10.3. Clefts and left-peripheral focus

Bantu languages typically exhibit clefts. In this biclausal construction, a predicate nominal is followed by a relative clause. The construction is illustrated in (67).

(67) Ng-ù-ḿfùndísì kúphêlà í-zìngánè é-z-à-ḿ-bíngèlèl-â-yò. [Zulu]
     COP-AUG-1teacher only AUG-10children REL-10SM-PST-1OM-greet-FS-REL
     ‘It’s only the teacher that the children greeted.’

In some languages a construction having a similar interpretation exists in which the morpheme preceding the focused element is usually analysed as a focus particle rather than a copula. Under this analysis, the resulting construction is not a cleft in these languages, but rather a monoclausal structure in which the focused element appears in the left periphery of the clause (É. Kiss 1998). This analysis is most tenable in languages like Gikuyu (E51, also called Kikuyu) (Bergvall 1987; Schwarz 2007) and Tharaka (E54, also called Kîîtharaka) (Abels and Muriungi 2008), in which the focus particle can also appear in positions that make a cleft analysis problematic. Consider the ne focus morpheme in the Gikuyu sentences in (68). In this language, an object can be focused in two ways: in situ (i.e., inside the verb phrase) and ex situ (i.e., in a left-peripheral position). For in situ focus the ne morpheme is absent, as in (68a), while in ex situ focus, the object appears in a left-peripheral position after the ne morpheme, as in (68b). Finally, for the interpretation where the object is not in focus, the object appears in postverbal position, while the verb can be preceded by the ne morpheme, as in (68c).

(68) a. Abdul a-ra-nyu-ir-ɛ mae. [Gikuyu]
        Abdul 1SM-T-drink-ASP-FS 6water
        ‘Abdul drank WATER.’

     b. Ne mae Abdul a-ra-nyu-ir-ɛ.
        FOC 6water Abdul 1SM-T-drink-ASP-FS
        ‘Abdul drank WATER.’

     c. Abdul (ne)-a-ra-nyu-ir-ɛ mae.
        Abdul FOC-1SM-T-drink-ASP-FS 6water
        ‘Abdul drank water.’ (Schwarz 2007: 140, 142)

There are certain similarities between the distribution of these focus markers as in (68a) and (68c) and the conjoint/disjoint alternations discussed in section 6.1, in terms of whether or not the postverbal term is focused. When prefixed to the verb as in (68c), the focus particle resembles a disjoint verb form, while the absence of the morpheme resembles a conjoint verb form. This similarity can also be seen in the fact that the focus particle is obligatory when the verb lacks an object, as seen in the Tharaka example in (69). In languages showing a conjoint/disjoint alternation, a disjoint form would be required in this context, as seen above in (47).

(69) Maria *(n-)a-kiny-ir-e. [Tharaka]
     1Maria FOC-1SM-arrive-PRF-FS
     ‘Maria arrived.’ (Abels and Muriungi 2008: 692)

11. Conclusion

In the foregoing sections we examined some of the most salient syntactic characteristics of the Bantu languages, touching on various aspects and components of the clause. We also saw that numerous issues in Bantu languages have caught the interest of syntacticians, especially those issues related to argument structure, agreement, and word order.


Some of these have even had an important influence on linguistic theory (see Bresnan 1990; Henderson 2011). The number of syntacticians working on Bantu languages has grown considerably over the past two decades, and their work is likely to be brought increasingly to bear on syntactic theory (Buell et al. 2011). Some of their work will simply help us refine our current understanding of a particular well-understood topic, such as the structure of the left periphery. Other work, however, will likely pose serious challenges to currently popular proposals; one example is work on constructions in which the verb agrees with both the subject and an element in the complementizer domain, as in the Lega relative clause in (60). These constructions seem incompatible with Chomsky’s (2007, 2008) proposal that T0 inherits the agreement features of C0, a proposal which assumes that C0 has only a single set of agreement features (Carstens 2010). Yet other phenomena, such as junctivity and tone cases, lack obvious analogues in the better-studied European languages, and it is as yet unclear how they can best be integrated into our understanding of syntax or in what ways they will force us to modify our current theories.

12. Abbreviations

The following abbreviations not covered by the Leipzig Glossing Rules appear in the glosses:

ASP   aspect(ual marker)
AUG   augment
CC    common case
CJ    conjoint
DJ    disjoint
FS    final suffix
HAB   habitual
OC    object case
OM    object marker
PRON  pronoun
RM    relative marker
SM    subject marker
T     tense

Acknowledgements

I wish to thank my Zulu informant Meritta Xaba and my Swahili informant (and fellow linguist) Sandra Barasa. I am also grateful to Vicki Carstens, Michael Diercks, Lutz Marten, and Thilo Schadeberg, and especially to Kristina Riedel, Jenneke van der Wal, and an anonymous reviewer for their thoughtful input.

13. References (selected)

Abels, Klaus, and Peter Muriungi 2008 The focus marker in Kîîtharaka: Syntax and semantics. Lingua 118: 687−731.
Aboh, Enoch 2006 Leftward focus versus rightward focus: The Kwa-Bantu conspiracy. SOAS Working Papers in Linguistics 15: 80−104.
Adams, Nikki 2010 The Zulu ditransitive verb phrase. Ph.D. dissertation, University of Chicago.
Alsina, Alex, and Sam Mchombo 1993 Object asymmetries and the Chicheŵa applicative construction. In: Sam Mchombo (ed.), Theoretical Aspects of Bantu Grammar, 17−45. Stanford: CSLI.
Baker, Mark 1985 The Mirror Principle and morphosyntactic explanation. Linguistic Inquiry 16: 373−415.
Baker, Mark 1988 Incorporation: A Theory of Grammatical Function Changing. Chicago: University of Chicago Press.
Baker, Mark 2008 The Syntax of Agreement and Concord. Cambridge: Cambridge University Press.
Barrett Keach, Camillia N. 1985 The syntax and interpretation of the relative clause construction in Swahili. Ph.D. dissertation, University of Massachusetts.
Bearth, Thomas 2003 Syntax. In: Derek Nurse and Gérard Philippson (eds.), The Bantu Languages, 121−142. London: Routledge.
Belletti, Adriana 2002 Aspects of the low IP area. In: Luigi Rizzi (ed.), The Structure of IP and CP: The Cartography of Syntactic Structures 2, 16−51. Oxford: Oxford University Press.
Bergvall, Victoria L. 1987 Focus in Kikuyu and Universal Grammar. Ph.D. dissertation, Harvard University.
Bokamba, Eyamba G. 1979 Inversions as grammatical relation changing rules in Bantu languages. Studies in the Linguistic Sciences 9(2): 1−24.
Botne, Robert, and Tiffany L. Kershner 2008 Tense and cognitive space: On the organization of tense/aspect systems in Bantu languages and beyond. Cognitive Linguistics 19(2): 145−218.
Bowers, John 1993 The syntax of predication. Linguistic Inquiry 24: 591−656.
Bresnan, Joan 1990 African languages and syntactic theories. Studies in the Linguistic Sciences 20: 35−48.
Bresnan, Joan, and Jonni M. Kanerva 1989 Locative inversion in Chichewa: A case study of factorization in grammar. Linguistic Inquiry 20(1): 1−50.
Bresnan, Joan, and Lioba Moshi 1990 Object asymmetries in comparative Bantu syntax. Linguistic Inquiry 21(2): 147−185.
Bresnan, Joan, and Sam A. Mchombo 1987 Topic, pronoun and agreement in Chicheŵa. Language 63(4): 741−782.
Buell, Leston 2005 Issues in Zulu verbal morphosyntax. Ph.D. dissertation, University of California, Los Angeles.
Buell, Leston 2006 The Zulu conjoint/disjoint verb alternation: Focus or constituency? ZAS Papers in Linguistics 43: 9−30.
Buell, Leston 2007 Semantic and formal locatives: Implications for the Bantu locative inversion typology. SOAS Working Papers in Linguistics 15: 105−120.
Buell, Leston 2008 VP-internal DPs and right-dislocation in Zulu. Linguistics in the Netherlands 2008: 37−49.
Buell, Leston, and Merijn de Dreu 2013 Subject raising in Zulu and the nature of PredP. The Linguistic Review 30: 423−466.
Buell, Leston, Kristina Riedel, and Jenneke van der Wal 2011 What the Bantu languages can tell us about word order and movement. Lingua 121: 689−701.
Carstens, Vicki 1991 The morphology and syntax of determiner phrases in Kiswahili. Ph.D. dissertation, University of California, Los Angeles.
Carstens, Vicki 2005 Agree and EPP in Bantu. Natural Language and Linguistic Theory 23: 219−279.
Carstens, Vicki 2010 Implications of grammatical gender for the theory of uninterpretable features. In: M. T. Putnam (ed.), Exploring Crash-Proof Grammars, 31−58. Amsterdam: Benjamins.
Carstens, Vicki 2011 Hyperactivity and hyperagreement in Bantu. Lingua 121: 721−741.
Cheng, Lisa 2006 Decomposing Bantu relatives. Proceedings of the North Eastern Linguistic Society 36: 197−215.
Cheng, Lisa, and Laura J. Downing 2009 Where’s the topic in Zulu? The Linguistic Review 26: 207−238.
Creissels, Denis 1996 Conjunctive and disjunctive verb forms in Setswana. South African Journal of African Languages 16(4): 109−115.
Creissels, Denis, and Danièle Godard 2005 The Tswana infinitive as a mixed category. In: Stefan Müller (ed.), Proceedings of HPSG05, 70−90.
de Blois, Kornelis Frans 1970 The augment in the Bantu languages. In: Africana Linguistica IV: 85−165. (Annales, 68.) Tervuren: MRAC.
Demuth, Katherine, and Carolyn Harford 1999 Verb raising and subject inversion in Bantu relatives. Journal of African Languages and Linguistics 20: 41−61.
Demuth, Katherine, and Sheila Mmusi 1997 Presentational focus and thematic structure in comparative Bantu. Journal of African Languages and Linguistics 18: 1−19.
Diercks, Michael 2011 The morphosyntax of Lubukusu locative inversion and the parameterization of Agree. Lingua 121: 702−720.
du Plessis, J. A., and Marianna Visser 1992 Xhosa Syntax. Pretoria: Via Afrika.
É. Kiss, Katalin 1998 Identificational focus versus information focus. Language 74(2): 245−273.
Ferrari-Bridgers, Franca 2008 The syntax-semantics interface of Luganda initial vowel. Research in African Languages and Linguistics 8(2).
Grimes, Barbara F. (ed.) 2000 Languages of the World: Ethnologue, 13th edition. Dallas: SIL International.
Guthrie, Malcolm 1967 The Classification of the Bantu Languages. London: Dawsons.
Halpert, Claire 2012a Case, agreement, EPP and information structure: A quadruple-dissociation in Zulu. In: Jaehoon Choi et al. (eds.), Proceedings of the 29th West Coast Conference on Formal Linguistics, 90−98. Somerville, MA: Cascadilla Proceedings Project.
Halpert, Claire 2012b Argument licensing and agreement in Zulu. Ph.D. dissertation, Massachusetts Institute of Technology.
Harford, Carolyn 1985 Aspects of complementation in 3 Bantu languages. Ph.D. dissertation, University of Wisconsin−Madison.
Harford, Carolyn 1993 The applicative in Chishona and Lexical Mapping Theory. In: Sam Mchombo (ed.), Theoretical Aspects of Bantu Grammar, 93−111. Stanford: CSLI.
Henderson, Brent 2006 The syntax and typology of Bantu relative clauses. Ph.D. dissertation, University of Illinois at Urbana-Champaign.
Henderson, Brent 2011 African languages and syntactic theory: Impacts and directions. In: Eyamba G. Bokamba et al. (eds.), Selected Proceedings of the 40th Annual Conference on African Linguistics, 15−25. Somerville, MA: Cascadilla Proceedings Project.
Hyman, Larry 2003 Suffix ordering in Bantu: A morphocentric approach. In: Geert Booij and Jaap van Marle (eds.), Yearbook of Morphology 2002, 245−281. Dordrecht: Kluwer Academic Publishers.
Hyman, Larry, and Maria Polinsky 2009 Focus in Aghem. In: Malte Zimmermann and Caroline Féry (eds.), Information Structure: Theoretical, Typological, and Experimental Perspectives. Oxford: Oxford University Press.
Hyman, Larry, and Alessandro Duranti 1982 On the object relation in Bantu. In: Paul J. Hopper and Sandra A. Thompson (eds.), Studies in Transitivity, 217−239. New York: Academic Press.
Hyman, Larry, and Francis X. Katamba 1993 The augment in Luganda: Syntax or pragmatics? In: Sam Mchombo (ed.), Theoretical Aspects of Bantu Grammar, 209−256. Stanford: CSLI.
Julien, Marit 2003 Syntactic Heads and Word Formation. Oxford: Oxford University Press.
Katamba, Francis X. 2003 Bantu nominal morphology. In: Derek Nurse and Gérard Philippson (eds.), The Bantu Languages, 103−120. London: Routledge.
Kim, Jun-Han 2004 La habilitación del pro expletivo y el Principio de Proyección Extendido (PPE) en el español. Ph.D. dissertation, Universidad Autónoma de Madrid.
Kimenyi, Alexandre 1978 A Relational Grammar of Kinyarwanda. University of California Press.
Kinyalolo, Kasangati K. W. 1991 Syntactic dependencies and the SPEC-head Agreement Hypothesis in KiLega. Ph.D. dissertation, University of California, Los Angeles.
Krifka, Manfred 1995 Swahili. In: Joachim Jacobs et al. (eds.), Syntax: Ein Internationales Handbuch Zeitgenössischer Forschung, Vol. 2, 1397−1418. Berlin: Walter de Gruyter.
Letsholo, Rose 2002 Syntactic Domains in Ikalanga. Ph.D. dissertation, University of Michigan.
Loogman, Alfons 1965 Swahili Grammar and Syntax. New York: The Ad Press.
Lusekelo, Amani 2009 The structure of the Nyakyusa noun phrase. Nordic Journal of African Studies 18(4): 305−331.
Maho, Jouni 2003 A classification of the Bantu languages: An update of Guthrie’s referential system. In: Derek Nurse and Gérard Philippson (eds.), The Bantu Languages, 90−102. London: Routledge.
Marantz, Alec 1984 On the Nature of Grammatical Relations. Cambridge, Mass.: MIT Press.
Marantz, Alec 1993 Implications of asymmetries in double object constructions. In: Sam Mchombo (ed.), Theoretical Aspects of Bantu Grammar, 113−150. Stanford: CSLI.
Marten, Lutz 2000 Agreement with conjoined noun phrases in Swahili. Afrikanistische Arbeitspapiere 64 (Swahili Forum VII): 75−96.
Marten, Lutz 2006 Locative inversion in Otjiherero: More on morpho-syntactic variation in Bantu. ZAS Papers in Linguistics 43: 97−122.
Marten, Lutz, Nancy Kula, and Nhlanhla Thwala 2007 Parameters of morpho-syntactic variation in Bantu. Transactions of the Philological Society 105(3): 253−338.
Meeussen, Achille E. 1967 Bantu grammatical reconstructions. Africana Linguistica 3: 80−122.
Meinhof, Carl 1948 Grundzüge einer vergleichenden Grammatik der Bantusprachen. Second edition. Hamburg: Dietrich Reimer. First published 1906.
Morimoto, Yukiko 2000 Discourse configurationality in Bantu morphosyntax. Ph.D. dissertation, Stanford University.
Morimoto, Yukiko 2006 Agreement properties and word order in comparative Bantu. In: Laura Downing, Lutz Marten, and Sabine Zerbian (eds.), Papers in Bantu Grammar and Description, 161−188.
Mugane, John M. 2003 Hybrid constructions in Gikuyu: Agentive nominalizations and infinitive-gerund constructions. In: Miriam Butt and Tracy Holloway King (eds.), Nominals: Inside and Out, 235−265. Stanford: CSLI.
Muriungi, Peter 2009 Phrasal movement inside Bantu verbs: Deriving affix scope and order in Kîîtharaka. Ph.D. dissertation, University of Tromsø.
Myers, Scott 1987 Tone and the structure of words in Shona. Ph.D. dissertation, University of Massachusetts.
Ndayiragije, Juvénal 1999 Checking economy. Linguistic Inquiry 30(3): 399−444.

46. The Bantu Languages


VII. Syntactic Sketches


Leston Chandler Buell, Amsterdam (The Netherlands)


47. Tagalog

1. Introduction
2. Basic clause structure
3. Word order
4. Nominals and other arguments
5. Non-declarative sentences and negation
6. Subject properties
7. References (selected)

1. Introduction

Tagalog, a member of the West Indonesian branch of the Austronesian language family, is native to the southern part of the Philippine island of Luzon. Since its adoption in 1937 as the Philippine national language (under the name Pilipino), it has spread rapidly over the entire Philippine archipelago, and it is estimated that by the year 2000 over 98 % of all Filipinos will speak Tagalog as a first or second language. More than 300 years of contact with Spanish and a briefer period of contact with English have heavily influenced the Tagalog lexicon, and have had some influence on the phonology as well (leading, for example, to a phonemicization of an originally allophonic distinction between high and mid vowels). But contact with Spanish and English appears to have had negligible influence on Tagalog syntax. The following selective sketch presents an overview of the major syntactic structures of Tagalog, with emphasis on certain aspects of Tagalog syntax that are of particular theoretical interest. For a more complete account of the grammar as a whole, see Schachter and Otanes (1972).

2. Basic clause structure

Tagalog is basically a predicate-initial language (but see section 3 for certain non-predicate-initial constructions), in which the dominant clause type contains a predicate followed by one or more argument expressions. (There is also a set of weather and time predicates that do not take following arguments: e.g., the verb kumulog ‘(to) thunder’ and the adjective maaga ‘early’.) Predicates may be verbs, nouns, adjectives, or prepositional phrases. By far the greatest syntactic complexity and interest is to be found in clauses with verbal predicates, the subject of sections 2.1 through 2.1.2. Clauses with nonverbal predicates are discussed in section 2.2.

2.1. Clauses with verbal predicates

A basic clause with a verbal predicate contains a verb followed by a string of arguments all but one of which are case-marked to indicate the argument’s semantic role. (The


order of arguments after the initial verb is in general not fixed − see section 3 for details.) The one argument per clause that is not case-marked has its semantic role indicated by an affix on the verb. (There is also affixation for aspect, with finite verbs occurring in one of three aspect-marked forms: irrealis, imperfective, perfect. Since aspect does not in general affect syntactic structure, it is henceforth ignored.) The non-case-marked argument will be referred to here as the Trigger. (Some previous treatments have referred to the argument in question as the topic and some as the subject. However, as will become clear below, each of these labels appears to carry some inappropriate connotations, making a neutral term like Trigger seem preferable.) The following abbreviations are used in the glosses: AT = Actor-Trigger affix, T = Trigger marker, TH = Theme marker, D = Direction marker, THT = Theme-Trigger affix, A = Actor marker, DT = Direction-Trigger affix. Note that the Trigger, although not case-marked, has its own distinctive marking.

(1)  Mag-aabot ang babae ng laruan sa bata.
     AT.will:hand T woman TH toy D child
     ‘The woman will hand a toy to a/the child.’

(2)  Iaabot ng babae ang laruan sa bata.
     THT.will:hand A woman T toy D child
     ‘A/The woman will hand the toy to a/the child.’

(3)  Aabutan ng babae ng laruan ang bata.
     will:hand.DT A woman TH toy T child
     ‘A/The woman will hand a toy to the child.’

As the translations indicate, the Trigger argument is regularly interpreted as definite, a non-Trigger Theme as indefinite, and other non-Trigger arguments as either definite or indefinite. It is the regular association of the Trigger with definiteness that has led some previous analysts (e.g., Schachter and Otanes 1972) to identify the Trigger as a topic, but the Trigger does not necessarily have the other pragmatic properties typically associated with topics: e.g., its referent is not necessarily what the sentence is about. The variation in verb form illustrated in sets of sentences like (1)−(3) is reminiscent of voice-marking, and one might be tempted to analyze (1) as active and (2) and (3) as different types of passives (in which case the Trigger would correspond to the active or passive subject). However, there is good reason to distinguish the Tagalog Trigger-marking system from the voice systems of familiar languages. In the first place, non-Actor-Trigger clauses such as (2) and (3) generally outnumber Actor-Trigger clauses such as (1) in texts − a situation that is certainly unexpected if (1) is active and (2) and (3) passive. Second, while voice systems typically make just a two-way distinction (between active and passive), the Tagalog Trigger-marking system is much richer, allowing a wide range of semantic roles to be registered on the verb. For example, in addition to arguments with the semantic roles illustrated in (1)−(3), arguments with roles like beneficiary, instrument, location, and cause may also serve as Triggers, with a distinct verbal affix indicating the role in each case. There also seems to be good reason to reject the term focus − which is the traditional term in the Philippinist literature − for the verb-form variation in question. Focus now has a well-established usage as a term of pragmatics, where it indicates new or salient information. This pragmatic function is irrelevant to the registration of the role of the


Trigger, which, as noted, is regularly definite − a non-focus-like property. It therefore seems advisable to use the neutral term Trigger system for the phenomenon in question, and to refer to the verb forms that comprise this system as x-Trigger forms, where x ranges over the semantic roles that the system distinguishes. As for the functions of the Trigger system − i.e., the conditions that determine the choice of one or another Trigger form − an unpublished textual study by Fay Wouk suggests that the central function has to do with marking the referential status of the arguments. The Trigger is always referential. Its reference is usually definite, but it may also be generic, or indefinite but specific (‘a certain …’). Non-Trigger arguments, on the other hand, are not necessarily referential, and one − the non-Trigger Theme − is almost always indefinite. The choice of Trigger is thus to a considerable extent conditioned by the relative referential status of the arguments in the clause. If, for example, the argument with the role of Theme has a definite referent, the clause will in the great majority of cases be a Theme-Trigger clause. Each Tagalog verb is lexically specified for the semantic roles of the arguments for which it is subcategorized. The system of semantic roles assumed here is based on the thematic-role system of Jackendoff (1972) and much subsequent work in generative grammar, with certain modifications that are needed to account for the Tagalog facts. The central semantic role in Jackendoff’s system is the role of Theme, which is the role played by the argument whose position, possession, etc. are at issue. (“Position” and the like must often be interpreted abstractly: e.g., in John explained the situation to Mary, the situation is the Theme.) Other roles in the system are Agent (typically, the causer of the event expressed), Location, Source, and Goal.
One important characteristic of the system is that a single argument may in some cases be assigned two different roles. For example, in (1)−(3), above, babae ‘woman’ is both Agent and Source. The modifications in the thematic-role system that are assumed here involve the recognition of two macroroles, Actor and Direction. (A macrorole is a generalized semantic role that subsumes certain sets of thematic roles.) The Actor is the argument whose role in any event is presented as central, the argument from whose perspective or point of view the event is presented. (In English translations of Tagalog, the Actor regularly corresponds to the active subject.) If the verb assigns the thematic role of Agent to an argument, then this argument is always the Actor. For non-agentive experiential verbs, the Actor is the experiencer (which, being the locus of the experience, carries the thematic role of Location in Jackendoff’s system), as in:

(4)  Magtitiis ang babae ng kahirapan.
     AT.will:endure T woman TH hardship
     ‘The woman will endure hardship.’

(5)  Titiisin ng babae ang kahirapan.
     will:endure.THT A woman T hardship
     ‘A/The woman will endure the hardship.’

Finally, if the verb assigns neither Agent nor Location (experiencer), the Actor may have the semantic role of Goal, Source, or Theme, the choice being lexically determined. For example, the verb AT tumanggap/THT tanggapin ‘receive’ in (6)−(7) selects the Goal rather than the Theme as Actor, while the verb AT dumating/DT datnan ‘reach’ in


(8)−(9) selects the Theme rather than the Goal. (The AT infix -um-, which is present in the infinitival forms tumanggap ‘receive’ and dumating ‘reach’, does not occur in irrealis forms − hence the absence of overt AT marking in [6] and [8].)

(6)  Tatanggap ang babae ng sulat.
     AT.will:receive T woman TH letter
     ‘The woman will receive a letter.’

(7)  Tatanggapin ng babae ang sulat.
     will:receive.THT A woman T letter
     ‘A/The woman will receive the letter.’

(8)  Dadating ang sulat sa babae.
     AT.will:reach T letter D woman
     ‘The letter will reach a/the woman.’

(9)  Dadatnan ng sulat ang babae.
     will:reach.DT A letter T woman
     ‘A/The letter will reach the woman.’

As the glosses show, Actor assignment takes precedence over thematic role assignment, so Actors are all treated alike syntactically: i.e., as non-Triggers they are all marked in the same way (e.g., by the case-marking preposition ng), as Triggers they all trigger the selection of a member of the same set of affixes (e.g., mag- or -um-), and they all evince the other Actor-related properties discussed in section 6. Direction is a macrorole that subsumes Goal and Source, which again manifest syntactic similarities having to do with case marking and Trigger affixation. Compare (8)−(9), in which the Direction is a Goal, with (10)−(11) (involving the verb AT umalis/DT alisan ‘leave’), in which the Direction is a Source:

(10) Aalis ang tren sa istasyon.
     AT.will:leave T train D station
     ‘The train will leave from the station.’

(11) Aalisan ng tren ang istasyon.
     will:leave.DT A train T station
     ‘A/The train will leave from the station.’

As the examples show, a Direction argument, whether it is a Goal or a Source, takes the same case marking (e.g., the preposition sa) and triggers the same verbal affixation (e.g., -an).
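The Actor-selection hierarchy just described (an Agent is always the Actor; otherwise an experiencer; otherwise a lexically specified Goal, Source, or Theme) and the grouping of Goal and Source under Direction can be restated procedurally. The following Python sketch is purely an editorial illustration, not part of the analysis presented here; the function name and the label "Experiencer" (standing for the locus-of-experience argument, Jackendoff's Location) are invented for the purpose:

```python
# Illustrative sketch of macrorole assignment (hypothetical formalization).
# roles maps each argument to the set of thematic roles the verb assigns it.

def assign_macroroles(roles, lexical_actor=None):
    """Return {argument: macrorole} for a clause's arguments."""
    # 1. An argument bearing Agent is always the Actor.
    actor = next((a for a, rs in roles.items() if "Agent" in rs), None)
    # 2. Otherwise an experiencer (thematic Location) is the Actor.
    if actor is None:
        actor = next((a for a, rs in roles.items() if "Experiencer" in rs), None)
    # 3. Otherwise the choice (Goal, Source, or Theme) is lexically determined.
    if actor is None:
        actor = lexical_actor
    out = {}
    for arg, rs in roles.items():
        if arg == actor:
            out[arg] = "Actor"
        elif rs & {"Goal", "Source"}:
            out[arg] = "Direction"  # Direction subsumes Goal and Source
        else:
            out[arg] = "Theme"
    return out

# 'hand to' in (1)-(3): the woman is both Agent and Source.
print(assign_macroroles({
    "babae": {"Agent", "Source"},
    "laruan": {"Theme"},
    "bata": {"Goal"},
}))
# 'reach' in (8)-(9): no Agent; the verb lexically selects the Theme as Actor.
print(assign_macroroles({"sulat": {"Theme"}, "babae": {"Goal"}},
                        lexical_actor="sulat"))
```

In the first call babae comes out as Actor (Agent outranks Source), laruan as Theme, and bata as Direction, matching the glosses in (1)−(3).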

2.1.1. Case-marking and Trigger-marking of arguments

All non-Trigger arguments are case-marked for their semantic role or macrorole. All Trigger arguments are marked as such, and their semantic (macro)role is marked on the verb. The way in which arguments are case-marked or Trigger-marked varies according to the syntactic class of the argument. Nouns are marked prepositionally, while personal and demonstrative pronouns are marked by morphological variation.

1662

VII. Syntactic Sketches

For common nouns, the Trigger-marking preposition is ang, the Actor-marking and Theme-marking preposition is ng (the conventional spelling for phonetic [naŋ]), and the Direction-marking and Location-marking preposition is sa. Personal names have their own distinctive prepositions: the personal name counterparts of ang, ng, and sa are, respectively, si, ni, and kay. Personal and demonstrative pronouns each occur in three forms, which may be called, on the basis of their functional parallelism to phrases introduced by ang, ng, and sa, the ang-form, the ng-form, and the sa-form respectively. Angform and ng-form pronouns are never preceded by prepositions, and neither are sa-form demonstrative pronouns. However, sa-form personal pronouns functioning as Direction or Location arguments are preceded by the preposition sa. (Sa-form personal pronouns also occur as possessive predicates and possessive modifiers. As possessive predicates they are optionally preceded by sa, and as possessive modifiers − cf. section 4 − they occur without a preceding preposition.) The forms of the personal and demonstrative pronouns are summarized in Tables 47.1 and 47.2. (12) Tab. 47.1: Personal Pronouns ang-form Singular 1st person 2nd person 3rd person Plural 1st person exclusive 1st person inclusive 2nd person 3rd person

ng-form

sa-form

ako ka/ikaw siya

ko mo niya

akin iyo kaniya

kami tayo kayo sila

namin natin ninyo nila

amin atin inyo kanila

(The two ang-forms of the 2nd-person-singular pronoun, ka and ikaw, are respectively clitic and nonclitic − cf. the discussion of clitics in section 3.) (13) Tab. 47.2: Demonstrative Pronouns ang-form ‘this’ ‘that (near addressee)’ ‘that (not near addressee)’

ito iyan iyon

ng-form

sa-form

nito niyan niyon/noon

dito diyan doon
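The prepositional marking patterns just described are entirely table-driven, and can be summarized in a small lookup sketch. The code below is an editorial illustration only; the function name `mark` and the shorthand labels "trigger"/"ng"/"sa" for the three marking functions are invented here:

```python
# Illustrative lookup encoding the case-marking prepositions described above.
PREPOSITIONS = {
    # noun type: (Trigger marker, Actor/Theme marker, Direction/Location marker)
    "common": ("ang", "ng", "sa"),
    "personal_name": ("si", "ni", "kay"),
}

def mark(noun, noun_type, function):
    """Return the noun preceded by the preposition for its syntactic function."""
    trigger, ng, sa = PREPOSITIONS[noun_type]
    prep = {"trigger": trigger, "ng": ng, "sa": sa}[function]
    return prep + " " + noun

print(mark("babae", "common", "trigger"))    # -> ang babae, as in (1)
print(mark("bata", "common", "sa"))          # -> sa bata, as in (1)
print(mark("Maria", "personal_name", "ng"))  # -> ni Maria
```

Pronouns would instead be handled by direct paradigm lookup (Tables 47.1−47.2), since they realize the same three functions by suppletive morphology rather than by a preposition.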

2.1.2. Trigger-marking of verbs

As noted in 2.1, the semantic role or macrorole of the Trigger argument is marked by an affix on the verb. There is no one-to-one correspondence between roles and affixes. Some roles may be marked by two or more different affixes, and some affixes may mark two or more different roles. Nonetheless, there is in fact a good deal of predictability in the Trigger-marking system, on the basis of the semantics of the verb.


Consider, for example, two major classes of three-argument verbs, one of which takes the Trigger affixes AT mag-/THT i-/DT -an, the other the Trigger affixes AT -um-/THT -in/DT -an. It turns out, as noted by Ramos (1974), that the occurrence of a particular verb in one of these two classes is predictable from the verbal semantics, with the distinction based on whether the verb is “centrifugal” or “centripetal”. A centrifugal verb is one in which the Actor is both Agent and Source (while the Direction is Goal); with such verbs the mag-/i-/-an set of affixes is used. Thus it is this set that is used with the verbs meaning ‘hand to’ in (1)−(3), above. (Other verbs showing the same pattern include those meaning ‘give to’, ‘take up to’, ‘take into’, etc.) A centripetal verb, on the other hand, is one in which the Actor is both Agent and Goal (while the Direction is Source); with these verbs the -um-/-in/-an set of affixes is used, as in the following examples:

(14) Hihiram ang babae ng laruan sa bata.
     AT.will:borrow T woman TH toy D child
     ‘The woman will borrow a toy from a/the child.’

(15) Hihiramin ng babae ang laruan sa bata.
     will:borrow.THT A woman T toy D child
     ‘A/The woman will borrow the toy from a/the child.’

(16) Hihiraman ng babae ng laruan ang bata.
     will:borrow.DT A woman TH toy T child
     ‘A/The woman will borrow a toy from the child.’

(Other verbs showing the same pattern include those meaning ‘get from’, ‘buy from’, ‘ask for from’, etc.) The semantic roles and macroroles of Trigger arguments that may be marked by a Trigger affix on the verb include: Actor, Theme, Direction, Beneficiary, Location, Instrument, and Cause. The marking of each of these roles is discussed in turn below, with emphasis on such generalizations as can be made concerning the choice of affix in each case.
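The centrifugal/centripetal generalization reported above amounts to a two-row table pairing each verb class with its affix set. The sketch below is an editorial illustration of that pairing (the function name and dictionary layout are invented; the verb entries are the two roots cited in the examples):

```python
# Sketch of the two three-argument verb classes and their Trigger affix sets.
AFFIX_SETS = {
    "centrifugal": {"AT": "mag-", "THT": "i-", "DT": "-an"},   # Actor = Agent+Source
    "centripetal": {"AT": "-um-", "THT": "-in", "DT": "-an"},  # Actor = Agent+Goal
}

VERB_CLASS = {
    "abot": "centrifugal",   # 'hand to', cf. (1)-(3)
    "hiram": "centripetal",  # 'borrow', cf. (14)-(16)
}

def trigger_affix(root, role):
    """Return the affix marking the given Trigger role (AT, THT, DT) on root."""
    return AFFIX_SETS[VERB_CLASS[root]][role]

print(trigger_affix("abot", "THT"))   # -> i-, cf. iaabot in (2)
print(trigger_affix("hiram", "THT"))  # -> -in, cf. hihiramin in (15)
```

Note that the DT affix -an is shared by both classes; only the AT and THT affixes distinguish them.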
(The discussion will be restricted to “simple” Trigger-marking affixes, and will generally exclude those which, in addition to indicating the role of the Trigger, have some further function such as marking the action as intensive, moderative, accidental, etc. In general, these dual-function affixes are predictable from the simple affixes by rules of derivational morphology.) The two most common Actor-Trigger (AT) affixes are mag- and -um-. Mag- (which for some purposes may conveniently be analyzed as consisting of two morphemes − see discussion of Beneficiary-Trigger verbs below, present section) most commonly occurs if the Actor is just an Agent (as in magluto ‘cook’, magsigarilyo ‘smoke’) or is both Agent and Source (as in mag-abot ‘hand to’, mag-pasok ‘take into’), while -um- most commonly occurs if the Actor is both Agent and Goal (as in humiram ‘borrow’), both Agent and Theme (as in pumasok ‘enter’), or not an Agent (as in pumuti ‘turn white’, tumanggap ‘receive’). There are a number of verb roots that form AT verbs with both mag- and -um-, with mag- forming a three-argument verb whose Actor is Agent and Source, and -um- forming a two-argument verb whose Actor is Agent and Theme. Examples include AT mag-akyat ‘take up to’ vs. umakyat ‘climb’, maglabas ‘take outside’ vs. lumabas ‘go outside’ and mag-pasok ‘take into’ vs. pumasok ‘enter’. (The mag- verbs


of this type, as noted above, all also form THT verbs with i- and DT verbs with -an. The -um- verbs of this type all also form DT verbs with -in.) In addition to mag- and -um-, there are a number of other AT affixes that occur idiosyncratically as simple affixes with certain verb roots: e.g., ma-, maka-, and mang-, as in matulog ‘sleep’, makakita ‘see’, and mangisda ‘go fishing’ respectively. Ma- and mang- also have certain more systematic uses as simple AT affixes. For example, ma- occurs regularly in experiential inchoative verbs such as mabingi ‘become deaf’, mauhaw ‘become thirsty’, and mang- (which has mam- as an allomorph in certain cases) occurs regularly in transitory inchoatives such as mamula ‘blush’, mamuti ‘blanch’ (cf. the non-transitory inchoatives involving the same roots in combination with -um-, pumula ‘turn red’, pumuti ‘turn white’). The most common Theme-Trigger (THT) affixes are i- and -in, which are the most common THT counterparts of AT mag- and AT -um- respectively. Thus THT i- commonly occurs, inter alia, when the Actor is both Agent and Source, as in (2), while THT -in commonly occurs, inter alia, when the Actor is both Agent and Goal, as in (15). There are also certain THT verbs formed with -an (which is much more common as a Direction-Trigger affix). These are often verbs whose meaning is such that their Theme undergoes a superficial change: e.g., hugasan ‘wash’, walisan ‘sweep’. Idiosyncratically, ma- occurs as a simple affix in a few THT verbs that correspond to AT maka- verbs: e.g., makita ‘see’. (AT maka- and THT ma- are common as dual-purpose affixes that indicate, in addition to their Trigger-marking function, ability or accidental occurrence. It seems plausible that AT makakita/THT makita ‘see’ were originally ability forms meaning ‘can see’, and that over the course of time these ability forms supplanted the original simple forms.) The usual Direction-Trigger (DT) affix is -an.
The one systematic exception is a set of two-argument DT verbs formed with -in. The roots of these verbs all also form three-argument DT verbs with -an. These are the same roots that were mentioned above as forming two-argument AT verbs with -um- and three-argument AT verbs with mag-. The DT -in verbs correspond to the AT -um- verbs, while the DT -an verbs correspond to the AT mag- verbs (and to THT i- verbs). Examples include DT pasukin ‘enter’ vs. pasukan ‘take into’, labasin ‘go outside’ vs. labasan ‘take outside’, and akyatin ‘climb’ vs. akyatan ‘take up to’. Beneficiary-Trigger (BT) verbs select as Trigger the argument denoting the beneficiary of the event, as in:

(17) Ihihiram ng babae ng laruan sa lalaki ang bata.
     BT.will:borrow A woman TH toy D man T child
     ‘A/The woman will borrow a toy from a/the man for the child.’

The BT affix is i-. If the corresponding AT verb is formed with mag-, then the BT affix is followed by the stem-forming prefix pag-, as in BT ipag-abot ‘hand to for’ (cf. AT mag-abot ‘hand to’). Mag- itself may in fact be analyzed as including this same stem-forming prefix in combination with an AT prefix m-: cf. De Guzman (1980). (Similarly, AT mang-, as in mangisda ‘go fishing’, may be analyzed as consisting of the AT prefix m- plus a stem-forming prefix pang-, which surfaces in such BT forms as ipangisda ‘go fishing for’.)


Instrument-Trigger (IT) verbs select as Trigger the argument denoting the instrument of the event, as in:

(18) Ipangguguhit ng bata ang tsok.
     IT.will:draw A child T chalk
     ‘A/The child will draw with the chalk.’

IT verbs commonly involve the prefix ipang-. Ipang- may be analyzed as consisting of the Trigger-marking prefix i- and a stem-forming prefix pang- that also occurs in instrumental adjectives. (Compare with the verb in [18] the instrumental adjective pang-guhit ‘used in drawing’. The stem-forming prefix pang- that occurs in instrumental verbs and adjectives is distinct from the previously mentioned stem-forming prefix pang-, showing a different pattern of morphophonemic alternations.) Cause-Trigger (CT) verbs select as Trigger the argument designating the cause of the event, as in:

(19) Ikatatakbo nila ang takot.
     CT.will:run A:they T fear
     ‘Fear will make them run.’

CT verbs corresponding to AT -um- (or ma-) verbs are commonly formed with the prefix ika-: compare CT ikatakbo ‘make run’ and AT tumakbo ‘run’ (also CT ikabingi ‘cause to become deaf’ and AT mabingi ‘become deaf’). If the AT verb is formed with mag-, stem-forming pag- follows the CT prefix, and the ka- of the prefix is optional: cf. CT i(ka)pag-away ‘cause to fight’, AT mag-away ‘fight’. Tagalog also has a set of causative verbs formed with the causative-stem-forming prefix pa-, which occurs in addition to a Trigger affix, as in:

(20) Magpapapunta ang babae ng bata sa tindahan.
     AT.cause.will:go T woman TH child D store
     ‘The woman will have a child go to a/the store.’

With such causative verbs there are, in a sense, two Actors, one causing the other to act. However, morphologically and syntactically only the “Causer” is treated as an Actor, while the “Causee” is treated as a Theme or a Direction. Thus, when the Causer is the Trigger, as in (20), the AT affix mag- is used, and a non-Trigger Causer is case-marked as an Actor, as in (21):

(21) Papapuntahin ng babae ang bata sa tindahan.
     cause.will:go.THT A woman T child D store
     ‘A/The woman will have the child go to a/the store.’

(20) and (21) also illustrate the Causee being treated as a Theme: i.e., taking Theme case-marking in (20) and triggering the occurrence of the Theme-Trigger affix -in in (21). The Causee is treated as a Theme, however, only if the verb underlying the causative formation is one that is not already subcategorized for a Theme. (Thus the verb underlying the causatives in (20) and (21), AT pumunta/DT puntahan ‘go’, is not subcategorized for a Theme.) If the verb underlying the causative formation is subcategorized for a Theme, then the Causee takes Direction case-marking, and, though it still triggers the occurrence of the Trigger-marking affix -in, the affix under these circumstances may be considered Direction-Trigger. This pattern is illustrated in (22) and (23). (The underlying verb here is AT humingi/THT hingin/DT hingan ‘ask for’.)

(22) Magpapahingi ang babae sa katulong ng bigas sa kapitbahay.
     AT.cause.will:ask:for T woman D maid TH rice D neighbor
     ‘The woman will have a/the maid ask a/the neighbor for some rice.’

(23) Papahingin ng babae ang katulong ng bigas sa kapitbahay.
     cause.will:ask:for.DT A woman T maid TH rice D neighbor
     ‘A/The woman will have the maid ask a/the neighbor for some rice.’

(Since the ordering of the two Direction phrases in (22) is not fixed − cf. section 5 − this sentence is potentially ambiguous, and could also mean ‘The woman will have a/the neighbor ask a/the maid for some rice.’) As examples (21) and (23) illustrate, in causative verbs the affix -in is used whenever the Causee is Trigger, whether the corresponding non-Trigger is marked as Theme or Direction. When the Theme of the underlying verb is the Trigger of the causative verb, the affix i- is consistently used, and when the Direction of the underlying verb is the Trigger of the causative verb, the affix -an is consistently used: cf. THT ipahingi/DT pahingan ‘cause to ask for’.
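The Trigger-affix choice in pa- causatives described in this section can be summarized as a small decision rule. The sketch below is an editorial illustration only; the function name, the argument labels, and the rule formulation are invented here, not part of the original description:

```python
# Sketch of Trigger-affix choice in pa- causatives, as described above.
def causative_affix(trigger, base_has_theme):
    """Return (affix, gloss label) given which argument serves as Trigger.

    base_has_theme: whether the verb underlying the causative is already
    subcategorized for a Theme; it decides whether the Causee counts as
    Theme (no) or Direction (yes).
    """
    if trigger == "causer":
        return ("mag-", "AT")    # Causer treated as Actor, cf. (20), (22)
    if trigger == "causee":
        # Always -in; its label depends on the base verb, cf. (21) vs. (23).
        return ("-in", "DT" if base_has_theme else "THT")
    if trigger == "theme":
        return ("i-", "THT")     # Theme of the base verb, cf. ipahingi
    if trigger == "direction":
        return ("-an", "DT")     # Direction of the base verb, cf. pahingan
    raise ValueError(trigger)

print(causative_affix("causee", base_has_theme=False))  # cf. papapuntahin in (21)
print(causative_affix("causee", base_has_theme=True))   # cf. papahingin in (23)
```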

2.2. Clauses with nonverbal predicates

Nouns, adjectives, and prepositional phrases may all occur as predicates. Tagalog has no copula, and the structure of a basic clause with a nonverbal predicate is in most cases similar to that of a clause with an intransitive-verb predicate. Thus the predicate is initial, and it is followed by a Trigger expression. Some examples are:

(24) Istudyante ang bata.
student T child
‘The child is a student.’

(25) Matalino ang bata.
intelligent T child
‘The child is intelligent.’

(26) Nasa iskwela ang bata.
at school T child
‘The child is at school.’

As these examples illustrate, nouns (or noun phrases), adjectives (or adjective phrases), and prepositional phrases have similar distributions in Tagalog, and all in fact have distributions similar to verbs. (The distributional similarities are even greater than has

47. Tagalog


thus far been illustrated, since nouns, adjectives, prepositional phrases, and verbs all also occur in essentially identical contexts as arguments or as modifiers − cf. section 4. The distinctions between major parts of speech are thus based not on distribution but on morphology or internal structure. For example, only adjectives occur in certain intensive formations and constructions. And only verbs take Trigger-marking affixes and are inflected for aspect. Verbs are not, however, inflected for person, and inflection for number, though possible with certain Actor-Trigger verbs, is never obligatory.) In addition to predicate-plus-Trigger constructions, there are also certain basic clauses in which a nonverbal predicate is not followed by a Trigger. These are existential clauses: e.g.:

(27) May libro dito.
E/P book here
(E/P = existential/possessive preposition)
‘There’s a book here.’

The same prepositional phrases that occur in existential predicates without a following Trigger also occur as possessive predicates with a following Trigger: compare (27) and (28):

(28) May libro ako.
E/P book T:I
‘I have a book.’

One other type of predicate to be noted is the definite predicate, found in equational constructions such as (29):

(29) Ang Amerikano ang titser.
T American T teacher
‘The American is the teacher.’

As the example illustrates, equational constructions consist of two Trigger expressions. In such constructions it is not altogether clear which of the two Trigger expressions is the predicate (cf. the English translation of [29]). However, it is the first expression that has the predicate-like property of representing newer information. Thus (29) is an appropriate answer to Sino ang titser? ‘Who is the teacher?’, not to Sino ang Amerikano? ‘Who is the American?’

3. Word order

The order of constituents that follow a clause-initial predicate is in general not fixed, and, clitics excepted (see below, present section), arguments and adverbs (such as time expressions) may generally occur in any order. In sentences with verbal predicates, there is often a preference for placing the Actor immediately after the verb, so that an ordering like that shown in (30) is a common one:


(30) Nakita ni Juan si Maria ngayon.
THT.saw A Juan T Maria today
‘Juan saw Maria today.’

But any other ordering of ni Juan, si Maria, and ngayon would result in a grammatical sentence cognitively synonymous with (30) (though perhaps pragmatically distinguished from it). Tagalog does, however, have a set of second-position clitics that have a fixed position in relation to other clause elements, following a clause-initial predicate (or other clause-initial word − see below) and obligatorily preceding all other non-initial constituents. When two or more of these clitics co-occur, their order in relation to one another is also largely fixed. Thus no other ordering of the constituents of (31), which contains the clitics ko, na, and sila, is possible:

(31) Nakita ko na sila.
THT.saw A:I already T:they
‘I have already seen them.’

The Tagalog clitics are the ang-form and ng-form personal pronouns (cf. Table 47.1, section 2.1.1) and the rather heterogeneous set of particles shown in Table 47.3 (some of which in fact have a wider semantic range than the glosses on the chart indicate):

(32)

Tab. 47.3: Clitic Particles
ba (interrogative marker)        na ‘already’
kasi ‘because’                   naman ‘instead’
kaya (speculation marker)        nga ‘really’
daw (reported speech marker)     pa ‘still’
din ‘too’                        pala (surprise marker)
ho (politeness marker)           po (politeness marker)
lamang ‘only’                    sana (optative marker)
man ‘even’                       tuloy ‘as a result’
muna ‘for a while’               yata (uncertainty marker)

When two clitic pronouns co-occur, monosyllabic pronouns (whether ang-form or ng-form) always precede disyllabic pronouns. Two monosyllabic pronouns never co-occur. The only hypothetical possibility of such a co-occurrence would involve the monosyllabic 2nd-person-singular ang-form pronoun ka and the monosyllabic 1st-person-singular ng-form pronoun ko, but in fact a portmanteau disyllabic clitic pronoun kita occurs instead, as in:

(33) Nakita na kita.
THT.saw already T:you&A:I
‘I’ve already seen you.’

Two disyllabic clitic pronouns may occur in either order. Clitic particles always follow monosyllabic clitic pronouns and precede disyllabic clitic pronouns. The clitic particles themselves fall into a number of different classes


that determine their order in relation to one another. This ordering is partly fixed, partly free. For details, see Schachter (1973). Clitics in general follow the clause-initial word, which may be the predicate, as in examples cited thus far, but may also be a negative marker, as in (34), a question word, as in (35), etc. (See section 5 for a discussion of negatives and questions.)

(34) Hindi ko sila nakita ngayon.
N A:I T:they THT.saw today
(N = negative marker)
‘I didn’t see them today.’

(35) Bakit ko sila hindi nakita ngayon?
why A:I T:they N THT.saw today
‘Why didn’t I see them today?’

Some clause-initial words cannot serve as clitic hosts, and others serve optionally. There are also some differences between the potential hosts of clitic pronouns and those of clitic particles. For details, see Schachter and Otanes (1972: 187−193, 433−435). Although predicate-initial constructions are basic in Tagalog, there are several different construction types in which an argument or adverb precedes the predicate. In one such construction, the clause-initial constituent − which may be the Trigger, one of certain types of non-Trigger arguments, or an adverb − is immediately followed by the inversion marker ay. Some examples are:

(36) Ikaw ay nakita ni Juan kahapon.
T:you I THT.saw A Juan yesterday
(I = inversion marker)
‘Juan saw you yesterday.’

(37) Kahapon ay nakita ka ni Juan.
yesterday I THT.saw you A Juan
‘Juan saw you yesterday.’

(Ikaw in [36] and ka in [37] are corresponding nonclitic and clitic forms − cf. Table 47.1, section 2.1.1. As [37] illustrates, constituents that precede ay do not host clitic pronouns, and neither does ay itself.) Ay constructions occur more often in writing and in formal speech than they do in ordinary conversation.
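The clitic-ordering generalizations earlier in this section − monosyllabic pronouns first, then particles, then disyllabic pronouns, with ka + ko replaced by the portmanteau kita − can be sketched procedurally. This is only an illustrative approximation (it keeps input order within each class, so it does not model the partly fixed ordering among the particles themselves):

```python
# Illustrative sketch of Tagalog second-position clitic ordering as
# described above: monosyllabic pronouns < particles < disyllabic
# pronouns, with ka + ko realized as the portmanteau kita.

MONOSYLLABIC_PRONOUNS = {"ka", "ko", "mo"}
PARTICLES = {"ba", "kasi", "kaya", "daw", "din", "ho", "lamang", "man", "muna",
             "na", "naman", "nga", "pa", "pala", "po", "sana", "tuloy", "yata"}

def order_clitics(clitics):
    """Return the clitics in their surface order after the clause-initial host."""
    # ka + ko never co-occur; the portmanteau kita appears instead, cf. (33).
    if "ka" in clitics and "ko" in clitics:
        clitics = [c for c in clitics if c not in ("ka", "ko")] + ["kita"]
    mono = [c for c in clitics if c in MONOSYLLABIC_PRONOUNS]
    particles = [c for c in clitics if c in PARTICLES]
    disyllabic = [c for c in clitics if c not in mono and c not in particles]
    return mono + particles + disyllabic

# Matches (31) Nakita ko na sila and (33) Nakita na kita:
assert order_clitics(["ko", "na", "sila"]) == ["ko", "na", "sila"]
assert order_clitics(["na", "ka", "ko"]) == ["na", "kita"]
```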
Fox (1985) has proposed a discourse function for ay constructions in narratives, suggesting that the referent of the pre-ay constituent is one that had been referred to at some earlier point in the narrative and is being reintroduced in the ay construction itself. Other non-predicate-initial constructions with special discourse functions are the contrastive inversion construction, illustrated in (38), and the emphatic inversion construction, illustrated in (39). In the former, the referent of an initial Trigger or adverb is contrasted, either explicitly or implicitly, with a structurally similar element in another sentence. In the latter, the referent of an initial adverb or directional complement is given roughly the same kind of emphasis that is given by the cleft construction in English:


(38) Ako, magpapahinga. (Ikaw, magtatrabaho.)
T:I AT.will:rest T:you AT.will:work
‘I will rest. (You will work.)’

(39) Kay Maria ko ibibigay ang salapi.
D Maria A:I THT.will:give T money
‘It’s to Maria that I’ll give the money.’

(Note that the emphatic initial constituent in [39] serves as host to the clitic pronoun.) The order of constituents of noun phrases is discussed, together with other aspects of noun-phrase structure, in section 4.

4. Nominals and other arguments

This section first examines the structure of noun phrases, covering pluralization and modification, then goes on to constructions of other types that may function as arguments. The most common expression of pluralization in noun phrases involves the proclitic mga [maŋa]. Explicit pluralization with mga is optional if the context makes the intended plural meaning clear (except that mga is never used if the head noun is modified by a quantifier, as in [45] below). Thus when a predicate nominal and a Trigger are to be interpreted as plurals, the pluralization may be indicated on either or on both:

(40) Mga libro ito.
Pl book this
(Pl = plural marker)

Libro ang mga ito.
book T Pl this

Mga libro ang mga ito.
Pl book T Pl this
‘These are books.’

In addition to pluralizing common nouns and demonstratives as above, mga may also be used to pluralize a personal name, as in:

(41) Nasaan ang mga Santos?
where T Pl Santos
‘Where are the Santoses?’

Personal names may also be pluralized by replacing the usual personal-name markers si, ni, and kay (cf. section 2.1.1) by sina, nina, and kina respectively, in which case the interpretation is ‘X and others’: e.g., sina Santos ‘Santos and the others’. Possessive modifiers other than personal pronouns always follow the nouns they modify. These modifiers have the form of ng phrases or their functional equivalents (i.e., ni plus a personal name or a ng-form demonstrative pronoun), as in:


(42) May sakit ang ina ng bata/ ni Juan/ nito.
E/P sickness T mother Po child Po Juan Po:this
(Po = possessive marker)
‘The child’s/Juan’s/This one’s mother is sick.’

Personal-pronoun possessive modifiers have two forms: either a ng-form pronoun after the head noun, as in (43), or a sa-form pronoun before it, as in (44). If the sa-form is used, the linker -ng [ŋ] comes between it and the head noun.

(43) May sakit ang ina mo.
E/P sickness T mother Po:you
‘Your mother is sick.’

(44) May sakit ang iyong ina.
E/P sickness T Po:you.L mother
(L = linker)
‘Your mother is sick.’

Some nonpossessive modifiers regularly precede the head, others regularly follow it, and still others may either precede or follow the head. In any case, a linker comes between the modifier and the head. (This linker has two phonologically conditioned alternants, -ng and na.) The modifiers that regularly precede the head are the quantifiers: e.g.,

(45) dalawang/ maraming libro
two.L many.L book
‘two/many books’

Those that regularly follow are nouns: e.g.,

(46) gulay na repolyo; larong besbol
vegetable:dish L cabbage game.L baseball
‘vegetable dish made from cabbage; baseball game’

Other modifiers may in general either precede or follow the head, though in some cases there is a pragmatic difference. For example, when a demonstrative modifier precedes a noun, the noun is potentially contrastive. On the other hand, when a noun precedes a demonstrative, the demonstrative is potentially contrastive. Thus itong sombrero (this-L hat) ‘this hat’ can be used to mean ‘this hat’ (as opposed, say, to this coat) while sombrero na ito (hat L this) can be used to mean ‘this hat’ (as opposed to that one). A Tagalog nonpossessive modifier generally has the structure of a clause with a deleted Trigger (see below for exceptions), and the head of a modification construction is generally understood as corresponding to this Trigger.
For example, the head in the examples of (47) corresponds to the Trigger in the examples of (48):

(47) laruang mahal/ nasa mesa/ ibinigay ni Maria kay Juan
toy.L expensive on table THT.gave A Maria D Juan
‘toy that is expensive / is on the table / Maria gave to Juan’


(48) Mahal/ Nasa mesa/ Ibinigay ni Maria kay Juan ang laruan.
expensive on table THT.gave A Maria D Juan T toy
‘The toy is expensive / on the table. / Maria gave the toy to Juan.’

As these examples illustrate, in Tagalog the process of deriving modifiers from clauses − i.e., relativization − typically involves the deletion of a Trigger, and one may propose that in general all and only Triggers may be relativized. The one exception is that certain constituents of Triggers may also be relativized. For example, the possessive modifier of the Trigger in (49) may be relativized, as illustrated in (50):

(49) Mahaba ang tangkay ng bulaklak.
long T stem Po flower
‘The stem of the flower is long.’

(50) bulaklak na mahaba ang tangkay
flower L long T stem
‘flower whose stem is long’

Clauses with deleted Triggers function freely not only as modifiers but also as arguments, taking such typical argument roles as Trigger, Actor, etc. Some examples are:

(51) Ibibigay ko sa iyo ang mahal/ nasa mesa/ ibinigay ni Maria kay Juan.
THT.will:give A:I D you T expensive on table THT.gave A Maria D Juan
‘I will give you the expensive one / the one on the table / the one Maria gave to Juan.’

(52) Ginambala ako ng maingay/ nasa likuran ng silid/ kumakain.
THT.interrupted T:I A noisy in back Po room AT.was:eating
‘The noisy one / The one at the back of the room / The one who was eating interrupted me.’

Such argument structures have exactly the same form as relative modifiers (compare, for example, [47] and [51]), and may in fact be analyzed as headless relative clauses. There are also sentential arguments. Those corresponding to statements are introduced by the linker -ng/na, as in (53); those corresponding to questions are introduced by kung ‘if’, as in (54).

(53) Nakita ni Luz na puno na ang bus.
THT.saw A Luz L full already T bus
‘Luz saw that the bus was already full.’

(54) Nakita ni Luz kung sinu-sino ang nasa bus.
THT.saw A Luz if who:Pl T in bus
‘Luz saw who was in the bus.’

Such sentential arguments have exactly the same internal structure as the corresponding independent sentences, except that embedded questions never contain the interrogative


particle ba that is optional in their non-embedded counterparts. (For the structure of questions, see section 5.)

5. Non-declarative sentences and negation

Tagalog polar questions are distinguished from the corresponding statements by a rising intonation pattern and, optionally, by the interrogative clitic particle ba. This particle also occurs optionally in question-word questions, which, however, have an intonation pattern distinct from that of both polar questions and statements. The questioned constituent in a question-word question is normally clause-initial. If this constituent is a non-Trigger argument or an adverb, any clitics in the clause attach to it, as in:

(55) Saan mo (ba) sila nakita?
where A:you ? T:they THT.saw
(? = interrogative marker)
‘Where did you see them?’

(56) Sa aling kanto (ba) aalis ang bus?
D which.L corner ? AT.will:leave T bus
‘Which corner will the bus leave from?’

If the questioned constituent is a Trigger, the rest of the clause is put into the form of a headless relative clause (cf. section 4) and preceded by the Trigger marker ang. Under these circumstances, the clitic particle ba still follows the questioned constituent, but any clitic Actor pronoun occurs inside the headless relative clause. Some examples are:

(57) Ano (ba) ang gagawin mo bukas?
what ? T will:do.THT A:you tomorrow
‘What will you do tomorrow?’

(58) Sino (ba) ang nasa kusina?
who ? T in kitchen
‘Who is in the kitchen?’

Imperatives of the most frequent type are syntactically similar to statements with verbal predicates and second-person Actors (which may or may not be Triggers), except that the verb is in the infinitive form rather than one of the three finite forms that occur in statements. Some examples are:

(59) Magpaluto ka ng pagkain.
AT.cause.cook T:you TH food
‘Have some food cooked.’

(60) Ibili mo sila ng damit.
BT.buy A:you T:they TH clothes
‘Buy them some clothes.’


There are also hortatives, which differ from imperatives only in having first-person inclusive Actors. Examples are:

(61) Magpaluto tayo ng pagkain.
AT.cause.cook T:we:inclusive TH food
‘Let’s have some food cooked.’

(62) Ibili natin sila ng damit.
BT.buy A:we:inclusive T:they TH clothes
‘Let’s buy them some clothes.’

Negation is commonly expressed by one of three clause-initial markers. In imperative and hortative clauses, the negative marker is huwag; in existential and possessive clauses, it is wala (which occurs in place of an affirmative existential/possessive marker); and in clauses of other types it is hindi. Huwag and wala take a following linker, which is preceded by any clitic pronouns or particles. Examples are:

(63) Huwag mong basahin iyang liham.
N A:you.L THT.read that.L letter
(N = negative marker)
‘Don’t read that letter.’

(64) Huwag tayong umalis.
N T:we:inclusive.L leave
‘Let’s not leave.’

(65) Walang pera dito.
N:E/P.L money here
‘There isn’t any money here.’

(66) Wala ka bang kotse?
N:E/P T:you ?.L car
‘Don’t you have a car?’

(67) Hindi ako susulat sa kanila.
N T:I AT.will:write D they
‘I won’t write to them.’

(68) Hindi ba masaya si Juan?
N ? happy T Juan
‘Isn’t Juan happy?’
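The three-way choice of negative marker just described can be summarized as a small selection function. This is only an illustrative sketch; the clause-type labels are ours:

```python
# Illustrative sketch of the choice of Tagalog negative marker
# described above; the clause-type labels are our own.

def negative_marker(clause_type):
    """huwag in imperatives and hortatives; wala in existential and
    possessive clauses (replacing the affirmative E/P marker);
    hindi in clauses of other types."""
    if clause_type in ("imperative", "hortative"):
        return "huwag"
    if clause_type in ("existential", "possessive"):
        return "wala"
    return "hindi"

assert negative_marker("imperative") == "huwag"
```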

6. Subject properties

It was suggested in section 2.1 that it is inappropriate to identify the Tagalog Trigger with the familiar grammatical category subject because this would entail analyzing the Trigger system as a voice system, in disregard of the differences between the two types of systems. Another reason for not making such an identification is the fact that certain


syntactic properties that are commonly associated with subjects in other languages (cf. Keenan 1976) are in fact associated not with the Trigger but with the Actor in Tagalog. For example, it is the Actor that is the addressee of an imperative sentence (cf. [59]−[60], above). And it is also the Actor that controls the reference of a reflexive (expressed by sarili ‘self’ and a possessive pronoun), as in:

(69) Mag-aalaala sila sa sarili nila.
AT.will:worry T:they D self Po:they
‘They will worry about themselves.’

(70) Aalahanin nila ang sarili nila.
DT.will:worry A:they T self Po:they
‘They will worry about themselves.’

(Since the verb in [69] is AT, sila in this sentence is Actor as well as Trigger. In [70], however, nila is Actor but not Trigger. So the generalization is clearly that it is the Actor − whether or not it is also the Trigger − that controls reflexive reference.) On the other hand, there are certain subject-like properties that are associated with the Trigger. One such property, noted in section 4, is relativizability: only Triggers (and certain constituents of Triggers) may be relativized. Another subject-like property associated with Triggers is the ability to launch so-called floating quantifiers − i.e., quantifiers that are separated from the nouns they quantify. Some relevant examples are:

(71) Sumulat lahat kahapon ang mga bata ng mga liham.
AT.wrote all yesterday T Pl child TH Pl letter
‘All the children wrote letters yesterday.’

(72) Sinulat lahat kahapon ng mga bata ang mga liham.
THT.wrote all yesterday A Pl child T Pl letter
‘The/Some children wrote all the letters yesterday.’

In addition, there is at least one subject property that may be associated with either the Actor or the Trigger. This is the property of deletability under identity in so-called equi-NP constructions, as illustrated by the following examples, adapted from Dell (1981):

(73) Umiwas akong tumingin (ako) kay Lorna.
AT.avoided T:I.L AT.look:at T:I D Lorna
‘I avoided looking at Lorna.’

(74) Umiwas akong tingnan (ko) si Lorna.
AT.avoided T:I.L DT.look:at A:I T Lorna
‘I avoided looking at Lorna.’

(75) Umiwas akong mahuli (ako) ng pulis.
AT.avoided T:I.L THT.catch T:I A policeman
‘I avoided getting caught by a/the policeman.’

In (73) the complement construction contains an AT verb, and the deletable ako is thus both Trigger and Actor. In (74), however, the deletable ko is Actor alone, while in (75)


the deletable ako is Trigger alone. So the Actor and the Trigger in this case share the subject property in question. Schachter (1976) argues on the basis of evidence like the above that Tagalog in fact does not have subjects, and that subject is therefore − contrary to a common assumption among grammarians − not a universally attested grammatical category. (Gil 1984 presents similar arguments with regard to the category direct object, which is equally problematic in Tagalog.) In any event, Tagalog in this respect poses an intriguing problem for syntactic theory.

7. References (selected)

De Guzman, Videa P.
1980 A reanalysis of the structure of Tagalog verbs. Philippine Journal of Linguistics 11: 21−31.
Dell, François C.
1981 On certain sentential complements in Tagalog. Cahiers de linguistique Asie Orientale 10: 19−41.
Fox, Barbara A.
1985 Word-order inversion and discourse continuity in Tagalog. Text 5: 39−54.
Gil, David
1984 On the notion of “direct object” in patient prominent languages. In: Frans Plank (ed.), Objects: Towards a Theory of Grammatical Relations, 87−108. London: Academic Press.
Jackendoff, Ray S.
1972 Semantic Interpretation in Generative Grammar. Cambridge, MA: MIT Press.
Keenan, Edward L.
1976 Towards a universal definition of “subject”. In: Charles N. Li (ed.), Subject and Topic, 247−301. New York: Academic Press.
Ramos, Teresita V.
1974 The Case System of Tagalog Verbs (Pacific Linguistics Series B 27). Canberra: Department of Linguistics, Research School of Pacific Studies, Australian National University.
Schachter, Paul
1973 Constraints on clitic order in Tagalog. In: Andrew B. Gonzales (ed.), Parangal kay Cecilio Lopez (Philippine Journal of Linguistics Special Monograph 4). Quezon City: Linguistic Society of the Philippines.
Schachter, Paul
1976 The subject in Philippine languages: topic, actor, actor-topic, or none of the above. In: Charles N. Li (ed.), Subject and Topic, 491−518. New York: Academic Press.
Schachter, Paul and Fe T. Otanes
1972 Tagalog Reference Grammar. Berkeley: University of California Press.

Paul Schachter, Los Angeles (USA)


48. Warlpiri

1. Introduction
2. Some general features of Warlpiri clause structure
3. Constituent structure and the sentence
4. The auxiliary and agreement
5. Predicators, argument structures and case array
6. Anaphora
7. Complex clauses
8. Sentential adjuncts
9. Operators and logical form
10. Abbreviations, glosses
11. References (selected)

1. Introduction

Warlpiri is an Australian Aboriginal language spoken in Central Australia. It has received a good deal of attention recently because of its ‘nonconfigurational’ structure. The properties of Warlpiri that make it a ‘nonconfigurational language’ (such as free word order, discontinuous constituents and null anaphora) have been examined in various theories. Hale (1981a) and Nash (1986) present accounts of Warlpiri as a W* language in revised extended standard transformational grammar, as does van Riemsdijk (1981). Bouma (1985, 1986) presents an account of nonconfigurationality within categorial grammar. Lexicalist accounts are given in Andrews (1985) and Simpson (1983b, 1991). Government-Binding accounts are given in Hale (1983), Jelinek (1984) and Laughren (1985a, b, c, 1989). Swartz (1988, 1991) looks at some of these properties in terms of a functional approach to grammar, and discourse pragmatics. A general introduction to Warlpiri grammar is given in Nash (1986), and various aspects are discussed in papers in Swartz (1982a), as well as other references in the bibliography. Apart from grammar, other work on Warlpiri includes work on language acquisition by Edith Bavin and Tim Shopen, and on an auxiliary language, Warlpiri sign language, by Adam Kendon. As well, Michael Kashket has prepared a parser for Warlpiri, based on the Government-Binding theory. A large body of material in Warlpiri is available, including oral and written material in machine-readable files prepared by the Massachusetts Institute of Technology Lexicon Project, as well as many books written by Warlpiri people and published by the Yuendumu Bilingual Resources Development Unit, and translations of the Bible prepared with the help of Stephen Swartz. In this sketch, we outline some of the important features of Warlpiri grammar, referring wherever possible to published works on the topics discussed.

2. Some general features of Warlpiri clause structure

2.1. Verbal and nominal sentences

Warlpiri has both verbal and nominal sentences. In the former, illustrated by (1) and (2) below, the predicator is a verb. In the latter, illustrated by (3) and (4), the predicator belongs to the other major part of speech in Warlpiri, i.e. the noun:


(1) Ngaju ka-rna wangka-mi.
I IMPF-1sS speak-NPST
‘I am speaking.’

(2) Ngajulu-rlu ka-rna-ngku nyuntu nya-nyi.
I-ERG IMPF-1sS-2sNS you see-NPST
‘I see you.’

(3) Ngaju(-rna) mata.
I-1sS tired
‘I am tired.’

(4) Ngaju(-rna) ngampurrpa nalija-ku.
I-1sS wanting tea-DAT
‘I want some tea.’

(See Appendix for abbreviations and glosses). In a finite verbal sentence, an auxiliary (AUX) is obligatory − in (1) and (2) above, the base of the auxiliary is the present imperfective element ka. This is construed with the nonpast endings of the verb. In addition, person marking clitics − corresponding to the direct arguments (subject, object) of the verb − are suffixed to the base of the auxiliary. In (1), an intransitive sentence, the auxiliary contains just the first person singular subject clitic (glossed ‘1sS’), corresponding to the subject argument. But in the transitive sentence (2), in addition to the subject clitic immediately following the base, the auxiliary contains a non-subject clitic corresponding to the second person singular object argument (glossed ‘2sNS’). Nominal sentences are stative and have no phonologically overt auxiliary base, though they can optionally have pronominal clitics, as is shown by the parentheses in (3) and (4). This essay will be concerned primarily with the syntax of verbal sentences.

2.2. Free word order

A prominent feature of Warlpiri surface syntax is free word order. A transitive sentence, with overt subject and object, may exhibit any of the theoretically possible orderings of these arguments and the verb:

(5)

a. Ngarrka-jarra ka-pala-jana wawirri-patu-ku wurru-ka-nyi.
man-DUAL IMPF-3dS-3pNS kangaroo-PAUCAL-DAT stalk-move-NPST
‘The (two) men are stalking the (several) kangaroos.’
b. Wawirri-patu-ku ka-pala-jana ngarrka-jarra wurru-ka-nyi.
c. Ngarrka-jarra ka-pala-jana wurru-ka-nyi wawirri-patu-ku.
d. Wawirri-patu-ku ka-pala-jana wurru-ka-nyi ngarrka-jarra.
e. Wurru-ka-nyi ka-pala-jana ngarrka-jarra wawirri-patu-ku.
f. Wurru-ka-nyi ka-pala-jana wawirri-patu-ku ngarrka-jarra.


Only the auxiliary complex is restricted in its placement. It normally appears in second position (the so-called Wackernagel position) if its initial element is not a complementizer. Otherwise, an auxiliary may appear either in second position or in initial position, as in (8) (with the exception that the negative complementizer kula cannot follow the verb). This may be thought of as a phonological constraint − the auxiliary must be contained in the first phonological phrase of the phonological clause, where the phonological clause is an intonational unit marked by a final pause and a distinctive tonal melody. (Preceding the phonological clause there may be a topicalised element, as in [18]). Although the choice of different word order alternatives is conditioned by stylistic and discourse factors, as yet only partially understood, it is also true to an extraordinary degree in Warlpiri that different orderings are considered to be repetitions of one another. When asked to repeat an utterance, speakers depart from the ordering of the original more often than not (cf. Hale 1981a, 1983; Laughren 1984; Swartz 1988, 1991).
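The free ordering in (5), with the auxiliary pinned to second position, can be sketched schematically. This is only an illustrative approximation (it ignores the initial-position option for complementizer-initial auxiliaries noted above):

```python
# Illustrative sketch of Warlpiri free constituent order with the
# auxiliary complex in second (Wackernagel) position, as in (5a-f).

from itertools import permutations

def clause_orders(subject, obj, verb, aux):
    """All orders of the three major constituents, with AUX second."""
    return [(p[0], aux) + p[1:] for p in permutations((subject, obj, verb))]

orders = clause_orders("ngarrka-jarra", "wawirri-patu-ku", "wurru-ka-nyi",
                       "ka-pala-jana")
assert len(orders) == 6  # exactly the six variants in (5a-f)
# (5a): Ngarrka-jarra ka-pala-jana wawirri-patu-ku wurru-ka-nyi
assert ("ngarrka-jarra", "ka-pala-jana",
        "wawirri-patu-ku", "wurru-ka-nyi") in orders
```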

2.3. The ergative case system

The sentences of (1) through (5) serve to illustrate the essential character of the Warlpiri ergative case system. The subject of a canonical intransitive sentence, like (1), or the nominal sentences (3) and (4), is assigned the so-called absolutive (phonologically unmarked) case, as is the object of a canonical transitive sentence, such as (2). The subject of a transitive sentence of this type appears in the so-called ergative case (glossed ERG and realized morphologically by the ending -ngku after disyllabics and -rlu after polysyllabics and certain exceptional disyllabics). Canonical transitives are, however, not the only ones in which a dyadic predicator can appear. Thus, sentence (5) contains a dyadic verb which differs from that of (2) in that its subject is assigned the absolutive case while its object is assigned the dative (glossed DAT and realized morphologically by the ending -ku). Sentence (4) contains a dyadic nominal predicator, with a similar case assignment; the subject is assigned the absolutive case, and the other argument is assigned the dative case.

2.4. Agreement with the auxiliary

These sentences also illustrate another prominent feature of the grammar of Warlpiri tensed clauses − to wit, agreement between the auxiliary complex and the arguments of the verb. As mentioned earlier, this is realized by means of pronominal clitics suffixed to the auxiliary base. These elements appear in the order subject-non-subject, with a partial exception to be noted later, and they embody the person and number categories of the corresponding argument. This agreement is sufficiently “rich” to permit free use of so-called “null anaphora” in Warlpiri tensed clauses − i.e. arguments need not be expressed in syntax as overt noun phrases (cf. Hale 1973a, 1983; Jelinek 1984; Swartz 1991; Simpson 1991). This also holds for nominal sentences containing pronominal clitics. Thus, beside (1−5) above, we also find the following:

(6) a. Wangka-mi ka-rna.
speak-NPST IMPF-1sS
‘I am speaking.’

b. Nya-nyi ka-rna-ngku.
see-NPST IMPF-1sS-2sNS
‘I see you.’

c. Mata-rna.
tired-1sS
‘I am tired.’

d. Ngampurrpa-rna nalija-ku.
wanting-1sS tea-DAT
‘I want some tea.’

e. Wurru-ka-nyi ka-pala-jana.
stalk-move-NPST IMPF-3dS-3pNS
‘The (two) are stalking them (more than two).’

As these sentences show, the clitics correspond to grammatical functions, not to the case categories of arguments. Thus, subject clitics − i.e. -rna (1sS = first person singular subject) and -pala (3dS = third person dual subject) − are construed with absolutive subjects in (1), (3), and (5), but with an ergative subject in (2). And non-subject clitics − i.e. -ngku (2sNS = second person singular non-subject) and -jana (3pNS = third person plural non-subject) − are construed with an absolutive object in (2) and with a dative object in (5). In tensed sentences whose verb is triadic, taking both absolutive and dative “objects”, only the dative exhibits true object properties (cf. Carrier 1976; Simpson and Bresnan 1983; Swartz 1982b) and is represented by a clitic in the auxiliary complex:

(7)

Ngajulu-rlu kapi-rna-ngku karli-patu yi-nyi nyuntu-ku.
I-ERG FUTCOMP-1sS-2sNS boomerang-PAUCAL give-NPST you-DAT
‘I will give you (the) (several) boomerangs.’

The absolutive argument is not represented by a clitic in such cases. Here again, the direct arguments of the verb, including the absolutive, may be non-overt: (8)

Kapi-rna-rla yi-nyi.
FUTCOMP-1sS-3sDAT give-NPST
‘I will give (it) to him.’

As an aside, it should be pointed out that this sentence illustrates an additional detail of Warlpiri grammar as well, namely, the fact that a third person singular dative argument is overtly registered in the auxiliary (here by the clitic -rla). Otherwise, third singular arguments, subject or object, are not overtly represented by clitics in the auxiliary, and we do not represent them in our interlinear glossing. (It is very rare for dative arguments of nominal predicates to be cross-referenced by clitics − hence the lack of a clitic agreeing with nalijaku in [4]).

48. Warlpiri


Sentence (8) also exemplifies the fact that an auxiliary complex with a complementizer, being disyllabic, or longer, may remain in initial position. The alternative, with the auxiliary following the verb, is also possible here, of course.

2.5. Complex clauses

As we show in section 7, Warlpiri complex sentences involving tensed dependent clauses are adjoined in protasis or apodosis − they are not, strictly speaking, embedded. Infinitival dependent clauses, however, can be embedded. In addition to the use of these dependent clause types in forming complex sentences, Warlpiri makes liberal use of the “secondary predication” of nominal expressions. Secondary predicates may attribute a state to some referent, or describe the circumstances of an event, such as reason, or the resulting state. Secondary predication is important in Warlpiri, shouldering a large portion of the expressive burden in the language.

2.6. Syntactic categories

The Warlpiri categorial (or part-of-speech) system recognizes two large classes, nouns and verbs. The verbal category is circumscribed in its notional content, while the nominal category appears, from the viewpoint of Indo-European languages, say, to express a rather wide range of notions, including some which are typically expressed by verbs in Indo-European languages. Apart from verbs of emotion (e.g. yul-kami ‘like, love’) and perception (e.g. nya-nyi ‘see’), Warlpiri verbs typically denote event types in which an entity undergoes, or is caused to undergo, a change (in location, stance, or condition, e.g. yuka-mi ‘enter’, yirrpi-rni ‘insert’, pali-mi ‘die, expire’, etc.), assumes an attitude or stance (in some location or state, e.g. nyina-mi ‘sit, be in location or state’), or produces some effect (in itself or in some other entity, e.g. yula-mi ‘cry’, pi-nyi ‘affect (harmfully)’). (Here and elsewhere, when verbs are cited in isolation, they are given in their nonpast tense forms. This serves not only to provide pronounceable verbal words but also to identify the conjugation to which a cited verb belongs.) (Cf. Hale 1982b, 1983; Hale/Platero 1986b; Hale/Laughren 1986; Nash 1986; Swartz 1982b.)

The notional range of Warlpiri nouns extends from definite and fully referential expressions to ones which, although morphologically nominal, are almost exclusively predicative in use. Thus, the nominal category encompasses deictics (e.g. the pronouns and demonstratives, such as ngaju ‘I’, nyampu ‘this’), the indefinite determiners (and “quantifiers”, e.g. jinta ‘one’), names (e.g. the subsection terms), substantives (e.g. karnta ‘woman’, pirli ‘stone’), attributives (e.g. wiri ‘big’), mental and psychological statives (e.g. ngampurrpa ‘wanting, desirous’), and locatives and directionals (e.g. kulkurru ‘in the middle’). Items toward the end of this listing are more often than not predicative in function, while those toward the beginning are more often argumental (cf. Hale 1983).

While nouns and verbs are the two major lexical categories in Warlpiri, heading the major syntactic phrasal categories (noun phrase and clause), there is a third lexical category which plays a significant role in Warlpiri morpho-syntax − namely, the preverb.


While Warlpiri probably has only some 120 monomorphemic verbs, it boasts an impressive inventory of preverbs, and the category accounts for the bulk of the verbal vocabulary in the language (cf. Hale 1982b; Nash 1982, 1986; Swartz 1982b). The Warlpiri preverb is evidently nominal in origin, and it is often ambiguous in its classification − as noun or preverb − in the synchronic grammar. In its basic position, it precedes the verbal stem, forming with it a metrical unit. However, many are rather loosely associated with their verbal hosts, in that an auxiliary, in seeking Wackernagel’s position, may intervene between the preverb and the verb stem − particularly, but not exclusively, when the auxiliary has the null base. Thus, with the preverb pina ‘back, returning’, both (9a) and (9b) are possible:

(9) a. Pina-ya-nu-0̸-pala.
       back-go-PST-PERF-3dS
       ‘They (two) went back.’
    b. Pina-0̸-pala ya-nu.
       back-PERF-3dS go-PST
       ‘They (two) went back.’

In (9a) and (9b), the string (of preverb, auxiliary, and verb) forms a phonological phrase, with a single main stress − on the initial syllable. Warlpiri words, complex or simple, carry primary stress on the first syllable. The string in (9a) (of preverb, verb, auxiliary) forms a single phonological word which also constitutes a phonological phrase, whereas in (9b) the string (of preverb, auxiliary, verb) consists of two phonological words which form a single phonological phrase. The nature of the morphological binding between the preverb and the verb is such that, with the most “productive” preverbs, at least, the relative order of verb and preverb may be reversed − maintaining the metrical unity of the combination, normally:

(10) Ya-nu-0̸-pala pina.
     go-PST-PERF-3dS back
     ‘They (two) went back.’

As hinted in the preceding paragraph, preverbs are not all equally “productive”. They range from those (like pina ‘back, returning’) which are fully productive, forming predicates whose semantics is entirely compositional, to those (like wurru-, the first two syllables of the verb wurru-ka-nyi ‘stalk’, see [5] above) which, though clearly preverbs morphologically, are unique in their occurrence (according to our records, at least) and utterly obscure in their semantics.

3. Constituent structure and the sentence

3.1. Discontinuous constituents and auxiliary placement

The second-position placement of the auxiliary in Warlpiri offers certain clues to surface constituent structure in the language. If we can assume, as is usually done, that what
precedes a second-position auxiliary (in the normal unmarked execution of a sentence) comprises a single constituent, then an argument expression consisting of a noun and modifier (e.g. determiner, genitive, attributive) forms a single constituent, though it may consist of more than one word:

(11) a. Kurdu yalumpu-rlu ka-jana jiti-rni.
        child that-ERG IMPF-3pNS tease-NPST
        ‘The child is teasing them.’
     b. Maliki ngaju-nyangu-ku ka-rna-rla kuyu yi-nyi.
        dog me-GEN-DAT IMPF-1sS-3sDAT meat give-NPST
        ‘I am giving meat to my dog.’
     c. Maliki wiri-ngki-0̸-ji yarlku-rnu.
        dog big-ERG-PERF-1sNS bite-PAST
        ‘A big dog bit me.’

The evidence afforded by the position of the auxiliary here coincides with another type of evidence. The case inflections in (11), being marked once only, at the right-hand margins of the nominal expressions, indicate that these are single constituents. Where a nominal expression is discontinuous − a possibility in Warlpiri − each of the separate subconstituents must be marked for case:

(12) Maliki-rli-0̸-ji yarlku-rnu wiri-ngki.
     dog-ERG-PERF-1sNS bite-PAST big-ERG
     ‘A big dog bit me.’

The same evidence applies in the case of infinitival expressions, as exemplified by the first two words in (13). The infinitival verb and its object, inflected by a complementizer (appearing just once, at the right margin), may precede the auxiliary:

(13) Marna nga-rninja-kurra ka-rna wawirri nya-nyi.
     grass eat-INF-OBJCOMP IMPF-1sS kangaroo see-NPST
     ‘I see a kangaroo eating grass.’

Both the position of the infinitival expression and its inflection by a single complementizer indicate that it is a single constituent in the surface syntax of (13). Like nominal expressions, infinitivals too may be discontinuous, in which case the complementizer will appear not only on the verb but on its object as well:

(14) Marna-kurra ka-rna wawirri nya-nyi nga-rninja-kurra.
     grass-OBJCOMP IMPF-1sS kangaroo see-NPST eat-INF-OBJCOMP
     ‘I see a kangaroo eating grass.’

Complex locative expressions, consisting typically of a nominal in the locative case in apposition with an inherently locative nominal, may also appear in pre-auxiliary position and therefore, presumably, form syntactic constituents when they so occur (cf. Hale 1981a, 1982a; Laughren 1989; Nash 1986):
(15) Pirli-ngka kankarlumparra ka ya-ni pintapinta.
     mountain-LOC over IMPF go-NPST airplane
     ‘The airplane is going over the mountain.’

3.2. Flat structure

If it is true that what precedes the auxiliary in the above examples is a syntactic constituent, and if it is true that a sequence which cannot precede the auxiliary is not a constituent, but rather more than one, then it is evident that there is no surface constituent corresponding to the verb phrase, as normally understood. The verb may precede the auxiliary − even a complex verb including a preverb may do so, as in (9a) and (6e). But it may not be accompanied there by any of its complements. Thus, (16) is ungrammatical (unless the first word is topicalised):

(16) *Wawirri nya-nyi ka-rna.
     kangaroo see-NPST IMPF-1sS
     ‘I see a kangaroo.’

Here the verb and its object, in the order object-verb, jointly precede the auxiliary. The order verb-object is equally ungrammatical:

(17) *Nya-nyi wawirri ka-rna.
     see-NPST kangaroo IMPF-1sS
     ‘I see a kangaroo.’

The ill-formedness is presumably due to the fact that the verb and its object do not form a single subconstituent within the clause.

There is an apparent exception to the principle that the auxiliary is in second position in surface structure. This is the common Warlpiri topicalization construction in which a constituent, a topic, is displaced to the left of the clause to which it relates, the comment:

(18) Wawirri nyampu, ngajulu-rlu 0̸-rna pantu-rnu.
     kangaroo this, I-ERG PERF-1sS spear-PAST
     ‘This kangaroo, I speared it.’

Sentence (16) would be grammatical with wawirri as a topic, but it is unlikely that this could save (17). The dislocated phrase, or topic, does not count in determining the surface positioning of the auxiliary. Typically, the topic is marked by a characteristic falling-rising intonation on the final two syllables, and it is normally, but not inevitably, separated from the comment by a pause (cf. Laughren 1984; Swartz 1991).
Sentences of the type represented by (18) are not true exceptions to the canonical positioning of the auxiliary, since the relevant domain to which the second-position principle applies is just the comment portion of the topicalization construction.
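The placement generalizations described above can be restated procedurally. The following Python toy is our own illustrative sketch (not a formalism from this chapter): the auxiliary canonically follows exactly one constituent of the comment, an auxiliary that is disyllabic or longer (i.e. one containing a COMP element, as in [8]) may also stand clause-initially, and dislocated topics are ignored in the count. The function name and its parameters are our own assumptions.

```python
# Toy check of Warlpiri second-position auxiliary placement (our own sketch).
# constituents: the clause as a flat list of constituent labels, one of which
# is the auxiliary; topic_count: how many left-dislocated topics precede the
# comment and are therefore ignored when counting positions.

def aux_position_ok(constituents, aux_index, aux_syllables, topic_count=0):
    """Return True if the auxiliary occupies a licit surface position."""
    position = aux_index - topic_count           # position within the comment
    if position == 1:                            # canonical second position
        return True
    # clause-initial auxiliary is licit only if disyllabic or longer
    return position == 0 and aux_syllables >= 2

# (8) Kapi-rna-rla yi-nyi: a polysyllabic auxiliary in initial position
assert aux_position_ok(["AUX", "V"], aux_index=0, aux_syllables=4)
# (16) *Wawirri nya-nyi ka-rna: two constituents precede the auxiliary
assert not aux_position_ok(["N", "V", "AUX"], aux_index=2, aux_syllables=2)
```

The sketch deliberately treats the clause as a flat list, in keeping with the flat-structure analysis argued for in this section.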


The evidence just adduced for the constituent structure of Warlpiri sentences in their surface form indicates that this structure is “flat”, in the sense that the verb does not form a separate constituent with any of its arguments. This is consistent not only with the facts concerning the placement of the auxiliary but also with the free word order, exemplified in (5). This freedom is quite evidently not at all constrained by any hierarchical organization uniting the verb with particular ones of its arguments. This is not to say, however, that there is no phrasal structure at all within the clause, since nominal, locative, and infinitival expressions do form constituents. The free constituent order observed within finite clauses is also observed in nominal and locative expressions; their component words may be freely ordered, although there are preferred orders (cf. Swartz 1988, 1991). However, in nominal expressions a case inflection applying to the whole must appear on the final word. The internal word order of infinitivals is more rigid − the object must directly precede the verb (unless the object is itself marked with the complementizer). This probably follows from the fact that the complementizer must appear on the infinitival verb and must be final in the phrase.

4. The auxiliary and agreement

The auxiliary has a flat template-like structure, comprising morphemes which express mood, aspect and tense (which we call “verbal morphemes”), and morphemes which express agreement (which we call “argumental morphemes”). The full auxiliary consists of an auxiliary base, comprising an obligatory aspect (ASP) element optionally preceded by a sentential complementizer (COMP) element; the auxiliary base is followed by obligatory pronominal agreement (AGR) clitics. The auxiliary is obligatory in finite verbal clauses (although it may be phonologically null if there is no COMP and ASP is perfective). It is restricted in its occurrence in nominal-headed clauses: ASP is neutralised, and only the COMP elements kula (NEGCOMP) and kuja (FACTCOMP) can occur, while the AGR pronominal clitics are optional. The auxiliary cannot appear in nonfinite clauses.
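The template just described − an optional COMP, an obligatory ASP slot (null in the perfective), then AGR clitics − can be made concrete with a small glossing routine. This is our own illustrative sketch, and the morpheme inventories below are deliberately limited to forms cited in this chapter; the function gloss_aux is an assumption, not part of any published analysis.

```python
# Toy slot-based glosser for the Warlpiri auxiliary template (our own sketch):
# [COMP] ASP AGR*  — inventories restricted to morphemes cited in this chapter.

COMP = {"kapi": "FUTCOMP", "kula": "NEGCOMP", "kuja": "FACTCOMP", "kala": "PASTCOMP"}
ASP  = {"ka": "IMPF.NPST", "lpa": "IMPF.PST", "": "PERF"}   # null = perfective
AGR  = {"rna": "1sS", "npa": "2sS", "pala": "3dS", "lu": "3pS",
        "ngku": "2sNS", "jana": "3pNS", "rla": "3sDAT", "nyanu": "REFL"}

def gloss_aux(morphemes):
    """Gloss a pre-segmented auxiliary, e.g. ["ka", "rna", "ngku"],
    enforcing the COMP < ASP < AGR slot order of the template."""
    glosses, slot = [], 0                  # slot 0 = COMP, 1 = ASP, 2 = AGR
    for m in morphemes:
        if slot == 0 and m in COMP:
            glosses.append(COMP[m]); slot = 1; continue
        if slot <= 1 and m in ASP:
            glosses.append(ASP[m]); slot = 2; continue
        if m in AGR:
            glosses.append(AGR[m]); slot = 2; continue
        raise ValueError(f"unexpected morpheme {m!r}")
    return glosses

print(gloss_aux(["ka", "rna", "ngku"]))   # ['IMPF.NPST', '1sS', '2sNS']
```

Feeding it kapi-rna-ngku from (7) yields ['FUTCOMP', '1sS', '2sNS']; a perfective auxiliary simply lacks an overt ASP morpheme, matching the null-base description above.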

4.1. Verbal auxiliary morphemes

The sentential categories of tense, mood and aspect are realized discontinuously in the surface string of a Warlpiri sentence. The meanings associated with the verbal inflectional suffixes correspond to those traditionally included in the tense and mood categories. For any given clause, the choice of verbal auxiliary morphemes and the choice of tense/mood suffix on the verb are interdependent. Thus, in a Warlpiri finite clause, tense-mood-aspectual information is encoded through the interaction of co-occurring members of each category.

There is a perfective/imperfective contrast expressed in Warlpiri by means of the T/ASP AUX morpheme. The perfective aspect morpheme is phonologically null, while the imperfective aspectual morpheme is realized as -lpa in conjunction with the past and irrealis verb forms, and as ka with the nonpast verb forms. The perfective-imperfective contrast with -lpa is shown in (19).


(19) a. Wati-0̸-li ya-nu.
        man-PERF-3pS go-PAST
        ‘The men left.’
     b. Wati-lpa-lu ya-nu.
        man-IMPF-3pS go-PAST
        ‘The men were leaving.’

The categories of sentential complementizers, which we refer to as COMP, are listed in the Appendix. Some of these will be discussed further in section 8. There are complex compatibility constraints on the coexistence of COMP and T/ASP morphemes and verbal tense/mood suffixes (cf. Hale 1973a; Laughren 1982; Nash 1986).

4.2. Agreement: person-number clitics

The third category of morphemes which constitute the auxiliary are the agreement (AGR) pronominal clitics. These contain morphemes belonging to two subcategories: PERSON and NUMBER. These categories are indirectly related to the main predicator (verbal or nominal) of a finite clause, in the sense that they manifest features which identify the direct arguments of the predicator, in addition to certain more peripheral arguments. The person-number auxiliary clitics and their interaction with the transitivity features of the clause are of considerable complexity (cf. Hale 1973a; Laughren 1977; Nash 1986; Hale/Laughren 1986; Simpson/Withgott 1986; Simpson 1991; Swartz 1982b, 1991). There is one series of subject clitics, another of non-subject clitics.

Warlpiri person-number clitics can be classified according to whether, in their surface form, they are composed of a portmanteau person-number morpheme, or whether they consist of two distinct clitics − one being the person morpheme, the other, the number morpheme. The clitics belonging to the latter set may appear discontinuously in the auxiliary template. The presence of a subject clitic is obligatory in the auxiliary except under some very specific conditions, which we will leave aside here. The presence or absence of a non-subject clitic depends on a number of complex factors (cf. Hale 1973a; Laughren 1977). Which argument agrees with a subject clitic, and which arguments agree with non-subject clitics, will be discussed in section 5.

Normally, both subject and non-subject clitics agree with the arguments of the verb in both person and number. However, there are certain exceptions. These include uses of plural for singular referents and of third person for second person referents, in “special language” (auxiliary languages used in addressing or referring to certain sets of kin relations).
Another exception is found in a comitative construction in which nominals with dual or plural marking appear with subject clitics of all persons:

(20) Jakamarra-jarra-rlujarra ya-nu.
     Jakamarra-DUAL-1deS go-PAST
     ‘I went with Jakamarra.’

A third exception, observed by Stephen Swartz in Lajamanu Warlpiri, is the appearance of second person clitics in sentences with presentational verbs and non-second person
subjects. He says that, typically, this combination is used in narratives to announce a surprise development. The number portion of the clitic agrees with the actual subject of the sentence:

(21) Kala-npala nyina-nya marliyarra-jarra.
     but-2dS sit-PREST man-DUAL
     ‘There you go, the two men are sitting there!’

5. Predicators, argument structures, and case arrays

Predicates in Warlpiri may be headed by either of the two morphologically distinct categories, verbs or nominals (cf. section 2.1). In either case this head, or predicator, denotes an action, process, or state involving one or more participants, commonly referred to as its arguments. The lexical entry of a given predicator defines its argument structure which, in turn, determines the initial syntactic structure of core arguments which it projects. While the surface syntactic structure of a Warlpiri sentence is evidently “flat”, in the sense that there is no evidence for a verb phrase at that level of syntactic representation (cf. 3.2), it is nonetheless clear that the syntactic organization of a predicator’s arguments, as defined by its lexical argument structure, exhibits an asymmetry distinguishing its subject from its complements, if any. This asymmetry is revealed in the Warlpiri systems of anaphora, or binding. We first look at verbs with ergative subjects, and then at verbs with absolutive arguments.

5.1. Verbs with ergative subjects

Consider, for example, the verb of (22) below:

(22) Ngarrka-jarra-rlu ka-pala-jana maliki-patu paka-rni.
     man-DUAL-ERG IMPF-3dS-3pNS dog-PAUCAL strike-NPST
     ‘The (two) men are striking (killing) the dogs.’

This verb takes two core arguments, one assigned ergative case (ERG), the other the phonologically null absolutive case (ABS). The argument with ergative case agrees with the subject clitic -pala, while the argument with absolutive case agrees with the non-subject clitic -jana. Where one of these arguments anaphorically binds the other, the binder is the ergative argument, and the anaphor is the absolutive argument (represented only by the anaphoric clitic -nyanu ‘reflexive-reciprocal’ (REFL), occupying non-subject position within the auxiliary), as in:

(23) Ngarrka-jarra-rlu ka-pala-nyanu paka-rni.
     man-DUAL-ERG IMPF-3dS-REFL strike-NPST
     ‘The (two) men are striking themselves / each other.’


This direction of binding, holding for all verbs which exhibit the ergative-absolutive case array, is never reversed. Hence an ergative argument can only be anaphorically bound “from outside”; that is to say, an ergative argument of a nonfinite clause can only be bound by an argument in a matrix clause in a “control structure”, as in (13). In that sentence, the ergative argument, the eater, of the non-finite verb nga-rni-nja ‘eat’ is bound − i.e. “controlled” − by the object wawirri of the matrix verb nya-nyi.

Thus, for ergative-absolutive verbs, the ergative argument is the “prominent” argument. It is the ergative which may bind another argument clause-internally in reflexive-reciprocal constructions; and it is the ergative which may be bound from outside, by a matrix argument. Neither of these properties holds of the absolutive in these verbs. Since these are both properties indicating subjecthood, the ergative, therefore, is the subject. The facts of anaphoric binding and control, of course, are in accord with the facts of agreement. It is the ergative argument that agrees with the subject clitics, as can be seen in (23) and (22), as well as in other illustrative sentences used here.

Not only for ergative-absolutive verbs, but for all verbs whose case arrays include an ergative, the ergative is the subject. (24) exemplifies the prototypical ergative-absolutive-dative verb yi-nyi ‘give’. The ergative argument is represented by the ergative-marked noun karnta-jarra-rlu and agrees with the subject clitic -pala. The absolutive argument is represented by the noun miyi (not cross-referenced by a clitic), while the dative argument is represented by the non-subject REFL clitic -nyanu. The agreement structure indicates that the ergative argument is the subject, as does the binding relation, according to which the ergative subject binds the dative object:

(24) Karnta-jarra-rlu ka-pala-nyanu miyi yi-nyi.
     woman-DUAL-ERG IMPF-3dS-REFL food give-NPST
     ‘The (two) women are giving each other food.’

And, correspondingly, the verb in (25) exemplifies the ergative-dative array. The ergative argument agrees with the subject clitic, and the dative argument agrees with the non-subject clitic.

(25) a. Kurdu-patu-rlu ka-lu-ngalpa warri-rni ngalipa-ku.
        child-PAUCAL-ERG IMPF-3pS-1piNS seek-NPST us-DAT
        ‘The children are looking for us (plural inclusive).’
     b. Kurdu-patu-rlu ka-lu-nyanu warri-rni.
        child-PAUCAL-ERG IMPF-3pS-REFL seek-NPST
        ‘The children are looking for each other.’

Although it is not a prominent feature of Warlpiri, there exists a class of verbs whose case array consists solely of an ergative argument. Typically, these are morphologically complex verbs containing preverbs of clearly nominal origin. The single argument of such verbs, not surprisingly, exhibits the properties of a subject, as illustrated by the agreement pattern of (26) and by the binding (control) relation of (27):

(26) Kurdu-jarra-rlu ka-pala ngungkurru-pangi-rni.
     child-DUAL-ERG IMPF-3dS snoring-dig-NPST
     ‘The (two) children are snoring.’


(27) Kurdu-jarra ka-rna-palangu purda-nya-nyi ngungkurru-pangi-rninja-kurra.
     child-DUAL IMPF-1sS-3dNS audio-see-NPST snoring-dig-INF-OBLCOMP
     ‘I hear the (two) children snoring.’

In (27), of course, the subject of the nonfinite verb ngungkurru-pangi-rninja may not appear − it is bound, or controlled, and so must be non-overt. It is without question an ergative argument, however, as is clear from the case marking which appears on the overt subject kurdu-jarra-rlu in the corresponding finite clause (26).

5.2. Verbs with absolutive subjects

Not all verbs have an ergative argument in their case array, as is evident from examples in section 2. Where there is no ergative, the absolutive assumes the subject function. This fact is illustrated in (1) above, where the verb wangka-mi ‘speak’ appears in its monadic use, representing the simple absolutive array. The absolutive-dative array is exemplified by (5) above, and also in (28), where wangka-mi appears in a dyadic use. In both (5) and (28), the absolutive argument agrees with the subject clitic and the dative argument agrees with the non-subject clitic:

(28) Ngaju ka-rna-ngku wangka-mi nyuntu-ku.
     I IMPF-1sS-2sNS speak-NPST you-DAT
     ‘I am speaking to you.’

The dative argument may be bound by the absolutive subject in a reflexive/reciprocal construction:

(29) Wangka-mi ka-lu-nyanu wati-patu.
     speak-NPST IMPF-3pS-REFL man-PAUCAL
     ‘The men are talking to each other.’

Similarly, the absolutive subject of a nonfinite verb may be controlled by an argument of the matrix verb, as illustrated in (30), in which the dative argument of the higher verb controls the absolutive subject of nyina-nja-kurra.

(30) Wangka-mi ka-rna-rla wati-ki nyina-nja-kurra(-ku).
     speak-NPST IMPF-1sS-3sDAT man-DAT sit-INF-OBJCOMP-DAT
     ‘I am talking to the man while (he’s) sitting.’

The sentences so far exhaustively exemplify the assignments of grammatical case categories to the subject and object functions. The subject is assigned the ergative case, if there is one (i.e. if the verb has an ergative in its case array); otherwise, the subject is assigned the absolutive. The object is assigned the dative, if there is one, otherwise the absolutive. In a triadic case array, as in (24), therefore, the absolutive is assigned to an argument which bears neither the subject nor the object function (cf. Swartz 1982b).
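The case-to-function mapping just summarized is simple enough to state as a short procedure. The following Python function is our own compact restatement of it (not the authors’ formalism): the subject takes ERG if the verb’s case array contains one, otherwise ABS; the object takes DAT if present, otherwise ABS; in a triadic ERG-ABS-DAT array the ABS argument is therefore left with neither function.

```python
# Our own sketch of the Warlpiri grammatical-function assignment rule:
# subject = ERG if present, else ABS; object = DAT if present, else ABS;
# any case left over (the ABS of a triadic array) is neither.

def assign_functions(case_array):
    """case_array: set of cases in the verb's array, e.g. {"ERG","ABS","DAT"}."""
    f = {"subject": "ERG" if "ERG" in case_array else "ABS"}
    non_subject = case_array - {f["subject"]}
    if "DAT" in non_subject:
        f["object"] = "DAT"
    elif "ABS" in non_subject:
        f["object"] = "ABS"
    extra = non_subject - set(f.values())
    if extra:                         # e.g. the ABS of yi-nyi 'give' in (24)
        f["other"] = extra.pop()
    return f

# paka-rni 'strike' (22): ERG subject, ABS object
print(assign_functions({"ERG", "ABS"}))   # {'subject': 'ERG', 'object': 'ABS'}
```

Applied to the triadic array of yi-nyi ‘give’, the function returns DAT as object and leaves ABS functionless, matching the agreement facts in (7) and (24).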


That the dative argument is the grammatical object in such structures can be shown, in part. Thus, in (7) above, the dative argument, not the absolutive, is cross-referenced by non-subject agreement morphology in the auxiliary. The objecthood of the dative argument is shown also by control of infinitivals in -kurra ‘OBJCOMP’ clauses (cf. Carrier 1976; Hale 1982a, 1983; Simpson and Bresnan 1983). The subjects of these infinitivals are controlled by a matrix object. In (30) and (31) it is the dative argument which controls the infinitival subject.

(31) Kurdu-jarra-ku ka-rna-palangu miyi yi-nyi nyina-nja-kurra(-ku).
     child-DUAL-DAT IMPF-1sS-3dNS food give-NPST sit-INF-OBJCOMP-DAT
     ‘I am giving food to the (two) children (while they are) sitting.’

5.3. Alternations in semantic role and case

Some verb classes show an alternation between a dative object and an absolutive object involving a change of meaning. These include perception verbs, and verbs of impact and concussion (cf. Guerssel et al. 1985; Hale 1982b; Hale and Laughren 1986; Laughren 1988b; Swartz 1982b; Simpson 1991). That both the dative and the absolutive arguments act as objects is shown by the fact that they control the infinitival subject of an OBJCOMP clause:

(32) a. Janganpa-rna paka-rnu ngajulu-rlu nguna-nja-kurra.
        possum-1sS chop-PAST I-ERG lie-INF-OBJCOMP
        ‘I chopped out a possum while it was sleeping − and I got the possum.’
     b. Janganpa-ku-rna-rla paka-rnu ngajulu-rlu nguna-nja-kurra(-ku).
        possum-DAT-1sS-3sDAT chop-PAST I-ERG lie-INF-OBJCOMP-DAT
        ‘I chopped for a possum while it was sleeping − and I didn’t necessarily get the possum.’

A special case of the alternation with verbs of impact and concussion is the “conative” or “attempted action” alternation, in which a second dative clitic is used in the auxiliary in addition to the clitic that cross-references the dative object:

(33) a. Ngarrka-ngku ka marlu luwa-rni.
        man-ERG IMPF kangaroo shoot-NPST
        ‘The man is shooting the kangaroo.’
     b. Ngarrka-ngku ka-rla-jinta marlu-ku luwa-rni.
        man-ERG IMPF-3sDAT-3sDAT kangaroo-DAT shoot-NPST
        ‘The man is shooting at the kangaroo.’

Again, both the dative and the absolutive argument can control the infinitival subject of OBJCOMP clauses.


5.4. Adjunct datives

In preceding sections, we have seen the suffix -ku used as a dative marking a core argument of the verb. We turn now to other uses of this suffix in which its status as a dative object argument marker is more questionable. (34) illustrates some such uses:

(34) a. Yapa ka-lu muku-ya-ni miyi-ku.
        person IMPF-3pS all-go-NPST food-PURP
        ‘The people are all going for food.’
     b. … Purra-nja-rla (ngaka) kala rdipi-ja wirrkardu-ku-warnu ngurra-ku-warnu.
        cook-INF-PRECCOMP later PASTCOMP set.off-PAST several-FREQ-ASSOC camp-FREQ-ASSOC
        ‘Having cooked it, he set off again for several days.’
     c. Ngarrka-ngku ka-rla kurdu-ku karli jarnti-rni.
        man-ERG IMPF-3sDAT child-DAT boomerang trim-NPST
        ‘The man is trimming the boomerang on account of the child.’

In neither (34a) nor (34b) does the adjunct agree with a non-subject clitic in the auxiliary. In (34a), the nominal marked with this suffix indicates the purpose of the action, and is glossed PURP, while (34b) shows the -ku suffix indicating the frequency of an action. (This use of -ku, glossed as FREQ (Frequentative), differs from the dative, and from the bare purposive uses, in that it may be followed by further case-marking.) In (34c) the suffix -ku indicates for whose benefit the action is performed. The person and number of this participant agree with the non-subject clitic in the auxiliary. This is a dative case use of the suffix -ku − to denote a participant which, in some unspecified way, is affected by the action or event denoted by the verb, or provides the cause or purpose of that action or event. The dative is not a direct argument of the verb, but rather an adjoined argument, the so-called ‘adjunct dative’. Agreement with a non-subject clitic in the auxiliary may be interpreted as foregrounding the participant concerned.
The adjunct dative may be expressed as a reflexive bound by the subject, as in (35), in which the dative benefactive ‘themselves’ is expressed by a reflexive pronominal clitic -nyanu bound to the subject wati-patu-rlu and the subject pronominal clitic -lu:

(35) Wati_i-patu-rlu-lpa-lu_i-nyanu_i warlu yarrpu-rnu.
     man-PAUCAL-ERG-IMPF-3pS-REFL fire light-PAST
     ‘The men were lighting themselves a fire.’

Intransitive verbs (and sometimes even transitive verbs) may go a step further, and permit the dative participant to act as the object. A verb such as ngarlarri-mi ‘laugh’ is essentially an intransitive verb. However, in (36) it appears with a dative participant, which agrees with the dative non-subject clitic in the auxiliary. That this dative is an object is shown by the fact that it can control an OBJCOMP -kurra clause:

(36) Kurdu ka-rla yinka ngarlarri-mi wangka-nja-kurra-ku.
     child IMPF-3sDAT laughing smile-NPST talk-INF-OBJCOMP-DAT
     ‘The child is laughing at the one talking.’


Agreement with the auxiliary allows the argument marked with -ku to be foregrounded, and raised to the status of object. However, the dative participant may act as the controller of a -rlarni OBVCOMP or “pure obviative complementizer” clause, as in (37), even though such clauses cannot be controlled by objects (absolutive or dative) in the matrix clause.

(37) Kurdu-ngku ka-(rla) jarntu warru-wajilipi-nyi karnta-ku, miyi purra-nja-rlarni(-ki).
     child-ERG IMPF-3sDAT dog around.chase-NPST woman-DAT, food cook-INF-OBVCOMP-DAT
     ‘The child is chasing the woman’s dog around while she is cooking food.’

Control of the OBVCOMP is independent of whether or not the participant marked with -ku is foregrounded and agrees with the auxiliary, as the optionality of the third person singular dative non-subject clitic -rla in (37) shows.

Thus there are three main uses of the suffix -ku: as adjuncts with no auxiliary agreement (the FREQ or PURP uses), as adjunct datives with auxiliary agreement and control of OBVCOMP clauses, and as objects with auxiliary agreement and control of OBJCOMP clauses.

The relation of the dative participant to the action or event may be made more specific by combining the verb with one of a set of preverbs that add dative participants: jirrnganja, yirrkirnpa ‘with (dependent)’, jurnta ‘away from, removal from’, kaji, ngayi ‘for, on behalf of’, marlaja, marlangka ‘because of, associated with’, piki(piki) ‘in danger of’. The meanings given to the preverbs further specify the relation between the dative participant and the action or event denoted by the verb. Thus, in (38) the action is “cutting the boomerang”, the dative participant is “the little child”, and the preverb kaji indicates that the relation between the child and the boomerang-cutting is benefactive:

(38) Ngarrka-ngku ka-rla kurdu wita-ku karli kaji-jarnti-rni.
     man-ERG IMPF-3sDAT child small-DAT boomerang.ABS BEN-trim-NPST
     ‘The man is trimming the boomerang for the little child.’

See Craig and Hale (1988), Hale (1982b), Nash (1982, 1986b), Swartz (1982b), Simpson (1991).

6. Anaphora

In Warlpiri, the pronominal features of person and number are expressed by two distinct syntactic categories: bound pronominal clitics realized in the auxiliary, and optional free pronouns. The reciprocal-reflexive non-subject clitic -nyanu is always coreferent with the subject of the finite verb in the same clause as itself. The anaphor can never be bound by an element bearing a non-subject grammatical function.


6.1. Disjoint pronominal reference and pronominal coreference While a free pronoun occurs in the same syntactic position as a common or proper noun as the argument of a verb, bearing the same set of grammatical functions and casemarking, its distribution is restricted by relations of possible or impossible coreference between a pronoun and another argument in the same clause. A free pronoun such as the third person pronoun nyanungu cannot be coreferent with the reflexive anaphor -nyanu and hence with the subject of a sentence in which -nyanu expresses the object of a verb that takes an ergative subject. This constraint is demonstrated in (39)−(41): (39) a. Jakamarrai-rlu ka-nyanui (*nyanungu*il*j) paka-rni. Jakamarra-ERG IMPF-REFL PRONOUN hit-NPST ‘Jakamarra is hitting himself (*himiij).’ b. Jakamarrai-rlu ka nyanungu*ilj paka-rni. Jakamarra-ERG IMPF PRON hit-NPST ‘Jakamarra is hitting him.’ c. Jakamarrai-rlu ka-nyanui (*nyanungu*il*j)-ku kuyu yi-nyi. Jakamarra-ERG IMPF-REFL PRON-DAT meat give-NPST ‘Jakamarra is giving himself (*himilj) meat.’ d. Watii-patu-rlu ka-lui-nyanui warri-rni (*nyanungu*il*j)-rra-ku. man-PAUCAL-ERG IMPF-3pS-REFL search-NPST PRON-PL-DAT ‘*The men are looking for each other (*themilj).’ The constraint which prevents the pronoun from bearing the absolutive or dative object function in (37)−(39) does not apply when it bears a dative adjunct function and is construed with the anaphor -nyanu. This is shown in (40): (40) a. Jakamarrai-rlu ka-nyanui-rla warri-rni kuyu-ku nyanungui-ku. Jakamarra-ERG IMPF-REFL-3sDAT search-NPST meat-DAT PRON-DAT ‘Jakamarra is looking for his meat.’ or: ‘Jakamarra is looking for meat for himself.’ b. Jakamarrai-rlu ka-nyanuil*j warlu yarrpi-rni nyanunguil*j-ku. 
        Jakamarra-ERG IMPF-REFL fire light-NPST PRON-DAT
        ‘Jakamarra is lighting himself a fire.’

A dative pronoun construed with the anaphor -nyanu bearing the object grammatical function may be realized in a sentence containing a verb which takes an absolutive subject and a dative object, as shown in (41):

(41) a. Nyanungu_{i/*j}-ku ka-nyanu_{i/*j} Jakamarra_i yulka-mi/wangka-mi.
        PRON-DAT IMPF-REFL Jakamarra love-NPST/talk-NPST
        ‘Jakamarra loves himself / talks to himself.’


     b. Jakamarra_i ka-nyanu_i yulka-mi/wangka-mi.
        Jakamarra IMPF-REFL love-NPST/talk-NPST
        ‘Jakamarra loves himself / talks to himself.’
     c. Jakamarra_i ka-rla_{*i/j} nyanungu_{*i/j/*k}-ku yulka-mi.
        Jakamarra IMPF-3sDAT PRON-DAT love-NPST
        ‘Jakamarra loves him/her.’

See Farmer, Hale and Tsujimura (1986), Laughren (1985b, 1988a), Simpson (1991).

No non-pronominal nominal expression may be construed directly with the anaphor -nyanu, which would imply binding of that nominal by the verb’s subject. However, indirect construal is possible: a nominal which is predicated of the argument or adjunct expressed by the anaphor -nyanu may be overtly expressed, as shown in (42). In (a) the absolutive nominal jurru ‘head’ is predicated of the object expressed by -nyanu (see Hale 1981b; Laughren 1992 for a detailed study of part-whole syntax in Warlpiri). In (b) the absolutive nominal murrumurru ‘pain’ is predicated of the argument associated with the anaphor -nyanu. In (c) the absolutive nominal wati ‘man’ is predicated of the non-subject argument associated with -nyanu.

(42) a. Wati_i-ngki-nyanu_{i/*j} paka-rnu jurru.
        man-ERG-REFL hit-PAST head
        ‘The man hit himself (on) the head.’
     b. Karnta_i-ngku-nyanu_{i/*j} purdanya-ngu murrumurru.
        woman-ERG-REFL feel-PAST pain
        ‘The woman felt herself (to be) in pain.’
     c. Wati-lki-li-nyanu nya-ngu kurdu-warnu-rlu.
        man-CS-3pS-REFL see-PAST child-ASSOC-ERG
        ‘The young people saw each other (to be) men then.’

7. Complex clauses

7.1. Sentential complements

In Warlpiri, very few verbs select events as arguments, and thus few allow sentential complements. The main semantic classes appear to be verbs of ordering, telling, enlisting, and verbs meaning “fail” or “cause to fail”, as in (43). Selected sentential complements must be non-finite, and may generally be replaced by a nominal bearing the same complementizer, as in (c).

(43) a. Walya kiji-rninja-ku ka-rna kapakapa-jarri-mi (ngaju).
        earth throw-INF-PURP IMPF-1sS fail-INCH-NPST
        ‘I am failing to throw out the dirt.’
     b. Nyuntulu-rlu ka-npa-ju kapakapa-ma-ni walya kiji-rninja-ku.
        you-ERG IMPF-2sS-1sNS fail-CAUS-NPST ground throw-INF-PURP
        ‘You are preventing me from (succeeding in) throwing out the dirt.’


     c. Ngaju ka-rna jaru-ku kapakapa-jarri-mi.
        I IMPF-1sS language-DAT fail-INCH-NPST
        ‘I fail at, make mistakes in language.’

The subject of the non-finite clause is predictable: if the matrix verb has one argument, as in (a), that argument will control the subject of the non-finite clause. But if the matrix verb has two arguments, as in (b), the object argument will control the subject of the non-finite clause (cf. Hale 1982b; Nash 1986; Simpson 1983).

7.2. Infinitive and nominal secondary predicates

Secondary predicates, as opposed to the primary verbal or nominal predicate of a finite clause, have no independent tense, mood, aspect or overtly realized pronominal clitics associated with them. A secondary predicate consists of a nominal or infinitive verb to which a complementizer case and/or nominal case is suffixed. Non-subject arguments of the nominal or infinitive, as well as various adjuncts and modifiers, may also be expressed.

There are basically two classes of secondary predicate: eventive and stative. Eventive predicates are headed by a complementizer case which indicates the temporal relation of the event denoted by the secondary predicate to the event or process denoted by the primary (or finite) predicate. The eventive interpretation is attributable to the complementizer case, since a referential nominal bearing one of these cases will be interpreted as referring to some event involving the referent of the subject of the secondary predicate and the referent of the nominal expression.

In (44), stative and eventive secondary predicates are compared. In (a) the absolutive case-marked nominal nyurnu ‘sick’ is a stative predicate. Its subject is karnta ‘woman’, which is the absolutive case-marked object of the verb. In (b) nyurnu is interpreted as part of an eventive predicate headed by the object complementizer case -kurra, meaning something like ‘involved with the sick one’. The subject of this eventive secondary predicate is coreferent with karnta, the absolutive case-marked object of the finite verb.

(44) a. Wati-ngki karnta nya-ngu nyurnu.
        man-ERG woman see-PAST sick
        ‘The man saw the woman (was) sick.’
     b. Wati-ngki karnta nya-ngu nyurnu-kurra.
        man-ERG woman see-PAST sick-OBJCOMP
        ‘The man saw the woman involved with the sick one.’

See Hale (1982b), Nash (1986), Simpson (1983), Laughren (1992).

7.3. Eventive predicates

Eventive predicates may be further classified according to their time-reference: whether they denote an event taking place at the same time as the event denoted by the main clause, preceding it, or subsequent to it.


7.3.1. Simultaneous event

Warlpiri has three complementizer cases which indicate that the event denoted by the secondary predicate occurs at the same time as the event denoted by the main verb. These are illustrated below. In addition to their temporal content, each of these complementizer cases specifies how the understood subject of the secondary predicate is to be interpreted.

(45) a. Wati-ngki marlu nya-ngu, nguna-nja-kurra.
        man-ERG kangaroo see-PAST lie-INF-OBJCOMP
        ‘The man saw the kangaroo while it was lying down.’
     b. Wati-ngki marlu nya-ngu parnka-nja-karra-rlu.
        man-ERG kangaroo see-PAST run-INF-SUBJCOMP-ERG
        ‘The man saw the kangaroo while he was running.’
     c. Wati-rla jurnta-ya-nu karnta-ku jarda-nguna-nja-rlarni.
        man-3sDAT away-go-PAST woman-DAT sleep-lie-INF-OBVCOMP
        ‘The man went away from the woman while she was sleeping.’
     d. Kurdu-lu pu-ngu ngati-nyanu-ku wirlinyi-rlarni.
        child-3pS hit-PAST mother-his-DAT daytrip-OBVCOMP
        ‘They hit the child while its mother was out hunting.’

In (a) the secondary predicate consists of the infinitive form of the verb nguna-mi plus the complementizer case -kurra, which indicates that the understood subject of the infinitive verb is obligatorily coreferent with the object marlu of the finite verb nya-ngu. In (b) the secondary predicate is made up of the infinitive form of the verb parnka-mi plus the complementizer case -karra, which specifies that the understood subject of the infinitive is coreferent with the subject, wati-ngki, of the finite verb. This relationship is further marked by the presence of the ergative case-ending -rlu on the secondary predicate. In (c) the secondary predicate jarda-nguna-nja-rlarni contains the complementizer case -rlarni, which indicates that the understood subject of the compound verb jarda-nguna-nja is coreferent with the dative case-marked karnta-ku. Karnta-ku is neither the subject nor the object of the finite verb, but rather a dative adjunct (see section 5.4).
Whereas the subject of a nominal or infinitival predicate headed by either OBJCOMP or SUBJCOMP cannot be overtly expressed and is obligatorily interpreted as being coreferent with, or controlled by, the object or subject of the finite verb, the subject of a predicate headed by the OBVCOMP -rlarni may be overtly expressed by a dative noun phrase as in (d). In this case we may consider the combination of dative noun phrase plus -rlarni headed predicate and its non-subject arguments to be adjoined to the clause headed by the main finite verb.


7.3.2. Preceding events

(46) a. Wati-ngki kuyu purra-nja-rla nga-rnu.
        man-ERG meat cook-INF-PRECCOMP eat-PAST
        ‘The man cooked the meat and ate it. / The man, having cooked the meat, ate it.’
     b. Wati-ngki kapu kuyu purra-nja-rla nga-lku.
        man-ERG FUTCOMP meat cook-INF-PRECCOMP eat-FUT
        ‘The man will cook the meat and eat it.’ (Other translations are possible.)

In (46) the secondary predicate consists of the infinitive form of the verb purra-mi, its object kuyu and the complementizer case -rla, homophonous with the locative (LOC) case suffix. This suffix indicates that the event of cooking precedes the event of eating denoted by the finite forms of the verb nga-rni ‘eat’: the past tense in (a) and the immediate future tense in (b). The understood subject of the PRECCOMP is typically coreferent with the subject of the finite verb.

7.3.3. Subsequent events

(47) a. Wati-ngki-nyanu jurnarrpa ma-nu, wurna ya-ninja-kungarnti-rli.
        man-ERG-REFL belongings get-PAST travel go-INF-PREPCOMP-ERG
        ‘The man picked up his things before going on a trip.’
     b. Wati ya-nu wirlinyi kuyu pi-nja-ku.
        man go-PAST daytrip game kill-INF-PURPCOMP
        ‘The man went out to kill game.’
     c. Wati warrka-rnu maarnta-rla ya-ninja-kurra.
        man climb-PAST bus-LOC go-INF-SEQCOMP
        ‘The man climbed into the bus ready to go.’

(The SEQCOMP, as in [c], is homophonous with the OBJCOMP, but differs from it in time-reference and in what may be the controller of the subject of the non-finite clause.)

Each of the complementizer cases in (47) signals that the event denoted by the infinitive verb or nominal to which it is suffixed follows, or is dependent on, the event denoted by the finite verb. Another complementizer case which refers to a subsequent event is the evitative or negative purposive (NEGPURP) complementizer -kujaku, illustrated in (48). This complementizer sometimes allows the object of the non-finite clause to be construed with an argument of the main clause. There is no obligatory control of the subject of the non-finite clause.

(48) a. Kulpari-ya-nu-rna kulu-kujaku.
        back-go-PAST-1sS fight-NEGPURP
        ‘I turned back to avoid involvement in the fight.’


     b. Yantarli nyina-ya kurlarda-kujaku panti-rninja-kujaku.
        home stay-IMP spear-NEGPURP pierce-INF-NEGPURP
        ‘Stay put so as not to get speared.’ or: ‘Stay put so as not to spear someone/something.’
     c. Yampi-ya nyurnu-kujaku.
        leave-IMP sick-NEGPURP
        ‘Leave it alone lest (you) get sick.’ or: ‘Leave him alone lest (he) get sick.’ (and so on)

This NEGPURP complementizer may also occasionally be used on finite verb forms, as in (49):

(49) Jinta-wangu ya-nta, kalaka-ngku jarnpa-ngku paka-rni-kujaku.
     one-PRIV go-IMP POTCOMP-2sNS kurdaitcha-ERG hit-NPST-NEGPURP
     ‘Don’t go alone lest a kurdaitcha man might attack you.’

7.3.4. Objects of infinitive verbs

Whereas a finite verb combines with the auxiliary in such a way that certain of its arguments are expressed by means of person-number clitics with which a case-marked nominal expression may be construed, an infinitive verb does not combine with an independent auxiliary, falling as it does under the scope of the tense, mood and aspect morphemes associated with the auxiliary and the related finite verb in the clause. The arguments of the infinitival verb may be realised as nominal expressions, apart from the subject argument of an infinitive to which is suffixed one of the complementizer cases that requires obligatory control of the subject, as in (45). The dative case argument of a verb, finite or non-finite, may always be expressed by a dative case-marked nominal expression. The object argument of an infinitive, which would be expressed by an absolutive case-marked nominal expression with the corresponding finite verb, cannot be thus expressed, since absolutive case fails to be assigned to the object of an infinitive verb. Rather, it falls within the scope of the complementizer case which marks the infinitive. However, when the object nominal immediately precedes the infinitive, overt complementizer case-marking on the nominal is not obligatory (giving the appearance of an absolutive object), and the nominal and infinitive verb form a single phonological phrase, as in (47b). When the object nominal occupies another position in the clause, it too is overtly marked by the complementizer case ending (cf. Hale 1982b; Laughren 1989). Compare (50) with the sentences in (47). Not only does the object nominal fall within the scope of the complementizer case, but so does a modifying nominal which has semantic scope over the infinitive and its arguments.

(50) a. Wati-ngki-nyanu jurnarrpa ma-nu, ya-ninja-kungarnti-rli wurna-kungarnti-rli.
        man-ERG-REFL belongings get-PAST go-INF-PREPCOMP-ERG travel-PREPCOMP-ERG
        ‘The man picked up his things before going on a trip.’


     b. Kuyu-ku wati ya-nu wirlinyi pi-nja-ku.
        game-PURPCOMP man go-PAST daytrip kill-INF-PURPCOMP
        ‘The man went hunting to kill game.’

Arguments of an eventive secondary predicate need not be overtly expressed in Warlpiri. Occasionally, as shown in (51), none of the arguments of the infinitive verb yi-nja ‘give-INF’ are overtly expressed:

(51) Wangka-ja-rna-rla yi-nja-ku.
     say-PAST-1sS-3sDAT give-INF-PURPCOMP
     ‘I said to him to give (someone something).’

7.3.5. Stative secondary predicates

A stative secondary predicate typically attributes some quality to its “subject” (which may be a nominal expression or a pronominal clitic or both). The secondary predicate consists of a nominal or infinitival expression which agrees in case-marking with its “subject”.

(52) a. Mata ka karnta nyina-mi wapa-nja-warnu.
        tired IMPF woman sit-NPST walk-INF-ASSOC
        ‘The woman is sitting tired from walking.’
     b. Nyampu ka-rna nga-rni wanka, warlu-ngku purra-nja-wangu.
        this IMPF-1sS eat-NPST raw fire-ERG cook-INF-PRIV
        ‘I am eating this raw without it having been cooked by fire.’
     c. Kuyu nga-rnu kurdu-ngku purra-nja-warnu, yarnunjuku-rlu.
        meat eat-PAST child-ERG cook-INF-ASSOC hungry-ERG
        ‘The child ate the cooked meat, being hungry.’

In (a) the absolutive nominal predicate mata is predicated of the absolutive case-marked karnta, the subject of the sentence. The infinitive expression wapa-nja-warnu is also predicated of the woman. The associative (ASSOC) case transforms an eventive predicate into a stative one, i.e. the state which results from involvement in the activity, process or event denoted by the verb. The privative (PRIV) suffix -wangu can also transform an eventive predicate into a stative one, as in (b). (This is an unusual example, showing ergative case on warlu-ngku in the non-finite clause, agreeing with the understood ergative subject of purra-nja-wangu.) In (c) we have two stative secondary predicates: purra-nja-warnu is predicated of the absolutive object kuyu, while yarnunjuku-rlu is predicated of the ergative subject kurdu-ngku. The relationship between “subject” and secondary predicate is formally indicated in each case by the case-marking on the predicate. This case-marking is identical to that on the nominal understood to be the “subject” of this secondary predicate.


7.3.6. Secondary predicates and semantic cases

Warlpiri has a number of semantic cases which fall into two classes: derivational and non-derivational (see Appendix and also Hale 1982b; Nash 1986; Simpson 1983). These cases are suffixed to nominal expressions. The resulting compound expression is predicated of some element in the sentence, which acts as the “subject” of the expression, as in (53). A semantic case can be suffixed to a nominal plus derivational case compound. A grammatical case, dative or ergative, can be further suffixed to a semantic case-headed predicate, thus indicating the argument of which it is predicated.

(53) a. Karnta-lpa nyina-ja walya-ngka.
        woman-IMPF sit-PAST ground-LOC
        ‘The woman was sitting on the ground.’
     b. Karnta ya-nu-rnu kuyu-kurlu kartaku-rla-kurlu.
        woman go-PAST-HITHER meat-PROP can-LOC-PROP
        ‘The woman came with meat in a billycan.’
     c. Karnta-kari paka-rnu watiya-kurlu-rlu Yurntumu-wardingki-rli, kulu-parnta-rlu, pama-jangka-rlu.
        woman-other hit-PAST stick-PROP-ERG Yuendumu-DENIZ-ERG anger-PROP-ERG grog-SOURCE-ERG
        ‘The one from Yuendumu, in anger, drunk, hit another woman with a stick.’
     d. Jakamarra-kurlangu-wana-lpa nguna-ja maliki, ngurra-wana.
        Jakamarra-POSS-PERL-IMPF lie-PAST dog home-PERL
        ‘The dog was lying around Jakamarra’s home.’

As these examples show, semantic-case-headed predicates too may receive case in agreement with the nominal of which they are predicated. See Laughren (1992).

7.3.7. Inchoative

There are two inchoative suffixes, illustrated in (54), which are suffixed to predicative nominals to form a resultative secondary predicate. These are -karda ‘TRANS[lative]’ and -kurra ‘RESULT[ative]’. They refer to a state which is achieved or arrived at, and which differs from the implicit or explicit original state. While the predicate formed with -karda is freely used and may take any argument or adjunct in the same clause as its “subject”, the predicate formed with -kurra is very restricted in its application: it may only be predicated of the absolutive object of an impact verb. Only a few predicative nominals have been found with the resultative suffix, namely nyurnu ‘sick, dead’, yalyu ‘blood’ and tarnnga ‘long time, forever’.

(54) a. Wanta-kurra ka-rnalu kurdiji yirra-rni linji-karda.
        sun-ALL IMPF-1peS shield put-NPST dry-TRANS
        ‘We put the shield in the sun to (become) dry.’


     b. Yapa ka-rnalu-jana japi-rni ping-karda.
        people IMPF-1peS-3pNS ask-NPST knowing-TRANS
        ‘We ask people so as to know.’
     c. Yapa-lu-jana paka-rnu nyurnu-kurra tarnnga-kurra.
        person-3pS-3pNS hit-PAST dead-RESULT forever-RESULT
        ‘They hit the people to death.’

8. Sentential adjuncts

Finite tensed clauses apparently never appear as complements in Warlpiri. However, they are used in a very common construction, the adjoined relative clause, illustrated in (55):

(55) Ngajulu-rlu-rna yankirri pantu-rnu, kuja-lpa ngapa nga-rnu.
     I-ERG-1sS emu spear-PAST FACTCOMP-IMPF water consume-PAST
     ‘I speared the emu which was / while it was drinking water.’

Morphologically, adjoined relative clauses are marked by the presence in the auxiliary of one of a set of complementizer (COMP) morphemes, such as kuja above. The others include the factive ngula (FACTCOMP), the nonfactive kaji (NFACTCOMP), and the relational yungu and its allomorphs (RELCOMP). Syntactically, these clauses are dependent on the main clause, and peripheral to it. They are never embedded in the main clause, and they are usually separated from it by a pause. They may be preposed to the main clause, which may start with the anaphoric element ngula, as in (56). There may even be multiple subordinations, as in (57).

(56) Yankirri-rli kuja-lpa ngapa nga-rnu, ngula-rna pantu-rnu ngajulu-rlu.
     emu-ERG FACTCOMP-IMPF water consume-PAST FACTCOMP-1sS spear-PAST I-ERG
     ‘The emu which was drinking water, that one I speared.’ or: ‘While the emu was drinking water, then I speared it.’

(57) Karli-ji ma-ninji-nta yali, ngula-ka marda-rni yapa-kari-rli,
     boomerang-1sNS get-go-IMP that.yonder FACTCOMP-IMPF hold-NPST person-OTHER-ERG
     ngula-ka ngurra ngalipa-nyangu-rla nyina.
     FACTCOMP-IMPF camp us-GEN-LOC sit-NPST
     ‘Go get me that boomerang that that other person who lives in our camp has.’

Semantically, as the translations show, adjoined relative clauses are open to different interpretations, which have been the subject of some investigation (cf. Hale 1976; Larson 1983). Roughly speaking, there are two main classes of interpretation.
The first is the “NP-Relative” interpretation, in which the adjoined clause modifies an argument of the main clause, and is often translated as a relative clause in English. The second is the “T-Relative” interpretation, a term derived from the use of adjoined relative clauses to specify the time of the main clause, or to describe an event holding


at the same time as the main clause. However, it has been extended to cover the use of adjoined relative clauses for comments, which may be linked by any reasonable connection to the main clause. As well as time, these include comments on place, cause, purpose, reason, “enabling cause”, contrastive parallels, conditionals and so on. They are often translated into English by clauses headed by conjunctions such as when, where, while, if, whereas, because and so on. Many sentences may have both NP-relative and T-relative readings, and only context will disambiguate them. However, the choice of complementizer, the time references of both the main clause and the adjoined clause, and the presence of coreferent noun phrases in both the main and the adjoined clause are factors in determining what reading is given. An example of a T-relative reading is given in (58):

Factive, no shared argument, same time reference, T-relative
(58) Ngajulu-rlu-lpa-rna karli jarntu-rnu, kuja-npa ya-nu-rnu nyuntu.
     I-ERG-IMPF-1sS boomerang trim-PAST FACTCOMP-2sS go-PAST-HITHER you
     ‘I was trimming a boomerang when you came up.’

There appears to be no difference in behaviour between coreferential noun phrases in clauses with NP-relative interpretations and those in clauses with T-relative interpretations. Both are represented by pronominal clitics if they bear the right grammatical functions. In terms of pronominalisation, usually the second of the two (whether this happens to be in the main clause or the adjoined clause) undergoes pronominalisation, but pronominalisation is not essential.

9. Operators and logical form

In content questions the interrogative element normally appears in initial position, preceding the auxiliary; in that position, presumably, it has scope over the remainder of the sentence:

(59) a. Ngana-ngku ka karli nyampu jarntu-rni?
        who-ERG IMPF boomerang this trim-NPST
        ‘Who is trimming this boomerang?’
     b. Nyiya-ku ka-npala-rla warri-rni nyumpala-rlu?
        what-DAT IMPF-2dS-3sDAT seek-NPST you:two-ERG
        ‘What are you two looking for?’

There is, however, no evidence that syntactic movement is involved in the formation of content questions in Warlpiri. Initial position is available to any constituent simply by virtue of the free surface ordering characteristic of Warlpiri syntax generally. Diagnostics, such as the so-called “weak cross-over effect” (cf. Farmer, Hale, and Tsujimura 1986), assuming that they are valid, suggest that no movement is involved in the formation of content questions:


(60) Ngana ka nyanungu-nyangu maliki-rli wajilipi-nyi?
     who IMPF he-POSS dog-ERG chase-NPST
     ‘Who is his dog chasing?’

The Warlpiri sentence here, unlike the English given in translation, can receive an interpretation according to which the question word binds the possessive pronoun. The Warlpiri sentence can have the meanings associated with the English passive counterpart: ‘Who is being chased by his dog?’ This would follow if the Warlpiri question word were not involved in an operator-variable relationship with a trace in object position resulting from syntactic movement. The English of the translation, of course, involves movement, and the pronoun may not, therefore, function as a bound variable without violating the restriction against weak crossover.

Relativization in Warlpiri also fails to give evidence of syntactic movement. When the relative clause is in protasis position, the internal head of the relative may appear in initial position within its clause, as in (61) and (62). But, again, this position is simply available by virtue of the word order characteristics of Warlpiri generally.

(61) Karli-ngki kuja-npa yankirri luwa-rnu, ngulaju rdilyki-ya-nu.
     boomerang-ERG FACTCOMP-2sS emu shoot-PAST that broken-go-PAST
     ‘The boomerang you hit the emu with broke.’

(62) Yankirri kuja-npa karli-ngki luwa-rnu, ngulaju pali-ja.
     emu FACTCOMP-2sS boomerang-ERG shoot-PAST that die-PAST
     ‘The emu you hit with the boomerang died.’

Although the evidence cannot be considered conclusive at this point, it is doubtful that operator-variable binding relationships are formed, through movement, in the syntactic representations of Warlpiri sentences. The interpretations of questions and relatives, therefore, are evidently effected through movement in logical form, rather than in syntax (cf. Larson 1983).

10. Abbreviations, glosses

10.1. Auxiliary elements

a. Sentential Complementizers (COMP)

FACTCOMP   factive complementizer       kuja, ngula
FUTCOMP    future                       kapu/kapi, ngarra
NEGCOMP    negative complementizer      kula
NFACTCOMP  nonfactive complementizer    kaji
PASTCOMP   remote past, usitative       kala
POTCOMP    potential (with aspect ka)   kala
RELCOMP    relational (causal, reason)  yungu/yinga/yingi/yi

b. Aspect (ASP)

IMPF  past imperfect     lpa
IMPF  present imperfect  ka

(Perfect (PERF), 0̸, is not usually glossed.)

c. Glosses for pronominal agreement clitics (AGR)

Person               Number       Grammatical relation
1  first person      s  singular  S   subject
2  second person     d  dual      NS  non-subject
3  third person      p  plural
e  exclusive
i  inclusive

3sDAT  third singular Dative
REFL   reflexive

(3sS and 3sNS, 0̸, are not usually glossed.)

d. Pronominal agreement clitics

     S           NS
1s   rna         ju
2s   n(pa)       ngku
3s   0̸           0̸; rla (DAT)
1di  rli         ngali(ngki)
1de  rlijarra    jarrangku
2d   n(pa)-pala  ngku-pala
3d   pala        palangu
1pi  rlipa       ngalpa
1pe  rna-lu      nganpa
2p   nku-lu      nyarra
3p   lu          jana

10.2. Complementizer suffixes

a. Simultaneous event

OBJCOMP   object-controlled complementizer     -kurra
OBVCOMP   obviative-controlled complementizer  -rlarni
SUBJCOMP  subject-controlled complementizer    -karra

b. Preceding or purposive event

PRECCOMP   preceding event         -rla
PURPCOMP   purposive               -ku
DESIDCOMP  desiderative purposive  -kupurda
NEGPURP    negative purposive      -kujaku
PREPCOMP   preparatory purposive   -kungarnti
SEQCOMP    directional purposive   -kurra

c. Stative

ASSOC  associative, resultative, perfective  -warnu
PRIV   privative, negative                   -wangu

10.3. Nominal suffixes

A. Grammatical case

ABS  Absolutive  0̸ (not usually glossed)
DAT  Dative      -ku
ERG  Ergative    -ngku, -rlu

B. Semantic case

a. Non-derivational case

ALL     allative: ‘to, into’  -kurra
COMIT   comitative: ‘with’    -ngkajinta, -rlajinta
EL      elative: ‘from’       -ngurlu
LOC     locative              -ngka, -rla
RESULT  resultative           -kurra
TRANS   translative           -karda

b. Derivational case

ASSOC   associative, perfective: ‘being’       -warnu
DENIZ   denizen of: ‘belonging to’             -malu, -ngarna, -ngawurrpa, -wardingki
LIKE    simile-former: ‘as, like’              -piya
PERL    perlative: ‘along’                     -wana
POSS    possessive                             -kurlangu
POSS    possessive (on pronouns)               -nyangu
POSS    possessive (on kinterms)               -nyanu
PRIV    privative, negative: ‘without’         -wangu
PROP    proprietive: ‘having’                  -kurlu, -parnta
SOURCE  elative of source: ‘from, because of’  -jangka

C. Number

DUAL    dual                   -jarra
PAUCAL  plural, paucal: ‘few’  -patu
PL      plural                 0̸
PL      plural                 -rra (on some pronouns)

D. Nominal formatives and other clitics

CS     change of state: ‘now, then’  -lku
OTHER  other, next                   -kari
WARD   towards                       -purda

10.4. Verbs

A. Verbal inflections, arranged by conjugation

                              Conjugation class
                              I        II        III        IV        V

a. Tense, co-occurring with Aspect
NPST   nonpast                -mi, 0̸   -rni, -ni  -nyi       -rni, -ni  -ni
PAST   past                   -ja      -rnu       -ngu       -rnu       -nu
IRR    irrealis               -ya-rla  -ka-rla    -ngka-rla  -nja-rla   -nta-rla

b. Tense/Mood, no Aspect distinction
FUT    future                 -ju/-ji  -ku        -ngku      -lku       -nku
IMP    imperative             -ya      -ka        -ngka      -nja       -nta
PREST  presentational         -nya     -rni-nya   -rni-nya   -na-nya    -nga-nya

c. Non-finite verb forms
INF    infinitive             -nja     -rni-nja   -nja       -rni-nja   -ni-nja
NOMIC                         -ngu     -rnu       -ngu       -rnu       -nu

B. Verb formatives

CAUS  causative (transitive)  N-ma-ni
INCH  inchoative              N-jarri-mi

C. Directionals

HITHER   hither, to here    -rni
BY       past, by, across   -mpa
THITHER  thither, to there  -rra

Acknowledgments

We would like to thank the Warlpiri people who have been teaching us their language. We would also like to thank Robert Hoogenraad, Stephen Swartz and David Nash for very useful comments and discussion. This work has been supported by the Department of Education of the Northern Territory of Australia, the Australian Institute of Aboriginal Studies and the MIT Lexicon Project (System Development Foundation, U.S.A.).


11. References

Andrews, Avery 1985 The major functions of the noun phrase. In: Timothy Shopen (ed.), Language Typology and Syntactic Description. Vol. 1, Clause Structure, 62−154. London.
Bittner, Maria, and Ken Hale 1991 Remarks on definiteness in Warlpiri. Ms., Cambridge, Massachusetts Institute of Technology.
Bouma, Gosse 1985 A categorial grammar for Warlpiri. Ms., Groningen, Nederlands Instituut.
Bouma, Gosse 1986 Grammatical functions and agreement in Warlpiri. In: Frits Beukema (ed.), Linguistics in the Netherlands 1986, 19−26. Dordrecht: Foris.
Brunson, Barbara A. 1988 A Processing Model for Warlpiri Syntax and Implications for Linguistic Theory. Technical Report Computer Systems Research Institute-208. University of Toronto.
Carrier, Jill 1976 Grammatical relations in Warlpiri. Ms., Cambridge, Massachusetts Institute of Technology.
Craig, Colette, and Kenneth L. Hale 1988 Oblique relations and reanalysis in some languages of the Americas. Language 64.2: 312−344.
Farmer, Ann K., Ken Hale, and Natsuko Tsujimura 1986 A note on weak crossover in Japanese. Natural Language and Linguistic Theory 4.1: 33−42.
Granites, Robin Japanangka, Kenneth L. Hale, and David Odling-Smee 1976 Survey of Warlpiri grammar. Ms., Cambridge, Massachusetts Institute of Technology.
Guerssel, Mohammed, Kenneth L. Hale, Mary Laughren, Beth Levin, and Josie White Eagle 1985 A cross-linguistic study of transitivity alternations. In: Papers from the Parasession on Causatives and Agentivity at the Twenty-first Regional Meeting, Chicago Linguistics Society, 48−63. Chicago.
Hale, Kenneth L. 1973a Person marking in Walbiri. In: Stephen Anderson and Paul Kiparsky (eds.), A Festschrift for Morris Halle, 308−344. New York.
Hale, Kenneth L. 1973b Deep-surface canonical disparities in relation to analysis and change: an Australian example. In: Thomas Sebeok (ed.), Current Trends in Linguistics, 401−458. The Hague.
Hale, Kenneth L. 1976 The adjoined relative clause in Australia. In: Robert M. W. Dixon (ed.), Grammatical Categories in Australian Languages, 78−105. Canberra.
Hale, Kenneth L. 1981a On the Position of Warlpiri in a Typology of the Base. Bloomington.
Hale, Kenneth L. 1981b Preliminary remarks on the grammar of part-whole relations in Warlpiri. In: Jim Hollyman and Andrew Pawley (eds.), Studies in Pacific Languages and Cultures in Honour of Bruce Biggs, 333−344. Auckland.
Hale, Kenneth L. 1982a Preliminary remarks on configurationality. In: Proceedings of the North Eastern Linguistics Society 12, 86−96.
Hale, Kenneth L. 1982b Some essential features of Warlpiri main clauses. In: Stephen Swartz (ed.), Papers in Warlpiri Grammar in Memory of Lothar Jagst, 217−315.


Hale, Kenneth L. 1983 Warlpiri and the grammar of non-configurational languages. Natural Language and Linguistic Theory 1.1: 5−47.
Hale, Kenneth L. 1986 Notes on world view and semantic categories: some Warlpiri examples. In: Pieter Muysken and Henk van Riemsdijk (eds.), Features and Projections, 233−254. Dordrecht: Foris.
Hale, Kenneth L. 1990 Core structures and adjunctions in Warlpiri syntax. For the proceedings of the Tilburg Scrambling Conference, October 1990.
Hale, Kenneth L., and Mary Laughren 1986 The structure of verbal entries. Preface to the Warlpiri Dictionary verb fascicle. Ms., Cambridge, Massachusetts Institute of Technology.
Hale, Kenneth L., and Paul Platero 1986 Parts of speech. In: Pieter Muysken and Henk van Riemsdijk (eds.), Features and Projections, 31−40. Dordrecht: Foris.
Jelinek, Eloise 1984 Empty categories, case, and configurationality. Natural Language and Linguistic Theory 2.1: 39−76.
Kashket, Michael Brian 1984 A government-binding based parser for Warlpiri, a free word order language. MSc. dissertation, Massachusetts Institute of Technology, Cambridge, MA.
Larson, Richard K. 1983 Restrictive modification: relative clauses and adverbs. Doctoral dissertation, University of Wisconsin.
Laughren, Mary 1977 Pronouns in Warlpiri and the category of number. Paper presented to the conference of linguists, Department of Education, Darwin, Northern Territory.
Laughren, Mary 1982 A preliminary description of propositional particles in Warlpiri. In: Stephen Swartz (ed.), Papers in Warlpiri Grammar in Memory of Lothar Jagst, 129−163.
Laughren, Mary 1984 Some aspects of focus in Warlpiri. Paper presented at the Australian Linguistics Society.
Laughren, Mary 1985a The split case hypothesis re-examined: the Warlpiri case. Paper presented at the Australian Linguistics Society.
Laughren, Mary 1985b Warlpiri reflexives, ‘inherent’ reflexives and grammatical relations. Paper presented at the Massachusetts Institute of Technology.
Laughren, Mary 1985c Case assignment across categories in Warlpiri case. Paper presented at the Linguistics Society of America Winter Meeting.
Laughren, Mary 1988a Some data on pronominal disjoint reference and coreference in Warlpiri. Ms.
Laughren, Mary 1988b Towards a lexical representation of Warlpiri verbs. In: Wendy Wilkins (ed.), Thematic Relations, 215−242. (Syntax and Semantics 21.) New York.
Laughren, Mary 1989 The Configurationality Parameter and Warlpiri. In: Laszlo K. Maracz and Pieter Muysken (eds.), Configurationality: The Typology of Asymmetries, 319−353. Dordrecht: Foris.

48. Warlpiri

1709

Laughern, Mary 1992 Secondary predication as a diagnostic of underlying structure in Pama-Nyungan languages. In: I. Roca (ed.), Thematic Structures: Its Role in Grammar, 199−246. (Linguistic Models Series.) Berlin. Nash, David 1982 Warlpiri preverbs and verb roots. In: Stephen Swartz (ed.), Papers in Warlpiri Grammar in Memory of Lothar Jagst, 165−216. Nash, David 1986 Topics in Warlpiri Grammar. (Outstanding Dissertations in Linguistics 3.) Copyright 1985. New York, London [Published version of Massachusetts Institute of Technology doctoral dissertation with the same name, 1980]. Riemsdijk, Henk van 1981 On ‘Adjacency’ in phonology and syntax. In: Proceedings of the North Eastern Linguistics Society 11: 399−413. Simpson, Jane 1983a Discontinuous verbs and the interaction of morphology and syntax in Warlpiri. In: Proceedings of the Second Annual West Coast Conference on Formal Linguistics. February, 1983. Simpson, Jane 1983b Aspects of Warlpiri morphology and syntax. Doctoral dissertation, Cambride, Massachusetts Institute of Technology. Cambridge. Simpson, Jane 1988 Case and complementizer suffixes in Warlpiri. In: Peter Austin (ed.), Complex Sentence Constructions in Australian Aboriginal Languages, 205−218. (Typological Studies in Language 15.) Amsterdam. Simpson, Jane 1991 Warlpiri Morpho-Syntax: A Lexicalist Approach. (Studies in Natural Language and Linguistic Theory 23.) Dordrecht. Simpson, Jane, and Joan Bresnan 1983 Control and obviation in Warlpiri. Natural Language and Linguistic Theory 1.1: 49−64. [Revised version of a paper with the same name in Proceedings of the First Annual West Coast Conference on Formal Linguistics, Stanford University, January 1982.] Simpson, Jane, and Meg Withgott 1986 Pronominal clitic clusters and templates. In: Hagit Borer (ed.), The Syntax of Pronominal Clitics, 149−174. (Syntax and Semantics 19.) New York. Speas, Margaret J. 1990 Phrase Structure in Natural Language. (Studies in Natural Language and Linguistic Theory 21.) 
Dordrecht. Swartz, Stephen 1982a Papers in Warlpiri Grammar in Memory of Lothar Jagst. (Work-Papers of SIL-AAB. Series A, Volume 6.) Berrimah, N. T. Swartz, Stephen 1982b Syntactic structure of Warlpiri clauses. In: Stephen Swartz (ed.), Papers in Warlpiri Grammar in Memory of Lothar Jagst, 69−127. Swartz, Stephen 1988 Pragmatic structure and word order in Warlpiri. Pacific Linguistics 17: 151−166. Swartz, Stephen 1991 Constraints on Zero Anaphora and Word Order in Warlpiri Narrative Text. (SIL-AATB Occasional Papers 1) [Published version of Pacific College of Graduate Studies MA thesis with the same name, 1988].

Kenneth L. Hale, Cambridge/MA (USA) Mary Laughren, Brisbane (Australia) Jane Simpson, Sydney (Australia)


49. Creole Languages

1. Introduction
2. The lexicon
3. Phrase structure
4. Sentence grammar
5. Linguistics at school
6. Conclusion
7. References (selected)

Abstract

Though some scholars have denied that creole languages exhibit a clear common typology, it is shown that a significant number of structures and features are shared by creole languages, especially those that came into existence under plantation conditions.

1. Introduction

The term “creole” has been applied to a wide variety of languages. For instance, languages as varied as Proto-Germanic, Egyptian, Songay, Mbugu and Middle English have been hailed as creoles. Such indiscriminate use deprives the label of all meaning, and has led to the belief that there is no distinct creole typology (Muysken 1988). In what follows, the term will be restricted to what, in Bickerton (1988), were described as “plantation creoles”: languages that arose among speakers of several mutually incomprehensible languages who had been removed from their traditional homelands under the aegis of colonial power and who were thus obliged to “invent” some means of mutual communication. Such languages would include, for example, Gullah, Haitian, Sranan, Saramaccan, Papiamentu, Seselwa, Morisyen, Sao Tomense and Hawaiian Creole, to take a random sample of the two or three dozen exemplars. Much of what can be said about these languages would apply also to “fort” and “maritime” creoles, as defined in Bickerton (1988). However, it is only about “plantation creoles” that one can make typological statements which are both general and detailed, without requiring frequent disclaimers of the type “except in the case of X, where …”. Note that in Lefebvre (2011: 6−7) the thirty “creoles” whose typologies are discussed include four that are not even called creoles, but are pidgins, on the highly dubious grounds that “scholars have started referring to pidgins and creoles as P/Cs”. The list also includes a number of “fort” and “maritime” creoles while excluding the genuine plantation creoles from the Indian Ocean. Unsurprisingly, it is concluded that there is no consistent creole typology.

However, this conclusion flies in the face of a series of studies spread over nearly half a century (Taylor 1971; Bickerton 1981; Markey 1982; Hancock 1987; Holm and Patrick 2007; Grant and Guillemin 2012) and produced by scholars with very varied agendas and theoretical assumptions, which show that a large number of morphological, lexical and syntactic features are shared by a substantial majority of creoles. Moreover, where they are not shared, the substitution in most cases of a similar feature from the creole’s superstrate clearly indicates that a greater influence has been exercised by that superstrate and is responsible for the divergence from the prototypical creole pattern. Note however that the process of decreolization (although dismissed as a “myth” by Ansaldo and Matthews 2007) is very real, particularly for those English-related creoles that have remained in contact with English. Accordingly, this chapter ignores decreolized varieties and refers only to the oldest and purest recoverable varieties of creole languages.

2. The lexicon

In general, creoles, at least in their early stages of development, have a lexicon considerably smaller than that of their source (superstrate) languages, even though in most cases the latter supply 90 % or more of the basic morpheme stock. Missing from the lexicon are reflexes of the less frequent content words, all or almost all bound morphology, and a large percentage of free grammatical morphemes, including especially superstrate articles, auxiliary verbs and complementizers, as well as some prepositions and conjunctions. Content words that survive tend to be disyllabic: superstrate monosyllables with final consonants usually add an epenthetic vowel, although this may disappear over time if creole and superstrate remain in contact; trisyllabic or longer words tend to be lost prior to creole formation. With regard to the semantics of content words, out of the several members of a single semantic set (e.g. ‘talk’, ‘say’, ‘speak’, ‘report’ etc.) only one may survive (e.g. taki ‘talk’ in the Suriname creoles). Deficits in the content side of the lexicon are made up either by broadening the semantic ranges of existing morphemes or by coinage of new items. Typical processes, of which one of the commonest is compounding (e.g. Guyanese haadyiaz ‘stubborn, obstinate’, literally ‘hard-ears’, or Sranan atibron ‘anger’, literally ‘heart-burn’), have been described by a number of writers (e.g. Hancock 1980; Farquharson 2007). Changes in function are another device: verbs are often derived from nouns, e.g. Guyanese cobweb ‘to dust’, or Hawaiian Creole lawnmower ‘to mow’. A part of the vocabulary which tends to reduce over time is drawn from substrate languages. In Saramaccan, this part has been estimated to be as high as 50 % (Price 1976), and in Berbice Dutch at 27 % (Smith, Robertson, and Williamson 1987), although a range between 2 % and 10 % is more common for most synchronic creoles.
In general, little core vocabulary is drawn from substrate sources; retentions are most numerous in “private” domains such as body parts, diseases, food, religion and so on. The deficit due to loss of a substantial percentage of grammatical morphemes is made up by “semantically bleaching” content words and (where these are disyllabic) reducing them to monosyllables, usually by truncating the unstressed syllable: thus in Sranan sabi ‘know’ gives sa ‘irrealis particle’, in Saramaccan taki ‘talk’ gives taa ‘factive complementizer’, in Crioulo kaba ‘finish’ (from Ptg. acabar) gives ba ‘completive particle’, and so on. These monosyllabic forms will be reduced in stress and are perceived by speakers as quite different in meaning from their etyma even where phonological truncation has not occurred, cf. Hawaiian Creole he go stay come ‘He will be coming’.

The loss of overt case distinctions in creole pronouns, although common (e.g. Haitian), is by no means always complete. Normally most pronouns are invariant for case in nominative, accusative and possessive forms. Nominative and accusative may be distinguished in the first person singular (Palenquero, Seselwa) or in the third (Sranan, Guyanese), but seldom elsewhere. Especially in Portuguese creoles, pronouns may have emphatic forms.

3. Phrase structure

3.1. Order within NP

Kihm (2008) pointed out that there was considerably more variation in the creole NP than in the creole VP − the latter being highly consistent across creoles. Creoles are strictly configurational SVO languages. However, the order of constituents within NP does not always follow what is predicted by this overall schema. In some cases, determiners, such as possessives, may follow nouns:

(1) tata mi ta pega mi [Palenquero]
    father my PROG hit me
    ‘My father hit me.’

Sometimes determiners, as well as adjectives, may either precede or follow the head:

(2) a. Youn gwos cheval [Haitian]
       a big horse
       ‘a big horse’
    b. cheval blan na
       horse white the
       ‘the white horse’

Sometimes even in the same NP determiners can both precede and follow the head:

(3) di bai dem dis [Guyanese]
    the boy them this
    ‘these boys’

Compare Haitian NP-sa-a ‘NP-this-the’. But in general, articles have evolved too recently from demonstrative adjectives for such a split to manifest itself. For instance, the di and (d)a definite articles found universally in English creoles are probably not reflexes of English the but truncated forms of creole disi ‘this’ or dati ‘that’ respectively (note that where a creole is referred to as “English”, “French” etc., no claims of genetic affiliation are intended; the terms are used neutrally and entirely for convenience, for making generalizations about sets of creoles whose lexicons are drawn predominantly from a particular language).

The variation within noun phrases is, however, limited to issues of word order. Semantically, creoles show a high degree of consistency in their determiner system. Virtually all creoles omit determiners where referents are generic, hypothetical or even indefinite (see Bickerton 1981: 23−25 for discussion). Where the determiner is absent the pluralizer also is obligatorily absent. Pluralizers are also absent where there is any other indicator of plurality, e.g. a quantifier or a numeral, within the phrase. In general, the position of adjectives vis-a-vis their noun heads seems to be determined by their order in the superstrate language: English creoles have only preceding adjectives, Spanish and Portuguese creoles only following ones, while French creoles show a mixture, the majority being postnominal. Relative clauses and other types of complement, such as prepositional phrases, follow their heads universally. In contrast, genitive phrases are highly variable, probably more so than any other construction in creoles. We find at least five types of possessive NP: possessor-possessed (4), possessed-possessor (5), preposition-possessor-possessed (6), possessor-possessive adjective-possessed (7), and possessed-preposition-possessor (8):

(4) a. Jaan haas [General Anglo-Caribbean]
       John horse
       ‘John’s horse’
    b. mi baáa [Saramaccan]
       my brother
       ‘my brother’

(5) lelo Pegro [Palenquero]
    finger Peter
    ‘Peter’s finger’

(6) da a fi-mi buk [Jamaican]
    that is for-me book
    ‘That is my book.’

(7) mi tata su buki [Papiamentu]
    my father his book
    ‘my father’s book’

(8) di baáa fu mi [Saramaccan]
    the brother for me
    ‘my brother’

As (4b) and (8) indicate, there may be more than one type of genitive in the same language. In general, creoles follow the order of their superstrates, but this is certainly not the case in e.g. Papiamentu.

3.2. Order within VP

Verbs invariably precede their complements, whether these are nominal or sentential. Most if not all creoles have “dative-shift”, that is, constructions in which an indirect-object Goal NP precedes a direct-object Theme NP (it is worth noting that though these double-object constructions could be attributed to the superstrate in English, no precedent for them is found in Portuguese or French, which have only dative-type constructions):

(9) mada mu ua kwa di kume. [Sao Tomense]
    send me one thing of eat
    ‘Send me something to eat.’

(10) li bay nu larzan [Haitian]
     he give we money
     ‘He gave us money.’

If the Theme/direct object comes first, the indirect object is, in many cases, preceded by an additional verb (see section 4.1 below). Although verbs in general generate their Themes in direct-object position, large numbers of transitive (but no ditransitive) verbs are also “ergative” in nature: that is to say that, like English melt, they may appear either with Agent subject and Theme object, or with Theme subject and no object:

(11) a. ti bway-la perd lakle-la [Dominican]
        small boy-the lose key-the
        ‘The small boy lost the key.’
     b. lakle-la perd
        key-the lose
        ‘The key is lost.’

The restrictions on which verbs may behave in this way have not yet been clarified for any creole language: it may be that causative verbs like kill (= ‘cause to die’) are excluded, or that selectional restrictions are involved (inanimate Theme subjects seem more acceptable than human ones, for instance). More study is needed here.

Prepositional phrases do occur in the verb phrase, although as noted above the inventory of prepositions is depleted as compared with the source language. According to Kouwenberg (1992) there are postpositions in Berbice Dutch, a fact that may be due to both the superstrate and the major substrate (Ijo) being SOV languages.

(12) a. war ben [Berbice Dutch]
        house in
        ‘in the house’
     b. di banku bofu
        the bench on
        ‘on the bench’

Adverbs are rare in creoles generally. Occasionally they may be formed by reduplication (e.g. Saramaccan hesi ‘quick’, hesihesi ‘quickly’). Often a serial verb construction may be used:

(13) el a kore paden bai. [Papiamentu]
     he PST run inside go
     ‘He went in in a hurry.’

(14) di kyat waak kom an waak go [Guyanese]
     the cart walk come and walk go
     ‘The cart came here and went away again.’


Negation is invariably predicate-external (that is, it never falls inside the predicate as do English not, French pas); this is true even where pa is the chosen negator (as it is in French creoles generally). In a large majority of creoles the negator precedes the predicate; in a few (Palenquero, Sao Tomense, Berbice Dutch) it follows:

(15) yu nimi dida kane [Berbice Dutch]
     you know that NEG
     ‘You don’t know that.’

3.3. The structure of INFL

Creoles have a characteristic array of particles that indicate tense, modality and aspect (TMA). These are always free morphemes, usually monosyllabic, and they always appear in the order indicated:

(16) a bin sa e waka [Saramaccan]
     he PST IRR PROG walk
     ‘He would have been walking.’

(17) li t’ av ap marse [Haitian]
     he PST IRR PROG walk
     ‘He would have been walking.’

The semantics of these particles and their combinations are consistent over a wide range of creoles (see discussion in Bickerton 1981: 73−99). PST is marked only for [+past], and is almost always relative (the reference point being the time of the topic under discussion rather than the moment of speech). IRR is marked only for [−realis], and PROG for [−completive]. There are, however, minor variations; completive aspect is marked in some creoles by a VP-final verb derived from a superstrate word meaning finish(ed), e.g. Anglo-Creole done, Franco-Creole fini(r), Ibero-Creole (a)kaba(r). This particle may be further reduced, e.g. Crioulo ba, Seselwa (i)n, and may in addition be incorporated into INFL. Although TMA particles are almost always free, and often derived from full verbs, their behavior is quite distinct from that of the “true” verbs: the latter, for instance, will undergo the process of verb copying (sometimes called “predicate fronting”) discussed in section 4.6.3, whereas the former will not.
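The rigid tense > modality > aspect ordering described in this section can be expressed as a small well-formedness check. The particle-to-category table below is an illustrative, Sranan-like assumption introduced for the sketch, not an inventory drawn from this chapter:

```python
# Hedged sketch: check that a string of preverbal particles respects the
# tense > modality > aspect ordering generalization for creole TMA systems.
# The particle forms and their category assignments are assumed for
# illustration only.
CATEGORY = {"ben": "T", "sa": "M", "e": "A"}  # PST, IRR, PROG
RANK = {"T": 0, "M": 1, "A": 2}

def well_ordered(particles):
    """Return True iff the particles appear in T < M < A order."""
    ranks = [RANK[CATEGORY[p]] for p in particles]
    return ranks == sorted(ranks)

print(well_ordered(["ben", "sa", "e"]))  # True: PST-IRR-PROG, as in (16)
print(well_ordered(["sa", "ben"]))       # False: modality before tense
```

Any subset of the three slots may be filled, which is why the check only requires the ranks to be non-decreasing rather than all three categories to be present.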

4. Sentence grammar

4.1. Serial verb constructions

Perhaps no area of creole grammar has given rise to more confusion and misunderstanding than serial verb constructions (SVCs). In early work they were often treated as conjoined sentences, a practice continued by some (e.g. Seuren 1990; Corne et al. 1996) with regard to Indian Ocean creoles. Other writers have taken a variety of approaches, treating SVCs as embedded CPs or conjoined VPs. Creoles show a variety of types, including take serials:

(18) a teki nefi koti brede [Sranan]
     he take knife cut bread
     ‘He cut bread with a knife.’

(19) li pran ti lisyen tue [Seselwa]
     he take little dog kill
     ‘He killed the little dogs.’

dative serials with give:

(20) nu pote liv-la vini bay u [Haitian]
     we carry book-the come give you
     ‘We brought you the book.’

and directional serials with a verb of locomotion plus come or go, as in example (14) above, or instrumental serials using take or equivalent:

(21) a tei faka kii sindeki [Saramaccan]
     he take knife kill snake
     ‘He killed the snake with a knife.’

Serial verbs in Seselwa, Saramaccan and Negerhollands may take markers of tense, modality or aspect, although this is unusual elsewhere. However, such markers must always be the same as that on the first verb in the SVC:

(22) a. lulu in pran papa in ale in manze [Seselwa]
        wolf PROG take daddy PROG go PROG eat
        ‘Wolf’s gone and eaten Daddy!’
     b. *lulu ti pran papa in ale a manze
        wolf PST take daddy PROG go IRR eat

This fact, plus the fact that extraction is possible out of serial structures, rules out any kind of conjunction analysis.

4.2. Empty categories

As we have already seen in SVCs, creoles make extensive use of empty categories (null pronouns). The “missing pronouns” in sentences such as (22) above are a case in point: any repetition of subjects or objects of higher verbs on lower verbs is barred.


(23) a. Kofii naki Anbak [Ei kiri Ek] [Sranan]
        Kofi hit Anba kill
        ‘Kofi struck Anba dead.’
     b. *Kofi naki Anba kiri en
        Kofi hit Anba kill him

A regular algorithm for assigning reference to empty categories takes the deepest item (Ek in [23a]) and assigns its reference to the closest NP, Anba, then does the same to the second deepest item (Ei). This algorithm is possibly a universal, since it assigns reference in exactly the same way that reference is assigned in English sentences such as Mary is too angry E to talk to her and Mary is too angry E to talk to E.

Creoles are not pro-drop languages, but they do lack “dummy” pronominals in contexts with existential expressions such as it, there or the il of French il y a. Only referential expressions can be subjects. In Guyanese, for example, English it is raining becomes reen a faal (literally, ‘rain is falling’). Similar examples can be found in other creoles:

(24) tin un muhe ki tin un yiu-muhe [Papiamentu]
     have a woman who have a child-woman
     ‘There was a woman who had a daughter.’

(25) get wan wahine shi get tumach keiki [Hawaiian Creole]
     have one woman she have too.much children
     ‘There was a woman who had a lot of children.’

(26) no can see nomo nating [Hawaiian Creole]
     not can see no.more nothing
     ‘It’s not possible to see anything.’

As (24)−(26) suggest, a single verb expresses both existence and possession in most if not all creoles. Morisyen, Seselwa and Hawaiian Creole have in addition a distinct negative form for both (napa, nomoa).
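The deepest-first reference-assignment algorithm described in section 4.2 can be rendered as a short sketch. The flat position/depth encoding, the assumption that each overt NP antecedent is consumed only once (so the next-deepest empty falls back to the next-closest NP, as Ei does in [23a]), and all names here are illustrative assumptions, not part of the chapter:

```python
def assign_reference(empties, nps):
    """Deepest-first assignment of empty categories to overt NPs (a sketch).

    empties: list of (position, depth) pairs for empty categories
    nps:     list of (position, name) pairs for overt NPs
    Each NP is used at most once; an empty with no available preceding NP
    is left unbound (None), like arbitrary reference in
    'Mary is too angry E to talk to E'.
    """
    available = list(nps)
    result = {}
    # Take the deepest empty category first, as the text describes for (23a).
    for pos, depth in sorted(empties, key=lambda e: -e[1]):
        preceding = [np for np in available if np[0] < pos]
        if not preceding:
            result[pos] = None  # unbound / arbitrary reference
            continue
        closest = min(preceding, key=lambda np: pos - np[0])
        available.remove(closest)
        result[pos] = closest[1]
    return result

# Sranan (23a): Kofi naki Anba [E kiri E]
# word positions (illustrative): Kofi=0, naki=1, Anba=2, E_i=3, kiri=4, E_k=5
nps = [(0, "Kofi"), (2, "Anba")]
empties = [(3, 1), (5, 2)]  # E_k, the object of kiri, counts as deeper
print(assign_reference(empties, nps))  # → {5: 'Anba', 3: 'Kofi'}
```

The deepest empty (Ek) claims the linearly closest NP, Anba; once Anba is consumed, Ei is forced back to Kofi, matching the interpretation ‘Kofi struck Anba dead’.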

4.3. “Accomplished” versus “unaccomplished” complements

In English there is no morphosyntactic distinction between complements whose action is necessarily accomplished (I managed to leave), necessarily unaccomplished (I failed to leave) or of unspecified accomplishment (I decided to leave). In all creoles there exists some degree of distinction. In some creoles, all three conditions are distinguished, while in the remainder the second and third conditions are collapsed. In Saramaccan, Papiamentu and Haitian, for example, all three conditions are realised, and in the same way. In all creoles there is a complementizer derived from the preposition for or its equivalent (pu and pa in French and Iberian creoles, from pour and para respectively): the absence of this, or its replacement by auxiliary go or equivalent, indicates the “accomplished” condition, its presence alone indicates the “indeterminate” condition, and its presence plus a marker of past or anterior tense in the following clause indicates the “unaccomplished” condition:

1718

VII. Syntactic Sketches

(27) a. a go a wosu (go) nyan he go to house go eat ‘He went home to eat (and did).’

[Saramaccan]

b. a go a wosu faa nyan he go to house for.he eat ‘He went home to eat (and may or may not have done so).’ c. a go a wosu faa bi nyan he go to house for.he PST eat ‘He went home to eat (but did not).’ In Jamaican, Morisyen and Hawaiian Creole, however, the subordinate clause either cannot be tensed or cannot be marked overtly for tense, so that the use of the ‘for’ complementizer covers both the b and c conditions. Note that the distinction between the a and b/c conditions can be clearly indicated by the use of an adversative tag: (28) a. im gaan fu bied, but im duon bied he gone for bathe but he NEG bathe ‘He went to wash, but he didn’t wash.’ b. *im gaan go bied, bot im duon bied

[Jamaican]

It might be thought that the difference between creoles that have the b/c distinction is due to the nonfinite nature of the dependent clause. That this is not not so, and that ‘for’ clauses are always finite in creoles, is shown by the fact that wherever case distinctions are marked in creoles, ‘for’ clauses always choose nominative and reject accusative pronouns: go/*fi am go (29) mi waan fii I want for.he go/for him go ‘I want him to go.’

[Guyanese]

(30) li bizin ed mwa pu mo/*mwa kapav vini he need help me for I/ me be.able come ‘He had to help me for me to be able to come.’

[Morisyen]

4.4. Anaphora The status of anaphora in creoles depends on the extent to which anaphors were retained or lost during pidginization and whether reconstitution has taken place or not. Loss is particularly marked in Iberian-based creoles, e.g. Papiamentu (Muysken 1993) and Palenquero (Dieck 2008), since Spanish and Portuguese use reflexive clitics with verbs which require no overt object in English, e.g. se ha ido, ‘He’s gone’. Naturally, unstressed and often phonologically-reduced items like clitics are among the first things to be lost in pidginization. English creoles generally managed to retain or reconstitute superstrate forms, thus one typically finds expressions like heself or sheself. For reasons that remain unclear,

49. Creole Languages

1719

French creole reflexives involved major restructuring. French “short-range” reflexive clitics were lost (along with cliticised forms in general) in the process of pidginization. Only reflexes of the emphatic pronoun series moi ‘me’, lui ‘him’ etc. survived in the forms mwen, li etc. A reflex of meme survived, and can be conjoined with li, but the meaning is different from the French “long-range” reflexive lui meme: for most speakers, li-mem must have a referent outside the sentence that contains it. In early texts it appears that bare pronouns could be used reflexively: li bat li might mean ‘He hit him’ or ‘He hit himself’. Subsequently, according to Carden and Stewart (1988) reflexives based on body parts (ko-li, literally ‘body-his’; tet-li, literally ‘head-his’) were reconstituted by Haitian speakers. However, the original body-part sense has not been lost, so that for at least some speakers li bat tet-li is ambiguous between ‘He hit himself’ and ‘He hit his head’. Corne (1988) shows a similar situation for Morisyen, although he argues (incorrectly, in this writer’s opinion) for a historical scenario the exact reverse of Carden and Stewart’s. In Seselwa there is massive variation among three forms, li-mem, li and so lekor (literally,’ his body’). For some speakers, only the first and third may be used reflexively; for others, all three may be used, but preferred usage varies according to the verb: (31) li ti get li/ so lekor dan laglas he PST look.at him his body in mirror ‘He looked at himself n the mirror.’

[Seselwa]

(32) mo frer ti bles li/ so lekor
     my brother PST wound him/ his body
     ‘My brother hurt himself.’

[Seselwa]

While reflexive forms tend to be reconstituted even if lost, reciprocal forms seem always to be lost and are relatively seldom reconstituted. In some entirely unrelated creoles, the word for ‘friend’ functions as a reciprocal:

(33) dem a pelt mati
     they PROG throw friend
     ‘They are throwing things at each other.’

[Guyanese]

Similar constructions are found in Caboverdiense (with kompanyer) and in Morisyen (with kamarad). In Saramaccan, there is no difference between reciprocals and reflexives, so that (34a) is ambiguous and can only be disambiguated by (34b):

(34) a. Kofi ku Samo suti denseei
        Kofi with Samo shoot REFL/RECP
        ‘Kofi and Samo shot themselves/each other.’
     b. Kofi suti Samo hen Samo suti Kofi
        Kofi shoot Samo then Samo shoot Kofi
        ‘Kofi and Samo shot each other.’

[Saramaccan]


4.5. Relativization

One empirical generalization that can be made about creoles is that relativizers and question-words will not be homophonous (unlike English who, French qui etc.). We find pairs like Saramaccan ambe ‘who?’, di ‘relativizer’, or Guyanese (a)hu ‘who?’, we ‘relativizer’, or Papiamentu quen ‘who?’, cu ‘relativizer’. It is true that in French creoles generally, the relativizer is ki, but ki alone does not function as a question-word; rather it occurs in compounds such as Haitian ki moun, literally ‘who-person?’, though it may also be replaced by another form such as Seselwa lekel. Historically, at least some of the synchronically opaque forms in other creoles were derived in a similar manner, e.g. synchronic Sranan suma ‘who’ was originally usuma, from the English forms ‘who?’ and ‘someone’. It is hard to think of any language, however primitive, as lacking question-words, so that the bimorphemic question-words found frequently though not universally in creoles may have been created in the pidgin phase. However, there is no similar necessity for any kind of relative pronoun or relativizing complementizer; indeed, as early-stage pidgins lack any kind of embedding, they would have no place for such a morpheme. It is likely that in their early stages creoles lack any kind of relativizer.
In Hawaiian Creole, the most recently formed, there is no relativizer at the basilectal level (although varieties closer to English have that), so that even if the subject of the clause corefers with the head, there is no overt marker of relativization:

(35) di gai gon lei da vainil bin kwot mi prais [Hawaiian Creole]
     the guy IRR lay the vinyl PST quote me price
     ‘The guy who was going to lay the vinyl had quoted me a price.’

In other creoles, similar structures may occur even though a relativizer exists:

(36) me mu gogo na-mina sa gavi [Annobonese]
     my mother like PL-child be good
     ‘My mother likes children who are good.’

(37) mi miit wan uman a mek ail [Guyanese]
     I meet one woman PROG make oil
     ‘I met a woman who was making oil.’

Absence of relativizer is especially common in existential sentences, e.g. the following from Seselwa:

(38) ti anan en bato ti apel Atelmir ti mouy deor [Seselwa]
     PST have one boat PST call Atelmir PST moor outside
     ‘There was a boat that was called the Atelmir that was moored outside.’

One aspect of relativization which should be touched on here relates to a striking asymmetry between French and English creoles. In French creoles, extraction from subject position (whether for a questioned or a focused constituent) involves use of the relativizer ki; in English creoles, relativizers are barred from this environment:

(39) a. ki moun ki vini? [Haitian]
        who person REL come
        ‘Who came?’
     b. *ki moun vini
     c. suma kon? [Sranan]
        who come
        ‘Who came?’
     d. *suma di kon?
        who REL come

(40) a. se-te Jan ki vini [Haitian]
        be-PST John REL come
        ‘It was John who came.’
     b. *se-te Jan vini
     c. na Jan kam [Sranan]
        FOC Jan come
        ‘It was John who came.’
     d. *na Jan di kam
        FOC Jan REL come

4.6. Movement

4.6.1. NP-movement

NP-movement is severely limited in creoles. Most creoles lack a passive altogether; the construction discussed in section 3.2 discharges the functions of agentless passives. A few creoles (e.g. Papiamentu, Seselwa) have recently developed a rather marginal passive, but it is questionable whether this really forms a part of either the original or the synchronic basilectal grammar; in all probability such forms are derived through contact with European languages that do have passives (it is interesting to note that the Papiamentu passivizing verb wordu is derived from a Dutch loan). DeGraff (1993) has shown that Haitian, at least, has some raising verbs such as genlè ‘seem’. It also seems likely that NP movement is involved in examples like (41a), on the assumption that their underlying structure is that of (41b), where e represents an empty node:

(41) a. di trii plaan [Guyanese]
        the tree plant
        ‘The tree was planted.’
     b. e plaan di trii


4.6.2. WH-movement

In general, focused NPs and PPs in creoles (arguments, in other words) can be freely linked with empty argument positions, subject of course to subjacency:

(42) a. di bai-dem sii di pikin a rood kaana [Guyanese]
        the boy-PL saw the child LOC road corner
        ‘The boys saw the child at the side of the road.’
     b. a di bai-dem sii di pikin a rood kaana
        ‘It was the boys who saw the child at the side of the road.’
     c. a di pikin di bai-dem sii a rood kaana
        ‘It was the child that the boys saw at the side of the road.’
     d. a rood kaana di bai-dem sii di pikin
        ‘It was at the side of the road that the boys saw the child.’

In some languages, a particle usually equivalent to the pre-NP copula precedes the focused constituent (a in Guyanese, na in Sranan, se in Haitian etc.). Extraction of focused constituents from the lower clauses of SVCs is a further possibility:

(43) a. den tyari a nyan go na oso gi Kofi [Sranan]
        they carry the food go LOC house give Kofi
        ‘They took the food home for Kofi.’
     b. na Kofi den tyari a nyan go na oso gi
        ‘It was Kofi they took the food home for.’

Extraction is possible even when the lower clause is overtly tensed:

(44) lekel ou’ n tir larjan ou’ n done avek? [Seselwa]
     who you PROG pull money you PROG give with
     ‘Who have you withdrawn money for?’

This further reinforces the contention that SVCs cannot be regarded as co-ordinate structures, since extraction is never possible from these.

4.6.3. Verb copying
There remains a feature found perhaps everywhere but in Hawaiian Creole which is sometimes referred to as “predicate fronting”, but is perhaps better described as verb copying. The term “predicate fronting” suggests that the whole predicate is fronted, which except in Hawaiian Creole sentences such as (45) is never the case:

(45) no laik plei futbal diz gaiz [Hawaiian Creole]
     NEG like play football these guys
     ‘They don’t want to play football, these guys.’

49. Creole Languages


In creoles generally, the verb alone (or a predicate adjective alone) may be copied into initial position, but the original item always remains in situ:

(46) a. a tiif Jan tiif di mango [Jamaican]
        FOC steal John steal the mango
        ‘John stole the mango.’
     b. *a tiif Jan di mango
     c. *a tiif di mango Jan (tiif di mango)

(47) a. se uve lapot-la te uve [Haitian]
        FOC open door-the PST open
        ‘The door was open!’

     b. *se uve lapot-la

Further, only verbs (including most modals) and predicate adjectives undergo this process; preverbal TMA markers do not do so, either individually or with V:

(48) a. mi bi ta waka [Sranan]
        I PST PROG walk
        ‘I had been walking.’
     b. waka mi bi ta waka
        walk I PST PROG walk
        ‘I had been walking.’
     c. *bi waka mi bi ta waka
     d. *ta mi bi ta waka
     e. *bi ta waka mi bi ta waka
     f. *ta waka mi bi ta waka

Variation exists between creoles and even within creoles (whether dialectally or idiolectally is not yet determined) with regard to whether the copying process applies to all verbs in SVCs or only to the matrix verb. The existence of verb-copying raises at least two interesting questions. One is the question why verbs (heads, rather than maximal projections) are available for focusing. Another is why it is necessary for verbs to be copied: “verb second” or “scrambling” processes in many languages successfully move verbs from their original predicate positions. These issues have to take into consideration the fact that copying is blocked by any prior case of WH-movement in at least some creoles:

(49) a. wooko a bi go luku ka Kofi ta wooko [Saramaccan]
        work he PST go look where Kofi PROG work
        ‘He had gone to look where Kofi was working.’
     b. *wooko a bi go luku naase Kofi ta wooko


This contrast arises because while ka is an adverbial base-generated where it appears in (49a), naase is a question-word that must have been moved from sentence-final position in (49b). It follows from this that verb-copying is still in some sense a movement operation, operating cyclically and therefore blocked if any other constituent occupies an intermediate landing site.

5. Conclusion

The foregoing, while it outlines the core areas of creole grammar, has for reasons of space had to omit mention of many other features with respect to which creoles show striking similarities. Such features include the following: negative indefinite NPs, whether in subject position or elsewhere, are accompanied by verbal negation; there are distinct co-ordinate conjunctions for clauses (usually a reflex of superstrate and) and NPs (usually a reflex of superstrate with); co-ordinates of the type of John went home and had dinner normally require an overt subject for the second conjunct; there are always or almost always different copula forms for NP and locative PP predicates; nonpunctual aspect markers with statives or adjectives indicate ongoing process; questions involve no syntactic differences from affirmatives; verbs of perception take tensed clausal complements with TMA markers and overt nominative subjects.

Frequent claims are nonetheless made in the creole literature that creoles cannot be distinguished from other languages on typological grounds. The motivation for such claims is hard to understand, since the grammatical similarities are much greater than those found in any genetically-related subgroup of languages. When one considers that there is neither a direct genetic link nor (in a large majority of cases) any significant historical contact between these languages (see e.g. Roberts 1998 for a convincing rebuttal of “diffusion” claims with respect to Hawaiian Creole), and that the ancestral languages of their first speakers were extremely diverse typologically, the degree of similarity that prevails is unexpected and demands to be accounted for. Now that Bickerton (2014) has developed an improved version of the linguistic bioprogram hypothesis (Bickerton 1984), the universalist case for creole genesis is strongly reinforced.

6. References (selected)

Ansaldo, Umberto, and Stephen Matthews 2007 Deconstructing creole: the rationale. In: U. Ansaldo, S. Matthews, and L. Lum (eds.), Deconstructing Creole, 1−18. Amsterdam: Benjamins.
Bickerton, Derek 1981 Roots of Language. Ann Arbor: Karoma.
Bickerton, Derek 1984 The language bioprogram hypothesis. Behavioral and Brain Sciences 7: 173−221.
Bickerton, Derek 1988 Creole languages and the bioprogram. In: F. Newmeyer (ed.), Linguistics: The Cambridge Survey 2, 267−284. Cambridge: Cambridge University Press.
Bickerton, Derek 1989 Seselwa serialization and its significance. Journal of Pidgin and Creole Languages 4: 155−183.


Bickerton, Derek 2014 More than Nature Needs: Wallace’s Problem and the Evolution of Mind. Cambridge, MA: Harvard University Press.
Carden, Guy, and William A. Stewart 1988 Binding theory, bioprogram and creolization: evidence from Haitian Creole. Journal of Pidgin and Creole Languages 3: 1−68.
Corne, Chris 1988 Mauritian Creole reflexives. Journal of Pidgin and Creole Languages 3: 69−102.
Corne, Chris, Deirdre Coleman, and Simon Curnow 1996 Clause reduction in asyndetic coordination in Isle-de-France Creole: the ‘serial verb’ problem. In: P. Baker, and A. Syea (eds.), Changing Meanings, Changing Functions, 129−154. London: University of Westminster.
DeGraff, Michel 1993 Is Haitian Creole a prodrop language? In: F. Byrne and J. Holm (eds.), Atlantic Meets Pacific, 71−90. Amsterdam: Benjamins.
Dieck, Marianne 1988 La expresión de la reflexividad en Palenquero. Papia: Revista Brasileira de Estudos Crioulos e Similares, No. 18.
Farquharson, Joseph T. 2007 Typology and grammar: creole morphology revisited. In: U. Ansaldo, S. Matthews, and L. Lum (eds.), Deconstructing Creole, 21−37. Amsterdam: Benjamins.
Grant, Anthony, and Diana Guillemin 2012 The complex of creole typological features: the case of Mauritian Creole. Journal of Pidgin and Creole Languages 27: 48−104.
Hancock, Ian 1980 Lexical expansion in creole languages. In: A. Valdman, and A. Highfield (eds.), Theoretical Orientations in Creole Studies. New York.
Hancock, Ian 1987 A preliminary classification of the Anglophone Atlantic creoles with syntactic data from thirty-three representative dialects. In: G. Gilbert (ed.), Pidgin and Creole Languages: Essays in Memory of John E. Reinecke, 264−333. Honolulu.
Holm, John, and Peter Patrick (eds.) 2007 Comparative Creole Syntax. London: University of Westminster Press.
Kihm, Alain 2008 The two faces of creole grammar and their implications for the origins of complex language. In: R. Eckardt, G. Jäger, and T. Veenstra (eds.), Variation, Selection, Development: Probing the Evolutionary Model of Language Change. Berlin: de Gruyter.
Kouwenberg, Sylvia 1992 From OV to VO: Linguistic negotiation in the development of Berbice Dutch creole. Lingua 88: 263−296.
Lefebvre, Claire (ed.) 2011 Creoles, their Substrates, and Language Typology. Amsterdam: Benjamins.
Markey, Thomas 1982 Afrikaans: creole or non-creole? Zeitschrift für Dialektologie und Linguistik 49: 169−207.
Muysken, Peter 1988 Are creoles a special kind of language? In: F. Newmeyer (ed.), Linguistics: the Cambridge Survey 2: 285−301. Cambridge: Cambridge University Press.
Muysken, Peter 1993 Reflexes of Ibero-Romance reflexive clitic + verb combinations in Papiamentu: Thematic grids and grammatical relations. In: F. Byrne, and D. Winford (eds.), Focus and Grammatical Relations. Amsterdam: Benjamins.


Roberts, Sarah 1998 The genesis of Hawaiian Creole and diffusion. Language 74: 1−39.
Seuren, Peter 1990 Still no serials in Seselwa: a reply to ‘Seselwa serialization and its significance’ by Derek Bickerton. Journal of Pidgin and Creole Languages 5: 271−292.
Smith, Norval S. H., Ian E. Robertson, and Kay Wilkinson 1987 The Ijo element in Berbice Dutch. Language in Society 16: 49−90.
Taylor, Douglas R. 1971 Grammatical and lexical affinities of creoles. In: D. Hymes (ed.), Pidginization and Creolization of Languages, 293−296. Cambridge: Cambridge University Press.

Derek Bickerton, Waialua, Hawaii (USA)

50. Northern Straits Salish

1. Introduction
2. Basic syntactic properties
3. Verbs vs. nouns: the Salish debate
4. Transitivity
5. Arguments, adjuncts and the Pronominal Argument Hypothesis
6. Clause types
7. Abbreviations
8. References (selected)

Abstract
This article provides a brief syntactic sketch of Northern Straits Salish based on the published literature. It illustrates the basic properties of the syntax of a Central Salish language and lays out theoretical issues associated with lexical category distinctions, transitivity, and argument/adjunct distinctions that have been central in the study of Northern Straits and Salish more generally.

1. Introduction

Northern (or North) Straits Salish is a member of the Salish family, which consists of 23 languages spoken along the coast and in the interior of the Canadian province of British Columbia, and in the US states of Washington, Oregon, Idaho, and Montana. All the Salish languages are endangered, and several have been considered sleeping since the 1960s. Northern Straits itself has fewer than 10 fully-fluent elderly speakers, although language maintenance and revitalization efforts in Northern Straits communities are strong and are producing new speakers.


The Salish language family is divided into three sub-groupings: Central Salish, Interior Salish and Tsamosan Salish, and two isolated languages, Bella Coola and Tillamook (see Czaykowska-Higgins and Kinkade 1998). Northern Straits is closely related to Klallam and together they form the Straits sub-grouping of Central Salish. Traditionally the two languages have been spoken on southern Vancouver Island and the Gulf Islands of Canada, on the San Juan Islands and the Olympic Peninsula of the USA, and on the coast of the mainland from White Rock in British Columbia, Canada to Bellingham in Washington State (USA).

From the perspective of linguists, there are 6 mutually-intelligible dialects of Northern Straits, 5 spoken in Canada, one in the USA (see Montler 1999). Each has a linguistic designation and a local name: these are Saanich/SENĆOŦEN, Sooke/T’Sou-ke, Songish (Songhees)/Lekwungen, Samish/Malchosen, Semiahmoo/SEMIYOME, and Lummi/Xwlemichosen. Native speakers of Northern Straits do not have a single name for the language, preferring instead to use local names for each variety. Almost all previously published linguistic research refers to the Northern Straits dialects by their linguistic designations, and since we draw heavily on this published work, we will follow the linguistic practice here. However, Northern Straits communities are increasingly using local names for their languages rather than linguistic ones; thus, in the future we expect the language designations might change in linguistic work as well.

A final point is that each community in which the language is spoken has its own orthography. In the interests of space and consistency, we will use the standard North American Phonetic Alphabet, rather than community orthographies, to transcribe examples in this article. See http://maps.fphlcc.ca/fphlcc/sencoten for more detailed information about names and orthographies.
There is a great deal of similarity in syntactic and morphosyntactic properties throughout the Northern Straits dialects and this chapter therefore makes use of examples from several, focusing on Lummi, Saanich, and Samish, since these are the dialects which have been described and discussed the most. In addition, where appropriate we refer to examples from Klallam, the closely related Straits Salish language, to illustrate constructions. Syntactic research on Northern Straits within the generative paradigm (particularly Principles and Parameters Theory and Minimalism) is found in an extensive body of work spanning approximately 25 years and produced by Demers and Jelinek working separately and jointly on Lummi and Samish; the work of Montler on Saanich and Klallam syntax is focused on detailed descriptions of particular constructions (see References). There are also brief descriptive sketches of syntactic constructions in Galloway (1990) on Samish, Efrat (1969) on Sooke, and Raffo (1972) on Songish. A number of younger scholars have worked with Samish- and Saanich-speaking elders since 2000; their research has focused especially on phonological issues and semantic issues related to aspect (see Kiyota 2008; Leonard 2007; Leonard and Turner 2010; Shank 2001, 2002, 2003a, 2003b; Turner 2005, 2007, 2011). The last 20 years have seen very intense and important research on the syntax and semantics of various languages within the Salish family and we refer readers to Czaykowska-Higgins and Kinkade (1998), van Eijk (2008), and Davis and Matthewson (2009) for comprehensive bibliographies.

In this chapter, we provide a brief syntactic sketch of Northern Straits, based on the published literature, in order to illustrate the basic properties of the syntax of a Salish language. At the same time, we lay out several theoretical issues that have been central in the study not only of Northern Straits, but also in the study of Salish syntax more generally. For reasons of space our theoretical sections are focused primarily on arguments and analysis as they pertain to Northern Straits, even though on some topics there is far more research published on other Salish languages, including the neighbouring language, Halkomelem (see especially work by Gerdts, Hukari, or Wiltschko, listed in van Eijk 2008). Also for reasons of space, we only briefly mention the growing literature on aspect in Northern Straits.

This chapter begins in section 2 with a relatively theoretically-neutral description of morphosyntactically-relevant properties of Northern Straits words, and the basic syntactic constructions that have been identified and described in the literature. Section 3 considers one of the controversial topics in the study of Salish syntax, namely the question of whether Salish languages have a noun-verb distinction. Section 4 turns to a description of transitive and intransitive constructions and theoretical issues associated with transitivity, while section 5 examines the Pronominal Argument Hypothesis, another central topic in the theoretical study of Northern Straits and Salish. Section 6 describes the properties of and research on complex sentences.

2. Basic syntactic properties

Typically Salish languages are predicate-initial and radically head-marking, and this is true of Northern Straits. A simple clause consisting of a single predicate with no overt grammatical marking is generally interpreted as intransitive, non-control, 3rd person singular, perfective. Both pre- and post-predicate (second-position clitic) particles, and various kinds of stem-changing and affixal morphology play crucial roles in the syntax. Thus, (1b) contains a pre-predicate particle, a complex word with a nominalizer prefix, a root meaning ‘break’, a lexical suffix meaning ‘leg’, and two clitics; (1c) illustrates a pre-predicate particle, causative/transitive marking on the predicate, and two clitics.

(1) a. yeʔ. [Saanich]
       ‘He went.’ (Montler 1991: 55)
    b. ʔiʔ s-tkʷ-e´yəcˇ=sən=səʔ.
       ACCOM NMLZ-break-leg=1SG.SBJ=FUT
       ‘I’m going to be limping (as part of the first sockeye ceremony).’ (Montler 1986: 193)
    c. kʷɫ k̓ʷən̓-st-a´ŋ̓əs=ləʔ=sxʷ.
       REAL see.ACT-CAUS-1SG.OBJ=PST=2SG.SBJ
       ‘You showed it to me already.’ (Montler 1986: 210)

As (1b, c) show, predicates can be marked morphologically for a number of syntactically relevant notions. Any grammar of the syntax of a Salish language therefore needs to describe the morphosyntactic properties encoded in words. We discuss these morphosyntactic properties in 2.1, and turn to the structure of simple clauses in 2.2.


2.1. The morphosyntactic properties encoded in words

As in all Salish languages, root morphemes are the only obligatory elements in Northern Straits words. In traditional Salish grammatical descriptions non-root morphemes are divided into grammatical and lexical morphemes. In words, morphemes from particular morphosyntactic categories occupy particular positions with respect to roots and each other. Morpheme order in Northern Straits is similar to that found in most Salish languages (2) (see Czaykowska-Higgins and Kinkade 1998: 23), although Northern Straits has more non-concatenative morphology than (2) allows for, and there is some flexibility in the ordering of some affixes:

(2) Basic morpheme order
    POSS/NMLZ-ASP-LOC-RED~ROOT~RED-PA-LS-TR/INTR/CTL-OBJ-SBJ/POSS-ASP

The lexical content of words is situated primarily in root morphemes, as one would expect, but in addition, Northern Straits, like other Salish languages, has both a small class of lexical prefixes (7 are listed in Montler 1986: 48−53) and a larger class of approximately 60 lexical suffixes. The latter refer to concrete nominal concepts such as body parts or common items such as -iɫcˇ ‘plant’ or -aɫən ‘fish’, but their meanings may be extended to include more abstract notions; for example -əqsən ‘nose’ can also mean ‘point’. Of interest for the syntax is that in some constructions lexical suffixes take on specific thematic roles, as if they were direct objects in transitive constructions, even though the verbal morphology is intransitive. Compare (3a), in which the lexical suffix represents a Theme, to (3b), in which the Theme is represented by an independent word:

(3) a. ye´ʔ t̓θəkʷ-a´s-əŋ. [Saanich]
       go wash-face-MID
       ‘Go wash your face.’
    b. ye´ʔ t̓θe´kʷ-ət tθə ʔən-se´ləs.
       go wash-CTR DET 2SG.POSS-hand
       ‘Go and wash your hands.’ (Czaykowska-Higgins Fieldnotes 2000)

Given examples such as (3a), there has been debate in the general Salish literature about whether lexical suffixes are incorporated nouns, whether they are referential, and whether they are derived syntactically or lexically (e.g., Gerdts 2003). Work on Northern Straits lexical suffixes has largely involved listing and describing the meanings of the suffixes (Pidgeon 1970; Montler 1986: 64−90), although Jelinek and Demers (1994: 715−716) do discuss them briefly, concluding that lexical suffixes are derivational affixes and not incorporated objects.

Syntactic categories which are marked on words in Northern Straits include number, aspect, transitivity (voice and valence), control, and person marking (possessors, objects, and subjects). There is, in addition, an s- prefix which does not fit neatly into any of these categories, but has an important syntactic function, serving to mark subordinate clauses of various types (s- is also a nominalizer and stative marker).


Both reduplicative and infixing morphology distinguish singular and plural entities (e.g, Saanich səl~se´ləs [PL~hand] ‘hands’; Montler 1986: 104), collectivity and iterations of events/actions (e.g., Saanich nəq~nə´q-əŋ [REP~dive-MID] ‘He keeps diving and coming up, diving and coming up’; Montler 1986: 110). The role that number and number agreement play in the syntax of Northern Straits has not been explored in any detail. However, Montler (2003: 130−131) describes constraints on agreement in plurality between predicates and DPs, and within DPs, in Klallam, and it would be worth determining whether similar constraints hold in Northern Straits. The principal aspectual distinction in Northern Straits is between imperfective and perfective (in the literature on Northern Straits, imperfective is also referred to as actual in Thompson and Thompson 1969; Raffo 1971, 1972; Demers 1974; Montler 1986, 1989; Turner 2005, 2007; continuative in Efrat 1969; Galloway 1990; and progressive in Kiyota 2008, while perfective is referred to as non-actual). Perfective is indicated by absence of specific marking and is the unmarked aspect in Salish in general; imperfective is marked by four possible allomorphs, all of which involve stem-changing morphology such as reduplication (cf., Saanich t̓íləm θə Janet [sing DET Janet] ‘Janet sang (a song).’ vs t̓ə~t̓íl̓əm̓ θə Janet [IPFV~sing DET Janet] ‘Janet’s singing.’; p̓ə´kʷ tθə ma´ʔəqʷ [rise.to.surface DET duck] ‘The duck floated to the surface.’ vs p̓e´kʷ-əɫ [rise.to.surface(IPFV)-DUR] ‘rising to the surface now and then [as in a buoy]’; Turner 2007). Northern Straits also has a stative prefix, stative, durative and persistent suffixes (see Montler 1986), and a resultive construction (see Turner 2007). In addition, several pre-predicate particles play a significant, and as yet not fully understood role in creating aspectual distinctions. 
Transitivity (valence) and voice are marked by a system of productive (simple transitive and applicative) suffixes indicating the various relationships between the participants and the action/state being expressed. As in other Salish languages, transitivity interacts with control, the latter being a significant grammatical category in Salish which specifies agent volitionality or control (see Thompson 1985; Czaykowska-Higgins and Kinkade 1998; Kroeber 1999), and is related to aspect (Jacobs 2011). There are four voice distinctions: active is the unmarked voice, while middle, passive and antipassive are marked morphologically by a final suffix -əŋ and are distinguished from each other by the stems to which the suffix is added and by their morphosyntactic context. Transitivity is discussed in more detail in section 4.

Person is the final morphosyntactic category marked on predicates in Northern Straits. There has been controversy in the literature on Salish languages regarding the question of whether person markers are pronouns and hence arguments of the predicate. We consider this issue in section 5. As far as marking is concerned, Northern Straits distinguishes objects by means of suffixes, and subjects by means of affixes and/or second-position clitics. One set of object suffixes occurs only in control transitive stems (i.e. those marked with -t), while the other occurs following other transitivizing suffixes. (4) lists morphemes from the two sets in Saanich/Lummi. Main clauses and subordinate clauses use different subject markers, as seen in (5) (main clause 1st and 2nd person subjects are marked by second-position clitics; in addition, 2nd plural can be indicated by adding hela/helə to a clause). Northern Straits has been argued to have a split-ergative person-marking system in main clauses: 3rd person intransitive subjects and 3rd person transitive objects are marked in the same way (that is, by a 0̸-morpheme), 3rd person control transitive subjects are marked by -əs, and 1st and 2nd person objects and subjects follow a nominative/accusative pattern.

(4) Tab. 50.1: Object suffixes in Saanich/Lummi

          with Control Transitives        with other transitivizing suffixes
          Singular      Plural            Singular        Plural
    1st   -s            -al̓xʷ/-oŋəɫ       -aŋəs/-oŋəs     -al̓xʷ/-oŋəɫ
    2nd   -sə/-s                          -aŋə/-oŋəs
    3rd

(5) Tab. 50.2: Subject markers in Saanich/Lummi

          Main Clause Subject Markers     Subordinate Clause Subject Markers
          Singular      Plural            Singular        Plural
    1st   sən           ɫtə/ɫ             -ən             -əɫtə
    2nd   sxʷ                             -əxʷ
    3rd   -əs / 0̸                         -əs             -əs

(6) illustrates the split-ergative system with examples from Lummi and Saanich (Jelinek and Demers 1983; Jelinek 1996; Montler 1986); all examples except (6e) are from Lummi (Jelinek and Demers 1983: 168, 171):

(6) a. intransitive 3rd subject
       swəyʔqəʔ=0̸.
       man=3.SBJ
       ‘He is a man.’
    b. transitive 1st subject/3rd object
       xcˇ̣i-t=sən tsə swəyʔqəʔ.
       know-CTR=1SG.SBJ DET man
       ‘I know the man.’
    c. transitive 3rd subject/3rd object
       xcˇ̣i-t-(ə)s tsə swəyʔqəʔ.
       know-CTR-3.SBJ DET man
       ‘He knows the man.’
    d. intransitive 1st subject
       swəyʔqəʔ=sən.
       man=1SG.SBJ
       ‘I am a man.’
    e. transitive 3rd subject/1st object
       k̓ʷə´n-ə(t)-s-əs.
       look-CTR-1SG.OBJ-3.SBJ
       ‘He looked at me.’ (Montler 1986: 153)
    f. transitive 1st subject/3rd object
       xcˇ̣i-t=sən tsə swəyʔqəʔ.
       know-CTR=1SG.SBJ DET man
       ‘I know the man.’

As we saw in (5), several of the Northern Straits subject markers are second position clitics. Northern Straits has approximately 16 particles that occur in second-position (see Jelinek 1996; Galloway 1990; Raffo 1972; Efrat 1969; Montler 1986). These particles group into 5 basic categories which express mood, modality, tense, person, aspect, interrogative, and a few other notions. The basic order in the clitic complex is HOST=Mood/Modality=Tense/Subject=Other, although Turner (p.c., 2010) suggests that tense and subject order might be variable, with tense-subject order preferred in past and in second person future cases, and subject-tense order preferred in first person future:

(7) Tab. 50.3: Saanich second-position particles

    Position 1 (Mood):     ə ‘yes/no question’, cˇə ‘command’
    Position 2 (Modality): cˇ̓əʔ ‘evidential’, yəq ‘optative’, yəxʷ ‘conjectural’
    Position 3 (Tense):    ləʔ ‘past’, səʔ ‘future’
    Position 4 (Person):   sxʷ ‘2SG.SBJ’, sən ‘1SG.SBJ’, ɫtə ‘1PL.SBJ’
    Position 5 (Varied):   q̓əʔ ‘emphatic’, kʷəʔ ‘informative’, ʔacˇe ‘request information’, kʷəcˇe ‘explanative’, helə ‘2nd pluralizer’, ʔal̓ ‘limiting’
    (adapted from Montler 1986: 201)

Second-position clitics attach to three different types of hosts which occur in clause-initial position: 1) main clause predicates (8a); 2) initial predicates in complex predicate constructions (8b; section 2.2.2); and 3) adverbial quantifiers (in constructions such as that in 8c):

(8) a. yéʔ=yəq=ləʔ=sən. [Saanich]
       go=OPT=PST=1SG.SBJ
       ‘I ought to go/I wish I’d gone.’ (Montler 1986: 207)
    b. ʔən̓e=ləʔ=sxʷ leŋ-t-ŋ. [Lummi]
       come=PST=2SG.SBJ see-CTR-PASS
       ‘You were visited.’ (literally, ‘come-seen’) (Jelinek 1996: 281)
    c. mək̓ʷ=sxʷ ʔəw̓ ŋa-t=0̸. [Lummi]
       all=2SG.SBJ LNK eat-CTR=3.OBJ
       ‘You ate it/them all.’ (Jelinek 1996: 282)

2.2. The structure of simple clauses

A number of different main clause constructions have been identified throughout the literature on Northern Straits. Since these constructions are representative of the basic syntactic patterns found across the Salish language family, we bring them together in this section.

2.2.1. Basic word order

In simple intransitive clauses in Northern Straits, second-position clitics and nominal subjects follow the main predicate:

(9) a. me´ʔkʷ-əɫ=səʔ=sxʷ. [Saanich]
       hurt-DUR=FUT=2SG.SBJ
       ‘You’ll get hurt.’ (Montler 1986: 212)
    b. t̓íləm=ləʔ=0̸ tsə swiʔqoʔəɫ. [Lummi]
       sing=PST=3.SBJ DET young.man
       ‘The young man sang.’ (Jelinek 1995: 490)

In active transitive sentences default word order is VSO, although subject/object order is free in Lummi (Jelinek 1996: 288). Thus, we find examples like (10a, b) with VSO translations, but we also find examples such as (10c) which are given two possible translations. In passive sentences word order is VS+oblique object:

(10) a. xə´p̓k̓ʷ-t-əs tθə sqéxəʔ tθə st̓θa´m. [Saanich]
        gnaw-CTR-3.SBJ DET dog DET bone
        ‘The dog is gnawing the bone.’ (Leonard Fieldnotes 2008)
     b. xcˇ̣i-t-s tsə swəyʔqəʔ tsə swiʔqoʔəɫ. [Lummi]
        know-CTR-3.SBJ DET man DET young.man/boy
        ‘The man knows the boy.’ (Jelinek and Demers 1983: 168)
     c. t̓əm̓-t-s=0̸ tsə ŋənə tsə swəy̓qəʔ. [Lummi]
        hit-CTR-3.OBJ=3.SBJ DET child DET man
        ‘He_i hit him_j, the child_{i,j}, the man_{i,j}.’ (Jelinek and Demers 1994: 721)
     d. xcˇ̣i-t-ŋ tsə swiʔqoʔəɫ ə tsə swəyʔqəʔ. [Lummi]
        know-CTR-PASS DET young.man/boy OBL DET man
        ‘The boy is known by the man.’ (Jelinek and Demers 1983: 168)

Despite the existence of transitive sentences like (10a−c) containing both an overt lexical subject and an overt lexical direct object, such constructions tend to be avoided, as is the case in many Central Salish languages. Instead, Northern Straits prefers constructions in which there is only one overt non-oblique nominal. In such sentences, the single nominal is interpreted as the subject in intransitive clauses and as the direct object in transitive clauses; following Gerdts’ analysis of the related language, Halkomelem (1988: 57−59), this pattern is often referred to as One Nominal Interpretation:

(11) a. cˇey=0̸ tsə swəyʔqəʔ. [Lummi]
        work=3.SBJ DET man
        ‘The man works.’ (Jelinek and Demers 1994: 718)
     b. xcˇ̣i-t-s tsə swəyʔqəʔ.
        know-CTR-3.SBJ DET man
        ‘He knows the man.’ *‘The man knows him.’ (Jelinek and Demers 1983: 171)

In basic transitive clauses there are two other restrictions on the co-occurrence possibilities of subjects and objects: In Lummi 3rd person pronominal or nominal subjects do not co-occur in constructions with 1st or 2nd person object (in Saanich only 3rd person subjects and 2nd person object combinations are restricted; Montler 1999: 475); in these kinds of cases, passive constructions are used (12):

(12) Active transitive examples [Lummi]
     a. xcˇ̣i-t-oŋəs=sən                    ‘I know you’
     b. xcˇ̣i-t=0̸=sən/sxʷ                   ‘I/you know it’
     c. xcˇ̣i-t-s                           ‘He/she knows it’
     d. *                                  *‘He/she knows you/me’
     e. *                                  *‘The man knows me/you’
     Passive examples
     f. *                                  *‘You are known by me’
     g. *                                  *‘I am known by you’
     h. xcˇ̣i-t-ŋ=sən/sxʷ                   ‘I/you are known’ (by someone)
     i. *                                  *‘It is known by you/me’
     j. xcˇ̣i-t-ŋ=sən/sxʷ ə tsə swəyʔqəʔ    ‘I/you are known by the man’
     (Jelinek and Demers 1983: 168)

In addition, as (12h, i) illustrate, a passive sentence with 1st/2nd person clitic on the predicate is always interpreted with the 1st/2nd person as subject. In order to express 1st/2nd person agents in a passive sentence the emphatic pronominal predicate must be employed in an oblique phrase:

(13) xcˇ̣i-t-ŋ tsə swəyʔqəʔ ə tiʔə ʔes. [Lummi]
     know-CTR-PASS DET man OBL DET 1.PRED
     ‘The man is known by me.’ (Jelinek and Demers 1983: 173)

To account for the restrictions on subjects and objects, Jelinek and Demers (1983) propose that Lummi is governed by an agent hierarchy such that in a main clause the argument of highest rank in the hierarchy must be interpreted as the subject of the clause (Jelinek and Demers [1983] place 3 > NML on the hierarchy, but comparison of examples such as [13] and [12i] suggests that in fact the opposite ranking of NML > 3 might be correct):

(14) Lummi Agent Hierarchy
     1 and 2 > 3 > NML (NML = nominal)
     (Jelinek and Demers 1983: 173)


2.2.2. Complex predicates and pre-predicate particles

Northern Straits makes extensive use of complex predicate constructions. One such construction involves a collocation of two first-order predicates, either of which can occur as a main predicate in simple sentences (cf., [1a], [8a], and [15c]). The first of these (often referred to in the literature as an Auxiliary; for example, Montler 2003) is always intransitive in nature (Jelinek and Demers 1994: 208) while the second may be transitive or intransitive:

(15) a. yeʔ=sən sˇtəŋ. [Saanich]
        go=1SG.SBJ walk
        ‘I’m going for a walk.’ (Czaykowska-Higgins Fieldnotes 2001)
     b. ʔən̓e=ləʔ=sən leŋ-t-oŋes. [Lummi]
        come=PST=1SG.SBJ see-CTR-2SG.OBJ
        ‘I came to see you.’ (Jelinek and Demers 1994: 708)
     c. sˇtəŋ ʔə kʷs sˇcˇəleqəɫ. [Saanich]
        walk OBL DET yesterday
        ‘He walked yesterday.’ (Czaykowska-Higgins Fieldnotes 2001)

The second complex predicate construction involves a collocation of an Adverbial Quantifier and a following first-order predicate. In these constructions the second predicate is preceded by the linking particle ʔuʔ/ʔəw̓, and the second-position clitics attach to the first predicate:

(16) a. yos=sən ʔuʔ yeʔ. [Lummi]
     always=1SG.SBJ LNK go
     ‘I always go.’
     (Jelinek and Demers 2004: 231)

     b. mək̓ʷ=0̸ ʔuʔ pə́q̓.
     all=3.SBJ LNK white
     ‘They are all/completely white.’
     (Jelinek and Demers 2004: 230)

     c. hoy=sən ʔuʔ č-téla.
     complete/all=1SG.SBJ LNK have-money
     ‘I have all the money.’ / ‘Only I have money.’
     (Jelinek and Demers 2004: 229)

All of the initial elements in the constructions in (16) may function as independent predicates in simple sentences (17):


VII. Syntactic Sketches

(17) a. ʔəw̓ hay=sxʷ. [Lummi]
     LNK finish=2SG.SBJ
     ‘You have finished.’ (formula for thanking someone)
     (Jelinek 1995: 516)

     b. ŋən̓=0̸ tsə sčeenəxʷ.
     big/many=3.SBJ DET fish
     ‘They are many, the fish.’
     (Jelinek 1995: 519)

The third type of complex predicate is a Predicate Nominal construction. In such constructions a first-order predicate representing a quality/adjective precedes a second predicate with nominal-like meaning. The first predicate again serves as the host for second-position clitics:

(18) a. ʔəy̓=sxʷ swəy̓qəʔ. [Lummi]
     good=2SG.SBJ male
     ‘You’re a good man.’

     b. ʔəy̓=sxʷ.
     good=2SG.SBJ
     ‘You’re good.’

     c. swəy̓qəʔ=sxʷ.
     male=2SG.SBJ
     ‘You’re a man.’
     (Jelinek 1995: 523−524)

The initial element in a complex predicate construction is distinct from morphemes belonging to a class of so-called pre-predicate particles. While pre-predicate particles, such as kʷɫ in (19), occur in clause-initial position like the initial predicates in complex predicate constructions, they can never serve as hosts to second-position clitics:

(19) kʷɫ čey=ləʔ=sən ʔə tsə s-x̣x̣əɫ. [Saanich]
     PFV work=PST=1SG.SBJ OBL DET NMLZ-sick
     ‘I have worked for a hospital (lit. I have worked for sick people).’
     (Kiyota 2008: 122)

The class of pre-predicate particles is small; these particles tend to express aspectual and modal meanings, although their exact functions are not always clear. Montler (1986) reports six pre-predicate particles for Saanich: kʷɫ, which Montler glosses ‘realized, already’ and which has recently been analyzed by Kiyota (2008) as a perfect marker; s ‘unrealized’; ʔiʔ ‘accompanying’, which expresses simultaneous actions and acts as a conjunction; čəɫ ‘immediate past’; təwə ‘still, yet’; and ʔəw̓ ‘contemporaneous’. Shank (2002) argues that one of the functions of ʔəw̓ is to shape quantificational structure through focus, while Shank (2003b) suggests that in some constructions it combines with ʔal̓ to form a larger complex particle, creating negative constructions with the general meaning ‘not even’. While Kiyota’s and Shank’s work is suggestive, more work remains to be done in determining the exact properties of this class of morphemes.


2.2.3. Determiner Phrases

In Determiner Phrases the determiner always precedes the nominal. Proper nouns are always preceded by determiners within determiner phrases; determiners may stand alone, as in (20c, d), and when they do they mark contrastive reference (Jelinek and Demers 1994: 717):

(20) a. čə́q=0̸ [tsə swə́y̓qəʔ]. [Saanich]
     big=3.SBJ DET male
     ‘The man is big.’
     (Montler 1993: 244)

     b. nə́qʷ-əɫ=0̸ [tiʔə John].
     sleep-DUR=3.SBJ DET John
     ‘John is still sleeping.’
     (Czaykowska-Higgins Fieldnotes 2001)

     c. s-téŋ ʔačə ɫéʔə.
     NMLZ-what REQ DET
     ‘What is this?’
     (Montler 1986: 217)

     d. leŋ-t-0̸=sən kʷsə. [Lummi]
     see-CTR-3.SBJ=1SG.SBJ DET.F
     ‘I saw her, that one.’
     (Jelinek and Demers 1994: 717)

Adjectives, numerals and quantifiers are positioned between the determiner and a nominal in a DP of the form [DET MODIFIER NOMINAL]. (Jelinek and Demers 1994: 730 consider the “modifiers” in the DPs in [21] to be quality, cardinality, or quantifier predicates, in keeping with their assumptions about category-neutrality; see section 3.)

(21) a. siʔém̓=0̸ [tsə čəq swəy̓qəʔ]. [Saanich]
     boss=3.SBJ DET big male
     ‘The big man is boss.’
     (Montler 1993: 245)

     b. čey=0̸ [tsə česəʔ swəy̓qəʔ]. [Lummi]
     work=3.SBJ DET two man
     ‘The two men worked.’
     (Jelinek and Demers 1994: 730)

     c. x̣či-t-0̸=sxʷ [tsə mək̓ʷ nə-sčeləčə].
     know-CTR-3OBJ=2SG.SBJ DET all 1SG.POSS-kin
     ‘You know all my relatives.’
     (Jelinek and Demers 1994: 730)

[DET MODIFIER NOMINAL] DPs contrast with complex predicate nominals, which occur initially in main clauses and take the form [MODIFIER NOMINAL] (22a), and with relative clauses, in which the predicate follows the nominal head (22b):


(22) a. čəq swəy̓qəʔ [tsə siʔém̓]. [Saanich]
     big male DET boss
     ‘The boss is a big man.’
     (Montler 1993: 245)

     b. k̓ʷən-nəxʷ=sən [tsə swəy̓qəʔ ƛ̓iƛ̓əw̓]. [Saanich]
     see-NCTR=1SG.SBJ DET male escaping
     ‘I saw the man who was getting away.’
     (Montler 1993: 252)

Approximately 21 demonstratives/determiners have been documented for the dialects of Northern Straits. They distinguish proximate/distal, visible/invisible, gender, and direction meanings, and they function as articles and/or demonstrative pronominals (see Montler 1986: 225; Jelinek and Demers 1994: 717). In addition, Northern Straits lacks quantificational determiners equivalent to weak quantifiers such as many, few, numerals, etc. Instead such quantificational notions are expressed as open-class, adjective-like predicates, as in Lummi čəsəʔ=0̸ tsə qʷəqʷel̓ [two=3.SBJ DET speak] ‘They are two, the (ones who) spoke.’ (Jelinek 1995: 518−520; see also section 3.1 below). There is currently no evidence that Northern Straits demonstratives/determiners mark definiteness or count versus mass (see Jelinek 1995: 512, 526−530). However, Montler (2007) illustrates a specific/non-specific distinction related to definiteness and a class of definite demonstratives in Klallam, and a mass-count distinction, argued to be very different from that found in English, has been explored for related Halkomelem (Wiltschko 2009).

2.2.4. Possessive constructions

Northern Straits, like other Salish languages, has a set of possessive markers that includes both prefixes and suffixes:

(23) Tab. 50.4: Possessive markers in Northern Straits

     Person   Singular   Plural
     1st      nə-        -ɫtə
     2nd      ʔən̓-
     3rd      -s

These markers are affixed to the possessed nominal, as would be expected in a head-marking language.

(24) a. nə-ten=sxʷ. [Lummi]
     1SG.POSS-mother=2SG.SBJ
     ‘You are my mother.’
     (Jelinek 1995: 496)


     b. men-s tsə swiw̓ləs. [Saanich]
     father-3SG.POSS DET young.man
     ‘The young man is his father.’
     (Montler 1993: 249)

     c. ʔəy̓ tiʔə siʔem=ɫtə. [Saanich]
     good DET boss=1PL.POSS
     ‘Our boss is good.’
     (Czaykowska-Higgins Fieldnotes 2001)

Two types of possessive phrases occur in Northern Straits. In the first type the possessed determiner phrase is marked by a possessive affix and precedes the possessor (25a, b). In the second type the possessor is in an oblique phrase following the possessed determiner phrase, which is optionally marked by a possessive affix:

(25) a. təl̓sət tsə men-s tsə swiw̓ləs. [Saanich]
     dancing DET father-3SG.POSS DET young.man
     ‘The young man’s father was dancing.’
     (Montler 1993: 248)

     b. k̓ʷən-nəxʷ=sən tsə men-s tsə swiw̓ləs.
     see-NCTR=1SG.SBJ DET father-3POSS DET young.man
     ‘I saw the young man’s father.’
     (Montler 1993: 249)

     c. təl̓sət tsə men ʔə ƛ̓ swiw̓ləs.
     dancing DET father OBL DET young.man
     ‘The young man’s father was dancing.’
     (Montler 1993: 250)

The second type of construction cannot occur when the subject of the transitive clause is 1st or 2nd person (see Montler 1993: 251−252 and 1986: 48−50 for other possessive constructions). In Northern Straits, as in other Salish languages, psych predicates such as those indicating like/dislike or desire take the form of nominalizations in which possessive affixes mark subjects (see Jelinek 1995: 497 on s-):

(26) a. nə-s-ƛ̓iʔ=sxʷ. [Lummi]
     1SG.POSS-NMLZ-like=2SG.SBJ
     ‘I like you.’ (literally ‘you are my liking’ or ‘you are what I like’ [ECH/JL])
     (Jelinek 1995: 497)

     b. nə-s-ƛ̓iʔ=0̸ kʷsə qʷaʔ. [Saanich]
     1SG.POSS-NMLZ-like=3.SBJ DET water
     ‘I want water.’
     (Czaykowska-Higgins Fieldnotes 2000)

     c. nə-s-ləl=ləʔ=0̸ kʷə yeʔ-ən. [Lummi]
     1SG.POSS-NMLZ-intend=PST=3.SBJ SUB go-1SG.SBD
     ‘It was my intention to go [that I go].’
     (Jelinek 1995: 497)

     d. nə-s-ƛ̓iʔ=0̸ kʷə nə-s-yeʔ štəŋ. [Saanich]
     1SG.POSS-NMLZ-like=3.SBJ SUB 1SG.POSS-NMLZ-go walk
     ‘I want to go for a walk.’
     (Czaykowska-Higgins Fieldnotes 2000)

Examples (26c, d) contain subordinate clauses that are preceded by a subordinate marker kʷə (sometimes labeled determiner, subordinator, or determiner/complementizer) and that exhibit a subordinate-clause subject marker on yeʔ. In (26d) the subordinate clause is a nominalized construction in which the subject is indicated by a possessive pronoun.

2.2.5. Prepositions, location and direction

In addition to marking possessors and signaling the agent in passive and the patient in antipassive constructions, the oblique particle ʔə is the principal preposition found in Northern Straits. In its prepositional function ʔə can express notions such as location, instrument, and time. Following ʔə, pronominal meanings are expressed by means of pronominal person/number predicates:

(27) a. q̓əp̓-ás-t=sən tsə s-tíqew ʔə tsə sqəlélŋəxʷ. [Saanich]
     tie.up-face-CTR=1SG.SBJ DET NMLZ-horse OBL DET tree
     ‘I tied the horse to a tree.’
     (Montler 1986: 239)

     b. suʔ níɫ=0̸ ʔiʔ kʷíc̓-ət-əs tə sčé·nəxʷ ʔə tsə šípən. [Songish]
     PRT 3.PRED=3.SBJ CNJ butcher-CTR-3.SBJ DET fish OBL DET knife
     ‘She is butchering fish with a knife.’
     (Raffo 1972: 229)

     c. qey̓ləs=0̸ ʔə tiʔə qəyəs. [Lummi]
     sad=3.SBJ OBL DET day
     ‘He is sad today.’
     (Jelinek 1998a: 338)

     d. ƛ̓íw̓=c̓əʔ ʔə tɫ nə́kʷə. [Saanich]
     escape=EVID OBL DET you
     ‘He ran away from you.’
     (Montler 1986: 205)

A second preposition ʔəƛ̓, whose use is much more limited and which may be a concatenation of /ʔə tɬ/ (Montler 1986: 231), is also mentioned for Songish in Raffo (1972: 207);


and for Saanich (Jelinek 1998a: 341, who cites Montler p.c.). In addition, both Lummi and Saanich have a few relational and directional prefixes:

(28) a. ƛ̓i-xʷotqəm=sən. [Lummi]
     to-Bellingham=1SG.SBJ
     ‘I [am going] to Bellingham.’

     b. čə-xʷotqəm=sən.
     from-Bellingham=1SG.SBJ
     ‘I [am] from Bellingham.’
     (Jelinek 1998a: 343)

Montler (2008) provides an extensive description of Klallam serial verb constructions composed of motion and location verbs which indicate motion events. He distinguishes 84 location and 115 directed-motion verbs in Klallam that participate in such constructions. Jelinek (1998a: 342) provides a few examples of similar kinds of locational and directional roots from Northern Straits, without illustrating constructions in which they occur. It is likely that such constructions are found in Northern Straits, given its many similarities to Klallam, but they are yet to be identified and described.

2.2.6. Negation

Davis (2005) identifies three different types of negation constructions in the Salish language family. Northern Straits exhibits two of these patterns: the principal construction takes the form [NEGATIVE (IRREALIS PARTICLE) INDICATIVE CLAUSE] (29a, b) (in [29a] the indicative clause consists of a psych predicate inflected with a possessive marker), while the secondary construction is of the form [NEGATIVE [DET/COMP [NOMINALIZED CLAUSE]]] (29c).

(29) a. [NEG (IRR PRT) INDICATIVE CLAUSE]
     ʔə́wə s nə-s-lál. [Saanich]
     NEG IRR 1SG.POSS-NMLZ-intend
     ‘I didn’t mean to.’
     (Montler 1986: 191)

     b. ʔə́wə=sən s ʔiʔ ɫit̓θ-nəxʷ tθə nə-séləs.
     NEG=1SG.SBJ IRR CNJ cut-NCTR DET 1SG.POSS-hand
     ‘I didn’t cut my hand.’
     (Czaykowska-Higgins Fieldnotes 2001)

     c. [NEG [(DET/COMP) [NOMINALIZED CLAUSE]]]
     ʔə́wə kʷə nə-s-ɫ-q̓íl̓.
     NEG DET/COMP 1SG.POSS-NMLZ-partake-believe
     ‘I don’t believe it.’
     (Montler 1986: 52)

Based on the claim that the negative morpheme is a predicate in the second negation pattern, Davis (2005: 16−17) argues that the second pattern is bi-clausal in other Salish languages in which it occurs, and this is likely true as well for Northern Straits (see Davis 2005 for a semantic analysis of negation constructions in Salish).


2.2.7. Copular constructions

Constructions which are often categorized as copular cross-linguistically fall into two types in Northern Straits. In the first type, which includes possessive, locational and some equational constructions, there is no copular morpheme; instead, an inflected predicate is used. For example, in (30a) we see a possessive sentence in which the possessed kin term is the inflected predicate. In (30b) we see that Northern Straits does not allow sentences in which a locative phrase, ʔə tsə ʔeləŋ ‘at the house’, serves as the predicate; instead a (deictic) predicate leʔ ‘there’ is required in initial position. And in equational constructions such as those in (31) the predicate is expressed by an inflected root:

(30) a. nə-ten=sxʷ. [Lummi]
     1SG.POSS-mother=2SG.SBJ
     ‘You are my mother.’
     (Jelinek and Demers 1994: 708)

     b. leʔ=sən ʔə tsə ʔeləŋ / *ʔə tsə ʔeləŋ=sən.
     there=1SG.SBJ OBL DET house / *OBL DET house=1SG.SBJ
     ‘I am/was there at the house.’
     (Jelinek and Demers 1994: 712)

(31) a. s-ɫeniy̓=sən. [Lummi]
     woman=1SG.SBJ
     ‘I am a woman.’
     (Jelinek and Demers 1994: 712)

     b. ŋən̓=ɫ.
     many=1PL.SBJ
     ‘We are many.’
     (Jelinek 1995: 519)

On the basis of such constructions Jelinek and Demers (1994) conclude that only predicates based on lexical roots can take arguments in Northern Straits. Since copulas have no lexical content as such and are not open-class items, they then suggest that Northern Straits has no copular verbs. However, there is a second type of potentially copular construction in Northern Straits, formed using the morpheme niɫ in predicate position. On the basis of these identificational constructions, illustrated in (32), Kroeber (1999: 370) suggests that niɫ is an identificational copula (see also Shank 2002): in (32a, c), for instance, we see niɫ occurring as the main predicate in a sentence with two Determiner Phrases; (32b) illustrates that this type of identificational construction is ungrammatical without the niɫ predicate.

(32) a. niɫ=0̸ tsə swəy̓qəʔ tsə siʔem. [Lummi]
     3.PRED=3.SBJ DET man DET chief
     ‘The man is the chief.’
     (Jelinek and Demers 1994: 711)


     b. *tsə siʔem tsə swə́y̓qəʔ.
     DET chief DET man
     *‘The man is the chief.’
     (Jelinek and Demers 1994: 711)

     c. niɫ=0̸ tiʔə sƛ̓iƛ̓əqɫ tə nə-ŋə́nəʔ. [Samish]
     3.PRED=3.SBJ DET child DET 1SG.POSS-offspring
     ‘This child is my kid.’
     (Shank 2002: ex. 25)

Shank (2003a) points out that niɫ has two other functions. First, niɫ is the third person singular form in a class of person-number pronominal predicates (Jelinek and Demers 1994 term these deictic predicates; they have also been called predicative pronominals in Montler 1986, and predicative verbs in Galloway 1990). (33) illustrates the Samish paradigm (see Galloway 1990: 33 for all the variant forms in the paradigm):

(33) Tab. 50.5: Pronominal predicate paradigm in Samish [Samish]

             singular      plural
     1st     ʔə́s(ə)        ɫníŋəɫ
     2nd     nə́kʷ          nəkwíl̓iy̓eʔ
     3rd     níɫ           nən̓íʔɫiyeʔ

     (Galloway 1990: 33)

When occurring in oblique constructions, pronominal predicates are preceded by a determiner ([27d] above), but they can also occur as intransitive or transitivized predicates:

(34) a. níɫ=0̸. [Samish]
     3.PRED=3.SBJ
     ‘It’s him, it’s her, that’s it, that’s the one.’
     (Galloway 1990: 31)

     b. ʔəw̓ niɫ-t-əl̓ ʔal̓ tiʔe sqéx̣əʔ. [Saanich]
     LNK 3.PRED-CTR-RECP PRT DET dog
     ‘These dogs are identical.’
     (Shank 2002: ex. 26)

Second, niɫ is the main predicate in cleft constructions of the form níɫ + [cleftee] + [cleft clause]. Shank argues that, although one might assume that an example like (35a) is compatible with the identificational copula analysis (since headless relative clauses can function as full nominals in Northern Straits; Montler 1993), the fact that the determiner in the cleft clause is optional (as in [35b]) suggests that the cleft clause is not a full argument DP but rather a CP. If this is correct, then an analysis in which niɫ is an identificational copula is harder to maintain. Shank (2003a) develops a semantic analysis of niɫ which attempts to unify its three different functions: he treats niɫ as a functor of a type that takes a CP or NP inner argument and a DP subject, with the relation between the DP and the inner argument mediated by a free variable.


(35) a. niɫ=0̸ [kʷsə sqéws] [kʷsə n̓-s-síɫəʔ]. [Samish]
     3.PRED=3.SBJ DET potato DET 2SG.POSS-NMLZ-buy
     ‘It’s the potatoes that you bought.’
     (Shank 2003a: 219)

     b. niɫ=0̸ [kʷsə sqéws] [n̓-s-síɫəʔ].
     3.PRED=3.SBJ DET potato 2SG.POSS-NMLZ-buy
     ‘It’s the potatoes that you bought.’
     (Shank 2003a: 219)

3. Verbs vs. nouns: the Salish debate

An important and long-standing controversy in the study of Salish languages has concerned the question of whether these languages distinguish the categories of Noun and Verb. The Straits Salish languages have played a central role in this debate through the contributions of Jelinek and Demers on Northern Straits (e.g., 1994, 2002, 2004; Jelinek 1998a) and Montler (2003) on Klallam. Most Salish scholars, including Jelinek and Demers (2002, 2004), now conclude that Salish languages do make distinctions in lexical categories. This section highlights several arguments for and against this conclusion, using Straits data. For a historical perspective on the controversy and for more extensive references, see Czaykowska-Higgins and Kinkade (1998) and Davis and Matthewson (2009).

3.1. The evidence for category-neutrality

In Salish languages, roots can be divided into various lexico-semantic classes that roughly correspond to nominal, verbal, adjectival, and adverbial categories, activities, notions, and so on. In spite of the existence of these lexico-semantic classes, however, most early work on Northern Straits proposes that “(…) while there are root classes, there are no lexical items that are exclusively associated with the maximal projections NP and VP” (Jelinek 1998a: 326).

Four principal arguments for the category-neutral hypothesis can be found in the literature on Northern Straits. The first is based on the observation that there is a general absence of robust morphological-distribution tests across the Salish language family to distinguish nouns from verbs (see Demirdache and Matthewson 1995; Montler 2003). Some scholars have argued, for instance, that the prefix s- (which is found in almost all Salish languages and is often analyzed as a nominalizer) derives nouns from verbs, in examples such as (36):

(36) a. s-ʔíɫən [Saanich]
     NMLZ-eat
     ‘food’
     (Montler 1991)

     b. nə-s-ʔíɫən.
     1SG.POSS-NMLZ-eat
     ‘It’s my food.’
     (Montler 1986: 42)

However, there are also words which take s- and which are not derived nouns, as the pair in (37) illustrates:

(37) a. swə́y̓qəʔ ‘man’ [Saanich]
     b. wə́y̓qəʔ ‘baby boy’
     (Montler 2003: 105)

Providing strong arguments for category distinctions based on morphological distribution is difficult in light of such facts.

The second type of argument against lexical category distinctions relates to the observation that in Salish languages all open-class items can act as predicates regardless of their lexico-semantic properties (Kinkade 1983). (38) illustrates that any open-class item can be marked for person, tense and aspect, and can therefore form a finite clause. In (38), words with verbal, nominal, adjectival, adverbial, and other lexico-semantic content are inflected with person and tense enclitics (similar examples can be found in all Northern Straits dialects):

(38) a. t̓iləm=ləʔ=sxʷ. [Lummi]
     sing=PST=2SG.SBJ
     ‘You sang.’

     b. siʔem=ləʔ=sxʷ.
     chief/noble=PST=2SG.SBJ
     ‘You were a chief.’

     c. sey̓siʔ=ləʔ=sxʷ.
     afraid=PST=2SG.SBJ
     ‘You were afraid.’
     (Jelinek and Demers 1994: 698)

     d. mək̓ʷ-t-0̸=ləʔ=sən.
     all-CTR-3.OBJ=PST=1SG.SBJ
     ‘I took all of them/it (totalled).’
     (Jelinek 1998a: 335)

     e. staŋ=ləʔ=sxʷ.
     do.what/something=PST=2SG.SBJ
     ‘What did you do?’

     f. česə=sə=ɫ.
     two=FUT=1PL.SBJ
     ‘We’ll be two (in number).’
     (Jelinek and Demers 1994: 699)

In addition, inflected predicates can be turned into referring expressions in association with a determiner. Thus in (39a) the verb-like root ‘sing’ is preceded by a determiner, while the noun-like root ‘man’ acts as a predicate. The situation is reversed in (39b). In (39c) an adjective-like element is preceded by a determiner:

(39) a. swəy̓qəʔ=0̸ tsə t̓iləm. [Lummi]
     man=3.SBJ DET sing
     ‘The (one who) is singing is a man.’

     b. t̓iləm=0̸ tsə swəy̓qəʔ.
     sing=3.SBJ DET man
     ‘The (one who) is a man is singing.’
     (Jelinek 1990: 179)

     c. tsə sey̓siʔ=ləʔ.
     DET afraid=PST
     ‘The (one who) was afraid.’
     (Jelinek and Demers 1994: 699)

A third type of argument against categorial distinctions is associated with arguments against a copular predicate in Northern Straits. A word like swəy̓qəʔ in (39a) can occupy sentence-initial position, a position normally reserved for predicates. One analysis of this construction would assume that swəy̓qəʔ is a predicate. Another possible analysis, however, would assume that the construction contains a phonologically null copula, and that this copula is the actual predicate; under this second analysis swəy̓qəʔ could be considered a noun. Based on arguments that Northern Straits does not have locational/possessive copular constructions, Jelinek and Demers (1994: 715) assume that the language also does not have a null copula. For them, noun-like units behave like bare open-class lexical items which function as main predicates. On this line of reasoning, then, nouns are not different from verbs in Northern Straits, and so there is no noun-verb distinction.

Jelinek and Demers’s (1994) final argument for category-neutrality is associated with quantification (see also Jelinek 1995, a seminal work on quantification in Salish and cross-linguistically). As mentioned above, there are no quantificational determiners in Northern Straits, and thus Jelinek and Demers assume that there are no D(eterminer)-quantifiers in the language (although there are A(dverbial)-quantifiers; see the examples in [16]). Given that cross-linguistically D-quantifiers take scope over arguments, and as such are associated with NPs, Jelinek and Demers argue that the lack of a noun category in Northern Straits predicts that the language should also lack D-quantification. For them, the absence of quantificational determiners confirms this prediction, which in turn is taken to confirm their hypothesis that the language lacks nouns.
However, their argument is claimed not to hold for other Salish languages (see Matthewson 1998 and references on quantification in van Eijk 2008).

3.2. The evidence for lexical category distinctions

The principal argument for lexical category distinctions in Northern Straits relies on evidence that complex predicate constructions distinguish lexical classes (see section 2.2.2). Montler (2003) proposes this argument for Klallam, and claims that the Northern Straits facts parallel those of Klallam (Montler 2003: 103, 108 fn. 7). Jelinek and Demers (2004) provide some supporting data from Lummi and Samish throughout. Due to a lack of full paradigms for Lummi, Samish or Saanich, we include Klallam examples below in our outline of the Straits Salish argument for lexical categories (see Demirdache and Matthewson 1995 for another line of argumentation).


Complex predicate constructions such as that in (15a), yeʔ=sən štəŋ ‘I’m going for a walk’, provide the basis for Montler’s arguments for the lexical category Verb. Such constructions lack a linking particle. In addition, the second predicate is never a stative (i.e., quality or nominal) predicate. Montler (2003) takes this distribution as a “straightforward syntactic test” for Verb-hood. Lexical items with adjective-like meaning do follow the first predicate, but only when prefixed with the derivational morpheme txʷaʔ- ‘mutative’, frequently glossed as ‘become’ or ‘get’. This latter fact is also taken to be an argument for the existence of a category of Adjectives (see Kiyota 2008, who uses similar arguments from Saanich to distinguish the class of homogeneous states). (40) and (41) provide examples from Klallam (where Klallam hiyáʔ = Saanich yéʔ); according to Montler, native speakers of Klallam interpret examples like (40c) as two sentences:

(40) a. hiyáʔ=caʔn ƛ̓ácu. [Klallam]
     go=1SG.SBJ.FUT fishing
     ‘I’ll go fishing.’

     b. hiyáʔ=u=cxʷ ʔíɫn
     go=Q=2SG.SBJ eat
     ‘Are you going to eat?’

     c. *hiyáʔ=cn ʔə́y̓
     go=1SG.SBJ good
     (Montler 2003: 114−115)

(41) a. ƛ̓áy=cn ʔíɫn. [Klallam]
     again=1SG.SBJ eat
     ‘I ate again.’
     (Montler 2003: 116)
     cf. ƛ̓eʔ=sən ʔəw̓ t̓əm̓-t. ‘I hit him again/I also hit him.’ [Lummi/Samish]
     (Jelinek 1995: 515)

     b. *ƛ̓áy=cn šaʔšúʔɫ
     again=1SG.SBJ happy

     c. ƛ̓áy=cn txʷaʔ-šaʔšúʔɫ.
     again=1SG.SBJ MUT-happy
     ‘I got happy again.’
     (Montler 2003: 116−117)

Complex Predicate Nominal constructions provide evidence for the categories Adjective and Noun. Montler (2003) points out that in constructions such as that in (31) above (Lummi ʔəy̓=sxʷ swəy̓qəʔ ‘You’re a good man.’) the first word always represents a quality and the second is always nominal, never verbal. He suggests that this distribution is sufficient evidence for a category distinction. Additionally, he observes morphological constraints on plural marking in this type of construction. Plural can be marked on the predicate (42a), or on both the predicate and the following nominal (42b); but when the nominal is plural, the predicate must also be marked plural (see [42c]). This plural agreement is also required in determiner phrases, as the comparison between (43a) and (43b) illustrates:


(42) a. čáyq=cxʷ=hay swə́y̓qaʔ. [Klallam]
     big.PL=2SG.SBJ=2PL man
     ‘You are big men.’

     b. čáyq=cxʷ=hay sw̓wə́y̓qaʔ.
     big.PL=2SG.SBJ=2PL men
     ‘You are big men.’

     c. *čə́q=cxʷ=hay sw̓wə́y̓qaʔ
     big=2SG.SBJ=2PL men
     (Montler 2003: 130)

(43) a. k̓ʷə́nnəxʷ=cn cə=čáyq sw̓wə́y̓qaʔ. [Klallam]
     see.3.OBJ=1SG.SBJ DET=big.PL men
     ‘I see the big men.’

     b. *k̓ʷə́nnəxʷ=cn cə=čə́q sw̓wə́y̓qaʔ
     see.3.OBJ=1SG.SBJ DET=big men
     (Montler 2003: 130−131)

In sum, then, there is semantic, syntactic and morphological evidence that points to the existence of lexical categories in the Straits Salish languages. Jelinek and Demers (2002) retract their earlier claims against lexical categories in Northern Straits, citing Distributed Morphology as a framework that can reconcile both those properties of Lummi lexical items which suggest category-neutrality and those which suggest the existence of lexical categories. In this framework, all root morphemes are acategorial at an abstract level, and lexical categories are derived by their interaction with grammatical operators, or functional projections.

4. Transitivity

Interestingly, the Salish noun/verb question was historically related to discussions of transitivity, since Kuipers (1968) argued that transitive/intransitive, and not noun/verb, is the basic grammatical contrast in Salish languages. His arguments were informed by the observation that transitive marking plays a significant role in Salish predicates (see Jelinek 1994 for discussion).

4.1. Constructions dependent on transitive marking

Like other Salish languages, Northern Straits has an extensive system of suffixes distinguishing different types of in/transitive constructions in terms of transitivity, control/volitionality and voice. For example, Saanich roots such as k̓ʷən ‘see’ can be marked as active control transitive (k̓ʷən-ət ‘Look at it’), active non-control transitive (k̓ʷən-nəxʷ ‘See it’) or active causative (k̓ʷən-stxʷ ‘Show something to someone’). Stems not overtly marked for transitivity/control are often interpreted as intransitive and non-control (as well as active), as a comparison of forms based on the root t̓ə́m̓ ‘hit’ illustrates:


(44) a. t̓ə́m̓=ə=sxʷ [Saanich]
     hit=Q=2SG.SBJ
     ‘Did you get hit?’
     (Montler 1986: 175)

     b. t̓əm̓-t-0̸=ləʔ=sən. [Lummi]
     hit-CTR-3.OBJ=PST=1SG.SBJ
     ‘I hit him (on purpose).’

     c. t̓əm̓-nəxʷ-0̸=ləʔ=sən.
     hit-NCTR-3.OBJ=PST=1SG.SBJ
     ‘I hit him (accidentally; finally managed to hit him).’
     (Jelinek 1996: 279)

In simple transitive constructions as in (44b, c) the object of the construction takes on the role of patient/theme, while the subject is the agent. In addition, Northern Straits has a set of three applicative suffixes, -si, -ŋiy and -nəs, which mark the raising of an oblique object to the status of a direct argument with concomitant demotion of the (initial) direct object; the thematic roles of the initially oblique object may be recipient, possessor, goal or stimulus. In (45b), for instance, the 1st person object is the recipient of the action (see Gerdts and Kiyosawa 2010 on applicative types and properties in Salish; cf. Montler 1986, who has different labels for the suffixes).

(45) a. k̓ʷən-sí-(ə)t-0̸=sən. [Saanich]
     look-APPL-CTR-3.OBJ=1SG.SBJ
     ‘I looked at it for him (e.g., a boat he was thinking about buying).’
     (Montler 1986: 170)

     b. yéʔ kʷən-sí-s-əŋ ʔə [kʷsə t̓θán-əŋ qʷaʔ].
     go bring.along-APPL-1SG.OBJ-MID OBL DET cold-MID water
     ‘Go and get me some cold water.’
     (Czaykowska-Higgins Fieldnotes 2001)

Various syntactically intransitive constructions are marked by suffixes as well. In Saanich, for instance, these include the reciprocal markers (-tal reciprocal control, -nəwel reciprocal non-control), the reflexive -sat, and the middle voice suffixes (-əŋ control middle and -naŋət non-control middle), as well as combinations of some of these. Reflexive and reciprocal suffixes are often affixed to transitive stems, and in such cases derive intransitives (but see Montler 1986 on reciprocals). (46) illustrates that in reciprocal constructions participants can be specified using subject and object suffixes, or subject and oblique object:

(46) a. k̓ʷi~w̓ən̓-təl̓-t-áŋəs. [Saanich]
     ACT~fight-RECP-CAUS-1SG.OBJ
     ‘We’re fighting; he’s fighting with me.’

     b. k̓ʷí~w̓ən̓-təl̓=sən ʔə tsəw̓níɫ.
     ACT~fight-RECP=1SG.SBJ OBL he/she/it
     ‘I’m fighting with him.’
     (Montler 1986: 182)


Unlike reflexives and reciprocals, middle voice forms are derived by affixing the middle suffixes to intransitive stems. Middle constructions are thus both morphosyntactically and semantically intransitive; (47b) illustrates that a single determiner phrase is interpreted in such constructions as the subject of the intransitive predicate.

(47) a. mə́t̓-əŋ=sən. [Saanich]
     bend-MID=1SG.SBJ
     ‘I bent.’
     cf. mə́t̓-ət-0̸=sən.
     bend-CTR-3.OBJ=1SG.SBJ
     ‘I bent it.’

     b. q̓ʷəčáxʷ-əŋ tsə nə-ƛ̓és.
     grumble-MID DET 1SG.POSS-stomach
     ‘My stomach’s grumbling.’
     (Montler 1986: 177)

Middle constructions in the closely related language Halkomelem have been shown to have various uses and semantics (Gerdts and Hukari 2006a), but research exploring similar constructions has not yet been done for Northern Straits.

Passive and antipassive constructions contrast with active and middle constructions morphosyntactically and syntactically. For instance, in passive constructions -əŋ follows the transitive marker, and the person and number of the patient are indicated by a subject clitic, as in an intransitive predicate. No object marking occurs on the predicate, but the Agent can be included in the construction as the object of the oblique particle ʔə:

(48) a. nəp-t-ŋ=səʔ=0̸. [Lummi]
     advise-CTR-PASS=FUT=3.SBJ
     ‘He will be advised.’
     (Jelinek 1995: 495)

     b. w~wás-t-əŋ=0̸ tsə swə́yqəʔ ʔə tsə sqéx̣əʔ. [Saanich]
     RES~bark-CTR-PASS=3.SBJ DET man OBL DET dog
     ‘The man got barked at by the dog.’
     (Montler 1986: 180)

In passives, the topicality of the agent is reduced, while the topicality of the patient is increased (see Kroeber 1999: 25−28, who calls these agent demotion constructions). In antipassive constructions, the subject is interpreted as agent and the patient of the action may be marked as an oblique (Kroeber 1999: 31). Northern Straits is similar to Halkomelem, where antipassives may be marked by suffixing either -el̓s or the cognate middle marker -m to an intransitive stem (Gerdts 1988; Kroeber 1999). The first construction in Northern Straits that is explicitly referred to as an antipassive by Jelinek (1995: 495) is (49a), from Saanich, in which the suffix -el̓s is added to an intransitive stem. Similar examples are found in Turner (2011: 151−153), who confirms Montler’s (1986) claim that -el̓s often occurs with structured activities in imperfective forms, while the corresponding -əlaʔ occurs in perfectives. Examples like (49b), which parallel the Halkomelem -m constructions that Gerdts (1988: 148−153) labels antipassive, are also fairly common in Northern Straits. Compare (49b) to a simple transitive construction such as that illustrated in (49c):

(49) a. xʷəl̓k’ʷ-e´l̓s. [Saanich]
roll-ANTIP
‘He’s rolling (a cigarette).’ (Montler 1986: 176)

b. q̓ə´p̓-əŋ=sən ʔə tsə s-cˇa´ɫ.
gather-MID=1SG.SBJ OBL DET NMLZ-firewood
‘I gathered the firewood.’ (Montler 1986: 237)

c. q̓ə´p̓-ət=0̸=sən tsə s-cˇa´ɫ.
gather-CTR=3.OBJ=1SG.SBJ DET NMLZ-firewood
‘I gathered the firewood.’ (Montler 1986: 237)

4.2. Theoretical issues associated with transitivity

As the examples given above illustrate, transitivity is marked overtly on Northern Straits predicates. And, in fact, in the vast majority of cases throughout the Salish language family, predicates which are affixed with object suffixes (and are therefore formally transitive) require an overt transitive or transitivizing suffix. This raises a lexical semantic question about the transitivity of bare roots: is there a lexical distinction between intransitive and transitive roots, or are all roots lexically intransitive, with transitivity always being a morphologically derived property of a stem? This question in turn ties into discussions of derivational approaches to verb meaning and argument structure. Essentially, the overtness of transitive marking on verbs in Salish languages could be argued to be “a window into derivational operations which are largely obscure in more familiar languages” (Davis and Matthewson 2009: 1107; see also Jelinek 1994).

As far as the lexical semantics is concerned, some Salish scholars argue that, in spite of surface morphology, there exist distinct classes of transitive and intransitive verb roots, and that only when roots appear in transitive and intransitive alternating pairs can one argue that the transitive stems are truly derived (see, for example, Gerdts and Hukari 2006a, b, 2012). In contrast, other researchers have suggested that all verb roots are intransitive in Salish languages (see Davis and Matthewson 2009 and references therein). A related issue is whether all verb roots in Salish languages are basically unaccusative, and hence non-control, or whether it is necessary to distinguish a class of intransitive control/unergative roots from a class of intransitive non-control/unaccusative roots. This latter view is in keeping with Thompson’s (1985) work on the Salish category of control, where it is proposed that in Salish languages roots are lexically either control or non-control.
Although Jelinek’s work since the early 1990s has mostly not explicitly argued for either the intransitive or the unaccusative hypotheses, it has in fact advocated an approach to compositionality which assumes that all roots are unmarked for transitivity and never surface independently of values for transitivity or for voice (Jelinek 1996: 279; see also Jelinek 1994, 1995, 1998a, 2000). It further assumes that since both transitivity and voice are overtly marked in Salish clauses, they are functional heads in the syntax. Under this analysis, a functional head TRANS marks the valence of a clause and the degree of volitionality or control, and it entails an internal argument to which it assigns structural case; when there is no overt object marker on the predicate, TRANS entails a 3rd person absolutive argument, which, in the absence of an overt nominal in the clause, is interpreted as referential and definite. The VOICE functional head introduces the subject or external argument and determines the theta-role assigned to the subject, thus also indicating the voice of a clause. In Jelinek (1996, 2000) this analysis of the predicate word is extended to include tense and mood projections and a focus position, which, together with assumptions about predicate-raising, allows for the derivation of the Predicate+Clitic complex. Jelinek (2000: 218−219) illustrates a predicate-raising derivation of nəp-t-0̸=ə=ləʔ=sxʷ [advise-CTR-3.OBJ=Q=PST=2SG.SBJ] ‘Did you advise him?’. Her evidence for the predicate-raising analysis comes from the fact that in complex predicate constructions the second-position clitics attach to the first predicate in the complex predicate (as in [15b] ʔən̓e=ləʔ=sxʷ leŋ-t-oŋes [come=PST=2SG.SBJ see-CTR-1/2.OBJ] ‘I came to see you.’); this indicates that only the first predicate undergoes raising.

5. Arguments, adjuncts and the Pronominal Argument Hypothesis

Syntactic properties of Northern Straits, as described and discussed in the work of Jelinek and of Jelinek and Demers, have played a central role in the development of the Pronominal Argument Hypothesis (PAH), one of the key hypotheses in the study of generative syntax since the mid 1980s. According to the PAH, in Pronominal Argument (PA) languages person-number (subject/object marking) morphemes such as those found on a Salish predicate represent the direct (pronominal) arguments of the verb, rather than agreement morphology. In PA languages, then, any overt DPs are not arguments and do not appear in A(rgument)-positions. Instead, overt DPs are assumed to occur in adjunct positions in sentences. In Jelinek’s work, the PAH is explicitly connected to assumptions about non-configurational languages: since, in PA languages, DPs are adjuncts and adjuncts are not hierarchically positioned with respect to each other, clauses in these languages arguably must exhibit flat structures. In addition, Jelinek (2006) connects the PAH to an explanation of how information structure is represented in a sentence, suggesting that in PA languages morphosyntactic status directly reflects information status, with pronominal arguments representing topical unstressed discourse anaphors, and predicates and DPs representing new information with inherent focus. The PAH has been developed in a series of influential articles by Jelinek and Demers on Lummi especially and on Salish more generally (see, for example, Jelinek and Demers 1994; Jelinek 1995, 2006, among others).
While many Salishanists have assumed that the premises of the PAH are essentially true for Salish languages, the hypothesis has stirred considerable debate in the literature on Salish, with leading scholars like Davis and Matthewson providing compelling arguments against it based on various syntactic diagnostics (for example, Davis and Matthewson 2003, 2009; Davis 2005). In this section, we briefly survey a few representative claims that have been made in favour of the PAH based on Northern Straits evidence as well as a few claims made against the PAH. Given space limitations we cannot do justice to the extremely rich and interesting discussion of this topic.


5.1. Arguments for the PAH

The two most frequently cited arguments in favour of the PAH in Northern Straits are based on surface morphological and syntactic evidence. First, as we saw above in section 2, Northern Straits, like other Salish languages, obligatorily marks both subject and object on predicates in the form of suffixes and clitics. In contrast, overt nominal subject and object DPs are optional in many sentences, and in Northern Straits transitive sentences a nominal subject and a nominal object rarely occur together in the same sentence (see section 2.2.1). If one assumes that subject and object markers are arguments, while nominal subject and object DPs are adjuncts, the obligatory nature of subject/object marking versus the optional nature of DPs would follow from their argument and adjunct status, respectively. Second, if DPs are adjuncts rather than arguments, and, if, therefore, Northern Straits syntax is non-configurational, then we would predict that DPs would not need to occupy different positions in a hierarchically organized syntactic structure and that their relative word order would be free. As example (10c) t̓əm̓-t-0̸=s tsə ŋənə tsə swəy̓qəʔ [hit-CTR-3.OBJ=3.SBJ DET child DET man] ‘He_i hit him_j, the child_i/j, the man_i/j.’ illustrates, each of the DPs in the sentence can be co-indexed with either the subject or the object, so this prediction is upheld. Other surface phenomena invoked as arguments for the PAH in Jelinek and Demers (1994) are that Northern Straits has no free-standing pronouns, no overt Nominative/Accusative distinction in case morphology, and no wh-in-situ constructions. While all these properties are compatible with PA languages, they are not sufficient conditions for arguing that a language is PA, since they hold in languages that for other reasons would not be analyzed as PA.
Several arguments not connected to surface morpho-syntactic phenomena have been set forth in Jelinek and Demers (1994) and in Jelinek (1995, 2006, among others). One that is singled out in Jelinek (2006) is connected to Jelinek’s claim, mentioned above, that Northern Straits and other Salish languages lack D(eterminer)-quantifiers (Jelinek 1995: 487−488). Jelinek’s argument is as follows: D-quantification is selective and “fixes the scope of the quantifier to a particular argument position”; however, in a PA language, only pronouns, and not DPs, are arguments; therefore one would predict that in a PA language D-quantification could not occur; since, in Jelinek’s view, there is no D-quantification in Northern Straits, it must be a PA language. An additional argument set forth in Jelinek (2006) for the Pronominal Argument status of Lummi is based on the claim that in PA languages there are distinct sets of case options for pronouns and for DPs, with pronouns having the kinds of grammatical case that appear on direct arguments, and DPs only appearing as objects of obliques. This claim is illustrated for Lummi with dative constructions which involve the advancement of animate goals to the status of direct object, as in the sentence ʔoŋəs-t-s=ləʔ=0̸ (ʔə tsə scˇeenəxʷ) [give-CTR-3.SBJ=PST=3.OBJ (OBL DET salmon)] ‘He gifted them ([with] a/the salmon/fish).’, where the goal is marked by a 3rd person suffix and the item exchanged is marked as oblique (Jelinek 2006: 277). Crucially, this claim is founded on the assumption that a predicate like ʔoŋəs- ‘give’ in Lummi is not ditransitive, a claim that has not been corroborated for Lummi, although in other Salish languages such roots are ditransitive. It is useful to note that Jelinek and Demers (1994) connect the adjunct status of DPs to the category-neutral hypothesis discussed in section 3, using this connection as evidence for the PAH. They claim that if a language lacks a noun/verb contrast, it must
have only pronominal affixes and clitics in A-positions, since otherwise there would be an infinite regress in argument structure (Jelinek and Demers 1994: 702). There is, however, no necessary connection between the category-neutral hypothesis and the PAH, and such a connection is not made in Jelinek (2006), who argues for the PAH, but not for category-neutrality. In sum, although various arguments in favour of the PAH have been laid out in the research on Northern Straits, none of these arguments provides incontrovertible evidence that the PAH is true for the language.

5.2. Arguments against the PAH

Based on the relationship between the PAH and non-configurationality, the PAH predicts that a putatively PA language like Northern Straits, because it is hypothesized to have a flat structure, should show no asymmetries in the behaviour of arguments and adjuncts or of subjects and objects, and that phenomena which treat collocations of Verb and Object as a constituent should not occur. As it turns out, diagnostics that would test these predictions have yet to be carried out for Northern Straits. However, relevant diagnostics have been tested for other Salish languages, most extensively for St’at’imcets, a North Interior Salish language. According to these tests, both argument/adjunct and subject/object asymmetries exist, and there is evidence for a VP constituent in other Salish languages. For instance, work on wh-movement in St’at’imcets, Okanagan, Secwepemctsin, and Nlhe7kepmxcín, all Interior languages, shows that such movement is sensitive to the Adjunct Island Condition, according to which movement from an adjunct should be impossible. As the examples from St’at’imcets illustrate, not only is movement from an adjunct barred (50b), but crucially, movement from an A-position (50a) is grammatical:

(50) a. sˇwat [kʷu=sˇ-cˇu´t-sˇ k=Eddie [kʷ=a=s qʷal’út]] [St’at’imcets]
who DET=NMLZ-say-3.POSS DET=Eddie DET(NMLZ)=IPFV=3.POSS speak
‘Who did Eddie say/think was speaking?’

b. *sˇwat [kʷu=sˇ-ka-ʕʷu´y̓t’-sˇ-a k=Eddie [ʔi=wa´ʔ=asˇ qʷal’út]]
who DET=NMLZ-CIRC-sleep-3.POSS-CIRC DET=Eddie when(PST)=IPFV=3.CNJ speak
‘Which person did Eddie fall asleep when that person was speaking?’ (attempted)
(Davis and Matthewson 2009: ex. 23−24)

Similar kinds of results are found with other tests: thus, although the PAH predicts that there should be no Condition C effects in PA languages, no strong crossover effects, no weak crossover effects, no variable binding asymmetries, and no superiority effects, evidence from St’at’imcets suggests that all these predictions are incorrect. In addition, Davis (2005) argues that VP-coordination, VP-pronominalization, and VP ellipsis all pick out a VP constituent in a sentence.


In all, Davis and Matthewson (2009: 1114) set out 18 different predictions made by the PAH for languages in the Salish family. These predictions are divided into surface-accessible properties, like the optionality of overt DPs and the absence of VP ellipsis, and surface-inaccessible properties, like adjunct-island and crossover effects. They show that 4 of these 18 predictions are true for St’at’imcets, while 14 turn out to be false. For Northern Straits they suggest, based on evidence in the published literature, that 6 of the predictions are correct, 3 are false, and 9 are untested. For other Salish languages, a large number of the predictions are either false or untested. Crucially, however, Davis and Matthewson claim that all the tested diagnostics which are not surface-accessible have unambiguously proved the PAH predictions to be false, with no variation across the family. Hence, they make the strong claim that “no Salish language is a pronominal argument language.” Certainly their results make very clear that a great deal of work remains to be done on Northern Straits, as well as on other Salish languages, before one can determine whether these languages are in fact consistent with the PAH.

6. Clause types

Complex sentences in Northern Straits are here broadly divided into relative clause, subordinate clause, and cleft constructions. The leading comparative study of complex sentences in Salish, Kroeber (1999), proposes that complex sentences vary across Salish in terms of 1) the nature of introductory particles, 2) the similarities and differences between subordinate and main clauses, and 3) the form of the subordinate clause predicate. While there has been considerable research on these variations and on topics such as movement, long-distance extraction and gapping in the literature on Salish, most of the literature on Northern Straits complex constructions has focused on surface properties of the constructions and has been largely descriptive. Therefore, in this section we report briefly on construction types for relative clauses, subordinate clauses, cleft constructions, and wh-questions (which involve both relative and subordinate clause types of constructions).

6.1. Relative clauses

In early work on Salish there was a certain amount of controversy about relative clauses, with some scholars arguing that they do not exist in Salish languages, on the grounds that there are no constructions which have both relative clause semantics and a unique form, and others arguing that there are true relative clauses in some of the languages. An argument for the existence of relative clauses in Northern Straits is made in Montler (1993). As is typical for Salish, headed relatives in Northern Straits are externally headed, with the (underlined) head preceding the clause; there is no relative pronoun or other specialized connecting particle (ASP in [51] is referred to elsewhere as LNK):

(51) ʔəw̓ xcˇ̣i-t=sən [kʷsə swəy̓qəʔ t̓əm̓-ə(t)-s]. [Saanich]
ASP know-CTR=1SG.SBJ DET man hit-CTR-1SG.OBJ
‘I know the man who hit me.’ (Montler 1993: 256)

Montler (1993) makes the case that relative clauses are distinct from general subordinating constructions, since the latter are introduced by a specialized particle kʷə (see below). Transitive relative clauses provide evidence that relative clauses are dependent rather than main clauses, because their predicate lacks subject agreement. Thus in (52) the 3.SBJ marker is omitted (see Montler 1993: 255−256):

(52) ʔəw̓ xcˇ̣i-t=sən [kʷsə swəy̓qəʔ t̓əm̓-ət]. [Saanich]
ASP know-CTR=1SG.SBJ DET male hit-CTR
‘I know the man who hit it.’
cf. t̓əm̓-ət-əs.
hit-CTR-3.SBJ
‘He hit it.’ (Montler 1993: 254)

As is also typical for Salish, headless relatives are permitted, and, in fact, based on the Pronominal Argument Hypothesis, all of Jelinek’s work on Northern Straits assumes that every DP is a headless relative clause, similar to the headless relative in (53):

(53) ʔəw̓ xcˇ̣i-t=sən [kʷsə t̓ə´m̓-ə(t)-sə]. [Saanich]
ASP know-CTR=1SG.SBJ DET hit-CTR-2SG.OBJ
‘I know the one who hit you.’ (Montler 1993: 260)

If all DPs are headless relatives, then they all have the same syntactic structure, whether they are based on verb-like lexical items as in (54b) or (55), or noun-like lexical items, as in (54a) (see Demirdache and Matthewson 1995):

(54) a. tsə swəy̓qəʔ-0̸
DET man-3.SBJ
‘The (one who is a) man’
[DP the [IP pro is a man]]

b. tsə t̓iləm-0̸
DET sing-3.SBJ
‘The (one who) is singing’
[DP the [IP pro sing]]

Jelinek and Demers (1994: 718), following Jelinek (1987), also assume that relative clauses in Northern Straits are necessarily adjoined rather than embedded in a Pronominal Argument language, since only pronouns are arguments. The examples in (55)−(57) illustrate headed relatives, arranged by role of the head: theme-headed intransitive, patient/object-headed transitive, agent/subject-headed transitive.

(55) k̓ʷən-nəxʷ=sən [tsə swəy̓qəʔ ƛ̓iƛ̓əw]. [Saanich]
see-CTR=1SG.SBJ DET man escape
‘I saw the man who was getting away.’ (Montler 1993: 252)


(56) ʔəw̓ xcˇ̣i-t=sən [kʷsə swəy̓qəʔ t̓ə´m̓-ət-əxʷ]. [Saanich]
ASP know-CTR=1SG.SBJ DET male hit-CTR-2SG.SBJ
‘I know the man who you hit.’ (Montler 1993: 258)

(57) niɫ=0̸ [tsə xcˇ̣i-t-oŋəɫ]. [Lummi]
3.PRED=3.SBJ DET know-CTR-1PL.OBJ
‘There’s the (one that) knows us.’ (Jelinek 1995: 506)

Montler (1993) provides a fuller paradigm of relative clause examples.

6.2. Subordinate clauses

Descriptions of subordinate clauses in Northern Straits Salish divide them into two types. Indicative subordinate clauses are propositional in nature. In such constructions a determiner/demonstrative functions as a complementizer. Predicates are marked by an s- prefix, such that these kinds of constructions in Salish are generally considered to be nominalized (see Kroeber 1999 for discussion of nominalized clauses in Salish). Subjects in such constructions are marked by possessive pronouns, while patients/objects are taken from the standard set of object suffixes. (58) and (59) illustrate such constructions; (59) and (55) contrast subordinate and relative clauses.

(58) ʔəw’ xcˇ̣i-t-0̸=sən kʷə ʔən-s-leŋ-n-oŋəs. [Lummi]
LNK know-CTR-3.OBJ=1SG.SBJ DET/COMP 2SG.POSS-NMLZ-see-NCTR-1/2.SBJ
‘I know it, that you saw me.’ (Jelinek and Demers 1994: 722)

(59) k̓ʷən-nəxʷ=sən tsə swəy̓qəʔ kʷə s-ƛ̓iƛ̓əw̓-s. [Saanich]
see-NCTR=1SG.SBJ DET male DET/COMP NMLZ-escaping-3.POSS
‘I saw the man (when he was) getting away.’ (Montler 1993: 252)

Hypothetical constructions use the same determiner/complementizer as nominalized constructions. They are not nominalized, however, and subjects are indicated by means of a set of subordinate subject markers (Demers and Jelinek 1982; Kroeber 1999; Jelinek 1994; Jelinek and Demers 1994; Montler 1986, 1993).

(60) cˇte-t-ŋ=sən kʷə yeʔ-əs. [Lummi]
ask-CTR-PASS=1SG.SBJ DET go-3.SBD
‘I was asked if he went.’ (Jelinek and Demers 1994: 723)

(61) k̓ʷən-nəxʷ=sən tsə swəy̓qəʔ kʷə ƛ̓iƛ̓əw̓-əs. [Saanich]
see-CTR=1SG.SBJ DET male SUB escaping-3.SBD
‘I’ll see the man if he’s getting away.’ (Montler 1993: 253)


There has been very little, if any, systematic discussion in the literature of theoretically interesting issues associated with the above types of subordinate clauses in Northern Straits. This is clearly a fertile area for study.

6.3. Cleft constructions

Cleft constructions are common in Salish languages, and are salient in discourse. Most work on clefts has focused on describing their morphosyntactic form (as Kroeber 1999 points out), with comparatively little work across the family on more subtle syntactic and semantic properties of these constructions. These kinds of constructions in Central Salish have been most discussed in the work of Gerdts (1988) on Halkomelem, Kroeber (1999), and Jacobs (1992) on Squamish. Davis, Matthewson and Shank (2004) examine clefts in the Samish dialect of Northern Straits and in St’at’imcets. Kroeber distinguishes two types of clefts: introduced clefts [copula-like predicate + cleftee + (DET) + headless relative clause residue], and bare clefts or Nominal Predicate Constructions (NPCs) [nominal predicate + DET + relative clause residue] (Kroeber 1999: 264; Davis, Matthewson and Shank 2004: 100−103). Samish examples of NPCs are given in (62), and of introduced clefts in (63). Note that in both types of clefts, subordinate subject markers are used in the residue; note also that in NPCs an overt nominal may or may not head the clausal residue (cf. 62a, b), while in introduced clefts, an overt nominal may not head the residue (cf. 63a, b):

(62) a. laʔsn [kʷsə t̓s-ət-s kʷsə Richard]. [Samish]
plate DET break-CTR-3.SBJ DET Richard
‘What Richard broke was a plate.’

b. xʷənítəm [kʷsə sɫe´niʔ leŋ-n-ən].
white.person DET woman see-NCTR-1SG.SBJ
‘The girl that I saw was a white person.’ (Davis, Matthewson and Shank 2004: 102)

(63) a. niɫ kʷsə laʔsn [kʷsə t̓s-ət-s kʷsə Richard]. [Samish]
3.PRED DET plate DET break-CTR-3.SBJ DET Richard
‘It was a plate Richard broke.’

b. *niɫ kʷsə xʷənítəm [kʷsə sɫe´niʔ leŋ-n-ən].
3.PRED DET white.person DET woman see-NCTR-1SG.SBJ
‘It was a white person − the girl that I saw.’ (Davis, Matthewson and Shank 2004: 103)

Davis, Matthewson and Shank (2004) examine the syntax and the semantics of cleft constructions and make two claims: first, they argue that in NPCs the clausal residue is a full headed relative clause (as seen above), while in introduced clefts the residue is a bare CP; and second, they demonstrate that this syntactic difference is not reflected in semantic differences (for example, neither construction involves an existential presupposition) or in discourse usage.


As in many Salish languages, cleft constructions in Northern Straits occur in wh-questions. Such constructions in the Lummi dialect have been analyzed by Jelinek (1998b) as predicates which raise in the syntax in the same way as all other Lummi predicates. The behaviour of wh-words in Lummi fits cross-linguistically into a class of languages, including Mandarin, which have wh-clefts and question particles (see Jelinek 1998b: 263). Below are recently elicited examples of wh-questions from Saanich. Two involve cleft constructions with either a relative clause or a nominalized clause (64a, b). (64c) illustrates a third kind of wh-question structure with the wh-word cˇənteŋ ‘when’, which involves neither a relative nor a subordinate cleft-like construction, but instead uses a sentence type (not seen above) which requires the conjunctive particle ʔiʔ.

(64) a. ste´ŋ=0̸=ʔacˇə kʷsə qʷa´qʷə-t-xʷ [Saanich]
what=3.SBJ=REQ DET drink-CTR-2SG.SBJ
‘What did you drink?’

b. ste´ŋ=0̸=ʔacˇə kʷ ʔən-s-ʔíɫən
what=3.SBJ=REQ DET 2SG.POSS-NMLZ-eat
�‘What are you going to eat?’

c. cˇənte´ŋ=0̸ ʔiʔ ye´ʔ=sən sˇtə´ŋ
when=3.SBJ ACCOM go=1SG.SBJ walk
‘When do I go for a walk?’ (Leonard Fieldnotes 2010)

Research is currently underway to explore restrictions on the distributions of these three types of constructions. Although this chapter has shown that much is known about the basic syntax of Northern Straits, it also makes clear that, even on topics as relatively easy to study as wh-questions, much is still to be learned, both in the interests of pure research on syntax and in the interests of the communities whose ancestral language is one of the dialects of Northern Straits.

7. Abbreviations

In this paper, we use the following abbreviations in addition to the LGR conventions:

ACCOM ‘accompanying’
ACT ‘actual aspect’
ASP ‘aspect’
CIRC ‘circumfix’
CNJ ‘conjunctive’
CTL ‘control’
CTR ‘control transitive’
EVID ‘evidential’
LNK ‘linking particle’
LS ‘lexical suffix’
MID ‘middle’
MUT ‘mutative’
NCTR ‘noncontrol’
NML ‘nominal’
OPT ‘optative’
PA ‘primary affix’
PRT ‘particle’
REAL ‘realized’
RED ‘reduplication’
REP ‘repetitive’
REQ ‘request information’
SBD ‘subordinate subject’
SUB ‘subordinate clause marker’

Since our data are taken from different sources, we have normalized some transcriptions and glossing: We transcribe stress on examples only if it is marked in the source; stress is usually penultimate. We transcribe apostrophes as [ʔ], [cə] as [tsə], and [lə] as [ləʔ]. We normalize some translations − e.g., compare (11a) ‘The man works’ with Jelinek and Demers’ (1994) translation ‘He works, the (one who is a) man’. We label person morphemes as Subject or Object rather than as absolutive, ergative, nominative or accusative (cf. Jelinek and Demers 1983 and elsewhere). In all other cases, we follow source transcriptions and glosses.

Acknowledgements

We would like to acknowledge the speakers of SENĆOŦEN who shared their language and time with us. Without their generosity and guidance, we would not be able to pursue this kind of work. To the members of the W̱SÁNEĆ communities, thank you for teaching us how to present ourselves and our work. HÍSW̱KE, HÁLE SI,IÁM. For comments, discussion and assistance we are grateful to Donna Gerdts, Tom Hukari, Tibor Kiss, Dave McKercher, an anonymous reviewer, and especially to Claire Turner and Henry Davis. Janet Leonard’s research has been funded in part by a SSHRC Doctoral Fellowship; Ewa Czaykowska-Higgins’s by UVIC-IRG and SSHRC-CURA Grants and by a SSHRC-SRG (awarded to Henry Davis).

8. References (selected)

Czaykowska-Higgins, Ewa, and M. Dale Kinkade
1998 Salish languages and linguistics. In: Ewa Czaykowska-Higgins and M. Dale Kinkade (eds.), Salish Languages and Linguistics: Theoretical and Descriptive Perspectives, 1−68. Berlin/New York: Mouton de Gruyter.
Davis, Henry
2005 On the syntax and semantics of negation. International Journal of American Linguistics 71: 1−55.
Davis, Henry, and Lisa Matthewson
2003 Quasi-objects in St’at’imcets: on the (semi)independence of agreement and case. In: Andrew Carnie, Heidi Harley and Mary Ann Willie (eds.), A Festschrift for Eloise Jelinek, 80−106. (Linguistik Aktuell 62.) Amsterdam: John Benjamins.
Davis, Henry, and Lisa Matthewson
2009 Issues in Salish syntax and semantics. Language and Linguistics Compass 3(4): 1097−1166. Oxford: Blackwell Publishing.
Davis, Henry, Lisa Matthewson, and Scott Shank
2004 Clefts vs. nominal predicates in two Salish languages. In: Donna B. Gerdts and Lisa Matthewson (eds.), Studies in Salish Linguistics in Honour of M. Dale Kinkade, 100−117. (University of Montana Occasional Papers in Linguistics 10.) Missoula, MT.
Demers, Richard A.
1974 Alternating roots in Lummi. International Journal of American Linguistics 40: 15−21.


Demers, Richard A., and Eloise Jelinek
1982 The syntactic functions of person marking in Lummi. In: Working Papers of the 17th International Conference on Salish and Neighbouring Languages, 24−47. Portland, OR.
Demirdache, Hamida, and Lisa Matthewson
1995 On the universality of syntactic categories. Proceedings of the North East Linguistics Society 25: 79−94.
Efrat, Barbara S.
1969 A grammar of non-particles in Sooke: A dialect of North Straits Salish. Ph.D. dissertation, Department of Linguistics, University of Pennsylvania.
Galloway, Brent D.
1990 A Phonology, Morphology and Classified Word List for the Samish Dialect of Straits Salish. (Canadian Ethnology Service Mercury Series Paper 116.) Hull, Quebec: Canadian Museum of Civilization.
Gerdts, Donna B.
1988 Object and Absolutive in Halkomelem Salish. New York: Garland Publishing.
Gerdts, Donna B.
2003 The morphosyntax of Halkomelem lexical suffixes. International Journal of American Linguistics 69: 345−356.
Gerdts, Donna B., and Thomas E. Hukari
2006a The Halkomelem middle: A complex network of constructions. Anthropological Linguistics 48(1): 44−81.
Gerdts, Donna B., and Thomas E. Hukari
2006b Classifying Halkomelem causatives. In: Papers for the 41st International Conference on Salish and Neighbouring Languages, 129−145. (University of British Columbia Working Papers in Linguistics 11.) Vancouver, BC.
Gerdts, Donna B., and Thomas E. Hukari
2006c A closer look at Salish intransitive/transitive alternations. Proceedings of the Thirty-second Annual Meeting of the Berkeley Linguistics Society, 417−426. Berkeley, CA: University of California.
Gerdts, Donna B., and Kaoru Kiyosawa
2010 Salish Applicatives. (Brill’s Studies in the Indigenous Languages of the Americas.) Netherlands: Brill Academic Publications.
Jacobs, Peter
1992 Subordinate clauses in the Squamish language. M.A. thesis, Department of Linguistics, University of Oregon.
Jacobs, Peter
2011 Control in Skwxu7mesh. Ph.D. dissertation, Department of Linguistics, University of British Columbia.
Jelinek, Eloise
1987 Headless relatives and pronominal arguments: A typological survey. In: Paul Kroeber and R. E. Moore (eds.), Native American Languages and Pronominal Arguments, 136−148. Bloomington: Indiana University Linguistics Club.
Jelinek, Eloise
1990 Quantification in Straits Salish. In: Working Papers of the 25th International Conference on Salish and Neighbouring Languages, 177−196. Vancouver, B.C.
Jelinek, Eloise
1994 Transitivity and voice in Lummi. Presented at the 29th International Conference on Salish and Neighbouring Languages, Pablo, Montana.
Jelinek, Eloise
1995 Quantification in Straits Salish. In: Emmon Bach, Eloise Jelinek, Angelika Kratzer, and Barbara Partee (eds.), Quantification in Natural Languages, 487−540. Dordrecht: Kluwer.


Jelinek, Eloise
1996 Definiteness and second position clitics in Straits Salish. In: Aaron Halpern and Arnold Zwicky (eds.), Approaching Second: Second Position Clitics and Related Phenomena, 271−297. (Center for the Study of Language and Information Lecture Notes 61.) Stanford: CSLI Publications.
Jelinek, Eloise
1998a Prepositions in Northern Straits Salish and the noun/verb question. In: Ewa Czaykowska-Higgins and M. Dale Kinkade (eds.), Salish Languages and Linguistics: Theoretical and Descriptive Perspectives, 325−346. Berlin/New York: Mouton de Gruyter.
Jelinek, Eloise
1998b Wh-clefts in Lummi. In: Working Papers of the 33rd International Conference on Salish and Neighbouring Languages, 257−265. Seattle, WA.
Jelinek, Eloise
2000 Predicate raising in Lummi, Straits Salish. In: Andrew Carnie and Eithne Guilfoyle (eds.), The Syntax of Verb Initial Languages, 213−233. Oxford: Oxford University Press.
Jelinek, Eloise
2006 The pronominal argument parameter. In: Peter Ackema, Patrick Brandt, Maaike Schoorlemmer and Fred Weerman (eds.), Arguments and Agreement, 261−289. Oxford: Oxford University Press.
Jelinek, Eloise, and Richard A. Demers
1983 The agent hierarchy and voice in some Coast Salish languages. International Journal of American Linguistics 49: 167−185.
Jelinek, Eloise, and Richard A. Demers
1994 Predicates and pronominal arguments in Straits Salish. Language 70(4): 697−736.
Jelinek, Eloise, and Richard A. Demers
2002 A note on “psych” nouns in Lummi. In: Working Papers for the 37th International Conference on Salish and Neighbouring Languages, 181−187. (University of British Columbia Working Papers in Linguistics 9.) Vancouver, B.C.
Jelinek, Eloise, and Richard A. Demers
2004 Adverbs of quantification in Straits Salish and the LINK ’u’. In: Donna B. Gerdts and Lisa Matthewson (eds.), Studies in Salish Linguistics in Honour of M. Dale Kinkade, 224−234. (University of Montana Occasional Papers in Linguistics 10.) Missoula, MT.
Kinkade, M. Dale
1983 Salish evidence against the universality of ‘noun’ and ‘verb’. Lingua 60: 25−39.
Kiyota, Masaru
2008 Situation aspect and viewpoint aspect: from Salish to Japanese. Ph.D. dissertation, Department of Linguistics, University of British Columbia.
Kroeber, Paul
1999 The Salish Language Family: Reconstructing Syntax. Lincoln: The University of Nebraska Press.
Kuipers, Aert
1968 The categories verb-noun and transitive-intransitive in English and Squamish. Lingua 21: 610−626.
Leonard, Janet
2007 A preliminary account of stress in SENĆOŦEN (Saanich/North Straits Salish). Northwest Journal of Linguistics 1(4): 1−59.
Leonard, Janet, and Claire K. Turner
2010 Predicting the shape of SENĆOŦEN imperfectives. In: David Beck (ed.), A Festschrift for Thom Hess on the Occasion of his Seventieth Birthday, 82−113. Bellingham: Whatcom County Museum Publications.
Matthewson, Lisa
1998 Determiner Systems and Quantificational Strategies: Evidence from Salish. The Hague: Holland Academic Graphics.







Ewa Czaykowska-Higgins, Victoria (Canada)
Janet Leonard, Victoria (Canada)

51. Syntactic Sketch: Bora

1. Introduction
2. Noun classes and noun class agreement
3. Basic clause structure
4. Valence adjusting mechanisms and verbalization
5. Person and number
6. Noun phrase structure
7. Subordinate clauses
8. Contrastive focus and contrastive negation
9. Reference tracking and discourse conjunctions
10. An Amazonian perspective
11. Abbreviations
12. References (selected)

Abstract

The Amazonian language Bora is predominantly head-final and has a consistently nominative-accusative alignment of arguments. Its syntactic structure is crucially shaped by a complex system of nominal classification, which divides the nominal lexicon into 68 noun classes, based on natural gender for animates and physical shape for inanimates.



Noun classes are obligatorily marked on modifiers of the noun as well as on some verbal predicates. Noun classes are also centrally involved in reference tracking and in the formation of discourse conjunctions.

1. Introduction

1.1. Bora, its speakers, description, and documentation

Bora is spoken in a number of communities in the Northwest Amazon region, including locations in Southern Colombia and Northern Peru, by a total of about 2,500 speakers (Lewis et al. 2013). Together with its relatively close sister language Muinane (Vengoechea 1997, 2005; Walton et al. 2000), it forms the small Boran linguistic family. A genealogical relation to the Witotoan languages − Witoto proper, Ocaina, and Nonuya − proposed by Aschmann (1993; see Kaufmann 1994 for a critical review) has recently been refuted (Echeverri and Seifart 2011). Since the mid-20th century, Bora has been undergoing a process of language shift towards local Spanish, and this shift is already at an advanced stage in many of the communities. There is nevertheless overall little influence of Spanish on the linguistic structure of Bora, apart from a few loanwords and possibly some calques.

We owe most descriptive work on Bora to Wesley Thiesen (Thiesen 1996; Thiesen and Thiesen eds. 1998; Thiesen and Weber 2012). Seifart (2005) focuses on a Bora dialect called Miraña, spoken in Colombia. The linguistic and cultural practices and traditions of Bora (and neighboring speech communities) have been documented in an annotated multimedia corpus (Seifart, Fagua, Gasché, and Echeverri eds. 2009). In the current chapter, data are represented in phonological transcription, following the phonological analysis of the Miraña dialect (Seifart 2005: 31−39). Morphosyntactically, Bora dialects are so similar that the analyses presented here are valid for all dialects of the Bora language, including Miraña.

1.2. Overview of Bora grammar

Bora forms are affected by a number of (morpho)phonological processes, including progressive palatalization and various processes involving syllable weight alternations (e.g. vowel quantity reduction). Bora has a tone system consisting of two level tones, with the low tone as the marked value. This system is mainly used for marking grammatical structures, such as genitive constructions, nominalization, and subordination (Weber and Thiesen 2001). Many suffixes carry floating tones that are realized on preceding syllables. Surface tone patterns are restricted by a rule according to which two adjacent syllables may not bear low tones, except at the end of certain phrases. This restriction leads to massive tone sandhi in the form of blocking and delinking of tones, and adds to the complexity of the derivation of surface tone patterns.

Bora morphology is almost exclusively suffixing, relatively polysynthetic, and agglutinating. Nouns are distinguished from verbs through word-class specific morphology,



although nominalization and verbalization (marked by a number of derivational suffixes) are possible. Many word-formation patterns, such as compounds, are right-headed. This extends to syntactic genitive constructions. Except for genitive constructions, syntactic phrases are characterized by a relatively free order of constituents. Arguments in a clause and modifiers in noun phrases are often syntactically optional. There is abundant obligatory morphological marking of categories such as case, number, and noun class. Bora is basically an OV language, and, as expected of this kind of language, modifiers predominantly precede their head. Bora combines head marking in the form of subject marking on verbs, i.e. cross-reference, with dependent marking in a case marking system. The alignment of arguments is nominative-accusative throughout. Many morphosyntactic processes are sensitive to animacy.

Noun classes (including natural gender for animates and about 67, mostly shape-based classes for inanimates) are a pervasive feature of Bora grammar. Noun class is obligatorily marked on most nouns and on virtually all modifiers and determiners of the noun, as well as on pro-forms and relative clauses. Relative clauses and other subordinate clauses also play an important role in the expression of adjective-like meanings and in the formation of adverbial modifiers. Many of the morphosyntactic characteristics of Bora, e.g. nominal classification and evidentiality marking, are typical of Amazonian languages generally, and in particular of Northwest Amazonian languages.
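The surface tone restriction mentioned in the overview (no two adjacent low-toned syllables, except at the end of certain phrases) can be sketched as a small well-formedness check. This is purely an illustrative formalization, not part of the chapter: the function name and the encoding of tones as 'H'/'L' strings are this sketch's own, and it deliberately ignores floating tones and the blocking/delinking processes that derive actual surface patterns.

```python
# Illustrative sketch (not from the source): a minimal well-formedness check
# for Bora surface tone patterns. It encodes only the restriction stated in
# the text: two adjacent syllables may not bear low (L) tones, except at the
# end of certain phrases (modeled here as an optionally licensed final LL).

def tone_pattern_ok(tones, phrase_final_ll_allowed=False):
    """tones: sequence of 'H'/'L', one tone per syllable."""
    for i in range(len(tones) - 1):
        if tones[i] == 'L' and tones[i + 1] == 'L':
            # An LL sequence is tolerated only phrase-finally, and only for
            # the phrase types that license it.
            if phrase_final_ll_allowed and i == len(tones) - 2:
                continue
            return False
    return True

print(tone_pattern_ok(['H', 'L', 'H', 'L']))       # True:  no LL anywhere
print(tone_pattern_ok(['H', 'L', 'L', 'H']))       # False: medial LL
print(tone_pattern_ok(['H', 'L', 'L'], True))      # True:  licensed final LL
```

The point of the sketch is only that the constraint is a surface filter on adjacent syllables; the real derivation of Bora tone patterns (floating tones, sandhi) is far more involved, as the text notes.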

2. Noun classes and noun class agreement

2.1. Animate noun classes

Animate nouns are assigned on a purely semantic basis to five classes that combine natural gender with number marking: masculine and feminine singular and dual, and animate plural (example [1]). Masculine is the unmarked value. Animate nouns may or may not be overtly marked for noun class, but all noun classes are realized in agreement (see section 2.4) (all examples are from Bora and taken from the author's fieldwork data).

(1) a. ími-ːbɛ aːbó-ːbɛ
       good-CL.M.SG heal.NMLZ-CL.M.SG
       ‘a good doctor’

    b. ími-dʒɛ taːbó-dʒɛ
       good-CL.F.SG heal.NMLZ-CL.F.SG
       ‘a good female doctor’

    c. ími-mɯ́tsi taːbó-mɯ́tsi
       good-CL.M.DU heal.NMLZ-CL.M.DU
       ‘two good doctors’

    d. ími-mɯ́pɨ taːbó-mɯ́pɨ
       good-CL.F.DU heal.NMLZ-CL.F.DU
       ‘two good female doctors’

[Bora]



    e. ími-mɯ taːbó-mɯ
       good-CL.ANIM.PL heal.NMLZ-CL.ANIM.PL
       ‘good doctors’

2.2. Inanimate noun classes

Inanimate nouns are classified by a large and heterogeneous set of class markers that are defined by exclusive occurrence in a number of morphosyntactic contexts, among them numerals. This set includes the default, general inanimate class marker -nɛ (example [2a]), which is by far the most frequent morpheme of Bora, and 66 specific class markers, which encode primarily shape-related properties of nominal referents, such as dimensionality and arrangement. Among these are phonologically simple, semantically relatively general, and frequent forms (examples [2a−f]), as well as phonologically complex, semantically specific, and less frequent forms (examples [2g−k]).

(2) a. -nɛ        ‘inanimate’
    b. -ko        ‘one-dimensional, pointed object’
    c. -hɨ        ‘two-dimensional, round object’
    d. -ʔo        ‘three-dimensional, oblong object’
    e. -ha        ‘cover’
    f. -hɯ        ‘tube-shaped’
    g. -ʔaːmɨ     ‘leaf; two-dimensional, flexible object’
    h. -paːhɨ     ‘cavity’
    i. -tsoːʔo    ‘medium-sized palm tree’
    j. -roʔdʒo    ‘completely twisted, long and thin object’
    k. -tsaːragʷa ‘unordered fibers with an upward orientation’

Noun class is overtly marked on most inanimate nouns, where class markers usually fulfill a derivational function. It is not uncommon that one and the same noun root productively combines with a number of different class markers, forming different nouns (examples [3a−d]). One of the effects of the derivational use of class markers is unitization, i.e. the transformation of grammatically non-countable nouns (example [3a]) into grammatically countable nouns (examples [3b−d]) (see section 5.2).

(3) a. ɯ́hɨ
       banana
       ‘banana (substance), banana plants, fruits, etc.’

    b. ɯ́hɨ-ko
       banana-CL.pointed
       ‘banana plant’

    c. ɯ́hɨ-ʔo
       banana-CL.oblong
       ‘banana fruit’


    d. ɯ́hɨ-dʒíːhɯ
       banana-CL.powder
       ‘banana powder (ground dried banana)’

2.3. Noun class agreement

Noun class agreement is obligatory on one type of finite verb form, numerals, and subordinate verbs used as adjectives (illustrated in example [4]), as well as virtually all other nominal expressions (possessive pronouns, third-person pronouns, quantifiers, etc.), and relative clauses.

(4) a. íhka-ko tsa-ko mɯ́hɯː-ko pihhɯ́-ko.
       COP-CL.pointed one-CL.pointed be_big.SUB-CL.pointed fish.NMLZ-CL.pointed
       ‘There is one big fishing rod.’

    b. íhka-ʔo tsa-ʔo mɯ́hɯː-ʔo ɯ́hɨ-ʔo.
       COP-CL.oblong one-CL.oblong be_big.SUB-CL.oblong banana-CL.oblong
       ‘There is one big banana.’

Agreement with inanimate nouns can be marked with the corresponding specific class marker, as in examples (4a−b) and (5a), as well as with the general inanimate class marker (examples [5b−d]), on any agreement target. There is no syntactic restriction concerning which agreement target should mark agreement by which type of class marker. The choice depends, partially at least, on discourse factors (see section 10).

(5) a. íhka-hɨ tsa-hɨ ɯgʷáː-hɨ.
       COP-CL.disc one-CL.disc metal-CL.disc
       ‘There is one ax.’

    b. íhka-hɨ tsa-nɛ ɯgʷáː-hɨ.
       COP-CL.disc one-CL.INAN metal-CL.disc
       ‘There is one ax.’

    c. íhka-nɛ tsa-hɨ ɯgʷáː-hɨ.
       COP-CL.INAN one-CL.disc metal-CL.disc
       ‘There is one ax.’

    d. íhka-nɛ tsa-nɛ ɯgʷáː-hɨ.
       COP-CL.INAN one-CL.INAN metal-CL.disc
       ‘There is one ax.’

The general inanimate class marker -nɛ, as a default, is also used for agreement marking with non-unitized, bare noun roots (example [6a]), with coordinated inanimate nouns that belong to different noun classes (example [6b]), and when an inanimate subject cannot be further specified, as in meteorological verbs (example [6c]).

(6) a. áːkitɛ́-nɛ ɯ́hɨ.
       fall-CL.INAN banana
       ‘The banana (substance), banana plants, fruits (etc.) fell down.’

    b. áːkitɛ́-nɛ mahtʃó-gʷaː gʷájːba-ɯ.
       fall-CL.INAN scissors-CL.plank.COORD string-CL.round
       ‘The scissors and the string fell down.’

    c. adʒɛ́-nɛ.
       rain-CL.INAN
       ‘It rains.’

Every Bora noun takes a fixed agreement pattern, which determines its noun class. In the case of class-marked inanimate nouns, such as ɯgʷáː-hɨ (metal-CL.disc) ‘ax’, this agreement pattern consists of a specific class and the general inanimate class, for instance, {-hɨ CL.disc, -nɛ CL.INAN}. Importantly, no other specific class marker may be used for agreement marking with a given inanimate noun of this type (examples [7a−b]), even if in a given discourse situation it would be semantically more appropriate, e.g. if an avocado fruit happens to be particularly oblong (example [7b]). This shows that the use of noun class markers in expressions such as numerals does not simply respond to semantic characteristics of referents, but that agreement marking is morphosyntactically constrained and necessarily redundant, both key characteristics of “canonical agreement” (Corbett 2008: 23−24, 26−27). This also justifies the use of the term “(noun) class marker”, rather than (non-agreeing) “classifiers” (Grinevald and Seifart 2004).

(7) a. tsa-ʔba kóːhɯ-ba
       one-CL.thing avocado-CL.thing
       ‘one avocado (fruit)’

    b. *tsa-ʔo kóːhɯ-ba
       one-CL.oblong avocado-CL.thing
       Intended meaning: ‘one avocado (fruit)’
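The fixed agreement pattern just described can be thought of as a lexical constraint, sketched here as a small illustrative model. Nothing in this sketch comes from the chapter itself: the toy lexicon, the function name, and the collapsing of phonological marker variants (e.g. -ʔba with -ba) are simplifying assumptions. The idea it demonstrates is only that each inanimate noun licenses exactly its own specific class marker plus the general marker -nɛ on agreement targets, regardless of the referent's actual shape.

```python
# Illustrative sketch (not from the source): noun class agreement as a fixed
# lexical pattern. Each inanimate noun lists the class markers that may appear
# on its agreement targets: its own specific class marker plus the general
# inanimate marker -nɛ. The two entries cover examples (5) and (7); forms are
# simplified (phonological variants of a marker are collapsed).

LEXICON = {
    'ɯgʷáː-hɨ': {'-hɨ', '-nɛ'},   # 'ax', class CL.disc
    'kóːhɯ-ba': {'-ba', '-nɛ'},   # 'avocado (fruit)', class CL.thing
}

def agreement_ok(noun, target_class_marker):
    """True if the class marker on an agreement target (e.g. a numeral)
    is licensed by the noun's fixed agreement pattern."""
    return target_class_marker in LEXICON[noun]

print(agreement_ok('kóːhɯ-ba', '-ba'))  # True:  tsa-ʔba kóːhɯ-ba, cf. (7a)
print(agreement_ok('kóːhɯ-ba', '-ʔo'))  # False: *tsa-ʔo kóːhɯ-ba, cf. (7b)
print(agreement_ok('ɯgʷáː-hɨ', '-nɛ'))  # True:  default agreement, cf. (5b-d)
```

The lookup deliberately makes no reference to the referent's shape: that is exactly the "morphosyntactically constrained and necessarily redundant" character of canonical agreement that the text contrasts with non-agreeing classifier systems.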

2.4. Semantic motivation of noun classes

As mentioned above, the noun class assignment of animate nouns is always semantically motivated. In most derived inanimate nouns, the semantic basis of the noun class is also clearly recognizable. In these cases, the semantic basis of the noun class corresponds to the semantic contribution of the class marker to the derived noun, as can be observed in examples (3a−d), above. For instance, the semantic basis of the noun class defined by -ʔo, ‘oblong shape’, is clearly recognizable in the meaning ‘banana (fruit)’ (example [3c]). But there are also inanimate nouns which are not assigned to noun classes on a semantic basis, in the sense that the semantic basis of the noun class is not recognizable in the meaning of the noun (examples [8a−b]). For instance, the semantic basis of the noun class defined by -hɨ, ‘disc-shaped’, is not recognizable in the meaning ‘palm tree’ (example [8a]).

(8) a. tsa-hɨ kóːmɨ-hɨ
       one-CL.disc palm-CL.disc
       ‘palm tree’

    b. tsa-gʷa kɯ́ːhɯ-gʷa
       one-CL.plank fire-CL.plank
       ‘fire’

The following test can be used in Bora to assess the semantic motivation of the noun class assignment of a given noun: if a (non-agreeing) predicate nominal containing the corresponding class marker can acceptably be used to attribute the properties denoted by that class marker to the referent of the subject noun phrase, then the noun class assignment is semantically motivated (example [9]). Semantically opaque noun class assignment leads to the unacceptability of such a construction (example [10]). Semantic motivation of noun class assignment is probably a matter of degree rather than a categorical distinction, but this test provides at least a heuristic for identifying basic differences in noun class assignment.

(9) ɯ́hɨ-ʔo pá-ʔo-dɯ́ nɛ́ː-nɛ.
    banana-CL.oblong complete-CL.oblong-EQUA seem-CL.INAN
    ‘A banana is like an oblong one.’

(10) *kóːmɨ-hɨ pá-hɨ-dɯ́ nɛ́ː-nɛ.
     palm-CL.disc complete-CL.disc-EQUA seem-CL.INAN
     Intended meaning: ‘The palm tree is like a round and flat one.’

The noun class assignment system for Bora inanimate nouns is thus mixed, with predominantly semantic and some formal assignment for individual noun classes. This is comparable to noun class assignment in Bantu languages, and also to gender assignment in European languages, which likewise involves semantic and formal assignment. For instance, the assignment of the German noun Frau ‘woman’ to feminine gender is strongly semantically motivated (i.e. corresponds to the semantic basis ‘female’), while the assignment of Tafel ‘blackboard’ to feminine gender is not. The difference is that the assignment of inanimate nouns in languages like German is at best weakly semantically motivated, while most inanimate nouns in Bora are assigned to noun classes on a clearly recognizable semantic basis, comparable to gender assignment to most human nouns in German.

3. Basic clause structure

Clauses are formed either with a lexical verb as predicate (section 3.1) or with an optional copula verb as predicate (section 3.2). Grammatical roles are marked by a case system (section 3.3), and most tense, aspect, and mood categories are expressed by second-position clitics (section 3.4).



3.1. Main clause predicates

Verbal predicates of main clauses are formed either with the predicate marker -ʔi (example [11a]) or with a class marker, which cross-references the subject or stands in for it (examples [11b−c]).

(11) a. náːni tsáː-ʔi.
        my.uncle come-PRED
        ‘My uncle came.’

     b. náːni tsaː-ːbɛ.
        my.uncle come-CL.M.SG
        ‘My uncle came.’

     c. tsaː-ːbɛ.
        come-CL.M.SG
        ‘He came.’

When a class marker is used for cross-reference in a main clause predicate, an overt subject noun phrase can precede the predicate (as in example [11b]), be omitted (as in example [11c]), or follow the predicate (as in example [12a]). A predicate that is formed with -ʔi requires an overt subject noun phrase (example [12b]), which must precede the predicate in this construction (example [12c]). Personal pronouns in subject function procliticize to the predicate (example [12d]; see further section 5.1).

(12) a. tsa-ːbɛ náːni.
        come-CL.M.SG my.uncle
        ‘My uncle came.’

     b. *tsáː-ʔi.
        come-PRED
        Intended meaning: ‘(Someone) came.’

     c. *tsáː-ʔi náːni.
        come-PRED my.uncle
        Intended meaning: ‘My uncle came.’

     d. o=tsáː-ʔi.
        1SG=come-PRED
        ‘I came.’

The predicate marker -ʔi does not have a cross-referencing function. It can be used with subjects of any person, number, or noun class. Predicates with -ʔi are typically used when a participant is newly introduced and mentioned with a full noun phrase. Participants which are already established are usually tracked with a cross-referencing class marker. Structures that consist of a verb stem and a class marker for subject cross-reference constitute the minimal clause in Bora.



3.2. Copula clauses

Copula clauses are used to express four kinds of relations: equivalence (example [13]), localization (example [14]), attribution (example [15]), and possession (example [16]). Possessors of inalienable possession, which are marked for accusative case in this construction (example [16a]), are distinguished from possessors of alienable possession, which are marked differently (example [16b]). The copula verb can be omitted in these clauses.

(13) gʷáródʒiːʔo ájβɛ́hɯ-ːbɛ ihká-ʔi.
     proper_name chief-CL.M.SG COP-PRED
     ‘Gwáródʒiːʔo is the chief.’

(14) táj-nadʒɛ tɛ́koːmí-ri ihká-ʔi.
     POSS.1SG-sister town-LOC COP-PRED
     ‘My sister is in town.’

(15) diː-bɛ ajnɯ́mɯ́náa-hpi ihká-ʔi.
     3-CL.M.SG white_people-CL.M.SG COP-PRED
     ‘He is white.’

(16) a. oː-kɛ ɯ́hka-ʔa ihká-ʔi.
        1SG-ACC beard-CL.oblong COP-PRED
        ‘I have a beard.’

     b. mɯ́mɯ́ʔpɨ-dí dzoʔβɯ ihká-ʔi.
        1DU.F.EXCL-POSS manioc_flour COP-PRED
        ‘We (two females) have manioc flour.’

3.3. Grammatical roles and case marking

3.3.1. Core arguments

Subject noun phrases are unmarked but may be cross-referenced by class markers in verbs (see examples [11−12], above). The functions of other arguments are expressed by case markers on noun phrases. The overt realization of arguments as noun phrases is syntactically optional, except for subjects of predicates formed with the predicate marker -ʔi. The alignment of arguments is nominative-accusative throughout. Animate objects of monotransitive predicates are marked for accusative, while inanimate objects of monotransitive predicates are unmarked, i.e. there is differential object marking (examples [17a−b]).

(17) a. ó=ajhtɯ́mɨ́-ʔí piʔmɯ́i-kɛ.
        1SG=see-PRED proper_name-ACC
        ‘I saw Piʔmɯ́i.’

     b. ó=ajhtɯ́mɨ́-ʔí ráːta.
        1SG=see-PRED tin_can
        ‘I saw a tin can.’



The alignment of object arguments in transitive clauses follows a primary vs. secondary object pattern (Dryer 2007: 255−256), at least for causativized verbs and verbs of transaction (Seifart 2013). Primary objects, i.e. objects of monotransitive clauses and the most recipient-like arguments of ditransitive clauses, are marked for accusative (examples [18a] vs. [18b−c]). Secondary objects, i.e. the theme arguments of ditransitive predicates, are marked with the allative case (examples [18b−c]). This differs from the direct vs. indirect object pattern found, for instance, in English, where the theme argument aligns with objects of monotransitive clauses, while recipients are marked differently. Allative (and ablative, see below) case marking is also sensitive to animacy in that the marker -di- is additionally used with animate nouns (example [18b]; see example [19b], below, for the ablative).

(18) a. dʒoʔmái ɨːtɛ́-ʔi okáhi-kɛ.
        proper_name see-PRED tapir-ACC
        ‘Dʒoʔmái saw a tapir.’

     b. piʔmɯ́i ɨ́ːtɛ-tsó-ʔi okáhi-dí-βɯ dʒoʔmái-kɛ.
        proper_name see-CAUS-PRED tapir-ANIM-ALL proper_name-ACC
        ‘Piʔmɯ́i showed the tapir to Dʒoʔmái.’

     c. oː-kɛ áhkɯ-ːbɛ bájnɛ-hɯ́-βɯ.
        1SG-ACC give-CL.M.SG tobacco-CL.tube-ALL
        ‘He gave me a cigarette.’
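The case-marking pattern described in this section can be condensed into a small rule table, sketched below purely for illustration (the role labels and the function are this sketch's own; the markers themselves − unmarked subjects, -kɛ for accusative, -βɯ for allative with -di- added for animates − follow the description in the text).

```python
# Illustrative sketch (not from the source): the nominative-accusative,
# primary/secondary-object case-marking pattern as a simple rule table.

def case_marker(role, animate):
    """Return the case suffix for a core argument, per the pattern described
    in the text: unmarked subjects; differential (animacy-driven) accusative
    on primary objects; allative on secondary objects, with the animacy
    marker -di- added for animate nouns."""
    if role == 'subject':
        return ''                          # subjects are unmarked
    if role == 'primary_object':
        return '-kɛ' if animate else ''    # differential object marking
    if role == 'secondary_object':
        return '-dí-βɯ' if animate else '-βɯ'
    raise ValueError(f'unknown role: {role}')

print(case_marker('primary_object', animate=True))    # '-kɛ',   cf. (17a)
print(case_marker('primary_object', animate=False))   # '',      cf. (17b)
print(case_marker('secondary_object', animate=True))  # '-dí-βɯ', cf. (18b)
```

The sketch abstracts away from tone and from phonological variation in the suffixes; its point is only the alignment logic: recipients pattern with monotransitive objects (primary objects), while themes of ditransitives are set apart (secondary objects).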

3.3.2. Obliques

There are a number of oblique cases, including various locative cases, the instrumental, and the sociative case. The ablative case is used for expressing the source of an action or event (examples [19a−d]). This includes static location that involves protrusion (example [19d]). The allative case, besides marking secondary objects, can also express the goal of an action or event (see example [21], below).

(19) a. iβá íhagʷa tsiβá-ʔi há-tɯ.
        proper_name chair bring-PRED house-ABL
        ‘Iván brought the chair from the house.’

     b. ó=oːβɛ́-ʔi mɛ́ːni-dí-tɯ.
        1SG=fill_oneself-PRED wild_boar-ANIM-ABL
        ‘I ate a lot of wild boar.’ (lit. I filled myself from wild boar)

     c. oːʔí-ːbɛ máhtsiβá-ʔi báhɯ́ pajnɛ́-tɯ.
        jaguar-CL.M.SG sing-PRED bush inside-ABL
        ‘The jaguar is roaring in the bush.’ (i.e. can be heard from inside the bush)

     d. ɯ́mɛ́-ʔɛ-tɯ gʷáboʔhɯ́kɯnɯ́-hɨ ɯ́gʷaː-hɨ.
        wood-CL.tree-ABL be_stuck-CL.disc metal-CL.disc
        ‘The ax is stuck in the tree.’ (i.e. protruding from the tree)



The case marker -ri is used for instruments (example [20a]) and static locations other than those that involve protrusion (example [20b]). Further locative relations are expressed by joining a locative noun and a noun denoting the ground in a genitive construction (examples [20]−[21]) (see section 6.1, below), which can additionally be case-marked, like báhɯ́ pajnɛ́-tɯ (bush inside-ABL), literally ‘from the inside of the bush’, in example (19c).

(20) a. o=dóː-ʔi táj-ʔóhtsɨ-gʷáː-nɛ-ri.
        1SG=eat-PRED POSS.1SG-hand-CL.plank-PL-INS
        ‘I ate the fish with my fingers.’

     b. pájhtɛʔɨ́ːgʷa-ri boʔdó-gʷa ihká-ʔi.
        harbor-LOC paddle.NMLZ-CL.plank COP-PRED
        ‘The paddle is in the harbor.’

(21) píko-ːbɛ íːnɯ-hɨ́ ʔadʒɯ́-βɯ́.
     put-CL.M.SG earth-CL.disc top-ALL
     ‘He put (it) on the ground.’

Other oblique cases are the sociative, expressing a person or thing in whose company an action is carried out (example [22]), the benefactive (example [23]), and the equative (example [24]).

(22) í-pahkó-ɯ-ma tsáː-ːbɛ.
     3.POSS-bag-CL.round-SOC come-CL.M.SG
     ‘He came with his bag.’

(23) ó=dzɨ́hɨ́-βɛ-tsó-ʔi okáhi-kɛ taj-náːdʒɛ́-dʒiːʔɛ.
     1SG=death-VBLZ.become-CAUS-PRED tapir-ACC POSS.1SG-sister-BEN
     ‘I killed a tapir for my sister.’

(24) íhtʃi-ːbɛ amó-ːbɛ́-dɯ.
     swim-CL.M.SG fish-CL.M.SG-EQUA
     ‘He swims like a fish.’

3.4. Second-position tense, aspect, and mood clitics

Many tense, aspect, and mood distinctions are expressed by enclitics that attach to the first (possibly internally complex) constituent of a clause (examples [25]−[27]). Several of these clitics can be combined (example [25]) with no strictly fixed order, although there are strong tendencies in their ordering. The quotative and inferential markers (examples [25], [26]) constitute an evidentiality system in which a clause that is unmarked for these categories is usually interpreted as conveying information for which the speaker has direct evidence (example [27]).

(25) [móːá ɯ́níɯ]-rí=βá=pɛ pɛ-ʔíhká-mɛ́.
     river edge-LOC=QUOT=REM go-REP-CL.ANIM.PL
     ‘It is said that they walked again and again along the river.’

(26) ɨ́ɨ́kɯ́i=ʔáːka tsáː-ːbɛ.
     quick=INFR come-CL.M.SG
     ‘He must have come quickly.’ (speaker has indirect evidence)

(27) tsáʔá=iːkɛ diː-ːbɛ pɛ́ː-tɯ́-nɛ.
     NEG=PRPT 3-CL.M.SG go.SUB-NEG-CL.M.SG
     ‘He has not gone yet.’ (speaker has direct evidence)

4. Valence adjusting mechanisms and verbalization

Causativization is a productive valence-increasing derivational process in Bora. In this process, the subject is demoted to a primary object, and the causer is introduced as a new constituent in the subject function (examples [28a−b]). When causativization is applied to transitive verbs (see examples [18a−b], above), the primary object of the underived monotransitive predicate becomes an allative-marked secondary object in the ditransitive, causative predicate.

(28) a. tsɨ́ɨ́mɛ kɯgʷá-ʔi.
        children sleep-PRED
        ‘The children sleep.’

     b. ó=kɯ́gʷa-tsó-ʔi tsɨ́ɨ́mɛ-kɛ.
        1SG=sleep-CAUS-PRED children-ACC
        ‘I put the children to sleep.’

Valence-decreasing derivations applied to transitive verbs are reflexive (example [29]) and reciprocal (example [30]) marking. In both cases, the object of the transitive predicate is eliminated, resulting in intransitive clauses. The reflexive marker can also function as a passive marker (example [29c]).

(29) a. gʷahpi pítʃóɯhká-ʔi íhagʷa.
        man wipe-PRED chair
        ‘The man wiped the chair.’

     b. gʷajpi pítʃóɯ́hká-meí-ʔi.
        man wipe-REFL-PRED
        ‘The man wiped himself.’

     c. íhagʷa pítʃóɯ́hká-meí-ʔi.
        chair wipe-REFL-PRED
        ‘The chair was wiped.’

(30) a. táj-naʔbɛ́-mɯ náːni-kɛ kábórikó-ʔi.
        POSS.1SG-brother-CL.ANIM.PL my.uncle-ACC beat-PRED
        ‘My brothers hit my uncle.’

     b. táj-naʔbɛ́-mɯ kábóríkó-hkatsí-ʔi.
        POSS.1SG-brother-CL.ANIM.PL beat-RECP-PRED
        ‘My brothers hit each other.’



Nouns can be verbalized by three suffixes: one expresses that the subject of the derived verb acquires the quality or state expressed by the verbalized noun (example [31a]), another expresses possession (example [31b]), and a third that the subject acts upon the entity denoted by the verbalized noun (example [31c]).

(31) a. ó=dzɨ́hɨ-βɛ́-ʔi.
        1SG=death-VBLZ.become-PRED
        ‘I am dying.’

     b. ó=gʷaʔdáʔi-βá-ʔi.
        1SG=rattle-VBLZ.have-PRED
        ‘I have a rattle.’

     c. ó=koːmɨ́-nɯ́-ʔi.
        1SG=milpeso-VBLZ.do-PRED
        ‘I fetched milpeso palm fruits.’

5. Person and number

5.1. Person

Person is marked in Bora by two types of expressions. The first is a small set of monosyllabic personal pronouns for first and second person singular, plus one form which, in the absence of additional marking (see below), refers to first person plural inclusive, i.e. including the addressee (examples [32a−c]). When used as subjects, these forms procliticize to predicates of main and subordinate clauses, but they can also be used as free forms and receive case marking. When a third person subject of a subordinate clause is coreferential with that of the corresponding main clause, a marker that could be called a long-distance reflexive is procliticized to the subordinate verb (i= ‘3.SUB’ in example [33]; note that subordination is marked by low tone on the proclitic).

(32) a. mɛ-kɛ ó=ɨːtɛ́-ʔi.
        1PL-ACC 1SG=see-PRED
        ‘I see us.’

     b. o-kɛ ɯ́=ɨːtɛ́-ʔi.
        1SG-ACC 2SG=see-PRED
        ‘You see me.’

     c. ɯ-kɛ mɛ́=ɨːtɛ́-ʔi.
        2SG-ACC 1PL=see-PRED
        ‘We see you.’

(33) a. i=tsáː-bɛ pɛ́-koː-ːbɛ.
        3.SUB=come-CL.M.SG go-PFV-CL.M.SG
        ‘He, who came, has already gone.’



     b. i=tsáː-mɛ pɛ-kóː-mɛ.
        3.SUB=come-CL.ANIM.PL go-PFV-CL.ANIM.PL
        ‘They, who came, have already gone.’

Additional person categories are expressed by another set of polysyllabic, free personal pronouns. These include dual (masculine and feminine) and plural forms, and first person exclusive vs. inclusive forms. First and second person non-singular subject pronouns are used in addition to the procliticized form mɛ= (examples [34a−b]), which is underspecified for person and is interpreted as first person plural inclusive in the absence of the polysyllabic, free personal pronouns (see example [32c]). The polysyllabic personal pronouns are separate constituents in the clause, and other constituents may occur between them and the verb, e.g. second-position clitics (example [34b]).

(34) a. ámɯʔtsi táʔdi-ma íbíi mɛ́=mɛːnɯ́-ʔi.
        2DU.M my.grandfather-SOC coca 1/2PL=make-PRED
        ‘You two with my grandfather (i.e. you and my grandfather) are making coca.’

     b. mɯʔtsí=pɛ mɛ́=ɯhkɯ́-ʔi amómɛ-mɛ-kɛ.
        1PL.M.EXCL=REM 1/2PL=catch-PRED fish-CL.ANIM.PL-ACC
        ‘We two caught fish.’

5.2. Countability and number marking

Most Bora nouns are count nouns. However, when an inanimate noun root can be used in a bare form without a class marker suffix (example [35a]), this form is usually grammatically uncountable in the sense that it cannot combine with dual or plural markers (examples [35b−c]). Nouns of this type usually refer to masses or collectives; in some cases they can refer to either, as in example (35a).

(35) a. koː
        ‘wood/logs’

     b. *koː-ːkɯ
        wood-DU
        Intended meaning: ‘two (pieces of) wood’

     c. *koː-ːnɛ
        wood-PL
        Intended meaning: ‘(pieces of) wood’


(36) a. ko-ʔba
wood-CL.thing
‘log’

b. kó-ʔba-ːkɯ
wood-CL.thing-DU
‘two logs’

c. ko-ʔbá-ːnɛ
wood-CL.thing-PL
‘logs’
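The countability pattern in (35)−(36) amounts to a simple morphological template: a bare inanimate root is uncountable, a class marker derives a countable stem, and only such stems accept dual or plural suffixes. The following Python sketch (my own schematic illustration, not part of the original description; forms are taken from the examples above, and the tonal alternations seen in [36b−c] are omitted) makes the template explicit:

```python
# Schematic model of Bora countability (cf. examples 35-36):
# bare inanimate roots are mass/collective nouns and reject number
# marking; class markers derive count stems that inflect for number.
NUMBER_SUFFIXES = {"DU": "-ːkɯ", "PL": "-ːnɛ"}

def inflect(root, class_marker=None, number=None):
    """Return a (schematic, toneless) surface form, or None if ungrammatical."""
    if class_marker is None:
        # A bare root like koː 'wood/logs' cannot combine with
        # dual/plural markers (cf. *koː-ːkɯ, *koː-ːnɛ).
        return root if number is None else None
    stem = root + "-" + class_marker           # e.g. ko-ʔba 'log'
    if number is None:
        return stem
    return stem + NUMBER_SUFFIXES[number]      # e.g. ko-ʔba-ːnɛ 'logs'

assert inflect("koː") == "koː"                 # mass noun, fine as is
assert inflect("koː", number="DU") is None     # uncountable without a CL
assert inflect("ko", "ʔba") == "ko-ʔba"        # derived count stem 'log'
assert inflect("ko", "ʔba", "PL") == "ko-ʔba-ːnɛ"
```

The point of the sketch is only that number marking is parasitic on class-marker derivation, not a property of roots themselves.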

6. Noun phrase structure

Bora uses two strategies to form complex noun phrases: tightly integrated genitive constructions (section 6.1) and a second, less tightly integrated type of noun phrase (section 6.2).

6.1. Genitive constructions

A genitive construction in Bora is a tightly integrated phrase with a clearly hierarchical internal structure. It consists of a noun or nominalized verb as its head and another, preposed noun phrase as a dependent. If the genitive construction expresses possession, the head corresponds to the possessed noun phrase and the dependent to the possessor noun phrase. The term genitive construction is used here instead of possessive construction because this construction can express a wide range of meanings, possession being only one of them. Note that the term genitive in this usage does not refer to a case. Genitive constructions are marked by a low tone, which is realized near the boundary of the two elements. The exact position of the tone depends on the number of syllables of both elements (Weber and Thiesen 2001). The tonal marking can also be neutralized by the blocking and delinking of tones. Bora genitive constructions are used for compounding (sometimes with idiomatic meanings, as in example [37a]) as well as to productively form complex noun phrases (examples [37b−c]).

(37) a. ajnɯ́-mɯ́na
shoot.NMLZ-people
‘white people’ (lit. ‘people of shooting’)

b. [[táhkórá-bá] táhɯta]
trap-CL.thing bait
‘the bait of the trap’

c. [[ɯ́htsɯ́-ko-mɯ́tsí] ɯːbádʒɛ]
snail-CL.pointed-CL.M.DU tell.NMLZ
‘the story of the two snails’

A common use of genitive constructions is to express locative relations. A nominal expression that expresses the ground is used as the dependent and a locative noun as the
head of a genitive construction for this purpose (examples [38a−d]). Examples (38c−d) additionally illustrate the use of pronominal expressions as dependents of genitive constructions.

(38) a. [[mónɛ́-ʔɛ́] dɛ́hɯko]
ceiba_tree-CL.tree lower_part
‘below the ceiba tree’

b. [[ɯ́mɛ́-í-gʷɯɯ́] nɨhkɛ́]-tɯ
wood-CL.medium-DIM end-ABL
‘from the tip of the little wooden stick’

c. [[tɛ́ː-í] nɨhkɛ́]-tɯ
3-CL.medium end-ABL
‘from the tip of it (i.e. stick)’

d. [[ɛ́-nɛ́-ːkɯ́] ʔadʒɯ́]-βɯ
DIST-CL.INAN-DU top-ALL
‘on top of those two’

Relative clauses can also function as the dependent element of a genitive construction, and genitive constructions can be reiterated. Example (39) features a locative noun as the head and a complex genitive construction as the dependent. This dependent genitive construction consists of a noun as its head and a relative clause as the dependent. The relative clause includes yet another genitive construction, which functions as an oblique (locative) object in the relative clause.

(39) [[[[[móːáj] ɯníɯ]-ri íhká-há] dʒaʔáhtsɨ́] nɨ́hkɛɯ́]-βɯ
river edge-LOC COP.SUB-CL.house yard end-ALL
‘towards the end of the yard of the house that is at the edge of a river’

Bora genitive constructions have a strictly hierarchical internal structure, as can be shown by criteria that are typically associated with heads and dependents (Zwicky 1993: 298; see also Zwicky 1985; Fraser, Corbett, and McGlashan 1997: 1−5; Himmelmann 1997: 134−136). With respect to a semantic criterion, the meaning of a Bora genitive construction “is a subtype of the meaning of the Head” (Zwicky 1993: 296; see also Zwicky 1985: 4−5). Also typical for heads in general, the head of a Bora genitive construction is the element required by syntax, while the dependent is accessory and can be omitted.
According to another criterion for headedness, the word type of an element determines whether it can function as a head (nouns or nominalized verbs in Bora), while the phrase type of an element determines whether it can be used as the dependent. In Bora, dependents can be noun phrases of any type, including pronominal expressions, as well as relative clauses (see examples [38c−d] and [39]). The head determines the syntactic category of the phrase, and morphosyntactic features are realized on the head noun. For example, the head noun determines the noun class that is relevant for agreement marking, and it is the head noun that bears the case marker (examples [38b−d] and [39]). There are two constructions that are formally and semantically related to genitive constructions, but differ from these in that one of the elements is a bound morpheme.


The first construction type related to the genitive construction is the combination of a possessor prefix and a noun. Examples (40a−b) illustrate two of these prefixes (note that the paradigm partially overlaps with that of personal pronouns, see section 5.1).

(40) a. táj-pɨ́ːka
POSS.1SG-manioc
‘my manioc’

b. mɛ́-pɨ́ːka
POSS.1PL-manioc
‘our manioc’

The second type of construction related to genitive constructions is the productive combination of noun roots with class markers (see section 2.2, above). In this case, it is the second element (the class marker) that is bound. Examples (41a−b) illustrate the formal and semantic parallelism between genitive constructions and combinations of noun roots with class markers.

(41) a. [[ɯ́hɨ́] ɯβí-ːbaj]
banana basket-CL.container
‘a basket (full) of bananas’

b. ɯ́hɨ-ʔbábaj
banana-CL.bag
‘a bag (full) of bananas’

The parallelism between the use of nouns as heads of genitive constructions and class markers suffixed to noun roots suggests that class markers may have originated from nouns with relatively generic meanings that were often used as heads of genitive constructions, before spreading to other contexts (such as modifiers and pro-forms) and losing their ability to be used as free forms.
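The head/dependent asymmetry described in this section can be summed up schematically: the head is syntactically required and is the morphosyntactic locus (it fixes the noun class relevant for agreement and bears case), while the dependent is of any phrase type and is omissible. The toy Python rendering below (my own illustration, not part of the original sketch; the class and type names are invented) encodes just this asymmetry:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Noun:
    form: str
    noun_class: str          # the class relevant for agreement marking

@dataclass
class Genitive:
    head: Noun                           # syntactically required
    dependent: Optional[object] = None   # any NP type or a relative clause; omissible

    @property
    def noun_class(self) -> str:
        # The head is the morphosyntactic locus of the whole phrase:
        # it determines the agreement class and bears case (cf. 38b-d, 39).
        return self.head.noun_class

bait = Noun("táhɯta", "INAN")
trap = Noun("táhkórá-bá", "INAN")
phrase = Genitive(head=bait, dependent=trap)   # 'the bait of the trap' (37b)
assert phrase.noun_class == "INAN"
assert Genitive(head=bait).noun_class == "INAN"  # dependent omitted, still well-formed
```

The contrast with the flat structure modeled for loose noun phrases in section 6.2 is that here one designated element projects the features of the whole phrase.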

6.2. Loose noun phrases

Two or more nominal expressions can form noun phrases of another type, which exhibits a much lower degree of syntactic integration than genitive constructions and which is therefore called a loose noun phrase here. This construction type contains a noun and a modifier, e.g. a numeral or a relative clause. These modifiers obligatorily agree in noun class, number, and case with the head noun. The order of the elements of this construction is relatively free, and both modifier and head may be omitted. Each element in these noun phrases is treated as a separate constituent with respect to tonal patterns and the placement of clitics. This type of noun phrase, which is characterized by syntactically relatively independent and potentially discontinuous elements, is similar to what has been analyzed in other languages as appositive noun phrases (e.g. Heath 1984: 498; Foley 1997: 182; Morse and Maxwell 1999: 94) or as “phrase fracturing” (McGregor 1989). Such noun phrases are an important feature of non-configurational languages (Austin and Bresnan 1996).


Numeral phrases in Bora constitute one instance of these loose noun phrases. It is syntactically possible to place the numeral before or after the noun, although numerals most often precede enumerated nouns. Like other modifiers and determiners, numerals obligatorily agree in noun class and number with the modified noun. If the head noun is case-marked, the numeral usually receives the same case marking (example [42a]), although instances without case marking are attested (example [42b]) (all examples in this section are from spontaneous speech).

(42) a. ó=ɯhkɯ́-ʔi maːkíní-mɯ-βá-kɛ kɯʔrí-mɯ-kɛ.
1SG=take-PRED three-CL.ANIM.PL-PL-ACC pintadillo-CL.ANIM.PL-ACC
‘I caught three pintadillo fish.’

b. tsa-ːbɛ mí-ʔó-ːkɯ ɯ́hɨ-ʔó-ːkɯ-ma.
come-CL.M.SG two-CL.oblong-DU banana-CL.oblong-DU-SOC
‘He came with two bananas.’

Unlike dependents of genitive constructions, numerals (and other modifiers) are treated as separate constituents for the placement of second-position clitics, which intervene between them and the enumerated noun (example [43]; see example [25], above, for a genitive construction with second-position clitics placed after the whole construction).

(43) tsá-ʔo-rɛ́=ko=ɯ́bá=nɛ́kɯ ɯ́hɨ-ʔo.
one-CL.oblong-FOC=PFV=DUB=REC banana-CL.oblong
‘(There was) probably just one banana.’

Like numerals, third person pronouns often precede the head noun, and in this case their function is similar to that of a determiner of that noun. They also agree in noun class, number, and case with the head noun (example [44]).

(44) dí-ːbɛ-kɛ [íːnɯ́-hɨ́ ɯhtsɯ́-ko]-kɛ́ didʒó-βa-ːbɛ.
3-CL.M.SG-ACC earth-CL.disc snail-CL.pointed-ACC ask-DIR-CL.M.SG
‘He came to ask that terrestrial snail.’

The placement of modifying elements other than numerals or pronouns used as determiners is more flexible. Example (45) illustrates a modifier that is separated from its head noun by the predicate.

(45) pá-hɯːʔó-tɯ mɛːnɯ́-mɛ́ bɛ́-hɯ́ːʔo-tɯ.
complete-CL.palmleaf-ABL make-CL.ANIM.PL palm-CL.palmleaf-ABL
‘From a complete leaf of the palm they make (it).’

Relative clauses (see section 7.1) behave like other nominal modifiers in the clause and also agree in noun class, number, and case (marked on the predicate of the relative clause) with the head noun (example [46]). They can precede the head noun, as in example (46), which illustrates a relative clause containing only a copula verb, as well as follow it (see examples in section 7.1, below).


(46) ihká-mɛ-dí-βɯ bóːmɛ́-mɛ-dí-βɯ ó=ɯ́ːhɛtɛ́-ʔi.
COP.SUB-CL.ANIM.PL-ANIM-ALL otter-CL.ANIM.PL-ANIM-ALL 1SG=arrive-PRED
‘Where they were, where the otters were, I arrived.’

Any modifier (as well as the third person pronoun, which may be used as a determiner) can be used on its own without a lexical noun as an overt head noun. A number of agreeing modifiers can thus form a phrase, as do the demonstrative and relative clause in example (47), without a lexical noun as head noun.

(47) í-htɛ ajnɯ́mɯ́na pangoáːná-mɯ nɛ́ː-mɛ
PROX-CL.ANIM.PL white_people panguana-CL.ANIM.PL say.SUB-CL.ANIM.PL
‘these, whom the white people call panguanas’

In summary, loose noun phrases in Bora exhibit a low degree of internal syntactic integration. They clearly form a phrasal unit, indicated by agreement morphology, but their internal structure is relatively flat rather than hierarchical. As before, this can be shown by applying criteria for headedness (see section 6.1): no element in these noun phrases is syntactically required, since any noun phrase can stand on its own (including pronominal expressions, numerals, and relative clauses), and any of them can be omitted. Phrase type determines which element can enter into such a construction, i.e. noun phrases of any type (including pronouns and relative clauses), not word type, as is the case for heads of genitive constructions. Any element of loose noun phrases is equally externally representative in the clause and can be the morphosyntactic locus of marking that pertains to the entire phrase, for instance when numerals are case-marked. There are, however, also indications of a syntactic asymmetry in loose noun phrases. Most importantly, head nouns, as the agreement controllers, determine the agreement marking of the modifiers and determiners, including the possibility of alternative agreement marking by specific or general class markers, which may never be used alternatively in nouns (see section 2.3, above).
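The flat structure just described can be caricatured as a set of co-varying elements rather than a head-plus-dependents tree: each element carries its own class, number, and case features, and phrase membership simply requires feature identity, with no element obligatory. The Python sketch below is my own schematic illustration of this agreement pattern (the names Element and agree are invented, and real Bora agreement is of course richer, e.g. general vs. specific class markers on modifiers):

```python
from dataclasses import dataclass

@dataclass
class Element:
    form: str
    noun_class: str
    number: str
    case: str

def agree(elements):
    """A loose noun phrase is schematically well-formed if all of its
    elements share noun class, number, and case (cf. examples 42-47).
    No element is syntactically required, so any subset may be omitted."""
    features = {(e.noun_class, e.number, e.case) for e in elements}
    return len(features) <= 1

numeral = Element("maːkíní-mɯ-βá-kɛ", "ANIM", "PL", "ACC")  # 'three' (42a)
noun    = Element("kɯʔrí-mɯ-kɛ",      "ANIM", "PL", "ACC")  # 'pintadillo' (42a)
assert agree([numeral, noun])
assert agree([noun])      # the noun alone is a fine noun phrase
assert agree([numeral])   # so is the headless modifier (cf. 48c)
assert not agree([numeral, Element("x", "INAN", "SG", "ACC")])
```

Because the check is symmetric over all elements, the model also accommodates the discontinuous orderings illustrated in example (45).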

7. Subordinate clauses

Subordinate clauses are marked by a low tone on the syllable preceding the stem of the subordinate verb. It is realized on the procliticized subject pronoun if one is present (this low tone entails a high tone on the verb stem’s first syllable, which is often the only indication of subordination). The following types of clauses are constructed as subordinate clauses in Bora: relative clauses (section 7.1), adverbial clauses (7.2), and negative clauses (7.3). In some respects, subordinate clauses have a more complex structure than main clauses, for instance in that the predicate has a fixed clause-final position in subordinate clauses.

7.1. Relative clauses

The predicate of a relative clause must include a class marker, which marks agreement with the head of the relative clause or stands in for it. Relative clauses (set in boldface
in the following examples) can follow the head noun (example [48a]) or precede it (example [48b]). Like other modifiers, they can also be used without an overt head noun as headless relative clauses (example [48c]), and they can be non-adjacent to the head noun.

(48) a. ó=ɨːtɛ́-ʔí gʷatsíʔhɯ-gʷa oː-kɛ ɯ=dáːkɯ-gʷa.
1SG=see-PRED machete-CL.plank 1SG-ACC 2SG.SUB=give-CL.plank
‘I saw the machete that you gave to me.’

b. ahɨ o=gʷáː-ha haː áːkitɛ́-ʔi.
palm 1SG.SUB=cut-CL.cover house fall-PRED
‘The house for which I cut palm leaves fell down.’

c. áíːβɛ-ha áːkitɛ́-ʔi.
burn.SUB-CL.cover fall-PRED
‘What (house, clothes, etc.) burned fell down.’

The grammatical role of the relativized term within the relative clause is not marked, i.e. the argument that the class marker represents can have different grammatical roles with respect to the predicate of the relative clause. The examples above include relativized terms functioning as subject (example [48c]), secondary object (example [48a]), and beneficiary (example [48b]). (Note that this contrasts with class markers in main clause predicates, where they always cross-reference the subject.) If the grammatical role of the relativized term in the relative clause needs to be disambiguated, a resumptive pronoun can be inserted. In the following examples (49a−b), the pronoun díː-ːbɛ (3-CL.M.SG) is inserted in the relative clause and case-marked to disambiguate the grammatical role of the relativized term.

(49) a. ájnɯ́mɯ́náa-hpi ɛː-nɛ́=pɛ kánáːma díː-ːbɛ-dí-tɯ ɯ=náʔhɨʔɛ́nɯ-ːbɛ́ pɛ-ːkóː-ʔi.
white_people-CL.M.SG DIST-CL.INAN=REM salt 3-CL.M.SG-ANIM-ABL 2SG.SUB=do.business-CL.M.SG go-PFV-PRED
‘The white man, from whom you had traded (i.e. bought) salt, has gone.’

b. ájnɯ́mɯ́náa-hpi ɛː-nɛ́=pɛ kánáːma díː-ːbɛ-kɛ ɯ=náʔhɨʔɛ́nɯ-ːbɛ́ pɛː-kóː-ʔi.
white_people-CL.M.SG DIST-CL.INAN=REM salt 3-CL.M.SG-ACC 2SG.SUB=do.business-CL.M.SG go-PFV-PRED
‘The white man, to whom you had traded (i.e. sold) salt, has gone.’

Relative clauses can contain clitics that express the temporal relation between the relative clause and the main clause, such as anteriority expressed by the remote past marker =pɛ (examples [49a−b]). Predicates of relative clauses have the same inflectional potential as predicates of main clauses (except for interrogation and imperative). Some finite verb inflection in a predicate of a relative clause is illustrated in example (50).

(50) ájnɯ́mɯ́náa-hpi níːtɛ́-kóː-i-tɯ́-ro-ːbɛ pɛː-kóː-ʔi.
white_people-CL.M.SG go_down.SUB-PFV-FUT-NEG-FRUS-CL.M.SG go-PFV-PRED
‘The white man, who was already at the point of not going down(river), has left.’


Stative verbs are often used as the only element in relative clauses to modify nouns, functioning as adjectival modifiers of the head noun (example [51]).

(51) boʔdó-gʷa tsɨ́tsɨː-gʷa
paddle.NMLZ-CL.plank be_white.SUB-CL.plank
‘a white paddle’

Relative clauses marked for oblique cases, e.g. benefactive, can also function as adverbial modifiers in a clause. These relative clauses are typically headless and contain the general inanimate class marker, which stands in for an event or state, i.e. it does not mark agreement with a head (example [52]).

(52) diː-bɛ áːkitɛ́-nɛ-dʒíːʔɛ ó=oːmí-ʔi.
3-CL.INAN.SUB fall-CL.INAN-BEN 1SG=return-PRED
‘Because it fell down, I returned.’

7.2. Adverbial clauses

Another way to form adverbial modifiers is with adverbial clauses, in which the place of the class marker in a subordinate verb is taken by a morpheme that expresses temporal, spatial, or related notions. There are about a dozen markers that are used to form adverbial clauses, among them markers expressing different temporal relations, purpose, condition, and comparison (examples [53a−e]).

(53) a. o=máhtʃó-ihɯ tsa-ːbɛ.
1SG.SUB=eat-TEMP come-CL.M.SG
‘When I ate, he came.’

b. o=máhtʃó-koːka tsa-ːbɛ.
1SG.SUB=eat-SIMU come-CL.M.SG
‘While I was eating, he came.’

c. o=máhtʃo-ki tsa-ːbɛ.
1SG.SUB=eat-PURP come-CL.M.SG
‘He came so I would eat.’

d. o=máhtʃó-ʔahtʃíːhɯ tsa-ːbɛ.
1SG.SUB=eat-COND come-CL.M.SG
‘If I eat, he comes.’

e. o=nɛ́-ʔdɯ tsa-ːbɛ.
1SG.SUB=say-EQUA come-CL.M.SG
‘Like I said, he came.’

7.3. Negative clauses

Negated clauses are subordinate clauses, as indicated by the tone pattern and obligatory class marker in their predicate, which also includes a negation marker (example [54a]).


When the negation particle tsáʔa precedes a negated clause, the predicate of the negated clause must include the general inanimate class marker -nɛ (examples [54b−c]). In these constructions, the negation particle may be said to function as a main clause predicate and the negated clause as a complement, as the following literal translation of example (54b) indicates: ‘It is not the case that I bathed’.

(54) a. máhtʃó-tɯ-ːbɛ pɛ́ː-ʔi.
eat.SUB-NEG-CL.M.SG go-PRED
‘Without eating, he left.’

b. tsáʔa o=áβɯ́ʔkɯ-tɯ́-nɛ.
NEG 1SG.SUB=bathe-NEG-CL.INAN
‘No, I did not bathe.’

c. tsáʔa diː-tɛ máhtʃo-tɯ́-nɛ.
NEG 3-CL.ANIM.PL.SUB eat-NEG-CL.INAN
‘No, they did not eat.’

8. Contrastive focus and contrastive negation

Noun phrases can be marked for either contrastive focus (example [55]) or contrastive negation (example [56]), which are mutually exclusive in Bora. Contrastive focus or negation can also be marked on subordinate clauses (examples [55b], [56b]). Contrastive focus marking often also conveys the meaning of restricting reference to exactly the referents of the phrase to which it attaches.

(55) a. náːni-rɛ tsáː-ʔi.
my.uncle-FOC come-PRED
‘It was (only) my uncle who came.’

b. áːkitɛ́-ːbɛ-rɛ iːnɯ́βatɛ́-ʔi.
fall.SUB-CL.M.SG-FOC get_dirty-PRED
‘He did fall and got dirty.’

(56) a. náːni-hɨ́ɨ́βari tsáː-ʔi.
my.uncle-NEG come-PRED
‘It was not my uncle who came.’

b. áːkitɛ́-ːbɛ́-hɨ́ɨ́βari iːnɯ́βatɛ́-ʔi.
fall.SUB-CL.M.SG-NEG get_dirty-PRED
‘He did not fall and (nevertheless) got dirty.’

Negative focus marking can be combined with negation expressed in verbal inflection, resulting in double negation with a positive reading (example [57]).

(57) áːkitɛ́-tɯ́-ːbɛ́-hɨ́ɨ́βari iːnɯ́βatɛ́-ʔi.
fall.SUB-NEG-CL.M.SG-NEG get_dirty-PRED
‘He did fall and got dirty.’


9. Reference tracking and discourse conjunctions

Before concluding this syntactic sketch, we consider some patterns that are characteristic of connected speech in Bora. Various features discussed in the previous sections are combined in an intricate reference tracking system. For tracking inanimate participants, there are five basic structures (for animate participants, [ii] and [iii] are usually not distinguished because their agreement pattern includes only one, animate noun class):

(58) (i) nouns, e.g. tódʒiː-hɯ (blowgun-CL.tube);
(ii) pronominal expressions or verbs that include a specific class marker, e.g. tɛː-hɯ (3-CL.tube) ‘it (tubular shape)’; áːkítɛ-hɯ (fall-CL.tube) ‘it (tubular shape) fell’;
(iii) pronominal expressions and verbs that include a general class marker, e.g. tɛː-nɛ (3-CL.INAN) ‘it/they (inanimate)’; áːkitɛ́-nɛ (fall-CL.INAN) ‘it/they fell’;
(iv) a pronominal proclitic, e.g. i=áːkítɛ-ki (3.SUB=fall-PURP) ‘so it would fall’;
(v) zero anaphora, e.g. pájːhɯ́kɯ-ːbɛ Ø (open-CL.M.SG) ‘he opened (it)’.

The following example (59) illustrates the use of these five strategies for reference tracking. The example contains the beginning (line 1), a middle portion (lines 2−4), and the end (lines 5−6) of a native speaker’s account of how he made a blowgun. The different types of expressions identified above are used to introduce and then track an inanimate referent, the blowgun. Mentions of the blowgun are set in boldface.

(59) Blowgun making

1. iːhɯ́=pɛ́ tɛ-ːnɛ tódʒiː-hɯ o=pákigʷájhɯ-kí […]
yesterday=REM 3-CL.INAN blowgun-CL.tube 1SG.SUB=rasp-PURP
‘Yesterday I sandpapered the blowgun. […]’

2. i-htɯ́ː-rí Ø o=míbɛ́hhɯ-ki tɛː-nɛ kóʔpɛ-nɛ́ i=káβáːβɛ-ki
POSS.3-sap-INS 1SG.SUB=wrap-PURP 3-CL.INAN be_hard.SUB-CL.INAN 3.SUB=become-PURP
‘I wrapped (it) with its (the rubber tree’s) sap, so it would become hard’

3. aː-nɛ ó=míbɛ́kɯ-ʔíhka-ʔí Ø ó=míbɛhkɯ́-ʔí
CON-CL.INAN 1SG=wrap-REP-PRED 1SG=wrap-PRED
‘And I wrapped it over and over, I wrapped (it),’

4. ími-nɛ tɛː-nɛ i=káβáːβɛ-ki […]
be_good.SUB-CL.INAN 3-CL.INAN 3.SUB=become-PURP
‘so it would become good. […]’

5. aː-nɛ ó=nɯhtsókɯ́-ʔi ɯ́βɛ́ʔkóʔ
CON-CL.INAN 1SG=try_out-PRED good
‘And I tried it out: good!’

6. tɛ́tsiːtɯ́ iʔdɯ áːbáhá-hpiː-kɛ́ ó=áhkɯ́-ko-ːʔi tɛ́ː-hɯ-βɯ.
then indeed owner-CL.M.SG-ACC 1SG=give-PFV-FUT.PRED 3-CL.tube-ALL
‘And after that, indeed, I will give it to the owner.’
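The five structures in (58) form a scale of decreasing specificity, and the narrative in (59) moves down this scale as the blowgun becomes more topical, returning to a more specific form for the final mention. As a schematic illustration (my own, not part of the original sketch; the line numbers record where each strategy is used for the blowgun in [59]):

```python
# Specificity scale of the five reference-tracking strategies in (58),
# from most specific (full noun) to least specific (zero anaphora),
# with the lines of text (59) where each tracks the blowgun referent.
SCALE = [
    ("noun",                  "tódʒiː-hɯ", [1]),           # (i)
    ("pronoun + specific CL", "tɛː-hɯ",    [6]),           # (ii)
    ("pronoun + general CL",  "tɛː-nɛ",    [2, 3, 4, 5]),  # (iii)
    ("pronominal proclitic",  "i=",        [2, 4]),        # (iv)
    ("zero anaphora",         "Ø",         [2, 3]),        # (v)
]

def specificity(strategy):
    """Lower index = more specific mention; specific forms are typical
    for first and final mentions of an important participant."""
    return [name for name, _, _ in SCALE].index(strategy)

assert specificity("noun") < specificity("pronoun + general CL")
assert specificity("pronoun + specific CL") < specificity("zero anaphora")
```

The assertions encode the anaphoric-choice generalization discussed below: introduction with the most specific form, attenuated forms for high topicality, and a step back up the scale at the text's conclusion.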


In the beginning of the text (line 1), the blowgun is introduced as a new referent with a noun and a preceding pronominal functioning as a determiner. Throughout the text, the blowgun is referred to with the free pronouns tɛːnɛ and aːnɛ (lines 2, 3, 4, 5) and a procliticized pronoun (lines 2, 4), in addition to class markers used on stative verbs (lines 2, 4). In all free pronouns and verbs, the general inanimate class marker is used for agreement marking with the antecedent tódʒiː-hɯ ‘blowgun’. These semantically general and attenuated forms are typical for reference to highly topical participants. In lines 2 and 3, the object noun phrases corresponding to the blowgun are omitted. At the very end of the text (line 6), the speaker resorts to a somewhat more specific expression to refer to the blowgun: the specific class marker -hɯ (CL.tube) is used in combination with a pronominal root. This increase in specificity in anaphoric choice is typical for final mentions of important participants in concluding sentences (see also Fox 1987).

Another interesting feature of reference tracking in Bora, in particular in narratives, is the use of a paragraph-initial connector formed with the root aː-, which carries referential and event coherence information (examples [60a−b]; see also example [59], lines 3, 5) (Seifart 2010). It usually includes a class marker to track a previously mentioned participant, and marks this participant as thematic in the paragraph it introduces (examples [60a−b]). These reference tracking connectors may thus function as pronouns that can have different grammatical roles (indicated by case marking − example [60b]) with respect to the following predicate. The connector can also form an adverbial phrase (example [61]), combining with the morphology that is otherwise used in adverbial clauses (see section 7.2).

(60) a. aː-mɯ́pɨ=βáa pámaɯ́kɯ-ʔíhka-rá-ʔi tɛː-nɛ nɯ́hpajko.
CON-CL.F.DU=QUOT.REM carry-REP-FRUS-PRED 3-CL.INAN water
‘And they (two girls) tried again and again in vain to carry water.’

b. áː-bɛ-kɛ́=nɛ Mɨ́ʔbajhɯ́nɛːkɯ aːmɯ́-ʔi.
CON-CL.M.SG-ACC=REC proper_name hit-PRED
‘And Mɨ́ʔbajhɯ́nɛːkɯ hit him.’

(61) á-ihɯ́=βáa diː-tɛ́pɨ́ ɛ́ːnɯ-ʔíhká-ʔí ɯ́ːkɯmɛː.
CON-TEMP=QUOT.REM 3-CL.F.DU raise-REP-PRED punchana
‘And at that time the two (girls) were raising a punchana.’

The connector is the basis for a number of lexicalized conjunctions (examples [62a−c]), corresponding to ‘and’, ‘but’, and ‘therefore’. These conjunctions are formed with the general inanimate class marker -nɛ, which in these forms usually refers to the content of the preceding discourse in general (not to a particular referent). The frustrative marker, which is otherwise a verbal inflectional marker, conveys the adversative meaning in the conjunction ‘but’. The causal relation of ‘therefore’ is expressed with the benefactive case marker.

(62) a. aː-nɛ / a-nɛ
CON-CL.INAN
‘and’


b. áː-ro-nɛ / á-ro-nɛ
CON-FRUS-CL.INAN
‘but’ (lit. ‘that did not have the expected result and’)

c. áː-nɛ́-dʒiːʔɛ / áː-nɛ́-dʒiː
CON-CL.INAN-BEN
‘therefore’ (lit. ‘and for the benefit of that’)

10. An Amazonian perspective

A number of features of Bora are typical of Amazonian languages in general (see Dixon and Aikhenvald 1999: 8−9), in particular of Northwest Amazonian languages. One of these is the relatively polysynthetic and agglutinating morphology. Complex tone systems are also found in many Amazonian languages, e.g. Eastern Tukanoan languages (Gomez-Imbert and Kenstowicz 2000), but probably not in the majority of them. Large and complex nominal classification systems comparable to that of Bora are found in many languages of the Northwest Amazon (Payne 1987; Derbyshire and Payne 2000; Seifart and Payne 2007), e.g. Arawakan languages (Aikhenvald 1994) and Eastern Tukanoan languages (Barnes 1990; Morse and Maxwell 1999). There are also a number of geographically more isolated instances of such systems elsewhere in Amazonia, e.g. Kwaza in the Southwest Amazon (van der Voort 2000) and Palikur in the Northeast Amazon (Aikhenvald and Green 1998). Most of these systems are used both for the derivation of noun stems and for the formation of modifiers and pro-forms (as in Bora), although the degree to which these systems constitute grammaticalized agreement systems varies. Some systems are incipient and restricted to a small subpart of the lexicon, e.g. in Hup (Epps 2008).

With respect to alignment systems and the coding of grammatical roles, Bora appears to have a relatively simple system for an Amazonian language. More complex systems involve hierarchical alignment, e.g. in Émérillon (Rose 2003), direct and inverse marking, e.g. in Movima (Haude 2006), and syntactic pivots, e.g. in Jarawara (Dixon 2004). Unlike Bora, the alignment systems of many Amazonian languages involve ergativity to different degrees, e.g. Carib languages (Derbyshire 1999) and Matses (Fleck 2003).

11. Abbreviations

ANIM animate
CL class marker
CON connector
COND conditional
COORD coordination
DIM diminutive
DIR directional
DUB dubitative
EQUA equative
FRUS frustrative
INAN inanimate
INFR inference
PRPT prospective
REC recent past
REM remote past
REP repeated action
SIMU simultaneous
SOC sociative
SUB subordination
TEMP temporal
VBLZ verbalizer


12. References (selected)

Aikhenvald, Alexandra Y. 1994 Classifiers in Tariana. Anthropological Linguistics 36(4): 407−465.
Aikhenvald, Alexandra Y. 2001 Areal diffusion, genetic inheritance, and problems of subgrouping. A North Arawak case study. In: Alexandra Y. Aikhenvald and R. M. W. Dixon (eds.), Areal Diffusion and Genetic Inheritance. Problems in Comparative Linguistics, 167−194. Oxford: Oxford University Press.
Aschmann, Richard P. 1993 Proto Witotoan. Dallas: The Summer Institute of Linguistics and the University of Texas at Arlington.
Austin, Peter, and Joan Bresnan 1996 Non-configurationality in Australian languages. Natural Language and Linguistic Theory 14: 215−268.
Barnes, Janet 1990 Classifiers in Tuyuca. In: Doris L. Payne (ed.), Amazonian Linguistics. Studies in Lowland South American Languages, 273−292. Austin: Texas University Press.
Corbett, Greville G. 2008 Agreement. Cambridge: Cambridge University Press.
Derbyshire, Desmond C. 1999 Carib. In: R. M. W. Dixon and Alexandra Y. Aikhenvald (eds.), The Amazonian Languages, 22−64. Cambridge: Cambridge University Press.
Derbyshire, Desmond C., and Doris L. Payne 1990 Noun classification systems of Amazonian languages. In: Doris L. Payne (ed.), Amazonian Linguistics. Studies in Lowland South American Languages, 243−272. Austin: Texas University Press.
Dixon, R. M. W., and Alan R. Vogel 2004 The Jarawara Language of Southern Amazonia. Oxford: Oxford University Press.
Dixon, R. M. W., and Alexandra Y. Aikhenvald 1999 Introduction. In: R. M. W. Dixon and A. Y. Aikhenvald (eds.), The Amazonian Languages, 1−22. Cambridge: Cambridge University Press.
Dryer, Matthew S. 2007 Clause types. In: Timothy Shopen (ed.), Language Typology and Syntactic Description, Vol. 1: Clause Structure, 224−275. Cambridge: Cambridge University Press.
Echeverri, Juan Alvaro, and Frank Seifart 2011 Una re-evaluación de las familias lingüísticas Bora y Witoto. Paper presented at “Arqueología y Lingüística Histórica de las Lenguas Indígenas Sudamericanas”, Universidad de Brasília, 24−28 October 2011.
Epps, Patience L. 2008 A Grammar of Hup. Berlin/New York: Mouton de Gruyter.
Fleck, David William 2003 A grammar of Matses. Ph.D. dissertation, Rice University.
Foley, William A. 1997 The Yimas Language of New Guinea. Stanford: Stanford University Press.
Fox, Barbara 1987 Discourse Structure and Anaphora. Written and Conversational English. Cambridge: Cambridge University Press.
Fraser, Norman M., Greville G. Corbett, and Scott McGlashan 1997 Introduction. In: Norman M. Fraser, Greville G. Corbett and Scott McGlashan (eds.), Heads in Grammatical Theory, 1−10. Cambridge: Cambridge University Press.


Gomez-Imbert, Elsa, and Michael Kenstowicz 2000 Barasana tone and accent. International Journal of American Linguistics 66(4): 419−463.
Grinevald, Colette, and Frank Seifart 2004 Noun classes in African and Amazonian languages. Towards a comparison. Linguistic Typology 8(2): 243−285.
Haude, Katharina 2006 A grammar of Movima. Ph.D. dissertation, Radboud Universiteit Nijmegen.
Heath, Jeffrey 1984 Functional Grammar of Nunggubuyu. Canberra: Australian Institute of Aboriginal Studies.
Himmelmann, Nikolaus P. 1997 Deiktikon, Artikel, Nominalphrase. Zur Emergenz syntaktischer Struktur. Tübingen: Niemeyer.
Kaufman, Terence 1994 Review of Proto Witotoan by Richard P. Aschmann. Language 70: 379.
Lewis, M. Paul, Gary F. Simons, and Charles D. Fennig (eds.) 2013 Ethnologue: Languages of the World, Seventeenth Edition. Dallas: SIL International. http://www.ethnologue.com.
McGregor, William 1989 Phrase fracturing in Gooniyandi. In: Laszlo K. Maracz and Pieter Muysken (eds.), Configurationality: The Typology of Asymmetries, 207−222. Dordrecht: Foris Publications.
Morse, Nancy L., and Michael R. Maxwell 1999 Cubeo Grammar. Arlington: Summer Institute of Linguistics.
Payne, Doris L. 1987 Noun classification in the Western Amazon. Language Sciences 9(1): 22−44.
Rose, Françoise 2003 Morphosyntaxe de l’émérillon: Langue tupi-guarani de Guyane. Ph.D. dissertation, Université Lumière Lyon 2.
Seifart, Frank 2005 The structure and use of shape-based noun classes in Miraña (North West Amazon). Ph.D. dissertation, Radboud Universiteit Nijmegen.
Seifart, Frank 2010 The Bora connector pronoun and tail-head linkage: a study in language-specific grammaticalization. Linguistics 48: 893−918.
Seifart, Frank Forthcoming Valency classes in Bora (Peru). In: Andrej L. Malchukov and Bernard Comrie (eds.), Valency Classes: A Comparative Handbook. Berlin/New York: de Gruyter Mouton.
Seifart, Frank, and Doris L. Payne 2007 Nominal classification in the North West Amazon: issues in areal diffusion and typological characterization. International Journal of American Linguistics 73(4): 381−387.
Seifart, Frank, Doris Fagua, Jürg Gasché, and Juan Alvaro Echeverri (eds.) 2009 A multimedia documentation of the languages of the People of the Centre. Online publication of transcribed and translated Bora, Ocaina, Nonuya, Resígaro, and Witoto audio and video recordings with linguistic and ethnographic annotations and descriptions. Nijmegen: DOBES-MPI. http://corpus1.mpi.nl/qfs1/media-archive/dobes_data/Center/Info/WelcomeToCenterPeople.html
Thiesen, Wesley 1996 Gramática del Idioma Bora. (Serie Lingüística Peruana 38.) Pucallpa: Ministerio de Educación and Instituto Lingüístico de Verano.


Thiesen, Wesley, and Eva Thiesen (eds.) 1998 Diccionario Bora − Castellano, Castellano − Bora. (Serie Lingüística Peruana 46.) Pucallpa: Ministerio de Educación and Instituto Lingüístico de Verano.
Thiesen, Wesley, and David Weber 2012 A Grammar of Bora. With Special Attention to Tone. Dallas: SIL International.
van der Voort, Hein 2004 A Grammar of Kwaza. Berlin/New York: Mouton de Gruyter.
Vengoechea, Consuelo 1997 Fonología de la lengua muinane. Bogotá: COLCIENCIAS/CCELA/Universidad de los Andes.
Vengoechea, Consuelo 2005 Morphosyntax of muinane. Typological remarks. In: Jon Landaburu and Anamaría Ospina (eds.), Langues de Colombie, 119−140. (Ameríndia 29−30.) Villejuif: CELIA/CNRS.
Walton, James W., Grace Hensarling, and Michael R. Maxwell 2000 El muinane. In: Maria Stella González de Pérez and María Luisa Rodríguez de Montes (eds.), Lenguas Indígenas de Colombia. Una Visión Descriptiva, 255−273. Bogotá: Instituto Caro y Cuervo.
Weber, David, and Wesley Thiesen 2001 A synopsis of Bora tone. Work Papers of the Summer Institute of Linguistics, University of North Dakota Sessions 45.
Zwicky, Arnold M. 1985 Heads. Journal of Linguistics 21: 1−29.
Zwicky, Arnold M. 1993 Heads, bases and functors. In: Norman M. Fraser, Greville G. Corbett and Scott McGlashan (eds.), Heads in Grammatical Theory, 292−315. Cambridge: Cambridge University Press.

Frank Seifart, Leipzig (Germany)

VIII. The Cognitive Perspective

52. Syntax and Language Acquisition

1. Issues in language acquisition research
2. Methods for investigating the acquisition of syntax
3. The debate about the role of nature and nurture in the acquisition of syntax
4. The emergence of syntax
5. The acquisition of questions and embedded clauses
6. The acquisition of passive
7. The acquisition of constraints on co-reference
8. The acquisition of quantification
9. Bilingualism, L2-acquisition and the critical period
10. Current trends and developments
11. References (selected)

Abstract

This chapter provides an overview of theoretical issues and core empirical findings in cross-linguistic research on the acquisition of syntax. Section 1 identifies key issues in syntax acquisition research: (i) the respective contribution of learners' input and innate predispositions for language acquisition; (ii) the time course of syntactic development; (iii) the role of learners’ age and potential implications for monolingual, bilingual and second language (L2) acquisition. Section 2 introduces methods for investigating syntactic development. Section 3 discusses the relative role of learners’ input and innate predispositions for syntax acquisition. This section presents (i) generative, Optimality Theory and usage-based approaches to syntactic development and (ii) the empirical findings on learners’ input that form the background for the debate between proponents of the different approaches. Section 4 focuses on the emergence of syntax. The following sections discuss the acquisition of core syntactic phenomena: questions and embedded clauses (section 5), passives (section 6), co-reference (section 7), and quantification (section 8). Each of these sections gives an overview of theoretical accounts and empirical findings, with a focus on monolingual first language (L1) acquisition. Age effects and differences between monolingual and bilingual acquisition are the focus of section 9. Section 10 discusses the empirical findings and their theoretical implications and highlights current trends.

1. Issues in language acquisition research

Research on syntactic development is characterized by debates about: (i) the role of nature and nurture, i.e. the respective contribution of learners' input and innate predispositions for language acquisition; (ii) the time course of syntactic development; (iii) age


effects and differences between monolingual and bilingual acquisition, see e.g. Ambridge and Lieven (2011), Bavin (2009), Bhatia and Ritchie (2008), de Villiers and Roeper (2011), Eisenbeiss (2009a), Fletcher and MacWhinney (1996), Berko Gleason and Bernstein Ratner (2009), Guasti (2002), Lust (2006), O’Grady (1997), Ritchie and Bhatia (1999, 2009), Saxton (2010), the special issues of Behavioral and Brain Sciences 14 (1991); Linguistic Review 19: 1−2 (2002); Linguistics 47: 2 (2009); and further contributions in Cognition, First Language, Journal of Child Language, and Language Acquisition.

The starting point for these debates is the so-called logical problem of language acquisition (for overviews see Berwick, Pietroski, Yankama, and Chomsky 2011; Pinker 1989; Clark and Lappin 2011; and the special issue of Linguistic Review 19: 1−2, published in 2002): children are only exposed to a small, arbitrary sample of their target language, but learn to produce, understand and judge an infinite number of sentences and sentence structures (quantitative underdetermination). Hence, one has to assume that children generalize over their input, even though it does not contain such generalizations, but only individual utterances (qualitative underdetermination). Children’s generalizations can deviate from the target in two ways: they can be too restrictive, i.e. the target might contain structures that are not covered by the child’s generalization. For example, Italian children hear input sentences like (1a) that contain overt subject noun phrases; and they might incorrectly generalize that all Italian sentences have an overt subject. In this case, input sentences with dropped subjects like (1b) provide positive evidence for the possibility of subject drop in Italian − and can thus help children to overcome their incorrect generalization.
Another option is that the child’s generalization is not restrictive enough and the set of sentences that it predicts is a superset of the set of sentences that are actually grammatical in the target. For instance, German children hear utterances with omitted subjects in topic position; e.g. (1c). This could lead German children to incorrectly generalize subject omissions to all sentential positions (as in Italian). In this case, positive evidence is insufficient to overcome the incorrect generalization: the assumption of generalized subject drop is obviously compatible with input sentences where subjects are omitted in topic position; e.g. (1c). But even sentences with overt subjects cannot tell children that generalized subject drop is incorrect. After all, sentences with overt subjects also occur in subject drop languages such as Italian. Rather, children would need negative evidence, i.e. information about the ungrammaticality of the sentences that they incorrectly expect to be grammatical. For instance, children could be corrected for sentences with dropped subjects in non-topic positions, e.g. (1d).

(1) a. Luigi canta. [Italian]
       Luigi sing.3SG.PRS
       ‘Luigi sings.’
    b. canta.
       sing.3SG.PRS
       ‘(He) sings.’
    c. Hat Frank Durst? Nein __ hat gerade Tee getrunken. [German]
       has Frank thirst no __ has just tea drunk
       ‘Is Frank thirsty? No, has just drunk tea.’
    d. *Vorhin habe __ schon Tee getrunken.
       earlier have __ already tea drunk
       ‘Earlier have (I) already drunk tea.’
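This superset problem can be made concrete with a toy simulation (the utterance records and grammar predicates below are invented for illustration and are not part of the chapter's own formalism): every grammatical input sentence is licensed by both the target grammar and the overgeneral hypothesis, so positive evidence alone can never separate them.

```python
# Toy illustration of the subset problem: positive evidence alone cannot
# refute a hypothesis that licenses a SUPERSET of the target language.
# Grammars are modelled as predicates over simplified utterance records.

def target_german(utt):
    # Target grammar: subjects may be dropped only in topic (clause-initial) position.
    return utt["has_subject"] or utt["subject_position"] == "topic"

def overgeneral(utt):
    # Child's incorrect hypothesis: subjects may be dropped anywhere.
    return True

# Positive evidence: only grammatical utterances occur in the input.
input_sample = [
    {"has_subject": True,  "subject_position": "medial"},  # overt subject
    {"has_subject": False, "subject_position": "topic"},   # topic drop, cf. (1c)
]

# Every input utterance is compatible with BOTH grammars ...
assert all(target_german(u) and overgeneral(u) for u in input_sample)

# ... so only negative evidence (an utterance marked as ungrammatical,
# cf. (1d)) could distinguish them:
ungrammatical = {"has_subject": False, "subject_position": "medial"}
print(target_german(ungrammatical))  # False: the target grammar rejects it
print(overgeneral(ungrammatical))    # True: the superset hypothesis accepts it
```

The sketch shows why the learner needs either negative evidence or a constraint biasing it toward the subset hypothesis.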

Even though parents may occasionally explicitly correct their children, this type of direct negative evidence cannot be taken to play a crucial role in acquisition (Marcus 1993): it is not systematically available to all children at all developmental stages; and even for explicit corrections such as “You can’t say that” it is not obvious whether the parent was correcting a phonological, a morphological, or a syntactic error − or considered the utterance disrespectful or factually incorrect. Moreover, children’s errors are quite infrequent and uniform across individuals; and they often reject corrections. This casts at least some doubt on the assumption that children test random hypotheses using negative evidence. Hence, one has to explain how children can generalize beyond individual input utterances, but avoid or recover from incorrect generalizations − even though they cannot rely on systematic explicit corrections.

Approaches to this logical problem make diverging assumptions about the role of nature and nurture. As we will see in section 3, usage-based acquisition researchers stress the role of learners’ input in the acquisition process and investigate mechanisms of social interaction, analogy formation and pattern finding (Ambridge and Lieven 2011; Behrens 2009; Tomasello 2001, 2003, 2006; Clark and Kelly 2006). In contrast, generative researchers tend to focus on innate predispositions for language acquisition that drive and constrain children’s linguistic development (Chomsky 1981; Clahsen 1996; Crain 1991; Crain and Lillo-Martin 1999; Crain and Thornton 1998; de Villiers and Roeper 2011; Eisenbeiss 2009a; Guasti 2002; Lust 2006).

In addition to the logical problem, any acquisition model should address the so-called developmental problem and capture the time course of syntactic development.
Children go through similar stages, though we can observe inter-individual and inter-language differences with respect to the onset and end for each stage (Bavin 2009; Brown 1973; Brown, Cazden, and Bellugi 1968; Guasti 2002; Ingram 1989): around 3 months, children tend to start cooing and laughing, followed by vocal play around 6 months, and babbling around 9 months. Around their first birthday, they usually start to produce melodic utterances and proto-words that resemble their input. The first word combinations typically appear at 18 months, and become more frequent around the second birthday. However, during this so-called telegraphic stage, children frequently omit function words and grammatical markers, such as tense markers or the determiners the and a; see (2). In the second half of their third year, children start to use three-element phrases, function words and bound grammatical morphemes productively, occasionally overgeneralizing regular inflections; e.g. (3). The fourth year shows increases in syntactic complexity and the range of constructions; e.g. wh-questions with subject-auxiliary-inversion such as (4a). Towards the end of the fourth year, multi-clausal sentences and conjoined phrases appear; e.g. (4b) and (4c). The following years are characterized by the emergence of meta-linguistic abilities (e.g. grammaticality judgment) and further complex constructions, in particular subordination as in (5).

(2) a. allgone cookie
    b. bad doggie
    c. auntie go

(3) the small mouses

(4) a. Where is he going?
    b. They knew what I wanted.
    c. boy and girl

(5) Get up when you’re ready!

Acquisition models that try to capture the developmental trajectory also need to account for the observed similarities and differences between the target grammar and the child’s grammar at different points. For instance, they must explain why children occasionally use grammatical morphemes in the early two-word stage, but often omit them. Approaches to these questions differ in their assumptions about acquisition mechanisms and children’s syntactic representations (see sections 4 to 9 and Eisenbeiss 2009a): according to maturational accounts, acquisition mechanisms develop over time, due to the maturation of the neural system that underlies language functions (Borer and Wexler 1987; Radford 1990). In contrast, continuity approaches do not assume any maturationally-induced qualitative changes of learners’ acquisition mechanisms (Pinker 1984). With respect to syntactic representations, proponents of usage-based approaches and generative Structure-Building approaches argue that children build up the representations of their target language incrementally. Thus, early representations might not involve all the features and properties of the target (Ambridge and Lieven 2011; Clahsen 1990/1991; Clahsen, Eisenbeiss, and Vainikka 1994; Clahsen, Eisenbeiss, and Penke 1996; Clahsen, Kursawe, and Penke; Clahsen and Penke 1992; Clahsen, Penke, and Parodi 1993/4; Clark and Kelly 2006; Duffield 2008; Eisenbeiss 2000, 2003; Radford 1990; Roeper 1996; Tomasello 2001, 2003, 2006). In contrast, Full-Competence or Strong-Continuity approaches assume adult-like syntactic representations even in the earliest two-word stage and attribute initial deviations from the target language to other factors − such as problems with morpho-phonological realization, lexical gaps or the interfaces between morpho-syntax and the discourse conditions for the realization of morpho-syntactic representations, etc.
(Hoekstra and Hyams 1998; Poeppel and Wexler 1993; Rizzi 1993/4, 2000; Valian, Solt, and Stewart 2009; Weissenborn 1992; Wexler 1998).

The third core issue in language acquisition research is the role of learners’ age in monolingual, bilingual and L2-acquisition (see section 9): while L1-acquisition is typically successful and produces largely uniform results, the outcome of adult L2-acquisition is extremely varied and it is unclear whether adult learners ever attain native speaker competence. This has been taken as evidence for a critical period, an ideal window of time to acquire grammar with appropriate input, after which it is no longer possible to achieve native-like command of grammar (Lenneberg 1967). The assumption of such a critical period could also explain why feral or neglected children, who did not have access to appropriate linguistic input, are unable to acquire languages at a later point. However, for these children, lack of linguistic input is clearly confounded with growing up in an unsupportive environment. Similarly, studies on potential critical-period effects in L2-acquisition must distinguish potential age effects from effects of having already learned a language. This has inspired a growing number of studies comparing child, adolescent, and adult L2-acquisition, as well as monolingual and bilingual L1-acquisition (see section 9; the journals Bilingualism, Second Language Research, Studies in Second Language Acquisition; Bhatia and Ritchie 2008; Granfeldt 2000; Meisel 2009, 2011;


Rothman 2008; Singleton 2005). Such studies are also crucial to determine the degree of language separation in bilinguals and constraints on code-switching and code-mixing (section 9).

2. Methods for investigating the acquisition of syntax

In order to address the issues discussed above, acquisition researchers have developed a broad range of methods for (i) naturalistic sampling, (ii) experiments and (iii) semi-structured elicitation. Naturalistic samples are obtained by audio/video-recording learners’ speech in spontaneous interactions with family members, friends or researchers (Behrens 2008; McDaniel, McKee, and Smith Cairns 1996; Eisenbeiss 2006, 2010; Menn and Bernstein Ratner 2000; Wei and Moyer 2008). Naturalistic samples from a broad range of languages and learner types are now freely available via the CHILDES database (http://childes.psy.cmu.edu/; MacWhinney 2000), the Max-Planck-Institute for Psycholinguistics (http://corpus1.mpi.nl/ds/imdi_browser/) and other webpages (http://leo.meikai.ac.jp/~tono/lcresource.html). For overviews, see the CHILDES bibliographies and numerous textbooks and edited volumes (http://childes.psy.cmu.edu/bibs/; Ambridge and Lieven 2011; Behrens 2008, 2009; de Villiers and Roeper 2011; Guasti 2002; Ingram 1989; Lust 2006; Myles 2005; O’Grady 1997; Saxton 2010; Sokolov and Snow 1994; the special language acquisition issue of Linguistics 47: 2, published in 2009).

In naturalistic sampling, researchers only interfere by recording learners and their interaction partners − sometimes without them even knowing that they are being recorded. Hence, the recording situation closely approximates the real-life situation under investigation and learners are unlikely to develop particular response strategies − even when samples are collected repeatedly. Thus, naturalistic sampling has a high ecological validity. Moreover, naturalistic samples can be obtained from any learner, independently of age, cognitive and linguistic ability; and recordings with learners’ regular conversation partners also provide input samples.
Finally, naturalistic samples do not target a particular construction and can be (re)analyzed with respect to a broad range of phenomena. Naturalistic sampling does not require specific stimulus materials and hence no prior in-depth knowledge of the respective language. Thus, it is ideal for obtaining a first overview of learners’ input and their own production. However, minimizing researcher control can lead to incomparable samples, as learners may talk about different topics and use different words or constructions. Moreover, naturalistic samples often contain very few examples of low-frequency constructions, such as embedded questions. Pooling data from several learners is no solution as this can lead to sampling errors and ignores inter-learner variation. Note also that even the frequent occurrence of a given construction cannot simply be taken as evidence for its acquisition: naturalistic data often involve recurring word-forms and phrases that might be parts of formulaic patterns (Eisenbeiss 2000; Radford 1990; Tomasello 2001), e.g.:

(6) Where’s the key/car/cat …? → Where’s the X?
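To see how such candidate slot-frames can be detected in transcripts, consider the following toy sketch (the utterance sample and the final-word slot heuristic are invented for illustration; real analyses of naturalistic data use much richer distributional criteria):

```python
from collections import Counter

# Sketch: detecting candidate formulaic frames like "where's the X?" in a
# hypothetical sample of child utterances by masking the final word slot.
utterances = [
    "where's the key", "where's the car", "where's the cat",
    "bad doggie", "where's the ball",
]

def frame(utterance):
    # Replace the last word with a slot marker to obtain a candidate frame.
    words = utterance.split()
    return " ".join(words[:-1] + ["X"]) if len(words) > 1 else utterance

frame_counts = Counter(frame(u) for u in utterances)
print(frame_counts.most_common(1))  # [("where's the X", 4)]
```

A frame that recurs with many different fillers, as here, is exactly the kind of pattern that cautions against treating raw frequency of a construction as evidence for its productive acquisition.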

Thus, one might overestimate learners’ knowledge. Conversely, one might underestimate learners’ knowledge when they are engaged in unchallenging activities that only require


imitations, object naming, and elliptical answers (meals, picture-book reading, etc.). Moreover, naturalistic samples do not provide information about learners’ interpretation of their utterances, which hampers studies on semantic aspects of quantifiers, co-reference, etc. Finally, when researchers refrain from interfering with the recording situation, they cannot systematically manipulate and study variables that affect learners’ performance (e.g. sentence length).

In experiments, researchers systematically manipulate one or more variables and measure whether any changes with respect to these variables affect speakers’ behavior (Crain and Thornton 1998; McDaniel, McKee, and Smith Cairns 1996; Menn and Bernstein Ratner 2000; Sekerina, Fernández, and Clahsen 2008; Wei and Moyer 2008). Standardized procedures ensure comparability and the avoidance of models or feedback that occur in spontaneous speech allows one to rule out some potential confounding factors. Moreover, the use of stimuli in some experiments can make it easier to determine learners’ intentions and interpretations.

In elicited imitation experiments, participants are asked to imitate spoken sentences (Bernstein Ratner 2000; Gallimore and Tharp 2006; Vinther 2002). This can provide insights into learners’ knowledge as participants cannot memorize complex sentences holistically, but must employ their own grammar to recreate them. As high task demands and partial memorization of targets can make results difficult to interpret, many researchers only use elicited imitation as a first step.

In elicited production experiments, learners receive prompts to produce particular constructions, e.g. questions like (7a) or negated sentences like (7b); see Crain and Thornton (1998), Menn and Bernstein Ratner (2000). The responses show whether learners produce the target or deviate from it in ways that reflect their syntactic knowledge. Some production experiments investigate whether learners can productively use a construction with novel words; see (7c) from Berko (1958) and Menn and Bernstein Ratner (2000) for further discussion.

(7) a. The dog is eating something, but I cannot see what. Can you ask the puppet?
    b. I’ll say something and then you say the opposite.
    c. This is a wug. These are two …?

Other experiments involve syntactic priming, speakers’ tendency to repeat syntactic structure across otherwise unrelated utterances (Bencini and Valian 2008; Bock 1986; Branigan 2007; Huttenlocher, Vasilyeva, and Shimpi 2004; Kim and McDonough 2008; Pickering and Ferreira 2008; Savage, Lieven, Theakston, and Tomasello 2003, 2006). For example, speakers are more likely to use passives after hearing or producing passive prime sentences than after active primes. If learners show such priming effects, even when the primes and learners’ own productions contain different words, this suggests that learners possess abstract syntactic representations that can be activated by priming. In contrast, if priming only occurs when primes and learners’ own productions involve the same verb, this indicates that learners’ syntactic representations are not abstract, but lexically bound. Learners’ comprehension of syntactic constructions or grammatical markers can be tested in different ways (Crain and Thornton 1998; McDaniel, McKee, and Smith Cairns 1996; Sekerina, Fernández, and Clahsen 2008): children can be asked to act out sentences with toys or to select pictures that match sentences they hear like (8a) and (8b). For


younger learners, one can use a preferential looking task where an auditory stimulus is presented while two visual stimuli are shown simultaneously and researchers measure which of two visual stimuli learners attend to for longer. Alternatively, one can show a learner a picture or tell a story and then ask learners to answer a comprehension question or to provide a truth-value judgment for an utterance like (9). (8)

a. The girl is hitting the boy. b. The girl is being hit by the boy.

(9)

All children are in the bathtub. Is this true?

In grammaticality-judgment experiments, learners from the age of three can either be asked to tell the experimenter whether a sentence is grammatical or they can be asked to decide between a grammatical utterance and an ungrammatical variant of this utterance (McDaniel, McKee, and Smith Cairns 1996).

Recently, researchers have employed online methods that are sensitive to the time course of processing to study the syntactic processing involved in learners’ production and comprehension (Clahsen and Felser 2006a, b; Marinis 2003; Sekerina, Fernández, and Clahsen 2008). Such studies typically involve auditory or visual stimuli and measure learners’ reaction times, or they record learners’ eye movements to detect their focus of attention at different times in the comprehension or production process.

As performance in experiments might be affected by memory problems, task-induced strategies or problems in focusing on relevant aspects of the stimuli, some researchers supplement naturalistic and experimental data with semi-structured elicitation (Berman and Slobin 1994; Eisenbeiss 2009b, 2010; Jaensch 2008). Semi-structured elicitation techniques keep the communicative situation as natural as possible, but use videos or games to encourage the production of rich and comparable speech samples. For instance, one can use form-focused techniques to investigate particular constructions, for example games contrasting colors or sizes to elicit noun phrases with color/size adjectives. Alternatively, one can use meaning-focused tasks to study how learners encode particular meanings, for instance elicitation games for possession-transfer constructions, in which learners have to describe which food they give to which animal; see e.g. (10a) vs. (10b).

(10) a. I give the bear the honey pot.
     b. I give the honey pot to the bear.
Other techniques are broad-spectrum tools to encourage learners to speak, for instance word-less picture books such as the Frog-story (Berman and Slobin 1994) or games requiring speakers to coordinate their actions verbally, such as the Bag Task, where players hide toys in pockets of a big bag (Eisenbeiss 2009b). Acquisition studies often involve converging evidence from naturalistic, experimental, and semi-structured studies. Experiments are typically part of cross-sectional studies, where learners are recorded once or a few times within a short period. Naturalistic and semi-structured studies may be cross-sectional, but often involve longitudinal sampling, where learners are recorded over longer periods.


3. The debate about the role of nature and nurture in the acquisition of syntax

As discussed above, any model of syntactic development must address the logical problem and must explain how children generalize beyond individual input utterances, but avoid or recover from incorrect generalizations without systematic explicit corrections. The current debate about this problem focuses on three core approaches that will be discussed below: the generative approach, the usage-based approach, and Optimality Theory. These approaches have inspired empirical studies on the role of innate predispositions and children’s input.

3.1. Theoretical approaches

Generative acquisition researchers assume an innate language acquisition mechanism, Universal Grammar (UG; see de Villiers and Roeper 2011 for an overview). UG involves a set of innate universals that constrain children’s hypothesis space. Thus, children can only make correct generalizations or generalizations that can be rejected without explicit corrections. According to Chomsky (1981), UG contains substantive universals, i.e. predispositions for grammatical categorization, and formal universals, i.e. well-formedness constraints for syntactic representations that apply to all human languages. One of the formal universals is the Structure-Dependency Principle. It states that all syntactic operations are dependent on syntactic structure, not on linear order or other non-structural aspects of language. Given such a principle, children who are faced with sentence pairs like (11) should never assume that questions are formed by fronting the first auxiliary or the third word of the utterance. Rather, all their syntactic operations should affect elements that belong to a particular syntactic category or occupy a particular syntactic position − such as English question formation, which refers to the main clause auxiliary.

(11) a. The rooster is eating.
     b. Is the rooster eating?

The nature and role of such innate constraints in acquisition are viewed differently in different versions of generative grammar: in the Principles-and-Parameters model (Chomsky 1981), UG contains two types of formal universals: (i) universal principles that capture universal properties of human languages and (ii) parameters that provide a finite set of options from which learners can choose.
For instance, generative linguists assume that all sentences contain subjects, but that languages may differ with respect to subject position and overt subject realization: in Italian subjects may be omitted when their referents can be inferred from context, while subjects in English must be overtly realized; see e.g. (12a) vs. (12b). Parameters can also capture the clustering of syntactic properties (Jaeggli and Safir 1989). For example, the obligatory-subject language English, requires subject expletives for verbs that do not select a subject, whereas Italian does not; see e.g. (13a) vs. (13b). Moreover, subjects of English embedded clauses can


only be extracted when there is no complementizer (14a), while the corresponding Italian utterance is grammatical with a complementizer (14b). In the Principles-and-Parameters model, this clustering is captured by parameters that are associated with clusters of syntactic properties: e.g. [−pro-drop] for English and [+pro-drop] for Italian, which behaves differently for all three properties. In such a model, language acquisition only involves (i) acquiring the lexicon and (ii) setting parameters to target values, which should lead to acquiring a cluster of grammatical properties (Chomsky 1989).

(12) a. (Giovanni) parla. [Italian]
        John speak.3SG.PRS
        ‘John speaks.’
     b. John speaks.

(13) a. It rains.
     b. Piove. [Italian]
        rain.3SG.PRS
        ‘It rains.’

(14) a. Who do you think (*that) will leave?
     b. Chi credi che partirà? [Italian]
        who think.2SG.PRS that leave.3SG.FUT
        ‘Who do you think will leave?’

In early versions of generative grammar, innate formal universals were viewed as domain-specific, i.e. specifically targeted to the domain of language. In later minimalist versions, generative researchers derive linguistic universals from more general cognitive principles; for instance from economy principles for the application of syntactic operations (Chomsky 1995, 2001). Other principles have been re-conceptualized. For example, the Structure-Dependency Principle can be interpreted as a domain-specific effect of a general principle that requires operations on a particular level of cognitive representation to refer to properties of units at this level (Eisenbeiss 2003, 2009a). This explains why syntactic operations such as reordering of words in question formation can only refer to syntactic units (e.g. heads and phrases in particular syntactic positions) and not to properties that are unrelated to syntactic structure − such as the linear position of a word. This principle also holds outside the domain of language. For instance, rules for chess refer to the pieces as the functional units in the game (king, pawns, etc.), not to the physical properties of the pieces (the smallest one, etc.).

Parameters were also re-conceptualized: initially, parameters referred to a heterogeneous set of linguistic properties, e.g. subject omissions, word order or morphological marking. However, in recent generative models, all parameter values are linked to properties of so-called functional categories that carry grammatical features and are realized by function words or grammatical morphemes (Chomsky 1989). For instance, subject-verb-agreement markers that are associated with subject realization parameters are viewed as realizations of the functional category INFL(ection), which projects to an Inflectional Phrase (IP). Complementizers, whose properties are crucial for extractions from embedded clauses, are treated as realizations of the functional category COMP(lementizer), which projects to a complementizer phrase (CP), and determiners, which show cross-linguistic differences in definiteness and specificity marking, are viewed as realizations of the functional category DET(erminer), the head of the DP. According to such a lexicalist model, children should fix parameters and build up projections of functional categories by learning the properties of the lexical elements that encode the respective functional categories.

Setting parameters requires triggers, i.e. input data that are reliably available to all learners and offer positive evidence for the target value of the respective parameter. In input-matching models, triggers are entire sentences: according to Gibson and Wexler (1994) and Clark (1992), children use their current parameter values to parse input sentences and the success of the parses determines which value they will select. In cue-based models, each parameter comes with a cue and children scan the input for these cues and set parameters accordingly. Cues are not complete sentences, but structures derived from the input. For instance, in verb-second languages like German, the finite main clause verb must appear in second position, while the first position can be occupied by any type of phrase. Thus, any sentence that starts with a non-subject phrase, followed by a verb, provides a cue that the target language is a verb-second language like German; see e.g. (15) and Lightfoot (1991, 1999). In Fodor’s (1998, 1999) model, parameter values are associated with tree-structure fragments and children determine the appropriate one by parsing each input sentence with each fragment.

(15) Gestern aß ich Hühnchen. [German]
     yesterday ate I chicken
     ‘Yesterday, I ate chicken.’
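A cue-based learner of this kind can be sketched in a few lines (a toy illustration with hand-parsed input tuples; the category labels and the one-cue decision rule are simplifying assumptions, not part of the cited models, which operate over full parsed structures):

```python
# Sketch of cue-based parameter setting (in the spirit of Lightfoot 1991, 1999):
# scan the input for the verb-second cue "non-subject phrase in first position,
# finite verb in second position" and set the V2 parameter once the cue occurs.
# Utterances are toy lists of (category, word) pairs, assumed already parsed.

def has_v2_cue(utterance):
    # Cue: first constituent is not the subject, second is the finite verb.
    (cat1, _), (cat2, _) = utterance[0], utterance[1]
    return cat1 != "SUBJ" and cat2 == "VFIN"

def set_v2_parameter(input_sample):
    # The parameter is set to +V2 as soon as any utterance exhibits the cue.
    return any(has_v2_cue(u) for u in input_sample)

german_input = [
    [("SUBJ", "ich"), ("VFIN", "esse"), ("OBJ", "Huehnchen")],
    [("ADV", "gestern"), ("VFIN", "ass"), ("SUBJ", "ich"), ("OBJ", "Huehnchen")],  # cf. (15)
]
english_input = [
    [("SUBJ", "I"), ("VFIN", "ate"), ("OBJ", "chicken")],
    [("ADV", "yesterday"), ("SUBJ", "I"), ("VFIN", "ate"), ("OBJ", "chicken")],
]

print(set_v2_parameter(german_input))   # True: the sample contains a V2 cue
print(set_v2_parameter(english_input))  # False: no V2 cue in the sample
```

The key design point, which the sketch preserves, is that the cue is a structural configuration derived from the input rather than a whole sentence, so a single unambiguous utterance like (15) suffices to set the parameter.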

While generative models highlight the role of innate predispositions, proponents of usage-based models assume that (i) children’s generalizations are initially quite limited and (ii) children’s input provides rich information about the target language as well as different types of feedback that helps children to overcome non-target-like generalizations (see section 3.2). Hence, general cognitive principles are deemed sufficient to constrain children’s hypothesis space. Most usage-based acquisition models adopt a construction-grammar framework (Goldberg 1995) according to which grammars do not result from fixing parameters, but from acquiring the constructions of the target and the relations between them. According to Tomasello (2003, 2006), intention-reading and cultural learning enable children to learn linguistic symbols in the first place. Children then acquire syntactic constructions and syntactic roles such as subject and direct object by analogy formation based on individual utterances, e.g. (16a) and (16b), and similar utterances:

(16) a. The car is towing the boat.
     b. The truck is towing the car.

Initially, children’s generalizations are quite limited and their constructions are bound to the lexical elements they frequently encounter in these constructions. Later, however, children generalize across constructions and lexical elements. This process is constrained by entrenchment and competition: the more frequently children hear lexical elements in a particular construction, the more firmly their usage is entrenched − and the less likely
children will be to use these lexical elements in any constructions in which they have not encountered them. They will be even less likely to use this element if they have frequently heard it in another construction with a similar function. Finally, children can use their pattern-finding skills and the distribution of input elements in child-directed speech to form categories such as noun or verb.

In Optimality Theory, a grammar of a particular language is an ordered set of violable constraints on outputs and input-output relationships (Dekkers, van der Leeuw, and van de Weijer 2000; Fikkert and de Hoop 2009; Tesar and Smolensky 2000). In syntax, inputs are meanings and outputs are syntactic structures, whereas in semantics, meanings are outputs and forms are inputs. Markedness constraints require outputs to be as economical as possible and favor the default structure for a given context. For instance, the markedness constraint SUBJECT requires all sentences to have subjects. Hence, subjectless sentences are only licensed when subjects can be inferred from context. Markedness constraints typically conflict with faithfulness constraints, which govern the input-output mapping and foster expressiveness. For example, FULL-INTERPRETATION requires each word in the sentence to be meaningful. In Optimality Theory, constraint conflicts are resolved by constraint ranking: while all languages are assumed to share a core set of constraints, languages are considered to differ in their ranking of these constraints. For instance, in English, SUBJECT is higher ranked than FULL-INTERPRETATION, making overt subjects more important than avoiding words that do not contribute meaning. Hence, English speakers combine weather verbs like rain with the expletive subject it that does not contribute to meaning (13a). This violates the lower-ranking FULL-INTERPRETATION constraint, but obeys the higher-ranking SUBJECT constraint.
In Italian, FULL-INTERPRETATION outranks SUBJECT, which makes it more important to avoid meaningless words than to have subjects. Thus, Italian weather verbs lack overt subjects, violating SUBJECT, but conforming to FULL-INTERPRETATION; see (13b). Even though Optimality Theory constraints are violable, the child’s hypothesis space is constrained: all grammars involve the same types of constraints and only differ in their ranking, which the child has to acquire. It is generally assumed that markedness constraints initially outrank faithfulness constraints (Fikkert and de Hoop 2009). Thus, when children consistently encounter marked structures in their input, this provides positive evidence for the demotion of markedness constraints. In nativist versions of Optimality Theory, the constraints themselves are innate and universal (Prince and Smolensky 2004). According to others, constraints are functional. This means that they are grounded in articulatory or acoustic phonetics and/or serve communicative functions and reduce processing loads (see Boersma 1998; Haspelmath 1999). Thus, Optimality Theory does not commit its proponents to a particular position with respect to the logical problem and the role of nature and nurture.

Some core assumptions about the acquisition of syntax are shared by generative, usage-based and Optimality Theory approaches. In particular, they all assume constraints on children’s hypothesis space that prevent some incorrect generalizations. Moreover, there is an emerging consensus that such constraints are not domain-specific, but of a more general nature. For instance, all current approaches assume some version of a principle that gives more specific operations precedence over more general ones (Bates and MacWhinney 1987; Braine and Brooks 1995; Clark 1987; Eisenbeiss 2009a; Marcus 1993; Marcus, Pinker, Ullman, Hollander, Rosen, and Xu 1992).
Such a principle of specificity, blocking, contrast or pre-emption can, for example, explain that a German
verb with an idiosyncratic genitive feature will assign the lexically specified genitive to its direct object − and not the default object case accusative. Such a principle could also play a role in acquiring morpho-syntactic markers: it is only applicable if contrasting word forms involve different specifications for grammatical features; e.g. cat: [−PLURAL] vs. cats: [+PLURAL]. Otherwise, the principle could not be employed to decide which form should be given precedence. Therefore, individual word form contrasts in the input should lead children to search for grammatical distinctions that correspond to the use of the respective forms (Eisenbeiss 2003, 2009a).
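The Optimality-Theoretic competition described in this section (ranked, violable constraints selecting among candidate outputs) can be sketched in a few lines. This is a minimal illustration of the evaluation logic, not a claim about any particular implementation; the candidate encodings are invented:

```python
# Minimal sketch of Optimality-Theoretic evaluation: candidates are
# compared on their constraint violations, highest-ranked constraint
# first. Constraint names follow the text; candidate fields are invented.

def violations(candidate):
    """Count violations of the two constraints discussed in the text."""
    return {
        # SUBJECT: penalize sentences without an overt subject
        "SUBJECT": 0 if candidate["has_subject"] else 1,
        # FULL-INTERPRETATION: penalize meaningless words (e.g. expletive 'it')
        "FULL-INT": candidate["meaningless_words"],
    }

def optimal(candidates, ranking):
    """The winner has the lexicographically smallest violation profile
    under the language-particular constraint ranking."""
    return min(candidates, key=lambda c: tuple(violations(c)[con] for con in ranking))

weather = [
    {"form": "it rains", "has_subject": True, "meaningless_words": 1},
    {"form": "rains", "has_subject": False, "meaningless_words": 0},
]

# English: SUBJECT outranks FULL-INTERPRETATION
print(optimal(weather, ["SUBJECT", "FULL-INT"])["form"])  # 'it rains'
# Italian: FULL-INTERPRETATION outranks SUBJECT
print(optimal(weather, ["FULL-INT", "SUBJECT"])["form"])  # 'rains'
```

Reranking the same two constraints thus flips the winner, which is exactly the English/Italian contrast in (13a)/(13b).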

3.2. Empirical studies on input and innate predispositions

Whether children actually obey the constraints postulated by theoretical linguists has been investigated in numerous studies (Crain 1991; Crain and Lillo-Martin 1999). For instance, children have been shown to respect the Structure-Dependency Principle as early as it is possible to test them: when they form questions with two auxiliaries, they do not simply front the first auxiliary as in (17a). That is, they do not simply use operations that refer to linear position. Rather, they produce sentences like (17b); i.e., they seem to know that English question formation involves the inversion of the main clause subject and auxiliary, i.e. a syntactic operation that affects an element in a particular syntactic position. Pullum and Scholz (2002) argue that children could learn not to make mistakes like (17a) when they encounter wh-questions with two auxiliaries, such as (17c).

(17) a. *Is the farmer who __ running is old?
     b. Is the farmer who is running __ old?
     c. Where’s the other car that was in here?
In summary, empirical and computational studies are compatible with the assumption that children’s grammatical generalizations are constrained by powerful general cognitive principles. Studies on children’s input have highlighted properties of child-directed speech that might provide additional support for learners: it contains far fewer errors and disfluencies than initially assumed, involves syntactically simple sentences and has special prosodic properties − in particular slow speed with longer pauses between utterances and after words, high and varied pitch, and exaggerated intonation or stress patterns (Gallaway and Richards 1994). Moreover, cross-linguistic studies have shown that one can account for more than half of all child-directed utterances using a very restricted set of frames
like That’s a __, even in free word order languages like Russian (Stoll, Abbot-Smith, and Lieven 2009). In addition, child-directed speech is characterized by so-called variation sets, sequences of adult utterances with a constant communicative intention and different types of variation in form; e.g. (18) (Küntay and Slobin 1996; Onnis, Waterfall, and Edelman 2008; Slobin, Bowerman, Brown, Eisenbeiss, and Narasimhan 2011). Such variation sets involve lexical substitution and rephrasing, shifts from full noun phrases to pronouns, and the addition, deletion or reordering of constituents. These properties can highlight constituent boundaries and morphological contrasts (e.g. car vs. cars); and they can provide evidence for word order flexibility, syntactic processes and the optionality of particular constituent types.

(18) Put the car away! And now, put this car away! Just put all of these cars away! Put them away!

Other input properties could help children to recover from errors. For instance, when a child says *singed, the parent might reformulate the non-target-like form and produce the correct form sang. This could alert children to errors and thus provide implicit negative evidence (Chouinard and Clark 2003; Farrar 1990, 1992). Experimental studies have demonstrated that learners can benefit from such input (Saxton, Blackley, and Gallaway 2005; Valian and Casey 2003). However, it is still debated whether such reformulations are provided frequently and reliably for all children. Moreover, reformulations would not make principles like the Specificity Principle superfluous. On the contrary, hearing a different form for the same meaning can only alert a child to the inappropriateness of their own form if they follow a principle that rules out two different forms for the same meaning. Thus, the effectiveness of reformulations in fact supports the assumption of (general cognitive) constraints.
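Variation sets of this kind can be operationalized in corpus work, for example via lexical overlap between successive utterances. The following sketch and its overlap threshold are illustrative assumptions, not an established algorithm:

```python
# Toy identification of variation sets in child-directed speech:
# successive utterances that share at least `min_overlap` word types
# are grouped together. Threshold and sample utterances are invented.

def variation_sets(utterances, min_overlap=2):
    """Group consecutive utterances with sufficient lexical overlap;
    singleton groups are discarded."""
    if not utterances:
        return []
    sets, current = [], [utterances[0]]
    for prev, nxt in zip(utterances, utterances[1:]):
        overlap = set(prev.lower().split()) & set(nxt.lower().split())
        if len(overlap) >= min_overlap:
            current.append(nxt)       # continue the current variation set
        else:
            if len(current) > 1:
                sets.append(current)  # keep only multi-utterance sets
            current = [nxt]
    if len(current) > 1:
        sets.append(current)
    return sets

speech = [
    "put the car away",
    "and now put this car away",
    "put them away",
    "do you want juice",
]
print(variation_sets(speech))
```

On this toy corpus, the first three utterances form one variation set (shared material: put, car, away), while the final utterance stands apart.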
Taken together, theoretical developments and empirical studies have led to complex models that involve (i) general cognitive constraints on children’s grammatical generalizations and (ii) universal distributional and phonological properties of child-directed speech that support speech segmentation, linguistic categorization and the acquisition of word order patterns and grammatical distinctions.

4. The emergence of syntax

Acquisition models must not only address the logical problem, they must also explain how syntax emerges on the basis of the proposed constraints and input properties and tackle the bootstrapping problem: children must identify the grammatical distinctions and the corresponding morphological forms of their target language, as well as the mappings between them; and they must embark on this process without knowing which meanings the language will encode and which forms it employs to do so (section 4.1). Moreover, such models must explain why this process is characterized by initial omissions of grammatical morphemes (section 4.2) and subjects or other arguments (section 4.3).

4.1. Syntactic categorization and the bootstrapping problem

Faced with the bootstrapping problem, some researchers argue that children use phonological cues (e.g. pauses, stress, …) to identify constituents (Morgan and Demuth 1996).
This is possible because words tend to be grouped into prosodically cohesive units, i.e. units that are rhythmically and intonationally organized and roughly correspond to syntactic clauses and phrases. Other cues highlight distributional regularities: In any given language, it is easier to predict the next sound within a word than to predict sounds across word boundaries as certain sounds are frequently combined within words. Children can use such information to segment the input, but proponents of semantic bootstrapping approaches argue that this is not sufficient for syntactic categorization and that children use innate form/meaning links to identify instances of grammatical categories in the input: For instance, Pinker (1984) argued that children possess innate links between linguistic and conceptual categories, e.g. noun-thing, verb-action, or Agent-subject. Together with information about referents of words and innate phrase-structure principles, this allows them to create initial phrase-structure representations. However, the links that Pinker proposes do not hold for adult languages. For example, the Agent is not the subject in passives like (19). Hence, children would have to overcome their initial biases or focus on active sentences only − and it is unclear how they would do this. Therefore, some researchers assume more general predispositions, e.g. a general predicate/argument distinction (e.g. Fisher 2001). They argue that phonological and distributional bootstrapping allows children to build up partial sentential representations with constituent boundaries; e.g. (20). (19) The chicken is chased by the dog. 
(20) [[… dog ] [chases chicken …]]

Children should also be able to identify the meaning of at least some words by word-to-world mapping and cross-situational observational learning: for instance, if a parent and a child frequently look at the same object together, the child could associate this object with a word used to label it in these situations. If this association is consistent with new contexts, a lexical entry for the word can be formed. Distributional properties can then help the child to assign the new word to different classes (Christophe, Millotte, Bernal, and Lidz 2008). For instance, some words tend to occur with the and a, while others co-occur with -ed, -s and -ing. As members of the first class tend to refer to entities while members of the second class are typically used to encode predications, children can map this distributional distinction onto the predicate/argument distinction. Together with phrase-structure constraints, this will allow children to build up initial phrase-structure representations.
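The distributional logic just described (words following the or a versus words carrying -ed, -s or -ing) can be sketched as a toy classifier; the mini-corpus and the cue sets are invented for illustration:

```python
# Toy distributional classifier: assign a word to a noun-like or
# verb-like class based on the contexts it occurs in. The corpus and
# cue inventories are invented for exposition.

def classify(word, corpus):
    """'noun-like' if the word follows a determiner more often than it
    appears with a verbal suffix; 'verb-like' otherwise."""
    noun_cues = verb_cues = 0
    for sentence in corpus:
        tokens = sentence.split()
        for i, token in enumerate(tokens):
            if token == word and i > 0 and tokens[i - 1] in {"the", "a"}:
                noun_cues += 1   # determiner context: the/a __
            if token in {word + "ed", word + "s", word + "ing"}:
                verb_cues += 1   # verbal suffix: __-ed / __-s / __-ing
    return "noun-like" if noun_cues > verb_cues else "verb-like"

corpus = [
    "the dog chased a cat",
    "a dog barked",
    "she walked and he jumped",
    "the cat jumped",
]

print(classify("dog", corpus))   # noun-like
print(classify("jump", corpus))  # verb-like
```

Of course, a real learner faces noisy input and irregular morphology; the point of the sketch is only that purely distributional evidence already separates the two classes on clean data.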

4.2. Omissions of grammatical morphemes

Empirical studies on the time course of this process show early adaptations to the basic word order patterns of the target, but frequent omissions of grammatical morphemes, such as tense markers or determiners (see Ambridge and Lieven 2011; de Villiers and Roeper 2011; Eisenbeiss 2009a; Guasti 2002; Lust 2006 for overviews). Efforts to capture these conflicting observations have sparked an intense debate about acquisition mechanisms and early grammars: Full-Competence proponents capture children’s target-like behavior by assuming adult-like representations; and they attribute deviations from
the target to (i) maturational processes that affect the availability of constraints (Wexler 1998; Rizzi 1993/4, 2000); (ii) the lack of the morphological elements which realize functional projections that children have already established (Boser, Lust, Santelmann, and Whitman 1992); (iii) problems with the phonological realization of unstressed functional elements (Demuth 1994); (iv) a lack of processing capacities (Avrutin 1999; Phillip 1995; Valian 1991) or (v) underdeveloped pragmatic knowledge, which leads to a lack of specifications for features like specificity (Hyams 1996). In contrast, Structure-Building proponents assume that syntactic representations are built up incrementally and that children’s earliest syntactic representations might lack some or all of the grammatical features of the target (Clahsen 1990/1; Clahsen and Penke 1992; Clahsen, Penke, and Parodi 1993/4; Clahsen, Eisenbeiss, and Penke 1996; Clahsen, Kursawe, and Penke 1996; Duffield 2008; Gawlitzek-Maiwald, Tracy, and Fritzenschaft 1992; Meisel and Müller 1992; Lebeaux 1988; Radford 1990, 1995; Tsimpli 1992; Vainikka 1993/4). The Structure-Building Hypothesis has been combined with the claim that functional categories mature (Radford 1990). An alternative is the Lexical-Learning Hypothesis, which states that functional categories are available early on, but children still need to learn the properties of the corresponding morphological elements to build up syntactic representations (Clahsen, Penke, and Parodi 1993/4; Clahsen, Eisenbeiss, and Vainikka 1994; Clahsen, Eisenbeiss, and Penke 1996; Clahsen, Kursawe, and Penke 1996; Eisenbeiss 2000, 2003; Meisel and Müller 1992; Pinker 1984). In Structure-Building variants, incremental development is expected. The Lexical-Learning approach additionally predicts initial restrictions of grammatical morphemes to particular lexical elements.
Similar predictions follow from the usage-based approach: children build up schemas that include functional elements, on the basis of utterances with similar components; e.g. (21) and (22). Initially, these schemas are bound to the lexical elements involved in them, which accounts for lexical restrictions and optionality.

(21) Where’s the X?
(22) There’s the X.

In order to support their approaches, Full-Competence proponents searched for early occurrences of functional elements, whereas proponents of Structure-Building and usage-based approaches tried to show that early function words and grammatical morphemes only occur in (semi)formulaic utterances and are not based on adult-like representations. These studies have revealed a complex picture (Caprin and Guasti 2009; Clahsen 1996; Clahsen, Penke, and Parodi 1993/4; Clahsen, Eisenbeiss, and Penke 1996; Clahsen, Kursawe, and Penke 1996; Eisenbeiss 2009a; Guasti 2002; Lust 2006; Poeppel and Wexler 1993). In particular, children acquiring English, Dutch, and German go through a so-called root infinitive or optional infinitive stage, where some of their main (or root) clauses contain the required finiteness markers, while others lack them; e.g. (23a) vs. (23b). However, the positioning of finite and non-finite verbs is mostly target-like. For instance, German children correctly place finite verbs in second position and non-finite verbs in final position. Similarly, English children correctly place the negation before the verb, even if the verb is not correctly inflected, as in (24). Thus, children deviate from the target, but not randomly. The same has been observed for case (Schuetze and Wexler 1996; Eisenbeiss, Narasimhan, and Voeikova 2008): at least some English children produce non-target-like subject pronouns (e.g. me/my go), but verb and preposition complements are typically correct. Moreover, non-target-like subject pronouns like me or my tend to co-occur with non-finite verbs.

(23) a. Mommy sleeps.
     b. Mommy sleep.

(24) John not eat(ing).

Many Full-Competence proponents attribute these findings to maturation: according to Rizzi (1993/4, 2000), children can produce “pruned” trees until the constraint that all sentences must be complementizer phrases (CPs) has matured. Hyams (1996) argues that children can initially leave functional categories such as INFL and DET underspecified with respect to the grammatical features finiteness and specificity; and in Schuetze and Wexler’s (1996) Agreement-Tense-Omission (ATOM) model, children’s grammars are initially restricted by the Unique Checking Constraint: they can check noun-phrase features of subjects either with tense or with agreement features of the finite verb, but not with both. A similar account has been formulated in Optimality Theory (Legendre, Hagstrom, Vainikka, and Todorova 2002). Problems in finiteness marking may have further consequences: Schuetze and Wexler argue that subject case is assigned by the agreement features in INFL. Hence, if agreement is left unspecified, nominative case should not be assigned, yielding sentences with null subjects or subjects in the default case. For English, this is the accusative, as it can appear in contexts without overt case assigners, for instance as the subject of sentences that lack a finite verb, as in Him worry? Never!. In languages where the default case is nominative (e.g. in German), we should not find non-nominative subjects even if the verb lacks finiteness marking. And indeed, no systematic subject case errors are documented for such languages (Eisenbeiss, Bartke, and Clahsen 2005/6).
The ATOM model also makes predictions about cross-linguistic differences: in particular, Romance pro-drop languages like Italian and Spanish should not show a root infinitive stage, as in these languages, the agreement features are in DET and not in an agreement node that competes with Tense for checking. However, it is still debated whether this prediction is actually borne out (see Caprin and Guasti 2009; Pratt and Grinstead 2007; Wexler 1998). Moreover, the ATOM model does not explain (i) why some children make many non-target-like pronoun errors while others produce hardly any; (ii) why some pronouns are more error-prone than others; (iii) why different children produce errors with different forms; and (iv) why semantic verb classes seem to be affected to different degrees (Hoekstra and Hyams 1998; Hyams 2001; Pine, Rowland, Lieven, and Theakston 2005; Pine, Conti-Ramsden, Joseph, Lieven, and Serratrice 2008; Rispoli 1999, 2005). Some researchers have pointed out factors that could explain (some of) this cross-linguistic, inter-individual and inter-lexeme variation. For instance, Rispoli (1999, 2005) attributed differences between individual pronoun forms to their respective role in the morphological paradigm. Kirjavainen, Theakston, and Lieven (2009) demonstrated that the proportion of me-subject errors correlates with the proportion of caregivers’ use of me in 1SG pre-verbal contexts, e.g. let me do it. And Freudenthal, Pine, Aguado-Orea, and Gobet (2007) captured the cross-linguistic differences between English, Dutch, German and Spanish child language in a computational model that focuses
(i) on the beginnings of input sentences, where subjects tend to appear, and (ii) on the ends of input sentences, where many non-finite forms appear in the verb-final languages German and Dutch. Such a model is in line with the common assumption that children pay more attention to the beginnings and the ends of utterances. Taken together, the empirical findings obtained to date suggest that a future comprehensive account of finiteness and case development is likely to involve syntactic, morphological, semantic, lexical and distributional factors. Such an integrated account also seems to be required for noun phrase development (Eisenbeiss 2000, 2009a; Guasti, Gavaro, de Lange, and Caprin 2008): in comprehension experiments, children are sensitive to determiners and other function words in their input before they start to produce them (Shi, Werker, and Cutler 2006). For instance, English-speaking children in preferential looking studies discriminated between (i) stories with existing function words like the and (ii) stories with novel function words like guh; and children comprehend utterances with function words better than utterances where function words such as articles are omitted (Shafer, Shucard, Shucard, and Gerken 1992). Moreover, when 14- to 16-month-old German children (but not 12- to 13-month-old children) were familiarized with novel words in combination with a determiner (e.g. der Pronk ‘the pronk’), they later discriminated between these words in a noun context and the same words in a verb context (Höhle, Weissenborn, Kiefer, Schulz, and Schmitz 2004). This suggests that they can use the determiner to categorize lexical elements, as proposed in phonological and distributional bootstrapping approaches (Christophe, Millotte, Bernal, and Lidz 2008; Höhle 2009; Weissenborn and Höhle 2001). Some researchers interpret these findings as evidence for the Full-Competence Hypothesis and try to demonstrate early use of determiners and other realizations of DET (e.g.
the possessive marker ’s; Penner and Weissenborn 1996; Bohnacker 1997; Valian, Solt, and Stewart 2009). For some Romance languages, determiners have indeed been observed quite early. However, many English- or German-speaking children produce few (less than 10 %) or no overt determiners and possessive markers in the early two-word stage (Radford 1990; Eisenbeiss 2000; Eisenbeiss, Matsuo, and Sonnenstuhl 2009). Moreover, Eisenbeiss (2000) has observed a U-shaped developmental curve in German children’s use of determiners: after an initial rise, the rate of overt determiners drops for a while before it rises to target levels. Before the temporary drop in the rate of overt determiners, the occurrence of these elements is restricted to a few fixed determiner+noun combinations and to semi-formulaic utterances like That’s a X. Such restrictions of early determiner use have also been observed for other Germanic languages (e.g. Pine and Lieven 1997), though Full-Competence proponents have tried to challenge these findings and attribute determiner omissions to problems with the realization of unstressed materials (Demuth 1996), the pragmatic knowledge involved in specificity marking (Hyams 1996) or a lack of processing abilities and lexical knowledge (Valian 1991). In summary, studies on functional categories in early child language have demonstrated that children are sensitive to distributional properties of these elements before they produce them. There is general agreement that some initial uses of functional elements may be formulaic, but current studies investigate when and how children learn to use such elements productively.

4.3. Argument omissions

Young children not only omit grammatical morphemes, but also subjects and other arguments (de Villiers and Roeper 2011; Guasti 2002; O’Grady 1997; Lust 2006; Hughes and Allen 2006). According to early generative accounts, children initially set the pro-drop parameter, which captures the different options for subject realization, to [+pro-drop], hence allowing null subjects (Hyams 1986). They only set the parameter to the target value when they acquire one of the other properties associated with the pro-drop parameter, such as expletives or subject-verb agreement. However, no study could convincingly demonstrate that all syntactic properties linked to the pro-drop parameter are acquired at the same time. Thus, there is no evidence for systematic clustering effects. Moreover, children’s subject omissions reflect target properties from the onset (Guasti 2002): for instance, children acquiring a non-pro-drop language like English produce more overt subjects than children acquiring pro-drop languages like Italian. At the same time, Italian children drop subjects in any position, while early subject drop in German and English is mostly restricted to the initial position of main clauses − the only position in which their target languages allow for subject drop in particular discourse contexts; e.g. Woke up late, __ had a headache (see however Duffield 2008). More recent generative accounts of early null subjects do not assume non-target-like parameter settings. Some researchers point out that null subjects tend to occur in utterances without finiteness marking and view subject omissions as a result of children’s well-documented problems with finiteness marking (Vainikka 1993/4; Wexler 1998; Rizzi 1993/4; Hyams 1996). However, such finiteness-related approaches do not explain why children (i) often produce subjects in non-finite sentences; (ii) omit objects; and (iii) omit subjects of transitive verbs more often than subjects of intransitive verbs.
Some researchers argue that processing difficulties prevent children from producing overt subjects even though they have already acquired the knowledge that their grammar requires overt subjects: Bloom (1990) assumes a limitation of sentence length, which might explain why subjects of verbs that co-occur with an object are more likely to be omitted than subjects of intransitive verbs. Gerken (1991) argues that English-speaking children tend to omit the first syllable in weak-strong feet, e.g. (gi)raffe; and sentence-initial subject pronouns are weak, which makes them more likely to be omitted. Proponents of discourse-pragmatic accounts argue that subjects of transitive sentences are omitted more often than subjects of intransitive sentences because they often contain old information (Hughes and Allen 2006). Such processing and discourse-based accounts do not capture the relation between subject omissions and finiteness problems, but any comprehensive account of subject omissions will have to take into account morpho-syntactic, processing and discourse factors − just like accounts of grammatical morpheme omissions.

5. The acquisition of questions and embedded clauses

Wh-elements like who and where and complementizers like that occupy positions within the complementizer phrase, i.e. in the CP-layer of clauses. Moreover, in English questions an auxiliary has to occupy the COMP position, which requires inversion of subjects and auxiliaries or the addition of the dummy auxiliary do if the utterance does not
contain a modal or auxiliary verb; e.g. (25). Hence, (wh-)questions, subject-auxiliary inversion, do-support, and embedded clauses have been a major focus of acquisition studies on the functional category COMP. Moreover, in questions and relative clauses, listeners have to link the wh-element (the so-called filler) to the position where it is interpreted (the so-called gap). For instance, native speakers must interpret the wh-element who in (25b) as if it occupied the subject position of the embedded clause, while they must treat who as the object of the embedded clause in (25c). Thus, studying wh-questions can help us understand how children learn to interpret and produce utterances with filler-gap dependencies.

(25) a. (Where) can he go?
     b. Who do you think __ saw Christopher?
     c. Who do you think Christopher saw __?

Wh-elements and complementizers typically appear in the third year; and wh-elements may occasionally occur even in the early two-word stage (Ambridge and Lieven 2011; de Villiers and Roeper 2011; Brown 1973; Guasti 2002; O’Grady 1997). However, spontaneous early questions tend to involve common components that could be formulaic; e.g. (26). In some of these questions, children do not change the auxiliary part of the formula to produce target-like agreement; e.g. (27) − even though they already mark agreement correctly in other contexts; e.g. (28a) vs. (28b). This suggests that at least some early questions are not based on adult-like representations, but are formulaic.

(26) Where’s the car/dog/cat? → Where’s the X?
(27) *Where’s the cars?
(28) a. They are running.
     b. He is running.
In contrast, proponents of usage-based approaches argue that children start out by reproducing frequent questions or parts of questions and then generalize on an item-by-item-basis, incrementally moving from formulaic utterances like (29) to lexically specific templates like (30), and then further on to more general templates (Ambridge and Pine 2006; Boyd and Goldberg 2011; Dabrowska 2000; Dabrowska and Lieven 2005; Rowland and Pine 2000). In current studies, generative researchers carry out more and more sophisticated experiments to determine how early children can comprehend and produce simple and complex questions and embedded sentences (Crain and Thornton 1998). At the same time, usage-based researchers try to show that large proportions of children’s early questions and embedded sentences are based on templates such as (31a) and (31b), which make up the vast majority of complex questions in child-directed speech (Dabrowska, Rowland, and Theakston 2009). (29) What’s that? (30) Where’s-the X?
(31) a. Where BE X?
     b. WH do you think S-gap?

Subject-auxiliary inversion in English emerges gradually, with some non-inversion errors and double-auxiliary errors (32). Generative researchers argue that children must learn how each auxiliary and modal is used, depending on finiteness, negation, and sentence type, which takes time (Pinker 1984; Valian and Casey 2003; Wexler 1998). However, children should generalize rapidly across different forms of the same auxiliary and across different combinations of auxiliaries and wh-elements. In contrast, usage-based approaches predict that children should start out with particular wh+auxiliary+subject combinations, should first generalize across auxiliary forms within the same construction, and only then generalize across constructions (Rowland and Theakston 2009; Theakston and Rowland 2009). In order to evaluate such predictions, large corpus studies with naturalistic and elicited data as well as longitudinal elicitation studies are currently being conducted. Such studies can also help us determine whether different types of relative clauses are acquired incrementally or whether children start to generalize early on (Brandt, Diessel, and Tomasello 2008). At the same time, comprehension and production experiments are employed to compare children's performance for different types of filler-gap constructions (see Friedmann, Belletti, and Rizzi 2009; Ambridge and Lieven 2011; de Villiers and Roeper 2011 for overviews). In these experiments, children perform better for utterances with subject gaps than for utterances with object gaps. This observation was made for (i) simple questions like (33), (ii) complex questions like (34), and (iii) relative clauses like (35).

(32) Why did you did scare me?

(33) a. subject gap: Who is __ helping Christopher?
     b. object gap: Who is Christopher helping __?

(34) a. subject gap: Who do you think __ saw Christopher?
     b. object gap: Who do you think Christopher saw __?

(35) a. subject gap: The woman that __ saw Christopher
     b. object gap: The woman that Christopher saw __

Thus, subject-object asymmetries in children's performance are not restricted to simple questions, where object gap questions involve subject-auxiliary inversion while subject gap questions have the same word order as declarative sentences. Hence, children's problems with object gaps cannot be due to problems with subject-auxiliary inversion alone. Rather, they seem to be due to the fact that the filler-gap distance is shorter in subject gap sentences than in object gap sentences. Moreover, the gap is embedded more deeply in object gap sentences than in subject gap sentences, and in object gap sentences another noun phrase (i.e. the subject) intervenes between the filler and the gap. This has been shown to create processing problems in adults as well; and current studies on acquisition and processing try to distinguish between effects of linear distance, depth of syntactic embedding, and types of interveners (see e.g. Friedmann, Belletti, and Rizzi 2009 for discussion).
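The factors at issue − linear filler-gap distance and intervening noun phrases − can be made concrete with a toy sketch. The function, the word-level distance measure, and the hand-listed NP inventory below are my own simplifications for illustration, not the chapter's; real processing studies use far more sophisticated measures.

```python
# Toy sketch (my simplification): comparing the subject- and object-gap
# questions in (33) on two factors discussed in the text.

def gap_metrics(question, noun_phrases):
    """question: words with '__' marking the gap; the wh-filler is word 0.
    Returns the linear filler-gap distance (in words) and the noun
    phrases intervening between filler and gap."""
    words = question.split()
    gap = words.index("__")
    interveners = [w for w in words[1:gap] if w in noun_phrases]
    return {"distance": gap, "interveners": interveners}

subj = gap_metrics("Who is __ helping Christopher", {"Christopher"})
obj = gap_metrics("Who is Christopher helping __", {"Christopher"})

# subj: shorter filler-gap distance, no intervening NP
# obj: longer distance, with the subject 'Christopher' intervening
```

On both toy measures the object-gap question is harder, which is what the chapter's subject-object asymmetry would predict; teasing the two factors apart requires sentence types where they dissociate.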


VIII. The Cognitive Perspective

6. The acquisition of passives

The acquisition of passives has played a central role in debates about the role of neurological maturation. In passives, the Patient object of the corresponding active sentence appears in subject position, while the Agent can either be omitted or realized as a by-phrase (36a). In generative grammar, this is captured by an argument chain that links the Patient argument in subject position to the object position, indicated by the index i for the Patient argument and the trace t in object position. Note, however, that such a chain is only assumed for verbal passives, which have a dynamic reading and an optional by-phrase. No chain is assumed for adjectival passives, where the participle functions as an adjective and the sentence has a stative result-state reading (36b). Due to this stative reading, adjectival passives cannot involve progressive markers − just like copula constructions with state adjectives (*The door was being red). Adjectival passives also do not involve an implicit Agent − and hence offer no by-phrase option. For English short passives like (36c), one needs contextual information to determine whether they are verbal or adjectival. In contrast, German and Hebrew verbal and adjectival passives involve different auxiliaries − and can thus be distinguished without contextual information.

(36) a. The doori was (being) painted ti (by Sam).
     b. The door was painted/red (*by Sam).
     c. The door was painted.

In their seminal summary of early corpus and experimental studies, Borer and Wexler (1987) reported that English-speaking children produce very few passives, and that those passives that appeared before the age of four often expressed after-the-fact observations about states and lacked by-phrases. Moreover, passives with action verbs like hit were comprehended better and produced earlier and more frequently than passives with non-actional verbs like see. Finally, in languages like German or Hebrew, where different auxiliaries are involved, adjectival passives tended to appear earlier than verbal passives. Borer and Wexler argued that the ability to create argument chains between subjects and the object position matures around the age of four. Before that, children can only produce adjectival passives, which could explain the lack of by-phrases and dynamic readings as well as the restriction to action verbs, which allow result-state passives more easily than non-action verbs. However, in languages where passives are more frequent in the input, even two-year-old children use verbal passives productively, which suggests that argument chain formation is possible early on (Allen and Crago 1996; Demuth 1989; Eisenbeiss 1994). Additional evidence for this assumption comes from studies on Hebrew (Friedmann 2007): according to the Unaccusative Hypothesis, the Agent subject of unergative verbs such as run is base-generated in subject position, whereas the subject of unaccusative verbs such as fall originates in object position and can be moved into subject position − thus creating an argument chain like the one involved in verbal passives. The Unaccusative Hypothesis captures the observation that Hebrew unergatives only show Subject-Verb patterns, whereas subjects of unaccusative verbs can appear after the verb (in the base position for objects) or in the preverbal subject position. Hebrew children show the target distribution for unergative and unaccusative subjects even before the age of two. If the ability to create argument chains matured around the age of four, this early adaptation
to the target would be unexpected, as younger children should not be able to create a chain between a preverbal subject of an unaccusative verb and its trace in object position. Further support for the early availability of argument chains comes from more recent studies on English: in picture-verification, truth-value judgment and elicited production tasks with low task demands and appropriate discourse contexts, English-speaking children under the age of four produce and comprehend passives of novel verbs, actional and non-actional passives with be, by-phrases, passives with get, and other passives with dynamic readings (Pinker, Lebeaux, and Frost 1987; Fox and Grodzinsky 1998; Crain and Fodor 1993; Crain, Thornton, and Murasugi 2009; see Ambridge and Lieven 2011; de Villiers and Roeper 2011 for review). Moreover, even four-year-old children are more likely to produce passives after hearing other passives than after hearing active sentences (Bencini and Valian 2008; Huttenlocher, Vasilyeva, and Shimpi 2004; Kim and McDonough 2008; Savage, Lieven, Theakston, and Tomasello 2003, 2006). When these experiments involve simple tasks, children show such syntactic priming effects even when the primes and their own productions contain different words. This suggests that the passive representations activated by the prime are not bound to particular lexical elements, but are abstract, target-like syntactic representations. In summary, when task demands are low, there is no evidence for late maturation. While the focus of early studies was on the role of maturational factors, more recent studies have investigated the role of social class and input in the acquisition of low-frequency constructions such as passives (e.g. Dabrowska and Street 2006). Others have demonstrated links between the acquisition of passives and the acquisition of copula constructions, which also involve forms of be and get (Eisenbeiss 1994; Abbot-Smith and Behrens 2006).

7. The acquisition of constraints on co-reference

Maturation and methodological issues also play a central role in research on the acquisition of co-reference relations between referential expressions like John, pronouns like he, and reflexives like himself. In generative models, so-called Binding Principles constrain co-reference (Chomsky 1981). For instance, in (37a), the reflexive himself cannot be co-referential with the man; its only potential antecedent is John. This is captured by Binding Principle A, which requires reflexives to be bound, i.e. co-referential with a c-commanding phrase in the same clause. C-command is a structural relation: a phrase X c-commands a phrase Y if X and Y do not dominate each other in the syntactic tree, and every branching node of the tree that dominates X also dominates Y. For instance, in (38), A c-commands C, D, and E; B does not c-command any nodes; C c-commands A; D c-commands E; and E c-commands D. In (37a), John is the only noun phrase that c-commands the reflexive himself within its clause. Hence, John and himself must be co-referential to satisfy Principle A. Principle B states that pronouns may not be bound within their clause. In (37b), John c-commands him and occurs within the same clause. Thus, if John and him were co-referential, him would be bound, i.e. co-referential with a c-commanding noun phrase. This would violate Principle B. However, him in (37b) and he in (37c) may be co-referential with the referential expression the man, because this expression is not included in the same clause. Principle C requires referential expressions like John to be free. Thus, in (37c), John cannot be co-referential with either he or the man; otherwise, it would be bound by these elements. Note that Principle C is a structural constraint that rules out binding under c-command − it does not simply block a referential expression from being co-referential with a pronoun that precedes it. Thus, backward anaphora are possible when the pronoun does not c-command the noun phrase − for instance, when the pronoun is embedded in a dependent clause, as in When shei was dancing Suei was playing guitar. In contrast, backward anaphora like Hei is washing John*i are ruled out, as the pronoun c-commands the referential expression.

(37) a. The man said that John was washing himself.
     b. The man said that John was washing him.
     c. The man said that he was washing John.

(38)      B
         / \
        A   C
           / \
          D   E

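Because c-command is defined purely configurationally, it can be checked mechanically. The following small Python sketch (the nested-tuple tree encoding and function names are my own, not the chapter's) encodes tree (38) and derives exactly the relations listed in the text:

```python
# Toy sketch: the chapter's c-command definition, applied to tree (38).
# Tree encoding (my own): (label, [list of daughter subtrees]).

TREE = ("B", [("A", []),
              ("C", [("D", []), ("E", [])])])

def analyse(tree):
    """Collect, for every node label, the set of labels dominating it,
    and the set of branching nodes (nodes with more than one daughter)."""
    ancestors, branching = {}, set()
    def walk(node, ups):
        label, children = node
        ancestors[label] = set(ups)
        if len(children) > 1:
            branching.add(label)
        for child in children:
            walk(child, ups | {label})
    walk(tree, set())
    return ancestors, branching

def c_commands(x, y, ancestors, branching):
    """X c-commands Y iff X and Y do not dominate each other and every
    branching node dominating X also dominates Y."""
    if x == y or x in ancestors[y] or y in ancestors[x]:
        return False
    return all(b in ancestors[y] for b in ancestors[x] if b in branching)

ancestors, branching = analyse(TREE)
relations = {x: {y for y in ancestors if c_commands(x, y, ancestors, branching)}
             for x in ancestors}
# Reproduces the text: A c-commands C, D, E; B c-commands nothing;
# C c-commands A; D and E c-command each other.
```

The same predicate could, under these assumptions, be reused to check Principles A−C: e.g. "bound" amounts to co-indexed with a c-commanding phrase, computed over a tree for (37).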
As early as one can test them, children distinguish between legitimate and illegitimate backward anaphora, demonstrating knowledge of Principle C as a syntactic constraint; and they show sensitivity to Principle A by providing correct co-reference answers for sentences like (39a) (Chien and Wexler 1990; Crain and Thornton 1998). However, even six-year-old children often incorrectly allow co-reference between referential expressions and pronouns in sentences like (39b) (Chien and Wexler 1990). This so-called Delay of Principle B effect has been replicated using different tasks (truth-value judgment, picture-selection, and act-out) and languages (e.g. Russian: Avrutin and Wexler 1992; French and Danish: Jakubowicz 1984; Hamann, Kowalski, and Philip 1997; Icelandic: Sigurjónsdóttir 1992; Dutch: Philip and Coopmans 1996; see Ambridge and Lieven 2011; de Villiers and Roeper 2011 for review). However, children perform better for sentences with object pronouns when the subject is quantificational, as in (39c) (Chien and Wexler 1990; Thornton and Wexler 1999). Some Full-Competence proponents argue that Principle B only rules out binding of a pronoun within the clause, which is a case of variable binding. It does not prohibit accidental co-reference between a pronoun and a c-commanding referential expression (Grodzinsky and Reinhart 1993; Thornton and Wexler 1999). For instance, accidental co-reference between a pronoun like her and a c-commanding antecedent like Sue is possible in a context like (40). Here, her is not directly co-referential with Sue, but refers to the mystery singer, who might just happen to be Sue.

(39) a. Papa Bear is washing himself.
     b. Papa Bear is washing him.
     c. Every bear washed her.

(40) Speaker A: Was Sue the mystery singer behind the screen?
     Speaker B: Sue was praising her all evening. I bet the singer was Sue.


Thus, pronouns can be interpreted in two ways. The first is the bound-variable route: binding of the pronoun by a c-commanding antecedent within the clause is ruled out by Principle B, so her cannot be bound by Sue in (40). The second is the accidental co-reference route, as in (40), where Sue and her can just happen to refer to the same person in real life. This might be the reason that children do not always rule out co-reference for sentences like (39b). However, when a quantified phrase is involved, as in (39c), the bound-variable reading is the only option, and variable binding of the pronoun would violate Principle B. Thus, children's adult-like rejection of co-reference for sentences like (39c) suggests that they treat these sentences as instances of variable binding and respect Principle B. In this way, the observed asymmetries between quantified and non-quantified utterances can be interpreted as evidence against a delay in the acquisition of Principle B. Further arguments against the assumption of such a delay come from another asymmetry: while children show problems understanding pronouns in utterances like (39b), they use pronouns and reflexives appropriately in their own speech (Bloom, Barss, Nicol, and Conway 1994; de Villiers, Cahillane, and Altreuter 2006). Thus, recent studies try to demonstrate that at least some of the comprehension problems are due to methodological problems in earlier experimental studies (Elbourne 2005; Conroy, Takahashi, Lidz, and Phillips 2009). Others use eye-movement measurements to investigate the time course of the processing of co-reference relationships (Clackson, Felser, and Clahsen 2009).

8. The acquisition of quantification

Most current research on the acquisition of quantification focuses on universal quantifiers like every, which denote two-place relations between two sets of individuals. The domain of these quantifiers is defined by the noun they combine with. For instance, in (41), adult native speakers tend to concentrate on the set of boys in the situation (not on the set of all boys) and determine whether they are members of the set of elephant-riders; they do not refer to the set of elephants. As sentences with quantifiers like every and each are rare in children's input, this raises the question of how children acquire this property of universal quantifiers. Answers to this question have to account for systematic incorrect responses in comprehension studies (see Ambridge and Lieven 2011; Berent, Kelly, and Schueler-Chokairi 2009; Brooks and Sekerina 2005/6; de Villiers and Roeper 2011; and Guasti 2002 for overviews).

The first error type is sometimes referred to as overexhaustive search or exhaustive spreading: children typically correctly accept descriptions such as (41) for a picture with an exhaustive one-to-one match between elephants and boys − for instance, a picture with three boys and three elephants, where each boy is sitting on an elephant and each elephant has a boy on it. However, in contrast to adults, children will often reject sentences like (41a) when there are extra elephants − even though an extra elephant does not affect the truth value of the sentence. When questioned, children will often point to the elephant without a rider to explain why they rejected the sentence. This suggests that they fail to properly restrict the domain of the universal quantifier to the noun phrase it modifies. The second error type is complementary to the first one (underexhaustive search/pairing): children sometimes ignore participants that are not part of a pairing, but should be considered in the evaluation. For instance, children sometimes accept (42) when the corresponding picture shows several cars and each of these cars is in a garage, but there is also another car without a garage. The third error type has been labeled bunny spreading: in truth-value judgments, children sometimes even consider objects in pictures that are not mentioned in the sentence at all. For example, they would reject (43) as a description of a picture with three bunnies that are each eating a carrot, while a dog is eating a bone.

(41) a. Every boy is riding an elephant.
     b. Every elephant has a boy on it.

(42) Every car is in a garage.

(43) Every bunny is eating a carrot.

Taken together, these three error types suggest that children require symmetrical one-to-one relationships between members of the two relevant sets − and no extra members or other participants. Some researchers attribute this symmetry effect to non-target-like syntactic representations. For instance, Roeper and Matthei (1975) and Philip (1995) argue that children who show this effect do not interpret every as a quantifier that denotes relations between individuals and has scope over one noun phrase. Rather, they treat every as an adverb of quantification that denotes relations between events and has scope over the entire sentence − similar to always. The events over which every ranges are contextually defined. For instance, for sentences like (41), these events are events with a boy or an elephant or both. Children make the error because they think that for every event that has a boy and an elephant in it, it must be an event of a boy riding an elephant. However, children do not only produce symmetrical responses, but also other responses, even in Philip's studies. Moreover, some of the adults who functioned as controls in acquisition experiments also made spreading errors. Therefore, some researchers do not attribute these errors to immature syntactic representations, but to shallow processing − which can occur in children and adults. For instance, Drozd (2001) and Geurts (2003) argued that children cannot establish an appropriate discourse for the interpretation of universal quantifier sentences, create underspecified semantic representations that do not properly identify the domain of the quantifier, and hence have to resort to guessing strategies to interpret the utterance. Crain and colleagues argue that children have adult-like linguistic knowledge and processing abilities (Crain and Thornton 1998; Crain, Thornton, Boster, Conway, and Lillo-Martin 1996). They show that changes to experimental procedures result in more adult-like behavior and attribute children's errors to problems in experimental design that lead children to believe that the extra objects or participants matter. In order to distinguish between these approaches, current studies investigate the role of quantifier position (subject vs. object noun phrases), collective vs. distributive readings (e.g. several boys riding one elephant vs. each boy riding his own elephant), experimental task, and target language (Brooks and Sekerina 2005/6; Berent, Kelly, and Schueler-Chokairi 2009).
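The contrast between the adult reading of (41a) and exhaustive spreading can be stated as set-theoretic truth conditions. The toy model below (the scene, set names, and functions are my own construction, not the chapter's) evaluates (41a) against a scene with one extra elephant under both interpretations:

```python
# Toy model (my own construction): 'every' as a relation between sets,
# evaluated against a scene for (41a) with one unpaired elephant.

boys = {"boy1", "boy2", "boy3"}
elephants = {"el1", "el2", "el3", "el4"}          # el4 has no rider
riding = {("boy1", "el1"), ("boy2", "el2"), ("boy3", "el3")}

def every_adult(restrictor, has_property):
    """Adult reading: the domain is restricted to the noun's set; true
    iff every member of that set has the property."""
    return all(has_property(x) for x in restrictor)

def every_spreading(restrictor, other_set, pairs):
    """'Exhaustive spreading': in addition to the adult condition, every
    member of the other set must be paired as well."""
    return (all(any((x, y) in pairs for y in other_set) for x in restrictor)
            and all(any((x, y) in pairs for x in restrictor) for y in other_set))

def rides_an_elephant(b):
    return any((b, e) in riding for e in elephants)

adult_verdict = every_adult(boys, rides_an_elephant)      # True for adults
child_verdict = every_spreading(boys, elephants, riding)  # False: el4 unpaired
```

The diverging verdicts mirror the experimental finding: adults accept (41a) despite the riderless elephant, while spreading children reject it, pointing to el4.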

9. Bilingualism, L2-acquisition and the critical period

Current research on syntactic development in bilinguals and L2-learners is characterized by debates about: (i) the existence of a critical period for syntax acquisition (Bhatia and Ritchie 2008; Birdsong 1999, 2005, 2006; Hawkins 2001; Herschensohn 2007; Hyltenstam and Abrahamson 2003; Long 2007; Meisel 2009, 2011; Ritchie and Bhatia 2009; Rothman 2008; Singleton 2005; White 2003); (ii) the degree of language separation in bilingualism (de Houwer 1990, 1995; Döpke 1992; Genesee 1989, 2002; Hyltenstam and Obler 1989; Paradis and Genesee 1996; Romaine 1989); and (iii) constraints on code-switching and code-mixing (Milroy and Muysken 1995; Muysken 2000).

In the debate about potential critical-period and age effects, proponents of the Full-Access Hypothesis point out that adult L2-learners develop sophisticated grammatical systems that are underdetermined by their input and their knowledge of the L1 (Schwartz and Sprouse 1996; Epstein, Flynn, and Martohardjono 1996; Grondin and White 1996). In order to account for this, these researchers claim that learners' innate predispositions for language acquisition remain fully accessible to L2-learners; and they attribute differences between child and adult learners to various secondary factors. In contrast, proponents of the Critical-Period Hypothesis (or Fundamental-Difference Hypothesis) argue that language acquisition mechanisms become unavailable in later life, due to neural maturation (Bley-Vroman 1990; Clahsen and Muysken 1986; DeKeyser 2003; Long 2007; Meisel 2009, 2011; Paradis 2004; Schachter 1988, 1996; Ullman 2001). There is some debate whether such a critical period should be viewed as a unified period with a relatively fixed end point, after which an innate domain-specific acquisition mechanism becomes unavailable − or as a cluster of sensitive phases during which acquisition mechanisms are optimally prepared to handle incoming information (see Birdsong 2006; Meisel 2009, 2011; Rothman 2008 for overviews).
Within generative approaches, some researchers assume a critical period for all UG components (Schachter 1996; Meisel 2009, 2011; Ullman 2001; Paradis 2004), while others postulate more limited effects. For instance, proponents of the Failed Functional Features Hypothesis argue that the unspecified features of functional categories are only available during an early critical period in which their values are fixed and associated with particular morpho-phonological realizations (Hawkins and Chan 1997; Tsimpli and Smith 1991). Beyond the critical period, only features encoded in lexical entries remain available. While the debate about critical-period effects is still ongoing, empirical studies to date suggest that the ability to acquire phonological properties of the target shows earlier and stronger age effects than the ability to acquire morpho-syntax; and that vocabulary learning shows even fewer − if any − age effects (see e.g. Meisel 2009, 2011). In order to distinguish potential age effects from effects of having learned another language, current studies on age effects compare child, adolescent, and adult L2-acquisition, as well as monolingual and bilingual L1-acquisition. Such studies suggest that age effects are far more varied and complex than initially assumed (for overviews see Meisel 2009 and responses; Meisel 2011; Rothman 2008).

Research on the degree of language separation in bilingualism was initially characterized by a simple opposition between the Single-System Hypothesis and the Separate-Development Hypothesis. According to the Single-System Hypothesis, bilingual children initially develop a single hybrid system and go through a stage of language confusion before they separate their languages. For instance, Volterra and Taeschner (1978) argue that the child initially has an undifferentiated grammar and a single hybrid lexicon, i.e. a list of words from both languages.
This assumption has proved to be untenable: grammatical morphology such as the use of articles and tense or agreement markers develops in a language-specific manner (de Houwer 1990; Genesee 1989). Moreover, for language combinations with different word orders, children's utterances reflect the respective word orders as soon as they produce multi-word utterances. Finally, many studies have found that bilinguals show the same patterns and the same rate of development as monolinguals acquiring one of the two languages, without any signs of transfer, acceleration or delay (Paradis and Genesee 1996). This suggests that the two grammatical systems are separated from very early on, as assumed by proponents of the Separate-Development Hypothesis (see Genesee 1989; Hyltenstam and Obler 1989, in particular Meisel's contribution). However, the two systems are not completely unconnected (Müller 1998): bilinguals can employ the systems interactively, which results in code-mixing and code-switching, where elements from different languages occur within the same conversation. Both code-mixing and code-switching respect phrasal boundaries, and switches are never made at points in the sentence where switching would leave some portion of the string unacceptable in the language of that portion. This suggests that both competence grammars are available simultaneously for the planning of the utterance.

10. Current trends and developments

As this overview has shown, current research on syntactic development is characterized by debates about (i) the respective contributions of learners' input and innate predispositions for language acquisition, (ii) the time course of syntactic development, and (iii) the role of learners' age and its potential implications for monolingual, bilingual and L2-acquisition.

With respect to the contributions of input and innate predispositions, cross-linguistic corpus and experimental studies on learners' input have found universal prosodic, distributional and discourse properties of child-directed speech that can help children to determine constituent boundaries and categorize lexical elements; e.g. co-occurrences of high-frequency functional elements with words of a particular lexical category, or the patterns of repetitions and variations in variation sets (section 3.2). Moreover, the role of input frequency has been explored in much more depth recently (Ambridge and Lieven 2011; Bybee 2007; Gülzow and Gagarina 2007). However, such studies have not removed the need for assuming constraints on children's hypothesis space for morpho-syntactic generalizations: on the one hand, children may deviate from the target, but they do not show random errors; on the other hand, direct negative evidence in the form of systematic explicit corrections is too unreliable, infrequent and ambiguous to overcome incorrect generalizations. Hence, most current approaches to syntactic development assume some type of constraint on children's generalizations to address the logical problem of language acquisition; and a consensus about the nature of these constraints seems to be emerging: recent generative models try to derive universal constraints on linguistic representations from general cognitive constraints, which are in principle compatible with usage-based models.
At the same time, proponents of non-generative approaches have moved beyond general statements about the cognitive abilities involved in language development and made more explicit proposals (see e.g. Saxton 1997). Moreover, studies on the role of formulaic elements have provided converging evidence from studies involving a broad range of methods (see e.g. Shaoul and Westbury 2011). These developments have led to a range of proposed general cognitive constraints that can be evaluated in typological studies, cross-linguistic acquisition studies, and computer simulations.


With respect to the time course of syntactic development, cross-linguistic studies have demonstrated that children adapt to the core properties of their target language very early on, but may go through an extended stage in which target-like uses of grammatical morphology and omissions of the same morphemes can be found side by side. This has led acquisition researchers to abandon earlier all-or-nothing models of syntactic development in favor of more complex models that involve interacting morpho-syntactic, phonological, discourse-pragmatic and processing factors (see e.g. Ambridge and Lieven 2011 for a comparison and a discussion of potential contributions and combinations of models). At the same time, the observed cross-linguistic differences with respect to the onset and course of development have inspired studies on languages that had so far not been investigated; see e.g. Eisenbeiss (2006) for a discussion of acquisition studies on endangered languages.

With respect to the role of learners' age and its potential implications for monolingual, bilingual and L2-acquisition, studies comparing monolingual and bilingual L1-learners with child and adult L2-learners suggest that the acquisition of morpho-syntactic properties shows later and weaker age effects than phonological development, but stronger age effects than vocabulary learning. Moreover, studies on bilingualism suggest that learners separate the two linguistic systems early on, but can use them interactively − as evidenced by their systematic code-switching and code-mixing behavior. Recently, studies on age effects and bilingualism have started to take a closer look at processing issues, not just at grammatical representations (see Clahsen and Felser 2006a, b and Sekerina, Fernández, and Clahsen 2008 for overviews).
With respect to research methodology, discrepancies between results from different tasks have demonstrated the need for converging empirical evidence from corpus analysis, different types of experiments, semi-structured elicitation, and computational modeling. At the same time, technological advances have enabled us to investigate syntactic processing in learners of all ages and to compile high-density multi-media corpora that allow us to investigate distributional properties of learners' input and their effects in far more detail than was previously possible. This will lead to even more complex and interdisciplinary models, which will also take inter-individual variation and social factors into account. At the same time, methodological advances in language acquisition research allow acquisition researchers to provide the kind of data that can make a valuable contribution to theoretical linguistics. For instance, the discussion about the acquisition of case marking and finiteness or the acquisition of co-reference relations has informed theoretical linguistics (see e.g. Babyonyshev 1993; Conroy, Takahashi, Lidz, and Phillips 2009; Eisenbeiss, Bartke, and Clahsen 2005/6, 2008; Grodzinsky and Reinhart 1993; Thornton and Wexler 1999).

11. References (selected)

Abbot-Smith, Kirsten, and Heike Behrens 2006 How known constructions influence the acquisition of other constructions: The German passive and future constructions. Cognitive Science 30: 995−1026.
Allen, Shanley E. M., and Martha B. Crago 1996 Early passive acquisition in Inuktitut. Journal of Child Language 23: 129−155.


Ambridge, Ben, and Elena V. M. Lieven 2011 Child Language Acquisition: Contrasting Theoretical Approaches. Cambridge: Cambridge University Press.
Ambridge, Ben, and Julian M. Pine 2006 Testing the Agreement/Tense Omission Model using an elicited imitation paradigm. Journal of Child Language 33: 879−898.
Avrutin, Sergey 1999 Development of the Syntax-Discourse Interface. Dordrecht: Kluwer.
Avrutin, Sergey, and Kenneth Wexler 1992 Development of Principle B in Russian: coindexation at LF and coreference. Language Acquisition 2: 259−306.
Babyonyshev, Maria 1993 The acquisition of Russian case. In: Colin Phillips (ed.), Papers on Case and Agreement II, 1−44. (MIT Working Papers in Linguistics 19.) Cambridge, MA: MIT.
Bates, Elizabeth, and Brian MacWhinney 1987 Competition, variation and language learning. In: Brian MacWhinney (ed.), Mechanisms of Language Acquisition, 157−194. Hillsdale, NJ: Lawrence Erlbaum.
Bavin, Edith L. (ed.) 2009 The Cambridge Handbook of Child Language. Cambridge: Cambridge University Press.
Behrens, Heike (ed.) 2008 Corpora in Language Acquisition Research: Finding Structure in Data. Amsterdam: John Benjamins.
Behrens, Heike 2009 Usage-based and emergentist approaches to language acquisition. Linguistics 47: 383−411.
Bencini, Giulia M. L., and Virginia Valian 2008 Abstract sentence representation in 3-year-olds: Evidence from comprehension and production. Journal of Memory and Language 59: 97−113.
Berent, Gerald P., Ronald R. Kelly, and Tanya Schueler-Chokairi 2009 Economy in the acquisition of English universal quantifier sentences: The interpretations of deaf and hearing students and second language learners at the college level. Applied Psycholinguistics 30: 251−290.
Berko, Jean 1958 The child's learning of English morphology. Word 14: 150−177.
Berko Gleason, Jean, and Nan Bernstein Ratner (eds.) 2009 The Development of Language. Boston: Pearson/Allyn and Bacon.
Berman, Ruth, and Dan I. Slobin 1994 Relating Events in Narrative: A Crosslinguistic Developmental Study. Hillsdale, NJ: Lawrence Erlbaum.
Bernstein Ratner, Nan 2000 Elicited imitation and other methods for the analysis of trade-offs between speech and language skills in children. In: Lise Menn, and Nan Bernstein Ratner (eds.), Methods for Studying Language Production, 291−312. Mahwah, NJ: Lawrence Erlbaum.
Berwick, Robert C., Paul Pietroski, Beracah Yankama, and Noam Chomsky 2011 Poverty of the stimulus revisited. Cognitive Science 35: 1207−1242.
Bhatia, Tej K., and William C. Ritchie 2008 The Handbook of Bilingualism. (Blackwell Handbooks in Linguistics.) Oxford: Blackwell.
Birdsong, David (ed.) 1999 Second Language Acquisition and the Critical Period Hypothesis. Mahwah, NJ: Lawrence Erlbaum.

52. Syntax and Language Acquisition

Birdsong, David 2005 Interpreting age effects in second language acquisition. In: Judith F. Kroll, and Annette M. B. de Groot (eds.), Handbook of Bilingualism: Psycholinguistic Perspectives, 109−127. Oxford: Oxford University Press. Birdsong, David 2006 Age and second language acquisition and processing: A selective overview. Language Learning 56: 9−49. Bley-Vroman, Robert 1990 The logical problem of second language learning. Linguistic Analysis 20: 3−49. Bloom, Paul 1990 Subjectless sentences in child language. Linguistic Inquiry 21: 491−504. Bloom, Paul, Andrew Barss, Janet Nicol, and Laura Conway 1994 Children’s knowledge of binding and coreference: Evidence from spontaneous speech. Language 70: 53−71. Bock, Kay 1986 Syntactic persistence in language production. Cognitive Psychology 18: 355−387. Boersma, Paul 1998 Functional phonology: Formalizing the interactions between articulatory and perceptual drives. Doctoral dissertation, University of Amsterdam. Bohnacker, Ute 1997 Determiner phrases and the debate on functional categories in early child language. Language Acquisition 6: 49−90. Borer, Hagit, and Kenneth Wexler 1987 The maturation of syntax. In: Thomas Roeper, and Edwin Williams (eds.), Parameter Setting, 123−172. Dordrecht: Reidel. Borer, Hagit, and Kenneth Wexler 1992 Bi-unique relations and the maturation of grammatical principles. Natural Language and Linguistic Theory 10: 147−189. Boser, Katharina, Barbara Lust, Lynn M. Santelmann, and John Whitman 1992 The syntax of CP and V-2 in early German child grammar: The Strong Continuity Hypothesis. Proceedings of the North Eastern Linguistics Association 22: 51−66. Boyd, Jeremy K., and Adele E. Goldberg 2012 Young children fail to fully generalize a novel argument structure construction when exposed to the same input as older learners. Journal of Child Language 39: 457−481. Braine, Martin D. S., and Patricia J. Brooks 1995 Verb argument structure and the problem of avoiding an overgeneral grammar. 
In: Michael Tomasello, and William E. Merriman (eds.), Beyond Names for Things: Young Children’s Acquisition of Verbs, 352−376. Hillsdale, NJ: Lawrence Erlbaum. Brandt, Silke, Holger Diessel, and Michael Tomasello 2008 The acquisition of German relative clauses: A case study. Journal of Child Language 35: 325−348. Branigan, Holly 2007 Syntactic priming. Language and Linguistics Compass 1: 1−16. Brooks, Patricia J., and Irina A. Sekerina 2005/6 Shortcuts to quantifier interpretation in children and adults. Language Acquisition 13: 177−206. Brown, Roger 1973 A First Language: The Early Stages. Cambridge, MA: Harvard University Press. Brown, Roger, Courtney B. Cazden, and Ursula Bellugi 1968 The child’s grammar from I to III. In: John P. Hill (ed.), Minnesota Symposium on Child Development, Volume 2, 28−73. Minneapolis: Minnesota University Press.

VIII. The Cognitive Perspective

Bybee, Joan 2007 Frequency of Use and the Organization of Language. Oxford: Oxford University Press. Caprin, Claudia, and Maria T. Guasti 2009 The acquisition of morphosyntax in Italian: A cross-sectional study. Applied Psycholinguistics 30: 23−52. Chien, Yu-Chien, and Kenneth Wexler 1990 Children’s knowledge of locality conditions in binding as evidence for the modularity of syntax and pragmatics. Language Acquisition 1: 225−295. Chomsky, Noam 1981 Lectures on Government and Binding. Dordrecht: Foris. Chomsky, Noam 1989 Some notes on economy of derivation and representation. MIT Working Papers in Linguistics 10: 43−74. Chomsky, Noam 1995 The Minimalist Program. Cambridge, MA: MIT Press. Chomsky, Noam 2001 Derivation by phase. In: Michael Kenstowicz (ed.), Ken Hale: A Life in Language, 1−52. Cambridge, MA: MIT Press. Chouinard, Michelle M., and Eve V. Clark 2003 Adult reformulation of child errors as negative evidence. Journal of Child Language 30: 637−669. Christophe, Anne, Séverine Millotte, Savita Bernal, and Jeffrey Lidz 2008 Bootstrapping lexical and syntactic acquisition. Language and Speech 51: 61−75. Clackson, Kaili, Claudia Felser, and Harald Clahsen 2011 Children’s processing of reflexives and pronouns in English: Evidence from eye-movements during listening. Journal of Memory and Language 65: 128−144. Clahsen, Harald 1990/91 Constraints on parameter setting: A grammatical analysis of some acquisition stages. Language Acquisition 1: 361−391. Clahsen, Harald (ed.) 1996 Generative Perspectives on Language Acquisition: Empirical Findings, Theoretical Considerations and Crosslinguistic Comparisons. Amsterdam: John Benjamins. Clahsen, Harald, Sonja Eisenbeiss, and Martina Penke 1996 Lexical learning in early syntactic development. In: Harald Clahsen (ed.), Generative Perspectives on Language Acquisition: Empirical Findings, Theoretical Considerations and Crosslinguistic Comparisons, 129−159. Amsterdam: John Benjamins. 
Clahsen, Harald, Sonja Eisenbeiss, and Anne Vainikka 1994 The seeds of structure: A syntactic analysis of the acquisition of case marking. In: Teun Hoekstra, and Bonnie D. Schwartz (eds.), Language Acquisition Studies in Generative Grammar, 85−118. Amsterdam: John Benjamins. Clahsen, Harald, and Claudia Felser 2006a How native-like is non-native language processing? Trends in Cognitive Sciences 10: 564−570. Clahsen, Harald, and Claudia Felser 2006b Grammatical processing in language learners. Applied Psycholinguistics 27: 3−42. Clahsen, Harald, Claudia Kursawe, and Martina Penke 1996 Introducing CP: WH-questions and subordinate clauses in German child language. In: Charlotte Koster, and Frank Wijnen (eds.), Proceedings of the Groningen Assembly on Language Acquisition (GALA) 1995, 5−22. Groningen: Center for Language and Cognition. Clahsen, Harald, and Peter Muysken 1986 The accessibility of universal grammar to adult and child learners: A study of the acquisition of German word order. Second Language Research 2: 93−119.

Clahsen, Harald, and Martina Penke 1992 The acquisition of agreement morphology and its syntactic consequences: New evidence on German child language from the Simone-Corpus. In: Jürgen Meisel (ed.), The Acquisition of Verb Placement: Functional Categories and V2 Phenomena in Language Acquisition, 181−224. Dordrecht: Kluwer. Clahsen, Harald, Martina Penke, and Teresa Parodi 1993/4 Functional categories in early child German. Language Acquisition 3: 395−429. Clark, Alexander, and Shalom Lappin 2011 Linguistic Nativism and the Poverty of the Stimulus. Oxford: Wiley-Blackwell. Clark, Eve V. 1987 The principle of contrast: A constraint on language acquisition. In: Brian MacWhinney (ed.), Mechanisms of Language Acquisition, 1−33. Hillsdale, NJ: Lawrence Erlbaum. Clark, Eve V., and Barbara F. Kelly (eds.) 2006 Constructions in Acquisition. Stanford: CSLI Publications. Clark, Robin 1992 The selection of syntactic knowledge. Language Acquisition 2: 83−149. Conroy, Anastasia, Eri Takahashi, Jeffrey Lidz, and Colin Phillips 2009 Equal treatment for all antecedents: how children succeed with Principle B. Linguistic Inquiry 40: 446−486. Crain, Stephen 1991 Language acquisition in the absence of experience. Behavioral and Brain Sciences 14: 597−611. Crain, Stephen, and Diane Lillo-Martin 1999 An Introduction to Linguistic Theory and Language Acquisition. Oxford: Blackwell. Crain, Stephen, and Rosalind Thornton 1998 Investigations in Universal Grammar. Cambridge, MA: MIT Press. Crain, Stephen, Rosalind Thornton, Carole Boster, Laura Conway, Diane Lillo-Martin, and Elaine Woodams 1996 Quantification without qualification. Language Acquisition 5: 83−153. Crain, Stephen, Rosalind Thornton, and Keiko Murasugi 2009 Capturing the evasive passive. Language Acquisition 16: 123−133. Crain, Stephen, and Janet D. Fodor 1993 Competence and performance in child language. In: Esther Dromi (ed.), Language and Cognition: A Developmental Perspective, 141−171. Norwood, NJ: Ablex. 
Dabrowska, Ewa 2000 From formula to schema: The acquisition of English questions. Cognitive Linguistics 11: 83−102. Dabrowska, Ewa, and Elena V. M. Lieven 2005 Towards a lexically specific grammar of children’s question constructions. Cognitive Linguistics 16: 437−474. Dabrowska, Ewa, and James Street 2006 Individual differences in language attainment: Comprehension of passive sentences by native and non-native English speakers. Language Sciences 28: 604−615. Dabrowska, Ewa, Caroline Rowland, and Anna Theakston 2009 The acquisition of questions with long-distance dependencies. Cognitive Linguistics 20: 571−596. de Houwer, Annick 1990 The Acquisition of Two Languages from Birth. Cambridge: Cambridge University Press. de Houwer, Annick 1995 Bilingual language acquisition. In: Paul Fletcher, and Brian MacWhinney (eds.), Handbook of Child Language, 219−250. Oxford: Blackwell.

DeKeyser, Robert M. 2003 The robustness of critical period effects in second language acquisition. Studies in Second Language Acquisition 22: 499−533. Dekkers, Joost, Frank van der Leeuw, and Jeroen van de Weijer (eds.) 2000 Optimality Theory: Phonology, Syntax, and Acquisition. Oxford: Oxford University Press. Demuth, Katherine 1989 Maturation and the acquisition of the Sesotho passive. Language 65: 56−80. Demuth, Katherine 1994 On the ‘underspecification’ of functional categories in early grammars. In: Barbara Lust, Margarita Suner, and John Whitman (eds.), Syntactic Theory and First Language Acquisition: Cross-Linguistic Perspectives, Volume 1, 119−134. Hillsdale, NJ: Lawrence Erlbaum. Demuth, Katherine 1996 The prosodic structure of early words. In: James L. Morgan, and Katherine Demuth (eds.), Signal to Syntax. Bootstrapping from Speech to Grammar in Early Acquisition, 171−184. Mahwah, NJ: Lawrence Erlbaum. De Villiers, Jill, Jacqueline Cahillane, and Emily Altreuter 2006 What can production reveal about Principle B? In: Kamil Ud Deen, Jun Nomura, Barbara Schulz, and Bonnie D. Schwartz (eds.), The Proceedings of the Inaugural Conference on Generative Approaches to Language Acquisition − North America, 89−100. (University of Connecticut Occasional Papers in Linguistics 4.) University of Connecticut. De Villiers, Jill, and Thomas Roeper 2011 Handbook of Generative Approaches to Language Acquisition. (Studies in Theoretical Psycholinguistics.) Dordrecht/Heidelberg/London/New York: Springer Verlag. Döpke, Susanne 1992 One Parent − One Language. Amsterdam: John Benjamins. Drozd, Kenneth F. 2001 Children’s weak interpretations of universally quantified questions. In: Melissa Bowerman, and Steven C. Levinson (eds.), Language Acquisition and Conceptual Development, 340−376. Cambridge: Cambridge University Press. Duffield, Nigel 2008 Roots and rogues in German child language. Language Acquisition 15: 225−269. 
Eisenbeiss, Sonja 1994 Auxiliaries and the acquisition of the passive. In: Eve V. Clark (ed.), The Proceedings of the 25th Annual Child Language Research Forum, 235−242. Stanford, CA: CSLI Publications. Eisenbeiss, Sonja 2000 The acquisition of the DP in German. In: Luigi Rizzi, and Marc-Ariel Friedemann (eds.), The Acquisition of Syntax: Studies in Comparative Developmental Linguistics, 27−62. London: Longman. Eisenbeiss, Sonja 2003 Merkmalsgesteuerter Grammatikerwerb. Doctoral dissertation, University of Düsseldorf. http://docserv.uni-duesseldorf.de/servlets/DerivateServlet/Derivate-3185/1185.pdf Eisenbeiss, Sonja 2006 Documenting child language. Language Documentation and Description 3: 106−140. Eisenbeiss, Sonja 2009a Generative approaches to language learning. Linguistics 47: 273−310. Eisenbeiss, Sonja 2009b Contrast is the name of the game: contrast-based semi-structured elicitation techniques for studies on children’s language acquisition. Essex Research Reports in Linguistics 57.7.

Eisenbeiss, Sonja 2010 Production methods in language acquisition research. In: Elma Blom, and Sharon Unsworth (eds.), Experimental Methods in Language Acquisition Research, 11−34. Amsterdam: John Benjamins. Eisenbeiss, Sonja, Susanne Bartke, and Harald Clahsen 2005/2006 Structural and lexical case in child German: evidence from language-impaired and typically-developing children. Language Acquisition 13: 3−32. Eisenbeiss, Sonja, Ayumi Matsuo, and Ingrid Sonnenstuhl 2009 Learning to encode possession. In: William B. McGregor (ed.), The Expression of Possession, 143−211. Berlin/New York: Mouton de Gruyter. Eisenbeiss, Sonja, Bhuvana Narasimhan, and Maria Voeikova 2008 The acquisition of case. In: Andrej Malchukov, and Andrew Spencer (eds.), The Oxford Handbook of Case, 369−383. Oxford: Oxford University Press. Elbourne, Paul 2005 On the acquisition of Principle B. Linguistic Inquiry 36: 333−365. Epstein, Samuel D., Suzanne Flynn, and Gita Martohardjono 1996 Second language acquisition: Theoretical and experimental issues in contemporary research. Behavioral and Brain Sciences 19: 677−758. Farrar, Michael J. 1990 Discourse and the acquisition of grammatical morphemes. Journal of Child Language 17: 607−624. Farrar, Michael J. 1992 Negative evidence and grammatical morpheme acquisition. Developmental Psychology 28: 90−98. Fikkert, Paula, and Helen de Hoop 2009 Language acquisition in Optimality Theory. Linguistics 47: 311−357. Fisher, Cynthia 2001 Partial sentence structure as an early constraint on language acquisition. In: Henry Gleitman, Lila R. Gleitman, and Barbara Landau (eds.), Perception, Cognition, and Language: Essays in Honor of Henry and Lila Gleitman, 275−290. Cambridge, MA: MIT Press. Fletcher, Paul, and Brian MacWhinney (eds.) 1996 The Handbook of Child Language. Oxford: Blackwell. Fodor, Janet D. 1998 Unambiguous triggers. Linguistic Inquiry 29: 1−36. Fodor, Janet D. 1999 Triggers for parsing with. 
In: Elaine Klein, and Gita Martohardjono (eds.), The Development of Second Language Grammars: A Generative Approach, 373−406. Amsterdam: John Benjamins. Fox, Danny, and Yosef Grodzinsky 1998 Children’s passive: A view from the by-phrase. Linguistic Inquiry 29: 311−332. Freudenthal, Daniel, Julian M. Pine, Javier Aguado-Orea, and Fernand Gobet 2007 Modelling the developmental patterning of finiteness marking in English, Dutch, German and Spanish using MOSAIC. Cognitive Science 31: 311−341. Friedmann, Naama 2007 Young children and A-chains: The acquisition of Hebrew unaccusatives. Language Acquisition 14: 377−422. Friedmann, Naama, Adriana Belletti, and Luigi Rizzi 2009 Relativized relatives: Types of intervention in the acquisition of A-bar dependencies. Lingua 119: 67−88. Gallaway, Claire, and Brian J. Richards (eds.) 1994 Input and Interaction in Language Acquisition. London: Cambridge University Press.

Gallimore, Ronald, and Roland G. Tharp 2006 The interpretation of elicited sentence imitation in a standardized context. Language Learning 31: 369−392. Gawlitzek-Maiwald, Ira, Rosemary Tracy, and Agnes Fritzenschaft 1992 Language acquisition and competing linguistic representations: The child as arbiter. In: Jürgen M. Meisel (ed.), The Acquisition of Verb Placement: Functional Categories and V2 Phenomena in Language Acquisition, 139−180. Dordrecht: Kluwer. Genesee, Fred 1989 Early bilingual development: One language or two? Journal of Child Language 16: 161−179. Genesee, Fred 2002 Portrait of the bilingual child. In: Vivian J. Cook (ed.), Portraits of the L2 User, 167−196. Clevedon: Multilingual Matters. Gerken, LouAnn 1991 The metrical basis for children’s subjectless sentences. Journal of Memory and Language 30: 431−451. Geurts, Bart 2003 Quantifying kids. Language Acquisition 11: 197−218. Gibson, Edward, and Kenneth Wexler 1994 Triggers. Linguistic Inquiry 25: 407−454. Goldberg, Adele E. 1995 Constructions: A Construction Grammar Approach to Argument Structure. Chicago: University of Chicago Press. Granfeldt, Jonas 2000 The acquisition of the Determiner Phrase in bilingual and second language French. Bilingualism: Language and Cognition 3: 263−280. Grodzinsky, Yosef, and Tanya Reinhart 1993 The innateness of binding and coreference. Linguistic Inquiry 24: 69−102. Grondin, Nathalie, and Lydia White 1996 Functional categories in child L2 acquisition of French. Language Acquisition 5: 1−34. Guasti, Maria T. 2002 Language Acquisition. Cambridge, MA: MIT Press. Guasti, Maria T., Anna Gavarro, Joke de Lange, and Claudia Caprin 2008 Article omission across child languages. Language Acquisition 15: 89−119. Gülzow, Insa, and Natalia Gagarina (eds.) 2007 Frequency Effects in Language Acquisition: Defining the Limits of Frequency as an Explanatory Concept. Berlin/New York: Mouton de Gruyter. 
Hamann, Cornelia, Odette Kowalski, and William Philip 1997 The French “delay of Principle B” effect. In: Proceedings of the 21st annual Boston University Conference on Language Development, 205−219. Somerville, MA: Cascadilla Press. Haspelmath, Martin 1999 Optimality and diachronic adaptation. Zeitschrift für Sprachwissenschaft 18: 180−205. Hawkins, Roger 2001 Second Language Syntax. Oxford: Blackwell. Hawkins, Roger, and Y-C. Chan 1997 The partial availability of Universal Grammar in second language acquisition: The ‘failed features’ hypothesis. Second Language Research 13: 187−226. Herschensohn, Julia 2007 Language Development and Age. Cambridge: Cambridge University Press. Hoekstra, Teun, and Nina M. Hyams 1998 Aspects of root infinitives. Lingua 106: 81−112.

Höhle, Barbara 2009 Bootstrapping mechanisms in first language acquisition. Linguistics 47: 359−382. Höhle, Barbara, Jürgen Weissenborn, Dorothea Kiefer, Antje Schulz, and Michaela Schmitz 2004 Functional elements in infants’ speech processing: the role of determiners in the syntactic categorization of lexical elements. Infancy 5: 341−353. Hughes, Mary, and Shanley E. M. Allen 2006 Discourse-pragmatic analysis of subject omission in child English. Proceedings of the 30th annual Boston University Conference on Language Development, 293−304. Somerville, MA: Cascadilla Press. Huttenlocher, Janellen, Marina Vasilyeva, and Priya Shimpi 2004 Syntactic priming in young children. Journal of Memory and Language 50: 182−195. Hyams, Nina M. 1986 Language Acquisition and the Theory of Parameters. Dordrecht: Reidel. Hyams, Nina M. 1996 The underspecification of functional categories in early grammar. In: Harald Clahsen (ed.), Generative Perspectives on Language Acquisition. Empirical Findings, Theoretical Considerations and Crosslinguistic Comparisons, 91−127. Amsterdam: John Benjamins. Hyams, Nina M. 2001 Now you hear it, now you don’t: The nature of optionality in child language. Proceedings of the 25th annual Boston University Conference on Language Development, 34−58. Somerville, MA: Cascadilla Press. Hyltenstam, Kenneth, and Niclas Abrahamson 2003 Maturational constraints in SLA. In: Catherine J. Doughty, and Michael H. Long (eds.), The Handbook of Second Language Acquisition, 539−588. Oxford: Blackwell. Hyltenstam, Kenneth, and Lorraine Obler (eds.) 1989 Bilingualism Across the Lifespan. Cambridge: Cambridge University Press. Ingram, David 1989 First Language Acquisition: Method, Description and Explanation. Cambridge: Cambridge University Press. Jaeggli, Osvaldo, and Kenneth Safir 1989 The Null Subject Parameter. Dordrecht: Kluwer. Jaensch, Carol 2008 Defective adjectival inflection in non-native German: prosodic transfer or missing surface inflection? 
In: Leah Roberts, Florence Myles, and Annabelle David (eds.), EUROSLA Yearbook 2008, 259−286. Amsterdam: John Benjamins. Jakubowicz, Celia 1984 On markedness and binding principles. In: Charles Jones, and Peter Sells (eds.), Proceedings of North Eastern Linguistics Society 14, 154−182. Amherst, MA: University of Massachusetts. Kayne, Richard S. 1994 The Antisymmetry of Syntax. Cambridge, MA: MIT Press. Kim, Youjin, and Kim McDonough 2008 Learners’ production of passives during syntactic priming activities. Applied Linguistics 29: 149−154. Kirjavainen, Minna, Anna Theakston, and Elena V. M. Lieven 2009 Can input explain children’s me-for-I errors? Journal of Child Language 36: 1091−1114. Küntay, Aylin, and Dan I. Slobin 1996 Listening to a Turkish mother: Some puzzles for acquisition. In: Dan I. Slobin, Julie Gerhardt, Amy Kyratsis, and Jiansheng Guo (eds.), Social Interaction, Social Context, and Language: Essays in Honor of Susan Ervin-Tripp, 265−286. Hillsdale, NJ: Lawrence Erlbaum.

Lappin, Shalom, and Stuart M. Shieber 2007 Machine learning theory and practice as a source of insight into universal grammar. Journal of Linguistics 43: 393−427. Lebeaux, David S. 1988 Language acquisition and the form of the grammar. Doctoral dissertation, University of Massachusetts, Amherst, MA. Legate, Julie A., and Charles D. Yang 2002 Empirical re-assessment of stimulus poverty arguments. The Linguistic Review 19: 151−162. Legendre, Géraldine, Paul Hagstrom, Anne Vainikka, and Marina T. Todorova 2002 Partial constraint ordering in child French syntax. Language Acquisition 10: 189−227. Lenneberg, Eric H. 1967 Biological Foundations of Language. New York, NY: Wiley. Lightfoot, David 1991 How to Set Parameters: Arguments from Language Change. Cambridge, MA: MIT Press. Lightfoot, David 1999 The Development of Language: Acquisition, Change, and Evolution. Oxford: Blackwell. Long, Michael H. 2007 Problems in SLA. Mahwah, NJ: Lawrence Erlbaum. Lust, Barbara C. 2006 Child Language: Acquisition and Growth. Cambridge: Cambridge University Press. MacWhinney, Brian 2000 The CHILDES Project: Tools for Analyzing Talk. Mahwah, NJ: Lawrence Erlbaum. Marcus, Gary F., Steven Pinker, Michael Ullman, Michelle Hollander, T. John Rosen, and Fei Xu 1992 Overregularization in Language Acquisition. (Monographs of the Society for Research in Child Development 57.) Chicago: University of Chicago Press. Marcus, Gary F. 1993 Negative evidence in language acquisition? Cognition 46: 53−85. Marinis, Theodore 2003 Psycholinguistic techniques in second language acquisition research. Second Language Research 19: 144−161. McDaniel, Dana, Cecile McKee, and Helen Smith Cairns (eds.) 1996 Methods for Assessing Children’s Syntax. Cambridge, MA: MIT Press. Meisel, Jürgen M. 2009 Second language acquisition in early childhood. Zeitschrift für Sprachwissenschaft 28: 5−34. Meisel, Jürgen M. 2011 First and Second Language Acquisition: Parallels and Differences. Cambridge: Cambridge University Press. 
Meisel, Jürgen M., and Natascha Müller 1992 Finiteness and verb placement in early child grammars. Evidence from simultaneous acquisition of two first languages: French and German. In: Jürgen M. Meisel (ed.), The Acquisition of Verb Placement: Functional Categories and V2 Phenomena in Language Acquisition, 109−138. Dordrecht: Kluwer. Menn, Lise, and Nan Bernstein Ratner (eds.) 2000 Methods for Studying Language Production. Mahwah, NJ: Lawrence Erlbaum. Milroy, Lesley, and Pieter Muysken (eds.) 1995 One Speaker, Two Languages: Cross-disciplinary Perspectives on Code-Switching. Cambridge: Cambridge University Press. Morgan, James L., and Katherine Demuth (eds.) 1996 Signal to Syntax. Bootstrapping from Speech to Grammar in Early Acquisition. Mahwah, NJ: Lawrence Erlbaum.

Müller, Natascha 1998 Transfer in bilingual first language acquisition. Bilingualism: Language and Cognition 1: 151−171. Muysken, Pieter 2000 Bilingual Speech: A Typology of Code-mixing. Cambridge: Cambridge University Press. Myles, Florence 2005 Interlanguage corpora and second language acquisition research. Second Language Research 21: 373−391. O’Grady, William 1997 Syntactic Development. Chicago: University of Chicago Press. Onnis, Luca, Heidi R. Waterfall, and Shimon Edelman 2008 Learn locally, act globally: Learning language from variation set cues. Cognition 109: 423−430. Paradis, Johanne, and Fred Genesee 1996 Syntactic acquisition in bilingual children: Autonomous or interdependent? Studies in Second Language Acquisition 18: 1−25. Paradis, Michel 2004 A Neurolinguistic Theory of Bilingualism. Amsterdam: John Benjamins. Penner, Zwi, and Jürgen Weissenborn 1996 Strong continuity, parameter setting, and trigger hierarchy. In: Harald Clahsen (ed.), Generative Perspectives on Language Acquisition, 161−200. Amsterdam: John Benjamins. Philip, William 1995 Event quantification in the acquisition of universal quantification. Doctoral dissertation, University of Massachusetts, Amherst, MA. Philip, William, and Peter Coopmans 1996 The double Dutch delay of Principle B effect. In: Proceedings of the 20th annual Boston University Conference on Language Development, 576−587. Somerville, MA: Cascadilla Press. Pickering, Martin J., and Victor S. Ferreira 2008 Structural priming: A critical review. Psychological Bulletin 134: 427−459. Pine, Julian M., Gina Conti-Ramsden, Kate L. Joseph, Elena V. M. Lieven, and Ludovica Serratrice 2008 Tense over time: Testing the Agreement/Tense Omission Model as an account of the pattern of tense-marking provision in early child English. Journal of Child Language 35: 55−75. Pine, Julian M., and Elena V. M. Lieven 1997 Slot and frame patterns and the development of the determiner category. 
Applied Psycholinguistics 18: 123−138. Pine, Julian M., Caroline Rowland, Elena V. M. Lieven, and Anna Theakston 2005 Testing the Agreement/Tense Omission Model: why the data on children’s use of non-nominative 3psg subjects count against the ATOM. Journal of Child Language 32: 269−289. Pinker, Steven 1984 Language Learnability and Language Development. Cambridge, MA: Harvard University Press. Pinker, Steven 1989 Learnability and Cognition: The Acquisition of Argument Structure. Cambridge, MA: MIT Press. Pinker, Steven, David S. Lebeaux, and Loren A. Frost 1987 Productivity and constraints in the acquisition of the passive. Cognition 26: 195−267. Poeppel, David, and Kenneth Wexler 1993 The full competence hypothesis of clause structure in early German. Language 69: 365−424.

Pratt, Amy, and John Grinstead 2007 The optional infinitive stage in child Spanish. In: Proceedings of Generative Approaches to Language Acquisition − North America, 351−362. McGill University, Montréal: Cascadilla Press. Prince, Alan, and Paul Smolensky 2004 Optimality Theory: Constraint Interaction in Generative Grammar. Oxford: Blackwell. Pullum, Geoffrey K., and Barbara C. Scholz 2002 Empirical assessment of stimulus poverty arguments. The Linguistic Review 19: 9−50. Radford, Andrew 1990 Syntactic Theory and the Acquisition of English Syntax: The Nature of Early Child Grammars of English. Oxford: Blackwell. Radford, Andrew 1995 Phrase structure and functional categories. In: Paul Fletcher, and Brian MacWhinney (eds.), The Handbook of Child Language, 483−507. Oxford: Blackwell. Rispoli, Matthew 1999 Case and agreement in English language development. Journal of Child Language 26: 357−372. Rispoli, Matthew 2005 When children reach beyond their grasp: Why some children make pronoun case errors and others don’t. Journal of Child Language 32: 93−116. Ritchie, William C., and Tej K. Bhatia 1999 Handbook of Child Language Acquisition. San Diego, CA: Academic Press. Ritchie, William C., and Tej K. Bhatia 2009 The New Handbook of Second Language Acquisition. (Blackwell Handbooks in Linguistics.) Oxford: Blackwell. Rizzi, Luigi 1993/4 Some notes on linguistic theory and language development: The case of root infinitives. Language Acquisition 3: 371−393. Rizzi, Luigi 2000 Remarks on early null subjects. In: Marc-Ariel Friedemann, and Luigi Rizzi (eds.), The Acquisition of Syntax: Studies in Comparative Developmental Linguistics, 269−292. London: Longman. Roeper, Thomas 1996 The role of merger theory and formal features in acquisition. In: Harald Clahsen (ed.), Generative Perspectives on Language Acquisition. Empirical Findings, Theoretical Considerations and Crosslinguistic Comparisons, 415−450. Amsterdam: John Benjamins. Roeper, Thomas, and E. 
Matthei 1975 On the acquisition of all and some. Papers and Reports on Child Language Development 9: 63−74. Stanford, CA: Stanford University Press. Romaine, Suzanne 1989 Bilingualism. Oxford: Blackwell. Rothman, Jason 2008 Why all counter-evidence to the critical period hypothesis in second language acquisition is not equal or problematic. Language and Linguistics Compass 2: 1063−1088. Rowland, Caroline, and Julian M. Pine 2000 Subject-auxiliary inversion errors and wh-question acquisition: ‘What children do know?’. Journal of Child Language 27: 157−181. Rowland, Caroline, and Anna Theakston 2009 The acquisition of auxiliary syntax: A longitudinal elicitation study. Part 2: The modals and auxiliary DO. Journal of Speech, Language, and Hearing Research 52: 1471−1492.

Savage, Ceri, Elena V. M. Lieven, Anna Theakston, and Michael Tomasello 2003 Testing the abstractness of children’s linguistic representations: Lexical and structural priming of syntactic constructions in young children. Developmental Science 6: 557−567. Savage, Ceri, Elena V. M. Lieven, Anna Theakston, and Michael Tomasello 2006 Structural priming as implicit learning: The persistence of lexical and structural priming in 4-year-olds. Language Learning and Development 2: 27−49. Saxton, Matthew 1997 The contrast theory of negative input. Journal of Child Language 24: 139−161. Saxton, Matthew 2010 Child Language: Acquisition and Development. Thousand Oaks, CA: Sage Publications. Saxton, Matthew, Phillip Backley, and Clare Gallaway 2005 Negative input for grammatical errors: effects after a lag of 12 weeks. Journal of Child Language 32: 643−672. Schachter, Jacqueline 1988 Second language acquisition and its relationship to Universal Grammar. Applied Linguistics 9: 219−235. Schachter, Jacqueline 1996 Maturation and the issue of universal grammar in second language acquisition. In: William C. Ritchie, and Tej K. Bhatia (eds.), Handbook of Second Language Acquisition, 159−193. New York, NY: Academic Press. Schütze, Carson T., and Kenneth Wexler 1996 Subject case licensing and English root infinitives. Proceedings of the Annual Boston University Conference on Language Development 20: 670−681. Somerville, MA: Cascadilla Press. Schwartz, Bonnie D., and Rex A. Sprouse 1996 L2 cognitive states and the Full Transfer/Full Access Model. Second Language Research 12: 40−74. Sekerina, Irina A., Eva M. Fernández, and Harald Clahsen (eds.) 2008 Developmental Psycholinguistics: On-Line Methods in Children’s Language Processing. Amsterdam: John Benjamins. Shafer, Valerie, David Shucard, Janet Shucard, and LouAnn Gerken 1998 An electrophysiological study of infants’ sensitivity to the sound patterns of English speech. Journal of Speech and Hearing Research 41: 874−886. 
Shaoul, Cyrus, and Chris Westbury 2011 Formulaic sequences: Do they exist and do they matter? The Mental Lexicon 6: 171−196. Shi, Rushen, Janet F. Werker, and Anne Cutler 2006 Recognition and representation of function words in English-learning infants. Infancy 10: 187−198. Sigurjónsdóttir, Sigga 1992 Binding in Icelandic: Evidence from language acquisition. Doctoral dissertation, University of California, Los Angeles. Singleton, David 2005 The Critical Period Hypothesis: A coat of many colours. International Review of Applied Linguistics in Language Teaching 43: 269−285. Slobin, Dan I., Melissa Bowerman, Penelope Brown, Sonja Eisenbeiss, and Bhuvana Narasimhan 2011 Putting things in places: Developmental consequences of linguistic typology. In: Jürgen Bohnemeyer, and Eric Pederson (eds.), Event Representation, 134−165. Cambridge: Cambridge University Press. Sokolov, Jeffrey L., and Catherine Snow (eds.) 1994 Handbook of Research in Language Development Using CHILDES. Hillsdale, NJ: Lawrence Erlbaum.

Stoll, Sabine, Kirsten Abbot-Smith, and Elena V. M. Lieven 2009 Lexically restricted utterances in Russian, German, and English child-directed speech. Cognitive Science 33: 75−103. Tesar, Bruce, and Paul Smolensky 2000 Learnability in Optimality Theory. Cambridge, MA: MIT Press. Theakston, Anna, and Caroline Rowland 2009 The acquisition of auxiliary syntax: A longitudinal elicitation study. Part 1: Auxiliary BE. Journal of Speech, Language, and Hearing Research 52: 1449−1470. Thornton, Rosalind, and Kenneth Wexler 1999 Principle B, VP-Ellipsis, and Interpretation in Child Grammar. Cambridge, MA: MIT Press. Thornton, Rosalind, and Stephen Crain 1994 Successful cyclic movement. In: Teun Hoekstra, and Bonnie D. Schwartz (eds.), Language Acquisition Studies in Generative Grammar, 215−252. Amsterdam: John Benjamins. Tomasello, Michael 2001 The item-based nature of children’s early syntactic development. In: Michael Tomasello, and Elizabeth Bates (eds.), Language Development. The Essential Readings, 169−186. Oxford: Blackwell. Tomasello, Michael 2003 Constructing a Language: A Usage-Based Theory of Language Acquisition. Cambridge, MA: Harvard University Press. Tomasello, Michael 2006 Acquiring linguistic constructions. In: Deanna Kuhn, Robert S. Siegler, William Damon, and Richard M. Lerner (eds.), Handbook of Child Psychology, Volume 2, Cognition, Perception, and Language, 255−298. Hoboken, NJ: John Wiley and Sons. Tsimpli, Ianthi M. 1992 Functional categories and maturation: The prefunctional stage of language acquisition. Doctoral dissertation, University College London, UK. Tsimpli, Ianthi M., and Neil V. Smith 1991 Second language learning: evidence from a polyglot savant. UCL Working Papers in Linguistics 3: 171−184. Ullman, Michael 2001 The neural basis of lexicon and grammar in first and second language: The Declarative/Procedural Model. Bilingualism: Language and Cognition 4: 105−122. Vainikka, Anne 1993/4 Case in the development of English syntax. 
Language Acquisition 3: 257−325. Valian, Virginia 1991 Syntactic subjects in the early speech of American and Italian children. Cognition 40: 21−81. Valian, Virginia, Stephanie Solt, and John Stewart 2009 Abstract categories or limited-scope formulae? The case of children’s determiners. Journal of Child Language 36: 743−778. Valian, Virginia, and Lyman Casey 2003 Young children’s acquisition of wh-questions: The role of structured input. Journal of Child Language 30: 117−143. Vinther, Thomas 2002 Elicited imitation: A brief overview. International Journal of Applied Linguistics 12: 54−73. Volterra, Virginia, and Traute Taeschner 1978 The acquisition and development of language by bilingual children. Journal of Child Language 5: 311−326.



Sonja Eisenbeiss, Colchester (UK)

53. Syntax and Language Disorders

1. Introduction
2. Accounting for syntactic deficits
3. Problems in accounting for syntactic deficits
4. Summary and outlook
5. Appendix
6. References (selected)

Abstract

A central question in research on syntactic disorders is whether and how syntactic representations or operations are defective in language-impaired speakers, and how the observed deficits can be captured in an explanatory theoretical framework. While a major goal of studying syntactic deficits is to provide a basis for effective therapeutic intervention for language-impaired speakers, linguists have been intrigued by the assumption that syntactic deficits can provide insights into the structure and organization of the human language faculty. This article sketches some major issues that have shaped the field over the last 40 years. It presents the aims pursued in studying syntactic deficits, describes typical syntactic deficits in language production and comprehension, and discusses how syntactic theory and syntactic-deficit approaches have co-evolved over time.


1. Introduction

Syntactic deficits are common in language disorders and have always been a focus of research on them. The investigation of whether syntactic deficits occur in a given acquired or developmental language disorder, which syntactic structures or processes are affected, and how such deficits can be captured in an explanatory theoretical account has dominated linguistic research on language disorders from its very beginnings to the present. Interest in syntactic deficits first focussed on Broca's aphasia − an acquired language disorder caused by strokes affecting left frontal brain regions. The core symptom of Broca's aphasia is agrammatic spontaneous-speech production, which is characterised by omissions of free functional elements and a considerable reduction in the length and syntactic complexity of utterances. This leads to a preponderance of very short utterances that seem to consist of a simple, linearly organised string of open-class words. As the term agrammatic indicates, the spontaneous speech of speakers suffering from this disorder gives the impression of lacking syntactic structure. Agrammatic Broca's aphasia was therefore considered the ideal candidate for investigating syntactic deficits and has been at the focus of linguistic investigations of syntactic disorders ever since the advent of modern generative syntactic theory (Chomsky 1957). According to Chomsky's generative approach, the human language faculty is based on a specialized, domain-specific language organ situated in the brain that is part of our biological endowment and therefore genetically specified (e.g. Chomsky 1980, 2002). If this conception holds true, brain lesions involving this organ should impair it and should, hence, result in an impairment of the language faculty.
Agrammatic Broca's aphasia seemed to exemplify this theoretically predicted case of a language faculty impaired by brain lesions. Consequently, from the late 1960s to the late 1980s the linguistic investigation of language disorders associated with syntactic deficits was predominantly concerned with the deficits observed in Broca's aphasia, the focus of this research being on whether agrammatism is due to a deficit affecting syntactic competence and how to capture this purported competence deficit in a syntax-theoretical framework. The interest in agrammatic Broca's aphasia was also furthered by the assumption that Wernicke's aphasia constituted a syndrome mirroring the deficits observed in Broca's aphasia. Individuals with Wernicke's aphasia typically suffer from lesions to temporo-parietal brain regions and display marked language-comprehension deficits and impairments in word retrieval. These symptoms were considered indicative of a lexical-semantic deficit in Wernicke's aphasia, whereas syntactic abilities were assumed to be intact (cf. Marin, Saffran, and Schwartz 1976; the overviews in Edwards 2005 and Wimmer 2010). The assumed dichotomy between intact lexical-semantic and impaired syntactic abilities in Broca's aphasia and spared syntactic but affected lexical-semantic capacities in Wernicke's aphasia was taken as evidence for a dual architecture of the language system, encompassing a storage component (i.e. the lexicon) and a computational component (i.e. syntax) that contains the operations to generate composite hierarchical structures such as phrases and sentences out of the elements stored in the mental lexicon (Chomsky 1965; Pinker 1999).


In the 1990s, the discovery of the British KE family − a large three-generation family in which about half of the members suffer from an inherited speech and language disorder (Gopnik and Crago 1991) − drew attention to developmental language disorders, specifically to the disorder called Specific Language Impairment (SLI). Children with Specific Language Impairment display severe problems in acquiring inflectional morphology, verb movement, and complex syntactic constructions such as relative clauses or passives. If the language capacity is innately specified, as suggested by Chomsky's generative approach, this capacity should in principle be susceptible to genetic disorders which affect the genetically encoded blueprints controlling the development and functioning of the brain areas that subserve the language organ. Indeed, in 2001 a point mutation of the human gene FOXP2 (located on the long arm of chromosome 7) was found to be related to the inherited speech and language disorder affecting the members of the KE family (Lai, Fisher, Hurst, Vargha-Khadem, and Monaco 2001). The findings suggest that the mutation of this gene leads to an atypical development of brain areas typically associated with speech and language functions (Liégeois, Baldeweg, Connelly, Gadian, Mishkin, and Vargha-Khadem 2003). This was the first attested case in which a genetic defect could be directly related to neuro-anatomical abnormalities which, in turn, are held responsible for deficits in speech and language functions. FOXP2 was therefore enthusiastically celebrated as the first “language gene” to be discovered. Although further genetic investigations have revealed that the point mutation discovered in the KE family is not regularly observed in other individuals with Specific Language Impairment (e.g. SLI Consortium 2002; Meaburn, Dale, Craig, and Plomin 2002), the evidence suggests that Specific Language Impairment has a genetic basis (cf.
SLI Consortium 2002; overview in Stromswold 2001; Bishop 2009). This nourishes the hope that it might be possible to explore the genetic basis underlying the specific syntactic deficits observed in this syndrome, thereby ultimately uncovering those aspects of the language capacity that are genetically specified in our species. Moreover, SLI is considered to display an interesting dissociation between impaired language and spared general cognitive capacities, a dissociation that has been argued to demonstrate that the human language capacity is independent of other cognitive domains, as assumed in the framework of Generative Grammar (e.g. Chomsky 1957, 1980), and cannot be reduced to the working of domain-general cognitive operations (see e.g. Levy 1996; Van der Lely 1997, 2005). The deficits associated with this developmental language disorder have hence attracted much research on syntactic deficits over the last 20 years. Meanwhile, research on syntactic deficits has also brought other language disorders into focus, such as those seen in Down's syndrome, Parkinson's disease, or Autism. Today, there is hardly any acquired or developmental deficit syndrome associated with language impairments for which a deficit affecting the syntactic domain is not discussed. Such syndromes include developmental disorders such as Specific Language Impairment, Williams syndrome, Down's syndrome, or Autism and acquired language deficits such as aphasic language disorders (Broca's aphasia, Wernicke's aphasia) or degenerative brain diseases (Parkinson's disease, Alzheimer's disease, Huntington's disease). The wealth of research that has been undertaken in this area precludes any in-depth discussion of the investigations conducted and the debates held − past and present − for every single language disorder. I will, therefore, try to single out some major issues that have shaped the field during the last 40 years.
Controversies with respect to syntactic deficits have centred on the question of whether the symptoms related to a given language disorder are due to a deficit affecting syntactic competence and, if so, how to capture this deficit in a theoretical framework. Many issues first arose with respect to Broca's aphasia, the first language disorder to attract research informed by theoretical syntax. Research on Broca's aphasia will therefore be at the focus of this article. Nevertheless, the article tries to achieve broader coverage by referring to related research conducted on other language disorders where possible and by taking up issues and findings relevant to specific language disorders. In addition, the appendix gives short descriptions of the most important language disorders that have been associated with syntactic deficits, sketches the relevance of these disorders for the field, and gives references to publications where the reader can find more detailed information on the respective syndromes.

1.1. Why investigate syntactic deficits?

In investigating language disorders, the aim is twofold. A first goal is to describe and explain which aspects of the language faculty are impaired or retained in a given language disorder, i.e. we want to learn about the impaired language system. Such knowledge could then provide the basis for effective therapeutic intervention. Most research on language disorders has naturally focussed on this goal. However, although the task is set out precisely enough in theory, accomplishing it has turned out to be quite intricate in practice. To this day, there is not a single language disorder for which a general consensus regarding the nature of the observed language deficits has been reached among researchers (see section 2). A second goal of the investigation of syntactic deficits is to learn about the structure of the human language capacity by investigating cases where this capacity seems to break down. Here, the idea is that a thorough study of the syntactic deficits associated with language disorders might provide insights into the structure and organization of the normal language system. Since Chomsky (1957) introduced the generative approach into the field of linguistics, the subject matter of investigation for a generative linguist has been the abstract linguistic knowledge of an individual, i.e. her/his language competence or I-language. This knowledge is accessed when we produce or comprehend utterances or when we judge their grammaticality. The application of this abstract system of knowledge in language production, comprehension, or judgement results in performance data. The abstract system of grammatical knowledge itself is not directly accessible to investigation; it remains a "black box". Insights into our language competence can therefore only be gained by analysing performance data that reflect the application of this knowledge.
Note, however, that grammatical competence is only one factor determining performance; other factors, such as processing capacities or social, pragmatic, and discourse factors, affect it too. As linguistic competence is a black box which does not allow for direct observation, the crucial task for generative linguists is to find evidence for the content of this black box within performance data. The deficits observed in individuals with language disorders constitute one type of performance data that can be used to draw inferences about language competence. The value of deviant utterances for elucidating the human language faculty was probably first noticed in speech-error research. A linguistic investigation of speech errors revealed that such errors were not random, but that they were constrained by the architecture of the language system and only involved elements of the same category and level of representation (Fromkin 1971). A similar logic applies to erroneous forms produced in language impairments, the idea being that we can learn about the human language faculty by investigating which factors constrain the errors that occur in language-impaired individuals (cf. Fromkin 1997 for overview). The assumption that the investigation of syntactic deficits can provide insights into the structure of the human language faculty rests on four axioms concerning the nature of the human language capacity and its breakdown following brain lesions. According to the assumptions of autonomy and modularity (Fodor 1983), the human language faculty is autonomous from other cognitive domains or modules (i.e. it is domain-specific) and consists of task-specific and independent submodules carrying out different types of computations on different types of representations. According to the fractionation assumption (Caramazza 1984), brain damage can result in the selective impairment of specific submodules of the language capacity. The transparency assumption (Caramazza 1984) states that the components unaffected by the brain lesion will continue to function as normal, such that the output of the language system will directly reflect the impairment of the affected component. The function of the impaired module in the language system can then be directly inferred from the symptoms observed in the language-impaired individual. The validity of these assumptions has been controversially debated (see Kosslyn and Van Kleek 1990; Caramazza 1992; Levy and Kavé 1999; Penke 2006: section 2.3 for discussion).
The assumptions of modularity and autonomy are, for instance, rejected in functional approaches to language, which deny the notion of a domain-specific language capacity performing language-specific operations on language-specific representations (cf. Penke and Rosenbach 2007). The transparency assumption is threatened by an individual's ability to compensate for her/his language deficit, resulting in language data that mirror the application of adaptation strategies rather than the impairment of the affected component (e.g. Marin, Saffran, and Schwartz 1976; Kolk and Heeschen 1992). Nevertheless, pursuing the goal of finding out about the language faculty by studying language disorders has proven quite fruitful in practice. The major justification for following this goal and, hence, for adopting these axioms comes from the observation that results obtained from the analysis of language deficits have been supported by converging results both from theoretical linguistics and from data obtained with other psycho- or neurolinguistic methodologies (cf. Caramazza 1992; Penke 2006: section 2.3 for discussion). In pursuing the goal of drawing inferences about the normal language faculty from language disorders, the investigation of selective deficits has attracted particular attention. The observation that a language disorder affects a linguistic entity (i.e. an element, structure, or operation) X while sparing others is seen as evidence that the affected entity is represented in the mind/brain in a way that is distinct from the entities unaffected by this language disorder. Consider for example the assumption that agrammatic Broca's aphasia is caused by a syntactic deficit sparing the lexical-semantic component, whereas Wernicke's aphasia is associated with a deficit of the semantic component sparing syntax (cf. Marin, Saffran, and Schwartz 1976).
The assumed double dissociation of the deficits associated with these two aphasic syndromes (syndrome A impairs X and spares Y, whereas syndrome B spares X and impairs Y) was regarded as evidence for the view that the language faculty consists of two autonomous modules − a semantic and a syntactic component − that can be selectively affected in language disorders. Meanwhile, however, a number of studies have provided evidence that syntax is not unaffected in Wernicke's aphasia but that the deficits seen in Wernicke's aphasia largely resemble the impairments observed in Broca's aphasia (see appendix and e.g. Edwards 2005; Wimmer 2010; Penke 2013a for overview), thus challenging the view of a double dissociation between syntactic and lexical-semantic deficits in Broca's and Wernicke's aphasia. Nevertheless, the search for selective deficits drives much research on language impairments. A more recent example is the exploration of selective deficits with regularly or irregularly inflected forms in language-impaired speakers, which have been taken as evidence that regular and irregular inflection are subserved by two qualitatively different cognitive operations (storage and retrieval vs. computation) localized in different brain areas (cf. Pinker 1999; Ullman, Pancheva, Love, Yee, Swinney, and Hickok 2005; Penke 2006, 2012). A third goal in studying language impairments is based on the assumption that data from language-impaired speakers might serve to evaluate competing linguistic theories. Out of a set of competing theories, the one that captures the observed language deficits best, i.e. the theory that allows for representing preserved and impaired structures or functions as natural classes within the theory, is to be preferred, as it has the property of breakdown compatibility (Grodzinsky 1990: 17). Consider as an example two linguistic constructions X and Y which are considered qualitatively different by theory A, while theory B assumes that the same mental representations and/or operations underlie both constructions.
In this case, the observation that construction X is impaired in a specific deficit syndrome while construction Y is unaffected can be considered evidence for theory A, which assumes that X and Y belong to different natural classes within the theory. In contrast, the observed selective deficit is incompatible with theory B, which holds that constructions X and Y belong to the same natural class and should, hence, be equally affected by the language disorder (see section 2.4 for an example). Finally, with respect to exploring the human language faculty, another goal of investigating language disorders is to find out about the localisation of specific grammatical capacities in the human brain. By exploring the correlation between specific language deficits and the localisation of the brain lesions causing these deficits, the hope is to identify those brain areas that critically subserve a specific grammatical structure or operation. Whereas modern imaging techniques such as PET and fMRI indicate all brain areas active during a given cognitive operation, lesion studies allow for identifying those brain areas whose functioning is critical for performing this cognitive operation. A lesion of such a brain area should necessarily result in a deficit of the cognitive operation that critically depends on the normal functioning of this brain area (Rugg 1999). In practice, however, factors such as the variability of lesion sites across individuals diagnosed with a specific disorder syndrome (e.g. Basso, Roch Lecours, Moraschini, and Vanier 1985; De Bleser 1988; Willmes and Poeck 1993), the neuroanatomical variation between individuals (e.g. Uylings, Malofeeva, Bogolepova, Amunts, and Zilles 1999), and the variability of language deficits observed in individuals with similar brain lesions (cf. section 3) pose a challenge to this goal.
The linguistic investigation of language deficits offers a fascinating route to explore the nature, the localisation, and the functioning of the human language faculty. The study of language deficits has also been considered to serve as "a natural laboratory in which linguistic theories may be tested" (Levy and Kavé 1999: 138). Nevertheless, despite this potential, researchers have been very cautious about advancing theoretical claims on the basis of data from language impairments. Rather than making use of impairment data to advance syntactic theory, syntactic theories have been used to explain impairment data (see section 2).

1.2. What are typical symptoms of syntactic deficits?

A deficit affecting syntactic representations or the processing of syntactic structures in language performance has to be observable in language production and/or language comprehension. Whether a language-impaired individual displays a syntactic deficit in language comprehension can only be revealed by explicitly testing that individual's language-comprehension capacities. In contrast, symptoms suggestive of a syntactic deficit can, in general, quite readily be observed in the spontaneous-speech production of affected individuals and often require no explicit testing. Consequently, much research on syntactic deficits has concentrated on the signs of syntactic deficits that are observable in spontaneous-speech production.

1.2.1. Typical signs indicative of syntactic deficits in spontaneous-speech production

While deficit symptoms in spontaneous speech may vary between different syndromes, research has revealed a surprising correspondence of such symptoms across different language disorders. Among the core symptoms indicative of syntactic deficits in spontaneous-speech production are the following:
− Problems with bound inflectional morphology are a core symptom in many language disorders. Depending on the morphological system of the respective language, such problems might lead to the omission or substitution of inflectional markers. Omissions of inflectional markers only occur if the remaining word stem is a possible word in the respective language (such as book in *two book or kiss in *she kiss) (Grodzinsky 1984). Where the omission does not result in a possible word (e.g. Italian *ross instead of rossa[FEM, SG]), inflectional markers are substituted − often by unmarked forms such as the infinitive form in verbs and the nominative form in nouns. Omissions and/or substitutions of bound inflectional morphemes are a characteristic symptom of agrammatic Broca's aphasia (Menn and Obler 1990a) and Specific Language Impairment (Levy and Kavé 1999) across languages. Similar problems have, however, also been observed in Autism (Tager-Flusberg 2002), in Down's syndrome (Laws and Bishop 2003), in children who have undergone hemispherectomy (Curtiss and Schaeffer 2005), and in Wernicke's aphasia (Kolk and Heeschen 1992) (cf. Penke 2008 for an overview of inflectional deficits).
− Free function words such as determiners, auxiliaries, or complementisers are often omitted. Such omissions characterise agrammatic Broca's aphasia (Menn and Obler 1990a), but similar omissions are also found in Specific Language Impairment (Leonard 1998), Down's syndrome (Eadie, Fey, Douglas, and Parsons 2002), Autism (Bartolucci, Pierce, and Streiner 1980), and in children who have undergone hemispherectomy (Curtiss and Schaeffer 2005).
− Sentence length is reduced in language disorders such as agrammatic Broca's aphasia (Menn and Obler 1990a), Specific Language Impairment (Leonard 1998), Autism, and Down's syndrome (Tager-Flusberg, Calkins, Nolin, Baumberger, Anderson, and Chudwick-Dias 1990). Especially in Broca's aphasia, spontaneous speech is often reduced to one- or two-word utterances which only contain the central open-class words of the intended utterance. Wernicke's aphasia is, in contrast, characterised by an increase in sentence length due to the wild-running concatenation of sentence constituents and formulas, which often results in long utterances devoid of meaning (cf. Edwards 2005).
− The produced utterances are of reduced syntactic complexity. In languages which allow for different word orderings, canonical word order is preferred to other word-order patterns. Thus, in German the canonical SVO order is preferred to the likewise grammatical OVS or XVS orderings in main clauses. Subordinate clauses are rarely produced in spontaneous speech, as are wh-questions or passives. This has been observed for agrammatic Broca's aphasia (Bates, Friederici, Wulfeck, and Juarez 1988; Menn and Obler 1990a), Wernicke's aphasia (Bates, Friederici, Wulfeck, and Juarez 1988; Niemi and Laine 1997), Specific Language Impairment (Hamann, Penner, and Lindner 1998), Down's syndrome (Rondal and Comblain 1996), autistic children (Durrleman and Zufferey 2010), and children who have undergone hemispherectomy (Curtiss and Schaeffer 2005).
− In languages with overt obligatory verb movement for finite verbs, verbs might appear in a non-finite form in main clauses and fail to undergo verb movement. These structures are called root-clause infinitives.
They regularly occur in agrammatic Broca's aphasia (Kolk and Heeschen 1992), in Specific Language Impairment (Rice and Wexler 1996), and in Down's syndrome (Penke 2013b).

Whereas there is general agreement on the deficit symptoms that occur in the spontaneous-speech production of a given language disorder, the relevance of deficit symptoms identified in language experiments tailored to test a specific syntactic-deficit account is often discussed controversially among researchers. Controversies concern issues such as whether the observed deficits are characteristic symptoms of a specific syntactic language disorder that can be observed in every affected individual, or whether the experimental set-up contains flaws in procedure, experimental material, or subject choice that render the data invalid (cf. e.g. the discussions following the article of Grodzinsky 2000). Nevertheless, carefully designed experiments that allow for a systematic collection of relevant data with respect to a critical syntactic construction are necessary to further our understanding of language disorders. A well-known limitation of spontaneous-speech data is that more complex syntactic structures are only rarely produced. However, absence of evidence is not necessarily evidence of absence. That is, we simply cannot know whether a specific form or construction is missing because the individual can no longer produce it due to her/his language deficit, or whether the form or construction fails to show up by coincidence, for example, because we did not look at a large enough data set. A systematic collection of relevant data in an experiment allows for disentangling this issue. Comparisons between elicited and spontaneous-speech data have indeed illustrated that constructions or forms whose production poses a problem for a language-impaired speaker are often avoided in spontaneous-speech production and only show up when the context provided in the experimental task requires the production of this specific form or construction (Kolk and Heeschen 1992; Penke 2006: section 2.1.1). A further advantage of experimental studies is that the researcher can control for all sorts of confounding factors that might influence the individual's language behaviour, such as word-form frequency and sentence length. Thus, language-impaired speakers might experience problems with agreement or case inflection but correctly use inflected suppletive forms that have a very high frequency of occurrence (Penke 1998: 200−202, 211−212). Also, a syntactic construction might pose problems for a subject simply because it contains more words to produce, parse, or remember than another, less affected construction. In a carefully designed experiment such potentially influencing factors are controlled for. On the other hand, the control over the experimental material and the reaction of the tested individual comes at a price, namely the (sometimes highly) artificial situation, which might also influence the individual's language behaviour (e.g. Heeschen and Schegloff 2003). Thus, experimental data have to deal with the critique that observed deficits might be due to the unnaturalness of the situation and might constitute an artefact of the chosen experimental design. Hence, it is important to adopt different methodologies in experimental testing.

1.2.2. Typical signs indicative of syntactic deficits in language comprehension

While symptoms indicative of syntactic deficits are readily observable in spontaneous-speech production, syntactic deficits in language comprehension are hidden from simple observation and can only be revealed by explicit testing. As mentioned above, experimental data are, however, susceptible to criticism targeting the experimental set-up. Thus, it can be argued that the number of items was too small to obtain reliable results or that confounding factors were not sufficiently controlled for. Moreover, whereas spontaneous-speech symptoms are generally seen as facts which are rarely disputed, the matter of syntactic comprehension deficits targets a theoretical issue related to the nature of the language faculty. Chomsky's generative approach to the human language faculty predicts that the language organ underlying this faculty should be affected by brain lesions leading to language disorders. A deficit impairing the syntactic competence of an individual should affect language production and language comprehension in parallel (Weigl and Bierwisch 1970), since grammatical competence is put to use in both language production and comprehension (i.e. in performance) (Chomsky 1980). Consequently, researchers who assume that syntactic disorders are due to a deficit in syntactic competence try to provide evidence for a deficit that affects all facets of language performance (language production, comprehension, and judgement) in parallel. In contrast, researchers who assume that there is no such thing as syntactic competence, that syntactic disorders are due to modality-specific processing disorders, or that syntactic competence is basically intact in language disorders aim to show that language comprehension is not affected or, at least, not affected in parallel to language production. The presence or absence of comprehension deficits is, hence, hotly disputed.
A classic design in testing language-comprehension capacities is the sentence-picture-matching task where the subject has to match a given sentence, such as The boy is kissed

1842

VIII. The Cognitive Perspective

by the girl, to one of two pictures: one depicting the action described in the sentence (e.g. girl kissing boy), the other depicting the reverse action (e.g. boy kissing girl). Studies on language-comprehension capabilities adopting this or related designs have revealed that in languages with a canonical SVO word order most language-impaired individuals display a better understanding of sentences with this canonical word order compared to sentences with a non-canonical word order such as object clefts (it is the boy who the girl kissed), object relatives (the boy who the girl kissed …), passives (the boy is kissed by the girl), or object topicalisations, where the object has moved out of its base-generated position and precedes the subject. The typical error in the comprehension of these constructions is that the first noun phrase occurring in the sentence is interpreted as the AGENT of the action. While this does not lead to comprehension problems when the first noun phrase is indeed the AGENT, such as in active SVO sentences (the girl kissed the boy), subject cleft constructions (it is the girl that kissed the boy), or subject relative clauses (the girl who kissed the boy …), problems arise when the first noun phrase is the THEME of the action. Then, for example, a passive clause such as the boy is kissed by the girl might be interpreted as boy kissed girl. The better comprehension of subject- or AGENT-initial sentences as opposed to object/THEME-initial sentences is referred to as the subject-object asymmetry. The English passive construction is certainly the best-investigated construction in language-comprehension studies. A good understanding of active sentences compared to an impaired understanding of passive sentences (e.g.
The boy is kissed by the girl interpreted as boy kissed girl) has been observed in Broca’s aphasia (Caramazza and Zurif 1976; Grodzinsky 2000) and Wernicke’s aphasia (Bastiaanse and Edwards 2004), in dementia of the Alzheimer type (Grober and Bang 1995), in Autism (Tager-Flusberg 1981), Specific Language Impairment (Van der Lely 1996), Down’s syndrome (Ring and Clahsen 2005), in children with early unilateral focal brain lesions (Dick, Wulfeck, Krupa-Kwiatkowski, and Bates 2004), and in children who underwent a resection of the left brain hemisphere (Dennis and Whitaker 1976).

These findings have, however, not gone unchallenged, and debate has been especially intense over the comprehension problems in agrammatic Broca’s aphasia (cf. Martin 2006 for overview). For one, a subject-object asymmetry, resulting from the strategy to regard the first noun phrase as AGENT of the action, is also regularly observed in subjects who do not suffer from any language impairment (cf. e.g. MacWhinney and Bates 1989; Ferreira 2003). Thus, this asymmetry and the underlying comprehension strategy do not seem to be specific to syntactic deficits in language-impaired speakers. Also, there has been a fierce controversy on the issue of how to deal with language-impaired speakers who do not display the described comprehension problem. A comparison of language-comprehension abilities in a group of unimpaired subjects and a group of language-impaired speakers will typically show that the group of language-impaired speakers performs significantly worse with respect to the comprehension of non-canonical structures (such as passives) compared to the group of unimpaired subjects. Whereas this finding will hold for the group, there might be some language-impaired individuals who will perform within the range of the unimpaired subjects. Indeed, a number of case studies of agrammatic Broca’s aphasics who do not display problems in understanding non-canonical sentences have been published (cf.
Berndt, Mitchum, and Haendiges 1996; Martin 2006 for overview). One group of researchers regards these studies as providing evidence for the claim that language-comprehension problems are no characteristic sign

53. Syntax and Language Disorders

of Broca’s aphasia and, therefore, do not have to be captured in deficit theories accounting for this disorder. The other group, however, argues that these cases are to be expected in a normally distributed group of language-impaired subjects and constitute no counterexamples to the claim that the described language-comprehension deficit is a characteristic symptom of Broca’s aphasia (cf. Grodzinsky 2000; Drai and Grodzinsky 2006 and the subsequent discussions of methodological issues concerning how to compare and evaluate group differences).

Besides the above-described problems in understanding non-canonical sentence structures, comprehension problems have been found in the area of binding. Interestingly, problems in this area seem to be more syndrome-specific: individuals with different disorder syndromes seem to display distinct problems in interpreting pronouns. Thus, children with Specific Language Impairment have been found to give an anaphoric reading to pronouns and might choose a picture showing that Mowgli tickles himself as an adequate depiction of the sentence Mowgli tickles him (Van der Lely and Stollwerck 1997). A similar behaviour has been observed in individuals with Broca’s aphasia (Grodzinsky, Wexler, Chien, Marakovitz, and Solomon 1993). Wernicke’s aphasics, in contrast, seem to display a general disruption in understanding pronouns and, thus, might judge that a sentence like Mowgli tickles him does not match a picture in which Mowgli tickles Baloo Bear (Grodzinsky, Wexler, Chien, Marakovitz, and Solomon 1993). A different behaviour has again been observed in individuals with Down’s syndrome, who experience problems in understanding reflexive pronouns and might, hence, accept a picture showing that Mowgli tickles Baloo Bear as an illustration of the sentence Mowgli tickles himself (Ring and Clahsen 2005).
In contrast to all these groups, individuals with Williams syndrome have been found to be unimpaired with respect to the comprehension of pronouns and anaphors (Clahsen and Almazan 1998; Ring and Clahsen 2005). The syndrome-specificity of the deficits observed in this area of grammar has inspired researchers in recent years to pinpoint the precise nature of the deficits related to pronoun comprehension in the different syndromes (cf. section 2.5).
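The agent-first strategy underlying the subject-object asymmetry discussed above can be made concrete with a small sketch. This is a toy illustration only; the function name and the list-of-noun-phrases encoding are assumptions made for exposition, not part of the experimental literature:

```python
def agent_first_reading(noun_phrases):
    """Interpret a sentence by the agent-first heuristic: the first noun
    phrase is taken as AGENT and the second as THEME, regardless of the
    sentence's actual syntactic structure."""
    first, second = noun_phrases
    return {"AGENT": first, "THEME": second}

# Canonical active "the girl kissed the boy": the heuristic yields the right reading.
active = agent_first_reading(["girl", "boy"])

# Passive "the boy is kissed by the girl": the first NP is really the THEME,
# so the same heuristic yields the reversed reading "boy kissed girl".
passive = agent_first_reading(["boy", "girl"])
```

For canonical structures the heuristic assigns AGENT to girl, matching the intended meaning; applied to the passive, it assigns AGENT to boy, which is exactly the reversal error reported in the comprehension studies cited above.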

2. Accounting for syntactic deficits

Over the last 40 years, a multitude of deficit approaches have been advocated to capture the above-described symptoms as resulting from syntactic deficits. Such attempts range from deficit theories which assume rather global deficits, encompassing all of syntax, to approaches which presuppose rather subtle deficits, affecting only particular syntactic structures, processes, or features. Most of these attempts have been formulated within the framework of Generative Grammar, specifically within the theories of Government and Binding (Chomsky 1981) and the Minimalist Program (Chomsky 1995, 2000).

2.1. Global syntactic deficit accounts

The first syntactic deficit theories proposed to account for the language impairments found in agrammatic Broca’s aphasia date from the late 1970s and the early 1980s. The

framework of Generative Grammar was still in its infancy at that time, and research on language disorders focused on finding evidence for the assumption that the human language faculty consists of autonomous modules such as a syntax module (see section 1.1). Given this research agenda, researchers focused on the issue of whether Broca’s aphasia could be described as a language disorder that selectively affected the syntactic component, sparing the semantic component (e.g. Caramazza and Berndt 1978), and whether this disorder was due to an impairment in syntactic competence that affected language production and comprehension in parallel (e.g. Caramazza and Zurif 1976) (see section 1.2.2). Syntactic deficit approaches to agrammatic Broca’s aphasia assumed rather global deficits encompassing all syntactic capacities. An example of these approaches is the Lexical-Node Hypothesis suggested by Caplan (1985). According to this approach, agrammatic Broca’s aphasics are not able to construct syntactic phrase structures. They only have access to lexical categories such as verbs and nouns and to the information that is stored with these lexical elements in the mental lexicon. Besides semantic information, this includes syntactic information on word class, subcategorisation frames, and thematic roles. Since the capacity to build up hierarchically ordered syntactic structures is lost in agrammatic Broca’s aphasia, the main lexical elements are linearly ordered. Thematic roles such as AGENT or THEME are assigned to linear positions in the word string: AGENT is assigned to the noun in front of the verb, THEME is assigned to the noun coming after the verb (cf. figure 53.1). The incapacity to build up phrase-structure representations also entails that syntactic operations or relations which build on phrase structures, such as movement operations or the establishment of syntactic relations between hierarchically ordered constituents, are no longer possible.
With this proposal, Caplan tried to provide a unitary account of the omission of function words and inflectional morphemes, of the preponderance of canonical sentence structures, and of the comprehension problems that result when sentences with non-canonical ordering are presented. (1)

     linear ordering:      N          V          N
     semantic relations:   AGENT                 THEME

Fig. 53.1: Sentence representations according to a global syntactic deficit account
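The linear mapping in figure 53.1 can be sketched as a toy procedure. The tuple encoding and the function name are illustrative assumptions; the point is that thematic roles are assigned purely by position relative to the verb, with no hierarchical phrase structure involved:

```python
def linear_representation(tagged_words):
    """Assign thematic roles by linear position, as in Caplan's Lexical-Node
    Hypothesis: an N before the V receives AGENT, an N after the V receives
    THEME; no phrase structure is built."""
    verb_seen = False
    rep = []
    for word, category in tagged_words:
        if category == "V":
            verb_seen = True
            rep.append((word, "V", None))
        else:  # category == "N"
            rep.append((word, "N", "THEME" if verb_seen else "AGENT"))
    return rep

# "girl kiss boy": roles follow linear order, not syntactic structure.
flat = linear_representation([("girl", "N"), ("kiss", "V"), ("boy", "N")])
```

Because only linear order is consulted, a non-canonical string such as a passive would receive the same position-based role assignment as an active one, which is precisely the comprehension error the account predicts.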

2.2. Deficits in structure building

Over time, and in step with the development of generative syntax, more fine-grained accounts of syntactic deficits were developed that no longer assumed a global deficit affecting all syntactic operations and relations. Instead, approaches were proposed that suggested more specific deficits in the build-up of syntactic phrase structure.

2.2.1. Surface-structure-deficit accounts

With the elaboration of the theory of Government and Binding (Chomsky 1981), the distinction between a thematic D(eep)-structure and an S(urface)-structure presented itself as an obvious candidate for accounting for the syntactic deficits observed in Broca’s aphasia. D-structure is the level that encodes the lexical properties of the sentence elements and the thematic relations that hold between these elements. Surface properties of the sentence, such as the ordering of sentence elements and their inflection, are reflected at S-structure, which is derived from D-structure by movement transformations. Functional heads and their projections are central in deriving S-structure. Functional heads host inflectional morphology such as agreement, tense, and case markings that have to be associated with lexical elements such as verbs and nouns. Functional elements, like complementisers and auxiliaries, are base-generated in functional heads such as C, T, and AGR, and functional heads and their specifiers offer landing sites for constituents that move out of their base-generated D-structure positions (Haegeman 1991). As presented above (see section 1.2.1), the omission of functional elements (e.g. articles, complementisers, and auxiliaries) and substitutions or omissions of inflectional morphemes are core symptoms of agrammatic language production. In contrast, lexical information (e.g. word class and subcategorisation information) and the establishment of semantic relations between lexical elements seem relatively unimpaired in Broca’s aphasia. This dissociation led researchers to hypothesise that agrammatic Broca’s aphasia might be due to an impairment in deriving S-structure representations − an impairment that ultimately results from a deficit in projecting functional heads (e.g. De Bleser and Bayer 1991; Ouhalla 1993).
According to Ouhalla (1993) for instance, agrammatic individuals no longer have access to the inventory of functional categories that is part of Universal Grammar and, hence, can no longer project these functional heads into syntax. While the derivation of S-structure is impaired according to these approaches, all information that comes from the lexicon is unaffected and can be projected into syntax. Thus, the build-up of lexical projections such as VPs, NPs, and PPs according to the X′-scheme is still possible and the argument structure as well as the thematic structure of lexical heads can be expressed. In consequence then, the build-up of syntactic structure is reduced to the D-structural level in these approaches (cf. figure 53.2). All syntactic operations that require functional heads and their projections at the S-structural level are, however, assumed to be impaired. (2)

     [VP Spec [V′ NP V ]]   or   [V NP ]

Fig. 53.2: Sentence representations according to a S(urface)-structure-deficit account

While S-structure-deficit approaches differ with respect to whether or not inflected forms can be extracted from the lexicon, the deficit in building up S-structure representations should in any case affect movement transformations. Movement transformations require

a landing site for the moved constituent. These landing sites are provided by functional heads and their specifier positions. If functional heads are no longer projected, systematic movement transformations should not be observable in agrammatic speech production. Obligatory verb movement has provided an excellent test case for this prediction. Consider German, a V2 language where the finite verb has to move to the C position, the second structural position, in main clauses. Non-finite verbs, in contrast, remain clause-final in the V position of the head-final German VP. Figure 53.3 depicts the standard double-movement analysis of German V2 movement (cf. Haegeman 1991). In main clauses, the verb starts out in the VP-final V position and moves successively, via the INFL node, where it incorporates its finite inflectional morphology, to the C position. The subject moves to SpecIP to realise its nominative case features and to enter into an agreement relation with the finite verb in INFL. In addition, the subject or any other constituent has to move to SpecCP to ensure that the finite verb ends up in the second structural sentence position (SVX or XVS word order). (3)

     [CP ich_j [C′ spreche_i [IP t_j [I′ [VP t_j [V′ [NP Deutsch] [V t_i ]]] [INFL t_i ]]]]]

Fig. 53.3: V2 movement in German (example: Ich spreche Deutsch ‘I speak German’)
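The double movement in figure 53.3 can be sketched as a toy derivation. This is a deliberately simplified model under illustrative assumptions (plain word lists, no traces); it is not a parser, only a picture of how verb-final base order and verb-second surface order are related:

```python
def derive_v2(subject, verb_stem, agr_suffix, complement):
    """Toy model of German V2: the base (D-structure) order is verb-final;
    the verb then picks up its finite inflection in INFL and moves on to C,
    while the subject moves to SpecCP, yielding verb-second surface order."""
    d_structure = [subject, complement, verb_stem]    # head-final VP
    finite_verb = verb_stem + agr_suffix              # V-to-INFL: incorporate inflection
    s_structure = [subject, finite_verb, complement]  # INFL-to-C + subject to SpecCP
    return d_structure, s_structure

base, surface = derive_v2("ich", "sprech", "e", "Deutsch")
# surface order: ich spreche Deutsch, with the finite verb in second position
```

Note that the finite verb ends up in the second position of the surface list, which is what the preserved V2 data from agrammatic speakers discussed below bear on.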

An inability to build up S-structure and to project the functional head C should necessarily affect the obligatory V2 movement of finite verbs in German main clauses. In consequence, a systematic V2 placement of finite verbs in main clauses should not occur in the language production of German agrammatic Broca’s aphasics. However, an investigation of spontaneous and elicited speech produced by four German agrammatic aphasics revealed that out of 615 finite verbs for which verb placement could be unambiguously determined, 607 (98.7 %) were correctly placed in V2 position, whereas only the remaining 8 (1.3 %) were incorrectly placed clause-finally (Penke 2001). In contrast to finite verbs, which were almost always moved to the V2 position, 97.8 % of the 182 non-finite verbs were correctly left clause-finally in the VP. These data indicate that the V2 placement of finite verbs and the resulting asymmetry in the placement of finite and non-finite verbs in German main clauses are retained in German individuals with Broca’s aphasia. Since V2 movement requires functional projections outside VP, the data from German, as well as similar data from Dutch, French, and Italian Broca’s aphasics (Kolk and Heeschen 1992; Lonzi and Luzzatti 1993), provide strong evidence against deficit approaches which assume that functional categories can no longer be projected in agrammatic aphasia.

2.2.2. Deficits affecting the build-up of specific functional projections

As indicated above, deficit approaches assuming that all functional heads and their projections are impaired in a language disorder such as agrammatic Broca’s aphasia seem to posit too broad a deficit. As the data on verb movement in Broca’s aphasia indicate, not all functional heads are missing from agrammatic S-structure representations. Research consequently focused on the syntactic functions of specific functional categories, exploring whether or not these particular functions are retained or impaired in language-disordered subjects. These endeavours were furthered by developments in syntactic theory such as the Split-INFL Hypothesis (Pollock 1989), which proposed that the functional category INFL should be divided up into two independent functional categories: T for tense marking and AGR for establishing subject-verb agreement. This proposal and subsequent related work increased the number of potential functional heads to investigate in language disorders. As a result of this research agenda, a number of dissociations have been reported in the literature on syntactic deficits, indicating that syntactic functions related to specific functional categories are impaired whereas syntactic functions related to other functional categories are retained in particular language disorders.

Consider for example the Tree-Pruning Hypothesis advocated by Friedmann and Grodzinsky (1997), which claims that syntactic structures in agrammatic aphasics are pruned at the tense node. The account was proposed on the basis of data from a Hebrew-speaking agrammatic aphasic subject who displayed a strong dissociation within verbal morphology. While this subject had a good command of agreement inflection, she markedly failed in producing verb forms correctly inflected for tense.
Moreover, this subject could not produce embedded sentences and wh-questions, omitting the sentence-initial complementiser or wh-phrase most of the time. In accounting for these data, Friedmann and Grodzinsky made use of the Split-INFL Hypothesis and assumed an ordering of functional heads according to which C is higher than T, which is higher than AGR. Given this ordering, the observed deficit can be captured by the assumption that all functional heads from T upwards can no longer be projected in agrammatic representations (cf. figure 53.4), thus accounting for the retained agreement inflection in their subject. The Tree-Pruning Hypothesis implies that all syntactic operations that rely on the functional projection TP and on functional projections higher up the syntactic tree, such as CP, can no longer be performed by agrammatic subjects.

Accounts such as the Tree-Pruning Hypothesis allow for relating the severity of the syntactic deficit to the type and number of functional heads affected. The more severe the agrammatic deficit, the fewer functional projections can be built up (Friedmann and Grodzinsky 1997) or generated by the syntactic operation merge (Hagiwara 1995). Whereas a mild deficit might only affect the CP layer in phrase-structure representations, a severe deficit might affect the build-up of all functional projections down to the VP layer. The integration of gradeability into accounts such as the Tree-Pruning Hypothesis is certainly advantageous since language deficits come in different levels of severity in different individuals. At present, however, the suggestion to relate the severity of the disorder to the number of functional heads that can be projected by language-impaired individuals suffers from a lack of independent criteria to indisputably establish the severity of the disorder in a given subject.
Without independent criteria to establish the severity of a language deficit, circular arguments are likely to arise: A deficit in producing a language structure X (e.g. V2 movement) is taken as evidence for a severe disorder. A

(4)

     [CP Spec [C′ C [TP Spec [T′ T [AGRP Spec [AGR′ AGR [VP Spec [V′ NP V ]]]]]]]]
     (pruned projections: CP and TP; AGRP and VP remain available)

Fig. 53.4: A pruned syntactic representation according to the Tree-Pruning Hypothesis
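The pruning operation assumed in figure 53.4 can be sketched over the functional spine. The list encoding is an illustrative assumption; the code only makes explicit that everything from the pruning site upwards becomes unavailable:

```python
# Functional projections from highest to lowest, as in figure 53.4: C > T > AGR > V.
SPINE = ["CP", "TP", "AGRP", "VP"]

def prune_at(spine, node):
    """Tree pruning: the named node and every projection above it become
    unavailable; only the projections below it can still be built."""
    return spine[spine.index(node) + 1:]

mild = prune_at(SPINE, "CP")    # mild deficit: only the CP layer is lost
severe = prune_at(SPINE, "TP")  # pruning at T: T and everything above is lost
```

Pruning at TP leaves AGRP and VP, which matches the reported dissociation: agreement inflection (AGR) retained, while tense inflection and CP-related structures such as wh-questions and embeddings are impaired.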

severe disorder is claimed to lead to a pruning of functional nodes in the syntactic tree (e.g. CP). Evidence that the respective functional layer (CP) is indeed pruned is then claimed to come from the observation that structure X (e.g. V2 movement) is impaired. As long as independent criteria are missing, potential counter-evidence to these deficit approaches − such as the data on German verb movement which indicate that the functional head C can still be projected in agrammatic aphasic subjects − might be discarded by claiming that these data come from individuals who are not impaired severely enough to display the suggested deficit. This renders the falsification of gradeable deficit theories quite difficult in practice.

Despite empirical controversies on the issue of whether or not the Tree-Pruning Hypothesis adequately captures the deficits observed in agrammatic Broca’s aphasia across languages (cf. e.g. Lee 2003; Duman, Aygen, Özgirgin, and Bastiaanse 2007; Lee, Milman, and Thompson 2008; Neuhaus and Penke 2008), the Tree-Pruning Hypothesis has also been challenged on theoretical grounds. For one, it has been argued that the suggested order of functional categories, which places the T node above the AGR node, does not hold across languages. In German, for instance, the AGR node is placed above the T node, reflecting affix order in German verbs (Baker’s 1985 Mirror Principle). A pruning of the syntactic tree from the T node upwards would, hence, lead to an inability to project any functional categories in German − a prediction which is not borne out, since agreement inflection (involving the AGR node) and V2 movement (involving the C node) are largely intact in German agrammatic aphasics (Penke 2000; see also Nanousi, Masterson, Druks, and Atkinson 2006 for a similar argument with respect to agrammatism in Greek).
Also, in more recent developments of the Minimalist Program, tense and agreement are no longer treated as two independent functional categories (Chomsky 2000); rather, agreement is an operation checking uninterpretable features of T. Without an independent functional category AGR, however, the dissociation between retained agreement inflection and impaired tense inflection observed by Friedmann and Grodzinsky (1997) can

no longer be captured in the Tree-Pruning account, since agreement, like tense inflection, would involve the pruned T node (cf. Wenzlaff and Clahsen 2004).

Other approaches have considered the CP layer the locus where language deficits strike in agrammatic Broca’s aphasia or Specific Language Impairment. Proposals such as those by Platzack (2001) and Hamann, Penner, and Lindner (1998) capture the observation that in agrammatic aphasics and children with Specific Language Impairment syntactic structures targeting the CP layer, such as wh-questions or subordinate clauses, are often missing in speech production or are incorrectly produced, lacking wh-pronouns or complementisers (cf. section 1.2.1). However, while such constructions might at times be omitted or incorrectly produced, studies have also found that agrammatic aphasics and children with Specific Language Impairment do not lack the ability to produce correct wh-questions and subordinate clauses targeting the CP layer (cf. e.g. Penke 2001; Lee, Milman, and Thompson 2008; Neuhaus and Penke 2008; Rothweiler, Chilla, and Clahsen 2012). These findings are incompatible with the assumption that the CP layer can no longer be projected in disorder syndromes such as agrammatism or Specific Language Impairment (cf. section 3).

2.3. Deficits with the feature specification of functional heads

A different type of deficit theory assumes that while the build-up of syntactic structure via the projection of functional heads is intact, the syntactic features hosted by these functional categories are underspecified. This type of approach goes back to an account of agrammatic language production proposed by Grodzinsky (1984). Most research on language impairments up to the early 1980s was concerned with the deficits observed in English-speaking individuals. Since the omission of inflectional affixes is a core symptom of English speakers with Broca’s aphasia, omissions of inflectional markers were seen as a characteristic sign of this disorder across languages. The assumption that across languages bound and free functional elements (inflectional markers and function words) are omitted in Broca’s aphasia seemed to suggest that the build-up of syntactic structure, specifically the build-up of functional projections, was impaired in agrammatic individuals (see section 2.2.1).

In comparing deficits associated with agrammatic Broca’s aphasia across languages, Grodzinsky (1984), however, observed that omissions of inflectional markers only occur in agrammatic speech production if the remaining word stem is a possible word in the respective language, as is the case in English. In languages such as Hebrew, Russian, or Italian, where the omission of inflectional markers results in stems which cannot surface as possible words in these languages (e.g. Italian *ross instead of rosso), omission errors do not occur. Rather, in these languages inflectional markers are often substituted by other markers in the speech of agrammatic aphasic individuals, resulting in inflectional errors. Such inflectional errors are problematic for deficit accounts which assume that all functional projections are missing in the syntactic representations of agrammatic individuals.
According to Government and Binding theory, the standard syntactic framework at this time, inflectional morphology is hosted by functional heads. Hence, the presence of inflectional markers, even incorrect ones, indicated that syntactic representations contain

functional heads that dominate these inflectional markers. Therefore, Grodzinsky suggested that the build-up of S-structure representations itself is not impaired. However, functional heads are underspecified, since the specific feature values of the syntactic features dominated by functional heads are deleted in agrammatic syntactic representations (cf. figure 53.5). As a consequence, free-standing functional elements such as articles or auxiliaries are omitted in agrammatic language production. In addition, the deletion of specific feature values results in omissions of bound inflectional morphology if the omission of an inflectional marker leads to a possible surface word in the respective language. In languages where omissions are not an option, any feature value, matching or not, is selected for an underspecified syntactic feature and, hence, any possible inflectional form can be produced − the correct one or an incorrect one. Given these considerations, a sentence such as the boy is kissing the girl would consequently be produced as boy kiss girl by English-speaking agrammatic subjects. (5)

     [IP [DP [D α def ] [NP boy ]] [I′ [INFL α person, α number, α tense ] [VP Spec [V′ [V kiss ] [DP [D α def ] [NP girl ]]]]]]

Fig. 53.5: Syntactic S(urface)-structure representation with underspecified functional heads, resulting in the agrammatic sentence boy kiss girl
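Grodzinsky’s cross-linguistic generalisation can be sketched as a small decision procedure. The parameters, and the treatment of substitution as an arbitrary choice among the language’s suffixes, are simplifying assumptions made for illustration:

```python
import random

def spell_out(stem, correct_suffix, all_suffixes, features_specified, stem_is_word):
    """Sketch of Grodzinsky's (1984) underspecification account: with intact
    feature values the correct form is produced; with deleted feature values
    the affix is omitted where the bare stem is a possible word (English-type),
    otherwise some suffix - matching or not - is chosen (Hebrew/Italian-type)."""
    if features_specified:
        return stem + correct_suffix
    if stem_is_word:
        return stem                                # omission: 'kiss' for 'kisses'
    return stem + random.choice(all_suffixes)      # substitution: some inflected form

# English: bare stem is a word, so the marker is omitted.
english = spell_out("kiss", "es", ["es", "ed", "ing"], False, stem_is_word=True)

# Italian: *ross is not a word, so some (possibly wrong) inflected form surfaces.
italian = spell_out("ross", "o", ["o", "a", "i", "e"], False, stem_is_word=False)
```

The sketch reproduces the generalisation in the surrounding text: omission errors arise only where the grammar licenses a bare stem; elsewhere the output is always an inflected form, correct or not.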

Grodzinsky’s finding that omissions of inflectional elements will only occur where licensed by the grammar of a given language constitutes an important generalisation on inflectional deficits in language disorders. However, his proposal that all functional heads are underspecified turned out to be too strong. Consider for instance subject-verb-agreement inflection in German Broca’s aphasics. According to Grodzinsky’s account, the agreement features dominated by the functional head INFL are underspecified. Consequently, agreement affixes should be omitted since this is a morphologically legitimate option in German. In contrast to this prediction, subject-verb inflection is basically intact in German Broca’s aphasics. An analysis of subject-verb-agreement inflection in spontaneous-speech production and in several elicitation experiments conducted with a total of 14 German agrammatic aphasics revealed that all tested individuals were able to systematically mark agreement in German (individual correctness scores ranging from 71 % to 100 %, mean 92 %) (Penke 1998: 192−195; Janssen and Penke 2002).

Nevertheless, Grodzinsky’s suggestion that syntactic deficits might be due to underspecified feature values of functional heads inspired researchers in the field, a reason certainly being the advent of the Minimalist Program (Chomsky 1995, 2000), in which syntactic features play a central role in the derivation of syntactic structure. In the Minimalist Program, functional heads dominate bundles of syntactic features that have to be checked against the inflectional endings of a form generated in the lexicon. Feature checking will eliminate the abstract features dominated by functional heads. To enable feature checking, the inflected form has to move into a checking domain of the functional head hosting the respective syntactic feature. If the syntactic features realised by an inflected form are compatible with the syntactic features dominated by a functional head, the syntactic features headed by this functional category can be checked off and will be eliminated from the syntactic representation. The Minimalist Program draws a distinction between different types of features. Strong features force overt movement, i.e. feature checking has to occur prior to spell-out. Weak features are checked after spell-out. Their movement into the checking domain, therefore, is covert, i.e. not visible on the surface. Another distinction is drawn between interpretable and uninterpretable features. Interpretable features can be interpreted at the logical form level, i.e. they are relevant to the semantic interpretation of an expression. Uninterpretable features, in contrast, cannot be interpreted at this level and need to be checked off before the logical form level, lest the derivation crash at this level.

The issue of whether syntactic impairments can be related to deficits causing the underspecification of specific features while sparing others has inspired research over the last 20 years. Rice and Wexler (1996) adopted this line of explanation to account for the observation that children with Specific Language Impairment display a protracted period where they optionally produce root infinitives instead of main clauses with finite verbs (cf. section 1.2.1).
According to the Tense-Omission Model (Wexler 1994) an optional infinitive results when the child leaves the tense feature of the functional category INFL underspecified. In this case, the verb displays non-finite inflection and does not undergo verb movement. In children with Specific Language Impairment the time period where the tense feature may be left underspecified is said to be extended (Extended Optional Infinitive Hypothesis). Whereas the tense feature may be unspecified in syntactic representations of children with Specific Language Impairment, all other functional categories, features, or syntactic operations are said to be intact. This claim is, however, disputed on the basis of data indicating that subject-verb-agreement inflection is impaired in Specific Language Impairment although the agreement features should be unaffected according to the Extended Optional Infinitive Hypothesis (cf. Clahsen, Bartke, and Göllner 1997, see Wexler, Schütze, and Rice 1998 for a version of the Extended Optional Infinitive Hypothesis encompassing the underspecification of tense and/or agreement features). Based on the observation that verbal agreement inflection is impaired in German children with Specific Language Impairment whereas tense marking is significantly better retained, Clahsen, Bartke, and Göllner (1997) proposed that Specific Language Impairment is due to a deficit particularly affecting the uninterpretable phi-features of verbs, i.e. the verb’s agreement features. A reverse deficit affecting interpretable tense features has been claimed to account for the observation that tense inflection is more impaired than verbal agreement inflection in German agrammatic Broca’s aphasics (Wenzlaff and Clahsen 2004). Note, however, that the language deficits in Specific Language Impairment and agrammatic aphasia are not as selective as suggested above. 
Whereas Clahsen, Bartke, and Göllner (1997) state that only agreement but not tense inflection is impaired in Specific Language Impairment, a number of studies have found tense inflection to be severely affected in English children with Specific Language Impairment (e.g. Rice and
Wexler 1996; van der Lely and Ullman 2001). Conversely, deficits with agreement inflection are typically observed in English-speaking Broca’s aphasics (e.g. Goodglass, Christiansen, and Gallagher 1993; Faroqi-Shah and Thompson 2004), contrary to the claim of Wenzlaff and Clahsen (2004). Rather, the findings suggest that deficits with respect to inflectional morphology vary across languages. In English, an analytic language with a largely reduced inflectional component, inflectional markers − be it tense or agreement markers − tend to be omitted, resulting in inflectional errors. In languages where inflectional systems are more elaborate and express more syntactic information (such as in Finnish, German, Italian, Polish, Spanish, or Hebrew) omission rates of inflectional markers are markedly lower compared to English (e.g. Bates, Friederici, and Wulfeck 1987; Dromi, Leonard, Adam, and Zadunaisky-Ehrlich 1999). Thus, whereas for instance the English 3SG marker -s was found to be omitted in about half of the obligatory contexts for this marker in data from children with Specific Language Impairment (Clahsen, Bartke, and Göllner 1997) and in adult Broca’s aphasics (Goodglass, Christiansen, and Gallagher 1993), omission rates for subject-verb-agreement inflection in German-speaking subjects with these disorders are considerably lower. Clahsen, Bartke, and Göllner (1997) report an omission rate of 20 % for German children with Specific Language Impairment. The five agrammatic aphasic subjects investigated by Penke (1998: 191) omit agreement affixes in only 15 out of 914 cases (1.6 %) in spontaneous-speech production. Such variations across languages suggest that language-specific factors related to the complexity and importance of inflectional systems critically affect which inflectional errors will occur in language-impaired speakers (see Penke 2006, 2008 for discussion). 
Note that language-specific variations in inflectional errors are problematic for deficit theories that posit selective deficits with interpretable or uninterpretable features: the categorization of a feature as interpretable or uninterpretable holds across all languages. Hence, similar deficits with tense or agreement inflection should be observed across languages. At present, the evidence for deficit accounts which suggest that the impairments observed in language disorders such as Specific Language Impairment and agrammatism are selective to specific features while sparing others is, therefore, inconclusive.

2.4. Deficits with movement

As presented in section 1.2.2, language-impaired individuals often show a better understanding of sentences with a canonical SVO order of sentence constituents (the girl kissed the boy) than of sentences with a non-canonical word order such as object clefts (it is the boy who the girl kissed), object relatives (the boy who the girl kissed …), passives (the boy was kissed by the girl), or object topicalisations. What these latter sentence structures have in common is that the object has moved out of its base-generated position and now precedes the subject. Grodzinsky has, therefore, suggested that it is the movement of the object that causes comprehension deficits in Broca's aphasia (Grodzinsky 1984, 2000). Specifically, he proposed that the deficits in language comprehension are due to the fact that movement traces in syntactically derived sentences, such as passives or object-extracted questions, are deleted from syntactic representations. In the framework of Government and Binding (Chomsky 1981), S-structure is derived from D-structure by movement transformations. Every moved constituent leaves behind
a trace. The moved constituent and its trace are connected via a syntactic chain. In the case of moved noun phrases, the most important syntactic function of this chain is to transmit the theta role assigned by the verb to its arguments within VP to the moved constituent. Consider, for instance, a passive clause as in (6), where the object NP has to move out of VP to the subject position in SpecTP to receive case. It leaves behind a co-indexed trace in its base position as internal argument of the verb kiss. The theta role THEME is assigned by the verb to this trace. Via the syntactic chain connecting the moved NP with its trace, this theta role is indirectly assigned to the moved NP, the derived subject of the sentence (cf. Haegeman 1991). The theta role AGENT is assigned by the preposition by. According to Grodzinsky's Trace-Deletion Hypothesis, movement traces are, however, deleted from the syntactic representations of Broca's aphasics. Consequently, the theta role THEME cannot be assigned to the moved object NP via the disrupted chain. Since for interpretation every NP must have a theta role, the NP is now assigned a role by a non-linguistic linear default strategy. This default strategy assigns the AGENT role to the clause-initial NP. Since the preposition by also directly assigns the role AGENT to its complement NP, the thematic representation now contains two AGENT roles to choose from in interpretation: one assigned via the default strategy, the other directly assigned by the preposition. As a consequence, in a sentence-picture-matching task (cf. section 1.2.2) the comprehension of such sentences will drop to chance level.

(6) unimpaired representation of passives
    [TP The boy_i was [VP kissed t_i by the girl]]
    (THEME assigned to the trace t_i and transmitted to the boy via the chain; AGENT assigned by the preposition by)

    agrammatic representation
    [TP The boy was [VP kissed [by the girl]]]
    (AGENT assigned to the boy by the default strategy; AGENT assigned by the preposition by)

For active sentences such as (7), in contrast, no problem arises in agrammatic comprehension since both theta roles can be directly assigned: the THEME role is assigned by the verb to its internal argument, and the AGENT role is directly assigned to the subject base-generated in SpecTP.

(7) unimpaired/agrammatic representation of active sentences
    [TP The girl [VP kissed the boy]]
    (AGENT assigned to the girl; THEME assigned to the boy)

Note, as a side comment, that the dissociation between impaired comprehension of passives and spared comprehension of actives observable in agrammatic aphasia led Grodzinsky, Pierce, and Marakovitz (1991) to argue for a syntactic approach to deriving passives, as proposed in the framework of Government and Binding, and to claim that frameworks such as Lexical Functional Grammar or Generalised Phrase Structure Grammar, which derive passives lexically, lack breakdown compatibility as they
are not able to account for this circumscribed and selective deficit in agrammatic comprehension. The Trace-Deletion Hypothesis has probably been the most influential deficit theory in the field to date. It has inspired a wealth of research aimed at testing and improving this account. Also, deficit accounts related to the Trace-Deletion Hypothesis − which is explicitly restricted to capturing only agrammatic language comprehension (cf. Grodzinsky 2000) − have been adopted to capture similar problems in individuals with Wernicke's aphasia (Grodzinsky and Finkel 1998) and in children with Specific Language Impairment (Friedmann and Novogrodsky 2004) or hearing impairments (Friedmann and Szterman 2006). A strength of this theory certainly lies in the fact that it has been flexible enough to adapt both to new findings on agrammatic comprehension and to new developments in syntactic theory. Originally, the Trace-Deletion account stated that all types of traces, including traces of moved heads, were deleted in agrammatic representations. Based on evidence that head chains are intact in agrammatic comprehension, the account was modified in such a way that only traces in theta positions are said to be deleted in agrammatic representations (Grodzinsky 1995). A further modification was initiated by Hickok and Avrutin (1995), who tested the comprehension of wh-questions in Broca's aphasics. The Trace-Deletion Hypothesis predicts that the comprehension of wh-object questions (which boy/who did the girl kiss?) should be at chance level in agrammatic aphasic subjects. The moved object-wh-phrase cannot receive a theta role via the disrupted chain and receives the AGENT role via the default strategy. Since the subject NP is also assigned the AGENT role (by the verb), the conflict between two AGENT roles should lead to chance performance in interpretation.
Hickok and Avrutin, however, observed that their agrammatic subjects showed this behavioural pattern only for which N object questions, but not for who object questions. This led to a revision of the Trace-Deletion Hypothesis according to which a default theta role is only assigned to a referential NP, i.e. an NP which is D(iscourse)-linked, as is the case for which N phrases but not for who phrases, which are said to be quantifiers and thus not D-linked (Grodzinsky 1995). The Trace-Deletion Hypothesis has not only had to adapt to new types of data but has also been affected by developments in syntactic theory, most notably by the VP-Internal-Subject Hypothesis (Koopman and Sportiche 1988) according to which the subject NP is not base-generated in SpecTP but in SpecVP. From SpecVP the subject moves to SpecTP to check its features, leaving a trace inside SpecVP. Integrating this assumption into the Trace-Deletion Hypothesis has turned out to be quite challenging. Consider first a simple active sentence such as (8). In the agrammatic representation of this sentence the AGENT role can no longer be assigned to the subject NP moved out of VP since the trace is deleted. Nevertheless, comprehension of such sentences is not affected in Broca's aphasics because the subject NP receives the AGENT role via the default strategy (cf. 8).

(8) unimpaired representation of actives
    [TP The girl_i [VP t_i kissed the boy]]
    (AGENT assigned to the trace t_i and transmitted to the girl via the chain; THEME assigned to the boy)

    agrammatic representation
    [TP The girl [VP kissed the boy]]
    (AGENT assigned to the girl by the default strategy; THEME assigned to the boy)

For other sentence constructions, however, the integration of the VP-Internal-Subject Hypothesis into the Trace-Deletion Hypothesis has led to predictions that are not borne out by the agrammatic comprehension data. Consider as an example wh-object questions such as (9). In a wh-object question neither the object NP nor the subject NP receives a theta role directly from the verb because both phrases are moved out of VP and their traces are deleted. The default strategy would then assign the AGENT role to the clause-initial NP, the object wh-pronoun. In consequence, wh-object questions should be interpreted as wh-subject questions by agrammatic individuals and their performance in understanding such sentences should, hence, be worse than chance, a prediction which is, however, not borne out by the data (cf. Hickok and Avrutin 1995; see Grodzinsky 2000: 59 for suggestions on how to solve this issue within the Trace-Deletion Hypothesis). The challenge that the integration of the VP-Internal-Subject Hypothesis poses to the Trace-Deletion Hypothesis has led researchers to suggest that the dissociation between impaired comprehension of sentences including object traces and spared comprehension of sentences with subject traces in agrammatic aphasia provides evidence that the VP-Internal-Subject Hypothesis might not be valid (Schaeffer 2000).

(9) agrammatic representation
    [CP who/which boy did [TP the girl [VP kiss]]]
    (AGENT assigned to the clause-initial wh-phrase by the default strategy)

A different theoretically based challenge to the Trace-Deletion Hypothesis is posed by the fact that the recent framework of the Minimalist Program dispenses with the movement operations that Grodzinsky explicitly sees as the basis of agrammatic comprehension problems (cf. Edwards and Lightfoot 2000).

2.5. Deficits involving structural dependencies

Whereas Grodzinsky explicitly regards deficits in movement operations as causing the impaired comprehension of sentences with moved object NPs, other researchers have suggested that these comprehension problems are due to a deficit in establishing a structural relationship between the moved constituent and its trace (e.g. Mauner, Fromkin, and Cornell 1993). According to a more recent proposal that captures this idea within the Relativized Minimality approach to locality in syntax (Rizzi 2004), the establishment of a chain fails if an intervening NP blocks chain formation between the trace and the moved constituent (Garaffa and Grillo 2008). In sentences where the object NP has moved over the subject NP, the subject NP might act as an intervener blocking chain formation when object and subject display features of the same feature class. According to Garaffa and Grillo (2008), sameness of the features associated with subject and object NP is likely to come about in agrammatic representations because agrammatic aphasics experience problems in representing the full feature content of syntactic elements. In sentences with an initial subject, no comprehension problems occur as the subject NP does not move over the object NP. The object NP is, hence, not a possible intervener blocking chain formation. A deficit in establishing syntactic structural relationships should also affect other syntactic phenomena where a similar structural dependency holds. A case in point is the structural relation that holds between referentially dependent NPs and their antecedents. Van der Lely and colleagues were the first to investigate the comprehension of passives and referentially dependent NPs in children with Specific Language Impairment (van der Lely 1996; van der Lely and Stollwerck 1997). They found that children with Specific Language Impairment were impaired in interpreting passives and referentially dependent NPs and suggested that these children suffer from a deficit with non-local structure-dependent relations that, for instance, impairs the computation of the binding domain. Van der Lely's Computational Grammatical Complexity Hypothesis (van der Lely 2005) posits a rather broad grammatical deficit that is not precisely couched in theoretical terms and might encompass all sorts of hierarchically organized structures in syntax, morphology, and phonology. It has therefore been criticized for being too broad to capture more fine-grained deficits in Specific Language Impairment affecting, for instance, agreement but not tense inflection (cf. Clahsen 2008). Interestingly, the deficits obtained for binding relationships might turn out to be specific to certain language disorders. Agrammatic Broca's aphasics and children with Specific Language Impairment display particular problems in interpreting pronouns and often accept a reading in which the pronoun is anaphorically linked to an antecedent.
Thus, they might accept a sentence such as Mowgli is tickling him as a description of a picture depicting Mowgli tickling himself (Van der Lely and Stollwerck 1997; Grodzinsky, Wexler, Chien, Marakovitz, and Solomon 1993). In doing so, these language-impaired individuals mirror unimpaired children, who also take considerably longer to acquire the correct interpretation of non-reflexive pronouns compared to reflexive pronouns (cf. Guasti 2002: chapter 8 for overview). A different deficit has been reported in two investigations on the comprehension of passives and anaphoric elements in individuals with Down's syndrome (Ring and Clahsen 2005; Perovic 2006). These studies found a marked impairment in interpreting reflexive pronouns and passive sentences. In contrast, the interpretation of pronouns and active sentences was unimpaired. In an attempt to provide a unitary account for the deficits in interpreting passives and reflexive pronouns, Ring and Clahsen (2005) have suggested that individuals with Down's syndrome suffer from a specific syntactic deficit leading to an inability to construct A(rgument)-chains. An A-chain requires that the two elements connected by the chain (the moved object NP and its trace, or the reflexive and its antecedent) are in the same local structural configuration, that they share the same syntactic features via coindexation, and that the reflexive pronoun or the trace is c-commanded by the antecedent or the moved object NP, respectively (Chomsky 1995; Reuland 2001). A deficit in forming A-chains means that a reflexive pronoun cannot be syntactically bound by its local antecedent. Likewise, a moved object NP cannot be related to its trace, resulting in defective theta-role assignment to the moved NP. Both will lead to comprehension problems. Ring and Clahsen also investigated a group of children with Williams syndrome who were matched for age and IQ with the Down's syndrome subjects.
Despite a similar impairment in intellectual development, the subjects with Williams syndrome
did not show a deficit in the comprehension of passives or referentially dependent NPs such as reflexive pronouns. Ring and Clahsen therefore suggest that the comprehension problems observed in individuals with Down's syndrome are specific to this syndrome and cannot be explained by their impaired intellectual development. The claim that individuals with Down's syndrome suffer from a specific syntactic deficit in constructing A-chains is also interpreted as being informative for theoretical accounts of binding. Thus, Perovic (2006) argues that the data support a fractionation of binding into a syntactic component which governs the interpretation of bound anaphors and which is impaired in Down's syndrome and an extra-syntactic component which regulates the coreferential interpretation of pronouns and is intact in individuals with Down's syndrome. The data and interpretations presented by Ring and Clahsen (2005) and Perovic (2006) are certainly fascinating. Note, however, that much hinges on whether or not the adopted syntactic interpretation of binding relations proves valid. Also, Ring and Clahsen admit that their interpretation does not capture every error pattern observed in their data. Thus, younger children with Down's syndrome also committed significantly more reversal errors in interpreting active sentences than unimpaired control children. As active sentences are said not to involve A-chains, this error pattern is unexplained by Ring and Clahsen's account. Finally, Ring and Clahsen (2005) and Perovic (2006) link the deficit in interpreting reflexive pronouns to the deficit in interpreting passive sentences and provide a syntactic explanation that relates both comprehension problems to an inability to construct A-chains. Note, however, that while deficits in interpreting referentially dependent NPs seem to vary between different deficit syndromes, such variation is not observed with respect to the comprehension of passives.
Independent of the particular disorder syndrome, language-impaired subjects display similar problems in understanding passive clauses (cf. section 1.2.2). This observation calls for an explanation. Are the similar problems in passive comprehension caused by different language deficits in different deficit syndromes? And if so, what are the deficits that lead to similar performance in passive comprehension but to different performance in interpreting referentially dependent NPs? Further research is needed to evaluate these issues.

3. Problems in accounting for syntactic deficits

As these short sketches of the syntactic-deficit theories advocated in the field indicate, the controversies about how to capture and explain language impairments that seem to affect syntactic representations are still ongoing. In these controversies, syntactic-deficit theories have been challenged both by conflicting empirical data and by developments in syntactic theory that force deficit approaches to be adapted to new evidence as well as to new theoretical frameworks − a venture that often turns out to be quite intricate. Besides the challenges posed by new data and new theoretical developments, syntactic-deficit accounts are also subject to more general discussions concerning whether the observed deficits are indeed due to impaired syntactic representations. The accounts presented in section 2 above propose a deficit in syntactic representations, and thus a deficit in grammatical competence, that underlies impaired language behaviour. Such a deficit in grammatical competence should affect all aspects of language performance in parallel (Weigl and Bierwisch 1970). A defective syntactic element, structure, or operation should no longer be produced, comprehended, or judged correctly by an affected individual. A crucial issue for such accounts is their all-or-none flavour. A specific syntactic structure or function is either intact or impaired, and thus the language-impaired speaker either is or is not able to produce grammatical structures that involve the respective syntactic representation. Note, however, that such all-or-none behaviour is rarely observed in individuals. Grammatical competence is only one factor determining performance; other factors, such as processing limitations and social, pragmatic, or discourse factors, affect it too. Indeed, it has been shown that syntactic deficits observed, for instance, in agrammatic Broca's aphasics deteriorate or ameliorate depending on task demands or time constraints that lead to a lowering or raising of the processing load associated with the task (Kolk and Heeschen 1992). Tasks which minimize processing load, such as cloze tasks where the subject has to produce only the structure or form of interest whereas the sentential context is given, often lead to better performance than less constrained tasks (e.g. Kolk and Heeschen 1992). For an illustration of this point, consider data from an investigation of the production of wh-questions in German agrammatic Broca's aphasics (Neuhaus and Penke 2008). Production of wh-questions was tested by an elicitation task in which subjects had to transform a given main clause into a wh-question, and by a sentence-repetition task in which subjects were asked to repeat wh-questions as accurately as possible. The repetition of a given syntactic structure is less demanding than the construction of this structure by the individual him- or herself.
A comparison of the correctness scores obtained in these two experiments showed that the production of wh-questions resulted in better performance in the less demanding repetition task (mean correctness score 78.6 %) than in the more demanding elicitation task (mean correctness score 66.1 %), a finding that illustrates the influence of task demands on language behaviour in language-impaired individuals. Other syntax-external factors that influence the performance of language-impaired subjects relate to the testing situation, i.e. to factors such as familiarity with the investigator or the formality of the testing situation (e.g. Heeschen and Schegloff 2003). All these factors contribute to the observation that language-impaired individuals will only rarely achieve a correctness score of 100 % for a tested construction in an experimental investigation. What, however, is to be concluded from correctness scores of 80 %, 60 %, 40 %, or 20 %? That is, when is the correctness score low enough to conclude that a syntactic representation is defective and when is it high enough to conclude that syntactic representations are basically unimpaired? Consider again the data on wh-question production in German agrammatic Broca's aphasics (Neuhaus and Penke 2008). According to the Tree-Pruning Hypothesis (Friedmann and Grodzinsky 1997), individuals suffering from agrammatic Broca's aphasia should no longer succeed in producing grammatical wh-questions since the CP layer is pruned from syntactic representations (see section 2.2.2). In an experiment eliciting wh-questions, the seven tested aphasic subjects succeeded in 66.1 % of the contexts in producing a correct wh-question that involved the projection of the CP layer, a score that was significantly lower than the 97.5 % obtained by unimpaired control subjects (Neuhaus and Penke 2008).
The significant difference between aphasic subjects and unimpaired control subjects indicates that the aphasic subjects suffer from an impairment in producing wh-questions. But is this impairment due to an inability to project CP as claimed by the Tree-Pruning Hypothesis? If this were the case, how
then can we account for the observation that the aphasic subjects succeeded in 66.1 % of the contexts in producing wh-questions that require the projection of the CP layer? A related problem comes from the variability in language behaviour a group of individuals suffering from the same language disorder will typically display. In the above-mentioned investigation of wh-question production abilities in German Broca's aphasics (Neuhaus and Penke 2008), individual correctness scores of the seven tested aphasic subjects ranged from 27.8 % to 96.3 %. The scores for the unimpaired control subjects in this task ranged between 88.3 % and 100 %. Thus, rather than a clear boundary between impaired and spared performance, there often is a continuum between the performance of unimpaired and language-impaired individuals. This continuum between impaired and unimpaired performance and the variability in performance that typically holds in a group of language-impaired individuals render any strong claims on a representational deficit (such as the claim that agrammatic syntactic representations are pruned) problematic. As sketched in section 2.2.2, one solution to this problem of variability is to integrate the idea that language disorders are gradeable and come in different levels of severity. This, however, requires independent criteria for establishing the severity of a language disorder, in order to avoid circularity in the argumentation.
The problems related to the identification of clear-cut behavioural patterns, the gradeability of language deficits, and the variability of language behaviour between different individuals suffering from the same language impairment as well as within a single individual (e.g. due to task demands) have led to a second group of approaches accounting for language deficits. In contrast to representational syntactic deficit accounts, processing accounts state that syntactic representations are not impaired in language disorders and can still be put to use. According to these accounts, the observed language problems result from a deficit in language processing caused by limitations of the processing capacity. Such processing limitations may be domain-specific, i.e. restricted to the processing of language, or domain-general, affecting non-verbal processing alike. Processing accounts have suggested, for instance, that the language behaviour observed in language-impaired subjects is caused by reductions in memory components such as working memory (e.g. Just and Carpenter 1992) or verbal short-term memory (e.g. Gathercole and Baddeley 1990). A consequence of such a deficit might be that language input cannot be kept in working memory long enough to extract or check morpho-syntactic information, especially in complex sentences where, for instance, deviations from canonical ordering occur. Other researchers have suggested that processing limitations result from a restricted amount of energy available to the subject for a specific task (e.g. Lapointe 1985; Bates and Wulfeck 1989; Avrutin 2000) or from desynchronisation in the temporal course of language processing (e.g. Kolk 1995). In the former case, it might be too costly for a language-impaired individual to compute specific syntactic constructions or relations.
In the latter case, he/she might not be able to compute syntactic structure or structural relationships in a given temporal window, either because the computation is too slow or its results decay too quickly.

Limitations of processing capacity have been held responsible for the language impairments found in a wide variety of language deficits such as agrammatic Broca’s aphasia (e.g. Bates and Wulfeck 1989; Kolk 1995; Caplan and Waters 1995), Wernicke’s aphasia (e.g. Lavorel 1982), Specific Language Impairment (cf. Leonard, Weismer, Miller, Francis, Tomblin, and Kail 2007 for overview), Down’s syndrome (e.g. Chapman, Hesketh, and Kistler 2002), and early unilateral focal brain lesions in children (Bates and Roe 2001). A critical issue with respect to processing accounts is whether the proposed limitations in domain-general processing capacities cause the observed language impairments or whether they are a co-symptom that accompanies syntactic deficits without causing them. A deficit in relating moved NPs to their traces, for example, might be more pronounced if more words intervene between the moved constituent and the trace. In this case, a limitation in working memory might aggravate the syntactic deficit, without, however, being the cause for the underlying problem in establishing a chain between the moved constituent and its trace. Similarly, a limitation in establishing agreement relationships might be aggravated by defective auditory processing capacities. While the auditory processing impairment might particularly affect the perception of inflectional affixes that are often realised as unstressed syllables and might, hence, influence an individual’s behaviour, it might not be the cause for the deficit in establishing agreement relations. Despite this issue, processing accounts are well suited to capture the continuum in performance that can be observed between impaired and unimpaired subjects and within groups of individuals suffering from the same language impairment. The concept that language performance, and thus also language deficits, are gradeable comes naturally to processing accounts. 
Processing limitations can vary depending on the severity of the underlying disorder that affects the processing of syntactic structures or structural relationships. The ability to implement intact syntactic knowledge in language performance thus deteriorates as processing costs grow or processing capacities decrease, and improves as the processing load is reduced. Processing-deficit accounts are, hence, also able to account for language behaviour that is neither completely impaired nor completely intact. Moreover, they provide an explanation for the observation that the grammatical deficits observed in language-impaired speakers deteriorate or ameliorate depending on task demands or time constraints that lead to a lowering or raising of the processing load (Kolk and Heeschen 1992) − a finding that is difficult to account for in representational deficit theories which claim that syntactic representations are either intact or impaired. The long-standing debate between representational deficit accounts and processing-deficit accounts is one of the central debates in the field of language disorders and is far from settled. However, an integration of these two types of accounts might well turn out to be a fruitful third path in research on language disorders. Analyses of the affected behaviour with respect to the syntactic structures, elements, or operations involved inform us where to expect problems in language-impaired speakers. For example, syntactic structures that involve the projection of the CP layer, the interpretation of particular features, the processing of specific binding relations, or the interpretation of moved noun phrases which reverse canonical ordering of arguments are likely to cause problems for language-impaired individuals. Processing-deficit accounts help us predict when such a deficit is likely to appear, namely when processing load increases, for example, because more syntactic operations have to be computed or the task is more
taxing and exceeds the processing capacities of the individual subject. In recent years, a number of approaches have been proposed that aim to pinpoint which syntactic operations are particularly prone to be affected by limitations in processing capacities, thus integrating the where- and when-perspectives (e.g. Garraffa and Grillo 2008 on the interpretation of non-canonical sentences; Pinango and Burkhardt 2001 on the interpretation of pronouns; Jakubowicz 2011 on wh-questions). An interesting finding, suggestive of the proposed integration between representational deficit accounts (where-accounts) and processing-deficit accounts (when-accounts), is the observation that syntax-related deficits are largely not specific to a particular deficit syndrome. Rather than finding deficits that are characteristic of only one particular syndrome, deficit symptoms such as omissions of free functional elements, problems with bound inflectional morphology, reduction of syntactic complexity, and problems in interpreting sentences with non-canonical argument order or pronouns are found in a wide variety of acquired and developmental deficit syndromes (cf. section 1.2). These symptoms seem to mark critical areas in the language system that are typically affected when an individual suffers from a limitation of her/his language ability. Syntactic deficit theories serve to inform us about the factors that make these critical structures, elements, or operations vulnerable to language impairments. The observation that the same critical entities are affected and similar deficit symptoms occur across different syndromes suggests that processing deficits target these critical areas in the language system, resulting in similar deficit symptoms across syndromes.

4. Summary and outlook

The investigation of syntactic disorders is a fascinating topic where broad theoretical issues concerning the nature of the human language capacity and specific theoretical issues related to syntactic representations interact with empirical research conducted with a challenging subject group. A focus of research on syntactic disorders is to find out if and how syntactic representations are defective in language-impaired speakers. As indicated by the overview in section 2, syntactic theory and syntactic deficit approaches have coevolved over the last 40 years. New developments in syntactic theory, such as the distinction between S-structure and D-structure, the Split-INFL Hypothesis, or the role ascribed to features, have prompted new developments in accounting for language disorders. To date, researchers active in the field have not agreed on how to capture and explain the syntax-related deficits of any single deficit syndrome. However, the controversies that have arisen in the field have undeniably advanced our knowledge of the language impairments observed in particular deficit syndromes. With new developments in syntactic theory and subsequent new suggestions on how to capture syntax-related language impairments, new facets of syntactic knowledge have come under scrutiny in language-impaired individuals. Consequently, a growing body of data has been collected over the years on which aspects of syntax are impaired or retained in particular deficit syndromes. As is typical in science, each new piece of evidence has given rise to new and more specific questions to investigate and answer, in this way accumulating our knowledge of language deficits. Whatever approach to syntactic deficits is taken − a representational approach, a processing approach, or an integrative approach − progress in the research on syntactic

1862

VIII. The Cognitive Perspective

deficits will critically depend on syntactic theories. For one, the characterisation of affected and unaffected constructions as natural classes within a theory allows for testable predictions regarding which constructions or operations should be affected in a particular deficit syndrome. Moreover, syntactic theories provide a means to determine the complexity of a construction or operation. Following the intuition of most researchers in the field that more complex constructions or operations are more likely to be affected in language disorders than less complex ones, a theoretically motivated measure of complexity is helpful in guiding research on where to look for syntactic deficits in language impairments. While syntactic theories and the advancements within theoretical approaches have been used to capture and explain language impairments, data from language-impaired speakers have, on the other hand, only rarely been taken as evidence in advancing syntactic theories. However, this aspect of the investigation of language deficits certainly merits some research effort. Insights into our language faculty can only be gained through the analysis of performance data reflecting the application of this knowledge. In our aim to elucidate the nature of the human language faculty, we should consider all sorts of evidence − including data that come from the investigation of language disorders. The data presented by theoretical linguists in advancing syntactic theories are not a privileged type of data in this respect; they constitute just another type of performance data that can be used to investigate our language faculty (cf. Penke and Rosenbach 2007). The value of data from language-impaired individuals has, for instance, been demonstrated in research on inflectional morphology.
Here the selective vulnerability of regular and irregular inflectional morphology in specific language syndromes has been taken as evidence for a qualitatively distinct representation of regular and irregular inflection in the human mind/brain (cf. Pinker 1999; Penke 2006 for an overview). Also, the finding that regular and irregular inflection often dissociate in language disorders can be used as a diagnostic for which inflectional markers are regular or irregular in a particular inflectional system (cf. Penke and Krause 2002; Penke 2008). Research that not only takes syntactic theory as the basis for explaining language deficits but also aims to advance syntactic theory by taking into account error data or breakdown patterns observed in language-impaired individuals might prove valuable in investigating the nature of syntactic representations.

Acknowledgement

I am very grateful to Liliane Haegeman, Monika Rothweiler, Eva Wimmer and an anonymous reviewer for helpful comments and suggestions.

5. Appendix

Broca’s aphasia

Broca’s aphasia is an acquired language disorder that is typically caused by strokes affecting anterior parts of the left hemisphere. The spontaneous-speech production of
Broca’s aphasics displays a symptom called Agrammatism. Agrammatic speech is characterised by the following signs: (i) omission of free function words, (ii) problems with bound inflectional morphology, leading to omissions and/or substitutions, (iii) reduction of sentence length, leading to a preponderance of one- or two-word utterances in spontaneous speech, (iv) occurrence of root-clause infinitives, and (v) reduced syntactic complexity, resulting in a preponderance of canonical sentence structures and a lack of more complex structures such as subordinate clauses, wh-questions, or utterances with a topicalised constituent. In language comprehension, most Broca’s aphasics display a better understanding of sentences with canonical word order than of sentences with non-canonical word order, where the object has moved out of its base-generated position and precedes the subject. Major controversies concern whether or not Agrammatism is due to a deficit in syntactic competence and, if so, how to account for this deficit within a theoretical framework (see sections 2 and 3). Suggested reading: Menn and Obler (eds.) (1990b); Penke (1998); Grodzinsky (2000); Avrutin (2001); Bastiaanse and Thompson (eds.) (2012).

Wernicke’s aphasia

The language disorder in Wernicke’s aphasia typically results from strokes in left temporo-parietal brain regions. One of the core symptoms of this type of aphasia is so-called Paragrammatism. This symptom describes speech production which, although fluent, is characterised by repetitions of words and phrases, aborted sentences, word order errors, and sentence blends. Moreover, Wernicke’s aphasics have severely impaired language-comprehension abilities and word-retrieval problems, indicated by semantic and phonological paraphasias and neologisms in naming and spontaneous speech. It has been suggested that this lexical-semantic deficit is the basis of Paragrammatism, while syntax per se is intact. Wernicke’s aphasia as a purely lexical disorder sparing syntax has traditionally been claimed to be the mirror image of agrammatic Broca’s aphasia, which has been postulated to be a syntactic disorder with an intact lexicon. The double dissociation between these two aphasic language disorders, which are typically associated with lesions in different brain areas, has been taken as supporting the view that grammar and lexicon constitute autonomously operating modules of the language faculty that can be independently impaired (cf. section 1.1). More recent studies with Wernicke’s aphasics have, however, challenged the traditional view of the language deficits associated with Wernicke’s aphasia. Thus, studies have provided evidence that complex syntactic structures such as wh-questions or passives are impaired in Wernicke’s aphasics in both language production and comprehension (cf. Edwards 2005; Wimmer 2010). Moreover, the observed deficits and performance patterns often resemble the language deficits observed in Broca’s aphasia. For instance, subject wh-questions are better comprehended and produced than object wh-questions by both German Broca’s and Wernicke’s aphasics (cf. Wimmer 2010).
The structurally based problems in Wernicke’s aphasia and the observed similarities to Broca’s aphasia cast doubt on the double dissociation postulated for the two aphasic syndromes. To date, the
debate whether the grammatical difficulties of Wernicke’s aphasics are due to a lexical-semantic deficit, to an additional syntactic deficit, or to limitations in language-processing capacity (cf. section 3) remains unresolved. Suggested reading: Edwards (2005); Wimmer (2010); Penke (2013a).

Specific Language Impairment

Specific Language Impairment (SLI) is a cover term for delays or disorders in the normal acquisition of language that cannot be attributed to obvious neurological, cognitive, or psycho-emotional deficits or to significant hearing impairments. Consequently, SLI has proven to be a heterogeneous disorder and has been divided into syndrome subgroups. Relevant for the issue of syntactic deficits is the subgroup called Grammatical SLI, which is characterised by problems in the acquisition of morphosyntax (Van der Lely 1996, 2005). Core symptoms of Grammatical SLI include impairments with inflectional morphology and difficulties with producing or interpreting complex sentences, such as subordinate clauses, relative clauses, or passives. Whether the language impairments in Grammatical SLI are due to a representational deficit affecting syntactic structures or relationships and, if so, how to account for them (cf. section 2), or whether the observed impairments are due to a processing deficit that affects language-processing capacities (cf. Leonard 1998) is a major issue of controversy in the field. A related debate concerns the question whether the language deficits of children with SLI are language-specific, or whether they can be attributed to domain-unspecific cognitive impairments, such as deficits in rapid auditory discrimination, in symbolic play, in nonverbal attention, in spatial imagery, or in executive functions (cf. Tallal 1990; Thal, Tobias, and Morrison 1991; Johnston 1994; Townsend, Wulfeck, Nichols, and Koch 1995; Im-Bolter, Johnson, and Pascual-Leone 2006). Note, however, that the crucial issue here is not whether children with SLI have any non-linguistic cognitive deficits, but whether such deficits can explain the observed language impairments. Suggested reading: Leonard (1998); Levy and Kavé (1999); Levy and Schaeffer (eds.) (2002); Van der Lely (2005); Schulz and Friedmann (eds.) (2011).

Williams syndrome

Williams syndrome is a rare neurodevelopmental disorder of genetic origin that is associated with physical (e.g. renal and cardiovascular) anomalies, a global cognitive impairment, and selective deficits in non-linguistic cognitive domains such as visuospatial constructive cognition, where affected individuals display marked problems in constructing or drawing a coherent Gestalt. One of the central controversies in the investigation of Williams syndrome concerns the question whether or not there is a dissociation between grammatical knowledge and other cognitive skills in individuals with this syndrome. Children with Williams syndrome have been shown to display good language skills with respect to regular inflectional morphology and the production and comprehension of complex sentence structures (e.g. subordinate clauses, passive clauses, tag questions, anaphors) despite severe mental retardation (Bellugi, Lichtenberger, Jones, and Lai 2000; Clahsen and Almazan 1998). The purported dissociation between intact grammar skills and impaired general cognitive capacities observed in this disorder has been taken as evidence that the human language faculty is autonomous from other cognitive domains and that the properties of the language faculty cannot be reduced to the operation of domain-general principles (see Bellugi, Lichtenberger, Jones, and Lai 2000). This view has, however, been challenged by researchers who dispute the autonomy of language from general cognitive capacities. These researchers have tried to show that grammatical abilities are not spared in affected children and, hence, are not dissociated from general cognitive capacities (Karmiloff-Smith 1998). Moreover, it has been suggested that children with Williams syndrome undergo a different neural and cognitive development, resulting in the construction of a qualitatively different grammatical system in the brain. Note that if the grammatical system of individuals with Williams syndrome were indeed qualitatively different from the normal system, nothing could be concluded on the basis of data from Williams syndrome about the normal language faculty, for instance about its autonomy from general cognition (Karmiloff-Smith 1998). Suggested reading: Bellugi and St. George (eds.) (2000); Bartke and Siegmueller (eds.) (2004).

Down’s syndrome

Down’s syndrome is a congenital neurodevelopmental disorder, caused by a third copy of chromosome 21, that leads to moderate mental retardation and characteristic physiological traits. The language deficits of individuals with Down’s syndrome display characteristics similar to those observed in Specific Language Impairment. In speech production, free grammatical morphemes are often omitted (Eadie, Fey, Douglas, and Parsons 2002) and the production of bound grammatical morphemes is impaired (Laws and Bishop 2003). In language comprehension, deficits arise in the comprehension of passive sentences and in establishing grammatical binding relations between anaphors and their referents (Ring and Clahsen 2005; Perovic 2006). A critical issue regarding the language deficits in Down’s syndrome is whether the general limitation of cognitive capacities in individuals with Down’s syndrome is also responsible for the observed language deficits. If this were the case, claims regarding the modularity and autonomy of language capacities from general cognitive capacities (cf. Fodor 1983) would be severely threatened. To investigate this issue, the language capacities of individuals with Down’s syndrome are compared to those of individuals who are at a similar level of nonverbal cognitive development without suffering from Down’s syndrome. These can be unimpaired younger children or children with mental retardation due to a different neurodevelopmental disorder, such as individuals with Williams syndrome. A language performance that differs quantitatively and/or qualitatively from the performance of mental-age-matched controls is seen as indicative of a genuine language deficit, because the deficit surpasses a level that could be accounted for by the level of cognitive development. In particular, the differences in capacities and deficits observed between individuals with Down’s syndrome and individuals with Williams syndrome indicate that language capacities are independent of general cognitive capacities, since a comparable general cognitive impairment affects language differently in these two syndromes. Suggested reading: Bellugi, Lichtenberger, Jones, and Lai (2000); Roberts, Chapman, and Warren (2008).

6. References (selected)

Avrutin, Sergey 2000 Comprehension of discourse-linked and non-discourse-linked questions by children and Broca’s aphasics. In: Yosef Grodzinsky, Lewis Shapiro, and David Swinney (eds.), Language and the Brain: Representation and Processing, 295−313. San Diego: Academic Press.
Avrutin, Sergey 2001 Linguistics and Agrammatism. GLOT International 5(3).
Baker, Mark 1985 The mirror principle and morphosyntactic explanation. Linguistic Inquiry 16: 373−416.
Bartke, Susanne, and Julia Siegmueller (eds.) 2004 Williams Syndrome across Languages. Amsterdam: Benjamins.
Bartolucci, Giampiero, Sandra J. Pierce, and David Streiner 1980 Cross-sectional studies of grammatical morphemes in autistic and mentally retarded children. Journal of Autism and Developmental Disorders 10(1): 39−50.
Basso, Anna, André Roch Lecours, Silvia Moraschini, and Marie Vanier 1985 Anatomoclinical correlations of the aphasias as defined through computerized tomography: Exceptions. Brain and Language 26: 201−229.
Bastiaanse, Roelien, and Susan Edwards 2004 Word order and finiteness in Dutch and English Broca’s and Wernicke’s aphasia. Brain and Language 89: 91−107.
Bastiaanse, Roelien, and Cynthia Thompson (eds.) 2012 Perspectives on Agrammatism. New York, NY: Psychology Press.
Bates, Elizabeth, Angela Friederici, and Beverly Wulfeck 1987 Grammatical morphology in aphasia. Cortex 23: 545−574.
Bates, Elizabeth, Angela Friederici, Beverly Wulfeck, and Larry A. Juarez 1988 On the preservation of word order in aphasia: Cross-linguistic evidence. Brain and Language 33: 323−363.
Bates, Elizabeth, and Beverly Wulfeck 1989 Crosslinguistic study of aphasia. In: Brian MacWhinney, and Elizabeth Bates (eds.), The Crosslinguistic Study of Sentence Processing, 328−374. Cambridge: Cambridge University Press.
Bates, Elizabeth, and Katherine Roe 2001 Language development in children with unilateral brain injury. In: Charles A.
Nelson, and Monica Luciana (eds.), Handbook of Developmental Cognitive Neuroscience, 281−307. Cambridge, MA: MIT Press.
Bellugi, Ursula, and Marie St. George (eds.) 2000 Linking cognitive neuroscience and molecular genetics: New perspectives from Williams syndrome. Special issue of Journal of Cognitive Neuroscience 12: Supplement.
Bellugi, Ursula, Liz Lichtenberger, Wendy Jones, and Zona Lai 2000 The neurocognitive profile of Williams syndrome: A complex pattern of strengths and weaknesses. Journal of Cognitive Neuroscience 12 (Supplement): 7−29.
Berndt, Rita Sloan, Charlotte C. Mitchum, and Anne N. Haendiges 1996 Comprehension of reversible sentences in “agrammatism”: A meta-analysis. Cognition 58: 289−308.
Bishop, Dorothy 2009 Genes, cognition, and communication: Insights from neurodevelopmental disorders. Annals of the New York Academy of Sciences 1156(1): 1−18.
Caplan, David 1985 Syntactic and semantic structures in agrammatism. In: Marie Louise Kean (ed.), Agrammatism, 125−152. Orlando: Academic Press.
Caplan, David, and Gloria S. Waters 1995 Aphasic disorders of syntactic comprehension and working memory capacity. Cognitive Neuropsychology 12: 637−651.
Caramazza, Alfonso 1984 The logic of neuropsychological research and the problem of patient classification in aphasia. Brain and Language 21: 9−20.
Caramazza, Alfonso 1992 Is cognitive neuropsychology possible? Journal of Cognitive Neuroscience 4(1): 80−95.
Caramazza, Alfonso, and Edgar Zurif 1976 Dissociation of algorithmic and heuristic processes in language comprehension: Evidence from aphasia. Brain and Language 3: 572−582.
Caramazza, Alfonso, and Rita Sloan Berndt 1978 Semantic and syntactic processes in aphasia: A review of the literature. Psychological Bulletin 85: 898−918.
Chapman, Robin S., Linda J. Hesketh, and Doris J. Kistler 2002 Predicting longitudinal change in language production and comprehension in individuals with Down’s syndrome: Hierarchical linear modelling. Journal of Speech, Language, and Hearing Research 45: 902−915.
Chomsky, Noam 1957 Syntactic Structures. The Hague: Mouton.
Chomsky, Noam 1965 Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Chomsky, Noam 1980 Rules and representations. Behavioral and Brain Sciences 3: 1−14.
Chomsky, Noam 1981 Lectures on Government and Binding. Dordrecht: Foris.
Chomsky, Noam 1995 The Minimalist Program. Cambridge, MA: MIT Press.
Chomsky, Noam 2000 Minimalist inquiries: The framework. In: Roger Martin, David Michaels, and Juan Uriagereka (eds.), Step by Step, 89−155. Cambridge, MA: MIT Press.
Chomsky, Noam 2002 On Nature and Language. Cambridge: Cambridge University Press.
Clahsen, Harald 2008 Chomskyan syntactic theory and language disorders. In: Martin Ball, Michael Perkins, Nicole Mueller, and Sara Howard (eds.), The Handbook of Clinical Linguistics, 165−183. Oxford: Blackwell.
Clahsen, Harald, Susanne Bartke, and Sandra Göllner 1997 Formal features in impaired grammars: A comparison of English and German SLI. Journal of Neurolinguistics 10(2/3): 151−171.
Clahsen, Harald, and Mayella Almazan 1998 Syntax and morphology in Williams syndrome. Cognition 68: 167−198.
Curtiss, Susan, and Jeannette Schaeffer 2005 Syntactic development in children with hemispherectomy: The I-, D-, and C-systems. Brain and Language 94: 147−166.
De Bleser, Ria 1988 Localisation of aphasia: Science or fiction. In: Gianfranco Denes, Carlo Semenza, and P. Bislacchi (eds.), Perspectives of Cognitive Neuropsychology, 161−185. Hove: Lawrence Erlbaum.
De Bleser, Ria, and Josef Bayer 1991 On the role of inflectional morphology in agrammatism. In: Michael Hammond (ed.), Theoretical Morphology: Approaches in Modern Linguistics, 45−69. San Diego: Academic Press.
Dennis, Maureen, and Harry A. Whitaker 1976 Language acquisition following hemidecortication. Brain and Language 3: 404−433.
Dick, Frederic, Beverly Wulfeck, Magda Krupa-Kwiatkowski, and Elizabeth Bates 2004 The development of complex sentence interpretation in typically developing children compared with children with specific language impairments or early unilateral focal lesions. Developmental Science 7(3): 360−377.
Drai, Dan, and Yosef Grodzinsky 2006 A new empirical angle on the variability debate: Quantitative neurosyntactic analyses of a large data set from Broca’s aphasia. Brain and Language 96: 117−128.
Dromi, Esther, Laurence B. Leonard, Galit Adam, and Sara Zadunaisky-Ehrlich 1999 Verb agreement morphology in Hebrew-speaking children with specific language impairment. Journal of Speech and Hearing Research 42: 1414−1431.
Duman, Tuba Y., Gülşat Aygen, Neşe Özgirgin, and Roelien Bastiaanse 2007 Object scrambling and finiteness in Turkish agrammatic production. Journal of Neurolinguistics 20(4): 306−331.
Durrleman, Stephanie, and Sandrine Zufferey 2009 The nature of syntactic impairment in autism. Rivista di Grammatica Generativa 34: 57−86.
Eadie, P. A., M. E. Fey, J. M. Douglas, and C. L.
Parsons 2002 Profiles of grammatical morphology and sentence imitation in children with Specific Language Impairment and Down’s syndrome. Journal of Speech, Language, and Hearing Research 45: 720−732.
Edwards, Susan 2005 Fluent Aphasia. Cambridge: Cambridge University Press.
Edwards, Susan, and David Lightfoot 2000 Intact grammars but intermittent access. Behavioral and Brain Sciences 23(1): 31−32.
Faroqi-Shah, Yasmeen, and Cynthia K. Thompson 2004 Semantic, lexical, and phonological influences on the production of verb inflections in agrammatic aphasia. Brain and Language 89: 484−498.
Ferreira, Fernanda 2003 The misinterpretation of noncanonical sentences. Cognitive Psychology 47(2): 164−203.
Fodor, Jerry A. 1983 The Modularity of Mind. Cambridge, MA: MIT Press.
Friedmann, Naama, and Yosef Grodzinsky 1997 Tense and agreement in agrammatic production: Pruning the syntactic tree. Brain and Language 56: 397−425.
Friedmann, Naama, and Rama Novogrodsky 2004 The acquisition of relative clause comprehension in Hebrew: A study of SLI and normal development. Journal of Child Language 3: 661−681.
Friedmann, Naama, and Ronit Szterman 2006 Syntactic movement in orally trained children with hearing impairment. Journal of Deaf Studies and Deaf Education 11(1): 56−75.
Fromkin, Victoria A. 1971 The non-anomalous nature of anomalous utterances. Language 47(1): 27−52.
Fromkin, Victoria A. 1997 Some thoughts about the brain/mind/language interface. Lingua 100: 3−27.
Garraffa, Maria, and Nino Grillo 2008 Canonicity effects as grammatical phenomena. Journal of Neurolinguistics 21: 177−197.
Gathercole, Susan, and Alan D. Baddeley 1990 Phonological memory deficits in language impaired children: Is there a causal connection? Journal of Memory and Language 29: 336−360.
Goodglass, Harold, Julie Ann Christiansen, and Roberta Gallagher 1993 Comparison of morphology and syntax in free narrative and structured tests: Fluent vs. nonfluent aphasics. Cortex 29: 377−407.
Gopnik, Myrna, and Martha B. Crago 1991 Familial aggregation of a developmental language disorder. Cognition 39: 1−50.
Grober, Ellen, and Shereen Bang 1995 Sentence comprehension in Alzheimer’s disease. Developmental Neuropsychology 11: 95−107.
Grodzinsky, Yosef 1984 The syntactic characterization of agrammatism. Cognition 16: 99−120.
Grodzinsky, Yosef 1990 Theoretical Perspectives on Language Deficits. London: MIT Press.
Grodzinsky, Yosef 1995 A restrictive theory of agrammatic comprehension. Brain and Language 50: 27−51.
Grodzinsky, Yosef 2000 The neurology of syntax. Behavioral and Brain Sciences 23(1): 1−71.
Grodzinsky, Yosef, Amy Pierce, and Susan Marakovitz 1991 Neuropsychological reasons for a transformational derivation of syntactic passive. Natural Language and Linguistic Theory 9(3): 431−453.
Grodzinsky, Yosef, Ken Wexler, Yu-Chin Chien, Susan Marakovitz, and Julie Solomon 1993 The breakdown of binding relations. Brain and Language 45: 396−422.
Grodzinsky, Yosef, and Lisa Finkel 1998 The neurology of empty categories: Aphasics’ failure to detect ungrammaticality.
Journal of Cognitive Neuroscience 10(2): 281−292.
Guasti, Maria Teresa 2002 Language Acquisition. Cambridge, MA: MIT Press.
Haegeman, Liliane 1991 Introduction to Government and Binding Theory. Oxford: Blackwell.
Hagiwara, Hiroko 1995 The breakdown of functional categories and the economy of derivation. Brain and Language 50: 92−116.
Hamann, Cornelia, Zvi Penner, and Katrin Lindner 1998 German impaired grammar: The clause structure revisited. Language Acquisition 7(2−4): 193−245.
Heeschen, Claus, and Emanuel Schegloff 2003 Aphasic agrammatism as interactional artefact and achievement. In: Charles Goodwin (ed.), Conversation and Brain Damage, 231−282. Oxford: Oxford University Press.
Hickok, Gregory, and Sergey Avrutin 1995 Representation, referentiality, and processing in agrammatic comprehension: Two case studies. Brain and Language 50: 10−26.
Im-Bolter, Nancie, Janice Johnson, and Juan Pascual-Leone 2006 Processing limitations in children with specific language impairment: The role of executive function. Child Development 77: 1822−1841.
Jakubowicz, Celia 2011 Measuring derivational complexity: New evidence from typically developing and SLI learners of L1 French. Lingua 121: 339−351.
Janssen, Ulrike, and Martina Penke 2002 How are inflectional affixes organized in the mental lexicon? Evidence from the investigation of agreement errors in agrammatic aphasics. Brain and Language 81: 180−191.
Johnston, J. R. 1994 Cognitive abilities of language-impaired children. In: Ruth Watkins, and Mabel Rice (eds.), Specific Language Impairments in Children: Current Directions in Research and Intervention, 107−121. Baltimore: Brookes.
Just, Marcel A., and Patricia A. Carpenter 1992 A capacity theory of comprehension: Individual differences in working memory. Psychological Review 98: 122−149.
Karmiloff-Smith, Annette 1998 Development itself is the key to understanding developmental disorders. Trends in Cognitive Sciences 2: 389−398.
Kolk, Herman 1995 A time-based approach to agrammatic production. Brain and Language 50: 282−303.
Kolk, Herman, and Claus Heeschen 1992 Agrammatism, paragrammatism and the management of language. Language and Cognitive Processes 7: 89−129.
Koopman, Hilda, and Dominique Sportiche 1988 The position of subjects. Lingua 85: 211−258.
Kosslyn, Stephen M., and Michael Van Kleek 1990 Broken brains and normal minds: Why Humpty Dumpty needs a skeleton. In: Eric L. Schwartz (ed.), Computational Neuroscience, 390−402. Cambridge, MA: MIT Press.
Lai, Cecilia S., Simon E. Fisher, Jane A. Hurst, Faraneh Vargha-Khadem, and Anthony P. Monaco 2001 A forkhead-domain gene is mutated in a severe speech and language disorder. Nature 413: 519−523.
Lapointe, Steven G. 1985 A theory of verb form use in the speech of agrammatic aphasics. Brain and Language 24: 100−155.
Lavorel, P. M.
1982 Production strategies: A systems approach to Wernicke’s aphasia. In: Michael Arbib, David Caplan, and John C. Marshall (eds.), Neural Models of Language Processing, 135−164. New York: Academic Press.
Laws, Glynis, and Dorothy Bishop 2003 A comparison of language abilities in adolescents with Down’s syndrome and children with Specific Language Impairment. Journal of Speech, Language, and Hearing Research 46: 1324−1339.
Lee, Miseon 2003 Dissociations among functional categories in Korean agrammatism. Brain and Language 84(2): 170−188.
Lee, Jiyeon, Lisa Milman, and Cynthia K. Thompson 2008 Functional category production in English agrammatism. Aphasiology 22(7/8): 893−905.
Leonard, Laurence 1998 Children with Specific Language Impairment. Cambridge, MA: MIT Press.
Leonard, Laurence, Susan Weismer, Carol Miller, David Francis, Bruce Tomblin, and Robert Kail 2007 Speed of processing, working memory, and language impairment in children. Journal of Speech, Language, and Hearing Research 50: 408−428.
Levy, Yonata 1996 Modularity of language reconsidered. Brain and Language 55: 240−263.
Levy, Yonata, and Gitit Kavé 1999 Language breakdown and linguistic theory: A tutorial overview. Lingua 107: 95−143.
Levy, Yonata, and Jeannette Schaeffer (eds.) 2002 Language Competence across Populations − Towards a Definition of Specific Language Impairment in Children. Mahwah, NJ: Lawrence Erlbaum.
Liégeois, Frédérique, Torsten Baldeweg, Alan Connelly, David G. Gadian, Mortimer Mishkin, and Faraneh Vargha-Khadem 2003 Language fMRI abnormalities associated with FOXP2 gene mutation. Nature Neuroscience 6(11): 1230−1237.
Lonzi, Lidia, and Claudio Luzzatti 1993 Relevance of adverb distribution for the analysis of sentence representation in agrammatic patients. Brain and Language 45: 306−317.
MacWhinney, Brian, and Elizabeth Bates 1989 The Crosslinguistic Study of Sentence Processing. Cambridge: Cambridge University Press.
Marin, Oscar S. M., Eleanor M. Saffran, and Myrna F. Schwartz 1976 Dissociations of language in aphasia: Implications for normal function. In: Steven R. Harnad, Horst D. Steklis, and Jane Lancaster (eds.), Origins and Evolution of Language and Speech, 868−884. New York: Academy of Sciences.
Martin, Randi C. 2006 The neuropsychology of sentence processing: Where do we stand? Cognitive Neuropsychology 23(1): 75−95.
Mauner, Gail, Victoria Fromkin, and Thomas L. Cornell 1993 Comprehension and acceptability judgements in agrammatism: Disruptions in the syntax of referential dependency. Brain and Language 45: 340−370.
Meaburn, Emma, P. A. Dale, Ian W. Craig, and Robert Plomin 2002 Language-impaired children: No sign of the FOXP2 mutation. Neuroreport 13(8): 1075−1077.
Menn, Lise, and Loraine K. Obler 1990a Cross-language data and theories of agrammatism. In: Lise Menn, and Loraine K. Obler (eds.), Agrammatic Aphasia: A Cross-Language Narrative Sourcebook, Vol. III, 1369−1389. Amsterdam: Benjamins.
Menn, Lise, and Loraine K. Obler (eds.)
1990b Agrammatic Aphasia: A Cross-Language Narrative Sourcebook, Vol. I−III. Amsterdam: Benjamins.
Nanousi, Vicky, Jackie Masterson, Judith Druks, and Martin Atkinson
2006 Interpretable vs. uninterpretable features: Evidence from six Greek-speaking agrammatic patients. Journal of Neurolinguistics 19(3): 209−238.
Neuhaus, Eva, and Martina Penke
2008 Production and comprehension of wh-questions in German Broca’s aphasia. Journal of Neurolinguistics 21: 150−176.
Niemi, Jussi, and Matti Laine
1997 Syntax and inflectional morphology in aphasia: Quantitative aspects of Wernicke speakers’ narratives. Journal of Quantitative Linguistics 4(1−3): 181−189.
Ouhalla, Jamal
1993 Functional categories, agrammatism and language acquisition. Linguistische Berichte 143: 3−36.
Penke, Martina
1998 Die Grammatik des Agrammatismus: Eine linguistische Analyse zu Wortstellung und Flexion bei Broca-Aphasie. Tübingen: Niemeyer.


VIII. The Cognitive Perspective

Penke, Martina
2000 Unpruned trees in German Broca’s aphasia. Behavioral and Brain Sciences 23(1): 46−47.
Penke, Martina
2001 Controversies about CP: a comparison of language acquisition and language impairments in Broca’s aphasia. Brain and Language 77: 351−363.
Penke, Martina
2006 Flexion im Mentalen Lexikon. Tübingen: Niemeyer.
Penke, Martina
2008 Morphology and language disorder. In: Martin Ball, Michael Perkins, Nicole Mueller, and Sara Howard (eds.), The Handbook of Clinical Linguistics, 212−227. Oxford: Blackwell.
Penke, Martina
2012 The dual-mechanism debate. In: Markus Werning, Wolfram Hinzen, and Edouard Machery (eds.), The Oxford Handbook of Compositionality, 574−595. Oxford: Oxford University Press.
Penke, Martina
2013a Syntaktische Störungen bei Aphasie. Spektrum Patholinguistik 6: 45−84.
Penke, Martina
2013b Morphosyntactic development in German children with Down’s syndrome. Talk presented at 2013 meeting of EUCLDIS at NIAS, Wassenaar (The Netherlands).
Penke, Martina, and Marion Krause
2002 German noun plurals − a challenge to the Dual-Mechanism Model. Brain and Language 81: 303−311.
Penke, Martina, and Anette Rosenbach
2007 What counts as evidence in linguistics? In: Martina Penke, and Anette Rosenbach (eds.), What Counts as Evidence in Linguistics − the Case of Innateness, 1−50. Amsterdam: John Benjamins.
Perovic, Alexandra
2006 Syntactic deficit in Down’s syndrome: more evidence for the modular organisation of language. Lingua 116: 1616−1630.
Pinango, Maria, and Petra Burkhardt
2001 Pronominals in Broca’s aphasia comprehension: The consequences of syntactic delay. Brain and Language 79(1): 167−168.
Pinker, Steven
1999 Words and Rules. New York: Basic Books.
Platzack, Christer
2001 The vulnerable C-domain. Brain and Language 77: 364−377.
Pollock, Jean-Yves
1989 Verb movement, Universal Grammar, and the structure of IP. Linguistic Inquiry 20: 365−424.
Reuland, Eric
2001 Primitives of binding. Linguistic Inquiry 32: 439−492.
Rice, Mabel L., and Ken Wexler
1996 Towards tense as a clinical marker of specific language impairment in English-speaking children. Journal of Speech and Hearing Research 39: 1239−1257.
Ring, Melanie, and Harald Clahsen
2005 Distinct patterns of language impairment in Down’s syndrome and Williams syndrome: The case of syntactic chains. Journal of Neurolinguistics 18: 479−501.
Rizzi, Luigi
2004 Locality and the left periphery. In: Andrea Belletti (ed.), Structure and Beyond, 223−251. Oxford: Oxford University Press.


Roberts, Joanne, Robin Chapman, and Steven Warren (eds.)
2008 Speech & Language Development & Intervention in Down Syndrome & Fragile X Syndrome. Baltimore, MD: Paul H. Brookes.
Rondal, Jean, and Annick Comblain
1996 Language in adults with Down’s syndrome. Down’s Syndrome Research and Practice 4(1): 3−14.
Rothweiler, Monika, Solveig Chilla, and Harald Clahsen
2012 Subject verb agreement in Specific Language Impairment: A study of monolingual and bilingual German-speaking children. Bilingualism: Language and Cognition 15(1): 39−57.
Rugg, Michael D.
1999 Functional neuroimaging in cognitive neuroscience. In: Colin Brown, and Peter Hagoort (eds.), The Neurocognition of Language, 15−36. Oxford: Oxford University Press.
Schaeffer, Jeannette
2000 Aphasia research and theoretical linguistics guiding each other. Behavioral and Brain Sciences 23(1): 50−51.
Schulz, Petra, and Naama Friedmann (eds.)
2011 Specific Language Impairment (SLI) across languages: Properties and possible loci. Special issue of Lingua 121.
SLI Consortium
2002 A genomewide scan identifies two novel loci involved in specific language impairment. American Journal of Human Genetics 70: 384−398.
Stromswold, Karen
2001 The heritability of language: A review and metaanalysis of twin, adoption, and linkage studies. Language 77(4): 647−723.
Tager-Flusberg, Helen
1981 Sentence comprehension in autistic children. Applied Psycholinguistics 2: 5−24.
Tager-Flusberg, Helen
2002 Language impairment in children with complex neurodevelopmental disorders: The case of Autism. In: Yonata Levy, and Jeannette Schaeffer (eds.), Language Competence across Populations − Towards a Definition of Specific Language Impairment in Children, 297−321. Mahwah, NJ: Lawrence Erlbaum.
Tager-Flusberg, Helen, Susan Calkins, Tina Nolin, Therese Baumberger, Marcia Anderson, and Ann Chadwick-Dias
1990 A longitudinal study of language acquisition in autistic and Down’s syndrome children. Journal of Autism and Developmental Disorders 20(1): 1−21.
Tallal, Paula
1990 Fine-grained discrimination deficits in language-learning impaired children are specific neither to the auditory modality nor to speech perception. Journal of Speech and Hearing Research 33: 616−621.
Thal, Donna, Stacy Tobias, and Deborah Morrison
1991 Language and gesture in late talkers: A one-year follow-up. Journal of Speech and Hearing Research 34(3): 604−612.
Townsend, J., Beverly Wulfeck, S. Nichols, and L. Koch
1995 Attentional Deficits in Children with Developmental Language Disorder. Technical Report CND-9503. Center for Research in Language, University of California at San Diego.
Ullman, Michael, Roumyana Pancheva, Tracy Love, Eiling Yee, David Swinney, and Gregory Hickok
2005 Neural correlates of lexicon and grammar: Evidence from the production, reading, and judgment of inflection in aphasia. Brain and Language 93(2): 185−238.


Uylings, Harry, Lidia Malofeeva, Irina Bogolepova, Katrin Amunts, and Karl Zilles
1999 Broca’s language area from a neuroanatomical and developmental perspective. In: Colin Brown, and Peter Hagoort (eds.), The Neurocognition of Language, 319−336. Oxford: Oxford University Press.
van der Lely, Heather
1996 Specifically language impaired and normally developing children: Verbal passive vs. adjectival passive sentence interpretation. Lingua 98: 243−272.
van der Lely, Heather
1997 Language and cognitive development in a grammatical SLI boy: Modularity and innateness. Journal of Neurolinguistics 10(2/3): 75−107.
van der Lely, Heather
2005 Domain-specific cognitive systems: Insight from Grammatical-SLI. Trends in Cognitive Sciences 9(2): 53−59.
van der Lely, Heather, and Linda Stollwerck
1997 Binding theory and specifically language impaired children. Cognition 62: 245−290.
van der Lely, Heather, and Michael T. Ullman
2001 Past tense morphology in specifically language impaired and normally developing children. Language and Cognitive Processes 16(2/3): 177−217.
Weigl, Egon, and Manfred Bierwisch
1970 Neuropsychology and linguistics: Topics of common research. Foundations of Language 6: 1−18.
Wenzlaff, Michaela, and Harald Clahsen
2004 Tense and agreement in German agrammatism. Brain and Language 89: 57−68.
Wexler, Ken
1994 Optional infinitives, head movement and economy of derivation. In: Norbert Hornstein, and David Lightfoot (eds.), Verb Movement, 305−350. Cambridge: Cambridge University Press.
Wexler, Ken, Carson T. Schütze, and Mabel Rice
1998 Subject case in children with SLI and unaffected controls: evidence for the Agr/Tns omission model. Language Acquisition 7: 317−344.
Willmes, Klaus, and Klaus Poeck
1993 To what extent can aphasic syndromes be localized? Brain 116(6): 1527−1540.
Wimmer, Eva
2010 Die syntaktischen Fähigkeiten von Wernicke-Aphasikern. Eine experimentelle Studie. Dissertation, University of Duesseldorf. http://docserv.uni-duesseldorf.de/servlets/DocumentServlet?id=16808.

Martina Penke, Cologne (Germany)


54. Syntax and Language Processing

1. Introduction
2. Universality of parsing
3. Determining sentence complexity
4. Syntactic representations in processing
5. Bridging the gap between syntactic and processing theory
6. Conclusion
7. References (selected)

Abstract

Experimental psycholinguistics examines how mental representations of words, sentences, or larger pieces of text or discourse are created in real time during language production and comprehension. This chapter looks at recent findings from sentence processing research from a linguistic perspective and seeks to illustrate how language processing data can add to our understanding of a wide range of grammatical phenomena including morphosyntactic feature hierarchies, argument linking and thematic role assignment, non-canonical word orders, syntactically mediated referential dependencies such as reflexive binding and obligatory control, and different types of ellipsis. While some empirical findings provide evidence for the mental reality of abstract linguistic categories and representations, others turn out to present non-trivial challenges for some current theoretical assumptions. The chapter also looks at more general issues such as the degree to which parsing might be universal, possible ways of determining sentence complexity, and some recent attempts to link syntactic theory and processing theory.

1. Introduction

Language processing research investigates the mental architectures, mechanisms and representations involved in language comprehension and production. Both language comprehension and production are thought to involve the computation of grammatical representations in real time, the details of which, and the factors constraining them, psycholinguists seek to uncover − goals that do not seem altogether different from those of a linguist seeking to determine the properties of the mental grammar. The online computation of grammatical representations is commonly referred to as parsing. Although the precise nature of the relationship between the mental grammar and the parser is still unclear, few people would dispute that the two must be intimately linked.

Despite the fact that modern linguistics, along with experimental psychology and a handful of other disciplines sharing the goal of understanding the mind, considers itself to be part of cognitive science, successful collaborations between theoretical syntacticians and psycholinguists engaged in experimental sentence processing research are still comparatively rare.


The main reason for this, according to Culicover (2005: 228), is the fact that “the role of linguistic theory in cognitive science has not been properly characterized, either within linguistics or outside of linguistics”. While linguists usually take the principal object of inquiry to be the competence grammar and may thus deem language processing data to be irrelevant to their concerns, psycholinguists have found linguistic theories of limited use for accounting for their findings. Psycholinguists’ original enthusiasm for Chomsky’s (1957, 1965) generative-transformational grammar quickly waned when the hypothesized correspondence between the number of transformations in a sentence and processing difficulty was disconfirmed, leading to a breakdown in communication between the two fields that is proving difficult to overcome. See, for example, Geurts’ (2007) critique of research within the field of the neurocognition of language, which he claims is often poorly informed by linguistic theory. Although transformational grammar and its descendants, notably Principles-and-Parameters Theory (Chomsky 1981), have continued to influence sentence processing research during the past few decades (compare e.g. Crocker 1996; Gorrell 1995; Pritchett 1992), Chomsky’s (1995, and later) minimalist framework has been criticized for being incompatible with current models of language processing (e.g. Ferreira 2005 − but see Marantz 2005 for a more positive assessment). Alternative approaches such as Combinatory Categorial Grammar (Steedman 2000), Jackendoff’s (2007a) Parallel Architecture framework, variants of Head-Driven Phrase Structure Grammar (Pollard and Sag 1994; S. Müller, article 27, this volume) and Tree-Adjoining Grammar (Joshi, Levy, and Takahashi 1975) have been claimed to be more compatible with performance models, although the jury is still out on this.
Another problem is the fact that linguists and psycholinguists tend to have different views as to what constitute appropriate methods of investigation, with linguists traditionally relying on intuitive (usually, their own) judgments and psycholinguists carrying out controlled laboratory experiments with large numbers of participants (Ferreira 2005; Gibson and Fedorenko 2010 − but see Phillips 2009 for a defense of intuitive judgment data). This methodological gap, at least, is beginning to close, as a growing number of syntacticians are now taking a serious interest in the experimental study of language. While some have started using improved techniques for collecting judgment data (compare e.g. Cowart 1997; Featherston 2005; Schütze 1996; Sorace and Keller 2005), others are using reaction-time or other types of online processing data in place of, or supplementing, more traditional types of linguistic data. There has also been a growing interest in supporting judgment data with corpus data.

Experimental methods commonly used in sentence processing research range from offline paper-and-pencil tasks to highly sophisticated online techniques, including some borrowed from neuroscience. Time-course-sensitive methods such as eye-movement monitoring or the recording of event-related brain potentials (ERPs), for example, allow us to chart the step-by-step processes involved in sentence comprehension at the millisecond level, and are able to provide detailed information about the nature of syntactic representations and processes (for detailed descriptions of current experimental methods, see e.g. Carreiras and Clifton 2004; Kaan 2007; Staub and Rayner 2007). ERPs may moreover allow us to distinguish between syntactic and lexical-semantic processes, which have been argued to be reflected in distinct patterns of brain activity.
The vast majority of sentence processing studies have focused on comprehension, where the input to the processing system (i.e., the stimulus sentences) is easier to control


than in language production (where the input consists of our thoughts, or preverbal messages). This chapter looks at recent findings from sentence processing research from a linguistic rather than from a psychological perspective. Specifically, it aims to show how language processing data can add to our understanding of various kinds of syntactic phenomena, rather than providing an overview of current processing models or discussing how different types of information sources interact during parsing. Also excluded from this review are findings from computational modelling, a branch of language processing research from which we can also gain many theoretically relevant insights (compare e.g. Crocker 1996; Dijkstra and de Smedt 1996; Lewis 2003), and findings from neuroimaging studies attempting to localize language functions in the brain (see Bornkessel-Schlesewsky and Friederici 2007, for a review).

The remainder of this chapter is organized as follows. Section 2 examines the degree to which parsing might be universal, followed by a closer look at the relationship between structural complexity and processing difficulty in section 3. Section 4 presents a selective overview of experimental findings that are relevant to specific linguistic claims or controversies, and section 5 briefly considers some recent attempts to render syntactic theory more compatible with processing models. The chapter concludes with a brief summary and an overview of recent developments in sentence processing research.

2. Universality of parsing

Many of the questions that drive language processing research closely resemble those also traditionally asked by linguists. These include the question of the number and nature of the different subsystems, or levels of processing, that are involved in sentence comprehension and production, the question of how autonomous the grammatical processor is from other cognitive subsystems, and the search for possible universal constraints on the way strings of letters or sounds (in comprehension) or preverbal messages (in language production) are mapped onto grammatical representations. While theories of parsing differ with respect to their assumptions about the architecture of the processing system and the way in which structural and non-structural information interact during processing, some aspects of parsing are widely assumed to be universal. Current processing models can be broadly divided into serial-autonomous ones which assume that initial parsing decisions are based on (morpho-)syntactic information only, and constraint-satisfaction models which assume that parsing is interactive. For reviews of current psycholinguistic models and issues, see e.g. MacDonald and Seidenberg (2006), Pickering and Van Gompel (2006), or Townsend and Bever (2001).

One empirical finding that appears to hold true universally is that parsing is incremental. That is, new incoming words or phrases are integrated immediately into the emerging structural representation during sentence comprehension. Even though verbs obviously play a key role in sentence interpretation, parsing has been found to proceed word-by-word and without delay even in head-final languages.
Studies examining the processing of verb-final structures in languages such as Japanese (Kamide 2006) or German (Bader and Lasser 1984; Fiebach, Schlesewsky, and Friederici 2002) have shown that potential argument noun phrases are analyzed and integrated into the current partial sentence representation even before the subcategorizing verb has been processed. Besides being


predictive (see Kamide 2008, for a review of relevant studies), there is also evidence that incremental left-to-right parsing builds syntactic representations that are fully connected, rather than building multiple unconnected structural fragments (Sturt and Lombardo 2005).

Another putatively universal feature of parsing is the parser’s preference for pursuing the most economical analysis. According to the principle of Minimal Attachment (Frazier 1979; Frazier and Fodor 1978), the parser will seek to minimize structural complexity. Locally ambiguous or garden-path sentences such as The log floated down the river sank provide good test cases for hypothesized structural economy principles. When faced with temporarily ambiguous input such as the word floated in a sentence fragment such as The log floated down the river…, which is formally ambiguous between the past tense and past participle forms of float, for example, the parser will preferentially analyze floated as a finite verb in the past tense. This analysis yields the minimal phrase marker consistent with the current input, a monoclausal structure as indicated in (1a), while avoiding the postulation of any potentially unnecessary nodes. The alternative analysis of floated as a participle that introduces a reduced relative clause (RC) modifying the log, on the other hand, requires the computation of additional structure that is potentially unnecessary at this point during processing. This includes an extra clause boundary (plus any associated functional scaffolding and covert wh-operator) and the prediction of another verb or verb phrase to complete the main clause, as indicated in (1b).

(1) a. [IP [NP the log ] [VP floated down the river ]]
    b. [IP [NP [NP the log ] [RC floated down the river ]] [VP … ]]

The severe processing difficulty typically elicited by this type of garden-path sentence when the disambiguating verb (e.g. sank) is encountered suggests that the main clause analysis (1a) is indeed opted for initially, and that revising it requires considerable (and possibly conscious) processing effort. Many models of parsing have explicitly linked the relative difficulty of undoing and repairing an erroneous first analysis to the specific kinds of structural change required (compare e.g. Fodor and Ferreira 1998; Sturt, Pickering, and Crocker 1999).

Similar economy considerations constrain the way we process discontinuous syntactic dependencies. According to the Active Filler Hypothesis (AFH) (Frazier and Clifton 1989) and its variants, such as the Minimal Chain Principle (De Vincenzi 1991), the parser seeks to keep the distance between a syntactically displaced element (or filler) and its lexical licenser (or associated gap) as short as possible. Many experimental studies have shown, for instance, that having encountered a fronted constituent such as the wh-phrase which book in (2) below, the processor will try and link this to the closest potential subcategorizer − i.e., the verb read − as soon as this is encountered.

(2) Which book did Kathryn read many articles about?
    a. Which book did Kathryn read [ which book ] …
    b. Which book did Kathryn read [ which book ] many articles about [ which book ]

In (2a), associating which book with read satisfies local processing economy but requires this analysis to be revised later on during the sentence as the wh-phrase’s real licenser


is the preposition about (cf. [2b]). Evidence for immediate gap-filling is provided by so-called filled gap effects − an increase in processing difficulty typically observed when a potential gap position turns out to be filled by another constituent, such as many articles in (2) (Stowe 1986). While the parser’s tendency to pursue the simplest possible analysis compatible with the current input, and its preference for keeping discontinuous dependencies short, have been attested in a range of typologically different languages, cross-linguistic differences have been reported in the domain of modifier ambiguity resolution. Monolingual English speakers, for example, usually prefer associating an ambiguous relative clause such as who had won the competition in (3) below with the second of two potential host noun phrases (i.e. with the writer) rather than with the head of the complex noun phrase (i.e. the daughter).

(3) Peter admired the daughter of the writer [RC who had won the competition ].

Speakers of many other languages including Spanish, French, German and Greek, on the other hand, tend to associate the RC with the referent of the first-mentioned noun phrase, i.e. with the daughter (see e.g. Cuetos and Mitchell 1988; Hemforth, Konieczny, and Scheepers 2000; Papadopoulou and Clahsen 2003; Zagar, Pynte, and Rativeau 1997). This observation has given rise to the hypothesis that certain parsing principles may be parameterized. Gibson, Pearlmutter, Canseco Gonzalez, and Hickok (1996) have suggested that language-specific variation in ambiguity resolution preferences may result from the interaction of two competing, structurally based economy principles. While the supposedly universal economy principle of Recency favors associating ambiguous modifiers with the most recently processed (i.e. most local) constituent, the principle of Predicate Proximity favors association with a head or constituent closest to the head of the current clause or predicate phrase. Several variants of the Recency principle have been proposed in the sentence processing literature, including Kimball’s (1973) principle of Right Association, the principle of Late Closure (Frazier 1979; Frazier and Fodor 1978), and Phillips’ (1996) Branch Right principle. Unlike Recency, the Predicate Proximity principle is argued to be subject to parameterization, with its relative strength tentatively being linked by Gibson, Pearlmutter, Canseco Gonzalez, and Hickok (1996) to the degree to which languages allow free word order. Unlike universalist theories of parsing (e.g. Frazier 1979; Frazier and Fodor 1978; Frazier and Clifton 1996; Gibson, Pearlmutter, Canseco Gonzalez, and Hickok 1996), probabilistic or experience-based models of language processing attribute any crosslinguistic variation in parsing to differences in statistical patterns in the input (Cuetos, Mitchell, and Corley 1996; MacWhinney and Bates 1989). 
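The interplay of the two principles can be made concrete with a toy sketch. The scoring scheme, the distance values, and the weights below are my own illustrative assumptions, not Gibson, Pearlmutter, Canseco Gonzalez, and Hickok’s (1996) actual formalization: each candidate host NP is penalized for its distance from the relative clause (Recency) and, with a language-particular weight, for its distance from the clausal predicate (Predicate Proximity).

```python
# Toy sketch of RC-attachment preferences as a competition between Recency
# (universal) and Predicate Proximity (parameterized). All numbers and the
# scoring function are illustrative assumptions, not the published model.

def preferred_host(hosts, proximity_weight):
    """hosts: list of (name, distance_from_rc, distance_from_predicate).
    Higher score = preferred attachment site."""
    def score(host):
        _, d_rc, d_pred = host
        return -d_rc - proximity_weight * d_pred
    return max(hosts, key=score)[0]

# "Peter admired [NP1 the daughter [of [NP2 the writer]]] who ..."
hosts = [
    ("the daughter", 2, 1),  # NP1: farther from the RC, closer to the predicate
    ("the writer", 1, 2),    # NP2: adjacent to the RC, farther from the predicate
]

# A weak Predicate Proximity weight lets Recency win (English-like low
# attachment); a strong weight yields NP1 attachment (Spanish-like).
print(preferred_host(hosts, proximity_weight=0.5))  # the writer
print(preferred_host(hosts, proximity_weight=2.0))  # the daughter
```

Only the `proximity_weight` parameter differs between the two calls, mirroring the claim that Predicate Proximity, but not Recency, is subject to cross-linguistic parameterization.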
According to the Competition Model developed by MacWhinney and colleagues, for example, language comprehension involves direct form-function mappings made on the basis of multiple competing cues to meaning, whose relative strength is claimed to vary from language to language (for a detailed critique of this model, see Gibson 1992). The question of whether or not syntactic information is prioritized in some way in real-time sentence processing, as serial-autonomous or syntax-first models would have it (e.g. Frazier and Clifton 1996; Friederici 2002), is likely to remain controversial for some time to come. Much recent research examines the question of the extent to which general cognitive factors − notably, working memory limitations − might influence the way we process a


given input string, and possibly even shape the grammar. Since language processing takes place in real time and requires us to keep track of elements already processed and link them to subsequent input, the fact that our computational resources are limited is bound to affect the way our processing system operates.

3. Determining sentence complexity

Why are some types of sentence more difficult to process than others? It seems reasonable to assume that processing difficulty is linked to sentence complexity − but how exactly should this be measured? Several complexity metrics have been proposed in the literature that have tried to make transparent the way in which structural complexity and processing cost are related. The fact that a sentence like (4b) below is easier to process than (4a) shows that processing difficulty is neither simply a function of sentence length nor of the number of structural embeddings involved.

(4) a. The bicycle the girl the boy liked rode was old.
    b. The young boy liked the pretty girl who rode the blue bicycle that was very old.

As indicated above, the original Derivational Theory of Complexity (Miller 1962; Miller and Chomsky 1963), which was based upon early transformational-generative grammar and assumed that transformations such as passivization or negation had to be systematically undone during comprehension, proved unable to accurately predict sentence processing difficulty (Fodor, Bever, and Garrett 1974). A more recent attempt to predict processing difficulty is Gibson’s (1998, 2000) Dependency Locality Theory (DLT). Processing cost is assumed to be made up of two components, storage cost and integration cost, and can be calculated for any given parse state during sentence processing. Underlying the DLT are the assumptions that (i) the processor makes lexically-based predictions about the categories or constituents that can follow the current word, and (ii) integrating a new word into the current representation involves trying to match it with a syntactic category prediction, as well as some degree of semantic evaluation (including thematic role assignment). Storage cost is defined in terms of the minimum number of syntactic heads required to complete the current input as a grammatical sentence. That is, the longer a given prediction must be upheld (i.e., kept in working memory) before it is satisfied, the more computational resources will be required for maintaining that prediction. The DLT is able to predict rather well, for example, the difference in processing difficulty between subject (5a) and object (5b) relative clauses. In (5), the figures below each word are memory units (MUs) indicating the relative storage cost at each point.

(5) a. The girl who liked the teacher recited a poem.
       2 1 3 2 2 1 1 10 (MU)
    b. The girl who the teacher liked recited a poem.
       2 1 3 4 3 1 1 10 (MU)


In both (5a) and (5b), the relative pronoun who signals the start of an embedded clause and thus, in addition to the as yet unsatisfied prediction of a main verb, triggers the prediction of an embedded verb and of a syntactic gap. Both of these predictions can be satisfied at the point at which liked is received. However, while who can be integrated with liked immediately (and hence, relatively effortlessly) in subject relatives such as (5a), both predictions must be kept in memory for longer in (5b), leading to increased storage cost during the intervening region the teacher.

Integration cost is thought to reflect the relative difficulty of associating a new incoming head with the structure built thus far, which is based on the relative distance between the two elements to be integrated. The greater the distance between a newly input word and the head to which it attaches or which licenses it, the greater the integration cost. Distance, in turn, is taken to be a function of the number of intervening new discourse referents. Local integration cost at the embedded verb liked is also higher in object than in subject relatives because of the greater number of new discourse referents intervening between who and its associated gap. The DLT’s prediction that (automatic) processing will fail if local processing cost exceeds a certain limit fits with a large body of experimental evidence indicating that we tend to find multiply-nested dependencies as in (6) below extremely difficult to process.

(6) The girl who the teacher who the headmaster hated liked was late for class.

Many researchers have speculated that processing factors such as computational resource limitations might determine at least some properties of our mental grammar (Berwick and Weinberg 1984; Hawkins 1994, 2004, among others). Hawkins (2004) argues that grammars are shaped so as to facilitate language processing, and that many typological generalizations may have their origin in performance principles. The hypothesis that parsing constraints help shape the grammar is supported by a large body of empirical evidence, including the tendency for natural language grammars to be either uniformly left-branching or right-branching, the preference for suffixation over prefixation, the tendency for non-case marking languages to show SVO order, and the observation that word order preferences are affected by heaviness or phrase length. Others have gone even further in arguing that some supposedly grammatical constraints, notably those that restrict discontinuous syntactic dependencies, such as Ross’s (1967) island constraints, are merely epiphenomena reflecting processing constraints at work (e.g. Kluender 1998; Kluender and Kutas 1993; Pritchett 1991). Chomsky himself appears to be sympathetic to this view, observing that “[t]he constraints involved − so-called “island conditions” − (…) may reduce in large measure to minimal search conditions of optimal computation, perhaps not coded in UG [universal grammar] but more general laws of nature…” (Chomsky 2007: 16−17). Kluender (1998), for instance, argues that the increase in referential processing load at clause boundaries might be responsible for the observation that wh-extraction from syntactic islands usually leads to (varying degrees of) unacceptability. We will return to processing-based accounts for island effects in section 4.2.1 below.
To conclude, there are good reasons for assuming that sentence complexity and parsing difficulty are linked, and complexity metrics such as Gibson’s (1998, 2000) DLT go some way towards making this relationship transparent. For alternative accounts of sentence complexity, see Bever (1970), Hawkins (1994, 2004), Lewis (1996), Stabler (1994), and Vasishth and Lewis (2006).


VIII. The Cognitive Perspective

4. Syntactic representations in processing

Sentence processing research has focused on three broad types of phenomena: (i) incremental structure building and ambiguity resolution, (ii) the processing of non-canonical word orders, and (iii) the processing of anaphoric expressions. This section provides selective overviews of recent findings from each of these three areas of investigation that might have implications for syntactic theory and description.

4.1. The role of grammatical and lexical features in phrase-structure building

Although the issue of the relative timing of morphosyntactic, lexical, prosodic, discourse-level and probabilistic information in sentence parsing is still under investigation, there is abundant evidence that all of these information sources are accessed and integrated quickly during real-time language comprehension. In the following, we shall focus on the role of structural information in processing and consider how insights from the study of online structure building may be brought to bear on theoretical linguistic issues.

4.1.1. Morphosyntactic features in parsing

Inflectional morphology provides the parser with vital clues about the syntactic category and grammatical function of new incoming words or phrases, and about how these are linked to words or phrases already processed. Results from ERP studies have revealed that native comprehenders’ brains register morphosyntactic violations extremely quickly, within about 300−500 milliseconds after encountering them (Friederici 2002). While the early processing of morphosyntactic features is generally thought to facilitate structural integration processes, local feature mismatches may also give rise to unwanted intrusion effects. The presence of a plural marker on the second noun cabinets in sentences like The key to the cabinets was/*were rusty, for example, has been shown to affect the processing of the following auxiliary both in production (Bock and Miller 1991) and comprehension (Pearlmutter, Garnsey, and Bock 1999).

Gender information is also used rapidly in sentence processing (Cowart and Cairns 1987). The observation that gender agreement violations elicit the same type of brain response (P600, or syntactic positive shift) as do number agreement and phrase structure violations has been taken to suggest that gender agreement is syntactic rather than semantic in nature (Hagoort and Brown 1999). Unlike person or number, however, fixed grammatical gender is thought to be a lexical feature of nouns, an assumption that is also supported by processing data. Examining the online resolution of null subject pronouns in Italian, Carminati (2005) found evidence for differences in the psychological salience of different types of morphosyntactic features, in accordance with Greenberg’s (1963) universal feature hierarchy (Person > Number > Gender) (compare also Nevins, Dillon, Malhotra, and Phillips 2007, for processing evidence from Hindi). Carminati’s findings are in line with evidence from Spanish suggesting that gender agreement is
processed independently from number agreement (Anton-Mendez, Nicol, and Garrett 2002) and may be more difficult to compute than number agreement (Barber and Carreiras 2005). Taken together, these findings are suggestive of differences in the way these features are syntactically represented, such that number but not gender may head a functional projection of its own (Carminati 2005). This does not, however, preclude the possibility that gender agreement is syntactic in nature, as suggested by Hagoort and Brown’s (1999) results, if we assume that gender features are part of the featural makeup of other syntactic heads (Carminati 2005: 274).

In the processing of free word order languages, case features have been shown to be of particular importance (Bader and Bayer 2006; Hyönä and Hujanen 1997; Miyamoto 2002, among many others). In head-final languages like Japanese, morphological case markers facilitate incremental processing by helping the parser determine the grammatical function of a newly received noun phrase, thus allowing it to be integrated into the emerging sentence structure even in the absence of the subcategorizing verb (see also section 4.2.2 below). The results from a reading-time study reported by Miyamoto (2002) furthermore show that case markers may facilitate processing by signalling the presence of clause boundaries. Results from speeded grammaticality judgment tasks in German suggest that case markers may provide more effective reanalysis cues than number marking (Meng and Bader 2000), and differences in the processing of noun phrases bearing structural versus oblique case have been argued to support the hypothesis that the latter have a more complex functional architecture than the former (Bayer, Bader, and Meng 2001).

4.1.2. Argument structure and thematic roles

A number of studies have shown that arguments are processed differently from adjuncts, supporting linguistic claims to the effect that the two should be formally distinguished (see Tutunjian and Boland 2008, for a review). Kennison (2002), for example, found that argument NPs are processed more quickly than adjunct NPs if they follow verbs that are biased towards transitive use, such as read, but not if they follow intransitive-biased verbs such as perform. These findings fit with a large body of experimental work showing that parsing is sensitive to argument structure and thematic role information (e.g. Burkhardt, Fanselow, and Schlesewsky 2007; Friederici and Frisch 2000; Trueswell, Tanenhaus, and Garnsey 1994). The results from several studies using time-course sensitive measures indicate, however, that the parser’s use of thematic role information and/or selectional restrictions may be delayed relative to its use of structural information (see e.g. Clifton, Traxler, Taha Mohamed, Williams, Morris, and Rayner 2003; Ferreira and Henderson 1990; Friederici 2002; McElree and Griffith 1995, 1998).

Processing data may also provide information about the representation of argument structure and thematic hierarchies. The usual preference for subject-initial over object-initial structures in German, for instance, has been found to be absent or reversed in sentences containing object-experiencer verbs such as gefallen ‘to be appealing to’ (Schlesewsky and Bornkessel 2003). This observation is in line with the assumption that for verbs of this type, the dative argument is higher on the thematic hierarchy than the nominative-marked one (e.g. Primus 1999). Experimental evidence furthermore suggests
that implicit agents in passive structures such as The game show’s wheel was spun form part of the mental representations constructed during processing (Mauner, Tanenhaus, and Carlson 1995).

There is evidence that syntactic processing and thematic or semantic role assignment may proceed independently but in parallel (compare e.g. Schlesewsky and Bornkessel 2003; Townsend and Bever 2001). In an ERP study reported by Frisch and Schlesewsky (2001), for instance, presenting participants with two arguments carrying the same morphological case invariably gave rise to a P600 response, whereas a second ERP component (a so-called N400) thought to reflect lexical-semantic processing difficulty was seen only if both arguments were animate (as in *… welcher Bischof der Priester begleitete ‘which bishop the priest accompanied’), but not where animacy differences provided cues as to the arguments’ likely thematic roles. The observation that the brain registers syntactic anomalies regardless of whether or not a sentence is in fact comprehensible could be taken to suggest that structural and semantic processes operate independently.

Further evidence for two separate processing pathways comes from the observation that the processor will sometimes compute rough-and-ready or “good enough”, meaning-based representations that may, under certain conditions, override the interpretations derived from detailed grammatical analyses (see Ferreira and Patson 2007, for a review). Christianson, Hollingworth, Halliwell, and Ferreira (2001), for instance, tested participants’ interpretation of garden-path sentences such as (7), in which the NP the rabbit is temporarily ambiguous between a subject and an object analysis but is then unambiguously identified as the main clause subject by the following verb ran.

(7)

While the grocer hunted the rabbit ran into the woods.

The authors found that in response to the question Did the grocer hunt the rabbit? many people would wrongly reply with yes, which indicates that their initial assignment of the Theme or Patient role to the rabbit had not been corrected even in the face of contradictory syntactic evidence.

4.1.3. Processing unaccusatives

According to the Unaccusative Hypothesis, subjects of unaccusative verbs originate in object position (Perlmutter 1978). Subjects of unergative verbs, in contrast, are assumed to be base-generated in subject position. Friedmann, Taranto, Shapiro, and Swinney (2008) used the cross-modal lexical priming technique to test the Unaccusative Hypothesis. Participants were required to make a lexical decision (that is, to discriminate between words and nonwords) to visually presented targets while listening to stimulus sentences spoken at normal speed. If syntactically displaced constituents are mentally reconstructed at their canonical structural positions, then responses to targets semantically related to the antecedent should be faster at the point of a structural gap than at non-gap (control) positions. Friedmann, Taranto, Shapiro, and Swinney’s auditory stimulus sentences contained three types of verb: non-alternating unaccusatives such as disappear (8a), alternating unaccusatives such as dry (8b), and unergatives such as wave (8c).

(8)

a. The runner with the funny accent and humble attitude unfortunately disappeared …
b. The table in the basement of the old house finally dried …
c. The fisherman from the south side of the peninsula happily waved …

The results showed that subjects of non-alternating unaccusatives were reactivated after the verb whereas subjects of unergatives were not. A mixed pattern of reactivation was observed for alternating unaccusatives such as dry, on the other hand, which can also be used transitively. These findings support the Unaccusative Hypothesis while presenting a challenge for non-derivational accounts of unaccusative structures (compare also Bever and Sanz 1997, for Spanish).

4.1.4. Summary

The above examples illustrate the potential relevance of language processing data for linguistic hypotheses about the syntactic representation of morphosyntactic features, thematic hierarchies and argument linking, as well as for the more general question of the degree of modularity of the language system. Other controversial issues on which processing data may bear include the representation of complex predicates (Nakatani 2006) and of multi-word expressions such as compounds (Cunnings and Clahsen 2007), complex verbs (Frazier, Flores d’Arcais, and Coolen 1993; Matlock and Heredia 2002), and idioms (e.g. Cacciari and Tabossi 1993; Libben and Titone 2008).

4.2. Processing non-canonical word orders

Syntactic theories differ in their analyses of non-canonical word order phenomena such as passives, NP-raising, or wh-fronting. While transformational grammar traditionally takes these to be derived from canonically ordered structures via syntactic movement, other frameworks assume that they are base-generated or lexicalized. Early psycholinguistic evidence in support of the psychological reality of empty categories such as NP-traces in passive and raising structures came from studies using the end-of-sentence probe recognition technique (Bever and McElree 1988; McElree and Bever 1989). These results must be interpreted with some caution, however, as the mental reactivation of displaced constituents at the end of a sentence could also reflect post-interpretation or end-of-sentence “wrap-up” processes. Arguably better suited to investigating whether arguments are mentally reactivated at specific structural positions are online techniques such as the cross-modal lexical priming paradigm described in section 4.1.3 above. Osterhout and Swinney (1993), for instance, found evidence for the reactivation of subjects shortly after the verb had been processed in passive but not in active structures, in line with what the movement hypothesis would lead us to expect. Although the majority of studies examining non-canonical word order patterns have used comprehension-based rather than production tasks, there is evidence that empty categories also form part of the grammatical representations built during sentence production (Franck, Lassi, Frauenfelder, and Rizzi 2006; Franck, Soare, Frauenfelder, and Rizzi 2010).


In the following we shall take a look at some of the most commonly studied non-canonical word order patterns in sentence processing research, to explore the extent to which language processing data might help inform current theoretical debates.

4.2.1. Unbounded dependencies

Let us first consider the phenomenon of wh-fronting, the prototypical case of what in the psycholinguistics literature is commonly referred to as a filler-gap dependency. When processing interrogatives such as Which book did you say Bill has bought?, for example, the wh-filler which book must be held in working memory until a suitable gap or lexical licenser is identified in the input − in this case, the filler must ultimately be linked to its lexical subcategorizer bought. Numerous studies have shown that encountering a wh-filler will trigger an active search for a gap, as stated by the Active Filler Hypothesis (AFH) in (9) (Frazier and Clifton 1989: 95).

(9)

Active Filler Hypothesis
When a filler has been identified, rank the option of assigning it to a gap above all other options.

As we saw in section 2 above, finding potential gap positions occupied by another constituent may lead to measurable processing difficulty. Interestingly, the equivalent of filled-gap effects has also been observed in wh-in-situ languages like Japanese. In Japanese, interrogativity is marked by a question particle on the verb, as shown in (10), rather than by wh-fronting.

(10) Mary-ga   nani-o    katta-no?   [Japanese]
     Mary-NOM  what-ACC  bought-Q
     ‘What did Mary buy?’

Using the self-paced reading paradigm, Miyamoto and Takahashi (2000) observed a type-mismatch effect when the earliest potential position for a question particle was found to be filled by an affirmative complementiser instead, as in example (11) below.

(11) [Senmu-ga      donna-pasokon-o         tukatteiru-to]  kakarichoo-ga   itta-no   [Japanese]
     director-NOM   what.kind-computer-ACC  using.is-COMP   supervisor-NOM  said-Q
     ‘What kind of computer did the supervisor say the director is using?’

Participants were slowed down after coming across the complementiser to ‘that’ at the end of the embedded clause (compared to another condition containing the question particle no instead), indicating that shorter wh-dependencies are preferred over longer ones in wh-in-situ languages as well (see also Aoshima, Phillips, and Weinberg 2004).

As pointed out in section 3 above, the AFH is an economy condition that helps the processor save working memory resources. The fact that the relative difficulty of processing unbounded dependencies is linked to the distance between the filler and its gap has given rise to the hypothesis that at least some (supposedly categorical) grammatical constraints on the formation of filler-gap dependencies might merely reflect processing problems. This idea is expressed in a generalized form by Hofmeister, Jaeger, Sag, Arnon, and Snider’s (2007: 191) Wh-Processing Hypothesis (12) (compare also Hofmeister and Sag 2010).

(12) The Wh-Processing Hypothesis
a. Factors that have been shown to burden the processing of referential filler-gap dependencies (e.g. relative clauses) burden the processing of all FGDs, including wh-interrogative constructions.
b. Many filler-gap sentences that have standardly been analyzed as ungrammatical (violating island constraints) are in fact grammatical, but are judged to be less acceptable by speakers because they are harder to process.

The authors argue that (12) helps account for the perceived ungrammaticality of superiority violations such as *What did who buy? as well as for island violations. Consider the interrogative sentence in (13a) below, for example, which involves wh-extraction of which book out of the bracketed relative clause who recently wrote and which is generally considered to be ungrammatical. Corresponding declarative sentences like (13b) are fully grammatical, however.

(13) a. *Which book did the journalist admire the woman [ who recently wrote __ ]?
     b. The journalist admired the woman [ who recently wrote a new book about government human rights abuses ].

From a left-to-right processing perspective we might expect a surge in local processing load in (13a) to occur at the RC boundary, due to the need to simultaneously (i) keep the original wh-filler which book active in working memory, (ii) access a discourse referent for the definite noun phrase and RC head the woman, (iii) start a new clause, and (iv) start a new gap search triggered by the wh-pronoun who.
Following Kluender (1998), Hofmeister and Sag (2010) and others, the combined syntactic and semantic processing effort required at this point may exceed the available cognitive resources, causing the original gap search to be suspended or abandoned and thus impeding the parser’s ability to link which book to its subcategorizer (the verb write) inside the RC when this is encountered. The results from several experimental studies have shown that the parser does indeed refrain from postulating wh-gaps inside syntactic islands (e.g. Felser, Cunnings, Batterham, and Clahsen 2012; Traxler and Pickering 1996). Processing explanations have also been proposed for another type of ungrammatical filler-gap dependency, weak crossover violations as in *Who_i did his_i mother criticize? (Alphonce 1998; Shan and Barker 2006).

Processing-based accounts of ungrammatical wh-extractions may help explain the high degree of variability in speakers’ judgments of subjacency violations, and the observation that subjacency effects are sensitive to pragmatic factors such as the referentiality of the wh-phrases involved (as noted e.g. by Pesetsky 1987). Frazier and Clifton (2002), for example, showed that filler-gap dependencies containing full or referential wh-phrases are easier to process than those containing wh-pronouns, and that a referential wh-phrase can improve the acceptability of sentences containing resumptive pronouns inside islands such as (14a, b).

(14) a. Which students did the teacher wonder if they had gone to the library?
     b. Who did the teacher wonder if they had gone to the library?

The authors argue that unlike fronted wh-pronouns, referential wh-phrases as in (14a) are integrated immediately into the current discourse representation and thus provide relatively more accessible antecedents for a gap or resumptive pronoun (compare also Ariel 2001).

While the idea of explaining subjacency effects in terms of parsing difficulty might seem intuitively rather plausible, the results from a reading-time study reported by Phillips (2006) cast some doubt on the hypothesis that island constraints are not grammaticalized (compare also Phillips 2013; Wagers and Phillips 2009). To put the processing hypothesis to the test, Phillips (2006) exploited the phenomenon of parasitic gaps (PGs), whose presence is contingent on the presence of a “real” wh-dependency and which may be grammatically licensed inside syntactic islands. His materials included sentences such as (15), which contains a subject island that provides a potential PG environment.

(15) The school superintendent learned which schools [NP the proposal to expand (*PG) drastically and innovatively upon the current curriculum ] would overburden …

In subject islands containing a non-finite verb, PGs are grammatically possible in principle − although in the case of (15), postulating a PG following the verb expand turns out later on to be an erroneous parsing decision. Phillips’ results show that PGs are indeed initially postulated in sentences such as (15), but not in structurally identical sentences containing a finite verb (i.e., the proposal that expanded …). This suggests that island constraints can be violated during processing in potential PG environments but not otherwise.
This kind of variability in the parser’s sensitivity to syntactic islands is not what we would expect to see, according to Phillips, if island constraints were merely reflexes of local processing difficulty, as processing-based accounts of subjacency do not differentiate between potential PG and non-PG environments in otherwise identical sentences.

Similar to current debates about the need for grammatical formalisms to include empty categories in their inventory, psycholinguists are divided as to whether gap-filling involves purely lexically-driven direct association (Pickering 1993; Pickering and Barry 1991) or is mediated syntactically by an empty category located in the filler’s canonical or base position (see e.g. Bever and McElree 1988; Gibson and Hickok 1993; Nicol and Swinney 1989). While these two competing hypotheses are difficult to dissociate empirically in verb-initial languages like English, evidence in support of the existence of wh-trace has been found in studies examining the processing of filler-gap dependencies in verb-final languages (Clahsen and Featherston 1999; Fiebach, Schlesewsky, and Friederici 2002; Miyamoto and Takahashi 2002; Nakano, Felser and Clahsen 2002 − see section 4.2.2 below). For English, the results from studies investigating indirect object dependencies (e.g. Nicol 1993), subject relative clauses (Lee 2004), or dependencies spanning more than one clause (e.g. Gibson and Warren 2004) also indicate that empty categories are mentally real in some sense. This conclusion is further corroborated by
evidence from ERPs showing that filled wh-gaps in ungrammatical sentences like *The zebra that the hippo kissed the camel on the nose ran far away initially elicit a brain response (an early left-anterior negativity, or ELAN) thought to index purely syntactic processes, with brain responses associated with lexical-semantic processing being delayed (Hestvik, Maxfield, Schwartz, and Shafer 2007).

4.2.2. Scrambling

Many languages permit flexible ordering of argument phrases, a phenomenon known as scrambling. While transformational accounts of scrambling typically assume that scrambling configurations are derived by syntactic movement, others have argued that scrambled word orders are base-generated (see Nemoto 1999, for a review). Most processing studies of scrambling have focused on short (i.e., clause-internal) scrambling, as illustrated by the Japanese example in (16b) (adapted from Miyamoto 2006: 257).

(16) a. John-ga   Mary-ni   ocha-o   dasita.   [Japanese]
        John-NOM  Mary-DAT  tea-ACC  served

     b. John-ga   ocha-o   Mary-ni   dasita.
        John-NOM  tea-ACC  Mary-DAT  served
        ‘John served tea to Mary.’

Several of these studies have found evidence that scrambling increases sentence processing difficulty compared to canonically ordered sentences such as (16a) (for review and discussion, see Miyamoto 2006; Sekerina 2003). This observation has been argued to support movement-based approaches to scrambling and the view that the verb phrase in free word-order languages is configurational (see also Clahsen and Featherston 1999, for German). Evidence for the configurationality of Japanese verb phrases also comes from a cross-modal lexical priming study reported by Nakano, Felser, and Clahsen (2002), who investigated the processing of long scrambling (i.e., scrambling across one or more clause boundaries) in sentences such as (17) below (where the symbol t, for trace, indicates the hypothesized preverbal gap).

(17) Suruto,   remon-o_i,  [S futarimeno hito-ga    shikaisha-ni,               [Japanese]
     and.then  lemon-ACC      second     person-NOM Master.of.Ceremonies-DAT
     [S sono  kodomo-ga  onnano hito-ni    t_i nedatteiru to]    kotaeta]
        that  child-NOM  female person-DAT     asking     COMP   answered
     Lit. ‘And then, a lemon, the second person answered to the Master of Ceremonies that that child was asking the woman for.’

The results showed that the scrambled object remon ‘lemon’ was mentally reactivated at its canonical structural position, i.e. at the offset of the embedded dative argument onnano hito-ni ‘female person’, although this effect was found to be restricted to participants who had also scored highly in a complementary working memory test. Considering the fact that the dependency between remon and its associated gap holds across two
clause boundaries, with a large number of new discourse referents to process in between, the finding that the processing of long scrambling sentences should be affected by individual working memory differences seems unsurprising. Like the results from other studies that indicate reactivation of scrambled constituents before the subcategorizing verb was received (Clahsen and Featherston 1999; Fiebach, Schlesewsky, and Friederici 2002; Miyamoto and Takahashi 2002), Nakano, Felser, and Clahsen’s results present a challenge for the hypothesis that gap filling should involve lexically-driven direct association only.

4.2.3. Summary

The above studies illustrate the potential relevance of language processing data for theoretical controversies surrounding non-canonical word order phenomena, including the question of whether or not filler-gap dependencies are mediated by syntactically represented placeholders or empty categories. Other displacement phenomena that have been investigated experimentally include topicalization (Bader and Frazier 2005; Felser, Clahsen, and Münte 2003), heavy NP shift (Staub, Clifton, and Frazier 2006), subject movement (Koizumi and Tamaoka 2010), verb movement (De Goede 2006), and covert quantifier raising (Koster-Moeller, Varvoutis, and Hackl 2007).

4.3. Processing anaphoric expressions

The ability to link anaphoric expressions quickly to their antecedents during processing is vital for successful sentence and discourse comprehension. This section provides a brief overview of processing studies of anaphoric dependencies that are thought to be mediated by hierarchical sentence structure, such as the two construal operations of binding and control, and certain types of ellipsis. The degree to which the interpretation of reflexive or pronominal anaphors is constrained by syntactic principles is controversial, however. Proposals range from the suggestion that coreference assignment is constrained by a small number of universal syntactic principles (Chomsky 1981) to the idea that binding is a matter of semantics and determined at conceptual structure (Culicover and Jackendoff 1995). A parallel controversy concerns the phenomenon known as control, which refers to the relationship between an understood argument and an argument noun phrase in a higher clause (see e.g. the collection of articles in Davies and Dubinsky 2006). Turning to language processing experiments might be useful here because experimental techniques such as ERPs, for example, have been claimed to be able to distinguish between structural and lexical-semantic processing (see Callahan 2008 for a review of neurophysiological studies of anaphor resolution). As we shall see below, data from reaction-time or eye-movement experiments can also help us determine what kind of constraints (syntactic, pragmatic) influence antecedent selection.

4.3.1. Anaphoric pronouns

Although the interpretation of anaphoric expressions is known to be affected by a variety of different structural and non-structural factors (see Nicol and Swinney 2003, for review), some experimental studies have focused specifically on the role of syntactic constraints on coreference assignment. According to traditional binding theory (Chomsky 1981), a reflexive (or reciprocal) anaphor must be bound in a local domain (= Principle A), whereas pronouns must be locally free (= Principle B). Several processing studies have shown that readers quickly link reflexives to their binding-theoretically appropriate antecedent during comprehension (Cunnings and Felser 2013; Harris, Wexler, and Holcomb 2000; Nicol and Swinney 1989; Sturt 2003; Xiang, Dillon, and Phillips 2009). Using the cross-modal lexical priming paradigm, Nicol and Swinney (1989) showed that the binding principles constrain anaphor resolution from the earliest stages of processing onwards. Upon hearing a reflexive in sentences such as (18) below, participants were found to mentally reactivate only the local antecedent (i.e. the doctor) but not any of the other noun phrase referents mentioned in the experimental sentences (i.e. the boxer or the skier).

(18) The boxer told the skier that the doctor for the team would blame himself (him) for the recent injury.

Conversely, for sentences containing a pronoun such as him, priming effects were observed only for words semantically related to boxer and skier, in accordance with Principle B. These findings support the binding-as-initial-filter hypothesis, according to which syntactic coreference constraints immediately exclude any structurally inappropriate antecedents from the candidate set.

The results from other studies, however, suggest that binding principles may be violable, at least during later stages of processing. Using the eye-movement monitoring technique, Sturt (2003) examined the time-course of processing reflexive anaphors in sentences such as (19) below, which were preceded by a lead-in sentence that introduced a potential competitor referent for the reflexive.
Four experimental conditions were created by manipulating stereotypical gender congruence with the local antecedent (the surgeon … himself/herself) and gender congruence with the non-local antecedent (he/she … himself).

(19) Jonathan (Jennifer) was pretty worried at the City Hospital. He (she) remembered that the surgeon had pricked himself (herself) with a used syringe needle.

The reflexive must be bound by the local antecedent, the surgeon, which denotes a stereotypically male profession. Using the feminine form herself thus creates a mismatch between the reflexive’s and the local antecedent’s stereotypical gender. The results showed that during the early stages of processing participants were sensitive only to local mismatches, corroborating previous claims to the effect that Principle A constrains the initial search for an antecedent. Later on during processing, however, the gender of the non-local competitor antecedent was also found to affect participants’ reading times, which indicates that pragmatically salient but binding-theoretically inappropriate antecedents may also be considered at later processing stages (compare also Badecker and Straub 2002). A second experiment reported by Sturt (2003) confirmed that participants’ early focusing on the binding-theoretically appropriate antecedent was not due to differences in the linear distance between the reflexive and the two potential antecedents (see also Xiang, Dillon, and Phillips 2009).

1892

VIII. The Cognitive Perspective

Studies of reflexive binding using ERPs (Harris, Wexler, and Holcomb 2000; Osterhout 1997; Osterhout and Mobley 1995) have found that Principle A violations elicit a P600 brain response believed to index syntactic processes, thus supporting approaches to reflexive binding that take the relationship between argument reflexives and their antecedents to be syntactic in nature. No P600 was observed in Harris, Wexler, and Holcomb’s study, however, for locality violations involving reflexives that were not coarguments of the verb, as in (20) below.

(20) The pilot’s mechanics brow-beat Paxton and themselves/*himself after the race.

While one should be careful about drawing any strong conclusions from the absence of an experimental effect, this observation is consistent with accounts of binding that treat non-argument reflexives as exempt from Principle A (compare e.g. Pollard and Sag 1992; Reinhart and Reuland 1993). Apparent violations of Principle A were also observed by Runner, Sussman, and Tanenhaus (2003, 2006), who used the visual world eye-movement paradigm to investigate the processing of reflexives in picture noun phrases containing possessors. When asked to manipulate dolls (e.g. Joe and Harry) while listening to instructions such as Have Joe touch Harry’s picture of himself, participants would frequently interpret himself as referring to Joe rather than to the more local antecedent Harry, contrary to what classical binding theory would lead us to expect. The authors point out that their findings might be reconciled with (some versions of) binding theory if we abandon the assumption that possessors and reflexives in picture noun phrases are coarguments.
Although fewer binding-theoretically inconsistent interpretations were observed for sentences containing non-reflexive pronouns, the results from Runner, Sussman, and Tanenhaus’s studies provide strong evidence that reflexives and pronouns are not always in complementary distribution, which presents a challenge for traditional syntactic accounts of binding. Experimental evidence furthermore indicates that the two alternative routes available for interpreting non-reflexive pronouns, variable binding and coreference assignment (compare e.g. Reuland 2001), might involve distinct mental processes. The results from an eye-movement study by Koornneef, Wijnen, and Reuland (2006) show that readers initially prefer to link ambiguous pronouns to a c-commanding variable-binding antecedent (i.e., every worker) even in contexts biased towards a competing coreference antecedent (i.e., Paul), as in (21).

(21) [ Every worker ]i who knew that Paulk was running out of energy, thought it was very nice that hei/k could go home early this afternoon.

This finding suggests that the syntactically mediated variable binding interpretation is computed independently of discourse properties (but cf. Frazier and Clifton 2000 for somewhat different findings).

54. Syntax and Language Processing

4.3.2. Cataphoric pronouns

Unlike reflexives or pronouns that are used anaphorically and thus require us to access syntactic or discourse representations that have already been built, cataphoric pronouns are thought to trigger a predictive search for a suitable referent. That the human parser seeks to resolve cataphoric pronouns as soon as possible is unsurprising given its general preference for shorter over longer dependencies, and has been demonstrated in a series of eye-movement monitoring experiments by Van Gompel and Liversedge (2003). Sentences containing cataphoric pronouns also provide a good test case for binding Principle C, which requires referring expressions such as proper names or full noun phrases to remain unbound. Kazanina, Lau, Lieberman, Yoshida, and Phillips (2007) report the results of a set of reading-time experiments designed to determine whether Principle C constrains online reference resolution for backwards anaphora such as the sentence-initial pronoun he in sentences like (22) below (from their Experiment 3).

(22) Hei/*k chatted amiably with some fans while [NP the talented, young quarterback ]k signed autographs for the kids, but Stevei wished the children’s charity event would end soon so he could go home.

In (22), coreference between the matrix subject he and the embedded subject headed by quarterback is precluded by binding Principle C. The authors manipulated both the stereotypical gender congruence between the pronoun and the head noun of the embedded subject (he/she … quarterback) and the pronoun’s hierarchical prominence. While coreference between the pronoun and the embedded subject NP is ruled out where the pronoun c-commands the latter (as in he/she chatted amiably …), coreference is permitted where it does not (as in his/her managers chatted amiably …). A gender mismatch effect was seen only where coreference assignment did not violate Principle C, with longer reading times on the embedded subject for the gender-incongruent condition her managers … quarterback than for the congruent one his managers … quarterback.
That is, participants tried to link the cataphoric pronoun to quarterback only in the absence of a c-command relationship between the two, suggesting that Principle C constrains online reference resolution.
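The role of c-command in this contrast can be illustrated with a minimal sketch. The path-based tree encoding and function names are my own assumptions, chosen for brevity: coreference with a following R-expression is excluded exactly when the pronoun c-commands it, distinguishing he chatted … from his managers chatted ….

```python
# Minimal sketch of the Principle C contrast in (22): coreference between
# a cataphoric pronoun and a following name is excluded iff the pronoun
# c-commands the name. Nodes are identified by their path from the root
# (a hypothetical encoding, for illustration only).

def dominates(a, b):
    # a dominates b iff a's path is a proper prefix of b's path
    return len(a) < len(b) and b[:len(a)] == a

def c_commands(a, b):
    # a c-commands b iff a's parent dominates b but a itself does not
    return dominates(a[:-1], b) and not dominates(a, b)

def coreference_possible(pronoun, name):
    return not c_commands(pronoun, name)  # Principle C

# "He chatted ... while the quarterback ...": pronoun is the matrix subject
he = (0,)
# "His managers chatted ...": pronoun is a possessor inside the subject NP
his = (0, 0)
quarterback = (1, 2, 1, 0)   # embedded inside the matrix VP

print(coreference_possible(he, quarterback))   # → False (ruled out)
print(coreference_possible(his, quarterback))  # → True  (permitted)
```

From the subject position, the pronoun's sister VP contains the name, so c-command holds; from inside the possessor NP it does not, which is where the experiments found the gender mismatch effect.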

4.3.3. Control versus raising

The understood subject of non-finite clauses in sentences such as (23a, b) below has been claimed to be a covert pronominal conventionally labelled PRO (Chomsky 1981). Transitive control verbs differ as to whether the designated controller is the subject (23a) or the object argument (23b) of the matrix verb.

(23) a. Maryi promised Johnk [ to PROi/*k keep quiet about the accident ].
     b. Maryi persuaded Johnk [ to PRO*i/k keep quiet about the accident ].

Several studies have examined the question of whether verb control information is used immediately during the processing of subjectless infinitives, or whether lexical requirements are (temporarily) overridden by the parser’s preference to keep dependencies as short as possible (compare also Rosenbaum’s 1979 Minimal Distance Principle). While the results from a speeded comprehension task reported in Frazier, Clifton, and Randall (1983) indicate that the use of verb control information might be delayed, several
more recent studies have found evidence for its immediate availability during online comprehension (Betancort, Carreiras, and Acuña-Fariña 2006; Betancort, Meseguer, and Carreiras 2004; Boland, Tanenhaus, Garnsey, and Carlson 1990; Demestre and García-Albea 2007, among others). Demestre and García-Albea, for instance, recorded native Spanish listeners’ brain responses to stimulus sentences containing either a subject (24) or an object control verb (25). Gender congruence between the controller of PRO and the embedded predicative adjective was manipulated as a diagnostic for testing whether or not the null subject would be linked immediately to its correct controller.

(24) a. Pedroi ha prometido a Maríak [PROi ser estrict-o con los alumnos]. [Spanish]
        Peter has promised to Mary PRO to.be strict-M with the students
     b. *Maríai ha prometido a Pedrok [PROi ser estrict-o con los alumnos].
        Mary has promised to Peter PRO to.be strict-M with the students
        ‘Peter/Mary has promised Mary/Peter to be strict (masc) with the students.’

(25) a. Pedroi ha aconsejado a Maríak [PROk ser educad-a con la gente]. [Spanish]
        Peter has advised to Mary PRO to.be polite-F with the people
     b. *Maríai ha aconsejado a Pedrok [PROk ser educad-a con la gente].
        Mary has advised to Peter PRO to.be polite-F with the people
        ‘Peter/Mary has advised Mary/Peter to be polite (fem) with people.’

The authors found that gender incongruence between the adjective and the controller of PRO elicited a P600-like effect in both subject control and object control items. As gender agreement is a local phenomenon involving a morphosyntactic dependency between the understood infinitival subject and its predicate, Demestre and García-Albea’s findings indicate that PRO is associated rapidly with the appropriate controller in the main clause, thereby inheriting the latter’s gender features. Where verb control information is unavailable, however, as in the processing of null subjects in head-final languages like Japanese, the parser may initially apply a locality-based default strategy (Sakamoto and Walenski 1998).
Syntactic theories disagree as to whether or not subject control sentences such as (26a) differ syntactically from subject raising sentences like (26b), even though both share the same surface structure.

(26) a. Maryi tried [ to PROi stay calm ]
     b. Maryi appeared [ to ti stay calm ]

Raising and control verbs differ in their semantic complexity, as only the latter assign a thematic role to their subject (i.e. to Mary). Traditional generative-transformational accounts claim that whereas subject control in (26a) involves a referential dependency
without movement or transmission of thematic roles, subject raising sentences such as (26b) are derived via syntactic movement (Chomsky 1981). Under this view, the empty categories conventionally labelled PRO and NP-trace are taken to be distinct types of element. This assumption has however been challenged by Hornstein (1999), who proposed that both (26a) and (26b) are derived by syntactic movement. In non-derivational syntactic frameworks such as Lexical-Functional Grammar (Bresnan 2001; Butt and King, article 25, this volume), on the other hand, the difference between raising and control is encoded in the verbs’ lexical representations only, with both raising and control dependencies involving the same relationship of functional control.
Language processing data may potentially be brought to bear on this theoretical controversy. While control verbs might be expected to give rise to increased semantic processing difficulty, compared to raising verbs, under any of the above accounts, only accounts which assume that an additional syntactic mechanism (such as movement) is involved in the computation of raising dependencies predict greater syntactic processing difficulty for raising compared to control sentences. Featherston, Gross, Münte, and Clahsen (2000) carried out an ERP study in German to examine the real-time processing of complex sentences such as (27), which contained either a subject raising or a subject control verb.

(27) Der Sheriff schien/hoffte, als die Witwe plötzlich in das Zimmer kam, [German]
     the sheriff seemed/hoped as the widow suddenly into the room came
     [ ___ den Täter endlich verurteilen zu können ].
     the offender at.last sentence to can
     ‘The sheriff seemed/hoped, as the widow suddenly came into the room, to be able to sentence the offender at last.’

A larger P600 response was observed at the gap position in the raising condition than in the control condition, which the authors suggest might reflect the increased processing cost associated with having to form a movement chain in the raising but not in the control condition. Increased processing difficulty for raising compared to control sentences was also observed by Batterham (2009) in an eye-movement monitoring experiment in English. Somewhat different results were obtained in Walenski’s (2002) study of English control and raising structures, however. Unlike Featherston, Gross, Münte, and Clahsen (2000) and Batterham (2009), Walenski found evidence for increased processing difficulty in the control condition compared to the raising condition in a cross-modal lexical priming experiment, which he attributed to the greater semantic processing demands involved in computing control dependencies, and to differences in sentence complement expectancy. In summary, even though they may not provide any definitive answers to the question of whether or not subject control and raising are syntactically as well as semantically distinct, Featherston, Gross, Münte, and Clahsen’s (2000) and Batterham’s (2009) results support theoretical accounts according to which the computation of raising sentences involves an extra syntactic mechanism.
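The agreement diagnostic used in (24) and (25) can be rendered as a toy procedure. The control lexicon, feature encoding and function names are my own illustrative assumptions: PRO is linked to the controller designated by the matrix verb, and that controller's gender is then checked against the embedded adjective, a mismatch being what the P600-like effect indexed.

```python
# Toy sketch of the agreement diagnostic in (24)-(25): PRO is linked to
# the controller designated by the matrix verb, and the controller's
# gender is checked against the embedded predicative adjective.
# Control properties and feature values are illustrative assumptions.

CONTROL = {"prometido": "subject", "aconsejado": "object"}  # promise / advise

def agreement_ok(subject, obj, verb, adjective_gender):
    # link PRO to the verb's designated controller ...
    controller = subject if CONTROL[verb] == "subject" else obj
    # ... and check gender congruence with the adjective
    return controller["gender"] == adjective_gender

pedro = {"name": "Pedro", "gender": "M"}
maria = {"name": "María", "gender": "F"}

# (24a) Pedro ha prometido a María [PRO ser estrict-o ...]: congruent
print(agreement_ok(pedro, maria, "prometido", "M"))   # → True
# (24b) *María ha prometido a Pedro [PRO ser estrict-o ...]: mismatch
print(agreement_ok(maria, pedro, "prometido", "M"))   # → False
```

The same check with "aconsejado" picks out the object María, so the feminine adjective in (25a) comes out congruent, matching the object control pattern.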


4.3.4. Ellipsis

Ellipsis phenomena also involve anaphoric dependencies in the sense that the missing constituent must be recovered from the context. Results from ERPs indicate that the parser identifies and fills verb gaps in sentences such as Ron took the planks, and Bill __ the hammer immediately during processing (Kaan, Wijnen, and Swaab 2004). The observation that ambiguous gapping sentences such as (28) below show a preference for an object (28a) over a subject reading (28b) has been argued to support the hypothesis that gapping is a non-uniform phenomenon (Carlson, Dickey, and Kennedy 2005).

(28) Somehow, Bob insulted the guests during dinner and Sam during the dance.
     a. Bob insulted Sam during the dance.
     b. Sam insulted the guests during the dance.

Carlson, Dickey, and Kennedy argue that the object reading can be derived from a structure involving simple VP-coordination plus across-the-board extraction of the verb insulted, as indicated in (29a) below, whereas the subject reading requires a genuine gapping structure that involves clausal conjunction, as in (29b).

(29) a. … and [VP tinsulted Sam during the dance ].
     b. … and [IP Sam [VP insulted the guests ] during the dance ].

The authors further assume that in (29b), the entire IP is deleted from the second conjunct following remnant scrambling of the subject Sam and the prepositional phrase during the dance to IP-external positions. Even without these additional assumptions, though, the observed processing preference for the object reading can be argued to follow from the fact that it involves the simpler structural analysis, in accordance with principles of parsing economy such as Minimal Attachment. Other ellipsis phenomena that have been studied experimentally include VP-ellipsis (30a), antecedent-contained deletion (30b), and sluicing (30c).

(30) a. Martin smiled and Angela did __ too.
     b. Nancy read every book that Karen did __.
     c. Andrew shouted something but I could not hear what __.

In the generative-transformational tradition it is usually assumed that the syntactic structure of the antecedent and of the elided constituent must match at Logical Form, whereas others have argued that ellipsis only requires establishing a semantic or pragmatic link to the antecedent (Hardt 1999, among others). Processing evidence supporting the assumption that at least part of the elided constituent’s syntactic structure is recovered at the ellipsis site − either by a copying mechanism or by some form of structure-sharing − has been provided by a number of studies, including Arregui, Clifton, Frazier, and Moulton (2006), Frazier and Clifton (2001, 2005), Koster-Moeller, Varvoutis, and Hackl (2007), Shapiro and Hestvik (1995), and Shapiro, Hestvik, Lesan, and Garcia (2003).
Shapiro, Hestvik, Lesan, and Garcia (2003) carried out a cross-modal priming study to examine the processing of VP-ellipsis structures such as (31). Although structures of
this type are normally ambiguous between a sloppy reading, in which the interpretation of an elided anaphor is determined by the antecedent in the local clause, and a strict reading, where its interpretation is fixed, the use of strongly biased verbs such as the inherently reflexive verb behave effectively precluded the strict reading here.

(31) The violinist who was usually temperamental behaved herself, and her housekeeper, who had just turned 57, did __ too, according to guests that attended the party.
     a. The housekeeper behaved herself. (sloppy reading)
     b. *The housekeeper behaved the violinist. (strict reading)

The authors found that both potential antecedents for the reflexive (i.e., violinist and housekeeper) were reactivated at the ellipsis site, suggesting that all potential antecedents for elided anaphors are mentally reconstructed at VP gaps, and that the parser initially ignores lexical constraints on the interpretation of covert reflexives as in (31).
Frazier and Clifton (2005) carried out a set of experiments investigating different types of ellipsis phenomena including sluicing, which is thought to involve an elided IP whose contents must be recovered from an antecedent clause (see also Poirier, Wolfinger, Spellman, and Shapiro 2010). Manipulating the distance between the ellipsis site and the antecedent clause, as well as the latter’s structural complexity, as shown in (32a−d), allowed the authors to examine whether or not structural properties of the elided IP would be recovered at the ellipsis site.

(32) a. Michael slept and studied but he didn’t tell me what. (VP, near conjunct)
     b. Michael studied and slept but he didn’t tell me what. (VP, far conjunct)
     c. Michael slept and he studied but he didn’t tell me what. (S, near conjunct)
     d. Michael studied and he slept but he didn’t tell me what. (S, far conjunct)

Participants found sentences containing a near antecedent more acceptable and easier to process than those containing a distant antecedent, which suggests that some syntactic structure must be recovered at the ellipsis site following what. Differences in the antecedent’s complexity, on the other hand, only affected participants’ offline acceptability judgments but not their online processing of sentences such as those in (32). Along with other experimental results showing that the speed of interpreting VP-ellipsis is not affected by the antecedent’s length or internal complexity, this observation could be taken to argue against full structure-copying and in favor of some kind of structure-sharing mechanism being involved in the processing of ellipsis (Martin and McElree 2008).
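The contrast between the two recovery mechanisms can be sketched in a few lines. The dictionary encoding of a VP and the function names are my own illustrative assumptions: only full copying scales with the antecedent's internal complexity, whereas sharing installs a single pointer regardless of antecedent size.

```python
# Illustrative contrast between two mechanisms for interpreting an
# ellipsis site: copying the antecedent VP's structure versus sharing
# it via a reference (a toy encoding, for exposition only).
import copy

def resolve_by_copying(ellipsis_site, antecedent_vp):
    # reconstructs the whole antecedent: cost grows with its size
    ellipsis_site["content"] = copy.deepcopy(antecedent_vp)
    return ellipsis_site

def resolve_by_sharing(ellipsis_site, antecedent_vp):
    # installs a single pointer: cost independent of antecedent size
    ellipsis_site["content"] = antecedent_vp
    return ellipsis_site

vp = {"V": "studied", "mods": [{"P": "in"}, {"NP": "the library"}]}
shared = resolve_by_sharing({"content": None}, vp)
copied = resolve_by_copying({"content": None}, vp)
print(shared["content"] is vp)                                  # → True
print(copied["content"] == vp, copied["content"] is vp)         # → True False
```

The finding that interpretation speed is insensitive to antecedent complexity is what favors the pointer-like, structure-sharing variant.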

4.4. Summary

As Phillips and Wagers (2007: 742) point out, it would be unrealistic for linguists to expect “that psycholinguistics will provide psychological validation of linguistic models”. Notwithstanding this important caveat, however, the experimental findings reported above demonstrate that language processing data can indeed bear on theoretical linguistic issues, and that testable predictions for language processing research can be
derived from linguistic claims or controversies. The overview presented in this section is by no means comprehensive, though, as lack of space prevents me from discussing the full range of syntactic phenomena that have thus far been investigated from a processing perspective.

5. Bridging the gap between syntactic and processing theory

Jackendoff (2007b) considers the integration of linguistics with psycholinguistics to be one of the major challenges facing the field of linguistics today, pointing out that the formal structure building algorithms assumed in mainstream linguistics are incompatible with basic assumptions about human language processing, such as the fact that structure building proceeds incrementally from left to right during language comprehension (see also Ferreira 2005; Phillips and Lewis 2013, among others). Jackendoff (2007b: 257) notes that theoreticians “often say that the order of derivation and the movement of constituents is somehow ‘metaphorical’, with no processing implications, and they leave the connection between the metaphor and reality a mystery”. Non-derivational (or equative) grammar frameworks − which, by definition, do not impose any specific order at all on computations − are not necessarily more compatible with real-time human language processing performance, however. As Phillips and Lewis (2013: 15) point out, in both bottom-up derivational and non-derivational syntactic frameworks, grammars are typically viewed as “implementation independent” systems that characterize the grammatically well-formed sentences of a given language, rather than as “procedural systems that can be understood in terms of actual real-time mental computations”. For grammatical theories or models to provide psycholinguistically plausible algorithms for how sentences are assembled during human language processing, they need to be incremental, predictive, and build structures that are connected.
Although non-derivational models of grammar may be more compatible with left-to-right incrementality than bottom-up derivational ones, Demberg and Keller (2009: 1889) point out that “full connectivity cannot be achieved using canonical linguistic structures as assumed in standard grammar formalisms such as CFG, CCG, TAG, LFG, or HPSG”. With a growing number of theoreticians now recognizing the desirability of processing-compatible grammars, several proposals for models of grammar that are more compatible with human performance have started to emerge. Jackendoff (2007a), for example, advocates an overarching Parallel Architecture framework which takes phonology, syntax and semantics to be independent but interconnected processing components. Others have explored further the extent to which lexicalist formalisms such as Combinatory Categorial Grammar, Lexical-Functional Grammar or Head-Driven Phrase Structure Grammar might be embedded within psycholinguistically plausible processing models (see Sag and Wasow 2011, for discussion). Some of those working in the transformational-generative tradition have tried to reinterpret the minimalist bottom-up, right-to-left structure generation algorithm as a left-to-right parsing algorithm (Fong 2005; Phillips 1996, 2003; Weinberg 2001, among others). Other potentially promising approaches incorporating left-to-right incrementality include variants of Tree-Adjoining Grammar such as that proposed by Demberg and Keller (2009), and Cann, Kempson, and Marten’s (2005) unification-based Dynamic
Syntax framework. Assuming that testable predictions about human language processing can be derived from them, future research will show which, if any, of the above proposals are most compatible with the way sentence representations are created during language use.
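What it means for structure building to be incremental, predictive, and connected can be illustrated with a deliberately tiny top-down sketch. The grammar, lexicon and function names are my own toy assumptions; none of the frameworks cited works in exactly this way. Each word is integrated as it arrives, and the stack of as-yet-unfulfilled category predictions constitutes the parser's single connected partial analysis.

```python
# Toy sketch of incremental, predictive structure building (hypothetical
# grammar and lexicon, for illustration only). Each nonterminal has a
# single expansion here, sidestepping ambiguity and backtracking.

GRAMMAR = {
    "S":  ["NP", "VP"],
    "NP": ["Det", "N"],
    "VP": ["V", "NP"],
}
LEXICON = {"the": "Det", "sheriff": "N", "offender": "N", "sentenced": "V"}

def parse_incrementally(words):
    stack = ["S"]          # initial prediction: a sentence is coming
    states = []
    for word in words:
        # expand predicted nonterminals top-down until a lexical
        # category is at the top of the stack
        while stack and stack[-1] in GRAMMAR:
            stack.extend(reversed(GRAMMAR[stack.pop()]))
        expected = stack.pop() if stack else None
        if expected != LEXICON[word]:
            raise ValueError(f"unexpected word: {word}")
        states.append(list(stack))  # open predictions after this word
    return states

states = parse_incrementally("the sheriff sentenced the offender".split())
print(states)  # → [['VP', 'N'], ['VP'], ['NP'], ['N'], []]
```

After "the", for instance, the parser is already committed to a noun and a verb phrase; the empty final state signals a complete analysis. Making such a scheme cope with ambiguity, prediction costs and reanalysis is precisely where the frameworks discussed above diverge.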

6. Conclusion

This chapter illustrates how data from language processing experiments may be brought to bear on theoretical linguistic issues. It also calls for a better integration of theoretical linguistics and experimental psycholinguistics, as both fields are in fact pursuing very similar goals. Dismissing language processing data as irrelevant to the concerns of theoretical linguistics is a mistake that deprives the field of a rich and potentially very useful data source. Arguing that theoretical linguistics is concerned with competence rather than performance is not a valid objection given that it is unclear to what extent the two really are separable, and in view of the fact that all linguistic data are, after all, performance data of some sort or other. The experimental study of language is a rapidly expanding area of research whose findings may have implications not only for syntactic theory but also for theories of language acquisition and breakdown. Although the vast majority of sentence processing studies have examined adult monolingual speakers, a steadily growing body of research has investigated grammatical processing in children or non-native speakers (see Clahsen and Felser 2006, for a review). Other recent developments in language processing research include the study of language production (Branigan 2007; Ferreira and Engelhardt 2006), a greater emphasis on cross-linguistic studies, a growing number of studies investigating language processing in language-impaired populations, and the use of experimental techniques borrowed from neuroscience, including brain imaging (compare also Garrett 2007). Getting more linguists involved in these developments will benefit both experimental psycholinguistic research and theoretical linguistics.

Acknowledgement

I am grateful to Oliver Boxell, Harald Clahsen, Tibor Kiss and an anonymous reviewer for helpful comments on an earlier version of this manuscript.

7. References (selected)

Alphonce, Carl 1998 A processing account of weak crossover. In: Lisa Chang, Elizabeth Currie and Kimary Shahin (eds.), Proceedings of the North West Linguistics Conference 13, 255−274. Department of Linguistics, University of British Columbia.
Anton-Mendez, Inez, Janet L. Nicol, and Merrill F. Garrett 2002 The relation between gender and number agreement processing. Syntax 5: 1−25.


Aoshima, Sachiko, Colin Phillips, and Amy Weinberg 2004 Processing filler-gap dependencies in a head-final language. Journal of Memory and Language 51: 23−54.
Ariel, Mira 2001 Accessibility theory: An overview. In: Ted Sanders, Joost Schliperoord and Wilbert Spooren (eds.), Text Representation, 29−87. Amsterdam: John Benjamins.
Arregui, Ana, Charles Clifton Jr., Lyn Frazier, and Keir Moulton 2006 Processing elided verb phrases with flawed antecedents: The recycling hypothesis. Journal of Memory and Language 55: 232−246.
Badecker, William, and Kathleen Straub 2002 The processing role of structural constraints on the interpretation of pronouns and anaphors. Journal of Experimental Psychology: Learning, Memory, and Cognition 28: 748−769.
Bader, Markus, and Lyn Frazier 2005 Interpretation of leftward-moved constituents: Processing topicalizations in German. Linguistics 43: 49−88.
Bader, Markus, and Josef Bayer 2006 Case and Linking in Language Comprehension − Evidence from German. Heidelberg: Springer.
Bader, Markus, and Ingeborg Lasser 1994 German verb-final clauses and sentence processing: Evidence for immediate attachment. In: Charles Clifton Jr., Lyn Frazier and Keith Rayner (eds.), Perspectives on Sentence Processing, 225−242. Hillsdale, NJ: Lawrence Erlbaum Associates.
Barber, Horacio, and Manuel Carreiras 2005 Grammatical gender and number agreement in Spanish: An ERP comparison. Journal of Cognitive Neuroscience 17: 137−153.
Batterham, Claire 2009 Constraints on covert anaphora in sentence processing: An investigation of control, raising and wh-dependencies. PhD dissertation, Department of Language and Linguistics, University of Essex.
Bayer, Josef, Markus Bader, and Michael Meng 2001 Morphological underspecification meets oblique case: Syntactic and processing effects in German. Lingua 111: 465−514.
Berwick, Robert, and Amy Weinberg 1984 The Grammatical Basis of Linguistic Performance. Cambridge, MA: MIT Press.
Betancort, Moises, Manuel Carreiras, and Carlos Acuña-Fariña 2006 Processing controlled PROs in Spanish. Cognition 100: 217−282.
Betancort, Moises, Enrique Meseguer, and Manuel Carreiras 2004 The empty category PRO: Processing what cannot be seen. In: Manuel Carreiras and Charles Clifton Jr. (eds.), The On-line Study of Sentence Comprehension, 95−118. Hove: Psychology Press.
Bever, Thomas G. 1970 The cognitive basis for linguistic structures. In: J. R. Hayes (ed.), Cognition and the Development of Language, 279−352. New York: Wiley.
Bever, Thomas G., and Brian McElree 1988 Empty categories access their antecedents during comprehension. Linguistic Inquiry 19: 35−43.
Bever, Thomas G., and Montserrat Sanz 1997 Empty categories access their antecedents during comprehension: Unaccusatives in Spanish. Linguistic Inquiry 28: 69−91.
Bock, Kathryn, and Carol A. Miller 1991 Broken agreement. Cognitive Psychology 23: 45−93.


Boland, Julie E., Michael K. Tanenhaus, Susan M. Garnsey, and Gregory N. Carlson 1995 Verb argument structure in parsing and interpretation: Evidence from wh-questions. Journal of Memory and Language 34: 774−806.
Bornkessel-Schlesewsky, Ina, and Angela D. Friederici 2007 Neuroimaging studies of sentence and discourse comprehension. In: Gareth Gaskell (ed.), The Oxford Handbook of Psycholinguistics, 407−424. Oxford: Oxford University Press.
Branigan, Holly 2007 Syntactic priming. Language and Linguistics Compass 1(1−2): 1−16.
Bresnan, Joan 2001 Lexical-Functional Syntax. Oxford: Blackwell.
Bresnan, Joan, and Marilyn Ford 2010 Predicting syntax: Processing dative constructions in American and Australian varieties of English. Language 86: 186−213.
Burkhardt, Petra, Gisbert Fanselow, and Matthias Schlesewsky 2007 Effects of (in)transitivity on structure building. Brain Research 1163: 100−110.
Cacciari, Cristina, and Patrizia Tabossi (eds.) 1993 Idioms: Processing, Structure, and Interpretation. Hillsdale, NJ: Lawrence Erlbaum Associates.
Callahan, Sarah M. 2008 Processing anaphoric constructions: Insights from electrophysiological studies. Journal of Neurolinguistics 21: 231−266.
Cann, Ronnie, Ruth Kempson, and Lutz Marten 2005 The Dynamics of Language: An Introduction. Oxford: Elsevier.
Carlson, Katy, Michael Walsh Dickey, and Christopher Kennedy 2005 Structural economy in the processing and representation of gapping sentences. Syntax 8: 208−228.
Carminati, Maria N. 2005 Processing reflexes of feature hierarchy (person > number > gender) and implications for linguistic theory. Lingua 115: 259−285.
Carreiras, Manuel, and Charles Clifton Jr. (eds.) 2004 The On-line Study of Sentence Comprehension. Hove: Psychology Press.
Chomsky, Noam 1957 Syntactic Structures. The Hague: Mouton.
Chomsky, Noam 1965 Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Chomsky, Noam 1981 Lectures on Government and Binding. Dordrecht: Foris.
Chomsky, Noam 1995 The Minimalist Program. Cambridge, MA: MIT Press.
Chomsky, Noam 2001 Derivation by phase. In: Michael Kenstowicz (ed.), Ken Hale: A Life in Language, 1−52. Cambridge, MA: MIT Press.
Chomsky, Noam 2007 Of minds and language. Biolinguistics 1: 9−27.
Christianson, Kiel, Andrew Hollingworth, John F. Halliwell, and Fernanda Ferreira 2001 Thematic roles assigned along the garden path linger. Cognitive Psychology 42: 368−407.
Clahsen, Harald, and Samuel Featherston 1999 Antecedent priming at trace positions: evidence from German scrambling. Journal of Psycholinguistic Research 28: 415−437.
Clahsen, Harald, and Claudia Felser 2006 Grammatical processing in language learners. Applied Psycholinguistics 27: 3−42.


Clifton Jr., Charles, Matthew J. Traxler, Mohamed Taha Mohamed, Rihana S. Williams, Robin K. Morris, and Keith Rayner 2003 The use of thematic role information in parsing: Syntactic processing autonomy revisited. Journal of Memory and Language 49: 317−334.
Cowart, Wayne 1997 Experimental Syntax: Applying Objective Methods to Sentence Judgments. Thousand Oaks, CA: Sage Publications.
Cowart, Wayne, and Helen Cairns 1987 Evidence for an anaphoric mechanism within syntactic processing: Some reference relations defy semantic and pragmatic constraints. Memory and Cognition 15: 318−331.
Crocker, Matthew W. 1996 Computational Psycholinguistics. Dordrecht: Kluwer.
Cuetos, Fernando, and Donald C. Mitchell 1988 Cross-linguistic differences in parsing: restrictions on the use of the Late Closure strategy in Spanish. Cognition 30: 73−105.
Cuetos, Fernando, Donald C. Mitchell, and Martin Corley 1996 Parsing in different languages. In: Manuel Carreiras, Jose E. Garcia-Albea and Nuria Sebastian-Galles (eds.), Language Processing in Spanish, 145−187. Mahwah, NJ: Lawrence Erlbaum Associates.
Culicover, Peter W. 2005 Linguistics, cognitive science, and all that jazz. Linguistic Review 22: 227−248.
Culicover, Peter W., and Ray Jackendoff 1995 Something else for the binding theory. Linguistic Inquiry 26: 249−275.
Cunnings, Ian, and Harald Clahsen 2007 The time-course of morphological constraints: Evidence from eye movements during reading. Cognition 104: 467−494.
Cunnings, Ian, and Claudia Felser 2013 The role of working memory in the processing of reflexives. Language and Cognitive Processes 28: 188−219.
Davies, William D., and Stanley Dubinsky (eds.) 2006 Special issue of Syntax 9.2 featuring articles based on a symposium at the 2005 LSA annual meeting, “New Horizons in the Grammar of Raising and Control”.
De Goede, Dieuwke 2006 Verbs in spoken sentence processing: Unraveling the activation pattern of the matrix verb. Doctoral dissertation, University of Groningen.
De Vincenzi, Marica 1991 Syntactic Parsing Strategies in Italian. Dordrecht: Kluwer.
Demberg, Vera, and Frank Keller 2009 Computational model of prediction in human parsing: Unifying locality and surprisal effects. In: Niels Taatgen and Hedderik van Rijn (eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society, 1888−1893. Amsterdam.
Demestre, Josep, and Jose E. Garcia-Albea 2007 ERP evidence for the rapid assignment of an (appropriate) antecedent to PRO. Cognitive Science 31: 343−354.
Dijkstra, Anton, and Koenraad de Smedt (eds.) 1996 Computational Psycholinguistics: AI and Connectionist Models of Human Language Processing. London: Taylor and Francis.
Featherston, Samuel 2005 Magnitude estimation and what it can do for your syntax. Lingua 115: 1277−1302.
Featherston, Samuel, Matthias Gross, Thomas F. Münte, and Harald Clahsen 2000 Brain potentials in the processing of complex sentences: An ERP study of control and raising constructions. Journal of Psycholinguistic Research 29: 141−154.

54. Syntax and Language Processing


Felser, Claudia, Harald Clahsen, and Thomas F. Münte 2003 Storage and integration in the processing of filler-gap dependencies: An ERP study of topicalization and wh-movement in German. Brain and Language 87: 345−354.
Felser, Claudia, Ian Cunnings, Claire Batterham, and Harald Clahsen 2012 The timing of island effects in nonnative sentence processing. Studies in Second Language Acquisition 34: 67−98.
Ferreira, Fernanda 2005 Psycholinguistics, formal grammars, and cognitive science. The Linguistic Review 22: 365−380.
Ferreira, Fernanda, and Paul E. Engelhardt 2006 Syntax and production. In: Matthew J. Traxler and Morton A. Gernsbacher (eds.), Handbook of Psycholinguistics, 2nd Edition, 61−91. Oxford: Elsevier.
Ferreira, Fernanda, and John M. Henderson 1990 Use of verb information in syntactic parsing: Evidence from eye movements and word-by-word self-paced reading. Journal of Experimental Psychology: Learning, Memory, and Cognition 16: 555−568.
Ferreira, Fernanda, and Nikole D. Patson 2007 The good enough approach to language comprehension. Language and Linguistics Compass 1: 71−83.
Fiebach, Christian, Matthias Schlesewsky, and Angela D. Friederici 2002 Separating syntactic memory costs and syntactic integration costs during parsing: The processing of German wh-questions. Journal of Memory and Language 47: 250−272.
Fodor, Jerry A., Thomas G. Bever, and Merrill F. Garrett 1974 The Psychology of Language. New York: McGraw Hill.
Fodor, Janet D., and Fernanda Ferreira (eds.) 1998 Reanalysis in Sentence Processing. Dordrecht: Kluwer.
Fong, Sandiway 2005 Computation with probes and goals: A parsing perspective. In: Anna Maria Di Sciullo and Rodolfo Delmonte (eds.), UG and External Systems, 311−334. Amsterdam: John Benjamins.
Franck, Julie, Glenda Lassi, Ulrich H. Frauenfelder, and Luigi Rizzi 2006 Agreement and movement: A syntactic analysis of attraction. Cognition 101: 173−216.
Franck, Julie, Gabriela Soare, Ulrich H. Frauenfelder, and Luigi Rizzi 2010 Object interference in subject−verb agreement: The role of intermediate traces of movement. Journal of Memory and Language 62: 166−182.
Frazier, Lyn 1979 On comprehending sentences: Syntactic parsing strategies. Doctoral dissertation, University of Connecticut (reproduced by the Indiana University Linguistics Club).
Frazier, Lyn, and Charles Clifton Jr. 1989 Successive cyclicity in the grammar and the parser. Language and Cognitive Processes 4: 93−126.
Frazier, Lyn, and Charles Clifton Jr. 1996 Construal. Cambridge, MA: MIT Press.
Frazier, Lyn, and Charles Clifton Jr. 2000 On bound variable interpretations: The LF-only hypothesis. Journal of Psycholinguistic Research 29: 125−139.
Frazier, Lyn, and Charles Clifton Jr. 2001 Parsing coordinates and ellipsis: Copy alpha. Syntax 4: 1−22.
Frazier, Lyn, and Charles Clifton Jr. 2002 Processing ‘d-linked’ phrases. Journal of Psycholinguistic Research 31: 633−660.
Frazier, Lyn, and Charles Clifton Jr. 2005 The syntax-discourse divide: Processing ellipsis. Syntax 8: 121−174.


Frazier, Lyn, and Janet D. Fodor 1978 The sausage machine: A new two-stage parsing model. Cognition 6: 291−325.
Frazier, Lyn, Charles Clifton Jr., and Janet Randall 1983 Filling gaps: Decision principles and structure in sentence comprehension. Cognition 13: 187−222.
Frazier, Lyn, Giovanni B. Flores d’Arcais, and Riet Coolen 1993 Processing discontinuous words: On the interface between lexical and syntactic processing. Cognition 47: 219−249.
Friederici, Angela D. 2002 Towards a neural basis of auditory sentence processing. Trends in Cognitive Sciences 6: 78−84.
Friederici, Angela D., and Stefan Frisch 2000 Verb argument structure processing: the role of verb-specific and argument-specific information. Journal of Memory and Language 43: 476−507.
Friedmann, Naama, Gina Taranto, Lewis P. Shapiro, and David Swinney 2008 The leaf fell (the leaf): The online processing of unaccusatives. Linguistic Inquiry 39: 355−377.
Frisch, Stefan, and Matthias Schlesewsky 2001 The N400 indicates problems of thematic hierarchising. Neuroreport 12: 3391−3394.
Garrett, Merrill F. 2007 Thinking across boundaries: psycholinguistic perspectives. In: Gareth Gaskell (ed.), The Oxford Handbook of Psycholinguistics, 805−820. Oxford: Oxford University Press.
Geurts, Bart 2007 Neurocognition of language: good, bad, and bogus. Unpublished manuscript, University of Nijmegen.
Gibson, Edward 1992 On the adequacy of the Competition Model. Language 68: 812−830.
Gibson, Edward 1998 Linguistic complexity: locality of syntactic dependencies. Cognition 68: 1−76.
Gibson, Edward 2000 The dependency locality theory: A distance-based theory of linguistic complexity. In: Alec Marantz, Yasushi Miyashita and Wayne O’Neil (eds.), Image, Language, Brain, 95−125. Cambridge, MA: MIT Press.
Gibson, Edward, and Evelina Fedorenko 2010 Weak quantitative standards in linguistics research. Trends in Cognitive Sciences 14: 233−234.
Gibson, Edward, and Gregory Hickok 1993 Sentence processing with empty categories. Language and Cognitive Processes 8: 147−161.
Gibson, Edward, and Tessa Warren 2004 Reading-time evidence for intermediate linguistic structure in long-distance dependencies. Syntax 7: 55−78.
Gibson, Edward, Neal J. Pearlmutter, Enriqueta Canseco Gonzalez, and Gregory Hickok 1996 Recency preference in the human sentence processing mechanism. Cognition 59: 23−59.
Gorrell, Paul 1995 Syntax and Parsing. Cambridge: Cambridge University Press.
Greenberg, Joseph H. 1963 Some universals of grammar with particular reference to the order of meaningful elements. In: Joseph H. Greenberg (ed.), Universals of Language, 73−113. Cambridge, MA: MIT Press.
Hagoort, Peter, and Colin M. Brown 1999 Gender electrified: ERP evidence on the syntactic nature of gender processing. Journal of Psycholinguistic Research 28: 715−728.


Hardt, Daniel 1999 Dynamic interpretation of verb phrase ellipsis. Linguistics and Philosophy 22: 187−221.
Harris, Tony, Kenneth Wexler, and Phillip J. Holcomb 2000 An ERP investigation of binding and coreference. Brain and Language 75: 313−346.
Hawkins, John A. 1994 A Performance Theory of Order and Constituency. Cambridge: Cambridge University Press.
Hawkins, John A. 2004 Efficiency and Complexity in Grammars. Oxford: Oxford University Press.
Hemforth, Barbara, Lars Konieczny, and Christoph Scheepers 2000 Syntactic attachment and anaphor resolution: Two sides of relative clause attachment. In: Matthew Crocker, Martin Pickering and Charles Clifton (eds.), Architectures and Mechanisms for Language Processing, 259−282. Cambridge: Cambridge University Press.
Hestvik, Arild, Nathan Maxfield, Richard G. Schwartz, and Valerie Shafer 2007 Brain responses to filled gaps. Brain and Language 100: 301−316.
Hofmeister, Philip, T. Florian Jaeger, Ivan A. Sag, Inbal Arnon, and Neal Snider 2007 Locality and accessibility in wh-questions. In: Samuel Featherston and Wolfgang Sternefeld (eds.), Roots: Linguistics in Search of its Evidential Base, 185−206. Berlin: de Gruyter.
Hofmeister, Philip, and Ivan A. Sag 2010 Cognitive constraints and island effects. Language 86: 366−415.
Hornstein, Norbert 1999 Movement and control. Linguistic Inquiry 30: 69−96.
Hyönä, Jukka, and Heli Hujanen 1997 Effects of case marking and word order on sentence parsing in Finnish: An eye fixation analysis. Quarterly Journal of Experimental Psychology A 50: 841−858.
Jackendoff, Ray 2007a A Parallel Architecture perspective on language processing. Brain Research 1146: 2−22.
Jackendoff, Ray 2007b A whole lot of challenges for linguistics. Journal of English Linguistics 35: 253−262.
Joshi, Aravind, Leon S. Levy, and Masako Takahashi 1975 Tree adjunct grammar. Journal of Computer and System Sciences 10: 136−163.
Kaan, Edith 2007 Event-related potentials and language processing. A brief introduction. Language and Linguistics Compass 1: 571−591.
Kaan, Edith, Frank Wijnen, and Tamara Y. Swaab 2004 Gapping: Electrophysiological evidence for immediate processing of “missing” verbs in sentence comprehension. Brain and Language 89: 584−592.
Kamide, Yuki 2006 Incrementality in Japanese sentence processing. In: Mineharu Nakayama, Reiko Mazuka and Yasuhiro Shirai (eds.), The Handbook of East Asian Psycholinguistics, Vol. II: Japanese, 249−256. Cambridge: Cambridge University Press.
Kamide, Yuki 2008 Anticipatory processes in sentence processing. Language and Linguistics Compass 2: 647−670.
Kazanina, Nina, Ellen F. Lau, Moti Lieberman, Masaya Yoshida, and Colin Phillips 2007 The effect of syntactic constraints on the processing of backwards anaphora. Journal of Memory and Language 56: 384−409.
Kennison, Shelia M. 2002 Comprehending noun phrase arguments and adjuncts. Journal of Psycholinguistic Research 31: 65−81.


Kimball, John 1973 Seven principles of surface structure parsing in natural language. Cognition 2: 15−47.
Kluender, Robert 1998 On the distinction between strong and weak islands: a processing perspective. In: Peter Culicover and Louise McNally (eds.), Syntax and Semantics 29: The Limits of Syntax, 241−279. San Diego, CA: Academic Press.
Kluender, Robert, and Martha Kutas 1993 Bridging the gap: Evidence from ERPs on the processing of unbounded dependencies. Journal of Cognitive Neuroscience 5: 196−214.
Koizumi, Masatoshi, and Katsuo Tamaoka 2010 Psycholinguistic evidence for the VP-internal subject position in Japanese. Linguistic Inquiry 41: 663−680.
Koornneef, Arnout W., Frank Wijnen, and Eric Reuland 2006 Towards a modular approach to anaphor resolution. In: Ron Artstein and Massimo Poesio (eds.), Ambiguity in Anaphora Workshop Proceedings, 65−72. European Summer School in Language, Logic and Information, Málaga, Spain, August 2006. (http://cswww.essex.ac.uk/anaphora/).
Koster-Moeller, Jorie, Jason Varvoutis, and Martin Hackl 2007 Processing evidence for quantifier raising: The case of antecedent contained deletion. In: Masayuki Gibson and Tova Friedman (eds.), Proceedings of SALT 17. Ithaca, NY: CLC Publications.
Lee, Ming-Wei 2004 Another look at the role of empty categories in sentence processing (and grammar). Journal of Psycholinguistic Research 33: 51−73.
Lewis, Richard L. 1996 Interference in short-term memory: the magical number two (or three) in sentence processing. Journal of Psycholinguistic Research 25: 93−115.
Lewis, Richard L. 2003 Computational psycholinguistics. In: L. Nadel (ed.), Encyclopedia of Cognitive Science. London: MacMillan (Nature Publishing Group).
Libben, Maya R., and Debra A. Titone 2008 The multidetermined nature of idiom processing. Memory and Cognition 36: 1103−1121.
MacDonald, Maryellen C., and Mark S. Seidenberg 2006 Constraint satisfaction accounts of lexical and sentence comprehension. In: Matthew J. Traxler and Morton A. Gernsbacher (eds.), Handbook of Psycholinguistics, 2nd Edition, 581−611. London: Elsevier.
MacWhinney, Brian, and Elizabeth Bates (eds.) 1989 The Crosslinguistic Study of Sentence Processing. Cambridge: Cambridge University Press.
Marantz, Alec 2005 Generative linguistics within a cognitive neuroscience of language. The Linguistic Review 22: 429−445.
Martin, Andrea E., and Brian McElree 2008 A content-addressable pointer mechanism underlies comprehension of verb-phrase ellipsis. Journal of Memory and Language 58: 879−906.
Matlock, Teenie, and Roberto R. Heredia 2002 Understanding phrasal verbs in monolinguals and bilinguals. In: Roberto R. Heredia and Jeannette Altarriba (eds.), Bilingual Sentence Processing, 251−274. Amsterdam: Elsevier.
Mauner, Gail, Michael K. Tanenhaus, and Gregory N. Carlson 1995 Implicit arguments in sentence processing. Journal of Memory and Language 34: 357−382.


McElree, Brian, and Thomas G. Bever 1989 The psychological reality of linguistically defined gaps. Journal of Psycholinguistic Research 18: 21−35.
McElree, Brian, and Theresa Griffith 1995 Syntactic and thematic processing in sentence comprehension: Evidence for a temporal dissociation. Journal of Experimental Psychology: Learning, Memory, and Cognition 21: 134−157.
McElree, Brian, and Theresa Griffith 1998 Structural and lexical constraints on filling gaps during sentence comprehension: A time-course analysis. Journal of Experimental Psychology: Learning, Memory, and Cognition 24: 432−460.
Meng, Michael, and Markus Bader 2000 Ungrammaticality detection and garden-path strength: Evidence for serial parsing. Language and Cognitive Processes 15: 615−666.
Miller, George A. 1962 Some psychological studies of grammar. American Psychologist 17: 748−762.
Miller, George A., and Noam Chomsky 1963 Finitary models of language users. In: R. D. Luce, R. R. Bush, and E. Galanter (eds.), Handbook of Mathematical Psychology, volume 2: 419−491. New York: Wiley.
Miyamoto, Edson T. 2002 Case markers as clause boundary introducers in Japanese. Journal of Psycholinguistic Research 31: 307−347.
Miyamoto, Edson T. 2006 Processing alternative word orders in Japanese. In: Mineharu Nakayama, Reiko Mazuka, and Yasuhiro Shirai (eds.), The Handbook of East Asian Psycholinguistics, Vol. II: Japanese, 257−263. Cambridge: Cambridge University Press.
Miyamoto, Edson T., and Shoichi Takahashi 2000 The processing of wh-phrases and interrogative complementizers in Japanese. In: N. Akatuka and S. Strauss (eds.), Japanese Korean Linguistics 10: 62−75. Stanford, CA: CSLI Publications.
Miyamoto, Edson T., and Shoichi Takahashi 2002 Antecedent reactivation in the processing of scrambling in Japanese. In: Tania Ionin, Heejeong Ko, and Andrew Nevins (eds.), Proceedings of the 2nd HUMIT Student Conference in Language Research (HUMIT 2001), 127−142. (MIT Working Papers in Linguistics 43.) Cambridge, MA: MITWPL, Department of Linguistics and Philosophy, MIT.
Nakano, Yoko, Claudia Felser, and Harald Clahsen 2002 Antecedent priming at trace positions in Japanese long-distance scrambling. Journal of Psycholinguistic Research 31: 531−571.
Nakatani, Kentaro 2006 Processing complexity of complex predicates: A case study in Japanese. Linguistic Inquiry 37: 625−647.
Nemoto, Naoko 1999 Scrambling. In: Natsuko Tsujimura (ed.), The Handbook of Japanese Linguistics, 121−153. Oxford: Blackwell.
Nevins, Andrew, Brian Dillon, Shiti Malhotra, and Colin Phillips 2007 The role of feature-number and feature-type in processing Hindi verb agreement violations. Brain Research 1164: 81−94.
Nicol, Janet L. 1993 Reconsidering reactivation. In: Gerry T. Altmann and Richard Shillcock (eds.), Cognitive Models of Speech Processing: The Second Sperlonga Meeting, 321−350. Hove: Erlbaum.


Nicol, Janet L., and David Swinney 1989 The role of structure in coreference assignment during sentence comprehension. Journal of Psycholinguistic Research 18: 5−20.
Nicol, Janet L., and David Swinney 2003 The psycholinguistics of anaphora. In: Andrew Barss (ed.), Anaphora: A Reference Guide, 72−104. Malden, MA: Blackwell.
Osterhout, Lee 1997 On the brain responses to syntactic anomalies: Manipulations of word position and word class reveal individual differences. Brain and Language 59: 494−522.
Osterhout, Lee, and L. A. Mobley 1995 Event-related brain potentials elicited by failure to agree. Journal of Memory and Language 34: 739−773.
Osterhout, Lee, and David Swinney 1993 On the temporal course of gap-filling during comprehension of verbal passives. Journal of Psycholinguistic Research 22: 273−286.
Papadopoulou, Despina, and Harald Clahsen 2003 Parsing strategies in L1 and L2 sentence processing: A study of relative clause attachment in Greek. Studies in Second Language Acquisition 24: 501−528.
Pearlmutter, Neal J., Susan M. Garnsey, and Kathryn Bock 1999 Agreement processes in sentence comprehension. Journal of Memory and Language 41: 427−456.
Perlmutter, David M. 1978 Impersonal passives and the Unaccusative Hypothesis. In: Proceedings of the 4th Annual Meeting of the Berkeley Linguistics Society, 157−189. University of California at Berkeley, Berkeley Linguistics Society.
Pesetsky, David 1987 Wh-in-Situ: Movement and unselective binding. In: Eric Reuland and Alice ter Meulen (eds.), The Representation of (In)definiteness, 98−129. Cambridge, MA: MIT Press.
Phillips, Colin 1996 Order and structure. PhD dissertation, MIT.
Phillips, Colin 2003 Linear order and constituency. Linguistic Inquiry 34: 37−90.
Phillips, Colin 2006 The real-time status of island phenomena. Language 82: 795−823.
Phillips, Colin 2009 Should we impeach armchair linguists? In: S. Iwasaki (ed.), Japanese/Korean Linguistics 17. Stanford, CA: CSLI Publications.
Phillips, Colin 2013 Some arguments and non-arguments for reductionist accounts of syntactic phenomena. Language and Cognitive Processes 28: 156−187.
Phillips, Colin, and Shevaun Lewis 2013 Derivational order in syntax: Evidence and architectural consequences. Studies in Linguistics 6: 11−47.
Phillips, Colin, and Matthew Wagers 2007 Relating structure and time in linguistics and psycholinguistics. In: Gareth Gaskell (ed.), The Oxford Handbook of Psycholinguistics, 739−756. Oxford: Oxford University Press.
Pickering, Martin J. 1993 Direct Association and sentence processing: A reply to Gibson and Hickok. Language and Cognitive Processes 8: 163−196.
Pickering, Martin J., and Guy Barry 1991 Sentence processing without empty categories. Language and Cognitive Processes 6: 229−259.


Pickering, Martin J., and Roger P. G. Van Gompel 2006 Syntactic parsing. In: Matthew J. Traxler and Morton A. Gernsbacher (eds.), The Handbook of Psycholinguistics, 455−503. San Diego, CA: Elsevier.
Poirier, Josée, Katie Wolfinger, Lisa Spellman, and Lewis P. Shapiro 2010 The real-time processing of sluiced sentences. Journal of Psycholinguistic Research 39: 411−427.
Pollard, Carl, and Ivan A. Sag 1992 Anaphors in English and the scope of binding theory. Linguistic Inquiry 23: 261−303.
Pollard, Carl, and Ivan A. Sag 1994 Head-driven Phrase Structure Grammar. Chicago: University of Chicago Press.
Primus, Beatrice 1999 Cases and Thematic Roles. Tübingen: Niemeyer.
Pritchett, Bradley L. 1991 Subjacency in a principle-based parser. In: Robert C. Berwick (ed.), Principle-based Parsing: Computation and Psycholinguistics, 301−345. Dordrecht: Kluwer.
Pritchett, Bradley L. 1992 Grammatical Competence and Parsing Performance. Chicago: University of Chicago Press.
Reinhart, Tanya, and Eric Reuland 1993 Reflexivity. Linguistic Inquiry 24: 657−720.
Reuland, Eric 2001 Primitives of binding. Linguistic Inquiry 32: 439−492.
Rosenbaum, Peter 1967 The Grammar of English Predicate Complement Constructions. Cambridge, MA: MIT Press.
Ross, John R. 1967 Constraints on Variables in Syntax. Cambridge, MA: MIT dissertation.
Runner, Jeffrey T., Rachel S. Sussman, and Michael K. Tanenhaus 2003 Assignment of reference to reflexives and pronouns in picture noun phrases: Evidence from eye movements. Cognition 89: B1−B13.
Runner, Jeffrey T., Rachel S. Sussman, and Michael K. Tanenhaus 2006 Processing reflexives and pronouns in picture noun phrases. Cognitive Science 30: 193−241.
Sag, Ivan A., and Thomas Wasow 2011 Performance-compatible competence grammar. In: Kersti Börjars and Robert Borsley (eds.), Constraints and Correspondences: New Models of Grammar, 359−377. Oxford: Blackwell.
Sakamoto, Tsutomu, and Matthew Walenski 1998 The processing of empty subjects in English and Japanese. In: Dieter Hillert (ed.), Sentence Processing: A Cross-Linguistic Perspective, 95−112. (Syntax and Semantics 31.) San Diego: Academic Press.
Schlesewsky, Matthias, and Ina Bornkessel 2003 Ungrammaticality detection and garden path strength: a commentary on Meng and Bader’s (2000) evidence for serial parsing. Language and Cognitive Processes 18: 299−311.
Schütze, Carson T. 1996 The Empirical Base of Linguistics: Grammaticality Judgments and Linguistic Methodology. Chicago: University of Chicago Press.
Sekerina, Irina 2003 Scrambling and processing: Complexity, dependencies, and constraints. In: Simin Karimi (ed.), Word Order and Scrambling, 301−324. Malden, MA: Blackwell.


Shan, Chung-Chieh, and Chris Barker 2006 Explaining crossover and superiority as left-to-right evaluation. Linguistics and Philosophy 29: 91−134.
Shapiro, Lewis P., and Arild Hestvik 1995 On-line comprehension of VP-ellipsis: Syntactic reconstruction and semantic influence. Journal of Psycholinguistic Research 24: 517−532.
Shapiro, Lewis P., Arild Hestvik, Lesli Lesan, and A. Rachel Garcia 2003 Charting the time-course of sentence processing: Reconstructing missing arguments in VP-ellipsis constructions. Journal of Memory and Language 49: 1−19.
Sorace, Antonella, and Frank Keller 2005 Gradience in linguistic data. Lingua 115: 1497−1524.
Stabler, Edward 1994 The finite connectivity of linguistic structures. In: Charles Clifton Jr., Lyn Frazier and Keith Rayner (eds.), Perspectives on Sentence Processing, 303−336. Hillsdale, NJ: Lawrence Erlbaum Associates.
Staub, Adrian, Charles Clifton Jr., and Lyn Frazier 2006 Heavy NP shift is the parser’s last resort: Evidence from eye movements. Journal of Memory and Language 54: 389−406.
Staub, Adrian, and Keith Rayner 2007 Eye movements and online comprehension processes. In: Gareth Gaskell (ed.), The Oxford Handbook of Psycholinguistics, 327−342. Oxford: Oxford University Press.
Steedman, Mark 2000 The Syntactic Process. Cambridge, MA: MIT Press.
Stowe, Laurie A. 1986 Parsing wh-constructions: evidence for on-line gap location. Language and Cognitive Processes 1: 227−245.
Sturt, Patrick 2003 The time-course of the application of binding constraints in reference resolution. Journal of Memory and Language 48: 542−562.
Sturt, Patrick, and Vincenzo Lombardo 2005 Processing coordinate structures: Incrementality and connectedness. Cognitive Science 29: 291−305.
Sturt, Patrick, Martin J. Pickering, and Matthew W. Crocker 1999 Structural change and reanalysis difficulty. Journal of Memory and Language 40: 136−150.
Townsend, David, and Thomas G. Bever 2001 Sentence Comprehension. Cambridge, MA: MIT Press.
Traxler, Matthew J., and Martin J. Pickering 1996 Plausibility and the processing of unbounded dependencies: An eye-tracking study. Journal of Memory and Language 40: 542−562.
Trueswell, John C., Michael K. Tanenhaus, and Susan M. Garnsey 1994 Semantic influences on parsing: Use of thematic role information in syntactic disambiguation. Journal of Memory and Language 33: 285−318.
Tutunjian, Damon, and Julie E. Boland 2008 Do we need a distinction between arguments and adjuncts? Evidence from psycholinguistic studies of comprehension. Language and Linguistics Compass 2: 631−646.
Van Gompel, Roger P. G., and Simon P. Liversedge 2003 The influence of morphological information on cataphoric pronoun assignment. Journal of Experimental Psychology: Learning, Memory, and Cognition 29: 128−139.
Vasishth, Shravan, and Richard Lewis 2006 Argument-head distance and processing complexity: Explaining both locality and antilocality effects. Language 82: 767−794.


Wagers, Matthew, and Colin Phillips 2009 Multiple dependencies and the role of the grammar in real-time comprehension. Journal of Linguistics 45: 395−433.
Walenski, Matthew 2002 Relating parsers and grammars: On the structure and real-time comprehension of English infinitival complements. Doctoral dissertation, University of California, San Diego.
Weinberg, Amy 2001 A minimalist theory of human sentence processing. In: Samuel Epstein and Norbert Hornstein (eds.), Working Minimalism, 283−315. Cambridge, MA: MIT Press.
Xiang, Ming, Brian W. Dillon, and Colin Phillips 2009 Illusory licensing effects across dependency types: ERP evidence. Brain and Language 108: 40−55.
Zagar, Daniel, Joel Pynte, and Sylvie Rativeau 1997 Evidence for early-closure attachment on first-pass reading times in French. Quarterly Journal of Experimental Psychology 50A: 421−438.

Claudia Felser, Potsdam (Germany)

IX. Beyond Syntax

55. Syntax and Corpora

1. Introduction
2. Characteristics of corpora
3. Syntactically annotated corpora
4. Syntactic evidence from corpora
5. Further readings and resources
6. References (selected)

Abstract

Linguistic corpora are collections of spoken and written texts that are sampled to provide evidence for linguistic analyses and computational linguistics applications. They are often enriched with linguistic information, ranging from part-of-speech annotation to more elaborate linguistic analyses, which allows the user to search for linguistically relevant structures in an efficient way. Furthermore, texts in corpora are aligned with extra-linguistic meta-information such as a speaker's gender or a text's genre, which can be taken into account as independent factors in linguistic analyses. This article first introduces general corpus characteristics and then focuses on syntactic annotation. Finally, it briefly mentions different kinds of syntactic evidence that can be derived from corpora.
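Part-of-speech annotation already enables searches that plain text cannot support reliably. The following is a minimal sketch; the two tagged sentences and the simple word/TAG format are invented for illustration (the tags loosely follow Penn Treebank conventions), and real corpus query tools are far more sophisticated:

```python
import re

# Toy POS-annotated corpus in word/TAG format (content invented for
# illustration; tags loosely follow the Penn Treebank tagset).
corpus = (
    "the/DT old/JJ man/NN the/DT boats/NNS ./. "
    "a/DT tall/JJ ship/NN sailed/VBD ./."
)

# Extract adjective-noun sequences: a query over categories rather than
# word forms, which is exactly what POS annotation makes possible.
pattern = re.compile(r"(\S+)/JJ (\S+)/NNS?")
matches = pattern.findall(corpus)
print(matches)  # [('old', 'man'), ('tall', 'ship')]
```

Searches over real annotated corpora follow the same principle, although dedicated query languages for treebanks replace such ad hoc regular expressions.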

1. Introduction

Over the past 15 years, there has been a lively debate on what counts as legitimate evidence in linguistics. The pros and cons of introspective judgments, experimental results, and corpus frequencies have been thoroughly discussed (e.g., Schütze 1996; Sampson 2001; Wasow 2002; Newmeyer 2003; Sampson 2007; Haider 2009). In the course of this debate, the categorical distinction between grammatical and ungrammatical sentences has been challenged. Models of syntactic structure and syntactic change have been developed that take into account gradience in acceptability and variation in the data (e.g., Sorace and Keller 2005; Bresnan 2007; Szmrecsanyi 2013). These models make use of frequency data, based either on corpora or on experimental collections of speaker judgments. Recent statistically advanced analyses have explained syntactic variation on the basis of multiple independent factors, including corpus-based evidence such as lexical co-occurrence and other frequency effects (e.g., Bresnan et al. 2007; Wolk et al. 2013).
Linguistic corpora are collections of spoken and written texts. In the discussion of what counts as legitimate linguistic evidence, corpus data is characterized by a number of properties that make it valuable for linguistic investigation: (i) Corpora are compiled from samples of authentic language use − they provide natural examples; (ii) corpora


consist of collections of sentences and utterances − they are a natural source of frequency information; (iii) corpora consist of connected texts and discourses − they provide evidence for contextual effects, such as the influence of information structure on word order; (iv) corpora often include linguistic annotations − they offer opportunities for linguistic abstractions, such as part-of-speech analyses or analyses of dependency structures that may be lexicalized and contextualized; (v) corpora are stored on physical devices − corpus data is reusable, enabling replication of previous empirical analyses.
There are also some well-known restrictions on corpus data that must be considered when using such data for linguistic investigation. (i) Corpora necessarily consist of a finite number of sentences, and thus they can never represent a language in all its diversity. As a direct consequence, certain constructions may be under-represented in a corpus or be lacking altogether. (ii) Language in a corpus is always influenced by external factors; this second restriction is most relevant when the object of investigation is language competence rather than language use. Spoken data, for example, includes performance errors made by the speakers, such as incomplete sentences, clearly ungrammatical structures, and self-corrections. In spoken dialogue settings, speakers often use extra-linguistic means of communication (e.g., deictic gestures or facial expressions), which may be employed in place of an explicit oral expression. These extra-linguistic means of communication are not necessarily coded in the corpus transcription; as a result, part of the communication might be missing from the corpus. Written data can include typing errors, misprints, and quotations of ungrammatical language. Even more importantly, both spoken and written language conform to the implicit rules of their register or genre.
Formal speech differs from informal speech not only in vocabulary choice but also in syntactic preferences; the same holds for written text genres. (iii) Corpus annotation is never perfect. This is first of all because annotation schemes are categorical and − like grammars − do not fully model linguistic reality. In addition, human annotators, in contrast to automatic annotation tools, are never completely consistent in their analyses. Automatic tools, however, largely lack the human ability to resolve ambiguities, which are ubiquitous in language. Resolving an ambiguity often requires non-local text knowledge or even world knowledge, which is not necessarily available to an automatic annotation tool. However, errors made in a corpus by annotation tools tend to occur in a systematic fashion, so that a user familiar with certain common mistakes may find ways to work around them.
In addition to their relevance as empirical evidence in linguistic investigations, corpora play an important role in computational linguistics and lexicography. Corpora are an integral part of developing tools in natural language processing, such as syntactic parsers that assign syntactic analyses to sentences. Parsers that make use of corpus-derived structures in their grammar are generally more robust in analyzing naturally occurring texts than parsers based on hand-crafted rules. Probabilistic parsers take corpus frequencies into account, enabling them to predict the probability of a specific structure on the basis of its relative frequency count.
The rest of this chapter is organized as follows: Section 2 discusses different characteristics of corpora and outlines a typology of corpus types. Section 3 presents various types of syntactic annotation, illustrated by examples. Section 4 briefly introduces syntactic examples of corpus research. The final section points the reader to further readings and other resources.
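The relative-frequency estimation that such probabilistic parsers rely on can be sketched in a few lines. The sketch below uses an invented toy "treebank" (each tree reduced to the rewrite rules it instantiates); in practice the counts come from large annotated corpora such as the Penn Treebank. The probability of a rule is its count divided by the total count of rules sharing its left-hand side:

```python
from collections import Counter

# Toy treebank: each tree flattened into the CFG rules it instantiates.
# Rule inventory and counts are invented for illustration only.
rules_per_tree = [
    ["S -> NP VP", "NP -> DT NN", "VP -> VBD NP", "NP -> DT NN"],
    ["S -> NP VP", "NP -> DT NN", "VP -> VBD"],
]

# Relative-frequency estimate: P(rule) = count(rule) / count(rules with same LHS).
rule_counts = Counter(r for tree in rules_per_tree for r in tree)
lhs_counts = Counter(r.split(" -> ")[0] for tree in rules_per_tree for r in tree)
probs = {r: c / lhs_counts[r.split(" -> ")[0]] for r, c in rule_counts.items()}

print(probs["NP -> DT NN"])   # 1.0: the only NP expansion observed
print(probs["VP -> VBD NP"])  # 0.5: one of two observed VP expansions
```

A parser equipped with such rule probabilities can score competing analyses of a sentence by multiplying the probabilities of the rules each analysis uses, preferring the structure that is more frequent in the treebank.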


2. Characteristics of corpora

Linguistic corpora are collections of spoken and written texts; in a narrower sense, they are machine-readable collections of spoken and written texts. They are defined in various ways, e.g., as collections of attested language use, as data sampled according to pre-defined parameters, or as data annotated with linguistic analyses. These definitions are not necessarily contradictory; rather, they emphasize different characteristics of corpora that will be discussed in the following subsections.

2.1. Attested language use

A linguistic corpus is “a large body of linguistic evidence typically composed of attested language use” (McEnery 2003: 449). Attested language use refers to naturally occurring language: conversations and texts that are spoken or written with their own communicative goals. This does not exclude controlled settings of data collection, which are artificial in the sense that they are initiated for the purpose of linguistic data collection. To some extent, almost all settings for the collection of spoken data are artificial, as some means of recording must be added to the scene. An extreme case is the collection of spontaneous speech for the American English Air Travel Information System (ATIS) corpus (Hemphill et al. 1990), which contains information about flights, fares, airports, etc. The ATIS corpus was collected in order to investigate users’ speech in human-machine dialogues. The participants were asked to interact with an automatic information system by voicing requests over a microphone to a travel planner; they received the travel planner’s answers on a computer screen. In reality, the machine was being manipulated by a researcher simulating the computer’s answers.

A rather common type of controlled setting is the oral report of movie plots or the stories behind pictures. This is an ideal way to elicit structures that occur less frequently in uncontrolled settings. Such an approach was the basis for the collection of the Map Task corpus of the Human Communication Research Centre (HCRC) in Edinburgh (Anderson et al. 1991). This corpus was intended to be a common resource for research into different aspects of spoken dialogue, such as the acoustic properties of speech and sociolinguistic phenomena in interactions. The HCRC corpus creation can be described as a large, carefully controlled elicitation exercise, resulting in 128 dialogues that are samples of spontaneous speech. In each dialogue, two speakers talk about a route on a map.
Only one of the speakers has a map with the route marked. The other has a similar (but not identical) map with no route marked. The participants’ common goal is to reproduce the route on the map of the second speaker without seeing each other’s maps. Both maps show drawings of landmarks, which are labeled with names such as monument or abandoned cottage. Other corpora are the products of controlled settings for the creation of written data. For instance, the texts collected in the International Corpus of Learner English ICLE (Granger 2003) and in the German Falko corpus (Lüdeling et al. 2005) were written by second-language learners in predefined settings with respect to text type, topic, and length.

55. Syntax and Corpora


2.2. Sampling and representativeness

From a different point of view, a linguistic corpus is “a collection of electronic texts built according to explicit design criteria for a specific purpose” (Atkins, Clear, and Ostler 1992: 1). In statistical terms, a representative sample of a population accurately reflects the structure of the population. Here, a corpus represents a sample of a language; the language itself is the population. Thus, a representative corpus would have the same characteristics as the language itself, such that a linguistic phenomenon would have the same relative frequency in the corpus as in the language as a whole. Consequently, observations made on the basis of the corpus would also be valid for the language. Unfortunately, it is not always known what the language refers to, except in closed populations such as the language in the published work of Dickens. For example, consider the case of English as the language. The term could be defined as the collection of all possible written and spoken utterances produced by native speakers of English. This hypothetical collection presupposes that the set of English native speakers is known. However, the definition of this set is not trivial. Does a five-year-old child belong to the set? Or a thirty-year-old woman who was born in 1800? What about a man from Lancashire, who utters sentences such as Give them us instead of the standard variants Give them to us or Give us them (cf. Hollmann and Siewierska 2006: 97)? This problem is not specific to corpora, of course, but is a much more general issue of definition. One solution to this problem is to operationalize the boundaries of “the language” in a feasible manner. This has been done, for example, in the Brown corpus (Kučera and Francis 1967), an early corpus project begun in the 1960s. For the Brown corpus, “English” is defined as all English texts published in the year 1961 in the United States.
This definition was operationalized as the English-language American publications of 1961 listed in the catalogues of two libraries, the Brown University Library and the Providence Athenaeum. This sampling frame provides the population from which the corpus was sampled (cf. Biber 1993). To sample a truly random collection of linguistic evidence, one must select individual samples from the texts included in the sampling frame, with only one data point per text in the size of the unit under investigation (typically words, phrases, or sentences; cf. the library metaphor of sampling in Evert 2006). It is then possible to apply standard statistical tests to infer the characteristics of the population from the variation in the samples (i.e., the language as operationalized by the sampling frame).

In the real world, this is normally not feasible in corpus creation. Because their creation is costly and time-consuming, corpora are intended to be reusable for multiple purposes and applications. A corpus project that acquires the rights to include a certain piece of literature in its corpus will not just use one sentence and throw the rest of the text away. There are also linguistic motivations to sample larger parts of individual texts, even if the unit of investigation is only a sentence. It is necessary to look beyond the sentence boundaries to investigate discourse-related factors such as anaphora and their effect on sentence structure, for example. In this case, the unit of corpus sampling must be larger than the unit of linguistic investigation. In practice, the units of corpus sampling are not merely the individual words, phrases, or sentences relevant to certain specific research questions, but entire texts or at least connected subparts of these texts (e.g., the texts of 2,000 connected words in the Brown corpus). Each text is characterized by a particular set of vocabulary and structures, a feature that is employed in automatic text classifications such as authorship attribution. For a corpus, this means that individual words, phrases, and structures are not randomly distributed across the corpus as a whole; rather, they are more highly concentrated in some parts and appear less frequently in others. An a priori definition of relevant subsets before (randomly) sampling the units of these subsets generally strengthens the statistical findings in the corpus as a whole (cf. stratified sampling, Biber 1993). This is due to the fact that linguistic variance within an individual subset is expected to be less than the variance between different subsets.

Design criteria are specified to control the internal structure of a corpus by defining relevant subsets. A typical corpus design is to sample for text genre, which clusters texts in situationally defined categories such as fiction or sports commentary (Biber 1993). Biographical texts, academic prose, advertisements, and other genres can be reliably distinguished from one another on the basis of their part-of-speech sequences alone − sequences of, for example, two or three contiguous words represented by their part-of-speech labels as an approximation of syntactic structure (Santini 2004).

Users can work around the non-randomness of corpora by randomly collecting their own material (in the unit size under investigation). If a corpus provides sufficient metadata, users can filter their data in such a way that only single data points from individual texts are considered, thereby enhancing randomness. In addition to the method of data collection, the statistical model to be employed is also an important issue. An appropriate model will show whether it is statistically justified to draw conclusions from a sample or not.
We will not go into detail on this topic, but interested readers will find suggestions for further reading in the final section.
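The filtering strategy of taking only a single data point per text can be sketched in a few lines. The corpus contents and the helper name below are invented for illustration; a real implementation would read sentences and text boundaries from the corpus metadata.

```python
import random

# hypothetical corpus: text identifier -> sentences of that text
corpus = {
    'text1': ['s1a', 's1b', 's1c'],
    'text2': ['s2a', 's2b'],
    'text3': ['s3a'],
}

def one_point_per_text(corpus, seed=0):
    # draw a single sentence per text to reduce the dependence
    # between data points caused by within-text clustering
    rng = random.Random(seed)  # fixed seed makes the sample reproducible
    return {text_id: rng.choice(sentences)
            for text_id, sentences in corpus.items()}

sample = one_point_per_text(corpus)
```

Fixing the random seed is a deliberate design choice here: it makes the sample reproducible, which matters when the resulting data set is to be cited or re-used.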

2.3. Collections of texts

Another perspective on corpora is that “any collection of more than one text” qualifies as a corpus (McEnery and Wilson 2001: 29). Careful sampling is not always an option, due to restrictions on time and funding. Furthermore, the inclusion of published texts in a corpus is often problematic because of copyright restrictions that do not permit the free distribution of the corpus. To some extent, it is possible to provide online access to copyrighted texts by limiting the query results to only very small clippings, i.e., the match of a query and its very local context. Another approach is to abandon detailed sampling and instead compile corpora from texts that are readily available. The American English one-million-word Wall Street Journal subcorpus of the syntactically annotated Penn Treebank is an example of such a collection. In addition to the previously mentioned ATIS and Brown corpora, the Treebank also includes a sample of 2,499 texts from the TIPSTER Wall Street Journal text archive published between 1987 and 1989. The German syntactically annotated TIGER corpus (900,000 words in release version 2.1) is another example of so-called opportunistic sampling. TIGER, based on the German newspaper Frankfurter Rundschau, consists of about 1,600 articles from several daily issues published primarily in 1995 and 1997.

The rise of the World Wide Web has introduced a new area for corpus-based research (cf. Kilgarriff and Grefenstette 2003). As the largest collection of electronically available texts, the Internet itself may be used as a corpus. Specialized search tools have helped to optimize linguistically motivated queries; however, whether the Internet may be rightfully used as a linguistic corpus is the subject of some dispute. The principal objections concern the unclear origin of many web pages, the unbalanced content in terms of text genres, and the unknown population of the web as a whole (cf. Lüdeling, Evert, and Baroni 2007; Bergh and Zanchetta 2008). Some of these objections can be overcome by creating corpora that effectively freeze a certain stage of the web at a specific point in time. Examples of this method are the 1.5- to 2-billion-word mega-corpora of the web-as-corpus WaCky initiative (Baroni et al. 2009). The WaCky corpora have been created by a computer program that crawls the web and selects pages according to lexical cues and other filtering options (e.g., web domain specifications, such as .de or .uk). The lexical selection starts from a set of seed words which are iteratively enriched by dynamically determined characteristic words. In the WaCky corpora, the stored pages have been post-processed: Duplicate pages have been deleted, and web-related markup and other textual material that does not belong to the text proper (navigation bars, advertisements) have been discarded. The texts have then been linguistically analyzed and annotated by means of sentence segmentation and tokenization, part-of-speech tagging, and partial chunking and parsing.

For the time being, the extent to which copyright restrictions apply to web pages is in legal limbo. Since there are no restrictions on distributing web addresses, it has been suggested that researchers not distribute the texts of web pages themselves, but rather lists of their addresses (URLs), in addition to the appropriate tools and descriptions of the compilation and cleaning procedures. This method would allow users to re-create the corpus themselves without violating any copyrights (cf. Kilgarriff 2001; Sharoff 2006). However, one problem with URL list-based corpora is that some URLs quickly become outdated.
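A minimal sketch of such a re-creation procedure might look as follows. The cleaning step is deliberately crude; real web-as-corpus pipelines use far more sophisticated boilerplate removal, and both function names are invented for illustration.

```python
import re
import urllib.request

def clean_html(html):
    # very crude cleaning: drop scripts/styles, strip tags, collapse whitespace
    html = re.sub(r'(?s)<(script|style).*?</\1>', ' ', html)
    text = re.sub(r'<[^>]+>', ' ', html)
    return re.sub(r'\s+', ' ', text).strip()

def rebuild_corpus(urls):
    # re-create a corpus from a distributed URL list;
    # outdated (dead) URLs are simply skipped
    corpus = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                corpus[url] = clean_html(response.read().decode('utf-8', 'replace'))
        except OSError:
            pass  # page no longer available
    return corpus
```

The silent skipping of dead links illustrates the problem mentioned above: two researchers re-creating the corpus at different times may end up with different subsets of the original material.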

2.4. Primary data, metadata, and annotation

Our final perspective on the nature of corpora takes into account the kind of information belonging to a corpus and to its collected texts: A linguistic corpus consists of the language data itself, metadata that describe the data, and (typically) annotations that have been assigned to the data (adapted from Lemnitzer and Zinsmeister 2010: 8).

2.4.1. Primary data

In most cases, the language data represented in a corpus is a simplified version of the primary data. The primary data is itself the immediate product of some language use, for example, the language spoken at a particular event, or the text written or printed on a particular device. The simplification in corpora of spoken language is obvious. Speech is recorded in some audio format; the sound signal can be heard or graphically displayed, e.g., in the form of a spectrogram. For further linguistic analysis, the data is generally encoded in written form. In the simplest case, this is an orthographic transcription, which renders the data readable but loses a great deal of information about the speech event, such as intonation, stress, length of segmental duration, pauses, loudness, and voice quality. This process of reformatting also loses extra-linguistic information: Deictic gestures, facial expressions, laughter, and the overlap of speech contributions from more than one speaker are among the elements lost. To some extent, this information can be captured by coding schemes that add simple diacritic symbols to the orthographic transcription; one example is the SAMPROSA scheme, the prosodic extension of the SAM Phonetic Alphabet (SAMPA), which provides a computer-readable inventory of phonetic symbols. The ToBI standard (Pitrelli, Beckman, and Hirschberg 1994) is widely used for annotating prosodic structure. This system comes in various dialects for different languages; e.g., GlaToBI for Glaswegian English is used for annotating prosody in the HCRC Map Task corpus (cf. Section 2.1; Mayo, Aylett, and Ladd 1997).

In corpora of written texts, simplifications of the primary data are less obvious. This is due to the fact that these simplifications often concern apparently extra-linguistic properties of the original texts that are not always considered to be important by the corpus creators. For example, some newspaper articles happen to be distributed over different pages. An article that starts on the front page might be continued elsewhere in the issue. If the author knows about the split presentation beforehand, it might influence how he or she realizes information in the different parts of the article. For instance, the author might use full noun phrases to refer to referents in the text; if the text had been printed on just one page, pronouns might have been used instead. Typographical properties of the original publication (font type, font size, color, etc.) are also not generally documented in linguistic corpora, although these factors might have been used to affect the interpretation of the text.

2.4.2. Metadata

Metadata describe a corpus by documenting the circumstances of its creation and its content: the language data, the annotation, and also the metadata itself. Metadata is similar to the front matter in a book or a file chart in a library catalogue. It describes the circumstances of corpus creation, such as the authors and annotators, the tools used to annotate the data, the size of the corpus, and so forth. In the case of written corpora, metadata should provide detailed information on each text included in the collection: the name of its author, the date and place of its publication, its genre, its language, its size in number of words or sentences, etc. It should also list all levels of annotation, together with information on the labels used in the annotation (the tagsets), the annotation tools if applicable, and the dates of revisions. In the ideal case, there would also be meta-information provided for the metadata itself. This would include information about who created the metadata and whether the categories and labels used in the metadata conform to a given standard. There are two principal standards for the metadata of linguistic corpora: the feature set of the TEI header of the Text Encoding Initiative (TEI) and the Corpus Encoding Standard (xCES). These standards suggest the metadata categories to be provided with a corpus; they also specify in detail how this information is to be encoded in terms of a markup language (since the end of the 1990s, the World Wide Web Consortium has recommended the use of the Extensible Markup Language, XML).
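The idea of an XML-encoded metadata header can be sketched with Python's standard XML library. The element names below are inspired by TEI conventions but heavily reduced, and the sample values are invented; a real TEI header contains many more obligatory components.

```python
import xml.etree.ElementTree as ET

def make_metadata_header(title, author, date):
    # build a minimal, TEI-inspired header element (schematic, not valid TEI)
    header = ET.Element('teiHeader')
    file_desc = ET.SubElement(header, 'fileDesc')
    title_stmt = ET.SubElement(file_desc, 'titleStmt')
    ET.SubElement(title_stmt, 'title').text = title
    ET.SubElement(title_stmt, 'author').text = author
    pub_stmt = ET.SubElement(file_desc, 'publicationStmt')
    ET.SubElement(pub_stmt, 'date').text = date
    return header

header_xml = ET.tostring(make_metadata_header('Sample text', 'Jane Doe', '1961'),
                         encoding='unicode')
```

Serializing the header with the rest of the corpus file keeps data and metadata together, which is exactly what the TEI and xCES standards regulate in detail.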



2.4.3. Annotation

Annotation is an ambiguous term; it denotes both the practice of adding interpretative information to corpus data and also the end product of this process: the linguistic symbols attached to or linked with the electronic representation of the language data. In computational linguistics, annotation is required for programs to automatically learn the properties of a language. In corpus linguistics, the motivation for adding annotation to language data is to allow the user to retrieve linguistic evidence systematically and efficiently. Linguistic annotation ranges from word-related information (such as morphological information, part of speech, lemma, and word sense) to discourse-related features (such as discourse relations that span multi-sentential units). Annotation is a way to generalize from surface word strings to abstract linguistic concepts by identifying particular instances as belonging to more abstract structures. The process of annotation generally requires interpretation of the data. Take, for example, the word contact. Contact per se is ambiguous between verb and noun readings, but in a specific sentence only one reading would be relevant. Part-of-speech annotation disambiguates between the two readings, allowing the user to retrieve examples for the relevant reading without having to sift through irrelevant examples (called noise). Manual annotation is performed by annotators, also referred to as coders in more computationally-oriented projects. Often, linguistic experts or trained students work as annotators, but even laymen can create useful annotations. This holds in particular if the annotation guidelines not only provide definitions of linguistic concepts but also linguistic tests that result in annotation decisions compatible with native speaker judgments. In recent years, annotation has sometimes been outsourced to internet platforms such as Amazon Mechanical Turk or CrowdFlower, where anonymous coders perform annotation tasks.
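The retrieval benefit of part-of-speech disambiguation can be shown with a few lines of code over a hypothetical tagged mini-corpus (using Penn Treebank tags): only the verb reading of contact is returned, while the noun reading is filtered out as noise.

```python
# hypothetical POS-tagged tokens: (word, tag) pairs with Penn Treebank tags
tagged = [('Please', 'UH'), ('contact', 'VB'), ('the', 'DT'),
          ('contact', 'NN'), ('person', 'NN'), ('directly', 'RB')]

# retrieve only the verb reading of "contact" (VB, VBD, VBZ, ... all start with VB)
verb_hits = [(word, tag) for word, tag in tagged
             if word.lower() == 'contact' and tag.startswith('VB')]
```

Without the tags, a query for the string contact would return both occurrences, and the user would have to sort out the irrelevant noun reading by hand.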
Annotation guidelines define the meanings of the annotation labels (called tags). For instance, the guidelines for the Penn Treebank define NP as denoting a noun phrase and JJ as denoting an adjective. The guidelines should also explain the linguistic concepts behind the tags and how they are operationalized, i.e., which kinds of strings are to be marked with a particular tag − or, from a user perspective, which kinds of strings are to be expected as matches for a particular query. Guidelines often include discussions of difficult cases and how they were resolved in the corpus annotation. Examples of detailed guidelines for syntactic annotation are Bies et al. (1995) for the Penn Treebank II bracketing format, Telljohann et al. (2012) for the Tübingen Treebank of written German, and Hajič et al. (1999) for the analytical layer of the Prague Dependency Treebank.

In the case of manual annotation, the quality of annotation is related to the reliability of the coding scheme, which can be determined by considering three questions. First, to what extent is the scheme reproducible, in the sense that different annotators apply it in a consistent manner to the same data (inter-annotator agreement)? Second, to what extent is it stable, in the sense that an annotator applies it in a consistent manner over time (intra-annotator agreement)? Third, is the scheme accurate according to the expertise of a linguist?

Annotator agreement is often measured using the kappa coefficient, κ. Kappa measures pairwise agreement between annotators, taking into account the fact that there will be a certain amount of chance agreement between them. For instance, if two annotators must randomly choose between two labels, there will be a 50 % chance that they will select the same label. If they repeat this process 100 times (i.e., they annotate 100 data points), we would expect them to agree in 50 cases by mere chance (the topmost bar in Figure 55.1).

Fig. 55.1: Example of agreement in a binary annotation task

An observed agreement in 75 cases (75 %, the second bar in Figure 55.1) would achieve a kappa score of only 0.5 (50 %), because we would subtract the 50 cases of agreement that are expected to occur by chance from the 75 observed cases, leaving only 25 cases of real agreement that cannot be explained by chance. To obtain a proportion, this number is divided by the overall number of cases that cannot be explained by chance, in our example 50 cases (the bottom bar in Figure 55.1). This leaves us with 25 divided by 50, which equals 0.5, i.e., κ = 0.5. If the two annotators agree completely (which is rarely achieved), then κ = 1. If the observed agreement equals chance agreement, then κ = 0. If there is less agreement observed than expected to occur by chance, then κ < 0, indicating a serious problem with the annotation. There are various interpretations of the kappa score; some view κ = 0.5 as “moderate”, while others only accept scores greater than 0.67 for drawing “tentative conclusions” and rate κ > 0.8 as “good reliability”. A measure related to kappa is Krippendorff’s alpha (Krippendorff 2013).

The quality of automatically annotated data is often described in terms of accuracy, which is the percentage of material correctly annotated. Two other widely used concepts are recall and precision, which were originally developed in the context of information retrieval to evaluate the performance of search engines. Rather than simply counting the percent correct over all instances, these measures take into account how many of the relevant items the system has actually found (recall) and how many of the items that the system has found are actually relevant (precision). Often, there is also a third value provided, the F-score, which combines precision and recall into a single value. The motivation for doing so is that it is easier to compare two different program results on the basis of a single value than to compare two different measures.
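The kappa computation described above can be written down directly. The function below implements the formula κ = (observed − chance) / (1 − chance) and reproduces the worked example (observed agreement 0.75, chance agreement 0.5); note that the full Cohen's kappa estimates the chance agreement from the annotators' label distributions, whereas this sketch simply takes it as given.

```python
def kappa(observed, chance):
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    return (observed - chance) / (1.0 - chance)

# worked example from the text: 75 of 100 agreements observed,
# 50 of 100 expected by chance with two equiprobable labels
score = kappa(0.75, 0.5)  # 25 real agreements out of 50 possible: 0.5
```

The boundary cases behave as described in the text: `kappa(1.0, 0.5)` yields 1, `kappa(0.5, 0.5)` yields 0, and any observed agreement below chance yields a negative value.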
The same concepts are applied to the evaluation of linguistic annotation. Correctness is determined in comparison to a manually annotated gold standard corpus. To illustrate the evaluation of an annotation program, we use an example adapted from Carroll, Briscoe, and Sanfilippo (1998) that evaluates the automatic syntactic analysis of the sentence John tried to open the window compared to a gold standard analysis (see Figure 55.2).



The list on the left-hand side shows the gold standard analysis in a simplified LFG-style dependency representation omitting minor relations. The dependency labels are interpreted as follows: subj(PRED,SUBJ) denotes the relation between a predicate and its subject, dobj(PRED,DOBJ) the relation between a predicate and its direct object, and xcomp(C,PRED,XCOMP) the relation between a predicate and its clausal complement that starts with the complementizer C. The example is a control structure in which the matrix subject controls the interpretation of the subject of the embedded infinitive. The gold standard analysis makes this interpretation explicit by including a subject relation from open to John. The list on the right-hand side presents the parser’s output. The only difference between the two representations is that the latter does not account for the control structure, failing to provide a subject relation between open and John.

Gold standard:            Parser output:
subj(try,John)            subj(try,John)
xcomp(to,try,open)        xcomp(to,try,open)
subj(open,John)           dobj(open,window)
dobj(open,window)

Fig. 55.2: Gold standard dependency (left) and parser output (right) of the sentence John tried to open the window.
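The comparison of the two dependency lists in Figure 55.2 amounts to a simple set intersection, as the following sketch shows (dependencies are represented as tuples; the variable names are ours):

```python
# dependency relations as tuples: same analyses as in Figure 55.2
gold = {('subj', 'try', 'John'), ('xcomp', 'to', 'try', 'open'),
        ('subj', 'open', 'John'), ('dobj', 'open', 'window')}
parser_output = {('subj', 'try', 'John'), ('xcomp', 'to', 'try', 'open'),
                 ('dobj', 'open', 'window')}

correct = len(gold & parser_output)        # relations both found and correct: 3
recall = correct / len(gold)               # 3 of 4 gold relations found: 0.75
precision = correct / len(parser_output)   # 3 of 3 proposed relations correct: 1.0
f_score = 2 * precision * recall / (precision + recall)  # harmonic mean, ca. 0.86
```

The F-score is the harmonic mean of precision and recall, so a parser cannot compensate for very low recall with perfect precision (or vice versa).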

First, we evaluate recall. The parser has produced all but one gold standard dependency relation, i.e., three out of four, which results in a recall of 0.75. For precision, we obtain a better result: All dependencies that the parser suggested are indeed part of the gold standard, i.e., three out of three, resulting in a precision of 1. The F-score is the fraction of two times precision times recall divided by the sum of precision and recall; here, the F-score is about 0.86. In recent years, even phrase structure-based annotation has been evaluated on the basis of dependency relations derived from the phrase structure. Previously, the standard had been the genuine phrase-based evaluation measure PARSEVAL (Black et al. 1991), which provides a score for correct constituency (bracketing) and another score that also takes syntactic labels into account (labeled bracketing), both based on recall and precision. An alternative evaluation measure is the Leaf-Ancestor Metric (Sampson and Babarczy 2003).

Having described various aspects of the annotation process, we now consider the end product of annotation. Figures 55.3 through 55.6 illustrate different annotation formats for the sequence to contact possible buyers (for the unit), taken from sentence 44770 in the Wall Street Journal corpus of the Penn Treebank. For ease of presentation, the structure and the IDs are simplified in the examples. No format can claim to be the best. The choice of format depends on the annotation type and also on the application for which the annotation is intended.

to        TO    (the word to)
contact   VB    (verb, base form)
possible  JJ    (adjective)
buyers    NNS   (noun, plural)

Fig. 55.3: Column format (part-of-speech annotation with the Penn Treebank tagset)

In the subsequent figures, the structures are simplified and relevant tags are printed in bold for ease of comparison: the adjectival tag JJ is attached to possible, the nominal tag NNS (“noun, plural”) to buyers, and, where applicable, the phrasal node tag NP (“noun phrase”) refers to the phrase possible buyers.

(VP (TO to)
    (VP (VB contact)
        (NP (JJ possible) (NNS buyers))))

Fig. 55.4: Penn Treebank bracketing format (part-of-speech and syntactic annotation)
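Because the bracketing format is a balanced-parenthesis encoding, it can be parsed with a short recursive routine. The following sketch (a minimal reader written for this illustration, not the official Treebank tooling) converts the bracketing of Figure 55.4 into nested lists:

```python
def parse_brackets(s):
    # tokenize: isolate parentheses, then split on whitespace
    tokens = s.replace('(', ' ( ').replace(')', ' ) ').split()

    def helper(pos):
        # tokens[pos] is '('; the next token is the node label
        label = tokens[pos + 1]
        pos += 2
        children = []
        while tokens[pos] != ')':
            if tokens[pos] == '(':
                child, pos = helper(pos)   # recurse into a sub-bracketing
            else:
                child = tokens[pos]        # terminal word
                pos += 1
            children.append(child)
        return [label] + children, pos + 1

    tree, _ = helper(0)
    return tree

tree = parse_brackets('(VP (TO to) (VP (VB contact) (NP (JJ possible) (NNS buyers))))')
```

The unrestricted recursion of the helper function mirrors the unrestricted embedding of the format itself, which is what makes bracketing so well suited to recursive structures.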

Figure 55.3 depicts the column format used for the part-of-speech annotation in the Penn Treebank. The first column lists all the words, and the second column lists the corresponding part-of-speech tags that conform to the Penn Treebank tagset (Santorini 1990). Figure 55.4 shows the Penn Treebank bracketing format that extends the part-of-speech representation with hierarchical syntactic information. Hierarchical units are grouped together by means of nested bracketing (Marcus, Santorini, and Marcinkiewicz 1993). Figure 55.5 translates the bracketing representation into a tree structure.

Tree (left), rendered here as a labeled bracketing:
(VP (TO to) (VP (VB contact) (NP (JJ possible) (NNS buyers))))

Indexed tree (right), with numerical node identifiers:
(501 (1 to) (502 (2 contact) (503 (3 possible) (4 buyers))))
Node identifiers: VP = 501, embedded VP = 502, NP = 503; to = 1, contact = 2, possible = 3, buyers = 4

Fig. 55.5: Tree representation and mapping of nodes to numerical identifiers (drawn with TreeForm, Derrick and Archambault 2010)

The tree on the left-hand side of Figure 55.5 is an isomorphic representation of the bracketing format in Figure 55.4. The tree on the right-hand side illustrates how numerical identifiers refer to individual nodes after the tree has been indexed. Indexing is required, for example, for stand-off representations in Extensible Markup Language (XML). Stand-off means that different layers of the annotation are represented separately in different sections or are split into different files altogether. TIGER-XML (Lezius 2002) is a commonly used stand-off representation for syntactically annotated corpora. Figure 55.6 shows details of a TIGER-XML representation of the example tree from Figure 55.5. Hierarchical relations between nodes are represented by means of pointer features. The non-terminal (nt) element is a node of the noun phrase category and has the unique identifier 503. It embeds several edge elements, each of which contains a pointer feature idref that takes the identifier of the daughter node as its value, e.g., idref="3" denotes a pointer to node 3. In other words, there is an edge that starts at node 503 and ends at node 3; in linguistic terms, the configuration encodes the fact that node 503 immediately dominates node 3.

[…]
<nt id="503" cat="NP">
  <edge idref="3"/>
  <edge idref="4"/>
</nt>
[…]

Fig. 55.6: TIGER-XML format (part-of-speech, syntactic annotation, pointer features)

The column-based format in Figure 55.3 is best suited for word-based annotation. (There are also well-defined approaches for the inclusion of hierarchical information in the column format, e.g., Brants 1997.) The bracketing format in Figure 55.4 allows unrestricted embedding and is therefore perfectly suited to represent recursive structures. However, the bracketing format cannot easily encode multiple levels of alternative descriptions, non-projective structures (i.e., crossing branches), or overlapping annotations such as overlapping syntactic and prosodic phrases. XML stand-off as shown in Figure 55.6 is the most expressive format, allowing arbitrary linking to many layers of annotation of the data − including conflicting layers. One drawback is that the XML format is not easily human-readable; however, specialized graphical interfaces can render the data accessible, e.g., ANNIS (Zeldes et al. 2009) or TIGERSearch (Lezius 2002).
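How pointer features are resolved can be sketched with Python's standard XML library over a simplified, TIGER-XML-like fragment. The fragment below is schematic and invented for illustration; real TIGER-XML declares additional attributes (e.g., edge labels) and a feature inventory.

```python
import xml.etree.ElementTree as ET

# schematic stand-off fragment: terminals and one non-terminal with pointers
fragment = """
<graph>
  <terminals>
    <t id="3" word="possible" pos="JJ"/>
    <t id="4" word="buyers" pos="NNS"/>
  </terminals>
  <nonterminals>
    <nt id="503" cat="NP">
      <edge idref="3"/>
      <edge idref="4"/>
    </nt>
  </nonterminals>
</graph>
"""

root = ET.fromstring(fragment)
words = {t.get('id'): t.get('word') for t in root.iter('t')}

# resolve the pointer features: which terminals does node 503 dominate?
np_node = root.find(".//nt[@id='503']")
dominated = [words[edge.get('idref')] for edge in np_node.findall('edge')]
```

Because the hierarchy lives entirely in the `idref` pointers, further annotation layers can point into the same terminal inventory without disturbing the syntactic layer, which is the central advantage of stand-off representations.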

2.5. Corpus typology

The process of creating a corpus begins with a number of design decisions dependent on the purpose for which the corpus is to be created. Questions of purpose are also relevant when deciding which existing corpus to use for a particular task. Knowing the original purpose of a corpus helps us to understand its specific characteristics, such as whether it consists of just one language or of several languages, what the mode of text creation was (written or spoken), and whether the data present a snapshot of the language at a specific point in time or whether it was collected over a longer period. The purpose of a corpus also influences the existence and types of annotation that may have been applied to the data.

Corpora are subclassified according to their choice of language(s): Monolingual corpora consist of a single language, while bilingual or multilingual corpora combine language data from more than one language. There are two different kinds of bi- or multilingual corpora: Parallel corpora contain language data from one or more source languages and their translations into one or more target languages. The parallel texts are often aligned at the level of paragraphs or sentences, sometimes even at the level of individual words. This alignment provides a mapping from the original text onto the corresponding units in its translation and vice versa. Word-aligned corpora are principally used as training data for statistical machine translation. Parallel corpora are also an interesting resource for contrastive language analyses, even though translation studies suggest that there are universals in translation, such that translated texts have slightly different characteristics than comparable original texts. One translation universal is described as explicitation, the tendency of translators to spell things out more explicitly than authors of original texts (cf. Klaudy 2008). For studies from a typological perspective, corpora with massively parallel texts are of interest (Cysouw and Wälchli 2007). These are corpora based on texts that have translations into many languages, such as texts from the Bible or Antoine de Saint-Exupéry’s novella Le Petit Prince. The latter, for example, has been translated into about 200 languages and dialects.

Comparable corpora differ from parallel corpora in that they do not contain original texts and their translations, but rather independent texts in more than one language or language variety. Comparable corpora are especially useful for contrastive analyses because their texts are set up to be comparable in terms of content and form; however, they do not allow the same sort of direct comparisons provided by aligned parallel corpora. Two specific subtypes of comparable corpora are worth mentioning. The first type is a comparable corpus in the sense of translation studies. This type of corpus contains texts in only one language: original texts and texts translated into this language (thus, two different varieties of texts in the language). The second type is learner corpora, which are based on data from non-native speakers.
In general, these collections are sampled as comparable corpora containing data from learners with different first languages or with different levels of language proficiency, or in combination with comparable data from native speakers of the target language.

Corpora differ with respect to the mode of their primary data, whether spoken, written, or in a mixed form, such as written-to-be-spoken (in the case of speeches, radio features, or theater plays) or spoken-to-be-written (in the case of an author dictating text into a recording device, as is common practice in healthcare settings and law firms, for example). Spoken corpora are distinguished from speech corpora. The user of a spoken corpus will expect to find textual representations of the spoken data. The corpus qualifies as spoken even if there are only transcriptions of the speech events and no audio recordings at all. Speech corpora, on the other hand, always contain the recordings of their speech events, but not necessarily a textual representation of them. In multi-modal corpora, in addition to audio data and/or transcripts, the primary data include information on the communicative event, such as video documenting gestures and facial expressions (e.g., eyebrow movement) or eye-tracking data measuring the movements of the speaker’s eyes.

Another characterizing property of a corpus is its relation to time. Data in synchronic corpora cover only a limited period of time. Diachronic corpora, on the other hand, span a longer period of time; their data can provide evidence for different stages in the evolution of a language.

Two different corpora might share the same linguistic data but differ with respect to their annotation levels. The most common type of linguistic annotation is part-of-speech annotation. Annotations of (inflectional) morphology and base forms (lemmas) are less frequently applied.
More advanced levels of annotation include syntactic structures, lexical semantics and word senses, information structure, discourse structure (anaphora and discourse relations), and event structure (i.e., the temporal reference system). Some resources combine a heterogeneous set of annotation layers. The ongoing OntoNotes project (Hovy et al. 2006), for example, seeks to create a comparable corpus of English,

55. Syntax and Corpora


Chinese, and Arabic texts of various genres. The corpus includes syntactic annotations in the Penn Treebank style, an explicit marking of predicate-argument structures, word-sense disambiguation for nouns and verbs (linking the words to their senses in external knowledge resources), and also coreference resolution (linking pronouns and other coreferring entities to their textual antecedents).

Corpora are also subclassified according to their persistency. Static corpora have a fixed size, while monitor corpora do not. Monitor corpora either grow constantly (e.g., by adding each daily issue of a newspaper to an ever-growing corpus) or − in the original understanding of the term − expand because they are based on “texts scanned on continuing basis, ‘filtered’ to extract data for [a] database, but not permanently archived” (Atkins, Clear, and Ostler 1992: 13). It is important to note that static corpora may also grow over time. Corpora that contain annotations in particular tend to be published in updated versions; often, these revisions contain not only more annotations, but also additional sentences. It is therefore important to refer to specific releases when working with a particular corpus.

The availability of a corpus in terms of costs and data access is not a feature by which corpora are normally subclassified, but it is crucial information for the user. Corpus resources for Natural Language Processing are often distributed for a fee by national or international agencies. The two most prominent agencies are the American Linguistic Data Consortium (LDC) and the European Language Resources Agency (ELRA). The European CLARIN initiative is an ongoing project that seeks to make corpora and related tools available to researchers in the humanities. In recent years, the availability of corpora that can be searched online has steadily increased. Online query interfaces have the advantage that users do not have to store corpus files and search tools on their own computers.
However, some of the corpora available online have the disadvantage that their output is restricted to only a limited number of context words due to legal restrictions, or that they provide only a limited number of total hits, independent of the actual corpus frequency.

3. Syntactically annotated corpora

3.1. Motivation

The creation of syntactically annotated corpora is primarily motivated by research in Natural Language Processing. It serves two major goals (cf. Leech and Eyes 1997: 34−36): (i) developing and testing syntactic parsers (programs that take sentences as input and assign syntactic analyses to them), and (ii) extracting lexical information (for example, subcategorization frames of verbs). The following paragraphs provide a brief description of the first goal. For more details on the second goal, see the recent overview in Schulte im Walde (2009).

For many applications, including machine translation and question answering, the analysis of syntax is seen as an important processing step on the way to understanding the meaning of a text. Parsers of the first generation were knowledge-based systems in the form of hand-crafted grammar rules. These parsers shared certain deficiencies (cf. Dipper 2008: 76): Hand-crafted systems are not easily extensible to parsing larger-scale


texts; they are not robust, in the sense that they do not analyze ungrammatical or other unexpected input; they cannot easily deal with ambiguity in the data; and they are not easily portable to other languages. These shortcomings encouraged researchers to take an empirical turn, which led to the development of corpus-based grammar induction. Under this system, texts are annotated with syntactic information, such that grammar rules can be induced from the annotated texts. In the simplest case, these grammar rules consist of context-free rewriting rules that cover all non-terminal nodes in the annotated tree structures and their immediate daughter nodes. Figure 55.7 exemplifies a small corpus-driven grammar induced from our earlier example tree from Figure 55.5. (7)

(VP (TO to) (VP (VB contact) (NP (JJ possible) (NNS buyers))))

VP → TO VP
VP → VB NP
NP → JJ NNS
TO → to
VB → contact
JJ → possible
NNS → buyers

Fig 55.7: A syntactic tree and a context-free phrase structure grammar induced from it

In addition to grammar rules, parsers can learn frequencies from corpora. In other words, they can be trained on corpora. Relative rule frequencies translate into rule probabilities. As a result, the parser can assign not only a syntactic analysis to the sentence, but also a probability to the parse, based on the individual rules that are applied. In this way, the parser has a means of disambiguating between different analyses of a sentence by deriving the most probable parse.

Rule-based grammars are generally designed as precision grammars, i.e., they only parse grammatical sentences and fail to parse ungrammatical input. As noted above, corpus-derived parsers are more robust. Probabilistic parsers have the additional advantage that deviant rules are likely to be assigned low probabilities, as deviant input is the exception rather than the rule.

The evaluation of syntactic parsers is another motivation for creating syntactically annotated corpora. As described in section 2.4.3, manually annotated or corrected corpora function as a gold standard against which parser outputs can be evaluated. Gold standards are often made available in the course of competitions (shared tasks) for syntactic parsing (e.g., at the conference on Computational Natural Language Learning, CoNLL).
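The two steps just described, reading context-free rules off annotated trees and turning relative rule frequencies into rule probabilities, can be sketched in a few lines of Python. The bracketed-string reader below is a simplified stand-in for a real treebank loader, and the input is the example tree from Figure 55.7.

```python
from collections import Counter

def read_tree(s):
    """Read a Penn-style bracketed string into (label, children) tuples."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()

    def read(i):
        label = tokens[i + 1]          # tokens[i] is "("
        i += 2
        children = []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = read(i)
                children.append(child)
            else:
                children.append(tokens[i])  # a terminal, i.e. a word
                i += 1
        return (label, children), i + 1

    tree, _ = read(0)
    return tree

def productions(tree, rules=None):
    """Count one context-free rule per non-terminal node and its daughters."""
    if rules is None:
        rules = Counter()
    label, children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    rules[(label, rhs)] += 1
    for c in children:
        if not isinstance(c, str):
            productions(c, rules)
    return rules

def rule_probabilities(rules):
    """Relative rule frequencies translate into rule probabilities."""
    totals = Counter()
    for (lhs, _), n in rules.items():
        totals[lhs] += n
    return {rule: n / totals[rule[0]] for rule, n in rules.items()}
```

Applied to the tree of Figure 55.7, the two VP rules each receive probability 0.5, since VP is rewritten twice in the tree, once per rule.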

3.2. Part-of-speech annotation

Linguistically annotated corpora most often include a level of part-of-speech annotation in which each token is assigned a part-of-speech label. These tagsets go beyond the classical set of eight parts of speech (adverbs, articles, conjunctions, nouns, participles, prepositions, pronouns, and verbs), classifying words into more finely grained categories. They include additional tags to capture the entire range of tokens that occur in corpora: words, numbers, other strings of digits and characters such as list elements, and also punctuation. Commonly used tagsets for English include between 40 and 150 different


tags. For example, the English Penn Treebank tagset consists of 36 part-of-speech tags for words, including interjections and list items, plus twelve tags for punctuation.

The design of part-of-speech tagsets is primarily driven by distributional and morphological criteria, but to some extent also by semantic concerns.

− Distributional criteria: Can the word be substituted by particular other words? Does the word occur in the neighbourhood of particular other words? Does the word occur in particular positions within the sentence?
− Morphological criteria: Does the word inflect? If so, does it have a particular kind of inflection? Does the word take a particular affix in derivation?
− Semantic criteria: For example, does the word refer to a particular individual or to a class of entities?

The specificity of a tagset is a compromise between providing a maximum of information − relevant for further syntactic annotation and analyses − and the reliability with which different subclasses are (automatically) distinguished. Six of the 36 Penn Treebank tags, for example, are reserved for distinguishing between different subclasses of verbs. Table 55.1 illustrates the systematics behind this subclassification. (8)

Tab. 55.1: Systematics behind the verbal part-of-speech tags in the Penn Treebank tagset

Form      Distribution
played    VBN (past participle); VBD (past tense)
play      VB (infinitive, imperative, subjunctive); VBP (present tense, non-3rd person singular)
plays     VBZ (3rd person singular, present)
playing   VBG (gerund, present participle)

Past participle (VBN) and past tense (VBD) of regular verbs in English share the same form. They are differentiated in part-of-speech tagging due to their differing distributions in the clause. The same holds for infinitive (VB) and present tense forms (VBP), except for third-person singular present verbs (VBZ), which can be reliably identified on morphological grounds and which receive a specific label. There are no further sub-classifications between infinitives, imperatives, and subjunctives or between gerunds and present participles, respectively, as these forms cannot be reliably distinguished in all contexts.
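The distributional reasoning for separating VBN from VBD can be made concrete in a toy Python sketch: a form of the auxiliary have in the left context signals a past participle. This is only an illustration of the principle; real taggers learn such contextual decisions from annotated corpora, and the adverb list here is invented.

```python
HAVE_FORMS = {"have", "has", "had", "having"}
SKIPPABLE_ADVERBS = {"just", "already", "never", "often"}  # invented, for illustration

def tag_ambiguous_form(tokens, form="played"):
    """Toy distributional disambiguation: VBN after a form of 'have'
    (possibly with an intervening adverb), otherwise VBD. Only the
    ambiguous form itself is tagged; other tokens get None."""
    tags = []
    for i, tok in enumerate(tokens):
        if tok.lower() != form:
            tags.append(None)
            continue
        # look back over at most two tokens, skipping common adverbs
        context = [t.lower() for t in tokens[max(0, i - 2):i]
                   if t.lower() not in SKIPPABLE_ADVERBS]
        tags.append("VBN" if context and context[-1] in HAVE_FORMS else "VBD")
    return tags
```

So ["She", "has", "played"] yields VBN for the final token, while ["She", "played", "well"] yields VBD, mirroring the distributional split in Table 55.1.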

3.3. Treebanks

Syntactic annotation above the word level is encoded in treebanks. This term was coined by Geoffrey Leech to describe the SUSANNE corpus (Sampson 1995), one of the first


syntactically annotated corpora (cf. Sampson 2003: 40, fn. 1). Each sentence had been annotated with a syntactic phrase structure tree, thus a bank of trees. At present, the term is generally used to signal that a corpus contains syntactic annotation, even if that annotation does not consist of trees in the formal sense; for example, this category of corpus includes more general types of graphs, such as those found in the TIGER corpus (Brants et al. 2004): directed acyclic graphs that model crossing edges (see section 3.3.3). The following subsections introduce four different methods of syntactically annotating corpora, outlining the decisions involved in each method and referring to typical examples of existing treebanks.

3.3.1. Phrase structure annotation: The Penn Treebank

The main organizational principle of phrase structure annotation is a hierarchical box-in-box design. Sets of words are grouped together into phrases, which again are grouped together into longer phrases. Phrases themselves are encoded as non-terminal nodes. The syntactic head of a phrase determines its syntactic category; however, the head might also be unspecified. In appositive noun phrases such as President Barack Obama, for example, there is no need to decide which noun functions as the head of the phrase. An important issue when dealing with naturally occurring language is the treatment of ellipsis and fragments. In phrase-based annotation, ellipsis is annotated by means of syntactic phrases that might lack lexical correspondence but that conform to the structural requirements of the tree. In general, phrase structure allows for generalizations independent of lexicalization and function; for example, it would be possible to find sequences of adjacent PPs independent of their function and attachment site in the clause.

The Treebank of American English, commonly referred to as the Penn Treebank, is the prototypical example of a phrase-structure annotated treebank. Originally created at the University of Pennsylvania in the 1990s, the latest release (Treebank 3) consists of four different subcorpora that have been syntactically annotated: the one-million-word Brown Corpus (Kučera and Francis 1967), one million words of Wall Street Journal newswire articles, parts of the spoken Air Travel Information System (ATIS) corpus (Hemphill, Godfrey, and Doddington 1990), and parts of the spoken Switchboard corpus of telephone conversations (Godfrey, Holliman, and McDaniel 1992). The Penn Treebank morphosyntactic tagset (Santorini 1990) and its syntactic coding scheme (Bies et al.
1995) have been used as a template for many other annotation efforts, including the Penn Parsed Corpora of Historical English, for which the original schemes were adapted to fit the needs of diachronic corpora. The syntactic coding of the Penn Treebank is guided by the theory of Government and Binding (Chomsky 1981). Its main organizational principle is hierarchical constituency, in which dislocated elements are linked to their base position by means of labeled traces. Traces and other empty elements such as understood subjects in infinitives (big PRO) are represented by means of empty elements in the text string (e.g., “*T*” for trace or “*” for PRO). In the example sentence used in Figure 55.8, the subject constituent U.S. News is co-indexed with the understood subject “*” of the infinitive to announce (legend for the annotations in Figure 55.8: S: sentence, NP: noun phrase, VP: verb phrase, ADVP: adverb phrase, NNP: proper name, VBZ: verb [third person singular], RB: adverb, TO: the word “to”, VB: verb in base form, PRP$: possessive pronoun, CD: cardinal number, NN: normal noun, NNS: normal noun in plural).


(9)

Fig. 55.8: Phrase structure annotation in the Penn Treebank: U.S. News has yet to announce its 1990 ad rates

The Penn Treebank consists of flat trees that are not purely phrase-structure annotations; rather, they display some hybrid characteristics as well. In addition to the theory-motivated empty heads and trace relations (“*”), the phrases are enriched with functional information. Labeled edges mark phrases that are not determined functionally by their position − for example, the subject constituent (SBJ), topicalized and fronted constituents, and non-VP predicates. In addition, some semantic labels are assigned to adverbial modifiers (e.g., TMP for temporal modifiers). The annotation is designed to allow the user to extract simple predicate-argument structures, as illustrated in example (10).

(10) U.S. News has yet to announce its 1990 ad rates:
     has(U.S. News, announce(U.S. News, ad rates))

Corpora annotated with the Penn Treebank bracketing format can be queried by means of tools such as Tgrep2, CorpusSearch, Treebank Viewer, and TIGERSearch, among others. A graphical query-by-example tool is included in the Natural Language Toolkit.
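The step from co-indexed traces to the predicate-argument structure of example (10) is mechanical once the antecedents are known. The sketch below assumes a flat list of predicate-argument tuples in which empty elements appear as placeholder symbols; the "*-1"-style index is chosen for illustration and is not meant to reproduce the treebank's exact notation.

```python
def resolve_empty_elements(pred_args, antecedents):
    """Replace placeholder arguments (traces, understood subjects) by the
    lexical antecedents they are co-indexed with."""
    return [(pred, tuple(antecedents.get(arg, arg) for arg in args))
            for pred, args in pred_args]

# Predicate-argument tuples for "U.S. News has yet to announce its 1990
# ad rates"; "*-1" stands for the understood subject of the infinitive.
extracted = [
    ("has", ("U.S. News", "announce")),
    ("announce", ("*-1", "ad rates")),
]
coindexed = {"*-1": "U.S. News"}
```

Resolving the co-indexation yields announce(U.S. News, ad rates), as in example (10).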

3.3.2. Dependency annotation: The Prague Dependency Treebank

The main principle behind dependency annotation is the hierarchical ordering of the words in a sentence without assuming a phrase structure. Starting from the verb as the main governing node, all immediate dependents of the verb (subject, object, and clausal or verbal adverbials) are first linked to it by edges. Then, for each node dependent on the verb, it is determined whether other words are dependent on it, and so forth. When determining the dependency structure, there is a straightforward extension to spell out the function relating the dependent word to its governor, such as subject-of or determiner-of.


The motivation to use dependency annotation arises from languages like Czech that have a more relaxed word order than English. An important advantage of dependency structures is that they allow users to extract predicate-argument structures and other functional relations independent of the surface word order; these structures and relations serve as the first step in understanding the meaning of a sentence. In computational linguistics, the use of dependency-annotated corpora has increased in recent years for this reason (cf. Rehbein 2010).

One crucial characteristic of dependency annotation is that a specific head must be determined for each substructure. It cannot be left implicit, as was the case with phrasal nodes that abstract away from the internal structure. For instance, if the string President Barack Obama functions as the subject, it must be specified which of the nouns is the immediate dependent of the verb, since it is impossible to have three subject relations and there is no intermediate abstract node available that spans all three nouns. As a rule of thumb, the right-most noun could be specified as head. This would allow us to integrate the nominal structure in the dependency tree, but it would also imply a hierarchy for which there is no real evidence. As with phrase-structure annotation, it is important to note how fragments and ellipsis are dealt with in dependency annotation. Clearly, a problem will arise if words cannot be related to each other because of missing links.

The Prague Dependency Treebank for Czech (Hajič et al. 2003) is a typical example of a dependency-annotated corpus. Its second release incorporates two million words from newspaper and journal texts, all of which are annotated with a detailed morphosyntactic tagset, 1.5 million words annotated with a surface-oriented syntactic analysis (called the analytical level), and 800,000 words annotated with meaning-oriented information (called the tectogrammatical level).
Figure 55.9 depicts treebank representations for the sentence We forgot to breathe (see example [11]). In the analytical layer (left), zapomněli ‘we-forgot’ governs the other nodes of the sentence except for the punctuation mark, which is governed by an auxiliary root node AuxS together with the verb. The functional labels of the dependency relations, such as Pred (predicate), AuxV (auxiliary verb), Obj (object), and AuxK (punctuation), are displayed as features of the dependent nodes.

(11) Zapomněli jsme dýchat.
     we.forgot AUX to.breathe
     ‘We forgot to breathe.’

[Czech]

The analytical level (the left side in Figure 55.9) is only an intermediate layer of the analysis. Full annotation includes the tectogrammatical level (the right side in Figure 55.9), which follows the theory of Functional Generative Description (Sgall, Hajičová, and Panevová 1986). This level represents the functional deep structure of a sentence in terms of tree structures similar to those in the analytical layer, but on a more abstract level: Only base forms of lexical words are represented by nodes; the meanings of functional words such as auxiliary verbs are captured in the annotations of the lexical words if relevant. Additional nodes are inserted for understood arguments, such as the subject of the clause (#PersPron ACTor in Figure 55.9) and the understood subject of the infinitive (#Cor ACTor), which is co-referent with the clausal subject, as indicated by the arrow. In addition to the functional analysis, the tectogrammatical level includes information-structural annotation, in which the topic-focus structure of a sentence is


marked, and also co-reference annotation, which links anaphoric pronouns and other referential expressions to their antecedents, as shown in the example tree. Corpora that are annotated with the Prague Dependency Treebank format can be queried by Netgraph or TrEd, among other tools (see section 5 for references). (12)

Fig. 55.9: The analytical layer (left) and the more abstract tectogrammatical layer (right) of Zapomněli jsme dýchat. ‘We forgot to breathe.’ (cf. Pajas and Štěpánek 2009: 35)
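The analytical layer of Figure 55.9 can be stored as a plain table of head-indexed tokens, much like the widely used CoNLL exchange formats; the tuples below mirror the labels in the figure, while the tuple layout itself is a simplification for illustration.

```python
# (id, form, head_id, relation); head 0 is the artificial root node (AuxS)
analytical = [
    (1, "Zapomněli", 0, "Pred"),  # 'we-forgot' depends on the root
    (2, "jsme",      1, "AuxV"),  # auxiliary governed by the verb
    (3, "dýchat",    1, "Obj"),   # infinitival object of the verb
    (4, ".",         0, "AuxK"),  # final punctuation, also under the root
]

def dependents_of(sentence, head_id):
    """All tokens immediately governed by the node head_id."""
    return [(form, rel) for tid, form, head, rel in sentence if head == head_id]

def triples(sentence):
    """(governor, dependent, relation) triples; the artificial root is 'ROOT'."""
    forms = {tid: form for tid, form, head, rel in sentence}
    return [(forms.get(head, "ROOT"), form, rel)
            for tid, form, head, rel in sentence]
```

From such a table, functional relations like Obj(zapomněli, dýchat) can be read off directly, independent of surface word order, which is exactly the advantage of dependency annotation noted above.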

3.3.3. A hybrid approach: The TIGER Corpus

It has recently been argued that treebanks should be created in a flexible manner, in such a way that they could be easily converted from phrase-based annotation to dependency-based annotation and vice versa in order to accommodate varying user needs (cf. Xia et al. 2009). This type of reasoning seems to have motivated the creation of the German TIGER Corpus (Brants et al. 2004), which presents a more hybrid type of annotation than the Penn Treebank, a combination of phrase structure and dependency structure. Its second release comprises about 900,000 words of newspaper text annotated with a flat phrase structure analysis in which all edges are labeled with functional tags (as in dependency annotation). Also similar to dependency annotation is the linking of dislocated elements with the phrase of their governing node by a functional edge, which often results in crossing edges. Consequently, no trace categories are applied, and understood arguments are not represented. Figure 55.10 displays the annotation of example (13). It contains a topicalized direct object (OA), which is linked to its governing head (the verb umkehren), creating a crossing edge (legend for Figure 55.10: OA: accusative object, HD: head, SB: subject, OC: clausal object, PDS: substitutive demonstrative pronoun, VMFIN: finite modal verb, PPER: personal pronoun, VVINF: non-finite full verb).

(13) Das wollen wir umkehren.
     that want we reverse
     ‘We want to reverse that.’ (TIGER corpus 2.1)

[German]


(14)

Fig. 55.10: The TIGER annotation of Das wollen wir umkehren. ‘We want to reverse that.’

The annotation scheme of the TIGER corpus is an extended version of the earlier annotation scheme of the 350,000-word NEGRA corpus (Brants, Skut, and Uszkoreit 2003). Its goal is a theory-neutral representation of basic syntactic features. Corpora annotated in the TIGER-XML format can be queried by tools including TIGERSearch and ANNIS.
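Whether a functional linking produces a crossing edge can be read off a constituent's yield: if the token positions it covers are not contiguous, the constituent is discontinuous. A minimal check, using invented token positions for example (13):

```python
def is_continuous(positions):
    """True iff a constituent's yield covers a contiguous stretch of tokens."""
    ps = sorted(positions)
    return ps == list(range(ps[0], ps[-1] + 1))

# In "Das wollen wir umkehren" (tokens numbered 1-4), the clausal object
# consists of the topicalized "Das" (1) and "umkehren" (4): a discontinuous
# yield, hence a crossing edge in the TIGER graph.
```

Here is_continuous({1, 4}) is False, signalling the discontinuity, while a yield like {2, 3} would be continuous.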

3.3.4. Grammar-based treebanking

Large-scale manual annotation projects like the Penn Treebank, the Prague Dependency Treebank, and the TIGER corpus all employ automatic preprocessing steps for creating (partial) structures, which are then manually corrected and extended by the annotators. In addition to this method, there are also projects that seek to create a treebank by employing deep parsers. “Deep” here means that the parser’s output fully models the underlying predicate-argument structure and modifications, including the interpretation of dislocated phrases, control phenomena with understood arguments, and function assignments in coordinate structures.

Rule-based deep parsers implement a specific syntactic theory. One example of this is the English Resource Grammar (Copestake and Flickinger 2000), an implementation of Head-Driven Phrase Structure Grammar (HPSG, Pollard and Sag 1994), which is a constraint-based theory that assumes a strongly lexicalized grammar. This grammar has been used to create the LinGO Redwoods HPSG Treebank (Oepen et al. 2002), the first version of which consisted of 10,000 utterances from the English Verbmobil corpus (the latter has been published as the TüBa-E/S corpus of spoken English). Baldwin et al. (2005) report on adapting this grammar to a different domain: written texts sampled from the British National Corpus (BNC). Another deep grammar used for treebanking is the implementation of Lexical Functional Grammar (LFG, e.g. Bresnan 2001) in the ParGram project (Butt et al. 1998). It was used to annotate the PARC 700 Dependency Bank (King et al. 2003), which consists of 700 sentences originally from the Penn Treebank.

An advantage of grammar-based treebanking is its consistency: Errors will occur in a consistent manner and are therefore more easily spotted than inconsistent errors in human annotation.
A disadvantage is that a broad-coverage parser generally assigns multiple analyses to individual sentences due to potential lexical and structural ambiguities. As a consequence, the analysis step must be followed by a selection step, which can require time-consuming selection by hand if it is not supported by a selection module (either rule-based or statistical). Manual selection of relevant parses can also be supported by graphical tools. An early example of this is the TreeBanker (Carter 1997). The LFG Parsebanker supports the discrimination of LFG parses (Rosén, Meurer, and De Smedt 2009).

This concludes our brief survey of syntactic annotation. The next section examines the exploitation of corpora for syntactic investigation.

4. Syntactic evidence from corpora

“(…) [E]very corpus I’ve had a chance to examine, however small, has taught me facts I couldn’t imagine finding out about in any other way” (Fillmore 1992: 35). This statement can be interpreted to mean that data gathered from a corpus often function as a catalyst for further analysis. A strong motivation for collecting authentic examples from corpora is the aim of modelling and understanding variation (often among equally grammatical structures). The investigation into the nature of the English dative alternation provides an illustrative example (cf. Bresnan et al. 2007). The English dative structure can either be expressed as a double object structure (Mary gave John the book) or as a prepositional dative structure (Mary gave the book to John). One analysis, the meaning-to-structure mapping hypothesis (e.g. Pinker 1989), proposes that these structures differ in their meanings. The double object structure denotes a change of state (i.e., a change of possession), whereas the prepositional dative structure denotes a change of place (i.e., a movement to a goal). Some evidence for this hypothesis comes from give idioms such as give someone the creeps, which do not involve a movement but indicate a change of state in the possessor. According to the meaning-to-structure hypothesis, these are reported to be ungrammatical in the prepositional dative reading, as illustrated in example (15).

(15) a. That movie gave me the creeps.
     b. *That movie gave the creeps to me.

Bresnan et al. (2007) searched the World Wide Web for instances of give idioms used with the prepositional dative and came up with a number of examples, including the one shown in example (16).

(16) This story is designed to give the creeps to people who hate spiders, but is not true. (Bresnan et al. 2007: 72, their example [6b])

This type of counter-evidence motivated Bresnan et al. (2007) to pursue further quantitative investigations in controlled corpora as well.
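A first, informal version of such a search can be run with a simple regular expression over raw text. The three sentences below are invented stand-ins for web data, and the pattern is only a rough approximation of the prepositional-dative use of the idiom.

```python
import re

sentences = [
    "That movie gave me the creeps.",
    "This story is designed to give the creeps to people who hate spiders.",
    "The old house gives the creeps to everyone who enters it.",
]

# forms of "give" (give/gave/gives/given) immediately followed by "the creeps to"
PREP_DATIVE_IDIOM = re.compile(r"\bg[ai]ve[sn]?\s+the\s+creeps\s+to\b",
                               re.IGNORECASE)

def prepositional_dative_hits(texts):
    """Return the sentences matching the prepositional-dative idiom pattern."""
    return [t for t in texts if PREP_DATIVE_IDIOM.search(t)]
```

The double-object use (gave me the creeps) is not matched, so the hits are exactly the attested-looking prepositional datives; a serious study would of course need a far more careful query and manual filtering of the results.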
They applied advanced statistical techniques (i.e., regression analyses) to identify significant factors and interactions among them, including the animacy and givenness of the recipient. Like other variable phenomena, the dative alternation is multi-factorial, in that multiple features are assumed to have an influence on the choice between the alternative expressions. The features discussed with regard to dative alternation include properties of the two argument phrases: animacy (animate/inanimate), discourse accessibility (e.g., given/non-given), surface realisation (pronoun/NP), definiteness (definite/indefinite) and length (e.g., number of words), among other aspects.


In general, the first step in a quantitative study is to determine the features and their values (e.g., discourse accessibility and its values, given and non-given) and how they may be found in the data. If the corpus provides annotation of the relevant features, corresponding examples can be easily retrieved. If not, it might be possible to approximate the target features on the basis of the corpus’s annotation. For instance, the information-structural value brand-new has been approximated by occurrences of proper names modified by a relative clause (cf. Strube and Hahn 1999). As a last resort, relevant examples are manually annotated for the missing features.

For the statistical analysis, one or more features are singled out as the dependent or response variables. These are the features for which the statistical model will predict certain values. The other features are interpreted as independent or explanatory variables; the model tests whether the behaviors of these variables influence the outcomes of the dependent variables. In other words, the model tests whether variation in the realization of the independent features can explain part of the variation in the dependent features.

There are two approaches to statistical investigation: confirmatory analysis, in which an explicit hypothesis is tested (for example, the hypothesis that the animacy of the recipient has a statistically significant influence on the realization of the dative constructions), and exploratory analysis, in which the influences of the features are explored without preconceptions. For instance, a logistic regression analysis would test the influences of all independent variables on one dependent variable at once. More elaborate multi-variate methods allow the model to test the influences of independent variables on more than one dependent variable. Which types of tests are applicable depends on the data.
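For a single binary factor such as recipient animacy, a confirmatory look at the data can start from a two-by-two contingency table and its odds ratio. The counts below are invented purely for illustration; a serious study would fit a regression model over many factors at once, as Bresnan et al. (2007) do.

```python
def odds_ratio(counts):
    """Odds ratio for a 2x2 table: how strongly an animate recipient favours
    the double object construction over the prepositional dative."""
    a = counts[("animate", "double_object")]
    b = counts[("animate", "prep_dative")]
    c = counts[("inanimate", "double_object")]
    d = counts[("inanimate", "prep_dative")]
    return (a * d) / (b * c)

# invented counts, purely for illustration
toy_counts = {
    ("animate", "double_object"): 80,
    ("animate", "prep_dative"): 20,
    ("inanimate", "double_object"): 30,
    ("inanimate", "prep_dative"): 70,
}
```

An odds ratio well above 1 would indicate that animate recipients favour the double object variant; whether the association is statistically significant would still have to be assessed, e.g. with a chi-square or Fisher test.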
The reader will find references to further literature on this topic in the final section. Example studies of syntactically variable phenomena include Heylen (2005), Bresnan et al. (2007), Bouma (2008), Bader and Häussler (2010), and Kiss et al. (2010).

5. Further readings and resources

Further readings on all topics in this chapter may be found in Lüdeling and Kytö (2008, 2009). Within this collection, Xiao (2008) provides a comprehensive overview of well-known corpus resources. Ostler (2008) completes the picture with a state-of-the-art survey on corpora of less frequently studied languages. In recent years, David Lee has compiled the most comprehensive bookmark collection of corpus-based linguistics (http://tiny.cc/corpora), including links to corpus resources and tools. Kübler and Zinsmeister (2004) provide a general introduction to linguistically annotated corpora. The BootCat toolkit as described in Baroni et al. (2009) helps to compile corpora from the World Wide Web. Wynne (2005) offers a comprehensive guide to developing linguistic corpora in general, including standards of annotation and encoding. Further reading on the design, the development, and the use of syntactically annotated corpora in particular is provided by Abeillé (2003) and more recently summarized in Nivre (2008).

Tools for manually annotating syntactic corpora include WebAnno and MMAX2 for positional annotation and linking sets of text units, @nnotate for interactively annotating

55. Syntax and Corpora

1935

constituency structure, TrEd for the annotation of both constituency and dependency trees, and WordFreak, a general XML-based annotation tool. Further information on the important issue of evaluating annotation is provided by Artstein and Poesio (2008), who have assembled a comprehensive survey of various measures of inter-annotator agreement. VassarStats is a webpage on statistics that provides interactive forms for online statistical computation. WebLicht is an online platform for the creation of annotated text corpora. It makes available a collection of annotation tools for different languages with an emphasis on German.

The subject of how to query corpora is introduced in detail for the online version of the British National Corpus, BNCweb (Hoffmann et al. 2008). The BNCweb queries run on the corpus query processor of the IMS Open Corpus Workbench, which is also used as the search engine for many other online corpus interfaces. Meurers and Müller (2009) showcase TIGERSearch queries on different levels of abstraction by means of three case studies using the German TIGER corpus. Other query tools mentioned in section 3.3 are: CorpusSearch, Tgrep2, Treebank Viewer, Netgraph, and TrEd. A graphical query-by-example tool is included in Fangorn, ICARUS, and also in the Natural Language Toolkit (NLTK).

The WebCorp search engine allows the user to search the web for patterns of words (including placeholders) and to display the results in concordance (key-word-in-context) format. Another linguist-oriented meta-search engine is the Web as Corpus tool. For some years, the Linguist’s Search Engine provided syntactic annotation and syntactic query options for English and Chinese web pages (Resnik et al. 2005). It has recently been re-implemented as the Extended Linguist’s Search Engine (ELSE) at the Ruhr-University Bochum. Quantitative exploration of corpus-based data by means of the program R is the topic of a number of recent publications (Baayen 2008; Gries 2008, 2009; Johnson 2008).
Corpus Linguistics and Linguistic Theory is a journal on corpus-based research focusing on theoretically relevant issues in all core areas of linguistic research, including syntax. Another relevant journal is Language Resources and Evaluation, which has the corpora themselves in focus, as its title suggests. Conferences that bring together work on syntax and on corpora include the former workshop series on Linguistically Interpreted Corpora (LINC), the Quantitative Investigations in Theoretical Linguistics workshop (QITL), the biennial Corpus Linguistics conference in England, the annual Treebanks and Linguistic Theory workshop (TLT), and the biennial Linguistic Evidence conference. The biennial Language Resources and Evaluation Conference (LREC) is a comprehensive conference on corpus resources and applications. General information on corpus resources and corpus-related events can be found on the Corpora List and in its web archive.

6. References (selected)

Abeillé, Anne (ed.)
2003 Treebanks: Building and Using Parsed Corpora. Dordrecht: Kluwer.
Anderson, A., M. Bader, E. Bard, E. Boyle, G. Doherty, S. Garrod, S. Isard, J. Kowtko, J. McAllister, J. Miller, C. Sotillo, H. Thompson, and R. Weinert
1991 The HCRC MapTask corpus. Language and Speech 34: 351−366.


IX. Beyond Syntax

Atkins, Sue, Jeremy Clear, and Nicholas Ostler
1992 Corpus design criteria. Literary and Linguistic Computing 7: 1−16.
Artstein, Ron, and Massimo Poesio
2008 Inter-coder agreement for computational linguistics. Computational Linguistics 34: 555−596.
Baayen, R. Harald
2008 Analyzing Linguistic Data: A Practical Introduction to Statistics. Cambridge: Cambridge University Press.
Bader, Markus, and Jana Häussler
2010 Word order in German: A corpus study. Lingua 120: 717−762.
Baldwin, Timothy, John Beavers, Emily Bender, Dan Flickinger, Ara Kim, and Stephan Oepen
2005 Beauty and the beast: What running a broad-coverage precision grammar over the BNC taught us about the grammar − and the corpus. In: Stephan Kepser and Marga Reis (eds.), Linguistic Evidence: Empirical, Theoretical and Computational Perspectives, 49−69. Berlin/New York: Mouton de Gruyter.
Baroni, Marco, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta
2009 The WaCky Wide Web: A collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation 43.3: 209−226.
Bergh, Gunner, and Eros Zanchetti
2008 Web linguistics. In: Anke Lüdeling and Merja Kytö (eds.), Corpus Linguistics, Part 1, 309−327. (Handbooks of Linguistics and Communication Science/HSK 29.1.) Berlin/New York: Mouton de Gruyter.
Biber, Douglas
1993 Representativeness in corpus design. Literary and Linguistic Computing 8: 243−257.
Bies, Ann, Mark Ferguson, Karen Katz, and Robert MacIntyre
1995 Bracketing Guidelines for Treebank II Style Penn Treebank Project. Technical manual, University of Pennsylvania. http://languagelog.ldc.upenn.edu/myl/PennTreebank1995.pdf (accessed June 10 2014).
Black, E., S.P. Abney, D. Flickinger, C. Gdaniec, R. Grisham, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M.P. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski
1991 A procedure for quantitatively comparing the syntactic coverage of English grammars. In: Proceedings of DARPA Speech and Natural Language Workshop, 306−311. Pacific Grove, CA: Morgan Kaufmann.
Bouma, Gerlof
2008 Starting a sentence in Dutch: A corpus study of subject- and object fronting. Doctoral Dissertation, University of Groningen.
Brants, Sabine, Stefanie Dipper, Peter Eisenberg, Sylvia Hansen, Esther König, Wolfgang Lezius, Christian Rohrer, Georg Smith, and Hans Uszkoreit
2004 TIGER: Linguistic interpretation of a German corpus. Journal of Language and Computation 2: 597−620.
Brants, Thorsten
1997 The NeGra export format for annotated corpora (version 3). Technical Report, NEGRA Project, Universität des Saarlandes.
Brants, Thorsten, Wojciech Skut, and Hans Uszkoreit
2003 Syntactic annotation of a German newspaper corpus. In: Anne Abeillé (ed.), Treebanks: Building and Using Parsed Corpora, 73−87. Amsterdam: Kluwer.
Bresnan, Joan
2001 Lexical-Functional Syntax. Malden, MA/Oxford: Blackwell.
Bresnan, Joan
2007 Is syntactic knowledge probabilistic? Experiments with the English dative alternation. In: Sam Featherston and Wolfgang Sternefeld (eds.), Roots: Linguistics in Search of Its Evidential Base, 77−96. (Studies in Generative Grammar.) Berlin/New York: Mouton de Gruyter.

55. Syntax and Corpora

Bresnan, Joan, Anna Cueni, Tatiana Nikitina, and Harald Baayen
2007 Predicting the dative alternation. In: Gerlof Bouma, Irene Krämer, and Joost Zwarts (eds.), Cognitive Foundations of Interpretation, 69−94. Amsterdam: Edita/KNAW.
Butt, Miriam, Tracy H. King, María-Eugenia Niño, and Frédérique Segond
1998 A Grammar-Writer’s Cookbook. Stanford, CA: CSLI Publications.
Carroll, John, Ted Briscoe, and Antonio Sanfilippo
1998 Parser evaluation: A survey and a new proposal. In: Proceedings of the International Conference on Language Resources and Evaluation (LREC-1998), 447−454. Granada, Spain.
Carter, David M.
1997 The TreeBanker: A tool for supervised training of parsed corpora. In: Proceedings of the ACL-Workshop on Computational Environments for Grammar Development and Linguistic Engineering, 9−15. Madrid, Spain.
Chomsky, Noam
1981 Lectures on Government and Binding. Berlin/New York: Mouton de Gruyter.
Copestake, Ann, and Dan Flickinger
2000 An open-source grammar development environment and broad-coverage English grammar using HPSG. In: Proceedings of the Second Conference on Language Resources and Evaluation (LREC-2000). Athens, Greece.
Cysouw, Michael, and Bernhard Wälchli
2007 Parallel texts: Using translational equivalents in linguistic typology. Sprachtypologie und Universalienforschung (STUF) 60.2: 95−99.
Derrick, Donald, and Daniel Archambault
2010 TreeForm: Explaining and exploring grammar through syntax trees. Literary & Linguistic Computing 25.1: 53−66.
Dipper, Stefanie
2008 Theory-driven and corpus-driven computational linguistics, and the use of corpora. In: Anke Lüdeling and Merja Kytö (eds.), Corpus Linguistics: An International Handbook, 68−96. Berlin/New York: Mouton de Gruyter.
Evert, Stefan
2006 How random is a corpus? The library metaphor. Zeitschrift für Anglistik und Amerikanistik 54.2: 177−190.
Fillmore, Charles
1992 ‘Corpus linguistics’ or ‘computer-aided armchair linguistics’. In: Jan Svartvik (ed.), Directions in Corpus Linguistics: Proceedings of the Nobel Symposium 82, 35−60.
Godfrey, John J., Edward C. Holliman, and Jane McDaniel
1992 SWITCHBOARD: Telephone speech corpus for research and development. In: Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 517−520.
Granger, Sylviane
2003 The International Corpus of Learner English: A new resource for foreign language learning and teaching and second language acquisition research. TESOL Quarterly 37.3: 538−546.
Gries, Stefan Th.
2008 Statistik für Sprachwissenschaftler. Göttingen: Vandenhoeck & Ruprecht.
Gries, Stefan Th.
2009 Quantitative Corpus Linguistics with R: A Practical Introduction. Milton Park/New York: Routledge.
Haider, Hubert
2009 The thin line between facts and fiction. In: Sam Featherston and Susanne Winkler (eds.), The Fruits of Empirical Linguistics, Volume 1. Berlin/New York: Mouton de Gruyter.


Hajič, Jan, Alena Böhmová, Eva Hajičová, and Barbora Vidová-Hladká
2003 The Prague Dependency Treebank: A three-level annotation scenario. In: Anne Abeillé (ed.), Treebanks: Building and Using Parsed Corpora, 103−127. Amsterdam: Kluwer.
Hajič, Jan, Jarmila Panevová, Eva Buráňová, Zdeňka Urešová, and Alla Bémová
1999 Annotations at analytical level: Instructions for annotators. Technical manual. [English translation of the Czech original by Zdeněk Kirschner].
Hemphill, Charles, John Godfrey, and George Doddington
1990 The ATIS Spoken language systems pilot corpus. In: Proceedings of the DARPA Speech and Natural Language Workshop. Hidden Valley: Morgan Kaufmann.
Heylen, Kris
2005 A quantitative corpus study of German word order variation. In: Stephan Kepser and Marga Reis (eds.), Linguistic Evidence: Empirical, Theoretical and Computational Perspectives, 241−263. Berlin/New York: Mouton de Gruyter.
Hoffmann, Sebastian, Stefan Evert, Nicholas Smith, David Lee, and Ylva Berglund Prytz
2008 Corpus Linguistics with BNCweb: A Practical Guide. Frankfurt am Main: Peter Lang.
Hollmann, Willem B., and Anna Siewierska
2006 Corpora and (the need for) other methods in a study of Lancashire dialect. Zeitschrift für Anglistik und Amerikanistik 54: 203−216.
Hovy, Eduard, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel
2006 OntoNotes: The 90 % solution. Proceedings of the Human Language Technology Conference of the North American Chapter of the ACL, 57−60. New York, USA.
Johnson, Keith
2008 Quantitative Methods in Linguistics. Malden/Oxford/Victoria: Blackwell Publishing.
Kilgarriff, Adam
2001 Web as corpus. Proceedings of the Corpus Linguistics Conference, 342−344. Lancaster, United Kingdom.
Kilgarriff, Adam, and Gregory Grefenstette
2003 Introduction to the special issue on the Web as corpus. Computational Linguistics 29.3: 333−348.
King, Tracy H., Richard Crouch, Stefan Riezler, Mary Dalrymple, and Ronald M. Kaplan
2003 The PARC 700 Dependency Bank. Proceedings of the 4th International Workshop on Linguistically Interpreted Corpora, held at the 10th Conference of the European Chapter of the Association for Computational Linguistics (EACL’03). Budapest, Hungary.
Kiss, Tibor, Katja Keßelmeier, Antje Müller, Claudia Roch, Tobias Stadtfeld, and Jan Strunk
2010 A logistic regression model of determiner omission in PPs. In: Proceedings of International Conference on Computational Linguistics (COLING 2010), 561−569. Beijing, China.
Klaudy, Kinga
2008 Explicitation. In: Mona Baker and Gabriela Saldanha (eds.), Routledge Encyclopedia of Translation Studies, 2nd edition, 104−108. London/New York: Routledge.
Krippendorff, Klaus
2013 Content Analysis: An Introduction to its Methodology, 3rd edition. Thousand Oaks, CA: Sage.
Kučera, Henry, and William Francis
1967 Computational Analysis of Present-day American English. Providence, RI: Brown University Press.
Kübler, Sandra, and Heike Zinsmeister
2014 Corpus Linguistics and Linguistically Annotated Corpora. London: Bloomsbury.
Leech, Geoffrey, and Elizabeth Eyes
1997 Syntactic annotation: Treebanks. In: Roger Garside, Geoffrey Leech, and Tony McEnery (eds.), Corpus Annotation, 34−52. London/New York: Longman.


Lemnitzer, Lothar, and Heike Zinsmeister
2010 Korpuslinguistik: Eine Einführung, 2nd edition. Tübingen: Narr.
Lezius, Wolfgang
2002 Ein Suchwerkzeug für syntaktisch annotierte Textkorpora. Ph.D. dissertation, IMS, University of Stuttgart. Arbeitspapiere des Instituts für Maschinelle Sprachverarbeitung (AIMS) 8.4.
Lüdeling, Anke, and Merja Kytö (eds.)
2008 Corpus Linguistics, Part 1. (Handbooks of Linguistics and Communication Science/HSK 29.1.) Berlin/New York: Mouton de Gruyter.
Lüdeling, Anke, and Merja Kytö (eds.)
2009 Corpus Linguistics, Part 2. (Handbooks of Linguistics and Communication Science/HSK 29.2.) Berlin/New York: Mouton de Gruyter.
Lüdeling, Anke, Stefan Evert, and Marco Baroni
2007 Using Web data for linguistic purposes. In: Marianne Hundt, Caroline Biewer, and Nadja Nesselhauf (eds.), Corpus Linguistics and the Web, 7−24. (Language and Computers: Studies in Practical Linguistics 59.) Amsterdam/New York, NY: Rodopi.
Lüdeling, Anke, Maik Walter, Emil Kroymann, and Peter Adolphs
2005 Multi-level error annotation in learner corpora. In: Proceedings of the Corpus Linguistics Conference. Birmingham, Great Britain.
Marcus, Mitchell, Beatrice Santorini, and Mary Ann Marcinkiewicz
1993 Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics 19.2 [Special issue on Using Large Corpora: II].
Mayo, Catherine, Matthew Aylett, and D. Robert Ladd
1997 Prosodic transcription of Glasgow English: An evaluation study of GlaToBI. In: Intonation: Theory, Models and Applications. European Speech Communication Association, Athens, Greece.
McEnery, Tony, and Andrew Wilson
2001 Corpus Linguistics, 2nd edition. Edinburgh: Edinburgh University Press.
McEnery, Anthony M.
2003 Corpus linguistics. In: Ruslan Mitkov (ed.), Handbook of Computational Linguistics, 448−463. Oxford: Oxford University Press.
Meurers, Detmar, and Stefan Müller
2009 Corpora and syntax. In: Anke Lüdeling and Merja Kytö (eds.), Corpus Linguistics, Part 2, 920−933. (Handbooks of Linguistics and Communication Science/HSK 29.2.) Berlin/New York: Mouton de Gruyter.
Newmeyer, Frederick J.
2003 Grammar is grammar and usage is usage. Language 79: 682−707.
Nivre, Joakim
2008 Treebanks. In: Anke Lüdeling and Merja Kytö (eds.), Corpus Linguistics, Part 1, 225−241. (Handbooks of Linguistics and Communication Science/HSK 29.1.) Berlin/New York: Mouton de Gruyter.
Oepen, Stephan, Dan Flickinger, Kristina Toutanova, and Christopher D. Manning
2002 LinGO Redwoods. A rich and dynamic treebank for HPSG. In: Proceedings of The First Workshop on Treebanks and Linguistic Theories (TLT 2002). Sozopol, Bulgaria.
Ostler, Nicholas
2008 Corpora of less studied languages. In: Anke Lüdeling and Merja Kytö (eds.), Corpus Linguistics, Part 1, 457−483. (Handbooks of Linguistics and Communication Science/HSK 29.1.) Berlin/New York: Mouton de Gruyter.
Pajas, Petr, and Jan Štěpánek
2009 System for querying syntactically annotated corpora. Proceedings of the ACL-IJCNLP 2009 Software Demonstrations, 33−36. Singapore, Singapore.


Pinker, Steven
1989 Learnability and Cognition. The Acquisition of Argument Structure. Cambridge, MA: MIT Press.
Pitrelli, John, Mary Beckman, and Julia Hirschberg
1994 Evaluation of prosodic transcription labeling reliability in the ToBI framework. In: Proceedings of the International Conference on Spoken Language Processing, 123−126. Yokohama, Japan.
Pollard, Carl, and Ivan A. Sag
1994 Head-driven Phrase Structure Grammar. Chicago: University of Chicago Press.
Pullum, Geoffrey
2003 Corpus Fetishism. http://itre.cis.upenn.edu/~myl/languagelog/archives/000122.html (accessed June 10 2014).
Rehbein, Ines
2010 Der Einfluss der Dependenzgrammatik auf die Computerlinguistik. Zeitschrift für Germanistische Linguistik 38.2: 224−248.
Resnik, Philip, Aaron Elkiss, Ellen Lau, and Heather Taylor
2005 The Web in theoretical linguistics research: Two case studies using the Linguist’s Search Engine. 31st Meeting of the Berkeley Linguistics Society, February 2005.
Rosén, Victoria, Paul Meurer, and Konraad De Smedt
2009 LFG Parsebanker: A toolkit for building and searching a treebank as a parsed corpus. In: Frank Van Eynde, Anette Frank, Konraad De Smedt, and Gertjan Van Noord (eds.), Proceedings of the Seventh International Workshop on Treebanks and Linguistic Theories, 127−133. Utrecht: LOT.
Sampson, Geoffrey
1995 English for the Computer: The SUSANNE Corpus and Analytic Scheme. Oxford: Clarendon Press.
Sampson, Geoffrey
2001 Empirical Linguistics. London/New York: Continuum.
Sampson, Geoffrey
2003 Thoughts on two decades of drawing trees. In: Anne Abeillé (ed.), Treebanks: Building and Using Parsed Corpora, 23−41. Dordrecht: Kluwer.
Sampson, Geoffrey
2007 Grammar without grammaticality. Corpus Linguistics and Linguistic Theory 3: 1−32 [Special issue].
Sampson, Geoffrey, and Anna Babarczy
2003 A test of the leaf-ancestor metric for parse accuracy. Journal of Natural Language Engineering 9.4: 365−380.
Santini, Marina
2004 A shallow approach to syntactic feature extraction for genre classification. In: Proceedings of the 7th Annual Colloquium for the UK Special Interest Group for Computational Linguistics (CLUK 2004). Birmingham, Great Britain.
Santorini, Beatrice
1990 Part-of-Speech Tagging Guidelines for the Penn Treebank Project. Technical Report, University of Pennsylvania. http://repository.upenn.edu/cis_reports/570/ (accessed June 10 2014).
Schulte im Walde, Sabine
2009 The induction of verb frames and verb classes from corpora. In: Anke Lüdeling and Merja Kytö (eds.), Corpus Linguistics. An International Handbook, 952−971. Berlin/New York: Mouton de Gruyter.
Schütze, Carson
1996 The Empirical Base of Linguistics: Grammaticality Judgments and Linguistic Methodology. Chicago: University of Chicago Press.


Sekine, Satoshi, and Michael Collins
1997 EvalB: A bracket scoring program. http://nlp.cs.nyu.edu/evalb/ (accessed June 10 2014).
Sgall, Petr, Eva Hajičová, and Jarmila Panevová
1986 The Meaning of the Sentence and its Semantic and Pragmatic Aspects. Dordrecht: Reidel.
Sharoff, Serge
2006 Open-source corpora: Using the net to fish for linguistic data. International Journal of Corpus Linguistics 11.4: 435−462.
Sorace, Antonella, and Frank Keller
2005 Gradience in linguistic data. Lingua 115: 1497−1524.
Strube, Michael, and Udo Hahn
1999 Functional centering: Grounding referential coherence in information structures. Computational Linguistics 25: 309−344.
Szmrecsányi, Benedikt
2013 The great regression: Genitive variability in Late Modern English news texts. In: Kersti Börjars, David Denison, and Alan Scott (eds.), Morphosyntactic Categories and the Expression of Possession, 59−88. Amsterdam: Benjamins.
Telljohann, Heike, Erhard Hinrichs, Sandra Kübler, Heike Zinsmeister, and Kathrin Beck
2012 Stylebook for the Tübingen Treebank of Written German (TüBa-D/Z), Revised Version. Technical Report, Seminar für Sprachwissenschaft, Universität Tübingen.
Wasow, Thomas
2002 Postverbal Behavior. Stanford, CA: CSLI Publications.
Wolk, Christoph, Joan Bresnan, Anette Rosenbach, and Benedikt Szmrecsányi
2013 Dative and genitive variability in Late Modern English: Exploring cross-constructional variation and change. Diachronica 30: 382−419.
Wynne, Martin (ed.)
2005 Developing Linguistic Corpora: A Guide to Good Practice. Oxford: Oxbow Books. Available online from http://ota.ahds.ac.uk/documents/creating/dlc/ (accessed June 10 2014).
Xia, Fei, Owen Rambow, Rajesh Bhatt, Martha Palmer, and Dipti M. Sharma
2009 Towards a multi-representational treebank. In: Proceedings of the 7th International Workshop on Treebanks and Linguistic Theory (TLT-7), 159−170. Groningen, Netherlands.
Xiao, Richard
2008 Well-known and influential corpora. In: Anke Lüdeling and Merja Kytö (eds.), Corpus Linguistics, Part 1, 383−457. (Handbooks of Linguistics and Communication Science/HSK 29.1.) Berlin/New York: Mouton de Gruyter.
Zeldes, Amir, Julia Ritz, Anke Lüdeling, and Christian Chiarcos
2009 ANNIS: A search tool for multi-layer annotated corpora. In: Proceedings of Corpus Linguistics 2009. Liverpool, Great Britain.

Heike Zinsmeister, Hamburg (Germany)


56. Syntax and Stylistics

Par exemple, si de ces deux idées contenues dans la phrase serpentem fuge, je vous demande quelle est la principale, vous me direz, vous, que c’est le serpent; mais un autre prétendra que c’est la fuite; et vous aurez tous deux raison. (Diderot)

1. Empirical data
2. Basic strategies
3. Beyond the informative function
4. References (selected)

Abstract

Lexicon and grammar determine the set of stylistic variations possible in a language. Grammar decides upon acceptability, stylistics upon appropriateness, and the basic elements and principles describing grammatical competence have their counterparts determining stylistic competence. The most general stylistic principle is the Principle of Appropriateness, which − for the most basic function of informative language use − requires easy discourse integration. Restricting stylistic variations to syntax, we can choose from a subset of structural paraphrases differing in word order, case frame and explicitness the version which is most appropriate to a particular discourse. Interpreting the empirical data gained by such systematically controlled paraphrasing in the light of relevant linguistic and psycholinguistic theories (esp. those about information structure and language processing), language specific preferences can be shown to be due to the impact grammatical parameters have on general stylistic strategies safeguarding discourse appropriate sentence structures. The typological difference between German as an SOV language with rather free word order and English as an SVO language with relatively rigid word order presents a particularly striking case of such a general interdependency between the syntactic and stylistic properties of languages.

1. Empirical data

1.1. A method

In his famous Letter on the Deaf and Dumb: for the use of those who Hear and Speak (1751), Denis Diderot turned to the question of natural word order − at the time a philological controversy about the relation between language and thought: whether languages with a greater potential for free word order (inversion) were closer or less close to the order of our ideas (or concepts). The main (principal) idea of serpentem fuge can be either the idea of the snake or that of the flight, relative to different perspectives. Diderot
demonstrates at length that certain ways of expressing one’s thoughts are preferred to other ways. But especially a poetic discourse is “no longer merely a concatenation of energetic terms which present a thought with force and nobility, it is also a tissue of hieroglyphs, piled one upon another” (Diderot in Furbank 1992: 70). This “pile of hieroglyphs” is what stylistics is about, and at the bottom of it lies the question of the main idea and the question about the ways in which it can be verbalized most efficiently. “The verbal structure of a message depends primarily on the predominant function”, says Jakobson in Linguistics and Poetics (1960: 358) and, as the example of the Stanislavsky actor with his fifty different messages of This evening suggests, analyzing the special function of a verbal structure involves a wide range of different functions − according to Jakobson: emotive (speaker related), conative (hearer related), referential (object related), phatic (securing the contact between speaker and hearer), metalingual (code related) and poetic (related to the message itself − where paradigmatic (lexical) selection “on the base of equivalence, similarities and dissimilarities, synonymity and antonymity” projects into the syntagmatic “axis of combination”). Lexicon and grammar together determine the set of stylistic variations possible in a language. This can also be claimed for any regional, social and functional sublanguage within the language. Since Diderot’s time, progress in linguistics has provided us with a number of theoretical models and methods which allow us to systematically identify the linguistic features contributing to the stylistic effect of a paraphrase within its discourse.
It is obvious that the stylistic rules and principles of language use cannot be read off from the grammatical rules and principles of language, however much the stylistic options depend upon the grammatical possibilities, but the methods developed by linguists to get to the rules and principles of grammar can also be used for stylistic studies. Although there are a great number of competing approaches, associated with as many terminological differences, the potential for disagreement may be somewhat reduced by using the generative method of empirical research and restricting stylistic studies to “controlled paraphrases”, that is to paraphrases which differ only in one feature, as for example word order, but share all other features including the discourse setting. To isolate the stylistic impact of variation, grammaticality and similarity of meaning have to be guaranteed (at least as far as the fuzziness of all linguistic phenomena which include discourse conditions permits); if the context is restricted to elements one should flee from, serpentem fuge could no longer serve as an example of focus alternatives. That is, any statement about the stylistic effect of a syntactic feature presupposes a detailed contextual analysis. The method of “controlled paraphrasing” accesses the native speaker’s tacit, intuitive knowledge of the stylistic effects of different verbal structures with similar meaning. Although the complexity of the factors involved in the comparison is much higher, the way to stylistic principles and rules is no less promising than that of sets of paraphrases to the principles and rules of grammar. In the following, stylistic appropriateness of word order (1.2), case frame (1.3) and structural explicitness (1.4) will be discussed in relation to a particular context (exemplified by a written form of standard language use).
Examples from the empirical data will be followed by proposals about stylistic strategies underlying informative language use (processing economy 2.1; balanced information 2.2; sentence borders 2.4) and assumptions about their language specific variations (from 2.3 on). Concluding considerations will briefly turn to other functions of language use (argumentation 3.1 and particular stylistic figures 3.2).

1944

IX. Beyond Syntax

All examples will be in German in the original version, and felicitous English translations will be taken up under the aspect of language specific differences from 2.3 on. There will be no literal translations, as these could only show the grammatical but not the stylistic properties of the examples. The discussion will not take up the linguistically contentious syntactic issues of word order, case frame and structural explicitness, but rely on the cross-referential potential of a handbook on syntax. As the state-of-the-art in stylistics based on the method of controlled paraphrases is still in statu nascendi (except for twenty years of research by Doherty on such general stylistic aspects of felicitous translations), the following German-English demonstration and commentaries can do no more than stimulate a future research program on syntax (or grammar) and stylistics − where the methodological and theoretical approach presented is put to the test, also on a greater variety of empirical data, including other languages and sublanguages.

1.2. Word order

Anyone with the necessary grammatical competence of German knows that a sentence like (1a) is a grammatically well-formed sentence, as is its reverse (1b):

(1) a. Der ruhmreichste aller Anachronismen ist die Renaissance.
    b. Die Renaissance ist der ruhmreichste aller Anachronismen.

(1a) and (1b) differ only in word order and have the same cognitive (referential) meaning. But if we are to access our stylistic competence we need some additional information about the discourse into which the two versions are to be embedded. If we learn that (1a) introduces the first example of anachronisms, we can even guess which of the two sentences will have been the original one, viz. the one that can be inserted into the context without any extra conditions. With ‘anachronism’ presenting the topic of the preceding discourse and ‘Renaissance’ the topic of the ensuing paragraphs, nearly anyone with a general stylistic competence of German will consider (1a) as more “natural”, that is as more discourse appropriate than (1b). On the other hand, when the discourse returns to the idea of Renaissance after a passage about atavism, (2a) will be judged as more appropriate than (2b):

(2) a. Der Atavismus ist sozusagen das Negativ der Renaissance.
    b. Sozusagen das Negativ der Renaissance ist der Atavismus.

The order “given before new (or before less recent) information” is one of the strategies which determine stylistic competence concerning word order variation. But it does not cover more complex cases like (3a) where ‘Renaissance’ is the only element given by the preceding context. The greater syntactic complexity of the sentence increases the potential for variations, (3b−d):

(3) a. Später sind alle möglichen Erweckungsbewegungen auf den Namen der Renaissance getauft worden.
    b. Auf den Namen der Renaissance sind später alle möglichen Erweckungsbewegungen getauft worden.
    c. Alle möglichen Erweckungsbewegungen sind später auf den Namen der Renaissance getauft worden.
    d. Später sind auf den Namen der Renaissance alle möglichen Erweckungsbewegungen getauft worden.
    e. Erweckungsbewegungen sind später alle mögliche auf den Namen der Renaissance getauft worden.
    f. Später hat man alle möglichen Erweckungsbewegungen auf den Namen der Renaissance getauft.
    g. Später sind alle möglichen Erweckungsbewegungen Renaissance genannt worden.
    h. Später hat man alle möglichen Erweckungsbewegungen als Renaissance bezeichnet.
    i. Später wurde der Name der Renaissance für alle möglichen Erweckungsbewegungen verwendet.

Word order variations may also concern phrase internal structures and changes between phrase internal and external positions, as for example in the case of a split Noun Phrase in (3e) above. The number of syntactic paraphrases will be still higher if active options are included, as in (3f) above. It is clear that discourse appropriateness is a highly complex affair even if it is restricted to syntactic features; but the complexity can be multiplied further by lexical variation, see (3g−i). Lexical variation cannot be omitted altogether as it is lexical projection − that is, syntactic properties of lexical elements − which determines word order in the first place. But it takes us quickly beyond the border separating paraphrases with similar meaning from paraphrases with different meaning. The replacement of the main verb taufen in (3g−i) changes more than the syntactic structure since taufen is normally related to persons or at least associated with some ceremony. Because both conditions are missing in (3a), the lexical choice in (3a) is slightly marked as satirical − an effect which is neutralized in (3g−i). Looking at the lexical choices of (3a) more closely, we find that alle möglichen is not used literally either, nor is the noun Erweckungsbewegungen (exemplified in the following by Irish or Catalan Renaissance).
It is clear that the stylistic effects of such referential shifts are part of Diderot’s heap of hieroglyphs. They are to be dealt with in semantics rather than in syntax. But there is also a connection to syntax as the position after the finite verb promotes the non-literal interpretation of alle mögliche in the subject of (3a): The position after the finite verb is often interpreted as informationally weak in German. (It can also be seen as a “designated” position of topic, suggested e.g. by Frey 2005 − provided the topic is not contrastive or partitive.) In all other syntactic variants ([3b−f]) alle mögliche receives more attention, allowing also a literal interpretation of the attribute − a possibility which diminishes the discourse appropriateness of these paraphrases. (The split case of [3e] retains the “sloppy” interpretation of the quantifier, but is even less discourse appropriate with its contrastive foci.) Notions like “focussed information, topics, informationally weak positions” lead beyond syntax “proper” into the syntactic-pragmatic interface of information structure, which is generally also related to the prosodic aspects of stress and intonation (silent reading included). Syntax proper is context independent; it deals with “properties that
an element has because of its lexical specifications or because it stands in a formal relation to other elements in a sentence” and “its prosodic properties (being accented/ de-accented) can also be determined sentence-internally by comparing it with other elements in the same prosodic phrase” (Fanselow 2006: 138). But information structure is a dominating aspect of the relation between syntax and stylistics. Although we can deviate intentionally from the basic stress pattern associated with (3a), discourse appropriateness of the written form manifests itself in the basic correlation between word order and prosody; the word order variations of (3b−e) yield different prosodic patterns. Deviating from basic patterns requires special reasons. In any case, discourse appropriate word order matches the information structure as it is determined by the intra- and extrasentential context. In cases of discourse inappropriate word order, the formally expressed information structure, in particular the focus expectations associated with a given word order, does not match the information structure determined by the discourse. It is clear that progress in the theoretical understanding of such stylistic phenomena depends heavily on the explanatory power of the theoretical models describing the interaction between the various systems of language involved in discourse. Although discourse related interpretations of syntactic structures are traditionally linked to concepts like topic and focus, none of the existing theories on topic or focus can cope with the stylistic effects of the word order variations in (3a−e) in a compositional way. (Jacobs 1991/1992 suggests a way that is, however, not yet flexible enough for the more complex cases of discourse appropriate information structure. Also because the Noun Phrase internal information structure must be calculated and integrated into the information structure of the Verb Phrase and of the whole sentence.) 
Other concepts related to discourse, in particular those of informational values determined by retrospective or prospective relevance, are still less applicable (as is certainly the case with the informational grading suggested for the concept of “communicative dynamism”, e.g. in Sgall 2001.) However, since information structure has become a major topic in linguistic research, a better understanding of the syntactic and prosodic systems interacting to achieve discourse appropriate word order can be expected. (This does not automatically carry over to the prospects of a linguistically based “generative” stylistics as long as linguistics and stylistics do not join forces − except for an occasional reference, as in Abraham 2007, the relation has so far been a one-sided affair, as the bibliography below demonstrates.)

1.3. Perspective

Paraphrasing by syntactic variation is not restricted to the phenomenon of word order. But all other variations involve differences in the form and quantity of the lexical elements used. One of the most basic areas allowing syntactic paraphrases in the interest of discourse appropriateness is related to case frames, in particular to lexical projection (the way in which a lexical head controls its syntactic “partners” by its specific meaning, its “Semantic Form”, as described e.g. in Bierwisch 1996). An example of such paraphrasing was (3f), repeated here in (4a), where the agent underlying the impersonal passive of the paraphrase (3a), repeated in (4b), is spelled out by an indefinite pronoun in the active sentence.

(4)

a. Später hat man alle möglichen Erweckungsbewegungen auf den Namen der Renaissance getauft.
b. Später sind alle möglichen Erweckungsbewegungen auf den Namen der Renaissance getauft worden.

Except for the pronominal subject, the word order of (4a) is identical to that of the passive structure in (4b), but the perspective differs. A look at the discourse shows that (4b) is also more discourse appropriate than (4a): introducing a short paragraph on revivalist movements given the name of Renaissance, the sentence follows two paragraphs about difficulties in dating the original Renaissance, but as the last sentence says, “that such a reciprocal moment took place is not doubted”. The discourse topic of the new paragraph is the use of the name (‘Renaissance’) and (4b), its first sentence, transfers the original name onto other cases. Presenting alle möglichen Erweckungsbewegungen as subject (in [4b]) is clearly a more concise way to indicate the topic shift than using it as object after an additional impersonal agent (in [4a]). Also, with the indefinite pronoun occupying the informationally weak position, the defocussing advantage of this position for the quantifier is lost.

The perspective of another case frame may diminish discourse appropriateness by adding redundant elements as in (4a), but it could also increase discourse appropriateness. This is obvious for shifted perspectives which require no additional phrases (as was the case in [1a], where the inversion of subject and complement in a copular sentence secured a greater degree of discourse appropriateness than the basic perspective). But a different perspective may be more appropriate even if it allows the same word order and is less economical than its counterpart. The sentence (5a) below follows the description of a mathematical operation called “Baker’s transformation” (which is compared to the production of a pastry dough). The word order in (5a) involves topicalizing of the adverbial, which produces the same word order as the transitive version in its basic order, given in (5b):

(5)

a. Mit jedem weiteren Schritt verfeinert sich die Struktur.
b. Jeder weitere Schritt verfeinert die Struktur.

The difference in case frame is due to the projection of the reflexive (in [5a]) versus the transitive form of the verb verfeinern (in [5b]), and the overt topicalization of the instrumental in (5a) corresponds to a covert topicalization in (5b), where the semantic role of the instrument is mapped onto the grammatical subject − a case of lexical transfer. Despite its longer, more complex structure, (5a) is stylistically preferred to (5b). Unlike English, German allows topicalizing out of verb phrases freely, while it observes more restrictions on lexical transfer, frequently leading to a stylistic preference for the syntactically more involved passives or passive-like perspectives over the syntactically less complex, straightforward active perspective. (Doherty 1993 presents a first attempt at a more comprehensive discussion of different “perspectives” associated with the typological profiles of German and English.)

Lexical transfer is a diachronic phenomenon which extends the syntagmatic properties of lexical elements. Grammaticalized restrictions are language specific, and some languages are more liberal than others, especially by allowing more semantic classes in the position of subjects. Thus English, for example, is more liberal in relation to inanimate,
non-intentional subjects than German, where such subjects can result in an inadvertent personification. Stylistic restrictions can be very subtle, as demonstrated by (5b). The transitive verb verfeinern is perfectly appropriate with inanimate, non-intentional subjects like spices. The representation of such idiosyncratic effects of lexical elements is not a problem of syntax, but the lexical trends of a language can be determined by its syntactic parameters. Thus, the higher degree of configurationality in English may have contributed to its more liberal lexical transfer. But the stricter selection restrictions in German lead to competing stylistic preferences. In cases like (5a), discourse appropriate word order is realized by less economical, i.e. longer, syntactically more complex structures. And a stylistic preference for more complex paraphrases is not what would normally be expected.

1.4. Explicitness

There is no doubt that stylistic appropriateness is also a matter of explicitness. How much does one have to say explicitly at a certain point of discourse, and how much has to remain implicit? Concentrating on the informative use of language, we know that a great variety of structural reductions is possible in order to avoid redundancy, that is, to avoid being too explicit. In particular, we would not make use of unnecessary repetitions or spell out what has been said before without special reasons. But as it is a constitutive property of discourse to progress from old to new information, repetitions are also a constitutive feature of language use in discourse. While repetitiveness is generally assessed as a stylistic deficiency, reliable linguistic rules about the proper proportion between elimination or reduction, for example by pronominalization, and lexically full repetitions are hardly available. Even less is known about language specific peculiarities; for example, lexical repetition where pronominalization would be confusing is recommended in English but avoided in German in favour of the use of synonyms.

Explicitness is not restricted to fully meaningful lexical elements; syntactically structured language use is also subjected to stylistic rules for discourse appropriate degrees of explicitness. In the case of informative discourse functions, we classify a word, a phrase or a clause as redundant if it presents a repetition not needed for discourse appropriate interpretation. But discourse appropriate interpretation may profit from “dummies” − extra words, phrases or clauses with little meaning, used solely for the acceleration of discourse integration. In some cases such structural “redundancies” have even been grammaticalized, as with cleft sentences, expletives (like there) or focussing particles.

Take, for example, the cleft version in (6a) below, which opens a new paragraph spelling out Stephen Jay Gould’s fundamental criticism of psychometrical tests to measure intelligence. The history of these tests has been described over almost fifteen pages. The sentence preceding (6a) introduces Stephen Jay Gould as the critic who is more to the point than anyone else. The sentence following (6a) presents the first argument: intelligence is no quantifiable phenomenon; it cannot − and this is the second argument − participate in a one-dimensional scale of metrical values. The cleft version uses two clauses, extended by structural elements with solely syntactic meanings (a copula and two pronouns, the expletive and the relative pronoun). But the subject, er, viz. Stephen Jay Gould, and the test methods of the psychologists are given information, and the indefinite
object zwei fundamentale Trugschlüsse has to be identified as the focus of the sentence. Precisely this can be considered the normal interpretation of the simple sentence ([6b]):

(6)

a. Es sind zwei fundamentale Trugschlüsse, die er den Testmethoden der Psychologen ankreidet.
b. Er kreidet den Testmethoden der Psychologen zwei fundamentale Trugschlüsse an.

Thus, (6b) should be perfectly discourse appropriate. What, then, made the author of the original prefer the more explicit version of the cleft (6a), where the indefinite, focussed object precedes the given information? The answer is that the greater degree of explicitness adds more weight to the focussed element; it introduces the new discourse referent explicitly by an extra identifying relation (Rooth 1999 says that the focus of clefts is associated with an existential presupposition) and links it to a claim with which the reader is already familiar − Stephen Jay Gould’s criticism of the psychologists’ methods. Although the following three paragraphs elaborating Gould’s criticism are rather short compared with the fifteen pages about psychometrical methods, Gould’s fundamental criticism has to be assessed as the culmination point of the critical tenor of the entire presentation. While the end position in (6b) assigns no more than a local focus to the direct object, the cleft structure in (6a) increases the discourse relevance of the focussed element (Doherty 2001a: 278 speaks of “macro-structural relevance” in such cases).

As the discussion of (1)−(6) has shown, variations involving different word order, perspective and degrees of explicitness contribute to stylistic appropriateness in highly intricate ways, securing discourse appropriate information structures and perspectives even over large discourse segments. If they are controlled by “incremental” paraphrasing and a close analysis of the associated discourse, intuitive judgments on discourse appropriateness can secure a wide array of empirical data which allow generalizations about frequently recurring patterns of stylistic preferences. But what are the basic principles and rules guiding our stylistic competence in all these cases? And what is their relation to the syntactic principles and rules of the language under consideration?

2. Basic strategies

2.1. Processing economy

Position, perspective and explicitness are fundamental formal features of sentence structure in most genres. However much the referential use of language is overlaid by the emotive, conative, phatic or poetic functions, comprehension of what has been expressed normally depends upon language processing in discourse. If certain syntactic structures are preferred to others − as illustrated in the examples above − it will also be due to the different processing conditions set by differences in word order, perspective or explicitness.

Questions of language processing are dealt with in psycholinguistics, and the specific conditions underlying discourse appropriateness can best be studied within the psycholinguistic model of “garden paths”, which result from errors in the resolution of syntactic
ambiguities − temporary or permanent. In either case, such errors or retardations of processing impede easy comprehension and promote the stylistic preference for paraphrases with fewer garden paths. Garden path effects may arise at all levels of language processing, from categorial and inflectional properties of words and their localization within the structural hierarchy of a sentence up to the interpretation of graphic means in written texts.

There are theoretical models (like Bader’s 1996 Sprachverstehen) which can serve as a framework for the basic aspects characterizing the interaction between the different modules needed for a syntactically based stylistics. But as discourse appropriateness concerns the ease of discourse integration, all the extra-linguistic information relevant for the discourse at a particular point of language processing also has to be taken into account, and the complexity of the elements and their interaction seem to defy any attempt at formulating stylistic rules of sufficiently predictive power. Theoretical models of lexically driven syntactic parsing have to be combined and enriched at least by theories of discourse related anaphora resolution (for example along the lines of Asher’s 1993 Discourse Representation Theory) and information structure, in order to spell out the linguistic conditions of discourse appropriateness for the referential, cognitive uses of language.

As the comparison of the paraphrases (1)−(6) has demonstrated, the most general stylistic strategies underlying the preference for the paraphrases under (a) secure the most economical way of identifying the referents and their sentence internal roles as well as their relevance for the discourse. The most general stylistic strategy aiming at informative discourse appropriateness can thus be formulated as “Stylistic Parsimony” (Sparprinzip):

(7) SP: Choose the syntactic structure which offers the most economical ways to identify the referents, semantic roles and discourse relevance of its phrases.

(SP is a somewhat more detailed version of Crocker’s [1996: 106] Principle of Incremental Comprehension: “The sentence processor operates in such a way as to maximise comprehension of the sentence at each stage of processing.”)

A simple case of such processing economy was illustrated by the active version of (3f) above, repeated in (8b) here, which was less appropriate than the passive version (3a), repeated in (8a), as the indefinite pronominal subject man does not add anything worth the extra processing effort.

(8)

a. Später sind alle möglichen Erweckungsbewegungen auf den Namen der Renaissance getauft worden.
b. Später hat man alle möglichen Erweckungsbewegungen auf den Namen der Renaissance getauft.

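The intuition behind SP as applied to (8) can be caricatured in a few lines of code: each phrase costs processing effort, and effort spent on a phrase that contributes no retrievable referent, role or discourse relevance − like the indefinite man in (8b) − is wasted. The phrase inventories and the notion of “wasted effort” below are invented for illustration; they are a sketch, not part of Doherty’s proposal:

```python
# Toy illustration of Stylistic Parsimony (SP). Each phrase costs
# processing effort; a phrase that contributes nothing identifiable
# (marked False) is wasted effort. Data and cost model are invented.

def wasted_effort(phrases):
    """Count phrases that cost processing effort but contribute nothing."""
    return sum(1 for _, informative in phrases if not informative)

passive_8a = [("Später", True),
              ("alle möglichen Erweckungsbewegungen", True),
              ("auf den Namen der Renaissance", True),
              ("getauft worden", True)]

active_8b = [("Später", True),
             ("man", False),  # indefinite agent: adds no retrievable referent
             ("alle möglichen Erweckungsbewegungen", True),
             ("auf den Namen der Renaissance", True),
             ("getauft", True)]

def prefer(a, b):
    """SP prefers the candidate with less wasted processing effort."""
    return a if wasted_effort(a) <= wasted_effort(b) else b

assert prefer(passive_8a, active_8b) is passive_8a  # (8a) wins: nothing wasted
```
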
Naturally, there are constraints on economy. Sperber and Wilson (1996) speak of a balance between the cognitive gains carried by a message and the effort that is needed to process it. It is clear that a stylistic strategy of processing economy must not apply to any irretrievable information. But (5) and, even more strikingly, (6) have shown that there are also much subtler constraints following from the trade-off between the three areas of SP. The more economical structure of (5b), repeated in (9b), is rejected in favour of the less economical one of (5a), repeated in (9a), where the obvious disadvantage of

56. Syntax and Stylistics

1951

processing an additional preposition and reflexive pronoun is outweighed by the isomorphic relation between the semantic roles and the syntactic functions, i.e. by the syntactic option of the reflexive version (sich verfeinern) for a more direct lexical projection.

(9)

a. Mit jedem weiteren Schritt verfeinert sich die Struktur.
b. Jeder weitere Schritt verfeinert die Struktur.

While the preference of (9a) over (9b) may be decided at the interface between syntax and semantics, the preference of (6a), repeated in (10a), over (6b), repeated in (10b), has to be decided at the interface between semantics and pragmatics, between the meaning of sentences and that of larger discourse segments.

(10)

a. Es sind zwei fundamentale Trugschlüsse, die er den Testmethoden der Psychologen ankreidet.
b. Er kreidet den Testmethoden der Psychologen zwei fundamentale Trugschlüsse an.

As described under 1.4, the discourse relevance of the new information exceeds the local focus, justifying extra processing effort for the structural dummies of a cleft sentence. A theoretical model weighing processing advantages and disadvantages against each other would require a presentational format like the one offered by Optimality Theory, which would have to compare the individual steps within and between all modules of language processing, including macro-structural aspects. It is clear that this can only be achieved if SP is subdivided into special strategies with a narrower scope.
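The kind of constraint weighing alluded to here can be mimicked by an Optimality-Theory-style tableau: candidates are compared on a ranked list of constraints, and the winner is the candidate whose violation profile is least serious under the ranking. The constraint names and violation counts below are invented placeholders, not an analysis from the literature:

```python
# A minimal Optimality-Theory-style evaluation: constraints are ranked,
# and candidates are compared lexicographically on their violation
# vectors, so one violation of a higher-ranked constraint outweighs any
# number of violations further down. All names/counts are invented.

CONSTRAINTS = ["DISCOURSE-LINK", "LEXICAL-ECONOMY", "PROSODY-MATCH"]  # high to low

candidates = {
    "(9a) reflexive":  {"DISCOURSE-LINK": 0, "LEXICAL-ECONOMY": 1, "PROSODY-MATCH": 0},
    "(9b) transitive": {"DISCOURSE-LINK": 1, "LEXICAL-ECONOMY": 0, "PROSODY-MATCH": 0},
}

def evaluate(cands):
    """Return the candidate with the smallest ranked violation vector."""
    return min(cands, key=lambda c: tuple(cands[c][k] for k in CONSTRAINTS))

print(evaluate(candidates))  # (9a): its only violation ranks lower than (9b)'s
```
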

2.2. Balanced information distribution

Different preferences in the explicitness of linguistic structures (as in [3]−[6]) are directly related to processing economy. But word order preferences (as in [1]−[3]) can also be seen under the aspect of processing economy. The first example presented a case of word order shift where the discourse appropriate beginning of a sentence resumed an element which was already present in the preceding discourse and should therefore be easy to retrieve from the working memory of the processor. In general, the order given information before new information may be assumed to promote discourse linking at an early stage of processing. A stylistic strategy of given-before-new, GIN (or, as in [2a], immediately-given-before-earlier-given), seems reasonable as it economizes on the activation of informational elements which are still present in short term memory. Moreover, word order in (2a), repeated in (11), secures an economical presentation by its basic order of subject and predicative.

(11) Der Atavismus ist sozusagen das Negativ der Renaissance.

In contrast to this, the inverted order in (1a), repeated in (12), is in line with GIN but requires some additional processing effort at the syntactic level (identifying the topicalization), which might outweigh the gains to be made at the level of discourse integration.

(12) Der ruhmreichste aller Anachronismen ist die Renaissance.

The topicalized phrase is only to some extent given information, as it is only the nominal head of the phrase which is resumed from the preceding discourse. The partitive construction with its adjective in the superlative is altogether new information and is no less focussed than the information of the subject at the end of the sentence. Since part of the information of the topicalized Noun Phrase requires some more attention, it may be worth the extra processing effort. Although (12) is a prototypical instance of Daniel Büring’s (1997) bridge accent, we are free to assign the “hat pattern” also to (1b), repeated in (13), securing its discourse appropriateness at the phonetic level.

(13) Die Renaissance ist der ruhmreichste aller Anachronismen.

But as this step could not be taken before the discourse appropriate interpretation of (13), that is, by the last phase of processing, the economic preference for (12) with its partitive topic as a formal marker is evident.

The regularities underlying word order preferences in more complex sentences (like [3]) are more intricate. But the preferred distribution of information in (3a) and a wide variety of empirical data gained by the method of control paraphrases suggest a stylistic strategy of “balanced information distribution”, BID, which aims at an alternation between focussed and unfocussed information. (Doherty 2006 replaces her 1993 metaphor of a “concave/convex” information structure by this more “operational” term.) While the given-before-new order of simpler structures meets the stylistic requirement of BID per se, any more complex information structure is partitioned further. The most frequent pattern of balanced information distribution is a tripartite structure. In a highly simplified way, BID can be said to select unfocussed or weaker elements surrounded by focussed or stronger ones or, vice versa, stronger elements surrounded by weaker ones.
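As a rough illustration, GIN and BID can be rendered as simple checks over hand-assigned informational labels. The labels and the binary notion of informational “strength” are simplifications introduced here for the sketch, not part of the original proposals:

```python
# Toy checks for two stylistic strategies over hand-labelled sequences:
#   GIN: given information precedes new information.
#   BID: focussed ("strong") and unfocussed ("weak") stretches alternate,
#        as in the tripartite patterns strong-weak-strong / weak-strong-weak.

def satisfies_gin(values):
    """values: 'given'/'new' labels; no 'given' may follow a 'new'."""
    seen_new = False
    for v in values:
        if v == "new":
            seen_new = True
        elif seen_new:           # a 'given' element after 'new' violates GIN
            return False
    return True

def satisfies_bid(strengths):
    """strengths: 'strong'/'weak' labels; adjacent values must differ."""
    return all(a != b for a, b in zip(strengths, strengths[1:]))

# A (3a)-like pattern: focussed adverb, weak quantifier, focussed remainder
assert satisfies_bid(["strong", "weak", "strong"])
assert not satisfies_bid(["strong", "strong", "weak"])  # unbalanced run of foci
assert satisfies_gin(["given", "given", "new"])
```
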
Sentence (3a) was an example of an informationally weaker element surrounded by two stronger ones. A stricter account of such informational values (relativized to each other and to the discourse) as strong or focussed, weak or unfocussed, is still beyond the horizon, but discourse appropriate stress patterns will have to play a role in it, as demonstrated by the discussion of (3) above. BID constrains the processing economy of direct discourse linking associated with the strategy of GIN. But it helps to avoid garden paths which would result from processing unbalanced information structures (as exemplified by the comparison of [3a] with [3b−d], which is just one case of the impressive range of stylistic preferences pertaining to BID).

Stylistic strategies like GIN and BID are sub-strategies of SP, but they are still generalizations over widely divergent cases of syntactic structures, and their interpretation in terms of traditional concepts like focus and topic is no minor challenge. For example, the adverb “later” in (3a) is more “focussed” in the initial position than in its “weaker” medial position in (3b). But it is less focussed than the topicalized object in a sentence like (14a).

(14)

a. Genügend Raum und ein gewisses Maß an Sicherheit können sich solche Leute kaufen.
b. Genügend RAUM und ein gewisses Maß an SICHERHEIT können sich solche Leute KAUFEN.

c. Solche Leute können sich genügend Raum und ein gewisses Maß an Sicherheit kaufen.
d. Solche Leute können sich genügend RAUM und ein gewisses Maß an SICHERHEIT KAUFEN.

With its topicalized argument, (14a) can be considered a marked case of balanced information distribution, with the weakest value (solche Leute) surrounded by − partitively or contrastively − focussed information. (The preceding sentence says that people in top positions will not be able to afford some of the future luxury goods − which is then spelled out in the sentence following [14a]: “Aber sie haben keine Zeit und keine Ruhe / But they have no time and no peace.”) The object in (14a) is a partitive topic specifying two of the new luxury goods, while the predicate (sich kaufen können) establishes a contrastive relation with the preceding (negated) predicate (sich leisten können); the subject in between merely summarizes the various groups referred to in the preceding sentence.

With capital letters representing focussed information, the balance indicated by the topicalized object in (14b) is something like a reading finally forced upon the syntactic parser when it has worked through the initial NP, the modal verb and the reflexive pronoun and reaches the position of the subject, just in time for the contrastive interpretation of kaufen. Before that, the parser has no reason to drop its primary interpretation of the coordinated Noun Phrase as subject − which could also be confirmed by another ending (e.g. … können sich beruhigend auswirken). Syntactic garden paths like this are well described in the psycholinguistic literature (e.g. P. Gorell 2000), and it is by no means easy to explain the stylistic preference for (14a) over (14c). There is no question that the basic order in (14c) could also be read with the intended foci (see [14d]). However, the contrastive focus on kaufen could easily be missed after the focussed object.
It is much more “visible” in (14b), where it is separated from the other foci by the informationally weak subject. (Doherty 2002: 43 suggests the term “focus separation”; a recent, technically elaborate presentation of this idea in Optimality Theory is to be found in Caroline Féry 2007.)

2.3. Language specific variations

Stylistic strategies like BID, which rationalize processing by an alternating pattern of processing challenges, may guide carefully planned verbalization in all languages. But the linguistic structures selected by BID will differ in different types of language. Thus, the English translation of (14a), given in (15a), is analogous to the German version of (14c):

(15)

a. Such individuals can buy sufficient space and a certain degree of security.
b. *Sufficient space and a certain degree of security such individuals can buy.

Topicalizing the object, as in the German original, would yield a sentence which is ungrammatical in English − at least under normal conditions, see (15b) above. Word order variation is known to be restricted in English. The inversion of subject and verb
is bound to extra conditions and not possible in sentences like (15). The processing disadvantage arising from a topicalized phrase before a preverbal subject is evident. BID cannot apply freely to internal arguments in a language which depends upon structural configurations to identify the syntactic functions of subject and object. But BID can apply to free extensions of the verb such as the adverb in (16a), the translation of (3a).

(16)

a. Later, every conceivable revivalist movement was given the name “Renaissance”.
b. Every conceivable revivalist movement was later given the name “Renaissance”.

The basic position of later in (16b) produces the same stylistic disadvantages as the analogous German paraphrase (3c), giving too much prominence to the quantifier of the subject. However, with the English subject fixed to its preverbal position, there are a great number of cases where English adverbials are stylistically preferred in their basic, Verb Phrase internal position. Moreover, the typological difference between English, an SVO language with relatively rigid word order, and German, an SOV language with free word order, is also associated with different processing conditions at the right hand side of sentences. While prototypical arguments tend to follow the same order in English and German, free adverbials come in alternative positions. Thus, the pronominal adverb in (17a) occurs before the object in German, but its corresponding phrase occurs after the object in English, see (17b):

(17)

a. Die Projektion spielt dabei eine entscheidende Rolle.
b. The projection plays a decisive role in this.

The different word order within the verb phrases of (17a) and (17b) is correlated with different patterns of focussed/unfocussed information − in metrical terms, iambic in German, trochaic in English. Consequently, the stylistic strategy of BID yields different results in a great number of sentences.

As arguments are more likely to be focussed than adjuncts − the subject in (17) happens to be a new element − the iambic/trochaic difference will often add up to such a tripartite IS with an end focus in German and a mid focus in English. In contrast to traditional assumptions classifying English as an end focus language, empirical data from felicitous, discourse appropriate translations support the idea of English as a “mid focus” language − also for a great number of phrases and structural segments other than adverbials. It is certainly reasonable to assume (as seems to be more common now anyway) that the normal position for the most relevant information of a sentence, its main focus, is associated with the position of the verb − either with the verb adjacent argument or with the verb itself if there is no focusable argument. Verb adjacency promotes end focus in German with its basic final verb, but mid focus in English with its verb phrase initial verb. A great number of structural differences between original English texts and their stylistically appropriate German translations confirm the different position of sentence foci: before informationally weaker elements in English and after those elements in German.
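The verb-adjacency generalization sketched here − the main focus falls on the verb-adjacent argument, or on the verb itself if no focusable argument is adjacent − can be expressed as a tiny rule over schematic clause patterns. The clause representations below are invented simplifications:

```python
# Predict the default main-focus position from verb placement: the focus
# goes to the argument adjacent to the verb. With a clause-final verb
# (German SOV) this yields end focus; with a VP-initial verb (English
# SVO) it yields mid focus. Schematic labels (S, O, ADV, V) are invented.

def default_focus(constituents, verb_final):
    """constituents: clause elements in order, one of them 'V'."""
    v = constituents.index("V")
    if verb_final:                      # German: argument just before the verb
        return constituents[v - 1] if v > 0 else "V"
    # English: argument just after the verb
    return constituents[v + 1] if v + 1 < len(constituents) else "V"

german  = ["S", "ADV", "O", "V"]        # verb-final VP
english = ["S", "V", "O", "ADV"]        # verb-initial VP

print(default_focus(german, verb_final=True))    # end focus lands on O
print(default_focus(english, verb_final=False))  # mid focus lands on O
```
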

End focus and mid focus differences may even be associated with alternative patterns of redundancies, that is, structural additions used solely for the purpose of a balanced information structure. The original sentence in (18a) has been translated into English as (18b). (The pronominal das/this refers to “mixed feelings accompanying the promises of technical globalization” in the preceding discourse.)

(18)

a. Das hat einen sehr einfachen Grund.
b. There is a very simple reason for this.
c. Es gibt einen sehr einfachen Grund dafür.
d. Dafür gibt es einen sehr einfachen Grund.
e. This has a very simple reason.

While the German sentence has a prototypical binary information structure with the order given−new, the English sentence has a tripartite structure, where the expletive pronoun opens up a structure with mid focus. Although a parallel version is possible in German, as in (18c) or (in line with BID) in (18d), the original sentence (18a) is stylistically preferred because it is free of redundancies. Conversely, in English, the tripartite version with its focusing there-is structure seems to be preferred to the binary one, given in (18e). So far, there is no model of language processing which could explain the use of such language specific “conventionalized redundancies”.

But grammatical, language specific aspects characterize not only the well-known options for structural reductions (like non-finite verb forms, nominalizations, pronominal or other pro-forms) and eliminations (as in coordination reductions, asyndetic linking of sentences or other forms of ellipsis), which make up a large part of stylistic figures. They can also underlie the presence or absence of certain syntactic and lexical possibilities contributing to processing ease in various ways. One such case is the availability of cleft sentences (as demonstrated under 1.4) and of particles like nämlich and auch (see below and 3.1), which are known to be highly sophisticated means of discourse organisation. It is certainly no coincidence that cleft sentences are more appropriate in mid-focus English and particles in end-focus German. Both means can be considered grammaticalized forms of conventionalized redundancies compensating for language specific disadvantages. (Doherty [2001b: 607−638] discusses a particular option of “cleft-like” sentences compensating for the tighter restrictions on topicalization in English.)

2.4. SIP

Stylistic repercussions of syntax reach beyond sentence borders. Processing economy may be realized by shorter sentences. Fabricius-Hansen (1999) suggests principles of discourse organisation limiting the number of new referents and conditions/accommodations per sentential increment. But every new sentence is also more costly, as there are fewer syntactic options to avoid the repetition of linguistic elements, and processing economy over sequences of sentences constituting larger discourse segments requires some trade-off between the individual sentences in order to minimize anaphoric redundancies. A strategy of incremental parsimony (SIP) exploits the syntactic potential for complex sentences. Doherty (2006: 66) formulates SIP, which safeguards economy over sequences of sentences:

(19) SIP: Attach incoming information to an appropriate point of attachment in the current partial phrase marker (CPPM).

(This is somewhere along the lines of Frazier’s 1988 Principle of Minimal Attachment, which concerns the perceptive side of sentence processing and says: “assign minimal grammatically permissible syntactic structure to an input sentence as the words are encountered”.)

If there is no (discourse appropriate, easy-to-process) point of attachment, SIP requires a new sentence. As SIP interacts with GIN and BID and other stylistic strategies securing discourse appropriate, easy-to-process sentence structures, it is subjected to a great number of constraints. And, as SIP operates on language specific structures, we can, again, expect different results in different languages, reducing or increasing the number of sentences or shifting the borders between sequences of sentences.

Unravelling the language specific conditions for the use of sentence borders is enormously difficult. All the more so, as the options are multiplied by the various means of sentence internal discourse organisation available in addition to the grammatically determined use of commas: dashes, brackets, colons, semicolons − each of them presents an alternative to the use of independent sentences. Processing capacity for modifying phrases and additional clauses is normally restricted, and grammatical options for attachment are subjected to the “natural” order of referents (cataphoric relations being the exception). But grammatical options for attachment are also superimposed by demands of stylistic discourse appropriateness guaranteeing sufficient informational weight for macro-structural aspects of relevance. An exceptionally short example may indicate the complexities awaiting us here.
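SIP can be caricatured as a segmentation rule over clause-sized increments: attach to the open sentence when an appropriate attachment point exists, otherwise insert a sentence boundary. The chunk labels and the attachability predicate below are invented for illustration; a real model would have to fold in the GIN/BID constraints discussed above:

```python
# Toy rendering of the strategy of incremental parsimony (SIP): incoming
# information is attached to the current sentence whenever a discourse
# appropriate, easy-to-process attachment point is available; otherwise
# a new sentence is started. All labels and links are invented.

def segment(increments, attachable):
    """increments: clause-sized chunks in discourse order;
    attachable(prev, nxt): True if nxt may attach to the open sentence."""
    sentences = [[increments[0]]]
    for chunk in increments[1:]:
        if attachable(sentences[-1][-1], chunk):
            sentences[-1].append(chunk)     # SIP: reuse the open sentence
        else:
            sentences.append([chunk])       # no attachment point: new sentence
    return sentences

# Example: an explanation attaches to a claim; a topic shift does not.
links = {("claim", "explanation"): True}
result = segment(["claim", "explanation", "topic-shift"],
                 lambda a, b: links.get((a, b), False))
print(result)  # [['claim', 'explanation'], ['topic-shift']]
```
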
The two short sentences in (20a) could be subjected to SIP and conjoined into one complex sentence expressing the explicative relation by the connector denn instead of the adverb nämlich, as in (20b):

(20) a. Doch auch seine Liste ist keineswegs vollständig. Es werden nämlich fortwährend neue Arten entdeckt.
b. Doch auch seine Liste ist keineswegs vollständig, denn es werden fortwährend neue Arten entdeckt.
c. Doch seine Liste ist keineswegs vollständig, denn es werden fortwährend neue Arten entdeckt.
d. Doch seine Liste ist keineswegs vollständig; denn es werden fortwährend neue Arten entdeckt.
e. Doch seine Liste ist keineswegs vollständig: es werden fortwährend neue Arten entdeckt.
f. Doch seine Liste ist keineswegs vollständig, da fortwährend neue Arten entdeckt werden.
g. Doch seine Liste ist nicht vollständig. Es werden nämlich fortwährend neue Arten entdeckt.

But the complex version is less appropriate; in fact, it is not even fully equivalent. The sequence in (20a) opens a new paragraph, specifying some of the newly discovered types of intelligence; the subject of the first sentence, seine Liste, refers back to J. P. Guilford’s The Nature of Human Intelligence, which extends an earlier list of about
twenty types to the staggering number of one hundred and twenty. The parallelizing focus of auch relates Guilford’s list to the earlier list, and the predicate asserts that both lists are incomplete. The second sentence explains that Guilford’s list is incomplete because people discover ever new types of intelligence. In (20b) the explanation seems to extend to the earlier list, as the scope of auch includes the causal clause. But the original list of twenty types is not included in the explicatory relation of (20a). If we delete auch, as in (20c), dropping the explicit reference to the parallel case (after all, the difference between twenty and one hundred and twenty suggests that anyway), the attachment of the second sentence seems a bit better. But (20c) is less good than a sequence of two semi-autonomous sentences separated by a semicolon, as in (20d), or, better because less redundant, than an asyndetic connection marked by a colon, as in (20e). However, the most economical way of applying SIP would be syntactic subordination, as in (20f), which seems to be faultless. But when we return to the original discourse and insert (20f) at the beginning of the new paragraph, which is going to diversify the earlier lists of twenty and one hundred and twenty types of intelligence even further, the greater discourse appropriateness of a fully autonomous sequence becomes visible, see (20g). The prospective relevance of the explanation (for the most recently discovered types of intelligence elaborated in the following paragraphs) is enhanced by the autonomy of the second sentence and the focussing effect of the attitudinal adverb nämlich. In addition, it is the parallelizing focus marked by the particle auch in the original (20a) which also gives sufficient weight to the retrospective relevance of the first idea.
That is, macro-structural discourse appropriateness is optimized in (20a) by autonomous sentence structures and a greater degree of explicitness: the grammatically necessary expletive subject es and two additional particles. Syntax, especially the typological profile of a language, also determines parametrized conditions for SIP, and a similar set of paraphrases in another language may be rated differently. Discourse appropriate translations from English into German show a tendency towards shifted sentence borders, where initial or final parts of complex sentences are integrated into preceding or following sentences, often to meet German end focus expectations. (Doherty [e.g. 2006: 60−70, 131−156] discusses various cases of such differences in discourse appropriate sentence borders.) SP and its special strategies − from GIN and BID via parametrized perspectives and conventionalized redundancies to SIP and ever larger cross-sentential strategies − aim at stylistic appropriateness at the most basic level of language use. But even here, at the bottom of Diderot’s “heap of hieroglyphs”, stylistic appropriateness is also related to other functions of language use which are not dominated by the criterion of informativity; and higher up in the stylistic “heap” there are quite a few functions which can flout SP altogether.

3. Beyond the informative function

3.1. Argumentation

There is no question that the method of control paraphrases can also be applied to functions of language use other than the purely informative function. But it is again the
comparison of a German original and its English translation which opens our eyes to the far-reaching impact of typologically-based differences on stylistic appropriateness. Although the following comparison of German and English sequences of sentences does not meet the paraphrase condition of similar meaning, it is precisely this feature, a striking difference in structural explicitness, which demonstrates a “bewildering” stylistic effect of the language specific constraints on SP. The sequence in (21a) below introduces two final paragraphs on the future of luxury, which extend the list of paradoxes from mass exclusivity and abstinence to their reciprocal social limitations. Time, for example, which is the most important of luxury items, is least available to the elite, but “with no money and security the majority of the population can make little use of their empty time”. As the immediately preceding paragraph refers to an aspect which has always characterized luxury (viz. withdrawal from reality), discourse appropriate word order favours a topicalized predicative in the first sentence indicating the contrastive relation between old and new questions.

(21) a. Neuartig und verwirrend ist eine andere Frage, die sich bei solchen Aussichten stellt. Es ist nämlich keineswegs klar, wer in Zukunft eigentlich zu den Nutznießern des Luxus zählen wird.
b. Eine andere Frage, die sich bei solchen Aussichten stellt, ist neuartig und verwirrend […]
c. Neuartig und verwirrend ist eine andere Frage. […]
d. Neuartig und verwirrend ist bei solchen Aussichten eine andere Frage. […]
e. New and bewildering, however, is another question that must be posed in light of future prospects: who will count among the beneficiaries of luxury in the future?
f. New and bewildering, however, is another question that must be posed in light of future prospects: it is by no means clear who will count among the beneficiaries of luxury in the future.
g. Neuartig und verwirrend ist eine andere Frage, die sich bei solchen Aussichten stellt. Es ist keineswegs klar, wer in Zukunft eigentlich zu den Nutznießern des Luxus zählen wird.
h. Neuartig und verwirrend ist eine andere Frage, die sich bei solchen Aussichten stellt: Wer wird eigentlich in Zukunft zu den Nutznießern des Luxus zählen?

The initial position of the subject in (21b) would also be discourse appropriate due to the implicit reference to earlier questions expressed by the contrastive phrase andere Frage. But as the second sentence presents the question itself, (21a) is prospectively more appropriate than (21b). The fact that discourse appropriateness is decided both ways, retrospectively and prospectively, presents no minor problem for any serious theoretical representation. The next question concerns the aspect of structural explicitness. The subject ends in a restrictive relative clause which contains nothing but given information: solche Aussichten could not be used if they had not been introduced before. Would not a lower degree of explicitness be stylistically better, as illustrated in (21c)? Replacing (21a) by (21c) in the discourse, we find that the paraphrase without the modifying clause is seriously underspecified regarding the identification of the referential
counterpart. Relating the subject explicitly to previous passages which describe the prospect of future luxury (time, space, quiet, security etc.), the relative clause (esp. solche Aussichten) directs the reader backwards, beyond the last paragraph, to the individual discussion of our future priorities, which has taken up altogether seven paragraphs. Dropping the relative clause means dropping the summarizing reference to a discourse segment of several pages, and with it a formal means to accelerate the process of discourse integration considerably. A structurally less explicit, adverbial version of the modifier, given in (21d) above, is not available either, as it extends the scope of the modifier onto the predicative adjective phrase, which changes the meaning of (21) more than is acceptable. If we look at the English translation of (21) in (21e), we find a topicalized predicative (legitimized also by the additional “however”) and a restrictive relative clause (with a modalized passive perspective instead of the pseudo-reflexive, which is a lexico-syntactic gap in English). While the German version of the second sentence begins with an attitudinal matrix clause, the English translation deletes the matrix clause and attaches the question to the preceding sentence by a colon. Whatever the translator’s motives for the reduction may have been, from an English point of view the meaning of the matrix clause will be perceived as redundant, because it is the primary function of a question to refer to something that is not clear. But the adverbs nämlich and eigentlich, which have also disappeared from the English version, are meaningful expressions; together with the strong form of the negation keineswegs, the German matrix clause does carry some information which the direct question does not. As in (20a), the meaning of nämlich establishes an explanatory discourse relation between the first and the second sentence.
In this way, the matrix clause of the second sentence implies and rejects a possible assumption, viz. that it is clear who the beneficiaries are. (Doherty [2002: 116−118] spells out the additional meaning of eigentlich, which shall be ignored in this context.) There is no economical way to get the attitudinal meaning of nämlich into the English version; but without it, and despite the colon, the sequence is clearly less appropriate, as shown in (21f) above. Without nämlich the sequence would also be less appropriate in German, see (21g). It is the use of nämlich which justifies the use of the matrix clause, contributing to a stylistically well-formed piece of discourse in which the question can be separated from its referential antecedent. That particles like nämlich belong to the set of language specific discourse organisational means compensating for syntax-based processing problems was already mentioned in 2.3. In particular, it is the processing disadvantage of a final verb/end focus in German which is lessened by such particles. Nevertheless, according to the basic stylistic criterion of discourse appropriateness, the German and English versions of (21) seem to be equally well-formed, and as the structure of (21e) is more economical it should also be preferred in German, see (21h). But no one could agree to this. However marginal the additional meaning of the matrix clause in the second sentence of (21a) may be from an informational point of view, it is a relevant contribution to another function of language use: the readers should not only be informed, but also made to believe the “information” (especially if they hold different views). As (21e) could, at best, only implicate the argumentative relation, the more explicit version of (21a) has to be assessed as stylistically more appropriate than the more economical version (21e).

The argumentative function may be considered a typical feature of the genre of the essay, even more so in an essay which begins with a complex title and a sequence of sentences that leave no doubt about the argumentative nature of the whole, as in (22a).

(22) a. Luxus − woher und wohin damit? Reminiszenzen an den Überfluß
Lohnt es sich denn darüber zu reden? Ist das Thema nicht längst erledigt? Ein zweitausendjähriger Streit scheint sich erschöpft zu haben. Es sieht ganz danach aus, als hätte der Luxus über seine Widersacher gesiegt.
b. The Future of Luxury
Is it even worth discussion? Wasn’t the issue settled long ago? It seems that the debate has finally played itself out after two thousand years, and it looks as if luxury has won.

In the macro-structural sense (21a) is clearly more discourse appropriate than the straight sequence of (21h). But this also means that discourse appropriateness can mean different things in different languages, depending upon their basic typological potentials. With its relatively rigid word order and fewer lexical markers, English tends to a lower degree of explicitness at the level of discourse appropriate argumentation. Although this is not the place to discuss the stylistic details of the sequence of sentences in (22a), even a short glance at the published English translation (given above in [22b]) confirms the “tuning-down”.

3.2. Stylistic figures

A close comparison of (22a) and (22b), with their different use of particles, impersonal passives, reflexives and the like, is likely to reveal a great deal about language specific conditions for SP and its substrategies, under the aspect of translation in particular and text production in general. But while stylistic strategies like GIN, BID or SIP are serious candidates for a syntax-based stylistics safeguarding processing economy for such cognitive functions of language use as information and argumentation, they will definitely play only a minor role for the aesthetic functions of literary, poetic language use. Aesthetic interests will not infrequently favour strategies contrary to those of structural parsimony and embrace all sorts of repetitions, including structural parallelism, hyperboles and the like. However, cognitive functions are not excluded from language uses dominated by aesthetic criteria, and aesthetic functions are not excluded from language uses for cognitive purposes. And herein lies a problem which poses an enormous challenge for any comprehensive theory of stylistics: the theoretical separation of intentional and non-intentional violations of processing economy. Traditionally, syntax and stylistics are dealt with under learned terms like asyndeton, aposiopesis, chiasm, zeugma and the like. Stylistic figures like these classify deviations from the neutral use of language. They are expected in texts with literary functions more often than elsewhere, but the question whether such deviations are intended or not, i.e. whether they are aesthetic figures or unfortunate slips, can be rather confusing.

Mark Twain, for example, illustrates the “awful German language” by a prenominal attribute which is difficult to process and stylistically awkward also in German, see (23a). Mark Twain’s ostentatiously “literal” translation is given in (23b).

(23) a. Wenn er aber auf der Strasse der in Samt und Seide gehüllten, jetzt sehr ungeniert nach der neuesten Mode gekleideten Regierungsrätin begegnete …
b. But when he, upon the street, the (in-satin-and-silk-covered-now-very-unconstrainedly-after-the-newest-fashion-dressed) government counselor’s wife MET …

Similar German structures are also often used by W. G. Sebald, a master of an extraordinarily intense poetic German. See the description of his first encounter with Austerlitz, who seems to be sketching the waiting-room of the Antwerp Centraal Station, in (24a), which the published translation renders by a neutralized version (24b):

(24) a. Skizzen, die offenbar in einem Bezug standen zu dem prunkvollen, meines Erachtens eher für einen Staatsakt als zum Warten auf die nächste Zugverbindung nach Paris oder Ostende gedachten Saal […]
b. sketches obviously relating to the room where we were both sitting − a magnificent hall more suitable, to my mind, for a state ceremony than as a place to wait for the next connection to Paris or Oostende −

It is clear that the literary genre by itself cannot justify all deviations from the strategy of processing ease as stylistic figures. Even if we concede a greater degree of freedom for satirical uses of language (eine in Samt und Seide gehüllte Regierungsrätin), we have to admit that a theoretical model of the aesthetic use of language which could explain the different stylistic effects of (23a) and (24a) is not in sight. Diderot’s heap of hieroglyphs contains many more such phenomena, for which the function of conveying information rationally is only marginal.
But in a handbook on syntax we may comfort ourselves with the thought that it is the informative function which determines stylistic expectancy in the first place, and that, relative to it, serious questions can at least be asked about many of the more specific functions of language use.

4. References (selected)

Abraham, Werner 2007 Topic, focus and default vs. contrastive accent: Typological differences with respect to discourse prominence. In: Kerstin Schwabe and Susanne Winkler (eds.), On Information Structure, Meaning and Form: Generalizations across Languages, 183−203. (Linguistik Aktuell/Linguistics Today 100.) Amsterdam: Benjamins.

Asher, Nicolas 1993 Reference to Abstract Objects in Discourse. Dordrecht: Kluwer.

Bader, Markus 1996 Sprachverstehen: Syntax und Prosodie beim Lesen. Opladen: Westdeutscher Verlag.

Bierwisch, Manfred 1996 Lexikon und Universalgrammatik. In: N. Weger (ed.), Semantik, Lexikographie und Computeranwendungen, 129−164. Tübingen: Niemeyer.

Büring, Daniel 1997 The 59th Street Bridge Accent: On the Meaning of Topic and Focus. London/New York: Routledge.

Crocker, M. W. 1996 Computational Psycholinguistics: An Interdisciplinary Approach to the Study of Language. Dordrecht: Kluwer.

Diderot, Denis 1964 Lettre sur les sourds et muets à l’usage de ceux qui entendent et qui parlent, addressée à Monsieur ***. In: Hervé Falcou (ed.), Diderots Ecrits Philosophique. Paris: Jean-Jacques Pauvert.

Doherty, Monika 1993 Parametrisierte Perspektive. Zeitschrift für Sprachwissenschaft 12(1): 3−38.

Doherty, Monika 2001a Discourse theory and the translation of clefts between English and German. In: István Kenesei and Robert M. Harnisch (eds.), Perspectives on Semantics, Pragmatics and Discourse: A Festschrift for Ferenc Kiefer, 273−292. (Pragmatics & Beyond, New Series 90.) Amsterdam: Benjamins.

Doherty, Monika 2001b Cleft-like sentences. Linguistics 39(3): 607−638.

Doherty, Monika 2002 Language Processing in Discourse: A Key to Felicitous Translation. London/New York: Routledge.

Doherty, Monika 2006 Structural Propensities: Translating Nominal Word Groups from English into German. Amsterdam: Benjamins.

Enzensberger, Hans Magnus 1997a The Pastry Dough of Time: A meditation on anachronism. In: ZigZag: The Politics of Culture and Vice Versa, 33−52. New York: The New Press.

Enzensberger, Hans Magnus 1997b The Future of Luxury. In: ZigZag: The Politics of Culture and Vice Versa, 323−339. New York: The New Press.

Enzensberger, Hans Magnus 1999a Vom Blätterteig der Zeit: Eine Meditation über den Anachronismus. In: Zickzack: Aufsätze, 9−32. Frankfurt: Suhrkamp.

Enzensberger, Hans Magnus 1999b Luxus − Woher, und wohin damit? Reminiszenzen an den Überfluss. In: Zickzack: Aufsätze, 143−161. Frankfurt: Suhrkamp.

Enzensberger, Hans Magnus 2007 Im Irrgarten der Intelligenz: Ein Idiotenführer. Frankfurt: Suhrkamp.
Fabricius-Hansen, Cathrine 1999 Information packaging and translation: Aspects of translational sentence splitting (German−English/Norwegian). Studia Grammatica 47: 175−214.

Fanselow, Gisbert 2006 On pure syntax (uncontaminated by information structure). Studia Grammatica 63: 137−158.

Féry, Caroline 2007 The prosody of topicalization. In: Kerstin Schwabe and Susanne Winkler (eds.), On Information Structure, Meaning and Form: Generalizations across Languages, 69−86. (Linguistik Aktuell/Linguistics Today 100.) Amsterdam: Benjamins.

Frazier, Lyn 1988 Grammar and language processing. In: Frederick J. Newmeyer (ed.), Linguistics: The Cambridge Survey III, 97−123. Cambridge: Cambridge University Press.

Frey, Werner 2005 Pragmatic properties of certain German and English left peripheral constructions. Linguistics 43(1): 89−129.

Furbank, P. N. 1993 Diderot. London: Minerva.

Gorrell, Paul 2000 The subject-before-object preference in German clauses. In: Barbara Hemforth and Lars Konieczny (eds.), German Sentence Processing, 25−63. (Studies in Theoretical Psycholinguistics 24.) Dordrecht: Kluwer.

Jacobs, Joachim 1991/1992 Neutral stress and the position of heads. In: Informationsstruktur und Grammatik, 220−244. (Linguistische Berichte, Sonderheft 4.)

Jakobson, Roman 1960 Linguistics and poetics. In: Thomas A. Sebeok (ed.), Style in Language, 350−377. New York/London: MIT Press.

Rooth, Mats 1999 Association with focus or association with presupposition? In: Peter Bosch and Rob van der Sandt (eds.), Focus: Linguistic, Cognitive and Computational Perspectives, 232−246. Cambridge: Cambridge University Press.

Sebald, W. G. 2001 Austerlitz. München/Wien: Hanser.

Sebald, W. G. 2002 Austerlitz. London: Penguin.

Sgall, Peter 2001 Functional description, word order and focus. Theoretical Linguistics 17(1): 3−19.

Sperber, Dan and Deirdre Wilson 1986 Relevance. Oxford: Blackwell.

Twain, Mark 1960 The awful German language. In: Your Personal Mark Twain, 94−115. Berlin: Seven Seas.

Monika Doherty, Berlin (Germany)

57. Syntax and Lexicography

1. Introduction
2. Some basic lexicographic concepts
3. Lexicography and linguistics
4. Syntactic information in monolingual lexicons
5. Bilingual lexicography
6. Specialized dictionaries
7. Computational lexicography
8. Further syntactic aspects of lexicography
9. References (selected)

Abstract

The article gives an overview of the role of syntax in theoretical and practical lexicography. Generally speaking, the grammatical and syntactic information of a dictionary entry should indicate how to use the described word in a grammatically correct way. Issues to be addressed include the kind of syntactic information specified in dictionaries, its representation for the user, and the empirical basis and methodology for gathering this information. The first two points depend strongly on the type and purpose of the dictionary: a dictionary for professional use by translators or teachers is subject to different requirements concerning the delicacy and presentation of syntactic specifications than a learners’ dictionary, and still other requirements hold for electronic dictionaries suitable for natural language processing by computers.

1. Introduction

Lexicography is concerned with the theory and practice of compiling dictionaries. The general monolingual dictionary can be described as a work of lexical reference, a documentation of the lexical repertoire of a language. In lexicography, syntactic information is relevant to at least two sorts of people: lexicographers and dictionary users. The common user of a monolingual dictionary may want to look up how to use a certain word in a grammatically correct way. User-friendliness of presentation is therefore a central issue: the user should be able to grasp the syntactic information easily. Which kind of representation is the most effective depends on the needs and the language competence of the intended user group, which can be language learners at different levels of mastery, adult native speakers, or language professionals such as translators or teachers. Delicacy and completeness of syntactic specifications as well as presupposed grammatical competence may thus vary considerably. The lexicographer, on the other hand, should take syntactic analysis into account while compiling dictionary entries, not only in the obvious sense that the entry has to be supplied with certain grammatical specifications in accordance with the predefined entry structure, but as a tool for guiding the design of the entry. In general, it is a nontrivial question how to convey the syntactic insights of the lexicographer to the user. A convenient presentation will clearly be less explicit and most probably less detailed; what is needed is, as Corbin (2002: 34) puts it, “la vulgarisation dictionnairique des savoirs linguistiques” [the dictionary popularization of linguistic knowledge].

1.1. Lexicography versus grammaticography and lexicology

Lexicology and grammaticography are two fields closely related to lexicography and syntax. Grammaticography can be described as the “art and craft of writing grammars” (Mosel 2006). Grammars and dictionaries have in common that they are both metalinguistic descriptions compiled mostly for didactic and documentation purposes (Béjoint 1994: 28); cf. article 59. Grammars are traditionally concerned with the regularities of a
language, whereas dictionaries are responsible for the properties of single units. However, where exactly to draw the line between grammar and dictionary is not that clear (Béjoint 1994), a point also emphasized by corpus-oriented lexicographers like John Sinclair:

    Recent research into the features of language corpora give us reason to believe that the fundamental distinction between grammar, on the one hand, and lexis, on the other hand, is not as fundamental as it is usually held to be and since it is a distinction that is made at the outset of the formal study of language, then it colours and distorts the whole enterprise. (Sinclair 2004: 164)

Lexicology investigates the structure of the vocabulary of a language; it examines the internal semantic structure of individual words, the relationships between them, and syntagmatic regularities such as selectional preferences and syntactic projection rules. The field of lexicology thus overlaps with theoretical lexicography (cf. Gouws 2004; Wolski 2005). Lang (1983) provides a careful comparison between the dictionary as a lexicographic product and the lexicon as a model component (Modellkomponente) of the grammar. He notes, inter alia, that while in lexicology, as a subdiscipline of linguistics, the lexicon is seen as part of the grammar in a more general sense, in lexicography, the dictionary is contrasted with grammar in the narrow sense. Moreover, the lexicon aims at depth of description and explicit, model-dependent representations, whereas the dictionary tries to achieve broad coverage and implicit, user-oriented entries. The notion of lexicon understood this way is a key component for theories of the syntax-semantics interface (cf. articles 32 and 35), whose goal is to explore the syntagmatic realization of semantic arguments, among others. We will see below that explanations at this level of linguistic abstraction can be helpful to the lexicographer’s task of designing dictionary entries.

1.2. Overview, scope, and general terminology

The syntactic information provided by a dictionary entry is concerned with the syntagmatic characteristics of the entry’s headword, that is, with the question of how the word, in each of its senses, combines with other words in forming phrases and sentences. This includes information about the word’s wordclass, its syntactic valency, and the collocations the word occurs in (cf. Svensén 2009: 7). The main focus of this article is on the lexicographic treatment of syntactic valency. The terms valency pattern and complementation pattern will be used interchangeably (cf. article 1 on basic syntactic notions). A note on the notion of word sense and meaning might be in order: in lexicography, “the treatment of a word in a dictionary does not aim at specifying what the word ‘really’ means but at describing its meaning in a way that is suitable to the needs of the user of the dictionary” (Svensén 2009: 205). We will see below how linguistic and syntactic insights can support the lexicographer’s decision on where to draw the line between the different meanings of a word. The basic concepts and terminology of lexicography used in this article are introduced in section 2. The goal of section 3 is to indicate the positive role that linguistics and syntactic theory can play in lexicographic design decisions. In section 4, we take a closer look at how syntactic information is represented in monolingual dictionaries. Special
emphasis is put on the specification of complements, since complementation patterns are essential for constructing phrases and sentences. In addition to the lexicographic coding of complementation patterns (section 4.1), we will discuss methods of conveying such patterns implicitly in dictionary definitions and examples (section 4.2). The purpose of the case studies in section 4.3 is to exemplify the usefulness of taking a linguistically informed view of the presentation of complementation in dictionaries. After an overview of the specific issues related to presenting syntactic information in bilingual dictionaries (section 5), we will return to the topic of complementation in the context of so-called valency dictionaries (section 6.1), which are instances of specialized dictionaries (section 6). Dictionaries of function words, a second type of dictionary specialized on syntactic information, are briefly discussed in section 6.2. Section 7 discusses the role of syntactic information in computational lexicography, and section 8 concludes the article with some remarks on syntactic aspects of lexicography that go beyond those concerned with the syntax of natural language.

2. Some basic lexicographic concepts

2.1. Dictionary types and dictionary users

Dictionaries can be classified in a number of ways (Kühn 1989; Hausmann 1989). The discriminating properties listed by Atkins and Rundell (2008: 24−25) include language (monolingual, bilingual, multilingual), coverage (general language, specific areas of language, sublanguages, etc.), size, medium (print vs. electronic), user group (linguists, translators, literate adults, language learners, etc.), and intended use (decoding vs. encoding). As to the last distinction, decoding refers to the process of understanding the meaning of a word, whereas encoding means using a word correctly. In the context of the present article, whose focus is on syntax, the following types of dictionaries are particularly relevant: dictionaries intended for encoding, that is, monolingual learners’ dictionaries and active bilingual dictionaries, and dictionaries specialized on syntagmatic information or grammatical words. A key factor in dictionary design is the intended user. With regard to user skills and needs, there is a stark contrast between learners’ dictionaries and dictionaries for the expert. Syntagmatic information for non-experts, who are the overwhelming majority of dictionary users, must be conveyed in non-technical terms, preferably without presupposing any acquaintance with grammatical notions. The linguistic expert, on the other hand, prefers descriptions that are as detailed and explicit as possible. In addition to the learner and the expert, we may also count the computer as a potential user profiting from syntactic specifications in the dictionary. More precisely, a natural language processing system may take advantage of the lexical data specified in a dictionary, if they are available in machine-readable form. The requirements concerning precision and formalization are here rather different from those of the human user.
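To make the idea of machine-readable lexical data concrete, the following toy sketch shows how complementation patterns might be encoded for programmatic access. Everything here (the entry format, the verbs chosen, the grammatical-function labels) is a hypothetical illustration for this article, not the format of any actual electronic dictionary:

```python
# A toy machine-readable valency lexicon (hypothetical format).
# Each sense of a verb records one complementation pattern as a
# tuple of grammatical-function labels.
VALENCY_LEXICON = {
    "give": [
        {"sense": "transfer", "pattern": ("subj", "iobj", "obj")},   # She gave him a book.
        {"sense": "transfer", "pattern": ("subj", "obj", "pp_to")},  # She gave a book to him.
    ],
    "sleep": [
        {"sense": "rest", "pattern": ("subj",)},                     # The child slept.
    ],
}

def complementation_patterns(verb):
    """All complementation patterns the lexicon records for a verb."""
    return {entry["pattern"] for entry in VALENCY_LEXICON.get(verb, ())}

def licenses(verb, pattern):
    """True if the lexicon licenses the verb with the given pattern,
    as a parser or generator consulting the lexical data might check."""
    return tuple(pattern) in complementation_patterns(verb)
```

In this sketch a processing system would accept the ditransitive frame of give but reject an intransitive verb with a direct object; a dictionary for human users must convey the same distinctions through codes, definitions, and examples instead.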
Keeping the goals of a dictionary in mind is important for an adequate evaluation. As an example, Zgusta (2006: 115−116) mentions the Warlpiri dictionary project presented in Laughren and Nash (1983), where actants in meaning definitions are described abstractly with case labels indicating ergative and absolutive case marking respectively.
Zgusta accepts an objection by Wierzbicka stating that this kind of presentation is not helpful to the average dictionary user such as a teacher or a high school student. But he points out that a learners’ dictionary was not intended by the project and that an explicit and detailed description has its value for language documentation, especially with endangered languages.

2.2. Component parts of dictionaries

Dictionaries are highly structured objects (Hausmann and Wiegand 1989). The macrostructure of a dictionary determines the types of entries included in the dictionary and the arrangement of the headwords or lemmas (e.g. in alphabetical order); the microstructure characterizes the lexicographic information within a lemma and its internal organization (Wiegand 1989a; 1989b). The microstructure specifies the way in which the different senses or lexical units of the headword are arranged within the entry, and it comprises information on the form, meaning and use of the lemma. Formal information may include details about spelling, pronunciation, inflection, and also about syntax and grammar, which will be discussed in more detail in section 4.

Explaining the meanings of a word can be regarded as the central function of the general monolingual dictionary. In lexicography, meaning explications are known as definitions (Wiegand 1989c). Atkins and Rundell (2008: 407) speak of a “misnomer”, because definition stands for the unrealistic ideal of necessary and sufficient conditions. The traditional model leans on the Aristotelian notion of analytic definition by genus proximum, the immediate superordinate word, and differentia specifica, i.e., distinguishing features. Defining by synonyms is another traditional strategy widely used in dictionaries. Contextual or full sentence definitions (Hanks 1987) are a more recent device, where the definiendum is embedded in the defining sentence, typically as part of the if-clause of an if-then sentence (cf. section 4.2).

Information on the use of a headword may include textual examples, either authentic, adapted or constructed, in which the word occurs. Usage notes, by contrast, provide the user with additional helpful hints such as metaphorical interpretations of the headword.
As will be discussed in section 4.2, examples and definitions can be employed to some degree to convey syntagmatic information about the headword.

2.3. The lexicographic process

Atkins (1992: 7−8) characterizes the process of monolingual dictionary building as consisting of two main stages: analysis and synthesis; see Atkins and Rundell (2008) for a detailed exposition; different but related aspects of the lexicographic process are discussed in Wiegand (1998b: sect. 1.5) and Müller-Spitzer (2007). In the first stage, the lexicographer analyzes a word, tries to identify its senses, and records linguistic facts about the word in a systematic way. The rationale is to gather as many details as possible and to store them in a pre-dictionary database. The analysis process calls for a “high degree of linguistic knowledge and awareness” (Atkins 1992). In the synthesis stage, dictionary entries are compiled on the basis of the facts collected in the database. It is this
stage where decisions on the macro- and microstructure as well as user skills become important. The same lexical database can thus serve as a basis for constructing dictionaries of rather different appearance. For bilingual dictionaries, synthesis is preceded by a transfer stage, in which the database is partially translated into the target language. A careful treatment of syntactic information is necessary at all stages. In the analysis stage, all relevant syntactic facts about a word are recorded in a structured scheme to be stored in the database. The lexicographer has to describe the syntactic constructions in which the word occurs with sufficient detail and linguistic expertise. All constructions are to be furnished with examples, and the examples should be provided with syntactic annotations that instantiate the construction. In the entry-compilation stage, on the other hand, the lexicographer has to decide on how to present the syntactic information in a way suitable to the user and on the amount of information the user should be supplied with, e.g., concerning the set of recorded syntactic patterns. Such a two-step process may offer a solution to the aforementioned problem that syntactic information needs to be explicit for the expert, including the lexicographer, but ought to be appropriately disguised for the non-expert.
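The two-stage division of labor described above can be sketched in miniature: an explicit record in the pre-dictionary database (analysis) and a learner-friendly pattern illustration generated from it (synthesis). All field names, labels, and the proform mapping here are hypothetical, not drawn from any actual dictionary project:

```python
# Sketch of the two-stage process: an explicit database record (analysis
# stage) and a simplified, learner-oriented rendering of it (synthesis
# stage). Field names and rendering rules are hypothetical.
RECORD = {
    "lemma": "promise",
    "construction": "V NP to-infinitive",  # explicit, expert-oriented labels
    "passivizable": False,
    "example": "He promised me to be here at 6 o'clock.",
}

def render_for_learner(record):
    """Synthesis stage: disguise technical labels as a pattern illustration."""
    proforms = {"NP": "sb", "to-infinitive": "to do sth"}
    parts = []
    for label in record["construction"].split():
        if label == "V":
            parts.append(record["lemma"])  # headword replaces the V slot
        else:
            parts.append(proforms.get(label, label))
    return " ".join(parts)

print(render_for_learner(RECORD))  # promise sb to do sth
```

The same explicit record could feed several renderings of different delicacy, which is precisely the point of separating the database from the compiled entry.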

2.4. Pointers to the literature

The following list of suggestions for further reading, which is necessarily very brief and highly selective, provides some first entry points to the vast field of lexicography. Overview articles on lexicography are Kirkness (2004) and Hanks (2003), the latter from a computational linguistics perspective. A selective list of German and English textbooks comprises Herbst and Klotz (2003), Engelberg and Lemnitzer (2009), Béjoint (1994), Landau (2001), Atkins and Rundell (2008), and Svensén (2009); a useful collection of important articles is Fontenelle (2008). Major periodicals on the topic are The International Journal of Lexicography (Oxford University Press) and Lexicographica, International Annual for Lexicography (Niemeyer/de Gruyter). An influential book series is Lexicographica, Series Maior. The three-volume set on dictionaries by Hausmann et al. (1989, 1990, 1991) exhaustively describes the state of the art at that time; a fourth, supplementary volume covering more recent developments has just appeared (Gouws et al. 2013). A major conference is the EURALEX International Congress, organized biennially by the European Association for Lexicography (http://www.euralex.org/).

3. Lexicography and linguistics

Scholars such as Apresjan, Atkins, Corbin, Helbig and Zgusta, among others, have repeatedly expressed the opinion that linguistic research can be useful to practical dictionary making, especially with regard to its systematicity. In order to bridge the gap between lexicography and linguistic theory, Apresjan (1992, 2000, 2002) has proposed a number of “principles of systematic lexicography.” One of his principles calls for using “integrated linguistic descriptions, with perfectly coordinated dictionary and grammar.” The lexicographer could clearly profit from such an integrated linguistic framework at the analysis stage of the lexicographic process.
How to convert the linguistic descriptions into presentations accessible to the average dictionary user is an issue to be solved at the synthesis stage. Corbin (2002: 32) regards this as one of the problems for the transmission of linguistic theory to lexicographic practice. The second problem is the lexicographers’ access to linguistic insights, which Corbin sees as considerably hindered by the wide variety of linguistic theories on offer: L’accès des lexicographes aux savoirs linguistiques qui pourraient leur être utiles dans leur pratique n’est pas chose aisée. La linguistique n’est pas aujourd’hui une discipline unifiée dans ses fondements et dans ses méthodes, qui proposerait, dans divers secteurs, un ensemble de résultats stabilisés. [It is not an easy task for lexicographers to access the linguistic knowledge that could be useful for their practical work. Today, linguistics is not a unified discipline concerning its foundations and methods and does not provide a set of stable results in its various branches.]

Adopting a linguistic framework for lexicographic descriptions, as demanded by Apresjan, is thus a non-trivial task. The most natural choice for the lexicographer is to consult an established descriptive grammar of the language in question. However, using such a grammar for lexicographic purposes can be problematic on two grounds. On the one hand, the grammar may not show the necessary level of linguistic sophistication, since it primarily targets high school students and teachers. For example, the grammar may lack a systematic treatment of complementation and argument alternation, which is a prerequisite for a syntactically informed analysis of the corpus data. On the other hand, the grammar may presuppose too much linguistic knowledge to be of use at the synthesis stage, when the needs of language learners are to be met.

There is a certain tension between a linguistically informed analysis and the type of analysis put forward by corpus-oriented lexicographers, for whom reservations about theoretical assumptions are programmatic: “If (…) the objective is to observe and record behaviour and make generalisations based on the observations, a means of recording structures must be devised which depends as little as possible on a theory. The more superficial, the better” (Sinclair 1987b: 107). However, finding the right generalizations on the basis of surface patterns goes along with a more abstract level of analysis such as the identification of dependencies (Hunston 2004: 108). So, even if linguistic theory may provide analyses only for a limited subset of the data found in corpora, it seems advisable to take them into account as valuable generalizations which can serve as a starting point for the lexicographer.

Another of Apresjan’s principles of systematic lexicography says that “all salient lexical classes should be fully taken into account and uniformly described in a dictionary in all of their linguistically relevant properties.” He gives the example of distinguishing between factive and putative mental state verbs, i.e., between verbs such as to know, to understand, and to guess on the one hand, and verbs like to think, to believe, and to consider on the other. These verbs show a number of characteristic syntactic properties indicating their respective classes. For instance, factives, in contrast to putatives, can govern embedded questions − compare She knew how to do it with *She believed how to do it. In this way, “two well-defined and consistently organized lexico-semantic classes emerge. To make them accessible to certain rules of grammar and other sufficiently general linguistic rules we have to posit two distinct lexicographic types which
should be uniformly described throughout the dictionary” (Apresjan 2002). That is, the distinction between factives and putatives should be systematically taken into account in the organization of the respective entries in the dictionary.

Apresjan’s proposal is similar to the program sketched by Atkins, Kegl, and Levin (1988), who investigate the alternation behavior of a verb as a useful probe to identify verb senses and verb classes (see also Levin 1991, 1993). Atkins, Kegl, and Levin (1988) present a case study on verbs like bake, cook, etc. These verbs participate in the causative-inchoative (or object-to-subject) alternation and the unexpressed (or indefinite) object alternation, among others. The first alternation is exemplified by the pair John cooked the broccoli and The broccoli cooked, with broccoli serving as the direct object of the causative variant and as the subject of the inchoative variant. The second alternation is shown by the pair John cooked lunch and John cooked, where the object remains implicit in the second variant. The two alternations are associated with different senses of cook: the causative-inchoative alternation with a change-of-state interpretation, the unexpressed object alternation with a creation interpretation. The resulting distinction of senses in the dictionary entry has positive consequences in the monolingual as well as in the bilingual case. In the monolingual entry, the two intransitive uses are clearly separated; in the bilingual entry, the two senses can be associated with different translation equivalents, e.g. French (faire) cuire vs. préparer, faire, cuisiner; cf. Atkins (2002).
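The alternation-based probe of Atkins, Kegl, and Levin can be caricatured in a few lines: record which alternations a verb participates in and read off candidate sense distinctions. The mapping and labels below are illustrative simplifications, not data from the cited study:

```python
# Hypothetical sketch of using alternation behavior to separate verb
# senses, following the cook/bake case discussed above. The association
# of alternations with sense labels is an illustrative simplification.
ALTERNATION_SENSE = {
    "causative-inchoative": "change-of-state",
    "unexpressed-object": "creation",
}

VERB_ALTERNATIONS = {
    "cook": ["causative-inchoative", "unexpressed-object"],
    "bake": ["causative-inchoative", "unexpressed-object"],
}

def senses(verb):
    """Derive candidate sense distinctions from recorded alternations."""
    return [ALTERNATION_SENSE[a] for a in VERB_ALTERNATIONS.get(verb, [])]

print(senses("cook"))  # ['change-of-state', 'creation']
```

The point of such a record is that the sense inventory of an entry is motivated by observable syntactic behavior rather than by the lexicographer's intuition alone.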

4. Syntactic information in monolingual dictionaries

The general monolingual dictionary for native speakers is primarily used for decoding, i.e., for looking up the meaning of a word. Dictionaries of this type provide grammatical information only to a limited extent because the user is expected to have internalized the rules of his or her mother tongue. It is for the same reason that native speakers preferably consult onomasiological dictionaries for encoding, since finding the right word is their main concern. The macrostructure of onomasiological (or meaning-to-word) dictionaries is based on the meaning of lexical units. Dictionaries of this type include dictionaries of synonyms and thesauri, where grammatical information is traditionally confined to general wordclass labels at best.

The situation is different for learners’ dictionaries, which are the topic of pedagogical lexicography. Learners’ dictionaries are consulted for encoding to a much greater extent than dictionaries for the native speaker. Encoding requires becoming acquainted with the syntagmatic behavior of a word in sufficient detail. The presentation of this information in the dictionary must meet the learner’s abilities, where little grammatical knowledge can be assumed, in general. User-friendliness is thus an important factor in pedagogical lexicography, since “learners want to find information quickly and be able to grasp it immediately once they find it” (Rundell 1998: 330). In view of the apparent trade-off between accessibility and accuracy of grammatical descriptions, clarity is meanwhile favored over delicacy and completeness (see section 4.1). Matching the user’s needs and reference skills has therefore become a driving force in pedagogical lexicography. At the same time, there is a growing number of empirical studies on the utility of grammatical information in learners’ dictionaries.
For example, Dziemianko (2006) presents a carefully designed user study on the utility of information about verb syntax that takes
into account various methods of presentation (see also Bogaards and van der Kloot 2001, 2002).

The main focus of general dictionaries is on lexical (or content or open-class) words, that is, on nouns, verbs, adjectives, and adverbs. The lexicographic treatment of grammatical (or function or closed-class) words such as prepositions, conjunctions, pronouns, auxiliaries, and determiners is more in dispute. Dictionaries for native speakers often do not deal with the syntactic properties of function words in much detail since, as mentioned above, the user is expected to have some basic knowledge of the respective language, and mastering the use of function words is considered part of general grammatical competence. Learners’ dictionaries usually provide more elaborate descriptions of grammatical words, though without much information about syntactic constructions. Interestingly, Coffey (2006) criticizes the treatment of grammatical words in current advanced learners’ dictionaries of English as unnecessarily detailed. He argues that elaborate sense distinctions do not necessarily meet the learner’s needs and, furthermore, questions the usefulness of including very elementary facts in the entry because this kind of knowledge is trivial to the advanced learner. Irrespective of such strictly user-oriented considerations, there is also the “duty of documentation” in monolingual reference dictionaries, which typically come in several volumes.

Lang (1989) discusses the description of conjunctions as an example of function words in general; see Schaeder (1985) on prepositions and Wolski (1989a) on modal particles. Lang’s main points are in full compliance with the principles introduced in section 3: The descriptions in the entry should follow grammatical insights; syntactic constructions and their constraints should be part of the entry; and building the entry should consist of two stages, first, recording the relevant facts and, second, designing the final entry presentation.
The treatment of light verbs is a particularly interesting challenge, both for grammar and lexicography. Halfway between an auxiliary and a collocational element, a light verb has a meaning that is bound up with its complement. Hanks, Urbschat, and Gehweiler (2006) suggest a new kind of dictionary entry for such verbs which is based on extensive corpus analysis. By comparison, the proposal of Polenz (1989) puts more emphasis on an explicit description of the meaning component carried by the light verb.

Syntactic specifications in the general monolingual dictionary are often restricted to grammatical categories (wordclasses) and subcategories such as gradable adjective and transitive verb, possibly extended by information about positional restrictions, prepositional and clausal complements, or the formation of passives. The use of categories in dictionaries is not without problems. Cowie (1989b: 588) gives the example of nominal modifiers in noun compounds which are classified as adjectives; Apresjan (2002), that of numerals classified as adjectives. In order to clarify and systematize the grammatical notions used in a dictionary, it has been repeatedly suggested to provide the dictionary with a dictionary grammar (Bergenholtz 1984, 2002; Mugdan 1989; Lemmens and Wekker 1991), that is, to include a separate grammar section in the dictionary. The dictionary grammar should be part of, or at least be compatible with, a descriptive grammar of the natural language in question − in compliance with Apresjan’s principle of integrated linguistic descriptions. According to Mugdan (1989: 743), there is no reason not to employ modern grammatical frameworks for this purpose, since traditional grammar is neither linguistically adequate nor necessarily familiar to the average dictionary user. Matching the abilities of the user is again an essential factor, as acknowledged by all of
the above-mentioned authors; see e.g. Lemmens and Wekker (1991: 5−10) for a proposal of what a user-friendly dictionary grammar could look like. A number of general recommendations regarding the content and structure of a dictionary grammar in learners’ dictionaries can be found in Tarp (2008: 246−247). Among them are the recommendations that dictionary grammars “must be structured like production grammar books” and that they “must be integrated into the rest of the dictionary via a system of references.”

Complementation may be regarded as the syntactic phenomenon most relevant to lexicography, especially if the focus is on encoding. Complementation patterns constitute the basic schemes for constructing phrases and sentences. The following sections describe how complementation information is represented in dictionaries. Most of the dictionaries discussed are learners’ dictionaries, where usability is a key issue. Theoretically more advanced descriptions of complementation are provided by the valency dictionaries to be discussed in section 6.1. These dictionaries have limited coverage (numbering at most in the hundreds) and assume considerable linguistic expertise on the part of the user.

4.1. Complementation codes and patterns

It depends to a certain degree on the lexicographic tradition of the respective language how complementation information is presented in a dictionary (cf. section 4.4). A good part of this section will be confined to English pedagogical lexicography, for which the coding of complementation information has been an object of thorough investigation and continuous revision during the last few decades (e.g. Lemmens and Wekker 1986; Aarts 1991; McCorduck 1993; Herbst 1996; Rundell 1998; Cowie 1999; Dziemianko 2006). Herbst (1996: 329−330) identifies the following types of coding systems (see also Herbst and Klotz 2003: 78−82; Dziemianko 2006: sect. 1.3.1): opaque coding systems, which are neither transparent nor mnemotechnically organized; mnemotechnically organized systems, whose codes are not transparent in that their meaning cannot be deduced from their form; transparent coding systems, whose codes are compositionally built from basic grammatical labels; and systems of pattern illustrations, which employ neither codes nor labels. This ordering reflects the evolution that coding has undergone in subsequent revisions of English learners’ dictionaries. The dictionary entry for the verb promise in (1) shows the use of the opaque, non-transparent coding system in the third edition of the Oxford Advanced Learner’s Dictionary (OALD3) from the 1970s.

(1) promise vt, vi 1 [VP6A, 7A, 9, 11, 12A, 13A, 17] make a promise(1) to: They ~d an immediate reply. He ~d (me) to be here/that he would be here at 6 o’clock. (…) (OALD3 1974)

The coding system of OALD3 was already an improvement over that of the second edition in that similar complementation types bear adjacent code numbers (Cowie 1998: 264). Mnemotechnically organized coding systems are a first step towards transparent codes. Transparency is still rather limited for the mnemonic system employed in the first edition of the Longman Dictionary of Contemporary English (LDOCE1); see entry (2), where, e.g., T3 stands for “transitive verb with a to-infinitive”. The system of OALD4, exemplified by entry (3), is easier to access for the user.


(2) promise 1 [T1, 3, 5a, b; V3; D1, 5a; I0̸] to make a promise to do or give (something) or that (something) will be done: Do you promise secrecy? (…) (LDOCE1 1978)

(3) promise 1 [I, Tn, Tf, Tt, Dn·n, Dn·pr, Dn·f] ~ sth (to sb) make a promise (to sb); assure (sb) that one will give or do or not do sth: I can’t promise, but I’ll do my best. He has promised a thorough investigation into the affair. (…) (OALD4 1989)

The code Tf stands for “Transitive verb + finite that clause” and Dn·pr for “Double-transitive verb + noun + prepositional phrase”. Further code examples are Tn·t for “Transitive verb + noun + to-infinitive” (e.g., want sb to do sth) and Cn·t for “Complex-transitive verb + noun + to-infinitive” (e.g. persuade sb to do sth), where a complex-transitive verb is defined as a verb “followed by a direct object and a complement, an element which provides more information about the direct object” (Cowie 1989a: 1555; the terminology is that of Quirk et al. 1985).

Fully transparent codes go a step further in that they employ only standard grammatical labels and a small set of function words for code construction. For instance, the OALD4 code Tn·t is replaced by the transparent code “VN to inf” in OALD7. The OALD7 entry for promise is shown in (4).

(4) promise verb 1 ~ sth (to sb) | ~ sb sth to tell sb that you will definitely do or not do sth, or that sth will definitely happen: [V to inf] The college principal promised to look into the matter. (…) [V] They arrived at 7.30 as they had promised. [VN] The government has promised a full investigation into the disaster. (…) [V (that)] The brochure promised (that) the local food would be superb. [VN (that)] You promised me (that) you’d be home early tonight. [VN, VNN] He promised the money to his grandchildren. He promised his grandchildren the money. (…) (OALD7 2005)

The entry also illustrates an improvement of presentation concerning the distribution of complementation information within the entry’s microstructure: Complementation patterns are interspersed with examples, that is, there is a direct alignment of patterns and examples instead of a full list of codes at the beginning of the entry, as was common in earlier editions. A different strategy has been pursued by the COBUILD dictionaries, where transparent complementation codes appear in an extra column. However, the extra-column convention has been abandoned in the latest, sixth edition of that dictionary in favor of in-line labelling.

As mentioned at the beginning of section 4, the presentation of syntax and grammar plays only a minor role in dictionaries for native speakers. The entry for promise shown in (5) is taken from the second edition of the Oxford Dictionary of English (ODE2).

(5) promise verb 1 [reporting verb] assure someone that one will definitely do something or that something will happen: [with infinitive] he promised to forward my mail | [with clause] she made him promise that he wouldn’t do it again | (…) [with two obj] he promised her the job. (…) (ODE2 2005)

Although this dictionary is “exceptional in providing a more detailed level of grammatical information” compared to other native speaker dictionaries (Atkins and Rundell
2008: 400), the syntagmatic specifications are clearly less detailed and explicit than in the learners’ dictionaries seen before.

Pattern illustrations characterize complementation patterns without recourse to grammatical labels by spelling them out in terms of the headword and suitable proforms such as someone, something, etc. The LDOCE4 entry shown in (6) exemplifies this kind of presentation. Note that (4) already comprises two pattern illustrations at the beginning of the entry, although without relating them to the pattern codes and examples in the rest of the entry.

(6) promise v 1 [I, T] to tell someone that you will definitely do or provide something or that something will happen: Last night the headmaster promised a full investigation. promise to do sth She’s promised to do all she can to help. promise (that) Hurry up − we promised we wouldn’t be late. promise sb (that) You promised me the car would be ready on Monday. (…) promise sth to sb I’ve promised that book to Ian, I’m afraid. promise sb sth The company promised us a bonus this year. (…) He reappeared two hours later, as promised. (…) (LDOCE4 2003)

French and German pedagogical lexicography have a tradition of pattern illustrations, too (see also section 4.4). In French lexicography, notably Jean Dubois has systematically employed syntagmatic patterns for specifying complementation and as a basis for sense distinctions in the dictionary. The contrast between affecter qqch à qqn (‘assign’) and affecter qqn (‘affect sb’) gives a simple illustration of this idea (Dubois 1981: 244). In German pedagogical lexicography, the notions Strukturformel, Konstruktionsformel, syntaktisches Gebrauchsmuster, and Satzbauplan are prevalent, which owe a good part to previous research on valency dictionaries (section 6.1); cf., e.g., Bergenholtz and Mogensen (1998); Gouws (1998); Schafroth (2002); Dentschewa (2006).

Pattern illustrations for verbs in general monolingual dictionaries are mostly given in infinitival form, i.e., without a subject. The PONS Großwörterbuch Deutsch als Fremdsprache (PGwDaF) is one of the few exceptions in that its pattern illustrations take finite form with a filled subject position (e.g., jmd. wendet sich an jmdn. instead of sich an jmdn. wenden). Finite clause patterns illustrate the constructional properties of the verb in a more transparent way than infinitival ones, though at the price of having to decide on a finite verb form.

Pattern illustrations have “the obvious advantage of not requiring the user to know any grammatical terminology at all” (Herbst 1996: 329). On the negative side, they can “lead to a certain amount of confusion with respect to dynamic and stative verbs and human or inanimate objects.” To what extent, for instance, is He wanted to be left alone covered by the pattern want to do sth?
In addition, pattern illustrations can blur distinctions respected by more technical codes, though such a simplification may be acceptable from the viewpoint of maximal transparency: [W]hat most current coding systems have in common is that they assume very little grammatical knowledge on the part of users, and they aim to satisfy users’ needs in this department without requiring them to consult explanatory tables and charts. There is a trade-off here, in which a certain delicacy of description is sacrificed to the need for maximum clarity. (Rundell 1998: 329)


For example, persuade sb to do sth and want sb to do sth show the same surface pattern structure but behave differently with respect to passivization. In the OALD4 coding system this difference is at least partially captured by the respective codes Cn·t and Tn·t.

The transparent complementation codes used in entry (4) are built of formal or phrase-type categories such as verb, nominal phrase, infinitival clause, etc. Functional categories such as (direct or indirect) object or adjunct, i.e., categories that characterize constituents by their sentence function, have also been employed in complementation codes. For example, COBUILD1 uses V+O instead of OALD7’s VN, where O means “object”. Functional labels have been discarded in later editions of the COBUILD dictionaries. Herbst (1996: 331) welcomes the elimination of functional categories “because of the many analytical problems they involve.” Similarly, Colleman (2005) argues for preferring formal over functional labels because the former are more theory-neutral than the latter. As an example he mentions English double object constructions as in She gave the boy the candy, where different syntactic theories do not agree on whether the constituent the boy should be assigned the label direct object or indirect object. Another of his arguments is that functional labels are often too unspecific: For some transitive verbs, direct objects can be realized by nominal and clausal complements, while others take nominal complements only, and clausal complements can be further restricted to finite or infinitival ones. Nevertheless, Colleman concedes that functional descriptions are necessary in some cases. For instance, formal labels do not allow one to distinguish between He felt the cold air and He felt such a fool. Most current English learners’ dictionaries code the latter case by the functional label linking verb, that is, the respective lexical unit is categorized as a copular verb.
In contrast, Colleman suggests a functional label subject complement for the complement of a copular verb. Aarts (1991: 577) holds a more rigid position in that he proposes codes which “should contain category symbols only, not symbols denoting sentence functions” and which “should represent surface syntactic structures”, while “underlying differences between structures can be ignored.” In this system, both of the above example sentences for feel would be subsumed under the code V+n, and the only distinction presented to the user is the label no passive attached to the copular verb.

The basic categories transitive and intransitive of traditional grammar are dispensable in light of elaborate coding systems. The usefulness of this distinction has in any case been called into question by many authors. For example, Bergenholtz (1984: 34) notes that the label transitive verb is used rather inconsistently across German dictionaries. Herbst (1996: 331) finds it a wise decision to dispense with the labels transitive and intransitive because many verbs would have to be labelled both intransitive and transitive, and transitive covers divalent as well as trivalent verbs. Remarkably, the labels transitive and intransitive are still in use in LDOCE4. In fact, they are often the only explicit grammatical descriptions in the entry − witness (6), which even contains conflated labels of the sort criticized by Cowie (1989b: 588). The more fine-grained subdivision into transitive, ditransitive, and complex-transitive verbs introduced in Quirk et al. (1985) has found its way into OALD4; cf. (3). The verb label ergative used in the COBUILD dictionaries is also worth mentioning in this context: verbs categorized that way participate in the object-to-subject alternation mentioned in section 3, that is, they have systematically related transitive and intransitive usages.
A further problem is that verbs with prepositional complement traditionally count as intransitive, on a par with verbs lacking any complement. French lexicography has resolved this issue by a subdivision of transitivity into direct and indirect transitivity, the latter covering prepositional verbs (Dubois 1983: 88).
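Since transparent codes are compositional over a small inventory of labels, their expansion into pattern illustrations of the kind discussed above is largely mechanical. The following sketch assumes a toy label inventory; the codes, proforms, and expansion rules are simplified inventions, not the actual conventions of any dictionary:

```python
# Transparent codes are built compositionally from a small label
# inventory. This sketch expands such codes into rough pattern
# illustrations; the inventory and proforms are simplified inventions.
LABELS = {
    "V": "{lemma}",
    "N": "sb/sth",
    "to inf": "to do sth",
    "(that)": "(that) ...",
}

def illustrate(code, lemma):
    """Expand a transparent code like 'VN to inf' into a pattern illustration."""
    parts, rest = [], code
    while rest:
        # try the longest labels first so "to inf" wins over "N"
        for label in sorted(LABELS, key=len, reverse=True):
            if rest.startswith(label):
                parts.append(LABELS[label].format(lemma=lemma))
                rest = rest[len(label):].lstrip()
                break
        else:
            raise ValueError("unknown label in code: " + rest)
    return " ".join(parts)

print(illustrate("VN to inf", "promise"))  # promise sb/sth to do sth
```

A real coding system would of course need a far richer inventory, but the sketch makes the trade-off discussed above concrete: the illustration is friendlier, while distinctions carried by the codes (e.g. passivizability) are lost in the expansion.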


4.2. Syntagmatic information in definitions and examples

Besides complementation codes or patterns, there is an increasing tendency in pedagogical dictionaries to employ definitions and examples as a means for conveying syntagmatic information to the user (Rundell 1998). For “even when codes are ignored, verbal illustrations are what dictionary users should always be able to safely fall back on” (Dziemianko 2006: 19).

In traditional-style definitions, lexicographers have developed various conventions to ensure that the subcategorization properties of the definiendum are properly reflected by those of the definiens. For instance, the definiens of an intransitive verb should also be intransitive, which means that the definiens is either an intransitive verb or a transitive verb complemented with an object (Landau 2001: 174). Ilson (1985: 164−167) shows how the incompleteness of transitive verbs, or of other relational definienda like prepositions, can be reflected by similarly incomplete definientia. One of his examples is assist, defined as to give usu. supplementary support or aid to in Webster’s Ninth New Collegiate Dictionary. Ilson furthermore indicates that verbs with clausal complements can be treated the same way if the definiens has compatible subcategorization properties: The definition of transitive hope, taken from the same dictionary, is to expect with desire, and both verbs can take a that-clause. However, as noted by Dziemianko (2006: 29), this example is problematic since the dictionary user might mistakenly conclude that hope, like expect, can take a nominal object too.

Current dictionaries are less concerned with the requirement that the definiens has the same subcategorization properties as the definiendum and is thus substitutable for the latter.
In the following example, taken from the second edition of the Macmillan English Dictionary for Advanced Learners (MEDAL2), the definition does not provide many hints about the correct syntagmatic usage of hope:

(7) hope verb [I/T] to want and expect something to happen or be true: +(that) I just hope she’s pleasant to him on his birthday. +for It wouldn’t be sensible to hope for immediate success. (…) (MEDAL2 2007)

The user has to resort to the complementation patterns +(that) and +for, and the examples given there, in order to find out how the object of hope can be realized syntagmatically. Moreover, the entry says nothing about whether nominal complements are admissible, because the labels I and T could be due to the prepositional and the clausal complement, respectively. In fact, as the entry for expect given in (8) reveals, it is only the absence of an appropriate example in (7) that allows one to conclude that hope cannot take a nominal object:

(8) expect verb [T] to think that something will happen: We’re expecting good weather at the weekend. (…) +(that) Investors expect that the rate of inflation will rise. (…) (MEDAL2 2007)
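The underdetermination just described — a T label licensed either by a nominal or by a clausal complement — can be made concrete with a small Python sketch. The data model (the Sense class, its field names, and the pattern strings) is invented here for illustration; it is not the encoding used by MEDAL2 or any other dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class Sense:
    """One sense of a verb entry in a learner's dictionary (hypothetical model)."""
    labels: set                  # transitivity labels, e.g. {"I", "T"}
    patterns: list               # explicit complementation codes, e.g. ["+(that)", "+for"]
    examples: list = field(default_factory=list)

# The MEDAL2-style entry for 'hope' discussed in (7), transcribed into the model:
hope = Sense(labels={"I", "T"},
             patterns=["+(that)", "+for"],
             examples=["I just hope she's pleasant to him on his birthday.",
                       "It wouldn't be sensible to hope for immediate success."])

def licenses_nominal_object(sense: Sense) -> str:
    """A 'T' label alone does not settle whether a plain NP object is possible:
    the label may be licensed by a clausal complement instead of a nominal one."""
    if "T" not in sense.labels:
        return "no"
    has_clausal_pattern = any(p.startswith("+(that)") for p in sense.patterns)
    return "underdetermined" if has_clausal_pattern else "yes"

print(licenses_nominal_object(hope))  # prints 'underdetermined'
```

Only the examples disambiguate: for expect, the corpus example We’re expecting good weather shows that a nominal object is possible; for hope, the absence of such an example is the sole negative evidence.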

Full sentence definitions are an influential lexicographic convention developed during the COBUILD project (Hanks 1987). They are typically formulated as conditionals with the definiendum embedded in the premise. The left-hand side of a full sentence definition thus reflects the syntactic patterns in which the definiendum occurs:

(9) If you hope that something is true, or you hope for something to happen, you want it to be true or to happen and usually believe that it is possible or likely. (COBUILD1 1987)

Dziemianko (2006: 38) concludes that “full sentence definitions do not require much effort of the dictionary user looking for information on verb syntax in entries.” However, not all full sentence definitions are equally helpful in this respect. As noted by Tarp (2008: 235), the definition of the verb anticipate in COBUILD4 begins with If you anticipate an event, you realize in advance that it may happen, whose left-hand side gives no information about clausal complements at all. Rundell (2006) takes a balanced position towards full sentence definitions. He recommends using them, for instance, if a transitive verb occurs mostly in the passive or if an adjective has a narrow range of complements. On the other hand, he prefers a more conventional, economical style of definition in more straightforward cases, because full sentence definitions can get rather lengthy.

In learners’ dictionaries, any statement about the syntactic behavior of a word should be backed up by an example (Atkins and Rundell 2008: 454). The advent of corpus-based lexicography in the 1980s came with a strong commitment to authentic examples. Meanwhile, there seems to be consensus that “the primary function of the corpus is as a source of evidence rather than as a source of examples” (Atkins and Rundell 2008: 458). Hence, “in a dictionary designed for learners, there is no incompatibility in supporting a corpus-driven description with examples that reflect the recurrent patternings in the corpus within an accessible and intelligible format” (Atkins and Rundell 2008: 458). Similarly, Dziemianko (2006: 28): “The consciously pedagogical orientation of examples, evident also in the earliest learners’ dictionaries, thus remains a distinctive feature of the corpus-based learners’ dictionaries published today.”

4.3. Case studies on complementation

It has been argued in section 3 that linguistic analysis can be useful for guiding the design decisions of the lexicographer. In the following, several example entries are discussed from this perspective. The entries are taken from monolingual (learners’) dictionaries of French, English and German. The examples considered all have to do with clausal complementation in one way or another.

Corbin (2002: 29−30) gives an example from French where an adequate syntactic analysis on the side of the lexicographer matters for the quality of the lexical entry. He discusses the use of the French verb décider followed by de. He points out that an adequate dictionary entry for décider should list décider de + infinitive as a verbe transitif direct that takes a clausal complement, on a par with décider que, whereas décider de + noun phrase should count as a verbe transitif indirect. The syntactic insight here is that de acts as a complementizer in the first case and as a preposition in the second. Corbin’s requirement is clearly satisfied by the entry shown in (10), which is taken from the Dictionnaire de Français (DDF).



(10) décider v.t. 1. (…) Décider qqch, de (+ inf.), que (+ ind. ou cond.), décider si, qui, quand, etc. (+ ind.), se prononcer pour qqch, déterminer ce qu’on doit faire: Le gouvernement a décidé l’indemnisation des sinistrés (…). J’ai décidé de tenter ma chance. (SYN. résoudre). (…) v.t. ind. (…) 1. Décider de qqch, se prononcer sur qqch, prendre parti à ce sujet. (…) (DDF 1987)

Corbin contrasts this linguistically coherent entry structure with the treatment of décider in the Nouveau Petit Robert 2000, shown in (11), where all uses with de are listed in the verbe transitif indirect section.

(11) DÉCIDER I. V. tr. dir. (…) 2. (…) Arrêter, déterminer (ce qu’on doit faire); prendre la décision de. (…) DÉCIDER QUE. Il décide qu’il n’ira pas travailler. (…) II. V. tr. ind. DÉCIDER DE QQCH. (…) Disposer en maître par son action ou son jugement. (…) DÉCIDER DE (et l’inf.): prendre la résolution, la détermination de. (…) Décidons de nous retrouver à huit heures. (…) (NPR 2000)

He regards (11) as “défendable du point de vue de la commodité de consultation, mais linguistiquement inconséquent et peut-être sous-informé” [justifiable with regard to lookup convenience but linguistically inconsistent and perhaps under-informed]. This lack of linguistic precision or awareness leads to an unwelcome definitional redundancy: décider de + infinitive is circumscribed as prendre la résolution, la détermination de, which essentially repeats the explication prendre la décision de of the verbe transitif direct section.

Our next example is concerned with control verbs in English (cf. articles 14 and 38). Prototypical subject and object control verbs are promise and persuade, respectively. The argument missing in the infinitival complement of a subject control verb coincides with the argument expressed by the subject of the matrix clause, and an analogous condition holds for object control verbs and the object of the matrix clause. The relevant sections of the LDOCE4 entries for promise and persuade are shown in (6) and (12), respectively.

(12) persuade v [T] 1 to make someone decide to do something, especially by giving them reasons why they should do it, or asking them many times to do it: persuade sb to do sth I finally managed to persuade her to go out for a drink with me. (…) (LDOCE4 2003)

There is, of course, no mention of control in the entries, nor is the non-linguist expected to be familiar with that notion. The question is rather whether the correct usage of the control construction is conveyed to the user. The infinitival clause pattern is present in both entries. The information about subject and object control, on the other hand, is only accessible from the definitions, that is, from to tell someone that you will do something and to make someone decide to do something, respectively. But understanding the latter definition presupposes grasping the meaning and use of decide, which is a control verb itself.
The LDOCE4 definition to make a choice or judgment about something given for decide does not resolve the issue. The gloss to make a choice about what you are going to do taken from MEDAL2 is more helpful in this respect, since the user is told that the agent of the matrix clause is identical to the agent of the complement. A more explicit (though not systematic) treatment of these matters can be found in the Dictionnaire du Français Contemporain (DFC), the predecessor of the DDF, where the infinitival complement of décider de is characterized as “un infinitif ayant même sujet logique que décider.” However, it seems questionable whether the notion of a logical subject is of much help to the average dictionary user.

The final example is the treatment of the prepositional verb denken an in two German learners’ dictionaries, Langenscheidts Großwörterbuch Deutsch als Fremdsprache (LGwDaF) and the de Gruyter Wörterbuch Deutsch als Fremdsprache (dGWDaF). The relevant sections of the entries for denken in these dictionaries are shown in (13) and (14), respectively. The exposition adheres to the textual condensations (abbreviations, place holders, etc.; cf. Wolski 1989b) found in the book editions of the dictionaries, although such techniques will become more and more obsolete in view of CD-ROM editions and online access.

(13) denken Vi (…) 10 an j-n/etw. d. sich an j-n/etw. erinnern, j-n/etw. nicht vergessen: Wie nett, dass Sie an meinen Geburtstag gedacht haben; Denkst du noch manchmal daran, wie schön es damals war?; Denk bitte daran, den Hund zu füttern! 11 an j-n/sich/etw. d. sein Interesse, seine Gedanken auf j-n/sich/etw. (bes. auf j-s Bedürfnisse) konzentrieren: (…) Du sollst mehr an deine Familie d.! 12 (daran) d. + zu + Infinitiv die Absicht haben, etw. zu tun, etw. tun wollen (…): Sie denkt daran, ihr Geschäft zu verkaufen; (…) (LGwDaF 1998)

(14) denken 5. /jmd./ an etw., jmdn. w. 5.1. ‘seine Gedanken auf etw., jmdn. richten’: an die Feier, den Freund w; er denkt immer an seine Familie (‘ist immer auf ihr Wohl bedacht’); (…) 5.2. ‘jmdn., etw. im Gedächtnis behalten, nicht vergessen’: wir werden an dich w!; wir müssen daran w, den Brief einzuwerfen; hast du daran gedacht, dass wir heute ins Theater wollen? (dGWDaF 2000)

There are three lexical units of denken an in LGwDaF, whereas dGWDaF lists only two. As to complementation, all of these lexical units allow a clausal complement with the preposition an replaced by the obligatory correlate daran. However, neither of the two dictionaries conveys this information in a systematic and transparent way. In dGWDaF, it is part of the editing strategy not to include complementation alternatives in patterns but to indicate them in examples. The dictionary user might thus mistakenly conclude that the first sense in (14) does not license clausal complements. The situation in LGwDaF is no better in this respect. Only the third sense in (13) encodes a clausal complement plus correlate by a pattern, and even this specification has to be qualified since it neglects the possibility of prepositional complements. In fact, the clause in the example can easily be nominalized: Sie denkt an den/einen Verkauf ihres Geschäfts. A linguistically more sensible analysis of the usages of denken an could clearly improve the quality of the entries in both dictionaries.

It goes without saying that there are many more complementation issues in the dictionary worth analyzing from a theoretical perspective. A good example is the interaction between derivation and complementation in general, and between complementation of deverbal nouns and verb complementation in particular.
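The complementation facts that both entries leave implicit — each sense of denken an allows either a prepositional phrase with an plus accusative or a clausal complement with the obligatory correlate daran — could be encoded explicitly. The following Python sketch is purely illustrative; the representation and all field names are invented, not taken from either dictionary.

```python
# Hypothetical explicit encoding of the realization options for one sense of
# German 'denken an'. Each realization records whether a correlate is required.
DENKEN_AN_REALIZATIONS = [
    {"form": "PP[an + acc]",  "correlate": None},     # Sie denkt an den Verkauf ihres Geschäfts.
    {"form": "dass-clause",   "correlate": "daran"},  # Sie denkt daran, dass ...
    {"form": "zu-infinitive", "correlate": "daran"},  # Sie denkt daran, ihr Geschäft zu verkaufen.
    {"form": "w-question",    "correlate": "daran"},  # Denkst du daran, wie ...?
]

def correlate_for(form):
    """Return the obligatory correlate for a realization form (None for the plain PP)."""
    for r in DENKEN_AN_REALIZATIONS:
        if r["form"] == form:
            return r["correlate"]
    raise ValueError(f"unlicensed realization: {form}")

print(correlate_for("PP[an + acc]"), correlate_for("zu-infinitive"))  # prints 'None daran'
```

A representation along these lines makes the point of the criticism above concrete: once the realization list is explicit, neither the clausal options nor the prepositional one can be accidentally omitted from a sense.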



4.4. Further notes on lexicographic traditions

The following notes are limited to French, English, and German lexicography; see Hausmann et al. (1990) and Gouws et al. (2013) for more information about developments in the lexicography of these and other languages.

Modern French lexicography has a strong tradition of following linguistic principles (Schafroth and Zöfgen 1998). Cowie (1989b: 590) attributes this fact to the direct involvement of linguists in the design and compilation of monolingual French dictionaries. A prominent example is the DFC edited by Jean Dubois and published in the mid-sixties. In the DFC, syntax plays a major role both for pedagogical and linguistic reasons (Cowie 1989b: 590). Complementation is systematically described by syntagmatic patterns, which also serve as a basis for sense distinctions. The DFC thus reflects the concept of distributional analysis in the tradition of Zellig Harris (cf. Rey 1990: 1834). The Grand dictionnaire encyclopédique Larousse, edited by Dubois in the early 1980s, benefited from a collaboration with Maurice Gross (Corbin 2002: 15), whose lexicon-grammar approach (Gross 1994) is based on Harris’ structuralist ideas as well. Theoretical considerations have also influenced the making of the Robert méthodique in the early 1980s, which puts into practice the functionalist and lexicalist positions of Josette Rey-Debove (Rey 1990: 1834). The influence of linguistics on French lexicography has meanwhile declined (Corbin 2002), and French pedagogical lexicography is even said to have failed in view of the present publishing situation (Schafroth and Zöfgen 1998: 16; see also Corbin 2008; Leroyer, Binon, and Verlinde 2009).

The systematic use of verb patterns in English pedagogical lexicography goes back to the work of H. E. Palmer and A. S. Hornby in the 1930s (Cowie 1989b, 1998, 1999). Both introduced an extensive set of alphanumerical codes for specifying complementation patterns.
Starting in the 1980s, there has been a tendency towards more transparent, user-friendly descriptions of syntagmatic information, even in the face of a slight reduction in accuracy (cf. section 4.1). At about the same time, the corpus revolution set in, most notably represented by COBUILD1. It seems fair to say that, at present, British lexicography is leading in the production of high-quality learners’ dictionaries that are both user-friendly and rich in syntagmatic information. The corpus orientation is exemplary, and the innovation cycles have been impressive during the last two decades. The future will show whether the type of presentation realized in LDOCE4 and MEDAL2, with its strong emphasis on pattern illustrations and its very limited set of explicit syntactic indicators, is the last word on the subject. After all, the “description of syntactic behaviour is far from complete, and better ways of presenting that description can still be discovered” (Rundell 2007b: 17−18).

The development of post-war German lexicography until 1990 is thoroughly described in Wiegand (1990). While German lexicography, compared to English, has a rich tradition of producing valency dictionaries (cf. section 6.1), comprehensive learners’ dictionaries have not been available until fairly recently. The publication of LGwDaF in 1993 can be seen as the advent of German pedagogical dictionaries (Schafroth 2002: 57) − not counting dictionaries of basic vocabulary. A second learners’ dictionary, the dGWDaF, appeared in 2000, and in the meantime, other dictionary publishers have taken up the challenge. Most of the German learners’ dictionaries present syntagmatic information by pattern illustrations and examples; witness (13) and (14). While LGwDaF uses the traditional categories transitive and intransitive, dGWDaF abandons this distinction − a decision welcomed by Schafroth (2002: 62). In PGwDaF, by comparison, transitivity and intransitivity are indicated by mit OBJ and ohne OBJ, which is disputable because the user still has to keep in mind that mit OBJ covers only accusative objects (Dentschewa 2006: 125). A detailed metalexicographic analysis of LGwDaF and dGWDaF can be found in Wiegand (1998a) and Wiegand (2002), respectively, including an in-depth investigation of grammatical and syntactic information; see also Wiegand (2003, 2005) for a similarly thorough analysis of the latest edition of the German reference dictionary Duden − Das große Wörterbuch der deutschen Sprache in zehn Bänden (GWDS). Bergenholtz (2002: 43) has expressed some reservations about the flourishing field of German metalexicography that shows up in these investigations, given its low impact on actual dictionary making. And Tarp (2008: 241) states that “[u]ntil now no German learner’s dictionary has found a satisfactory solution to the difficult question of how to treat syntactic data lexicographically.”

5. Bilingual lexicography

The topic of syntactic information in bilingual lexicography will be covered only briefly (cf. Herbst 1985; Kromann, Riiber, and Rosbach 1991a, 1991b; Tarp 2005, 2008; Bassola 2006; and Atkins and Rundell 2008: chap. 11 and 12 for more information). In a bilingual dictionary, each lexical unit of the source language, i.e. a headword in one of its senses, is associated with a translation equivalent, i.e. a word of the target language in one of its senses. A unidirectional or monofunctional bilingual dictionary is one with fixed source and target language; a bidirectional or bifunctional dictionary comprises two unidirectional parts with reverse directions. Further distinctions can be drawn: A unidirectional dictionary is called an active or encoding dictionary if its purpose is to support the encoding of expressions in a foreign language, that is, if the dictionary’s source language is the language of the user. A passive or decoding dictionary, by contrast, presumes that the dictionary’s target language is the user’s language. This distinction affects the user’s needs concerning the type and distribution of grammatical information, since this sort of information is mainly needed on the side of the foreign language. It follows that “[i]n active bilingual dictionaries it is first and foremost the equivalents that need to be supplied with grammatical information; in passive dictionaries, it is the lemmata” (Kromann, Riiber, and Rosbach 1991b: 2723). Commercial bilingual dictionaries often aim at two language groups and therefore have to serve encoding as well as decoding. This means that the grammatical information in an entry is to a large part redundant for speakers of either one of the two languages (Atkins and Rundell 2008: 43).
The treatment of grammatical information in bilingual dictionaries is complicated by various factors. Karl (1991) discusses the problem of using grammatical and lexical categories in bilingual dictionaries when the two languages involved have different category systems. Moreover, any account of syntagmatic information in bilingual dictionaries has to cope with all sorts of cross-language divergences known from contrastive grammar and valency studies. A concise overview of divergence classes relevant to lexicography is given in Heid (1997: chap. 6). Examples of divergence classes are syntactic divergence − enter the house (En.) vs. entrar en la casa (Sp.), thematic divergence − jmdn. an den Termin erinnern (Ge.) vs. rappeler la date à qqn (Fr.), and head switching − gern tun (Ge.) vs. like to do (En.).



The presentation of syntactic information in bilingual dictionaries is generally less sophisticated than in monolingual learners’ dictionaries (Salerno 1999; Klotz 2001). For example, the transitive/intransitive distinction is still widely in use as an organizational principle of the entry structure of verbs. In addition to the problems mentioned at the end of section 4.1, this distinction can lead to a considerable duplication of information in the bilingual entry (Herbst and Klotz 2003: 180−182; Atkins and Rundell 2008: 496). For if the transitive and the intransitive uses of a verb are listed separately, irrespective of their semantic relatedness, their translation equivalents have to be listed separately as well, even if they are in effect identical. Typical examples of this phenomenon are verbs like cook and eat that participate in the unexpressed object alternation. As to pattern illustrations in bilingual dictionaries, the finite form with subject appears to be more appropriate than the infinitival form, because finite patterns allow one to describe divergences that involve the subject as, e.g., in qqn manque qqch (Fr.) vs. etw. misslingt jmdm. (Ge.).

Kromann, Riiber, and Rosbach (1991a: 2774) propose a number of general guidelines for specifying grammatical constructions in the bilingual dictionary. First, the user’s needs and competence have to be taken into account: “The less foreign-language competence the user has, the greater the need for the specifications of constructions in the bilingual dictionary.” Second, the divergences determined by the language pair call for special attention: “[T]he divergent constructions in the respective languages must be given priority in the bilingual dictionary for the language pair in question. Grammatical and idiosyncratic constructions which are not predictable in translation between the language pair in question must be included in the bilingual dictionary concerned” (Kromann, Riiber, and Rosbach 1991a: 2772).
Third, the intended function of the dictionary (active vs. passive, mono- vs. bidirectional) must be respected. Fourth, a dictionary grammar should be included. As to the third point, it should be added that the active/passive distinction has meanwhile been criticized as insufficient. According to Tarp (2005, 2008), the user’s communicative and cognitive situation must be described in much more detail in order to come up with an adequate typology of dictionary functions.

6. Specialized dictionaries

There are essentially two types of dictionaries specialized in syntactic information: dictionaries of function words (section 6.2) and dictionaries of syntagmatic constructions for lexical words. Zöfgen (1989: 1002) subdivides dictionaries of syntagmatic constructions (Konstruktionswörterbücher) into traditional, distributional, and valency-based approaches. Traditional approaches describe morphosyntactic properties of dependent (governed) elements; they are usually limited to case marking and prepositional objects. The concept of valency, by contrast, emphasizes the ability of a lexical unit to impose requirements on its surrounding constituents, including the subject and clausal or participle complements. As long as the focus is on syntactic properties, valency-based approaches resemble distributional accounts to some extent (Zöfgen 1989: 1002). This affinity dissolves if the goal is to explain valency on semantic grounds, because, in this case, the semantic properties of the valency-bearing unit become of central importance.



6.1. Valency dictionaries

The following discussion will mainly be confined to monolingual German and English dictionaries. Schumacher (2006a) provides a good overview of valency dictionaries for German; Busse (2006) covers other languages with a focus on Romance languages; contrastive valency dictionaries are reviewed in Schumacher (2006b); and valency dictionaries available on the internet are the topic of Heid (2006).

The first valency dictionary of German verbs was the Wörterbuch zur Valenz und Distribution deutscher Verben by Helbig and Schenkel (1969). In this dictionary, complements are described syntactically by a transparent coding system with formal categories (e.g., pSa stands for präpositionales Substantiv im Akkusativ) and semantically by coarse-grained categories such as human and abstract. Sommerfeldt and Schreiber (1974, 1977) extend this methodology to the valency of adjectives and nouns. A second early example is the Kleines Valenzlexikon deutscher Verben (KVL) by Engel and Schumacher (1976), which is restricted to syntactic valency. It employs a non-mnemonic coding system for constructional patterns (Satzmuster, Satzbaupläne) that are based on a dependency grammar approach (Engel and Schumacher 1976: chap. 2; Schumacher 2006a: 1399). A theoretically more ambitious project is Verben in Feldern (ViF) by Ballweg et al. (1986); see also Ballweg et al. (1981). ViF’s underlying model draws on ideas from categorial grammar and generative semantics (Ballweg et al. 1981: chap. 3; Schumacher 2006a: 1401). The semantic orientation of ViF is reflected by its onomasiological macrostructure, where verb entries are organized into semantic fields. The compilation of these dictionaries, which are limited to several hundred headwords, has been motivated with reference to teaching German as a foreign language (Deutsch als Fremdsprache).
The first two dictionaries are supposed to serve teachers as well as learners, while the linguistically more advanced ViF aims at teachers only. However, the usefulness of valency dictionaries for language teaching has been repeatedly called into question by Zöfgen, Wiegand, and others. Zöfgen (1989: 1007) observes growing evidence “daß die einseitig linguistisch argumentierende Valenzlexikographie zunehmend der Versuchung erliegt, ihre eigenen Anforderungen zum alleinigen Maßstab zu machen und mit Benutzerbedürfnissen gleichzusetzen” [that valency lexicography, with its strong focus on linguistic arguments, tends to take its own requirements as a standard and to identify them with user needs]. Wiegand (1990: 2173−2174) expresses similar doubts as to whether the intended user has sufficient linguistic expertise to benefit from dictionaries of this type. Zöfgen (1989: 1007) therefore argues for a more user-oriented presentation and is even inclined to consider pedagogical dictionaries as competitive with linguistically motivated ones in terms of precision and conciseness.

The situation today has improved insofar as two recent valency dictionaries are available, one for German and one for English, which both aim at combining linguistic theory with user-friendliness. The first is VALBU − Valenzwörterbuch deutscher Verben by Schumacher et al. (2004), the second A Valency Dictionary of English (VDE) by Herbst et al. (2004). VALBU builds partly on the ideas that underlie KVL and ViF, but uses only a minimal amount of technical vocabulary to describe the valency of verbs syntactically and semantically; cf. Schumacher (2006a: 1406−1408) for a brief introduction to VALBU’s basic features. For example, the prepositional verb denken an we looked at in section 4.3 has three main senses in VALBU. The constructional patterns associated with these units are uniformly described by the transparent code “NomE PräpE”, with NomE and PräpE being short for Nominativergänzung (‘nominative complement’) and Präpositivergänzung. As to PräpE, the dictionary grammar included in VALBU tells us that a Präpositivergänzung can be realized by a prepositional phrase, but also by a clausal complement, possibly in combination with a correlate; details are to be specified in the respective lexical entries. In the case of denken an, the first sense is glossed seine Überlegungen oder Gedanken auf jemanden/etwas richten. The list of syntactic realizations for PräpE comprises a prepositional phrase with an plus accusative as well as a number of clausal complements with obligatory correlate daran: dass-clause, infinitival clause with zu, w-question, and a complement that takes the form of a main clause as in Denke daran, der Teufel steckt im Detail! (without noting that this indirect speech construction is restricted to the imperative). All possible realizations are illustrated by example sentences, most of which are taken from corpora. The complementation patterns of the other two senses of denken an are specified in a similar vein. Taken together, the three lexical units cover about one full two-column page of the dictionary. Compared to the entries in (13) and (14) taken from learners’ dictionaries, the VALBU entry for denken an is clearly superior with respect to the precision and explicitness of its syntactic information.

The 638 verbs covered by VALBU were selected with regard to the requirements of the Zertifikat Deutsch, which corresponds to competence level B1 (lower intermediate learners) of the Common European Framework of Reference for Languages. VALBU’s primary target group are non-native teachers of German; further potential users are advanced learners (Schumacher et al. 2004: 20). The envisaged user situation is, however, not without problems. It seems questionable whether the high level of detail in VALBU is of much help with teaching lower intermediate learners.
Advanced learners and language professionals, on the other hand, can profit from the large amount of information provided by the dictionary, but may regard its limitation to several hundred words as a significant disadvantage for everyday use.

Similarly to VALBU, VDE is targeted toward non-native teachers, advanced learners, and language professionals (Herbst et al. 2004: vii). VDE is the first valency dictionary for English, notably compiled at a German university. In addition to 511 verbs, it includes 274 nouns and 544 adjectives. The verbs were chosen on the basis of the frequency and complexity of their valency structure; for adjectives and nouns, the presence of a valency pattern was a key criterion for inclusion. Valency information is described in two ways: by an inventory of syntactic realizations per argument position and by an exhaustive list of complementation patterns, accompanied by corpus examples (cf. Herbst 2007). VDE uses an elaborate though transparent coding system with formal labels. For instance, the pattern “+ NP + to N”, which occurs in entries like deliver and promise, says that, in an active sentence, the verb can be followed by a noun phrase and a prepositional phrase with to; the index P in NP indicates that the respective complement can take the subject position of a passive sentence (e.g., the car was delivered/promised to John). Although VDE strives for a user-friendly presentation, the sheer amount of information may complicate access even for the lexicographically versed reader: “The coding of the linguistic information that organizes and accompanies the examples that fill this book demands a seriously motivated reader and quite a bit of work” (Fillmore 2009: 56).

Let us take a closer look at the VDE entry for the verb promise, which served as a running example in section 4.1. The entry contains ten complementation patterns, including the following three:

(15) a. + (that)-CLP(it)
        Chancellor Kohl promises that the new Germany (…)
     b. + NP + (that)-CL
        In a televised speech, he promised East Germans they would not (…)
     c. + NP + to-INF
        She’d promised Beryl to keep an eye on him.

The index P(it) in (15a) allows for extraposition with it if the finite complement occurs as subject of a passive sentence. The lack of such an index in pattern (15b) indicates that the finite complement cannot be the subject of a passive sentence, whether extraposed or not, if the promisee is expressed by the direct object. It would thus be correct to say East Germans were promised they would etc. but not It was promised East Germans they would etc. Pattern (15c) is particularly interesting from a linguistic point of view because it brings up again the topic of subject control (cf. section 4.3). The attentive reader may have noticed that none of (3), (4), and (6) admits a control construction matching (15c) in which the promisee is explicitly mentioned. This omission in the learners’ dictionaries is presumably due to low corpus frequency, which in turn might be explained in terms of constructional and discourse-related constraints, as suggested by Egan (2006). A striking fact about (15c) is that it allows passivization, because this contradicts the restriction known as Visser’s Generalization (Bach 1979), according to which subject-controlled sentences cannot be passivized. Since VDE does not give an example of the passive construction, the reader can only guess whether the index P indicates corpus evidence against Visser’s Generalization, or whether it is simply a mistake.

According to Fillmore (2009: 75), VDE is primarily committed to complements “that appear inside the phrasal projection of the lexical item.” While in verb entries the subject is specified as well, though not as part of the valency patterns, there is no way to refer to the subject of a copula sentence or to the modified noun in an adjective entry.
This strategy might be acceptable from a purely syntactic perspective on valency, but it does not allow a satisfying explanation, for instance, of the semantic difference between familiar to and familiar with, since it is the thematic switch between phenomenon and experiencer in the external and internal arguments of familiar that marks the difference in this case (Fillmore 2009: 59). Another of Fillmore's desiderata is that VDE could have pointed out valency similarities and differences between derivationally related lexical units. The goals of Fillmore's own approach, FrameNet (Fillmore, Johnson, and Petruck 2003), are similar to those that underlie the compilation of VDE: to document all valency patterns for each lexical unit on the basis of corpus evidence. A central difference is that in FrameNet, lexical units are grouped into semantic classes, each of which is associated with a frame, a conceptual representation of a certain type of situation or state of affairs. For example, FrameNet defines a frame Commitment that includes the verbs commit, consent, promise, swear, and threaten (in one of their senses) as well as the nouns commitment, oath, and promise, among others. FrameNet's goals can be summarized as follows:

The FrameNet project is dedicated to producing valency descriptions of frame-bearing lexical units (…), in both semantic and syntactic terms, and it bases this work on attestations of word usage taken from a very large digital corpus. The semantic descriptors of each valency pattern are taken from frame-specific semantic role names (…), and the syntactic terms are taken from a restricted set of grammatical function names and a detailed set of phrase types. (Fillmore 2007: 129)


IX. Beyond Syntax

FrameNet provides thus a three-layered description of valency patterns: by phrase type, by grammatical function, and by semantic role. (16) shows two (of the many) valency patterns for the verb promise available in the FrameNet database, each accompanied by a corpus example.

(16) a.  NP        promise        Sfin                           DNI
         Ext                      Dep
         Speaker                  Message                        Addressee
         I         promise        I won't even offer to help.

     b.  NP        had promised   NP          Vpto
         Ext                      Obj         Dep
         Speaker                  Addressee   Message
         But she   had promised   Peter       to stay.

In (16a), DNI is short for definite null instantiation and indicates a “lexically licensed zero anaphora” (Fillmore 2007: 148). Ext and Dep stand for External and Dependent; Speaker, Addressee, and Message belong to the core semantic roles of the Commitment frame. As of the time of this writing, FrameNet is an ongoing project. Its potential benefits for valency lexicography as described in Heid (2006) and Fillmore (2007) are undeniable. However, in order to become usable as a dictionary for teachers, or even learners, progress on several fronts is necessary: a broader coverage of word senses, a more reliable annotation of corpus examples, and a more intelligible way of presenting the valency information to the non-expert.
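The three annotation layers can be pictured as parallel labelings of the spans of a sentence. The following sketch encodes example (16b) in an invented, simplified format (not FrameNet's actual data model):

```python
# Simplified three-layer valency annotation in the spirit of FrameNet:
# each annotated span carries a phrase type, a grammatical function,
# and a frame-specific semantic role (frame element).
annotation_16b = {
    "frame": "Commitment",
    "target": "had promised",
    "layers": [
        # (span, phrase type, grammatical function, semantic role)
        ("she",     "NP",   "Ext", "Speaker"),
        ("Peter",   "NP",   "Obj", "Addressee"),
        ("to stay", "Vpto", "Dep", "Message"),
    ],
}

def valency_pattern(annotation):
    """Collapse an annotated sentence into its abstract valency pattern."""
    return [(pt, gf, role) for _, pt, gf, role in annotation["layers"]]

print(valency_pattern(annotation_16b))
# -> [('NP', 'Ext', 'Speaker'), ('NP', 'Obj', 'Addressee'), ('Vpto', 'Dep', 'Message')]
```

A null-instantiated role such as the Addressee in (16a) would appear on the role layer only, with no corresponding span or phrase type.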

6.2. Dictionaries of function words

An overview of dictionaries of function words in different languages is beyond the scope of this article. For German, one can mention Schröder (1986) on prepositions, Helbig (1988) on particles, Buscha (1989) on conjunctions, and Helbig and Helbig (1990) on modals. Since function words are a heterogeneous class, their lexicographic treatment in specialized dictionaries differs considerably. For example, Schröder (1986: 243−245) covers the grammatical aspects of prepositions on only three pages by indicating the case they govern and their position with respect to the dependent noun phrase. The main part of the dictionary is concerned with a feature-based identification of the different senses a preposition can have. Conjunctions and other clausal connectives, in comparison, have much more intricate syntactic properties. It is thus indicative that Pasch et al. (2003: XV) locate the subject matter of their voluminous Handbuch der deutschen Konnektoren between lexicography and grammaticography. The descriptions of this handbook are partly available online at the web portal GRAMMIS of the Institut für Deutsche Sprache (Strecker 2005), which also provides syntagmatic information about prepositions.
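The grammatical core of such a preposition lexicon, governed case plus position relative to the noun phrase, fits into a very small record per entry. The following is a hypothetical sketch (entries and field names are our own, not Schröder's actual format):

```python
# Hypothetical mini-lexicon of German prepositions in the spirit of
# Schröder (1986): each entry records the governed case and whether the
# preposition precedes or follows the noun phrase.
PREPOSITIONS = {
    "mit":     {"case": "dative",     "position": "pre"},   # mit dem Auto
    "wegen":   {"case": "genitive",   "position": "pre"},   # wegen des Wetters
    "entlang": {"case": "accusative", "position": "post"},  # den Fluss entlang
}

def governs(preposition, case):
    """Check whether a preposition governs the given case."""
    return PREPOSITIONS.get(preposition, {}).get("case") == case

print(governs("mit", "dative"))  # -> True
```

The sense distinctions that make up the bulk of such a dictionary would hang off each entry as additional feature structures.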

57. Syntax and Lexicography


7. Computational lexicography

In Hanks (2003: 49), two meanings of the term computational lexicography are distinguished: "Using computational techniques to compile new dictionaries", and "[r]estructuring and exploiting human dictionaries for computational purposes".

7.1. Lexicography and corpus analysis

At the analysis stage of the lexicographic process described in section 2.3, the lexicographer systematically records, among other things, the syntagmatic constructions in which a word occurs, and decides on complementation patterns and sense distinctions. The resulting pre-dictionary profile of the word is maintained in a database; see Atkins and Rundell (2008: sect. 9.2.5) for an overview of the kind of grammatical information to be stored. Nowadays, this task is conducted with massive computational support. A word's syntagmatic behavior can be studied in detail by powerful corpus analysis tools such as the Sketch Engine of Kilgarriff et al. (2004), which allows one to extract concordances and to measure the salience scores of collocations and the like. Despite these advancements, one should keep in mind that "[t]he advent of electronic corpora and media can make the lexicographers' work better, but not necessarily easier" (Kirkness 2004: 56). Hanks (2003: 58) points out that corpus evidence must be treated with caution. Frequency alone is not a sufficient criterion because it cannot always preclude the idiosyncrasies of a single author. Another problem is the "failure-to-find fallacy": a pattern may exist in the language even if it is not found in the corpus. In general, "lexicographers should carefully decide which language sample(s) they want to be working on" (Heid 2008: 134), that is, they should be particularly sensitive to questions of corpus authenticity and representativity. Heid (2008: 138−139) draws a distinction between corpus-driven and corpus-based approaches to corpus lexicography.
In the first case, the lexicographer avoids "as much as possible the a priori projection of linguistic categorizations" and relies "essentially on the observation of distributional facts identified." In the second case, projections or annotations resulting from computational linguistic preprocessing are accepted as a basis for lexicographic analysis. The wide variety of preprocessing options includes lemmatizing, part-of-speech tagging, noun phrase chunking, shallow (syntactic or semantic) parsing, or even deep (syntactic or semantic) parsing. The more linguistic annotation can be conducted automatically, the less has to be decided intellectually by the lexicographer. However, it also holds that the more ambitious the automated annotation, the more error-prone it is likely to be. Alternatively, the lexicographer can resort to corpora that have been annotated manually. In particular, treebanks, i.e., syntactically analyzed corpora, are of great interest to the lexicographic process, but they are naturally limited in size because of the time and effort involved in the manual treatment. FrameNet, seen as a corpus annotation project, provides both a syntactic and a semantic annotation layer. It has been argued that the FrameNet corpus can be useful to lexicographic analysis (Atkins 2002; Atkins, Fillmore, and Johnson 2003; Atkins, Rundell, and Sato 2003; Atkins and Rundell 2008: sect. 5.4). The verb cook discussed in section 3 may serve as an example. It is associated with three FrameNet frames: Apply_heat,
Absorb_heat, and Cooking_creation. The first two frames belong to the change-of-state sense of cook that participates in the causative-inchoative alternation; Apply_heat covers the causative and Absorb_heat the inchoative use; these two frames are interrelated by the is_causative_of frame-to-frame relation. The Cooking_creation frame corresponds to the creation sense of cook that participates in the unexpressed object alternation. Since FrameNet annotates complements by frame-specific semantic roles, the annotation makes explicit to the lexicographer which of the intransitive uses of cook has a subject that carries the same role as that of the transitive use, and which does not. A related attempt towards providing corpora with a (shallow) syntactic and semantic annotation layer for lexicographic purposes is the Corpus Pattern Analysis approach of Hanks and Pustejovsky (2005). Given a morphosyntactically analyzed corpus, it is relatively straightforward to extract subcategorization patterns from it − which means, in a sense, to automatize part of the lexicographic process. Senses of words are then distinguished insofar as they are already distinguished in the annotated corpus. Manning (1993) is an early example of this type of approach, Korhonen, Krymolowski, and Briscoe (2006) a more recent one. Both have in common that the syntactic analysis of the corpus is generated on the basis of probabilistic processing; see Schulte im Walde (2009) for an overview of this line of research. Unsurprisingly, the resulting lexicons are not intended for human use but as input to natural language processing (NLP) systems. Note that the FrameNet lexicon can be regarded as automatically acquired from an analyzed corpus too, since "the valency patterns are automatically derived from the annotations" (Fillmore 2007: 129); see also Spohr et al. (2007). The difference is that the corpus has been analyzed manually in this case.
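The extraction step just described can be sketched in a few lines; the toy clause representation below (a verb lemma plus a sequence of complement chunk labels) is our own simplification of what a shallow parser might deliver:

```python
from collections import Counter, defaultdict

# Toy "analyzed corpus": each clause is a verb lemma plus the sequence of
# chunk labels that a shallow parser identified as its complements.
corpus = [
    ("promise", ["NP", "that-CL"]),
    ("promise", ["NP", "to-INF"]),
    ("promise", ["NP", "that-CL"]),
    ("cook",    ["NP"]),   # transitive use
    ("cook",    []),       # intransitive use
]

def extract_subcat(clauses):
    """Count complementation patterns per verb lemma."""
    lexicon = defaultdict(Counter)
    for verb, complements in clauses:
        lexicon[verb][tuple(complements)] += 1
    return lexicon

lex = extract_subcat(corpus)
print(lex["promise"].most_common(1))  # -> [(('NP', 'that-CL'), 2)]
```

Real systems differ in how they guard the resulting counts against parser errors and against the frequency pitfalls mentioned above, but the core bookkeeping has this shape.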

7.2. Electronic dictionaries

Most dictionaries compiled for human use are nowadays available in electronic form, in one way or another, and are accessible via CD-ROM or the internet. From a user's perspective, this does not necessarily imply significant changes compared to the print edition besides a new retrieval system and certain improvements in the layout (e.g., the avoidance of textual condensation). Truly innovative features and a fully integrated hypermedia access structure are still the exception (de Schryver 2003). As far as the presentation of syntagmatic information is concerned, the challenges of matching the user's needs and reference skills remain basically the same as those discussed in sections 4 and 5. Nevertheless, a hypermedia dictionary system could be much more flexible than its printed counterpart if it allowed the user to decide how explicitly and in how much detail the lexical information is presented. Moreover, the ideal system would come with an integrated grammar component and a corpus query facility. De Schryver (2003: 189−190) lists a handful of promising research projects, such as the web-based language learning system ELDIT (Abel and Weber 2005), that are working towards more adaptive hypermedia dictionaries.

As soon as standard dictionaries were available in machine-readable form, computational linguists became interested in exploiting them for natural language processing (NLP) purposes, since building a wide-coverage lexical resource for NLP is an expensive
and time-consuming undertaking. For example, Boguraev et al. (1987) describe an approach to automatically transform the syntactic information of LDOCE1 into a format suitable for NLP. Other research initiatives, such as the ACQUILEX project (Copestake et al. 1993), took a similar direction, but with an emphasis on using dictionary definitions as a source for knowledge acquisition; see also Wilks et al. (1996). With regard to the latter aspect, Ide and Véronis (1994) give a rather sceptical summary of what has been achieved. As to the exploitation of syntactic information, one could suspect that the emphasis on usability and the resulting move from codes to descriptions in normal prose is rather unwelcome to those who want to use this sort of information in NLP applications. Indeed, Rundell (2007b) mentions in passing that he “was almost lynched at a computational linguistics conference (…) when the word got around that [he] was ‘the man who removed the codes from LDOCE’.” Zgusta (1988) arrives at a similar conclusion concerning the diverging needs of humans and machines: If the COBUILD style proves attractive and if monolingual dictionaries for human users generally follow this style, we shall have the interesting situation in which dictionaries constructed for the human user will take into consideration human abilities and will therefore allow themselves to be less exact and less explicit, whereas dictionaries constructed for machine use will not be able to allow themselves such licenses. In this way a situation may develop in which dictionaries for computational use will have to be more strictly constructed than the user-friendly dictionaries for human users.

However, Zgusta's argument is not compelling. For if the distinction between analysis and synthesis in the lexicographic process is taken seriously, and if the pre-dictionary database is carefully designed and maintained, then it is not the final dictionary product computational linguists should be after, but the lexical database itself. Valency dictionaries are more on the database side of the foregoing distinction since it is their concern to describe the valency of a word as completely as possible. Accordingly, Heid (2007: 378) comes to the conclusion that VDE "shows considerable affinity with NLP, even though it was not conceived with the use by automatic systems in mind. But the presence of a clear descriptive programme, its richness in details and its reproducible internal structure contribute to its multifunctionality." Heid (2007) furthermore reports on an experiment to enhance the lexicon of a Lexical Functional Grammar (LFG) system for English with data from VDE. The lexicon-grammar of Gross (1994), in its electronic form, can be regarded as a lexical database that describes the syntagmatic environment of several thousand French verbs. Gardent et al. (2005, 2006) outline how to transform these data into a subcategorization lexicon suitable for NLP applications. It is part of Gross' program to group verbs into classes on the basis of their common distributional properties. This idea shows some resemblance to the alternation-based approach that underlies VerbNet (Kipper, Dang, and Palmer 2000), a lexical database of English verb classes that builds on Levin (1993). Especially the corpus-based induction of verb classes pursued by Kipper et al. (2006) resembles Gross' program to some extent. Crouch and King (2005) describe an attempt to enrich the above-mentioned LFG lexicon by the valency patterns of VerbNet, amongst others.
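Gross' idea of classing verbs by common distributional properties amounts to partitioning verbs by their pattern signatures. A toy sketch (the verbs and patterns are illustrative, taken from neither the lexicon-grammar nor VerbNet):

```python
from collections import defaultdict

# Toy distributional table: which constructions each verb accepts.
# Verbs with identical signatures fall into one class, in the spirit of
# Gross's lexicon-grammar and the alternation-based classes of VerbNet.
table = {
    "promise": {"NP_V_NP_that-CL", "NP_V_NP_to-INF"},
    "assure":  {"NP_V_NP_that-CL"},
    "swear":   {"NP_V_NP_that-CL", "NP_V_NP_to-INF"},
}

def verb_classes(table):
    """Partition verbs into classes by identical pattern signatures."""
    classes = defaultdict(list)
    for verb, patterns in table.items():
        classes[frozenset(patterns)].append(verb)
    return [sorted(verbs) for verbs in classes.values()]

print(sorted(verb_classes(table)))  # -> [['assure'], ['promise', 'swear']]
```

Corpus-based induction replaces the hand-filled table with counts extracted from analyzed text and a clustering step over the resulting signatures.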
COMLEX (Grishman, Macleod, and Meyers 1994) is one of the few electronic dictionaries available that have been specifically compiled for supporting NLP applications
without any specific grammatical framework in mind. COMLEX has been used to automatically supplement the lexicon of the English Resource Grammar, a broad-coverage grammar of English in the Head-Driven Phrase Structure Grammar (HPSG) framework that is part of the Linguistic Grammars Online (LinGO) project (Copestake and Flickinger 2000). Other computational dictionaries specifically developed for NLP applications are described in Hartrumpf, Helbig, and Osswald (2003) and McShane, Nirenburg, and Beale (2005), both of which are embedded in NLP systems for deep semantic analysis.

8. Further syntactic aspects of lexicography

Syntax, up to now, has referred to the syntax of the natural languages dictionaries are about. There are further meanings of this term which are relevant to lexicography. A first example is the syntax of definitions in dictionary entries, which is not concerned with the syntax of natural language proper but with a regimented, controlled sublanguage: "Scraps of sublanguage (…) may well be found in unexpected places, as for example in the language of dictionary definitions" (Sinclair 1996). In this context, syntax serves as a normative device that specifies what definitions must look like, i.e., their basic vocabulary and the set of admissible constructions. Another, more technical notion of syntax is concerned with the microstructure of a lexical entry, i.e., with questions of how the entry's informational components are arranged. Used in this way, the syntax of a lexical entry is a topic for metalexicographic analysis and has been extensively studied by Wiegand (1989a, 1989b). Today, in the electronic age, there is an additional structural, and thus, in a sense, syntactic level relevant to dictionary design: the specification of lexical data as an abstract data structure. From this perspective, microstructure is more a concept of presentation than of representation (see also Müller-Spitzer 2006: 91). Lexical data structures can be specified in different ways and at different levels of abstraction. The prevalent methods are XML schemas, entity-relationship models, and the like. Another option is to lean on the Lexical Markup Framework (LMF) (Francopoulo et al. 2006), a generic conceptual model for dictionaries that is formally specified by means of the Unified Modeling Language (UML) and is currently being developed under the auspices of the International Organization for Standardization (ISO).
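At this level, a lexical entry is simply structured data. The fragment below builds a simplified, LMF-inspired entry with Python's standard XML library; the element names echo LMF classes (LexicalEntry, Lemma, SubcategorizationFrame, SyntacticArgument), but the serialization is our own illustration, not the official LMF schema:

```python
import xml.etree.ElementTree as ET

# Simplified, LMF-inspired structure for one valency-bearing entry.
# (Illustrative only; the actual LMF serialization differs in detail.)
entry = ET.Element("LexicalEntry")
ET.SubElement(entry, "Lemma", writtenForm="promise")
frame = ET.SubElement(entry, "SubcategorizationFrame")
ET.SubElement(frame, "SyntacticArgument", function="subject", phraseType="NP")
ET.SubElement(frame, "SyntacticArgument", function="object", phraseType="that-CL")

xml_string = ET.tostring(entry, encoding="unicode")
print(xml_string)
```

Whatever the concrete format, the point of such an abstract specification is that print layout, hypermedia views, and NLP exports can all be generated from the same underlying structure.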

9. References (selected)

9.1. Dictionaries

Buscha, Joachim 1989 Lexikon deutscher Konjunktionen. Leipzig: Verlag Enzyklopädie.
Ballweg, Joachim, Angelika Ballweg-Schramm, Pierre Bourstin, Helmut Frosch, Michael Kinne, Jacqueline Kubczak, and Helmut Schumacher 1986 Verben in Feldern: Valenzwörterbuch zur Syntax und Semantik deutscher Verben. (Schriften des Instituts für deutsche Sprache 1.) Berlin: de Gruyter. [= ViF].


Cowie, Anthony Paul (ed.) 1989a Oxford Advanced Learner's Dictionary of Current English. Oxford: Oxford University Press, 4th edn. [= OALD4].
Cyffka, Andreas (ed.) 2004 PONS Großwörterbuch Deutsch als Fremdsprache. Stuttgart: Ernst Klett Sprachen. [= PGwDaF].
Dubois, Jean (ed.) 1966 Dictionnaire du Français Contemporain. Paris: Larousse. [= DFC].
Dubois, Jean (ed.) 1987 Dictionnaire de Français. Paris: Larousse. [= DDF].
Dudenredaktion (ed.) 1999 Duden. Das große Wörterbuch der deutschen Sprache in zehn Bänden. Mannheim: Duden Verlag. [= GWDS].
Engel, Ulrich, and Helmut Schumacher 1976 Kleines Valenzlexikon deutscher Verben. (Forschungsberichte des Instituts für Deutsche Sprache Mannheim 31.) Tübingen: Narr. [= KVL].
Götz, Dieter, Günther Haensch, and Hans Wellmann (eds.) 1998 Langenscheidts Großwörterbuch Deutsch als Fremdsprache. Berlin: Langenscheidt. [= LGwDaF].
Helbig, Gerhard 1988 Lexikon deutscher Partikeln. Leipzig: Verlag Enzyklopädie.
Helbig, Gerhard, and Agnes Helbig 1990 Lexikon deutscher Modalwörter. Leipzig: Verlag Enzyklopädie.
Helbig, Gerhard, and Wolfgang Schenkel 1969 Wörterbuch zur Valenz und Distribution deutscher Verben. Leipzig: VEB Bibliographisches Institut.
Herbst, Thomas, David Heath, Ian F. Roe, and Dieter Götz 2004 A Valency Dictionary of English. A Corpus-Based Analysis of the Complementation Patterns of English Verbs, Nouns and Adjectives. Berlin: de Gruyter. [= VDE].
Hornby, Albert S. (ed.) 1974 Oxford Advanced Learner's Dictionary of Current English. Oxford: Oxford University Press, 3rd edn. [= OALD3].
Kempcke, Günter (ed.) 2000 Wörterbuch Deutsch als Fremdsprache. Berlin: de Gruyter. [= dGWDaF].
Procter, Paul (ed.) 1978 Longman Dictionary of Contemporary English. Harlow: Pearson. [= LDOCE1].
Rey-Debove, Josette, and Alain Rey (eds.) 2000 Le nouveau Petit Robert. Dictionnaire alphabétique et analogique de la langue française. Paris: Le Robert. [= NPR].
Rundell, Michael (ed.) 2007a Macmillan English Dictionary for Advanced Learners. Oxford: Macmillan, 2nd edn. [= MEDAL2].
Schröder, Jochen 1986 Lexikon deutscher Präpositionen. Leipzig: Verlag Enzyklopädie.
Schumacher, Helmut, Jacqueline Kubczak, Renate Schmidt, and Vera de Ruiter 2004 VALBU − Valenzwörterbuch deutscher Verben. (Studien zur Deutschen Sprache 31.) Tübingen: Narr. [= VALBU].
Sinclair, John M. (ed.) 1987a Collins COBUILD English Language Dictionary. London: Collins ELT. [= COBUILD1].
Sinclair, John M. (ed.) 1995 Collins COBUILD English Dictionary. London: HarperCollins. [= COBUILD2].


Sinclair, John M. (ed.) 2003 Collins COBUILD Advanced Learner's English Dictionary. London: HarperCollins. [= COBUILD4].
Soanes, Catherine, and Angus Stevenson (eds.) 2003 Oxford Dictionary of English. Oxford: Oxford University Press, 2nd edn. [= ODE2].
Sommerfeldt, Karl-Ernst, and Herbert Schreiber 1974 Wörterbuch zur Valenz und Distribution deutscher Adjektive. Leipzig: VEB Bibliographisches Institut.
Sommerfeldt, Karl-Ernst, and Herbert Schreiber 1977 Wörterbuch zur Valenz und Distribution deutscher Substantive. Leipzig: VEB Bibliographisches Institut.
Summers, Della (ed.) 2003 Longman Dictionary of Contemporary English. Harlow: Pearson, 4th edn. [= LDOCE4].
Wehmeier, Sally (ed.) 2005 Oxford Advanced Learner's Dictionary of Current English. Oxford: Oxford University Press, 7th edn. [= OALD7].

9.2. Other publications

Aarts, Flor 1991 Lexicography and syntax: The state of the art in learner's dictionaries of English. In: J. E. Alatis (ed.), Georgetown University Round Table on Languages and Linguistics 1991. Linguistics and Language Pedagogy: The State of the Art, 567−582. Washington D. C.: Georgetown University.
Abel, Andrea, and Vanessa Weber 2005 ELDIT − electronic learner's dictionary of German and Italian: Semibilingual, bilingualised or a totally new type? In: H. Gottlieb, J. E. Mogensen, and A. Zettersten (eds.), Proceedings of the Eleventh International Symposium on Lexicography, 73−84. (Lexicographica: Series maior 115.) Tübingen: Niemeyer.
Apresjan, Juri 1992 Systematic lexicography. In: H. Tommola, K. Varantola, T. Salmi-Tolonen, and J. Schopp (eds.), Proceedings of the Fifth EURALEX International Congress (EURALEX 1992), 3−16. Tampere: University of Tampere.
Apresjan, Juri 2000 Systematic Lexicography. Oxford: Oxford University Press.
Apresjan, Juri 2002 Principles of systematic lexicography. In: M.-H. Corréard (ed.), Lexicography and Natural Language Processing. A Festschrift in Honour of B. T. S. Atkins, 91−104. Grenoble: Euralex.
Atkins, B. T. Sue 1992/93 Theoretical lexicography and its relation to dictionary-making. Dictionaries: The Journal of the DSNA 14: 4−43.
Atkins, B. T. Sue 2002 Then and now: Competence and performance in 35 years of lexicography. In: A. Braasch and C. Povlsen (eds.), Proceedings of the Tenth EURALEX International Congress (EURALEX 2002), 1−28. Copenhagen, Denmark: Center for Sprogteknologi.
Atkins, B. T. Sue, Charles J. Fillmore, and Christopher R. Johnson 2003 Lexicographic relevance: Selecting information from corpus evidence. International Journal of Lexicography 16(3): 251−280.
Atkins, B. T. Sue, Judy Kegl, and Beth Levin 1988 Anatomy of a verb entry: from linguistic theory to lexicographic practice. International Journal of Lexicography 1(2): 84−126.


Atkins, B. T. Sue, and Michael Rundell 2008 The Oxford Guide to Practical Lexicography. Oxford: Oxford University Press.
Atkins, B. T. Sue, Michael Rundell, and Hiroaki Sato 2003 The contribution of FrameNet to practical lexicography. International Journal of Lexicography 16(3): 333−357.
Bach, Emmon 1979 Control in Montague Grammar. Linguistic Inquiry 10(4): 515−531.
Ballweg, Joachim, Angelika Ballweg-Schramm, Pierre Bourstin, Helmut Frosch, Jacqueline Kubczak, and Helmut Schumacher 1981 Konzeption eines Wörterbuchs deutscher Verben. (Forschungsberichte des Instituts für deutsche Sprache 45.) Tübingen: Narr.
Bassola, Peter 2006 Valenzinformationen in allgemeinen zweisprachigen Wörterbüchern. In: V. Ágel, L. M. Eichinger, H. W. Eroms, P. Hellwig, H. J. Heringer, and H. Lobin (eds.), Dependenz und Valenz / Dependency and Valency, vol. 2: 1387−1396. Berlin: de Gruyter.
Béjoint, Henri 1994 Modern Lexicography. Oxford: Oxford University Press.
Bergenholtz, Henning 1984 Grammatik im Wörterbuch: Syntax. In: H. E. Wiegand (ed.), Studien zur neuhochdeutschen Lexikographie, vol. V: 1−46. (Germanistische Linguistik 3−6/84.) Hildesheim: Olms.
Bergenholtz, Henning 2002 Das de Gruyter Wörterbuch Deutsch als Fremdsprache und das neue DUDEN-Wörterbuch in zehn Bänden. Ein Vergleich im Hinblick auf die Grammatik. In: H. E. Wiegand (ed.), Perspektiven der pädagogischen Lexikographie des Deutschen II. Untersuchungen anhand des "De-Gruyter-Wörterbuchs Deutsch als Fremdsprache", 35−53. (Lexicographica: Series maior 110.) Tübingen: Niemeyer.
Bergenholtz, Henning, and Jens Erik Mogensen 1998 Die Grammatik der Verben in Langenscheidts Großwörterbuch Deutsch als Fremdsprache. In: H. E. Wiegand (ed.), Perspektiven der pädagogischen Lexikographie des Deutschen. Untersuchungen anhand von "Langenscheidts Großwörterbuch Deutsch als Fremdsprache", 77−87. (Lexicographica: Series maior 86.) Tübingen: Niemeyer.
Bogaards, Paul, and Willem A. van der Kloot 2001 The use of grammatical information in learners' dictionaries. International Journal of Lexicography 18(2): 97−121.
Bogaards, Paul, and Willem A. van der Kloot 2002 Verb constructions in learners' dictionaries. In: A. Braasch and C. Povlsen (eds.), Proceedings of the Tenth EURALEX International Congress (EURALEX 2002), 747−757. Copenhagen, Denmark: Center for Sprogteknologi.
Boguraev, Bran, Ted Briscoe, John Carroll, David Carter, and Claire Grover 1987 The derivation of a grammatically indexed lexicon from the Longman Dictionary of Contemporary English. In: Proceedings of the 25th Annual Meeting of the ACL, 193−200.
Busse, Winfried 2006 Valenzlexika in anderen Sprachen. In: V. Ágel, L. M. Eichinger, H. W. Eroms, P. Hellwig, H. J. Heringer, and H. Lobin (eds.), Dependenz und Valenz / Dependency and Valency, vol. 2: 1424−1435. Berlin: de Gruyter.
Coffey, Stephen 2006 High-frequency grammatical lexis in advanced-level English learners' dictionaries: From language description to pedagogical usefulness. International Journal of Lexicography 19(2): 157−173.


Colleman, Timothy 2005 On representing verb complementation patterns in dictionaries: The approach of the CVVD. In: H. Gottlieb, J. E. Mogensen, and A. Zettersten (eds.), Proceedings of the Eleventh International Symposium on Lexicography, 183−193. (Lexicographica: Series maior 115.) Tübingen: Niemeyer.
Copestake, Ann, and Dan Flickinger 2000 An open-source grammar development environment and broad-coverage English grammar using HPSG. In: Proceedings of LREC 2000, 591−600. Athens, Greece.
Copestake, Ann, Antonio Sanfilippo, Ted Briscoe, and Valeria de Paiva 1993 The ACQUILEX LKB: An introduction. In: T. Briscoe, A. Copestake, and V. de Paiva (eds.), Inheritance, Defaults, and the Lexicon, 148−163. Cambridge: Cambridge University Press.
Corbin, Pierre 2002 Lexicographie et linguistique: une articulation difficile. L'exemple du domaine français. In: F. Melka and M. C. Augusto (eds.), De la lexicologie à la lexicographie / From Lexicology to Lexicography, 8−37. Utrecht: Utrecht Institute of Linguistics OTS.
Corbin, Pierre 2008 Quel avenir pour la lexicographie française. In: J. Durand, B. Habert, and B. Laks (eds.), Congrès Mondial de Linguistique Française − CMLF'08, 1227−1250. Paris: Institut de Linguistique Française.
Cowie, Anthony Paul 1989b Information on syntactic constructions in the general monolingual dictionary. In: F. J. Hausmann, O. Reichmann, H. E. Wiegand, and L. Zgusta (eds.), Wörterbücher / Dictionaries / Dictionnaires, vol. 1: 588−592. Berlin: de Gruyter.
Cowie, Anthony Paul 1998 A. S. Hornby, 1898−1998: A centenary tribute. International Journal of Lexicography 11(4): 251−268.
Cowie, Anthony Paul 1999 English Dictionaries for Foreign Learners: A History. Oxford: Oxford University Press.
Crouch, Dick, and Tracy Holloway King 2005 Unifying lexical resources. In: Proceedings of the Interdisciplinary Workshop on the Identification and Representation of Verb Features and Verb Classes, 32−37. Saarbrücken, Germany.
de Schryver, Gilles-Maurice 2003 Lexicographers' dreams in the age of electronic dictionaries. International Journal of Lexicography 16(2): 143−199.
Dentschewa, Emilia 2006 DaF-Wörterbücher im Vergleich: Ein Plädoyer für "Strukturformeln". In: A. Dimova and V. J. und Pavel Petkov (eds.), Zweisprachige Lexikographie und Deutsch als Fremdsprache, 113−128. (Germanistische Linguistik 184−185/2006.) Hildesheim: Olms.
Dubois, Jean 1981 Models of the dictionary: Evolution in dictionary design. Applied Linguistics 2(3): 236−249.
Dubois, Jean 1983 Dictionnaire et syntaxe. Lexique 2: 85−88.
Dziemianko, Anna 2006 User-friendliness of Verb Syntax in Pedagogical Dictionaries of English. (Lexicographica: Series maior 130.) Tübingen: Niemeyer.
Egan, Thomas 2006 Did John really promise Mary to leave? Constructions SV 1−2.


Engelberg, Stefan, and Lothar Lemnitzer 2009 Lexikographie und Wörterbuchbenutzung. Tübingen: Stauffenburg, 4th edn.
Fillmore, Charles J. 2007 Valency issues in FrameNet. In: T. Herbst and K. Götz-Votteler (eds.), Valency: Theoretical, Descriptive and Cognitive Issues, 129−160. Berlin: Mouton de Gruyter.
Fillmore, Charles J. 2009 Review of 'A Valency Dictionary of English'. International Journal of Lexicography 22(1): 55−85.
Fillmore, Charles J., Christopher R. Johnson, and Miriam R. L. Petruck 2003 Background to FrameNet. International Journal of Lexicography 16(3): 235−250.
Fontenelle, Thierry (ed.) 2008 Practical Lexicography. A Reader. Oxford: Oxford University Press.
Francopoulo, Gil, Monte George, Nicoletta Calzolari, Monica Monachini, Nuria Bel, Mandy Pet, and Claudia Soria 2006 Lexical markup framework (LMF). In: Proceedings of LREC 2006, 233−236.
Gardent, Claire, Bruno Guillaume, Guy Perrier, and Ingrid Falk 2005 Maurice Gross' grammar lexicon and Natural Language Processing. In: Proceedings of the 2nd Language and Technology Conference. Poznan, Poland.
Gardent, Claire, Bruno Guillaume, Guy Perrier, and Ingrid Falk 2006 Extraction d'information de sous-catégorisation à partir des tables du LADL. In: Proceedings of TALN 2006. Leuven.
Gouws, Rufus H. 1998 Das System der sogenannten Strukturformeln in Langenscheidts Großwörterbuch Deutsch als Fremdsprache: eine kritische Übersicht. In: H. E. Wiegand (ed.), Perspektiven der pädagogischen Lexikographie des Deutschen. Untersuchungen anhand von "Langenscheidts Großwörterbuch Deutsch als Fremdsprache", 63−76. (Lexicographica: Series maior 86.) Tübingen: Niemeyer.
Gouws, Rufus H. 2004 State-of-the-art paper: Lexicology and lexicography: Milestones in metalexicography. In: P. van Sterkenburg (ed.), Linguistics Today − Facing a Greater Challenge, 187−205. Amsterdam: John Benjamins.
Gouws, Rufus H., Ulrich Heid, Wolfgang Schweickard, and Herbert Ernst Wiegand (eds.) 2013 Dictionaries. An International Encyclopedia of Lexicography. Supplementary Volume: Recent Developments with Focus on Electronic and Computational Lexicography. Handbooks of Linguistics and Communication Science. Berlin: de Gruyter.
Grishman, Ralph, Catherine Macleod, and Adam Meyers 1994 Comlex syntax: Building a computational lexicon. In: Proceedings of COLING'94. Kyoto, Japan.
Gross, Maurice 1994 Constructing lexicon-grammars. In: B. T. S. Atkins and A. Zampolli (eds.), Computational Approaches to the Lexicon, 213−263. Oxford: Oxford University Press.
Hanks, Patrick 1987 Definitions and explanations. In: J. M. Sinclair (ed.), Looking Up: An Account of the COBUILD Project in Lexical Computing, 116−136. London: Collins ELT.
Hanks, Patrick 2003 Lexicography. In: R. Mitkov (ed.), The Oxford Handbook of Computational Linguistics, 48−69. Oxford: Oxford University Press.
Hanks, Patrick, and James Pustejovsky 2005 A pattern dictionary for natural language processing. Revue française de linguistique appliquée 10(2): 63−82.
Hanks, Patrick, Anne Urbschat, and Elke Gehweiler 2006 German light verb constructions in corpora and dictionaries. International Journal of Lexicography 19(4): 439−457.

1996

IX. Beyond Syntax

57. Syntax and Lexicography



Rainer Osswald, Düsseldorf (Germany)

58. Computational Syntax

1. Introduction: What is computational syntax?
2. Why do computational syntax?
3. Regression testing and maintenance
4. Robustness
5. Parse selection and ambiguity management
6. Parsing
7. Multilingual computational syntax
8. Summary and conclusion
9. References (selected)

Abstract

This chapter provides an introduction to and overview of syntax from a computational perspective. Computational syntax is concerned with the syntactic representations that are necessary for automatic analysis by computers. These representations can provide insights into the validity of proposed theoretical formalisms and into theoretical syntactic analyses, especially into non-core phenomena and interactions across phenomena, and are key to practical applications, especially meaning-sensitive applications such as question answering and machine translation.


Computational syntax involves determining both the types of representations needed and the means of efficiently and accurately obtaining these representations. These concerns require research into efficient parsing and generation algorithms, robustness techniques, and ambiguity management, as well as areas more familiar to theoretical syntacticians such as the interaction of syntax with pre- and post-processing modules (e.g. morphology and semantics) and the cross-linguistic applicability of analyses and methods.

1. Introduction: What is computational syntax?

Computational syntax concerns itself with the representation and manipulation (by computer) of the structures inherent in natural language strings, making explicit in the structures relationships and other grammatical information that are only implicit in the strings. This research goal requires work on three sets of problems: (1) the design of the representational structures (dependency structures, phrase structure trees, feature structures, etc.), (2) the design of algorithms for parsing (assigning structures to input sentences), generation (assigning surface realisations to underlying semantic representations), and downstream processing where parsers (or generators) are embedded in larger natural language processing systems, and (3) the exploration of processes for collecting the machine-readable grammatical knowledge used by the algorithms to assign structure.

Within computational syntax, there is a wide variety of approaches to obtaining the grammatical knowledge which forms the basis of parsing (or generation) systems. At one end of the spectrum there is grammar engineering, the process of hand-constructing grammars, including both lexical and syntactic knowledge (e.g. Copestake and Flickinger 2000; Butt et al. 2002). At the other end there is the annotation of large collections of naturally occurring texts with phrase structural or dependency information in order to create treebanks or dependency banks. Grammars can then be automatically extracted out of such data sources (e.g. Miyao et al. 2004; Cahill et al. 2002; Hockenmaier and Steedman 2007). These two approaches can be hybridised, for example creating a skeletal grammar by hand and then elaborating it with machine learning over a treebank (Cramer and Zhang 2009). Finally, there is research into unsupervised learning of grammatical structure which pairs a very small number of starting assumptions (e.g.
binary-branching tree structure) with a large amount of unannotated text as input to an algorithm that finds a grammar for the language in the text (Klein and Manning 2004; Bod 2007).

Just as there is variation in the approach to constructing grammars, there is also variation in the resulting representations. The range of possibilities includes: phrase structure trees (with a relatively small set of node labels); dependency trees; more elaborate syntactic structures representing grammatical roles, case and other features with attribute-value matrices; and a variety of semantic representations which typically include predicate-argument structure, constraints on the scope of quantifiers and scopal modifiers, and semantically relevant features such as person, number, and gender, or tense, aspect and mood. Some of these different types of structures are illustrated (in simplified form) in Figure 58.1.

Computational syntax is almost as old as modern generative syntax; early transformational grammar-inspired implementations include Zwicky et al. (1965), Petrick (1965)


Fig. 58.1: Sample representations from various formalisms, simplified

and Friedman et al. (1971) (see also Kay 1967 for an alternative approach). In the 1970s and 1980s (and into the early 1990s), the field saw the creation or elaboration of a variety of syntactic frameworks specifically designed to be computationally tractable: Augmented Transition Networks (ATN, Woods 1970), Tree-Adjoining Grammar (TAG, Joshi et al. 1975), Lexical-Functional Grammar (LFG, Kaplan and Bresnan 1982), Head-driven Phrase Structure Grammar (HPSG, Pollard and Sag 1994), and Categorial Grammar (CG, Wood 1993). Since then, there has been a continual development and refinement of software tools (e.g., Crouch et al. 2009; Copestake 2002; Baldridge et al. 2007) for working in these frameworks. Mainstream generative syntax has been less integrated with computational work, but there are implementation efforts for Government and Binding Theory (Chomsky 1986) and the Minimalist Program (Chomsky 1995), for example by Fong (1991) and Stabler (2001).

As computers have become more powerful and easier to work with, the purview of computational syntax has increased, both in terms of the range of phenomena addressed in a single grammar, and in terms of the array of applications that make use of such grammars. In this chapter, we begin by motivating computational syntax as a field of study (section 2), and then look into some of the considerations which are particular to computational syntax, including regression testing and grammar maintenance (section 3), robustness (section 4), and ambiguity management (section 5). In section 6 we briefly overview parsing algorithms, i.e. the means by which computers use the knowledge contained in machine-readable grammars to assign structure to input sentences. Finally, section 7 explores multilingual computational syntax.
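The representation types illustrated in Figure 58.1 are, in a computational setting, concrete data structures. The following sketch shows one possible encoding of each type for the sentence Dogs barked; the labels and feature names are invented for illustration and do not follow any particular framework.

```python
# Illustrative encodings of the structure types shown in Figure 58.1,
# for the sentence "Dogs barked". Labels/features here are hypothetical.

# Phrase structure tree as nested tuples: (label, child1, child2, ...)
phrase_structure = ("S",
                    ("NP", ("N", "dogs")),
                    ("VP", ("V", "barked")))

# Dependency structure as (head, relation, dependent) triples
dependencies = [("barked", "subj", "dogs")]

# Attribute-value matrix (feature structure) as a nested dictionary
avm = {
    "CAT": "S",
    "SUBJ": {"CAT": "NP", "NUM": "pl", "PRED": "dog"},
    "TENSE": "past",
    "PRED": "bark<SUBJ>",
}

# Flat predicate-argument (semantic) representation
semantics = [("bark", "e1"), ("arg1", "e1", "x1"), ("dog", "x1")]

def leaves(tree):
    """Recover the surface string from a phrase structure tree."""
    if isinstance(tree, str):
        return [tree]
    label, *children = tree
    return [w for c in children for w in leaves(c)]

print(" ".join(leaves(phrase_structure)))  # dogs barked
```

Real systems attach far richer information to each node, but the contrast between tree-shaped, relational, and feature-based encodings is already visible at this scale.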

2. Why do computational syntax?

Formal theories of syntax are designed to be predictive: In creating syntactic analyses, linguists generalise from the specific examples under study to a large (in fact typically unbounded) set of sentences or sentence structures in the language(s) under consideration. By creating implementations of these analyses which can be manipulated by computers, linguists can test the predictions of their analyses against vast quantities of data. In addition, we can leverage this predictive capacity to handle an open-ended set of inputs in a variety of practical applications.

2.1. Hypothesis testing

As mentioned above, formal grammars encode predictive analyses. However, traditional methods of syntactic analysis limit the range of data that can practically be considered in the development and validation of analyses. By creating machine-readable implementations of grammars, we can address these concerns.

Most syntactic research focuses on a small number of phenomena, investigating their interaction within a language or group of languages, and formalising a set of rules which account for the observed behavior. Even the simplest sentences, however, illustrate a large number of phenomena. For example, the English sentence in (2) involves at least plural marking on the subject, tense marking on the verb, the licensing of determinerless NPs, and major constituent order.

(2) Dogs barked.

Phenomena such as relative clauses, parasitic gaps and non-constituent coordination can only be illustrated in much longer sentences, which will also simultaneously exhibit many other phenomena.


Thus any model of human language (or human linguistic competence) must not only account for the large range of phenomena involved, but also their interaction. And in fact, this is what theories of syntax aim to do. However, it is a huge task, and one that requires the combined effort of many researchers, each looking at different phenomena in different languages. The results of such research are accumulated into models, and syntacticians generally situate new analyses within the assumptions and mechanisms of a particular model. However, research that investigates the overall cohesiveness of syntactic models is rare, and indeed, exceedingly difficult to do without the aid of a computer.

With implemented grammars, in contrast, the computer takes on the role of ensuring that the analyses of separate phenomena are consistent with each other. By using the grammar together with an algorithm for parsing to analyze examples, the grammar engineer can verify that the analyses of core phenomena such as word order, case, agreement, noun phrase structure, verb subcategorisation, long-distance dependencies, etc. interact as intended to produce correct analyses of the example sentences. Furthermore, by building up a test suite of examples illustrating the phenomena addressed, the linguist can check that further additions to the grammar do not break the existing coverage − i.e. lead to some sentences not receiving an analysis whereas previously they did − and fix them if they do (see section 3 below). With current hardware and software, it is not uncommon to run a test suite with hundreds of examples several times a day.
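The regression-testing loop just described can be sketched in miniature. The grammar, lexicon, and test suite below are toy constructs invented for illustration (real grammars and test suites are orders of magnitude larger, and real parsers far more sophisticated); the point is only the workflow of re-parsing a suite of positive and negative examples after every change.

```python
# Minimal regression testing: parse a test suite with a toy CNF grammar
# and report items whose parse status differs from the expected one.
from itertools import product

# Toy context-free rules in Chomsky normal form: (B, C) -> A
RULES = {
    ("NP", "VP"): "S",
    ("Det", "N"): "NP",
    ("V", "NP"): "VP",
}
LEXICON = {"dogs": {"N", "NP"}, "cats": {"N", "NP"},
           "the": {"Det"}, "chased": {"V"}, "barked": {"VP"}}

def parses(sentence):
    """CKY recogniser: True iff the grammar assigns the string category S."""
    words = sentence.lower().split()
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICON.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):
                for b, c in product(chart[i][j], chart[j][k]):
                    if (b, c) in RULES:
                        chart[i][k].add(RULES[(b, c)])
    return "S" in chart[0][n]

# Each item pairs a string with its expected status (True = grammatical)
TEST_SUITE = [("Dogs barked", True),
              ("The dogs chased the cats", True),
              ("Dogs the chased", False)]

def run_suite():
    """Return the items whose actual parse status differs from expected."""
    return [(s, want) for s, want in TEST_SUITE if parses(s) != want]

print(run_suite())  # [] means no regressions
```

After each change to RULES or LEXICON, rerunning run_suite() immediately shows which previously covered (or correctly rejected) items have changed status.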
Implemented grammars can also be used to discover counterexamples to existing analyses which would be very hard to discover by other means, either because they represent variations on the structures involved that are neither obviously predicted nor contradicted by the theory, or because they involve constructions which are pragmatically constrained in such a way that they seem ungrammatical out of context. To discover such examples, researchers use the grammar to parse running text and then examine the sentences from the text that the grammar failed to analyze properly. This methodology involves using the grammar as a sieve to separate the rare, interesting examples from the much larger mass of examples that are already understood (Baldwin et al. 2005). For example, using this methodology, Baldwin et al. found that adjectives are subject to “pied-piping” by their degree specifiers in English free relatives, although the analysis encoded in the grammar they used (the English Resource Grammar; Flickinger 2000) incorrectly predicted that they could not be. An example from the British National Corpus is given in (3):

(3) However pissed off we might get from time to time, though, we’re going to have to accept that Wilko is at Elland Rd. to stay.

In addition, grammars that are used to process naturally occurring text, either for linguistic research or in the context of a practical application, need to be able to handle not only the so-called core linguistic phenomena which have been of most interest to theoretical syntacticians, but also a wide variety of other phenomena traditionally relegated to the periphery. In fact, many so-called peripheral phenomena have quite high text frequency, at least in particular genres: quotatives, parentheticals, complex number names, proper name constructions, and determinerless count nouns in headers, just to name a few (Rohrer and Forst 2006). (Some of these areas are increasingly being addressed in the theoretical literature, see, e.g., van Langendonck 2007 on proper names.) Even if we maintain a distinction between core and periphery, it is clear that the analyses of both types of phenomena must coexist within a single working grammar. The methodology of computational syntax allows us to explore how incorporating peripheral phenomena can constrain the space of possible analyses of phenomena considered to belong to the core (cf. Bender and Flickinger 1999).

2.2. Practical applications

There is a wide range of practical applications in natural language technology which benefit from (or indeed rely on) syntactic analysis of input or output text. Syntactic analysis can be helpful for practical applications for a variety of reasons: (1) It allows access to predicate-argument structure which can only be approximated through other forms of more surface-based linguistic processing, but which can be computed directly through computational syntax systems which represent the realisation of grammatical functions, the linking of syntactic and semantic arguments, and a variety of more subtle phenomena, such as long-distance dependencies and control. (2) Implemented grammars encode constraints on well-formed sentences of a language, which can be crucial in producing well-formed output in systems involving natural language generation. (3) Syntactic analysis is a key step in the chain from surface strings to semantic representations, and therefore is important in systems which require the machine to represent its human interlocutor’s communicative intent. (4) Computational grammars can model the likelihood of different strings, according to the likelihood of the structures they use, both in general and in combination with the particular words of the sentence. In this section, we overview example applications illustrating the different ways in which computational syntax contributes to practical natural language processing.

An example of an application which requires a strong notion of well-formed sentences, but not necessarily any representation of semantics or user intent, is grammar checking (Thurmair 1990; Crysmann et al. 2008; Prost 2009). Grammar checkers detect likely errors and suggest potential (well-formed) corrections. Grammar checkers can be embedded in editors for general use, but there are also more specialised applications, such as controlled-language checkers (Adriaens and Schreurs 1992).
Controlled languages are used in technical manuals which are written in a simplified and stylised manner to facilitate comprehension by non-native speakers and/or machine-assisted human translation into many languages. To write in a controlled language, authors require assistance in ensuring that they are keeping to its constraints.

A second specialised application of grammar checking is computer assisted language learning (CALL) systems which allow learners to interact with an automated tutor as they work with structures in the language they are studying (e.g. Vandeventer 2001). More sophisticated CALL systems are dialog systems which engage the learner and provide opportunities for realistic language use, and therefore also require semantic representations as part of understanding the user’s input and responding appropriately (e.g. Price et al. 1999).

Particularly in the case of general purpose grammar checkers and grammar checkers for CALL, the system has to be designed carefully to account for the fact that full coverage of the grammatical structures of a language is not easily attainable, and to avoid (as much as possible) flagging as ungrammatical sentences which are correct but nonetheless outside the system’s grammar.
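The detect-and-suggest behaviour of a grammar checker can be illustrated at toy scale. Everything below (the lexicon, the assumed "Det N V" sentence shape) is a hypothetical simplification; real checkers use full grammars with error rules or statistical models rather than hard-coded word lists.

```python
# Toy grammar checker: flag English subject-verb agreement errors in
# simple "Det Noun Verb" sentences and suggest a corrected verb form.
# The tables below are invented for illustration only.

NUMBER = {"dog": "sg", "dogs": "pl", "cat": "sg", "cats": "pl"}
VERB_FORMS = {("bark", "sg"): "barks", ("bark", "pl"): "bark"}
LEMMA = {"barks": "bark", "bark": "bark"}

def check(sentence):
    """Return (is_well_formed, suggested_sentence)."""
    det, noun, verb = sentence.lower().split()
    wanted = VERB_FORMS[(LEMMA[verb], NUMBER[noun])]
    if verb == wanted:
        return True, sentence
    return False, " ".join([det, noun, wanted])

print(check("the dogs barks"))  # (False, 'the dogs bark')
```

Even this caricature exhibits the design issue noted above: any sentence outside its tiny lexicon and sentence shape simply crashes, whereas a deployed checker must degrade gracefully on input it does not cover.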


There is a range of applications involving natural language understanding which require syntactic analysis on the way to a machine representation of user intent. These include dialog systems, such as those for CALL mentioned above, but also for telephone access to structured data (such as flight information or account balances), in-car dialog systems which provide access to navigation devices, radio, and other information, and even dialog systems allowing astronauts to navigate and listen to instructions for procedures aboard spacecraft (Rayner et al. 2005).

When a dialog system is built for a particular domain (e.g. flight information or navigation systems), it typically includes a grammar that is tailored to that domain. This grammar specialisation involves both the development of domain-specific parse selection information (section 5), as well as a reduction in the range of grammatical phenomena covered to those required for the interaction. For example, quotatives are typically not required in task-oriented dialog systems.

Beyond dialog systems, natural language understanding is used in semantics-based search and other kinds of text processing. For example, in Information Retrieval, indexing over semantic representations allows for more information to be found because of the more canonicalised representations: words are represented by concepts, not strings, thereby allowing synonyms to match (e.g. buy in the query can match purchase in the passage), passives can match actives, long-distance dependencies are resolved, etc. In addition, the use of semantic representations instead of keywords for search strings allows for more accurate, precise retrievals by paying attention to the roles the arguments play. For example, a query like (4) will only retrieve answer passages where IBM is the agent of the buying (e.g. a list of acquired companies) and not ones where it is the patient, e.g. where people are buying IBM products or stock.

(4) What did IBM buy?

This contrasts with keyword search results which ignore grammatical relations and roles. Thus, using deeper linguistic representations in Information Retrieval provides a level of abstraction that can provide more accurate and comprehensive search results than even sophisticated keyword-based approaches. It is important to note that search engines such as Google currently do not rely on this kind of linguistic knowledge, and essentially return pages containing words in the query, with very little attention paid to the relations between the words. A high-profile search engine which does use such knowledge is Powerset (Kaplan 2009), which uses an LFG grammar and parser (Kaplan et al. 2004) to analyse the queries and documents. The related search area of Question Answering, in which the query is a question rather than a set of query terms, has consistently seen the use of grammars and parsers, especially in the best-performing systems (Harabagiu et al. 2003). There is a wide variety of applications which depend on language models, or stochastic models which can assign probabilities to strings of the language. This information is typically used to select among alternative hypotheses output by some other processing component, such as the acoustic model of an automatic speech recognition (ASR) system, the translation model of a statistical machine translation system, or the output of a handwriting or optical character recognition (OCR) system. The system will prefer hypotheses which are more likely strings of the target language. Most such systems use language models based on counts of short sequences of words (called n-grams) in a
training corpus. However, n-gram models do not represent syntactic structure and therefore cannot capture dependencies that span substrings longer than the length of the n-gram. Since it is rarely practical to train n-gram models for sequences longer than three or four words, they miss many important syntactic dependencies. Therefore, recent work has been investigating techniques for incorporating syntax-based language models into machine translation (MT) and ASR (e.g. Charniak 2001; Xu et al. 2001; Collins et al. 2005). The grammars developed for domain-specific systems can be more practical to use in this way, and indeed in many systems, one and the same grammar is used for language modeling in the speech recognition component and then for analysis of the recognised string (e.g. Bouillon et al. 2006). Finally, machine translation is an application which can benefit from syntactic analysis without needing to understand the full meaning intentions of a human interlocutor or the text being translated. Different MT systems are frequently contrasted with each other with reference to the “MT pyramid” (Vauquois 1968), shown in Figure 58.2. At the corners of the base of the pyramid are the source and target language strings in their surface form. At the apex of the pyramid is an interlingual representation that could mediate between all of the languages in the system, provided that both the interlingual representation and the analysis and generation components mapping to and from it for each language could be defined. (5)

Fig. 58.2: Machine translation pyramid. [Figure: source and target strings at the base, linked directly by transfer; analysis maps the source upward to an interlingua at the apex, and generation maps downward from it to the target.]

However, most researchers currently working in the field consider the interlingual approach to be either impossible (no such system of representations could be defined that would work for all languages) or impractical. Instead, MT systems are described in terms of how much analysis is performed on the source and target sides, with the idea that, even in the absence of an interlingua, the more we abstract away from surface forms towards semantic representations, the closer the input and output will be and the less work will be needed at the stage mapping source to target. While rule-based approaches to MT have explored using greater levels of abstraction, most work in statistical MT has operated along the bottom of the pyramid, learning correspondences between words or strings of words in the source and target languages from training samples of translated text. However, even in statistical MT, current work is exploring using syntactic information on the source side, the target side, or both, as a means of getting more information out of the training data (e.g. Quirk et al. 2005; Riezler and Maxwell 2006; Li et al. 2009). It is an open question where the correct balance lies for machine translation between explicit linguistic knowledge and statistical knowledge acquired from parallel data. Oepen et al. (2007), following some high-profile attempts at MT such as Verbmobil (Wahlster 2000), argue that knowledge-heavy, deep approaches based on linguistic grammars
will ultimately be required for high-quality machine translation (combined with stochastic approaches for disambiguation and finding the most probable translation). In order to translate from Norwegian to English, they use an LFG grammar for analysis of the Norwegian, producing an abstract representation in terms of minimal recursion semantics (Copestake et al. 2005), from which they generate into English using an HPSG grammar. In summary, because syntax is concerned with both strings (well-formedness, likelihood) and structure, implemented grammars can benefit applications of natural language processing in many ways.
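The n-gram language models discussed above can be made concrete with a minimal sketch. The toy corpus and the add-one smoothing used here are illustrative assumptions, not details of any system cited in this section; the point is only that the model prefers strings resembling its training data.

```python
from collections import Counter

def train_bigram_model(sentences):
    """Count unigrams and bigrams over a (toy) training corpus."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def score(sentence, unigrams, bigrams, vocab_size):
    """Add-one smoothed bigram probability of a sentence."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for w1, w2 in zip(tokens, tokens[1:]):
        p *= (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab_size)
    return p

corpus = ["we see the dog", "we see the cat", "the dog barks"]
uni, bi = train_bigram_model(corpus)
V = len(uni)
# An ASR or MT system would prefer the hypothesis with the higher score:
assert score("we see the dog", uni, bi, V) > score("dog the see we", uni, bi, V)
```

A syntax-based language model would replace the bigram counts with probabilities derived from parse structure, capturing dependencies longer than the n-gram window.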

2.3. Summary

This section has motivated computational approaches to syntax, invoking both theoretical considerations and practical applications. The following section addresses regression testing, which is key to both kinds of uses of computational syntax.

3. Regression testing and maintenance

Regression testing refers to testing how a system, in our case a computational grammar, performs over time. It is particularly concerned with discovering unintended negative consequences, i.e. regressions in grammar behavior, when changes are made to the grammar. Regression testing measures parse accuracy, speed, and ambiguity over a corpus. Such testing is extremely important in computational syntax because new phenomena added to a grammar interact with existing ones, often in unexpected ways. The testing ensures that already implemented constructions continue to be parsed correctly, even as the grammar grows. Most large-scale systems automatically run the regression testing on a regular basis and provide user interfaces to examine the results, highlighting any problems that have arisen, and to view system progress (Oepen and Flickinger 1998; Oepen et al. 2002; Chatzichrisafis et al. 2007). Regression testing relies on sets of test suites which illustrate the behavior of the grammar over time. Results of the current grammar are compared against gold standard results (indications of grammaticality and intended structure or semantics), against the analyses of a previous test run, or against both. Regression testing can also be integrated with the construction of grammar-based treebanks. In that case, the grammar developer (or other treebankers) creates a set of gold standard results by examining the parse trees assigned by the grammar to each sentence in a corpus and selecting the one that matches the intended meaning of the sentence in context, or rejecting them all if no suitable analysis is found. With this kind of treebank, when the grammar is changed, regression testing can determine not only whether the current grammar produces the same set of analyses for each item as the previous grammar, but also whether the intended analysis is maintained.
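The core bookkeeping of such a regression run can be sketched in a few lines. The flat string representation of parses and the function name below are our own illustrative assumptions; real systems also track speed and ambiguity, not just accuracy.

```python
def regression_report(gold, previous, current):
    """Compare two grammar versions against gold-standard parses.

    Each argument maps a test sentence to a parse (None = no parse).
    """
    report = {"regressions": [], "progressions": [], "still_failing": []}
    for sent, gold_parse in gold.items():
        ok_before = previous.get(sent) == gold_parse
        ok_now = current.get(sent) == gold_parse
        if ok_before and not ok_now:
            report["regressions"].append(sent)    # newly broken
        elif ok_now and not ok_before:
            report["progressions"].append(sent)   # newly fixed
        elif not ok_now:
            report["still_failing"].append(sent)
    return report

gold = {"the dog barks": "(S (NP the dog) (VP barks))",
        "dogs bark": "(S (NP dogs) (VP bark))"}
previous = {"the dog barks": "(S (NP the dog) (VP barks))", "dogs bark": None}
current = {"the dog barks": None,   # a grammar change broke this item
           "dogs bark": "(S (NP dogs) (VP bark))"}
report = regression_report(gold, previous, current)
assert report["regressions"] == ["the dog barks"]
assert report["progressions"] == ["dogs bark"]
```

A grammar-development interface would surface the `regressions` list prominently, since those items were working before the latest change.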
Given the high degree of ambiguity of natural language, the task of selecting the right tree for each sentence could be daunting indeed. However, this task is made much easier by software systems supporting discriminants-based treebanking (Carter 1997). The Redwoods tool (Oepen et al. 2004) supports this kind of annotation by calculating minimal discriminants between parses, so that annotation involves making binary decisions (e.g. analyses with causative
have vs. possessive have) rather than selecting whole trees. It further records the decisions that the annotator made so that when the grammar is updated and the corpus reparsed, the annotation decisions can be rerun, minimising the additional annotation work that needs to be done. A similar system, Trepil, based on the XLE LFG grammars, is discussed in Rosén et al. (2007). The sentences used in the test suites are typically a combination of constructed and attested examples (de Paiva and King 2008). The attested examples are usually drawn from corpora representing the domain(s) the grammar is being developed for, and ensure that the grammar covers all the necessary constructions, not just linguistically salient ones. They also ensure that the analyses of different linguistic phenomena interact properly and that the system is efficient enough to parse naturally occurring text. The hand-constructed examples are usually simpler than the naturally occurring ones, and targeted at illustrating particular linguistic phenomena that have been analysed in the grammar. For some languages, there exist hand-constructed test suites covering a range of important phenomena. For example, the HP test suite for English includes sentences exemplifying many standard linguistic phenomena, such as subject-verb agreement, question and relative clause formation, and coordination (Flickinger et al. 1987). The TSNLP project (Lehmann et al. 1996) built parallel test suites for English, German, and French, and tagged each example with the phenomena it was meant to illustrate, to support finer-grained evaluation of grammars. Tracking progress of a grammar over time is not straightforward. Gold standard analyses are time-consuming to construct, even for isolated linguistically-based examples. In addition, if the preferred analysis is changed, e.g.
due to improvements in the underlying linguistic analysis, then either the gold standard or the mapping between the system output and the gold standard must be updated. Either task is time-consuming and error-prone. The Trepil tools (Rosén et al. 2007) address this issue by using the previously stored discriminants to select and display the closest current analysis to the previous gold standard. In addition, many gold standards include only more stable, core features, such as predicate-argument structure. These can be faster to build and easier to maintain. Finally, it is important to make running and inspecting the results of the regression test easy for the grammar developer. This includes creating a user interface to highlight changes and to investigate them. It also includes providing a view on the long-term behavior of the system, including the best performance on each test suite. In this way, the development of the system is captured and any problems can be rapidly addressed. Another aspect of developing large-scale grammars over time is designing for maintainability. Broad-coverage grammars are large, complex objects. For example, the July 2009 release of the English Resource Grammar (Flickinger 2000) has 863 distinct types of lexical entry, 208 phrase structure rules, and 67 lexical rules. The definitions of these are collectively supported by 6,350 other type definitions. In the face of such complexity, in order for grammar developers to be able to make additions and refinements to the grammar, best practices for organising the information involved are required. These range from extensive documentation in the form of comments associated with each definition in the grammar to striving to capture linguistic generalisations rather than repeat information in different parts of the grammar. Devices for capturing generalisations include the types in the type hierarchy of HPSG and macros in the definition of XLE LFG grammars (Crouch et al.
2009) and most other grammar development platforms. These
best practices also facilitate collaborative grammar development, where multiple grammar engineers work on the same grammar. Collaborative grammar development, in turn, facilitates scaling-up of grammars to broad-coverage.
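The discriminant-based treebanking described earlier in this section can be sketched by modelling each candidate parse as a set of atomic analysis decisions. This flat representation is a deliberate simplification of what tools like Redwoods actually compute, but it shows why answering a few binary questions can stand in for inspecting whole trees.

```python
def discriminants(parses):
    """Properties holding of some but not all candidate parses.

    Each parse is modelled as a frozenset of atomic analysis decisions;
    accepting or rejecting one discriminant can eliminate many parses
    at once, so whole trees never need to be compared by hand.
    """
    all_props = set().union(*parses)
    shared = frozenset.intersection(*parses)
    return all_props - shared

def filter_parses(parses, prop, keep):
    """Keep parses that contain (keep=True) or lack the chosen property."""
    return [p for p in parses if (prop in p) == keep]

parses = [frozenset({"have:causative", "PP:attach-verb"}),
          frozenset({"have:causative", "PP:attach-noun"}),
          frozenset({"have:possessive", "PP:attach-verb"})]
ds = discriminants(parses)
assert "have:causative" in ds   # this decision distinguishes parses
remaining = filter_parses(parses, "have:possessive", keep=True)
assert len(remaining) == 1      # one decision resolved the ambiguity
```

Recording the annotator's accept/reject decisions, rather than the chosen tree, is what allows the annotations to be replayed after the grammar changes.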

4. Robustness

For many applications using computational grammars, it is important that the grammars be robust, producing output for any input. This is true even if the input is ungrammatical (e.g. mismatched subject-verb agreement, misspelled words, run-on sentences) or if it is out of coverage (e.g. parasitic gaps may not have been implemented). However, it is good to know when the output of the parser is well-formed according to the grammar and when it is out of coverage. This can be achieved by including special features for out-of-coverage sentences or by giving confidence scores to the output, where a higher score generally indicates a more in-coverage sentence. Robustness is especially important for language technology applications dealing with text such as that found on web pages or in emails. In these domains the text can be particularly noisy and ungrammatical. This section explores three topics related to robust processing with computational grammars: what to do about unknown words; cases where specific kinds of ungrammatical input are handled by special rules; and fall-back strategies used to gather partial information in the absence of a parse spanning the whole sentence.

4.1. Lexical acquisition

Techniques for rapid lexical acquisition are important for extending grammar coverage. This is the case both in initial grammar development and when applying an existing grammar to a new corpus. For many grammars, lexical items fall into two main classes: those whose part of speech is sufficient for relatively accurate parsing (e.g. many common nouns) and those which require more specific information, such as subcategorisation frames (e.g. verbs, nouns that take complements). From a purely syntactic perspective, knowing the part of speech of a word can often be enough to provide a reasonable parse, although additional information is typically needed for semantically-rich applications. The part of speech for words that are unknown to the parser may be provided by a part-of-speech tagger, which uses lists of known words and information such as the parts of speech of the surrounding words and any affixes on unknown words in order to infer a likely part-of-speech tag. In addition, when multiple parts of speech may be possible, some parsers can handle multiple inputs and allow the syntactic grammar to determine the correct choice. The situation is more complicated when the syntactic behavior of a word depends on more than its part of speech. For example, which subcategorisation frames are permitted varies from verb to verb. When unknown verbs are encountered, the subcategorisation frame can be guessed (e.g. intransitive or transitive as possible defaults), but these guesses will be insufficient in cases where more elaborate subcategorisation frames are called for. In addition, even word classes which seem at first glance to have fairly
consistent syntactic behavior often turn out to have more nuances on closer examination. For example, in English the syntactic behavior of common nouns is governed in part by whether they are count or mass nouns. Many systems can exploit this distinction when it is entered into the lexicon, but can revert to treating all common nouns in the same way. Similarly, knowing that a noun is a proper noun may be sufficient to provide basic syntactic coverage. However, having additional type information, i.e. whether the noun refers to a person, location, or organisation, can further control its syntactic distribution. For example, locations can occur in comma-separated constructions (e.g. Detroit, Michigan; Paris, France). Much current lexical acquisition work addresses the problem of discovering more fine-grained information through supertagging (Bangalore and Joshi 1999). In HPSG supertagging work (e.g. Blunsom and Baldwin 2006; Matsuzaki et al. 2007), the supertags are lexical types, which encode part-of-speech and subcategorisation information, as well as other properties of lexical classes (such as countability or gender for nouns, or auxiliary selection for verbs in languages such as French or Italian). The training data for such supertaggers are treebanks, collections of text parsed by the grammar in question. Lexical acquisition can be done off-line as part of the grammar development process (see, e.g. O’Donovan et al. 2005) or on the fly during parsing, to generate lexical entries as the parser needs them. These scenarios place different requirements on the lexical acquisition system. When lexical acquisition is done off-line to create lexical entries for the grammar, there is a higher requirement for precision. When lexical acquisition is used as a robustness technique at run time, the precision requirement may vary depending on the task the grammar is being used for.
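The affix-based inference of part-of-speech tags for unknown words mentioned above can be sketched as follows. The suffix table and the noun default are illustrative assumptions; real taggers learn such cues statistically rather than from a hand-written list.

```python
# A toy guesser for unknown English words, using the kind of affix cues
# a part-of-speech tagger exploits; the suffix table is illustrative.
SUFFIX_TAGS = [
    ("ness", "noun"), ("tion", "noun"), ("ize", "verb"),
    ("ous", "adjective"), ("able", "adjective"), ("ly", "adverb"),
]

def guess_pos(word, lexicon):
    """Look the word up; fall back on suffix heuristics, then a default."""
    if word in lexicon:
        return lexicon[word]
    for suffix, tag in SUFFIX_TAGS:
        if word.endswith(suffix):
            return tag
    return "noun"  # common nouns are a safe default for unknown words

lexicon = {"the": "determiner", "dog": "noun"}
assert guess_pos("dog", lexicon) == "noun"            # known word
assert guess_pos("grammaticalize", lexicon) == "verb" # suffix cue
assert guess_pos("blarkly", lexicon) == "adverb"      # nonce word, -ly
```

Note that this only yields a coarse category; as the text goes on to discuss, subcategorisation frames and count/mass distinctions require richer acquisition techniques such as supertagging.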
Run-time lexical acquisition also faces the problem of determining when a particular use of a word is unknown to the grammar: If a grammar with, for example, only a transitive entry for the verb adjust encounters an intransitive use of it as in (6), it can be difficult to pinpoint adjust as the source of parse failure. (6)

Despite losing their parents, they adjusted well.

Because of these issues, lexical acquisition is non-trivial. Nonetheless, it is an important aspect of computational syntax.

4.2. Accommodation of ungrammatical and extragrammatical input

Most applications require grammars to accommodate ungrammatical and extragrammatical input. In certain applications, such as grammar checkers for computer assisted language learning and applications involving email or other informal texts, a relatively large percentage of the text will contain ungrammatical sentences. Even in applications that use well-edited text such as newspaper texts, manuals, and financial reports, there will always be some sentences that contain constructions that have not yet been integrated into the grammar. This may be because the construction has not been seen before or because the construction was not implemented for other reasons. As an example of the latter, the English topic movement construction shown in (7) is quite rare, but extant, in many corpora and its analysis can be inefficient to compute due to the potentially long-
distance dependency. As such, grammars may choose not to implement topicalisation and instead intentionally fail on such sentences. (7)

Bagels, I like.

As noted above, it is often important to not only accommodate extragrammatical or ungrammatical input, but also recognise when this has happened. This is particularly important for computer assisted language learning where the system needs to signal ungrammaticality of a sentence and, if possible, pinpoint the source of the error (e.g. mismatched subject-verb agreement, missing determiner with a singular count noun, incorrect placement of negation). Even for applications such as search and retrieval, where redundancy of information can overcome coverage issues because the same information may be present in other sentences with other phrasings, the indication of ungrammaticality or extragrammaticality can be useful for ranking returned results and otherwise determining the likely correctness of the information in the parse. When a system is developed which is likely to encounter ungrammatical examples involving known error types, rules can be written to parse these errors, providing an analysis for them. For example, the grammar may be written to first try to parse the input as grammatical English and then to try to find a parse where the subject does not agree with the verb (Frank et al. 2001). If a parse is found with such a mismatch, a feature can be added to the syntactic analysis or the metadata for the sentence indicating that the parse is ungrammatical with respect to subject-verb agreement. Nonetheless, not all parse failures can be handled this way. When the grammar cannot analyse a sentence, even with its robustness rules, we turn to fall-back techniques.
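The two-pass strategy just described (parse strictly first, then retry with a relaxed agreement rule that records an error feature) can be sketched over toy "the NOUN VERB" inputs. The lexicon and the feature names are hypothetical; a real system would attach such features to the syntactic analysis itself.

```python
NOUN_NUM = {"dog": "sg", "dogs": "pl"}
VERB_NUM = {"barks": "sg", "bark": "pl"}

def parse_with_robustness(sentence):
    """Strict pass enforcing subject-verb agreement, then a relaxed
    'mal-rule' pass that accepts the mismatch but flags the error."""
    _, noun, verb = sentence.split()
    if NOUN_NUM[noun] == VERB_NUM[verb]:     # strict pass succeeds
        return {"parsed": True, "ungrammatical": False}
    # relaxed pass: accept the string but record the known error type
    return {"parsed": True, "ungrammatical": True,
            "error": "subject-verb agreement"}

assert parse_with_robustness("the dog barks")["ungrammatical"] is False
assert parse_with_robustness("the dogs barks")["error"] == "subject-verb agreement"
```

The recorded error feature is exactly what a CALL application needs in order to tell the learner not just that the sentence is wrong, but why.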

4.3. Fall-back techniques

The general idea with fall-back techniques is to construct as good a parse as possible for noisy or otherwise unhandled input. This can include providing parses for chunks of the sentence (e.g. verb phrases and noun phrases) and then putting them together as an ordered set. Such parses are sometimes referred to as fragment parses (Bouma et al. 2001; Riezler et al. 2002). This approach can work particularly well if the error is due to unexpected final punctuation, as in (8), so that the main sentence is a single well-formed fragment with an additional record of the unsuccessfully parsed final punctuation. (8)

You left early?!!!

Other approaches to extragrammatical input are to selectively loosen constraints on the grammar, e.g. allowing subcategorisation violations. These two ideas are combined in the system presented by Zhang and Kordoni (2008), which builds subconstituents based on a precision grammar and then combines them into larger constituents according to a more relaxed set of rules. Clearly, any approach to robust parsing that relies on relaxing constraints is highly dependent on parse selection and ambiguity management (sections 5−6). Regardless of the chosen fall-back technique(s) for a given parser, the application using the syntactic output can use the fact that the parse is degraded, perhaps with
information as to how, as input. This can be particularly useful in search applications where even having well-formed noun phrase chunks can be an improvement over keyword search. It is also useful with texts that contain many long sentences with sentence-level coordination where one of the conjuncts may be parseable while the other is not, thereby allowing information from the parseable conjunct to be used. In either case, the system benefits from knowing that the smaller pieces of structure (a noun phrase chunk or a conjunct within a complex sentence) are well-formed, even though the entire string was not parseable.
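A fragment-parse fall-back of the kind described in this section can be sketched as follows. The full parser and the chunker are trivial stand-ins for real components, and the "degraded" flag mirrors the idea that downstream applications should know when the analysis is partial.

```python
def full_parse(tokens):
    """Stand-in for a precision parser: only handles 'the N V' here."""
    if len(tokens) == 3 and tokens[0] == "the":
        return ("S", tokens)
    return None

def chunk(tokens):
    """Stand-in chunker: group determiner + noun into NP chunks."""
    chunks, i = [], 0
    while i < len(tokens):
        if tokens[i] == "the" and i + 1 < len(tokens):
            chunks.append(("NP", tokens[i:i + 2])); i += 2
        else:
            chunks.append(("FRAG", [tokens[i]])); i += 1
    return chunks

def parse_with_fallback(sentence):
    """Full parse when possible, else an ordered fragment parse."""
    tokens = sentence.replace("?!!!", "").split()
    tree = full_parse(tokens)
    if tree is not None:
        return {"degraded": False, "analysis": tree}
    return {"degraded": True, "analysis": chunk(tokens)}

result = parse_with_fallback("the dog chased the cat quickly")
assert result["degraded"] is True
assert ("NP", ["the", "dog"]) in result["analysis"]
```

Even this crude fallback preserves the well-formed NP chunks, which, as the text notes, is already an improvement over keyword search.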

4.4. Summary

This section has considered the problem of robustness, which arises when computational grammars are deployed in practical systems that encounter strings outside the grammars’ purview and still require a parse. We first briefly discussed approaches to extending lexical resources and then turned to accommodation of specific ungrammatical or extragrammatical syntactic patterns. Finally, we considered more general fall-back techniques.

5. Parse selection and ambiguity management

Ambiguity is a classic issue in linguistic theory. Structures can be ambiguous at many linguistic levels (morphology, syntax, semantics, etc.). Such ambiguity becomes strikingly apparent in computational syntax because ambiguities from word segmentation, to morphology, to syntactic attachment, to grammatical functions multiply out and can produce thousands of results for average length sentences in natural language text. As an example, Table 58.1 presents the degree of ambiguity found in sentences from two sections of the Penn Treebank Wall Street Journal corpus (Marcus et al. 1993) as parsed by an LFG grammar (Riezler et al. 2002), binned by sentence length. Shorter sentences, i.e. those 1−10 and 11−20 words long, are much less ambiguous than the longer sentences, averaging 3.7 and 31 parses per sentence respectively. As can be seen in the table, there is a consistent trend for greater ambiguity as sentence length increases. Although many ambiguities are local (e.g. whether red is an adjective or a noun in red boxes), others interact with constructions further apart in the sentence (e.g. choices about verbs and their argument structures); both types multiply out, resulting in high ambiguity rates. The longest sentences (61−70 words) show a drop in ambiguity relative to those before them largely because there were only two sentences of this length in the corpus and these contained several semi-colons that constrained the ambiguity. Abney (1996) provides an illuminating discussion of why natural language is surprisingly ambiguous. He shows that even simple sentences such as John saw Mary can have surprising but perfectly legal analyses according to a grammar (e.g. a noun phrase reading in which Mary is associated with a kind of saw called a John saw). In addition, to the extent that a grammar is underconstrained, it will find more analyses than are actually warranted.
One of the benefits of computational approaches to syntax is being able to explore the range of analyses assigned by a grammar to a set of sentences, and discover where the hypotheses implemented in the grammar are in fact flawed.
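How quickly such ambiguity can grow is illustrated by a standard combinatorial fact, added here for illustration rather than drawn from the chapter: the number of distinct binary bracketings of a string of n + 1 words is the nth Catalan number.

```python
from math import comb

def catalan(n):
    """C(n) = comb(2n, n) // (n + 1): the number of distinct binary
    bracketings of a string of n + 1 words."""
    return comb(2 * n, n) // (n + 1)

# Structural possibilities explode with sentence length; a real grammar
# licenses far fewer analyses, but the growth trend is the same:
assert catalan(4) == 14            # 5-word string
assert catalan(9) == 4862          # 10-word string
assert catalan(19) == 1767263190   # 20-word string
```

This upper bound helps explain why the ambiguity counts in Table 58.1 climb so steeply with sentence length even though the grammar rules out most bracketings.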

(9)

Tab. 58.1: Sample ambiguity statistics for newspaper text

length range    average # of words    avg. ambiguity (# parses)    # of sentences in range
1–10                   7.0                       3.7                        777
11–20                 15.6                        31                      1,811
21–30                 25.1                       258                      1,352
31–40                 34.3                    18,268                        534
41–50                 43.9                   144,717                        114
51–60                 53.3                   568,401                         15
61–70                 68.5                    15,042                          2
all                   19.8                     6,718                      4,605

Precision grammars, developed on the basis of linguistic theories, reduce the ambiguity problem somewhat, compared to machine-learned grammars (section 6). By ruling out ungrammatical strings, they also rule out unwarranted analyses of grammatical inputs. For example, a precision grammar will not posit an analysis which pairs a subject NP with a VP phrase if the two constituents do not agree (unless it is running with robustness techniques; see section 4). Even without unwarranted analyses, a given sentence often has many syntactically well-formed structures. For linguistic investigations, grammar writers may be interested in all well-formed analyses. Practical applications commonly require only the most plausible analysis. Many implementations of syntactic theories provide a way to rank the output of the parser (e.g. Bouma et al. 2001; Toutanova et al. 2002; Riezler et al. 2002; King et al. 2004). This allows the system to return the single most probable (or alternatively the n-best) parses. The ranking methods are generally stochastic. For example, they may rank certain rules more highly than others, or put more weight on certain features, feature combinations, lexical items, subcategorisation frames, etc. In addition, ambiguity management can be performed once per parse or at several levels in the system. For example, in unification-based systems, parsing often takes place in two stages, first constructing trees according to a context-free grammar (CFG) abstracted from the grammar rules and then incorporating the remaining constraints on the rules through unification. In this two-step process, ambiguity management can be done at both steps, ranking CFG trees first and then also ranking the output of the unification step. Another example would be to use a part-of-speech tagger, which assigns the POS tags based on their likelihood, to constrain the syntactic parser, whose output would then be ranked. 
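The stochastic ranking just described can be sketched as a weighted feature model: each parse is scored by a weighted sum of its feature counts, and the highest-scoring parses are returned. The feature names and weights below are hypothetical; in practice the weights would be estimated from a treebank.

```python
def rank_parses(parses, weights):
    """Return the parses sorted by weighted feature score, best first
    (the head of the list is the 1-best parse; a prefix gives n-best)."""
    def score(features):
        return sum(weights.get(f, 0.0) * count
                   for f, count in features.items())
    return sorted(parses, key=lambda p: score(p["features"]), reverse=True)

# Hypothetical weights, e.g. estimated from disambiguated treebank data:
weights = {"rule:S->NP VP": 1.2, "pp:attach-verb": 0.4, "pp:attach-noun": -0.1}
parses = [
    {"id": "noun-attach", "features": {"rule:S->NP VP": 1, "pp:attach-noun": 1}},
    {"id": "verb-attach", "features": {"rule:S->NP VP": 1, "pp:attach-verb": 1}},
]
best = rank_parses(parses, weights)[0]
assert best["id"] == "verb-attach"
```

The same scoring scheme can be applied at several levels, e.g. once over CFG trees and again after the unification step, as the text describes.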
Ambiguity management choices have a significant influence on modularity. If an earlier level of representation prunes too many possibilities, then later modules may no longer have the input that they need to produce a well-formed parse. For example, state-of-the-art part-of-speech taggers for English are 97–98 % accurate when operating on newspaper text. This means that, on average, roughly every other sentence will have a tagging error because the sentences are on average 20 words long in such texts. If this tagging were used to strictly constrain the input to the syntactic parser, then every other sentence would be analysed incorrectly or receive no analysis at all. One solution is to carry all the ambiguity from each level of processing forward and then select among the full set of parses at the end. However, this can be inefficient, and lead to sentences not parsing at all, e.g. because the machine runs out of memory before finding an analysis
(though see Maxwell and Kaplan 1991 on efficiently passing alternatives across module boundaries). One compromise is to preserve only some ambiguity, pruning the most unlikely analyses at each step (e.g. Curran et al. 2006). Another is to allow the system to backtrack and get more possibilities from an earlier step in case of parse failure. In order to build the stochastic ranking model, it is necessary to have training data. The data, referred to as a treebank, could consist entirely of gold standard analyses for a sample of sentences, or in addition contain a number of incorrect derivations for each sentence, depending on the type of model. As an example of the kind of information encoded in the data, a treebank will resolve PP attachment ambiguities and ambiguities in part of speech. Some treebanks also encode grammatical function information either through the configuration (e.g. objects are right sister to the verb in English) or through labels on the tree nodes (e.g. NP-SBJ for subject noun phrases). Dependency banks can provide additional information, which is especially needed by formalisms that represent grammatical functions and other dependencies as primitives. Likewise, treebanks constructed by annotating the output of hand-built grammars (Oepen et al. 2004; Rosén et al. 2007) provide an extremely rich source of data. The amount of data needed for accurate ranking models is often in the tens of thousands of sentences, although the amount of data required depends on the degree of ambiguity output by the parser and the richness of the training data, and systems can be trained with just thousands of sentences and still work relatively well (Cahill et al. 2008). Creating training data for disambiguation of parsers using hand-built grammars is typically less costly than creating data for data-driven parsers (cf. section 6.3).
Markup that is lightweight and relatively easy to create can be used to train systems that model much more linguistic detail, and the training data need not include extremely long, complicated sentences. For example, contrast the sample training data used for ranking output of a hand-coded grammar (Riezler et al. 2002) shown in (10) with that in Figure 58.4 (section 6.3) used for training a parser developed using methods based primarily on machine learning.

(10) Last month, [NP Judge Curry] set/VBD [NP the interest rate on the refund] at 9 %.

Similarly, the construction of grammar-derived treebanks (Oepen et al. 2004; Rosén et al. 2007) uses the grammar itself to do most of the work of constructing the trees, and only relies on the human annotator to choose among grammar-licensed analyses. In addition to using stochastic methods based on machine learning, grammars can incorporate ranking constraints that the grammar writer defines as (dis)preferences. In LFG, this is done with a system inspired by Optimality Theory (Frank et al. 2001). Such mechanisms generally work by the grammar first producing all the grammatical analyses. Each analysis may be associated with specific marks. The analyses with the fewest dispreference marks (or most preference marks) are then selected. The advantage of these marks is that an unlikely analysis will surface if it is the only one available. For example, in English, by phrases can be locative or temporal adjuncts or oblique agents; the grammar writer could prefer the oblique agent analysis when the verb is passive, instead of allowing both analyses. Marks can also be used together with robustness rules. For example, a mark might be used to disprefer mismatched subject-verb agreement. If an analysis is found with correct agreement, then any analyses with incorrect agreement
are suppressed. However, if the only analysis is one with mismatched agreement, then that analysis is allowed. This allows for robustness in the face of less than ideal input. This section has covered ambiguity management, a focus of research in computational linguistics, adapting techniques from theoretical linguistics and computer science. The current state-of-the-art methods for parse selection treat sentences in isolation. Context is taken into account only to the level of using training data (in stochastic systems) from the targeted domain, if such training data is available. However, the choice of most plausible analysis of a sentence is also partially dependent on the local discourse context around the sentence. The best means of taking this context into account in parse selection is an area for further research.
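The (dis)preference-mark mechanism described in this section can be sketched as follows; the flat analysis representation and mark names are illustrative assumptions.

```python
def select_by_marks(analyses):
    """Keep the analyses with the fewest dispreference marks; a marked
    analysis still surfaces when it is the only one available."""
    fewest = min(len(a["marks"]) for a in analyses)
    return [a for a in analyses if len(a["marks"]) == fewest]

# Both an agreeing and a mismatched analysis: the marked one is suppressed.
analyses = [{"id": "agree", "marks": []},
            {"id": "mismatch", "marks": ["bad-agreement"]}]
assert [a["id"] for a in select_by_marks(analyses)] == ["agree"]

# Only a mismatched analysis: it surfaces despite the mark (robustness).
analyses = [{"id": "mismatch", "marks": ["bad-agreement"]}]
assert [a["id"] for a in select_by_marks(analyses)] == ["mismatch"]
```

Because selection is relative rather than absolute, the mechanism combines disambiguation with robustness: dispreferred analyses are filtered only when something better is available.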

6. Parsing

Parsing is the process of taking as input a sentence and a grammar, and assigning a syntactic structure to the sentence according to the grammar. In principle the input could be a larger unit, such as a paragraph or document, but most parsers work sentence by sentence. The output representation depends on the grammar being used, but could be a phrase structure tree, a dependency graph (Nivre 2006), or a logical form, for example. Increasingly, syntactic formalisms are informing work on parser technologies, allowing for linguistically motivated, efficient, broad-coverage systems (Baldwin et al. 2007). This section briefly overviews the parsing process, starting with pre-processing (section 6.1), and continuing through parse construction (section 6.2) and selection (section 6.3), before ending with a note on evaluation (section 6.4).

6.1. Pre-processing for parsing

In order to parse whole documents such as web pages, various pre-processing steps must be performed. First, the document must be split into sentences. Even this apparently simple task contains an ambiguity problem, since end-of-sentence markers such as periods have alternative uses in most languages, for example marking abbreviations (Kiss and Strunk 2006). Second, each sentence must be tokenised: separating off punctuation marks from the words they are attached to, breaking up contractions into separate tokens (e.g. don’t into do n’t), and perhaps breaking hyphenated words into separate tokens (e.g. easily-amused audience into easily amused audience), all according to the expectations of the particular grammar or parsing system. Since these steps are not directly related to the syntactic analysis, they will not be covered here, but it is worth noting that these pre-processing steps are often non-trivial and are important for providing accurate input to the parser. The next level of pre-processing for a parser is part-of-speech (POS) tagging. The level of granularity used by the tagger depends on the tag set, and varies significantly across parsers. A standard English tag set is that from the Penn Treebank (Marcus et al. 1993), which consists of around 50 labels representing basic grammatical categories such as verb, noun, pronoun, adjective, adverb, conjunction, preposition, and determiner. Some additional grammatical information is represented, which explains the relatively

2018

IX. Beyond Syntax

large number of labels, such as tense on the verb and number on the noun. One option is to simply use a POS dictionary to assign all possible tags to each word, and then let the parser disambiguate the tags as part of the parsing process. Another approach is to use a deterministic POS tagger to assign the tags, which leads to greater parsing efficiency but risks introducing tagging errors into the parsing process. The current standard approach to POS tagging is a statistical one, with methods based on machine learning being used to make the tagging decisions. For each word in a sentence, information from the context − typically two words on either side of the target word, and properties of the target word itself, such as prefix and suffix strings − is encoded as features, and a training phase using gold-standard tagged sentences is used to assign a weight to each feature (Ratnaparkhi 1996). Informally, the weight of a feature can be thought of as measuring how useful that feature is at distinguishing between good and bad tags for a particular aspect of the context. For an example of how such features can help in the tagging process, consider googling as an example of a word previously unknown to the tagger; a useful feature here would encode the fact that the final three characters are ing. The number of possible tag sequences grows exponentially with the length of the sentence, but an efficient algorithm exists for finding the highest probability tag sequence for a new sentence − the Viterbi algorithm applied to sequences (see Chapter 10 of Manning and Schütze 1999 for a textbook treatment of this algorithm) − which can then be passed to the parsing phase. POS tagging for newspaper text, using the Penn Treebank tag set, achieves over 97 % accuracy on unseen text from the Wall Street Journal. Lying somewhere between ordinary POS tagging and full parsing, the process of supertagging has been used for practical parsing (Bangalore and Joshi 1999).
Supertagging can be applied to lexicalised grammar formalisms, in which elementary syntactic structures are assigned to words in a sentence. For example, in Lexicalised Tree Adjoining Grammar (LTAG), an elementary tree is assigned to each word, and parsing combines these trees using the operations of LTAG (substitution and adjunction). The key point is that elementary trees contain a significant amount of syntactic information; for example, in a sentence with a transitive verb, the elementary tree assigned to the verb would be the tree for the whole sentence, but without the noun phrase subtrees corresponding to the subject and object, which are inserted as part of the parsing process. Supertagging is potentially useful from a practical point of view because it can assign elementary trees to words in a sentence, and tagging can be performed much faster than full parsing. Once supertags have been assigned, the parser has much less work to do, and so the parsing phase can be carried out efficiently also. One formalism for which supertagging has been particularly successful is Combinatory Categorial Grammar (CCG) (Steedman 2000). Here the elementary syntactic structures are CCG lexical categories which encode subcategorisation information. Supertagging for CCG can lead to a highly efficient parser (Clark and Curran 2007b).
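Returning to statistical tagging, the Viterbi decoding step mentioned above can be sketched for a toy HMM-style tagger. All probabilities below are invented for illustration; a real tagger estimates feature weights from treebank data (Ratnaparkhi 1996).

```python
import math

# Toy HMM POS tagger over three tags; start, transition and emission
# probabilities are invented, not estimated from any corpus.
tags = ["DT", "NN", "VBD"]
start = {"DT": 0.8, "NN": 0.1, "VBD": 0.1}
trans = {
    "DT": {"DT": 0.05, "NN": 0.9, "VBD": 0.05},
    "NN": {"DT": 0.1, "NN": 0.3, "VBD": 0.6},
    "VBD": {"DT": 0.6, "NN": 0.3, "VBD": 0.1},
}
emit = {
    "DT": {"the": 0.9, "set": 0.01, "rate": 0.01},
    "NN": {"the": 0.01, "set": 0.2, "rate": 0.5},
    "VBD": {"the": 0.01, "set": 0.7, "rate": 0.01},
}

def viterbi(words):
    """Highest-probability tag sequence, computed left to right with
    back-pointers; runs in O(len(words) * |tags|^2), not exponentially."""
    # delta[t] = best log-probability of any tag sequence ending in tag t
    delta = {t: math.log(start[t]) + math.log(emit[t].get(words[0], 1e-6))
             for t in tags}
    backptrs = []
    for w in words[1:]:
        new_delta, bp = {}, {}
        for t in tags:
            prev = max(tags, key=lambda p: delta[p] + math.log(trans[p][t]))
            new_delta[t] = (delta[prev] + math.log(trans[prev][t])
                            + math.log(emit[t].get(w, 1e-6)))
            bp[t] = prev
        backptrs.append(bp)
        delta = new_delta
    # Follow the back-pointers from the best final tag.
    best = max(tags, key=lambda t: delta[t])
    seq = [best]
    for bp in reversed(backptrs):
        seq.append(bp[seq[-1]])
    return list(reversed(seq))

print(viterbi(["the", "rate", "set"]))  # ['DT', 'NN', 'VBD']
```

The unseen-word fallback probability (1e-6) stands in for the feature-based treatment of unknown words such as googling described above.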

6.2. Core parsing

The parsing phase itself uses the rules of the grammar to build up an analysis spanning the whole sentence. Roughly speaking, parsing algorithms are typically either bottom-up or top-down. In a bottom-up approach, the parser starts out with the words and POS tags in the sentence (or elementary syntactic structures in the case of a lexicalised grammar such as LTAG or CCG) and combines the non-terminal nodes in the partially built parse, using the rules of the grammar, until one or more analyses spanning the whole sentence are found. In a top-down approach, the parser works in the opposite direction: it begins with the root node of the grammar, for example S in a typical context-free grammar (CFG), and applies the rules of the grammar until all the words in the sentence have been covered. Here we will use bottom-up chart parsing as an example, since this is perhaps the most common technique currently employed by practical parsers. We will use context-free grammar parsing for the examples, but note that bottom-up chart-parsing techniques can be applied in a similar way to grammar formalisms such as LFG and HPSG. For an introduction to basic parsing techniques, see Jurafsky and Martin (2008). A chart is a data structure for storing partial analyses, i.e. non-terminal nodes in the parse covering a part of the sentence. It can also be thought of as an array, chart[i,j], where each chart cell, indexed by i and j, contains all the non-terminal nodes which span the sentence beginning at position i with span j, where the position is numbered from left to right and the span is the length of the constituent. A simple bottom-up chart parser begins by filling in all the cells with span 1 (i.e. those covering just a single word), and then cells with span 2, and so on, until the corner cell is reached which spans the whole sentence. The advantage in using this order is that, at each point in the parsing process, any constituents which could be combined to give a new constituent of span j must be available in the chart because the combining constituents must have spans less than j.
This is the idea behind the Cocke-Kasami-Younger (CKY) algorithm (Kasami 1965; Younger 1967; Cocke and Schwartz 1970). This algorithm requires a grammar in Chomsky Normal Form, in which there are at most two non-terminal nodes on the right-hand side of any context-free grammar rule, but can be applied more generally in practice since any CFG can be binarised to produce such a grammar. Another popular chart-based parsing algorithm is the Earley algorithm; see Chapter 10 of Jurafsky and Martin (2008) for a description. The number of possible analyses for a sentence may grow exponentially in sentence length, for a typical grammar, because of the syntactic ambiguity inherent in natural language (section 5). Thus with a broad-coverage grammar it is impractical to represent all derivations in a chart by enumeration. A solution to this problem is to pack a chart, so that an exponential number of derivations is represented efficiently. The idea is that, if there is more than one way to build a constituent with the same non-terminal and the same span, then only one of these options needs to be considered for further parsing (or, more accurately, only one canonical representative needs to be considered). All options can be recorded in the chart, and back-pointers can be kept to the respective daughters so that all derivations can be recovered (see Moore and Alshawi 1992 for an example of an early exposition of this idea). Figure 58.3 demonstrates this idea with a simple example using a CFG. Note that there is a canonical representative of the two ways of analysing the verb phrase in cell [2,6], which is used to combine with the subject to form only one S node in cell [1,7]. There has been work on chart packing for constraint-based grammars, such as HPSG. Here the problem is complicated by the fact that equivalence cannot be simply based on the non-terminal label and category span, but also has to take into account the fact that



Fig. 58.3: Example packed chart, with span length along the vertical axis

feature structures in these grammars are related via a hierarchical subsumption relation (Oepen and Carroll 2000). Packing for LFG is dealt with in Maxwell and Kaplan (1993). A packed chart can be used to efficiently decide if a sentence is grammatical, and to efficiently represent all possible derivations. It is still not possible to efficiently enumerate all derivations, since there are typically so many of them, but it is possible to find the highest scoring derivation according to some scoring function, which is described in the next section. (Again, additional efficiency concerns arise with unification, e.g. for HPSG and LFG implementations. For techniques for efficient unification, see papers and references in Maxwell and Kaplan 1991 and Oepen et al. 2004.)
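The bottom-up chart-filling order of section 6.2, with equivalent analyses packed into a single cell entry, can be sketched as a minimal CKY recogniser. The CNF grammar and the sentence below are invented for illustration only.

```python
from collections import defaultdict

# Toy CNF grammar: at most two non-terminals on the right-hand side.
unary = {       # preterminal rules: POS tag -> words it can cover
    "DT": {"the"}, "NN": {"dog", "cat"}, "VBD": {"saw"},
}
binary = {      # binary rules, keyed by the right-hand side (B, C)
    ("DT", "NN"): {"NP"},
    ("VBD", "NP"): {"VP"},
    ("NP", "VP"): {"S"},
}

def cky(words):
    """Fill chart[i, j] = set of non-terminals spanning words[i:i+j].
    Storing each label at most once per cell is the 'packing' of
    equivalent analyses; back-pointers (omitted here) would be needed
    to recover the actual derivations."""
    n = len(words)
    chart = defaultdict(set)
    for i, w in enumerate(words):              # span 1: the words themselves
        for tag, vocab in unary.items():
            if w in vocab:
                chart[i, 1].add(tag)
    for j in range(2, n + 1):                  # spans 2 .. n, in order
        for i in range(n - j + 1):
            for k in range(1, j):              # split into [i, k] + [i+k, j-k]
                for b in chart[i, k]:
                    for c in chart[i + k, j - k]:
                        chart[i, j].update(binary.get((b, c), set()))
    return chart

chart = cky("the dog saw the cat".split())
print(chart[0, 5])  # {'S'}: the corner cell spans the whole sentence
```

Because smaller spans are always filled first, every constituent needed to build a span-j constituent is guaranteed to be in the chart already.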

6.3. Parse selection

As explained in section 5, syntactic ambiguity is a significant problem for practical parsers. The standard approach is to use a statistical model to rank the parses, and treat


statistical parsing as a search problem: Find the most probable parse from all the parses for the sentence. A seminal work on defining accurate probability models for parsing was Collins (1997). Collins showed how a Probabilistic Context Free Grammar (PCFG) could be extended to handle newspaper text. Some of the innovations in this approach can also be attributed to related work such as Eisner (1996) and Goodman (1996). A PCFG is a CFG where each rule has a probability attached to it, with the requirement that the probabilities of all rules with the same non-terminal on the left-hand side (LHS) sum to 1. The probability of a parse according to a PCFG is the product of the probabilities of the rules used to build the parse. A significant extension in Collins (1997), which greatly increases accuracy, is the use of lexicalisation. A lexicalised PCFG is one where each non-terminal in a rule has a word associated with it, namely the linguistic head of the constituent associated with the rule. This extension improves the model because the rules at all nodes in a parse tree now have access to the head words of the phrases, and not just those rules at the leaves. Not surprisingly, this lexical knowledge is crucial for resolving some syntactic ambiguities, such as coordination and PP attachment (see e.g. Collins and Brooks 1995). A practical parser requires a broad coverage grammar, and the probabilities associated with the rules have to be estimated, which requires an estimation method and a source of training data. The standard resource for building statistical parsing models for English is the Penn Treebank (Marcus et al. 1993). The part which has been used to develop broad-coverage parsers consists of around 1,000,000 words of newspaper text manually annotated with phrase structure trees. Similar phrase structure treebanks, as well as dependency banks, have been created for other languages. 
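The PCFG machinery just described, with rule probabilities summing to 1 per left-hand side and the parse probability computed as the product of rule probabilities, can be sketched with relative-frequency estimation over a tiny invented treebank; the trees and counts below are illustrative only.

```python
from collections import Counter

# Tiny invented 'treebank': each tree is flattened to the list of CFG rules
# used in its derivation. A rule is a (LHS, RHS-tuple) pair.
treebank = [
    [("S", ("NP", "VP")), ("NP", ("DT", "NN")), ("VP", ("VBD", "NP")),
     ("NP", ("DT", "NN"))],
    [("S", ("NP", "VP")), ("NP", ("NNP",)), ("VP", ("VBD",))],
]

# Maximum likelihood (relative frequency) estimation:
# P(A -> beta) = count(A -> beta) / count(A as a LHS).
rule_counts = Counter(r for tree in treebank for r in tree)
lhs_counts = Counter(lhs for tree in treebank for lhs, _ in tree)
prob = {r: rule_counts[r] / lhs_counts[r[0]] for r in rule_counts}

def parse_probability(rules):
    """Probability of a parse = product of the probabilities of its rules."""
    p = 1.0
    for r in rules:
        p *= prob.get(r, 0.0)  # unseen rule -> zero: the sparse data problem
    return p

print(prob[("NP", ("DT", "NN"))])      # seen 2 of the 3 times NP expands: 2/3
print(parse_probability(treebank[0]))  # 1 * 2/3 * 1/2 * 2/3 = 2/9
```

The zero probability assigned to unseen rules is exactly the sparse data problem that smoothing techniques (Collins 1999) are designed to alleviate.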
Figure 58.4 gives an example parse tree from the Penn Treebank (with the head annotation added), together with the head-lexicalised rules associated with the tree. The rules in this example are based on Model 1, the simplest parsing model from Collins (1997); Model 2 and Model 3 incorporate additional linguistic knowledge in the form of subcategorisation information and a gap-propagation analysis of Wh-movement. Estimating the probabilities for a PCFG is simple and intuitive: count the number of times that a particular rule is seen in the treebank, and divide by the number of times the LHS is seen in total. For example, if the rule S → NP VP is seen 1,000 times, and S is seen as the LHS of a rule 5,000 times, the estimate of the probability for this rule would be 1/5. This intuitive relative frequency estimate is also theoretically well motivated, as it is a maximum likelihood estimate for the probability of the rule (see Chapter 6 of Manning and Schütze 1999). Once lexicalisation is introduced, the estimation problem becomes significantly harder. For example, the rules in Figure 58.4 are unlikely to be seen often in a treebank because the particular combination of words is unlikely to occur frequently. A particular problem occurs if a rule has not been seen at all, since this results in a zero relative frequency estimate, which results in a zero probability for the whole parse. Not having access to enough data for estimating probabilities based on words is a common problem in statistical NLP, and is referred to as the sparse data problem. Collins (1999) explains in detail how smoothing techniques can alleviate the sparse data problem for statistical parsing. The approach to statistical parsing described here is designed for maximum robustness: since the grammar is automatically extracted from a large amount of naturally

(12) Last month, Judge Curry set the interest rate on the refund at 9 %.

TOP → S(set)
S(set) → NP(month) , NP(Curry) VP(set)
NP(month) → JJ(Last) NN(month)
NP(Curry) → NNP(Judge) NNP(Curry)
VP(set) → VBD(set) NP(rate) PP(at)
NP(rate) → NP(rate) PP(on)
NP(rate) → DT(the) NN(interest) NN(rate)
NP(refund) → DT(the) NN(refund)
PP(on) → IN(on) NP(refund)
PP(at) → IN(at) NP(%)
NP(%) → CD(9) NN(%)

Fig. 58.4: Example parse tree with head-lexicalised rules

occurring text, the parser has broad coverage and is able to deal with a large number of constructions; furthermore, since the parser is based on statistical models, it is able to assign an analysis to sentences containing words not seen in the training data, effectively guessing the structure by using clues from the POS tag of the unknown word and the surrounding context. An alternative approach, which focuses on producing more detailed linguistic analyses, is to create the grammar by hand, but use a probability model for ambiguity management (section 5). Once a statistical model of parse trees has been estimated from training data it can be used for search, i.e. finding the most probable parse. Here there are two predominant approaches: dynamic programming (Cormen et al. 1990) and beam search. The use of a packed chart lends itself to a dynamic programming approach. The key idea is that, for each group of equivalent analyses in that chart (i.e. those with the same non-terminal node and span which have been “packed”), only the highest scoring node needs to be retained. This is an example of dynamic programming because the highest scoring parse for the complete sentence is computed by finding the highest scoring sub-parse for substrings of the sentence, which forms the basis of the Viterbi algorithm applied to statistical parsing. This method, unlike the pruning methods described next, is exact, in that the Viterbi algorithm is guaranteed to find the highest probability parse. For a lexicalised PCFG, the notion of equivalence needs to be modified slightly, since it must take the words in the rules into account. Even the use of dynamic programming is not sufficient for practical parsing, and so Collins (1997) uses a beam search strategy to reduce the search space. The idea is straightforward: for each constituent in the cell of the chart, remove those which have a low probability relative to the highest probability constituent in that cell. Since the grammars used by Collins are robust, they overgenerate to an enormous degree, producing many low probability, incorrect analyses, and so pruning the chart in this way is very effective in reducing the practical parsing complexity with only a small loss in accuracy. Finally, this section has focused on parsing with (lexicalised) context-free grammars. However, there has been much recent work showing that efficient parsing can be achieved with more complex ‘deep’ grammars, e.g. the work of Oepen et al. (2002) on efficient processing with constraint-based grammars such as HPSG and LFG. In addition there is a large body of work on automatically extracting grammars for parsing in a variety of formalisms from the Penn Treebank, for example LFG (Cahill et al. 2002), HPSG (Miyao and Tsujii 2004) and CCG (Hockenmaier and Steedman 2007; Clark and Curran 2007b). Increases in computing power have also meant that parsers can be applied to large corpora in a way that would have been unthinkable 15 years ago.
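The dynamic-programming idea, keeping only the highest score for each non-terminal per (span) cell, can be sketched by adding probabilities to the CKY chart. The grammar and probabilities below are invented for illustration only.

```python
def viterbi_cky(words, lexicon, rules):
    """Viterbi CKY: chart[(i, j)][A] = probability of the best sub-parse of
    words[i:i+j] rooted in A. Retaining only the best score per label and
    span is the dynamic-programming step; back-pointers (omitted) would
    recover the best tree itself."""
    n = len(words)
    chart = {}
    for i, w in enumerate(words):            # span 1: preterminal probabilities
        chart[i, 1] = {tag: p for (tag, word), p in lexicon.items() if word == w}
    for j in range(2, n + 1):                # spans 2 .. n, in order
        for i in range(n - j + 1):
            cell = {}
            for k in range(1, j):            # all split points
                for b, pb in chart[i, k].items():
                    for c, pc in chart[i + k, j - k].items():
                        for (a, rhs), pr in rules.items():
                            if rhs == (b, c):
                                score = pr * pb * pc
                                if score > cell.get(a, 0.0):  # keep the max only
                                    cell[a] = score
            chart[i, j] = cell
    return chart

lexicon = {("DT", "the"): 1.0, ("NN", "dog"): 1.0,
           ("NN", "cat"): 1.0, ("VBD", "saw"): 1.0}
rules = {("NP", ("DT", "NN")): 0.7, ("VP", ("VBD", "NP")): 1.0,
         ("S", ("NP", "VP")): 1.0}
chart = viterbi_cky("the dog saw the cat".split(), lexicon, rules)
print(round(chart[0, 5]["S"], 2))  # 0.49, i.e. 0.7 * 0.7 for the two NP rules
```

A beam search would additionally prune each cell, dropping entries whose score falls too far below the cell's best one; that step trades exactness for speed.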

6.4. Parser evaluation

There has been much recent work on parser evaluation, which is important for parser development and comparing different grammar formalisms and statistical modelling approaches. There are several dimensions along which parsers can be evaluated, e.g. parser speed, accuracy, and the amount of linguistic information contained in the output representation. Most work on parser evaluation has been concerned with accuracy, measuring how well the output of the parser matches a gold standard output manually produced or corrected by human annotators. One accuracy measure simply counts the number of parses in the gold standard resource which are reproduced completely by the parser. However, this is a rather strict measure on which parsers typically perform poorly (e.g. around 30 % for Penn Treebank-trained parsers) and it gives no indication of how well the parser performs on those sentences for which it makes at least one error. Hence parser evaluation metrics tend to work on a constituent, or dependency relation, level, giving the percentage of constituents or dependents which the parser identifies correctly. The standard evaluation metrics for comparing parsers developed using the Penn Treebank are the Parseval metrics (Black et al. 1991), which count the number of non-terminal nodes in a parse tree which are correctly labeled compared to a manually labeled gold-standard tree. The best parsing results according to these metrics are over 91 % (e.g. Charniak and Johnson 2005). However, there is much debate about whether the Parseval metrics are suitable for a general parser evaluation metric (Carroll et al. 1998), and 91 % perhaps overstates the performance of current practical parsing; for example, the accuracy on constructions such as coordination and PP attachment is much lower. A current line of research is concerned with developing parser evaluation metrics and resources which can be applied to parsers based on different grammar formalisms. For


example, it would be useful to know whether a parser based on LFG is more or less accurate than a parser developed using the Penn Treebank (Kaplan et al. 2004). However, developing a parser evaluation which can be fairly applied to different grammar formalisms has proven difficult (e.g. Clark and Curran 2007a), and currently there is no accepted way of deciding if a parser based on a particular grammar formalism or treebank resource is any more or less accurate than one based on a different resource.
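A Parseval-style computation of labelled precision and recall over (label, start, end) constituents can be sketched as follows; the gold and predicted spans are invented for illustration.

```python
def parseval(gold, predicted):
    """Labelled precision/recall over (label, start, end) constituents,
    plus their harmonic mean (F1), in the spirit of Black et al. (1991)."""
    matched = len(set(gold) & set(predicted))
    precision = matched / len(predicted)
    recall = matched / len(gold)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Constituents as (label, start, end) spans over a 7-word sentence; the
# parser attaches one PP wrongly, so one NP span differs from the gold's.
gold = [("S", 0, 7), ("NP", 0, 2), ("VP", 2, 7), ("NP", 3, 5), ("PP", 5, 7)]
pred = [("S", 0, 7), ("NP", 0, 2), ("VP", 2, 7), ("NP", 3, 7), ("PP", 5, 7)]
p, r, f = parseval(gold, pred)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.8 0.8 0.8
```

A single attachment error thus costs one constituent on each side, which is why coordination and PP attachment errors depress these scores noticeably.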

6.5. Summary

This section has given a brief overview of parsing, highlighting the way that core parsing (the assignment of structures to sentences) relies on pre- and post-processing, and has illustrated one kind of parsing algorithm with reference to context-free grammars. There is an extensive literature on this topic and it remains an area of active work as researchers investigate how to improve parsing efficiency for machine-learned and precision grammars, parsing accuracy of machine-learned grammars, portability of parsing systems across languages, and techniques for evaluating parsing systems across languages and frameworks.

7. Multilingual computational syntax

So far we have talked about computational syntax in terms of monolingual projects which create grammars for one language at a time. There are also a number of projects in multilingual computational syntax. Like monolingual computational syntax, multilingual computational syntax is motivated by both practical and theoretical concerns. On the practical side, multilingual projects can cut down the development time of grammars for additional languages by leveraging resources already developed for other languages. Furthermore, grammars developed together in the context of a multilingual project tend to share output representations, which facilitates reuse of down-stream processing components. That is, less effort is required to adapt the other components of a natural language processing system to use one grammar’s output instead of another’s. On the theoretical side, multilingual projects extend the hypothesis testing benefits (section 2.1) of computational syntax to cross-linguistic hypotheses. Research in multilingual computational syntax involves the creation of multiple implemented grammars which share resources or are otherwise harmonised, but there are different ways of doing this. They can be classified across several dimensions: whether the grammars share rules; whether the grammars are developed in parallel or with some serving as the basis for others; whether grammar development is shared by multiple grammar engineers; and the typological range of the languages considered. The Parallel Grammars (ParGram) project (Butt et al. 1999; Butt et al. 2002; King et al. 2005) represents possibly the longest running project in this domain. It began in 1994 with three grammars (English, French, and German), and currently involves active work on thirteen grammars, including Arabic (Attia 2008), Chinese (Fang and King 2007), English (Riezler et al.
2002), German (Dipper 2003; Rohrer and Forst 2006b), Indonesian, Japanese (Masuichi and Ohkuma 2003), Norwegian (Rosén et al. 2005), Turkish


(Çetinoğlu and Oflazer 2006), Urdu (Bögel et al. 2009), and Welsh. As its name suggests, ParGram aims to produce parallel analyses for similar constructions across languages wherever possible. ParGram follows the parallel development approach, though some of the later ParGram grammars were initially developed through porting of a grammar for one language to another related language (e.g. Kim et al. 2003). The ParGram grammars have very few shared files beyond a core feature declaration and some common templates; consistency among analyses is ensured through close collaboration between the grammar developers. The LinGO Grammar Matrix project (Bender et al. 2002) takes the shared-code approach. The Grammar Matrix consists of a core grammar containing information that is cross-linguistically useful. This core grammar was developed on the basis of the English Resource Grammar (Flickinger 2000), with reference to the JACY Japanese Grammar (Siegel and Bender 2002) and then refined over time as the Matrix was used for grammars for other languages, including Norwegian (Hellan and Haugereid 2003), Modern Greek (Kordoni and Neu 2005), Spanish (Marimon et al. 2007), Portuguese (Branco and Costa 2008) and Wambaya (Bender 2008), as well as smaller grammars for 60+ languages developed in a classroom setting (Bender 2007). The Grammar Matrix also includes libraries of analyses of recurring but non-universal phenomena, which can be accessed through a web-based customisation system to create a typologically appropriate starter grammar (Bender and Flickinger 2005; Drellishak and Bender 2005). The Grammar Matrix project is part of the DELPH-IN consortium. In addition to the grammars mentioned above, DELPH-IN partners have developed large-scale grammars for German (Müller and Kasper 2000; Crysmann 2005) and Korean (Kim and Yang 2003). These grammars are compatible with the parsing, generation, grammar development and other tools developed by DELPH-IN. 
MedSLT provides speech-to-speech translation for the medical domain. One of the system’s distinguishing characteristics is that all grammars used (for recognition, analysis and generation) are compiled from a small number of general linguistically motivated unification grammars, using the open source Regulus platform (Rayner et al. 2006). The overall goal of the Regulus architecture is to simplify the writing and maintaining of a large number of closely related grammars, retaining internal coherence between them. In particular, coherence between the recognition and analysis grammars guarantees that any expression which is accepted by the recogniser can also be parsed. Recent versions of the system merge together grammars for closely related languages (Bouillon et al. 2006). These core grammars are automatically specialised to derive smaller grammars. There are several other computational syntax projects that have strong multilingual components. The MetaGrammar project (de la Clergerie 2005) provides a toolkit to edit and compile MetaGrammars, which involve factorised syntactic descriptions, into Tree Adjoining Grammars (TAGs). The OpenCCG project (Baldridge et al. 2007) provides parsing and realisation tools based on the Combinatory Categorial Grammar (CCG) formalism. The CoreGram project is building Head-driven Phrase Structure Grammars (HPSGs) for five languages that share a common core grammar (e.g., Müller 2009). Most of these systems can be freely downloaded for research and educational purposes. For example, linguists can use these systems to create grammars to test linguistic hypotheses and syntactic analyses, either within a given language or across several languages. As discussed above, the core syntactic rules must be created for each language either independently, via adaptation from a common base, or via grammar porting. However,


if multilingual grammar engineering is to take into account naturally occurring text, then many additional resources need to be developed for each language. Depending on the particular parsing pipeline, these include: part-of-speech taggers, morphological analysers (as required by the morphological complexity of the languages), named-entity recognisers, broad-coverage lexical resources, and treebanks (section 3). In addition, the grammar development platforms and the parsing algorithms may need to be adapted to account for unanticipated phenomena or to maintain efficiency. Just as in theoretical syntax, by taking multiple languages into account, computational systems can be made more effective and universal.

8. Summary and conclusion

The above sections described major areas of research in computational syntax. Due to space limitations, some aspects were not covered. One of these that has a clear correlate in traditional generative linguistics is generation, whereby the system is provided with a (partial) syntactic or semantic representation and must produce the strings that can realise this meaning. Some computational grammars can be used both in the parsing (section 6) and generation directions, while other approaches use two different grammars for these. For more on generation see Reiter and Dale (2000) or the chapter on natural language generation in Jurafsky and Martin (2008). Another area is the relationship of computational syntax to pragmatics. Some computational syntax systems include basic context information such as when a document was created, its title, and who its authors are. This information is often referred to as metadata, and it helps the system to perform basic reference resolution (e.g. resolving today) and other types of pragmatics, as well as allowing systems that handle large numbers of documents to tie the syntactic information to its source. There is also much more to be said about the issues of efficiency and memory usage; for certain applications, it may be necessary to decrease syntactic coverage or accuracy in exchange for more rapid processing. Computational implementations can inform the development of syntactic theories. Syntactic theories vary in how closely related they are to implementation efforts, but even in the cases where they are closely related, non-computational work within a framework tends to differ from computational work in the formalism assumed.
There are several possible reasons for any given divergence: First, syntax pursued in a non-implemented fashion (so-called pencil-and-paper syntax) can change formal assumptions much more rapidly than implemented work, as in the latter case any changes to formal assumptions require updates to multiple software systems. As a consequence, computational syntax tends to make a clear distinction between formalism and theory, such that new theoretical results do not always require formalism (and software) changes. Second, pencil-and-paper syntax does not face the same concerns of processing efficiency that computational syntax does. In some cases, formal devices that are appealing from a pure competence standpoint are not implemented in computational processing (performance) systems for this reason. Finally, some divergences are best classified as historical accident, resulting from the theoretical or computational preferences of the computational syntacticians when selecting formal devices. Given these sources of divergence, it is not possible to simply compare implemented and pencil-and-paper theories and make


generalisations about the kinds of formal devices that are or are not compatible with implementation. However, a more careful study could reveal such tendencies, and this is an interesting area for further research. This chapter provided a brief overview of computational syntax, describing what computational syntax is and why it is of interest. Some areas of research that are of particular interest to computational syntax were examined in detail: regression testing, robustness, ambiguity, parsing algorithms, and multilingual approaches. Many of these areas have clear correlates in theoretical syntax, but the demands of computational syntax require a focus on particular research areas, some of which have not generally figured in theoretical linguistic investigations. We hope that this chapter has been useful to theoretical syntacticians, as well as computational linguists, and that over time the interaction between the two groups will continue to increase so that each research field can benefit from the other.

Acknowledgements

We would like to thank the following people for providing extensive and useful feedback on our chapter: Tania Avgustinova, Miriam Butt, Jeff Good, Ron Kaplan, Tibor Kiss, Laura Rimell and an anonymous reviewer.


Emily M. Bender, Seattle (USA) Stephen Clark, Cambridge (UK) Tracy Holloway King, San Jose (USA)


IX. Beyond Syntax

59. Reference Grammars

1. Introduction
2. Older reference grammars
3. The descriptive framework of Basic Linguistic Theory
4. Case studies
5. Conclusion
6. References (selected)

Abstract

Since the mid-1980s most reference grammars describe syntax in terms of Basic Linguistic Theory, a cumulative descriptive framework that has acquired its methods and concepts from various sources, from traditional grammar to linguistic typology and theoretical syntax. It employs an informal, user-friendly metalanguage so that every reader is able to access the language data independently of the theoretical framework in which (s)he is working. The role of syntactic theory in language description mainly consists in defining what is considered to be essential for the structure of the language and what questions a fieldworker can ask when collecting the data. In return, the data from reference grammars enrich syntactic theory, as they can be used to evaluate possible constraints on human language and extend our understanding of human linguistic capacity.

1. Introduction

This chapter addresses the presentation of syntax in reference grammars. The emphasis will be on relatively comprehensive descriptions of languages that had previously been completely undescribed or only sparsely described. In the words of Evans (2007: 480), these grammars “are our main vehicle for representing the linguistic structure of the world’s 6,000 languages”. It is well known that different types of grammars target different groups of readers (e.g. Mosel 2006: 42), but I will only be dealing with grammars meant to be used by academic specialists. This audience can be taken to include scholars in general linguistics, especially in typology, language universals and comparative syntax, together with students of the relevant language and language family, and of language endangerment in general (cf. Comrie and Smith 1977). I will consider the grammars that have appeared within the various series of grammatical monographs, such as the Mouton Grammar Library, the Lingua Descriptive Series and Cambridge Grammatical Descriptions (although the latter series was discontinued), as well as those within series specialising in a particular region (e.g. Languages of the Greater Himalayan Region, Grammatical Analyses of African Languages, Studies in African Linguistics (Lincom), Studies in Asian Linguistics, and Pacific Linguistics), together with some grammars that were published outside the framework of any series. For the most part, the chapter will not discuss in detail shorter grammatical sketches (e.g. LINCOM’s series Languages of the World. Materials), grammars which specifically state that they emphasise phonology


and morphology, grammars written in languages other than English, grammars that cite language examples using national scripts, nor grammars that do not follow modern linguistic practice in providing glosses. The chapter will start with a short overview of older reference grammars written within structuralist and early transformational frameworks (section 2). However, it will mostly focus on the descriptions written from approximately the mid-1980s, because syntax occupies a much more prominent role in grammars written in the past 25 years or so than in those written earlier. In section 3, I will address the major descriptive framework of Basic Linguistic Theory and will argue, following several previous authors, that it is an important theoretical tool. The following interrelated points will be discussed: the relation to other linguistic theories and linguistic typology, the methods of collecting the data, the choice of topics in a syntactic description, and the terminology. In section 4, I will present two detailed case studies of how syntactic topics are treated in reference grammars. Section 5 concludes the paper.

2. Older reference grammars

Earlier grammars often privileged phonology and morphology, but devoted relatively little attention to sentence structure. This is the case with many grammars of native American languages published in the University of California Publications in Linguistics series, initiated in the 1930s by Mary Haas. For example, in the description of Eastern Pomo, morphology is discussed over 138 pages (McLendon 1972: 37−175), with a very detailed account of nominal and verbal forms and paradigms (word formation and derivation). However, only five pages are devoted to syntax (McLendon 1972: 176−181), and these discuss no more than the basics of word order and the expression of major grammatical functions such as subject, direct object and locative adjuncts. The same holds for Langdon’s (1970) grammar of Diegueño and LeCron’s (1969) grammar of Tarascan.

Grammars written in the 1960s and 1970s that do deal with syntax at some length display considerable variation in their theoretical frameworks. In the words of Payne and Weber (2006), they are not examples of language documentation and description, but rather provide a “ground for competing conceptualizations of linguistic theory”. European structuralism is represented by French- and English-language grammars (e.g. Dez 1980; Adelaar 1977; Cloarec-Heiss 1986). The basic unit of a structuralist description is taken to be the morpheme, which leads to a primarily distributional approach: these grammars discuss the position and function of each morpheme in great detail. Syntax is treated together with morphology. Most syntactic facts are described in terms of the use of the forms that result from the application of morphological rules and the positional processes that combine words into larger units. Only basic syntactic facts are usually described, while the information on e.g.
pragmatically motivated variations in constituent order, anaphoric processes or the structure of complex sentences is minimal.

A number of grammars represent the tagmemic approach, a neo-structuralist framework developed in the 1950s by Kenneth Pike (with later additions by Robert Longacre) and associated mostly with the work of the Summer Institute of Linguistics, an organization of missionary linguists devoted to Bible translation. Tagmemics as a system of linguistic analysis was primarily designed to help linguists extract coherent descriptions efficiently from corpora of fieldwork data. It was applied to the description of a large number of hitherto unrecorded languages, mostly from the Americas, in the 1960s and 1970s, for instance by Allin (1970) and Glass and Hackett (1970). Tagmemics differs from alternative systems of grammatical analysis in that it defines the basic units of language (tagmemes) as composites of form and meaning, “units-in-context”. The tagmeme, as a fundamental unit of language, is a combination of a syntagmatic position and the actual elements that can fill it: one part of the tagmeme is the “slot” and the other is the “filler”. For example, one such tagmeme, at the syntactic level of analysis, might be the noun-as-subject, where the noun is a class that fills the subject slot in a construction. A filler may also be a bound morpheme. For instance, in the grammar of Pitjantjatjara (Glass and Hackett 1970), oblique enclitics (pronouns) typically manifest the object tagmeme, but sometimes they represent location or source tagmemes (probably due to applicativization). In tagmemics, language structure is deeply hierarchical in several simultaneous ways: sounds and intonation form a phonological hierarchy; words and sentences form a grammatical hierarchy; and meanings form a referential hierarchy. The very same structure that appears at the lower levels also appears at the higher levels: sounds form words, words form sentences, and sentences form discourse. The description normally starts with the lowest syntactic unit, the word, and then proceeds to phrases, clauses, sentences and several levels of discourse.

Transformational-generative grammars usually apply some version of Chomsky (1965).
Such transformational-generative grammars are written on the basis of a corpus of grammatical and ungrammatical sentences obtained through introspection or direct elicitation. The grammar must generate all the grammatical sentences and exclude the ungrammatical ones. According to Koutsoudas (1966), the general analytic procedure of grammar writing consists of several steps: (i) establishing a list of morphemes with a gloss and a categorial label; (ii) establishing categories such as subject and object and the basic sentence patterns; (iii) establishing which classes occur in which position; (iv) establishing their combinations, variations and optionality; (v) positing larger syntactic constituents and representing them by tree diagrams; (vi) comparing the types; (vii) ordering the syntactic rules; and (viii) checking whether the solution works. The only way to check the data is to rely on a native speaker’s intuition. When a native speaker is not available, the analyst should write “the most general and plausible grammar possible” (Koutsoudas 1966: 47), but in any case there must be only one structural description assigned to each sentence by the grammar. True, this theory was not designed to be a purely descriptive tool, and Koutsoudas’ recommendations mostly concern the so-called “scientific” grammars. However, a similar methodology was applied in reference grammars that tested the applicability of the generative model to little-studied non-Indo-European languages. Examples include a grammar of Wichita (Rood 1976), a grammar of Tuscarora (Mithun 1976), a grammar of Mokilese (Harrison 1976), a grammar of Slave (Rice 1989), and a grammar of Hua (Haiman 1980) (although in the latter, grammatical relations are defined “by labels” rather than configurationally).
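The checking step (viii) can be caricatured in code: given a toy phrase-structure grammar and an elicited corpus, verify that the grammar generates every grammatical sentence and none of the ungrammatical ones. The grammar, the sentences and the judgments below are invented placeholders, not examples from Koutsoudas.

```python
from itertools import product

# A toy phrase-structure grammar: nonterminal -> list of expansions.
rules = {
    "S":  [["NP", "V"], ["NP", "V", "NP"]],
    "NP": [["dogs"], ["cats"]],
    "V":  [["sleep"], ["chase"]],
}

def generate(symbol, depth=4):
    """Enumerate all terminal strings derivable from `symbol` (bounded depth)."""
    if symbol not in rules:          # terminal symbol
        return {(symbol,)}
    if depth == 0:
        return set()
    out = set()
    for expansion in rules[symbol]:
        parts = [generate(s, depth - 1) for s in expansion]
        for combo in product(*parts):
            out.add(tuple(w for part in combo for w in part))
    return out

language = {" ".join(s) for s in generate("S")}

# An invented mini-corpus with native-speaker judgments.
grammatical   = {"dogs sleep", "cats chase dogs"}
ungrammatical = {"sleep dogs", "dogs cats chase"}

assert grammatical <= language          # generates all grammatical sentences
assert not (ungrammatical & language)   # excludes the ungrammatical ones
```

Real grammars are of course not checkable by brute-force enumeration; the sketch only shows the logic of the adequacy test that step (viii) demands.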
Grammars in this tradition assume deep and surface structure, phrase-structure rules, and transformational rules such as pronoun deletion, agreement, pronoun shift and the like, and they introduce visual representations in the form of phrase-structure trees.

59. Reference Grammars

One example of a grammatical description of basic syntactic processes completed within the framework of Relational Grammar is the grammar of Kinyarwanda (Kimenyi 1978). The major syntactic processes in this language appear to be precisely those that are central to this theory, because they affect grammatical relations. The grammar provides a very careful description of the behavioural properties of the major grammatical relations, understood as syntactic primitives, and of valence-changing rules such as objectivization (applicativization), subjectivization and possessor advancement. The properties of derived grammatical relations are also addressed. Since the description is very much influenced by what counted as central to the theory of Relational Grammar, the presentation of other aspects of syntax is sketchy. For example, the TAM system is dealt with in the Appendix, and such topics as coordination, dependent clauses or questions are not addressed at all.

The majority of modern users agree that many of these earlier grammars have become dated. First, they are very difficult to read because of idiosyncratic terminology and representational conventions. A grammar that draws heavily on the terminology and formalism of a particular syntactic framework is not easily accessible to a reader who is not trained in the same grammatical tradition as the author of the description. This point is strongly made in Dench and Evans (2006: 6−7) and Rice (2006b: 390), who emphasize that for the purpose of general accessibility things should be said in plain words, and that technical terminology should be used sparingly and with great care. Linguists setting out to write a reference grammar of a little-known language should write in a kind of theoretical “lingua franca”, whatever their own theoretical orientation, if they want the grammar to be accessible to as broad a readership as possible.
Second, earlier grammars attempted to describe a particular language with little reference to notions employed in describing other languages. They were “almost deliberately isolationist” (in the words of Bernard Comrie’s editorial statement for the Lingua Descriptive Series), since the descriptive framework and terminology were often developed for an individual language. This made typological comparability a serious issue.

Finally, there has been a long-standing debate as to which organization is preferable for reference grammars: form-based/semasiological/analytic or function-based/onomasiological/synthetic (Lehmann 1989; Evans and Dench 2006: 15; Cristofaro 2006: 137; Mosel 2006). Form-based grammars take as their starting point the existing structures of the language, that is, they select constructions and describe the range of functions associated with them. Function-based grammars are organized around particular meaning categories or functional domains, such as nominal modification, attribution, reference, modality, negation or questions, and show how the language expresses them. Early grammars usually belong to the first type. As noted in Evans and Dench (2006: 15), this is because the techniques of structural linguistics have always been more suitable for dealing with form than with meaning. In fact, such grammars often ignore semantics, usage and communicative function altogether, to the extent that Payne (2006) refers to structuralist and early transformational descriptions with the “grammar-as-machine” metaphor. The information about each particular structure is mostly distributional, so the grammar looks like a taxonomy of forms provided with labels that are sometimes difficult to interpret. This led to radical changes in the approach to grammar writing.

IX. Beyond Syntax

3. The descriptive framework of Basic Linguistic Theory

The expression “Basic Linguistic Theory” refers to the theoretical framework that is widely employed in language description, particularly in grammatical descriptions of entire languages, and is an example of fruitful interaction between theoretical and descriptive linguistics. The same framework is referred to as “general comparative language” in Lehmann (1989), but I will use the former term, as it is more widely accepted. The expression itself originates with Dixon (1997: 132), who defines Basic Linguistic Theory as “the fundamental theoretical apparatus that underlies all work in describing languages and formulating universals about the nature of human language”. Most reference grammars written in the past twenty-five years or so adopt Basic Linguistic Theory as their theoretical tool, even though not all of them mention it explicitly.

3.1. Basic Linguistic Theory as a theoretical tool

The status of Basic Linguistic Theory as a theoretical framework is not always recognized. Many grammars explicitly state that the description they provide is atheoretical, theoretically eclectic or theory-neutral. For instance, the author of the grammar of Wolane says: “The presentation of the data is not based on a single linguistic model or theory but is deliberately descriptive” (Meyer 2006: 21). Evans writes about his Kayardild grammar: “The grammar deliberately eschews theory-specific assumptions and formalism” (Evans 1995: ix). Foley (1991: vii) comments that his grammar of Yimas “is not written in any set theoretical framework. (…) I have deliberately chosen to be eclectic, choosing various ideas from different theories when these seem to elucidate the structure of the language best”. Crowley (1998: 5) states that “the theoretical approach has been deliberately eclectic in order to maximize intelligibility”. In this last grammar, constituent order is dealt with using the notions of slots and fillers, although the approach is not really tagmemic, and movement rules are also mentioned, although the approach is not entirely transformational.

However, Dryer (2006) refutes the myth that people who write grammars work without a theory and emphasises that there is no such thing as an atheoretical or theory-neutral description. According to Gil (2001: 126), it is an illusion to believe that description can be separated from theory and that one can engage in the former without the latter. One cannot describe anything without making at least some theoretical assumptions and analytical decisions. The bare facts of a language are infinite in number and do not present themselves to an observer in a systematic way. It is the analyst’s decision which facts should be presented in a finite description, how they should be classified and labelled, and which generalizations can be made on this basis.
This cannot be done without a theory. Therefore any presentation of linguistic data in a reference grammar involves an analytical procedure, and every label in a grammatical description is a theoretical statement.

Dryer further draws a distinction, rarely made elsewhere, between “descriptive theory” and “explanatory theory”. This distinction is not widely recognized because mainstream linguistics maintains that one and the same theory is suitable for both purposes and that the central task of linguistics is the explanation of linguistic facts. According to Dryer, this is the main reason why mainstream linguistics fails to recognize Basic Linguistic Theory as theoretically relevant. Dryer rejects this view and calls for the recognition that description requires a different theory, one which does not aim to explain why languages are the way they are. The extent to which descriptive work shares the same basic assumptions is actually rather striking, and for Dryer this reflects a common theoretical background. In other words, the relevant question is not whether reference grammars use a theory at all, but what theory they use and how it interacts with other theoretical frameworks (cf. Rice 2006a: 235).

3.2. Sources

Basic Linguistic Theory can be seen as having evolved out of so-called traditional grammar, which goes back to the Graeco-Latin grammatical tradition and from which it acquired most of its basic notions, such as word classes, inflectional categories and non-finite forms. The use of terminology taken from traditional grammar has become increasingly popular in reference grammars from the 1980s onwards. In the words of Mosel (2006: 51), “even those who are convinced that these categories do not exist in the language they describe have difficulties doing without the traditional terms”. In Dixon’s view, Basic Linguistic Theory has something of a timeless character, existing more or less unchanged since the ancient Greeks and Romans (Dixon 1997: 128−130). It is conservative as a matter of principle. However, even though Basic Linguistic Theory is grounded in traditional grammar, it has acquired ideas and terms from other sources too. Unlike many other linguistic theories, it is a cumulative framework. Dryer (2006) describes Basic Linguistic Theory as “traditional grammar, minus its bad features (such as a tendency to describe all languages in terms of concepts motivated for European languages), plus necessary concepts absent from traditional grammar”. This means that Basic Linguistic Theory is constantly updated by linguistic typology and theoretical linguistics.

First, Basic Linguistic Theory has taken some analytical techniques from the structuralist tradition, particularly in the areas of phonology and morphology. In contrast to traditional grammar and many recent theoretical frameworks, it emphasises the structuralism-based need to describe each language in its own terms, instead of imposing on individual languages concepts whose primary motivation comes from other (mostly European) languages, as will be explained in more detail in section 3.3.
However, it contrasts with structuralist work in attempting to describe languages in a more user-friendly fashion, including semantic considerations in the analyses and employing terminology that has been used for similar phenomena in other languages.

Second, Basic Linguistic Theory has been influenced to a certain extent by the generative-transformational paradigm, though this influence is often indirect. This mostly concerns the earlier versions of the theory, while most recent work, especially the Minimalist Program, has had essentially no impact on Basic Linguistic Theory so far. Early generative grammar examined many aspects of the syntax of English in great detail, and the insights of that research have affected how Basic Linguistic Theory was applied to the syntax of other languages. In fact, Rice (2006a: 239) attributes the increased role of syntax in language description mostly to the influence of the generative paradigm, which raised awareness of many linguistic phenomena not previously considered interesting. Descriptive work has also borrowed some generative terminology. For example, the term “complementizer” and the corresponding concept, unknown in traditional grammar, are now used in some reference grammars (e.g. Heath 1999). Some grammars employ the notion of KP instead of NP. For instance, Donohue (1999) argues for a hierarchical structure of nominal phrases in Tukang Besi, with KP being higher than NP, and introduces the rule KP → ART NP. An NP is generated inside a KP since all NPs must appear with an article or a preposition, and sometimes with both.

Most importantly, the influence of the generative paradigm can be seen in which syntactic constructions are identified and how they are analyzed. The phenomena that are addressed in virtually every grammar written after 1980 are topic and focus constructions, complement clauses, relative clauses, wh-questions and anaphora. Mosel (2006: 52) notes that the Lingua Descriptive Series questionnaire, first published in 1977, reflects the interests of the generative syntax of its time: it contains 79 questions on reflexives (and only five on negation). However, it should be mentioned that variations in constituent order, which have been central to syntactic analysis within the transformational paradigm, are still often described in a rather sketchy manner, and information on syntactic constituency is not easily retrievable from descriptions completed using this questionnaire. Rice (2006a: 236, 2006b: 403) argues that formal syntactic theory forces a grammar writer to ask questions that would otherwise be unlikely to be asked.
In her description of wh-questions in Slave (Rice 1989) she had to address the following topics: whether a sentence-initial wh-word leaves a gap, whether all “moved” wh-words follow the same pattern, whether a question word behaves in the same way as topicalized NPs, and whether there are locality constraints on wh-movement. These questions were inspired by the development of the syntactic theory of the time, being largely based on the insights of Chomsky (1977), and raised the issue of how Slave fits into the proposed typology of questions. Similarly, Evans (Evans and Dench 2006: 4) explains that he had to address certain aspects of the polysynthesis of Bininj Gun-wok predicted by Baker’s movement-based theory of incorporation (Baker 1988, 1995). This resulted in a more detailed description of the language than would have been written otherwise. The reference grammar (Evans 2003) argues that the Bininj Gun-wok data suggest an alternative, lexicalist view of incorporation, much in line with Rosen (1987). One of the arguments for this alternative is that the language allows so-called “doubling”, i.e. the repetition of the nominal root inside and outside the incorporating word, see (1). In this example and all other examples cited in this chapter the transcription comes from the original source; glosses are simplified.

(1) djaying  kun-murrng  birri-murrng-moyhme-y          [Bininj Gun-wok]
    is.said  IV-bone     3-bone-get-PST.PRF
    ‘They reckon they got those bones.’ (Evans 2003: 453)

This fact is predicted by the lexicalist analysis of “classifier” incorporation but is problematic for the movement analysis. In addition, a Baker-style account based on phrase-structure sisterhood in D-structure predicts that intransitive subjects are incorporated only in unaccusatives, but in Bininj Gun-wok typical unergative verbs such as ‘crawl’ or ‘get up’ can incorporate their subjects. There are other arguments as well, and Evans concludes that the movement analysis has no explanatory power for Bininj Gun-wok. The important point here is that, although the theory did not guide the description of the data, it raised the questions informing this description. Without the theory the description would have been poorer, because the grammarian might not have been able to see an interesting aspect of the language. So theory informs the grammarian about the topics worth investigating and serves as a tool for the discovery of new data (cf. Evans 2007).

Finally and most importantly, Basic Linguistic Theory relies heavily on work in linguistic typology. In fact, the primary influence on Basic Linguistic Theory in the past twenty years has come from typological work and the theory of language universals, understood as generalizations over observable cross-linguistic variation (Lehmann 1989; Dryer 2006; Cristofaro 2006). Descriptive linguistics has found typological work especially useful because Basic Linguistic Theory is also the framework assumed by typology, so there is considerable overlap in terminology between the two. For example, the grammar of Supyire (Carlson 1994: 3) states that the description is completed within a functional-typological framework, but the technical language it uses does not actually differ from the recommendations of Basic Linguistic Theory. Like Basic Linguistic Theory, typology adopts a number of descriptive notions that to a large extent go back to traditional grammar, but it has also introduced new terms that are now widely used in descriptive practice but less so in “formal” syntactic work. One such notion is the “converb”, defined by Nedjalkov (1995) as a non-finite verbal form used primarily for adverbial subordination.
This term is employed in a grammar of Lezgian (Haspelmath 1993), a grammar of Udihe (Nikolaeva and Tolskaya 2001), a grammar of Kolyma Yukaghir (Maslova 2003), a grammar of Tariana (Aikhenvald 2003), a grammar of Wolane (Meyer 2006), and a number of other descriptions. The influence of typology also derives from the recognition of recurrent phenomena, and Basic Linguistic Theory has incorporated many substantive concepts discussed in the typological literature, such as split intransitivity, head- vs. dependent-marking, the Accessibility Hierarchy, the antipassive, internally-headed relative clauses, switch reference, ergativity and morphological alignment. This has led to an expansion of the topics addressed in reference grammars (cf. Cristofaro 2006: 138; Dryer 2006). Conversely, descriptive work on little-known languages systematically forces typologists to ask new questions about construction types.

Some suggestions on what information should be included in a syntactic description are offered in Payne (1997), intended as a standard guide for field linguists “who desire to write a description of the morphology and syntax of one of the many under-documented languages of the world” (Payne 1997: 1). The following syntactic topics are considered crucial: grammatical categories and word classes; constituent order (with some notion of constituent structure, although a fairly basic one); the structure of noun phrases (noun classes and classifiers, possession, modification, determination, quantification); predicate nominals; case marking and grammatical relations; voice and valence-changing operations; verb phrases (the structure of predicates, complex predicates and serial verb constructions, verb phrase operations including nominalization); pragmatically marked structures (topicalization, focalization, negation, non-declarative clause types); and clause combining (subordination and coordination). These are the areas addressed in most recent grammars.
One example is the grammar of Tariana (Aikhenvald 2003), which explicitly states that the theoretical framework used in the description is Basic Linguistic Theory, but the presentation of the facts is interwoven with their analysis from the point of view of cross-linguistic comparison. The typological perspective is thus crucial both for the presentation of the data and for the analysis of the language facts.

Typological considerations have also influenced the organization of reference grammars. As mentioned above, earlier grammars preferred the form-to-function approach. However, it is now generally recognized that function-to-form organization is more appropriate for syntactic description, at least as far as the initial structuring of the material is concerned (Noonan 2006: 359). Cristofaro (2006) argues that there are certain practical considerations that may lead typologists to prefer the function-to-form approach. It is preferable to organise the description around functional domains, independently of the actual forms of expression, because this makes it easy for typologists to conduct a cross-linguistic investigation of how the relevant phenomenon is encoded across languages. The form-to-function approach may lead the user to miss pieces of information that are actually there but are located in parts of the grammar where they are not easily recoverable. The assumption here is that there is a set of semantic categories instantiated in all languages by various grammatical means. This presupposes a “catalogue” of functions that are universally expressed, something that has never really been properly implemented. As a result, there are few comprehensive meaning-based grammars, perhaps also for the good reason that a reader should actually know that the forms and constructions exist before (s)he can understand that a particular meaning can be expressed by a given form (Mosel 2006: 59). Some examples of functional or communicative grammars written from a fundamentally meaning-based perspective, and with a clear emphasis on the usage of the described forms, are Leech and Svartvik (1975) and Li and Thompson (1981).
However, Lehmann (1989) and Mosel (2006) observe that if the function-to-form approach is implemented consistently, information about the same syntactic structure may be dispersed throughout the grammar, and any potential polyfunctionality patterns may remain unrecognized. So the consistent preference for one approach over the other produces a rather one-sided result. In practice most modern grammars combine the two approaches, as recommended in particular in Lehmann (1989). The macrostructure of the grammar is normally determined by the usual guidelines for descriptive linguistics, but the description of each functional domain has an internal microstructure determined by the categories of the language in question. One example is Maslova (2003), where this point is explicitly discussed. This is also true of the Lingua Descriptive Series questionnaire (Comrie and Smith 1977). It exemplifies the so-called top-down or templatic approach to the organization of a grammar (the term is from Gil 2001: 126−127), where the author produces a description according to a pre-prepared list of questions, so that the grammar-writing process consists in providing answers to these questions, mentioning both the absence and the presence of the relevant feature. The questionnaire has a “descending” organization (syntax − morphology − phonology). The syntactic chapters are based on the following subdivisions: sentence types, subordinate clauses, the structure of syntactic phrases, the operational definition of word classes, coordination, negation, anaphora, reflexives, reciprocals, equatives, possession, emphasis, topicality, heavy shift and other “movement” processes. The general direction of description is from function to form. However, this principle is not applied with total consistency, as shown in Lehmann (1989) and Mosel (2006): some parts of the description are organized based on structural criteria.
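The templatic workflow can be pictured as filling in a fixed question list, recording absence as explicitly as presence. The questions and answers below are invented placeholders, not actual items from the Comrie and Smith questionnaire.

```python
# A minimal sketch of questionnaire-driven ("templatic") grammar writing:
# the description is produced by answering a fixed list of questions.
questionnaire = [
    "Does the language have internally-headed relative clauses?",
    "Does the language have a morphological reflexive?",
    "Is constituent order pragmatically conditioned?",
]

# Hypothetical answers for an imaginary language.
answers = {
    questionnaire[0]: "No; relative clauses are externally headed.",
    questionnaire[1]: "Yes; a verbal suffix marks reflexivity.",
    questionnaire[2]: "Yes; focused constituents are preverbal.",
}

def draft_section(questions, answered):
    """Assemble a grammar fragment, flagging unanswered questions."""
    lines = []
    for q in questions:
        lines.append(f"{q}\n  -> {answered.get(q, 'NOT YET ELICITED')}")
    return "\n".join(lines)

print(draft_section(questionnaire, answers))
```

The gaps flagged by such a template are precisely what distinguishes the top-down approach: a negative or missing answer is itself a datum to be reported.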

Generally speaking, both Basic Linguistic Theory and the functional-typological approach assume that a grammar is a form-meaning composite. The notion of grammatical construction, i.e. idiomatic pairing of syntactic and semantico-pragmatic information, has been widely used in reference grammars, even though the theoretical importance of this notion had not been recognised until recently. Since certain aspects of language structure can be explained in terms of meaning and usage, many grammarians are interested in the relation between structure and functional domains that are ultimately grounded in human cognition and symbiosis (Lehmann 1989). A descriptive theory is then a theory of how it is possible to express a particular meaning in a given language (Dryer 2006; Evans and Dench 2006: 7). In this, it differs from many other syntactic frameworks.

3.3. Terminology and representational conventions

Basic Linguistic Theory avoids theory-specific terminology; rather, it employs established terminology that has entered various frameworks. This ensures the recognition of recurrent cross-linguistic phenomena and easy compatibility with all syntactic models. Only a few recent grammars use the formalism of a particular syntactic framework. For example, the grammar of Tukang Besi (Donohue 1999) employs tree-structure diagrams representing syntactic constituency, LFG-style functional structures and the thematic hierarchy as formulated in Bresnan and Kanerva (1989), but this is rather unusual. The only metalanguage employed in most reference grammars is natural language. In fact, the absence of a technical metalanguage is an essential property of Basic Linguistic Theory, which differs from many other theoretical frameworks in being informal. However, Dryer (2006) warns against confusing the lack of formalism with imprecision. A description in plain prose may be explicit and precise as long as the terminology and assumptions are clearly defined. Many grammatical phenomena can be characterized with sufficient precision in English (or some other natural language) without the use of formalism.

One reason why complicated formalism is not recommended is that, in the view of some descriptive linguists, the potential audience for a reference grammar is diverse and does not only include professional linguists (Lehmann 1989; Mithun 2006; Evans and Dench 2006: 6; Noonan 2006: 353; Mosel 2006: 45). If a description aims at the largest possible audience, excessive technical terminology is detrimental. Academic grammars may be consulted by a variety of users and serve diverse needs, including those of the linguistic community. This is particularly true of smaller communities whose language is severely endangered, for which a reference grammar can play an important role in language maintenance.
Moreover, the linguistic audience is changing, and so are the interests of theoretical linguistics and typology. Therefore a grammar of a little-known language should not be framed in terms of some formal model but presented using representational conventions comprehensible across schools and times. Grammars written in a natural language are the most enduring and accessible to the largest possible audience. The Mouton Grammar Library’s grammar of Basque (Hualde and Ortiz de Urbina 2003) is a good example of a description informed by the insights and techniques of formal syntax, but with an informal write-up. The grammar was written by a team of linguists trained in the generative tradition of grammatical analysis. This ensures analytical rigor and attention to fine points of syntactic structure which are not always discussed in other reference grammars. For example, it provides a thorough discussion of embedded questions, types of complementation, relativization of deeply embedded NPs, ellipsis and gapping, and multiple focus and multiple wh-questions. However, the presentation avoids unnecessary formalism, and the book is accessible to any linguist regardless of theoretical orientation.

Although Basic Linguistic Theory is heavily grounded in empirical data, it is not merely a collection of facts from different languages but rather a systematization of generalizations that follow from cross-linguistic variation. A certain level of abstraction is therefore a must. With respect to terminology, there are two conflicting pressures, reflecting the well-known tension between linguistic universality and linguistic diversity. First, the authors of modern grammars prefer to employ fairly uniform terminology, as opposed to structuralism-based descriptions, which tried to capture the unique properties of every linguistic system with idiosyncratic terms. The description of an individual language must crucially be tied to the knowledge of other languages and the way they are described. It is important for a grammarian to know what terms are generally in use and how they are commonly understood (Mithun 2006: 206). Inventing new terms for a well-defined phenomenon is not desirable. Cristofaro (2006) argues that standardization of terminology ensures greater typological adequacy and comparability, while the use of idiosyncratic terms makes it more difficult for the reader to establish differences and similarities between the language in question and other languages with the same phenomenon.
What is more, reference grammars are not meant to be read from beginning to end; they are used when the need arises (Lehmann 1989). A grammar that is organized in an idiosyncratic way may not be easily accessible even if the description itself is sufficiently comprehensive, because it may simply be difficult to find relevant information and compare it with information drawn from descriptions of other languages. If the grammar does not allow easy identification of the information, the reader may not realise where exactly it is addressed. Generally speaking, grammars display an overwhelming tendency to use the same terminology. However, some grammars use invented terms to describe categories that are similar to categories of other languages and could be described with traditional labels familiar to most linguists. In his description of Tundra Nenets inflection, Salminen (1997) uses a number of highly idiosyncratic names for verbal moods, as well as the term “referentials”, which simply denotes agreeing adverbs. Cristofaro (2006: 159−160) discusses several other examples. For instance, she finds unnecessary the label “logical clause” employed in Morse and Maxwell (1999), a type which encompasses purpose, cause, conditional, comparison and concession clauses. A new term can be introduced if it refers to a genuinely novel phenomenon, but in this case it should be carefully defined (Noonan 2006: 354). Thus, Dol (1999) employs the term “pseudo-quotative” in her description of Maybrat and takes care to explain that pseudo-quotatives, unlike regular quotatives, which express indirect speech, reflect thought content and differ from the latter in their syntactic structure.

Second, a number of typologists have recently argued for the view that linguistic categories identified in terms of particular structural features may not be cross-linguistically robust (Dryer 1997; Croft 2001; Gil 2001). To put it differently, cross-linguistic notions are convenient abstractions which ensure comparability, but they are not universal in terms of their actual content. If individual categories are language-specific in this sense, it is essential to provide a detailed discussion of the set of criteria used to identify them. This is now common practice in descriptive grammars, which normally offer a detailed discussion of the grammatical criteria they apply in defining categories. A careful description of the facts then becomes a more important requirement than the choice of labels (Noonan 2006: 358). Grammars usually say what tests they use for identifying subjects and objects. For example, Nikolaeva and Tolskaya (2001: 533−546) discuss the following subject properties in Udihe: nominative case, verbal agreement, control of switch reference, infinitival clauses, reflexivization and secondary predicates, as well as coreferential deletion in the conjoined clause. In Dolakha Newar (Genetti 2007: 308−310) the relevant subject tests include verbal agreement, the choice of relativization strategy, backwards control of anaphora in complementation structures, and reflexives. This cluster of properties is fairly typical, but not necessarily uniform across languages. In fact, even in Udihe some of the relevant properties are shared by non-nominative elements, referred to in the grammar as “subjectoids”; these include experiencer datives and accusative causee arguments in causative constructions. Similarly, in Newar no other syntactic unit exhibits exactly the same cluster of properties as the subject, but two subject properties, the ability to serve as antecedent of reflexives and the choice of relativization strategy, are shared by experiencer datives.
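The kind of property-cluster comparison just described for Udihe and Newar can be sketched as a simple feature matrix. The property names and value assignments below are schematic illustrations, not the actual data from either grammar.

```python
# Schematic feature matrix: which candidate elements show which
# subject-diagnostic properties. All values are invented for illustration.
candidates = {
    "nominative_subject": {"agreement", "reflexive_antecedent",
                           "relativization", "case_nom"},
    "dative_experiencer": {"reflexive_antecedent", "relativization"},
    "accusative_causee":  {"relativization"},
}

def shared_with_subject(element):
    """Which subject properties does a given element share with the subject?"""
    return candidates[element] & candidates["nominative_subject"]

# Dative experiencers come out as "subjectoids": they show a proper
# subset of the subject properties, not the full cluster.
print(sorted(shared_with_subject("dative_experiencer")))
# ['reflexive_antecedent', 'relativization']
```

Such a matrix makes explicit why category membership is gradient rather than binary: elements are ranked by how much of the diagnostic cluster they exhibit.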
So the choice of parameters is an empirical matter, but knowledge of the phenomena that may be relevant to a particular category, based on previous descriptions of other languages or general typological observations, may help in deciding which parameters one should focus on when collecting data. More radically, Gil (2001) and LaPolla and Poa (2006) state that there are no universal categories of grammatical relations instantiated in all languages. Each language is a unique set of grammatical conventions which cannot be reduced to several universal categories, even if some aspects are similar across languages. Many of the criteria traditionally taken as distinctive for subjects identify different argument roles in different languages. For example, Foley (1991: 195−200) shows that in Yimas establishing grammatical relations is not straightforward, because different person/number combinations of the core arguments align grammatical relations in different ways. Similar ideas have been advanced with respect to word classes. Croft (2001) and Cristofaro (2006: 138−140) argue that grammatical classes should be postulated for each language independently of the categories postulated for other languages. Payne (2006: 373) refers to universal word classes as convenient approximations that help readers to understand something important about the language but do not necessarily correspond to fixed categories even within one language, let alone cross-linguistically. They are defined in terms of grammatical properties, but we often have a continuum rather than a clear-cut distinction between neighbouring categories, because each subclass possesses a distinct cluster of (partially overlapping) characteristics. For example, in Kayardild (Evans 1995) parts of speech cannot always be unambiguously established based on clear-cut distributional criteria.
Evans suggests that morphological, syntactic and semantic properties should equally be taken into consideration, even though they do not always converge. In other words, we might find instances of mismatch between various parameters. This conclusion is very much in line with recent lexicalist proposals that categoriality does not rely on one single definitional feature but may be sensitive to different cross-cutting kinds of information, so that “lexical class” means more than just “distributional class” (e.g. Malouf 2000; Spencer 2005). Generally speaking, although Basic Linguistic Theory makes extensive use of universal linguistic categories understood as typological generalizations, it emphasises the need to approach each analytical decision “as an open question” (Dixon 1997: 132). This ensures the correct balance between the universalist and the relativist positions in language description. Cross-linguistic variation in the content of relevant categories is systematically expressed, but an adequate level of comparability is ensured.

3.4. Data collection

The issue of elicited as opposed to naturalistic data has been an object of long-standing debate among fieldworkers. According to one widespread view, grammars should only be based on naturally attested patterns of speech. This is the general recommendation in Payne and Weber (2006), who advise elicitation only for obtaining data on controlled systematic aspects of language such as, for example, morphophonemics or inventories of inflectional and derivational forms. The data for a syntactic description (e.g. constituent order, sentence particles, clause combining, voice, alignment, and pragmatically marked structures such as topicalization) should be extracted from a large body of naturally occurring texts and may be supplemented by elicitation, if necessary. In a number of instances grammars explicitly state that they rely only on materials obtained from spontaneous discourse. This choice sometimes reflects a conscious decision by the author to follow the so-called “usage-based” approach to language, with its emphasis on the intimate relation between linguistic structures and linguistic events, that is, language use (e.g. Barlow and Kemmer 2000). In usage-based models the function of language as a tool of communication is the central motivation for observed grammatical patterns. These patterns are deeply grounded in ongoing human interaction, and grammatical knowledge is built up inductively from usage events and does not exist outside them. Since linguistic structures are so closely tied to usage, both theoretical analysis and language description should be based on observations of language data drawn from natural discourse (corpus data), rather than on examples constructed by a linguist.
The usual procedure is to record naturally occurring texts (conversations and narratives), transcribe them and classify utterances and their parts according to linguistic meta-notions drawn from the vocabulary of Basic Linguistic Theory such as, for example, “relative clauses”, “possessive constructions”, and so on. The emphasis on naturally occurring examples of speech also reflects the growing interest in language documentation, which is nowadays understood as the collection and presentation of primary linguistic data (Himmelmann 1998). Technically speaking, reference grammars do not belong to the realm of language documentation, as they do not present primary data for their own sake but only to the extent that they illustrate a particular grammatical point. However, many grammar writers view their work as a documentary enterprise and therefore aim to present examples obtained from spontaneous discourse. For example, a grammar of Nunggubuyu (Heath 1984) and a grammar of Wolane (Meyer 2006) emphasize that they are based exclusively on corpus data.


This point is important here because it can influence the presentation of syntax. Evans and Dench (2006: 11) suggest that grammars that only use corpus data are likely to overlook that speakers might have clear judgements on complex constructions which rarely if ever occur in textual corpora. No corpus, however large, contains information about all the areas of grammar a linguist might want to explore, and the corpora of little described languages are usually not even very large. Moreover, the method of data collection can affect an analytical decision. On the one hand, elicited data may drastically differ from natural data in terms of word order, which in some cases can lead to radically different analyses of word class systems (see Gil 2001 for an example). On the other hand, the usage-based approach may be insufficient for syntactic argumentation and can lead to questionable solutions. One example of the latter was discussed in Nikolaeva’s (2005) review of the grammar of Kolyma Yukaghir (Maslova 2003). The grammar states that clause-chaining constructions, which involve non-finite verbs and are employed both to conjoin clauses and to modify the proposition rendered by the finite clause, are structurally ambiguous (Maslova 2003: 379−380). The author concludes, much in line with Longacre (1985), that Yukaghir has no strict formal opposition between coordination and subordination. This analysis a priori excludes the possibility that the ambiguity of chains is only apparent and there is an underlying structural difference in how their syntactic relation with the finite clause is construed, even though it is not accompanied by any overt morphological or word order distinctions. According to Maslova (2003: 380), the structural ambiguity is “an essential property of this strategy of clause linking (rather than an artefact of inadequate tools of syntactic analysis)”. However, this conclusion can only be justified if a thorough syntactic analysis is provided. 
Coordinated and subordinated structures are known to exhibit different properties with respect to anaphoric binding, gapping, extractions, and scope of quantifiers (van Oirsouw 1987; Goodall 1987; Haspelmath 1995; Johannessen 1998, and others). These tests are not discussed in the grammar. A possible reason is that such data are difficult to obtain from texts. The grammar mostly uses corpus data, which are only rarely supplemented by elicited examples. However, in the absence of this discussion, the conclusion about ambiguity between coordination and subordination seems premature. The same problem arises with the grammar of Wolane (Meyer 2006), also based on corpus data. Converbial structures represent clause types that reportedly stand between subordinate and main clauses (so-called co-subordination). Syntactic argumentation for this conclusion is not provided and it is not obvious from the suggested translations.

As emphasized in Rice (2006a: 238), information received through elicitation forms a vital part of language and often cannot be obtained otherwise. It therefore deserves to be included in a reference grammar. Rice calls for a balanced approach using both types of data, and this is what normally happens in the practice of grammar writing. Most modern grammars are based on a mixture of data obtained from naturally occurring discourse and elicited data, for example, a grammar of Maybrat (Dol 1999), a grammar of Mosetén (Sakel 2004), and a grammar of Urarina (Olawsky 2006). Typically, the authors use elicitation as a supplementary method. For instance, Donohue (1999: 13) mentions that he started eliciting examples of double applicatives in Tukang Besi only after they were observed in spontaneous texts.


4. Case studies

In this section I present a brief overview of how syntactic topics are addressed in reference grammars, concentrating in particular on relative clauses and differential object marking. I will identify a few specific questions which were raised in these areas by theoretical linguists and which, I feel, deserve more attention in the descriptive literature.

4.1. Relative clauses

Relative clauses exhibit remarkable structural diversity across languages and have played a prominent role in linguistic typology and theoretical syntax. They are addressed in the overwhelming majority of reference grammars since all or most languages exhibit some kind of relativization. Surprising exceptions are a few recent grammars that have appeared in the series Languages of the Greater Himalayan Region. Although the descriptions are meant to be comprehensive, they generally deal with morphology and provide rather limited information on syntax. Relative clauses are not mentioned in the grammar of Lepcha (Plaisier 2007) and the grammar of Sunwar (Borchers 2008). In the grammar of Kulung (Tolsma 2006) attributive participial constructions are briefly discussed but their clausal status remains unclear. The grammar of Urarina (Olawsky 2006: 320−328) states that relativization is expressed by nominalizations, which do not constitute independent clauses. This solution seems to be based on the fact that nominalized constructions are non-finite and do not contain morphological expression of tense and aspect. Syntactic arguments for their non-clausal status are not actually provided. The author does not discuss the behavioural properties of the dependent subject, whether or not relative constructions have independent time reference, conditions on extractions, island constraints, etc.

The cross-linguistic taxonomy of relative clauses is based upon three commonly accepted diagnostic criteria. First, they are classified according to the syntactic relationship between the clause and its semantic head (embedded vs. adjoined; the postnominal vs. prenominal position of the embedded clause). The second common criterion concerns the so-called “grammatical function recoverability strategy”, i.e. the indication of the position of the modified noun in the relative clause (non-reduction, pronoun retention, relative pronouns, and gapping).
The third parameter of classification is related to the syntactic status of the relativized noun within the relative clause. While the first two parameters tend to be systematically addressed in reference grammars as they concern the basics of the relative clause structure, the third parameter is not always discussed at sufficient length. It is well known since Keenan and Comrie (1977) that grammatical functions have different accessibility to relativization and that if a language has several relativization strategies their distribution is not random. This generalization is expressed in the famous Accessibility Hierarchy, where the sign “>” means ‘is more accessible to relativization’.

(2) subject > direct object > indirect object > oblique argument > possessor > object of comparison


The claim is that each relativization strategy covers a continuous segment on the hierarchy and accessibility to relativization is progressively higher on its left periphery. If a language can relativize a given position on the hierarchy, then it can relativize all positions to the left. The problem is that it is sometimes difficult to extract information on accessibility to relativization from reference grammars because they do not describe the relativization of all grammatical functions. Some examples of grammars that do include this information are a grammar of Tzutujil (Dayley 1985), a grammar of Muna (van den Berg 1989), a grammar of Mosetén (Sakel 2004), and a grammar of Lao (Enfield 2007). Many other grammars provide explicit information only on the relativization of the grammatical functions located high on the hierarchy, that is, the subject and the direct object, and do not mention whether relativization of lower functions such as adjuncts, possessors or objects of comparison is possible. This fact is rather surprising, given that the Accessibility Hierarchy is commonly referred to in typology. Cristofaro (2006) attributes this lack to the method of data collection: as mentioned in section 3.4, some grammars are primarily based on corpus data, and constructions relativizing lower functions are relatively infrequent in natural discourse. Textual data may provide no information on the relevant phenomenon and must be supplemented by elicitation. One example is the grammar of Mam (England 1983), which cites examples of the relativization of subjects and direct objects, but does not explicitly state whether other grammatical functions can be relativized. The grammar of Tepehuan (Willett 1991: 234−237) describes in detail the structural differences between restrictive and non-restrictive relatives, but does not address any grammatical constraints on relativization.
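The implicational claim just stated (each strategy covers a continuous segment, and anything to the left of a relativizable position is itself relativizable) is mechanical enough to be checked automatically against a description. The following sketch is purely illustrative; the strategy sets are invented placeholders, not descriptions of any real language:

```python
# Toy check of the Keenan and Comrie (1977) Accessibility Hierarchy
# predictions. The "strategies" below are invented placeholders.

HIERARCHY = ["subject", "direct object", "indirect object",
             "oblique argument", "possessor", "object of comparison"]

def is_contiguous_segment(functions):
    """A relativization strategy should cover a continuous segment of
    the hierarchy (assumes a non-empty set of functions)."""
    positions = sorted(HIERARCHY.index(f) for f in functions)
    return positions == list(range(positions[0], positions[-1] + 1))

def obeys_hierarchy(strategies):
    """True if every strategy is contiguous and the union of
    relativizable functions is a left-anchored prefix of the hierarchy."""
    if not all(is_contiguous_segment(s) for s in strategies):
        return False
    covered = {HIERARCHY.index(f) for s in strategies for f in s}
    return covered == set(range(max(covered) + 1))

# A language whose two strategies jointly cover a left-anchored prefix:
ok = obeys_hierarchy([{"subject", "direct object"},
                      {"indirect object", "oblique argument"}])
# A gap (no indirect-object relativization) violates the prediction:
bad = obeys_hierarchy([{"subject", "direct object"},
                       {"oblique argument", "possessor"}])
```

Here `ok` comes out true and `bad` false: the second language relativizes obliques and possessors but not indirect objects, exactly the kind of gap the hierarchy rules out.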
The grammar of Cubeo (Morse and Maxwell 1999) only mentions relativization of subjects, objects and locative adjuncts. The grammar of Wolane (Meyer 2006) discusses relativization of subjects, objects and some adjuncts, but in the latter case the applicative marker is required on the verb, so we are likely to be dealing with transitivization (this analysis is not explicitly suggested). There is no information on relativization of other functions. In Dolakha Newar (Genetti 2007) subjects, objects, locatives and time adjuncts are relativized, but no information is provided on possessors, objects of comparison, objects of postpositions and other lower functions. Possessor relativization is especially rarely discussed, and so is relativization out of complex constructions. Apart from the grammar of Basque mentioned above, it is only addressed in some detail in the grammar of Koyra Chiini (Heath 1999: 199−201).

The semantics of relative clauses is another issue. Restrictive relative clauses are known to be cross-linguistically more common than non-restrictive ones: there are no languages with only non-restrictive relatives, but the reverse is not true. For instance, according to Noonan (1992) relative clauses in Lango are always restrictive, whereas non-restrictive relatives do not seem to have a grammaticalized counterpart. The same is observed in Supyire (Carlson 1994: 487), where non-restrictive meanings are rendered by mere parataxis. However, grammars do not always provide information on whether non-restrictive clauses are available. The examples cited in most grammars illustrate restrictive relatives. For example, non-restrictive relatives are not mentioned in the otherwise very detailed description of Mina relative clauses (Frajzyngier and Johnston 2005) or in the grammar of Boumaa Fijian (Dixon 1988).
The variety of structural and semantic types of relative clauses raises the question of whether there is such a notion as a universal relative construction and, if so, what its properties are. There are two competing approaches to the analysis of relative clauses in the theoretical literature on syntax, which differ according to their view on where in the structure the modified nominal originates and what its syntactic relation is to the relative clause. In the more standard account, the relative clause is adjoined to the higher projection of the modified nominal. The head of the relative is base-generated outside the relative clause and may be linked to the clause-internal relative phrase through some kind of interpretive mechanism. Under the alternative approach developed in Kayne (1994), the relative clause is a syntactic complement of the determiner head rather than an adjunct. Given the binary branching hypothesis, the modified noun cannot function as the complement of the determiner. Kayne proposes that it is generated internally to the relative clause, from where it raises to the specifier of the respective CP. As was argued by Cinque (2007), cross-linguistic data provided by reference grammars of little studied languages are of primary importance for deciding between these alternatives. Some authors (Carlson 1977; Grosu 1994; Grosu and Landman 1998, and others) distinguish a third semantic type of relative clause, in addition to restrictive and non-restrictive relatives: amount or degree relatives, exemplified in (3).

(3) We will never be able to recruit the soldiers [that the Chinese paraded last May].

The relative clause in (3) is said to contain a phonetically null element designating a quantity or amount, so that the clause can be paraphrased as ‘as many soldiers as the Chinese paraded last May’. Degree relatives differ from restrictive relatives in a number of syntactic ways. In English they show determiner restrictions (they are only combined with strong determiners such as definite articles and universal quantifiers), do not admit wh-pronouns, do not allow extraposition, and have different stacking properties. Crucially, degree relatives normally require a raising analysis by which the head is interpreted inside the relative clause, but it is more questionable whether restrictive relatives are subject to a raising or a matching analysis. A typologically interesting question then is whether all languages have degree relatives as opposed to restrictive relatives. However, at the time of writing I do not know of any reference grammar where degree relatives are addressed. Another typological question which is of relevance for theoretical syntax is the structure of head-internal relatives, namely, whether or not they show indefiniteness restrictions on the internal head and concomitant syntactic effects such as sensitivity to island constraints (Cinque 2007: 102). Again, information on this is never found in reference grammars, even if the language does have head-internal relatives. Cinque concludes that providing relevant information in grammars will bear on the theoretical analysis of relative clauses. Conversely, attention to the findings of formal approaches to syntax may help strengthen the results of typology and grammar writing. Basic Linguistic Theory would greatly improve its syntactic sophistication if it were to follow the results of studies in formal syntax and semantics as closely as it adopts the results of the functional-typological approach to language.


4.2. Differential object marking

Many languages exhibit non-uniform marking targeting objects. Variations can occur within one and the same language with objects of one and the same verb. Such patterns are widely known as differential object marking or DOM (a term introduced by Bossong 1985), and are extensively discussed in Bossong (1991), Næss (2004), de Swart (2007), and Dalrymple and Nikolaeva (2011), among many others. Work by Aissen (2003) has been particularly influential in typologically orientated research. DOM can involve both differential object agreement and differential case marking, if case marking is understood broadly as any type of dependent-marking (case or adpositions). It is known to be a very robust cross-linguistic phenomenon, having been described in greater or lesser detail in the grammars of many languages such as, for example, Mandarin Chinese (Li and Thompson 1981), Palauan (Josephs 1975), Sinhala (Gair and Paolillo 1997), Kashmiri (Wali and Koul 1996), and Meithei (Chelliah 1997). However, the presentation of the phenomenon often avoids the discussion of syntactic issues.

The syntax of DOM has been addressed within the transformational paradigm, where it is generally taken to be related to the phenomenon of object shift. Many analyses of the phrasal syntax assume two distinct positions for two types of objects, VP-internal and VP-external, and postulate a correlation between the position and the grammatical marking (Diesing 1992; Ritter and Rosen 2001; Woolford 1999, 2000, among many others). It is generally assumed that VP-internal objects are syntactically less “visible” than VP-external ones, but the cross-linguistic behaviour of objects in languages with DOM needs more investigation. Unfortunately, grammars do not normally address the position of the marked and the unmarked objects.
Similarly, their behavioural properties are rarely discussed, although it has been observed that in some languages they exhibit different behavioural profiles (Dalrymple and Nikolaeva 2011). Existing descriptions mostly concentrate on the distribution of morphological marking and its functional motivation. For example, in Turkish the object stands in the nominative if it is non-specific or generic; otherwise it must be marked by the accusative case (Kornfilt 1997: 219−220). However, the grammar does not discuss object properties and object position, even though it is known from other work that unmarked objects must be immediately adjacent to the verb while accusative objects need not be (Erguvanli 1984; Butt and King 1996). In the grammar of Udihe (Nikolaeva and Tolskaya 2001) DOM is addressed in the chapter on morphology, where the different factors triggering the optional accusative are described. The syntax of this phenomenon remains unclear: it is unknown whether the marked and unmarked objects have identical properties, whether they are located in the same position, and whether they bear the same grammatical function.

One interesting exception is Genetti’s description of Dolakha Newar (Genetti 2007), which provides an extensive discussion of object properties. In this language objects are either unmarked (in the absolutive case) or marked with the dative -ta. Many languages are known to grammatically distinguish between two classes of objects, either direct objects (patient/theme) vs. indirect objects (recipient), or primary objects (patient/theme of monotransitive verbs and recipient) vs. secondary objects (patient/theme of ditransitive verbs) (Dryer 1986). However, Genetti argues that neither analysis is suitable for Newar. Both types of objects in ditransitive constructions can take the dative case under the appropriate discourse conditions (see example [4]), are the only grammatical arguments that have been found to antecede the emphatic possessive pronouns āme tuŋ ‘his/her own’, and have identical relativization potential.

(4) [Dolakha Newar]
āle āmta bhānche-ta bir-ju
then she.DAT cook-DAT give-3SG.PST
‘Then he gave her (in marriage) to the cook.’ (Genetti 2007: 316)

Genetti concludes that in Newar all object-like arguments appear to be expressed by a single undifferentiated category of object, which differs from subjects and other grammatical functions in a number of properties. This conclusion may seem rather surprising from the perspective of a theory such as LFG, where grammatical functions are assumed to be unique within a single functional structure; that is, doubling of the same function is impossible within the boundaries of a simple clause. Note also that the conditions on case marking differ for the two types of objects. Recipient objects are always marked with the dative, whereas for patient/theme objects the dative is optional and seems to depend on a number of discourse conditions: the patient/theme object is marked if its referent is either human and given in the discourse, or animate (non-human) and occurs in an utterance that is “crucial to the resolution of a narrative plot” (Genetti 2007: 113). For example, one story tells how a son bets with his friends and carefully prepares a trick he will use. The trick, the culmination of his plan, begins with the release of a calf and is described by the sentence ‘Then the son suddenly released the calf’, where the object ‘calf’ stands in the dative. This distinction suggests that we may be dealing with different grammatical functions after all, which appear to behave identically with respect to the tests discussed in the grammar. In any case the grammar offers a genuinely thought-provoking discussion. This also brings us to the question of how the differential marking of objects is motivated. Most previous work on DOM has appealed to referential features of the object, such as animacy, definiteness or specificity, to distinguish marked and unmarked objects. Typologists have demonstrated convincingly that these features, as an inherent part of the semantics of the object NP, often play a role in DOM. 
For instance, the influential proposal by Aissen (2003) is based on the hierarchies of animacy and definiteness. Most transformational work on DOM relies on the premise that object shift and object marking patterns are semantically driven. It is generally assumed that indefinite/nonspecific objects are VP-internal and definite/specific ones are VP-external (Diesing 1992, and others). For example, work by Woolford (1999, 2000) proposes a family of Exclusion Principles which are based on specificity, humanness, animacy, and number. Ritter and Rosen (2001) use a general notion of boundedness which encompasses specificity and definiteness as well as event-boundedness. These criteria are indeed useful in explaining patterns of DOM in languages where objects characterized as semantically “strong” or “definite” show more agreement with the verb or more case-marking than objects without these properties. Semantic factors seem to be sufficient for explaining patterns of DOM in Turkish, where the distribution of marked and unmarked objects is fairly straightforward and has to do with specificity.

Yet these factors do not directly account for languages in which objects with the same referential features can be either unmarked or marked. As mentioned above, in Dolakha Newar the dative marking on the patient/theme object is only required if the object is characterized by some degree of discourse saliency in addition to being animate. In Sinhala (Gair and Paolillo 1997) the accusative marking is only possible on animate objects, but animate objects are sometimes marked and sometimes unmarked. There is no explanation of this pattern in the grammar.

(5) [Sinhala]
mamə miniha(-wə) dækka
I man(-ACC) see.PST
‘I saw the man.’ (Gair and Paolillo 1997: 32)

There are different views on the object marker -ra in Persian. Mahootian (1997) argues that it has more than one function, but that its primary role is to mark definiteness. A much more detailed discussion is provided in Lazard (1992: 183−194), where object marking is said to depend on a number of complex semantic, pragmatic and grammatical conditions. The basic idea is that the entity denoted by the marked object must be individualized: the more individuated it is, the more likely the object is to be marked. This makes object marking available on indefinite nouns if they designate a clearly individuated entity (EZ = ezafe):

(6) [Persian]
dâlân-e derâz-e târik-i-râ peymud
corridor-EZ long-EZ dark-INDF-OBJ run.down.PST
‘He ran down a long dark corridor.’ (Lazard 1992: 185)

Non-specific objects never take -ra, but it remains unclear whether it is required for all specific objects and is therefore a specificity marker. Lazard’s discussion rather suggests that it is not. Seemingly unpredictable variations in DOM can sometimes be explained by reference to information structure, a level of sentence grammar where propositions, as conceptual states of affairs, are structured in accordance with the informational value of sentence elements and contextual factors. As extensively argued in Dalrymple and Nikolaeva (2011), in many languages marked objects are topical, while unmarked objects are not. This is the solution offered for DOM in Tariana (Aikhenvald 2003). Only topical objects receive the marking -nuku/-naku, while non-topical objects remain unmarked if lexical or marked with -na if pronominal. The topical object has to satisfy one of the following conditions: (i) be the topic of the narrative; (ii) be referential, specific and/or definite; (iii) be important for the speaker. It should be noted, however, that while the general insight seems to be correct, the characterisation of topic provided in this grammar seems imprecise. Topicality is usually understood as having to do with the construal of the referent as pragmatically salient (Lambrecht 1994) and cannot be unambiguously established based on referential features. The relationship between topicality and the referential properties of the object in Tariana is not spelled out.

Another issue which grammars do not always resolve is the pronominal status of differential agreement markers. A number of languages, e.g. Lango (Noonan 1992), exhibit incorporated object pronouns. Yet in some languages the status of the object marker on the verb is not entirely obvious. One grammar that addresses this question, albeit briefly, is the grammar of Muna (van den Berg 1989). In this language a combination of the (pronominal) object marker and an overt lexical object is possible in SVO type clauses, although the object marker is optional:

(7) [Muna]
do-fenamisi-e-mo ka-gharo-no taghi
3PL-feel-3SG-PFV NMLZ-hungry-POSS belly
‘They felt their hungry bellies.’ (van den Berg 1989: 165)

In most cases the referent of the object is known and the overt lexical object can be analyzed as an afterthought, but this is not necessarily the case. According to van den Berg (1989: 165), “it is possible that a system of object agreement is gradually coming into existence”. This seems to be a reasonable conclusion, but it needs further investigation, for instance along the lines suggested in Bresnan and Mchombo (1987).
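The language-particular conditions surveyed in this section (Turkish specificity, Dolakha Newar animacy plus discourse saliency, Tariana topicality) can be caricatured as decision rules over an object's referential and discourse features. The following sketch is purely illustrative: the feature names and rule formulations are simplifications of the cited descriptions, not the grammars' own analyses.

```python
# Toy decision rules caricaturing DOM conditions reported in the grammars
# discussed above. Feature keys are invented simplifications.

def turkish_accusative(obj):
    """Kornfilt (1997): nominative if non-specific or generic,
    otherwise accusative."""
    return bool(obj.get("specific")) and not obj.get("generic", False)

def dolakha_newar_dative(obj, role):
    """Genetti (2007): recipients always take the dative; a patient/theme
    takes it if human and discourse-given, or non-human animate and
    crucial to the resolution of the narrative plot."""
    if role == "recipient":
        return True
    return bool((obj.get("human") and obj.get("given"))
                or (obj.get("animate") and not obj.get("human")
                    and obj.get("plot_crucial")))

def tariana_object_marked(obj):
    """Aikhenvald (2003): -nuku/-naku appears on topical objects."""
    return bool(obj.get("narrative_topic") or obj.get("specific")
                or obj.get("definite") or obj.get("important_to_speaker"))

# The released calf from Genetti's plot-culmination example:
calf = {"animate": True, "human": False, "plot_crucial": True}
calf_marked = dolakha_newar_dative(calf, "patient")
```

Here `calf_marked` is true, mirroring the dative-marked ‘calf’ in the story; the point of the sketch is simply that the three rules consult different, language-particular bundles of semantic and discourse features, which is why no single referential hierarchy predicts all three systems.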

5. Conclusion

To sum up, the presentation of syntax in recent reference grammars has mostly been in terms of Basic Linguistic Theory, a cumulative descriptive framework that has acquired its methods and concepts from various sources, from traditional grammar to modern theoretical syntax. The metalanguage it uses is rather informal, which has the advantage of being user-friendly: presumably every reader is able to access the language data independently of his or her training and the theoretical framework in which (s)he is working. Cross-linguistic comparability is ensured by the use of standard terminology and analytical tools, although the idea that the actual content of typologically relevant phenomena/categories may differ is becoming increasingly popular among descriptive linguists. Ongoing research in linguistic theory and language typology does have a considerable impact on Basic Linguistic Theory. Rice (2006a: 259) suggests that changes will continue over time, although this is not necessarily a conscious enterprise.

The role of syntactic theory in language description mainly consists in defining what is considered to be essential for the structure of the language and what questions a fieldworker can ask, as well as in shaping the analysis. Grammar writing requires a good theoretical background, the ability to recognize an analytical problem, and the ability to provide coherent and cohesive argumentation. On the other hand, the data from reference grammars enrich syntactic theory, as they can be used to evaluate possible constraints on human language and extend our understanding of human linguistic capacity. The linguists who work on “explanatory” theories, both “functionalists” and “formalists”, do not always find answers to all the questions they wish to investigate in descriptive grammars. This is understandable, since a grammar cannot address every topic in equal descriptive depth.
Evans (2007) estimates that a theoretically satisfying description of reciprocal constructions is likely to be at least 40 pages long. Judging from the average proportion of pages devoted to reciprocals in existing reference grammars, this means that we would need a grammar of about 9,000 pages. Creating a grammar of this scale is hardly a realistic goal. Nonetheless grammars should aim at a thorough exploration of each syntactic phenomenon, informed by recent advances in theoretical syntax and syntactic typology.

59. Reference Grammars


6. References (selected)

Adelaar, Willem 1977 Tarma Quechua: Grammar, Texts, Dictionary. Lisse: Peter de Ridder Press.
Aikhenvald, Alexandra 2003 A Grammar of Tariana, from Northwest Amazonia. (Cambridge Grammatical Descriptions 3.) Cambridge: Cambridge University Press.
Aissen, Judith 2003 Differential object marking: Iconicity vs. economy. Natural Language and Linguistic Theory 21: 435–483.
Allin, Trevor 1970 A Grammar of Resígaro. 3 vols. Horsley Green, High Wycombe: Summer Institute of Linguistics.
Baker, Mark 1988 Incorporation. Chicago: Chicago University Press.
Baker, Mark 1995 The Polysynthesis Parameter. Oxford: Oxford University Press.
Barlow, Michael, and Suzanne Kemmer (eds.) 2000 Usage-based Models of Language. Stanford, CA: CSLI Publications.
Borchers, Dörte 2008 A Grammar of Sunwar: Descriptive Grammar, Paradigms, Texts and Glossary. (Brill’s Tibetan Studies Library. Languages of the Greater Himalayan Region.) Leiden; Boston: Brill.
Bossong, Georg 1985 Empirische Universalienforschung: Differentielle Objektmarkierung in den Neuiranischen Sprachen. Tübingen: Gunter Narr.
Bossong, Georg 1991 Differential object marking in Romance and beyond. In: Dieter Wanner, and Douglas A. Kibbee (eds.), New Analyses in Romance Linguistics, 143–170. Amsterdam: John Benjamins.
Bresnan, Joan, and Sam Mchombo 1987 Topic, pronoun, and agreement in Chicheŵa. Language 63: 741–783.
Bresnan, Joan, and Jonni Kanerva 1989 Locative inversion in Chicheŵa: a study of factorization of grammar. Linguistic Inquiry 20: 1–50.
Butt, Miriam, and Tracy Holloway King 1996 Structural topic and focus without movement. In: Proceedings of the LFG96 Conference. Stanford: CSLI Publications.
Carlson, Greg 1977 Amount relatives. Language 53: 520–542.
Carlson, Robert 1994 A Grammar of Supyire. (Mouton Grammar Library 14.) Berlin: Mouton de Gruyter.
Chelliah, Shobhana Lakshmi 1997 A Grammar of Meithei. (Mouton Grammar Library 17.) New York: Mouton de Gruyter.
Chomsky, Noam 1965 Aspects of the Theory of Syntax. Cambridge, Massachusetts: MIT Press.
Chomsky, Noam 1977 On WH-movement. In: Peter Culicover, Thomas Wasow, and Adrian Akmajian (eds.), Formal Syntax, 71–132. New York: Academic Press.
Cinque, Guglielmo 2007 A note on linguistic theory and typology. Linguistic Typology 11: 93–106.


IX. Beyond Syntax

Cloarec-Heiss, France 1986 Dynamique et équilibre d’une syntaxe: le banda-linda de Centrafrique. Cambridge: Cambridge University Press et Éditions de la Maison des Sciences de l’Homme pour la Société d’études linguistiques et anthropologiques de France.
Comrie, Bernard, and Norval Smith 1977 Lingua Descriptive Studies: Questionnaire. Lingua 42: 1–72.
Cristofaro, Sonia 2006 The organization of reference grammars: A typologist user’s point of view. In: Felix K. Ameka, Alan Charles Dench, and Nicholas Evans (eds.), Catching Language: The Standing Challenge of Grammar Writing, 137–170. Berlin: Mouton de Gruyter.
Croft, William 2001 Radical Construction Grammar: Syntactic Theory in Typological Perspective. Oxford: Oxford University Press.
Crowley, Terry 1998 An Erromangan (Sye) Grammar. Honolulu: University of Hawaii Press.
Dalrymple, Mary, and Irina Nikolaeva 2011 Objects and Information Structure. Cambridge: Cambridge University Press.
Dayley, Jon P. 1985 Tzutujil Grammar. (University of California Publications in Linguistics 7.) Berkeley; London: University of California Press.
de Swart, Peter 2007 Cross-linguistic variation in object marking. Ph.D. thesis, Radboud University Nijmegen.
Dez, Jacques 1980 Structures de la Langue Malgache: Éléments de Grammaire à l’usage des Francophones. Paris: Publications orientalistes de France.
Diesing, Molly 1992 Indefinites. Cambridge, MA: The MIT Press.
Dixon, Robert M. W. 1988 A Grammar of Boumaa Fijian. Chicago: University of Chicago Press.
Dixon, Robert M. W. 1997 The Rise and Fall of Languages. Cambridge: Cambridge University Press.
Dol, Philomena 1999 A Grammar of Maybrat. (Pacific Linguistics 586.) Canberra: Department of Linguistics, Research School of Pacific Studies, Australian National University.
Donohue, Mark 1999 A Grammar of Tukang Besi. (Mouton Grammar Library 20.) Berlin: Mouton de Gruyter.
Driem, George van 1987 A Grammar of Limbu. (Mouton Grammar Library 4.) Berlin; New York: Mouton de Gruyter.
Dryer, Matthew 1986 Primary objects, secondary objects, and antidative. Language 62: 808–845.
Dryer, Matthew 1997 Are grammatical relations universal? In: Joan Bybee, John Haiman, and Sandra Thompson (eds.), Essays on Language Function and Language Type, 115–143. Amsterdam: John Benjamins.
Dryer, Matthew 2006 Descriptive theories, explanatory theories and Basic Linguistic Theory. In: Felix K. Ameka, Alan Charles Dench, and Nicholas Evans (eds.), Catching Language: The Standing Challenge of Grammar Writing, 207–234. Berlin: Mouton de Gruyter.



Enfield, Nick J. 2007 A Grammar of Lao. (Mouton Grammar Library 38.) New York: Mouton de Gruyter.
England, Nora C. 1983 A Grammar of Mam, a Mayan Language. Austin: University of Texas Press.
Erguvanli, Eser Emine 1984 The Function of Word Order in Turkish Grammar. (University of California Publications in Linguistics 106.) Berkeley: University of California Press.
Evans, Nicholas 1995 A Grammar of Kayardild: with Historical-Comparative Notes on Tangkic. (Mouton Grammar Library 15.) Berlin: Mouton de Gruyter.
Evans, Nicholas 2003 Bininj Gun-wok: a Pan-dialectal Grammar of Mayali, Kunwinjku and Kune (2 vols.). Canberra: Pacific Linguistics.
Evans, Nicholas 2007 Reciprocals, grammar writing, and typology. Romanian Society for Romance Linguistics 7: 479–506.
Evans, Nicholas, and Alan Dench 2006 Catching language. In: Felix K. Ameka, Alan Charles Dench, and Nicholas Evans (eds.), Catching Language: The Standing Challenge of Grammar Writing, 1–39. Berlin: Mouton de Gruyter.
Foley, William 1991 The Yimas Language of New Guinea. Stanford, CA: Stanford University Press.
Frajzyngier, Zygmunt, and Eric Johnston 2005 A Grammar of Mina. (Mouton Grammar Library 35.) Berlin: Mouton de Gruyter.
Gair, James W., and John C. Paolillo 1997 Sinhala. (Languages of the World/Materials 34.) München: LINCOM Europa.
Genetti, Carol 2007 A Grammar of Dolakha Newar. (Mouton Grammar Library 40.) Berlin: Mouton de Gruyter.
Gil, David 2001 Escaping Eurocentrism: Fieldwork as a process of unlearning. In: Paul Newman, and Martha Ratliff (eds.), Linguistic Fieldwork, 102–132. Cambridge: Cambridge University Press.
Glass, Amee, and Dorothy Hackett 1970 Pitjantjatjara Grammar: a Tagmemic View of Ngaanyatjara (Warburton Ranges) Dialect. (Linguistic Series 13. Australian Aboriginal Studies 34.) Canberra: Australian Institute of Aboriginal Studies.
Grosu, Alexander 1994 Three Studies in Locality and Case. London; New York: Routledge.
Grosu, Alexander, and Fred Landman 1998 Strange relatives of the third kind. Natural Language Semantics 6: 125–170.
Haiman, John 1980 Hua, a Papuan Language of the Eastern Highlands of New Guinea. (Studies in Language Companion Series 5.) Amsterdam: John Benjamins.
Harrison, Sheldon 1976 Mokilese Reference Grammar. Honolulu: University of Hawaii Press.
Johannessen, Janne 1998 Coordination. (Oxford Studies in Comparative Syntax.) New York: Oxford University Press.
Josephs, Lewis 1975 Palauan Reference Grammar. Honolulu: University of Hawaii Press.



Haspelmath, Martin 1993 A Grammar of Lezgian. (Mouton Grammar Library 9.) Berlin: Mouton de Gruyter.
Heath, Jeffrey 1984 Functional Grammar of Nunggubuyu. Canberra: Australian Institute of Aboriginal Studies.
Heath, Jeffrey 1999 A Grammar of Koyra Chiini: the Songhay of Timbuktu. (Mouton Grammar Library 19.) Berlin: Mouton de Gruyter.
Himmelmann, Nikolaus 1998 Documentary and descriptive linguistics. Linguistics 36: 161–195.
Hualde, José Ignacio, and Jon Ortiz de Urbina (eds.) 2003 A Grammar of Basque. (Mouton Grammar Library 26.) Berlin; New York: Mouton de Gruyter.
Kayne, Richard 1994 The Antisymmetry of Syntax. Cambridge, Massachusetts: MIT Press.
Keenan, Edward L., and Bernard Comrie 1977 NP accessibility and universal grammar. Linguistic Inquiry 8: 63–100.
Kimenyi, Alexandre 1978 A Relational Grammar of Kinyarwanda. Berkeley; London: University of California Press.
Kornfilt, Jaklin 1997 Turkish. (Descriptive Grammars.) London: Routledge.
Koutsoudas, Andreas 1966 Writing Transformational Grammars: an Introduction. New York/St Louis/San Francisco/Toronto/London/Sydney: McGraw-Hill Book Company.
Lambrecht, Knud 1994 Information Structure and Sentence Form: Topic, Focus, and the Mental Representations of Discourse Referents. Cambridge: Cambridge University Press.
Langdon, Margaret 1970 A Grammar of Diegueño. The Mesa Grande Dialect. (University of California Publications in Linguistics 84.) Berkeley; Los Angeles: University of California Press.
LaPolla, Randy, and Dory Poa 2006 On describing word order. In: Felix K. Ameka, Alan Charles Dench, and Nicholas Evans (eds.), Catching Language: The Standing Challenge of Grammar Writing, 269–295. Berlin: Mouton de Gruyter.
Lazard, Gilbert 1992 A Grammar of Contemporary Persian. Costa Mesa, CA: Mazda Publishers.
LeCron, Mary 1969 The Tarascan Language. Berkeley: University of California Press.
Leech, Geoffrey, and Jan Svartvik 1975 A Communicative Grammar of English. London: Longman.
Lehmann, Christian 1989 Language description and general comparative grammar. In: Gottfried Graustein, and Gerhard Leitner (eds.), Reference Grammars and Modern Linguistic Theory, 133–162. (Linguistische Arbeiten 226.) Tübingen: M. Niemeyer.
Li, Charles N., and Sandra A. Thompson 1981 Mandarin Chinese: a Functional Reference Grammar. Berkeley; London: University of California Press.
Longacre, Robert E. 1985 Sentences as combinations of clauses. In: Timothy Shopen (ed.), Language Typology and Syntactic Description. Complex Constructions, 235–286. Cambridge: Cambridge University Press.



Mahootian, Shahrzad 1997 Persian. (Descriptive Grammars.) London: Routledge.
Malouf, Robert 2000 Mixed Categories in the Hierarchical Lexicon. (Studies in Constraint-Based Lexicalism.) Stanford, CA: CSLI Publications.
Maslova, Elena 2003 A Grammar of Kolyma Yukaghir. (Mouton Grammar Library 27.) Berlin; New York: Mouton de Gruyter.
McLendon, Sally 1975 A Grammar of Eastern Pomo. (University of California Publications in Linguistics 74.) Berkeley: University of California Press.
Meyer, Ronny 2006 Wolane: Descriptive Grammar of an East Gurage Language (Ethiosemitic). Köln: Rüdiger Köppe.
Mithun, Marianne 1976 A Grammar of Tuscarora. (Garland Studies in American Indian Linguistics.) New York: Garland Publications.
Mithun, Marianne 2006 Grammars and community. In: Thomas Payne, and David Weber (eds.), Perspectives on Grammar Writing, 281–306. (Studies in Language 30, 2.) Berlin: Mouton de Gruyter.
Morse, Nancy L., and Michael B. Maxwell 1999 Cubeo Grammar. (Studies in the Languages of Colombia 5.) Summer Institute of Linguistics and The University of Texas at Arlington.
Mosel, Ulrike 2006 Grammaticography: The art and craft of writing grammars. In: Felix K. Ameka, Alan Charles Dench, and Nicholas Evans (eds.), Catching Language: The Standing Challenge of Grammar Writing, 41–68. Berlin: Mouton de Gruyter.
Munro, Pamela 2006 From parts of speech to the grammar. In: Thomas Payne, and David Weber (eds.), Perspectives on Grammar Writing, 307–349. (Studies in Language 30, 2.) Berlin: Mouton de Gruyter.
Næss, Åshild 2004 What markedness marks: The markedness problem with direct objects. Lingua 114: 1186–1212.
Nedjalkov, Vladimir 1995 Some typological parameters of converbs. In: Martin Haspelmath, and Ekkehard König (eds.), Converbs in Cross-linguistic Perspective, 97–136. (Empirical Approaches to Language Typology 13.) Berlin; New York: Mouton de Gruyter.
Nikolaeva, Irina 2005 [Review article of:] E. Maslova, A Grammar of Kolyma Yukaghir. Linguistic Typology 9: 299–325.
Nikolaeva, Irina, and Maria Tolskaya 2001 A Grammar of Udihe. (Mouton Grammar Library 21.) Berlin: Mouton de Gruyter.
Noonan, Michael 1992 A Grammar of Lango. (Mouton Grammar Library 7.) Berlin: Mouton de Gruyter.
Noonan, Michael 2006 Grammar writing for a grammar reading audience. In: Thomas Payne, and David Weber (eds.), Perspectives on Grammar Writing, 351–365. (Studies in Language 30, 2.) Berlin: Mouton de Gruyter.
Olawsky, Knut J. 2006 A Grammar of Urarina. (Mouton Grammar Library 37.) Berlin: Mouton de Gruyter.



Payne, Thomas 1997 Describing Morphosyntax. A Guide for Field Linguists. Cambridge: Cambridge University Press.
Payne, Thomas 2006 A grammar as a communicative act or What does a grammatical description really describe? In: Thomas Payne, and David Weber (eds.), Perspectives on Grammar Writing, 367–383. (Studies in Language 30, 2.) Berlin: Mouton de Gruyter.
Payne, Thomas, and David Weber 2006 Introduction. In: Thomas Payne, and David Weber (eds.), Perspectives on Grammar Writing, 236–243. (Studies in Language 30, 2.) Berlin: Mouton de Gruyter.
Pike, Kenneth L. 1954–1960 Language in Relation to a Unified Theory of the Structure of Human Behavior. 3 vols. The Hague: Mouton.
Plaisier, Heleen 2007 A Grammar of Lepcha. (Brill’s Tibetan Studies Library. Languages of the Greater Himalayan Region 5/5.) Leiden; Boston: Brill.
Rice, Keren 1989 A Grammar of Slave. Berlin; New York: Mouton de Gruyter.
Rice, Keren 2006a Let the language tell the story? The role of linguistic theory in writing grammars. In: Felix K. Ameka, Alan Charles Dench, and Nicholas Evans (eds.), Catching Language: The Standing Challenge of Grammar Writing, 235–268. Berlin: Mouton de Gruyter.
Rice, Keren 2006b A typology of good grammars. In: Thomas Payne, and David Weber (eds.), Perspectives on Grammar Writing, 385–415. (Studies in Language 30, 2.) Berlin: Mouton de Gruyter.
Ritter, Elizabeth, and Sara Rosen 2001 The interpretive value of object splits. Language Sciences 23: 425–451.
Rood, David 1976 Wichita Grammar. New York: Garland Publications.
Rosen, Sara 1989 Two types of noun incorporation: A lexical analysis. Language 65: 294–317.
Sakel, Jeanette 2004 A Grammar of Mosetén. (Mouton Grammar Library 33.) Berlin: Mouton de Gruyter.
Salminen, Tapani 1997 Tundra Nenets Inflection. (Mémoires de la Société Finno-Ougrienne 227.) Helsinki: Finno-Ugric Society.
Spencer, Andrew 2005 Towards a typology of ‘mixed categories’. In: C. Orhan Orgun, and Peter Sells (eds.), Morphology and the Web of Grammar: Essays in Memory of Steven G. Lapointe, 95–138. Stanford: CSLI Publications.
Tolsma, Gerard Jacobus 2006 A Grammar of Kulung. (Brill’s Tibetan Studies Library 5/4. Languages of the Greater Himalayan Region.) Leiden; Boston: Brill.
van den Berg, René 1989 A Grammar of the Muna Language. (Verhandelingen van het Koninklijk Instituut voor Taal-, Land- en Volkenkunde 139.) Dordrecht: Foris.
Van Oirsouw, Robert R. 1987 The Syntax of Coordination. (Croom Helm Linguistics Series.) London: Croom Helm.
Wali, Kashi, and Omkar Koul 1996 Kashmiri: a Cognitive-Descriptive Grammar. New York: Routledge.
Willett, Thomas L. 1991 A Reference Grammar of Southeastern Tepehuan. (Summer Institute of Linguistics Publications in Linguistics 100.) Dallas: Summer Institute of Linguistics.



Woolford, Ellen 1999 Animacy hierarchy effects on object agreement. In: Paul Kotey (ed.), New Dimensions in African Linguistics and Languages, 203–216. (Trends in African Linguistics 3.) Trenton: Africa World Press.
Woolford, Ellen 2000 Object agreement in Palauan: Specificity, humanness, economy and optimality. In: Ileana Paul, Vivianne Phillips, and Lisa Travis (eds.), Formal Issues in Austronesian Linguistics, 215–245. Dordrecht: Kluwer Academic Publishers.

Irina Nikolaeva, London (UK)

60. Language Documentation

1. What is language documentation?
2. Linguistic fieldwork and community involvement
3. Text data in language documentation
4. Annotation and metadata for text data
5. Other materials in language documentation
6. On the relationship between linguistic theory, language description and language documentation
7. References (selected)

Abstract

Language Documentation has developed relatively recently as a subfield of linguistics in response to the challenge of documenting endangered languages in a fieldwork setting and the ethical, methodological and practical issues accompanying such a task. This article provides an overview of recommended standards in the field and of the factors likely to influence methodological decisions in individual documentation projects. It also discusses similarities and differences between the related fields of documentary linguistics and corpus linguistics, and places documentary practices in the wider context of questions of evidence in linguistics.

1. What is language documentation?

The term “Language Documentation” can be interpreted as denoting both a process and a result. In the result reading, language documentation has been defined as a lasting, multipurpose record of a language in the sense of a “comprehensive corpus of primary data which leaves nothing to be desired by later generations wanting to explore whatever aspect of the language they are interested in” (Himmelmann 2006a: 3). In other words, the result of language documentation is a record which is both accessible and likely to be of interest to various potential users – including members of the speech community and their descendants, historians, anthropologists, people involved in education and language planning, and of course linguists with a multitude of different research interests and a variety of theoretical persuasions. In the extreme case, an existing record of the language may form the basis for revitalisation efforts even in the absence of fluent first-language speakers.

In line with accepted standards in the field, for the purposes of this paper the “record” will be conceptualised as a digital archive with a corpus of annotated audio-visual recordings of spoken language (potentially complemented by primary written texts) at its core, together with a lexical database, a variety of additional materials such as edited texts in printable format, subtitled videos, a grammatical sketch, and photographs, and, importantly, information about the content of the archive and the various conventions adopted in the annotation. Such an archive will ideally be in a format and location that ensure both longevity and continuing accessibility for the relevant interested parties.

In the process reading of the term, language documentation refers to the activity of compiling such a record. This process by necessity involves fieldwork, in the sense of interaction of the person or team undertaking the language documentation (who may or may not themselves be members of the speech community in question) with native speakers of the language to be documented, in their speech communities – a process which raises the practical and ethical issues discussed in section 2. It further involves the compilation of a sample of primary data, i.e. recorded instances of actual language use. The nature of these data and the question of representativeness will be addressed in section 3.
Section 4 deals with the annotation of these primary data, i.e. the creation of secondary data such as metadata, transcriptions, and translations into languages of wider communication, which serve to make the record accessible to people other than the original speakers and compilers. Additional materials which may be produced in the course of language documentation or supplement it, and questions of archiving, are addressed in section 5.

Language documentation in the sense outlined above has only relatively recently come to be regarded as a linguistic subdiscipline in its own right (see Himmelmann 2008 for an overview of the history of the field). While language documentation has long been linked with language description in practice, and documentary practices – with their focus on primary data – have been prevalent in the more philological traditions in linguistics, documentation has most frequently been seen as included in, and ancillary to, description, as in the statement: “By documentation I mean grammar, lexicon, and corpus of texts. This is a tradition well proven in the history of linguistics. To this we can now add documentation on audio- and videotape” (Krauss 1992: 8). For some authors, the recognition of Language Documentation as a subfield of linguistics entails the view that this relationship is reversed – i.e. that the task of purposeful compilation and annotation of a representative corpus of primary data could and should be separated from the task of description and analysis, and that in certain cases the first should take priority over the second (e.g. Himmelmann 1998, 2006a). We will return to the question of the relationship between linguistic theory, language description, and language documentation in section 6 (without, however, addressing in detail the issue of writing reference grammars). In a less radical perspective, regarding Language Documentation as a subfield of linguistics entails no more than recognising that the practical, technological, methodological and ethical questions associated with documentation deserve explicit attention and discussion.

This change in the conceptualisation of documentation activities is quite obviously associated with recent changes in technological possibilities. Within the last two decades, it has become possible to manipulate, store and disseminate not only digital text data, but also digital audio and video data, to link the latter with the former, and to undertake sophisticated searches on vast quantities of textual data. Equally importantly, the emergence of language documentation as a subdiscipline of linguistics is strongly linked with a revived interest in linguistic and cultural diversity, as opposed to universalism (Woodbury 2003: 37).

Language Documentation is currently most strongly associated with the documentation of unwritten languages, usually minority languages which are affected by processes of language shift and language loss, since it is in these cases that a comprehensive documentation is perceived as most urgent. For the same reason, current and recent funding initiatives such as the DoBeS Programme funded by the Volkswagen Foundation, and the Endangered Languages Documentation Programme (ELDP) funded by the Hans Rausing Endangered Languages Project, are explicitly restricted to endangered languages, and some government funding agencies, e.g. the NSF in the USA, have recently embraced the urgency of documenting endangered languages in their policies. However, this association of Language Documentation with endangered languages is by no means a necessary one. As Himmelmann (2006a: 5–7) points out, a documentary approach not only serves to create a record of a language which may no longer be spoken in the future, but it also strengthens the empirical basis for research on any language, endangered or not, and thereby increases research economy.
It also increases the verifiability of any claims and analyses put forward about particular characteristics of a language. Again, this holds for research on any language, although it may be perceived as more crucial in the case of a language which is not widely spoken. The second and third of these concerns are shared with the related field of corpus linguistics. However, the technological and methodological framework of corpus linguistics as it is currently understood has been largely developed for corpora of major languages with a tradition of writing and of standardisation, while documentary linguistics has developed around dealing with spoken language data from minority languages. This leads to different priorities and issues in the two fields, which will be addressed throughout this paper.

2. Linguistic fieldwork and community involvement

Research on any aspect of any language or variety, unless it can be carried out on the basis of existing data (e.g. corpora), involves fieldwork in the broad sense of interaction with native speakers of the variety in question, be it with the purpose of recording spontaneous language use, of eliciting particular structures, or of obtaining acceptability judgments. Ideally, even a researcher who is a native speaker of the language would carry out such fieldwork in order to corroborate his or her own intuitions. If the aim of a project is a comprehensive language documentation in the sense introduced in section 1, fieldwork becomes a necessity (although one can, of course, be involved in aspects of language documentation such as annotation without undertaking fieldwork). This is because a comprehensive, multi-purpose record of a language should document the function of language in social interaction, and therefore include interactions between speakers in as many everyday settings as possible – although other, less “natural” speech events are not necessarily to be excluded from a language documentation (see further section 3). This leads to the thorny question – which cannot be addressed here – of how to define a language or variety, and its correlate, a speech community (see also section 2.2).

Linguistic fieldwork in practice can differ vastly depending on the location, the social and political situation, the background of the researcher (in particular, the question of whether they are a member of the speech community or not) and the time frame and budget for the project. Before embarking on a documentation project it is therefore advisable to consult people with research experience in the region in question. Published introductions to linguistic fieldwork vary in their focus on practical issues, interpersonal relationships, or the methodology of linguistic elicitation (some of which are also relevant for a field methods course in the classroom). Two recent and very useful general introductions to fieldwork are Crowley (2007) and Bowern (2008), the former including many insightful discussions and examples of interpersonal dilemmas arising during fieldwork, the latter including sections on a number of practical issues such as grant application writing, technology and archiving. The contributions to Newman and Ratliff (2001) and Thieberger (2011) are likewise highly recommended reading for beginning fieldworkers. Introductions to fieldwork with a specialist focus include Abbi (2001) (with a focus on Indian languages), Storch and Leger (2002) (with a focus on African languages), and Ladefoged (2003) (with a focus on phonetic fieldwork).
The following subsections will address a few of the main issues which are likely to arise during fieldwork.

2.1. Fieldwork team

It is perhaps evident that not every person will be suited to, or interested in, language documentation in general, and fieldwork in particular. In terms of personality and interpersonal skills, fieldwork requires a certain amount of stamina as well as flexibility, the ability to recognise and negotiate differing backgrounds and interests, the ability to communicate one’s expectations, if necessary by creatively exploring various avenues, and the ability to put up with stress, insecurity, potentially culture shock, and, depending on the language and location, also some discomfort. These issues are addressed in works on linguistic fieldwork such as McLaughlin and Sall (2001) or Crowley (2007: 57–61), and, to a greater extent, in the anthropological literature (Duranti 1997).

In terms of linguistic interests and training, the task of language documentation requires a variety of interests and skills, among them recording verb paradigms, working out all the functions of a particular case marker, ensuring that the documentation includes all possible relative clause types, exploring the semantic differences between different positional verbs, describing the kinship system, collaborating with speaker experts and biologists on identifying bird species, and trying to understand mythical narratives, to give just a few examples. If documentation is a collaborative effort rather than relying on a single individual, it is more likely that all these domains will be covered. Currently, documentation projects are often conducted by teams of (native speaker and/or non-native speaker) researchers specialising in fields such as linguistic anthropology, geology, biology, and musicology, in collaboration with speakers who, even if not experts in any of these fields, are trained in some aspect of language documentation. Often, of course, there are severe limits in terms of time and financial resources on this kind of team effort and interdisciplinary endeavour.

In order to compile a representative record of a speech community, the speakers involved will ideally come from a range of age groups, genders, and other social groups. This may raise issues of practicality and appropriateness; for example, it is often not appropriate for someone to work closely with a member of the opposite sex. Issues like these can also be circumvented by teamwork as well as by capacity building within the community. It is important to recognise speakers’ different talents – a speaker who is most able to explain subtle semantic differences may not be the best story-teller, for example (Dimmendaal 2001; Mithun 2001; Mosel 2006a). It may also well be the case that representatives of the speech community have a different view from researchers on who should be involved in the project.

It goes without saying that members of a given community will face different issues from non-members in a fieldwork setting. The advantages are obvious: insiders are already familiar with the people they are likely to work with, and are much less likely to commit blunders than an outsider. The disadvantages are that members of a community will often have obligations going far beyond the project at hand, may find it more difficult to avoid working with people who are not necessarily ideal collaborators, and may, in some cases, not be tolerated when asking direct questions, as they are expected to know the language and culture.
Ameka (2006), in an insightful comparison of the advantages and disadvantages of being a native speaker or non-native speaker linguist, recommends a collaboration of community members and outsiders as potentially resulting in the best possible record of a language.

2.2. Fieldwork ethics and interaction with members of the speech community Today it is a generally accepted principle, embodied in statements by linguistic associations and funding initiatives, that fieldwork needs to be undertaken in accordance with certain ethical principles, based on the view that speakers of the language to be documented are equal partners in such a project and that the latter cannot proceed without their consent, which may indeed be withheld. These ethical principles rule out covert research, and include obtaining research permits according to local procedures, explicit informed consent of the speech community and each individual participant, protection of participants’ privacy, and compensation for their contribution in accordance with local standards. For a documentation project, the question of archiving and dissemination of audiovisual recordings and annotations raises additional issues of intellectual property and copyright. Some of these questions (such as privacy), but by no means all of them, will be raised by an ethics committee in many institutions associated with fieldwork projects. In some cases generic recommendations of ethics committees are not necessarily adequate, e.g. in the case of speakers wishing to be named, rather than anonymized, in association with

2068

IX. Beyond Syntax

archived recordings. And an ethics committee’s insistence on the destruction of any primary data, as reported by Crowley (2007: 54), obviously directly counteracts the goal of language documentation.

Even adherence to the basic ethical principles mentioned above raises thorny issues in practice. First, a “speech community” is rarely a homogeneous group, and it may be difficult to decide who exactly needs to be asked for consent, and who benefits, or is seen as benefitting, from the project (see e.g. Matras 2005; Mosel 2006a: 69; Holton 2009: 169−170). Second, it may not be straightforward to ensure informed consent when the outcome of the documentation and its possible dissemination outlets (a digital archive accessible by internet, examples in linguistic publications, etc.) are not easily explained, e.g. in cultures without a tradition of literacy or in places without internet access. A related question is the actual documentation of consent, since verbal consent in many situations may be more appropriate than written consent (Dwyer 2006: 44). Third, the principle of compensation risks disrupting the socio-economic structure of a community (Dwyer 2006: 38; Rice 2006a: 138), and a first-world documentation team may end up being less than transparent about its project budget because the compensation, while entirely adequate by local standards, may well amount to only a small fraction of the overall budget (Dwyer 2006: 58−59). Moreover, funding agencies, researchers, and local participants may hold different views on what types of activities qualify for compensation.

Going beyond the ethical principles outlined above, today there are usually expectations by fieldworkers themselves, by funding agencies (to different degrees), and of course by the speakers, that researchers will to some extent contribute to the speech community; this is what Cameron et al.
(1992) term the “advocacy” framework as opposed to the merely ethical framework of research. Examples that are often cited (e.g. by Rice 2006a) are the assistance of researchers with the development of an orthography, involvement with questions of language policy, and the production of teaching and literacy materials, learner’s dictionaries, illustrated texts, edited and subtitled videos, or multimedia DVDs (see also section 5.3). In addition, the documentation team has a responsibility for ensuring that the primary data themselves are available and accessible to members of the speech community (Dwyer 2006: 42). Again, while usually whole-heartedly embraced in grant applications and in the general rhetoric of language documentation especially in the endangered languages context, very often the production of such resources falls victim to constraints on finances and participants’ time (Dwyer 2006: 59), or such resources, while produced with the best of intentions, are not really well suited to the needs e.g. of language learners within the community (Rice 2006a: 148). Moreover, the use of the language in literacy and formal teaching may itself be regarded as alien to the culture and may raise ethical questions, e.g. in threatening traditional legal systems based on negotiation, or in making information available to outsiders who are not part of a traditional system of knowledge management. In sum, no standard solution can be advocated for all documentation settings. It is however clear that allowing time for negotiation of interests, collaborative design and capacity building is not only a question of ethical behaviour but will result in a much more productive work flow and a richer documentation. Ethical issues in fieldwork are discussed in more detail in Cameron et al. (1992), Dwyer (2006), Mosel (2006a), Crowley (2007: Ch. 2), Bowern (2008: Ch. 11), Musgrave and Thieberger (2006), and Rice (2006a). 
Ethical questions arising in the context of archiving and dissemination will be further addressed in section 5.4.

60. Language Documentation

2069

2.3. Practical and technological aspects of fieldwork

Undertaking linguistic fieldwork involves many further practical issues, such as applying for funding, deciding on and purchasing equipment, taking medical precautions, and so on. Sufficient time needs to be allowed for the preparation phase − at least one year, if not more. In many cases it will also be advisable to gain some proficiency in a contact language (in the sense of a language of wider communication employed for initial interaction with speakers of the target language).

This last point leads to the question of language choice in a fieldwork setting. As a general rule, fieldwork will be more successful if the members of the documentation team − as far as they are not native speakers already − actually try to learn the language under investigation, whether or not the fieldwork is conducted monolingually (Everett 2001). While monolingual fieldwork can be recommended in many settings, it may be highly inappropriate to insist on monolingual procedures, rather than resorting to a language of wider communication, when documenting a non-standard variety with an in-group character, in a highly multilingual setting, or in a situation of language shift where many members of the speech community do not themselves speak the language fluently and may find it threatening for an outsider to be learning their heritage language.

Moving on to technical issues, the choice of recording equipment will depend on the requirements of the fieldwork situation (e.g. access to electricity, need for portability) and the budget. Standards of technology are changing fast, so only very general recommendations can be given here. Digital audio recorders have by now become quite affordable, and there is a choice of several models which are portable, easy to handle and produce high-quality uncompressed sound files.
A widely accepted standard for audio recordings is 44.1 kHz, 16 bit, encoded as .wav files; these files will generally be accepted by archives. Standards for digital video are less widely agreed on, so it will be important to obtain guidelines on the standards employed by the archives involved. Several reviews of recording hardware (as well as software for processing and annotation) can be found in the Language Archive Newsletter (http://www.mpi.nl/LAN/, up to 2007) and the online journal Language Documentation and Conservation (http://nflrc.hawaii.edu/ldc/, from 2007).
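The recording standard just mentioned can be verified programmatically before files are deposited with an archive. The following sketch uses Python’s standard wave module; the function name and the idea of a pre-deposit check are illustrative assumptions, not part of any archive’s actual workflow.

```python
# Sketch: check that a recording meets the archiving standard discussed
# above (44.1 kHz sample rate, 16-bit samples, uncompressed PCM .wav).
# Function name and defaults are illustrative assumptions.
import wave

def check_archive_standard(path, rate=44100, sample_width_bytes=2):
    """Return a list of problems; an empty list means the file conforms."""
    problems = []
    with wave.open(path, "rb") as w:
        if w.getframerate() != rate:
            problems.append(f"sample rate is {w.getframerate()} Hz, expected {rate} Hz")
        if w.getsampwidth() != sample_width_bytes:
            problems.append(f"sample width is {8 * w.getsampwidth()} bit, "
                            f"expected {8 * sample_width_bytes} bit")
        if w.getcomptype() != "NONE":
            problems.append("audio is compressed; uncompressed PCM is expected")
    return problems
```

A documentation team might run such a check over a whole session directory before deposit, so that non-conforming files can be re-exported from the original recordings rather than transcoded.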

3. Text data in language documentation

It should be obvious by now that what is being documented, in any given language documentation project, is not a language as such, but a selection of communicative events, i.e. parole, not langue, in Saussure’s terms. Thus, a documentation (in the result reading of the term, and including annotations) meets the definition of a linguistic corpus as a “finite-sized body of machine-readable text, sampled in order to be maximally representative of the language variety under consideration” (McEnery and Wilson 2001: 32). In practice, though, documentation corpora will usually differ substantially from the corpora used in mainstream corpus linguistics both in size and in terms of the sample of communicative events that are included.

The main motives for selecting certain communicative events for documentation include (a) their accessibility to the documenter(s), obviously a necessary condition for
documentation, (b) their representativeness of communicative events occurring in the speech community, and (c) their representativeness of the structural possibilities of the language in question. Advocates of language documentation differ in their position on these motives. According to what might be termed the radical approach to documentation, only naturally occurring, i.e. observed, communicative events should be documented, thus fulfilling criterion (b) but not necessarily (c). The resulting documentation would have the closest resemblance to existing corpora of major languages in this respect, since these are indeed usually restricted to naturally occurring communicative events (albeit, in practice, most frequently of the written genre). This contrasts with what might be termed the pragmatic approach, which recognises that in order to derive conclusions about the language structure from a documentation compiled with limited resources, a mixture of data types is needed, including not only observed communicative events, but also elicitation and grammaticality judgments. The fruitfulness of an approach which takes into account both naturally occurring speech and other types of evidence is widely acknowledged by practitioners of language documentation and description (see e.g. Chelliah 2001; Mithun 2001; Rice 2006b) as well as by linguists outside this field (e.g. Wasow and Arnold 2005: 1486).

The following sections provide an overview of the types of data which are potentially included in a language documentation (based largely on Himmelmann 1998), with a brief discussion of their advantages and disadvantages. All methods of obtaining data discussed below (except, obviously, for elicitation by translation) can be applied either in a monolingual fieldwork setting or by relying on a contact language, contrary to what Everett (2001) appears to assume.

3.1. Observed communicative events

Observed communicative events are those communicative events that would have taken place even in the absence of any person documenting them. For the fieldworker or documenter, this means adopting the method of participant observation (at least where texts which are broadcast or written for an anonymous audience − corresponding to the bulk of data in standard major corpora − either do not exist or play a marginal role in the speech community). The problems arising from the observer’s paradox in this context are well known and place severe limits on the documentation. A member of a speech community, or someone with a long-standing association with a speech community working in close collaboration with speakers, will fare best in selecting, and being allowed access to, a wide range of communicative events; but even in that case, considerations of privacy in the case of personal/intimate interactions, and restrictions of certain communicative contexts to members of one gender, age group or social class, will restrict the types of communicative events that can be recorded. Another issue with observed communicative events is that they often do not present the best recording environment, because background noise may be difficult to control.

The same factors can place severe restrictions on the representativeness of a documentation of actual linguistic practices of a speech community (Seifart 2008; Wichmann 2008). One reason for this is that the practical issues associated with accessibility, i.e. the restrictions on time and person-power and specific research interests on the part of
the documentation team, and the degree of openness of speakers to the documentation of a range of communicative events, so often override any abstract methodological considerations. In theory, however, a documentation should follow the Ethnography of Communication approach in order to identify the types of communicative events that are distinguished by members of a given speech community (see e.g. Hymes 1971; Saville-Troike 2003; Franchetto 2006; Hill 2006a; Michael 2011). These may include greetings, public speeches, informal everyday conversations and gossip, stories told to children, requests, sports commentaries, songs, religious ceremonies, etc. These observed communicative events can in turn be divided into categories according to their formality (more or less formal), planning time (more or less spontaneous), and interactivity (more or less interactive); again, a representative documentation would aim to include samples of all of these subtypes.

Decisions on what communicative events to include bear not only on the representativeness but also on the presentation of the culture of the speech community (Hill 2006b). In contrast to standard corpora of major languages, documentation corpora are often not just seen as a linguistic resource but also as a cultural archive (Ostler 2008). Participants in a given documentation project will often have different views on how the culture should be represented, with approaches ranging from a “traditionalist” one, which insists on documenting traditional practices which may in fact no longer be part of everyday life (and thus can only be documented in a “staged” version, see section 3.2), to a “current lifestyle” one, which aims at only including actually occurring practices. Documenting observed communicative events will thus require a lot of negotiation.

3.2. Staged (“elicited”) communicative events

While a representative sample of observed communicative events remains the ideal in language documentation, typical fieldwork data are likely to consist to a large degree of what Himmelmann (1998) terms “staged communicative events”. These are communicative events that would not have taken place if it were not for the presence of an observer. An advantage of resorting to staged communicative events is that they allow the documenter more control over topics, participants and recording environment than observed communicative events. Staged communicative events can be divided into a number of rather divergent subtypes, ordered here according to their distinctness from observed communicative events.

Staged versions of naturally occurring events may include traditional narratives or life histories told to the documenter(s) rather than a wider audience, staged speeches, or mock conversations (note that these speech events are actually often subsumed under “natural”, as opposed to “elicited”, data in linguistic descriptions). Another type of staged communicative event which is frequent in fieldwork-based documentations is the explanation of cultural practices for the benefit of the fieldworker, especially if they are not a member of the speech community. Examples include discourse on kinship and social structure, explanations accompanying artefact production, hunting or cooking procedures, knowledge about local flora and fauna, or explications of sports rules. Although these may be similar to instructions given to a child or adolescent within the community, in many regions the latter learn by observation rather than instruction (see Mosel 2006a:
73−74 for an example), making any explanation a truly “staged” rather than “naturally observed” speech event.

A type of staged communicative event that has enjoyed a lot of popularity among fieldworkers recently is the response to nonverbal stimuli. The use of such stimuli serves to circumvent some of the problems associated with more directly elicited data (see below) while giving the documenter some control over the content and structure of the communicative event. A wide range of stimuli may be used, differing in their degree of formality and their likelihood of eliciting responses that are close in nature to observed communicative events. At one end of the scale, a stimulus may be specific to the cultural context, e.g. a video-recorded observed or staged event that took place within the speech community, played back to speakers to elicit a commentary. At the other end of the scale, stimuli may have been designed (or used) to elicit comparable cross-linguistic data, as in the case of videos such as the Pear Story (Chafe 1980), picture books such as the Frog Story, or sets of pictures such as the QUIS stimuli designed for the elicitation of information structure categories (Skopeteas et al. 2006), with the result that the depicted objects or events may have little relevance in the fieldwork location. Stimuli may further consist of interactive tasks (e.g. a route description task), or of sets of arrangements of actual objects, pictures, smell samples, etc.
As this list shows, stimuli may be culturally relevant and acceptable to different degrees, to the extent that their interpretation poses difficulties (my own attempt at eliciting a Pear Story narrative from an elderly speaker of the northern Australian language Jaminjung was stalled right at the beginning with the close-up of the pear tree, which the speaker, to her frustration, could not name), or that speakers reject the task as not worthy of their attention − a scenario which Berthele (2009) encountered while attempting to elicit Frog Story narratives from speakers of Swiss German dialects. For the sake of representativeness of a language documentation of actual communicative practices it is probably advisable to sacrifice standardisation and cross-linguistic comparability and adapt or create stimuli which are culturally appropriate and relevant (see Hellwig 2006 for an account of the use of such stimuli in semantic fieldwork). Moving closer to the “elicitation” end of the range of staged communicative events, traditional fieldwork data are frequently responses elicited by means of creating a context by verbal means (either in the object language or in a contact language). An example of this is the questionnaire on Tense and Aspect in Dahl (1985: 198−206) where target sentences are embedded in a context which provides information on the temporal relationship between the events in question. The procedure can also be reversed by asking speakers for a context in which they are likely to use a certain expression. This method can be very helpful in working out contrasts in meaning and morphosyntactic behaviour between lexical and grammatical items. Finally, the category of staged communicative events also includes responses elicited by means of translation equivalents in another language. 
A well-known problem associated with elicitation is the possible interference of the contact language on the elicited expression, in other words, the answers may be calques from the contact language which do not necessarily correspond to expressions that would ever be used in spontaneous speech (see e.g. Mithun 2001). Another recurring problem with elicitation is the possibility of misunderstandings due to the lack of competence in the contact language on the part of the researcher(s) and/or the speakers, or their acquaintance with different varieties of the contact language. Still, if one of the goals of a documentation is a representation
of the structural possibilities of the language, a mixture of elicitation by context and elicitation by translation equivalent will often be the easiest (and a relatively unproblematic) way of ensuring that e.g. complete inflectional paradigms are documented, since certain forms may be exceedingly rare in staged communicative events, let alone observed communicative events.

3.3. Edited versions

A genre of communicative events not included in Himmelmann’s (1998) original classification often emerges during the documentation process. This genre consists of written versions of observed or elicited (spoken) communicative events which are edited for publication by, or in close consultation with, members of the speech community. The reason for editing to take place at all is that the spoken version is likely to contain repetitions, hesitation pauses, false starts or code-switching into a regionally dominant language, and is therefore not seen as fit for publication by the consultants. In speech communities without a tradition of writing, such edited versions not only introduce a novel genre into the community but in fact introduce practices of literacy as such, which may or may not be influenced by the writing style of a dominant surrounding culture and/or the fieldworker (see Foley 2003; Mosel 2006a).

3.4. Grammaticality judgments

Grammaticality judgments are the most controversial type of data. On the one hand, they are the only method of obtaining negative data, which makes them indispensable for testing the limits of applicability of certain syntactic constructions, co-occurrence restrictions, or semantic entailments of lexical and grammatical items. On the other hand, there are well-known problems associated with this method. One point of contention concerns the question of whether judgments are dichotomous or may be gradient (Sorace and Keller 2005; Tremblay 2006). Moreover, there is likely to be variation between speakers not only in their judgments (Wasow and Arnold 2005: 1482−1483) but also in their ability and willingness to perform this rather unnatural task.

Another question concerns the nature of the relationship between metalinguistic judgments and grammatical knowledge. As pointed out by Wasow and Arnold (2005: 1484−1485), intuitions about well-formedness do not actually escape the semantic and pragmatic dimensions of language use, but rather constitute a particularly unusual type of language use, which is especially worrying if all contextual factors are left to the imagination and are not controlled. It thus seems fair to state (despite frequent claims to the contrary) that grammaticality judgments measure acceptability rather than grammaticality (Tremblay 2006: 133). This is even more true in an oral culture with no tradition of language standardisation, where it is unlikely that people have ever been exposed to metalinguistic judgments of this type. Such speakers are likely to make judgments based on naturalness, pragmatic felicitousness, or even truth. Exchanges like the following (from Henry 2005: 1604, who was working on a nonstandard variety of English) are probably familiar to every fieldworker:

(1) ‘Does this sentence sound right or not: There’s lots of new people moving in round here.’
    ‘No’
    ‘What would you say?’
    ‘There’s not many people moving in round here. People like this area, they tend to hang on to their houses.’

Moreover, most methods of systematically obtaining grammaticality judgments as discussed in the literature (e.g. Schütze 1996; Wasow and Arnold 2005; Tremblay 2006) rely on written language, and/or on an experimental setup which may be impossible to create in a fieldwork situation or which will skew the results because of the unfamiliarity of the task. Eliciting grammaticality judgments for non-standard varieties of major languages poses similar problems to the documentation of unwritten languages, namely the unsuitability of written questionnaires, and the danger of a judgment on the basis of what is perceived to be correct (i.e. the standard form) rather than what speakers actually say (Henry 2005; Tremblay 2006: 135). The recommendations made by Henry (2005: 1604) for minimising these problems in the case of non-standard varieties can be extended to working with speakers of unwritten languages:

The initial question we have found to produce the best results is simply ‘Could you say ...?’ The modal here indicates a hypothetical situation and reduces the number of sentences rejected on pragmatic or lexical grounds. It is however important, where a sentence is judged to be ungrammatical, to follow up with the question ‘What would you say?’; the way in which a speaker corrects the sentence shows which aspect − syntactic, lexical, pragmatic − the speaker finds problematic, which serves to distinguish rejection on pragmatic or lexical grounds from rejection on the basis of grammatical structure. Where rejection turns out to be apparently on pragmatic or lexical grounds, one cannot of course assume that the original sentence tested is necessarily grammatical from the syntactic point of view; it is necessary to construct further examples excluding the pragmatic or lexical difficulty, and check again with informants.

Should grammaticality judgments be part of a language documentation at all? According to the pragmatic view of language documentation, they should be, to the extent that they are, indeed, documented. As already mentioned above, grammaticality judgments can actually be regarded as a specific type of performance or (metalinguistic) communicative event, albeit one that may never otherwise occur in a speech community. Recording and annotating such judgments (together with triggering questions etc.) in the same way as other types of data may reveal patterns in the variation between speakers that are otherwise missed, and allows one to check details and nuances in speakers’ responses (e.g. hesitation before agreement or disagreement, or apparent fatigue). As Himmelmann (2006a: 8−9) points out, there are other types of metalinguistic knowledge besides grammaticality judgments which can also be documented in this way, e.g. judgments about appropriateness in certain contexts (taboo words etc.), semantic relationships (relationships of taxonomy, antonymy etc.), or the dialect affiliation of pronunciation variants or lexical items.

3.5. Further discussion

An issue not addressed so far is the medium of recording, i.e. the question whether (for spoken language) audio or video recording is preferable. The choice will depend on the
local situation, the preferences of community members, and the budget and capacities of the team. The obvious advantage of video recording is the possibility of capturing aspects of the extralinguistic context as well as the gestures of the participants, since the analysis of certain expressions, e.g. spatial expressions or deictics, relies rather crucially on the availability of information on the context and/or on gestures (see also section 4.5). Apart from these considerations, as a rule of thumb, for observed communicative events and for more contextualised staged events, video recordings will in principle be preferable to audio recordings. In many communities, video recordings, especially when edited and possibly subtitled, will also be much appreciated as immediately accessible documentary materials. On the other hand, video storage is still costly, and compression will result in a considerable loss of quality as well as of portability across platforms, since no reliable long-term standard has yet emerged.

It goes without saying that apart from audio- and video-recordings, primary data may also include written genres (which by their very nature are designed to rely less on context) as well as written representations of unrecorded spoken utterances or texts, either because they are legacy materials from an earlier period or because recording was undesirable or impossible at the time. In current documentation projects, musical performances are also often included; these would require a different set of annotations, outside the scope of the present survey article.

4. Annotation and metadata for text data

4.1. General and technical issues

The task of documenting communicative events of course does not stop at merely recording them (by producing e.g. audio- or video-recordings). If the documentation project concerns a language which is not widely spoken, such a recording would not be understood by most of the people with a potential interest in the language, e.g. linguists, anthropologists, historians, or the general public. Potentially − in the case of a situation of language shift − it would not even be understood by the descendants of the speakers themselves. Therefore, a recording has to be accompanied by further information such as metadata, a transcription, and translations. The term annotation is used here to cover such information which is directly related to primary or raw data (Bird and Liberman 2001; Himmelmann 2006a).

Until recently, discussions of language documentation corpora assumed that, just as in major corpora, annotations would be in written form. More recently, a method of Basic Oral Language Documentation (BOLD) has been pioneered and advocated by some practitioners in the field (e.g. Boerger 2011). The idea behind this is that a documentation may proceed more quickly (and may be more accessible to non-literate users in the speech community) if the original source text is accompanied by spoken repetitions of the text in careful slow speech as well as oral translations into a language of wider currency. The drawback of such an approach is obviously that the resulting annotations are not machine-searchable. In the following subsections, only written annotations will be discussed.

When making decisions about what types of annotation to include, just as in the case of the data to be included in a language documentation (discussed in section 3),
one is quickly faced with discrepancies between the ideal documentation or best record, and practical and financial constraints of feasibility. Deriving annotations from raw data, moreover, is not a trivial task, and one of the merits of the nascent field of language documentation is that of drawing attention to the theoretical and methodological decisions that are involved in the representation of data. On the one hand, an annotation may reduce the information present in the primary data, e.g. in the case of a transcription which does not capture prosody. On the other hand, an annotation may also enrich it, in that it incorporates an analysis of different aspects of the linguistic system underlying the communicative event, e.g. a phonological analysis (in the case of a phonological transcription), a semantic analysis − however preliminary − in the case of glossing and translation, and a grammatical analysis in the case of grammatical annotation.

On a technical level, it is useful to distinguish between character encoding (with Unicode as the standard), data encoding, and file encoding (the digital file format, usually indicated by the file name extension, e.g. .wav, .doc, .txt, .pdf) (Austin 2006: 96). Data encoding or markup concerns the standardized representation of the structure and format of a text for the purpose of the exchange of digitally encoded text. Appropriate markup ensures that all annotations related to a given unit of annotation are associated with one another, and that the type of annotation is unambiguously indicated (e.g. as orthographic transcription or free translation) in the document. This is achieved by conceiving of an annotation as a set of labelled tiers (attribute-value pairs) where the labels or attributes specify the type of information and the tiers or values themselves correspond to the content (Edwards 2001: 327; Austin 2006). The current standard for markup is XML (Extensible Markup Language).
Suggestions for annotations in an XML-compatible format have been developed by the Text Encoding Initiative (TEI Consortium 2007). XML documents are usually created by export from linguistic annotation software (see below) or other editors which are more readable than the actual markup; see Austin (2006: 101−107) for examples. Standard, preferably non-proprietary, formats of file encoding will make it easier for an archive to migrate data, ensuring future accessibility. Further issues of data management are discussed in Thieberger and Berez (2011).

Following brief discussions of the choice of metalanguage for annotation (4.2) and of segmentation (4.3), five main levels of linguistic annotation will be considered here, all of which are to some extent crucial for a language documentation. These are: metadata pertaining to an entire session (4.4), transcription (4.5), translation (4.6), contextual commentary (4.7), and grammatical annotation (4.8). Except for metadata, all of these are discussed in more detail in Schultze-Berndt (2006).
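The tier model described above can be illustrated in a few lines of code. The sketch below represents one annotation unit as a set of attribute-value pairs and serializes it to XML using Python’s standard library; the element names and tier labels are invented for illustration and do not follow the TEI schema or any particular archive’s format.

```python
# Sketch: an annotation unit as labelled tiers (attribute-value pairs),
# serialized to XML. Element names ("unit", "tier") and tier labels are
# illustrative assumptions, not an established markup schema.
import xml.etree.ElementTree as ET

def unit_to_xml(unit_id, tiers):
    """Build a <unit> element with one <tier> child per attribute-value pair."""
    unit = ET.Element("unit", id=unit_id)
    for attribute, value in tiers.items():
        tier = ET.SubElement(unit, "tier", type=attribute)
        tier.text = value
    return unit

tiers = {
    "transcription": "(orthographic transcription goes here)",
    "translation": "(free translation goes here)",
}
xml_string = ET.tostring(unit_to_xml("u1", tiers), encoding="unicode")
```

The resulting markup labels each kind of annotation unambiguously, so that a search tool or archive can tell a transcription tier from a free translation without guessing from the content.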

4.2. Metalanguage(s) used in annotations

The first problem to be addressed in the context of annotations is the choice of the language(s) to be employed e.g. for metadata, translations or commentaries. As stated above, an ideal documentation will be accessible to all potential users; this includes not only academics, but speakers of the documented language and their descendants, and members of the general public in the region where the language is spoken. The documentary linguist therefore not only needs to consider an academic lingua franca or major world language as the metalanguage, but also a regional lingua franca or an official language of the country in question, or a standard language in the case of the documentation of nonstandard varieties or dialects of a language for which a written standard exists.

While it is of course quite possible to combine annotations in more than one language, the cost in terms of the additional time involved is immediately obvious, and correspondingly, multilingual text compilations are still the exception. Factors influencing the decision between the different possibilities include the abilities of the person undertaking the translation and/or the possibility of funding for additional translators, the requirements of the speech community, and the requirements of the organisation funding the research. Today it seems to be assumed by many advocates of language documentation that English should be at least one of the metalanguages used, with the aim of making the documentation accessible to the international academic community. However, this clearly reflects an academic angle on documentation which will need to be balanced with the other considerations mentioned above.

4.3. Segmentation

The first decision to make in transcription is one of segmentation of primary data into different units of annotation. Session is a term that has been adopted for the level of annotation with which metadata are associated; a session corresponds to an individual text in corpus linguistics. On the content level, a session will usually be defined on the basis of unity of time, place, participants and topics, although it is not unusual that several topics are covered within a single session (e.g. a conversation or elicitation session). On the technical level, a session will correspond to a single sound or video file and the corresponding annotation file (or files). For technical reasons − i.e. the resulting file size − it may be necessary to divide what is a single coherent session on the content level into several files, in particular in the case of video data (Austin 2006: 95). In the case of spoken language, the next task is to divide the written text or stream of speech within a session into further units of annotation such as turns, sentences, clauses, or intonation units, and into words. If a recording is available, this can be achieved on a technical level by linking units of annotation to the corresponding units of the media file, using time codes of the latter. In this way, each unit of annotation can be accessed and played back individually. The output of annotation software such as Transcriber, ELAN, CLAN/CHAT and EXMARaLDA consists of annotation files of this type. The resulting annotations are also searchable and can be exported to various formats. In the annotation of spoken language, especially in a language for which there is no written standard, it is recommended to use intonation units (rather than clauses or sentences) as the main unit of annotation.
Intonation units, as opposed to clauses or sentences, can usually be identified even with limited knowledge of the language, and will have syntactic relevance in that they correspond to e.g. clauses or dislocated noun phrase constituents. Intonation units are defined in terms of their coherent intonation contour with at least one pitch accent, delimited by boundary intonation (a rising or falling pitch movement) and pitch reset, and typically, but not always, also by a pause (see Himmelmann 2006b and references therein).


In text annotation software such as ELAN, CLAN/CHAT or Toolbox, turns in non-monological texts are not marked as distinct units; rather, turn-taking is represented by associating intonation units with different speakers. One issue arising here concerns the representation of overlapping speech (McEnery and Wilson 2001: 44−45; Himmelmann 2006b). This can be addressed by associating time-codes with each speaker’s utterances, making other conventions for representing overlap (e.g. brackets) obsolete. The interaction with a researcher who is not a member of the speech community can be treated as a special type of multi-speaker discourse. This implies that the researcher’s part of the interaction is also documented, even if this is done in a more cursory fashion. Documenting the researcher’s questions and comments may help to uncover misunderstandings and mistakes in the translation later on. Below the level of the intonation unit, speech will be divided into words in the transcription, conventionally separated by spaces. Obviously, recognition of phonological words presupposes a phonotactic and (partial) prosodic analysis. Although word boundaries are not easily recognised in connected speech, in the actual practice of linguistic fieldwork the integrity of a lexical word is fairly easily established in most cases: words are those units that can be uttered, and often also translated, in isolation by native speakers. The analysis and hence representation of clitics and function words can create notorious problems, though (see further Himmelmann 2006b).
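The role of time codes described above can be illustrated with a minimal data structure. In the sketch below, the `Unit` class, the sample times and the texts are all invented; the point is that overlapping speech is recoverable from the time codes themselves, so no bracket conventions are needed in the transcript.

```python
# Sketch: annotation units linked to a media file by time codes.
# The Unit class, times and texts are invented for illustration.
from dataclasses import dataclass

@dataclass
class Unit:
    speaker: str
    start: float   # seconds into the media file
    end: float
    text: str

def overlaps(a, b):
    """True if units of different speakers overlap in time."""
    return a.speaker != b.speaker and a.start < b.end and b.start < a.end

units = [
    Unit("A", 0.0, 2.4, "ganma yurruyu"),
    Unit("B", 2.1, 3.0, "yuwayi"),      # starts before speaker A finishes
    Unit("A", 3.2, 4.0, "gurrany"),
]

overlapping = [(a.text, b.text)
               for i, a in enumerate(units)
               for b in units[i + 1:] if overlaps(a, b)]
```

Here the overlap between speaker B's back-channel and speaker A's first unit falls out of the comparison of start and end times alone.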

4.4. Metadata

Metadata are data about data, i.e. catalogue information just like in a library. In a digital archive, metadata are a crucial aspect of making the documentation accessible, and metadata will usually be searchable even if access to the resources themselves is restricted. For this reason, metadata for each session in the text corpus are stored in files that are separate from the primary data (i.e. media files) and annotation files. In addition, it is a good idea to record metadata − a statement about the person recording, the speaker recorded, the place, the date, and possibly the topic of the recording − at the beginning of each recording session, or to include some of the same information at the beginning of an annotation file. The minimal requirements for metadata associated with corpus data are those contained in the OLAC scheme, which builds on the metadata set of the Dublin Core Metadata Initiative (http://dublincore.org/) (see Bird and Simons 2003). A more complex metadata standard is the IMDI scheme developed by the ISLE Metadata Initiative. Both schemes rely on XML as the markup standard. For one’s own housekeeping, it is often easier to keep metadata in structured plain text files or worksheets in a database, but export to XML should be a crucial consideration. Metadata are associated with five different functions (Austin 2006). Cataloguing information is the minimal information required to identify and locate data; this includes the name of the language(s) used in the recording or written text, the place and date of recording/creation, title and file name(s) of the resources associated with the metadata, and the name(s) of the people involved in the recording, creation or annotation of the text (speaker/writer, recorder, transcriber). If speakers wish for their anonymity to be preserved (see section 2.2 on ethics), their identity can be protected by the use of a code name or initials.
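A minimal cataloguing record in Dublin Core style can be sketched as follows. The element names follow general Dublin Core usage, but the record is a simplified illustration rather than a schema-valid OLAC or IMDI file, and all session details are invented.

```python
# Sketch of a minimal cataloguing record in Dublin Core style.
# Simplified illustration only; not a schema-valid OLAC/IMDI record.
# All session details are invented.
import xml.etree.ElementTree as ET

session_metadata = {
    "title": "Conversation about local history",   # hypothetical session
    "creator": "N. N. (speaker, anonymised)",
    "date": "1997-08-14",
    "language": "Jaminjung",
    "coverage": "Northern Territory, Australia",
}

record = ET.Element("record")
for element, value in session_metadata.items():
    ET.SubElement(record, element).text = value

xml = ET.tostring(record, encoding="unicode")
```

Keeping such records in a structured source (here a plain dictionary) and exporting to XML reflects the housekeeping practice mentioned above: the working format can be simple as long as a standard export path exists.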


Descriptive information is information about the contents of the recording session, which can offer different levels of detail. Information about the text genre, for which a set of standardised keywords are included in the IMDI scheme, also falls in this category. Another key feature of the IMDI scheme is the possibility of providing systematic descriptive information on each contributor (speaker, fieldworker, transcriber) in terms of their age, gender, languages spoken, education level, and (in the case of fieldworker and transcriber) experience at the time of creating the resource. Structural information concerns the organisation of files with an internal structure, e.g. several layers of annotation in an annotation file or the structure of entries in a lexical database. Technical information, crucial for ensuring accessibility of the resource, is information about the file format. Finally, indications of access rights and restrictions, intellectual property rights, and work log information (e.g. the date of last modification) fall under administrative information. Metadata are also necessary for all other resources contained in the archive (on which more in section 5), e.g. lexical databases, collections of photographs, maps, etc. While the details of the metadata will vary in these cases, the same types of information are relevant for these other types of data.

4.5. Transcription

The term transcription comprises various types of symbolic representation of the formal or signifier side of spoken language. The issue of transcription obviously does not arise for genuine cases of written communicative events that may be included in the documentation, such as newspaper articles, novels, letters, or graffiti. Written communicative events usually employ an orthographic representation (which may or may not be standardised; in the latter case, a rendition in standardised orthography could be added to the documentation). In the case of spoken language, it will be necessary to decide on the type(s) of transcription to be employed. Possibilities to be considered below are orthographic, phonemic and phonetic transcriptions of segmental information, and transcription of prosody and of paralinguistic phenomena. It is recommended practice that a transcription of segmental information − whether on the orthographic, phonemic or phonetic level − represents as faithfully as possible what is being said. This includes so-called filled pauses, false starts and self-repair, and repetitions (the possibility of creating an edited version of the same text without these features is briefly discussed in section 3.3). It is also recommended that a stretch of speech which is not transcribed because it is not intelligible to the transcriber is marked in the transcript − a common convention is to use the letter ‘x’ for each unintelligible syllable. Still, even a faithful transcript cannot be regarded as a direct, unbiased representation of a communicative event − it is by necessity filtered and influenced by the annotator’s decisions, usually according to his or her “theoretical goals and definitions” (Ochs 1979: 44; Edwards 2001: 321).
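The ‘x’ convention just mentioned lends itself to simple automatic checks. The sketch below counts unintelligible syllables in a transcript line, assuming (as simplifications) that tokens are separated by whitespace and that a run of n ‘x’ characters marks n unintelligible syllables; the sample line is invented.

```python
# Sketch: counting syllables marked as unintelligible with the
# one-'x'-per-syllable convention. Whitespace tokenisation and the
# sample line are simplifying assumptions.
def unintelligible_syllables(transcript):
    """Count 'x' characters in tokens that consist only of 'x'."""
    return sum(len(token) for token in transcript.split()
               if set(token) == {"x"})

line = "ganma x xx yurruyu"                 # invented transcript line
count = unintelligible_syllables(line)      # x + xx = 3 syllables
```

A check of this kind can be run over a whole corpus to report how much material remains untranscribed, which is useful when deciding where further work with speakers is needed.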
If an orthography for the language under investigation is already established and accepted by the speech community, it is virtually an obligation for a documentary linguist to provide an orthographic transcription as part of the annotation, since this greatly adds
to the accessibility of the documentation for the members of the speech community themselves. In cases where no established orthography exists, or where an existing orthography is not acceptable to the current speech community for some reason, the documenter(s) will often be involved in devising a new orthography; this also involves deciding on a script (Seifart 2006; Lüpke 2011). If phonemic contrasts are under-represented in the established orthography or if the orthography has a non-transparent relationship to the phonology in other respects, it may be advisable to provide a phonemic transcription in addition. A phonemic transcription or transliteration in Roman script may also prove practicable if a different script is used; again, the conventions of transliteration will have to be stated explicitly as part of the general information provided with the language documentation. Obviously, the use of a phonemic transcription presupposes at least a preliminary phonological analysis of the language. Some training in the basics of phonetics and phonetic transcription and familiarity with the IPA alphabet (devised by the International Phonetic Association) can be considered essential for anybody undertaking a language documentation. Since a phonetic transcription can be undertaken without prior phonological analysis, it is often the type of transcription used in the initial stages of linguistic fieldwork. Once a phonological analysis has been undertaken, it is not strictly speaking necessary to include a phonetic transcription if the original recording is provided together with the annotation (unless a phonetic analysis is one of the major goals of the documentation project), although it can be useful for further analysis, e.g. by providing information on the distribution of allophones, or dialectal variation.
Another possible use for a phonetic transcription tier is a faithful rendition of variation in pronunciation, allegro forms (fast speech forms), or forms which deviate in other ways from forms used in careful speech. While the orthographic or phonemic transcription may take a standard or careful speech form as its basis (which makes sense if one wants the form to be included in searches), the phonetic tier can represent the actual pronunciation. Whereas an orthographic or phonemic transcription is essential for any language documentation, a prosodic transcription − representing pitch movements, accents, suprasegmental lengthening, and rhythm − is only very rarely included. Not only is this type of transcription very time-consuming, but there is also no standard transcription system for pitch movements on the phonetic level comparable to the IPA system for segmental phonetic transcription (the IPA does, however, include conventions for representing word stress and lengthening). A system of prosodic annotation which is popular in prosodic research in the autosegmental-metrical tradition, ToBI (Tones and Break Indices) and its adaptations, presupposes a phonological analysis of the prosodic system in question. Prosodic annotation in ToBI style can therefore only be undertaken by annotators who are seriously concerned with the prosody of the language in question. A more practical solution for most purposes may be to adopt one of the various related conventions employed for prosodic transcription in conversation analysis, which are all superimposed on the segmental − phonemic or orthographic − annotation. The annotation systems described in Ochs, Schegloff, and Thompson (1996: 461−465) and Couper-Kuhlen (2001), as well as those employed in the CHAT conventions (MacWhinney 2000) and the guidelines published by the TEI Consortium (2007) all belong to this type.
Since prosodic annotation of this type is not primarily undertaken as a prerequisite for prosodic analysis, but rather as a representation of units in spoken language, these annotation systems only include conventions for the representation of boundary contours (e.g. falling or rising), primary accents, and pauses. This minimal type of prosodic annotation, while being relatively easy to produce, can thus be very helpful for an assessment of syntactic structures. A similar comment holds for the transcription of paralinguistic features such as shifts or changes in vocal quality (e.g. whispering or shouting) or speech tempo, nonverbal aspects of the interaction such as coughing, whistling, laughing, and non-vocal or kinesic events such as the slamming of a door (which may or may not have a communicative impact). For many of these, transcription conventions are suggested in the conversation-analytic transcription systems referred to above. For the purposes of most language documentation projects, it will prove too time-consuming to produce a detailed transcription of non-linguistic and paralinguistic aspects of all documented speech events. However, some of these aspects can be transcribed relatively easily and their inclusion can greatly facilitate the understanding of the interaction. These include hesitations and filled pauses (e.g. uhm), laughter (which can be represented by L), and significant changes of vocal quality, such as whispering. Non-linguistic events can often be considered part of the contextual information and may be described in the tier devoted to the contextual commentary (see section 4.7). While many paralinguistic and non-linguistic vocal events can be transcribed relatively easily, the transcription of gesture − although often a very important part of the interaction − is difficult and extremely time-consuming; an introduction is provided by Seyfeddinipur (2011). Obviously, the possibility of annotating gesture crucially depends on the availability of video recordings.

4.6. Translation

The second level of annotation, that of translation, comprises any type of annotation that attempts to capture, in terms of one or more metalanguages, the meaning and function of the communicative event. A translation of the transcribed speech events into a more widely accessible language is essential in the documentation of a less widely known language (see section 4.2 for a discussion of the choice of metalanguage). This is one feature that distinguishes a language documentation corpus from a corpus of a widely spoken language such as English or Japanese, for which often no translation is made available. It is unrealistic to expect translations in language documentation to meet professional standards in the field of translation. Practitioners of language documentation are rarely trained in translation and translation studies, and will often not be able to afford to spend too much time and effort on the translation. Moreover, the task of translation will often be undertaken by speakers of the documented language who may or may not be fluent speakers/writers of the metalanguage, in collaboration with outsiders to the speech community who are only just beginning to learn and appreciate the object language and its cultural background, and who in addition may also not be native speakers of the metalanguage, or entirely fluent in it. The imperfect nature of the resulting translations is not necessarily a problem as long as later users are aware of it, and regard the translations as no more than an aid for the interpretation of the original utterance, rather than a potential basis for analysis.


Apart from the choice of the metalanguage, one choice to be made in translating is the choice between a free translation and a more literal translation. A literal translation remains closer to the structure of the object language and is therefore potentially more helpful for understanding the structure of the language, and less likely to be misleading. A free translation is idiomatic in the metalanguage and therefore more readable for people fluent in this language. It may also be richer in that it incorporates the pragmatic effect of the utterance, and in this respect the translator has to be careful in order not to create a completely different pragmatic effect than the one intended in the original utterance. Another point worth noting is that a free translation tends to assume the stylistic features of written as opposed to spoken language. Of course there is no reason, beyond the additional time required, not to include both a literal translation and a free translation. If a free translation is chosen, a common practice is to provide a translation for larger units of segmentation such as paragraphs instead of translating each individual clause or intonation unit. In the linguistic literature, examples from lesser-known languages often also include a word-by-word or morpheme-by-morpheme translation, known as interlinear glossing. While certain annotation software tools allow for semi-automatic interlinear glossing if a lexical database has been set up, in practice this is still time-consuming. Whether or not to include such a level of translation in a language documentation thus again depends on the purposes and most likely users of the documentation, and on time constraints. Providing interlinear glosses obviously presupposes a certain level of morphological analysis and generalisations about the meaning of lexical items and grammatical morphemes. Existing guidelines for interlinear glossing (Bickel et al. 
2004; Lehmann 2004) also include abbreviations for common grammatical morphemes. While the adherence to such standards facilitates the use of a documentation for linguists, choosing whole words in the metalanguage rather than abbreviations may make the result more accessible for non-linguists. In any case an explanation of all glossing conventions should be included with the documentation (see section 5.1). Finally, a recommended practice is to include any literal translations provided by native speakers. (Alternatively, this can be done by cross-referencing if such translations are documented as communicative events in their own right). This can be very helpful in capturing nuances of meaning and aspects of the context and pragmatics of the utterance that may otherwise escape the attention of the translator or reader. It may also provide a fruitful basis for the study of language contact phenomena. The following example from my own fieldwork on Jaminjung (ES97_A03_04.738), a Western Mirndi language of Northern Australia, illustrates the four different layers of translation briefly discussed above: a) a morpheme-by-morpheme gloss following linguistic conventions, b) a word-by-word gloss for nonlinguists, c) a literal translation, d) a free translation, and e) a speaker’s translation into the contact language, in this case, Northern Australian Kriol. (In actual practice I only use [a], [d] and [e]). The gloss ITER stands for iterative marking.

(2)  Ganma   yurru-yu          gurrany   buja~bujag-mayan    [Jaminjung]
 a.  not.do  1PL.INCL-be.PRS   NEG       DISTR~start-ITER
 b.  not.do  we-are/stay       not       fight-ing
 c.  ‘We (including you) sit down not doing anything, not starting up (a fight)’
 d.  ‘Let’s sit down peacefully, not fighting.’
 e.  ‘Hi telim tubala not tu go na faitabat \ jidan kwait.\’

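The columnar alignment of tiers a. and b. with the object-language line in example (2) is purely presentational and can be produced mechanically. The following sketch pads each word-gloss pair to a common column width; the function name is invented, and the words and glosses are taken from that example.

```python
# Sketch: mechanical columnar alignment of an interlinear example.
# The function name is invented; words/glosses come from example (2).
def interlinearise(words, glosses):
    widths = [max(len(w), len(g)) for w, g in zip(words, glosses)]
    word_line = "  ".join(w.ljust(n) for w, n in zip(words, widths))
    gloss_line = "  ".join(g.ljust(n) for g, n in zip(glosses, widths))
    return word_line.rstrip() + "\n" + gloss_line.rstrip()

words = ["Ganma", "yurru-yu", "gurrany", "buja~bujag-mayan"]
glosses = ["not.do", "1PL.INCL-be.PRS", "NEG", "DISTR~start-ITER"]
aligned = interlinearise(words, glosses)
```

Tools such as Toolbox produce this kind of alignment automatically; the sketch merely shows that the alignment encodes nothing beyond the pairing of each word with its gloss.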
4.7. Contextual commentary and metacommentary

Adding contextual information may be crucial for the interpretation of an utterance by anybody who was not present during the original speech event. Relevant information of this kind may pertain to the entity or event referred to by the speaker, to the addressee and the intended pragmatic effect of the utterance, or to an action of the speaker or other participants accompanying the speech event. Providing contextual information is particularly important when an utterance is not embedded in a longer text which would aid its interpretation. Contextual information can consist of a prose description of the context, but also of links to photographs of some aspect of the speech situation (e.g. an artefact under discussion, or stimuli used in elicitation; see also section 5.3). This information may overlap with, complement or partly replace a transcription of non-linguistic aspects of the interaction (see section 4.5) and also overlap with an ethnographic commentary (see section 5.1). A specific additional type of commentary is recommended for multilingual discourse. If this is a frequent feature of the documented speech events, it is probably worth reserving one tier in a multi-tiered annotation format for an indication of the language used. If code-switching takes place within a single unit of annotation (intonation unit), a markup of the transcription itself indicating which languages are used may aid future users of the documentation. Alternatively, especially when the languages can easily be identified, a note on code-switching in the commentary may suffice. In the actual practice of annotating a recorded speech event, the annotator will often wish to add notes or “metacommentaries” on some aspects of the annotation, such as questions or uncertainties about aspects of the transcription and translation.
Such metacommentaries may be of a preliminary nature since such questions may be resolved in later stages of the documentation process; even if this is not the case, their inclusion will help later users of the documentation to interpret the set of annotations in question. Notes may also include any commentary on an aspect of the recording that is not systematically incorporated into the annotation − for example when prosodic information is not generally transcribed but the annotator wishes to indicate that a particular word was spoken with extra high pitch. A metacommentary annotation can also be used to cross-reference between contrasting or otherwise related utterances (Schultze-Berndt 2006).

4.8. Grammatical annotation

The level of grammatical annotation comprises all annotation related to structural aspects of the object language utterance. Questions of grammatical annotation (tagging and parsing) play a major role in corpora of widely spoken languages (although even in this field the scope of such annotation is often limited by constraints on time and finances). In a documentation of a lesser-studied language, a consistent annotation of grammatical structure, in particular of constituent structure, will very often not be feasible, not only because of time constraints but also because the insights into the morphological and syntactic structure of the language under investigation will only gradually develop. However, grammatical analysis can be incorporated into the documentation in a number of ways. Interlinear glosses, already mentioned in section 4.6, indicate a basic morphological analysis. Part-of-speech tagging can be achieved semi-automatically by lookup in a lexical database at the same time as interlinear glossing (obviously, this presupposes confidence in the assignment of lexical items to parts of speech on the basis of distributional criteria in the first place, which is a non-trivial aspect of analysis). Finally, grammatical keywords or a more full-fledged commentary on the structure in question can be incorporated into the annotation in a dedicated tier (Mosel 2006b; Schultze-Berndt 2006). If keywords are used, it is advisable to employ a controlled vocabulary in order to facilitate later searches. Ideally, the items in the list will also be commented on in a sketch grammar accompanying the documentation alongside a lexical database (see section 5), or at least in a glossary of grammatical terms.
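A minimal sketch of the lexicon-lookup step just described: each word is looked up in a toy, invented lexical database pairing a gloss with a part-of-speech label, and unknown items are flagged for the annotator to resolve manually, as tools like Toolbox or FieldWorks Language Explorer do on a larger scale.

```python
# Sketch of semi-automatic glossing and part-of-speech tagging by
# lexicon lookup. The lexicon, glosses and tags are invented;
# unknown items are flagged for manual review by the annotator.
lexicon = {
    "ganma": ("not.do", "particle"),
    "gurrany": ("NEG", "particle"),
}

def tag(words, lexicon):
    """Return (word, gloss, pos) triples; unknowns marked for review."""
    return [(w,) + lexicon.get(w.lower(), ("???", "UNKNOWN")) for w in words]

tagged = tag(["Ganma", "gurrany", "yurruyu"], lexicon)
```

The flagged items are exactly where the non-trivial analytical work mentioned above happens: deciding on the part of speech of a new item, and adding it to the lexical database so that later lookups succeed.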

4.9. Further discussion

Any documentation project will have to find a balance between the most desirable detail of annotation on the one hand, and on the other hand, the considerable time and effort involved in producing it − especially when there still exists no single software package that would cater for all the needs of documentation and analysis, with the result that time-consuming conversion procedures are part and parcel of the annotation workflow of many documentation projects. While the software packages mentioned in section 4.3 will enable linking to media files during transcription, Shoebox/Toolbox and its successor FieldWorks Language Explorer are still the only software that will undertake semi-automatic morpheme analysis, interlinear glossing and part-of-speech tagging. Phonetic analysis or even display of a pitch track, on the other hand, will only be possible after export or conversion to specialised packages such as Praat. Faced with these problems, it is worth keeping in mind that corpus production may be ongoing, distributed, and opportunistic (Woodbury 2003: 47), in the sense that an initial annotation can be supplemented later by different annotators or users with different goals or research interests. Moreover, it is likely that any initial annotations will be revised in the course of a documentation project. The minimal recommended annotation which will serve to make the documentation accessible to most types of potential users is a transcription in a practical orthography and a free translation in an official language of the region and a world language (Himmelmann 2006a: 13), in addition to the session metadata. This ongoing, distributed, and opportunistic nature of language documentation not only concerns the process of annotation itself but also any outputs derived from the annotation such as edited texts, subtitled videos, or multimedia products (see also section 5.3).

5. Other materials in language documentation

The corpus of primary data with annotations discussed in sections 3 and 4 constitutes the core of a language documentation. A language documentation will, however, often
include other materials such as a lexical database and (as outputs) printed or online dictionaries (section 5.2), additional materials derived from the primary corpus or providing the context for parts of the corpus (section 5.3), or an introductory grammatical description or glossary (see section 4.8). Again, this distinguishes language documentations from corpora of major languages. Standard corpora and documentations alike will however usually contain introductory materials facilitating access to the main resource; these will be discussed in section 5.1.

5.1. General metadata and general access resources

The term general access resources is used here, following Himmelmann (2006a: 13), to cover information that is relevant for the documentation as a whole. This includes information about the family affiliation of the language in question, a basic ethnographic and sociolinguistic description of the speech community, information about the participants and the circumstances of the documentation project, guidance to the structure of the database/archive as a whole, and explanations of the structure of the annotation and lexical database files. Conventions of orthographic representation, other transcription conventions, and abbreviations used in glossing will also be included in this part of the documentation. Finally, bibliographic references may be part of the general access resources. General access resources will most likely take the form of clearly labelled documents in a portable format which can be easily accessed and identified in the overall archive.

5.2. Lexical database / dictionary

In principle, a lexical database or dictionary, just like a descriptive grammar or analysis of a particular syntactic phenomenon, is derived from the primary data through generalisation, in this case, about the form, part of speech membership, meaning, and grammatical properties of lexical items. Thus, writing lexical entries involves phonological (and, depending on the language type, also morphological) analysis as well as syntactic and semantic analysis (in determining a general meaning of the item in question based on its uses in context and further metalinguistic probing). In theory, therefore, a lexical database is not a core part of a language documentation. In practice, however, it usually is, for a number of reasons. First, it is actually beneficial for the documentation process if the documenters are forced to keep track of the basic lexical units of the language. Second, a printed or online dictionary can be derived from the lexical database relatively easily, and this is often one of the outcomes of the documentation process which is regarded as particularly relevant or desirable by members of the speech community. Third, a lexical database, even if it just takes the format of a basic word list with glosses and part of speech information, can be used for the process of semi-automatic interlinear glossing (see section 4.6) and part-of-speech tagging (see section 4.8). Finally, an extended lexical database with (monolingual or bilingual) definitions of words can be an important repository of encyclopaedic and ethnographic information, and the process of creating it an important part of a documentation process which includes members of the
speech community. In this regard, thematic dictionaries (thesauri) which cover specific lexical fields are particularly relevant. A well-structured lexical database can be used to produce a variety of outputs from a single source file, for example a comprehensive alphabetical dictionary, a selective dictionary of basic words e.g. for schools, finder lists in the metalanguages, and a thesaurus. A detailed discussion and reference section on lexicographical issues is beyond the scope of this survey article.
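As an illustration of deriving several outputs from a single source file, the sketch below inverts a toy lexical database into a metalanguage finder list. The entry structure and the keyword extraction (splitting glosses on commas and spaces) are simplifying, invented assumptions.

```python
# Sketch: one lexical database source, several derived outputs.
# Here a metalanguage finder list is built by inverting the
# headword-to-gloss mapping. Entries and keyword extraction are
# invented, simplifying assumptions.
entries = [
    {"lexeme": "buru", "pos": "n", "gloss": "camp, home"},
    {"lexeme": "ganma", "pos": "part", "gloss": "not do"},
]

def finder_list(entries):
    """Invert headword->gloss into keyword->headwords, alphabetically."""
    index = {}
    for entry in entries:
        for keyword in entry["gloss"].replace(",", " ").split():
            index.setdefault(keyword, []).append(entry["lexeme"])
    return dict(sorted(index.items()))

finder = finder_list(entries)
```

The same source could equally feed an alphabetical dictionary or a thematic thesaurus; only the inversion or grouping step differs, which is the practical argument for maintaining a single well-structured database rather than separate output documents.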

5.3. Additional materials

Additional materials may well be included in a language documentation, in a variety of functions and hierarchical positions in the archive. First, there are materials that may be regarded as ancillary to the general access resources (see section 5.1) − e.g. photographs showing the participants of the language documentation and illustrating daily life in the speech community (as far as this is compatible with concerns of anonymity), genealogies, or maps of the area. Non-textual data associated with the archive as a whole may include e.g. results of psycholinguistic tests or phonetic measurements across large datasets. Other materials may be closely associated with a particular session and its annotations, e.g. pictures or videos used as elicitation stimuli (see section 3.2) if not generally accessible elsewhere, photographs taken or drawings produced during the session, or the map of a route described in a particular text. Materials may also be associated with a particular intonation unit or utterance (in which case a reference could be added to the contextual commentary; see section 4.7). Again, this may concern stimuli or photographs, but also the results of instrumental phonetic analysis such as pitch tracks. Finally, outputs derived from corpus data, such as edited printed texts, edited and subtitled videos, or interactive multimedia products could be included in the archive, although long-term archiving of multimedia products is problematic as the formats and software used to create and play them change (Austin 2006: 108). Such outputs can be associated either with a particular session or with the archive as a whole, depending on accessibility concerns and the degree of relationship to the primary data.

5.4. Notes on archiving

A language documentation was defined in section 1 as a lasting, multipurpose record of a language. It can only meet this definition if the recordings, associated annotations and various other materials discussed so far do not simply remain on the hard drives of members of the documentation team. Only an archive can ensure longevity of the data as well as accessibility to all envisaged potential users (even if this is a small group in the case of restricted data, on which more below). A number of (often regional) archives, under the umbrella of the Digital Endangered Languages and Musics Archives Network established in 2003 (http://www.delaman.org/participants.html), specialise in the documentation of minority languages and music. Finding repositories for documentary work on languages or varieties which are not endangered may actually prove more difficult. Academic institutions and research funding agencies are often reluctant to commit to long-term storage of the results of research funded by them. The International Association of Sound and Audiovisual Archives (http://www.iasa-web.org) lists as its members archives around the world which could be approached. Archives may put restrictions on the size of deposits or refuse recordings without annotation; for example, Austin (2006: 100) suggests that unprocessed video data should not be archived. Issues arising in the context of archiving are the proper documentation of all archived materials in terms of metadata (section 4.4), the format and encoding of materials (section 4.1), the internal structure of the archive (usually a hierarchically organised set of files), its user-friendliness (related to the structure as well as the layout and interactivity of the archive interface), and access restrictions. The procedures for granting or restricting access vary from archive to archive; some, for example the DoBeS archive, have a graded system of access distinguishing between general open access, access by permission, and closed access. Severe restrictions on accessibility of materials may be appropriate where certain information (including language or aspects of language use) is restricted to members of a specific ethnic group, descent group, or gender (Rice 2006a: 146−147). Often an outsider does not have the background knowledge to assess the potential or actual political implications of a certain text, for example, the role of a narrative linking certain people to a certain place in asserting land rights to this place (see Crowley 2007: 17−18 for an example).
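A graded access scheme of the kind just described can be thought of as a simple check at retrieval time. The following sketch is purely illustrative; the session records, genre labels, and level names are invented and do not reflect any actual archive's interface:

```python
# Sketch of a graded access policy (open / by permission / closed),
# loosely modelled on the three-way distinction described above.
# Session records and level names are invented for illustration.
LEVELS = {"open": 0, "by_permission": 1, "closed": 2}

sessions = [
    {"id": "S001", "genre": "narrative", "access": "open"},
    {"id": "S002", "genre": "ritual speech", "access": "closed"},
    {"id": "S003", "genre": "conversation", "access": "by_permission"},
]

def visible_sessions(clearance):
    """Return the sessions retrievable by a user with the given clearance."""
    return [s for s in sessions if LEVELS[s["access"]] <= LEVELS[clearance]]
```

A design of this kind keeps the access decision with the archive rather than the depositor's local copies, which is what allows restrictions to be renegotiated later without touching the primary data.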
It is therefore of paramount importance that questions of the location of the archive, access restrictions, and intellectual property rights are included in negotiations with speakers before beginning the recording process, and that informed consent is obtained (see section 2.2). The question of whether highly sensitive data should be archived at all will have to be decided for every documentation project individually (Austin 2006: 100 argues that they should). The documentary team is faced here with a dilemma between safeguarding knowledge for future generations and respecting restrictions on the distribution of knowledge.

6. On the relationship between linguistic theory, language description, and language documentation

It will have become apparent from the discussion in the preceding sections that, as Himmelmann (2006a: 3−4) puts it, the "task of compiling a language documentation is enormous, and there is no principled upper limit for it. Obviously, every specific documentation project will have to limit its scope and set specific targets." We have seen that a documentation requires an in-depth knowledge of the cultural context of the language and of different communication situations and genres, a knowledge of the regional geology, flora and fauna, expertise in choosing suitable recording equipment and relevant annotation software, painstaking attention to detail in creating annotations and metadata that are consistent and suitable for archiving, expertise in creating derived products such as dictionaries, subtitled videos and interactive multimedia products in formats which appeal to the intended users, and creativity in juggling all these diverse tasks, dealing with ethical dilemmas, and overcoming obstacles and difficulties of communication. This is why a language documentation project is best conceived of as a collaboration between members of an interdisciplinary team who enter into a long-term commitment and (if the team includes outsiders to the speech community) work in close collaboration with members of the speech community concerning the scope and purpose of the documentation. The questions that will concern us in the remainder of this section are how much linguistic expertise is required within a documentation project, and how useful language documentations are for linguists who are not part of the documentation team. As already indicated in section 1, linguists involved in language documentation have expressed a variety of views on this matter. On the one hand, it has been argued that documentation, while not excluding analysis (which should still be pursued, not least on the grounds of career opportunities), can be conducted with very limited levels of linguistic analysis (Himmelmann 1998, 2006a). A consequence of such a radical approach, ultimately, would be the irrelevance of linguistic expertise for the purpose of documentation. Others have argued that ongoing analysis − ideally, the process of writing a reference grammar alongside the documentation − is a prerequisite for a successful documentation (Evans 2008). In the pragmatic view of language documentation defended here, the answer lies in the middle. Certainly a valuable and lasting record of a language can be compiled by a non-linguist with a very good knowledge of the practices of a speech community, in particular by a member of the speech community in question. Such a documentation would be restricted to observed communicative events (section 3.1), or perhaps staged versions of authentic communicative events (section 3.2). However, a documentation that aims at a representation of the structural possibilities of the language does well to include other types of data.
The collection of grammaticality judgements (section 3.4) and of data which are not authentic communicative events but have been elicited in one of a number of ways (see section 3.2) will by necessity be informed by theoretical interests and previous analysis. Still, it is crucial for these materials to be complemented by observed communicative events. This is because theoretical concerns change, and a documentation that is limited too much by concentration on certain types of data will not contain examples of phenomena which may well turn out to be of interest to later practitioners in the field (Himmelmann 2006a: 19). For example, documenters may systematically explore all types of relative clauses but ignore the phenomenon of secondary predication. It should also have become obvious that annotation of primary data is not an atheoretical activity but presupposes phonological analysis in the case of a phonemic transcription (section 4.5), and morphological, syntactic and semantic analysis for translation, interlinear glossing and any grammatical annotation (sections 4.6 and 4.8) as well as for the related process of lexicography (section 5.2). This analysis may still be ongoing during the process of documentation, but it will also shape the record itself in that a documentary linguist will attempt to answer open questions by further elicitation or by asking for grammaticality judgments. From a pragmatic perspective, regarding language documentation as a distinct subfield of linguistics thus does not entail separation from analysis, but rather no more than a recognition of the specific practical, technological, methodological and ethical questions associated with the documentation process which deserve explicit attention and discussion.
In the fragmented field of contemporary linguistics, stereotypes persist of lone fieldworkers working with a single speaker of an "exotic language" eliciting paradigms and other basic features of the language uninformed by theoretical advances in the field, of
corpus linguists mostly interested in frequencies and collocations in actual language use, but in practice mostly restricting their research to written language, of conversation analysts spending many hours meticulously transcribing selected spoken conversations, of syntacticians pursuing their favourite theory for their own native language, on the basis of introspection, and of typologists mining existing reference grammars for particular features with varying degrees of insight into the workings of the languages they include in their samples. Unfortunately, such stereotyping often still impedes the dialogue between these fields. In reality, there is much more overlap in methodology than these stereotypes would suggest. Fieldworkers come from many different theoretical persuasions, and today are often speakers of the language they work on; fieldworkers, as we have seen here, also compile corpora (modest in size as they may be) of narratives, conversations and other genres and are faced with methodological issues concerning sampling and annotation similar to those faced by corpus linguists (and, for that matter, researchers in language acquisition and conversation analysts). Fieldwork and grammatical description continue to be informed by current theoretical questions and typological insights (see e.g. Dryer 2006: 211−212). Typologists − whether themselves fieldworkers or not − today are becoming interested in examining linguistic features on the basis of texts rather than the descriptions and decontextualised examples found in reference grammars. Syntacticians are increasingly eager to provide evidence for their claims in terms of controlled and documented grammaticality judgments as well as corpus data (see references in section 3.4), and undertake research on spoken regional varieties. Sophisticated reference grammars such as the Longman Grammar of Spoken and Written English (Biber et al.
1999) incorporate recent insights from corpus linguistics about grammatical differences between different genres, in particular between spoken and written language. In such a less stereotypical conception of the field, principles of documentary linguistics have a relevance beyond the description of endangered languages (and of course, documentary linguistics can itself still profit from a more intensive dialogue with other subdisciplines). For example, the study of sociolinguistic variables or language contact phenomena, especially code-switching, is not possible without reliance on documentary corpora which in this case may be modest in size, restricted to certain types of data (only conversations, or controlled interviews) and based on a small community of practice. In this wider context, the most important of the principles of documentary linguistics is that of accountability: only if a documentation (in the sense of an archived record of annotated primary data with appropriate metadata) is available and accessible can claims about the existence of a particular phonological or grammatical phenomenon in some variety of a particular language be verified and assessed. For example, claims about the existence of a particular word order or freedom of word order in a given language could be verified on the basis of spoken language data which may well reveal that a particular constituent order entails the presence of a prosodic break (e.g. in the case of afterthoughts) or a particular intonation contour associated with topic or focus function. The same principle of accountability would also require authors of research papers or descriptive grammars to distinguish, e.g., between expressions that were uttered spontaneously, elicited by means of translation equivalents, or judged as acceptable in a grammaticality judgment experiment; this still has not become common practice in linguistics. 
Stressing the importance of accountability serves to shift the focus away from language documentation as a discipline mainly aiming at providing a best possible record of endangered languages. As should have become clear in the discussion in this article,
in practice more often than not a comprehensive documentation remains an unattainable ideal when one is faced with the realities of restrictions imposed by ethical concerns, the interests and skills of all the parties involved, time, and budget. However, one could conceive of a field of linguistics where any fieldwork project (in a broad sense, as defined in section 2) was conducted according to documentary principles, even if its main purpose was the analysis of a specific syntactic phenomenon in a given language or variety. This would mean that all resulting data are archived and provided with at least a minimal annotation including metadata about the speakers (e.g. their age and competence in various languages or varieties), and information about the type of data (spontaneous communicative event, elicitation, or grammaticality judgment). This would truly strengthen the empirical basis of linguistics, not only by increasing the verifiability of any claims but also by creating records which, albeit limited in scope, may lend themselves to future uses. The most important contribution of documentary linguistics to the field is thus the wider evidence base provided by documentary records for the formulation and testing of linguistic hypotheses and theories, and the availability of verifiable data which could serve to challenge prevalent assumptions about the nature of language. As pointed out repeatedly throughout this chapter, this stronger empirical base potentially includes not only previously undocumented languages but also contextualised, conversational data for languages which are generally assumed to be well-described. The scope of this chapter does not permit an in-depth discussion of these points. However, it seems no accident that since the establishment of documentary linguistics as a field, an increasing body of research has been exploring the relevance of an interactional setting for "core" grammatical features, based on corpus data.
For example, recent research on evidentials and markers of epistemic stance has pointed to the importance of addressee-oriented and intersubjective epistemics. Similarly, it has been shown that variable or “optional” case marking is often related to information structure, a relationship which can only be established on the basis of interactional data of some sort. Research, in particular comparative research, on features such as these in documentary corpora compiled during the last two decades is only just beginning, and therefore the contribution of documentary linguistics to our knowledge about the true range of linguistic diversity will have to be properly appreciated at a later stage.
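The minimal annotation proposed above — metadata about the speakers plus the type of data — might be represented as a small structured record like the following sketch; all field names and values are invented for illustration and follow no particular metadata standard (such as IMDI or OLAC):

```python
import json

# Sketch of a minimal documentary record for one fieldwork session.
# Field names and values are invented; a real project would follow an
# established metadata schema and the conventions of its target archive.
record = {
    "session_id": "2014-07-03_narrative_01",
    "data_type": "spontaneous communicative event",
    # alternatives: "elicitation", "grammaticality judgment"
    "speakers": [
        {
            "code": "SP1",  # anonymised speaker code
            "age": 62,
            "languages": ["L1: local variety", "L2: national language"],
        },
    ],
}

# Serialising to a plain-text format such as JSON keeps the record
# software-independent, one prerequisite for long-term archiving.
print(json.dumps(record, indent=2))
```

Even a record this small would make explicit, for any later user, who produced the data and under what conditions — which is exactly what the principle of accountability requires.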

7. References (selected)

Abbi, Anvita 2001 A Manual of Linguistic Field Work and Structures of Indian Languages. München: Lincom.
Ameka, Felix K. 2006 Real descriptions: Reflections on native speaker and non-native speaker descriptions of a language. In: Felix Ameka, Alan Dench, and Nicholas Evans (eds.), Catching Language. The Standing Challenge of Grammar Writing, 69−112. Berlin: Mouton de Gruyter.
Austin, Peter 2006 Data and language documentation. In: Jost Gippert, Nikolaus P. Himmelmann, and Ulrike Mosel (eds.), Essentials of Language Documentation, 87−112. Berlin, New York: Mouton de Gruyter.

Berthele, Raphael 2009 The many ways to search for a Frog Story: On fieldworker's troubles collecting spatial language data. In: Jiansheng Guo, Elena Lieven, Nancy Budwig, Susan Ervin-Tripp, Keiko Nakamura, and Seyda Ozcaliskan (eds.), Crosslinguistic Approaches to the Psychology of Language. Research in the Tradition of Dan Isaac Slobin, 163−174. New York: Psychology Press.
Biber, Douglas, Stig Johansson, Geoffrey Leech, Susan Conrad, and Edward Finegan 1999 Longman Grammar of Spoken and Written English. Harlow: Pearson Education Ltd.
Bickel, Balthasar, Bernard Comrie, and Martin Haspelmath 2004 The Leipzig Glossing Rules. Conventions for Interlinear Morpheme by Morpheme Glosses, 1−6. Leipzig: Max Planck Institute for Evolutionary Anthropology. http://www.eva.mpg.de/lingua/files/morpheme.html
Bird, Steven, and Gary Simons 2003 Seven dimensions of portability for language documentation and description. Language 79(3): 557−582.
Bird, Steven, and Mark Liberman 2001 A formal framework for linguistic annotation. Speech Communication 33(1−2): 23−60.
Boerger, Brenda H. 2011 To BOLDly Go Where No One Has Gone Before. Language Documentation & Conservation 5: 208−233.
Bowern, Claire 2008 Linguistic Fieldwork. A Practical Guide. Houndmills and New York: Palgrave MacMillan.
Cameron, Deborah, E. Fraser, P. Harvey, M. B. Rampton, and K. Richardson 1992 Ethics, advocacy and empowerment: Issues of method in researching language. Language and Communication 12(2): 81−94.
Chafe, Wallace 1980 The Pear Stories: Cognitive, Cultural, and Linguistic Aspects of Narrative Production. Norwood, NJ: Ablex.
Chelliah, Shobhana 2001 The role of text collection and elicitation in linguistic fieldwork. In: Paul Newman, and Martha Ratliff (eds.), Linguistic Fieldwork, 152−165. Cambridge: Cambridge University Press.
Couper-Kuhlen, Elizabeth 2001 Intonation and discourse: current views from within. In: D. Schiffrin, D. Tannen, and H. E. Hamilton (eds.), The Handbook of Discourse Analysis, 13−34. Oxford: Blackwell.
Crowley, Terry 2007 Field Linguistics: A Beginner's Guide (posthumously edited by Nick Thieberger). Oxford: Oxford University Press.
Dahl, Östen 1985 Tense and Aspect Systems. Oxford: Basil Blackwell.
Dimmendaal, Gerrit J. 2001 Places and people: field sites and informants. In: Paul Newman, and Martha Ratliff (eds.), Linguistic Fieldwork, 55−75. Cambridge: Cambridge University Press.
Dryer, Matthew S. 2006 Descriptive theories, explanatory theories, and Basic Linguistic Theory. In: Felix Ameka, Alan Dench, and Nicholas Evans (eds.), Catching Language. The Standing Challenge of Grammar Writing, 207−234. Berlin: Mouton de Gruyter.
Duranti, Alessandro 1997 Linguistic Anthropology. Cambridge: Cambridge University Press.

Dwyer, Arienne 2006 Ethics and practicalities of cooperative fieldwork and analysis. In: Jost Gippert, Nikolaus P. Himmelmann, and Ulrike Mosel (eds.), Essentials of Language Documentation, 31−66. Berlin, New York: Mouton de Gruyter.
Edwards, Jane A. 2001 The transcription of discourse. In: Deborah Schiffrin, Deborah Tannen, and Heidi E. Hamilton (eds.), The Handbook of Discourse Analysis, 321−348. Oxford: Blackwell.
Evans, Nicholas 2008 Review of Gippert, Jost, Nikolaus Himmelmann, and Ulrike Mosel (eds.), 2006. Essentials of Language Documentation. Berlin, New York: Walter de Gruyter. Language Documentation & Conservation 2(2): 340−350.
Everett, Daniel 2001 Monolingual field research. In: Paul Newman, and Martha Ratliff (eds.), Linguistic Fieldwork, 166−188. Cambridge: Cambridge University Press.
Foley, William A. 2003 Genre, register and language documentation in literate and preliterate communities. In: Peter Austin (ed.), Language Description and Documentation, Vol. 2: 85−98. London: School of Oriental and African Studies.
Franchetto, Bruna 2006 Ethnography in language documentation. In: Jost Gippert, Nikolaus P. Himmelmann, and Ulrike Mosel (eds.), Essentials of Language Documentation, 183−211. Berlin, New York: Mouton de Gruyter.
Hellwig, Birgit 2006 Field semantics and grammar-writing: Stimuli-based techniques and the study of locative verbs. In: Felix Ameka, Alan Dench, and Nicholas Evans (eds.), Catching Language: The Standing Challenge of Grammar Writing, 321−358. Berlin, New York: Mouton de Gruyter.
Henry, Alison 2005 Non-standard dialects and linguistic data. Lingua 115(11): 1599−1617.
Hill, Jane 2006a The ethnography of language and language documentation. In: Jost Gippert, Nikolaus P. Himmelmann, and Ulrike Mosel (eds.), Essentials of Language Documentation, 113−128. Berlin, New York: Mouton de Gruyter.
Hill, Jane H. 2006b Writing culture in grammar in the Americanist tradition. In: Felix Ameka, Alan Dench, and Nicholas Evans (eds.), Catching Language. The Standing Challenge of Grammar Writing, 609−628. Berlin: Mouton de Gruyter.
Himmelmann, Nikolaus P. 1998 Documentary and descriptive linguistics. Linguistics 36: 161−195.
Himmelmann, Nikolaus P. 2006a Language documentation: What is it and what is it good for? In: Jost Gippert, Nikolaus P. Himmelmann, and Ulrike Mosel (eds.), Essentials of Language Documentation, 1−30. Berlin, New York: Mouton de Gruyter.
Himmelmann, Nikolaus P. 2006b The challenges of segmenting spoken language. In: Jost Gippert, Nikolaus P. Himmelmann, and Ulrike Mosel (eds.), Essentials of Language Documentation, 253−274. Berlin, New York: Mouton de Gruyter.
Himmelmann, Nikolaus P. 2008 Reproduction and preservation of linguistic knowledge: Linguistics' response to language endangerment. Annual Review of Anthropology 37: 337−350.
Holton, Gary 2009 Relatively ethical: A comparison of linguistic research paradigms in Alaska and Indonesia. Language Documentation & Conservation 3(2): 161−175.

Hymes, Dell 1971 Foundations of Sociolinguistics: the Ethnography of Speaking. Philadelphia: University of Pennsylvania Press.
Krauss, Michael 1992 The world's languages in crisis. Language 68(1): 4−10.
Ladefoged, Peter 2003 Phonetic Data Analysis: an Introduction to Fieldwork and Instrumental Phonetics. Oxford: Blackwell.
Lehmann, Christian 2004 Interlinear Morphemic Glossing. In: Geert Booij, Christian Lehmann, Joachim Mugdan, and Stavros Skopeteas (eds.), Morphologie. Ein Internationales Handbuch zur Flexion und Wortbildung, 1834−1857. Berlin: Walter de Gruyter.
Lüpke, Friederike 2011 Orthography development. In: Peter K. Austin, and Julia Sallabank (eds.), The Cambridge Handbook of Endangered Languages, 312−335. Cambridge: Cambridge University Press.
MacWhinney, Brian 2000 The CHILDES Project: Tools for Analyzing Talk. 3rd Edition. Mahwah, NJ: Lawrence Erlbaum Associates.
Matras, Yaron 2005 Language contact, language endangerment, and the role of the 'salvation' linguist. In: Peter K. Austin (ed.), Language Documentation and Description, Vol. 3: 225−251.
McEnery, Tony, and Andrew Wilson 2001 Corpus Linguistics (Second Edition). Edinburgh: Edinburgh University Press.
McLaughlin, Fiona, and Thierno Seydou Sall 2001 The give and take of fieldwork: noun classes and other concerns in Fatick, Senegal. In: Paul Newman, and Martha Ratliff (eds.), Linguistic Fieldwork, 189−210. Cambridge: Cambridge University Press.
Michael, Lev 2011 Language and culture. In: Peter K. Austin, and Julia Sallabank (eds.), The Cambridge Handbook of Endangered Languages, 120−140. Cambridge: Cambridge University Press.
Mithun, Marianne 2001 Who shapes the record: the speaker and the linguist. In: Paul Newman, and Martha Ratliff (eds.), Linguistic Fieldwork, 34−54. Cambridge: Cambridge University Press.
Mosel, Ulrike 2006a Fieldwork and community language work. In: Jost Gippert, Nikolaus P. Himmelmann, and Ulrike Mosel (eds.), Essentials of Language Documentation, 67−86. Berlin, New York: Mouton de Gruyter.
Mosel, Ulrike 2006b Sketch grammar. In: Jost Gippert, Nikolaus P. Himmelmann, and Ulrike Mosel (eds.), Essentials of Language Documentation, 301−309. Berlin, New York: Mouton de Gruyter.
Musgrave, Simon, and Nicholas Thieberger 2006 Ethical challenges in documentary linguistics. In: Keith Allan (ed.), Selected Papers from the 2005 Conference of the Australian Linguistic Society. http://www.als.asn.au
Newman, Paul, and Martha Ratliff (eds.) 2001 Linguistic Fieldwork. Cambridge: Cambridge University Press.
Ochs, Elinor, Emanuel A. Schegloff, and Sandra A. Thompson (eds.) 1996 Interaction and Grammar. Cambridge: Cambridge University Press.

Ostler, Nicholas 2008 Corpora of less studied languages. In: Anke Lüdeling, and Merja Kytö (eds.), Corpus Linguistics: An International Handbook, Vol. 1: 457−484. Berlin, New York: Walter de Gruyter.
Rice, Keren 2006a Ethical issues in linguistic fieldwork: An overview. Journal of Academic Ethics 4: 123−155.
Rice, Keren 2006b Let the language tell its story? The role of linguistic theory in writing grammars. In: Felix Ameka, Alan Dench, and Nicholas Evans (eds.), Catching Language. The Standing Challenge of Grammar Writing, 235−268. Berlin: Mouton de Gruyter.
Saville-Troike, Muriel 2003 The Ethnography of Communication: An Introduction. Oxford: Blackwell.
Schultze-Berndt, Eva 2006 Linguistic Annotation. In: Jost Gippert, Nikolaus P. Himmelmann, and Ulrike Mosel (eds.), Essentials of Language Documentation, 213−251. Berlin, New York: Mouton de Gruyter.
Schütze, Carson T. 1996 The Empirical Base of Linguistics: Grammaticality Judgments and Linguistic Methodology. Chicago: University of Chicago Press.
Seifart, Frank 2006 Orthography development. In: Jost Gippert, Nikolaus P. Himmelmann, and Ulrike Mosel (eds.), Essentials of Language Documentation, 275−299. Berlin, New York: Mouton de Gruyter.
Seifart, Frank 2008 On the representativeness of language documentations. In: Peter K. Austin (ed.), Language Documentation and Description 5: 60−76. London: School of Oriental and African Studies.
Seyfeddinipur, Mandana 2011 Reasons for documenting gestures and suggestions for how to go about it. In: Nicholas Thieberger (ed.), The Oxford Handbook of Linguistic Fieldwork, 147−165. Oxford: Oxford University Press.
Skopeteas, Stavros, Ines Fiedler, Sam Hellmuth, Anne Schwarz, Ruben Stoel, Gisbert Fanselow, Caroline Féry, and Manfred Krifka 2006 Questionnaire on Information Structure (QUIS): Reference Manual. Potsdam: Universitätsverlag Potsdam.
Sorace, Antonia, and Frank Keller 2005 Gradience in linguistic data. Lingua 115(11): 1497−1524.
Storch, Anne, and Rudolf Leger 2002 Die Afrikanistische Feldforschung. Köln: Rüdiger Köppe Verlag.
TEI Consortium 2007 TEI P5: Guidelines for Electronic Text Encoding and Interchange. TEI Consortium. http://www.tei-c.org/Guidelines/P5/
Thieberger, Nicholas (ed.) 2011 The Oxford Handbook of Linguistic Fieldwork. Oxford: Oxford University Press.
Thieberger, Nicholas, and Andrea L. Berez 2011 Linguistic data management. In: Nicholas Thieberger (ed.), The Oxford Handbook of Linguistic Fieldwork, 90−119. Oxford: Oxford University Press.
Tremblay, Annie 2006 Theoretical and methodological perspectives on the use of grammaticality judgement tasks in linguistic theory. Second Language Studies 24(1): 129−167.

Wasow, Thomas, and Jennifer Arnold 2005 Intuitions in linguistic argumentation. Lingua 115(11): 1481−1496.
Wichmann, Ann 2008 Speech corpora and spoken corpora. In: Anke Lüdeling, and Merja Kytö (eds.), Corpus Linguistics: An International Handbook, Vol. 1: 187−206. Berlin, New York: Walter de Gruyter.
Woodbury, Anthony C. 2003 Defining documentary linguistics. In: Peter Austin (ed.), Language Documentation and Description, 35−51. London: SOAS.

Eva Schultze-Berndt, Manchester (UK)

61. Grammar in the Classroom

1. Introduction
2. Teaching grammar: aims and methods
3. Language awareness and linguistic competence
4. Linguistics at school
5. Syntax at school: a linguistic perspective
6. Conclusion
7. References (selected)

Abstract

This chapter discusses the interrelation between academic linguistic research and the teaching of grammar at school from a linguistic point of view. We believe that linguistics mainly contributes to our current knowledge about language as an object of research, whereas didactics is responsible for deciding which of these insights are relevant for school and how they can be employed in a productive manner. We pursue the following two main objectives: (i) modern linguistics has to support didactics by providing accessible and reliable insights into language, and (ii) modern linguistics also benefits from didactics by analyzing pupils' linguistic competences. After discussing the general aims of teaching grammar at school, we introduce one prominent method of teaching grammar, the so-called Grammatik-Werkstatt. Subsequently, we elaborate on different concepts of grammar and linguistic knowledge and outline the cornerstones of modern language and linguistics instruction. Finally, we sketch the relevance of a fundamental knowledge of formal and functional aspects of syntax for teaching grammar and languages at school, in order to show that linguistic phenomena can establish a basis for linguistically informed teaching.

1. Introduction

Why, how and when have always been the most discussed questions in connection with grammar in the classroom. Although these questions have not been completely resolved, a consensus has been reached on the deeper insight that the object of language teaching in the classroom is language itself. The main task of modern language teaching is to develop linguistic abilities and language skills. Generally speaking, school aims at enabling all pupils to use language competently, both cognitively and communicatively. The most general objectives of a comprehensive linguistic education can be summarized according to Frentz and Lehmann (2003): the human being shall be enabled (i) to accomplish all cognitive and communicative tasks that are linguistic in nature, (ii) to master the whole spectrum of the linguistic functional hierarchy in a controlled way, (iii) to pursue cognitive and communicative objectives in accordance with the social and cultural circumstances, including the respective languages, (iv) to reflect on this activity in an adequate way, and (v) to grasp essential principles of a function-driven systematicity. To cope with these requirements, language teaching may not only be directed towards oral and written language use but must also address analytic linguistic reasoning. Nowadays, it is almost common sense that a profound knowledge of grammar and the ability to perform grammatical analysis are − among other things − a basic necessity for successful language use and competent engagement with language, cf. e.g. Becker and Peschel (2006), Köpcke and Ziegler (2007). Nevertheless, which consequences follow from this for systematically teaching grammar at school and university is vividly debated. Hence, there is no consensus on the legitimation and design of grammar teaching at school. Additionally, the curricular profile of teacher training in the area of language and linguistics has not yet been settled.
For instance, the relevance of different theories of grammar for the education of prospective teachers at universities and for teaching grammar and linguistics at school is still controversial. In this chapter we will unfold the interrelation between academic linguistic research and teaching grammar at school and university. To date, this subject has been approached most notably from a didactic perspective. This chapter, however, tackles the problem from a linguistic point of view, and it mainly addresses linguists and didacticians involved in the education of teachers at university, as well as teachers interested in teaching grammar. The chapter does not intend to develop innovative didactic models or to present elaborate concrete course materials, since we think that only didactics has the means and the competences to do so. We argue that, on the one hand, modern linguistics has to support didactics by providing accessible and reliable insights into language, e.g. its systematic structures and the conditions of its use. On the other hand, modern linguistics also benefits from didactics by analyzing the linguistic competences of pupils (for the interrelation between linguistics and didactics, cf. Günther 1998; Rothstein 2010a). Taking German as our main example, we want to show how certain well-investigated linguistic phenomena can establish a basis for linguistically informed teaching at school. Note that we do not claim that empirical and/or theoretical results of linguistic research should be directly mapped to classrooms. We rather believe that linguistics can only contribute current knowledge about language as an object of research, whereas didactics is responsible for deciding which of these insights are relevant for school and how they can be employed in a productive manner. However, teachers at school will be able to make these decisions only if they are well instructed in linguistics. This means
that prospective teachers need a profound knowledge of linguistic data, a good command of the relevant empirical methods of linguistic research, and the ability to work with mainstream linguistic theories. Guaranteeing this is one of the main tasks of teaching linguistics at university. At this point the natural relation between university training in linguistics on the one hand and teaching grammar at school on the other becomes obvious.

There is no doubt that teaching grammar at school relates to many areas of linguistics; following the focus of this volume, we will concentrate on syntactic issues. The chapter takes the German tradition of teaching grammar at school as its central theme because no internationally established, uniform didactics of L1 grammar teaching exists. Although there is relevant work addressing the issue from an English perspective (cf. for instance Hudson 2005; Locke 2010), the differences in history and conception between the traditions of various countries are so great that a systematic comparison of these approaches would require a separate article. International cooperation in the field of L1 grammar teaching is still a desideratum.

This chapter is organized as follows: In the next section, we first discuss the general aims of teaching grammar at school, before we briefly turn to one prominent method of teaching grammar, the so-called Grammatik-Werkstatt (‘grammar workshop’) proposed by Menzel (1999). In section 3, we elaborate on concepts of grammar and linguistic knowledge in linguistics and didactics. In particular, we discuss several aspects of the acquisition of linguistic competence, which has become one of the main criteria of a successful school education in the last few years. In section 4, we outline the cornerstones of modern language and linguistic instruction.
Finally, by means of four case studies, we sketch the relevance of a fundamental knowledge of formal and functional aspects of syntax for teaching grammar and languages at school.

2. Teaching grammar: aims and methods

Teaching grammar has a long and varied tradition going back to the ancient world. For a survey of the history of grammar see for instance Sucharowski (1999), Glinz (2003), and Jungen and Lohnstein (2007). At the very beginning, teaching grammar had a propaedeutic function for the other liberal arts; later on, the focus was on instruction in good language usage and the acquisition of Latin proficiency; finally, the systematic teaching of grammatical knowledge came to the fore. Nowadays, one of the fundamental issues debated in connection with linguistics at school, and in particular with first language teaching, is the question of whether grammar is a subject in its own right and to what extent teaching grammar contributes to the acquisition of language competences (for the development of teaching grammar at school in the last three centuries, cf. Bredel 2007b). On the one hand, Helbig’s (1972) view − according to which grammar is indispensable for everyone, since the grammatical regularities investigated by linguists are a constitutive part of language and must be mastered by every speaker of a language − is not universally accepted in the scientific community. On the other hand, there is almost no one who advocates Ingendahl’s (1999) verdict against teaching grammar at school in general. It is widely accepted that grammar has to be taught at schools and universities, and that linguistics as a scientific discipline has to take the opportunity to
influence what exactly is taught. Nevertheless, the question first raised by Gaiser (1973) of how much grammatical instruction a human being needs has not been conclusively answered (cf. also Eisenberg 2004a). In order to answer this question, it is necessary to clarify the purposes of grammar and of teaching grammar at school. The spectrum of arguments for the legitimation of teaching grammar and its main tasks is, however, very broad. It ranges from developing an understanding of the current state of the language, through training pupils’ productive and receptive skills, to boosting language awareness. The only issue that is widely agreed on is that grammar is a peculiar subject in the sense that pupils are to gain insight into something that they already have, for all practical purposes, at their disposal. In other words, dealing with grammar in the classroom does not involve learning something completely new, but discussing and developing skills pupils already have. Teaching grammar thus always means addressing implicit knowledge of language and making it explicit. Since teaching grammar is, in one way or another, always directed at the acquisition of language and the development of its proficient usage, the question of how explicit knowledge of grammar can be transferred to implicit knowledge and language proficiency, and vice versa, inevitably arises.

2.1. Aims

The objectives of teaching grammar at school can roughly be structured into three groups: (1a) general cultural and educational objectives, (1b) objectives concerning language skills and linguistic knowledge, and (1c) objectives concerning language consciousness and language awareness (cf. Boettcher and Sitta 1978, 1979).

(1)

a. General cultural and educational objectives
− A deeper understanding of the structure of language doubtlessly belongs to our cultural assets. It allows people to develop an awareness of their own identity. Since schools are obligated to disseminate culture, systematic instruction in the grammar of natural human languages must belong to the educational commitments of any school.
− Teaching grammar has an important auxiliary function for other subjects at school.
− Grammar forms the elementary basis for linguistic knowledge as well as for certain intellectual skills. Dealing with the structure of language, for instance, trains abstract and analytic thinking. By means of language analysis, pupils can learn to single out parts of a complex whole and to construct simplified models. They can practice thinking inductively and deductively and reasoning logically.
− Grammar at school permits the introduction of pupils to scientific work by using grammar itself as an area of scientific investigation in which pupils can learn to formulate hypotheses and test them.
− Last but not least, grammar lends itself to introducing the manifold sociological, historical, cultural, biological and psychological aspects of human life.
b. Objectives concerning language skills and linguistic proficiency
− The practical aim of improving the use of language in writing/reading and speaking/listening is the main justification for teaching grammar. Teaching grammar supports the acquisition of the standard language (i.e. standard German at German schools) as well as the development and improvement of communicative competences.
− Knowledge of linguistic structures also enhances the analysis and comprehension of texts, which is one of the key competences of the PISA evaluation.
− Pupils usually have native speaker competence in at least one language and a specific (native) feeling for their native language(s). This should explicitly be addressed in a multilingual classroom in order to reach a better understanding of grammar and of the similarities and differences between the languages (spoken in the classroom). This facilitates the acquisition and development of language skills in, and linguistic knowledge about, L1 and L2.
− Teaching grammar may stimulate pupils’ creative usage of language. They are enabled to use language in a communicatively adequate way, including a more conscious and precise use of linguistic means depending on the respective communicative situation.

c. Objectives concerning language consciousness and language awareness
− Teaching grammar may also stimulate pupils’ linguistic reasoning. Pupils get acquainted with the language system. This includes not only an increase in awareness of the form and function of syntactic constructions, but also reflection on the normative standard as well as on varieties and registers. Pupils shall recognize that language is vague, has exceptions, and undergoes change.
− As already mentioned, pupils usually have a native speaker’s competence in at least one language as well as a metalinguistic awareness.
Teaching grammar picks up pupils’ linguistic knowledge (L1 and L2) and instructs them to reflect on the difference between language as subject matter on the one hand and language as the medium of instruction and as metalanguage on the other. Teaching grammar hence supports the development of pupils’ meta-linguistic knowledge. Guiding pupils to represent linguistic structure in a more formalized way is more than just teaching them terminology and classification procedures.
− Knowledge of the structure of language(s) is a precondition for the ability to analyze and interpret texts. Teaching grammar provides a specification language that enables pupils to parse textual structures and to describe syntactic, semantic and pragmatic properties of texts.
− Awareness of grammar helps pupils to dissect communicative acts and to better understand linguistic manipulation. By analyzing the linguistic aspects of communication, pupils are enabled to gain a certain distance from communicative acts, their causes and their effects. They come to understand the general regularities of communicative processes.


None of the objectives listed can be achieved without teaching grammar in a comprehensive way. Apart from that, language, like physics and history, is certainly an important subject matter in its own right. Analyzing grammatical structures and discovering linguistic regularities is a legitimate pursuit of its own − at university as well as at school.

2.2. Methods

As noted above, explicit knowledge of grammar always ties in with pupils’ existing unconscious linguistic competences. This fact offers the great opportunity to include pupils as language experts in the instruction process at school. Pupils generally have a cognitive interest in language and its usage (cf. for example Menzel 1999; Bredel 2007b). It is one of the teacher’s main tasks to motivate pupils to act as “little researchers”, who try to reveal the components and structures of (their) language. To reach this goal, lessons must be designed in such a way that appropriate formal and functional aspects of linguistic entities are picked out. Pupils can discover language hands-on without being forced to learn lists of grammatical terminology. However, such pupil-led linguistic research also needs systematic support from the teacher.

There are various well-known strategies of teaching grammar: grammar can be taught, among others, in a systematic, inductive, situational, or integrative way. These approaches can be combined, and each has specific advantages and disadvantages. A well-established method that includes pupils as little researchers is the Grammatik-Werkstatt (cf. Menzel 1999). One of its main aims is that pupils become little linguists, who observe, describe, compare, categorize, and systematize linguistic data in order to gain deeper insight into the structure of language and to reveal its implicit internal system. Hence, pupils reconstruct formal and functional aspects of the grammatical system they use day to day; they learn not only what this system can be used for but also how it is designed. Eisenberg and Menzel (1995) discuss three reasons that support an inductive, experimental method of teaching grammar at school:

(2)

a. Psychological reason: Pupils memorize facts about grammar and language much more easily if they learn these facts on the basis of their own experience.
b. Educational reason: Pupils should be enabled to independently scrutinize and verify all facts about grammar and language they have learned in the classroom.
c. Epistemological reason: The way pupils learn facts about grammar and language should be consistent with the way humans have learned these facts throughout the course of history.

Consequently, teaching materials must not (only) provide deductive training in grammatical categories introduced by the teacher but (mainly) support the inductive development of these categories by pupils (see, for example, Menzel 1999, 2002a). These materials should address formal and functional aspects of grammatical categories. All examples and tasks must comply with the linguistic competences and the language experience of pupils at a specific level of education. Moreover, they should also consider different varieties and registers of spoken and written language. The basic idea of the Grammatik-Werkstatt has, of course, also several shortcomings and must therefore be enhanced (cf. Ingendahl 1999; Ossner 2000, 2006b). It should (i) offer a broader range of grammatical tests designed for all levels of grammar, including semantics and pragmatics, (ii) be supplemented by systematic deductive methods, (iii) address additional linguistic topics and perspectives, (iv) use additional methods such as computer-based data collection and evaluation, and (v) implement new multimedia technologies (cf. also section 4). A new project that ties in with and further develops the tradition of the Grammatik-Werkstatt is Taaltrotters Abroad. This international educational multimedia project about multilingualism and language awareness uses various media and addresses new linguistic topics such as language typology, multilingualism, sign language, and youth language (for the German version, cf. Pfau et al. 2008; Antomo and Steinbach 2009; Antomo, Hübl, and Steinbach 2010).

3. Language awareness and linguistic competence

As a result of the general educational turn towards an output orientation, the acquisition of linguistic competence has become one of the main criteria for successful school teaching. It is hence necessary to clarify what exactly is meant by requiring pupils to gain linguistic competence. In this section we discuss several issues concerning the concept of linguistic competence. We show that the analysis of an individual’s linguistic competence requires a systematic distinction between his/her language ability on the one hand and his/her knowledge of language on the other.

3.1. Towards the concepts of grammar and linguistic knowledge in linguistics and didactics

In modern linguistics, at least four different readings are related to the concept of grammar: (i) the systematic rules and regularities that are immanent to language itself, (ii) the system of rules, principles, constraints, etc. on which the language capacity of a competent speaker builds, (iii) the abstract, theoretically motivated model of the language system, and (iv) the result of the codification of this rule- or principle-based system. All four readings refer to the systematic nature of natural language. They differ, however, with respect to whether grammar is taken as a term for the object language, as in (i) and (ii), or as a term for the meta-language, as in (iii) and (iv). In addition, the observable structuredness of uttered linguistic products, which corresponds to reading (i), has to be distinguished from the mental grammar of reading (ii), which refers to the cognitive system operating during language production and perception. Under a meta-linguistic perspective, grammar results from a reflective process which either explicates regular linguistic structures model-theoretically, cf. reading (iii), or codifies them (often prescriptively), cf. reading (iv).

Didactics also uses a terminological distinction between internal grammar, cf. readings (i) and (ii), and external grammar, cf. readings (iii) and (iv). This differentiation is embedded in the well-established distinction between an unconscious command of language, on the one hand, and conscious linguistic reasoning, on the other. Note that in didactics several pairs of concepts and contrasting terms circulate which describe, by and large, the dichotomy just mentioned. For a terminological survey see Eichler (2007: 34−35), Andresen and Funke (2003: 440−441), Steinig and Huneke (2002), Ossner (2006a, b), Bredel (2007b). Widely accepted is the differentiation between implicit and explicit linguistic knowledge, where implicit linguistic knowledge characterizes the internalized language capacity of human beings, which results from natural, i.e. uncontrolled, language acquisition. In this sense the concept of implicit knowledge is on a par with the concept of mental grammar. Karmiloff-Smith (1992), however, proposes to describe implicit linguistic knowledge as being unavailable for cognitive operations that act independently of a specific cognitive domain. She pursues a multi-level approach to linguistic knowledge, assuming that implicit knowledge exists on at least two levels: (i) automated implicit knowledge that enables the grammatically correct usage of a linguistic structure, and (ii) implicit knowledge with a monitor function that promotes the behavioral mastery of a linguistic structure.

In didactics, the term explicit linguistic knowledge usually denotes an inventory of linguistic knowledge that is consciously accessible and can be overtly formulated. This includes, for instance, a pupil’s conscious conceptions of language. In the scientific literature, this kind of conscious analytic access to language is discussed under the term language awareness. The term refers to the ability to reflect on linguistic phenomena and communicative acts by means of a meta-language, independently of a particular linguistic context. Karmiloff-Smith (1992) postulates several levels of explicitness of linguistic knowledge.
Thus, simply equating implicit knowledge with language proficiency, on the one hand, and explicit knowledge with meta-linguistic abilities, on the other, is not justified in such an approach. Similarly, it has been pointed out in recent work in didactics that procedural and declarative linguistic competence, on the one hand, must not be identified with practical linguistic knowledge and meta-linguistic knowledge, on the other, because both procedural meta-linguistic knowledge and declarative practical linguistic knowledge exist. Eichler (2007) argues that, from a didactic perspective, multi-level models for the description of linguistic (grammatical) knowledge are better suited to account for the acquisition of linguistic competences than simple two-level distinctions. At this point in the discussion, the question arises as to what is meant by linguistic competence. Just like the concept of grammar, the conceptualization of linguistic competence depends on the respective disciplinary point of view.

3.2. Language competence from the perspective of theoretical linguistics

In linguistic theory, the concept of linguistic competence has largely been shaped by the assumptions of generative grammar. Following Chomsky (1965), linguistic competence is equated with grammatical competence, defined as the implicit linguistic knowledge of the native speaker. Linguistic competence is the ability of an idealized speaker-hearer to construct and to understand an infinite set of linguistic utterances based on a finite set of linguistic expressions and composition rules, as well as to judge the well-formedness of any linguistic utterance. According to this view, linguistic competence becomes manifest in an abstract subsystem of the human mind called the language
faculty. This cognitive module comprises the speaker-hearer’s knowledge of his/her language, which is clearly distinguished from the actual use of language in concrete situations. This instantaneous linguistic realization is called performance. According to Chomsky (1965), the grammar of a language establishes a model of an idealized competence; thus, performance is not a relevant subject matter of a linguistic grammar theory.

Chomsky’s competence-performance dichotomy and his reduction of linguistic competence to grammatical competence contrast with a sociolinguistic approach according to which linguistic competence is understood as communicative competence, following Hymes (1962). On this view, the usage of language presupposes a general ability to evaluate social, mental and linguistic conditions and hence to communicate in compliance with these contextual specifications. For a profound criticism of this view see Bierwisch (1980).

Although Chomsky’s conception of linguistic competence has been very fruitful for the development of linguistic theory, it is less appropriate in contexts where an empirical assessment of linguistic competence is indispensable, since it cannot be operationalized: there is no empirical correlate of an ideal native speaker, cf. Lehmann (2007: 12−14). Thus, in the context of classroom and university, it seems more suitable to base linguistic competence on a concept of competence that is generally applied in the field of the psychology of learning, in order to account for the fact that the linguistic competence of human beings develops over a person’s entire lifespan. This dynamic aspect of linguistic competence is important for native monolingual speakers as well as in the field of second language learning and teaching.

3.3. Linguistic competence from the perspective of the psychology of learning

Competence, as a concept inspired by the psychology of learning, describes domain-specific, cognitively driven abilities and skills that an individual possesses because of his/her genetic disposition or as a result of a learning process. Competence understood in this way forms the basis for situationally adequate performative activity and behavior. Competences can be conceptualized as complex ability constructs that are closely related to performance in real-life situations. Generally, procedural competence and reflective competence are differentiated depending on the degree of consciousness involved. More or less automated skills, which require a low degree of consciousness, are thus distinguished from controlled and highly conscious reflection and expertise. Since competences are in principle not directly empirically accessible, conclusions about realized competences must be drawn from observed individual behavior and activities.

Adopting this psychological concept of competence, Lehmann (2007) introduces a new notion of linguistic competence to describe the “capacity or set of capacities underlying the linguistic activity of the individual” (Lehmann 2007: 234). Aiming at a theoretically well-founded concept that can be empirically tested, he articulates the concept of linguistic competence as a multi-factor composite along two dimensions, which he calls (i) cognitive levels and (ii) levels of generality and components. According to Lehmann (2007), the first dimension involves two levels of consciousness: procedural linguistic competence, which is a lower-level competence comprising
skills of speaking and understanding, and reflective linguistic competence, which is a higher-level competence comprising declarative knowledge about language. Whereas the lower level relates to language ability, the higher one refers to language knowledge, which allows recursive linguistic reasoning. Speakers may differ in their procedural competence as well as in their reflective linguistic knowledge. On this view, linguistics is no more than higher-level declarative knowledge of a language.

The second dimension of linguistic competence exhibits two levels of generality: (i) the level of universal semiotic competence and (ii) the level of language-specific competence, which includes language system competence, pragmatic competence and variational competence. Both levels of generality are individual competences and involve procedural as well as reflective competences. The two levels are related, since language-specific competence is based on universal semiotic competence, while universal semiotic competence is always bound up with a particular language. Alongside pragmatic competence (i.e. the ability to use language in different social contexts) and variational competence (i.e. the ability to master the norm as well as sociolectal, dialectal or other varieties of a language), competence in the language system is part of language-specific competence. Hence grammatical competence, which is a component of language system competence, forms an integral part of linguistic competence. Since Lehmann (2007) assumes that all of the levels of generality and components of linguistic competence involve both procedural and reflective knowledge, grammatical competence also comprises both linguistic skills and declarative knowledge of language. This clearly differs from Chomsky’s view of language competence.
From a psychological perspective on linguistic competence, it is possible to account for the fact that language competence may vary, e.g. within a speech community or within an individual, who may be proficient in different languages to different degrees. On the other hand, the fact that a speaker may have linguistic skills without being able to reflect on language supports Chomsky’s view that language competence is based on tacit knowledge. In educational contexts, however, a holistic notion of linguistic competence comprising both linguistic skills and knowledge of language has the merit of making an empirical assessment of competences feasible.

Successful teaching of language at school must enable pupils to acquire procedural as well as declarative language competence. This requires qualified teachers who are able to impart language skills and linguistic knowledge and to apply at least simple methods of evaluating and assessing the results of competence acquisition. To fulfill these tasks, teachers definitely need linguistic expertise. They must not only have language proficiency but also a profound knowledge of the structure of language (cf. Eisenberg and Voigt 1990). Thus, the main question is not which scientific linguistic theory teachers can cope with but whether they are able to make practical use of their theoretical knowledge of language in class, because a systematic didactic grammar taught in class is not just a reduced scientific grammar. An evaluation of the quality and effectiveness of a given method of teaching grammar at school has to resort to empirical methods that are generally appropriate for diagnosing acquired competences and individual learning results. However, developing such an empirical methodology is still a challenge in the area of linguistic competence.


4. Linguistics at school

Before we take a closer look at syntax in school in section 5, we first discuss some general issues of teaching linguistics at school. As already mentioned in section 1, the main subject of teaching a first language at school is, of course, the respective language(s) spoken in a country; that is, a German class in Germany has to deal at least with German in all its varieties. Consequently, the most important corresponding science for all language teachers is linguistics. Curiously, the gap between recent developments in linguistics and the teaching of language and linguistics at school has steadily widened. To improve linguistics at school, at least two things are necessary. On the one hand, didactics (L1 as well as L2 didactics) has to develop new methods of teaching language(s) to improve the curricular standards. In this context, one could also think about a new school subject, language, integrating the issues discussed in this section. On the other hand, linguistics and didactics must intensify their cooperation to improve the transfer between science and school, thereby also incorporating new insights into the form and function of language and new linguistic methods (for the cooperation between linguistics and didactics, cf. Dürscheid 1993; Oomen-Welke 1997; Günther 1998; Rothstein 2010a). Linguistics in particular should rediscover the importance of this transfer: for 20 years, the percentage of linguistic publications discussing didactic issues has been decreasing (cf. for instance Eisenberg 2004b).

What are the cornerstones of L1 and L2 language and linguistic instruction? First, it is obvious that linguistic instruction should include formal and functional aspects of language (cf. Eisenberg and Menzel 2002). Pupils should learn that human languages have two essential aspects: on the one hand, languages have specific formal restrictions at all levels of grammar, which are subject to typological micro- and macro-variation.
On the other hand, speakers use languages in specific communicative situations to achieve specific communicative aims. Hence, language is a useful tool for the social interaction of human beings. Modern language and linguistic instructions combine both perspectives on language: Formal aspects of language must not only be discussed in isolation but in the broader context of language use. Likewise, functional aspects of language should always be based on formal analyses of the grammatical phenomena in question. Second, teaching grammar should address all levels of grammar from phonetics to text and discourse. Pupils should be able to analyze the internal structure of the language(s) they use, to understand regularities at and interactions between various levels of the linguistic system, and to develop models explaining these regularities and interactions. Without a solid grammatical knowledge, pupils will not be able to describe and understand complex linguistic phenomena such as, for example, indirectness, metaphor, or the language of poems. Third, both spoken and written language should be a topic in class. Pupils should understand orthographic regularities and exceptions (based on the grammatical structures of a language), formal and functional principles of text structuring, and constraints regulating spoken discourse. Linguistic reasoning comprises normative and descriptive aspects of linguistic data. Pupils should learn about different dialects, sociolects, and registers and reflect differences between standard and colloquial variants of a language. Moreover, they should understand that there is a difference between the grammaticality of


IX. Beyond Syntax

data and their appropriateness within a conversational setting and that grammaticality judgments can be difficult since real data are sometimes slippery. Fourth, linguistic reasoning builds on and integrates the linguistic knowledge of pupils. Unlike many other school subjects, language and linguistic classes do not need to start from scratch. Pupils already have native-speaker competence in at least one language. Moreover, they also possess good language awareness, since language form and language use are subjects of reflection and discussion from early on. In addition, a comprehensive understanding of grammatical phenomena in general and of the grammatical properties of a specific language in particular always includes a typological perspective on language. This supports not only reflection on the pupils’ native language but also the development of grammatical concepts and the acquisition of a second language. Moreover, language comparison in linguistic classes reflects the fact that pupils live in multilingual societies, and it does not disadvantage pupils from migrant backgrounds (cf. Trim et al. 2001; Oomen-Welke 2003). Fifth, teaching grammar is integrative. Since language has many interfaces with various other subjects, modern language and linguistic instruction should address these connections explicitly. Integrative subjects are, for example, linguistics and literature, linguistics and second language acquisition, linguistics and mathematics, linguistics and social studies, and linguistics and biology (cf. Gnutzmann 1995; Fabb 2002, 2003; Klotz 2004; Kümmerling-Meibauer and Meibauer 2007; Rothstein 2010a). Sixth, as outlined in section 2.2, pupils should become little researchers who develop hypotheses about formal and functional aspects of language that can be tested empirically.
To verify their hypotheses, pupils can rely on their own linguistic knowledge or use methods of empirical linguistic research such as questionnaires, corpora, digital linguistic atlases, or even simple psycholinguistic methods. Furthermore, new subjects such as language and computers or language and mind are both important and interesting topics. And finally, teaching grammar is an essential part of all levels of primary and secondary school. Language and linguistic instruction should not be restricted to primary school and the beginning of secondary school. In primary school, pupils should be taught basic linguistic concepts and terminology. Good grammatical knowledge supports the acquisition of written language. Likewise, principles of orthography can be used to develop the grammatical knowledge of pupils (cf. for example Röber 2007, 2009; Hübl and Steinbach 2011). In secondary school, pupils should acquire a more sophisticated grammatical terminology and learn more about formal and functional aspects of language as well as about the above-mentioned interfaces between language and society, language and brain, language and literature, and language and computers. It is a well-known fact that most first-year students do not know much about the grammar of the languages they speak or about the relevance of language for human societies. To sum up, linguistics at school is a fundamental subject with many interesting facets and an important integrative dimension. Of course, the cornerstones discussed in this section are not equally important at all levels of education, and it is impossible for teachers to consider all aspects in class. Nevertheless, they show that teaching language(s) and linguistic topics is a very ambitious venture. Therefore, linguistics at school requires good course materials and excellent background knowledge on the part of teachers as well as clear didactic reductions and progressions at all levels of teaching.
Since linguistics at school has been neglected for a long time, much more interdisciplinary research is necessary to improve the quality of language classes at primary and secondary school.

61. Grammar in the Classroom


5. Syntax at school: a linguistic perspective

In this section we finally illustrate the relevance and potential of syntax for teaching grammar and languages at school. Four case studies illustrate that good knowledge of formal and functional aspects of syntax is essential for a better understanding of the grammar of specific languages and of language in general. On the one hand, pupils are expected to gain deeper insights into the structure of sentences and their respective functions. On the other hand, teachers are expected to have a fundamental knowledge of syntax and its interaction with other modules of grammar, including knowledge about syntactic micro- and macro-variation and possible functional alternatives to specific syntactic structures. This knowledge is particularly important for developing a reasonable didactic reduction and for avoiding typical didactic traps, which can easily be found at all levels of grammar (cf. Eisenberg and Voigt 1990; Meinunger 2008). Syntax is a particularly suitable topic for discussing essential properties of the human language faculty. In syntax, pupils can detect rules and (more or less systematic) exceptions, macro- and micro-variation, historical changes, as well as formal restrictions on and functional motivations of syntactic structures. They can discuss descriptive and normative aspects and odd data, and they can become little linguists themselves by investigating the syntax of their own language(s) and of other languages. The latter is important not only in multilingual contexts but also for monolingual pupils, since only a systematic comparison with other languages yields a better understanding of the syntactic structures of one’s own language(s). Besides, syntax also supports action-oriented methods of learning. Pupils can apply standard syntactic operations, develop questionnaires, search corpora, and compare different languages. In addition, syntax supports the investigation of interfaces, i.e.
the interface between syntax and morphology, the interface between syntax and semantics/pragmatics, the interface between syntax and text/discourse, and the interface between grammar and literature. Since syntax is a vast area, we pick out four case studies − subjects, word order, embedded clauses, and noun phrases − to briefly discuss various linguistic and didactic dimensions of typical syntactic topics. There are, of course, other interesting areas such as case, adverbials, prepositional phrases, passives, the topological field model, and fronting. However, we do not think that pupils need a comprehensive overview of all aspects of syntax but only a good understanding of the complexity of some selected areas, which they can then apply to other aspects of syntax. The four well-studied topics discussed in the following subsections have different interesting formal, functional, and typological aspects and share interfaces with other areas of language. Since the case studies mainly address linguists and didacticians involved in the education of teachers at university, the main purpose of the following subsections is to sketch the linguistic potential of syntax classes. Moreover, we neither discuss didactic concepts nor draft teaching materials for syntax classes, because this can only be done by didactics (for didactic concepts and teaching materials for various topics, cf. Menzel 1988, 1991, 1992, 2004; Näf 1995; Eisenberg and Linke 1996; Bredel 2007a; Rothstein 2010b among others). For lack of space, we cannot introduce the syntactic background or discuss linguistic details. A short list of selected references is provided in each section.

5.1. Subjects

Since the subject is the most prominent phrase of a sentence, identifying the subject of a sentence is one of the standard exercises at the beginning of secondary school. Moreover, the subject is supposed to be a simple category which can easily be identified by pupils. Consequently, it has been argued that identifying and discussing the subjects of sentences in class does not give interesting insights into the structure of languages. This view, however, fails to do justice to the linguistic complexity of the notion “subject”. From a linguistic point of view, defining the subject of a sentence is much more complex, since “subject”, like many other grammatical categories, is a prototypically organized category with specific properties on different grammatical levels. In addition, “subjects” are subject to typological variation. Hence, investigating different kinds of subjects not only reveals the formal and functional structure of basic grammatical concepts but also shows that grammatical categories (just like other natural categories) usually have fuzzy edges (cf. Reis 1986; Baurmann and Menzel 1989; Klotz 1992; Pittner and Berman 2004; Musan 2008). Let us first take a look at some typical definitions of the subject of a German sentence used in class. These definitions can, of course, be related to each other in various ways (for a detailed discussion, cf. Reis 1986):

(3)

a. The subject can be replaced by the wh-expressions wer (‘who’) or was (‘what’).
b. The subject is the theme/topic of a sentence (occupying the sentence-initial position).
c. The subject is the agent of a sentence.
d. The subject receives nominative case.
e. The subject controls verb agreement.
f. The subject cannot occur in infinitival clauses.

The first definition in (3a) is a mere substitution test, which is not really reliable, since was can also be used to ask for accusative objects. Definitions (3b) and (3c) take a functional perspective. They rely either on discourse-functional notions such as theme and topic or on semantic notions such as agent. Although there are statistical and theoretical correlations between subject on the one hand and topic/theme/agent on the other, purely functional definitions face clear counterexamples such as (4). Example (4a) illustrates that the topic/theme of a sentence need not be the subject (i.e. niemand and alle). Likewise, not every subject in a sentence is linked to the agent and vice versa. In passives such as (4b), the subject (i.e. das Bild) is typically not the agent and the agent (i.e. ihm) is not the subject. In addition, agentless verbs such as gefallen (‘to please’) in (4c), which also subcategorize for a subject, do not assign the thematic role agent at all.

(4)

a. Wer hat bei diesem Krieg gewonnen, wer hat verloren?                    [German]
   who has at this war won who has lost
   Gewonnen hat niemand, verloren haben alle.
   won has no.one lost have all
   ‘Who won the war? No one won, everyone lost.’

b. Das Bild wurde von ihm gemalt.
   the picture was by him painted
   ‘The picture was painted by him.’

c. Dieser schöne alte Audi hat ihr gefallen.
   this beautiful old Audi has her pleased
   ‘This beautiful old Audi pleases her.’

A formal definition of subject is based on formal (morphosyntactic) properties of the respective phrase. Usually, the subject receives (or checks) nominative case (5a), it controls verbal agreement (5b), and it cannot occur in infinitival clauses (5c). Moreover, even verbs like regnen (‘to rain’) that do not select a semantic argument subcategorize for a formal (impersonal) subject, i.e. the third person singular neuter pronoun es (‘it’) in (5d).

(5)

a. [ Der Mann ]NOM / *[ den Mann ]ACC mag den Urlaub.                      [German]
   the.NOM man.NOM / the.ACC man.ACC likes the holiday
   ‘The man likes the holiday.’

b. [ Die Männer ]PL *magSG / mögenPL den Urlaub.
   the.PL men.PL like.3SG / like.3PL the holiday
   ‘The men like the holiday.’

c. Sie verspricht [ (*sie) den Yogakurs zu leiten ]
   she promises she the yoga.class to instruct
   vs. Sie verspricht [ dass *(sie) den Yogakurs leitet ]
   she promises that she the yoga.class instructs
   ‘She promises to instruct the yoga class.’

d. Es regnet.
   it rains
   ‘It is raining.’

Although formal definitions of the notion of subject cover most standard cases, there are also some examples that are problematic for a purely formal definition. Sentential subjects such as dass Christian so schöne Bilder malt in (6a) do not receive nominative case in German, since case is a nominal feature, which cannot be assigned to a clause. Hence, the subject in (6a) is not marked for nominative case. The counterpart to sentence (6a) is sentence (6b), which contains two noun phrases (NPs) marked for nominative but only one subject. Sentences like (6b) are in principle ambiguous, and we have to fall back on functional criteria (i.e. definiteness, topic) in order to decide which of the two NPs is the subject. Even more problematic examples are subjectless sentences such as the impersonal passive in (6c). As opposed to (5d) above, impersonal passives do not permit a formal subject. Consequently, a typical mistake pupils make is the miscategorization of the sentence-initial es (the so-called Vorfeld-es) in (6d) as subject.

(6)

a. [ Dass Christian so schöne Bilder malt ] gefällt mir.                   [German]
   that Christian such nice pictures paints pleases me
   ‘I’m pleased that Christian paints such nice pictures.’

b. [ Der Gärtner ]NOM ist [ der Mörder ]NOM.
   the.NOM gardener.NOM is the.NOM murderer.NOM
   ‘The gardener is the murderer.’

c. Jetzt wird (*es) geschlafen!
   now is it slept
   ‘Everyone goes to bed now!’

d. Es wird jetzt geschlafen!
   it is now slept
   ‘Everyone goes to bed now!’

The concept of subject also has interesting typological dimensions. Languages with strong verbal agreement systems such as Italian or Spanish permit subject drop, since the inflectional affixes unambiguously mark 1st, 2nd, and 3rd person singular and plural. By contrast, in languages without (much) verbal agreement such as English or Swedish, a subject pronoun cannot be dropped. Languages like German and French with a mixed verbal agreement system pattern with the latter. They, too, prohibit subject drop, since the inflectional system has developed too many syncretisms and is thus not strong enough to license null subjects. Moreover, subject is not a universal concept of natural languages. Unlike German and most Indo-European languages, many languages do not have formally or functionally defined subjects but either discourse-oriented or thematically oriented grammatical relations.

(7)

            singular:                         plural:
            1st       2nd       3rd           1st        2nd       3rd
Italian:    parl-o    parl-i    parl-a        parl-iamo  parl-ate  parl-ano
German:     red-e     red-est   red-et        red-en     red-et    red-en
English:    talk      talk      talk-s        talk       talk      talk
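The licensing generalization sketched here − rich, syncretism-free agreement licenses null subjects, while syncretic or poor agreement does not − can be made concrete with a small script. This is an illustrative sketch, not part of the chapter; the simple criterion "no two paradigm cells share a form" is a deliberate toy simplification of the actual licensing conditions discussed in the literature.

```python
# Toy model (illustrative simplification, not the chapter's analysis):
# each present-tense paradigm from example (7) is listed as six cells
# in the order 1SG, 2SG, 3SG, 1PL, 2PL, 3PL.
PARADIGMS = {
    "Italian": ["parl-o", "parl-i", "parl-a", "parl-iamo", "parl-ate", "parl-ano"],
    "German":  ["red-e", "red-est", "red-et", "red-en", "red-et", "red-en"],
    "English": ["talk", "talk", "talk-s", "talk", "talk", "talk"],
}

def licenses_null_subjects(forms):
    """True iff every cell has a unique form, i.e. the inflection alone
    unambiguously recovers person and number of a dropped subject."""
    return len(set(forms)) == len(forms)

for language, forms in PARADIGMS.items():
    syncretic_cells = len(forms) - len(set(forms))
    print(language, "pro-drop:", licenses_null_subjects(forms),
          "| syncretic cells:", syncretic_cells)
```

On these paradigms the toy criterion yields pro-drop for Italian only, matching the text: German red-et and red-en each fill two cells, and English talk fills five.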

The few examples discussed in this subsection illustrate that the subject is an interesting grammatical relation that can be used in class to discuss the relevance and interaction of formal and functional properties of natural languages. Pupils explore the structure of prototypically organized (grammatical) categories. In addition, they realize that grammatical relations may have different properties in different languages. And finally, it should have become clear that teachers need a solid grammatical knowledge to avoid didactic pitfalls caused by fuzzy grammatical categories such as the subject of a sentence, especially when they make the didactic decision to use one of the “definitions” in (3).

5.2. Word order

Languages differ in the degree of word order flexibility they allow. In some languages such as English, constituents are subject to relatively strict word order rules. In other languages such as Latin, word order is relatively free. German has a mixed system. Some word order variations lead to ungrammaticality; others yield more or less marked structures but do not affect the grammaticality of a sentence in the strict sense. Within the NP, determiners and adjectives precede the noun (i.e. die schöne Wohnung, ‘the beautiful apartment’). The reverse order is ungrammatical (i.e. *Wohnung schöne die). By contrast, within the clause, the order of constituents is much more flexible. In this subsection we look at the constituent order in the so-called Mittelfeld, which in (8) is the region between the finite verb hat and the past participle gegeben. In the Mittelfeld almost all orders are grammatical, though some orders may be more acceptable than others. In (8), example (8a) is the most acceptable (unmarked) sentence, whereas example (8f) is the least acceptable (most marked) one (cf. Reis 1987; Jacobs 1988; Eisenberg and Menzel 2002; Menzel 2002b; Wöllstein 2010).

(8)

a. Gestern hat [ [ ein Junge ]NOM [ einem Mädchen ]DAT [ einen Apfel ]ACC ] gegeben.   [German]
   yesterday has a.NOM boy.NOM a.DAT girl.DAT a.ACC apple.ACC given
   ‘Yesterday a boy gave an apple to a girl.’

b. Gestern hat [ ein Junge einen Apfel einem Mädchen ] gegeben.
c. Gestern hat [ einem Mädchen ein Junge einen Apfel ] gegeben.
d. Gestern hat [ einen Apfel ein Junge einem Mädchen ] gegeben.
e. Gestern hat [ einem Mädchen einen Apfel ein Junge ] gegeben.
f. Gestern hat [ einen Apfel einem Mädchen ein Junge ] gegeben.

The problem of word order in the Mittelfeld can become quite complex. Imagine a simple sentence with six constituents in the Mittelfeld (i.e. sentence [8a] with three additional adverbials). Such a sentence has 6! = 720 possible orders. The acceptability of a specific order depends on various formal and functional constraints, which relate to different levels of grammar. Word order in the Mittelfeld is thus governed by various morphosyntactic, syntactic, semantic, pragmatic, and stylistic restrictions. A prominent example of a syntactic restriction is illustrated in (9). In (9a) the subject precedes the object, which is the most acceptable order (> stands for ‘precedes’). In contrast to this, the reverse order in (9b) is less acceptable. (9c) is even less acceptable than (9b), since both NPs (i.e. die Kirsche and das Mädchen) have no overt case marking. Without further context, (9c) receives the (absurd) interpretation that the cherry (subject) ate the girl (object).

(9)

subject > object
a. Gestern hat [ [ der Junge ]SUBJ [ den Apfel ]OBJ ] gegessen.            [German]
   yesterday has the boy the apple eaten
   ‘Yesterday, the boy ate the apple.’


b. ?Gestern hat [ den Apfel der Junge ] gegessen.
   yesterday has the apple the boy eaten

c. ??Gestern hat [ die Kirsche das Mädchen ] gegessen.
   yesterday has the cherry the girl eaten
   ‘Yesterday, the girl ate the cherry.’

A semantic restriction can be observed in (10). Sentence (10b) violates the constraint that constituents referring to human entities precede constituents referring to nonhuman ones.

(10) human > nonhuman
a. Gestern hat [ die Hexe [ die Kinder ]HUMAN [ der Kälte ]NONHUMAN ] ausgesetzt.   [German]
   yesterday has the witch the children the coldness exposed.to
   ‘Yesterday, the witch exposed the children to the coldness.’

b. ?Gestern hat [ die Hexe der Kälte die Kinder ] ausgesetzt.
   yesterday has the witch the coldness the children exposed.to

Word order in the Mittelfeld obeys not only grammatical and semantic but also pragmatic constraints. Contextual aspects are relevant in example (11). The less acceptable sentence in (11b) violates the restriction that backgrounded constituents precede focused constituents.

(11) background > focus
What did he show to the professor yesterday?
a. Gestern hat [ er [ der Professorin ]BACKGROUND [ die StuDENten ]FOCUS ] gezeigt.   [German]
   yesterday has he the professor the students shown
   ‘Yesterday, he showed the students to the professor.’

b. ?Gestern hat [ er die StuDENten der Professorin ] gezeigt.
   yesterday has he the students the professor shown

Unlike many rules of grammar, the constraints governing word order in the Mittelfeld are soft constraints, that is, they are violable and they can conflict with each other. In examples (12a, b) the constraint ‘subject > object’ conflicts with the constraint ‘human > nonhuman’, because the verb auffallen does not assign the thematic role agent to the subject. Both examples seem equally acceptable. Likewise, in examples (12c, d), the constraint ‘subject > object’ is neutralized by the constraint that pronouns precede full NPs. Again, both sentences are more or less equally acceptable.

(12) a. Gestern ist einem Passanten ein Auto aufgefallen.                  [German]
        yesterday is a pedestrian a car attracted.the.attention
        ‘Yesterday, a car attracted the attention of a pedestrian.’

b. Gestern ist ein Auto einem Passanten aufgefallen.
   yesterday is a car a pedestrian attracted.the.attention
   ‘Yesterday, a car attracted a pedestrian’s attention.’

c. Gestern hat das Mädchen ihn gesehen.
   yesterday has the girl him seen
   ‘Yesterday, the girl saw him.’

d. Gestern hat ihn das Mädchen gesehen.
   yesterday has him the girl seen
   ‘Yesterday, the girl saw him.’

Consequently, changes in word order do not lead to strict ungrammaticality but only to a certain degree of unacceptability. Hence, we are dealing with odd data, which moreover are also influenced by the context of utterance and stylistic aspects. Word order data therefore reveal that sentences cannot simply be divided into grammatical and ungrammatical examples. Pupils learn that sometimes various formal alternatives exist which may be more or less acceptable in a specific context. Moreover, word order variation can also affect the meaning of a sentence, as illustrated in (13). If the adverbial oft (‘often’) follows the direct object (13a), the sentence means that there is a specific book which she often read. If the adverbial precedes the direct object (13b), it means that she often read books.

(13) a. Letztes Jahr hat sie ein Buch oft gelesen.                         [German]
        last year has she a book often read
        ‘Last year, she read a book often.’

b. Letztes Jahr hat sie oft ein Buch gelesen.
   last year has she often a book read

In sum, word order in the Mittelfeld can be used in class to illustrate the interaction of formal and functional properties in complex sentence structures. Another interesting issue is the empirical investigation and the formal modeling of word order. Pupils can be instructed to develop questionnaires or to carry out little corpus studies, and simple formal models can be integrated to explain the interaction of soft and hard constraints on word order. In addition, word order can be relevant for the analysis of text structures. And finally, the restrictions on word order in German can be compared to word order restrictions in other languages (cf. Twain 2009).
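The interaction of violable constraints described in this subsection lends itself to exactly the kind of simple formal model the text mentions. The sketch below scores every permutation of three Mittelfeld constituents against two weighted soft constraints; the feature annotations and constraint weights are invented for illustration, not empirical values from the chapter.

```python
from itertools import permutations

# Mittelfeld constituents with toy feature annotations (invented for
# the sketch, in the spirit of examples (9) and (10)).
CONSTITUENTS = {
    "der Junge":     {"role": "subject", "human": True},
    "einem Mädchen": {"role": "object",  "human": True},
    "einen Apfel":   {"role": "object",  "human": False},
}

def penalty(order):
    """Sum weighted violations of 'subject > object' and 'human >
    nonhuman' over all constituent pairs; lower = more acceptable."""
    total = 0.0
    for i, a in enumerate(order):
        for b in order[i + 1:]:
            fa, fb = CONSTITUENTS[a], CONSTITUENTS[b]
            if fa["role"] == "object" and fb["role"] == "subject":
                total += 2.0   # an object precedes the subject
            if not fa["human"] and fb["human"]:
                total += 1.0   # a nonhuman NP precedes a human NP
    return total

# Rank all 3! = 6 candidate orders (with six constituents, as in the
# text's example, there would be 6! = 720 candidates).
ranked = sorted(permutations(CONSTITUENTS), key=penalty)
for order in ranked:
    print(penalty(order), " ".join(order))
```

The only violation-free order is der Junge einem Mädchen einen Apfel, mirroring (8a); hard constraints could be added as very large penalties, and conflicting constraints, as in (12), simply trade off against each other in the sum.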

5.3. Embedded clauses

German uses various strategies to mark embedded clauses. As opposed to the main clause in (14a), the embedded clause in (14b) is introduced by the complementizer dass (‘that’), the finite verb occupies not the second but the final position, and it is inflected for subjunctive mood.

(14) a. Sie hat ihre Hausaufgaben gemacht.                                 [German]
        she has her homework done
        ‘She did her homework.’

b. Er sagt, dass sie ihre Hausaufgaben gemacht habe.
   he says that she her homework done have.SBJV
   ‘He says that she did her homework.’

Whereas the use of the subjunctive in embedded contexts is restricted to high registers, complementizer-initial verb-final clauses are the unmarked way to mark embedded clauses in German. However, in certain contexts, the finite verb may also occupy the second position of the embedded clause. This is possible in certain complement clauses such as the object clause in (15a). Moreover, embedded verb second can also be found in causal clauses introduced by denn and weil (15b) and in relative clauses (15c). Hence, in German, just like in many other languages, embedded clauses can also have main clause structure (cf. Reis 1997; Uhmann 1998; Menzel 1999; Holler 2008; Antomo and Steinbach 2010).

(15) a. Er glaubt, sie habe ihre Hausaufgaben gemacht.                     [German]
        he believes she have.SBJV her homework done
        ‘He believes that she did her homework.’

b. Sie ist ziemlich müde, weil sie hat zu viel gearbeitet.
   she is quite tired since she has too much worked
   ‘She is quite tired since she worked too much.’

c. Sie hat eine Wohnung, die liegt in der Südstadt.
   she has an apartment that is.located in the southern.part.of.town
   ‘She has an apartment which is located in the southern part of the town.’

Examples (15a) and (15c) are clearly standard German. By contrast, example (15b) is a colloquial variant of the corresponding more standard verb-final causal clause and is sometimes judged as less grammatical or “bad German” (cf. Sick 2005). Interestingly, embedded verb-second clauses are neither formally nor functionally equivalent to verb-final clauses. Unlike the corresponding verb-final clauses, verb-second clauses are only licensed in certain contexts. They are, for example, ungrammatical in the scope of matrix negation (cf. 16).

(16) a. Er glaubt nicht, … dass sie ihre Hausaufgaben gemacht hat.         [German]
        he believes not that she her homework done has
        … *sie hat ihre Hausaufgaben gemacht.
        she has her homework done
        ‘He doesn’t believe … that she did her homework.’

In addition, verb-second clauses have specific functional properties not available to the corresponding verb-final clauses. This is nicely illustrated by the contrast between the verb-second and verb-final causal and concessive clauses in (17a, b). While the preferred reading of the verb-final clause in (17a) is that the inflated airbag caused an accident,

the corresponding verb-second clause yields a different interpretation: in this example the inflated airbag is the reason why the speaker thinks that an accident must have happened. Likewise, only the concessive verb-second clause can cancel or challenge the statement made by the main clause: in the verb-final clause, she will definitely come by car although there is no parking space. By contrast, in the verb-second clause, it is highly unlikely that she will use her car.

(17) a. Es hat einen Unfall gegeben, … weil der Airbag aufgegangen ist.    [German]
        it has an accident given because the airbag inflated is
        … weil der Airbag ist aufgegangen.
        because the airbag is inflated
        ‘There has been an accident … because the airbag inflated.’

     b. Sie kommt sicher mit dem Auto, … obwohl es keine Parkplätze gibt.
        she comes surely with the car although it no parking.spaces gives
        … obwohl es gibt keine Parkplätze.
        although it gives no parking.spaces
        ‘She will surely come by car … although there are no parking spaces.’

Embedded clauses that look like main clauses can be found in many other languages. However, the difference between the two variants is not always as clear as in German. English, for example, allows complementizer drop in complement clauses but does not allow verb movement (since English is not a verb-second language). In causal clauses, complementizer drop is not available for semantic reasons, and again verb second is prohibited for syntactic reasons. Therefore, English uses a lexical strategy to distinguish the two kinds of causal clauses, i.e. it has two different causal complementizers, because and since. Interestingly, this lexical strategy can also be found in German (denn vs. da). However, German also uses the syntactic strategy illustrated in (17a). Crosslinguistically, embedded “main clauses” seem to fulfill a communicative need by promoting the embedded clause in discourse.
In German, verb-second clauses are structurally less embedded than the corresponding verb-final clauses. As a consequence, the verb-second clause becomes more prominent and has specific functions not available to its verb-final counterpart. Therefore, embedded verb second is prohibited in certain contexts, and, unlike verb-final clauses, verb-second clauses cannot be backgrounded. Embedded clauses can be used to discuss various interesting properties of grammar. Pupils can scrutinize normative and descriptive concepts of grammar by investigating the distribution of verb-final and verb-second order in embedded clauses in different varieties or registers, especially in spoken and written language. They learn that language is a quite flexible system that systematically uses differences in form to distinguish different functions. And they understand that there is no clear-cut border between embedded and unembedded clauses in syntax but rather that a clause can be more or less embedded in another one. Syntax is thus a flexible system that allows the generation of hybrid constructions such as embedded verb-second clauses. Note finally that embedded clauses can also be linked to reported speech and to the distinction between direct and indirect speech (cf. Bredel 2007a; Brendel et al. 2011).

5.4. Noun phrases

In section 5.2, we already mentioned that, unlike the constituent order in the Mittelfeld, the word order within NPs is relatively fixed. Example (18) illustrates that in German determiners, quantifiers, and adjectives precede the head noun, whereas genitive NPs, PPs, and relative clauses follow the head noun (cf. Vater 1986; Haider 1988; Gallmann 1990; Klotz 1992; Lindauer 1995; Röber-Siekmeyer 1999).

(18) die vielen schönen Bilder meines Kollegen von seinem Auto             [German]
     the many nice pictures my.GEN colleague.GEN of his car
     ‘my colleague’s many nice pictures of his car’

As can be seen in (19), the NP-internal word order is subject to typological variation. Unlike in German, in most languages (approx. two-thirds) adjectives follow the head noun, as illustrated by the example in (19a) from Apatani, a language spoken in northeast India (cf. Haspelmath et al. 2005). In some languages, both orders occur. In this case, the A/N-order either depends on the kind of adjective, as in French (19a′), or both orders are equally unmarked with all kinds of adjectives. Some of these languages, as for instance German Sign Language (DGS), make a semantic difference between the A/N- and the N/A-order: the former is used with definite noun phrases, the latter with indefinite ones. Concerning numerals, both orders (numeral precedes noun and noun precedes numeral) are found across languages, with the latter order slightly more frequent (approx. 55 %). German belongs to the first group of languages, where the numeral precedes the noun (19b). An example of the second group is Jul’hoan, a language spoken in Namibia (19b′).

(19) a. aki atu                                                            [Apatani]
        dog small
        ‘small dog’

     a′. une voiture allemande   vs.   une grande voiture                  [French]
         a car German                  a big car
         ‘a German car’                ‘a big car’

     b. acht Schweine                                                      [German]
        eight pigs
        ‘eight pigs’

     b′. qüa xüé                                                           [Jul’hoan]
         pigs eight
         ‘eight pigs’

Although the constraints governing the general order within NPs are quite strict in German, there is also some flexibility even within NPs (cf. Lindauer 1995). Examples (20b, c) show that either the postnominal genitive Roms or the NP contained in the prepositional attribute durch Cäsar can occupy the initial position as a prenominal genitive. In this case, the determiner cannot be overtly realized. Note that in German, the prenominal genitive differs from its postnominal counterpart: it is usually restricted to simple nouns

(usually proper names) and it is always formed with the suffix -s; i.e. der Mutter vs. Mutters, ‘(the) mother.GEN’.

(20) a. Die Zerstörung Roms durch Cäsar                                    [German]
        the destruction Rome.GEN through Caesar

b. Roms Zerstörung durch Cäsar
   Rome.GEN destruction through Caesar

c. Cäsars Zerstörung Roms
   Caesar.GEN destruction Rome.GEN
   ‘The destruction of Rome by Caesar’

Another interesting phenomenon is NP-internal agreement. In German, adjectives and determiners agree with the head noun in gender, case, and number features (21a). Other languages, such as for example English, do not show NP-internal agreement (21b). In addition to German-like systems, which always show obligatory agreement, some languages like Turkish have mixed systems. The examples in (21c) illustrate that although Turkish nouns have a plural form, the head noun is not morphologically marked for plural if the NP contains a numeral or a plural quantifier. Unlike German, Turkish is subject to an economy constraint that blocks multiple morphological realizations of the plural within the NP (21c).

(21) a. ein alt-es Buch                                                    [German]
        a.N.SG old-N.SG book(N).SG
        ‘an old book’

     a′. zwei alt-e Bücher
         two old-N.PL book(N).PL
         ‘two old books’

     a″. ein alt-er Roman
         a.M.SG.NOM old-M.SG.NOM novel(M).SG.NOM
         ‘an old novel’

     a‴. ein-en alt-en Roman
         a-M.SG.ACC old-M.SG.ACC novel(M).SG.ACC
         ‘an old novel’

     b. an old car
     b′. two old cars

     c. elma-lar                                                           [Turkish]
        apple-PL
        ‘apples’

     c′. *üc elma-lar
         three apple-PL
         ‘three apples’

2118

IX. Beyond Syntax elma c″. üc three apple ‘three apples’

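The contrast between obligatory German number marking and the Turkish economy constraint amounts to one small rule. The sketch below is a toy model under our own assumptions (the function name and the encoding are invented for illustration), not an analysis from this chapter:

```python
def plural_form(lang, stem, suffix, has_numeral):
    """Toy rule for plural marking on the head noun of an NP.

    German: the plural is always realized morphologically.
    Turkish: an economy constraint blocks the plural suffix when a
    numeral or plural quantifier already expresses plurality (cf. 21c').
    """
    if lang == "Turkish" and has_numeral:
        return stem               # numeral present -> suffix blocked
    return stem + suffix          # plural realized on the noun

print(plural_form("Turkish", "elma", "lar", has_numeral=True))    # elma
print(plural_form("Turkish", "elma", "lar", has_numeral=False))   # elmalar
print(plural_form("German", "Schwein", "e", has_numeral=True))    # Schweine
```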
Turning to the functional side of NPs, it can be observed that NPs are used to express various grammatical relations. In (22a), subject, direct object, and indirect object are marked by overt case morphology. In German, the subject receives nominative case (cf. section 5.1), the direct object accusative, and the indirect object dative. Genitive is used to mark NP-internal attributes (cf. example [20] above). If NPs lack overt case morphology, grammatical functions are usually marked by word order. This is illustrated in (22b), where the first NP is interpreted as the subject of the sentence and the second one as the direct object. NPs are not only used as subjects and objects; they can also be interpreted adverbially. Hence, there is no strict relation between the form and the function of NPs: (22c) shows that the NP diesen Abend can be interpreted either as a temporal adverbial (22c) or as a direct object (22c′), depending on the sentential context.

(22) a. [Der Außerirdische]NOM gab [dem Außerirdischen]DAT [einen Außerirdischen]ACC [German]
the.NOM alien.NOM gave the.DAT alien.DAT an.ACC alien.ACC
‘The alien gave an alien to the alien.’

b. Die Frau sieht das Kind.
the woman sees the child
‘The woman sees the child.’

b′. Das Kind sieht die Frau.
the child sees the woman
‘The child sees the woman.’

c. Christian trank diesen Abend Rotwein.
Christian drank this evening red.wine
‘This evening, Christian drank red wine.’

c′. Elke genoss diesen Abend.
Elke enjoyed this evening
‘Elke enjoyed this evening.’

Noun phrases combine various formal and functional properties that make them an interesting topic in class. Besides the above-mentioned typological aspects and the interaction between grammatical form and grammatical function, deeper insights into the internal structure of NPs are important for text production, especially in languages like German, which has quite complex nominal constructions in written language, as illustrated in (23a). In addition, many languages show some degree of NP-internal micro-variation, which is often banned by normative grammarians. In colloquial German, the possessive construction in (23b) replaces the prenominal genitive construction. And grammatical uncertainties like the NP with a measure expression in (23c) shed light on the periphery of the grammatical system.

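The division of labor between case morphology and word order described for (22) can also be stated procedurally. The following sketch is only a toy approximation under our own assumptions (it ignores, among other things, scrambling and adverbially interpreted NPs):

```python
# Toy assignment of grammatical functions to German NPs (cf. (22a,b)):
# use overt case morphology if available, otherwise fall back on word order.
CASE_TO_FUNCTION = {
    "NOM": "subject",
    "ACC": "direct object",
    "DAT": "indirect object",
    "GEN": "attribute",        # NP-internal, cf. (20)
}

def assign_functions(nps):
    """nps: list of (np_string, case_or_None) pairs in surface order."""
    if all(case is not None for _, case in nps):
        return [(np, CASE_TO_FUNCTION[case]) for np, case in nps]
    # no overt case morphology: word order decides, as in (22b)
    by_position = ["subject", "direct object"]
    return [(np, by_position[i]) for i, (np, _) in enumerate(nps)]

print(assign_functions([("der Mann", "NOM"), ("dem Kind", "DAT")]))
print(assign_functions([("die Frau", None), ("das Kind", None)]))
```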
61. Grammar in the Classroom


(23) a. Die meinem älteren Bruder schon seit Jahren versprochene Braut [German]
the my older brother already for years espoused bride
‘The bride who has already been espoused to my older brother for years.’

b. Meinem netten Kollegen sein alter Mercedes
my nice colleague his old Mercedes
‘the old Mercedes of my nice colleague’

c. mit drei Gläsern teur-en Wein-s / teur-em Wein / …
with three glasses expensive-GEN wine-GEN / expensive-DAT wine.DAT / …
‘with three glasses of expensive wine’

Finally, the structure of NPs is important for the discussion of sentence-internal capitalization in German (cf. e.g. [24]; problematic cases are in bold face), which is constrained not only by a lexical principle such as the one in (25a), which refers to parts of speech, but also by a syntactic principle such as (25b). Empirical studies show that pupils (just like adults) systematically use their implicit knowledge of the internal structure of NPs for capitalization. Therefore, a better understanding of the syntax of noun phrases also supports the acquisition of sentence-internal capitalization in German. Likewise, sentence-internal capitalization can be used to unfold the syntactic structure of NPs (cf. Gallmann 1997; Röber-Siekmeyer 1999; Günther 2007; Hübl and Steinbach 2011).

(24) Beim Essen gestern Abend standen die Alten am meisten kopf. [German]
at.the eat yesterday evening stand the old at.the most head
‘During dinner yesterday evening, the old people went really mad.’

(25) a. In German, words that belong to the class of nouns are capitalized.
b. In German, heads of NPs are capitalized.

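The syntactic principle (25b) lends itself to a mechanical formulation. The sketch below is our own illustrative encoding (the chapter itself gives no algorithm): it presupposes an analysis that has already identified NP heads and then simply capitalizes them:

```python
def capitalize_np_heads(tokens):
    """Apply principle (25b): capitalize exactly the heads of NPs.

    tokens: list of (word, is_np_head) pairs -- a toy stand-in for a
    full syntactic analysis of the sentence.
    """
    return " ".join(w.capitalize() if head else w for w, head in tokens)

# 'beim essen' -> 'beim Essen': the nominalized infinitive heads an NP,
# so the syntactic principle captures it directly, whereas the lexical
# principle (25a) would first have to reclassify the verb as a noun.
print(capitalize_np_heads([("beim", False), ("essen", True),
                           ("gestern", False), ("abend", True)]))
```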
5.5. Summary

Taking into account these four case studies, which discuss fundamental aspects of syntax, it is apparent that linguistics, and syntax in particular, can contribute to vivid school lessons that sustain pupils’ command of language. Moreover, it should have become clear that teachers need good expertise in syntax, especially if they incorporate typological and integrative aspects.

6. Conclusion

In this article, we argued for a systematic teaching of formal and functional aspects of language at school. From a linguistic point of view, we advocated grammar education that is first and foremost directed at the structure and function of language. Gaining insight into the language system is a merit in itself. Profound linguistic instruction may contribute both to improving pupils’ language proficiency and to extending their knowledge of language. In this way, syntax at school supports the fundamental aim of modern language education: the acquisition of linguistic competence in its broadest sense.

7. References (selected)

Andresen, Helga, and Reinhold Funke 2003 Entwicklung sprachlichen Wissens und sprachlicher Bewusstheit. In: Bredel, Ursula et al. (eds.), Didaktik der deutschen Sprache, 438−451. Paderborn: Schöningh.
Antomo, Mailin, Annika Hübl, and Markus Steinbach 2010 Die SprachChecker: Language awareness, multilingualism, and linguistic diversity at school. SDV − Sprache und Datenverarbeitung. International Journal for Language Data Processing 34: 7−21.
Antomo, Mailin, and Markus Steinbach 2009 Die SprachChecker − mit Sprache spielend arbeiten. In: Bonsen, Martin et al. (eds.), Unterrichtsqualität sichern − Sekundarstufe, G 4.3., 1−24. Stuttgart: Raabe Verlag.
Antomo, Mailin, and Markus Steinbach 2010 Desintegration und Interpretation. Weil-V2-Sätze an der Schnittstelle zwischen Syntax, Semantik und Pragmatik. Zeitschrift für Sprachwissenschaft 29: 1−37.
Augst, Gerhard 1976 Welchen Sinn hat der Grammatikunterricht in der Schule? Diskussion Deutsch 7: 211−227.
Baurmann, Jürgen, and Wolfgang Menzel 1989 Satzglieder. Praxis Deutsch 94: 17−24.
Becker, Tabea, and Corinna Peschel 2006 Gesteuerter und ungesteuerter Grammatikerwerb. Baltmannsweiler: Schneider Verlag Hohengehren.
Biere, Bernd-Ulrich, and Hajo Diekmannshenke 2000 Sprachdidaktik Deutsch. Tübingen: Groos.
Bierwisch, Manfred 1980 Semantic structure and illocutionary force. In: Searle, John R. et al. (eds.), Speech Act Theory and Pragmatics, 1−35. Dordrecht: Reidel.
Boettcher, Wolfgang, and Horst Sitta 1978 Der andere Grammatikunterricht. Veränderung des klassischen Grammatikunterrichts. Neue Modelle und Lehrmethoden. München: Urban & Schwarzenberg.
Boettcher, Wolfgang, and Horst Sitta 1979 Grammatik in Situationen. Praxis Deutsch 34: 12−21.
Bredel, Ursula 2003 Zwiespältig. Wolfgang Steinig & Hans-Werner Huneke: Einführung in die Sprachdidaktik. Berlin: Erich Schmidt Verlag 2002. Didaktik Deutsch 15: 92−98.
Bredel, Ursula 2007a Die Didaktik der Gänsefüßchen. In: Bredel, Ursula et al. (eds.), Schriftspracherwerb und Orthographie, 207−240. Berlin: Schneider.
Bredel, Ursula 2007b Sprachbetrachtung und Grammatikunterricht. Paderborn: Schöningh.
Bredel, Ursula et al. (eds.) 2003 Didaktik der deutschen Sprache. 2 Bde. Paderborn: Schöningh.
Brendel, Elke, Jörg Meibauer, and Markus Steinbach 2011 Exploring the Meaning of Quotation. In: Brendel, Elke et al. (eds.), Understanding Quotation, 1−33. Berlin: Mouton de Gruyter.


Chomsky, Noam 1965 Aspects of the Theory of Syntax. Cambridge, MA: The MIT Press.
Deutsches PISA-Konsortium 2001 PISA 2000. Basiskompetenzen von Schülerinnen und Schülern im internationalen Vergleich. Opladen: Leske + Budrich.
Dürscheid, Christa 1993 Sprachwissenschaft und Gymnasialer Deutschunterricht. Bilanz einer Entwicklung. Hürth-Efferen: Gabel Verlag.
Eichler, Wolfgang 1998 (6th ed.) Grammatikunterricht. In: Lange, Günter et al. (eds.), Taschenbuch des Deutschunterrichts. Bd. 1, 226−257. Baltmannsweiler: Schneider Verlag Hohengehren.
Eichler, Wolfgang, and Walter Henze 1998 (6th ed.) Sprachwissenschaft und Sprachdidaktik. In: Lange, Günter et al. (eds.), Taschenbuch des Deutschunterrichts. Bd. 1, 101−123. Baltmannsweiler: Schneider Verlag Hohengehren.
Eichler, Wolfgang 2007 Sprachbewusstheit und grammatisches Wissen − Bemerkungen zu einem lernbegleitenden Grammatikunterricht in der Sekundarstufe. In: Köpcke, Klaus-Michael, and Arne Ziegler (eds.), Grammatik in der Universität und für die Schule. Theorie, Empirie und Modellbildung, 33−44. Tübingen: Niemeyer.
Eisenberg, Peter 2004a Wie viel Grammatik braucht die Schule? Didaktik Deutsch 17: 4−25.
Eisenberg, Peter 2004b Die Linguistischen Berichte in den Jahren 1972−2003. Linguistische Berichte 200: 379−395.
Eisenberg, Peter, and Nanna Fuhrhop 2007 Schulorthographie und Graphematik. Zeitschrift für Sprachwissenschaft 26: 13−41.
Eisenberg, Peter, and Angelika Linke 1996 Wörter. Praxis Deutsch 139: 20−30.
Eisenberg, Peter, and Wolfgang Menzel 1995 Grammatik-Werkstatt. Praxis Deutsch 129: 14−26.
Eisenberg, Peter, and Wolfgang Menzel 2002 Die Stellung der Wörter im Satz. Praxis Deutsch 172: 6−13.
Eisenberg, Peter, and Gerhard Voigt 1990 Grammatikfehler? Praxis Deutsch 102: 10−15.
Fabb, Nigel 2002 Language and Literary Structure. The Linguistic Analysis of Form in Verse and Narrative. Cambridge: Cambridge University Press.
Fabb, Nigel 2003 Linguistics and literature. In: Aronoff, Mark, and Janie Rees-Miller (eds.), The Handbook of Linguistics, 446−465. Oxford: Blackwell.
Frentz, Hartmut, and Christian Lehmann 2003 Der gymnasiale Lernbereich ‘Reflexion über Sprache’ und das Hochschulzugangsniveau für sprachliche Fächer. Didaktik Deutsch 14: 92−98.
Gallmann, Peter 1990 Kategoriell komplexe Wortformen. Das Zusammenwirken von Morphologie und Syntax bei der Flexion von Nomen und Adjektiv. Tübingen: Niemeyer.
Gallmann, Peter 1997 Konzepte der Nominalität. In: Augst, Gerhard et al. (eds.), Zur Neuregelung der deutschen Orthographie. Begründung und Kritik, 209−224. Tübingen: Niemeyer.
Gallmann, Peter, and Horst Sitta 1998 (4th ed.) Schülerduden Grammatik. Mannheim: Dudenverlag.


IX. Beyond Syntax

Gaiser, Konrad 1973 Wieviel Grammatik braucht der Mensch? In: Rötzer, Hans G. (ed.), Zur Didaktik der deutschen Grammatik, 1−15. Darmstadt: WBG.
Glinz, Hans 2003 Geschichte der Didaktik der Grammatik. In: Bredel, Ursula et al. (eds.), Didaktik der deutschen Sprache, 423−437. Paderborn: Schöningh.
Gnutzmann, Claus 1995 Sprachbewußtsein (‘Language Awareness’) und integrativer Grammatikunterricht. In: Gnutzmann, Claus, and Frank G. Königs (eds.), Perspektiven des Grammatikunterrichts, 267−284. Tübingen: Narr.
Gornik, Hildegard 1989 Metasprachliche Entwicklung bei Kindern. Definitionsprobleme und Forschungsergebnisse − ein Überblick. Osnabrücker Beiträge zur Sprachtheorie 40: 39−58.
Granzow-Emden, Matthias 2002 Zeigen und Nennen. Sprachwissenschaftliche Impulse zur Revision der Schulgrammatik am Beispiel der ‘Nominalgruppe’. Tübingen: Stauffenburg.
Günther, Hartmut 1993 Erziehung zur Schriftlichkeit. In: Eisenberg, Peter, and Peter Klotz (eds.), Sprache Gebrauchen − Sprachwissen Erwerben, 85−96. Stuttgart: Klett.
Günther, Hartmut 1998 Sprachwissenschaft und Sprachdidaktik. Am Beispiel kleiner und großer Buchstaben im Deutschen. Didaktik Deutsch 4: 17−32.
Günther, Hartmut 2007 Der Vistembor brehlte dem Luhr Knotten auf den bänken Leuster − Wie sich die Fähigkeit zur satzinternen Großschreibung entwickelt. Zeitschrift für Sprachwissenschaft 26: 155−179.
Haider, Hubert 1988 Die Struktur der deutschen Nominalphrase. Zeitschrift für Sprachwissenschaft 7: 32−59.
Haspelmath, Martin et al. (eds.) 2005 The World Atlas of Language Structures. Oxford: Oxford University Press.
Helbig, Gerhard 1972 Wieviel Grammatik braucht der Mensch? Deutsch als Fremdsprache 3: 150−155.
Heuer, Hans-Dieter 1997 Sprachbetrachtung in der gymnasialen Oberstufe. Unterrichtsvorschlag zur sprachlichen Sensibilisierung. Deutschunterricht 50: 81−84.
Hoffmann, Ludger 1995 Gewichtung: ein funktionaler Zugang zur Grammatik. Der Deutschunterricht 47: 23−36.
Holler, Anke 2008 German dependent clauses from a constraint-based perspective. In: Fabricius-Hansen, Cathrine (ed.), ‘Subordination’ vs. ‘Coordination’ in Sentence and Text: A Cross-Linguistic Perspective, 187−216. Amsterdam: Benjamins.
Homberger, Dietrich 1993 Das Prädikat im Deutschen. Linguistische Terminologien in Sprachwissenschaft und Sprachdidaktik. Opladen: Westdeutscher Verlag.
Hoppe, Almut 2001 Kernfach Deutsch: Anspruch und Wirklichkeit − Defizit und Leistungen. Zum Stellenwert des Faches Deutsch. Mitteilungen des Deutschen Germanistenverbandes 48: 222−262.
Hübl, Annika, and Markus Steinbach 2011 Wie viel Syntax steckt in der satzinternen Großschreibung? To appear in Linguistische Berichte.


Hudson, Richard 2005 The English Patient: English grammar and teaching in the twentieth century. Journal of Linguistics 41: 593−622.
Hymes, Dell 1962 The ethnography of speaking. Blount LCS, 248−283.
Ingendahl, Werner 1999 Sprachreflexion statt Grammatik. Tübingen: Niemeyer.
Jacobs, Joachim 1988 Probleme der freien Wortstellung im Deutschen. Sprache und Pragmatik 5: 8−37.
Karmiloff-Smith, Annette 1992 Beyond Modularity: A Developmental Perspective on Cognitive Science. Cambridge, MA: MIT Press.
Klotz, Peter 1992 Was ist ein Satzglied, was ein Attribut? Ein Beitrag zu Grenzen und Freuden des Grammatikwissens. Der Deutschunterricht 44: 84−92.
Klotz, Peter 2004 Integrativer Deutschunterricht. In: Kämper-van den Boogaart, Michael (ed.), Deutsch Didaktik. Leitfaden für die Sekundarstufe I und II, 46−59. Berlin: Cornelsen.
Knobloch, Clemens 1986 Wortarten in der Schulgrammatik? Probleme und Vorschläge. Deutschunterricht 38: 37−49.
Knobloch, Clemens 2000 Schulgrammatik als Modell linguistischer Beschreibung. In: Booij, Geert et al. (eds.), Morphologie. Ein Internationales Handbuch zur Flexion und Wortbildung, 104−117. Berlin: de Gruyter.
Knobloch, Clemens, and Burkhard Schaeder 2000 Kriterien für die Definition von Wortarten. In: Booij, Geert et al. (eds.), Morphologie. Ein Internationales Handbuch zur Flexion und Wortbildung, 674−692. Berlin: de Gruyter.
Köpcke, Klaus-Michael, and Arne Ziegler (eds.) 2007 Grammatik in der Universität und für die Schule. Theorie, Empirie und Modellbildung. Tübingen: Niemeyer.
Kümmerling-Meibauer, Bettina, and Jörg Meibauer 2007 Linguistik und Literatur. In: Steinbach, Markus et al. (eds.), Schnittstellen der germanistischen Linguistik, 257−290. Stuttgart: Metzler.
Kultusministerkonferenz 2004 Bildungsstandards im Fach Deutsch für den Mittleren Schulabschluss (Jahrgangsstufe 10). Beschlüsse der Kultusministerkonferenz. Neuwied: Luchterhand.
Kultusministerkonferenz 2005 Bildungsstandards im Fach Deutsch für den Primarbereich (Jahrgangsstufe 4). Beschlüsse der Kultusministerkonferenz. Neuwied: Luchterhand. (Online: www.kmk.org/schul/Bildungsstandards/Grundschule_Deutsch_BS_307KMK.pdf).
Lehmann, Christian 2007 Linguistic competence. Theory and empiry. Folia Linguistica 41: 223−278.
Lindauer, Thomas 1995 Genitivattribute. Eine morphosyntaktische Untersuchung zum deutschen DP/NP-System. Tübingen: Niemeyer.
List, Gudula 1992 Zur Entwicklung metasprachlicher Fähigkeiten. Aus der Sicht der Sprachpsychologie. Der Deutschunterricht 44: 15−23.
Locke, Terry (ed.) 2010 Beyond the Grammar Wars. A Resource for Teachers and Students on Developing Language Knowledge in the English/Literacy Classroom. New York: Routledge.


Lohnstein, Horst, and Oliver Jungen 2007 Geschichte der Grammatiktheorie. Von Dionysius Thrax bis Noam Chomsky. München: Wilhelm Fink Verlag.
Luchtenberg, Sigrid 1995 Language Awareness-Konzeptionen. Ein Weg zur Aktualisierung des Lernbereichs ‘Reflexion über Sprache’. Der Deutschunterricht 47: 93−108.
Ludwig, Otto 1985 Könnten wir uns abfinden mit einer Sprache ohne Flügel? Zum Konjunktiv. Praxis Deutsch 71: 16−24.
Meinunger, André 2008 Sick of Sick? Ein Streifzug durch die Sprache als Antwort auf den ‘Zwiebelfisch’. Berlin: Kadmos.
Menzel, Wolfgang 1986 Wortarten. Praxis Deutsch 77: 12−18.
Menzel, Wolfgang 1988 Nebensätze. Praxis Deutsch 90: 21−40.
Menzel, Wolfgang 1991 Das Adjektiv. Praxis Deutsch 106: 25−44.
Menzel, Wolfgang 1992 Passiv. Praxis Deutsch 112: 20−27.
Menzel, Wolfgang 1998a Sätze verbinden. Praxis Deutsch 151: 37−41.
Menzel, Wolfgang 1998b Nachdenken über daß (dass) und das. Praxis Deutsch 151: 37−41.
Menzel, Wolfgang 1999 Grammatik-Werkstatt. Theorie und Praxis eines prozess-orientierten Grammatikunterrichts für die Primar- und Sekundarstufe. Seelze-Velber: Kallmeyer.
Menzel, Wolfgang (ed.) 2002a Praxis Sprache. Ausgabe für Realschulen und Gesamtschulen. Band 5−10. Braunschweig: Westermann.
Menzel, Wolfgang 2002b Mäuse fressen Katzen besonders gern. Von der Reihenfolge der Wörter im Satz. Praxis Deutsch 172: 29−35.
Menzel, Wolfgang 2004 55 Texte erzählender Grammatik. Braunschweig: Westermann.
Meraner, Rudolf 1988 Satzverknüpfung durch Pronomen. Der Deutschunterricht 40: 69−83.
Musan, Renate 2008 Satzgliedanalyse. Heidelberg: Winter.
Näf, Anton 1995 Die Satzarten als Lern- und Reflexionsgegenstand in der Schule. Der Deutschunterricht 47: 51−69.
Oomen-Welke, Ingelore 1997 Szenen einer Ehe: Linguistik und Sprachdidaktik Deutsch im letzten Drittel des Jahrhunderts. Wirkendes Wort 3: 448−468.
Oomen-Welke, Ingelore 2003 Entwicklung sprachlichen Wissens und Bewusstseins im mehrsprachigen Kontext. In: Bredel, Ursula et al. (eds.), Didaktik der deutschen Sprache, 452−463. Paderborn: Schöningh.
Ossner, Jakob 2000 Die nächsten Aufgaben lösen, ‘ohne kleine Brötchen zu backen’. Bemerkungen zu Bernd Switalla: Grammatik-Notizen. In: Balhorn, Heiko, Heinz Giese, and Claudia Osburg (eds.), Betrachtungen über Sprachbetrachtungen. Grammatik und Unterricht, 232−241. Seelze: Kallmeyer.
Ossner, Jakob 2006a Sprachwissen und Sprachbewusstheit. In: Becker, Tabea, and Corinna Peschel (eds.), Gesteuerter und ungesteuerter Grammatikunterricht, 8−19. Baltmannsweiler: Schneider.
Ossner, Jakob 2006b Sprachdidaktik Deutsch. Paderborn: Schöningh.
Peyer, Ann 1998 Sätze verknüpfen: und? als? wegen? Praxis Deutsch 151: 47−54.
Pfau, Roland et al. 2008 Die SprachChecker. Stuttgart: TeMaCom. (www.sprachchecker.de).
Pittner, Karin, and Judith Berman 2004 Deutsche Syntax. Ein Arbeitsbuch. Tübingen: Narr.
Reis, Marga 1986 Subjekt-Fragen in der Schulgrammatik? Deutschunterricht 38: 64−84.
Reis, Marga 1987 Die Stellung der Verbargumente im Deutschen. Stilübungen zum Grammatik-Pragmatik-Verhältnis. In: Rosengren, Inger (ed.), Sprache und Pragmatik, 139−177. Stockholm: Almqvist & Wiksell.
Reis, Marga 1997 Zum syntaktischen Status unselbstständiger Verbzweit-Sätze. In: Dürscheid, Christa et al. (eds.), Syntax im Fokus, 112−144. Tübingen: Niemeyer.
Röber, Christa 2007 Schrift lehrt Sprechen. Die Heranführung von Deutschlernern an die Artikulation deutscher Wörter und Sätze durch die systematische Nutzung des orthographischen Markierungssystems im Deutschen. daf. Halbjahresschrift des Zentrums für die Didaktik der deutschen Sprache an der Universität Siena: 61−76.
Röber, Christa 2009 Die Leistungen der Kinder beim Lesen- und Schreibenlernen. Grundlagen der Silbenanalytischen Methode. Baltmannsweiler: Schneider Hohengehren.
Rose, Kurt 1997 Ist das nun ein Adjektiv, ein Substantiv, oder was? Wie Schüler sprachliche Kategoriebegriffe repräsentieren. Versuch einer Annäherung. Deutschunterricht 50: 394−401.
Rothstein, Björn 2010a Sprachintegrativer Grammatikunterricht. Tübingen: Stauffenburg.
Rothstein, Björn 2010b Linguistische Inhalte im Deutschunterricht: Studentische Stimmen zu einem umstrittenen Thema. Hannover: Ibidem.
Seidel, Brigitte 2002 Wo stehen die Attribute im Deutschen? Praxis Deutsch 172: 47−52.
Sick, Bastian 2005 Der Dativ ist dem Genitiv sein Tod. Folge 2. Köln: Kiepenheuer & Witsch.
Siebert-Ott, Gesa 2003 Muttersprachendidaktik − Zweitsprachendidaktik − Fremdsprachendidaktik − Multilingualität. In: Bredel, Ursula et al. (eds.), Didaktik der Deutschen Sprache, 30−41. Paderborn: Schöningh.
Steinig, Wolfgang, and Hans-Werner Huneke 2002 Sprachdidaktik Deutsch. Eine Einführung. Berlin: Schmidt.
Strecker, Bruno 2001 Grammatik in Forschung und Unterricht. Mitteilungen des Deutschen Germanistenverbands 48: 10−17.


Sucharowski, Wolfgang 1995 Syntax und Sprachdidaktik. In: Jacobs, Joachim et al. (eds.), Syntax. Ein Internationales Handbuch Zeitgenössischer Forschung, 1545−1575. Berlin: de Gruyter.
Trim, John, Brian North, and Daniel Coste 2001 Gemeinsamer europäischer Referenzrahmen für Sprachen: Lernen, Lehren, Beurteilen. Niveau A1, A2, B1, B2. München: Langenscheidt.
Twain, Mark 2009 Die Schreckliche Deutsche Sprache/The Awful German Language. Hamburg: Nikol Verlag.
Uhmann, Susanne 1998 Verbstellungsvariation in weil-Sätzen: Lexikalische Differenzierung mit grammatischen Folgen. Zeitschrift für Sprachwissenschaft 17: 92−139.
Vater, Heinz 1986 Die NP-Struktur im Deutschen. In: Vater, Heinz (ed.), Zur Struktur der Determinantien, 123−145. Tübingen: Narr.
Wöllstein, Angelika 2010 Topologisches Satzmodell. Heidelberg: Winter.
Wunderlich, Dieter 1980 Deutsche Grammatik in der Schule. Studium Linguistik 8/9: 90−118.
Wunderlich, Dieter 1988a Das Prädikat in der Schulgrammatik. Diskussion Deutsch 103: 460−474.

Anke Holler, Göttingen (Germany)
Markus Steinbach, Göttingen (Germany)

Indexes

Language index

A
Afrikaans 280, 300 f., 303 f., 307, 347 f., 352 f., 356, 377 f., 510, 820, 832, 1122, 1725
Albanian 180, 185, 193, 196, 198 f., 210, 418, 631, 756, 767, 1101
Amharic 197, 205, 210
Anglo-Caribbean 1713
Annobonese 1720
Apalaí 190 ff., 214
Apatani 2116
Arabic 99 ff., 103 ff., 107, 112−122, 124−133, 192, 196 f., 199, 202, 206, 210, 345, 377, 379, 427, 711, 726, 752, 771, 775, 867, 1049, 1054, 1065, 1081, 1083, 1478 f., 1925, 2024, 2027
Assamese 439, 443
Avar 220, 229 f., 240 f., 243

B
Bajau cf. Sama
Basque 197, 317, 385, 410, 657, 659, 680 f., 685−688, 702, 705, 707, 710, 717, 748, 827, 962, 1181, 1188, 1318, 1480, 2045, 2051, 2060
Bella Coola 771, 1727
Berber 196, 279, 596, 649, 1125
Berbice Dutch 1711, 1714 f., 1725 f.
Bininj Gun-Wok 316, 339, 2042 f., 2059
Bora 1764−1767, 1769−1771, 1775−1779, 1781 f., 1785−1791
Boumaa Fijian 516, 2051, 2058
Breton 342 f., 346 ff., 350 f., 357, 367 f., 371, 375−378, 380, 382
Bukusu 1636 f.
Bulgarian 324, 651, 715, 717, 817, 822, 832, 916, 1220

C
Caboverdiense 1719
Canela 669 ff.
Catalan 28 f., 279 f., 300 f., 385, 618, 623, 629, 646, 650, 1945
− Old Catalan 629
Chamorro 689 f., 703
Chichewa 64, 270, 312 ff., 318, 328, 338, 822, 857 f., 870, 1192, 1292, 1652, 1655, 2057
Chinese 58, 68, 271, 315, 340, 377, 393, 517, 647, 656 f., 743 ff., 1189, 1249, 1480, 1519, 1523 f., 1528 ff., 1540, 1545, 1547, 1549, 1552−1559, 1925, 1935, 2024, 2031
− Cantonese 1519, 1522, 1526, 1553 f.
− Hokkien Min 9 f., 1519
− Mandarin 25, 206, 464, 466, 710 f., 743 f., 1161, 1167 f., 1185, 1320, 1518−1535, 1537, 1539, 1541 ff., 1545−1559, 1759, 2053, 2060
− Shanghai 41, 68
− Taiwanese 570, 593, 1160, 1167 f.
− Xiamen 41, 1160, 1189
Chingoni 257
Chol 660, 664, 666, 704
Chukchi 257, 661, 678, 685
Cree Plains 233, 755, 772 f.
Crioulo 1711, 1715
Croatian 46, 311, 321 ff., 326−329, 333 f., 341, 570, 629, 633, 645, 651, 712, 715, 821 f.
Cubeo 1790, 2051, 2061
Czech 201, 220, 277 ff., 283, 286, 292, 324, 498, 613, 628, 631, 633, 914−917, 989 f., 1001, 1930

D
Dagbani 483, 512
Danish 348, 351, 358 f., 370, 375, 377, 465, 477, 570, 745, 970 f., 1814
Diegueno 2037, 2060
Djapu 683, 701 f.
Dolakha Newar 2047, 2051, 2053 f., 2059
Dominican 1714
Dullay 184, 198, 210
Dutch 135, 137, 154, 235 f., 244, 250, 258, 269, 274, 276, 278−283, 293 f., 296 f., 304, 306, 317, 349, 354, 358, 362, 364, 380, 383 ff., 452−456, 459, 476, 516 f., 520, 525, 532, 541 f., 545, 547 f., 557, 564, 567, 570, 579, 584, 586 f., 589, 594, 638, 752 f., 758, 760, 774, 776, 832, 860, 874, 885, 943, 965, 1086, 1127, 1181, 1189, 1193, 1216, 1319, 1363 ff., 1367, 1369, 1379, 1382 f., 1385 f., 1398, 1402, 1419, 1427, 1439, 1445, 1452, 1463, 1470, 1474, 1477, 1539, 1711, 1714 f., 1721, 1725 f., 1806 ff., 1814, 1825, 1829, 1846, 1866, 1936, 2028
Dyirbal 180, 189, 212, 271, 657, 661 f., 665 f., 679 f., 682, 694, 704, 754, 772

E
Egyptian 135, 206, 1710
Émérillon 1788
English 7−12, 16 f., 19 f., 24, 27 f., 33, 35 f., 44 f., 51, 56, 58, 64−67, 97 f., 113, 135, 140−143, 145 f., 148, 156, 159, 161 f., 166, 168, 171 f., 176−179, 183−186, 188−191, 194−199, 201 f., 204 ff., 208 ff., 212, 214, 217, 220 ff., 226 f., 233, 237 ff., 242, 244, 249 ff., 254−257, 261, 266−271, 276, 280 f., 284 f., 287 ff., 291, 297, 305 f., 308 f., 311, 314, 317, 319, 321 f., 325 f., 328, 334 f., 339 f., 343 f., 347 f., 359, 378−382, 384 ff., 391 f., 399−400, 402, 404, 406 ff., 410, 419 f., 422, 424, 426, 436 f., 444−447, 451 ff., 455, 459, 466, 468 f., 473−476, 478, 480, 482, 484, 497, 501, 508, 510, 515, 519 f., 545 f., 562, 564−567, 569, 576 f., 584 f., 587, 590, 592, 594, 604 f., 610, 632, 638, 642 f., 650, 668, 671, 676 f., 680 f., 694, 698, 710 f., 715, 722, 724 ff., 728, 740 ff., 744−748, 751 f., 767, 772 ff., 778, 780 ff., 784 ff., 788 f., 791, 799 ff., 821−824, 837, 841, 844, 847 ff., 851 f., 855, 862 ff., 867, 870, 873 f., 882−886, 890 f., 893, 895, 902, 905, 908, 911, 923, 926 f., 941, 943 ff., 960, 967, 969 f., 972 f., 975, 977, 980−983, 985, 987 f., 990, 992 ff., 998, 1000, 1009, 1011, 1014 ff., 1020 f., 1023 ff., 1028 f., 1033, 1043 f., 1047, 1050 f., 1053 f., 1056−1059, 1061, 1067, 1086, 1089, 1101, 1104, 1106 f., 1111, 1114, 1118−1126, 1129−1132, 1140, 1143−1146, 1148, 1150 ff., 1155, 1157 ff., 1161, 1164, 1166, 1168, 1177−1182, 1184, 1189−1196, 1198, 1217, 1219, 1223, 1226, 1246 f., 1250, 1252, 1255, 1263 f., 1266, 1268, 1282 f., 1291, 1295, 1306 ff., 1318, 1320, 1331, 1333, 1356, 1358, 1363 f., 1369, 1380, 1388, 1390−1393, 1395, 1398 f., 1414, 1417, 1420 f., 1427, 1445, 1453, 1463, 1476, 1479, 1480, 1506, 1510, 1515 f., 1518 f., 1528 ff., 1532, 1535, 1537 f., 1560, 1562, 1565, 1568, 1572, 1576 f., 1586, 1592, 1596, 1611, 1615, 1619, 1627, 1630, 1641, 1645, 1658, 1660, 1667, 1669, 1701 ff., 1710−1718, 1720, 1738, 1762, 1773, 1789, 1799, 1800, 1802 f., 1806−1809, 1811 ff., 1820, 1822 f., 1825, 1827, 1829−1833, 1842, 1849−1852, 1866 f., 1870, 1872, 1879, 1889, 1895, 1901, 1905, 1909, 1911, 1914 ff., 1918, 1924, 1926 ff., 1930, 1932 f., 1935−1942, 1944, 1947 f., 1953 ff., 1957−1960, 1962 f., 1968, 1971 ff., 1975−1978, 1980, 1983 f., 1989, 1990−1998, 2001, 2004 f., 2009 f., 2012 f., 2015 ff., 2021, 2024, 2027 f., 2030, 2032, 2037, 2041, 2045, 2052, 2060, 2073, 2077, 2081, 2089, 2091, 2097, 2110, 2111, 2115, 2117, 2123
Estonian 343, 345 f., 377, 379
Evenki 276, 438, 445
Ewe 40, 203, 475

F
Faroese 356, 382, 459, 462 ff., 466, 474, 476
Finnish 233, 235, 239, 251, 515, 591, 738, 925, 931, 1852, 1905
French 40, 45, 135 f., 138, 194, 196, 201, 205, 222, 244, 270, 277, 280, 285 f., 292, 300−303, 308, 310, 320 f., 324, 327, 330−333, 339, 343, 381, 398, 407, 409, 436, 487, 490, 515, 558, 564 f., 570, 584 f., 590, 596, 601, 603, 606, 610, 614, 621, 623, 627, 630, 632 ff., 636, 640, 644, 646, 648, 650, 652, 743, 752, 756, 758 f., 762 f., 774, 778, 787, 791, 798, 823, 826, 862, 867, 872, 874, 902, 904, 964, 994, 1002, 1011 f., 1018 f., 1029, 1084, 1101, 1130 f., 1135, 1153, 1155, 1157 f., 1160, 1174 f., 1177, 1181 f., 1190, 1192−1195, 1263, 1267, 1281, 1313, 1456, 1470, 1712 f., 1715, 1717, 1719 f., 1814, 1826, 1828, 1846, 1870, 1879, 1911, 1970, 1974 f., 1977, 1980, 1989, 1999, 2010, 2012, 2024, 2037, 2110, 2116
Frisian 358−362, 379, 820
Fula 336

G
Galician 628 ff., 633
Gan 1519
Georgian 68, 317, 658 ff., 680, 705, 756, 773, 1150−1153, 1292, 1480, 1588−1622
German 4, 7−10, 51, 94, 99, 126, 135, 137, 142, 144, 149, 162, 164, 166, 170, 179, 183 f., 186, 188−192, 194, 196, 198 f., 201 ff., 208, 210, 212, 219−222, 229−232, 236, 238, 240 ff., 245 f., 279 ff., 293−298, 304, 306, 328, 348, 350, 353−356, 358−362, 364 ff., 368−371, 374 f., 378 f., 381, 386, 390, 402, 406, 408 ff., 413, 417, 419 f., 422, 424−427, 430, 433, 436 f., 439, 452−457, 459, 465, 469, 473 ff., 477−488, 490−512, 514−561, 570, 577 f., 585, 645, 706, 710, 718, 724, 743, 752, 756, 759 f., 775, 780, 784−791, 795, 798−802, 820, 832 f., 841, 844, 855, 867, 869, 874 f., 885−888, 890, 896−899, 901−904, 907, 914 f., 918, 923, 930, 932 ff., 936, 938, 939, 945 f., 952 f., 956 f., 959, 961 f., 964−971, 973, 1006, 1010 ff., 1016, 1020 f., 1029, 1032, 1073, 1084, 1174, 1178−1183, 1189, 1196, 1215 ff., 1234 ff., 1241, 1250, 1255, 1290 ff., 1303, 1306 f., 1319, 1322, 1324 f., 1343 ff., 1348, 1353 f., 1358, 1364, 1369 f., 1376, 1388−1395, 1398, 1400−1412, 1415−1422, 1424−1428, 1430−1434, 1436, 1438−1445, 1447−1477, 1770, 1793, 1801 f., 1806−1809, 1812, 1819, 1821−1825, 1827 ff., 1832, 1840, 1846, 1848, 1850 ff., 1858 f., 1863, 1867, 1869, 1871−1874, 1877, 1879, 1883, 1889, 1895, 1900 f., 1903, 1914, 1916, 1919, 1931, 1935 f., 1938, 1941 f., 1944 f., 1947 f., 1953 ff., 1957 ff., 1961 ff., 1968, 1974 f., 1977, 1979, 1980 f., 1983 f., 1986, 1992, 1995, 2010, 2024 f., 2030, 2033 f., 2072, 2096 f., 2099, 2101, 2105, 2108−2119, 2122, 2126
Greek
− Classical 135, 159, 165, 170, 185 f., 188 f., 192, 196, 201, 218 f., 222, 596, 749
− Modern 135, 180, 183, 189, 196, 198 f., 201, 205, 290, 317, 393, 411, 446, 577, 596, 674 f., 751, 756, 759, 761 f., 767−770, 776, 824, 1003, 1101, 1104, 1106, 1119, 1121, 1125, 1149, 1263, 1277, 1848, 1871, 1879, 1908, 2025, 2032
Greenlandic West 179, 657, 665, 672, 691−694, 696, 698 f., 773
Gujarati 669 f., 673
Gullah 1710
Gungbe 403, 820 f.
Guyanese 1711 f., 1714, 1717−1722

H
Haida 385, 393 f., 408
Haitian 1710 ff., 1714−1717, 1719−1723, 1725
Halkomelem 659 ff., 673, 677, 704, 708, 1728, 1733, 1738, 1750, 1758, 1761
Hausa 44, 194, 196, 202 f., 482 f., 510, 512
Hawaiian Creole 1710 f., 1717 f., 1720, 1722, 1724, 1726
Hebrew 38, 125 f., 130, 132, 202, 569, 636, 711, 726, 751 f., 756−759, 761−765, 768 ff., 820 f., 837, 1065, 1104, 1190, 1487, 1812, 1825, 1849, 1852, 1868
Herero 1636 f.
Hindi/Urdu 338, 462, 658 ff., 675 f., 709−712, 715 ff., 722 ff., 726, 1402 f., 1407, 1440, 1478−1518
− Hindi 239, 245, 462, 675, 706, 747, 826, 873, 1427, 1478 f., 1882, 1907
− Urdu 852, 858, 867, 870, 1478 f., 2025, 2028
Hua 183, 2038, 2059
Hungarian 30, 58, 60, 67, 179, 185, 233, 235, 239, 251, 256, 317, 338, 384−388, 390 f., 393−396, 400 f., 403−408, 410 f., 415, 418, 429, 433, 566, 590, 715−717, 726, 867 f., 885, 1071, 1101, 1126, 1137, 1186, 1196, 1198, 1402, 1435, 1440
Hup 1788 f.

I
Icelandic 221 f., 224−227, 232, 239−242, 245 f., 333 f., 343, 349−353, 355 ff., 362 f., 367, 372, 377, 381 f., 430, 442, 453 f., 458 f., 462 ff., 466, 474, 476 f., 545, 703, 751, 753, 756, 773, 775 f., 817, 832 f., 835, 852, 855, 874, 891 ff., 936, 1101, 1131, 1364, 1376, 1377, 1397, 1465, 1478, 1814, 1831
Indonesian 590, 867, 1658, 2024
Irish 519, 569, 727, 740, 747, 1169, 1190, 1193, 1945
Isizulu 1623
Italian 40, 135, 183, 187, 196, 273, 277 ff., 281 ff., 286, 288 f., 291 f., 298, 302, 307, 316, 352, 373 f., 379, 381, 385, 392, 394, 397, 407, 415, 545, 570, 590, 593, 595 ff., 599−602, 606, 608, 610 ff., 618 ff., 622, 624, 626−630, 632 f., 636−645, 647−653, 675, 733 f., 741, 824, 872, 1109, 1122 f., 1181, 1190 f., 1277, 1295, 1303, 1310, 1363 ff., 1461, 1472, 1477, 1539, 1553, 1633, 1643, 1793, 1799 f., 1802, 1807, 1809, 1822, 1832, 1839, 1846, 1849, 1852, 1882, 1902, 1992, 1999, 2012, 2110
− Old Italian 629

J Jamaican 1713, 1718, 1723 Jaminjung 216, 2072, 2082 Japanese 28 f., 30 ff., 58, 64, 66 ff., 193, 197, 202, 215, 236, 255, 315, 388 f., 393, 515, 545, 560, 570, 713 f., 748, 779, 867, 891, 893, 923, 946, 967, 1003, 1049, 1106, 1121, 1137, 1145, 1153, 1182, 1191 f., 1196, 1216, 1250, 1263, 1267 f., 1282 f., 1351, 1356, 1401 ff., 1407−1411, 1424, 1427, 1439 f., 1444 f., 1460, 1472, 1480, 1559−1588, 1707, 1762, 1877, 1883, 1886, 1889, 1894, 1905 f.−1909, 2024 f., 2032, 2034, 2081 Jarawara 1788 f. Juǀ’hoan 2116

K Kanjobal (Q’anjob’al) 666, 695 Kaqchikel 666, 694, 700 Karitiana 343, 346, 382 Kashmiri 317, 342 f., 345−349, 357, 368, 373 f., 376, 378, 667, 1517, 2053, 2062 Kavalan 422 f. Kayardild 2040, 2047, 2059 K’ekchí (Q’eqchi) 417 ff., 429, 442, 444, 689, 692 Kimatuumbi 40 f., 1161, 1193

Indexes Kinyarwanda 1634, 1654, 2039, 2060 Kiswahili 1623, 1653, 1655 Klallam 1727, 1730, 1738, 1741, 1744, 1746 ff., 1763 Kolyma Yukaghir 444, 2043, 2049, 2061 Korean 43, 58, 66, 407, 417 f., 443, 710, 903, 973, 1132, 1151 f., 1192, 1263 f., 1267−1271, 1282 f., 1351, 1402 f., 1411, 1441, 1480, 1870, 1907 f., 2025, 2032 Koyra Chiini 2051, 2060 Kulung 2050, 2062 Kwaza 1788, 1791

L Lango 433, 2051, 2055, 2061 Lao 2051, 2059 Latin 6, 8, 94, 140, 158 ff., 165, 185, 188, 192 f., 196, 198 f., 203, 206, 219, 222, 280, 317, 333, 515, 612, 653, 656 f., 712, 1012, 1033, 1035, 1401, 2041, 2097, 2111 Lega 1647, 1651 Lepcha 2050, 2062 Lezgian 2043, 2060 Lajamanu Warlpiri 1686 Luganda 331 f., 1161, 1192, 1653 f. Lummi 212, 233, 242, 772, 926, 1727, 1730−1742, 1745−1750, 1752 f., 1757, 1759−1762

M Madurese 431, 435, 443 Makhuwa 1642 f., 1647, 1657 Malagasy 30, 392, 410, 722, 746, 867, 1110, 1351 Malay 463, 466, 468, 659, 914 Mam 423, 443, 661, 666, 704, 773, 1664, 2051, 2059 Marathi 712, 716, 724, 746 f., 858, 1506 Mari (Eastern) 498 Martuthunira 853, 871 Matses 1788 f. Maybrat 2046, 2058 Mbundu 1642 Meithei 2053, 2057 Mina 2051 Mohawk 37, 316 f., 1141 Mokelese 2038, 2059 Morisyen 1710, 1717 ff.

Language index Mosetén 2049, 2051, 2062 Movima 1788 Muinane 1765, 1791 Muna 1668, 2051, 2055 f., 2059, 2062 Mundani 200, 215 Murrinh-Patha 867 Mybrat 2049

Portuguese 43, 65, 135, 415, 445, 570, 593 f., 618, 629, 631, 633, 643, 647, 649 ff., 653, 873, 1712 f., 1718, 2025 − Brazilian 300 f., 308, 565, 590, 594, 612, 633, 1181, 1351, 1398 − Old Portuguese 629

N Nahuatl 33 Ndebele 34, 330, 340 Nepali 673 Newar 966, 2047, 2051, 2053 f., 2059 Nez Perce 656 f., 659, 663, 666, 668 f., 671, 675, 679, 682−687, 702, 704, 754, 772, 775, 2054 Nias 666, 1872 Nlhe7kepmxcin 1754 Nonuya 1765, 1790 Nootka 171, 176 ff., 214, 217 Northern Sotho 1637, 1648, 1657 Northern Straits Salish 6, 10, 233, 1726− 1763 Norwegian 222, 233, 245, 346, 349 ff., 355 f., 358, 364, 374, 465, 489, 497, 734, 867, 1181, 1833, 1962, 2009, 2024 f. Nsenga 1640 Nunggubuyu 1790, 2048, 2060

O Ocaina 1765, 1790 Occitan 629 Okanagan 1754

P Paiute 142 f., 206 Palauan 202, 2053, 2059, 2063 Palenquero 1712 f., 1715, 1718, 1725 Palikur 1788 Papiamentu 1710, 1713 f., 1717 f., 1720 f., 1725 Persian 189, 418, 560, 711, 744, 944, 970, 1402 f., 1407, 1427, 1437, 1478 ff., 1487, 1498, 1508 f., 2055, 2060 f. Pitjantjatjara 2038, 2059 Polish 426, 445, 515, 529 f., 562, 781, 1852

Q Q’anjob’al cf. Kanjobal Q’eqchi cf. K’ekchí Quechua 185, 197, 203, 206, 217, 713 f., 746, 2057

R Roviana 660 f., 663, 688, 692, 702, 704 Rumanian 197, 233, 235, 239, 622, 626, 629 Rundi 1642 f. Russian 135, 196, 198 f., 200, 217, 219, 222, 304, 317, 334 f., 392, 409, 437 f., 449, 459, 466, 476, 521, 525, 528, 545, 559, 561, 715, 724, 756 ff., 760 ff., 776, 820, 863, 873, 1220, 1308, 1391, 1402, 1428, 1594, 1600, 1617, 1803, 1814, 1820, 1832, 1849

S Saanich 1727−1741, 1743−1751, 1756 f., 1759, 1762 f. Sahaptin 684 Samoan 256, 484, 512 Sama (Bajau) 662 f., 665, 702, 706, 708 Sambaa 1648, 1656 Samish 1727, 1743 f., 1746 f., 1758, 1761, 1763 Sanskrit 70 ff., 78 f., 82−86, 91, 94, 97 ff., 1478 f., 1509, 1516 Sao Tomense 1710, 1714 f. Saramaccan 1710 f., 1713−1720, 1723 Secwepemctsin 1754 Seediq 661 f., 666, 698 Selayarese 1402 Semiahmoo 1727 Serbian 311, 321 ff., 326−329, 333 f. Serbo-Croatian 46, 341, 570, 651, 712, 715, 821 f. Seselwa 1710, 1712, 1715 ff., 1719−1722, 1724, 1726

Shilluk 657, 667, 706 Shipibo 657, 667, 671, 676, 702 f., 708 Shona 1645 f., 1655 Sinhala 2053, 2055, 2059 Slovene 56, 58, 333 Songay 1710 Songish 1727, 1740, 1763 Sooke 1727, 1761 Sorbian 38 f., 343 Spanish 135, 163, 196, 202, 234 f., 239, 286 ff., 292, 315, 343 f., 379, 415, 418, 421, 440, 442, 570, 590, 594, 602, 618, 622−626, 628 f., 631, 633, 644−647, 650− 653, 756, 760, 762 f., 765, 872, 926, 929 f., 964, 1133, 1175, 1181, 1318, 1355, 1462, 1480, 1487, 1633, 1635, 1658, 1713, 1718, 1765, 1807, 1825, 1830, 1852, 1879, 1882, 1885, 1894, 1900, 1902, 2025, 2032, 2110 − Old Spanish 629 Squamish 214, 1758, 1761 f. Sranan 1710 ff., 1716 f., 1720−1723 St’at’imcets 1754 Sunwar 2050, 2057 Supyire 2043, 2051, 2057 Sursurunga 195, 198, 200, 214 Swahili 198, 207, 341, 570, 1623−1633, 1635, 1638−1641, 1644 ff., 1648, 1651 f., 1654 f. Swati 1649 Swedish 233, 311, 321, 335, 341, 343, 348− 355, 358−362, 370−373, 375, 379−382, 397, 406, 519, 570, 719, 885, 923, 928, 934, 994, 1003, 1195 Swiss German 1073, 2072

Tukang Besi 2041, 2045, 2049, 2058 Turkish 126, 129, 170, 179, 185 f., 193, 196, 199, 208, 219, 236 f., 277, 279, 397, 417, 444 f., 483, 743, 764 f., 867, 1051, 1073, 1083, 1121, 1402, 1441, 1487, 1827, 1868, 2024, 2029, 2053 f., 2059 f., 2117 Tuscarora 2038, 2061 Tswana 1637, 1642, 1653 Tzutujil 2051, 2058

U Udi 39, 66, 659, 1597, 1621 Udihe 2043, 2047, 2053 Urarina 2049 f., 2061

W Wambaya 853 f., 947, 964, 2025, 2028 Warlpiri 35, 221, 224 f., 230, 232, 240, 244, 271, 314 f., 340 f., 439, 443, 655 ff., 663, 668, 671 f., 677, 679−683, 688, 701 f., 705, 707, 715, 946, 966, 1131, 1402, 1434 f., 1438−1441, 1564, 1677−1709, 1966, 1997 Welsh 382 f., 515, 519, 562, 570, 593, 867, 874, 2025 West Flemish 280, 283, 300 f., 303, 306 Wichita 2038, 2062 Witoto 1765, 1789 f. Wolane 2040, 2042, 2048 f., 2051, 2061 Wu 743, 1519

T Tagalog 30, 35, 144, 162, 172, 175 f., 178, 180, 182, 212 f., 234 f., 239, 244, 430 f., 444, 698, 702, 722, 1110, 1658−1676 Tarascan 2037, 2060 Tariana 416, 442, 1789, 2043, 2055, 2057 Telugu 439, 443 Tepehuan 2051, 2062 Tharaka 1650 Tillamook 1727 Tigrinya 192, 867 Tojolabal 431 f. Tongan 161, 174 f., 178, 211, 256, 707 Tsez 340, 660, 672 f., 677, 707, 826, 1351 ff. Tundra Nenets 2046, 2062

X Xhosa 1626 f., 1653, 1657

Y Yaqui 415 f., 443 Yiddish 343, 349 ff., 356, 362 f., 379 f., 382, 557, 1428, 1438 Yimas 1789, 2040, 2047, 2059 Yoruba 43, 190, 211, 710, 820

Z Zulu 1137, 1623 f., 1628−1631, 1633−1645, 1647 ff., 1651−1654

Subject index A Accentual Phrase 42, 1157, 1271 ff. Accessibility Hierarchy 237, 721, 941, 1433, 2050 f. Active cf. Alternations/Diatheses Accusativus-cum-infinitivo (AcI) 11, 19 f., 435 f., 888, 1101, 1367 f., 1501 f. Across-the-board (ATB) cf. Transformations Adequacy − descriptive 804 f., 829, 895 f., 974 − explanatory 22, 804 f., 895 f., 974 Adjective cf. Category Adjunct 188, 259–269, 312, 316, 337, 517, 598, 615, 674 ff., 694, 696, 698, 735 ff., 739, 751, 762, 849 f., 1008 f., 1011 f., 1014, 1017 ff., 1041, 1174 f., 1229 f., 1232 ff., 1284–1287, 1290 f., 1295–1317, 1415 f., 1418 f., 1426, 1435, 1454–1457, 1460, 1463, 1480, 1506, 1508–1514, 1543, 1551 f., 1560 f., 1563, 1571–1576, 1583, 1634 f., 1691 f., 1701 f., 1752–1755, 1883, 1954 Adverb cf. Category Affix 28 f., 31–39, 115 ff., 147, 174 f., 177 f., 181 f., 188 f., 192 f., 196, 198, 201 f., 205, 207, 234, 277, 279, 311–318, 483 f., 519, 601, 659, 668, 684, 689 f., 695 f., 960, 1128 ff., 1132, 1136–1140, 1145 ff., 1292, 1295, 1303, 1421 f., 1483 f., 1487, 1489, 1494, 1503, 1506, 1519, 1539 f., 1542, 1564, 1590 f., 1594, 1597 f., 1603, 1607, 1624 ff., 1628 ff., 1632–1635, 1638, 1640 f., 1647, 1650, 1659–1666, 1685 f., 1691 f., 1695, 1697–1700, 1728–1731, 1738 ff., 1741, 1744, 1747–1751, 1753 f., 1757, 1765 f., 1776 f., 1780, 1848–1852, 1860, 1881, 1927, 2011, 2018, 2110 Agreement 3 f., 36–39, 116–119, 180, 182 f., 219–222, 224 f., 229 f., 236–241, 255, 257, 284, 291 f., 294, 298 f., 309–337, 416 f., 423, 429 f., 449, 484, 489, 496, 498 f., 505, 571, 584 f., 620 f., 623, 641 f., 659 f., 662– 666, 668 ff., 672–678, 681 f., 684, 689 f., 695, 700, 807 f., 822–830, 850 f., 857, 888, 918 f., 938 ff., 962, 989, 991, 996, 1048, 1054, 1175, 1351 f., 1376 f., 1426, 1467 f., 1479 f., 1483 f., 1487, 1494, 1496, 1500,

1503 ff., 1507 f., 1513 f., 1591–1596, 1603, 1607, 1616, 1624–1638, 1644, 1646 f., 1651, 1679 ff., 1685–1692, 1699, 1730, 1747 f., 1768 f., 1779–1782, 1800, 1807, 1810, 1817, 1841, 1845, 1847–1852, 1860, 1882 f., 1894, 2013, 2016 f., 2053–2056, 2108 ff., 2117 f. Aktionsart 1530–1535, 1538 f., 1596 − Activity 780, 1092, 1109, 1534 f., 1596 − Accomplishment 1103, 1531–1534, 1536, 1596 − State 1530 f., 1538, 1596 Alternations/Diatheses (cf. also Constructions) 19, 233 ff., 254–259, 749–770, 857–860, 983, 1012 f., 1089–1107, 1487– 1492, 1690 − Active 19, 83, 94 ff., 114, 181, 234, 255 f., 265, 269, 581, 750 f., 756 f., 762 f., 767 ff., 1020, 1116, 1466 ff., 1489 f., 1730, 1733 f., 1842, 1853 f., 1857, 1947 − Antipassive 257 f., 691 ff., 697, 753 f., 1730, 1750 f. − Applicative 34 f., 257, 435, 667 f., 676, 750, 988, 1129 f., 1292, 1632, 1634, 1638– 1641, 1730, 1749 − Causative/Causativization 256, 764 f., 779, 1012 f., 1089 ff., 1094, 1100, 1103– 1106, 1109, 1116, 1119, 1145 f., 1289 f., 1487, 1489 f., 1748, 1775, 1970, 1988 − Conative 19, 1090 f., 1690 − Dative 254, 545 f., 1094, 1099, 1290 ff., 1933 − Inchoative 256, 764, 1145 f., 1700 f., 1970, 1988 − Middle 19, 255 f., 258, 265 f., 268 f., 756– 770, 1090 f., 1730, 1749 f. − Passive 19, 34 f., 55, 83, 94 ff., 113 ff., 181, 201, 234, 250, 255 f., 265, 268, 402, 416, 424 f., 427, 431 f., 436 f., 462, 519, 581, 668, 674, 722 f., 750–756, 760–764, 766–769, 781, 791–794, 796, 856, 861, 956 f., 992 f., 1012, 1020 f., 1100, 1116, 1339, 1341, 1347 f., 1404 f., 1465–1468, 1488 ff., 1537, 1567–1570, 1577 ff., 1596 ff., 1607, 1637–1640, 1721, 1730, 1733 f., 1750, 1812 f., 1840, 1842, 1852 ff., 1856 f., 1863 ff., 1885, 1946 f., 1984 f., 2108 ff.

Ambiguity 28 f., 50–54, 235 ff., 285–289, 292, 361, 529, 535, 537, 556, 713 f., 886 f., 952 f., 1147, 1158 f., 1170, 1199, 1205, 1208 ff., 1216, 1222 f., 1238, 1241, 1299, 1343, 1352, 1406 f., 1412, 1415 f., 1432 ff., 1460 ff., 1504 f., 1523, 1566, 1576–1579, 1610, 1666, 1702, 1719, 1878 f., 1884, 1892, 1896 f., 1913, 1919, 1926, 1932, 1949 f., 2009, 2014–2017, 2020 f., 2049, 2109 f. Anaphor 223, 317, 446–474, 1066, 1227– 1241, 1243, 1357–1396, 1413 f., 1416 f., 1422, 1504 f., 1508, 1687 f., 1692 ff., 1718 f., 1813 ff., 1843, 1856 f., 1865, 1890– 1893, 1924 f., 1930 f. − Locally free reflexives (exempt anaphora) 466–474, 1368, 1388–1396 − Logophoricity 203, 461–464, 467 ff., 471– 474, 1489 − Long-distance reflexives 453 f., 457–466, 1364, 1375, 1380, 1480, 1504 f. − Picture noun phrases 466 f., 471–474, 1362, 1388–1395, 1892 − Reciprocals 200 f., 447 ff., 531 f., 1012 f., 1240 f., 1362 f., 1416 f., 1638, 1640 f., 1687 ff., 1692, 1719, 1749, 1775 − Reflexives 199 ff., 221, 238 ff., 256, 311, 317, 328 f., 426, 446–474, 1012 f., 1066, 1100 f., 1229 f., 1323, 1338, 1352 ff., 1357– 1396, 1485 ff., 1489 f., 1492, 1504 f., 1507 f., 1513 f., 1561 f., 1576–1579, 1675, 1687 f., 1691–1694, 1718 f., 1749, 1775, 1813 ff., 1843, 1856 f., 1890 ff., 1897 Animacy 180 f., 205, 227, 233 ff., 237 ff., 315, 332, 435, 472, 522, 525, 531, 546, 552, 663 f., 827, 840, 909 ff., 1337, 1459, 1481 f., 1485, 1487, 1602, 1625, 1633 ff., 1714, 1766–1770, 1772 f., 1777, 1786 f., 1884, 1933 f., 1947 f., 2054 f. Annotation 841 f., 850, 862 f., 949, 1041, 1076 f., 1913 f., 1916–1935, 1968, 1987 f., 2002, 2009 f., 2016, 2021 f., 2023, 2064, 2069, 2074–2090 Antipassive cf. Alternations/Diatheses Aphasia 172, 1524, 1834–1840, 1842–1856, 1858 ff., 1862 ff. Apposition 118, 184, 888, 1261, 1612, 1683 f., 1780, 1928 Argument structure 19 f., 22, 31, 47, 138, 140 f., 222–227, 246–251, 264, 254, 258, 265, 545, 581, 784 ff., 811, 854–857, 866, 878, 882, 941 f., 1039 ff., 1093–1098, 1107,

1111, 1113, 1147, 1284–1289, 1302 f., 1305, 1342, 1344, 1351, 1391, 1495 f., 1564 f., 1569 f., 1638 ff., 1687–1690, 1845, 1883 f., 1929 ff., 2006, 2010 Article cf. Category Aspect 180 f., 196, 664, 1292, 1480, 1483 f., 1494, 1498 f., 1513 f., 1539 ff., 1560, 1583 f., 1596, 1632, 1644 f., 1659, 1667, 1685 f., 1695, 1715 f., 1729 ff., 1736, 1745, 1774 f. − completive 196, 1539 ff., 1715 − experiential 1539 f., 1583 − habitual 196, 1017, 1539 − imperfective 181, 664, 673, 762, 1483 f., 1501 f., 1596, 1685 f., 1730 − perfective 181 f., 664, 669, 673, 762, 1017, 1145, 1480, 1482 ff., 1501 f., 1539, 1583, 1596, 1685 f., 1730 − progressive 196, 664, 1017, 1145 f., 1484, 1539 f., 1583, 1730, 1812 Attribute-Value-Matrix (AVM) 840, 855, 863, 937 f., 942, 948 f., 959, 990, 2002 f. Autolexical Syntax 38, 1134

B Basic Linguistic Theory 2040–2049, 2052, 2056 Behaviourism 136, 144 f., 149, 151 Binding Theory 449 f., 731, 737, 1066, 1228–1234, 1239, 1327, 1338–1342, 1347, 1357–1396, 1417, 1424 ff., 1813 ff., 1890– 1893 − Principle A 450, 466, 1227 ff., 1240, 1361–1365, 1375 f., 1388, 1813 f., 1890 ff. − Principle B 3, 450 f., 1361–1365, 1374 f., 1813 ff., 1890 f. − Principle C 429, 529 ff., 535 ff., 733 f., 1214, 1228–1234, 1240, 1312, 1353 f., 1361 f., 1380, 1410, 1435, 1813 f., 1893

C Case − Case checking 239, 398, 627, 636, 643, 691, 823, 825, 1142, 1232, 1370–1374, 1378 ff., 1851 − Constructive case 853 f. − dative 92 f., 219, 221, 226, 430, 524, 526 f., 541, 544 ff., 552, 577 f., 597, 603,

671 f., 723 f., 785 f., 827, 844, 852 f., 891 ff., 895, 918, 1131, 1241, 1290 ff., 1353 f., 1412 f., 1416 f., 1464–1467, 1485, 1487–1492, 1496 f., 1503, 1514, 1563, 1592 f., 1598, 1611 f., 1679 f., 1688–1693, 1696, 1698, 1700, 1753, 1883, 2053 f., 2118 − default 671, 683, 852, 887–890, 892, 895, 1481 f., 1496, 1502, 1802, 1807 − Differential case marking 233 ff., 239, 1487 f., 1492, 1496, 1772, 2053–2056 − Ergative-absolutive 257, 318, 655, 657, 659 f., 663, 665–671, 674, 682, 685 f., 689, 753, 828, 1131, 1688 − genitive 92, 109, 172, 183, 198, 219, 267, 417, 517, 541, 851 f., 889, 891, 941, 1485, 1487, 1496 ff., 1502, 1514, 1571, 1581, 1598, 1612, 1616, 1713, 1802, 2116 ff. − inherent 226, 679, 753, 891 ff., 1369 − instrumental 92 f., 95, 257, 1490, 1773 f. − lexical 225 ff., 671 f., 890–894, 956, 1377, 1466 f., 1483, 1491, 1513 f., 1802 − Nominative-accusative 180, 318, 670 f., 673, 677, 828, 1731, 1753, 1766, 1772 − oblique 219, 753, 1502, 1773 f., 1784, 1883 − quirky 221 ff., 225 f., 239 ff., 430 f., 852, 891 ff. − semantic 852 f., 1700 − structural 225, 239, 699, 753, 852, 891, 946, 956, 1466 f., 1483, 1488, 1492, 1513 f., 1751 f., 1883 Category (syntactic) − Adjective 6, 117 ff., 159, 161 ff., 167, 169 f., 173, 182–189, 190, 247, 310, 320, 324 ff., 515 ff., 634, 789 f., 938 f., 946 f., 975 f., 984 f., 995 f., 1005 f., 1293–1310, 1313 f., 1448, 1497, 1519, 1524, 1530, 1560, 1563, 1571 f., 1575 f., 1589, 1629 f., 1666 f., 1712 f., 1736 ff., 1747, 1971, 2116 f. − Adverb 6, 170, 159, 186–189, 261 f., 316, 348 ff., 354 ff., 372, 425 f., 437, 552, 630, 640 f., 759, 766, 1016, 1056, 1225, 1295 f., 1301, 1303 f., 1309 f., 1315, 1454 f., 1460, 1462 f., 1538, 1542 f., 1572, 1626 f., 1669, 1714, 1954 − Article/Determiner 4, 179, 199, 208, 252 f., 296, 310 f., 334 f., 501 f., 552, 610 ff., 627 f., 718 f., 788 f., 797, 851, 938 f., 990 f., 995 f., 1032, 1174 f., 1526 f.,


1629, 1712, 1737 f., 1745, 1781 f., 1794, 1800 f., 1808, 1839, 1850, 2052, 2116 f. − Auxiliary verb 19, 208 f., 276, 493, 516 f., 569, 583 ff., 618, 621, 821, 860–864, 886, 1011, 1040, 1133, 1226, 1434, 1449, 1467, 1486, 1559 ff., 1589, 1597, 1607, 1632 f., 1641, 1678–1687, 1690 ff., 1799, 1803, 1839, 1845, 1850 − Bridge verb 360, 899 − Classifier 319, 321, 1520–1524, 1526, 1528 f., 1581, 1769 − Complementizer (COMP) 207, 351 f., 359–363, 419, 517, 520, 725, 902, 957, 1216, 1344 f., 1447 f., 1500, 1506, 1560, 1609–1612, 1679, 1683, 1685 f., 1695– 1699, 1701, 1717 f., 1757, 1799 ff., 1809 f., 1839, 1845, 2114 f. − Conjunction 207, 347, 351, 479 f.−484, 488 ff., 492, 1016, 1448, 1500, 1508, 1560, 1610, 1613, 1724, 1787, 1986 − Copula 180, 182, 495, 498, 985 f., 1448, 1451, 1483 f., 1531, 1722, 1724, 1742 f., 1746, 1772, 1975 − Count noun 319, 501, 990, 1528 f., 1767 ff., 2005, 2012, 2013 − Expletive 224, 233, 249 f., 293, 320, 372 f., 375 f., 436, 472, 602 f., 760, 763, 813, 816, 831, 846, 878 f., 907, 1331 f., 1506, 1511, 1628, 1631, 1635 ff., 1799 f., 1802, 1809, 1948, 1955, 1957 − Infinitive verb 19 f., 319 f., 415, 418 ff., 440, 459, 617, 632 ff., 642, 710, 864, 886, 888, 1017, 1179 f., 1215 f., 1331, 1344 f., 1359, 1418–1421, 1465 f., 1483, 1501– 1505, 1599, 1608, 1612, 1614–1617, 1626 f., 1645, 1673, 1681, 1683, 1685, 1690, 1695–1699, 1806 f., 1839 f., 1851, 1863, 1893 f., 1921, 1974, 1978, 2108 f. − Interjection 159, 189 f., 209 f. − Mass noun 319, 1528 f., 1767 ff., 2012 − Modal verb 281 f., 293, 295 ff., 366, 424, 434, 569 f., 618, 760 f., 786 f., 886, 1020, 1055, 1225 f., 1539, 1809 ff. − Negative Polarity Item 284, 287–294, 303, 786 f., 791, 824, 1225 f., 1236 f. − Noun 38, 103 f., 159, 173–182, 247, 252 f., 262, 264 f., 310 f., 319–362, 417, 501 ff., 516 f., 663 f., 779 f., 782, 788 ff., 938 f., 1032, 1497, 1520, 1526 ff., 1560, 1602 f., 1625, 1642 f., 1666 f., 1670, 1677 f., 1681, 1712, 1737, 1747 f., 1765–

1768, 1776 ff., 1802, 1844, 1882, 2012, 2116–2119 − Noun classes 331 f., 857, 1351 f., 1624– 1628, 1636 f., 1647, 1766–1770, 1779 ff. − Particle 16, 18 f., 103, 110–113, 171 f., 177, 207 ff., 277–280, 300–303, 350 ff., 391, 397 f., 401, 403, 406, 440 f., 492, 517, 542 f., 1016, 1117, 1130, 1235 ff., 1448, 1453 f., 1498, 1532, 1539, 1545 ff., 1550 f., 1603 f., 1617, 1630, 1644, 1648 ff., 1668, 1673, 1715, 1728, 1731 f., 1735 f., 1785, 1886, 1948, 1955, 1957, 1959 − Preposition (adposition, circumposition, postposition) 105 f., 159, 208, 247, 266 f., 269, 516 ff., 566 f., 576 f., 637 f., 779, 789, 1117, 1415, 1480–1483, 1497 f., 1531, 1563 f., 1571 f., 1631 f., 1740 f., 1986 − Pronoun 3, 159, 192–206, 237, 252 f., 311–317, 322–329, 335 ff., 365 f., 447 f., 450–457, 522, 548 f., 555, 595–643, 784 ff., 820, 832 f., 898, 918 ff., 1011, 1357–1379, 1431, 1459, 1565, 1584, 1602 f., 1610 ff., 1661 f., 1692 f., 1711 f., 1752 f., 1776 f., 1813 ff., 1843, 1856 f., 1890–1893 − Psych verb 222, 224, 231 f., 467, 470 f., 530, 545, 1099, 1390, 1395, 1491, 1739 f. − Resumptive pronoun 21, 123 f., 354 f., 374, 393 f., 711, 724, 726 f., 884 f., 898 f., 921, 1065, 1783, 1802, 1887 f. − Verb 103, 138, 159, 173–182, 247 f., 276, 310 f., 342 f., 422, 433, 516, 519, 527, 636, 752, 765 f., 781 f., 823, 960, 1012, 1015 f., 1047, 1089–1119, 1130, 1447 f., 1453 f., 1466, 1480, 1491 f., 1494 ff., 1500, 1536 f., 1562–1565, 1590 f., 1599 f., 1638–1641, 1658–1666, 1678 f., 1681, 1687–1690, 1711, 1714, 1722 ff., 1744–1748, 1751, 1765 f., 1771, 1844, 1975 − Weather verb 436, 602, 1802 Categorial Grammar (CG) 9, 160, 941, 944, 961, 1045–1079, 1199, 1209, 1287, 1296, 1404, 1876, 1898, 2003 f., 2018, 2025 Causative/Causativization cf.
Alternations/Diatheses C-command 37, 52, 222 f., 253, 392 f., 429, 449, 451, 457, 462 f., 466 f., 470, 520, 528 ff., 533, 543, 568, 814 f., 817, 819, 1137, 1140 f., 1213 f., 1227, 1229 ff., 1237– 1240, 1312, 1332, 1334, 1359–1363, 1373, 1378, 1388 f., 1412 ff., 1417, 1432 f., 1564, 1580, 1631, 1813 f., 1892 f.

Clause − adverbial 357, 392 f., 439, 1508, 1539 f., 1613–1617, 1784 − complement 262, 286, 357, 359 f., 435, 458 ff., 462, 959, 1174, 1181, 1323, 1380, 1503 f., 1506 ff., 1511, 1607–1613, 1616 f., 1646, 2114 f. − copula 1448, 1772, 1812 f. − declarative 209, 345 f., 352 f., 369, 515 f., 632, 640, 642, 689, 899, 902, 925, 1009, 1020 f., 1186 f., 1311 f., 1450 ff., 1457 f., 1545 f., 1887 − embedded 266 f., 347, 356 ff., 363 f., 367 f., 370, 392 f., 422, 429, 435, 450, 456, 458 ff., 582, 688, 831, 844 ff., 848, 904, 916, 925, 1101, 1276, 1292, 1332, 1351 ff., 1364, 1377, 1409, 1453, 1457, 1503 ff., 1508, 1510 f., 1576, 1578, 1610, 1645, 1799 f., 1809 ff., 1880 f., 2050, 2113 ff. − exclamative 190, 209, 369 f., 794 − existential 315, 1244, 1667, 1674, 1717, 1720 − imperative 209 f., 302, 352 f., 619, 631 ff., 640, 642, 993, 1009, 1020 f., 1029, 1409 f., 1449 f., 1465, 1484, 1499 ff., 1673 ff. − interrogative 18, 205, 209, 352 f., 369 f., 374, 419 f., 606 f., 632, 642, 725 f., 847, 902, 1186, 1219, 1223 f., 1402, 1510–1513, 1584 f., 1673, 1702, 1886 f. − main 207, 342 f., 351, 354, 356 f., 361– 364, 366 ff., 371, 392, 480, 516, 715 ff., 1402, 1409, 1414, 1450, 1453, 1457 f., 1512, 1539 f., 1604, 1610 ff., 1616, 1701 f., 1730–1744, 1771, 1783, 1785, 1801, 1840, 1846, 1984, 2113 ff. − root 354, 365, 369, 392, 1806, 1840, 1863 − subordinate 180, 516, 606 f., 1402, 1480, 1484, 1501 ff., 1506–1513, 1645, 1718, 1729 ff., 1740, 1755, 1757 f., 1766, 1776, 1782–1785, 1840, 1849, 1863 ff. Clitics/Cliticization 43, 196, 234, 595–643, 814, 832 f., 863 f., 1132 ff., 1226, 1479 ff., 1497 ff., 1603, 1633, 1646 f., 1668 ff., 1673 f., 1678 ff., 1686–1692, 1695, 1718 f., 1728, 1730–1733, 1774–1777, 1782 f. − Clitic climbing 596, 617–620, 636, 639, 1133, 1633 − Clitic doubling 315, 596 f., 621–626, 1633

Cognitive Grammar 150, 979 f., 999 Collocation 777–798, 997, 1160 f., 1735 f., 1754, 1965, 1971, 1987 Competence vs. performance 10 ff., 1262, 1795, 1797 f., 1805–1808, 1810 f., 1814, 1818, 1836 f., 1839, 1841 f., 1844, 1847, 1855, 1857–1860, 1862, 1899, 2102 ff. Complementation 7–10, 135, 413–421, 993 f., 1016, 1288 f., 1607–1613, 1641–1644, 1965 f., 1972–1975, 1977–1980, 1984 f. Complementizer (COMP) cf. Category Compositionality 47–55, 282, 284, 287, 290 f., 720, 778 ff., 782 f., 787 f., 791, 793 f., 796 ff., 975 f., 980–984, 995, 1036 f., 1046, 1049, 1059, 1074, 1078 f., 1089, 1199, 1201, 1203–1209, 1496 f., 1682, 1751 Computational Linguistics 840 f., 844, 865, 867 f., 942, 948, 978, 999, 1041 ff., 1075 ff., 1913, 1919, 1930, 1987–1990, 2001–2027 Conditional 663 f., 715, 1508, 1613, 1615, 1645, 1702 Configurationality 848, 941, 1342 f., 1433, 1712, 1889, 1948 Constituent structure 140–144, 225, 942 ff., 946, 980 f., 1156 f., 1172–1175, 1288 f., 1317, 1403 f., 1564, 1682–1685 Construction Grammar (CxG) 5, 10, 15–18, 21, 23, 27, 60, 794 ff., 961, 974–1000, 1074 f., 1119 f., 1302 Construction (syntactic) − causative 27–32, 418, 422, 436, 614, 686 f., 750, 858 f., 1120, 1145 f., 1566– 1570, 1577, 1598 f., 1665 f. − Cleft 20 f., 358, 404, 406, 791, 796, 1261, 1415, 1649 f., 1669 f., 1743 f., 1758 f., 1842, 1852, 1948 f., 1951, 1955 − comparative 20 f., 286, 569, 715, 1128, 1132, 1224, 1234, 1530, 1589 − Dative shift 402, 1713 − Left dislocation 316, 347, 354 f., 374, 377, 390, 888, 1263 f., 1448 f., 1548 − Object drop 1633, 1635 − Object shift 698 f., 816 f., 832 f., 943, 1241, 1262, 2053 f. − Parasitic gaps 725, 741 f., 1053, 1060 f., 1332 f., 1335, 1348, 1350, 1415 f., 1418– 1421, 1888 − Pied Piping 403, 726, 1234–1237, 2005 − possessive 16 ff., 184, 198 f., 908–911, 1738 ff., 1778 ff., 2118 f.

− Pro-drop 236, 315, 1519, 1535, 1634 f., 1799 f., 1807, 1809 − Pseudo cleft 20 f., 406, 1028 f. − Raising 19 ff., 414 ff., 435 ff., 441, 519, 676, 687, 845 f., 994, 1230 f., 1237 f., 1240, 1353 f., 1368, 1404 f., 1414 f., 1501 f., 1645 f., 1893 ff. − resultative 19 f., 540, 547, 794, 961 f., 1117, 1120, 1130, 1448, 1533, 1534 ff., 1541, 1543 f., 1700 f. − Right Node Raising 481, 494, 503–507, 509, 1051, 1058 f., 1070, 1172 − Serial verb construction (SVC) 414, 416, 1531, 1714 ff., 1722 f., 1741 − Small clause 1101, 1117, 1532–1536, 1541 − there construction 220 f., 224, 233, 238 f. − Topicalization 20 f., 58, 251 f., 344 f., 350, 353, 355, 358, 361, 390–396, 398, 402, 494 f., 499 f., 544, 572, 726, 791, 846 f., 1028 f., 1035 f., 1038 f., 1268 f., 1390, 1408 ff., 1414, 1421 f., 1425, 1572–1575, 1649, 1684, 1842, 1852, 1863, 1931 f., 1947, 1951–1954 − Wh-imperative 1409 f. − Wh-question 16, 20 f., 57, 62, 284 f., 344 ff., 349, 360, 374, 494 f., 524 f., 576 f., 584 f., 847, 882 f., 982, 994, 1264, 1551 f., 1759, 1794 f., 1803, 1809 ff., 1840, 1847, 1849, 1854, 1858 f., 1863 Control 19 f., 221, 412–441, 661 ff., 665 f., 677 ff., 766, 834, 844 ff., 848 f., 994, 1050, 1207 f., 1321–1355, 1465 f., 1502 f., 1567, 1578, 1645 f., 1688–1692, 1694–1698, 1890, 1893 ff., 1920 f., 1978 f., 1985 f. − anaphoric 848 f. − Adjunct control 437 ff., 1322–1325, 1331 ff., 1335, 1348–1351 − Arbitrary control 427, 439 f., 1325, 1331– 1334 − Backward control 429, 439, 822, 834, 1351–1355 − Control shift (control coercion) 423–427, 1322 f., 1336–1342, 1346 ff., 1350 f. − implicit 426 ff. − partial 425 f., 434, 440 f. − split 415, 425, 440 f. Coordination 207, 319 f., 330–337, 404 f., 478–510, 568 f., 600, 605 f., 608, 675, 720, 738, 850 f., 1010, 1013 f., 1050 f., 1055, 1058 ff., 1064, 1167, 1175, 1344 f., 1367, 1566, 1768 f., 1896, 2014, 2021, 2023

Copy Theory of Movement cf. Transformations Coreference 203, 221, 239, 326, 328 f., 458, 730, 767, 813, 959, 1205, 1229 f., 1233 f., 1240, 1312, 1328, 1410 f., 1564, 1616, 1693 f., 1890–1893 Core vs. periphery 11 f., 522, 792, 795, 974 f., 1335, 2005 f., 2118 Corpus linguistics (Corpora) 12 f., 523, 548, 777, 780, 788–791, 794–798, 908, 911, 996 f., 1015, 1076, 1765, 1796, 1803, 1811 f., 1818 f., 1876, 1912–1935, 1965, 1969, 1977, 1980, 1984–1989, 2009 ff., 2014 f., 2038, 2048 f., 2051, 2063 ff., 2069 ff., 2075, 2078, 2081, 2083 ff., 2089 f., 2106 f., 2113 Countability (count-mass distinction) 180, 1528 f., 1738, 1777 f., 2012 Crossover 735, 1213 f., 1353 f., 1888

D Dative cf. Alternations/Diatheses and Case Deep Structure (D-Structure) cf. Levels of Representation Definiteness 38, 118 f., 177, 181 f., 199, 227, 234, 238 f., 502 f., 516, 522, 713, 840, 909, 1244, 1487, 1526 ff., 1543, 1659, 1738, 1800 f., 1933, 2054 f. Dependency Grammar 101, 128, 138 f., 1004–1014, 1027–1043, 1074, 1284 f., 1296 Determiner cf. Category Dictionary 161 f., 185, 1015, 1020 f., 1964– 1990, 1018, 2068, 2085 ff. Differential object marking cf. Case Discourse-configurationality 58, 233, 235, 239, 251, 383–406 Discourse Representation Theory 3, 1950 Distributed Morphology 30, 35 f., 277, 297, 750, 767–770, 888, 1107, 1115–1118, 1137 ff., 1141–1145, 1147 f., 1748 Duke of York 1234–1237

− Gapping 494, 496, 503–506, 509, 569, 1035, 1037 f., 1896 − Sluicing 494, 566 ff., 571–574, 576 ff., 581, 584 f., 887, 1035, 1038, 1896 f. − VP-ellipsis 432, 462 ff., 494, 563–566, 568 ff., 574, 576, 578, 581–585, 886 f., 1035, 1038, 1222, 1232 f., 1754 f., 1896 f. Elsewhere Principle 1142–1146 Empty Category Principle cf. Transformations Endocentricity 7 ff., 118 f., 127 f., 320 f., 489 f., 979, 1005 f., 1285 f., 1291, 1296, 1302, 1310 f., 1317 English Resource Grammar (ERG) 1932, 1990, 2010, 2025 Ergativity 220, 224, 229, 232, 235, 250 f., 515, 654–701, 753 f. Ersatzinfinitiv cf. Infinitivus pro participio E-Type Pronoun 328 f. Event 27 f., 164, 175 f., 179 f., 182, 230 f., 247, 256, 259, 264 f., 384, 394, 396, 414, 417, 434, 482–485, 487 f., 495, 532 ff., 536, 553 f., 750, 754, 764 ff., 788, 858 f., 981, 992, 1097 ff., 1101–1105, 1110–1113, 1115–1118, 1120 f., 1147 f., 1289 f., 1292, 1300 f., 1311 f., 1484, 1495 f., 1502, 1537– 1541, 1599, 1641, 1660, 1664 f., 1681, 1691 f., 1694–1699, 1773, 1816 Event related brain potentials (ERP) 236, 1876, 1882, 1884, 1889 f., 1892, 1895 f. Exceptional case marking (ECM) cf. Accusativus-cum-infinitivo (AcI) Exocentricity 331, 1031 Extended Projection Principle (EPP) 224, 227, 372, 374–377, 696 ff., 823 f., 826, 828, 1266, 1279, 1371, 1374 Extended Standard Theory (EST) cf. Generative Grammar Extraposition 492, 496, 520, 710, 712, 718 f., 728, 738 f., 902 ff., 994, 1035 f., 1039, 1173 f., 1176, 1181, 1232 ff., 1291 f., 1327 f., 1448, 1455, 1462 ff., 1506, 1509, 1604 f., 1985, 2052

E Ellipsis 290 f., 352 f., 491, 494, 496, 503– 507, 509, 562–588, 1035, 1037 f., 1226 f., 1522, 1896 f., 1928

F Focus 56 ff., 60 ff., 196, 200, 205, 313, 336, 347 f., 352, 368, 372 ff., 376 f., 385 f., 392, 394, 396, 398–406, 467 f., 490, 508 f., 522, 524–527, 530, 542 f., 546 f., 550, 555 f., 566, 689, 696, 700, 722, 725 f., 821, 857 f.,

862 f., 866, 1120, 1178, 1183–1186, 1263– 1273, 1413, 1421, 1461 f., 1498 f., 1510, 1533, 1547 ff., 1551 f., 1600, 1636, 1641– 1644, 1648 ff., 1659 f., 1722 f., 1752, 1785, 1930, 1945 f., 1948 f., 1952–1955, 1957, 1959, 2089, 2112 FrameNet 1985–1988 Functional application 1047 f., 1054 f., 1067, 1078, 1293 ff., 1296–1299, 1310, 1316

G Generalized Phrase Structure Grammar (GPSG) 7, 13, 944, 954 f., 962, 1008, 1074, 1404, 1853 f. Generative Grammar 2 f., 12, 22, 135, 139, 154, 160, 219, 224, 227 f., 233, 363, 392, 399, 447, 517, 532, 542, 791 f., 803 f., 810, 1009, 1032, 1140 f., 1202, 1285, 1288, 1290, 1325, 1345, 1350, 1727, 1799 f., 1812, 1835, 1844, 1880, 2038, 2041, 2102 f. − Bare Phrase Structure 609, 810–814, 827, 1030 f. − Extended Standard Theory (EST) 834, 1404 − First Phase Syntax 1107, 1111 ff. − Generative Semantics 953, 1140, 1983 − Government and Binding Theory (GB) 22, 363 f., 655, 693, 803 ff., 808– 812, 814, 816, 818, 822 f., 829, 834, 876, 879 f., 1009, 1140, 1328, 1404, 1424, 1452, 1467, 1568 f., 1634 f., 1845, 1849 f., 1928, 2004 − Minimalism 7, 9, 13 f., 27, 46, 59, 223 ff., 227 f., 240, 364, 585 ff., 803–835, 876, 879 f., 896, 898, 912 ff., 917, 921–925, 953, 961 ff., 1030 f., 1046, 1075, 1135 ff., 1139, 1141 f., 1144, 1201 f., 1228, 1257, 1259, 1358, 1370–1375, 1404 f., 1411, 1422 ff., 1452, 1461 f., 1634 f., 1800, 1848, 1850 f., 1855, 1876, 1898, 2004 − Standard Theory 26, 834, 1008, 1404 − Theory of Principles and Parameters (PPT) 6, 11, 22, 26 f., 803 ff., 808, 912, 923, 1353, 1405, 1411, 1413, 1415, 1418, 1799 f., 1876 − Transformational Grammar 13, 48, 135, 139, 151, 154 f., 159 f., 791, 913, 1008, 1325 f., 1329, 1332, 1350, 1459 f., 1570,

1876, 1880, 1885, 1889, 1894 ff., 1898, 2002 f., 2038 f., 2041 f., 2053 f. − X-bar-theory 35, 45, 160, 260, 343, 363, 810 f., 813 f., 816, 1008 f., 1285 Genericity 177, 264, 387 ff., 420, 427 f., 440, 533 f., 551, 717, 993, 996, 1315 Givenness 57 f., 386–390, 522, 549 ff., 553 f., 583 ff., 1178, 1183–1186, 1459, 1584, 1933 f., 1944, 1948 f., 1951 f., 1955, 1958 Glue Semantics 50, 54, 865 Gold standard 1920 f., 1926, 2009 f., 2016, 2018, 2023 Government 208, 585, 587, 622, 809, 822, 888 ff., 991, 1006, 1009 f., 1034, 1287, 1343 f., 1359 f. Government and Binding Theory cf. Generative Grammar Grammaticality 12, 27, 398, 468, 521 f., 539 f., 719, 735 ff., 876, 883, 1213 f., 1244, 1351, 1369 f., 1396, 1482, 1887, 2013, 2105 f., 2111, 2113 Grammaticality judgements 523, 537, 1794, 1798, 1836, 1883, 2070, 2073 f., 2088 ff., 2106 Grammaticalization 172 f., 179 f., 198–201, 601, 997 f., 1019, 1788, 1948, 1955

H Head-driven Phrase Structure Grammar (HPSG) 3, 7 f., 13 f., 21 f., 25, 27, 31, 60, 792 ff., 796, 844, 851, 937–963, 979, 986, 1046, 1048, 1074, 1135, 1148, 1199, 1287, 1296, 1305, 1388–1395, 1428–1432, 1460, 1876, 1898, 1932, 1990, 2003, 2009 f., 2012, 2019 f., 2023, 2025 Head final cf. Word order Head initial cf. Word order Head Movement cf. Transformations

I Idioms 17, 21, 136 f., 436, 486, 501, 603 ff., 627, 730 ff., 737, 742, 777 f., 782–798, 958 f., 963, 975 f., 983, 996, 1016, 1035 ff., 1088 f., 1116, 1404, 1452 f., 1488 f., 1933 Implicature 122, 485, 487 f., 1186, 1261, 1273–1278, 1280 Incorporation 27, 32–38, 311–317, 336 f., 414 ff., 541 f., 601, 677, 833, 1108, 1141,

1496 f., 1521, 1634 f., 1715, 1729, 2042 f., 2056 Indefinite/indefiniteness 206, 276, 281–288, 293 ff., 314 ff., 524 f., 534, 781, 1217 f., 1527 f. Individual-level predicates 1315 Infinitivus pro participio (Ersatzinfinitiv) 366, 885 Inflection 27, 33, 36 ff., 311, 317, 566, 788, 821, 918 f., 1011, 1129–1136, 1500 ff., 1512, 1563, 1835, 1839, 1849–1852, 1927 Information structure (information packaging) 56–63, 121 f., 385 ff., 507 f., 549, 862, 1183, 1263–1273, 1641, 1945 f., 1952 Interfaces − Syntax-lexicon interface 1289 − Syntax-morphology interface 32, 1128– 1149 − Syntax-phonology interface 40, 44 ff., 863 − Syntax-pragmatics interface 55, 61 f., 1256–1280 − Syntax-semantics interface 47, 50, 53 f., 953, 1198–1245 Intonation 42, 294, 395, 402 f., 507, 1156 f., 1186 f., 1216, 1263, 1615, 2077

L
Lambda calculus 1046, 1049, 1052 f., 1203, 1293 f.
Lambek calculus 1047, 1073
Language acquisition 805, 962, 998, 1524, 1792–1819, 2102
Langue 10, 136 f., 2070
Late closure 1879
Levels of Representation
− Deep Structure (D-Structure) 6, 26, 151, 812, 1106, 1331, 1413, 1845, 2042
− Lexical Conceptual Structure 1095 f., 1104
− Logical Form (LF) 48, 52, 290, 571, 580, 741, 812, 1094, 1200–1211, 1244, 1354, 1361
− Phonetic Form (PF) 368, 571, 575, 807, 812, 818, 831 ff., 1142 f., 1259 f., 1372 f., 1580 ff.
− Surface Structure (S-Structure) 223, 299, 812, 819, 830, 834, 882, 1163, 1173, 1227, 1361, 1499, 1684, 1845 f., 1852, 1894, 2038
Lexical decomposition 296 ff., 689, 1202

Lexical-Functional Grammar (LFG) 225, 227 f., 839–868, 877, 1424, 1564, 1634
Lexical integrity 4, 38, 961, 1134
Lexicalist hypothesis 26 f., 36–39, 1134 f.
Lexical rule 4, 22, 26 f., 861, 956 f., 960, 1063, 1339 f.
Linear precedence (LP) rules cf. Word order
Logical Form (LF) cf. Levels of Representation
Long-distance dependencies, non-local dependencies, unbounded dependencies 13, 20 ff., 721, 845, 921, 954 f., 1065, 1457, 1561, 1572, 1886 f.

M
Machine translation (MT) 867, 2008 f.
Markedness 168, 314, 320 f., 333 f., 678, 879, 892, 963, 1333 f., 1802
Masdar 1599, 1608, 1614
Merge 223 f., 617, 627, 699 ff., 810–818, 828, 830 f., 918, 924, 1117, 1226 f., 1229
Middle cf. Alternations/Diatheses
Minimal Attachment 1878, 1956
Minimalism (Minimalist Program) cf. Generative Grammar
Minimal Recursion Semantics (MRS) 948 ff., 952 f.
Mittelfeld cf. Word order
Modification 4 f., 183, 186, 261, 709, 719, 729, 742, 779 f., 783, 789 f., 849, 976, 981, 985, 990, 1297–1302, 1308 ff., 1315 f., 1460, 1508, 1524 ff., 1629, 1672
Modularity 963, 1837, 1865
Montague Grammar 1199, 1203, 1209
Morphological language types
− agglutinating 1765, 1788
− analytic 172, 1519, 1852
− isolating 165, 174, 1519
− polysynthetic 5, 189, 316 f.
Move (Move α, Movement) cf. Transformations
Multilingualism (bilingualism) 1816–1819, 2101

N
Negation 111, 274–304, 350, 361, 401, 405, 481, 487, 542 ff., 786, 1225 f., 1499 ff., 1550 ff., 1673 f., 1715, 1741, 1784 f., 2114
− Negative concord 281 f.
− Negative polarity 284, 287, 294, 543, 786 f., 824, 1225
Nominalization 36, 264, 417, 1147, 1625
Non-configurationality 944 f., 1434 f., 1752 f., 1780
Non-tangling condition 1429
Noun cf. Category
Noun-verb distinction 173–182, 1744–1748
Number 179, 194 f., 310 f., 319 ff., 498, 788 f., 1006, 1523, 1777
− Singular 33, 117, 194 ff., 309–335, 496, 717, 779, 1497, 1602, 1740, 1777
− Plural 33, 118, 180, 195, 201, 319 f., 323 ff., 425, 502, 1497, 1523, 1602 f., 1670, 1713, 1747, 1766, 1882, 1928, 2117

O
Opacity 1211, 1226 ff., 1234, 1238 f., 1243, 1507
Optimality Theory (OT) 45, 225 ff., 366 f., 866 f., 875–925, 1380–1388, 1802
Optionality 283, 390, 454 ff., 659, 807, 902–907, 1019 f., 1375, 1384 f., 1755

P
Parallel Grammar (ParGram) 867 f., 2024 f.
Parenthesis 492
Parole 10, 137, 2069
Parsing 1041 f., 1073, 1075 f., 1877 ff., 1882, 1896, 1898, 2017 ff., 2021–2026
Part-of-Speech (POS) tagging 1075 f., 1927, 2012, 2018
Passive cf. Alternations/Diatheses
Parasitic gaps cf. Constructions
Person 91, 193 f., 202 f., 233, 335 ff., 602, 755, 1011, 1730, 1776 f.
Phonetic Form (PF) cf. Levels of Representation
Phrase structure 261, 805, 841, 1008, 1424 f., 1564, 1844, 1928 f.
Preposition (adposition, circumposition, postposition) cf. Category
PRO 221, 661 ff., 694, 848, 1327 ff., 1343 ff., 1489, 1893 f.
Pronominal Argument Hypothesis 316, 1752–1755
Prosody 1155, 1162–1175


Q
Quantification 49–54, 281–289, 295 f., 886, 953, 961, 1199, 1201–1206, 1208–1212, 1215–1218, 1221 f., 1226, 1243, 1314, 1815 f.
Question 16, 18, 53, 57 f., 88 f., 205 f., 233, 268, 351, 370, 384 f., 508, 568, 572, 915 f., 1183, 1546, 1600 ff., 1609, 1611

R
Raising cf. Constructions
Reconstruction 440 f., 470, 500, 530, 731–735, 818, 1216, 1228 f., 1237–1244, 1432 f., 1461, 1580
Reference 163 f., 167, 232–235, 253, 315, 421, 425, 599, 638, 719, 990, 1230 ff., 1261, 1310, 1482, 1485, 1523, 1893
Reference tracking 438, 1786 f.
Reflexivity 199 ff., 1012, 1339, 1350, 1365, 1388
Relational Grammar 228, 1594, 1597 ff., 2039
Relativization (relative clause, relative pronoun, correlative clause) 204, 237, 688, 691, 709–749, 791, 958, 1575 f., 1604 f., 1630, 1672, 1703, 1720, 2050 f., 2054
Resultative construction cf. Constructions
Right Node Raising cf. Constructions
Right dislocation 622, 1649
Right Roof Constraint cf. Transformations
Role and Reference Grammar 61, 229, 1095, 1098

S
Sandhi 10, 40 f., 73 f., 1157–1162, 1167 f., 1522, 1765
Satzklammer 520, 541, 1448 f., 1454 ff.
Scrambling cf. Word order
Secondary predicate 19, 326, 1681, 1695–1700
Segmentation 73, 152 f., 1804, 1917, 2077
Serial Verb Constructions cf. Constructions
Specifier 21, 223 f., 252 f., 277, 375, 392, 471, 489, 680 f., 699 f., 810, 816 ff., 915, 940 f., 943, 1096, 1266, 1288 f., 1310 ff., 1329–1332, 1378
Specificity 314 f., 603, 853, 1787
Stage-level predicates 163, 1315
Standard Theory cf. Generative Grammar
Structuralism 135 ff., 2037
Subcategorization 810, 846 f., 878, 1047 f., 1066, 1287, 1326
Subjacency cf. Transformations
Subject-auxiliary-inversion (SAI) cf. Transformations
Surface Structure (S-Structure) cf. Levels of Representation

T Tagmemics 149 f., 2037 f. Tense 179, 227, 239, 279, 317, 788, 864, 987, 1132, 1292, 1449, 1483, 1503, 1537, 1560, 1644 ff., 1745 − Future 94, 196, 493, 1449, 1479 ff., 1500, 1539, 1590, 1645, 1697 − Past 108, 124, 138 f., 146, 174, 191, 203, 317, 442, 493, 702, 857, 977, 1142 f., 1483 f., 1494, 1500, 1537 ff., 1561, 1583, 1645, 1685, 1697, 1715, 1717 f.[49], 1736, 1783, 1878, 1927 − Perfect 94, 182, 886, 1449, 1467, 1479 f., 1484 − Present 75, 94, 311, 317, 335, 722, 724, 982, 1143, 1451, 1480, 1483 f., 1500, 1537, 1583, 1607, 1644 f., 1678, 1704, 1927 Tense-Aspect-Mood (TAM) marker 165, 174, 1589 ff., 1638, 1640 f., 1715, 1723 Thematic role (theta (θ) role) 31 f., 44, 223, 228, 388, 490 f., 522, 782, 793, 854–860, 1095 f., 1098–1101, 1107, 1660, 1844, 1853 ff., 1883 − Agent 31, 75–84, 93 ff., 105, 113–117, 223, 229 ff., 247, 256, 427, 490, 522, 545, 685, 751, 757 ff., 769 f., 855–861, 1096, 1099, 1101, 1141, 1289 ff., 1424, 1489 f., 1584, 1663, 1734, 1853 ff. − Benefactive 248, 597, 1019, 1638 − Experiencer 222, 230, 232, 248, 420, 545, 752, 758, 853, 855, 1099, 1336 f., 1339, 1353 f., 1390, 1414, 1492, 1494, 1660 − Goal 248 ff., 268, 522, 545, 626, 784, 853, 1096, 1099, 1488 f., 1660–1664, 1753 − Patient 31[3], 96, 198, 220, 229–235, 431, 808, 855, 859 f., 993, 1120, 1640, 1740, 2054

Indexes − Recipient 31[3], 76, 78, 93, 229, 431, 490, 493, 522, 545, 550, 751 f., 782 ff., 855, 1488, 1749, 1773, 1933 f., 2053 f. − Theme 223, 231, 247, 249 ff., 255 ff., 268, 389, 394, 417, 431, 491, 522, 529, 545, 751, 759, 855–858, 992, 1096, 1099 f., 1424, 1489, 1582, 1660–1666, 1713 f., 1729, 1749, 1773, 1844, 1853 ff., 2053 ff. Thematic role hierarchy 855–860 Theory of Principles and Parameters (PPT) cf. Generative Grammar Theta Criterion 228, 808, 816, 834, 1095, 1106 f. Topic 25, 46, 57–60, 102, 105–109, 116 f., 140 f., 176, 232 f., 251, 313 f., 348 f., 354, 371, 373 f., 376, 385–396, 517, 522, 549, 841, 847, 862 f., 1021, 1263–1271, 1461 f., 1547 ff., 1572 ff., 1583 ff., 1637, 1659, 1648, 1946 f., 2108 f. Topicalization cf. Constructions Transcription 1156 f., 1918, 2064, 2075 ff., 2079 ff., 2084 Transformational Grammar cf. Generative Grammar Transformations (transformational rules, constraints and conditions) 5 f., 13, 22 f., 27, 151, 160, 393, 402 f., 796, 811, 830, 883, 913, 954, 1140, 1178, 1325, 1352, 1413, 1459 f., 1568, 1845, 1876 − Across-the-board extraction (ATB) 494, 500 f., 569, 811, 1420 f., 1896 − Affix Hopping 823, 1140, 1143 − Antecedent-contained deletion (ACD) 574, 1233, 1896 − Case filter 692, 808 f., 825 f., 834, 852 − Chain Condition 1365 ff., 1369 f. − Complex NP Constraint (CNPC) 506, 725, 884 f., 1213 − Coordinate Structure Constraint (CSC) 494 ff., 501, 507, 725, 1064, 1214 − Copy Theory of Movement 699, 818 f., 821 f., 834, 924, 1228, 1239 f., 1352, 1422 − Empty Category Principle (ECP) 34, 565, 584, 693 f., 808, 834, 876, 1413, 1426 − Equi NP deletion 412, 1324 ff. − Head movement 620 f., 946, 957, 1137, 1141, 1224, 1226, 1459, 1641, − Head Movement Constraint 11 f., 34, 278, 815 f., 1141, − Heavy-NP shift 782, 1051, 1056, 1070 − Islands (syntactic) 267 f., 366, 484, 494, 574, 576, 694, 717, 721, 725, 815, 828 f.,

Subject index 847, 884, 896–901, 907, 921 f. , 1064, 1217 f., 1330, 1332, 1335, 1512, 1550, 1754 f., 1881, 1887 f., 2052 − i-within-i condition 1328, 1362 − Minimal Distance Principle (MDP) 422, 1324 ff., 1342, 1893 − Minimal Link Condition (MLC) 422, 816, 825, 918 ff., 1219, 1328 ff., 1333 f., 1348, 1353, − Move/Move α 22, 367, 807, 818, 831 f., 920, 923 f., 1115, 1413, 1568 f. − Phase Impenetrability Condition 587, 695, 820, 1378 − Relativized Minimality 815, − Right Roof Constraint 1039, 1464 − Shortest Move 806, 816, 913, 1219 − Subjacency 53, 361, 403, 829, 834, 1463, 1722, 1887 f. − Subject-auxiliary-inversion (SAI) 18, 983, 987, 1794, 1810 f. − Superiority Condition 1219 − that-trace effect 268, 725 − Trace 36 f., 52, 500, 519, 526, 528 ff., 572, 693, 814, 818 f., 847 f., 878, 883 ff., 954 f., 1141, 1211, 1242 f., 1370, 1411– 1414, 1425, 1432 f., 1853 ff. − VP internal subject hypothesis 456, 1107, 1405, 1855 − Wh in-situ 255, 904, 1214, 1223, 1234 f., 1886 − Wh-movement 267, 349, 574, 576 f., 725, 817, 828, 887, 897, 899, 902, 904, 915 ff., 1220, 1414, 1425, 1585, 1722 Transitivity 108, 113 ff., 230, 655, 657, 663, 691, 1089, 1481, 1495, 1591, 1686, 1748, 1751, 1975 Tree-Adjoining Grammar (TAG) 959, 1073 f., 1404, 1876, 2003 Treebank 1916, 1919, 1927–1933,, 1987, 2002, 2009, 2016, 2021 Type-logical Grammar 1046, 1066, 1210 Type shifting 53, 54, 1209, 1210, 1313, 1314 Typology 5, 166[7], 219, 232, 516, 998,, 1710, 2036, 2043

U
Unaccusativity (Unaccusative Hypothesis) 470, 860, 1106, 1390, 1395, 1466, 1812, 1884
Underspecification 948, 952, 953, 959, 989, 1210
Uniformity of Theta-Assignment Hypothesis (UTAH) 32, 228, 232, 258, 545, 1096, 1412
Unification 850, 983, 988, 990, 991, 1287, 1898, 2015
Universals 5, 6, 7, 9, 159, 167, 227, 515, 664, 672, 1073, 1596, 1799, 2036, 2043
Universal Grammar (UG) 5, 7, 22, 373, 515, 643, 804, 805, 883, 953, 962, 1060, 1096, 1799, 1817, 1845, 1881

V
Valency 4, 19, 247, 248, 390, 1004, 1031, 1391, 1965, 1983, 1985
Valency Grammar 1018
Verb cf. Category
Verb first/Verb second/Verb third cf. Word order
Visser’s generalization 427, 1985
Vocabulary insertion 371, 586, 1143
Voice 176, 199, 235, 430, 581, 665, 679, 749, 1116, 1289, 1583, 1596, 1659, 1674, 1729, 1730, 1748
Vorfeld cf. Word order

W
Wh-movement cf. Transformations
Wh-question cf. Constructions
Wh-scope 885, 925, 1484, 1503, 1505, 1512
Word Grammar 1032, 1035
Word order 17, 18, 58, 61, 169, 208, 238, 313, 370, 387, 497, 498, 514, 667, 714, 823, 841, 886, 980, 1034, 1066, 1175, 1179, 1308, 1400, 1519, 1579, 1589, 1616, 1633, 1667, 1732, 1942, 2111
− free 8, 56, 386, 515, 523, 667, 841, 853, 903, 1403, 1579, 1677, 1678, 1753, 1803, 1879, 1883, 1889, 1942, 1954
− head final 35, 364, 832, 944, 1427, 1428, 1479, 1493, 1500, 1559, 1561, 1582, 1764, 1846, 1877, 1883, 1894
− head initial 35, 543, 832, 943, 1174, 1427, 1498, 1499, 1541, 1543
− Linear precedence (LP rules) 7, 944, 1431
− Mittelfeld 350, 365, 520, 521, 529, 538, 544, 550, 551, 1215, 1402, 1411, 1422, 1427, 1448, 1459, 2111, 2116
− OSV 515, 548
− OV 208, 254, 297, 343, 363, 364, 516, 517, 520, 543, 832, 833, 1179, 1182, 1404, 1561, 1766
− OVS 515, 667, 1840
− Scrambling 365, 392, 402, 522, 526, 528, 530, 538, 539, 547, 548, 549, 552, 553, 556, 1039, 1051, 1070, 1215, 1216, 1217, 1241, 1268, 1269, 1352, 1390, 1395, 1401, 1402, 1405, 1407, 1408, 1409, 1410, 1411, 1415, 1420, 1423, 1425, 1432, 1493, 1580, 1723, 1889
− SOV 236, 515, 516, 517, 519, 548, 1402, 1452, 1453, 1458, 1559, 1561, 1589, 1714, 1954

− SVO 56, 58, 233, 347, 356, 363, 515, 516, 517, 519, 667, 1456, 1541, 1712, 1840, 1842, 1852, 1942, 2056
− Verb first/second/third 342, 345, 352, 363, 498, 499, 515, 541, 667, 946, 959, 1450, 1453, 1456, 1457, 1723, 2114
− VSO 181, 345, 357, 376, 392, 515, 516, 517, 519, 1066, 1169, 1733
− VO 208, 254, 297, 343, 516, 517, 519, 520, 545, 832, 833, 1179, 1182, 1404
− Vorfeld 499, 500, 520, 529, 539, 541, 544, 551, 1448, 1456, 1458, 2109
− VOS 516, 1066

X
X-bar Theory cf. Generative Grammar